\section{Introduction}\label{sect:intro}
Before formulating our targets and results in Subsection~\ref{subsect-target}, we give a short historical overview.
The history leading to the present work belongs to {four} topics, which are surveyed in the following {four} subsections.
To the best of our knowledge, {the first three of these four} topics have been studied independently so far; one of our goals is to find some connection among them.
\subsection{Strongly algebraically closed algebras {in categories of algebras}}
By an \emph{equation} in an algebra $A$ we mean a formal expression
\begin{equation*}
p(a_1,\dots,a_m, x_1,\dots, x_n)\approx q(a_1,\dots,a_m, x_1,\dots, x_n)
\end{equation*}
where $m\in\Nnul=\set{0,1,2,\dots}$, $n\in\Nplu=\Nnul\setminus\set 0$,
$p$ and $q$ are $(m+n)$-ary terms (in the language of $A$), the elements $a_1,\dots,a_m$ belong to $A$ and they are called \emph{parameters} (or \emph{coefficients}), and
$x_1,\dots, x_n$ are the \emph{unknowns} of this equation. Although a single equation contains only finitely many unknowns, we allow infinite \emph{systems} (that is, sets) of equations and such a system can contain infinitely many unknowns.
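For a concrete example, let $A$ be a lattice and $a,b\in A$. Then $a\vee x_1\approx b$ is an equation with parameters $a,b$ and a single unknown $x_1$, and it has a solution in $A$ if and only if $a\leq b$; indeed, $a\vee x_1=b$ forces $a\leq b$, while $x_1:=b$ is a solution whenever $a\leq b$.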
{By a \emph{category of algebras} we mean a concrete category $\alg X$ such that the objects of $\alg X$ are algebras of the same type, every morphism of $\alg X$ is a homomorphism, and whenever $A_1$ and $A_2$ are isomorphic algebras such that $A_1$ belongs to $\alg X$, then so does $A_2$.
Note that there can be homomorphisms among the objects of $\alg X$ that are not morphisms of $\alg X$.
If $\alg X$ happens to contain all homomorphisms among its objects as morphisms, then $\alg X$ is a \emph{class of algebras $($with all homomorphisms$)$}; the parenthesized part of this term is often dropped in the literature. Given a category $\alg X$ of algebras and objects $A,B$ in $\alg X$, we say that $B$ is an \emph{$\alg X$-extension} of $A$ if $A$ is a subalgebra of $B$ and, in addition, the map $\iota\colon A\to B$ defined by $x\mapsto x$ is a morphism in $\alg X$. (If $\alg X$ is a class of algebras with all homomorphisms and $A,B\in \alg X$, then ``extension'' is the same as ``$\alg X$-extension''.)}
{Note that the concept of ``$B$ is an $\alg X$-extension of $A$''
includes not only $A$ and $B$ but also the embedding $\iota\colon A\to B$ defined by $x\mapsto x$. Therefore, when we speak of ``all $\alg X$-extensions of $A$'', all possible embeddings $\iota$ are taken into account. For example, if $A$ is the two-element chain in the class $\alg L$ of lattices with all homomorphisms, then there are three essentially different ways in which a three-element chain can be an $\alg L$-extension of $A$. }
For {a category $\alg X$ of algebras and} an algebra $A\in\alg X$,
we say that $A$ is \emph{strongly algebraically closed in} $\alg X$ if for every {$\alg X$-extension} $B\in \alg X$ of $A$ and for any system $\Sigma$ of equations with parameters taken from $A$, if $\Sigma$ has a solution in $B$, then it also has a solution in $A$.
Following Schmid~\cite{schmid}, if we replace ``any system $\Sigma$'' by ``any finite system $\Sigma$'', then we obtain the concept of an \emph{algebraically closed algebra} $A$ in $\alg X$.
These two concepts have been studied by many authors;
restricting ourselves to lattice theory, we only mention
Schmid~\cite{schmid} and
Molkhasi~\cite{molkhasi16,molkhasi18a,molkhasi18b,molkhasi20}.
\subsection{Absolute retracts} Given an algebra $B$ and a subalgebra $A$ of $B$, we say that $A$ is a \emph{retract} of $B$ if there exists a homomorphism $f\colon B\to A$ such that $f(a)=a$ for all $a\in A$.
The homomorphism $f$ in this definition is called a \emph{retraction map} or a \emph{retraction} for short.
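For example, if $B$ is a distributive lattice and $a\in B$, then the map $B\to\ideal a$ defined by $x\mapsto x\wedge a$ is a retraction onto the principal ideal $\ideal a$; note that distributivity is used to guarantee that this map preserves joins.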
Now let $A$ be {an algebra belonging to a category $\alg X$ of algebras.
We say that $A$ is an \emph{absolute retract for} $\alg X$ if for
any $\alg X$-extension $B$ of $A$, there exists a retraction $B\to A$ among the morphisms of $\alg X$.
Similarly, $A$ is an \emph{absolute $\Hop$-retract for} $\alg X$ if for any $\alg X$-extension $B$ of $A$, there exists a retraction $f\colon B\to A$ (but $f$ need not be a morphism of $\alg X$). The letter $\Hop$ in this terminology comes from ``homomorphism''. Although an absolute $\Hop$-retract is not a purely category-theoretical notion, it helps us to state some of our assertions in a stronger form. Note that if $A$ is an absolute retract for $\alg X$, then it is also an absolute $\Hop$-retract for $\alg X$. Observe that
\begin{equation}
\parbox{8.5cm}{for a class $\alg X$ of algebras with all homomorphisms, absolute $\Hop$-retracts and absolute retracts are the same.}
\label{pbx:rmDhwhspKlQB}
\end{equation}
}
Absolute retracts emerged first in topology, and they appeared in classes of algebras as early as 1946; see Reinhold~\cite{reinhold}.
There are powerful tools to deal with homomorphisms and, in particular, retractions in {several categories of lattices};
we will benefit from these tools in Sections \ref{sect-thm} {and \ref{sect:dist}}.
\subsection{Slim semimodular lattices}
{For a finite lattice $L$, let $\Jir L$ stand for the set of nonzero join-irreducible elements of $L$. Note that $\Jir L$
is a poset (i.e., partially ordered set) with respect to the order inherited from $L$.}
Following Cz\'edli and Schmidt \cite{czgschtJH}, we say that a lattice $L$ is \emph{slim} if it is finite and $\Jir L$ is the union of two chains.
Note that slim lattices are planar; see Lemma 2.2 of Cz\'edli and Schmidt \cite{czgschtJH}. As usual, a lattice $L$ is (upper) \emph{semimodular} if we have $x\vee z\preceq y\vee z$ for any $x,y,z\in L$ such that
$y$ covers or equals $x$ (in notation, $x\preceq y$).
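For example, every distributive lattice is semimodular, but the pentagon $N_5=\set{0,a,b,c,1}$ with $0\prec a\prec c\prec 1$ and $0\prec b\prec 1$ is not semimodular: $0\preceq b$ but $0\vee a=a\not\preceq 1=b\vee a$.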
Since the pioneering paper of Gr\"atzer and Knapp \cite{gratzerknapp1},
recent years have witnessed a particularly intense activity in studying
\emph{slim semimodular lattices}; see
Cz\'edli~\cite{czgreprhomr,czgmatrix,czgtrajcolor,czganotesps,czgrectectdiag,czgqplanar,czgasymp,czgaxiombipart},
Cz\'edli, D\'ek\'any, Gyenizse and Kulin~\cite{czgdgyk},
Cz\'edli, D\'ek\'any, Ozsv\'art, Szak\'acs, Udvari~\cite{czg--nora},
Cz\'edli and Gr\"atzer~\cite{czgggltsta,czgggresections,czgginprepar},
Cz\'edli, Gr\"atzer, and Lakser \cite{czggghlswing},
Cz\'edli and Makay~\cite{czgmakay},
Cz\'edli and Schmidt~\cite{czgschtJH,czgschtvisual,czgschtcompser,czgschtpatchwork}, Gr\"atzer~\cite{ggonaresczg,ggswinglemma,ggtwocover,ggSPS8}, Gr\"atzer and Knapp \cite{gratzerknapp3}, and Gr\"atzer and Nation~\cite{gr-nation}.
For the impact of these lattices on (combinatorial) geometry, see {Adaricheva and Bolat~\cite{adaribolat},} Adaricheva and Cz\'edli~\cite{adariczg}, Cz\'edli~\cite{czgcircles}, and (the surveying) Section 2 of Cz\'edli and Kurusa~\cite{czgkurusa}, {and see their impact on lattice theory in Ranitovi\'c and Tepav\v cevi\'c \cite{andrejaranitovic,andrejamarijana}.}
\subsection{Finite and $n$-dimensional distributive lattices}
\label{subsect:Dfdim}
It is well known that a finite distributive lattice $D$ is determined by the poset $\Jir D$ up to isomorphism.
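Indeed, by Birkhoff's representation theorem, $D$ is isomorphic to the lattice of all down-sets (order ideals) of the poset $\Jir D$.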
Borrowing a definition from Dushnik and Miller~\cite{dushnikmiller}, the \emph{order dimension} of a poset $P=(P;\leq_P)$, denoted by $\dim P$, is the least number $n$ such that the relation $\leq_P$ is the intersection of $n$ linear orderings on $P$. We know from Milner and Pouzet \cite{milnerpouzet} that $\dim P$ is also the least number $n$ such that $P$ has an order embedding into the direct product of $n$ chains.
The \emph{width} of a poset $P$ is defined to be the maximum size of an antichain in $P$; it will be denoted by $\width P$.
By Dilworth \cite[Theorem 1.1]{dilworth}, a finite poset $P$ is of width $n$ if and only if $P$ is the union of $n$ (not necessarily disjoint) chains but not a union of fewer chains.
As pointed out in the first paragraph of page 276 in Rabinovitch and Rival~\cite{rabinovitchrival}, it follows from Dilworth~\cite{dilworth} that
\begin{equation}
\text{for a finite distributive lattice $D$, $\dim D=\width{\Jir D}$.}
\label{eqtxt:dimDwJD}
\end{equation}
If $\dim D=n$, then $D$ is said to be \emph{$n$-dimensional}.
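For example, if $D$ is the direct product of $n$ nontrivial finite chains, then $\Jir D$ consists of the elements of $D$ with exactly one nonzero coordinate. Hence $\Jir D$ is the union of $n$ chains, and the $n$ atoms of $D$ form an antichain in it, so $\width{\Jir D}=n$ and, by \eqref{eqtxt:dimDwJD}, $D$ is $n$-dimensional.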
\subsection{Targets and results}\label{subsect-target}
First, we are going to prove the following easy proposition. By a finite algebra we mean a finite nonempty set equipped with \emph{finitely many} operations.
\begin{proposition}\label{prop}
If $A$ is an algebra in a {category} $\alg X$ of algebras, then the following two conditions are equivalent.
\begin{enumerate}
\item\label{propa} $A$ is strongly algebraically closed in $\alg X$.
\item\label{propb} $A$ is an absolute {$\Hop$-}retract for $\alg X$.
\end{enumerate}
Furthermore, if $\alg X$ consists of finite algebras, then each of (\ref{propa}) and (\ref{propb}) is equivalent to
\begin{enumerate}\setcounter{enumi}{2}
\item\label{propc} $A$ is algebraically closed in $\alg X$.
\end{enumerate}
\end{proposition}
This proposition will be proved in Section~\ref{sect:retr}.
Armed with Proposition \ref{prop}, we are going to prove the following result {in Section~\ref{sect-thm}}.
\begin{theorem}\label{thmsps}
Let $L$ be a slim semimodular lattice and let $\alg S$ denote the class of all slim semimodular lattices with all homomorphisms. Then the following four conditions are equivalent.
\begin{enumerate}
\item\label{thmnul} $L$ is algebraically closed in $\alg S$.
\item\label{thma} $L$ is strongly algebraically closed in $\alg S$.
\item\label{thmb} $L$ is an absolute retract for $\alg S$.
\item\label{thmd} $L$ is the one-element lattice, i.e., $|L|=1$.
\end{enumerate}
\end{theorem}
Since the singleton lattice does not look too exciting in itself, it is worth noting the following. First, we {know neither a really short proof of this theorem nor a proof that avoids some nontrivial tool from the theory of slim semimodular lattices}. {Second, Theorem~\ref{thmsps}, together with Molkhasi~\cite{molkhasi16,molkhasi18a,molkhasi18b,molkhasi20} and Schmid~\cite{schmid}, has motivated a related result with infinitely many absolute retracts for the class of slim semimodular lattices with fewer morphisms than here; see
Cz\'edli \cite{czgpatchabsrectr}.
Third and most importantly, as explained in Subsection~\ref{subsect:nBtPrf}, Theorem~\ref{thmsps} and the tools needed to prove it have paved the way to Theorem~\ref{thmdst} below. To formulate that theorem, let $\omega$ stand for the least infinite cardinal number, let $\Nplu:=\set{1,2,3,4,\dots}$, and
\begin{equation}
\parbox{7cm}{for $n\in\Nplu\cup\set{\omega}$, let $\Dnfin n$ denote the class of \emph{finite} distributive lattices with order dimension at most $n$, with all homomorphisms.}
\label{txtDnfindef}
\end{equation}
By a \emph{nontrivial} lattice we mean a lattice with more than one element.
}
\begin{theorem}[Main Theorem]\label{thmdst}
Let $n\in\Nplu\cup\set{\omega}$, see \eqref{txtDnfindef}, and let $D\in\Dnfin n$. Then the following four conditions are equivalent.
\begin{enumerate}
\item\label{thmdsta} $D$ is algebraically closed in $\Dnfin n$.
\item\label{thmdstb} $D$ is strongly algebraically closed in $\Dnfin n$.
\item\label{thmdstc} $D$ is an absolute retract for $\Dnfin n$.
\item\label{thmdstd} $D$ is a boolean lattice or $D$ is the direct product of $n$ nontrivial finite chains.
\end{enumerate}
\end{theorem}
We are going to prove this theorem in Section~\ref{sect:dist}.
Since $\Dnfin \omega$ is \emph{the class of all finite distributive lattices}, $\Dnfin \omega=\bigcup_{n\in\Nplu} \Dnfin n$, and the direct product of $\omega$ many nontrivial chains cannot be finite, Theorem \ref{thmdst} clearly implies the following corollary.
\begin{corollary}\label{cor:D-omega} Let $D$ be a \emph{finite} distributive lattice.
Then the following four conditions are equivalent.
\begin{enumerate}
\item\label{cor:D-omegaa} $D$ is algebraically closed in the class $\Dnfin \omega$ of finite distributive lattices with all homomorphisms.
\item\label{cor:D-omegab} $D$ is strongly algebraically closed in $\Dnfin \omega$.
\item\label{cor:D-omegac} $D$ is an absolute retract for $\Dnfin \omega$.
\item\label{cor:D-omegad} $D$ is a boolean lattice.
\end{enumerate}
\end{corollary}
The proofs of the following three corollaries are given in Section~\ref{sect:dist}; note that two of them follow partly from the proof of Theorem~\ref{thmdst} rather than from the theorem itself.
\begin{corollary}\label{cor:DfnVGs} For a \emph{finite} distributive lattice $D$, the following four conditions are equivalent.
\begin{enumerate}
\item\label{cor:DfnVGsa} $D$ is algebraically closed in the class $\Dall$ of all (not necessarily finite) distributive lattices with all homomorphisms.
\item\label{cor:DfnVGsb} $D$ is strongly algebraically closed in $\Dall$.
\item\label{cor:DfnVGsc} $D$ is an absolute retract for $\Dall$.
\item\label{cor:DfnVGsd} $D$ is a boolean lattice.
\end{enumerate}
\end{corollary}
Note that while Schmid~\cite{schmid} only allows lattice embeddings and homomorphisms that preserve 0 and 1 whenever they exist, there is no such restriction in the present paper. Therefore, even the \eqref{cor:DfnVGsd} $\Rightarrow$ \eqref{cor:DfnVGsa} implication in Corollary~\ref{cor:DfnVGs}
is stronger than what Schmid~\cite{schmid} proves for a finite boolean lattice $D$. The classes $\Dnfin n$ for $n\in\Nplu$ have not occurred in this context previously. Let us emphasize that Corollary~\ref{cor:DfnVGs} does not describe the absolute retracts for $\Dall$; it describes only the finite absolute retracts for this class.
For finite lattices $A$ and $B$, a lattice homomorphism $f\colon A\to B$ is said to be a \emph{\covzo} if $f(0)=0$,
$f(1)=1$, and for all $x,y\in A$ such that $x\prec y$, we have that $f(x)\prec f(y)$.
Since any two maximal chains in a finite semimodular lattice are of the same length (this is the so-called \emph{Jordan--H\"older chain condition}), we easily obtain the following observation; see Lemma~\ref{lemmaKzTfPsz} for a bit more information.
\begin{equation}
\parbox{7.8cm}{if $A$ and $B$ are finite semimodular lattices and there exists a \covzo{} $A\to B$, then $A$ and $B$ are of the same length.}
\label{pbx:lnPrsHm}
\end{equation}
Note that distributive lattices, to which we will apply \eqref{pbx:lnPrsHm}, are semimodular.
For $n\in\Nplu\cup\set\omega$, let $\Dncovzo n$ denote the category consisting of finite distributive lattices of order dimension at most $n$ as objects and \covzo{}s as morphisms.
(So $\Dncovzo n$ has the same objects as $\Dnfin n$, but it has far fewer morphisms.)
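For instance, if $D\in\Dnfin n$ is nontrivial, then any constant self-map of $D$ is a morphism of $\Dnfin n$ but not of $\Dncovzo n$, since a \covzo{} must send $0$ to $0$ and $1$ to $1$.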
\begin{corollary}\label{cor-kvsnLptv}
Let $n\in\Nplu\cup\set{\omega}$, and let
$D\in\Dncovzo n$. Then the following five conditions are equivalent.
\begin{enumerate}
\item\label{cor-kvsnLptva} $D$ is algebraically closed in $\Dncovzo n$.
\item\label{cor-kvsnLptvb} $D$ is strongly algebraically closed in $\Dncovzo n$.
\item\label{cor-kvsnLptvc} $D$ is an absolute $\Hop$-retract for $\Dncovzo n$.
\item\label{cor-kvsnLptvd} $D$ is an absolute retract for $\Dncovzo n$.
\item\label{cor-kvsnLptve} $D$ is a boolean lattice or $D$ is the direct product of $n$ nontrivial finite chains.
\end{enumerate}
\end{corollary}
Corollary~\ref{cor-kvsnLptv} shows that we can disregard many morphisms from the categories occurring in Theorem~\ref{thmdst} so that the absolute retracts remain the same. This is not at all so for the category $\alg S$ occurring in Theorem~\ref{thmsps}; see Cz\'edli~\cite{czgpatchabsrectr} for details.
In the following corollary, ``nontrivial'' means ``non-singleton''; let us repeat that planar lattices are finite by definition.
\begin{corollary}\label{cor:krSzwzlvnlZvl}
If $D$ is a planar distributive lattice, then the following four conditions are equivalent.
\begin{enumerate}
\item\label{cor:krSzwzlvnlZvla} $D$ is an absolute retract for the class of planar distributive lattices with all homomorphisms.
\item\label{cor:krSzwzlvnlZvlb} $D$ is an absolute $\Hop$-retract for the category of planar distributive lattices with \covzo{}s as morphisms.
\item\label{cor:krSzwzlvnlZvlc} $D$ is an absolute retract for the category of planar distributive lattices with \covzo{}s as morphisms.
\item\label{cor:krSzwzlvnlZvld} $|D|\leq 2$ or $D$ is the direct product of two nontrivial finite chains.
\end{enumerate}
\end{corollary}
Of course, Proposition~\ref{prop} is applicable to both classes mentioned in parts \eqref{cor:krSzwzlvnlZvla} and \eqref{cor:krSzwzlvnlZvlb} of Corollary~\ref{cor:krSzwzlvnlZvl}, and so we could add $2\cdot 2= 4$ additional equivalent conditions to this corollary.
\section{Proving our proposition}\label{sect:retr}
To ease the notation, we give the proof only for lattices; the general proof would be practically the same.
\begin{proof}[Proof of Proposition~\ref{prop}] First, we deal with the implication \eqref{propa} $\Rightarrow$ \eqref{propb} and, if $\alg X$ consists of finite lattices, also with the implication \eqref{propc} $\Rightarrow$ \eqref{propb}.
Assume that $\alg X$ is a class of lattices, $A\in \alg X$, and either $A$ is strongly algebraically closed in $\alg X$ or
$\alg X$ consists of finite lattices and $A$ is algebraically closed in $\alg X$.
Let $B\in \alg X$ be an $\alg X$-extension of $A$. We need to show the existence of a retraction $f\colon B\to A$. We can assume that $A$ is a proper sublattice of $B$, because the identity map of $B$
would obviously be a $B\to A$ retraction if $A=B$. The elements of $A$ and those of $B\setminus A$ will be called \emph{old elements} and \emph{new} elements, respectively.
For each new element $b$, we take an unknown $x_b$.
For each pair $(a,b)\in B\times B$ of elements such that at least one of $a$ and $b$ is new, we define an equation
$\Ejoin a b$ according to the following six rules.
\allowdisplaybreaks
\begin{align}
\text{If $a$ is old, $b$ is new, and $a\vee b$ is old, then
$\Ejoin a b$ is $a\vee x_b \approx a\vee b$.}\label{joinono}
\\
\text{If $a$ is new, $b$ is old, and $a\vee b$ is old, then
$\Ejoin a b$ is $x_a\vee b \approx a\vee b$.}\label{joinnoo}
\\
\text{If $a$ and $b$ are new and $a\vee b$ is old, then
$\Ejoin a b$ is $x_a\vee x_b \approx a\vee b$.}\label{joinnno}
\\
\text{If $a$ is old, $b$ and $a\vee b$ are new, then
$\Ejoin a b$ is $a\vee x_b \approx x_{a\vee b}$.}\label{joinonn}
\\
\text{If $a$ and $a\vee b$ are new and $b$ is old, then
$\Ejoin a b$ is $x_a\vee b \approx x_{a\vee b}$.}\label{joinnon}
\\
\text{If $a$, $b$, and $a\vee b$ are all new, then
$\Ejoin a b$ is $x_a\vee x_b \approx x_{a\vee b}$.}\label{joinnnn}
\end{align}
Analogously, replacing $\vee$ by $\wedge$, we define
the equations $\Emeet a b$ for all $(a,b)\in B\times B$ such that at least one of $a$ and $b$ is a new element.
Let $\widehat E$ be the system of all equations we have defined so far. Note that if $\alg X$ consists of finite lattices, then $\widehat E$ is finite.
Clearly, $\widehat E$ has a solution in $B$. Indeed, we can let $x_b:=b$ for all new elements $b$ to obtain a solution of $\widehat E$.
Since we have assumed that either $A$ is strongly algebraically closed in $\alg X$ or $\alg X$ consists of finite lattices and
$A$ is algebraically closed in $\alg X$, it follows that $\widehat E$ also has a solution in $A$. This allows us to fix a solution of $\widehat E$ in $A$. That is, we can choose an element $u_b\in A$ for each new element $b$ such that
the equations \eqref{joinono}--\eqref{joinnnn} turn into true equalities when the unknowns $x_b$, for $b\in B\setminus A$, are replaced by the elements $u_b$.
Next, consider the map
\begin{equation*}
f\colon B\to A,\text{ defined by }c\mapsto
\begin{cases}
c,&\text{if $c$ is an old element,}\cr
u_c,&\text{if $c$ is a new element.}
\end{cases}
\end{equation*}
We claim that $f$ is a retraction. Clearly, $f$ acts identically on $A$. So we need only to show that $f$ is a homomorphism. It suffices to verify that $f$ commutes with joins since the case of meets is analogous. If $a,b\in A$, then $a\vee b$ is also in $A$, and we have that $f(a)\vee f(b)=a\vee b=f(a\vee b)$, as required. If, say,
$a, a\vee b\in A$ and $b\in B\setminus A$, then \eqref{joinono} applies and we obtain that $f(a)\vee f(b)= a\vee u_b= a \vee b=f(a\vee b)$, as required. If $a,b,a\vee b$ are all new, then we can use \eqref{joinnnn} to obtain that $f(a)\vee f(b)=u_a\vee u_b=u_{a\vee b}=f(a\vee b)$, as required. The rest of the cases follow similarly from
\eqref{joinnoo}--\eqref{joinnon}. Thus, we conclude that $f$ commutes with joins. We obtain analogously that it commutes with meets, whereby $f$ is a homomorphism. So $f$ is a retraction, proving that \eqref{propa} $\Rightarrow$ \eqref{propb}
and, if $\alg X$ consists of finite lattices, \eqref{propc} $\Rightarrow$ \eqref{propb}.
To prove the implication \eqref{propb} $\Rightarrow$ \eqref{propa}, assume that $A\in \alg X$ is an absolute $\Hop$-retract for $\alg X$, $B\in \alg X$ is an $\alg X$-extension of $A$, and
a system $\widehat G$ of equations with constants taken from $A$ has a solution in $B$.
Let $x,y,z,\dots $ denote the unknowns occurring in $\widehat G$ (possibly, infinitely many), and let $b_x, b_y, b_z,\dots \in B$ form a solution of $\widehat G$. Since we have assumed that $A$ is an absolute $\Hop$-retract for $\alg X$, we can take a retraction $f\colon B\to A$. We define $d_x:=f(b_x)$, $d_y:=f(b_y)$, $d_z:=f(b_z)$, \dots; they are elements of $A$.
Let $p(a_1,\dots,a_k, x,y,z,\dots)\approx q(a_1,\dots,a_k, x,y,z,\dots)$ be one of the equations of $\widehat G$; here $p$ and $q$ are lattice terms, the constants $a_1,\dots, a_k$ are in $A$, and only finitely many unknowns occur in this equation, of course. Using that $f$ commutes with lattice terms and, at $=^\ast$, using also that $b_x$, $b_y$, $b_z$, \dots{} form a solution of the equation in question, we obtain that
\begin{align*}
p(a_1,\dots,a_k, d_x, d_y,d_z,\dots)
&= p(f(a_1),\dots,f(a_k), f(b_x), f(b_y),f(b_z),\dots)\cr
&= f(p(a_1,\dots,a_k, b_x, b_y,b_z,\dots)) =^\ast f(q(a_1,\dots,a_k, b_x, b_y,b_z,\dots))\cr
&= q(f(a_1),\dots,f(a_k), f(b_x), f(b_y),f(b_z),\dots)=q(a_1,\dots,a_k, d_x, d_y,d_z,\dots).
\end{align*}
This shows that $d_x,d_y,d_z,\dots \in A$ form a solution of $\widehat G$ in $A$. Therefore, $A$ is strongly algebraically closed in $\alg X$, showing the validity of \eqref{propb} $\Rightarrow$ \eqref{propa}.
Finally, the implication \eqref{propa} $\Rightarrow$ \eqref{propc} is trivial, completing the proof of Proposition~\ref{prop}.
\end{proof}
\section{{Proving} Theorem \ref{thmsps}}\label{sect-thm}
First, we recall briefly from Cz\'edli and Schmidt \cite{czgschtvisual} what we need to know about slim semimodular lattices. {Let us repeat that slim lattices are finite by definition; every lattice in this section is assumed to be \emph{finite}}.
For a slim semimodular lattice $L$, we always assume that a planar diagram of $L$ is fixed.
A cover-preserving four-element boolean sublattice of $L$ is called a \emph{$4$-cell}.
For $m,n\in\Nplu$, the direct product of an $(m+1)$-element chain and an $(n+1)$-element chain is called a \emph{grid} or, when we want to be more precise, an \emph{$m$-by-$n$ grid}; note that this grid has exactly $mn$ 4-cells.
We can add a \emph{fork} to a 4-cell of a slim semimodular lattice as shown in Figure 5 of \cite{czgschtvisual}; this is also shown here in Figure~\ref{figabsretr1}, where we have added a fork to the light-grey 4-cell of $S_7^{(1)}$ to obtain $S_7^{(2)}$, and in Figure~\ref{figllstr}, where we can obtain $R$ from the grid $G$ by adding a fork to the upper 4-cell of $G$. \emph{Corners} are particular doubly irreducible elements on the boundary of $L$ (see Figure 2 in \cite{czgschtvisual}), but we do not need their definition here. Instead of the exact definition of slim rectangular lattices, it suffices to know their characterization, which is given by (the last sentence of) Theorem 11 and Lemma 22 in \cite{czgschtvisual} as follows:
\begin{equation}
\parbox{9cm}{$L$ is a
\emph{slim rectangular} lattice if and only if it can be obtained from a grid by adding forks, one by one, in a finite (possibly zero) number of steps.}
\label{pbx:sRctGlr}
\end{equation}
We know from Lemma 21 of \cite{czgschtvisual} that
\begin{equation}
\parbox{9cm}{a lattice $L$ is a slim semimodular lattice if and only if $|L|\leq 2$ or $L$ can be obtained from a slim rectangular lattice by removing finitely many corners, one by one.}
\label{pbx:sLcRGns}
\end{equation}
\begin{figure}[ht]
\centerline
{\includegraphics[width=\textwidth]{czgamfig1}}
\caption{$S_7^{(1)}$, $S_7^{(2)}$, and $S_7^{(7)}$ }\label{figabsretr1}
\end{figure}
\begin{proof}[Proof of Theorem \ref{thmsps}]
Since slim semimodular lattices are finite by definition, the equivalence of \eqref{thmnul} and \eqref{thma} follows trivially from Proposition~\ref{prop}. Also, Proposition~\ref{prop} yields the equivalence of \eqref{thma} and \eqref{thmb}. Since the one-element lattice is an absolute retract for any class of lattices containing it, the implication \eqref{thmd} $\Rightarrow$ \eqref{thmb} is trivial.
Thus, it suffices to prove the implication \eqref{thmb} $\Rightarrow$ \eqref{thmd}. To do so, it is sufficient to prove that whenever $L\in\alg S$ and $|L|\geq 2$, then $L$ is not an absolute retract for $\alg S$.
So let $L$ be a slim semimodular lattice with at least two elements. By \eqref{pbx:sLcRGns} (or trivially if $|L|=2$), we can pick a slim rectangular lattice $R$ such that $L$ is a sublattice of $R$. It follows from \eqref{pbx:sRctGlr} that there exist $m,n\in\Nplu$ such that $R$ can be obtained from an $m$-by-$n$ grid $G$ by adding forks, one by one. Let $t\in\Nplu$ denote the smallest number such that $m+n+1\leq t$ and $|L|<t$.
To present an example that helps the reader follow the proof, let $L$ be the 9-element slim semimodular lattice on the top left of Figure \ref{figllstr}. For this $L$, we define $R$ and $G$ by the top right diagram and the bottom right diagram of Figure \ref{figllstr}, respectively, and we have that $m=2$, $n=1$, and $t=10$.
\begin{figure}[ht]
\centerline
{\includegraphics[width=\textwidth]{czgamfig2}}
\caption{Illustrating the proof of Theorem~\ref{thmsps}}\label{figllstr}
\end{figure}
We define the lattices $S_7^{(i)}$ for $i\in\Nplu$ by induction as follows; see Figure~\ref{figabsretr1} for $i\in\set{1,2,7}$, and see the diagram in the middle of Figure~\ref{figllstr} for $i=10$ if we disregard the black-filled elements. (That is, $S_7^{(i)}=K\setminus\{$black-filled elements$\}$ in this diagram.)
Resuming the definition of the lattices $S_7^{(i)}$, we obtain $S_7^{(1)}$ by adding a fork to the only 4-cell of the four-element boolean lattice. From $S_7^{(i)}$, we obtain $S_7^{(i+1)}$ by adding a fork to the rightmost 4-cell of $S_7^{(i)}$ that contains $1$, the largest element of $S_7^{(i)}$.
(Note that we have also defined a fixed planar diagram of $S_7^{(i)}$ in this way.) The elements of $S_7^{(i)}$ (or those of a planar lattice diagram) not on the boundary of the diagram are called \emph{inner elements}. Let $a_1,a_2, \dots, a_i$ be the inner coatoms of $S_7^{(i)}$, listed from left to right. In our diagrams, they are grey-filled.
From now on, we only need $S_7^{(t)}$. It follows from \eqref{pbx:sRctGlr} that $S_7^{(t)}$ is a slim semimodular (in fact, a slim rectangular) lattice. The meet $a_1\wedge\dots \wedge a_t$ of its inner coatoms will be denoted by $b$, as it is indicated in Figure~\ref{figllstr}.
Since $m+n+1\leq t$, the interval $[b,a_{m+1}]$ of $S_7^{(t)}$ includes an $m$-by-$n$ grid $G'$ with top element $a_{m+1}$. In our example, $G'$ is indicated by the light-grey area in the sense that $G'$ consists of those six elements of $S_7^{(t)}=S_7^{(10)}$ that are on the boundary of the light-grey rectangle. (Remember that $S_7^{(10)}= K\setminus\set{\text{black-filled elements}}$ in the middle of Figure \ref{figllstr}.)
Since the grids $G'$ and $G$ have the same {``sizes''}, they are isomorphic. Thinking of the diagrams, we can even assume that $G'$ and $G$ are geometrically congruent. Hence, when we add forks to $G$ one by one in order to get $R$, we can simultaneously add forks to
$G'$ in the same way and, consequently, also to $S_7^{(t)}$. In this way, we obtain a slim rectangular lattice $K$ from $S_7^{(t)}$; this follows from \eqref{pbx:sRctGlr}. Note that $K\in \alg S$.
In the middle of Figure~\ref{figllstr}, $K$ consists of the empty-filled elements, the grey-filled elements, and the black-filled elements. In $K$, the former interval $G'$ has become an interval isomorphic to $R$. But $R$ is an extension of $L$, whereby $K$ has a sublattice $L'$ such that $L'$ is isomorphic to $L$. In the middle of the figure,
the elements of $L'$ are the pentagon-shaped larger elements. Note that the original inner coatoms $a_{1}, \dots, a_t$ are also inner coatoms of~$K$.
Next, for the sake of contradiction, suppose that $L$ is an absolute retract for $\alg S$. Then so is $L'$ since $L'\cong L$.
Since $K\in\alg S$ and $L'$ is a sublattice of $K$, there exists a retraction $f\colon K\to L'$. Let $\Theta:=\set{(x,y)\in K^2:
f(x)=f(y)}$ be the kernel of $f$. Then $\Theta$ is a congruence of $K$ with exactly $|L'|$ blocks. But $t > |L|=|L'|$, whence there are distinct $i,j\in\set{1,\dots, t}$ such that $a_i$ and $a_j$ belong to the same $\Theta$-block. Hence, $(a_i,a_j)\in\Theta$, implying that $(a_i,1)=(a_i\vee a_i, a_j\vee a_i)\in \Theta$. Thus, the $\Theta$-block $1/\Theta$ of $1$ contains $a_i$. By Gr\"atzer's Swing Lemma, see his paper \cite{ggswinglemma} (alternatively, see Cz\'edli, Gr\"atzer, and Lakser \cite{czggghlswing} or Cz\'edli and Makay \cite{czgmakay} for secondary sources), $\set{a_1,\dots,a_t}\subseteq 1/\Theta$. Since congruence blocks are sublattices, $b=a_1\wedge\dots\wedge a_t \in 1/\Theta$. Therefore, using the facts that congruence blocks are \emph{convex} sublattices,
$a_{m+1}\in 1/\Theta$, and $G'$ was originally a subinterval of $[b,a_{m+1}]$ in $S_7^{(t)}$, we obtain that $L'\subseteq [b,a_{m+1}]\subseteq 1/\Theta$ in the lattice $K$. Hence, for any $x,y\in L'$, we have that $(x,y)\in \Theta$. Consequently, the definition of $\Theta$ and that of a retraction yield that, for any $x,y\in L'$,
$x=f(x)=f(y)=y$. Therefore, $|L|=|L'|=1$, which is a contradiction.
This contradiction implies that neither $L'$ nor $L$ is an absolute retract for $\alg S$, completing the proof of Theorem~\ref{thmsps}.
\end{proof}
\section{Proving Theorem \ref{thmdst} and its corollaries}\label{sect:dist}
\subsection{Notes before the proof}\label{subsect:nBtPrf}
This subsection sheds light on the way leading from Theorem~\ref{thmsps} to Theorem~\ref{thmdst}. The reader is not expected to check the in-line statements in this subsection; what is needed will be proved or referenced in due course.
In the proof of Theorem \ref{thmsps}, forks play a crucial role. This raises the question of what happens if forks are excluded from \eqref{pbx:sRctGlr}. It follows from Cz\'edli and Schmidt~\cite[Lemma 15]{czgschtvisual} (and the proof of Corollary~\ref{cor:krSzwzlvnlZvl} here) that the lattices we obtain by means of \eqref{pbx:sRctGlr} and \eqref{pbx:sLcRGns} \emph{without} adding forks are exactly the members of $\Dnfin 2$.
But $\Dnfin 2$ is the class of \emph{distributive} slim semimodular lattices. Hence, utilizing the theory of slim semimodular lattices, the particular case $n=2$ of Theorem~\ref{thmdst} becomes available with little effort. Although this section is more ambitious by allowing $n\in\Nplu\cup\set{\omega}$,
the ideas extracted from the theory of slim semimodular lattices and from the proof of Theorem~\ref{thmsps}
have been decisive in reaching Theorem~\ref{thmdst}.
\subsection{Auxiliary lemmas}
Unless otherwise explicitly stated, every lattice in this section is assumed to be \emph{finite}.
By an \emph{$n$-dimensional grid} we mean the direct product of $n$ nontrivial (that is, non-singleton) finite chains. Clearly, the order dimension of an $n$-dimensional grid is $n$.
Two-dimensional grids were simply called grids in Section~\ref{sect-thm}.
For an $n$-dimensional grid $G$ and a maximal element $a\in\Jir G$, the principal ideal $\ideal a$ is a nontrivial chain. Chains of this form will be called the \emph{canonical chains} of $G$.
The following lemma follows trivially from the fact that in a direct product of finitely many finite chains we compute componentwise.
\begin{lemma}\label{lemma:cnChgR} If $n\in\Nplu$ and $G$ is an $n$-dimensional grid, then the following assertions hold.
\begin{enumerate}
\item\label{lemma:cnChgRa} $G$ has exactly $n$ canonical chains; in the rest of the lemma, they will be denoted by $C_1$, \dots, $C_n$.
\item\label{lemma:cnChgRb} Each element $x$ of $G$ can uniquely be written in the \emph{canonical form}
\begin{equation}
\parbox{9.2cm}{$x=\cj x1\vee\dots\vee \cj xn\,\,$ where $\cj x1:=x\wedge 1_{C_1}\in C_1$, \dots, $\cj xn:=x\wedge 1_{C_n}\in C_n$; the elements $\cj x1$,\dots,$\cj xn$ are called the \emph{canonical joinands} of $x$.}
\label{eq:cFnfRmsG}
\end{equation}
\item\label{lemma:cnChgRc} For each $i\in\set{1,\dots,n}$, the map $\pi_i\colon G\to C_i$ defined by $x\mapsto \cj xi$ is a surjective homomorphism.
\item\label{lemma:cnChgRd} The map $G\to C_1\times\dots\times C_n$ defined by $x\mapsto(\cj x1,\dots,\cj xn)$ is a lattice isomorphism.
\end{enumerate}
\end{lemma}
The notation $\cj x1$, \dots, $\cj xn$ will frequently be used, provided the canonical chains of an $n$-dimensional grid are fixed. The map $\pi_i$ above is often called the \emph{$i$-th projection}.
Note that, for an $n$-dimensional grid $G$, $\Jir G$ is the disjoint union of $C_1\setminus\set 0$, \dots, $C_n\setminus\set 0$. Thus, the \emph{set} $\set{C_1,\dots,C_n}$ of the canonical chains is uniquely determined, and only the \emph{order} of these chains needs fixing.
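For instance, if $G=C'\times C''$ is a $2$-dimensional grid, then the canonical chains are $C_1=C'\times\set{0}$ and $C_2=\set{0}\times C''$, and the canonical form of an element $x=(x',x'')$ is $x=(x',0)\vee(0,x'')$ with $\cj x1=(x',0)$ and $\cj x2=(0,x'')$.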
We also need the following lemma; the sublattices of a chain are called \emph{subchains}.
\begin{lemma}\label{lemma:gRd}
Assume that $n\in\Nplu$, $L$ and $K$ are $n$-dimensional grids, and $L$ is a sublattice of $K$. Then there are nontrivial subchains $E_1$, \dots, $E_n$ of the canonical chains $C_1$, \dots, $C_n$ of $K$, respectively, such that
\begin{equation}
L=\set{x\in K: \cj x1\in E_1,\dots, \cj x n\in E_n}.
\label{eq:dcEdjKwmC}
\end{equation}
\end{lemma}
The visual meaning of Lemma \ref{lemma:gRd} is that an $n$-dimensional grid cannot be embedded into another $n$-dimensional grid in a ``skew way''.
\begin{proof}[Proof of Lemma~\ref{lemma:gRd}] Assume that $n\in\Nplu$, $L$ and $K$ are $n$-dimensional grids, and $L$ is a sublattice of $K$. Then there are
integers $t_1\geq 2$, \dots, $t_n\geq 2$ and chains
$H_i=\set{0,1,\dots, t_i-1}$ (with the natural ordering of integer numbers) such that we can pick an isomorphism
$\phi\colon H_1\times\dots\times H_n\to L$.
The canonical chains of $K$ will be denoted by $C_1$, \dots, $C_n$.
The least element of $H_1\times\dots\times H_n$ and that of $L$ are $\vec 0:=(0,\dots,0)$ and $0_L=\phi(\vec 0)$, respectively. For $(i_1,\dots,i_n)\in H_1\times\dots\times H_n$, we write $\phi(i_1,\dots,i_n)$ rather than the more precise $\phi((i_1,\dots,i_n))$.
For $j\in\set{1,\dots,n}$ and $i\in H_j\setminus\set 0$, we are going to use the notation
\begin{equation}
\ata j i :=(\,\underbrace{0,\,\,\dots,\,\,0}_{j-1\text{ zeros}}\,,\,\,i,\,\,\underbrace{0,\,\,\dots,\,\,0}_{n-j\text{ zeros}}\,).
\label{eq:TmBjblChn}
\end{equation}
Clearly,
\begin{equation}
\Jir{H_1\times\dots\times H_n}=\set{\ata j i: j\in\set{1,\dots,n}\text{ and }i\in H_j\setminus\set 0}.
\end{equation}
It is also clear that the atoms of $H_1\times\dots\times H_n$ are
$\atom 1$, \dots, $\atom n$.
With the notation given in \eqref{eq:cFnfRmsG},
for $j\in \set{1,\dots,n}$ we let
\begin{equation}
I_j:=\set{i\in\set{1,\dots,n}: \cj{\phi(\atom j)}i > \cj{0_L}i }.
\label{eq:NshMdhzDfdPh}
\end{equation}
Since $\atom j>\vec 0$ and $\phi$ is an isomorphism, $I_j\neq\emptyset$.
We claim that
\begin{equation}
\text{if $j\neq k\in\set{1,\dots,n}$, then $I_j\cap I_k=\emptyset$.}
\label{eq:szTrvspr}
\end{equation}
For the sake of contradiction, suppose that $j\neq k$ but $i\in I_j\cap I_k$. Then $\cj{\phi(\atom j)}i > \cj{0_L}i$ and $\cj{\phi(\atom k)}i > \cj{0_L}i$. Since $j$ and $k$ play a symmetrical role and the elements $\cj{\phi(\atom j)}i$ and $\cj{\phi(\atom k)}i$ belonging to the same canonical chain $C_i$ of $K$ are comparable, we can assume that
$\cj {0_L}i < \cj{\phi(\atom j)}i \leq \cj{\phi(\atom k)}i$.
Hence, using Lemma~\ref{lemma:cnChgR}\eqref{lemma:cnChgRc},
\begin{align*}
\cj{\phi(\atom j)}i
&= \cj{\phi(\atom j)}i \wedge \cj{\phi(\atom k)}i =
\cj{\bigl(\phi(\atom j)\wedge \phi(\atom k)\bigr)}i \cr
&=\cj{\phi(\atom j\wedge \atom k)}i=\cj{\phi(\vec 0)}i=\cj{0_L}i,
\end{align*}
contradicting \eqref{eq:NshMdhzDfdPh} and proving \eqref{eq:szTrvspr}.
Using that $I_1$, \dots, $I_n$ are nonempty subsets of the finite set $\set{1,\dots, n}$ and they are pairwise disjoint by \eqref{eq:szTrvspr}, we have that
\[n\leq |I_1|+\dots+ |I_n|=|I_1\cup\dots\cup I_n|\leq |\set{1,\dots, n}|=n.
\]
Hence, none of the $I_1$, \dots, $I_n$ can have more than one element, and we obtain that $|I_1|=\dots=|I_n|=1$. Therefore, after changing the order of the direct factors in $H_1\times\dots\times H_n$ and so also the order of the atoms $\atom 1$, \dots, $\atom n$ if necessary, we can write that $I_1=\set 1$, \dots, $I_n=\set n$.
This means that, for all $j,k\in\set{1,\dots,n}$,
\begin{equation}
\cj{\phi(\ata j1)}k\geq \cj{0_L}k, \text{ and }
\cj{\phi(\ata j1)}k > \cj{0_L}k \iff k=j.
\label{eq:cSkmpSzkDfcRsb}
\end{equation}
Next, we generalize \eqref{eq:cSkmpSzkDfcRsb} by claiming that for $j,k\in\set{1,\dots,n}$ and $i\in H_j\setminus\set 0$,
\begin{equation}
\cj{\phi(\ata j i)}k\geq \cj{0_L}k, \text{ and }
\cj{\phi(\ata j i)}k > \cj{0_L}k \iff k=j.
\label{eq:rSmnVmsKrp}
\end{equation}
To prove this, we can assume that $i>1$ since otherwise \eqref{eq:cSkmpSzkDfcRsb} applies.
Using \eqref{eq:cSkmpSzkDfcRsb} together with the fact that $\pi_k$ and $\pi_j$ defined in Lemma~\ref{lemma:cnChgR}\eqref{lemma:cnChgRc} are order-preserving, we obtain that
$\cj{\phi(\ata ji)}k\geq \cj{\phi(\ata j1)}k\geq \cj{0_L}k$ for all $k\in\set{1,\dots,n}$, as required, and $\cj{\phi(\ata ji)}j\geq \cj{\phi(\ata j1)}j > \cj{0_L}j$. So all we need to show is that $\cj{\phi(\ata ji)}k > \cj{0_L}k$ is impossible if $k\neq j$. For the sake of contradiction, suppose that $k\neq j$, $k,j\in\set{1,\dots,n}$, and $\cj{\phi(\ata ji)}k > \cj{0_L}k$.
We also have that $\cj{\phi(\ata ki)}k > \cj{0_L}k$ since
$\pi_k$ is order-preserving and $\cj{\phi(\ata k1)}k > \cj{0_L}k$ by \eqref{eq:cSkmpSzkDfcRsb}. Belonging to the same canonical chain of $K$, the elements $\cj{\phi(\ata ji)}k$ and $\cj{\phi(\ata ki)}k$ are comparable, whence their meet is one of the meetands. Thus, $\cj{\phi(\ata ji)}k\wedge \cj{\phi(\ata ki)}k > \cj{0_L}k$. Hence, using that $\phi$ and $\pi_k$ are homomorphisms and $\ata ji \wedge \ata k i=\vec 0$, we obtain that
\begin{align*}
\cj{0_L}k &< \cj{\phi(\ata j i)}k\wedge \cj{\phi(\ata k i)}k =
\cj{\bigl(\phi(\ata j i) \wedge \phi(\ata k i)\bigr)}k \cr
&= \cj{\phi(\ata j i \wedge \ata k i)}k = \cj{\phi(\vec 0)}k
=\cj{0_L}k,
\end{align*}
which is a contradiction proving \eqref{eq:rSmnVmsKrp}.
Next, after extending the notation given in \eqref{eq:TmBjblChn} by letting $\ata j 0:=\vec 0$ for $j\in\set{1,\dots,n}$, we have that
\begin{equation}
\cj{\phi(\ata k i )}k\geq \cj{0_L}k \,\,
\text{ for all }k\in\set{1,\dots,n}\text{ and }i\in H_k
\label{eq:tdBlrbKlR}
\end{equation}
since $\ata k i\geq \ata k 0=\vec 0$, $\phi$ and $\pi_k$ are order-preserving maps, and $0_L=\phi(\vec 0)$.
For $j\in\set{1,\dots,n}$, we define
\begin{equation}
E_j:=\set{ \cj{\phi(\ata j i )}j: i\in H_j}.
\label{eq:Eunderscorej}
\end{equation}
By \eqref{eq:cFnfRmsG}, $E_j\subseteq C_j$, that is, $E_j$ is a subchain of $C_j$ for all $j\in\set{1,\dots,n}$.
We are going to show that these $E_j$ satisfy \eqref{eq:dcEdjKwmC}.
First, assume that $x\in K$ is of the form
$x=\cj x 1\vee\dots\vee \cj x n$ such that $\cj x j\in E_j$ for all $j\in\set{1,\dots, n}$. Then, for each $j\in\set{1,\dots,n}$, there is an $i(j)\in H_j$ such that $\cj x j = \cj{\phi(\ata j {i(j)} )}j$. Using what we already have, let us compute:
\allowdisplaybreaks
\begin{align}
x&=\cj x 1\vee\dots\vee \cj x n=\cj{\phi(\ata 1 {i(1)} )}1\vee\dots\vee \cj{\phi(\ata n {i(n)} )}n
\label{eq:fPldMsTa}
\\
&\eqeqref{eq:tdBlrbKlR} \cj{\phi(\ata 1 {i(1)})}1
\vee\dots\vee \cj{\phi(\ata n {i(n)} )}n \vee \cj{0_L}1 \vee\dots \vee \cj{0_L}n
\cr
&\eqeqref{eq:rSmnVmsKrp}
\cj{\phi(\ata 1 {i(1)})}1
\vee\dots\vee \cj{\phi(\ata n {i(n)} )}n
\cr
&\phantom{m m i}\vee
\cj{\phi(\ata 1 {i(1)})}2\vee\dots\vee \cj{\phi(\ata 1 {i(1)})}n
\cr
&\phantom{m m i}\vee\dots\vee
\cj{\phi(\ata n {i(n)})}1\vee\dots\vee \cj{\phi(\ata n {i(n)})}{n-1}
\cr
&\eqeqref{eq:cFnfRmsG}
\phi(\ata 1 {i(1)})\vee \dots\vee \phi(\ata n {i(n)})
=\phi( \ata 1 {i(1)}\vee \dots\vee \ata n {i(n)} )
\label{eq:fPldMsTb}
\end{align}
Since $\phi( \ata 1 {i(1)}\vee \dots\vee \ata n {i(n)} )
\in\phi(H_1\times\dots\times H_n)=L$, the computation from \eqref{eq:fPldMsTa} to \eqref{eq:fPldMsTb} shows the ``$\supseteq$'' part of \eqref{eq:dcEdjKwmC}.
To show the reverse inclusion, assume that $x\in L$. Applying Lemma \ref{lemma:cnChgR}\eqref{lemma:cnChgRb} to the $\phi$-preimage of $x$, we obtain the existence of $i(1),\dots,i(n)$ such that $x=\phi( \ata 1 {i(1)}\vee \dots\vee \ata n {i(n)} )$.
Reading the computation from \eqref{eq:fPldMsTb} to \eqref{eq:fPldMsTa} upward, it follows that $x=\cj{\phi(\ata 1 {i(1)})}1\vee\dots\vee \cj{\phi(\ata n {i(n)})}n$. By the uniqueness part of Lemma \ref{lemma:cnChgR}\eqref{lemma:cnChgRb},
$\cj x 1= \cj{\phi(\ata 1 {i(1)} )}1$, \dots, $\cj x n=\cj{\phi(\ata n {i(n)})}n$. Combining this with \eqref{eq:Eunderscorej},
we have that $\cj x 1\in E_1$, \dots, $\cj x n\in E_n$. This yields the ``$\subseteq$'' inclusion for \eqref{eq:dcEdjKwmC} and completes the proof of Lemma~\ref{lemma:gRd}.
\end{proof}
The following easy lemma sheds more light on the categories $\Dncovzo n$, $n\in\Nplu\cup\set\omega$. The \emph{length} of a lattice $M$ is denoted by $\length M$; as for its definition (in the finite case), we mention that if $C$ is a maximum-sized chain in $M$, then $\length M +1 = |C|$.
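For example, the length of an $n$-dimensional grid $C_1\times\dots\times C_n$ is $(|C_1|-1)+\dots+(|C_n|-1)$, since a maximum-sized chain is obtained by increasing one coordinate at a time.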
\begin{lemma}\label{lemmaKzTfPsz} Assume that $K, L$ are finite semimodular lattices (in particular, finite distributive lattices) and $f\colon K\to L$ is a map. Then the following three assertions hold.
\begin{enumerate}
\item\label{lemmaKzTfPsza} If $f$ is a \covzo, then $f$ is a \embcovzo{} and $\length K=\length L$.
\item\label{lemmaKzTfPszb} If $f$ is a lattice embedding and $\length K=\length L$, then $f$ is a \covzo{}.
\item\label{lemmaKzTfPszc} If $L$ is a sublattice of $K$ such that the map $L\to K$ defined by $x\mapsto x$ is a \embcovzo{} and $f$ is a retraction, then $f$ is a lattice isomorphism (and, in particular, $f$ is also a \embcovzo).
\end{enumerate}
\end{lemma}
\begin{proof} First, recall the following concept. A sublattice $S$ of a lattice $M$ is a \emph{congruence-determining sublattice} of $M$ if any congruence $\alpha$ of $M$ is uniquely determined by its restriction $\restrict \alpha S:=\set{(x,y)\in S^2: (x,y)\in\alpha}$. By Gr\"atzer and Nation~\cite{gr-nation},
\begin{equation}
\parbox{7.2cm}{every maximal chain of a finite semimodular lattice is a congruence-determining sublattice.}
\label{pbx:gRnNt}
\end{equation}
To prove part \eqref{lemmaKzTfPsza}, let $f\colon K\to L$ be a \covzo. We know from \eqref{pbx:lnPrsHm} that $\length K=\length L$. Let $\Theta:=\set{(x,y)\in K^2: f(x)=f(y)}$ be the kernel of $f$, and take a maximal chain $C$ in $K$. For $c,d\in C$ such that $c\prec d$, we have that $(c,d)\notin\Theta$ since $f(c)\prec f(d)$. Hence, using that the blocks of $\restrict\Theta C$ are convex sublattices of $C$,
it follows that $\restrict \Theta C=\diag C$. Applying \eqref{pbx:gRnNt}, we have that $\Theta=\diag K$. Thus, $f$ is injective, proving part \eqref{lemmaKzTfPsza}.
We prove part \eqref{lemmaKzTfPszb} by way of contradiction. Suppose that in spite of the assumptions, $f$ is not cover-preserving. Pick $a,b\in K$ such that $a\prec b$ but $f(a)\not\prec f(b)$. The injectivity of $f$ rules out that $f(a)=f(b)$. Hence, the interval $[f(a),f(b)]$ is of length at least 2. Extend $\set{a,b}$ to a maximal chain $C=\set{0=c_0,c_1,\dots, c_k=1}$ of $K$ such that $a=c_{i-1}$, $b=c_i$, and $c_0\prec c_1\prec\dots\prec c_k$. By the Jordan--H\"older chain condition, $k=\length K$.
Using the injectivity of $f$ again and the fact that $f$ is order-preserving, $\length{[f(c_{j-1}), f(c_j)]}\geq 1$ for all $j\in\set{1,\dots, k}$. So the summands in
\begin{equation}
\length K \geq \sum_{j=1}^k \length{[f(c_{j-1}), f(c_j)]}
\label{eqszGhsplBBszlHk}
\end{equation}
are positive integers but the $i$-th summand is at least two.
Therefore, this sum and $\length K$ are at least $k+1$, which is a contradiction completing the proof of
part \eqref{lemmaKzTfPszb}.
Next, to prove part \eqref{lemmaKzTfPszc}, observe that $0_L=0_K$ and $1_L=1_K$. Hence, since $f$ is a retraction, $f(0_K)=0_L$ and $f(1_K)=1_L$, as required. We are going to show that whenever $a\prec b$ in $K$, then $f(a)\prec f(b)$ in $L$. For the sake of contradiction, suppose that $a\prec b$ in $K$ but $f(a)\not\prec f(b)$ in $L$. Then there are two cases (since $f$ is order-preserving): either we have that $f(a)=f(b)$, or $f(a)<f(b)$ and the length of the interval $[f(a),f(b)]$ is at least 2.
For each of these two cases,
let $\Theta$ denote the kernel $\set{(x,y)\in K^2: f(x)=f(y)}$ of $f$, and let $U=\set{0=u_0,u_1,\dots, u_k=1}$ be a maximal chain of $L$. It is also a maximal chain of $K$ since the embedding $L\to K$ defined by $x\mapsto x$ is a \covzo.
We know from the Jordan--H\"older chain condition that $k=\length L$.
First, we deal with the first case, $f(a)=f(b)$. Then $(a,b)\in\Theta$ shows that $\Theta\neq \diag K$.
We have that $\restrict\Theta U\neq \diag U$ since \eqref{pbx:gRnNt} applies. Using that the blocks of $\restrict\Theta U$ are convex sublattices of $U$, it follows that $(u_{i-1},u_i)\in \restrict\Theta U$ for some $i\in\set{1,\dots,k}$. This means that $f(u_{i-1})=f(u_i)$.
This equality leads to a contradiction since $f$ is a retraction and so $u_{i-1}=f(u_{i-1})=f(u_{i})=u_{i}$.
Since the only conditions tailored to $a$ and $b$ were $a\prec b$ and $f(a)=f(b)$, we have also obtained that
\begin{equation}
\text{if $a'\prec b'$ in $K$, then $f(a')\neq f(b')$.}
\label{eq:pCpszLklChL}
\end{equation}
Next, we focus on the case $f(a)<f(b)$ and $\length{[f(a),f(b)]}\geq 2$. As in the proof of part \eqref{lemmaKzTfPszb}, we can extend $\set{a,b}$ to a maximal chain $C$ of $K$.
Since $U$ is also a maximal chain of $K$, the Jordan--H\"older chain condition gives that $\length C=\length K=\length U=k$. This allows us to write that $C=\set{0=c_0,c_1,\dots, c_k=1}$ where $a=c_{i-1}$ and $b=c_i$ for some $i\in\set{1,\dots,k}$, and $c_0\prec c_1\prec\dots\prec c_k$. By the Jordan--H\"older chain condition, \eqref{eqszGhsplBBszlHk} is still valid.
Each summand in \eqref{eqszGhsplBBszlHk} is at least 1 by
\eqref{eq:pCpszLklChL}, but the $i$-th summand is
$\length{[f(c_{i-1}),f(c_i)]}=\length{[f(a),f(b)]}\geq 2$. Hence, $k=\length K \geq k+1$, which is a contradiction again.
In this way, we have shown that $f$ is a \covzo.
Applying the already proven part \eqref{lemmaKzTfPsza} of Lemma~\ref{lemmaKzTfPsz}, we obtain that $f$ is a \embcovzo.
This yields that $|K|\leq |L|$. But we also have that $|L|\leq |K|$ since $L$ is a sublattice of $K$. Thus, $|K|=|L|$, whence
the embedding $f$ is a lattice isomorphism since these lattices are finite.
This completes the proof of part \eqref{lemmaKzTfPszc}
and that of Lemma~\ref{lemmaKzTfPsz}.
\end{proof}
\begin{lemma}\label{lemma:nLskPcjktNLTn}
If $L$ is a nontrivial finite distributive lattice with order dimension $n\in\Nplu$, then there is a \embcovzo{} of $L$ into an $n$-dimensional grid $G$.
\end{lemma}
\begin{proof}[Proof of Lemma \ref{lemma:nLskPcjktNLTn}] By \eqref{eqtxt:dimDwJD}, $\width{\Jir L}=n$.
It follows from Dilworth \cite[Theorem 1.1]{dilworth}, mentioned already in Subsection \ref{subsect:Dfdim}, that there are chains $C_1$, \dots, $C_n$ in $\Jir L$ such that
$\Jir L=C_1\cup\dots\cup C_n$. We define $E_1$, \dots, $E_n$ by induction as follows:
\[
E_1:=C_1 \text{ and, for $i\in\set{2,\dots,n}$, } E_i:=C_i\setminus(C_1\cup\dots\cup C_{i-1}).
\]
We show by an easy induction that
\begin{equation}
\parbox{7.6cm}{for $i\in\set{1,\dots,n}$,
$E_1\cup\dots\cup E_i=C_1\cup\dots\cup C_i$, and the sets
$E_1$, \dots, $E_i$ are pairwise disjoint.}
\label{pbx:szWrknCSkBrGlnD}
\end{equation}
Since this is trivial for $i=1$, assume that $i\in\set{2,\dots, n}$ and \eqref{pbx:szWrknCSkBrGlnD} holds for $i-1$. Then
$E_1\cup \dots \cup E_{i-1}\cup E_i= C_1\cup \dots \cup C_{i-1}\cup (C_i\setminus(C_1\cup\dots\cup C_{i-1}))=C_1\cup\dots\cup C_i$
shows the equality in \eqref{pbx:szWrknCSkBrGlnD} for $i$. The sets $E_1$, \dots, $E_{i-1}$ are pairwise disjoint by the induction hypothesis, while $E_i$ is disjoint from them because of $ E_i:=C_i\setminus(C_1\cup\dots\cup C_{i-1}) = C_i\setminus(E_1\cup\dots\cup E_{i-1})$.
This shows the validity of \eqref{pbx:szWrknCSkBrGlnD}.
Next, with $\Ep i:=E_i\cup\set 0$ for $i\in \set{1,\dots,n}$, where $0=0_L\notin E_i$, we define $G:=\Ep 1\times\dots \times \Ep n$.
Since $L$ is nontrivial (that is, $|L|>1$), we have that $|\Ep i|\geq 2$ and so $G$ is an $n$-dimensional grid. Clearly, $|\Jir G|=|E_1|+\dots+|E_n|$. This equality and \eqref{pbx:szWrknCSkBrGlnD} give that
$|\Jir G|=|E_1\cup\dots\cup E_n|=|C_1\cup\dots\cup C_n|=|\Jir L|$.
We know from folklore or from Gr\"atzer~\cite[Corollary 112]{r:Gr-LTFound} that
\begin{align}
&\parbox{7.2cm}{the length of a finite distributive lattice equals the number of its join-irreducible elements,}\label{pbx:lJrsBlTlgR}\\
&\text{whereby $G$ and $L$ are of the same length.}
\label{eqtxt:sHmlFksRb}
\end{align}
For $x\in L$ and $i\in\set{1,\dots,n}$, let $x_i$ stand for the largest element of $\Ep i\cap \ideal x$; this makes sense since $\Ep i$ is a chain of $L$ and $0\in \Ep i\cap \ideal x$ shows that $\Ep i\cap \ideal x\neq \emptyset$. We are going to show that
\begin{equation}
\parbox{6.5cm}{the map $\phi\colon L\to G$ defined by the rule $x\mapsto (x_1,\dots,x_n)$ is a lattice embedding.}
\label{pbx:zgmSzcskHt}
\end{equation}
To prove \eqref{pbx:zgmSzcskHt}, let $x,y\in L$. Denote $x\wedge y$ and $x\vee y$ by $u$ and $v$, respectively.
We have that $\phi(x)=(x_1,\dots,x_n)$ and $\phi(y)=(y_1,\dots,y_n)$. Here $y_i$ is the largest element of $\Ep i\cap \ideal y$, and analogous notation applies for $\phi(u)$ and $\phi(v)$. Since the lattice operations in the direct product $G$ are computed componentwise, we only need to show that, for every $i\in \set{1,\dots,n}$, $x_i\wedge y_i=u_i$ and
$x_i\vee y_i=v_i$. In fact, we only need to show that $x_i\wedge y_i\leq u_i$ and $x_i\vee y_i\geq v_i$ since the converse inequalities follow from the fact that $\phi$ is clearly order-preserving. Since $x_i$ and $y_i$ belong to the same chain, $\Ep i$, these two elements are comparable. They play a symmetrical role, whence we can assume that
$x_i\leq y_i$. Thus, the equalities $x_i=x_i\wedge y_i$ and $y_i=x_i\vee y_i$ reduce our task to show that $x_i\leq u_i$ and $y_i\geq v_i$. Since $x_i\in \Ep i\cap \ideal x$
and $x_i\leq y_i$ yields that $x_i\in \Ep i\cap \ideal y$,
we have that $x_i\in \Ep i\cap \ideal x\cap \ideal y=\Ep i\cap \ideal (x\wedge y)=\Ep i\cap \ideal u$. Taking into account that $u_i$ is the largest element of $\Ep i\cap \ideal u$, the required inequality $x_i\leq u_i$ follows.
It belongs to the folklore of lattice theory (and it occurs in the last paragraph of the proof of Theorem 107 in
Gr\"atzer \cite{r:Gr-LTFound}) that
\begin{equation}
\parbox{8cm}{if $D$ is a finite distributive lattice, $t\in\Nplu$, $p\in\Jir D$, $q_1,\dots,q_t\in D$, and $p\leq q_1\vee\dots\vee q_t$, then there is an $i\in\set{1,\dots,t}$ such that $p\leq q_i$.}
\label{eq:fLktnhLdSrbK}
\end{equation}
Indeed, if the premise of \eqref{eq:fLktnhLdSrbK} holds, then
$p=p\wedge(q_1\vee\dots\vee q_t)=(p\wedge q_1)\vee\dots\vee (p\wedge q_t)$ and $p\in \Jir D$ yield that $p=p\wedge q_i$ for some $i$, implying the required $p\leq q_i$.
Resuming our argument for $\phi$, we know that $v_i\leq v=x\vee y$ and $v_i\in \Ep i\subseteq \Jir L$. Hence
\eqref{eq:fLktnhLdSrbK} gives that $v_i\leq x$ or $v_i\leq y$. If $v_i\leq x$, then the definition of $x_i$ yields that $v_i\leq x_i$, whence $v_i\leq y_i$. If $v_i\leq y$, then the definition of $y_i$ immediately yields that $v_i\leq y_i$. So the required $y_i\geq v_i$ holds in both cases, and we have shown that $\phi$ is a lattice homomorphism.
Next, we claim that for each $x\in L$,
\begin{equation}
x = x_1\vee\dots\vee x_n.
\label{eq:GmrSlrkLrSztbrkK}
\end{equation}
By finiteness, there is a subset $H$ of $\Jir L$ such that $x=\bigvee H$. For each $h\in H$, \eqref{pbx:szWrknCSkBrGlnD} and $\Jir L=C_1\cup\dots\cup C_n$ yield an $i\in\set{1,\dots,n}$ such that $h\in \Ep i$. Then we have that $h\in\Ep i\cap\ideal x$, whereby $h\leq x_i\leq x_1\vee\dots\vee x_n$.
Since this holds for all $h\in H$, we have that $x=\bigvee H\leq x_1\vee\dots\vee x_n$. The converse inequality is trivial, and we conclude \eqref{eq:GmrSlrkLrSztbrkK}.
Clearly, \eqref{eq:GmrSlrkLrSztbrkK} implies the injectivity of $\phi$. Thus, we have shown \eqref{pbx:zgmSzcskHt}.
Finally, \eqref{eqtxt:sHmlFksRb}, \eqref{pbx:zgmSzcskHt}, and Lemma~\ref{lemmaKzTfPsz}\eqref{lemmaKzTfPszb} imply that $\phi$ is a \covzo, completing the proof of Lemma~\ref{lemma:nLskPcjktNLTn}.
\end{proof}
\begin{lemma}\label{lemma:notbOOle}
If $n\in\Nplu$, $L$ is an $n$-dimensional grid, but $L$ is not a boolean lattice, then $L$ is a sublattice of an $(n+1)$-dimensional grid $K$ such that $K$ and $L$ are of the same length.
\end{lemma}
\begin{proof} By the assumption, $L=C_1\times\dots\times C_n$ such that $C_1$, \dots, $C_n$ are nontrivial chains and at least one of them consists of at least three elements. Up to isomorphism, the order of the direct factors is irrelevant, whereby we can assume that $|C_1|\geq 3$. Let $q$ be the unique coatom of $C_1$. Then $0<q\prec 1$ in $C_1$ and $E_0:=\filter q=\set{q,1}$ is a two-element subchain of $C_1$.
The subchain $E_1:=\ideal q$ is still a nontrivial chain.
Define $K:=E_0\times E_1\times C_2\times \dots\times C_n$.
It is an $(n+1)$-dimensional grid. Since $\Jir L$ consists of the vectors with exactly one nonzero component and similarly for $\Jir K$,
$|\Jir L|=(|C_1|-1)+(|C_2|-1)+\dots+(|C_n|-1)= (|E_0|-1)+(|E_1|-1)+(|C_2|-1)+\dots+(|C_n|-1)=|\Jir K|$. Hence, \eqref{pbx:lJrsBlTlgR} gives that $L$ and $K$ are of the same length. We are going to show that $L$ can be embedded into $K$.
Instead of defining an injective homomorphism $L\to K$ and verifying its properties in a tedious way, recall the following. If $H_1$ and $H_2$ are lattices, $F_1$ is a filter of $H_1$, $I_2$ is an ideal of $H_2$, and $\psi\colon F_1\to I_2$ is a lattice isomorphism, then the quintuplet $(H_1,H_2,F_1,I_2,\psi)$ uniquely determines a lattice $H$ by identifying $x$ with $\psi(x)$, for all $x\in F_1$, in $H_1\cup H_2$. This $H$ is the well-known \emph{Hall--Dilworth gluing} of $H_1$ and $H_2$ or, to be more precise, the Hall--Dilworth gluing determined by the quintuplet; see, for example, Gr\"atzer~\cite[Lemma 298]{r:Gr-LTFound} for more details. Furthermore, it is also well known, see Gr\"atzer~\cite[Lemma 299]{r:Gr-LTFound}, that
\begin{equation}
\parbox{8.2cm}{if $M$ is a lattice, $M_1$ is an ideal of $M$, $M_2$ is a filter of $M$, and $T:=M_1\cap M_2\neq \emptyset$, then $M_1\cup M_2$ is a sublattice of $M$ and $M$ is isomorphic to the Hall--Dilworth gluing determined by $(M_1,M_2, T, T,\id T)$,}
\label{pbx:wGhrSgcSrgsJ}
\end{equation}
where $\id T\colon T\to T$ is the \emph{identity map} defined by $x\mapsto x$.
In the rest of \emph{this} proof, $\vec 0$ and $\vec 1$ will stand for $(0_{C_2},\dots,0_{C_n})\in C_2\times\dots \times C_n$ and $(1_{C_2},\dots,1_{C_n})\in C_2\times\dots \times C_n$, respectively.
In $L$, we let $I_L:=\ideal{(q,\vec 1\,)}$, $F_L:=\filter{(q,\vec 0\,)}$, and $T_L:=I_L\cap F_L=[ (q,\vec 0), (q,\vec 1) ]$.
In $K$, we let $I_K:=\ideal{(q,q,\vec 1\,)}$; remember that the first $q$ here is the least element of $E_0$ while the second $q$ is the largest element of $E_1$. Still in $K$, we also let $F_K:=\filter{(q,q,\vec 0\,)}$
and $T_K:=I_K\cap F_K=[ (q,q,\vec 0\,), (q,q,\vec 1\,) ]$.
Clearly, the map $\rho\colon I_L\to I_K$ defined by $(x,\vec y\,)\mapsto (q,x,\vec y\,)$ is an isomorphism. Let
\begin{equation*}
\tau\colon F_L\to F_K\text{ be defined by } (x,\vec y\,)\mapsto (x,q,\vec y);
\end{equation*}
it is also an isomorphism. We have to check that each of the restrictions $\restrict \rho{T_L}$ and $\restrict \tau{T_L}$ are the same maps and they are $T_L\to T_K$ isomorphisms. But this is clear since $T_L=\set{ (q,\vec y\,): \vec y\in C_2\times\dots \times C_n}$ and $T_K=\set{ (q,q,\vec y\,): \vec y\in C_2\times\dots \times C_n}$.
Hence, it follows from \eqref{pbx:wGhrSgcSrgsJ} that
$I_K\cup F_K$ is a sublattice of $K$. It also follows from \eqref{pbx:wGhrSgcSrgsJ} that
$L$, which is the Hall--Dilworth gluing determined by $(I_L, F_L, T_L, T_L, \id{T_L})$, is isomorphic to this sublattice. Therefore, after replacing $K$ by an isomorphic copy if necessary, we conclude that $L$ is a sublattice of $K$, proving Lemma~\ref{lemma:notbOOle}.
\end{proof}
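To illustrate Lemma~\ref{lemma:notbOOle} and its proof on the smallest possible example (the concrete lattices here are our choice and are not part of the lemma), let $n=1$ and let $L=C_1$ be the three-element chain $0<q<1$. Then $E_0=\filter q=\set{q,1}$, $E_1=\ideal q=\set{0,q}$, and $K=E_0\times E_1$ is a $2$-dimensional grid with $\length K=2=\length L$. Here $I_L=\set{0,q}$, $F_L=\set{q,1}$, $\rho$ sends $x$ to $(q,x)$, $\tau$ sends $x$ to $(x,q)$, and both send $q$ to $(q,q)$. Hence $L$ is isomorphic to the maximal chain $(q,0)<(q,q)<(1,q)$ of $K$, in accordance with the lemma.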
\subsection{Our lemmas at work}
Armed with our lemmas, we are ready to prove the main theorem of the paper and Corollaries~\ref{cor:DfnVGs}--\ref{cor:krSzwzlvnlZvl}; Corollary~\ref{cor:krSzwzlvnlZvl} is disregarded in the proof below and is proved separately afterwards.
\begin{proof}[Proof of Theorem \ref{thmdst} and Corollaries~\ref{cor:DfnVGs}--\ref{cor-kvsnLptv}]
It follows from Proposition~\ref{prop} that
\begin{equation}
\parbox{7.4cm}{\eqref{thmdsta}, \eqref{thmdstb}, and \eqref{thmdstc} are equivalent in each of
Theorem~\ref{thmdst}, Corollary~\ref{cor:DfnVGs}, and Corollary~\ref{cor-kvsnLptv}.}
\label{pbx:sChsWsZjMSvZlf}
\end{equation}
Next, we are going to prove that for $n\in\Nplu$ and a \emph{finite} distributive lattice $D$,
\begin{align}
\parbox{8.4cm}{if $D\in \Dncovzo n$ is an absolute $\Hop$-retract for $\Dncovzo n$, then $D$ is boolean or $D$ is an $n$-dimensional grid;}
\label{pbx:whWbznTnka}
\\
\parbox{8.4cm}{if $D$ is boolean, then $D$ is an absolute retract for $\Dall$;}
\label{pbx:whWbznTnkb}
\\
\parbox{8.4cm}{if $D$ is an $n$-dimensional grid, then $D$ is an absolute retract for $\Dnfin n$.}
\label{pbx:whWbznTnkc}
\end{align}
To prove \eqref{pbx:whWbznTnka}, assume that $n\in\Nplu$ and $D\in \Dncovzo n$ is an absolute $\Hop$-retract for $\Dncovzo n$.
For the sake of contradiction, suppose that $D$ is neither boolean nor an $n$-dimensional grid.
The first task in the proof is to find a \emph{proper} $\Dncovzo n$-extension $K$ of $D$.
Let $k:=\dim D$. By Lemma~\ref{lemma:nLskPcjktNLTn}, $D$ has a $\Dncovzo n$-extension $L$ such that $L$ is a $k$-dimensional grid. There are three cases depending on $k$ and $L$.
First, assume that $k<n$ and $L$ is boolean. Then $D\neq L$ since $D$ is not boolean. So if we let $K:=L$, then
\begin{equation}
\parbox{6.0cm}{$K\in\Dncovzo n$, $K\neq D$, and $K$ is a $\Dncovzo n$-extension of $D$.
}\label{pbx:wmRsznkvTDfrKs}
\end{equation}
Second, assume that $k<n$ and $L$ is not boolean. Then \eqref{pbx:lnPrsHm} gives that $\length L=\length D$. Lemma~\ref{lemma:notbOOle} allows us to take a $(k+1)$-dimensional
grid $K$ such that $\length K=\length L$ and $L$ is a sublattice of $K$. So $D$ is a sublattice of $K$ and $\length D=\length K$.
Hence if we apply Lemma~\ref{lemmaKzTfPsz}\eqref{lemmaKzTfPszb}
to the map $D\to K$ defined by $x\mapsto x$ and take $\dim K= k+1\leq n$ into account, we obtain that $K$ is a $\Dncovzo n$-extension of $D$. Since $\dim D=k\neq \dim K$, we have that $D\neq K$ and so \eqref{pbx:wmRsznkvTDfrKs} holds again.
Third, assume that $k=n$, that is, $\dim D=n$. Then, by Lemma~\ref{lemma:nLskPcjktNLTn}, $D$ has a $\Dncovzo n$-extension $K$ such that $K$ is an $n$-dimensional grid. Since we have assumed that $D$ is not an $n$-dimensional grid, \eqref{pbx:wmRsznkvTDfrKs} holds again.
We have seen that, in each of the three possible cases, \eqref{pbx:wmRsznkvTDfrKs} holds. Since $D\in
\Dncovzo n$ was assumed to be an absolute $\Hop$-retract for $\Dncovzo n$, there exists a retraction $f\colon K\to D$.
We know from \eqref{pbx:wmRsznkvTDfrKs} that the map $D\to K$ defined by $x\mapsto x$ is a \embcovzo. Hence $f$ is an isomorphism by Lemma~\ref{lemmaKzTfPsz}\eqref{lemmaKzTfPszc}, whereby $|K|=|D|$. This contradicts the fact that $D$ is a proper sublattice of $K$ by \eqref{pbx:wmRsznkvTDfrKs},
and we have proved \eqref{pbx:whWbznTnka}.
To prove \eqref{pbx:whWbznTnkb}, assume that a finite boolean lattice $D$ is a sublattice of a not necessarily finite distributive lattice $K$. We are going to show that there exists a retraction $K\to D$. Since this is trivial if $D$ is a singleton, we can assume that $|D|>1$. Let $n:=\dim D$. Combining \eqref{eqtxt:dimDwJD} and \eqref{pbx:lJrsBlTlgR} and taking into account that the join-irreducible elements of a finite boolean lattice are exactly its atoms, it follows that $D$ has exactly $n$ atoms and it is of length $n$. Hence, we can take a maximal chain
$C=\set{0=c_0, c_1,\dots, c_{n-1}, c_n=1}$ in $D$ such that $c_{i-1}\prec c_i$ for $i\in\set{1,\dots,n}$. For $i\in\set{1,\dots,n}$, the Prime Ideal Theorem allows us to pick a prime ideal $I_i$ of $K$ such that $c_{i-1}\in I_i$ but $c_i\notin I_i$. Since $I_i$ is a prime ideal, the partition $\set{I_i,K\setminus I_i}$ determines a congruence $\Theta_i$ of $K$. This congruence \emph{separates} $c_{i-1}$ and $c_i$, that is, $(c_{i-1}, c_i)\notin \Theta_i$. Let $\Theta:=\bigcap\set{\Theta_i: i\in\set{1,\dots,n}}$. Now $\Theta$ is a congruence of $K$ and its restriction $\restrict \Theta C$ is a congruence of the sublattice $C$. We claim that $\restrict \Theta C=\diag C$; suppose the contrary. It is folklore that any congruence of a finite lattice is determined by the covering pairs it collapses, whence $(c_{i-1},c_i)\in\restrict \Theta C$ for some $i\in\set{1,\dots,n}$. But then $(c_{i-1},c_i)\in\restrict \Theta C \subseteq \Theta\subseteq \Theta_i$, contradicting the fact that $\Theta_i$ separates $c_{i-1}$ and $c_i$. This shows that $\restrict \Theta C=\diag C$. Therefore, it follows from \eqref{pbx:gRnNt} and
$\restrict \Theta C= \restrict{(\restrict \Theta D)} C$ that
\begin{equation}
\restrict \Theta D = \diag D.
\label{eq:tGHjszszDr}
\end{equation}
Observe that
\begin{equation}
\parbox{8.7cm}{if $\alpha$ and $\beta$ are congruences of a not necessarily finite lattice, $\alpha$ has exactly $m\in\Nplu$ blocks, and $\beta$ has exactly $n\in\Nplu$ blocks, then $\alpha\cap\beta$ has at most $mn$ blocks.}
\label{pbx:mkhszTvn}
\end{equation}
Indeed, each of the $m$ $\alpha$-blocks is cut into at most $n$ pieces by $\beta$. Since $\Theta_i$ has only two blocks, it follows from \eqref{pbx:mkhszTvn} that $\Theta$ has at most $2^n$ blocks. But the elements of $D$ belong to pairwise different $\Theta$-blocks by \eqref{eq:tGHjszszDr}, whereby $\Theta$ has exactly $2^n=|D|$ blocks.
Next, we define a map
\begin{equation}
f\colon K\to D\text{ by the rule }f(x)=d\in D\iff (x,d)\in\Theta.
\label{eq:bRhKlkPzLcsb}
\end{equation}
For later reference, we note that
\begin{equation}
\parbox{7.7cm}{to show that $f$ in \eqref{eq:bRhKlkPzLcsb} is a retraction, we will only use that $K$ is a lattice, $D$ is a finite sublattice of $K$, $\Theta$ has exactly $|D|$ blocks, and $\restrict \Theta D=\diag D$.}
\label{bpx:mzRdfZTrskgGN}
\end{equation}
Since $\Theta$ has exactly $2^n$ blocks, $|D|=2^n$ and \eqref{eq:tGHjszszDr} guarantee the properties mentioned in \eqref{bpx:mzRdfZTrskgGN}. The equality $\restrict \Theta D=\diag D$ yields that for each $x\in K$, there is at most one $d$ in \eqref{eq:bRhKlkPzLcsb}.
If there were an $x\in K$ with its $\Theta$-block $x/\Theta$ disjoint from $D$, then $\Theta$ would have more than $|D|$ blocks since $x/\Theta$ would be different from the pairwise distinct blocks of the elements of $D$. Thus, for each $x\in K$, there is exactly one $d\in D$ with $(x,d)\in\Theta$, whereby
\eqref{eq:bRhKlkPzLcsb} defines a map, indeed. If $f(x_1)=d_1$ and $f(x_2)=d_2$, then $(x_1,d_1)\in\Theta$ and $(x_2,d_2)\in\Theta$ yield that $(x_1\vee x_2,d_1\vee d_2)\in\Theta$, whence $f(x_1\vee x_2)=d_1\vee d_2\in D$. The same holds for meets, and so $f$ is a homomorphism. By the reflexivity of $\Theta$, $f(d)=d$ for all $d\in D$. Thus, $f$ is a retraction, proving \eqref{pbx:whWbznTnkb}.
Next, to prove \eqref{pbx:whWbznTnkc}, we begin with focusing on its simplest particular case. Namely, we claim that
\begin{equation}
\parbox{5cm}{If $E$ is a subchain of a finite chain $C$, then
$E$ is a retract of $C$.}
\label{pbx:lNcSzrLrtRct}
\end{equation}
To see this, let $E=\set{ e_1, e_2,\dots, e_{k}}$ such that
$e_1<e_2<\dots<e_{k}$. With the principal ideals below understood in $C$, it is trivial that the equivalence $\Theta$ with blocks $\ideal{e_1}$, $\ideal{e_2}\setminus \ideal{e_1}$, \dots, $\ideal{e_{k-1}}\setminus \ideal{e_{k-2}}$, $C\setminus \ideal{e_{k-1}}$ is a congruence of $C$. Since $\restrict \Theta E=\diag E$ and $\Theta$ has $|E|$ blocks, \eqref{bpx:mzRdfZTrskgGN} implies \eqref{pbx:lNcSzrLrtRct}.
Armed with \eqref{pbx:lNcSzrLrtRct}, assume that $n\in\Nplu$, $D$ is an $n$-dimensional grid,
$L\in \Dnfin n$, and $D$ is a sublattice of $L$. We are going to find a retraction $L\to D$. It follows from Milner and Pouzet \cite{milnerpouzet}, see Subsection~\ref{subsect:Dfdim} of the present paper, that $\dim D\leq\dim L$.
Combining this inequality with $n=\dim D$ and $L\in \Dnfin n$, we obtain that $\dim L=n$. Hence, by
Lemma~\ref{lemma:nLskPcjktNLTn}, there is a \embcovzo{} of $L$ into an $n$-dimensional grid $K$.
Then $D$ is a sublattice of $K$, and both $D$ and $K$ are $n$-dimensional grids. Let $C_1$, \dots, $C_n$ be the canonical chains of $K$. By Lemma \ref{lemma:gRd}, these canonical chains have nontrivial subchains $E_1$, \dots, $E_n$, respectively, such that \eqref{eq:dcEdjKwmC} holds (for $D$ in place of $L$). For $i\in\set{1,\dots,n}$, $\pi_i\colon K\to C_i$ defined by $x\mapsto \cj x i$ is a homomorphism by Lemma~\ref{lemma:cnChgR}\eqref{lemma:cnChgRc}.
Since $\cj x i=0\vee\dots\vee 0\vee \cj x i \vee 0\vee\dots\vee 0$ (where $\cj x i$ is the $i$-th joinand on the right), the uniqueness of the canonical form \eqref{eq:cFnfRmsG} gives that $\cj{(\cj x i)}i=\cj x i$. Hence, $\pi_i$ acts identically on $C_i$ and so $\pi_i$ is a retraction. Using \eqref{pbx:lNcSzrLrtRct}, we can take a retraction $g_i\colon C_i\to E_i$. Clearly, the composite map $f_i:=g_i\circ\pi_i$ is a retraction $K\to E_i$.
For $x\in D$, \eqref{eq:dcEdjKwmC} gives that $\cj x i\in E_i$. Hence, for $x\in D$ and $i\in\set{1,\dots,n}$,
\begin{equation}
f_i(x)=g_i(\pi_i(x))=g_i(\cj x i) =\cj x i.
\label{eq:vszmsNgTPrsTb}
\end{equation}
Let $\Theta_i$ be the kernel of $f_i$. Since $f_i$, like any retraction, is surjective, $\Theta_i$ has exactly $|E_i|$ blocks. Therefore, if we let $\Theta:=\bigcap\set{\Theta_i: i\in\set{1,\dots,n}}$, then $\Theta$ is a congruence of $K$ with at most $|E_1|\cdots |E_n|=|D|$ blocks by \eqref{pbx:mkhszTvn}. On the other hand, if $(x,y)\in \Theta$ holds for $x,y\in D$, then $(x,y)\in \Theta_i$ and \eqref{eq:vszmsNgTPrsTb} give that $\cj x i = f_i(x)=f_i(y)=\cj y i$ for all $i\in\set{1,\dots,n}$, whence it follows from \eqref{eq:cFnfRmsG} that $x=y$. This means that $\restrict \Theta D=\diag D$. Thus, $\Theta$ has at least $|D|$ blocks, and we obtain that $\Theta$ has exactly $|D|$ blocks.
Therefore, \eqref{eq:bRhKlkPzLcsb} and \eqref{bpx:mzRdfZTrskgGN} imply that there is a retraction $f\colon K\to D$. Since the restriction $\restrict f L\colon L\to D$, defined by $x\mapsto f(x)$, is clearly a retraction, we have shown the existence of a retraction $L\to D$, as required.
This completes the proof of \eqref{pbx:whWbznTnkc}.
For categories $\alg X$ and $\alg Y$, we say that $\alg X$ is a
\emph{subcategory} of $\alg Y$ if every object of $\alg X$ is an object of $\alg Y$ and every morphism of $\alg X$ is a morphism of $\alg Y$. The following two observations are trivial.
\begin{align}
\parbox{9cm}{If $\alg X$ and $\alg Y$ are categories of lattices such that $\alg X$ is a subcategory of $\alg Y$ and a lattice $L\in\alg X$ is an absolute $\Hop$-retract for $\alg Y$, then $L$ is an absolute $\Hop$-retract also for $\alg X$.}\label{pbx-hdNrzKfmCka}\\
\parbox{9cm}{An absolute retract for a category of lattices is also an absolute $\Hop$-retract for that category.}
\label{pbx-hdNrzKfmCkb}
\end{align}
For Theorem~\ref{thmdst}, by virtue of \eqref{pbx:sChsWsZjMSvZlf}, it suffices to show that \ref{thmdst}\eqref{thmdstc} and \ref{thmdst}\eqref{thmdstd} are equivalent conditions. Assume \ref{thmdst}\eqref{thmdstc}, that is, let $D\in \Dnfin n$ be an absolute retract for $\Dnfin n$. By \eqref{pbx-hdNrzKfmCkb}, $D$ is an absolute $\Hop$-retract for $\Dnfin n$. If $n\in\Nplu$, then \eqref{pbx-hdNrzKfmCka} gives that $D$ is an absolute $\Hop$-retract for $\Dncovzo n$, whereby \eqref{pbx:whWbznTnka} yields \ref{thmdst}\eqref{thmdstd}, as required. So we can assume that $n=\omega$. Denote $\dim D$ by $k$. Then $D\in\Dncovzo{k+1}$, and \eqref{pbx-hdNrzKfmCka} gives that $D$ is an absolute $\Hop$-retract for $\Dncovzo {k+1}$. By \eqref{pbx:whWbznTnka}, $D$ is boolean or $D$ is a $(k+1)$-dimensional grid. The second alternative is ruled out by $\dim D=k$, whence \ref{thmdst}\eqref{thmdstd} holds for $D$. We have seen that \ref{thmdst}\eqref{thmdstc} implies \ref{thmdst}\eqref{thmdstd}.
Conversely, assume that \ref{thmdst}\eqref{thmdstd} holds for a finite distributive lattice $D$. If $D$ is boolean, then it is an absolute $\Hop$-retract for $\Dall$ by \eqref{pbx:whWbznTnkb} and \eqref{pbx:rmDhwhspKlQB}, whereby \ref{thmdst}\eqref{thmdstc} holds for $D$ by \eqref{pbx-hdNrzKfmCka} and \eqref{pbx:rmDhwhspKlQB}. If $D$ is an $n$-dimensional grid, then \eqref{pbx:whWbznTnkc} immediately implies that \ref{thmdst}\eqref{thmdstc} holds for $D$. We have proved Theorem~\ref{thmdst}.
Next, assume that a finite distributive lattice $D$ satisfies \ref{cor:DfnVGs}\eqref{cor:DfnVGsc}, that is, $D$ is an absolute retract for $\Dall$. Let $n:=\dim D$. Combining \eqref{pbx:rmDhwhspKlQB} and \eqref{pbx-hdNrzKfmCka}, we obtain that
$D$ is an absolute $\Hop$-retract for $\Dncovzo{n+1}$.
By \eqref{pbx:whWbznTnka}, $D$ is boolean or it is an $(n+1)$-dimensional grid. But $\dim D=n$ excludes the second alternative, and we conclude that $D$ satisfies \ref{cor:DfnVGs}\eqref{cor:DfnVGsd}. This shows implication \ref{cor:DfnVGs}\eqref{cor:DfnVGsc} $\Rightarrow$ \ref{cor:DfnVGs}\eqref{cor:DfnVGsd}. Since the converse implication is just \eqref{pbx:whWbznTnkb}, we have verified Corollary~\ref{cor:DfnVGs}.
For Corollary~\ref{cor-kvsnLptv}, observe that \ref{cor-kvsnLptv}\eqref{cor-kvsnLptvd} $\Rightarrow$ \ref{cor-kvsnLptv}\eqref{cor-kvsnLptvc} by \eqref{pbx-hdNrzKfmCkb}
while \ref{cor-kvsnLptv}\eqref{cor-kvsnLptvc} $\Rightarrow$
\ref{cor-kvsnLptv}\eqref{cor-kvsnLptve} is just \eqref{pbx:whWbznTnka}. So we only need to show that \ref{cor-kvsnLptv}\eqref{cor-kvsnLptve} $\Rightarrow$
\ref{cor-kvsnLptv}\eqref{cor-kvsnLptvd}. Assume that $D\in\Dncovzo n$ satisfies \ref{cor-kvsnLptv}\eqref{cor-kvsnLptve}.
There are two cases depending on whether $D$ is boolean or not. First, assume that $D\in \Dncovzo n$ is boolean. Then $D$ is an absolute $\Hop$-retract for $\Dall$ by \eqref{pbx:whWbznTnkb} and \eqref{pbx:rmDhwhspKlQB}. We obtain from \eqref{pbx-hdNrzKfmCka} that $D$ is an absolute $\Hop$-retract for $\Dncovzo n$. Second, assume that $D$ is an $n$-dimensional grid. Then $D$ is an absolute $\Hop$-retract for $\Dncovzo n$ by
\eqref{pbx:whWbznTnkc} and \eqref{pbx-hdNrzKfmCkb}.
So in both cases, $D$ is an absolute $\Hop$-retract for $\Dncovzo n$. Let $K$ be a $\Dncovzo n$-extension of $D$. Since $D$ is an absolute $\Hop$-retract for $\Dncovzo n$, there exists a retraction $f\colon K\to D$. By Lemma~\ref{lemmaKzTfPsz}\eqref{lemmaKzTfPszc}, $f$ is a morphism of $\Dncovzo n$. This shows that \ref{cor-kvsnLptv}\eqref{cor-kvsnLptvd} holds for $D$, as required.
We have verified Corollary~\ref{cor-kvsnLptv}, and the proof is complete.
\end{proof}
\begin{proof}[Proof of Corollary~\ref{cor:krSzwzlvnlZvl}]
By Proposition 5.2 of Kelly and Rival~\cite{kellyrival}, a finite lattice is planar if and only if its order dimension is at most 2. Hence, the class of planar distributive lattices is $\Dnfin 2$.
Thus, the equivalence of \ref{cor:krSzwzlvnlZvl}\eqref{cor:krSzwzlvnlZvla}
and \ref{cor:krSzwzlvnlZvl}\eqref{cor:krSzwzlvnlZvld} follows from Theorem~\ref{thmdst} while the equivalence of
\ref{cor:krSzwzlvnlZvl}\eqref{cor:krSzwzlvnlZvlb}, \ref{cor:krSzwzlvnlZvl}\eqref{cor:krSzwzlvnlZvlc}, and \ref{cor:krSzwzlvnlZvl}\eqref{cor:krSzwzlvnlZvld} is a consequence of Corollary~\ref{cor-kvsnLptv}.
\end{proof}
Functional data analysis, FDA, is a branch of statistics whose foundational work goes back at least two decades. Its development and application have seen a rapid increase in recent years due to the emergence of new data gathering technologies which incorporate high frequency sampling. Fundamentally, FDA is concerned with data which can be viewed as samples of curves, images, shapes, or surfaces.
While FDA is now a well established discipline with its core tools and concepts in place, there are still fundamental questions that have received relatively little attention. This work is concerned with one such question: developing, understanding, and visualizing confidence regions for functional data. Our approach is geometric in that we start with general hyper-ellipsoids and hyper-rectangles and show how they can be tailored to become proper confidence regions. A distinguishing feature of functional confidence regions is that, when the covariance of the estimator is estimated, nearly all confidence regions turn out to have \textit{zero-coverage} for the parameter;
this is primarily due to the infinite dimensional nature of the parameter. However, we demonstrate how most of these regions
are very close to the proper regions with respect to Hausdorff distance. Such an issue does not occur in multivariate statistics and is a distinct feature of FDA.
We refer to this phenomenon as \textit{ghosting}, namely, that while one uses a confidence region with zero-coverage, it can be shown to be arbitrarily close to a proper confidence region with the desired coverage. Of course, for these \textit{ghost} regions to be useful, these distances must decrease faster than the rate at which the regions shrink down to a point.
Forming a confidence region for a functional parameter can equivalently be thought of as forming a confidence region for an infinite dimensional parameter. To see why this is a challenge, consider a classic multivariate confidence region. Suppose that $\theta \in \mbR^p$ and we have an estimator, $\hat \theta$, which is multivariate normal, $\hat \theta \sim \mcN_p(\theta, \Sigma)$. The classic approach to forming a $1-\alpha$ confidence region, $G_\alpha$, is to take the following ellipse
\begin{align}
G_\alpha = \{ x \in \mbR^p : (\hat \theta - x)^\top \Sigma^{-1} (\hat \theta - x) \leq \xi_\alpha \}.
\label{e:mult_conf}
\end{align}
The constant $\xi_\alpha$ is chosen so that the region achieves the proper coverage; when $\Sigma$ is known it is taken as the quantile of a $\chi^2$, while when $\Sigma$ is estimated it is taken from an $F$. For $p$ very large, at least two things happen: (1) the inversion of $\Sigma$ becomes very unstable due to small eigenvalues and (2) the constant $\xi_\alpha$ becomes very large. In fact, a naive functional analog would require $\xi_\alpha = \infty$ and the sample covariance operator would not even be invertible.
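To make the role of $\xi_\alpha$ concrete, the following minimal sketch (in Python; it is only illustrative, though the Hotelling-type $F$ scaling for the estimated case is the classical one) computes the constant in \eqref{e:mult_conf} and shows how quickly it grows with $p$:
\begin{verbatim}
import numpy as np
from scipy import stats

def xi_known(p, alpha=0.05):
    # Known Sigma: 1 - alpha quantile of a chi-squared with p df.
    return stats.chi2.ppf(1 - alpha, df=p)

def xi_estimated(p, N, alpha=0.05):
    # Estimated Sigma (Hotelling's T^2): scaled F quantile.
    return p * (N - 1) / (N - p) * stats.f.ppf(1 - alpha, p, N - p)

for p in [2, 10, 50, 99]:
    print(p, xi_known(p), xi_estimated(p, N=100))
# Both constants blow up as p approaches N; for p >= N the sample
# covariance is singular, mirroring the functional case.
\end{verbatim}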
To address this problem in the functional case, there have been at least two main approaches. The first approach is to develop confidence bands via simulation techniques \citep{degras:2011,cao:2012,zheng:2014,cao:2014}.
These methods work quite well, but they shift the focus from Hilbert spaces, usually $L^2[0,1]$, to Banach spaces, such as $C[0,1]$. Given that Hilbert spaces are the foundation of the large majority of theory and methods for FDA, it is important to have a procedure which is based on Hilbert spaces. A more minor issue is that such bands, after taking into account point-wise variability, are usually built using a constant threshold across all time points. For most practical purposes, this works well, but for objects with highly complex intra-curve dependencies, it could be useful to adjust the bands. For example, in areas with high positive within curve correlation, the bands can be made narrower, and in areas with very weak correlation they should be made wider. Finally, simulation based approaches are computationally intensive, especially if one wants to invert the procedure to find very small p-values, which is very common in genetic studies, or increase the number of evaluation points on the domain, i.e. work on a finer grid.
The second approach is based on functional principal component analysis, FPCA \citep{ramsay:silverman:2005,yao:muller:wang:2005JASA,goldsmith:2013}. There one uses FPCA for dimension reduction and builds multivariate confidence ellipses, which can be turned into bands using Scheff\'e's method. As we will show, this procedure produces ellipses which have \textit{zero-coverage}.
The dimension reduction inherently clips part of the parameter, meaning that the true parameter will never lie in the region. As a simple illustration, imagine trying to capture a two dimensional parameter with an ellipse versus a line segment. The probability of capturing the parameter with a random ellipse can usually be well controlled, but any random line segment will fail to capture the parameter with probability one. Additionally, the bands formed from these ellipses depend heavily on the number of FPCs used.
The paper and its contributions are organized as follows.
In Section \ref{section:ConfRegion} we present a new geometric approach to constructing confidence regions in real separable Hilbert spaces using hyper-ellipsoids (Section \ref{s:ellipse}) and hyper-rectangles (Section \ref{s:rectangle}). We show how to transform confidence hyper-ellipses into confidence bands and propose a specific ellipse which gives the smallest average squared width when turned into a band (Section \ref{s:bands}). Simulations in Section \ref{section:Simulation} suggest that this ellipse is an excellent starting point for practitioners. We also propose a visualization technique using rectangular regions (Section \ref{s:rectangle-visual}).
In Section \ref{section:EstConfRegion}, we detail issues involved in using estimated/empirical versions based on estimated covariances. As a negative result, we will show that nearly all empirical regions have zero--coverage. However, we justify using these regions in practice by introducing the concept of \textit{`ghosting'}: using regions with deficient coverage as estimates for regions with proper coverage. %
Lastly, in Sections \ref{section:Simulation} and \ref{s:dti}, we provide a
simulation study and an application to \texttt{DTI} data in the \texttt{R} package \texttt{refund} \citep{goldsmith:2012a,goldsmith:2012b}.
\section{Constructing Functional Confidence Regions} \label{section:ConfRegion}
Throughout this paper we consider a general functional parameter $\theta \in \mcH,$ where $\mcH$ is a real separable Hilbert space with inner product $\langle \cdot, \cdot \rangle$. We assume that we have an estimator $\hat{\theta} \in \mcH$ which is asymptotically Gaussian in $\mcH$ in the sense that $\sqrt N (\hat \theta - \theta) \overset{d}{\to} \mcN(0, C_\theta)$, where $N$ is the sample size and $C_\theta$ is a covariance operator that can be estimated.
Although multivariate confidence regions \eqref{e:mult_conf} are ellipsoids, this geometric shape is a by--product of using quadratic forms. Here, however, we take the opposite approach. We first define the desired geometric shape and then demonstrate how to adjust the region to achieve the desired confidence level.
Recall that a $1-\alpha$ (asymptotic) confidence region $G_{\hat{\theta}}$ for $\theta \in \mcH$ is a random subset of $\mcH$ which satisfies $\mbP(\theta \in G_{\hat{\theta}}) \to 1-\alpha$. We make the following assumption to simplify arguments.
\begin{assumption} \label{a:normal}
Assume that $\sqrt{N}(\hat \theta - \theta) \overset{d}{\to} \mcN(0, C_\theta)$, that is, $\sqrt{N}(\hat \theta - \theta)$ is asymptotically Gaussian in $\mcH$ with mean zero and covariance operator $C_\theta$.
\end{assumption}
Assumption \ref{a:normal} is fairly weak and satisfied by many methods for dense functional data including mean estimation \citep{degras:2011}, covariance estimation \citep{zhang:wang:2016}, eigenfunction/value estimation \citep{kokoszka:reimherr:2013b}, and function-on-scalar regression \citep{ReNi:2014}. To achieve such a property one needs that (i) the bias of the estimate is asymptotically negligible and that (ii) the estimate is \textit{tight} so that convergence in distribution occurs in the strong topology. While these two conditions are often satisfied, there are still many FDA settings where they are not. The bias can usually be shown to be asymptotically negligible when the number of points sampled per curve is greater than $N^{1/4}$ \citep{li:hsing:2010,cai:2011,zhang:wang:2016}. Thus our approach will not work for sparse FDA settings. The tightness assumption is often violated when estimates stem from ill-posed inverse problems. For example, in scalar-on-function regression, typical slope estimates are not tight and not asymptotically normal in the strong topology \citep{cardot:2007}.
The backbone of our construction, and many other FDA methods, is the Karhunen-Lo\`eve, KL, expansion which gives
\begin{align}
\label{e:KL}
\sqrt{N} (\hat{\theta} - \theta) = \sum_{j=1}^\infty \sqrt{\lambda_j}Z_jv_j,
\end{align}
where $\{\lambda_j\}$ and $\{v_j\}$ are eigenvalues and eigenfunctions, respectively, of $C_\theta$, and $\{Z_j\}$ are uncorrelated with mean zero and unit variance. We note that this expansion holds for any random element in $\mcH$ with a finite second moment and that the infinite sum converges in $\mcH$.
In the next subsections we discuss two types of regions which exploit this expansion. The first is a hyper-ellipse which is, as in the multivariate case, much easier to construct. The second is a hyper-rectangle which is not mentioned as often in the multivariate literature due to the complexity of its form. However, in Section \ref{section:Simulation} we will show that in some settings the hyper-rectangle can outperform the ellipse and is usually much more interpretable.
\subsection{Hyper-Ellipsoid Form} \label{s:ellipse}
A hyper-ellipse in any Hilbert space can be defined as follows. One needs a center, $m \in \mcH$, axes, $e_1, e_2, \cdots $, which are an orthonormal basis for $\mcH$, and a radius for each axis, $r_1, r_2, \cdots$. The ellipse is then given by
\[
\left\{ h \in \mathcal{H} : \sum_{j=1}^{\infty} \frac{\langle h - m, e_j \rangle^2}{r_j^2} \leq 1 \right\}.
\]
We note that this definition makes sense even when $r_j = 0$ or $\infty$. In the former one is saying that the radius in that direction is zero or `closed', while in the latter one is saying that it is infinite or `opened'.
Since our aim is to construct a confidence region for $\theta$, we will replace the arbitrary axes above with the eigenfunctions $\{v_j\}$ and the center with $\hat \theta$, to get
\[
E_{\hat{\theta}} := \left\{ h \in \mathcal{H} : \sum_{j=1}^{\infty} \frac{\langle h - \hat{\theta}, v_j \rangle^2}{r_j^2} \leq 1 \right\}.
\]
This hyper-ellipsoid will be a $1-\alpha$ confidence region for $\theta$ if we find $\{r_j\}$ which give
\[
\mbP(\theta \in E_{\hat{\theta}})
= \mbP\left( \sum_{j=1}^{\infty} \frac{\langle \theta - \hat{\theta}, v_j \rangle^2}{r_j^2} \leq 1 \right)
\to 1-\alpha.
\]
Note that there are actually infinitely many options for $\{r_j\}$ but not all of them lead to `nice' regions.
We decompose $r_j^2 = N^{-1} \xi c_j^2$, where $\{c_j\}$ are predefined weights (based on $\{\lambda_j\}$) for each direction, and $\xi$ is adjusted to achieve proper coverage. We then have
\begin{align}
\label{e:EllipsoidBound}
E_{\hat{\theta}} = \left\{ h \in \mathcal{H} : \sum_{j=1}^{\infty} \frac{\langle \sqrt{N}(\hat{\theta} - h), v_j \rangle^2}{c_j^2} \leq \xi \right\}.
\end{align}
From \eqref{e:KL} it follows that the coverage is given by
\begin{align}
\label{e:EllipsoidProbRule}
\mbP \left(\theta \in E_{\hat{\theta}} \right)
= \mbP \left( W_\theta \leq \xi \right)
\qquad
\text{where}
\qquad
W_{\theta} = \sum_{j=1}^{\infty} \frac{\lambda_j}{c_j^2}Z_j^2.
\end{align}
Therefore, to achieve the desired asymptotic confidence level for a given $\{c_j\}$, one can take $\xi$ to be the $1-\alpha$ quantile of a weighted sum of chi-squared random variables.
Though the distribution of the weighted sum of chi-squares does not have a closed form expression, fast and efficient numerical approximations exist such as the {\tt imhof} function in {\tt R} \citep{Imhof:1961:CDQ}.
In choosing $\{c_j\}$ there are two important considerations. The first is that one wants $c_j \to 0$ so as to eliminate the effect of later dimensions. In doing so, one is also producing compact regions \citep{laha:roghatgi:1979}. Since probability measures over Hilbert spaces are necessarily tight \citep{billingsley:1995}, meaning they concentrate on compact sets, a region which is not compact is overly large. Conversely, the faster that $c_j \to 0$, the larger the mean of $W_\theta$, which increases all of the radii. Therefore, it seems desirable to balance these two concerns, choosing $c_j$ which go to zero, but not overly fast.
\begin{figure}
\centering
\makebox{\includegraphics[width=\textwidth]{ExampleOfShorfallOfLargeRegion.eps}}
\caption{\label{fig:ShortfallOfNoncompactRegion}For illustration purposes, we take $\mcH = L^2[0,1]$. The left plot shows an $iid$ mean zero Gaussian sample on $[0,1]$ along with its sample mean. Note that the 95\% confidence region $E_{norm}$ contains all functions in $\mcH$ that are close to the sample mean in the $\mcH$ norm, in this case the $L^2$ norm, like $h_1$ in the second column. However, a properly tailored confidence region $E_{c}$ yields a band $B_{E_c}$ such that $E_{c} \subset B_{E_c}$, and $B_{E_c}$ (and therefore $E_{c}$) excludes functions that are not bounded by the band \textit{almost everywhere}. The third column shows a more extreme example of an unbounded confidence region. For the region $E_{PC(10)}^\circ$ (an \textit{opened-up} version), $h_2 := \hat{\theta} + b v_{11}$ ($b$ being any real number) is inside $E_{PC(10)}^\circ$ regardless of the confidence level.}
\end{figure}
Two popular hypothesis testing frameworks in FDA, the {\it norm approach} and {\it PC approach}, can be understood as two extreme cases of this construction. The norm approach to test $H_0: \theta = \theta_0$ uses $N\|\hat{\theta} - \theta_0 \|^2$ as the test statistic. If $H_0$ is true, this test statistic, asymptotically, is a weighted sum of $\chi^2_1$ random variables with weights $\{\lambda_j\}$. This corresponds to taking $c_j^2 = 1$ for all $j$. The resulting confidence region, which we denote as $E_{norm}$, is a ball in $\mcH$ and provides proper coverage for $\theta$. However, this region is not compact and therefore too large, as illustrated in Figure \ref{fig:ShortfallOfNoncompactRegion}.
The PC approach to hypothesis testing uses $\sum_{j=1}^{J} { N \langle \hat{\theta} - \theta_0, v_j \rangle^2}{\lambda_j^{-1}}$ as the test statistic, for some finite $J$. If $H_0$ is true, this test statistic follows a $\chi^2_J$ distribution; therefore, $J$ must be a finite value even when the covariance is known. There are two possible confidence regions induced by this approach. Both regions take $c_j^2=\lambda_j$ for $j \leq J$, but for $j > J$, one could either \textit{close them off}, $c_j = 0 $, or \textit{open them up}, $c_j = \infty$. The former results in a compact confidence region, but we have $\mbP \left(\theta \in E_{\hat{\theta}} \right) = 0$, i.e. \textit{zero-coverage} even if we make the very artificial assumption that $\theta \in \Span \{v_1,\dots, v_J\}$ since the center of the region $\hat{\theta}$ sits outside $\Span \{v_1,\dots, v_J\}$ \textit{almost surely}.
On the other hand, the \textit{opened-up} region would achieve proper coverage, but the region is not even bounded, let alone compact.
There exist infinitely many options for proper $\{c_j\}$, and how to best choose them is an open question deserving further exploration. In preparing this work, a number of options were initially considered; however, we propose using the following due to 1) its ability to achieve \textit{the narrowest average squared width band} using tools from Section \ref{s:bands}, 2) its excellent empirical performance, and 3) its simplicity:
\[
c_j^2 = \lambda_j^{1/2}
\qquad \text{and} \qquad E_c := \left\{ h \in \mathcal{H} : \sum_{j=1}^{\infty} \frac{\langle \sqrt{N}(\hat{\theta} - h), v_j \rangle^2}{\sqrt{\lambda_j}} \leq \xi \right\},
\]
i.e. the square root of the corresponding eigenvalues. Although $\sum_j {\lambda_j}{c^{-2}_j} \equiv \sum_j {\lambda_j^{1/2}} < \infty$ is not always guaranteed, this holds for most processes that are smoother than Brownian motion ($\lambda_j \approx j^{-2}$), and therefore would hold in most applications.
If the process is rough enough that $\sum_j {\lambda_j^{1/2}} < \infty$ is not guaranteed, one may use another criterion suggested in the Appendices,
namely $c_j^2 = \left(\sum_{i\geq j} \lambda_i \right)^{1/2}$, which guarantees both $c_j \to 0$ and $\sum_j {\lambda_j}{c^{-2}_j} < \infty$ \citep[p. 80]{rudin:1976}.
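For concreteness, a minimal numerical sketch of working with $E_c$ is as follows (Python; we assume the eigenpairs are available on a grid of mesh \texttt{dt}, approximate $L^2$ inner products by Riemann sums, and defer the computation of $\xi$ to Section \ref{s:bands}):
\begin{verbatim}
import numpy as np

def ec_statistic(h, theta_hat, V, lam, N, dt):
    # h, theta_hat: function values on the grid
    # V:   (J, grid size) array of eigenfunctions v_j
    # lam: length-J array of eigenvalues lambda_j
    # Scores <sqrt(N)(theta_hat - h), v_j> via Riemann sums:
    scores = np.sqrt(N) * (V @ (theta_hat - h)) * dt
    return np.sum(scores**2 / np.sqrt(lam))

# h belongs to (the truncated, estimated version of) E_c
# if and only if ec_statistic(...) <= xi.
\end{verbatim}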
\subsection{Hyper-Rectangular Form}
\label{s:rectangle}
Our second form is a slight modification of the previous form, switching from an ellipse to a rectangle.
In multivariate statistics a rectangular confidence region is often easier to interpret than an ellipse since it gives clear confidence intervals for each (principal component) coordinate. However, it is often much easier to compute an ellipse since the distributions of quadratic forms are well understood. Regardless, we will show that it can still be easily computed using a nearly closed form expression, up to a function involving the standard normal quantile function.
A hyper-rectangular region can be similarly constructed as:
\[
R_{\hat{\theta}}
= \left\{ h \in \mathcal{H} : \frac{| \langle h - \hat{\theta}, v_j \rangle |}{r_j} \leq 1, \forall \ j =1,2 \dots \ \right\} = \left\{ h \in \mathcal{H} : | \langle \sqrt{N}(h - \hat{\theta}), v_j \rangle | \leq c_j\sqrt{\xi} , \ \forall j \right\}
\]
using the same decomposition $r_j^2 = N^{-1} \xi c_j^2$.
From the KL expansion \eqref{e:KL}, we want
\begin{align}
\label{e:RectangleProbRule}
\mbP \left(\theta \in R_{\hat{\theta}} \right)
= \mbP \left( |\sqrt{\lambda_j}Z_j| \leq c_j \sqrt{\xi} , \ \forall j \right)
\to 1-\alpha.
\end{align}
When we define $z_j := \frac{c_j}{\sqrt{\lambda_j} } \sqrt{\xi}$, the remaining problem is to find proper $\{z_j\}$, or equivalently to select the $\{c_j\}$ and find the proper $\xi$. One may first determine $\{c_j\}$ and find the proper $\xi$, or find the proper $\{z_j\}$ directly. Again, there exist infinitely many criteria and some examples can be found in the Appendices. Among those, we propose using the following:
\[
z_j = \Phi_{sym}^{-1}\left[ \exp \left( {\frac{\lambda_j}{\sum_{k=1}^\infty \lambda_k} \log(1-\alpha)} \right) \right] \text{ for each } j,
\]
where $\Phi^{-1}_{sym}(\cdot)$ is defined as the inverse of $\Phi_{sym}(z):=\mbP(|Z| \leq z)$, $Z \stackrel{d}{=} \mcN(0,1)$. We denote this rectangular region as $R_{z}$. This criterion produces a region that is close to the one that minimizes $\sup \{ \| h - \hat{\theta} \|^2 : h \in R_{\hat{\theta}} \}$, i.e. the distance from the center to the farthest point of the region, but in a much faster way. It is simple, easy to compute, and shows excellent empirical performance.
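Since $\Phi_{sym}(z)=2\Phi(z)-1$, we have $\Phi_{sym}^{-1}(p)=\Phi^{-1}\left((1+p)/2\right)$, so the thresholds are available in closed form up to the normal quantile function. A minimal sketch (Python):
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def rz_thresholds(lam, alpha=0.05):
    # lam: array of (estimated) eigenvalues lambda_j
    p = np.exp(lam / lam.sum() * np.log(1 - alpha))  # target P(|Z| <= z_j)
    return norm.ppf((1 + p) / 2)                     # Phi_sym^{-1}(p)

# theta_0 lies in (the truncated version of) R_z if and only if
# |<sqrt(N)(theta_hat - theta_0), v_j>| <= sqrt(lam_j) * z_j for all j.
\end{verbatim}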
\subsection{Visualizing Ellipses via Bands}
\label{s:bands}
Visualizing a confidence ellipse is challenging even in the finite dimensional setting; it is very difficult once one goes beyond two or three dimensions. In this sense, the rectangular regions are much easier to visualize since one can simply translate them into marginal intervals and examine each coordinate separately (while still achieving simultaneous coverage).
It is therefore useful to develop visualization techniques for elliptical regions. One option is to construct bands in the form of an infinite collection of point-wise intervals over the domain of the functions.
To make our discussion more concrete, in this section only we assume that $\mcH = L^2(\mcD)$, where $\mcD$ is some compact subset of $\mbR^d$.
For example, $d = 1$ for temporal curves and $d=2$ for spatial surfaces.
A symmetric confidence band in $\mcH$ around $\hat{\theta}$ can be understood as:
\begin{align}
\label{e:bandform}
B_{\hat{\theta}}
&= \left\{ h \in \mathcal{H} : | h(x) - \hat{\theta}(x) | \leq r(x), \ \text{for } x \in \mcD \text{\textit{ almost everywhere}} \right\}.
\end{align}
The caveat ``almost everywhere'' here (i.e. except on a set of Lebesgue measure zero) cannot be dropped since we are working with $L^2$ functions. The downside of using the above band, however, is that an analytic expression for $r \in \mcH$ usually does not exist. One therefore typically resorts to simulation based methods as in \citet{degras:2011}.
The band suggested by \citet{degras:2011} takes $c_\alpha \hat{\sigma}(t)/\sqrt{N}$ as $r(t)$ where $\hat{\sigma}(t)$ is the estimated standard deviation of $\sqrt{N}\hat{\theta}(t)$. The proper scaling factor $c_\alpha$ is then found via parametric bootstrap.
We denote this band as $\hat{B}_{s}$, while denoting the one using the true covariance as $B_{s}$.
In traditional multivariate statistics and linear regression, ellipses can be transformed into point-wise intervals and bands using Scheff\'e's method which, at its heart, is an application of the Cauchy-Schwarz inequality. This approach cannot be applied \textit{as is} to our ellipses because they are infinite dimensional. However, a careful modification of Scheff\'e's method can be used to generate bands.
We now show that a $1-\alpha$ ellipsoid confidence region $E_{\hat{\theta}}$ can be transformed into a confidence band $B_{\hat{\theta}}$ such that $E_{\hat{\theta}} \subset B_{\hat{\theta}}$, based on a modification of Scheff\'e's method.
Define
\begin{align}
\label{e:BandZ1}
r(x) = \sqrt{ \frac{\xi}{N}\sum_{j=1}^\infty c_j^2 v_j^2(x)}\ .
\end{align}
Then we have the following theorem.
\begin{theorem}\label{t:band}
If Assumption \ref{a:normal} holds, $\sum c_j^2 < \infty$, and $\sum \lambda_j c_j^{-2}<\infty$, then $r(x) \in \mcH$ and $E_{\hat{\theta}} \subset B_{\hat{\theta}}$. Therefore, $\mbP(\theta \in B_{\hat{\theta}}) \geq 1-\alpha + o(1).$
\end{theorem}
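A minimal sketch of computing the band radius \eqref{e:BandZ1} on a grid is as follows (Python; the eigenpairs and $\xi$ are taken as inputs, with $\xi$ computed, for instance, by the gamma approximation discussed at the end of this subsection):
\begin{verbatim}
import numpy as np

def band_radius(V, c2, xi, N):
    # V:  (J, grid size) array of eigenfunctions v_j
    # c2: length-J array of weights c_j^2, e.g. sqrt(eigenvalues)
    # Pointwise radius r(x) = sqrt(xi/N * sum_j c_j^2 v_j(x)^2):
    return np.sqrt(xi / N * (c2[:, None] * V**2).sum(axis=0))

# The band at a grid point x is theta_hat(x) +/- band_radius(...)[x].
\end{verbatim}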
These bands also lead to a convenient metric for choosing an ``optimal'' sequence $c_j$. In particular, we choose the $c_j$ which lead to a band with the \textit{narrowest average squared width}. This is in general a difficult metric to quantify due to $\xi$. However, we can replace $\xi$, which is a quantile of a random variable, by its mean to obtain the following:
\begin{align*}
ASW(\{ c_j \} ) = \sum_{j=1}^\infty \frac{\lambda_j}{c_j^2} \sum_{i=1}^\infty {c_i^2}.
\end{align*}
Clearly the $\{c_j\}$ are unique only up to a constant multiple, however, it is a straightforward calculus exercise to show that one option is to take $c_j^2 = \lambda_j^{1/2}$, which is also conceptually very simple. It is also worth noting that this choice does not change with the smoothness of the underlying parameters or the covariance of the estimator; these quantities are implicitly captured by the eigenvalues themselves and thus already built into the $c_j$ with this choice.
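For completeness, here is the short argument, which is a direct application of the Cauchy--Schwarz inequality:
\begin{equation*}
\Big(\sum_{j=1}^\infty \lambda_j^{1/2}\Big)^2
=\Big(\sum_{j=1}^\infty \frac{\lambda_j^{1/2}}{c_j}\, c_j\Big)^2
\leq \sum_{j=1}^\infty \frac{\lambda_j}{c_j^2}\sum_{i=1}^\infty c_i^2 = ASW(\{ c_j \}),
\end{equation*}
with equality if and only if $\lambda_j/c_j^2$ is proportional to $c_j^2$, that is, $c_j^2 \propto \lambda_j^{1/2}$.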
In practice, the coverage of this band will be larger than $1-\alpha$, since $E_{\hat{\theta}} \subset B_{\hat{\theta}}$ and the coverage of $E_{\hat{\theta}}$ is $1-\alpha$. Our simulation studies show that this gap is non-trivial for rougher processes, but narrows substantially for smoother ones. The band formed this way from $E_c$ will be denoted as $B_{E_c}$.
Our suggested band \eqref{e:BandZ1} takes into account the covariance structure of the estimator via the eigenvalues (through the $c_j$) and the eigenfunctions. Thus, our band differs from those described in \cite{degras:2011} in that we do not use a constant threshold after taking into account the point-wise variance; our band adjusts locally to the within curve dependence of the estimator. We will illustrate this point further in Section \ref{section:Simulation} as one of our simulation scenarios will have a dependence structure which changes across the domain. Our band will adjust to this dependence, widening in areas with low within curve dependence and narrowing when this dependence is high.
Lastly, one practical issue arises in finding the proper $\xi$, since finding the quantile of a weighted sum of $\chi^2$ random variables is not straightforward. One may try to invert the approximate CDF, using, e.g., \texttt{imhof} in \texttt{R}. Alternatively, one can use a gamma approximation by matching the first two moments \citep{feiveson:delaney:1968}. Our simulations showed that for typical choices of $\alpha$, such as $0.1, 0.05,$ or $0.01$, a gamma approximation works well.
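A minimal sketch of this approximation (Python; the weights are $w_j=\lambda_j/c_j^2$, and $W_\theta$ has mean $\sum_j w_j$ and variance $2\sum_j w_j^2$ under Gaussianity):
\begin{verbatim}
import numpy as np
from scipy.stats import gamma

def xi_gamma(lam, c2, alpha=0.05):
    # Approximate 1 - alpha quantile of W = sum_j (lam_j/c_j^2) chi2_1.
    w = lam / c2
    mean, var = w.sum(), 2 * (w**2).sum()
    shape, scale = mean**2 / var, var / mean   # match first two moments
    return gamma.ppf(1 - alpha, a=shape, scale=scale)
\end{verbatim}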
\section{Estimating Confidence Regions and Ghosting}
\label{section:EstConfRegion}
We have, until now, treated $C_\theta$ as known for ease of exposition and to explore the infinite dimensional nature of the regions. In this section we consider the fully estimated versions. Issues arise here that do not arise in the multivariate setting. In particular, one typically has \textit{zero-coverage} when working with estimated regions, but we will show that these regions are still in fact useful since they are very close to regions with proper coverage. In this sense, we call them \textit{Ghost Regions} since they `ghost' the regions with proper coverage. Here we view the empirical regions as estimators of the desired regions which have proper coverage, and then show that the distance between the two quickly converges to zero. Our purpose in doing so is to provide a theoretical justification for using the regions in practice. In Section \ref{section:Simulation} we will also validate these regions through simulations.
We assume that we have an estimator $\hat{C}_\theta$ of $C_\theta$ which achieves root-$N$ consistency.
Consistency of $\hat{C}_\theta$ enables us to replace $\{( v_j , \lambda_j) \}_{j=1}^{\infty}$ with the empirical versions $\{( \hat{v}_j , \hat{\lambda}_j) \}_{j=1}^{N}$\footnote{In practice we usually have fewer than $N$ empirical eigenfunctions due to the estimation of other parameters.}.
When we replace $\{v_j\}_{j=1}^\infty$ with $\{\hat{v}_j\}_{j=1}^{N}$, however, we nearly always end up with a finite number of estimated eigenfunctions (with nonzero eigenvalues).
We present asymptotic theory for the hyper-ellipsoid form although similar arguments can be applied to the hyper-rectangular form.
Define $\cH_J := \Span ( \{\hat{v}_j\}_{j=1}^{J} ) \subset \mcH$ where $J \leq N$. We construct two versions of the estimated confidence regions
\begin{align}
\label{e:estimatedregion-open}
\hat{E}^{\circ}_{\hat{\theta}} = \left\{ h \in \cH : \sum_{j=1}^{J} \frac{\langle h - \hat{\theta}, \hat{v}_j \rangle^2}{N^{-1}c_j^2} \leq \xi \right\} \qquad \text{and}
\end{align}
\begin{align}
\begin{split}
\label{e:estimatedregion}
\hat{E}_{\hat{\theta}}
= \left\{ h \in \cH_J : \sum_{j=1}^{J} \frac{\langle h - \hat{\theta}, \hat{v}_j \rangle^2}{N^{-1} c_j^2} \leq \xi \right\}
= \left\{ h \in \cH: \sum_{j=1}^{\infty} \frac{\langle h - \hat{\theta}, \hat{v}_j \rangle^2}{N^{-1} c_j^2 \mathbf{1}_{j \leq J}} \leq \xi \right\},
\end{split}
\end{align}
though in our theoretical results we will let $J \to \infty $ with $N$. The empirical eigenfunctions $\{\hat{v}_j\}_{j=1}^J$ can be extended to give a full orthonormal basis of $\cH$. Note that $\hat{E}_{\hat{\theta}}$ is `closed off' while $\hat{E}^{\circ}_{\hat{\theta}}$ is `opened up' for those dimensions not captured by the first $J$ components.
We take $\xi$ to be the $1-\alpha$ quantile of a weighted sum of $\chi^2$ random variables with weights $\{{\hat \lambda_j}{c^{-2}_j} \}_{j=1}^J$.
Observe that $\hat{E}_{\hat{\theta}}^{\circ}$ achieves the proper coverage $\mbP ( \theta \in \hat{E}_{\hat{\theta}}^{\circ} ) \to 1-\alpha$. However, $\hat{E}_{\hat{\theta}}^{\circ}$ cannot be compact regardless of how $\{c_j\}$ is chosen unless $\mcH$ is finite dimensional. If we quantify the distance between sets using Hausdorff distance, $\hat{E}_{\hat{\theta}}^{\circ}$ does not converge to $E_{\hat{\theta}}$ since it is unbounded.
On the other hand, $\hat{E}_{\hat{\theta}}$ is always compact but has \textit{zero-coverage}; we almost always have $\mbP ( \theta \in \hat{E}_{\hat{\theta}} ) = 0$ regardless of the sample size.
Therefore, neither empirical confidence region maintains the nice properties of the ones using a known covariance -- compactness and proper coverage -- at the same time. However, as we will show, $\hat{E}_{\hat{\theta}}$ is close to $E_{\hat{\theta}}$ in Hausdorff distance, meaning we can use $\hat{E}_{\hat{\theta}}$ as an estimate of the desired region ${E}_{\hat{\theta}}$. With this convergence result at hand, one may prefer the closed version $\hat{E}_{\hat{\theta}}$ over $\hat{E}_{\hat{\theta}}^{\circ}$ as a confidence region. Because $\hat{E}_{\hat{\theta}}$ does not have proper coverage we call it a \textit{ghost} region.
\subsection{Convergence in the Hausdorff Metric}
In this subsection we show that the Hausdorff distance, denoted $d_H$, between $\hat{E}_{\hat{\theta}}$ and ${E}_{\hat{\theta}}$ can be well controlled. In particular, we will show that this distance converges to zero faster than $N^{-1/2}$. Since this is the rate at which ${E}_{\hat{\theta}}$ shrinks to a point, this is necessary to ensure that $\hat{E}_{\hat{\theta}}$ is actually useful as a proxy for ${E}_{\hat{\theta}}$.
We begin by introducing a fairly weak assumption on the distribution of $\hat{C}_\theta$. Recall that $C_\theta$ is a Hilbert-Schmidt operator (all covariance operators are) in the sense that
$
\|C_{\theta}\|^2_{\mcS} := \sum_{j=1}^\infty \| C_{\theta}(e_j) \|^2_\mcH < \infty
$
where $\{e_j\}$ is any orthonormal basis of $\mcH$. We denote the vector space of Hilbert-Schmidt operators by $\mcS$, which is also a real separable Hilbert space with inner product
$
\langle \Psi, \Phi \rangle_{\mcS} := \sum_{j=1}^\infty \langle \Psi(e_j), \Phi(e_j) \rangle_{\mcH}.
$
A larger space, $\mcL$, consists of all bounded linear operators with norm
$
\| \Psi\|_{\mcL} = \sup_{h \in \mcH} \| \Psi (h)\| / \|h\|,
$
which is bounded above by the $\mcS$ norm, implying $\mcS \subset \mcL$.
We now assume that we have a consistent estimate of $C_\theta$.
\begin{assumption} \label{assumption:1}
Assume that we have an estimator $\hat C_\theta$ of $C_\theta$ which is root-$N$ consistent in the sense that $N \mbE \| \hat C_\theta - C_\theta\|_{\mcS}^2 = O(1)$.
\end{assumption}
The Hausdorff distance between two subsets $S_1$ and $S_2$ of $\cH$ is defined as
\[
d_H(S_1,S_2) = \max\{\rho(S_1,S_2), \rho(S_2,S_1) \}, \qquad \text{where} \qquad \rho(S_1,S_2) = \sup_{x \in S_1} \inf_{y \in S_2} \|x - y \|_\cH.
\]
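For intuition, a minimal sketch of this metric for finite point clouds is given below (Python; the confidence regions themselves are of course infinite sets, so this only serves to fix ideas):
\begin{verbatim}
import numpy as np

def hausdorff(S1, S2):
    # Rows of S1 and S2 are points; D[i, j] = ||S1[i] - S2[j]||.
    D = np.linalg.norm(S1[:, None, :] - S2[None, :, :], axis=-1)
    # max of the two directed distances rho(S1, S2) and rho(S2, S1):
    return max(D.min(axis=1).max(), D.min(axis=0).max())
\end{verbatim}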
We say two regions $S_1$ and $S_2$ converge to each other if $d_H(S_1,S_2)$ converges to $0$. Therefore, to achieve convergence of $\hat{E}_{\hat{\theta}}$ to $E_{\hat{\theta}}$ in probability, we need $d_H(\hat{E}_{\hat{\theta}},E_{\hat{\theta}}) \xrightarrow{P} 0 $ as $N \to \infty$.
To accomplish this, we separate the results for $\rho(\hat{E}_{\hat{\theta}},E_{\hat{\theta}})$ and $\rho({E}_{\hat{\theta}},\hat E_{\hat{\theta}})$. Since $\hat{E}_{\hat{\theta}}$ is the ``smaller'' set, the former is primarily controlled by the distance between the empirical and population level eigenfunctions. The latter is additionally influenced by how large the remaining dimension of $E_{\hat{\theta}}$ is. We define $\{\alpha_j\}$ as
\[
\alpha_1 := \lambda_1 - \lambda_2 \quad \text{and} \quad \alpha_j := \min\{ \lambda_j - \lambda_{j+1}, \lambda_{j-1} - \lambda_{j} \} \text{ for } j = 2, \dots.
\]
Our primary convergence results are given in the following two theorems.
\begin{theorem}
\label{thm:Convergence1}
If Assumptions \ref{a:normal} and \ref{assumption:1} hold and $c_1 \geq c_2 \geq \dots$, then with probability one
\begin{align}
\label{e:order1}
\rho(\hat{E}_{\hat{\theta}},E_{\hat{\theta}}) \leq \left[ \sum_{j=1}^{J} \frac{8 \xi c_1^2 \|\hat C_\theta - C_\theta\|_\mcL^2}{N \alpha_j^{2}} \right]^{\frac{1}{2}}.
\end{align}
\end{theorem}
\begin{theorem}
\label{thm:Convergence2}
If Assumptions \ref{a:normal} and \ref{assumption:1} hold and $c_1 \geq c_2 \geq \dots$, then with probability one
\begin{align}
\rho(E_{\hat{\theta}}, \hat{E}_{\hat{\theta}})
\leq \left[c_J^2 N^{-1} \xi \right]^{\frac{1}{2}} + \left[ \sum_{j=1}^{J} \frac{8 \xi c_1^2 \|\hat C_\theta - C_\theta\|_\mcL^2}{N \alpha_j^{2}} \right]^{\frac{1}{2}}.
\label{e:order2}
\end{align}
\end{theorem}
With Theorems \ref{thm:Convergence1} and \ref{thm:Convergence2} in hand, we can characterize the overall convergence rate for $d_H(\hat{E}_{\hat{\theta}},E_{\hat{\theta}}) $, but we first need more explicit assumptions on the rates for the eigenvalues, $\lambda_j$, and weights, $c_j^2$.
\begin{assumption}
\label{a:rate1}
Assume that there exist constants $K > 1 $, $\delta > 1$, and $\gamma > 0$ such that
$$\frac{1}{Kj^{\delta}} \leq \lambda_j \leq \frac{K}{ j^{\delta}},
\qquad
\frac{1}{Kj^{\delta+1}} \leq \lambda_j - \lambda_{j+1} \leq \frac{K}{ j^{\delta+1}},
\quad \text{ and } \quad
\frac{1}{K j^{ 2 \gamma}} \leq c_j^2 \leq \frac{K}{j^{ 2 \gamma}},
$$
for all $j = 1, \dots$, where we have $0 < 2 \gamma < \delta -1$.
\end{assumption}
The first two assumptions are quite common in FDA. One needs to control the rate at which the eigenvalues go to zero as well as the spread of the eigenvalues which influences how well one can estimate the corresponding eigenfunctions, though this can likely be slightly relaxed \citep{reimherr:2015}. The rate at which $c_j^2$ decreases to zero also needs to be well controlled, and, in particular, it cannot go to zero much faster than $\lambda_j$.
\begin{theorem}
\label{thm:ConvergenceOverall}
Assume that Assumptions \ref{a:normal}, \ref{assumption:1} and \ref{a:rate1} hold, then
\begin{enumerate}
\item The $J$ which balances \eqref{e:order1} and \eqref{e:order2} is $J = N^{\frac{1}{2 \delta + 3 + 2 \gamma}}$.
\item The overall convergence rate is then
$\mbE [d_H(\hat{E}_{\hat{\theta}},E_{\hat{\theta}})^2 ] \leq
O \left( N^{-\left( 2 - \frac{2\delta + 3}{2 \delta + 3 + 2\gamma} \right)} \right)$.
\end{enumerate}
\end{theorem}
Theorem \ref{thm:ConvergenceOverall} shows that the squared distance between the ghost region $\hat{E}_{\hat{\theta}}$ and the desired region $E_{\hat{\theta}}$ goes to zero faster than $N^{-1}$, which is the rate at which $E_{\hat{\theta}}$ shrinks to a point. This suggests that $\hat{E}_{\hat{\theta}}$ is a viable proxy for ${E}_{\hat{\theta}}$, even though it has zero coverage.
Interestingly, the rate is better the faster that $c_j$ tends to zero, i.e. for larger values of $\gamma$. At first glance, this may suggest that one should actively try to find $c_j$ which tend to zero as fast as possible. However, by changing the $c_j$ one is changing the confidence region into a potentially less desirable one.
In particular, as we will soon see, the choice of $c_j^2 = \lambda_j^{1/2}$ leads to a suboptimal convergence rate in Theorem \ref{thm:ConvergenceOverall}, but, in some sense, leads to an optimal confidence band and excellent empirical performance. Thus, this may be one of the few instances in statistics where it is not necessarily desirable to have the ``fastest'' rate of convergence.
We finish this section by stating a Corollary for when $c_j^2 = \hat \lambda_j^{1/2}$. In this case, we also take into account that the $c_j$ are estimated from the data. Note that in Theorem \ref{thm:ConvergenceOverall} it is assumed that the $c_j$ are not random, while in Theorems \ref{thm:Convergence1} and \ref{thm:Convergence2} the $c_j$ can be random or deterministic as long as they are nonincreasing.
\begin{corol}
\label{corol:EchXConvRate1} Let $c_j^2 = \hat \lambda_j^{1/2}.$ Then under Assumptions \ref{assumption:1} and \ref{a:rate1},
we have
$$d_H(\hat E_{\hat \theta}, E_{\hat \theta})^2 = O_p\left(N^{ - \frac{6\delta + 6}{5 \delta + 6}} \right).$$
\end{corol}
\section{Simulation}
\label{section:Simulation}
In this section, we present a simulation study to evaluate and illustrate the proposed confidence regions and bands. Throughout this section, we only consider dense FDA designs. Section \ref{subsection:HT} first compares different regions for hypothesis testing. Note that comparing these regions presents a nontrivial challenge as we cannot just choose the ``smallest'' one as we are working in infinite dimensional Hilbert spaces. We therefore turn to using the regions for hypothesis testing, evaluating each region's ability to detect different types of changes from some prespecified patterns.
In Section \ref{subsection:ComparisonOfBands}, we visually compare bands and examine their local coverages. Lastly, in Section \ref{subsection:SimulationOnData}, we consider more complicated mean and covariance structures borrowed from the DTI data in Section \ref{s:dti} and examine the effects of smoothing.
\subsection{Hypothesis Testing}
\label{subsection:HT}
We consider testing $H_0 : \theta = \theta_0$ vs. $H_1 : \theta \neq \theta_0$. For a given confidence region $G_{\hat{\theta}}$, the natural testing rule is to reject $H_0$ if $\theta_0 \notin G_{\hat{\theta}}$.
For ellipses and rectangles, however, we compare $\theta_0$ only in the directions included in the construction of the confidence regions to alleviate the ghosting issue and mimic how the methods would be used in practice.
We calculate p-values (detailed in the Appendices) and compare them to $\alpha$. For bands like $B_s$ and $B_{E_c}$, $H_0$ will be rejected if $\theta_0$ sits outside the band at one or more evaluation points over the domain.
In this section and in Section \ref{subsection:ComparisonOfBands}, we take $\mcH = L^2[0,1]$ and consider an $iid$ sample $\{X_i(t)\}_{i=1}^N$, $t\in[0,1]$ from a Gaussian process $\mcN(\theta, C_\theta)$.
To estimate $\theta$ and $C_\theta$ (when unknown), we use the standard estimates \citep{hkbook} $\hat{\theta}(t) = N^{-1}\sum_{i=1}^{N}X_i(t)$ and
$\hat{C}_\theta(t,s) = (N-1)^{-1} \sum_{i=1}^N ( X_i(t) - \hat{\theta}(t) ) ( X_i(s) - \hat{\theta}(s) )$.
To emulate functions on the continuous domain $[0,1]$, functions are evaluated at 100 equally spaced points over $[0,1]$.
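A minimal sketch of this estimation step (Python; \texttt{X} is the $N \times 100$ matrix of discretized curves and \texttt{dt} the grid spacing; scaling by \texttt{dt} converts the matrix eigendecomposition into approximate $L^2$ eigenpairs):
\begin{verbatim}
import numpy as np

def mean_cov_eigen(X, dt):
    # Rows of X are discretized curves on a grid of mesh dt.
    theta_hat = X.mean(axis=0)
    C_hat = np.cov(X, rowvar=False)      # (N-1)-denominator covariance
    evals, evecs = np.linalg.eigh(C_hat)
    idx = np.argsort(evals)[::-1]        # decreasing order
    lam = evals[idx] * dt                # approximate eigenvalues
    V = (evecs[:, idx] / np.sqrt(dt)).T  # rows: L2-normalized v_j
    return theta_hat, C_hat, lam, V
\end{verbatim}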
\subsubsection{Verifying Type I Error}
\label{subsection:TypeIError}
\paragraph{Regions with Known Covariance:} We first verify Type I error rates assuming the true covariance operator is known. Each setting was repeated 50,000 times according to the following procedure:
\begin{enumerate}
\item Generate a sample $\{X_i\}_{i=1}^N \stackrel{iid}{\sim} \mcN(\theta,C_\theta)$. For the mean function we take $\theta(t) := 10t^3-15t^4+6t^5$, which was used in \citet{degras:2011} and \citet{Hart:1986:KRE}. For the covariance operator, we use a Mat\'ern covariance
$C_\theta(t,s) := \frac{.25^2}{\Gamma(\nu) 2^{\nu-1}} \left( \sqrt{2\nu}|t-s|\right)^\nu K_\nu \left(\sqrt{2\nu}|t-s| \right)$, where $K_\nu(\cdot)$ is the modified Bessel function of the second kind, and $\nu$ is the smoothness parameter; a sketch of this generating step is given after the list.
\item Find $\hat{\theta}$ from the sample, and perform hypothesis tests of $\theta_0 = 10t^3-15t^4+6t^5$, which is the same as $\theta$, based on the confidence regions using $C_\theta$, the true covariance.
\end{enumerate}
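A minimal sketch of the data-generating step referenced in item 1 (Python; the small jitter added to the diagonal is a numerical safeguard of our own choosing):
\begin{verbatim}
import numpy as np
from scipy.special import kv, gamma as gamma_fn

def matern_cov(t, nu, sigma=0.25):
    d = np.abs(t[:, None] - t[None, :])
    arg = np.sqrt(2 * nu) * d
    C = sigma**2 / (gamma_fn(nu) * 2**(nu - 1)) * arg**nu * kv(nu, arg)
    C[d == 0] = sigma**2   # limit as |t - s| -> 0
    return C

t = np.linspace(0, 1, 100)
theta = 10 * t**3 - 15 * t**4 + 6 * t**5
C = matern_cov(t, nu=0.5) + 1e-10 * np.eye(len(t))  # jitter
X = np.random.default_rng(0).multivariate_normal(theta, C, size=25)
\end{verbatim}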
To represent small/large sample sizes and rough/smooth processes, the four combinations of $N=25$, $N=100$ and $\nu=1/2$, $\nu=3/2$ were used. Table \ref{tbl:ActualAlphaTrueCov} summarizes the proportions of rejections.
\begin{table}[h]
\centering
\caption{Type I error rates with known covariance. $E_{norm}$, $E_{PC}$, and $E_{c}$ represent ellipsoid regions from norm approach, FPCA approach, and the proposed one, respectively. $B_{s}$ is the simulation based band while $B_{E_c}$ is the band based on $E_{c}$. $R_{z}$ is the proposed rectangular region and $R_{zs}$ is the small sample version of $R_{z}$, which uses only eigenfunctions (but not eigenvalues) of $C_\theta$.}
\label{tbl:ActualAlphaTrueCov}
\begin{tabular}{rl|rrr|rrrr}
\hline
$N$ & $\nu$ & $E_{norm}$ & $E_{PC}$ & $B_{s}$ &
$E_{c}$ & $R_{z}$ & $R_{zs}$ &
$B_{E_c}$ \\ \hline
25 & $\sfrac{1}{2}$ (rough) & .048 & .049 & .049 & .049 & .048 & .049 & .000 \\
25 & $\sfrac{3}{2}$ (smooth) & .051 & .051 & .053 & .050 & .051 & .049 & .025 \\
100 & $\sfrac{1}{2}$ (rough) & .051 & .049 & .052 & .051 & .050 & .050 & .000 \\
100 & $\sfrac{3}{2}$ (smooth) & .050 & .049 & .047 & .049 & .049 & .049 & .023 \\ \hline
\end{tabular}
\end{table}
All the methods are satisfactory except for the transformed band $B_{E_c}$, which is conservative, as expected. For the ellipsoid and rectangular regions, up to the very last PCs were used -- trimming out only $\lambda_j < 10^{-18}$ -- and the results were still stable. Although not presented here, the results were robust against the number of PCs used.
\paragraph{Regions with Unknown Covariance:}
We now use $\hat C_\theta$ instead of $C_\theta$ in step 2 above, and use enough PCs to capture at least $99.9\%$ of the estimated variance, i.e., we take $J = \min\{ j : \sum_{i=1}^j \hat{\lambda}_i / \sum_{i=1}^{N-1}\hat{\lambda}_i \geq .999\}$, for all ellipsoid and rectangular regions. For the FPCA based region, we additionally took $J=3$, which explained approximately $90\%$ of the variability. Table \ref{tbl:ActualAlpha} summarizes the proportions of rejections, and the following can be observed.
\begin{enumerate}
\item Coverage of $\hat{E}_{PC}$ is very sensitive to the number of PCs used and works well only when the number is relatively small. This reinforces the common concern of how best to choose $J$ in practice. In contrast, $\hat{E}_{norm}$ does not face this issue. Our proposed methods $\hat{E}_{c}$ and $\hat{R}_{z}$ lie somewhere between the two, and choosing $J$ is not a concern as long as the very late PCs are dropped.
\item When $N$ is small, the small sample modification of the rectangular region ($\hat{R}_{zs}$) is slightly conservative but achieves seemingly the best result. $\hat{E}_{norm}$ follows closely, possibly due to its lower dependency on later PCs. The details on $\hat{R}_{zs}$ can be found in the Appendices.
\end{enumerate}
\begin{table}[h]
\centering
\caption{Type I error rates with an estimated covariance. $E_{norm}$, $E_{PC}$, and $E_{c}$ represent ellipsoid regions from the norm approach, the FPCA approach, and the proposed one, respectively. $B_{s}$ is the simulation based band while $B_{E_c}$ is the band based on $E_{c}$. $R_{z}$ is the proposed rectangular region and $R_{zs}$ is the small sample version of $R_{z}$, which uses only the eigenfunctions (but not eigenvalues) of $\hat{C}_\theta$.}
\label{tbl:ActualAlpha}
\begin{tabular}{rr|rrrr|rrrr|r}
\hline
$N$ & $\nu$ & $\hat{E}_{norm}$ & $\hat{E}_{PC}$ & $\hat{E}_{PC(3)}$ & $\hat{B}_{s}$ &
$\hat{E}_{c}$ & $\hat{R}_{z}$ & $\hat{R}_{zs}$ &
$\hat{B}_{E_c}$ & $PC^*$ \\ \hline
25 & $\sfrac{1}{2}$ & .057 & .162 & .069 & .087 & .071 & .069 & .041 & .013 & 21 \\
25 & $\sfrac{3}{2}$ & .061 & .132 & .090 & .071 & .068 & .068 & .047 & .039 & 5 \\
100 & $\sfrac{1}{2}$ & .052 & .255 & .056 & .058 & .060 & .059 & .050 & .001 & 53 \\
100 & $\sfrac{3}{2}$ & .052 & .066 & .057 & .054 & .053 & .053 & .049 & .026 & 5 \\ \hline
\end{tabular}
\begin{center}
\textit{* Median number of PCs required to capture $\geq 99.9\%$ of estimated variance.}
\end{center}
\end{table}
We emphasize here the dependence on choosing $J$ for both the FPCA and our new approach. As is well known, FPCA based methods are very sensitive to the choice of $J$, as they place all eigenfunctions on an ``equal footing''. However, later eigenfunctions are often estimated very poorly, which can result in very bad Type I error rates when $J$ is taken too large. In contrast, our approach is not as sensitive to the choice of $J$, since later eigenfunctions are down-weighted. In our simulations, our regions remained well calibrated as long as the very late FPCs were dropped, e.g.\ after capturing $99\%$ of the variance.
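For concreteness, the $J$-selection rule used here (the smallest $J$ capturing the target fraction of estimated variance) can be sketched as follows; the naming is ours:
\begin{verbatim}
import numpy as np

def choose_J(C_hat, level=0.999):
    """Smallest J with (lambda_1 + ... + lambda_J) / (sum of all) >= level."""
    lam = np.linalg.eigvalsh(C_hat)[::-1]   # eigenvalues, descending
    lam = np.clip(lam, 0.0, None)           # guard tiny negative round-off
    frac = np.cumsum(lam) / lam.sum()
    return int(np.searchsorted(frac, level) + 1)
\end{verbatim}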
\subsubsection{Comparing Power}
To compare the power of the hypothesis tests, we gradually perturb $\theta$ -- the actual sample-generating mean function -- from $\theta_0$ by an amount $\Delta \in \mbR$. To emulate what one might encounter in practice, three scenarios are considered (also sketched in code after the list):
\begin{enumerate}
\item shift: $\theta_0(t) = 10t^3-15t^4+6t^5$, \quad $\theta(t) = \theta_0(t) + \Delta$,
\item scale: $\theta_0(t) = 10t^3-15t^4+6t^5$, \quad $\theta(t) = \theta_0(t)(1+\Delta)$,
\item local shift: $\theta_0(t) = \max\left\{0, -10|t-0.5| + 1 \right\}$, \quad $\theta(t) = \max\left\{0, -10|t-0.5| + 1+\Delta \right\}$.
\end{enumerate}
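In code, the three scenarios read as follows (a sketch; the function name is ours):
\begin{verbatim}
import numpy as np

def perturbed_means(name, t, delta):
    """Return (theta_0, theta) on the grid t for a scenario and Delta."""
    if name in ("shift", "scale"):
        theta0 = 10*t**3 - 15*t**4 + 6*t**5
        theta = theta0 + delta if name == "shift" else theta0*(1 + delta)
    elif name == "local shift":
        theta0 = np.maximum(0.0, -10*np.abs(t - 0.5) + 1)
        theta = np.maximum(0.0, -10*np.abs(t - 0.5) + 1 + delta)
    else:
        raise ValueError(name)
    return theta0, theta
\end{verbatim}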
A visual representation of the three scenarios can be found in the left column of Figure \ref{fig:PowerComparison}. We estimate $C_\theta$ throughout, reduce the number of repetitions to 10,000, and use the same combinations of the sample size ($N=25,\ 100$) and smoothness ($\nu=\sfrac{1}{2},\ \sfrac{3}{2}$). For the $\hat{E}_{PC}$ method, the first 3 PCs were again used to ensure an acceptable Type I error. For the other ellipsoid and rectangular regions, $J$ was taken to explain approximately 99.9\% of the variance, as in the previous section. Power plots for $N=100$ and $\nu=\sfrac{1}{2}$ can be found in the right column of Figure \ref{fig:PowerComparison}, and a summary is given in Table \ref{tbl:AveragePower}. The results for the other combinations of sample size and smoothness can be found in the Appendices, but they all lead to the same conclusions:
\begin{enumerate}
\item In scenario 1, $\hat{E}_{PC(3)}$ has the lowest power while the other regions perform similarly.
\item In scenario 2, $\hat{E}_{PC(3)}$ has the highest power while $\hat{E}_{norm}$ has the lowest. Our hyper-ellipse method $\hat E_c$ has only slightly less power than the FPCA method. Our rectangular method $\hat R_z$ and the band of \citet{degras:2011} have about the same power, but both are lower than the ellipse.
\item In Scenario 3, our proposed regions $\hat{E}_{c}$ and $\hat{R}_{z}$ far outperform the existing ones. Note that $\theta$ differs from $\theta_0$ only on a fraction of the domain and the size of the departure is also small. Due to the small $\| \theta - \theta_0\|$, $\hat{E}_{norm}$ performs the worst. The FPCA method $\hat{E}_{PC(3)}$ and Degras's band fall quite a bit behind our proposed methods, but are still better than the norm approach. $\hat{E}_{PC(3)}$ performs much better when the process is smooth, so that the `signal' is captured in the earlier dimensions -- although it still falls short of the proposed ones.
\end{enumerate}
In conclusion, we recommend using $\hat E_c$ in practice for hypothesis testing purposes. We base this recommendation on the following: 1) its power is at the top or near the top in every scenario; 2) its Type I error is well maintained as long as very late PCs are dropped; 3) it is less sensitive to the number of PCs used, as long as the number is reasonably large; 4) it is easy to compute; and 5) it can be used to construct a band. Being able to make this recommendation is quite substantial, as previous work has focused on the norm versus PC approach, where clearly one does not always outperform the other \citep{ReNi:2014}.
\begin{table}
\centering
\caption{Average Power over $\Delta$ for each Scenario}
\label{tbl:AveragePower}
\begin{tabular}{l|rrr|rrr}
\hline
Scenario & $\hat{E}_{norm}$ & $\hat{E}_{PC(3)}$ & $\hat{B}_{s}$ &
$\hat{E}_{c}$ & $\hat{R}_{z}$ & $\hat{R}_{zs}$ \\ \hline
1. Shift & .623 & .560 & .617 & .625 & .607 & .598 \\
2. Scale & .411 & .549 & .503 & .522 & .496 & .480 \\
3. Local Shift & .234 & .568 & .504 & .759 & .770 & .749 \\ \hline
\end{tabular}
\end{table}
\begin{figure}
\centering
\makebox{\includegraphics[width=\textwidth]{MeanFunctionAndPowerN100Nu1h.eps}}
\caption{\label{fig:PowerComparison}Power Comparison for each Scenario ($N=100, \ \nu=1/2$)}
\end{figure}
\subsection{Comparison of Bands}
\label{subsection:ComparisonOfBands}
In this section we compare the shape of $\hat{B}_{E_c}$ with $\hat{B}_{s}$, the two band forms of confidence regions, along with point-wise 95\% confidence intervals denoted as `naive-t'. For this purpose, we consider three different scenarios regarding the smoothness of $\hat{\theta}$. The procedure can be summarized as follows:
\begin{enumerate}
\item Generate a sample $\{X_i\}_{i=1}^N \stackrel{iid}{\sim} \mcN(\theta,C_\theta)$, using the same mean function $\theta(t)$ as in Section \ref{subsection:TypeIError}. For the covariance operator, three scenarios are considered:
\begin{enumerate}
\item The same Mat\'ern covariance as in Subsection \ref{subsection:TypeIError} with $\nu = \sfrac{1}{2}$ (rough).
\item The same Mat\'ern covariance as in Subsection \ref{subsection:TypeIError} with $\nu = \sfrac{3}{2}$ (smooth).
\item $C_\theta(t,s) := \frac{.25^2}{\Gamma(\nu) 2^{\nu-1}} \left( \sqrt{2\nu}|t^{10}-s^{10}|\right)^\nu K_\nu \left(\sqrt{2\nu}|t^{10}-s^{10}| \right)$ with $\nu=\sfrac{1}{2}$. This generates processes that transition from smooth to rough by `warping' the domain.
\end{enumerate}
\item Find $\hat{\theta}$ and $\hat{C}_\theta$, and generate symmetric bands around $\hat{\theta}$ using $\hat{C}_\theta$.
\end{enumerate}
Figure \ref{figure:ConfidenceBands} shows sample paths ($N=25$) from the three different covariance operators in the first row, their 95\% simultaneous confidence bands in the second row, and local coverage rates in the third row. The findings can be summarized as follows:
\begin{enumerate}
\item The proposed band $\hat{B}_{E_c}$ is wider than $\hat{B}_{s}$ for rougher processes, but almost identical to $\hat{B}_{s}$ for smoother ones, except for the far ends of the domain.
\item In the third case, $\hat{B}_{s}$ maintains the same width over the domain, while $\hat{B}_{E_c}$ adjusts its width, becoming narrower in the smooth areas (higher within-curve dependence) and wider in the rough areas.
\item Due to its construction, $\hat{B}_{E_c}$ does not suffer from any local under-coverage, and therefore any pattern in the third row of Figure \ref{figure:ConfidenceBands} can instead be attributed to its over-coverage.
\end{enumerate}
\begin{figure}[h]
\centering
\makebox{\includegraphics[width=\textwidth]{sb_N25.eps}}
\caption{\label{figure:ConfidenceBands} For each column, we have sample paths from a sample (1st row), $95\%$ confidence bands constructed from the sample (2nd row), and point-wise coverage rates from multiple (10,000) samples (3rd row). The `ave.\ width' values in the legends of the 2nd row are averaged over the domain $[0,1]$ and over the samples, and are shown as multiples of the point-wise (true) standard deviation.}
\end{figure}
We conclude that the confidence band $\hat{B}_{E_c}$ is an effective visualization tool to use in practice especially when the estimate $\hat{\theta}$ is relatively smooth. For smoother estimates, it is nearly identical to the parametric bootstrap but is much faster to compute since it requires no simulation. This is important as our band is conservative, utilizing a Scheff\'e-type inequality. It suggests that not much is lost in using such an approach as long as the parameter estimate is sufficiently smooth. If the hypothesis tests and our confidence bands are in disagreement, say due to the conservative nature of $\hat{B}_{E_c}$, then it is recommended to follow up with a parametric bootstrap to get tighter bands.
\subsection{Simulation based on DTI data}
\label{subsection:SimulationOnData}
Although the means and the covariances in Subsections \ref{subsection:HT} and \ref{subsection:ComparisonOfBands} are chosen to mimic common functional objects, actual data in practice may show much more complex structures. In this subsection we use a mean and a covariance structure from the \texttt{DTI} dataset in the \texttt{R} package \texttt{refund}. \textit{This DTI data were collected at Johns Hopkins University and the Kennedy-Krieger Institute}. More details on this dataset can be found in \citet{goldsmith:2012a} and \citet{goldsmith:2012b}. The dataset contains fractional anisotropy tract profiles of the corpus callosum for two groups -- a healthy control group and a multiple sclerosis case group -- observed at 93 locations. In this subsection, we take only the first visit scans of the case group, for which the sample size is 99. We will refer to this sample as the original sample.
First, we estimated the sample mean and covariance from the original sample and treated them as the true parameters. The mean was estimated by penalizing the $2^{nd}$ derivative, with leave-one-out cross-validation, to obtain a smooth mean function, while the covariance was estimated using the standard method (but with the smoothed mean) in order to mimic the roughness of the original sample.
Using this mean and covariance, we generated multiple (10,000) Gaussian simulation samples of sample size $99$.
The use of Gaussian samples is justified by the distribution of the coefficients on each principal component in the original sample.
For each generated sample, two different estimation procedures were applied in order to examine the effect of smoothing. The first approach is to smooth the sample using a quadric B-spline basis (with equally spaced knots) and then use the standard estimates; the second is to smooth the mean function directly by penalizing the $2^{nd}$ derivative (or curvature), with the covariance estimated accordingly, as shown in the Appendices. For the two approaches, leave-one-out cross-validation was used to choose the number of basis functions and the penalty size, respectively. To reduce the computation time, these values were pre-determined from the original sample and applied to the simulation samples.
Beyond the explicit differences between the two smoothing approaches -- smoothing the data first versus smoothing the estimate directly, and B-splines versus a penalty on curvature -- the first approach introduces bias, because the 15 B-spline functions cannot fully recover the assumed mean function. The empirical bias from the first approach was $6.1$ times larger than that from the second.
Figure \ref{figure:PointwiseCoverage} compares confidence bands from the two smoothing schemes. Although they do not show any material difference in the shapes of the bands, we get slightly narrower bands from B-spline smoothing (left column). This may cause under-coverage for $\hat{B}_s$, but it does not work adversely for the proposed band $\hat{B}_{E_c}$, which generally provides over-coverage. The narrower band for $\hat{B}_{E_c}$ is mainly caused by more explicit dimension reduction (more smoothing), whereas for $\hat{B}_s$ and naive-t, bias seems to be the main source; this is supported by the local coverage patterns in the figure.
\begin{figure}[h]
\centering
\makebox{\includegraphics[width=\textwidth]{PointwiseCoverage_case_bspline15_and_directpenn2_N99.eps}}
\caption{\label{figure:PointwiseCoverage}The 95\% confidence bands from a simulation sample (1st row) along with point-wise coverage rates from multiple samples (2nd row).}
\end{figure}
Table \ref{tbl:ActualAlpha_Sim2} compares coverage rates of non-band form regions using two different smoothing schemes and cutting points for $J$.
Note that we see only minor differences between the two smoothing approaches, and the effect of $J$ is essentially the same as in Section \ref{subsection:TypeIError}: the coverage of the FPC based method $\hat{E}_{PC}$ deteriorates quickly as $J$ increases, while $\hat{E}_{norm}$ is not affected, and a $J$ that explains about $99\%$ of the variance raises no major concern for the proposed regions $\hat{E}_{c}$, $\hat{R}_{z}$, and $\hat{R}_{zs}$.
\begin{table}[h]
\centering
\caption{Coverage rates of non-band form regions using two different smoothing approaches}
\label{tbl:ActualAlpha_Sim2}
\begin{tabular}{r|rr|rrr|r||rr|rrr|r}
\hline
Smoothing & \multicolumn{6}{c||}{Bspline on the sample (15 functions)} &
\multicolumn{6}{c}{Penalty on the $2^{nd}$ derivative} \\
\hline
var. $\geq$ & $\hat{E}_{norm}$ & $\hat{E}_{PC}$ &
$\hat{E}_{c}$ & $\hat{R}_{z}$ & $\hat{R}_{zs}$ & $PC^*$
& $\hat{E}_{norm}$ & $\hat{E}_{PC}$ &
$\hat{E}_{c}$ & $\hat{R}_{z}$ & $\hat{R}_{zs}$ & $PC^*$
\\ \hline
0.90 & .949 & .942 & .948 & .947 & .952 & 5 & .949 & .942 & .946 & .948 & .954 & 5 \\
0.95 & .949 & .937 & .946 & .945 & .951 & 7 & .949 & .933 & .944 & .946 & .953 & 8 \\
0.99 & .949 & .904 & .940 & .939 & .947 & 11 & .949 & .887 & .937 & .940 & .949 & 15 \\
0.999 & .949 & .849 & .935 & .933 & .943 & 15 & .948 & .572 & .922 & .904 & .926 & 24 \\ \hline
\end{tabular}
\begin{center}
\textit{* Median number of PCs required to capture desired (estimated) variance.}
\end{center}
\end{table}
\section{Data Example}\label{s:dti}
In this section, we further illustrate the usage of the suggested methods using the same \texttt{DTI} dataset.
We now take the first visit scans of both the control and case groups, whose sample sizes are 42 and 99 respectively, and look at the differences in their means.
\subsection{Visualization via Bands}
\label{subsection:data_visualization}
The first step is to visually compare the two sample mean functions, and make confidence bands for the mean difference. Figure \ref{figure:DTIBand} shows the two sample means, followed by 95\% confidence bands for the mean difference using $\hat{B}_{E_c}$ and $\hat{B}_{s}$, assuming unequal variances.
Although the proposed band $\hat{B}_{E_c}$ is wider than $\hat{B}_{s}$ when the standard estimates from the raw data are used (middle), it becomes narrower when the data are smoothed (right). For smoothing, we used quadric B-splines with equally spaced knots and two-fold cross-validation on the mean difference to choose the number of basis functions; in this case 11 basis functions were chosen.
We observe that the bands do not cover zero for most of the domain, except at the beginning and the very end.
\begin{figure}[h]
\centering
\makebox{\includegraphics[width=\textwidth]{DTIBand_diff_bspline11_norder4.eps}}
\caption{\label{figure:DTIBand}The sample mean functions for the two groups (left) and the difference of these two along with $95\%$ confidence bands (middle, right). For smoothed estimates (right), the gap between $\hat{B}_{E_c}$ and $\hat{B}_{s}$ narrows down.}
\end{figure}
\subsection{Hypothesis Testing}
The results of testing $H_0 : \mu_{\text{ctrl}} = \mu_{\text{case}}$ versus $H_1 : \mu_{\text{ctrl}} \neq \mu_{\text{case}}$ using the different regions are summarized in Table \ref{tbl:DTITesting}. The proposed regions $\hat{E}_{c}$ and $\hat{R}_{z}$ yield p-values at least comparable to existing ones like $\hat{E}_{norm}$ and $\hat{E}_{PC(3)}$. Since there is an overall shift in the difference of the mean functions, little room is left for the proposed regions to outperform $\hat{E}_{norm}$. The small sample version $\hat R_{zs}$ gives a slightly larger p-value, as expected, but not materially so.
\begin{table}[h]
\centering
\caption{P-values from hypothesis tests based on the different regions.}
\label{tbl:DTITesting}
\begin{tabular}{r|r|rrr|rrr|r}
\hline
Data & Var. $\geq$ & $\hat{E}_{norm}$ & $\hat{E}_{PC}$ & $\hat{E}_{PC(3)}$ & $\hat{E}_{c}$ & $\hat{R}_{z}$ & $\hat{R}_{zs}$ & $PC^*$ \\ \hline
Raw & 0.99 & $6.6\times10^{-14}$ & $2.6\times10^{-10}$ & $2.1\times10^{-13}$ & $2.3\times10^{-14}$ & $2.5\times10^{-13}$ & $2.1\times10^{-11}$ & 22 \\
Smoothed & 0.99 & $1.6\times10^{-13}$ & $4.0\times10^{-14}$ & $8.7\times10^{-14}$ & $1.1\times10^{-14}$ & $2.2\times10^{-13}$ & $1.9\times10^{-11}$ & 11 \\
\hline
\end{tabular}
\begin{center}
\textit{* Number of PCs used to capture desired variance except for $\hat{E}_{PC(3)}$ which uses only $3$ PCs}
\end{center}
\end{table}
In \citet{pomann:2016}, two sample tests were developed and illustrated using the same data. There they use a bootstrap approach to calculate p-values. A p-value of approximately zero is reported based on 5000 repetitions, which means that the p-value is $< 2 \times 10^{-4}$. Since our approach is based on asymptotic distributions, not simulations, we are able to give more precise p-values, which are of the order $10^{-14}$ at the lowest and $10^{-11}$ at the highest.
\subsection{Visual Decomposition using Rectangular Region}
\label{s:rectangle-visual}
One merit of a rectangular region is that it can be expressed as an intersection of marginal intervals. Note that since eigenfunctions are uniquely determined only up to sign, it does not help to look at the signs of the coefficients.
Figure \ref{figure:DTI_Marginals} shows confidence intervals for the absolute values of the coefficients of each PC using $\hat{R}_{z}$. We observe that only the confidence interval for the first PC does not cover zero. Based on this, we can infer that there exists a significant difference between the two mean functions along the $1^{st}$ PC, but that the two means are not significantly different in any other feature. In this sense, this visual decomposition serves as a family of hypothesis tests on the PCs while maintaining the family-wise level at $\alpha$.
Although we made intervals for the absolute coefficients to visually represent the importance of each PC, one may choose to make intervals for the absolute $z$-scores instead, to make the later intervals more visible.
\begin{figure}[h]
\centering
\makebox{\includegraphics[width=6.5in]{DTI_Marginals_bspline11.eps}}
\caption{\label{figure:DTI_Marginals}Confidence intervals for absolute coefficients along principal components. Each interval is centered at the absolute value of the coefficient of each estimated PC, i.e.\ $| \langle \hat{\theta}, \hat v_j \rangle |$ for the $j$-th PC, and the interval presents `reasonable candidates' for $| \langle \theta, v_j \rangle |$. The length of each interval can be used as a rough measure of the importance of the corresponding PC. The intervals are shown up to the $10^{th}$ PC to maintain visibility, but the actual rectangular region using the raw data used up to the $22^{nd}$ PC to capture at least $99\%$ of the variance.}
\end{figure}
Once the overall shapes of confidence intervals are obtained, one may choose to examine specific PCs.
Figure \ref{figure:DTICoefPC1} shows the interval for the $1^{st}$ PC as a band along the $1^{st}$ eigenfunction. This reveals that the departure is caused by the `downward' shift of the case group, and confirms that this is the main source of the mean departure in Figure \ref{figure:DTIBand}.
Lastly, we mention that smoothing also makes little difference in the `shapes' of the intervals in Figures \ref{figure:DTI_Marginals} and \ref{figure:DTICoefPC1}, except for the effect of smoothing itself -- a smoother ($1^{st}$) eigenfunction and more variance captured in the early PCs.
\begin{figure}[h]
\centering
\makebox{\includegraphics[width=6.5in]{DTICoef_PC1_bspline11.eps}}
\caption{\label{figure:DTICoefPC1} Representation of the confidence interval for the $1^{st}$ PC as a confidence band along the $1^{st}$ eigenfunction.}
\end{figure}
\section{Discussion}
Each of the proposed and existing regions (and the corresponding hypothesis tests) has pros and cons, and therefore the decision on which region to use in practice will depend on many factors, including the nature of the data, the purpose of the research, etc. However, we believe that we have clearly demonstrated that the proposed hyper-ellipses, $\hat E_c$, or hyper-rectangles, $\hat R_z$, make excellent candidates as the ``default'' choice. In our simulations, they were at the top or near the top, in terms of power, in every setting. Deciding between ellipses and rectangles comes down to how the regions will be used. If the focus is on the principal components and interpreting their shapes, then the rectangular regions make an excellent choice. If the FPCs are of little to no interest, then the hyper-ellipses combined with their corresponding band make an excellent choice, especially if the parameter estimate is relatively smooth. However, for rougher estimates, we recommend sticking with simulation based bands like that of \citet{degras:2011}, as opposed to the bands generated from the ellipses.
We also believe that the discussed perspectives on coverage and ghosting will be useful for developing and evaluating new methodologies. From a theoretical point of view, working with infinite dimensional parameters presents difficulties which are not found in scalar or multivariate settings. In particular, it is common for methods to ``clip'' the infinite dimensional parameters. In practice the clipping may or may not have much of an impact -- for example, the FPCA methods are very sensitive to this clipping while our ellipses and rectangles are not -- but in all cases it introduces an interesting theoretical challenge. Our ghosting framework will be useful as it provides a sound basis for using regions with deficient coverage.
For the first time, the construction of confidence regions and bands has been placed together into a Hilbert space based framework, which has become a primary model for many FDA methodologies. However, we believe there is a great deal of additional work to be done in this area and that it presents some exciting opportunities. For example, are there other metrics for determining which confidence region to use? Do these metrics lead to different choices of $c_j$?
How can we choose $J$, the number of PCs to use in practice, without undermining proper coverage, given the poor estimation of the later PCs?
Are there other shapes beyond ellipses and rectangles which are useful? Can we use metrics better than the Hausdorff distance for evaluating convergence? Many open questions remain, which we hope other researchers will find interesting.
\bibliographystyle{rss}
\section{Introduction}
Maximal immersions are zero mean curvature immersions in the Lorentz-Minkowski space $\mathbb E_1^3$. These are very similar to minimal surfaces in $\mathbb R^3$, but if we allow some singularities (points where the map is not an immersion), the theories of the two differ. Maximal surfaces with singularities are called generalized maximal surfaces. Singularities on a generalized maximal surface are either branched or non-branched. Non-branched singular points are those points where the limiting tangent space does not collapse and contains a light-like vector. Various aspects of non-branched singularities have been studied in \cite{Estudillo1992}, \cite{Fujimori2007}, \cite{imaizumi2008}, \cite{Kim2007}, \cite{KOBAYASHI1984}, \cite{OT2018}, \cite{UMEHARA2006}, etc.
Umehara and Yamada, in \cite{UMEHARA2006}, proved that every non-branched generalized maximal immersion, as a map into $\mathbb R^3$, turns out to be a frontal. They called non-branched generalized maximal immersions maxfaces and discussed when a maxface becomes a front near a singular point. Cuspidal-edges, swallowtails, cuspidal crosscaps, cuspidal butterflies, and cuspidal $S_1^-$ are a few singularities that appear on a maxface $X$ as fronts or frontals.
On cuspidal-edges of fronts, Saji, Umehara, and Yamada \cite{Saji2009} introduced the singular curvature function, which is closely related to the behavior of the Gaussian curvature of a surface near cuspidal edges. Further, Martins and Saji \cite{MARTINS2018209} studied differential geometric properties of cuspidal edges with boundary and gave several differential geometric invariants. Toshizumi Fukui \cite{FukuiCuspidaledge} also studied the local differential geometry of cuspidal edges.
Not all maxfaces are fronts, but approximating singularities on maxfaces by cuspidal-edges (the first kind of singularities of fronts) may help us to understand the existence of invariants related to other types of singularities, analogous to those known for the cuspidal-edge (\cite{FukuiCuspidaledge}, \cite{MARTINS2018209}, \cite{Saji2009}). This idea motivates us to approximate singularities by cuspidal-edges.
In Section 4 of this article, we construct a sequence (there may be many others) of maxfaces with cuspidal edges that ``converges'' to other singularities, such as the shrinking or folded singularities, or those in Table \ref{intro:tab}. To prove this, we give necessary and sufficient conditions on the singular Björling data $\{\gamma,L\}$ such that the corresponding maxface has a cuspidal-edge at a given point.
Along with the cuspidal-edge, in this article we find the necessary and sufficient conditions on the singular Björling data $\{\gamma,L\}$ such that the corresponding maxface has swallowtails, cuspidal crosscaps, cuspidal butterflies, or cuspidal $S_1^-$. We summarize these conditions (given in Propositions \ref{proposition_cuspidaledge}, \ref{proposition_swallowtails}, and \ref{propsition_crosscaps}) in Table \ref{intro:tab}.
\begin{table}[h]
\small
\begin{tabular}{|c |c| c| c| c| c| c|c|c| }\hline
\diagbox[width=4cm]{{Nature of}\\ {singularities at $p$}}{{Function's value}\\ at $p$}&$\gamma^\prime$&$L$&$\gamma^{\prime\prime}$&$\gamma^{\prime\prime\prime}$&$L^\prime$&$L^{\prime\prime}$&$\gamma_1^{\prime}\gamma_2^{\prime\prime}-\gamma_1^{\prime\prime}\gamma_2^{\prime}$&$L_1L_2^{\prime}-L_1^{\prime}L_2$\\
\hline
Cuspidal-edge&$\neq0$&$\neq0$&--&--&--&--&$\neq 0$&$\neq 0$\\
\hline
Swallowtails&$=0$&$\neq0$&$\neq0$&--&--&--&--&$\neq 0$\\
\hline
Cuspidal butterflies&$=0$&$\neq 0$&$=0$&$\neq 0$&--&--&--&$\neq 0$\\
\hline
Cuspidal $S_1^-$&$\neq0$&$=0$&--&--&$=0$&$\neq 0$&$\neq 0$&--\\
\hline
Cuspidal-Crosscaps&$\neq0$&$=0$&--&--&$\neq 0$&--&$\neq 0$&--\\
\hline
\end{tabular}
\caption{Necessary and sufficient conditions on the singular Bj\"{o}rling data $\{\gamma, L\}$ at a point $p$ for each type of singularity; a dash means that no condition is imposed.}
\label{intro:tab}
\end{table}
We believe the above table is very useful: apart from producing the required convergent sequences, it may be a starting point for studying a suitable interpolation problem (finding a maxface containing two disjoint curves with a prescribed nature of singularities along each curve). In \cite{LOPEZ20072178}, L\'{o}pez discussed a related interpolation problem, where he proves the existence of a maximal immersion (not generalized) spanning two disjoint circular contours. Some discussion about finding a maxface with two interpolating singular curves can be found in \cite{RPR2016}. However, we believe a general discussion requires an initial setup such as the conditions in Table \ref{intro:tab}. In \cite{brander_2011}, David Brander discussed similar conditions for non-maximal CMC surfaces with special data.
The discussion of this article is close to \cite{brander_2011}, \cite{Kim2007}, and \cite{UMEHARA2006}.
\section{Preliminary}
This section reviews the definition of maxface, Weierstrass-Enneper representation, and the singular Björling problem.
The Lorentz-Minkowski space $\mathbb E_1^3$ is the vector space $\mathbb R^3$ with the metric $\langle, \rangle : \mathbb R^3\times\mathbb R^3 \to \mathbb R$ defined by $\langle(a_1,b_1,c_1),(a_2,b_2,c_2)\rangle := a_1a_2 + b_1b_2 -c_1c_2$, and a generalized maximal immersion is an immersion of a Riemann surface (with boundary) $M$ into $\mathbb E_1^3$ such that the pullback metric on $M$ does not vanish identically and is positive definite wherever it does not vanish. Moreover, at non-degenerate points the mean curvature is zero. Maxfaces are those generalized maximal immersions whose singularities are only those points of $M$ where the limiting tangent plane contains a light-like vector. We have the following representation of a maxface.
\subsection{Weierstrass-Enneper representation \cite{UMEHARA2006}} For a maxface $X : M \to\mathbb E_1^3$, there is a pair $(g,\omega)$ of a meromorphic function and a holomorphic 1-form on $M$ such that $|g|$ is not identically equal to 1 and, for $\Phi := (1 + g^2,i(1-g^2),-2g)\omega$, the map $X$ is given by $X(p) := Re \int_{0}^{p}\Phi.$
For a maxface, with the help of Weierstrass data $(g,\omega)$, below we define functions $\alpha, \beta,$ and $\eta$ as in \cite{Fujimori2007}, \cite{OT2018}, and \cite{UMEHARA2006}.
\begin{definition}
\label{alpha_beta_gamma}
At $p\in M,$ let $(U,z)$ be a coordinate chart and $\omega= f\,dz$, we define
$$ \alpha(z)=\frac{g^{\prime}(z)}{g^2(z)f(z)},\; \beta(z)=\frac{g(z)}{g^{\prime}(z)}\alpha^{\prime}(z),\,\, { \rm and }\,\, \eta(z)=\frac{g(z)}{g^{\prime}(z)}\beta^{\prime}(z). $$
\end{definition}
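Given concrete Weierstrass data $(g, \omega = f\,dz)$, these functions are straightforward to compute symbolically. The following sketch (ours, using \texttt{sympy}) implements the definition above directly:
\begin{verbatim}
import sympy as sp

z = sp.symbols('z')

def singularity_functions(g, f):
    """alpha, beta, eta of the definition above, from data (g, f)."""
    alpha = sp.diff(g, z) / (g**2 * f)
    beta = g / sp.diff(g, z) * sp.diff(alpha, z)
    eta = g / sp.diff(g, z) * sp.diff(beta, z)
    return [sp.simplify(h) for h in (alpha, beta, eta)]

# e.g. the data g = z, f = 1 gives alpha = 1/z**2, beta = -2/z**2,
# eta = 4/z**2
print(singularity_functions(z, sp.Integer(1)))
\end{verbatim}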
These functions help us to check the nature of a singularity on a maxface. In \cite{Fujimori2007}, \cite{OT2018}, and \cite{UMEHARA2006}, we find criteria to check the nature of a singularity, which we record here.
\begin{table}[h]
\begin{tabular}{|c c c c c|}
\hline
Re$(\alpha)\neq 0$ & Im$(\alpha)\neq 0$ & & &\hspace{-1cm}$\Leftrightarrow p$ is a cuspidal-edge\\\hline
Re$(\alpha)\neq 0$ &Im$(\alpha)= 0$ & Re $(\beta)\neq 0$& &\hspace{-1cm}$\Leftrightarrow p$ is a swallowtail\\\hline
Re$(\alpha)\neq 0$ & Im$(\alpha)= 0$ & Re $(\beta)= 0$ &Im$(\eta)\neq 0$&$\Leftrightarrow p$ is a cuspidal butterfly\\\hline
Re$(\alpha)=0$ & Im$(\alpha)\neq 0$ & Im$(\beta)= 0$ &Re$(\eta)\neq 0$&$\Leftrightarrow p$ is a cuspidal $S_1^-$\\\hline
Re$(\alpha)= 0$ & Im$(\alpha)\neq 0$ & Im $(\beta)\neq 0$& &$\Leftrightarrow p$ is a cuspidal crosscap\\\hline
\end{tabular}
\caption{Criteria for the type of singularity at a point $p$ in terms of the values of $\alpha$, $\beta$, and $\eta$ at $p$.}
\end{table}
\subsection{Singular Björling problem \cite{Kim2007}} We explain the singular Bj\"{o}rling problem in the following.
\begin{definition}(Singular Björling data \cite{Kim2007}).
\label{bjorling_data}
Let $\gamma : I\to\mathbb E_1^3$ be a real analytic null curve and $L : I\to\mathbb E_1^3$ be a real analytic null vector field such that for all $u \in I$, $\gamma^\prime (u)$ and $L(u)$ are proportional, and $\gamma^{\prime}(u)$ and $L(u)$ do not vanish simultaneously. Such $\{\gamma,L\}$ is said to be a singular Bj\"{o}rling data.
\end{definition}
If the analytic extension of the function $g : I \to \mathbb C$,
\begin{equation}
\label{gauss_map}
g(u):=\begin{cases}
\dfrac{L_1+iL_2}{L_3};\; \text{ if } \gamma' \text{ vanishes identically}\\
\dfrac{{\gamma_1}'+i{\gamma_2}'}{{\gamma_3}'};\; \text { if } L \text{ vanishes identically}
\end{cases}
\end{equation}
satisfies $|g(z)| \not \equiv 1$
on some simply connected domain $\Omega \subset \mathbb C$, where $z = u + iv \in \Omega$ and $I\subset \Omega$, then there is a unique generalized maximal immersion $X:\Omega\to \mathbb E_1^3$ given by (for a fixed $u_0\in I$)
$
X(z)= \gamma(u_0)+ {Re}\left(\int_{u_0}^z (\gamma^{\prime}(w)-i L(w)) dw \right)
$
such that $X(u,0) = \gamma(u)$ and $X_v(u,0) = L(u)$. Moreover, its singular set contains $I$. After a translation we can assume that $\gamma(u_0)=0$, so we consider the solution as
\begin{equation}
\label{bjorling_solution}
X_{\gamma,L}(z)= {Re}\left(\int_{u_0}^z (\gamma^{\prime}(w)-i L(w)) dw \right).
\end{equation}
Because of the way the singular Bj\"{o}rling data are chosen, the singular set contains the interval $I$, and for all $u$, $\gamma^{\prime}(u)$ and $L(u)$ do not vanish simultaneously. Therefore $X_{\gamma, L}$ turns out to be a maxface in a neighborhood of the singular points. In fact, the Weierstrass data for the maxface in equation \ref{bjorling_solution} are given by the analytic extensions of $f(u) = (\gamma_1^\prime-iL_1)-i(\gamma_2^\prime-iL_2)$ and $g$ as in equation \ref{gauss_map}.
\section{Necessary and sufficient conditions on the singular Björling data for prescribed type of singularity }
In this section, we calculate $\alpha, \beta$, and $\eta$ as in Definition \ref{alpha_beta_gamma} for the maxface $X_{\gamma, L}$ at a singularity $t_0\in I$. We obtain necessary and sufficient conditions on $\{\gamma,L\}$ so that $X_{\gamma,L}$ has cuspidal-edge, swallowtail, cuspidal crosscap, cuspidal butterfly, or cuspidal $S_1^-$ singularities.
\subsection{For cuspidal-edge at $u\in I$.}
Let $\{\gamma,L\}$ be the singular Björling data as in the definition \ref{bjorling_data}. With the Gauss map as in the equation \ref{gauss_map}, we calculate $\alpha$ as in the definition \ref{alpha_beta_gamma}.
At $u\in I$, if $\gamma^{\prime}(u) \neq 0$, then there exists a real number $c$ such that $L(u) = c\gamma^{\prime}(u)$. So $f(u) = (1-ic)(\gamma_1^{\prime}(u)-i\gamma_2^{\prime}(u))$ and $g(u) =\dfrac{\gamma_1^{\prime}(u)+i\gamma_2^{\prime}(u)}{\gamma_3^{\prime}(u)}$. In this case, we have $\alpha(u) = \dfrac{g^{\prime}}{g^2f}(u)=\dfrac{g^{\prime}}{g(1-ic)\gamma_3^{\prime}}(u)$. Substituting the values of $f$ and $g$, we get
$$
\alpha(u)=\frac{\gamma_3^\prime(\gamma_1^{\prime\prime}+i\gamma_2^{\prime\prime})-(\gamma_1^\prime+i\gamma_2^\prime)\gamma_3^{\prime\prime}}{\gamma_3^{\prime 2}(1-ic)\gamma_3^\prime}\frac{\gamma_3^\prime}{\gamma_1^\prime+i\gamma_2^\prime}=-\frac{\gamma_3^{\prime\prime}}{\gamma_3^{\prime 2}(1-ic)}+\frac{(\gamma_1^{\prime\prime}+i\gamma_2^{\prime\prime})(\gamma_1^\prime-i\gamma_2^\prime)}{\gamma_3^\prime(1-ic)(\gamma_1^{\prime 2}+\gamma_2^{\prime 2})}(u).
$$
That is we have,
$$
\alpha(u)=\frac{1}{\gamma_3^{\prime 3}(1-ic)}[-\gamma_3^{\prime\prime}\gamma_3^\prime+\gamma_1^{\prime\prime}\gamma_1^\prime+\gamma_2^{\prime\prime}\gamma_2^\prime+i(\gamma_1^\prime\gamma_2^{\prime\prime}-\gamma_1^{\prime\prime}\gamma_2^\prime)](u).
$$
We denote:
\begin{equation}
D(\gamma_{12}^\prime, \gamma_{12}^{\prime\prime}):=\gamma_1^\prime\gamma_2^{\prime\prime}-\gamma_1^{\prime\prime}\gamma_2^\prime;\quad D(L_{12}, L_{12}^\prime):=L_1L_2^\prime-L_2L_1^\prime.
\end{equation}
So that $\alpha(u)=i\dfrac{D(\gamma_{12}^\prime, \gamma_{12}^{\prime\prime})}{\gamma_3^{\prime 3}(1-ic)}$. Moreover $\gamma$ is a null curve and $c=\dfrac{L_3}{\gamma_3^\prime}$, therefore at $u$, when $\gamma^\prime(u)\neq 0$, we get:
\begin{equation}
\alpha(u)=-\frac{L_3D(\gamma_{12}^\prime, \gamma_{12}^{\prime\prime})}{(\gamma_3^{\prime 2}+L_3^2)\gamma_3^{\prime 2}}+i\frac{D(\gamma_{12}^\prime, \gamma_{12}^{\prime\prime})}{(\gamma_3^{\prime 2}+L_3^2)\gamma_3^{\prime }}.
\end{equation}
Similarly, for the case when $L(u)\neq 0$, we get the following:
\begin{equation}
\label{cuspidal_edge}
\alpha(u)=-\frac{D(L_{12}, L_{12}^\prime)}{(\gamma_3^{\prime 2}+L_3^2)L_3}+i\frac{\gamma_3^\prime D(L_{12}, L_{12}^\prime)}{(\gamma_3^{\prime 2}+L_3^2)L_3^2}.
\end{equation}
We know that $u$ is a cuspidal-edge for the maxface if and only if $Re(\alpha)$ and $Im(\alpha)$ at $u$ are both non-zero.
Therefore, at a point $u\in I$ where $\gamma^\prime(u)\neq 0$, if $u$ is a cuspidal-edge for the maxface (as in equation \ref{bjorling_solution}), we must have $\gamma^\prime\neq 0$, $L\neq 0$, and $D(\gamma_{12}^\prime,\gamma_{12}^{\prime\prime})\neq 0$ at $u$, and this implies $D(L_{12},L_{12}^\prime) \neq 0$ at $u$.
On the other hand, at a point $u \in I$ where $L(u)\neq 0$, if $u$ is a cuspidal-edge, we must have $\gamma^\prime\neq 0$, $L\neq 0$, and $D(L_{12},L_{12}^\prime) \neq 0$ at $u$, and this implies $D(\gamma_{12}^\prime,\gamma_{12}^{\prime\prime}) \neq 0$ at $u$.
This proves the following;
\begin{prop}
\label{proposition_cuspidaledge}
Let $\{\gamma,L\}$ be the singular Bj\"{o}rling data. Then the maxface $X_{\gamma,L}$ as in the equation \ref{bjorling_solution} has cuspidal-edge at $u\in I$, if and only if at $u$, $\gamma^\prime\neq 0, L\neq 0$ and $D(\gamma_{12}^\prime,\gamma_{12}^{\prime\prime}) \neq 0$ or $D(L_{12},L_{12}^\prime) \neq 0$.
\end{prop}
This proposition has many applications; in particular, we will use it to prove Theorem \ref{main_theorem}. Moreover, constructing examples having cuspidal-edge singularities becomes straightforward.
\begin{example}
Let $\gamma(u)=(\sin u,- \cos u, u)$ and $L(u)=u(\cos u, \sin u, 1)$ on $I=(0,1)$. Then we have $L(u)=u\gamma^\prime(u)$ and $D(\gamma_{12}^\prime,\gamma_{12}^{\prime\prime})=1$.
It is clear that $\gamma^\prime\neq 0$, $L\neq0$, and $D(\gamma_{12}^\prime,\gamma_{12}^{\prime\prime})\neq 0$ on $(0,1)$, so every point of $(0,1)$ is a cuspidal-edge.
\end{example}
\begin{example}
Let $\gamma(u)=(u-\frac{u^3}{3}, u^2, u+\frac{u^3}{3})$ and $L(u)=u^2(1-u^2,2u, 1+u^2)$ on $I=(0,1)$. Then $L(u)=u^2\gamma^\prime(u)$ and $D(\gamma_{12}^\prime,\gamma_{12}^{\prime\prime})=2(u^2+1)$. It is clear that $\gamma^\prime\neq 0$, $L\neq0$, and $D(\gamma_{12}^\prime,\gamma_{12}^{\prime\prime})\neq 0$ on $(0,1)$. Hence every point of $(0,1)$ is a cuspidal-edge.
\end{example}
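The conditions in Proposition \ref{proposition_cuspidaledge} for these examples can also be verified symbolically; the following sketch (ours) checks, for the first example, that $\gamma$ is null and that $D(\gamma_{12}^\prime,\gamma_{12}^{\prime\prime})=1$:
\begin{verbatim}
import sympy as sp

u = sp.symbols('u', real=True)
gamma = sp.Matrix([sp.sin(u), -sp.cos(u), u])  # curve of the first example
gp, gpp = gamma.diff(u), gamma.diff(u, 2)

is_null = sp.simplify(gp[0]**2 + gp[1]**2 - gp[2]**2)  # 0: gamma is null
D = sp.simplify(gp[0]*gpp[1] - gpp[0]*gp[1])           # 1: never vanishes
print(is_null, D)
\end{verbatim}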
In the following, we find the necessary and sufficient conditions on the singular Bj\"{o}rling data such that the maxface (as in equation \ref{bjorling_solution}) has swallowtails, cuspidal crosscaps, etc., at $u \in I$.
\subsection{For Swallowtails and cuspidal butterflies at $u$.}
If $L\neq 0$ at $u$, then $\gamma^\prime= dL$, where $d$ is a function in a neighborhood of $u$. In this case, from equation \ref{cuspidal_edge}, we have $\alpha=i\dfrac{D(L_{12}, L_{12}^\prime)}{(d-i)L_3^3}$, and therefore $\alpha^\prime=i\frac{D(L_{12}, L_{12}^{\prime\prime})(d-i)L_3^3-D(L_{12}, L_{12}^\prime)(d^\prime L_3^3+3(d-i)L_3^2L_3^\prime)}{(d-i)^2L_3^6}$.
This gives
\begin{equation}
\beta =\frac{\alpha^\prime}{(d-i)L_3\alpha}=\frac{D(L_{12}, L_{12}^{\prime\prime})(d-i)L_3^3-D(L_{12}, L_{12}^\prime)(d^\prime L_3^3+3(d-i)L_3^2L_3^\prime)}{(d-i)^2L_3^4D(L_{12}, L_{12}^\prime)},
\end{equation}
\begin{equation} \label{beta^prime}
\begin{split}
\beta^\prime&=\frac{D(L_{12}, L_{12}^{\prime})(d-i)L_3(D(L_{12}^\prime, L_{12}^{\prime\prime})+D(L_{12}, L_{12}^{\prime\prime\prime}))}{(d-i)^2L_3^2D^2(L_{12}, L_{12}^\prime)}\\
&-\frac{D(L_{12}, L_{12}^{\prime\prime})((d-i)L_3^\prime D(L_{12}, L_{12}^\prime)+(d-i) L_3D(L_{12}, L_{12}^{\prime\prime})+d^\prime L_3D(L_{12}, L_{12}^{\prime}))}{(d-i)^2L_3^2D^2(L_{12}, L_{12}^\prime)}\\
&-\frac{(d-i)^2L_3^2(d^{\prime\prime}L_3+4d^\prime L_3^\prime+3(d-i)L_3^{\prime\prime})}{(d-i)^4L_3^4}\\
&+\frac{(d^\prime L_3+3(d-i)L_3^\prime)(2(d-i)d^\prime L_3^2+2(d-i)^2L_3L_3^\prime)}{(d-i)^4L_3^4}.
\end{split}
\end{equation}
Moreover
\begin{equation} \label{eta}
\eta=\frac{g}{g^\prime}\beta^\prime=\frac{\beta^\prime}{(d-i)L_3\alpha}=-i\frac{\beta^\prime L_3^2}{D(L_{12}, L_{12}^\prime)}.
\end{equation}
We know $X_{\gamma,L}$ has swallowtails at $u\in I$ if and only if at $u$, $Re\,\alpha\neq 0, Im\,\alpha=0$ and $Re\,\beta\neq0$.
The first two conditions, $Re\,\alpha\neq 0$ and $Im\,\alpha=0$, hold at $u$ if and only if at $u$, $D(L_{12}, L_{12}^\prime)\neq 0$, $L\neq 0$, and $\gamma^\prime=0$. Since $d=\dfrac{\gamma^\prime}{L}=0$ at $u$,
$$
\beta =\frac{D(L_{12}, L_{12}^{\prime})d^\prime L_3^3+i(D(L_{12}, L_{12}^{\prime\prime})L_3^3-3D(L_{12},L_{12}^\prime)L_3^2L_3^\prime)}{L_3^4D(L_{12}, L_{12}^\prime)}.
$$
Therefore at $u$, $Re\, \beta \neq 0$ if and only if $d^\prime \neq 0$ at $u$. That is $Re \,\beta \neq0$ if and only if $\gamma^{\prime\prime}\neq 0$ at $u$.
On the other hand, $X_{\gamma,L}$ as in equation \ref{bjorling_solution} has cuspidal butterflies at $u\in I$ if and only if at $u$, $Re\,\alpha\neq0$, $Im\, \alpha=0$, $Re\,\beta=0$, and $Im\,\eta\neq 0$.
The first two conditions hold at $u$ if and only if at $u$, $D(L_{12}, L_{12}^\prime)\neq0$, $L\neq0$, and $\gamma^\prime=0$. Since $d=0$ at $u$, and $Re \,\beta =0$ if and only if $d^\prime=0$, we get from equations \ref{beta^prime} and \ref{eta}
\begin{align*}
\eta=\frac{D(L_{12}, L_{12}^{\prime})(D(L_{12}^\prime, L_{12}^{\prime\prime})+D(L_{12}, L_{12}^{\prime\prime\prime}))}{D^2(L_{12}, L_{12}^{\prime})}&+\frac{D(L_{12}^\prime, L_{12}^{\prime\prime})(L_3^\prime D(L_{12}, L_{12}^{\prime})+L_3 D(L_{12}^\prime, L_{12}^{\prime\prime}))}{D^2(L_{12}, L_{12}^{\prime})}\\
&-i\frac{L_3(d^{\prime\prime}L_3-3iL_3^{\prime\prime})+6iL_3^{\prime 2}}{L_3D(L_{12},L_{12}^\prime)}.
\end{align*}
Therefore, at $u$, $Im\,\eta\neq 0$ if and only if $d^{\prime\prime}\neq 0$; that is, at $u$, $Im\,\eta \neq0$ if and only if $\gamma^{\prime\prime\prime}\neq0$. So we have the following:
\begin{prop}
\label{proposition_swallowtails}
$X_{\gamma,L}$ has swallowtails at $u$ if and only if at $u$, $\gamma^\prime=0, \gamma^{\prime\prime}\neq0, L\neq0$, and $D(L_{12},L_{12}^\prime)\neq0$. On the other hand, $X_{\gamma,L}$ has cuspidal butterflies at $u$ if and only if $\gamma^\prime=0, \gamma^{\prime\prime}=0,\gamma^{\prime\prime\prime}\neq0, L\neq0$, and $D(L_{12},L_{12}^\prime)\neq0$.
\end{prop}
Similar calculation gives the following.
\begin{prop}
\label{propsition_crosscaps}
$X_{\gamma,L}$ has cuspidal cross caps at $u$ if and only if at $u$, $\gamma^\prime\neq0, L=0, L^\prime\neq 0$ and $D(\gamma_{12}^\prime,\gamma_{12}^{\prime\prime})\neq0$. And $X_{\gamma,L}$ has cuspidal $S_1^-$ at $u$ if and only if at $u$, $\gamma^\prime\neq0, L=0, L^\prime=0, L^{\prime\prime}\neq0$ and $D(\gamma_{12}^\prime,\gamma_{12}^{\prime\prime})\neq0$.
\end{prop}
In Table \ref{intro:tab}, we have summarized the conditions of Propositions \ref{proposition_cuspidaledge}, \ref{proposition_swallowtails}, and \ref{propsition_crosscaps}. Using these conditions, it is straightforward to find a maxface with cuspidal-edge, cuspidal crosscap, and cuspidal $S_1^-$ singularities, as in the following:
\begin{example}
Let $\delta$ be a null real analytic curve and $\mu$ a null vector field defined on the interval $I$ such that $\mu=\delta^\prime$ never vanishes on $I$ and $D(\delta_{12}^\prime,\delta_{12}^{\prime\prime})\neq 0$. Let $a,b$, and $c$ be three different real numbers in $I$. Now we set $\gamma(u) = \delta(u)$ and $L(u) = (u-b)(u-c)^2\mu(u)$. Then $\{\gamma, L\}$ is singular Björling data for the maxface $X_{\gamma,L}$, with $L(u) = (u-b)(u-c)^2\gamma^\prime(u)$. We see that
\begin{itemize}
\item at $a$, $L\neq 0, \gamma^\prime\neq 0$ and $D(\gamma_{12}^\prime, \gamma_{12}^{\prime\prime})\neq 0,$
\item at $b$, $L=0, L^\prime\neq0, \gamma^{\prime}\neq 0$ and $D(\gamma_{12}^\prime, \gamma_{12}^{\prime\prime})\neq 0$ and
\item at $c$, $ L=0,L^\prime=0, L^{\prime\prime}\neq0, \gamma^\prime\neq 0$ and $D(\gamma_{12}^\prime, \gamma_{12}^{\prime\prime})\neq 0$.
\end{itemize}
Therefore $a,b$ and $c$ are cuspidal-edge, cuspidal crosscaps and cuspidal $S_1^-$ resp. for the maxface $X_{\gamma,L}$.
Similarly, for three different real numbers $m,n$, and $p$, if we take $\gamma^\prime(u) = (u -n )(u-p)^2\delta^\prime(u)$ and $L(u) = \mu(u)$, where $\delta$ and $\mu$ are the same as above, then $m,n$, and $p$ are a cuspidal-edge, a swallowtail, and cuspidal butterflies, respectively.
\end{example}
\begin{example}
Let $\gamma(u) = (\sin u,-\cos u,u)$ and $L(u) = u(u-1)^2(\cos u,\sin u,1)$ be the singular Bj\"{o}rling data; then $-1$, $0$, and $1$ are a cuspidal-edge, cuspidal crosscaps, and cuspidal $S_1^-$, respectively.
\end{example}
We can construct many such examples. Moreover, as mentioned in the introduction, another direct application of Table \ref{intro:tab} is to find a sequence of maxfaces converging to other types of singularities. We discuss this in the next section.
\section{approximating various singularity by a cuspidal-edge }
We start with an example that explains the essence of the main theorem of this section, Theorem \ref{main_theorem}.
Let $X_{\gamma, L}$ be the maxface with the singular Bj\"{o}rling data given by $$\gamma(t)= (0,0,0), \;\;L(t)= (1-t^2, 2t, 1+t^2).$$ The maxface $X_{\gamma, L}$ has shrinking singularities on $I= (-1,1)$ (in fact on $\mathbb R$). Moreover, it is not a front, but below we give a sequence of maxfaces (fronts) converging to $X_{\gamma, L}$, each having cuspidal edges.
For $n>1$, we define
\begin{eqnarray*}
&& L_n(t)=\left(1- \frac{1}{n}\right)\left(1-t^2, 2t, 1+t^2\right),\\
&&\gamma_n^\prime(t)=\frac{1}{n}L_n(t).
\end{eqnarray*}
\begin{figure}[h]
\centering
\includegraphics[scale=0.4]{approximatingcuspidal.png}
\caption{A sequence of maxfaces with cuspidal edges bending towards the shrinking singularity}
\label{fig:introduction}
\end{figure}
Then, for each $n$, the pair $\{\gamma_n, L_n\}$ is singular Bj\"{o}rling data. Let $X_{\gamma_n, L_n}$ be the corresponding maxface.
Moreover, for each $t\in\mathbb R$, we have $\gamma_n^\prime(t)\neq 0$, $L_{n}(t)\neq 0$, and $D(\gamma_{n12}^\prime, \gamma_{n12}^{\prime\prime})\neq 0$. Therefore every point of $\mathbb R$ is a cuspidal-edge. Figure \ref{fig:introduction} shows the maxfaces $X_{\gamma_n, L_n}$ for $n=3, 5, 15,$ and $50$.
In this example we started with a maxface having a shrinking singularity and gave a sequence of maxfaces with cuspidal edges ``converging'' to it. Below we give a general discussion of this construction. First, we define the norm in which we talk about the convergence.
Let $\Omega\subset \mathbb C$ be a bounded simply connected domain, $\overline{\Omega}$ be its closure. Let $X\in C(\overline{\Omega},\mathbb R^3)$, the space of continuous maps. For each $z \in\overline{\Omega}$, we denote
$$
\|X(z)\|:={\rm max}\{|X_1(z)|,|X_2(z)|,|X_3(z)|\}\,{ \rm and } \, \|X\|_{\Omega}:=\sup_{z\in{\overline \Omega}}\|X(z)\|.
$$
Here $C(\overline{\Omega},\mathbb R^3)$ becomes a Banach space under the norm $\|.\|_{\Omega}$.
In the proposition below, we give such a sequence of maxfaces for a general $\{\gamma,L\}$.
\begin{prop}
\label{main_proposition}
Let $X_{\gamma,L}$ be a maxface such that at $t_0 \in I$ we have $\gamma^\prime\neq 0$ and $D(\gamma_{12}^\prime, \gamma_{12}^{\prime\prime}) \neq 0$. Then there is a sequence of maxfaces $X_n$, defined on a neighborhood $\Omega$ of $t_0$, such that each maxface $X_n$ has a cuspidal-edge at $t_0$ and $X_n\to X_{\gamma, L}$ in the norm $\|.\|_\Omega$.
\end{prop}
\begin{proof}
As $\gamma^\prime(t_0) \neq 0$, there is an interval $I_1$ containing $t_0$ such that for all $t \in I_1$, $\gamma_3^\prime(t) \neq 0$. Without loss of generality we can assume that $\gamma_3^\prime(t)>0$ for all $t\in I_1$. On $I_1$, we define $c(t) = \dfrac{L_3(t)}{\gamma_3^\prime(t)}$.
For each $n$, we define $\delta_n$ and $\mu_n$ such that
\begin{eqnarray*}
&&\delta_n^\prime= \gamma^\prime+\left(\frac{1}{n}, \frac{1}{n}, h_n\right); \\
&&\mu_n=\left(c(t)+\frac{1}{n}\right)\delta_n^\prime.
\end{eqnarray*}
Here $h_n= -\gamma_3^\prime+\sqrt{{\gamma_3^\prime}^2+2\left(\frac{1}{n^2}+\frac{\gamma_1^\prime+\gamma_2^\prime}{n}\right)}$, which makes $\delta_n^\prime$ a null vector field.
There exist $N$ and an interval $I_2\subset I_1$ containing $t_0$ such that for all $t\in I_2$ and $n>N$, ${\gamma_3^\prime}^2+2\left(\frac{1}{n^2}+\frac{\gamma_1^\prime+\gamma_2^\prime}{n}\right)> 0$ and $\frac{1}{n}\neq -c(t_0)$.
For $n>N$, $\{\delta_n, \mu_n\}$ turn out to be a singular Bj\"{o}rling data on $I_2$. These can be extended analytically on some domain $\mathcal{U}$ that contains $I_2$. We take $\Omega$ a bounded simply connected domain containing $t_0$, such that $\overline{\Omega} \subset\mathcal{U}$.
Moreover, we see that there is $N_1>N$ such that for $n>N_1$, at $t_0$ we have $\delta_n^\prime\neq 0$, $\mu_n\neq 0$, and $$D(\delta_{n12}^\prime,\delta_{n12}^{\prime\prime})\neq 0.
$$
Therefore, for $n>N_1$, the maxface $X_{\delta_n, \mu_n}$ for the singular Bj\"{o}rling data $\{\delta_n, \mu_n\}$ has a cuspidal-edge at $t_0$, and hence cuspidal edges in an interval containing $t_0$.
For $z\in \Omega$, we have $X_{\delta_n, \mu_n}(z) -X_{\gamma, L}(z)=Re\int_{u_0}^{z}\left[(\delta_n^\prime(w)-\gamma^\prime(w))\left(1-i\left(c(w)+\frac{1}{n}\right)\right)-\frac{i}{n}\,\gamma^\prime(w)\right] dw.$ It is direct to see that
$\|X_{\delta_n, \mu_n} -X_{\gamma, L}\|_\Omega\to 0$.
\end{proof}
\begin{remark}\label{remark1}
If we have a maxface $X_{\gamma, L}$ with $L\neq0$ and $D(L_{12}, L_{12}^\prime)\neq0$ at $0$, then with a small change we obtain a sequence of functions $g_n$ (similar to $h_n$ in the above proposition), and we can take
$$
\mu_n= L+\left(\frac{1}{n}, \frac{1}{n}, g_n\right) \mbox{ and }
\delta_n^\prime=\left(d(t)+\frac{1}{n}\right)\mu_n.
$$
With a similar argument as above, we find a sequence of maxfaces $X_{\delta_n, \mu_n}$ with singular Bj\"{o}rling data $\{\delta_n, \mu_n\}$ having a cuspidal-edge at $0$ and such that $X_{\delta_n, \mu_n}\to X_{\gamma, L}$ in the norm $\|.\|_{\Omega}.$
\end{remark}
\begin{remark}\label{remark2}
Consider a constant null curve $\gamma$ and a null vector field $L$ such that $D(L_{12}, L_{12}^\prime)\neq 0$ for all $t$. The maxface $X_{\gamma,L}$ has a shrinking singularity. In this case we can choose $\gamma_n$ and $L_n$ similar to the example at the beginning of this section. Moreover, a little variation of the construction works for the folded singularity as well.
\end{remark}
We conclude the article with the following theorem, which is a direct consequence of Proposition \ref{main_proposition} and Remarks \ref{remark1} and \ref{remark2}.
\begin{theorem}
\label{main_theorem}
Let $X_{\gamma,L}$ be the maxface with singular Bj\"{o}rling data $\{\gamma,L\}$ such that at $t_0 \in I$ it has a shrinking or folded singularity, or any of the singular points as in Table \ref{intro:tab}. Then there is a sequence of maxfaces $X_n$, defined on a domain $\Omega$ containing $t_0$, such that each $X_n$ has a cuspidal-edge at $t_0$. Moreover, $X_n\to X_{\gamma, L}$ in the norm $\|.\|_\Omega$.
\end{theorem}
\section{acknowledgement}
The authors are very thankful to the anonymous referees for their valuable comments, which helped a lot to improve the article.
\medskip
\section*{}
The achievement of Bose-Einstein condensation (BEC) in trapped
gases of $^{87}$Rb [1], $^{23}$Na [2] and $^7$Li [3] has provided new
impetus to the study of many-body and quantum statistical
effects in dilute fluids at very low temperature. The
formation of coexisting condensates by sympathetic cooling in
a mixture of Rb atoms in two different internal states has
also been achieved [4]. Trapping of fermionic species has been
reported for $^6$Li [5] and $^{40}$K [6]. Trapped mixtures of bosonic
and fermionic species are expected to become accessible to
experiment in the near future.
The density profiles of the separate fermionic and bosonic
species in such mixtures can in principle be experimentally
resolved. With this perspective we present in this work a
semiclassical three-fluid model extending our earlier studies
of the condensate fraction and internal energy of a trapped
interacting Bose gas [7, 8] to a Bose-condensed mixture of
interacting Bose and Fermi gases confined in a spherically
symmetric trap at finite temperature. We assume that the
mixture is in full thermal equilibrium: the efficiency of a
Bose condensate in cooling slow impurity atoms has been
discussed by Timmermans and C\^ot\'e [9]. The main emphasis
of our calculations is on the evaluation of the density
profile of the fermionic component with varying temperature
and composition for values of the coupling strengths relevant
to a mixture of $^{39}$K and $^{40}$K atoms. We also identify two
parameters which govern the behaviour of the profile. In the
limit of the Thomas-Fermi approximation at zero temperature we
recover the equations used by M\o lmer [10] to describe the
ground state of mixtures of bosons and fermions. Studies of
the thermodynamic properties of ideal Fermi gases in harmonic
traps have been reported by Butts and Rokhsar [11] and by
Schneider and Wallis [12].
Interaction effects are very small in the normal phase but
become significant as condensation induces a density increase
of the bosonic component near the centre of the trap. For
values of the (repulsive) coupling strengths in the range of
present interest, the condensate pushes both the bosonic
noncondensate and the fermionic fluid towards the periphery of
the trap. With regard to the latter fluid, such 'squeezing'
drives an increase in its chemical potential. As we shall see
below, these effects on the fermionic density profile are
especially striking in boson-rich mixtures, but rather low
temperatures relative to the BEC transition need to be
reached. Of course, an increase of the fermion-boson coupling,
as may be achieved by modifying the scattering lengths with
external fields [13, 14] and by changing the parameters of the
trap or the components of the mixture, will enhance the
BEC-induced changes in the fermionic fluid.
In the following we shall assume that the number of bosons in
the trap is large enough that the kinetic energy term in the
Gross-Pitaevskii equation for the wave function $\Psi(\vett
r)$ of the condensate can be neglected (see e.g. [15]). This
corresponds to the so-called Thomas-Fermi limit and is in
general a good approximation except in the immediate
neighbourhood of the BEC transition temperature. It yields
the strong-coupling result
\be
\Psi^2(r)=[\mu_b-V_b^{ext}(r) - 2 g n_{nc}(r) - f n_f(r)]/g
\ee
when the quantity in the square bracket is positive, and $\Psi^2(r) =
0$ otherwise. Here, $g=4 \pi \hbar^2 a_b/m_b$ and $f=2 \pi
\hbar^2 a_f /m_r$ with $a_f$ and $a_b$ the boson-boson and
boson-fermion $s$-wave scattering lengths and $m_r=m_b m_f/(m_b+m_f)$
with $m_b$ and $m_f$ the atomic masses; $\mu_b(T)$ is the chemical
potential of the bosons at temperature $T$, $V_b^{ext}(r)=m_b
\omega_b^2r^2/2$ is a spherically symmetric external potential
confining the bosons, $n_{nc}(r)$ is the average distribution of
non-condensed bosons and $n_f(r)$ is that of the fermions. The factor
2 in the third term on the RHS of eqn (1) arises from exchange [16]
and we have neglected a term involving the off-diagonal density of
non-condensed bosons.
As already proposed in early work on the confined Bose fluid
[17 - 19], we treat both the non-condensed bosons and the
fermions as ideal gases in effective potentials $V_b^{eff}(r)$
and $V_f^{eff}(r)$ involving the relevant interactions. We write
\be
V_b^{eff}(r)=V_b^{ext}(r)+ 2g \Psi^2(r)+2 g n_{nc}(r) + fn_f(r)
\ee
and
\be
V_f^{eff}(r)=V_f^{ext}(r)+f \Psi^2(r) +f n_{nc}(r)\;,
\ee
with $V_f^{ext}(r)=m_f \omega_f^2 r^2/2$. We are taking the fermionic
component as a dilute, spin-polarized Fermi gas: the
fermion-fermion interactions are then associated at leading
order with $p$-wave scattering and are demonstrably negligible
at the temperatures of present interest [11, 20]. We may then
evaluate the thermal averages with standard Bose-Einstein and
Fermi-Dirac distributions, taking the non-condensed particles
to be in thermal equilibrium with the condensate at the same
chemical potential $\mu_b(T)$ and the fermions at chemical
potential $\mu_f(T)$. In the semiclassical approximation we obtain
\be
n_f(r)=\frac{1}{h^3} \int d^3p \left \{ \exp \left[
\left(\frac{p^2}{2m_f}+
V_f^{eff}(r)-\mu_f\right)/k_BT\right]+1 \right\}^{-1}
\ee
and
\bea
n_{nc}(r)=\frac{1}{h^3}\int
d^3p\left\{\exp\left[\left(\frac{p^2}{2m_b}+V_b^{eff}(r)-\mu_b
\right)/k_BT\right]-1 \right\}^{-1}\nonumber \\=\left(\frac{2
\pi m_b k_BT}{h^2}\right)^{3/2}\sum_{j\geq 1}
\frac{\exp[-j(V_b^{eff}(r)-\mu_b)/k_BT]}{j^{3/2}}\;.
\eea
The chemical potentials are determined from the total numbers of
bosons and fermions,
\be
N_b=\int d^3r\,[\Psi^2(r)+n_{nc}(r)]
\ee
and
\be
N_f=\int d^3 r\,n_f(r)\;.
\ee
These equations complete the self-consistent closure of the model.
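To make the closure concrete, the following Python sketch (ours, not the authors' code) evaluates the semiclassical densities of eqns (4) and (5) and solves the fermion normalization (7) by root bracketing; the momentum cutoff, the series length and the root bracket are pragmatic choices of ours. The boson chemical potential would be obtained analogously from eqn (6).
\begin{verbatim}
import numpy as np
from scipy.special import expit
from scipy.integrate import quad
from scipy.optimize import brentq

h, kB = 6.62607e-34, 1.38065e-23   # SI units

def n_fermi(V, mu, T, m):
    """Semiclassical Fermi-Dirac density of eqn (4) by direct p-integration."""
    occ = lambda p: expit(-(p**2 / (2*m) + V - mu) / (kB*T))  # FD occupation
    pmax = np.sqrt(2*m*(max(mu - V, 0.0) + 30*kB*T))          # occupation ~ 0 beyond
    return quad(lambda p: 4*np.pi*p**2 * occ(p), 0.0, pmax)[0] / h**3

def n_bose_nc(V, mu, T, m, jmax=50):
    """Non-condensate density of eqn (5) via the Bose series (needs mu <= V)."""
    j = np.arange(1, jmax + 1)
    return (2*np.pi*m*kB*T/h**2)**1.5 * np.sum(np.exp(-j*(V - mu)/(kB*T)) / j**1.5)

def mu_fermi(N_f, r, V_eff, T, m):
    """Solve the normalization (7) for the fermion chemical potential."""
    def excess(mu):
        dens = np.array([n_fermi(V, mu, T, m) for V in V_eff])
        return np.trapz(4*np.pi*r**2 * dens, r) - N_f
    return brentq(excess, -100*kB*T, V_eff.max() + 100*kB*T)
\end{verbatim}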
Before presenting relevant illustrative examples of the full
numerical solution of the set of equations (1) - (7), it is
useful to discuss a simplified form of the present three-fluid
model (see also [7]). This is obtained by introducing an
approximation which preserves only the repulsions exerted by
the condensate: namely, we set to zero the last two terms in
the RHS of eqns (1) and (2) and the last term in the RHS of
eqn (3). Evidently, this approximation rests on the fact that
both the non-condensed and the fermion component are very
dilute gases and becomes all the more accurate as one moves
towards boson-rich mixtures well below the BEC transition
temperature. In this approximation we can (i) introduce a
temperature-dependent scale length
$R=(2\mu_b/m_b\omega_b^2)^{1/2}$ defining the radius outside
which the condensate density vanishes, and (ii) scale the
effective potential acting on the fermions by writing it in
the form $V_f^{eff}(r)=\hbar \omega_f \tilde V_f^{eff}(x)$
where $x=r/R$ and
\be
\tilde V_f^{eff}(x)=
\left\{\begin{array}{cc}
\frac{1}{2}\gamma \left[\lambda +(1-\lambda)x^2\right] & {\rm
for } \hspace{0.3cm}x<1 \\
\frac{1}{2}\gamma x^2 &{\rm for } \hspace{0.3cm} x>1 \\
\end{array}\right.
\ee
with
\be
\lambda=\frac{fm_b
\omega_b^2}{g m_f \omega_f^2}
\ee
and
\be
\gamma=\frac{2\mu_bm_f\omega_f}{\hbar
m_b \omega_b^2}=\left[15N_c a_b \left(\frac{m_b\omega_b}
{\hbar}\right)^{1/2}\right]^{2/5}\frac{m_f\omega_f}{m_b\omega_b}\;.
\ee
Evidently, the dimensionless constant $\lambda$ controls the shape of
the effective potential seen by the fermions and hence their density
profile: it depends only on (i) the ratio of the two relevant
scattering lengths and (ii) the ratio of the spring constants of the
two traps. Instead the parameter $\gamma$, which depends on the
number $N_c(T)$ of bosons in the condensate through their chemical
potential, controls the depth of the effective potential (in units of
$\hbar \omega_f$). It is easily seen from eqn (8) that for $\lambda >
1$, $\tilde V_f^{eff}(x)$ decreases from the value $\gamma\lambda/2$ at
the centre of the trap towards its minimum value $\gamma/2$ at $r=R$,
and increases thereafter because of the confinement. In this regime
the fermions are squeezed away from the centre of the trap into a
shell overlying both the outer part of the condensate and the
non-condensate. On the other hand, for $\lambda<1$ the minimum value
of $\tilde V_f^{eff}(x)$ is $\gamma \lambda/2$ at the centre of the
trap: namely, in this regime the condensate merely raises the bottom
of the confining well for the fermions without changing the sign of
its central curvature.
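The two regimes are easy to visualize. The short sketch below transcribes eqn (8) and checks the location of the minimum of the scaled potential; the values of $\gamma$ and $\lambda$ are illustrative rather than computed from eqns (9) and (10).
\begin{verbatim}
import numpy as np

def V_tilde(x, gamma, lam):
    """Scaled fermion effective potential of eqn (8), in units of hbar*omega_f."""
    x = np.asarray(x, dtype=float)
    inner = 0.5 * gamma * (lam + (1.0 - lam) * x**2)  # inside the condensate (x < 1)
    outer = 0.5 * gamma * x**2                        # outside the condensate (x > 1)
    return np.where(x < 1.0, inner, outer)

x = np.linspace(0.0, 2.0, 201)
for lam in (0.5, 2.0):                    # the two coupling regimes
    v = V_tilde(x, gamma=10.0, lam=lam)
    print(lam, x[np.argmin(v)])           # minimum at x=0 for lam<1, at x=1 for lam>1
\end{verbatim}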
The calculations that we report below refer to the fermion density
profiles in a trap with $\omega_f=\omega_b= 100 \, {\rm s}^{-1}$. We
shall first consider (left panels in Figures 1 and 2) the case
$a_b\simeq 80$ and $a_f\simeq 46$ Bohr radii for the boson-boson and
boson-fermion $s$-wave scattering lengths, corresponding to the
$^{39}$K-$^{40}$K mixture [21]. With the above values we find
$f\simeq 0.57 g$ and hence $\lambda <1$. We have also examined the
regime $\lambda>1$ by assuming $f=2g$ (right panels in the
Figures). We have studied the dependence of the profiles on the
composition of the mixture by considering two cases, namely (i)
$N_f=10^3$ and $N_b=10^6$ (top panels in the Figures) and (ii)
$N_f=N_b=10^4$ (bottom panels). The two characteristic temperatures of
the mixture shown in the Figures are the BEC transition temperature
$k_BT_c=\hbar\omega_b (N_b/\zeta(3))^{1/3}$ for a harmonically
confined ideal Bose gas and the Fermi temperature $T_f$. The latter
has been calculated from the chemical potential of the fermions in
the simplified model at zero temperature. The increase of $T_f$ due
to the boson-fermion interactions, relative to the confined ideal-gas
value $k_BT_f=\hbar \omega_f (6N_f)^{1/3}$, is quite large at
boson-rich compositions (about 40$\%$ in Figure 1.a and 86$\%$ in Figure 1.c).
Figure
1 compares our results for the density profile of the fermions
at two temperatures below $T_c$ with those for a confined
Fermi gas in the absence of all interactions. In this range
of parameters the results obtained in the simplified model
leading to eqn (8) are practically indistinguishable from
those obtained from the full numerical solution of eqns (1) -
(7). The interactions induce major distortions of the fermion
density profile near the centre of the trap in the
$^{39}$K-$^{40}$K mixture at low temperatures and at
boson-rich compositions (see the curve in Figure 1.a
referring to $T/T_c = 0.1$ and $N_f=10^{-3}N_b$). The
transition from the first to the second regime of coupling
strength $\lambda$ is clearly shown by the comparison between
the left and right-hand panels in Figure 1. For boson-rich
compositions at low temperature, the fermions are in fact
almost wholly expelled from the central region of the trap
(see Figure 1.c). This agrees with the results of the
ground-state calculations of M{\o }lmer [10]. The density
profiles of the bosons, that we do not show, are instead
hardly affected by the presence of the fermions in the range
of parameters that we have explored.
It is also interesting to point out that for $T=0.1T_c$ the
conditions of Figures 1.a and 1.c correspond to temperatures
of the same order as the Fermi temperature $T_f$. In this
case the effects of the interactions are more important than
those due to the quantum degeneracy of the Fermi gas. On the
contrary, the conditions of Figures 1.b and 1.d at $T=0.1T_c$
correspond to temperatures much smaller than $T_f$ and the
effects of quantum degeneracy are much more important, as is
illustrated by the comparison with the density profile for an
ideal Boltzmann gas at the same temperature (dots in Figures
1.a and 1.b).
Figure 2 shows the behaviour of the density $n_f(r=0)$ of
fermions at the centre of the trap as a function of
temperature, for the same coupling strengths and compositions
as in Figure 1. In Figure 2.a we have reported in an inset
the behaviour of the temperature derivative of $n_f(r=0)$, to
display its upturn occurring as quantum degeneracy develops at
sufficiently low values of $T/T_f$. Major effects should be
expected in boson-rich mixtures if the coupling strength can
be driven into the $\lambda>1$ regime, as is seen from Figure 2.c.
In summary, we have studied the equilibrium density profile of
a fermionic fluid in a confined, Bose-condensed mixture of
bosons and fermions as a function of composition and
temperature. The fermion density at the centre of the trap
can be used to detect the onset of degeneracy in the Fermi
gas with decreasing temperature. Although we have assumed
spherically symmetric traps, the model can easily be extended
to asymmetric confinements through a simple change of variables.
As a final remark we notice that the density profile that we
have calculated for the fermionic component immediately
yields a semiclassical density of states $\rho_f(E)$ for the
calculation of thermodynamic properties through the relation
\be
\rho_f(E)=2\pi\left(\frac{2m_f}{h^2}\right)^{3/2}\int d^3r\,
[E-V_f^{eff}(r)]^{1/2}\;.
\ee
A similar relation holds, of course, for the density of states of the
non-condensate [17-19, 7]. In the simplified model leading to
eqn (8), that we have seen to be quite accurate compared with
a fully self-consistent treatment of the mixture in the range
of parameters that we have explored, the integral in eqn (11)
is easily evaluated analytically. Calculations of
thermodynamic properties for boson-fermion mixtures will be
reported elsewhere.
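As an illustration of eqn (11), a direct numerical quadrature on a radial grid can be written as follows; this sketch is ours, and the clipping makes explicit that only the classically allowed region $E>V_f^{eff}(r)$ contributes. For the bare harmonic trap it reproduces the familiar result $\rho_f(E)=E^2/2(\hbar\omega_f)^3$.
\begin{verbatim}
import numpy as np

def rho_f(E, r, V_eff, m_f, h=6.62607e-34):
    """Semiclassical fermion density of states of eqn (11) on a radial grid."""
    allowed = np.clip(E - V_eff, 0.0, None)     # classically allowed region only
    integrand = 4.0 * np.pi * r**2 * np.sqrt(allowed)
    return 2.0 * np.pi * (2.0 * m_f / h**2)**1.5 * np.trapz(integrand, r)
\end{verbatim}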
\subsection*{Acknowledgments}
We are very grateful to J. P. Burke and C. Greene for providing us
with values of the scattering lengths of K isotopes prior to
publication. Useful discussions with M. L. Chiofalo, S. Giorgini and
G. M. Tino are also acknowledged. This work is supported by the
Istituto Nazionale di Fisica della Materia through the Advanced
Research Project on BEC.
\section{\label{sec:introduction1}Introduction}
The logistics domain is increasingly moving towards self-organization, meaning that freight transport is planned without direct human intervention. The Physical Internet is often considered as the ultimate form of self-organizing logistics, having smart modular containers equipped with sensors and intelligence able to interact with their surroundings and to route themselves \cite{vanheeswijk2019b}. Due to the standardized shapes of the containers, they can easily be combined into full truckloads and be decomposed with equal ease. The concept also suggests that the system should be able to function without a high degree of central governance, rather converging to an organically functioning system by itself. Moreover, it is more efficient than traditional logistics systems, being able to dynamically respond to disruptions and opportunities in the logistics system utilizing intelligent decision-making policies. It is this notion of autonomy and self-organizing systems that inspired the present study.
We model smart containers as independent job agents that -- on behalf of their shippers -- are able to place a bid on the transport service that they wish to use. In a dynamic setting, the bid price should depend on the state of the system. Rather than having the fixed contract prices that prevail in contemporary transport markets, dynamic bidding mimics financial spot markets that constantly balance demand and supply. For instance, if a warehouse holds relatively few containers waiting for transport, a low bid may suffice to get accepted for the transport service, whereas higher bids might be required during busy times. Additionally, there is also an anticipatory element involved in the bidding decision. Assuming each container has a given due date, the bidding strategy should also take into account the probability of future bids getting accepted.
The optimal bidding strategy may be influenced by many factors. We want to have a policy that provides us with the bid that minimizes expected transport costs given the current state of the system. Such an optimal policy is very difficult to derive analytically, but may be approximated by means of reinforcement learning. However, as each job terminates upon delivery of the smart container, lifespans of individual jobs are very limited. As the quality of a learned bidding policy to a large extent depends upon the number of observations made, it is therefore difficult to learn policies individually. Semi-cooperative learning \cite{boukhtouta2011} might alleviate this problem; even though each endeavors to minimize its own costs rather than striving towards a common goal, smart containers can share observations to jointly learn better bidding policies that benefit the individual agents. On the other hand, if competing containers are aware of the exact bidding strategies of other containers, they may easily be countered. A fully deterministic policy might therefore not be realistic; we explore whether a stochastic policy yields sensible bidding decisions. Another question that we seek to answer is whether sharing additional information (other than bid prices and acceptance) helps in improving bidding policies. System information (e.g., total container volume in warehouse) may allow for better bids, but the competing containers also utilize this information for the same purpose.
The contribution of this paper is as follows. First, we explore a setting in which smart containers may place bids on transport capacity; to the best of our knowledge this topic has not been studied before from an operations research perspective. In particular, we aim to provide insights into the drivers that determine the bid price and the effects of information sharing on policy quality. Second, we present a policy gradient reinforcement learning algorithm to learn stochastic bidding policies, aiming to mimic a reality in which competing smart containers may deviate from jointly learned policies. Due to the explorative nature of the paper, we present a simplified problem setting involving a single transport service with fixed capacity that operates on the real line. The focus is on the basic mechanisms that govern bidding dynamics absent regulation and centralized control.
\section{Literature}
This literature overview is structured as follows. First, we discuss the role of smart containers in the Physical Internet. Second, we highlight several studies on reinforcement learning in the Delivery Dispatching Problem, which links to our problem from a carrier's perspective. Third, we discuss studies that address the topic of bidding in freight transport.
The inspiration for this paper stems from the Physical Internet paradigm. We refer to the seminal works of Montreuil \cite{montreuil2011,montreuil2013} for a conceptual outline thoroughly addressing the foundations of the Physical Internet. It envisions an open market at which logistics services are offered, stating that (potentially automated) interactions between smart containers and other constituents of the Physical Internet determine routing decisions. Sallez \textit{et al.} \cite{sallez2016} stress the active role that smart containers have, being able to communicate, memorize, negotiate, and learn both individually and collectively. Ambra \textit{et al.} \cite{ambra2019} present a recent literature review of work performed in the domain of the Physical Internet. Interestingly, their overview does not mention any paper that defines the smart container itself as the targeted actor. Instead, existing works seem to focus on traditional actors such as carriers, shippers and logistics service providers, even though smart containers supposedly route themselves in the Physical Internet.
The problem studied in this paper is related to the Delivery Dispatching Problem \cite{minkoff1993}, which entails dispatching decisions from a carrier's perspective. In this problem setting, transport jobs arrive at a hub according to some external stochastic process. The carrier subsequently decides which subset of jobs to take, also considering future jobs that arrive according to the stochastic process. The most basic instances may be solved with queuing models, but more complicated variants quickly become computationally intractable, such that researchers often resort to reinforcement learning to learn high-quality policies. We highlight some recent works in this domain. Klapp \textit{et al.} \cite{klapp2018} develop an algorithm that solves the dispatching problem for a transport service operating on the real line. Van Heeswijk \& La Poutr{\'e} \cite{vanheeswijk2018b} compare centralized and decentralized transport for networks with fixed line transport services, concluding that decentralized planning yields considerable computational benefits. Van Heeswijk \textit{et al.} \cite{vanheeswijk2015,vanheeswijk2019} study a variant that includes a routing component, using value function approximation to find policies. Voccia \textit{et al.} \cite{voccia2019} solve a variant that includes both pickups and deliveries.
We highlight some works on optimal bidding in freight transport; most of these studies seem to adopt a viewpoint in which competing carriers bid on transport jobs. For instance, Yan \textit{et al.} \cite{yan2018} propose a particle swarm optimization algorithm used by carriers to place bids on jobs. Miller \& Nie \cite{miller2019} present a solution that emphasizes the importance of integration between carrier competition, routing and bidding. Wang \textit{et al.} \cite{wang2019} design a reinforcement learning algorithm based on knowledge gradients to solve for a bidding structure with a broker intermediating between carriers and shippers. The broker aims to propose a price that satisfies both carrier and shipper, taking a percentage of accepted bids as its reward. In a Physical Internet context, Qiao \textit{et al.} \cite{qiao2019} model hubs as spot freight markets where carriers can place bids on transport jobs. To this end, they propose a dynamic pricing model based on an auction mechanism.
\section{Problem definition}
This section formally defines our problem in the form of a Markov Decision Process (MDP) model. The model is designed from the perspective of a modular container -- denoted as a job $\boldsymbol{j}$ -- that aims to minimize its expected shipping costs over a finite discretized time horizon $\mathcal{T}_{\boldsymbol{j}}$, with each decision epoch $t \in \mathcal{T}_{\boldsymbol{j}}$ representing a day on which a bid for a capacitated transport service (the carrier) may be placed. In addition to this job-dependent time horizon, we define a system horizon $\{0,\ldots,T\}$ with corresponding decision epochs denoted by $t^\prime$. Thus, we use $t$ when referring to the individual job level and $t^\prime$ for the system level.
The cost function and job selection decision of the transport service are defined as well, yet the transport service agent has no learning capacity. As past bids and transport movements do not affect current decisions, the Markovian property is satisfied for this problem. Figure~\ref{fig:problem_illustration} illustrates the bidding problem.
\begin{figure}
\includegraphics[width=\textwidth]{problem_illustration}
\caption{Visual representation of the bidding problem. Modular smart containers (jobs) simultaneously place bids on a transport service with finite capacity; the bids are accepted or rejected based on their marginal contributions.} \label{fig:problem_illustration}
\end{figure}
We now define the jobs, with each job representing a modular container that needs to be transported. A job is represented by the following attribute vector:
\begin{equation}
\boldsymbol{j} =
\begin{pmatrix}
j_\tau & = & \text{time till due date} \\
j_d & = & \text{distance to destination} \\
j_v & = & \text{container volume} \notag
\end{pmatrix}
\end{equation}
The attribute $j_\tau \in [0,\tau^{max}] \cup \mathbb{Z}$ indicates how many decision epochs remain until the latest possible shipment date. When a new job enters the system, we set $\mathcal{T}_{\boldsymbol{j}}=\{j_\tau,j_\tau-1,\ldots,0\}$; note that this horizon may differ among jobs and decreases over time; the attribute $j_\tau$ is decremented with each time step. When $j_\tau=0$ and the job has not been shipped, it is considered to be a failed job, incurs a penalty, and is removed from the system. The attribute $j_d \in (0,d^{max}]\cup \mathbb{R}$ indicates the position of the destination on the real line; the further away the higher the transport costs. The job volume $j_v \in [1,\zeta^{max}] \cup \mathbb{Z}$ indicates how much transport capacity the job requires. Let $\boldsymbol{J}_{t^\prime}$ be the problem state, defined as a set containing the jobs present in the system at time ${t^\prime}$. Furthermore, let $\mathcal{J}_{t^\prime}$ be the set of feasible states at time ${t^\prime}$.
At each decision epoch ${t^\prime}$, a transport service with fixed capacity $C$ departs along the real line. For the transport service to decide which jobs to take, the selection procedure is modeled as a 0-1 knapsack problem that is solved using dynamic programming \cite{kellerer2004}. The value of each job is its bid price minus its transport costs. Jobs with negative values are always rejected. Note that when the transport capacity exceeds the cumulative job volume, the transport service will accept all positive bids. The decision vector for the carrier is denoted as $\boldsymbol{\gamma}=[\gamma_{\boldsymbol{j}}]_{\forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}}$, with $\gamma_{\boldsymbol{j}} \in \{0,1\}$. The set $\Gamma(\boldsymbol{J}_{t^\prime})$ denotes the set of all feasible selections. The transport service's cost function for shipping a single job $\boldsymbol{j}$ is a function of distance and volume: $c_{\boldsymbol{j}} = c^{mile} \cdot j_v \cdot j_d$. It maximizes its direct rewards by selecting $\boldsymbol{\gamma}_{t^\prime}$ as follows:
\begin{align}\label{eq:selectioncarrier}
\argmax_{\boldsymbol{\gamma}\in \Gamma(\boldsymbol{J}_{t^\prime})} \left(\sum_{\boldsymbol{j} \in \boldsymbol{J}_{t^\prime}} {\gamma}_{\boldsymbol{j}}(x_{\boldsymbol{j}} - c_{\boldsymbol{j}})\right)\enspace,
\end{align}
s.t.
\begin{align}
\sum_{\boldsymbol{j} \in \boldsymbol{J}_{t^\prime}} {\gamma}_{\boldsymbol{j}} \cdot j_v < C\enspace. \notag
\end{align}
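To illustrate the selection mechanism, the following Python sketch (a minimal version of ours, not the published implementation) solves the 0-1 knapsack by dynamic programming over integer volumes; passing \texttt{capacity-1} reproduces the strict inequality of the constraint above.
\begin{verbatim}
def select_jobs(bids, costs, volumes, capacity):
    """Carrier selection: 0-1 knapsack over the values bid - transport cost.

    Jobs with non-positive value are rejected upfront, as in the text."""
    items = [(i, volumes[i], bids[i] - costs[i])
             for i in range(len(bids)) if bids[i] - costs[i] > 0]
    best = [0.0] * (capacity + 1)            # best[c]: max value within capacity c
    keep = [set() for _ in range(capacity + 1)]
    for i, vol, val in items:
        for c in range(capacity, vol - 1, -1):   # iterate downwards: 0-1 knapsack
            if best[c - vol] + val > best[c]:
                best[c] = best[c - vol] + val
                keep[c] = keep[c - vol] | {i}
    return keep[capacity]                    # indices of the accepted jobs
\end{verbatim}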
From the perspective of jobs, actions (i.e., bids) are defined on the level of individual containers. All bids $x_{\boldsymbol{j}}\in \mathcal{X}_{\boldsymbol{j}} \equiv \mathbb{R}$ are placed in parallel and independent of one another, yielding a vector $\boldsymbol{x} = [x_{\boldsymbol{j}}]_{\forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}}$. Unshipped jobs with $j_\tau>0$ incur holding costs and unshipped jobs with $j_\tau=0$ incur a failed job penalty, both are proportional to the job volume. At any given decision epoch, the direct reward function for individual jobs is defined as follows:
\begin{align}
r_{\boldsymbol{j}}(\gamma_{\boldsymbol{j}},x_{\boldsymbol{j}})=
\begin{cases}
- x_{\boldsymbol{j}} & \text{if} \qquad \gamma_{\boldsymbol{j}}=1 \\
- c^{hold} \cdot j_v & \text{if} \qquad j_\tau>0 \land \gamma_{\boldsymbol{j}}=0 \\
- c^{pen} \cdot j_v & \text{if} \qquad j_\tau=0 \land \gamma_{\boldsymbol{j}}=0 \notag
\end{cases}
\end{align}
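In code, the direct reward is a simple three-way case distinction; the sketch below is ours, with default cost parameters anticipating the instance settings of Table~\ref{table:instancesettings}.
\begin{verbatim}
def reward(shipped, bid, tau, volume, c_hold=1.0, c_pen=10.0):
    """Direct reward of one job at one decision epoch (cases above)."""
    if shipped:
        return -bid                 # accepted: pay the bid price
    if tau > 0:
        return -c_hold * volume     # still waiting: holding costs
    return -c_pen * volume          # due date reached unshipped: penalty
\end{verbatim}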
To obtain $x_{\boldsymbol{j}}$ at the current decision epoch (which may be denoted by $x_{\boldsymbol{j}} \equiv x_{t^\prime,\boldsymbol{j}}$ when explicitly including the decision epoch), we try to solve $\argmax_{x_{t^\prime,\boldsymbol{j}} \in \mathcal{X}_{\boldsymbol{j}}} \mathbb{E}\bigl(\sum_{t^{\prime\prime}=t^\prime}^{t^\prime+|\mathcal{T}_j|} r_{\boldsymbol{j}}(\gamma_{t^{\prime\prime},\boldsymbol{j}},x_{t^{\prime\prime},\boldsymbol{j}})\bigr)$, i.e., the goal is to maximize the expected reward (minimize expected costs) over the horizon of the job. Note that -- as a container cannot observe the bids of other jobs, nor the cost function of the transport service, nor future states -- we can only make decisions based on \textit{expected} rewards depending on the stochastic environment. The solution method presented in Section~\ref{sec:solutionmethod} further addresses this problem.
Finally, we define the transition function for the system state that occurs in the time step from decision epoch $t^\prime$ to $t^\prime+1$. During this step two changes occur; we (i) decrease the due dates of jobs that are not shipped or otherwise removed from the system and (ii) add newly arrived jobs to the state. The set of new jobs arriving for $t^\prime+1$ is defined by $\boldsymbol{\tilde{J}}_{t^\prime+1} \in \mathcal{\tilde{J}}_{t^\prime+1}$. The transition function $S^M:(\boldsymbol{J}_{t^\prime},\boldsymbol{\tilde{J}}_{t^\prime+1},\boldsymbol{\gamma}) \mapsto \boldsymbol{J}_{t^\prime+1}$ is defined by the following sequential procedure:
\setcounter{algo}{0}
\begin{algo}Transition function $S^M(\boldsymbol{J}_{t^\prime},\boldsymbol{\tilde{J}}_{t^\prime+1},\boldsymbol{\gamma}_{t^\prime})$
\end{algo}
\footnotesize
\begin{tabular}{ l l l}
\toprule
0: & Input: $\boldsymbol{J}_{t^\prime},\boldsymbol{\tilde{J}}_{t^\prime+1},\boldsymbol{\gamma}_{t^\prime}$& $\blacktriangleright$ Current state, job arrivals, shipping selection\\
1: & $\boldsymbol{J}_{t^\prime+1} \mapsfrom \emptyset$ & $\blacktriangleright$ Initialize next state\\
2: & $\boldsymbol{J}_{t^\prime}^x \mapsfrom \boldsymbol{J}_{t^\prime}$& $\blacktriangleright$ Copy state (post-decision state)\\
3: & $\forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}$ & $\blacktriangleright$ Loop over all jobs\\
4: & \qquad $\boldsymbol{J}_{t^\prime}^x \mapsfrom \boldsymbol{J}_{t^\prime}^x \setminus \boldsymbol{j} \mid \gamma_{{t^\prime},\boldsymbol{j}}=1$ & $\blacktriangleright$ Remove shipped job\\
5: & \qquad $\boldsymbol{J}_{t^\prime}^x \mapsfrom \boldsymbol{J}_{t^\prime}^x \setminus \boldsymbol{j} \mid j_{\tau}=0 \land \gamma_{{t^\prime},\boldsymbol{j}}=0$ &
$\blacktriangleright$ Remove unshipped job with due date 0 \\
6:& \qquad $j_{\tau} \mapsfrom j_{\tau} - 1 \mid j_{\tau}>0$ & $\blacktriangleright$ Decrement time till due date\\
7: & $\boldsymbol{J}_{{t^\prime}+1} \mapsfrom \boldsymbol{J}_{t^\prime}^x \cup \boldsymbol{\tilde{J}}_{{t^\prime}+1}$ & $\blacktriangleright$ Merge existing and new job sets \\
8: & Output: $\boldsymbol{J}_{{t^\prime}+1}$ & $\blacktriangleright$ New state\\
\bottomrule
\end{tabular}
\normalsize
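A direct Python transcription of this procedure might look as follows; representing jobs as dictionaries is our choice for illustration.
\begin{verbatim}
def transition(jobs, new_jobs, gamma):
    """Transition function S^M of Algorithm 1.

    jobs: list of dicts with keys 'tau', 'd', 'v'; gamma: 0/1 decision per job."""
    carried_over = []
    for k, j in enumerate(jobs):
        if gamma[k] == 1:        # shipped: leaves the system
            continue
        if j['tau'] == 0:        # failed: removed after the penalty
            continue
        carried_over.append({**j, 'tau': j['tau'] - 1})   # decrement due date
    return carried_over + list(new_jobs)                  # merge with new arrivals
\end{verbatim}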
\vspace{4mm}
\section{Solution method}\label{sec:solutionmethod}
To learn the bidding strategy of the containers, we draw from the widely used policy gradient framework. For a detailed description and theoretical background, we refer to the REINFORCE algorithm by Williams \cite{williams1992}. As noted earlier, the policy gradient algorithm returns a stochastic bidding policy, reflecting the deviations in bid prices adopted by individual containers. As bids can take any real value, we must adopt a policy suitable for continuous action spaces. In this paper we opt for a Gaussian policy, drawing bids from a normal distribution. The mean and standard deviation of this distribution are learned using reinforcement learning.
In policy-based reinforcement learning, we perform an action directly on the state and observe the corresponding rewards. Each simulated episode $n \in \{0,1,\ldots,N\}$ yields a batch of selected actions and related rewards according to the stochastic policy $\pi_{\boldsymbol{\theta}}^n(x_{\boldsymbol{j}} \mid {\boldsymbol{j}},\boldsymbol{J}_{t^\prime})=\mathbb{P}^{\boldsymbol{\theta}}(x_{\boldsymbol{j}} \mid \boldsymbol{j},\boldsymbol{J}_{t^\prime})$. Under our Gaussian policy, bids are drawn independently from other containers, i.e., $x_{\boldsymbol{j}} \sim \mathcal{N}(\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}), \sigma_{\boldsymbol{\theta}}), \forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}$. The randomness in action selection allows the policy to keep exploring the action space and to escape from local optima. From the observed actions and rewards during each episode $n$, we deduce which actions result in above-average rewards and compute gradients ensuring that the policy is updated in that direction, yielding updated policies $\pi_{\boldsymbol{\theta}}^{n+1}$ until we reach $\pi_{\boldsymbol{\theta}}^{N}$. For consistent policy updates, we only use observations for jobs that are either shipped or removed, for which we need some additional notation. Let $\boldsymbol{K}^n=[K_0^n,\ldots,K_{\tau^{max}}^n]$ be a vector containing the number of bid observations for such completed jobs. For example, if a job had an original due date of 4 and is shipped at $j_{\tau}=2$, we would increment $K_{4}^n$, $K_{3}^n$ and $K_{2}^n$ by 1 (using an update function $k(\boldsymbol{j})$). Finally, we store all completed jobs (i.e., either shipped or failed) in a set $\hat{\boldsymbol{J}}^n$. For each episode, the cumulative rewards per job are defined as follows:
\begin{align}
\hat{v}_{t,\boldsymbol{j}}^n(\gamma_{\boldsymbol{j}},x_{\boldsymbol{j}})=
\begin{cases}
r_{\boldsymbol{j}}(\gamma_{\boldsymbol{j}},x_{\boldsymbol{j}}) & \text{if} \qquad t=0 \\
r_{\boldsymbol{j}}(\gamma_{\boldsymbol{j}},x_{\boldsymbol{j}})+\hat{v}_{t-1,\boldsymbol{j}}^n & \text{if} \qquad t>0 \notag
\end{cases}
\quad, \forall t \in \mathcal{T}_{\boldsymbol{j}}\enspace.
\end{align}
Let $\hat{\boldsymbol{v}}_{t^\prime}^n=\bigl[[\hat{v}_{t,\boldsymbol{j}}^n]_{t \in \mathcal{T}_j}\bigr]_{\forall \boldsymbol{j} \in \boldsymbol{J}_t}$ be the vector containing all observed cumulative rewards at time $t^\prime$ in episode $n$. At the end of each episode, we can then define the information vector
\begin{align}
\boldsymbol{I}^n =\biggl[[\boldsymbol{J}_{t^\prime}^n, \boldsymbol{x}_{t^\prime}^n, \hat{\boldsymbol{v}}_{t^\prime}^n,
\boldsymbol{\gamma}_{t^\prime}^n]_{\forall t^\prime \in \{0,\ldots,T\}},
\boldsymbol{K}^n, \hat{\boldsymbol{J}}^n\biggr] \notag
\end{align}
\noindent and corresponding updating function $i(\cdot)$; the information vector contains the states, actions and rewards necessary for the policy updates (i.e., a history similar to the well-known SARSA trajectory). The decision-making policy is updated according to the policy gradient theorem \cite{sutton2018}, which may be formalized as follows:
\begin{align}
\nabla_{\boldsymbol{\theta}} v_{j_\tau,\boldsymbol{j}}^{\pi_{\boldsymbol{\theta}}}
\propto \sum_{{t^\prime}=1}^{T} \left(\int_{\boldsymbol{J}_{t^\prime} \in \mathcal{J}_{t^\prime}} \mathbb{P}^{\pi_{\boldsymbol{\theta}}}(\boldsymbol{J}_{t^\prime} \mid \boldsymbol{J}_{{t^\prime}-1}) \int_{x_{\boldsymbol{j}} \in \mathcal{X}_{\boldsymbol{j}}} \nabla_{\boldsymbol{\theta}}{\pi_{\boldsymbol{\theta}}}({\boldsymbol{x}_{\boldsymbol{j}}} \mid \boldsymbol{j}, \boldsymbol{J}_{t^\prime})v_{j_\tau,\boldsymbol{j}}^{\pi_{\boldsymbol{\theta}}}\right) \enspace. \notag
\end{align}
Essentially, the theorem states that the gradient of the objective function is proportional to the value functions $v_{j_\tau,\boldsymbol{j}}^{\pi_{\boldsymbol{\theta}}}$ multiplied by the policy gradients for all actions in each state, given the probability measure $\mathbb{P}^{\pi_{\boldsymbol{\theta}}}$ implied by the prevailing decision-making policy $\pi_{\boldsymbol{\theta}}$.
We proceed to formalize the policy gradient theorem for our problem setting. Let $\boldsymbol{\theta}$ be a vector of weight parameters that defines the decision-making policy $\pi_{\boldsymbol{\theta}}: (\boldsymbol{j},\boldsymbol{J}_{t^\prime}) \mapsto \mathcal{X}_{\boldsymbol{j}}$. Furthermore, let $\boldsymbol{\phi}(\boldsymbol{j},\boldsymbol{J}_{t^\prime})$ be a feature vector that distills the most salient attributes from the problem state, e.g., the number of jobs waiting to be transported or the average time till due date. We will further discuss the features in Section~\ref{sec:numericalexperiments}. For the Gaussian case, we formalize the policy as follows:
\begin{align}
\pi_{\boldsymbol{\theta}}=\frac{1}{\sqrt{2\pi}\sigma_{\boldsymbol{\theta}}}e^{-\frac{\left(x_{\boldsymbol{j}}-\mu_{\boldsymbol{\theta}}\left(\boldsymbol{j},\boldsymbol{J}_{t^\prime}\right)\right)^2}{2\sigma_{\boldsymbol{\theta}}^2}}\enspace, \notag
\end{align}
with $x_{\boldsymbol{j}}$ being the bid price, $\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}) = \boldsymbol{\phi}(\boldsymbol{j},\boldsymbol{J}_{t^\prime})^\top {\boldsymbol{\theta}}$ the Gaussian mean and $\sigma_{\boldsymbol{\theta}}$ the parametrized standard deviation. The action $x_{\boldsymbol{j}}$ may be obtained from the inverse normal distribution. The corresponding gradients are defined by
\begin{align}
\nabla_{\mu_{\boldsymbol{\theta}}}(\boldsymbol{j},\boldsymbol{I}^n) = \nabla_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{I}^n) = \\ \nabla_{\boldsymbol{\theta}} \log \pi_{\boldsymbol{\theta}} (
\boldsymbol{j},\boldsymbol{I}^n) =\\ \frac{(x_{\boldsymbol{j}}-\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}))\phi(\boldsymbol{j},\boldsymbol{J}_{t^\prime})}{\sigma_{\boldsymbol{\theta}}^2} \enspace, \notag
\end{align}
\begin{align}
\nabla_{\sigma_{\boldsymbol{\theta}}} (\boldsymbol{j},\boldsymbol{I}^n) = \frac{(x_{\boldsymbol{j}}-\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}))^2 - \sigma_{\boldsymbol{\theta}}^2}{\sigma_{\boldsymbol{\theta}}^3}\enspace. \notag
\end{align}
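In Python, sampling from this policy and evaluating the two score-function gradients amounts to a few lines; the sketch below is a direct transcription of the expressions above.
\begin{verbatim}
import numpy as np

def sample_bid(phi, theta, sigma, rng):
    """Draw a bid from the Gaussian policy with linear mean mu = phi . theta."""
    return rng.normal(phi @ theta, sigma)

def grad_log_pi(bid, phi, theta, sigma):
    """Score-function gradients of log pi w.r.t. theta and sigma."""
    mu = phi @ theta
    d_theta = (bid - mu) * phi / sigma**2             # gradient for the mean weights
    d_sigma = ((bid - mu)**2 - sigma**2) / sigma**3   # gradient for the std. deviation
    return d_theta, d_sigma
\end{verbatim}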
The gradients are used to update the policy parameters. As the observations may exhibit large variance, we add a non-biased baseline value (i.e., not directly depending on the policy), namely the average observed value during the episode \cite{sutton2018}. For the prevailing episode $n$, we define the baseline as
\begin{align}
\bar{v}_t^n = \frac{1}{K_{t}^n} \sum_{\boldsymbol{j} \in \boldsymbol{\hat{J}}^n} \hat{v}_{t,\boldsymbol{j}}^n, \forall t \in \{0,\ldots, \tau^{max}\} \enspace. \notag
\end{align}
For the Gaussian policy, the weight update rule for $\mu_{\boldsymbol{\theta}}$ (updating $\boldsymbol{\theta}^{n-1}$ to $\boldsymbol{\theta}^n$) is:
\begin{align}\label{eq:update_mu}
\Delta_{\mu_{\boldsymbol{\theta}}}(\boldsymbol{j},\boldsymbol{I}^n) = \Delta_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{I}^n) = \\
\alpha_\mu \frac{1}{K_{j_\tau}^n}(\hat{v}_{j_\tau} - \bar{v}_{j_\tau}) \nabla_{\boldsymbol{\theta}} \log \pi_{\boldsymbol{\theta}} (\boldsymbol{j},\boldsymbol{I}^n) =\\
\alpha_\mu \frac{1}{K_{j_\tau}^n}(\hat{v}_{j_\tau} - \bar{v}_{j_\tau}) \left[\frac{(x_{\boldsymbol{j}}-\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}))\phi(\boldsymbol{j},\boldsymbol{J}_{t^\prime})}{\sigma_{\boldsymbol{\theta}}^2}\right]\enspace.
\end{align}
The standard deviation $\sigma_{\boldsymbol{\theta}}$ is updated as follows:
\begin{align}\label{eq:update_stdev}
\Delta_{\sigma_{\boldsymbol{\theta}}}(\boldsymbol{j},\boldsymbol{I}^n) = \alpha_\sigma \frac{1}{K_{j_\tau}^n} (\hat{v}_{j_\tau} - \bar{v}_{j_\tau}) \nabla_{\boldsymbol{\theta}} \log \pi_{\boldsymbol{\theta}} (
\boldsymbol{j},\boldsymbol{I}^n)= \\
\alpha_\sigma \frac{1}{K_{j_\tau}^n}(\hat{v}_{j_\tau} - \bar{v}_{j_\tau}) \left[\frac{(x_{\boldsymbol{j}}-\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}))^2 - \sigma_{\boldsymbol{\theta}}^2}{\sigma_{\boldsymbol{\theta}}^3}
\right]\enspace.
\end{align}
Intuitively, this means that after each episode we update the feature weights -- which in turn provide the state-dependent mean bidding price -- and the standard deviation of the bids. The mean bidding price -- taking into account both individual job properties and the state of the system -- represents the bid that is expected to minimize overall costs. If effective bids are very close to the mean, the standard deviation will decrease and the bidding policy will converge to an almost deterministic policy. However, if there is an expected benefit in varying bids, the standard deviation may grow larger. The algorithmic outline to update the parametrized policy is as follows:
\setcounter{algo}{1}
\begin{algo}\label{policygradientoutline}Outline of the policy gradient bidding algorithm (based on \cite{williams1992})
\end{algo}
\begin{tabular}{ l l l}
\toprule
0: & Input: $\pi_{\boldsymbol{\theta}}^0$ & $\blacktriangleright$ Differentiable parametrized policy\\
1: & $\alpha_\mu \mapsfrom (0,1), \alpha_\sigma \mapsfrom (0,1)$ & $\blacktriangleright$ Set step sizes\\
2: & $\sigma^0 \mapsfrom \mathbb{R}^+$ & $\blacktriangleright$ Initialize standard deviation\\
3: & $\boldsymbol{\theta} \mapsfrom \mathbb{R}^{|\boldsymbol{\theta}|}$ & $\blacktriangleright$ Initialize policy parameters\\
4: & $\forall \boldsymbol{n} \in \{0,\ldots,N\}$ & $\blacktriangleright$ Loop over episodes\\
5: & \qquad $\boldsymbol{\hat{J}}^n \mapsfrom \emptyset$ & $\blacktriangleright$ Initialize completed job set\\
6: & \qquad $\boldsymbol{I}^n \mapsfrom \emptyset$ & $\blacktriangleright$ Initialize information set\\
7: & \qquad $\boldsymbol{J}_0 \mapsfrom \mathcal{J}_0$ & $\blacktriangleright$ Generate initial state\\
8: & \qquad $\forall t^\prime \in \{0,\ldots,T\}$ & $\blacktriangleright$ Loop over finite time horizon\\
9: & \qquad\qquad $x_{\boldsymbol{j}}^n \mapsfrom \pi_{\boldsymbol{\theta}}^n(\boldsymbol{j},\boldsymbol{J}_{t^\prime}), \forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}$ & $\blacktriangleright$ Bid placement jobs\\
10: & \qquad\qquad $\boldsymbol{\gamma}^n \mapsfrom \argmax_{\boldsymbol{\gamma}^n \in \Gamma(\boldsymbol{J}_{t^\prime})} $ & $\blacktriangleright$ Job selection carrier, Eq. \eqref{eq:selectioncarrier}\\
& \qquad\qquad $\left(\sum_{\boldsymbol{j} \in \boldsymbol{J}_{t^\prime}} {\gamma}_{\boldsymbol{j}}^n(x_{\boldsymbol{j}} - c_{\boldsymbol{j}})\right)$ & \\
11: & \qquad\qquad $\hat{v}_{j_\tau,\boldsymbol{j}}^n \mapsfrom r_{\boldsymbol{j}}(\gamma_{\boldsymbol{j}}^n,x_{\boldsymbol{j}}^n), \forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}$ & $\blacktriangleright$ Compute cumulative rewards\\
12: & \qquad\qquad $\forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime} \mid j_{\tau}=0 \lor {\gamma}_{\boldsymbol{j}}^n=1 $& $\blacktriangleright$ Loop over completed jobs \\
13: & \qquad\qquad\qquad $\boldsymbol{\hat{J}}^n \mapsfrom \boldsymbol{\hat{J}}^n \cup \{\boldsymbol{j}\}$ & $\blacktriangleright$ Update set of completed jobs\\
14: & \qquad\qquad\qquad $\boldsymbol{K}^n \mapsfrom k(\boldsymbol{j})$ & $\blacktriangleright$ Update number of completed jobs\\
15: & \qquad\qquad $\boldsymbol{I}^n \mapsfrom i\biggl(\boldsymbol{J}_{t^\prime}^n, \boldsymbol{x}_{t^\prime}^n, \hat{\boldsymbol{v}}_{t^\prime}^n,
\boldsymbol{\gamma}_{t^\prime}^n, \boldsymbol{K}^n, \hat{\boldsymbol{J}}^n, \boldsymbol{I}^n\biggr)$ & $\blacktriangleright$ Store information\\
16: & \qquad\qquad $\boldsymbol{\tilde{J}}_{t^\prime}\mapsfrom \mathcal{\tilde{J}}_{t^\prime}$ & $\blacktriangleright$ Generate job arrivals\\
17: & \qquad\qquad $\boldsymbol{J}_{t^\prime+1} \mapsfrom S^M(\boldsymbol{J}_{t^\prime},\boldsymbol{\tilde{J}}_{t^\prime+1},\boldsymbol{\gamma}_{t^\prime}^n) $ & $\blacktriangleright$ Transition function, Algorithm 1\\
18: & \qquad $\forall t \in \{0,\ldots,\tau^{max}\}$ & $\blacktriangleright$ Loop till maximum due date\\
19: & \qquad\qquad $\forall \boldsymbol{j} \in \boldsymbol{\hat{J}}^n$ & $\blacktriangleright$ Loop over completed jobs\\
20: & \qquad\qquad\qquad $\mu_{\boldsymbol{\theta}}^{n+1} \mapsfrom \mu_{\boldsymbol{\theta}}^n+\Delta_{\mu_{\boldsymbol{\theta}}}(\boldsymbol{j},\boldsymbol{I}^n) $ & $\blacktriangleright$ Update Gaussian mean, Eq. \eqref{eq:update_mu} \\
21:& \qquad\qquad\qquad $\sigma_{\boldsymbol{\theta}}^{n+1} \mapsfrom \sigma_{\boldsymbol{\theta}}^n+\Delta_{\sigma_{\boldsymbol{\theta}}}(\boldsymbol{j},\boldsymbol{I}^n) $ & $\blacktriangleright$ Update standard deviation, Eq. \eqref{eq:update_stdev}\\
22: & Output: $\pi_{\boldsymbol{\theta}}^N$ & $\blacktriangleright$ Return tuned policy\\
\bottomrule
\end{tabular}
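A condensed Python rendering of this loop is given below. It is a sketch rather than the published implementation: the callback \texttt{simulate\_episode} is hypothetical and is assumed to run one horizon under the current policy and return, per completed bid, the features, bid, cumulative reward, baseline and batch size; the lower bound on $\sigma$ is a safeguard of ours.
\begin{verbatim}
import numpy as np

def train(n_episodes, n_features, simulate_episode,
          alpha_mu=0.1, alpha_sigma=0.01, sigma0=10.0, seed=0):
    """Condensed REINFORCE loop mirroring Algorithm 2."""
    rng = np.random.default_rng(seed)
    theta = np.zeros(n_features)          # bid prices start at zero
    sigma = sigma0
    for _ in range(n_episodes):
        for phi, bid, v_hat, v_bar, K in simulate_episode(theta, sigma, rng):
            mu = phi @ theta
            adv = (v_hat - v_bar) / K     # baseline-corrected, batch-averaged
            theta = theta + alpha_mu * adv * (bid - mu) * phi / sigma**2
            sigma = sigma + alpha_sigma * adv * ((bid - mu)**2 - sigma**2) / sigma**3
        sigma = max(sigma, 1e-3)          # guard against a degenerate policy
    return theta, sigma
\end{verbatim}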
\section{Numerical experiments}\label{sec:numericalexperiments}
This section describes the numerical experiments and the results. Section~\ref{ssec:exploration} explores the parameter space and aids in tuning the hyperparameters. The algorithm is written in Python 3.7 and available online.\footnote{https://github.com/woutervanheeswijk/policygradientsmartcontainers}
\subsection{Exploration of parameter space}\label{ssec:exploration}
The purpose of this section is twofold: we explore the effects of various parameter settings on the performance of the algorithm and select a suitable set of parameters for the remainder of the experiments. We make use of the instance settings summarized in Table~\ref{table:instancesettings}. Note that the penalty for failed jobs is the main driver in determining bid prices; together with holding costs, it intuitively represents the maximum price the smart container is willing to bid to be transported.
\begin{table}
\centering
\caption{Instance settings}
\label{table:instancesettings}
\begin{tabular}{ l l }
\toprule
Max. \# job arrivals & [0-10] \\
Due date & [1-5] \\
Job transport distance & [10-100] \\
Job volume & [1-10]\\
Holding cost (per volume unit) & 1\\
Penalty failed job (per volume unit) & 10\\
Transport costs per mile (per volume unit) & 0.1 \\
Transport capacity & 80 \\
\bottomrule
\end{tabular}
\end{table}
To parametrize the policy we use several features. First, we use a scalar that serves as the bias. Second, we use the individual job properties of the job placing the bid, i.e., the time till due date, the job's transport distance and the container volume. Third, in case the job shares its own properties with the system, it also uses the generic system features: the total number of jobs, average distance, total volume, and average due date. Recall that these system features only include the data of other smart containers that share their information. All weight parameters in $\boldsymbol{\theta}$ are initialized at 0, yielding initial bid prices of 0.
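The feature construction can be sketched as follows; the exact aggregation and scaling are our reading of the description above (all entries mapped into $[0,1]$, with the total volume averaged purely to keep it in that range).
\begin{verbatim}
import numpy as np

def features(job, state, tau_max=5.0, d_max=100.0, v_max=10.0):
    """Feature vector phi(j, J): bias, own attributes, aggregate system info."""
    phi = [1.0,                      # bias term (scalar)
           job['tau'] / tau_max,     # time till due date
           job['d'] / d_max,         # transport distance
           job['v'] / v_max]         # container volume
    sharing = [s for s in state if s['shares']]
    if job['shares'] and sharing:    # system features only for sharing containers
        n = len(sharing)
        phi += [n / len(state),                                  # number of jobs
                sum(s['v'] for s in sharing) / (n * v_max),      # (scaled) volume
                sum(s['tau'] for s in sharing) / (n * tau_max),  # average due date
                sum(s['d'] for s in sharing) / (n * d_max)]      # average distance
    return np.array(phi)
\end{verbatim}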
We perform a sequential search to set the simulation parameters. First, we tune the learning rates $\alpha_\mu$ (learning rate for mean) and $\alpha_\sigma$ (learning rate for standard deviation), starting with a standard normal distribution. We test learning rates $\{0.0001, 0.001, 0.01, 0.1\}$ for both parameters and find that $\alpha_\mu=0.1$ and $\alpha_\sigma=0.01$ are stable (i.e., no exploding gradients) and converge reasonably fast. Taking smaller learning rates yields no evident advantages in terms of stability or eventual policy quality.
Figure~\ref{fig:convergence_learning_rates} shows two examples of parameter convergence under various learning rates.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{mu_sigma_small}
\caption{$\alpha_\mu=0.01$ and $\alpha_\sigma=0.001$}
\label{fig:small_learning_rates}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{mu_sigma_large}
\caption{$\alpha_\mu=0.1$ and $\alpha_\sigma=0.01$}
\label{fig:large_learning_rates}
\end{subfigure}
\caption{Convergence of $\mu_{\boldsymbol{\theta}}$ and $\sigma_{\boldsymbol{\theta}}$ (normalized) for various learning rates. Higher learning rates achieve both faster convergence and lower average bid prices.}
\label{fig:convergence_learning_rates}
\end{figure}
Next, we tune the initial bias weight $\theta_0^0$ (using values $\{-50,-40,\ldots,40,50\}$) and the initial standard deviation $\sigma^0$ (using values $\{0.01, 0.1, 1, 10,25\}$). Anticipating non-zero bids, we test several initializations with nonzero bias weights. Large standard deviations allow for much exploration early on, but may also result in unstable policies or slow convergence. From the experiments, we observe that the bias weight converges to a small or negative weight and that there is no benefit in different initializations. For the standard deviation, we find that $\sigma^0 = 10$ yields the best results; the exploration triggered by setting large initial standard deviations helps avoid local optima early on. In terms of performance, the average transport costs are 7.3\% lower than under the standard normal initialization. Standard deviations ultimately converge to similarly small values regardless of the initialization.
Finally, we determine the number of episodes $N$ and the length of each horizon $T$. Longer horizons lead to larger and therefore more reliable batches of completed jobs per episode, but naturally require more computational effort. Thus, we compare settings for which the total number of time steps $N \cdot T$ is equivalent. Each alternative simulates 1,000,000 time steps, using $T=\{10,25,50,100,250,500,1000\}$ with corresponding values $N$. To test convergence, after each 10\% of completed training episodes we perform 10 validation episodes -- always with $T=1000$ for fair comparisons -- to evaluate policy qualities. We find that having large batches provides notable advantages. Furthermore, in all cases 400,000 time steps appear sufficient to converge to stable policies. To illustrate the findings, Figure~\ref{fig:offlineperformance} shows the average transport costs measured after each 100,000 time steps (using the then-prevailing policy); Figure~\ref{fig:largebatch} shows the quality of the eventual policies for each time horizon.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{offline_performance}
\caption{Comparison offline quality.}
\label{fig:offlineperformance}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{barchart_performance_batchsize}
\caption{Comparison final policy quality.}
\label{fig:largebatch}
\end{subfigure}
\caption{Policy performance for various time horizons. The horizon $T=100$ yields the best overall performance; overly small batches diminish performance.}
\label{fig:figure_episodes}
\end{figure}
The final parameters to be used for the remainder of the numerical experiments are summarized as follows: $N=4,000$, $T=100$, $\sigma^0=10$, $\alpha_\mu=0.1$ and $\alpha_\sigma=0.01$.
\subsection{Analysis of policy performance}
Having determined the parameter settings, we proceed to analyze the performance of the jointly learned policies. This section addresses the effects of information sharing, the relevance of the used features in determining the bid, and the behavior of the bidding policy and its impact on carrier profits. All results in this section correspond to the performance of policies after training is completed. To obtain additional insights we sometimes define alternative instances; for conciseness we do not explicitly define the settings of these alternatives.
We first evaluate the effects of information sharing. According to preset ratios, we randomly assign whether or not a container shares its information with the system. Only containers that share information are able to see aggregate system information. Clearly, the more containers participate, the more accurate the system information will be. We test sharing rates ranging from 0\% to 100\% with increments of 10\%; Table~\ref{table:featureweights} shows the results. We observe that performance under full information sharing and no information sharing is almost equivalent, with partial information sharing always performing worse. The latter observation may be explained by the distorted view presented by only measuring part of the system state.
\begin{table}
\scriptsize
\centering
\caption{Feature weights for various information sharing rates.}
\label{table:featureweights}
\begin{tabular}{ l | c| c| c| c| c| c |c| c| c |c| c }
\hline
Feature &0\% &10\%&20\%&30\%&40\%&50\%&60\%&70\%&80\%&90\%&100\%\\
\hline
Scalar & -10.10 & -10.28&-8.03&-11.83&-10.98&-4.81&-7.17&-10.46&-8.71&-9.46&-12.12\\
Total job volume & -- &0.06&0.06&-0.12&0.44&0.03&0.29&0.42&1.18&0.64&0.63\\
Average due date & -- &-2.37&-2.73&-3.28&-2.57&-2.73&-2.69&-2.49&-1.79&-2.02&-1.56\\
Average distance & -- &1.83&3.07&2.59&2.36&3.18&3.58&3.16&3.44&2.26&2.92\\
\# jobs & -- &-0.03&-0.31&-0.69&-0.21&-0.86&-0.77&-1.39&-0.02&-1.25&-1.70\\
Job volume & 59.20 & 57.14&56.83&58.46&58.70&55.27&56.13&56.55&56.69&57.97&60.10\\
Job due date & -22.45 & -23.25&-24.02&-21.16&-21.95&-23.89&-25.96&-22.89&-22.26&-22.82&-21.27\\
Job distance & 49.49 &48.37&45.88&49.38&48.81&45.98&45.27&46.85&48.00&49.15&49.54\\
\hline
\textbf{Average reward} & \textbf{-46.87} &\textbf{-51.78}&\textbf{-55.96}&\textbf{-49.12}&\textbf{-49.54}&\textbf{-57.75}&\textbf{-56.75}&\textbf{-55.71}&\textbf{-53.12}&\textbf{-49.43}&\textbf{-46.32}\\
\hline
\end{tabular}
\end{table}
In the policy parametrization all features are scaled to the $[0,1]$ domain, such that the magnitudes of the weights are mutually comparable. We see that the generic features have relatively little impact on the overall bid price. This underlines the limited difference observed between full information sharing and no information sharing. Job volume and job distance are by far the most significant drivers of a job's bid prices. This is in line with expectations, as the costs incurred by the carrier depend on these two factors, therefore requiring higher compensation. Furthermore, holding- and penalty costs are proportional to the job volume. The relationship with the time till due date is negative; if more time remains, it might be prudent to place lower bids without an imminent risk of penalties. On average, each job places 1.36 bids and 99.14\% of jobs are ultimately shipped; the capacity of the transport service is rarely restrictive. Figure~\ref{fig:bidding_behavior} illustrates bidding behavior with respect to transport distance and time till due date, respectively.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{bid_cloud}
\caption{Bids relative to transport distance.}
\label{fig:bid_cloud}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{bid_track}
\caption{Sample paths of bids over time.}
\label{fig:bids_over_time}
\end{subfigure}
\caption{Visualizations of bidding policies with respect to transport distance and due date. Bids tend to increase when (a) the transport distance is larger and (b) the job is closer to its due date.}
\label{fig:bidding_behavior}
\end{figure}
Next, we discuss some behavior of the bidding policy and its effect on carrier profits. As our carrier is a passive agent we omit overly quantitative assessments here. We simulate various toy-sized instances, adopting simplified deterministic settings with a single container type (time till due date is zero, identical volume and distance). If transport capacity is guaranteed to suffice, the learned bid prices converge to almost zero. If two jobs always compete for one transport service and the rejected job incurs a penalty, the bid will be slightly below the penalty cost. Several other experiments with scarce capacity also show bid prices slightly below the expected penalty costs that would otherwise be incurred. For our base scenario, the profit margin for the carrier is 20.2\%. This positive margin indicates that the features do not encapsulate all information required to learn the minimum bidding price. For comparison, we run another scenario in which the carrier's transport costs -- which are unknown to the smart containers -- are the sole feature; in that case all jobs trivially learn the minimum bidding price. This result implies that the carrier may have a financial incentive not to divulge too much information to the smart containers.
For the carrier, the bidding policies deployed by the smart containers greatly influence its profit. Scarcity of transport capacity drives up bid prices, yet also increases the likelihood of missed revenue. To gain more insight into this trade-off, we simulate various levels of transport capacity, from scarce to abundant. These experiments indeed confirm that a (non-trivial) capacity level exists that maximizes profit. In addition, a carrier need not accept all jobs whose bids exceed their marginal transport costs, as we presumed in this paper. Having a carrier represented by an active agent stretches beyond the scope of this paper.
We summarize the main findings, reiterating that the setting of this paper is a highly stylized environment. The key insights are as follows:
\begin{itemize}
\item Utilizing global system information only marginally reduces job's transport costs compared to sharing no system information;
\item Jointly learned policies converge to stable bidding policies with small standard deviations;
\item Jobs with more time remaining till their due date are prone to place lower bids;
\item Carriers have an incentive not to disclose true cost information when transport capacity is abundant.
\end{itemize}
\section{Conclusions}\label{sec:conclusions}
Traditional transport markets rely on (long-term) contracts between shippers and carriers in which price agreements are made. In contrast, self-organizing systems are expected to evolve into some form of spot market where demand and supply are dynamically matched based on the current state of the system. This paper explores the concept of smart containers placing bids on restricted transport capacity. We design a policy gradient algorithm to mimic joint learning of a bidding policy, sharing observations between autonomous smart modular containers. The stochastic policy reflects deviations made by individual containers, given that deterministic policies are easy to counter in a competitive environment. This stochastic approach appears effective. Standard deviations converge to small values, implying stable bidding policies. The performance of the policy is consistent with the effects of job volume, transport distance and time till due date, which are used to parametrize the bidding policy.
Numerical experiments show that sharing system information only marginally reduces bidding costs; individual job properties are the main driver in setting bids. The limited difference in policy quality with and without sharing system information is an interesting observation. This result implies that smart containers would only need to (anonymously) submit their key properties, submitted bid prices and incurred costs. There is no apparent need to share information on a system-wide level, which would greatly ease the system design.
The profitability of the transport service -- which is modeled as a passive agent in this paper -- strongly depends on the bidding policy of the smart containers. Experiments with varying transport capacities show that in turn, the carrier can also influence bidding policies by optimizing the transport capacity that is offered. Without central governance, imbalances between smart containers and transport services may degrade performance. Based on the findings presented in this paper, one can imagine that the dynamic interplay between carriers and smart containers is a very interesting one that begs closer attention.
We re-iterate that this study is of an explorative nature; there are many avenues for follow-up research. In terms of algorithmic improvements, actor-critic methods (learning functions for expected downstream values rather than merely observing them) would be a logical continuation. Furthermore, the linear expression to compute the bidding price could be replaced by neural networks that capture potential non-linear structures. In addition, the basic problem presented in this paper lends itself to various extensions. So far the carrier has been assumed to be a passive agent, offering fixed transport capacity and services regardless of (anticipated) income. In reality the carrier will also make intelligent decisions based on the bidding policies of smart containers. A brokerage structure might also emerge in the Physical Internet context. Finally, we considered only a single transport service operating on the real line. Using the same algorithmic setup, this setting could be extended to more realistic networks with multiple carriers, routes and destination nodes.
\bibliographystyle{apacite}
\section{\label{sec:introduction1}Introduction}
The logistics domain is increasingly moving towards self-organization, meaning that freight transport is planned without direct human intervention. The Physical Internet is often considered as the ultimate form of self-organizing logistics, having smart modular containers equipped with sensors and intelligence able to interact with their surroundings and to route themselves \cite{vanheeswijk2019b}. Due to the standardized shapes of the containers, they can easily be combined into full truckloads and be decomposed with equal ease. The concept also suggests that the system should be able to function without a high degree of central governance, rather converging to an organically functioning system by itself. Moreover, it is more efficient than traditional logistics systems, being able to dynamically respond to disruptions and opportunities in the logistics system utilizing intelligent decision-making policies. It is this notion of autonomy and self-organizing systems that inspired the present study.
We model smart containers as independent job agents that -- on behalf of their shippers -- are able to place a bid on the transport service that they wish to use. In a dynamic setting, the bid price should depends on the state of the system. Rather than having the fixed contract prices that preside in contemporary transport markets, dynamic bidding mimics financial spot markets that constantly balance demand and supply. For instance, if a warehouse holds relatively few containers waiting for transport, a low bid may suffice to get accepted for the transport service, whereas higher bids might be required during busy times. Additionally, there is also an anticipatory element involved in the bidding decision. Assuming each container has a given due date, the bidding strategy should also take into account the probability of future bids getting accepted.
The optimal bidding strategy may be influenced by many factors. We want to have a policy that provides us with the bid that minimizes expected transport costs given the current state of the system. Such an optimal policy is very difficult to derive analytically, but may be approximated by means of reinforcement learning. However, as each job terminates upon delivery of the smart container, lifespans of individual jobs are very limited. As the quality of a learned bidding policy to a large extent depends upon the number of observations made, it is therefore difficult to learn policies individually. Semi-cooperative learning \cite{boukhtouta2011} might alleviate this problem; even though each of them endeavors to minimize its own costs rather than striving towards a common goal, smart containers can share observations to jointly learn better bidding policies that benefit the individual agents. On the other hand, if competing containers are aware of the exact bidding strategies of other containers, they may easily be countered. A fully deterministic policy might therefore not be realistic; instead, we explore whether a stochastic policy yields sensible bidding decisions. Another question that we seek to answer is whether sharing additional information (other than bid prices and acceptance) helps in improving bidding policies. System information (e.g., total container volume in the warehouse) may allow for better bids, but the competing containers also utilize this information for the same purpose.
The contribution of this paper is as follows. First, we explore a setting in which smart containers may place bids on transport capacity; to the best of our knowledge this topic has not been studied before from an operations research perspective. In particular, we aim to provide insights into the drivers that determine the bid price and the effects of information sharing on policy quality. Second, we present a policy gradient reinforcement learning algorithm to learn stochastic bidding policies, aiming to mimic a reality in which competing smart containers may deviate from jointly learned policies. Due to the explorative nature of the paper, we present a simplified problem setting involving a single transport service with fixed capacity that operates on the real line. The focus is on the basic mechanisms that govern bidding dynamics absent regulation and centralized control.
\section{Literature}
This literature overview is structured as follows. First, we discuss the role of smart containers in the Physical Internet. Second, we highlight several studies on reinforcement learning in the Delivery Dispatching Problem, which links to our problem from a carrier's perspective. Third, we discuss studies that address the topic of bidding in freight transport.
The inspiration for this paper stems from the Physical Internet paradigm. We refer to the seminal works of Montreuil \cite{montreuil2011,montreuil2013} for a conceptual outline that thoroughly addresses the foundations of the Physical Internet. The paradigm envisions an open market at which logistics services are offered, stating that (potentially automated) interactions between smart containers and other constituents of the Physical Internet determine routing decisions. Sallez \textit{et al.} \cite{sallez2016} stress the active role that smart containers have, being able to communicate, memorize, negotiate, and learn both individually and collectively. Ambra \textit{et al.} \cite{ambra2019} present a recent literature review of work performed in the domain of the Physical Internet. Interestingly, their overview does not mention any paper that defines the smart container itself as the targeted actor. Instead, existing works seem to focus on traditional actors such as carriers, shippers and logistics service providers, even though smart containers supposedly route themselves in the Physical Internet.
The problem studied in this paper is related to the Delivery Dispatching Problem \cite{minkoff1993}, which entails dispatching decisions from a carrier's perspective. In this problem setting, transport jobs arrive at a hub according to some external stochastic process. The carrier subsequently decides which subset of jobs to take, also considering future jobs that arrive according to the stochastic process. The most basic instances may be solved with queuing models, but more complicated variants quickly become computationally intractable, such that researchers often resort to reinforcement learning to learn high-quality policies. We highlight some recent works in this domain. Klapp \textit{et al.} \cite{klapp2018} develop an algorithm that solves the dispatching problem for a transport service operating on the real line. Van Heeswijk \& La Poutr{\'e} \cite{vanheeswijk2018b} compare centralized and decentralized transport for networks with fixed line transport services, concluding that decentralized planning yields considerable computational benefits. Van Heeswijk \textit{et al.} \cite{vanheeswijk2015,vanheeswijk2019} study a variant that includes a routing component, using value function approximation to find policies. Voccia \textit{et al.} \cite{voccia2019} solve a variant that includes both pickups and deliveries.
We highlight some works on optimal bidding in freight transport; most of these studies seem to adopt a viewpoint in which competing carriers bid on transport jobs. For instance, Yan \textit{et al.} \cite{yan2018} propose a particle swarm optimization algorithm used by carriers to place bids on jobs. Miller \& Nie \cite{miller2019} present a solution that emphasizes the importance of integration between carrier competition, routing and bidding. Wang \textit{et al.} \cite{wang2019} design a reinforcement learning algorithm based on knowledge gradients to solve for a bidding structure with a broker intermediating between carriers and shippers. The broker aims to propose a price that satisfies both carrier and shipper, taking a percentage of accepted bids as its reward. In a Physical Internet context, Qiao \textit{et al.} \cite{qiao2019} model hubs as spot freight markets where carriers can place bids on transport bids. To this end, they propose a dynamic pricing model based on an auction mechanism.
\section{Problem definition}
This section formally defines our problem in the form of a Markov Decision Process (MDP) model. The model is designed from the perspective of a modular container -- denoted as a job $\boldsymbol{j}$ -- that aims to minimize its expected shipping costs over a finite discretized time horizon $\mathcal{T}_{\boldsymbol{j}}$, with each decision epoch $t \in \mathcal{T}_{\boldsymbol{j}}$ representing a day on which a bid for a capacitated transport service (the carrier) may be placed. In addition to this job-dependent time horizon, we define a system horizon $\{0,\ldots,T\}$ with corresponding decision epochs denoted by $t^\prime$. Thus, we use $t$ when referring to the individual job level and $t^\prime$ for the system level.
The cost function and job selection decision of the transport service are defined as well, yet the transport service agent has no learning capacity. As past bids and transport movements do not affect current decisions, the Markovian property is satisfied for this problem. Figure~\ref{fig:problem_illustration} illustrates the bidding problem.
\begin{figure}
\includegraphics[width=\textwidth]{problem_illustration}
\caption{Visual representation of the bidding problem. Modular smart containers (jobs) simultaneously place bids on a transport service with finite capacity; the bids are accepted or rejected based on their marginal contributions.} \label{fig:problem_illustration}
\end{figure}
We now define the jobs, with each job representing a modular container that needs to be transported. A job is represented by the following attribute vector:
\begin{equation}
\boldsymbol{j} =
\begin{pmatrix}
j_\tau & = & \text{time till due date} \\
j_d & = & \text{distance to destination} \\
j_v & = & \text{container volume} \notag
\end{pmatrix}
\end{equation}
The attribute $j_\tau \in [0,\tau^{max}] \cap \mathbb{Z}$ indicates how many decision epochs remain until the latest possible shipment date. When a new job enters the system, we set $\mathcal{T}_{\boldsymbol{j}}=\{j_\tau,j_\tau-1,\ldots,0\}$; note that this horizon may differ among jobs and decreases over time; the attribute $j_\tau$ is decremented with each time step. When $j_\tau=0$ and the job has not been shipped, it is considered to be a failed job, incurs a penalty, and is removed from the system. The attribute $j_d \in (0,d^{max}] \subset \mathbb{R}$ indicates the position of the destination on the real line; the further away, the higher the transport costs. The job volume $j_v \in [1,\zeta^{max}] \cap \mathbb{Z}$ indicates how much transport capacity the job requires. Let $\boldsymbol{J}_{t^\prime}$ be the problem state, defined as a set containing the jobs present in the system at time ${t^\prime}$. Furthermore, let $\mathcal{J}_{t^\prime}$ be the set of feasible states at time ${t^\prime}$.
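For concreteness, the job and state objects can be sketched in a few lines of Python (the language of our implementation). The sketch below is illustrative only: the class and function names are ours, and the uniform sampling distributions are an assumption; the parameter ranges follow the instance settings of Table~\ref{table:instancesettings}.
\begin{verbatim}
# Illustrative sketch (not the paper's verbatim code): minimal job and
# arrival representations. Sampling ranges follow Table 1; the uniform
# distributions are an assumption made for this sketch.
from dataclasses import dataclass
import random

@dataclass
class Job:
    tau: int     # time till due date, integer in [0, tau_max]
    d: float     # distance to destination, real in (0, d_max]
    v: int       # container volume, integer in [1, zeta_max]

def sample_arrivals(max_jobs=10, tau_max=5, d_max=100.0, zeta_max=10):
    """Draw a batch of newly arriving jobs (hypothetical distributions)."""
    n = random.randint(0, max_jobs)
    return [Job(tau=random.randint(1, tau_max),
                d=random.uniform(10.0, d_max),
                v=random.randint(1, zeta_max))
            for _ in range(n)]
\end{verbatim}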
At each decision epoch ${t^\prime}$, a transport service with fixed capacity $C$ departs along the real line. For the transport service to decide which jobs to take, the selection procedure is modeled as a 0-1 knapsack problem that is solved using dynamic programming \cite{kellerer2004}. The value of each job is its bid price minus its transport costs. Jobs with negative values are always rejected. Note that when the transport capacity exceeds the cumulative job volume, the transport service will accept all positive bids. The decision vector for the carrier is denoted as $\boldsymbol{\gamma}=[\gamma_{\boldsymbol{j}}]_{\forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}}$, with $\gamma_{\boldsymbol{j}} \in \{0,1\}$. The set $\Gamma(\boldsymbol{J}_{t^\prime})$ denotes the set of all feasible selections. The transport service's cost function for shipping a single job $\boldsymbol{j}$ is a function of distance and volume: $c_{\boldsymbol{j}} = c^{mile} \cdot j_v \cdot j_d$. It maximizes its direct rewards by selecting $\boldsymbol{\gamma}_{t^\prime}$ as follows:
\begin{align}\label{eq:selectioncarrier}
\argmax_{\boldsymbol{\gamma}\in \Gamma(\boldsymbol{J}_{t^\prime})} \left(\sum_{\boldsymbol{j} \in \boldsymbol{J}_{t^\prime}} {\gamma}_{\boldsymbol{j}}(x_{\boldsymbol{j}} - c_{\boldsymbol{j}})\right)\enspace,
\end{align}
s.t.
\begin{align}
\sum_{\boldsymbol{j} \in \boldsymbol{J}_{t^\prime}} {\gamma}_{\boldsymbol{j}} \cdot j_v < C\enspace. \notag
\end{align}
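To make the selection step concrete, the following sketch implements the carrier's 0-1 knapsack selection by dynamic programming. It is our own illustrative rendering, not the paper's verbatim code: integer job volumes are assumed, and the strict capacity constraint is honored by using $C-1$ as the effective knapsack capacity.
\begin{verbatim}
def select_jobs(jobs, bids, C, c_mile=0.1):
    """Carrier's 0-1 knapsack selection; value = bid minus transport cost.
    Jobs with negative value are rejected outright."""
    vals = [b - c_mile * j.v * j.d for j, b in zip(jobs, bids)]
    cand = [i for i, v in enumerate(vals) if v > 0]
    cap = C - 1  # strict constraint: total volume must stay below C
    # dp[w] = (best value, chosen index set) at capacity w
    dp = [(0.0, frozenset()) for _ in range(cap + 1)]
    for i in cand:
        w_i, v_i = jobs[i].v, vals[i]
        for w in range(cap, w_i - 1, -1):
            val = dp[w - w_i][0] + v_i
            if val > dp[w][0]:
                dp[w] = (val, dp[w - w_i][1] | {i})
    chosen = dp[cap][1]
    return [1 if i in chosen else 0 for i in range(len(jobs))]
\end{verbatim}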
From the perspective of jobs, actions (i.e., bids) are defined on the level of individual containers. All bids $x_{\boldsymbol{j}}\in \mathcal{X}_{\boldsymbol{j}} \equiv \mathbb{R}$ are placed in parallel and independently of one another, yielding a vector $\boldsymbol{x} = [x_{\boldsymbol{j}}]_{\forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}}$. Unshipped jobs with $j_\tau>0$ incur holding costs and unshipped jobs with $j_\tau=0$ incur a failed job penalty; both are proportional to the job volume. At any given decision epoch, the direct reward function for individual jobs is defined as follows:
\begin{align}
r_{\boldsymbol{j}}(\gamma_{\boldsymbol{j}},x_{\boldsymbol{j}})=
\begin{cases}
- x_{\boldsymbol{j}} & \text{if} \qquad \gamma_{\boldsymbol{j}}=1 \\
- c^{hold} \cdot j_v & \text{if} \qquad j_\tau>0 \land \gamma_{\boldsymbol{j}}=0 \\
- c^{pen} \cdot j_v & \text{if} \qquad j_\tau=0 \land \gamma_{\boldsymbol{j}}=0 \notag
\end{cases}
\end{align}
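In code, the direct reward function is a three-way case distinction. The sketch below uses the cost parameters of Table~\ref{table:instancesettings} as defaults and builds on the Job class sketched earlier.
\begin{verbatim}
def reward(job, gamma, bid, c_hold=1.0, c_pen=10.0):
    """Direct reward r_j for one decision epoch (costs are negative)."""
    if gamma == 1:
        return -bid               # shipped: pay the submitted bid
    if job.tau > 0:
        return -c_hold * job.v    # waiting: holding costs
    return -c_pen * job.v         # due date reached unshipped: penalty
\end{verbatim}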
To obtain $x_{\boldsymbol{j}}$ at the current decision epoch (which may be denoted by $x_{\boldsymbol{j}} \equiv x_{t^\prime,\boldsymbol{j}}$ when explicitly including the decision epoch), we try to solve $\argmax_{x_{t^\prime,\boldsymbol{j}} \in \mathcal{X}_{\boldsymbol{j}}} \mathbb{E}\bigl(\sum_{t^{\prime\prime}=t^\prime}^{t^\prime+|\mathcal{T}_{\boldsymbol{j}}|} r_{\boldsymbol{j}}(\gamma_{t^{\prime\prime},\boldsymbol{j}},x_{t^{\prime\prime},\boldsymbol{j}})\bigr)$, i.e., the goal is to maximize the expected reward (minimize expected costs) over the horizon of the job. Note that -- as a container cannot observe the bids of other jobs, nor the cost function of the transport service, nor future states -- we can only make decisions based on \textit{expected} rewards depending on the stochastic environment. The solution method presented in Section~\ref{sec:solutionmethod} further addresses this problem.
Finally, we define the transition function for the system state that occurs in the time step from decision epoch $t^\prime$ to $t^\prime+1$. During this step two changes occur; we (i) decrease the due dates of jobs that are not shipped or otherwise removed from the system and (ii) add newly arrived jobs to the state. The set of new jobs arriving for $t^\prime+1$ is defined by $\boldsymbol{\tilde{J}}_{t^\prime+1} \in \mathcal{\tilde{J}}_{t^\prime+1}$. The transition function $S^M:(\boldsymbol{J}_{t^\prime},\boldsymbol{\tilde{J}}_{t^\prime+1},\boldsymbol{\gamma}) \mapsto \boldsymbol{J}_{t^\prime+1}$ is defined by the following sequential procedure:
\setcounter{algo}{0}
\begin{algo}Transition function $S^M(\boldsymbol{J}_{t^\prime},\boldsymbol{\tilde{J}}_{t^\prime+1},\boldsymbol{\gamma}_{t^\prime})$
\end{algo}
\footnotesize
\begin{tabular}{ l l l}
\toprule
0: & Input: $\boldsymbol{J}_{t^\prime},\boldsymbol{\tilde{J}}_{t^\prime+1},\boldsymbol{\gamma}_{t^\prime}$& $\blacktriangleright$ Current state, job arrivals, shipping selection\\
1: & $\boldsymbol{J}_{t^\prime+1} \mapsfrom \emptyset$ & $\blacktriangleright$ Initialize next state\\
2: & $\boldsymbol{J}_{t^\prime}^x \mapsfrom \boldsymbol{J}_{t^\prime}$& $\blacktriangleright$ Copy state (post-decision state)\\
3: & $\forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}$ & $\blacktriangleright$ Loop over all jobs\\
4: & \qquad $\boldsymbol{J}_{t^\prime}^x \mapsfrom \boldsymbol{J}_{t^\prime}^x \setminus \boldsymbol{j} \mid \gamma_{{t^\prime},\boldsymbol{j}}=1$ & $\blacktriangleright$ Remove shipped job\\
5: & \qquad $\boldsymbol{J}_{t^\prime}^x \mapsfrom \boldsymbol{J}_{t^\prime}^x \setminus \boldsymbol{j} \mid j_{\tau}=0 \land \gamma_{{t^\prime},\boldsymbol{j}}=0$ &
$\blacktriangleright$ Remove unshipped job with due date 0 \\
6:& \qquad $j_{\tau} \mapsfrom j_{\tau} - 1 \mid j_{\tau}>0$ & $\blacktriangleright$ Decrement time till due date\\
7: & $\boldsymbol{J}_{{t^\prime}+1} \mapsfrom \boldsymbol{J}_{t^\prime}^x \cup \boldsymbol{\tilde{J}}_{{t^\prime}+1}$ & $\blacktriangleright$ Merge existing and new job sets \\
8: & Output: $\boldsymbol{J}_{{t^\prime}+1}$ & $\blacktriangleright$ New state\\
\bottomrule
\end{tabular}
\normalsize
\vspace{4mm}
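A direct transcription of Algorithm 1 into Python reads as follows; again, this is an illustrative sketch that builds on the Job class introduced above.
\begin{verbatim}
def transition(jobs, new_jobs, gamma):
    """S^M: drop shipped and failed jobs, age the rest, add arrivals."""
    kept = []
    for j, g in zip(jobs, gamma):
        if g == 1:        # shipped this epoch
            continue
        if j.tau == 0:    # unshipped at due date: failed, removed
            continue
        kept.append(Job(tau=j.tau - 1, d=j.d, v=j.v))
    return kept + list(new_jobs)
\end{verbatim}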
\section{Solution method}\label{sec:solutionmethod}
To learn the bidding strategy of the containers, we draw from the widely used policy gradient framework. For a detailed description and theoretical background, we refer to the REINFORCE algorithm by Williams \cite{williams1992}. As noted earlier, the policy gradient algorithm returns a stochastic bidding policy, reflecting the deviations in bid prices adopted by individual containers. As bids can take any real value, we must adopt a policy suitable for continuous action spaces. In this paper we opt for a Gaussian policy, drawing bids from a normal distribution. The mean and standard deviation of this distribution are learned using reinforcement learning.
In policy-based reinforcement learning, we perform an action directly on the state and observe the corresponding rewards. Each simulated episode $n \in \{0,1,\ldots,N\}$ yields a batch of selected actions and related rewards according to the stochastic policy $\pi_{\boldsymbol{\theta}}^n(x_{\boldsymbol{j}} \mid {\boldsymbol{j}},\boldsymbol{J}_{t^\prime})=\mathbb{P}^{\boldsymbol{\theta}}(x_{\boldsymbol{j}} \mid \boldsymbol{j},\boldsymbol{J}_{t^\prime})$. Under our Gaussian policy, bids are drawn independently of other containers, i.e., $x_{\boldsymbol{j}} \sim \mathcal{N}(\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}), \sigma_{\boldsymbol{\theta}}), \forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}$. The randomness in action selection allows the policy to keep exploring the action space and to escape from local optima. From the observed actions and rewards during each episode $n$, we deduce which actions result in above-average rewards and compute gradients ensuring that the policy is updated in that direction, yielding updated policies $\pi_{\boldsymbol{\theta}}^{n+1}$ until we reach $\pi_{\boldsymbol{\theta}}^{N}$. For consistent policy updates, we only use observations for jobs that are either shipped or removed, for which we need some additional notation. Let $\boldsymbol{K}^n=[K_0^n,\ldots,K_{\tau^{max}}^n]$ be a vector containing the number of bid observations for such completed jobs. For example, if a job had an original due date of 4 and is shipped at $j_{\tau}=2$, we would increment $K_{4}^n$, $K_{3}^n$ and $K_{2}^n$ by 1 (using an update function $k(\boldsymbol{j})$). Finally, we store all completed jobs (i.e., either shipped or failed) in a set $\hat{\boldsymbol{J}}^n$. For each episode, the cumulative rewards per job are defined as follows:
\begin{align}
\hat{v}_{t,\boldsymbol{j}}^n(\gamma_{\boldsymbol{j}},x_{\boldsymbol{j}})=
\begin{cases}
r_{\boldsymbol{j}}(\gamma_{\boldsymbol{j}},x_{\boldsymbol{j}}) & \text{if} \qquad t=0 \\
r_{\boldsymbol{j}}(\gamma_{\boldsymbol{j}},x_{\boldsymbol{j}})+\hat{v}_{t-1,\boldsymbol{j}}^n & \text{if} \qquad t>0 \notag
\end{cases}
\quad, \forall t \in \mathcal{T}_{\boldsymbol{j}}\enspace.
\end{align}
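In code, this recursion is a plain running sum over a job's lifetime; a minimal helper, with the rewards ordered from the job's first decision epoch to its last:
\begin{verbatim}
def cumulative_rewards(rewards_per_epoch):
    """v_hat_t = r_t + v_hat_{t-1}: running sum of one job's rewards."""
    v_hat, total = [], 0.0
    for r in rewards_per_epoch:
        total += r
        v_hat.append(total)
    return v_hat
\end{verbatim}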
Let $\hat{\boldsymbol{v}}_{t^\prime}^n=\bigl[[\hat{v}_{t,\boldsymbol{j}}^n]_{t \in \mathcal{T}_{\boldsymbol{j}}}\bigr]_{\forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}}$ be the vector containing all observed cumulative rewards at time $t^\prime$ in episode $n$. At the end of each episode, we can then define the information vector
\begin{align}
\boldsymbol{I}^n =\biggl[[\boldsymbol{J}_{t^\prime}^n, \boldsymbol{x}_{t^\prime}^n, \hat{\boldsymbol{v}}_{t^\prime}^n,
\boldsymbol{\gamma}_{t^\prime}^n]_{\forall t^\prime \in \{0,\ldots,T\}},
\boldsymbol{K}^n, \hat{\boldsymbol{J}}^n\biggr] \notag
\end{align}
\noindent and corresponding updating function $i(\cdot)$; the information vector contains the states, actions and rewards necessary for the policy updates (i.e., a history similar to the well-known SARSA trajectory). The decision-making policy is updated according to the policy gradient theorem \cite{sutton2018}, which may be formalized as follows:
\begin{align}
\nabla_{\boldsymbol{\theta}} v_{j_\tau,\boldsymbol{j}}^{\pi_{\boldsymbol{\theta}}}
\propto \sum_{{t^\prime}=1}^{T} \left(\int_{\boldsymbol{J}_{t^\prime} \in \mathcal{J}_{t^\prime}} \mathbb{P}^{\pi_{\boldsymbol{\theta}}}(\boldsymbol{J}_{t^\prime} \mid \boldsymbol{J}_{{t^\prime}-1}) \int_{x_{\boldsymbol{j}} \in \mathcal{X}_{\boldsymbol{j}}} \nabla_{\boldsymbol{\theta}}{\pi_{\boldsymbol{\theta}}}(x_{\boldsymbol{j}} \mid \boldsymbol{j}, \boldsymbol{J}_{t^\prime})\,v_{j_\tau,\boldsymbol{j}}^{\pi_{\boldsymbol{\theta}}}\,\mathrm{d}x_{\boldsymbol{j}}\,\mathrm{d}\boldsymbol{J}_{t^\prime}\right) \enspace. \notag
\end{align}
Essentially, the theorem states that the gradient of the objective function is proportional to the value functions $v_{j_\tau,\boldsymbol{j}}^{\pi_{\boldsymbol{\theta}}}$ multiplied by the policy gradients for all actions in each state, given the probability measure $\mathbb{P}^{\pi_{\boldsymbol{\theta}}}$ implied by the prevailing decision-making policy $\pi_{\boldsymbol{\theta}}$.
We proceed to formalize the policy gradient theorem for our problem setting. Let $\boldsymbol{\theta}$ be a vector of weight parameters that defines the decision-making policy $\pi_{\boldsymbol{\theta}}: (\boldsymbol{j},\boldsymbol{J}_{t^\prime}) \mapsto \mathcal{X}_{\boldsymbol{j}}$. Furthermore, let $\boldsymbol{\phi}(\boldsymbol{j},\boldsymbol{J}_{t^\prime})$ be a feature vector that distills the most salient attributes from the problem state, e.g., the number of jobs waiting to be transported or the average time till due date. We will further discuss the features in Section~\ref{sec:numericalexperiments}. For the Gaussian case, we formalize the policy as follows:
\begin{align}
\pi_{\boldsymbol{\theta}}=\frac{1}{\sqrt{2\pi}\sigma_{\boldsymbol{\theta}}}e^{-\frac{\left(x_{\boldsymbol{j}}-\mu_{\boldsymbol{\theta}}\left(\boldsymbol{j},\boldsymbol{J}_{t^\prime}\right)\right)^2}{2\sigma_{\boldsymbol{\theta}}^2}}\enspace, \notag
\end{align}
with $x_{\boldsymbol{j}}$ being the bid price, $\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}) = \boldsymbol{\phi}(\boldsymbol{j},\boldsymbol{J}_{t^\prime})^\top {\boldsymbol{\theta}}$ the Gaussian mean and $\sigma_{\boldsymbol{\theta}}$ the parametrized standard deviation. The action $x_{\boldsymbol{j}}$ may be obtained from the inverse normal distribution. The corresponding gradients are defined by
\begin{align}
\nabla_{\mu_{\boldsymbol{\theta}}}(\boldsymbol{j},\boldsymbol{I}^n) = \nabla_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{I}^n) &= \nabla_{\boldsymbol{\theta}} \log \pi_{\boldsymbol{\theta}} (\boldsymbol{j},\boldsymbol{I}^n) \notag \\
&= \frac{(x_{\boldsymbol{j}}-\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}))\boldsymbol{\phi}(\boldsymbol{j},\boldsymbol{J}_{t^\prime})}{\sigma_{\boldsymbol{\theta}}^2} \enspace, \notag
\end{align}
\begin{align}
\nabla_{\sigma_{\boldsymbol{\theta}}} (\boldsymbol{j},\boldsymbol{I}^n) = \frac{(x_{\boldsymbol{j}}-\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}))^2 - \sigma_{\boldsymbol{\theta}}^2}{\sigma_{\boldsymbol{\theta}}^3}\enspace. \notag
\end{align}
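Sampling from the Gaussian policy and evaluating these two gradients translate into a few lines of NumPy. The sketch assumes the linear mean $\mu_{\boldsymbol{\theta}} = \boldsymbol{\phi}(\boldsymbol{j},\boldsymbol{J}_{t^\prime})^\top\boldsymbol{\theta}$ introduced below, with \texttt{phi} and \texttt{theta} as arrays; the function names are ours.
\begin{verbatim}
import numpy as np

_rng = np.random.default_rng()

def sample_bid(theta, sigma, phi):
    """Draw one bid x_j ~ N(phi^T theta, sigma)."""
    return _rng.normal(float(phi @ theta), sigma)

def grad_log_pi_mu(x, mu, phi, sigma):
    """Gradient of log pi w.r.t. theta (the mean parameters)."""
    return (x - mu) * phi / sigma**2

def grad_log_pi_sigma(x, mu, sigma):
    """Gradient of log pi w.r.t. the standard deviation."""
    return ((x - mu)**2 - sigma**2) / sigma**3
\end{verbatim}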
The gradients are used to update the policy parameters. As the observations may exhibit large variance, we add a non-biased baseline value (i.e., not directly depending on the policy), namely the average observed value during the episode \cite{sutton2018}. For the prevailing episode $n$, we define the baseline as
\begin{align}
\bar{v}_t^n = \frac{1}{K_{t}^n} \sum_{\boldsymbol{j} \in \boldsymbol{\hat{J}}^n} \hat{v}_{t,\boldsymbol{j}}^n, \forall t \in \{0,\ldots, \tau^{max}\} \enspace. \notag
\end{align}
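Computing the baseline amounts to averaging the cumulative rewards per time-till-due-date bucket; a sketch follows, where the tuple format of the observations is our own bookkeeping convention.
\begin{verbatim}
def episode_baseline(observations, tau_max):
    """Average cumulative reward per time-till-due-date bucket.
    observations: list of (j_tau, v_hat) pairs for completed jobs."""
    sums = [0.0] * (tau_max + 1)
    counts = [0] * (tau_max + 1)
    for t, v_hat in observations:
        sums[t] += v_hat
        counts[t] += 1
    baseline = [s / c if c else 0.0 for s, c in zip(sums, counts)]
    return baseline, counts
\end{verbatim}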
For the Gaussian policy, the weight update rule for $\mu_{\boldsymbol{\theta}}$ (updating $\boldsymbol{\theta}^{n-1}$ to $\boldsymbol{\theta}^n$) is:
\begin{align}\label{eq:update_mu}
\Delta_{\mu_{\boldsymbol{\theta}}}(\boldsymbol{j},\boldsymbol{I}^n) = \Delta_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{I}^n) &= \alpha_\mu \frac{1}{K_{j_\tau}^n}(\hat{v}_{j_\tau} - \bar{v}_{j_\tau}) \nabla_{\boldsymbol{\theta}} \log \pi_{\boldsymbol{\theta}} (\boldsymbol{j},\boldsymbol{I}^n) \\
&= \alpha_\mu \frac{1}{K_{j_\tau}^n}(\hat{v}_{j_\tau} - \bar{v}_{j_\tau}) \left[\frac{(x_{\boldsymbol{j}}-\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}))\boldsymbol{\phi}(\boldsymbol{j},\boldsymbol{J}_{t^\prime})}{\sigma_{\boldsymbol{\theta}}^2}\right]\enspace.
\end{align}
The standard deviation $\sigma_{\boldsymbol{\theta}}$ is updated as follows:
\begin{align}\label{eq:update_stdev}
\Delta_{\sigma_{\boldsymbol{\theta}}}(\boldsymbol{j},\boldsymbol{I}^n) &= \alpha_\sigma \frac{1}{K_{j_\tau}^n} (\hat{v}_{j_\tau} - \bar{v}_{j_\tau}) \nabla_{\boldsymbol{\theta}} \log \pi_{\boldsymbol{\theta}} (\boldsymbol{j},\boldsymbol{I}^n) \\
&= \alpha_\sigma \frac{1}{K_{j_\tau}^n}(\hat{v}_{j_\tau} - \bar{v}_{j_\tau}) \left[\frac{(x_{\boldsymbol{j}}-\mu_{\boldsymbol{\theta}}(\boldsymbol{j},\boldsymbol{J}_{t^\prime}))^2 - \sigma_{\boldsymbol{\theta}}^2}{\sigma_{\boldsymbol{\theta}}^3}\right]\enspace.
\end{align}
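Both update rules translate into a single episode-level update step. In the sketch below, the small positive floor on $\sigma_{\boldsymbol{\theta}}$ is a numerical safeguard we add for illustration; it is not part of the formal model.
\begin{verbatim}
def update_policy(theta, sigma, obs, v_bar, K,
                  alpha_mu=0.1, alpha_sigma=0.01):
    """One REINFORCE-style update over an episode's completed jobs.
    obs: list of (j_tau, bid, mu, phi, v_hat) tuples."""
    for t, x, mu, phi, v_hat in obs:
        adv = (v_hat - v_bar[t]) / K[t]   # baseline-corrected, averaged
        theta = theta + alpha_mu * adv * grad_log_pi_mu(x, mu, phi, sigma)
        sigma = sigma + alpha_sigma * adv * grad_log_pi_sigma(x, mu, sigma)
        sigma = max(sigma, 1e-2)          # keep the policy well-defined
    return theta, sigma
\end{verbatim}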
Intuitively, this means that after each episode we update the feature weights -- which in turn provide the state-dependent mean bidding price -- and the standard deviation of the bids. The mean bidding price -- taking into account both individual job properties and the state of the system -- represents the bid that is expected to minimize overall costs. If effective bids are very close to the mean, the standard deviation will decrease and the bidding policy will converge to an almost deterministic policy. However, if there is an expected benefit in varying bids, the standard deviation may grow larger. The algorithmic outline to update the parametrized policy is as follows:
\setcounter{algo}{1}
\begin{algo}\label{policygradientoutline}Outline of the policy gradient bidding algorithm (based on \cite{williams1992})
\end{algo}
\begin{tabular}{ l l l}
\toprule
0: & Input: $\pi_{\boldsymbol{\theta}}^0$ & $\blacktriangleright$ Differentiable parametrized policy\\
1: & $\alpha_\mu \mapsfrom (0,1), \alpha_\sigma \mapsfrom (0,1)$ & $\blacktriangleright$ Set step sizes\\
2: & $\sigma^0 \mapsfrom \mathbb{R}^+$ & $\blacktriangleright$ Initialize standard deviation\\
3: & $\boldsymbol{\theta} \mapsfrom \mathbb{R}^{|\boldsymbol{\theta}|}$ & $\blacktriangleright$ Initialize policy parameters\\
4: & $\forall \boldsymbol{n} \in \{0,\ldots,N\}$ & $\blacktriangleright$ Loop over episodes\\
5: & \qquad $\boldsymbol{\hat{J}}^n \mapsfrom \emptyset$ & $\blacktriangleright$ Initialize completed job set\\
6: & \qquad $\boldsymbol{I}^n \mapsfrom \emptyset$ & $\blacktriangleright$ Initialize information set\\
7: & \qquad $\boldsymbol{J}_0 \mapsfrom \mathcal{J}_0$ & $\blacktriangleright$ Generate initial state\\
8: & \qquad $\forall t^\prime \in \{0,\ldots,T\}$ & $\blacktriangleright$ Loop over finite time horizon\\
9: & \qquad\qquad $x_{\boldsymbol{j}}^n \mapsfrom \pi_{\boldsymbol{\theta}}^n(\boldsymbol{j},\boldsymbol{J}_{t^\prime}), \forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}$ & $\blacktriangleright$ Bid placement jobs\\
10: & \qquad\qquad $\boldsymbol{\gamma}^n \mapsfrom \argmax_{\boldsymbol{\gamma}^n \in \Gamma(\boldsymbol{J}_{t^\prime})} $ & $\blacktriangleright$ Job selection carrier, Eq. \eqref{eq:selectioncarrier}\\
& \qquad\qquad $\left(\sum_{\boldsymbol{j} \in \boldsymbol{J}_{t^\prime}} {\gamma}_{\boldsymbol{j}}^n(x_{\boldsymbol{j}} - c_{\boldsymbol{j}})\right)$ & \\
11: & \qquad\qquad $\hat{v}_{j_\tau,\boldsymbol{j}}^n \mapsfrom r_{\boldsymbol{j}}(\gamma_{\boldsymbol{j}}^n,x_{\boldsymbol{j}}^n), \forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime}$ & $\blacktriangleright$ Compute cumulative rewards\\
12: & \qquad\qquad $\forall \boldsymbol{j} \in \boldsymbol{J}_{t^\prime} \mid j_{\tau}=0 \lor {\gamma}_{\boldsymbol{j}}^n=1 $& $\blacktriangleright$ Loop over completed jobs \\
13: & \qquad\qquad\qquad $\boldsymbol{\hat{J}}^n \mapsfrom \boldsymbol{\hat{J}}^n \cup \{\boldsymbol{j}\}$ & $\blacktriangleright$ Update set of completed jobs\\
14: & \qquad\qquad\qquad $\boldsymbol{K}^n \mapsfrom k(\boldsymbol{j})$ & $\blacktriangleright$ Update number of completed jobs\\
15: & \qquad\qquad $\boldsymbol{I}^n \mapsfrom i\biggl(\boldsymbol{J}_{t^\prime}^n, \boldsymbol{x}_{t^\prime}^n, \hat{\boldsymbol{v}}_{t^\prime}^n,
\boldsymbol{\gamma}_{t^\prime}^n, \boldsymbol{K}^n, \hat{\boldsymbol{J}}^n, \boldsymbol{I}^n\biggr)$ & $\blacktriangleright$ Store information\\
16: & \qquad\qquad $\boldsymbol{\tilde{J}}_{t^\prime}\mapsfrom \mathcal{\tilde{J}}_{t^\prime}$ & $\blacktriangleright$ Generate job arrivals\\
17: & \qquad\qquad $\boldsymbol{J}_{t^\prime+1} \mapsfrom S^M(\boldsymbol{J}_{t^\prime},\boldsymbol{\tilde{J}}_{t^\prime+1},\boldsymbol{\gamma}_{t^\prime}^n) $ & $\blacktriangleright$ Transition function, Algorithm 1\\
18: & \qquad $\forall t \in \{0,\ldots,\tau^{max}\}$ & $\blacktriangleright$ Loop till maximum due date\\
19: & \qquad\qquad $\forall \boldsymbol{j} \in \boldsymbol{\hat{J}}^n$ & $\blacktriangleright$ Loop over completed jobs\\
20: & \qquad\qquad\qquad $\mu_{\boldsymbol{\theta}}^{n+1} \mapsfrom \mu_{\boldsymbol{\theta}}^n+\Delta_{\mu_{\boldsymbol{\theta}}}(\boldsymbol{j},\boldsymbol{I}^n) $ & $\blacktriangleright$ Update Gaussian mean, Eq. \eqref{eq:update_mu} \\
21:& \qquad\qquad\qquad $\sigma_{\boldsymbol{\theta}}^{n+1} \mapsfrom \sigma_{\boldsymbol{\theta}}^n+\Delta_{\sigma_{\boldsymbol{\theta}}}(\boldsymbol{j},\boldsymbol{I}^n) $ & $\blacktriangleright$ Update standard deviation, Eq. \eqref{eq:update_stdev}\\
22: & Output: $\pi_{\boldsymbol{\theta}}^N$ & $\blacktriangleright$ Return tuned policy\\
\bottomrule
\end{tabular}
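Putting the preceding sketches together gives a compact, hypothetical rendering of Algorithm 2. Two simplifications relative to the formal model should be noted: each episode starts from an empty system, and each completed job contributes a single observation (its final-epoch reward) rather than one observation per lived epoch. The helper \texttt{features(j, jobs)}, which maps a job and the system state to the feature vector $\boldsymbol{\phi}$, is sketched in Section~\ref{sec:numericalexperiments}.
\begin{verbatim}
# Hypothetical driver (sketch): N episodes of T epochs each.
N, T, C, TAU_MAX = 4000, 100, 80, 5   # reduce N for a quick test run
theta, sigma = np.zeros(8), 10.0      # bias + 3 job + 4 system features
for n in range(N):
    jobs, completed = [], []
    for t_prime in range(T):
        phis = [features(j, jobs) for j in jobs]
        bids = [sample_bid(theta, sigma, phi) for phi in phis]
        gamma = select_jobs(jobs, bids, C)
        for j, g, b, phi in zip(jobs, gamma, bids, phis):
            if g == 1 or j.tau == 0:  # completed: shipped or failed
                completed.append((j.tau, b, float(phi @ theta), phi,
                                  reward(j, g, b)))
        jobs = transition(jobs, sample_arrivals(), gamma)
    v_bar, K = episode_baseline([(t, v) for t, _, _, _, v in completed],
                                TAU_MAX)
    theta, sigma = update_policy(theta, sigma, completed, v_bar, K)
\end{verbatim}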
\section{Numerical experiments}\label{sec:numericalexperiments}
This section describes the numerical experiments and their results. Section~\ref{ssec:exploration} explores the parameter space and aids in tuning the hyperparameters; the subsequent subsection analyzes the performance of the learned policies. The algorithm is written in Python 3.7 and available online.\footnote{https://github.com/woutervanheeswijk/policygradientsmartcontainers}
\subsection{Exploration of parameter space}\label{ssec:exploration}
The purpose of this section is twofold: we explore the effects of various parameter settings on the performance of the algorithm and select a suitable set of parameters for the remainder of the experiments. We make use of the instance settings summarized in Table~\ref{table:instancesettings}. Note that the penalty for failed jobs is the main driver in determining bid prices; together with holding costs, it intuitively represents the maximum price the smart container is willing to bid to be transported.
\begin{table}
\centering
\caption{Instance settings}
\label{table:instancesettings}
\begin{tabular}{ l l }
\toprule
Max. \# job arrivals & [0-10] \\
Due date & [1-5] \\
Job transport distance & [10-100] \\
Job volume & [1-10]\\
Holding cost (per volume unit) & 1\\
Penalty failed job (per volume unit) & 10\\
Transport costs per mile (per volume unit) & 0.1 \\
Transport capacity & 80 \\
\bottomrule
\end{tabular}
\end{table}
To parametrize the policy we use several features. First, we use a scalar that serves as the bias. Second, we use the individual job properties of the job placing the bid, i.e., the time till due date, the job's transport distance and the container volume. Third, in case the job shares its own properties with the system, it also uses the generic system features: the total number of jobs, average distance, total volume, and average due date. Recall that these system features only include the data of other smart containers that share their information. All weight parameters in $\boldsymbol{\theta}$ are initialized at 0, yielding initial bid prices of 0.
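As an illustration, the feature construction can be sketched as follows. The normalization constants are our assumption, derived from the maxima in Table~\ref{table:instancesettings}, and the clipping of the aggregate features is a convenience of the sketch.
\begin{verbatim}
def features(job, jobs, share_info=True):
    """phi(j, J_t): bias, own job attributes and, if information is
    shared, aggregate system features, all scaled to [0, 1]."""
    own = [1.0, job.tau / 5.0, job.d / 100.0, job.v / 10.0]
    if not share_info or not jobs:
        return np.array(own + [0.0] * 4)
    n = len(jobs)
    system = [min(n / 50.0, 1.0),                       # number of jobs
              sum(j.d for j in jobs) / (n * 100.0),     # average distance
              min(sum(j.v for j in jobs) / 500.0, 1.0), # total volume
              sum(j.tau for j in jobs) / (n * 5.0)]     # average due date
    return np.array(own + system)
\end{verbatim}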
We perform a sequential search to set the simulation parameters. First, we tune the learning rates $\alpha_\mu$ (learning rate for the mean) and $\alpha_\sigma$ (learning rate for the standard deviation), starting with a standard normal distribution. We test learning rates $\{0.0001, 0.001, 0.01, 0.1\}$ for both parameters and find that $\alpha_\mu=0.1$ and $\alpha_\sigma=0.01$ are stable (i.e., no exploding gradients) and converge reasonably fast. Taking smaller learning rates yields no evident advantages in terms of stability or eventual policy quality.
Figure~\ref{fig:convergence_learning_rates} shows two examples of parameter convergence under various learning rates.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{mu_sigma_small}
\caption{$\alpha_\mu=0.01$ and $\alpha_\sigma=0.001$}
\label{fig:small_learning_rates}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{mu_sigma_large}
\caption{$\alpha_\mu=0.1$ and $\alpha_\sigma=0.01$}
\label{fig:large_learning_rates}
\end{subfigure}
\caption{Convergence of $\mu_{\boldsymbol{\theta}}$ and $\sigma_{\boldsymbol{\theta}}$ (normalized) for various learning rates. Higher learning rates achieve both faster convergence and lower average bid prices.}
\label{fig:convergence_learning_rates}
\end{figure}
Next, we tune the initial bias weight $\theta_0^0$ (using values $\{-50,-40,\ldots,40,50\}$) and the initial standard deviation $\sigma^0$ (using values $\{0.01, 0.1, 1, 10, 25\}$). Anticipating nonzero bids, we test several initializations with nonzero bias weights. Large standard deviations allow for much exploration early on, but may also result in unstable policies or slow convergence. From the experiments, we observe that the bias weight converges to a small or negative value and that there is no benefit in different initializations. For the standard deviation, we find that $\sigma^0 = 10$ yields the best results; the exploration triggered by setting large initial standard deviations helps avoid local optima early on. In terms of performance, the average transport costs are 7.3\% lower than under the standard normal initialization. Standard deviations ultimately converge to similarly small values regardless of the initialization.
Finally, we determine the number of episodes $N$ and the length of each horizon $T$. Longer horizons lead to larger and therefore more reliable batches of completed jobs per episode, but naturally require more computational effort. Thus, we compare settings for which the total number of time steps $N \cdot T$ is equivalent. Each alternative simulates 1,000,000 time steps, using $T=\{10,25,50,100,250,500,1000\}$ with corresponding values $N$. To test convergence, after each 10\% of completed training episodes we perform 10 validation episodes -- always with $T=1000$ for fair comparisons -- to evaluate policy qualities. We find that having large batches provides notable advantages. Furthermore, in all cases 400,000 time steps appear sufficient to converge to stable policies. To illustrate the findings, Figure~\ref{fig:offlineperformance} shows the average transport costs measured after each 100,000 time steps (using the then-prevailing policy); Figure~\ref{fig:largebatch} shows the quality of the eventual policies for each time horizon.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{offline_performance}
\caption{Comparison offline quality.}
\label{fig:offlineperformance}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{barchart_performance_batchsize}
\caption{Comparison final policy quality.}
\label{fig:largebatch}
\end{subfigure}
\caption{Policy performance for various time horizons. The horizon $T=100$ yields the best overall performance; batches too small diminish performance.}
\label{fig:figure_episodes}
\end{figure}
The final parameters to be used for the remainder of the numerical experiments are summarized as follows: $N=4,000$, $T=100$, $\sigma^0=10$, $\alpha_\mu=0.1$ and $\alpha_\sigma=0.01$.
\subsection{Analysis of policy performance}
Having determined the parameter settings, we proceed to analyze the performance of the jointly learned policies. This section addresses the effects of information sharing, the relevance of the used features in determining the bid, and the behavior of the bidding policy and its impact on carrier profits. All results in this section correspond to the performance of policies after training is completed. To obtain additional insights we sometimes define alternative instances; for conciseness we do not explicitly define the settings of these alternatives.
We first evaluate the effects of information sharing. According to preset ratios, we randomly assign whether or not a container shares its information with the system. Only containers that share information are able to see aggregate system information. Clearly, the more containers participate, the more accurate the system information will be. We test sharing rates ranging from 0\% to 100\% with increments of 10\%; Table~\ref{table:featureweights} shows the results. We observe that performance under full information sharing and no information sharing is almost equivalent, with partial information sharing always performing worse. The latter observation may be explained by the distorted view presented by only measuring part of the system state.
\begin{table}
\scriptsize
\centering
\caption{Feature weights for various information sharing rates.}
\label{table:featureweights}
\begin{tabular}{ l | c| c| c| c| c| c |c| c| c |c| c }
\hline
Feature &0\% &10\%&20\%&30\%&40\%&50\%&60\%&70\%&80\%&90\%&100\%\\
\hline
Scalar & -10.10 & -10.28&-8.03&-11.83&-10.98&-4.81&-7.17&-10.46&-8.71&-9.46&-12.12\\
Total job volume & -- &0.06&0.06&-0.12&0.44&0.03&0.29&0.42&1.18&0.64&0.63\\
Average due date & -- &-2.37&-2.73&-3.28&-2.57&-2.73&-2.69&-2.49&-1.79&-2.02&-1.56\\
Average distance & -- &1.83&3.07&2.59&2.36&3.18&3.58&3.16&3.44&2.26&2.92\\
\# jobs & -- &-0.03&-0.31&-0.69&-0.21&-0.86&-0.77&-1.39&-0.02&-1.25&-1.70\\
Job volume & 59.20 & 57.14&56.83&58.46&58.70&55.27&56.13&56.55&56.69&57.97&60.10\\
Job due date & -22.45 & -23.25&-24.02&-21.16&-21.95&-23.89&-25.96&-22.89&-22.26&-22.82&-21.27\\
Job distance & 49.49 &48.37&45.88&49.38&48.81&45.98&45.27&46.85&48.00&49.15&49.54\\
\hline
\textbf{Average reward} & \textbf{-46.87} &\textbf{-51.78}&\textbf{-55.96}&\textbf{-49.12}&\textbf{-49.54}&\textbf{-57.75}&\textbf{-56.75}&\textbf{-55.71}&\textbf{-53.12}&\textbf{-49.43}&\textbf{-46.32}\\
\hline
\end{tabular}
\end{table}
In the policy parametrization all features are scaled to the $[0,1]$ domain, such that the magnitudes of the weights are mutually comparable. We see that the generic features have relatively little impact on the overall bid price. This underlines the limited difference observed between full information sharing and no information sharing. Job volume and job distance are by far the most significant drivers of a job's bid price. This is in line with expectations, as the costs incurred by the carrier depend on these two factors and therefore require higher compensation. Furthermore, holding and penalty costs are proportional to the job volume. The relationship with the time till due date is negative; if more time remains, it might be prudent to place lower bids without an imminent risk of penalties. On average, each job places 1.36 bids and 99.14\% of jobs are ultimately shipped; the capacity of the transport service is rarely restrictive. Figure~\ref{fig:bidding_behavior} illustrates bidding behavior with respect to transport distance and time till due date, respectively.
\begin{figure}
\centering
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{bid_cloud}
\caption{Bids relative to transport distance.}
\label{fig:bid_cloud}
\end{subfigure}%
\begin{subfigure}{.5\textwidth}
\centering
\includegraphics[width=1.0\linewidth]{bid_track}
\caption{Sample paths of bids over time.}
\label{fig:bids_over_time}
\end{subfigure}
\caption{Visualizations of bidding policies with respect to transport distance and time till due date. Bids tend to increase when (a) transport distance is larger and (b) the job is closer to its due date.}
\label{fig:bidding_behavior}
\end{figure}
Next, we discuss some behavior of the bidding policy and its effect on carrier profits. As our carrier is a passive agent, we omit overly quantitative assessments here. We simulate various toy-sized instances, adopting simplified deterministic settings with a single container type (time till due date is zero, identical volume and distance). If transport capacity is guaranteed to suffice, the learned bid prices converge to almost zero. If two jobs always compete for one transport slot and the rejected job incurs a penalty, the winning bid will be slightly below the penalty cost. Several other experiments with scarce capacity also show bid prices slightly below the expected penalty costs that would otherwise be incurred. For our base scenario, the profit margin for the carrier is 20.2\%. This positive margin indicates that the features do not encapsulate all information required to learn the minimum bidding price. For comparison, we run another scenario in which the carrier's transport costs -- which are unknown to the smart containers -- are the sole feature; in that case all jobs trivially learn the minimum bidding price. This result implies that the carrier may have a financial incentive not to divulge too much information to the smart containers.
For the carrier, the bidding policies deployed by the smart containers greatly influence its profit. Scarcity of transport capacity drives up bid prices, yet also increases the likelihood of missed revenue. To gain more insight into this trade-off, we simulate various levels of transport capacity, from scarce to abundant. These experiments indeed confirm that a (non-trivial) capacity level exists that maximizes profit. In addition, a carrier need not accept all jobs whose bids exceed their marginal transport costs, as we presumed in this paper. Having the carrier represented by an active agent stretches beyond the scope of this paper.
We summarize the main findings, reiterating that the setting of this paper is a highly stylized environment. The key insights are as follows:
\begin{itemize}
\item Utilizing global system information only marginally reduces jobs' transport costs compared to sharing no system information;
\item Jointly learned policies converge to stable bidding policies with small standard deviations;
\item Jobs with more time remaining till their due date are prone to place lower bids;
\item Carriers have an incentive not to disclose true cost information when transport capacity is abundant.
\end{itemize}
\section{Conclusions}\label{sec:conclusions}
Traditional transport markets rely on (long-term) contracts between shippers and carriers in which price agreements are made. In contrast, self-organizing systems are expected to evolve into some form of spot market where demand and supply are dynamically matched based on the current state of the system. This paper explores the concept of smart containers placing bids on restricted transport capacity. We design a policy gradient algorithm to mimic joint learning of a bidding policy, sharing observations between autonomous smart modular containers. The stochastic policy reflects deviations made by individual containers, given that deterministic policies are easy to counter in a competitive environment. This stochastic approach appears effective. Standard deviations converge to small values, implying stable bidding policies. The performance of the policy is consistent with the effects of job volume, transport distance and time till due date, which are used to parametrize the bidding policy.
Numerical experiments show that sharing system information only marginally reduces bidding costs; individual job properties are the main driver in setting bids. The limited difference in policy quality with and without sharing system information is an interesting observation. This result implies that smart containers would only need to (anonymously) submit their key properties, submitted bid prices and incurred costs. There is no apparent need to share information on a system-wide level, which would greatly ease the system design.
The profitability of the transport service -- which is modeled as a passive agent in this paper -- strongly depends on the bidding policies of the smart containers. Experiments with varying transport capacities show that, in turn, the carrier can also influence bidding policies by optimizing the transport capacity that is offered. Without central governance, imbalances between smart containers and transport services may degrade performance. Based on the findings presented in this paper, one can imagine that the dynamic interplay between carriers and smart containers is a very interesting one that warrants closer attention.
We reiterate that this study is of an explorative nature; there are many avenues for follow-up research. In terms of algorithmic improvements, actor-critic methods (learning functions for expected downstream values rather than merely observing them) would be a logical continuation. Furthermore, the linear expression to compute the bidding price could be replaced by neural networks that capture potential non-linear structures. In addition, the basic problem presented in this paper lends itself to various extensions. So far the carrier has been assumed to be a passive agent, offering fixed transport capacity and services regardless of (anticipated) income. In reality the carrier will also make intelligent decisions based on the bidding policies of smart containers. A brokerage structure might also emerge in the Physical Internet context. Finally, we considered only a single transport service operating on the real line. Using the same algorithmic setup, this setting could be extended to more realistic networks with multiple carriers, routes and destination nodes.
\bibliographystyle{apacite}
\section{Introduction}
Since the discovery of the indirect exchange coupling (IEC) \cite{r1} in magnetic multilayers, followed by the discovery of the related effect of giant magnetoresistance \cite{r2,r3}, numerous material systems, both metallic and semiconducting, were reported to possess it, among which the pioneering Fe/Cr/Fe and Co/Cu/Co structures have been the most studied (see refs.~\cite{r4,r5} and references therein). The distinctive feature of the IEC is that the strength as well as the sign of the coupling are functions of the thickness of the nonmagnetic spacer \cite{r6}. Such oscillatory character results in either parallel or antiparallel mutual orientation of the magnetic moments of the neighboring ferromagnetic layers as the spacer thickness is varied. The mechanism of IEC is closely related to the Ruderman-Kittel-Kasuya-Yosida (RKKY) interaction acting between magnetic impurities in a non-magnetic host \cite{r7,r8,r9}. Even though the RKKY approach may not be fully applicable to the IEC in transition metal multilayers \cite{r4,r10}, starting with the early models \cite{r11,r12,r13} it generally captures well the physics involved. Most of the experimental data on IEC to date have been successfully described in terms of a more general quantum-well approach \cite{r14,r15}, where the spin-dependent density of states of the nonmagnetic spacer depends on the relative orientation of the ferromagnetic layers, which favors either parallel (ferromagnetic, FM) or antiparallel (antiferromagnetic, AFM) alignment of the respective magnetic moments. The period of the IEC oscillation depends on the properties of the spacer's Fermi surface, while the amplitude as well as the phase may be affected by the interface roughness, interdiffusion, and related effects.
Variation in temperature has a relatively minor effect on the IEC in systems with metallic \cite{r16} or semiconducting spacer layers \cite{r17}. In the first case, a slight weakening of the coupling strength with increasing temperature is well explained by thermal broadening of the Fermi edge \cite{r13}. In the second case, increasing temperature has the inverse effect \cite{r17} due to increased thermal population of the conduction band of the semiconducting spacer \cite{r10,r18}. A number of reports have shown a stronger temperature dependence than that predicted by theory \cite{r19,r20,r21,r22,r23}. The measured change in the IEC strength was up to 75~\% over a 300~K interval, which was stronger than the predicted behavior \cite{r10}, though still far from being technologically interesting for thermo-magnetic control of nanodevices.
\begin{figure*}[t!]
\includegraphics[width=140mm]{Fig_1_color_online.eps}
\caption{(a) Illustration of reference samples Fe(2)/Cr($d$)/Fe(2) without IEC (right panel) and with IEC (left panel). Solid arrows denote the magnetic moments of the Fe layers subject to IEC; two-directional arrows denote mutually independent orientation of the Fe magnetic moments in the absence of IEC. (b) Modified multilayer layout with the outer Fe layers RKKY-coupled antiferromagnetically via two weakly ferromagnetic Fe$_x$Cr$_{100-x}$ alloy layers, suitably placed within the composite spacer of NfN-type. Right (left) panel shows the corresponding magnetic layout above (below) the Curie temperature of the Fe$_x$Cr$_{100-x}$ layers. The same picture holds for the structures with ultrathin Fe at the Cr/Fe-Cr interfaces (spin-enhanced interfaces). (c) Room temperature M-B loops for reference Fe(2)/Cr($d$)/Fe(2) samples with $d$ = 1.5 and 10~nm and (d) for NfN-samples with and without spin-enhancement, respectively.}
\label{fig_1}
\end{figure*}
Another approach towards thermal control of IEC was to change the sign of the RKKY interaction via Y spacers in Gd/Y/Tb tri-layers \cite{r24} and via Pt spacers in perpendicularly magnetized Co/Pt/[Co/Py]$_\text{n}$ multilayers \cite{r25}. In both systems, the observed effect was explained as due to thermal changes in the magnetization of the respective soft outer magnetic layers (Co and Gd), affecting the phase of the IEC. The reported changes in the IEC were rather weak, however, of the order of 1~mT, well within the coercivity of the ferromagnetic layers used and, as a result, no clear parallel-to-antiparallel thermal switching was obtained.
Here we report on a new multilayer design for efficient thermal switching of IEC, illustrated in fig.~\ref{fig_1}, in which a diluted ferromagnetic 3d-metal-alloy layer within a composite spacer is tailored to have its Curie point ($T_\text{C}$) at a desired transition temperature, and is spaced by nonmagnetic layers such as to either transmit or not transmit AFM-RKKY interaction, in its ferromagnetic (below $T_\text{C}$) or paramagnetic (above $T_\text{C}$) state, respectively. We demonstrate sharp on/off thermal switching of a rather strong IEC, with 10--20~K transition widths and 0.1--1~T switching field strengths, respectively. This, combined with a broad choice of suitable materials and wide tuneability as regards the physical and operating parameters, makes the demonstrated system highly technological.
\section{Samples and Experiment}
Three series of multilayers were investigated: (i) reference series Fe(2)/Cr($d$)/Fe(2), with $d =$ 1--10~nm; (ii) series with composite spacers Fe(2)/[Cr(1.5)/Fe$_x$Cr$_{100-x}$(3)]$_{\times2}$/ Cr(1.5)/Fe(2); and (iii) series with spin-enhanced composite spacers Fe(2)/[Cr(1.5)/Fe(0.25)/ Fe$_x$Cr$_{100-x}$(3)/ Fe(0.25)]$_{\times2}$/Cr(1.5)/Fe(2). The thicknesses in parentheses are in nm. The series with composite and spin-enhanced composite spacers comprised three samples each, with $x$ = 30, 35 and 40 at.~\% Fe in the diluted ferromagnetic Fe$_x$Cr$_{100-x}$ layers.
The multilayers were deposited at room temperature onto Ar pre-etched undoped Si (100) substrates by dc-magnetron sputtering. Layers of dilute Fe$_x$Cr$_{100-x}$ binary alloys of varied composition were deposited using co-sputtering from separate Fe and Cr targets. The composition of the Fe$_x$Cr$_{100-x}$ layers was controlled by setting the corresponding deposition rates of the individual Fe and Cr components, with relevant calibrations obtained by subsequent thickness profilometry. The in-plane magnetic measurements were performed in the temperature range of 20--120~$^{\circ}$C using a vibrating-sample magnetometer equipped with a high-temperature furnace (Lakeshore Inc.) as well as a magneto-optical Kerr effect magnetometer equipped with an optical cryostat (Oxford Instr.).
\section{Results and Discussion}
\begin{figure*}
\centering
\includegraphics[width=140mm]{Fig_2_color_online.eps}
\caption{(a) Linear interpolation of the bulk Curie temperature vs. concentration $x$ for the Fe$_x$Cr$_{100-x}$ binary alloy for $x$ = 30--40 at.~\% Fe (data from ref.~\cite{r28}). M-B loops for NfN-samples with $x$ = 30, 35, and 40~\%, measured at 100~$^{\circ}$C (b,c,d) and room temperature (e,f,g).}
\label{fig_2}
\end{figure*}
The Fe/Cr/Fe system is known for its strong IEC when the Cr thickness is below approximately 3~nm, as illustrated in fig.~\ref{fig_1}(a), and vanishing IEC for spacers thicker than 3~nm. This well-established property is at the core of our design, which, in essence, achieves a rather abrupt change of the effective thickness of a specially designed composite spacer layer.
Classical RKKY-type Fe(2~nm)/Cr($d$)/Fe(2~nm) tri-layers with single-layer nonmagnetic (N) Cr spacers of thickness $d =$ 1--10~nm were fabricated as reference samples. These conventional N-type samples for $d < 3$~nm show the expected AFM interlayer coupling. The respective M-B loops have zero remnant magnetic moment $M_\text{r}$ and high saturation field $B_\text{s}$, illustrated in fig.~\ref{fig_1}(c) for $d = 1.5$~nm. The zero remnant magnetic moment corresponds to AFM ordering of the magnetic moments of the two Fe layers, induced by a rather strong IEC --- up to 0.5~T is needed to align the magnetic moments of the Fe layers in parallel. The step-like shape of the M-B loop is due to an interplay of the RKKY interlayer exchange and the intrinsic anisotropy of the Fe layers (as detailed below and in Supplementary material A and C). Increasing $d$ reduces the saturation field $B_\text{s}$ and, at $d \geq 3$~nm, a single rectangular M-B loop is observed indicating vanishing IEC, with the outer Fe layers exchange-decoupled and switching independently. The shape of the M-B loop for $d = 10$~nm is taken as a reference for the structure `without IEC'.
To demonstrate efficient thermal switching of IEC, the Fe/Cr/Fe tri-layer structure was modified to include a composite Cr/Fe$_x$Cr$_{100-x}$/Cr spacer, instead of the single-layer Cr spacer. Here, the layer of dilute ferromagnetic alloy Fe$_x$Cr$_{100-x}$, with $x =$ 30--40~at.~\%, is weakly ferromagnetic with a relatively low bulk Curie temperature, $T_\text{C} =$ 250--450~K \cite{r28}. In order to achieve an easy-to-detect AFM coupling between the outer Fe layers and, thereby, a clear illustration of the sought effect, a double composite spacer was used, with the specific structure of Fe(2)/Cr(1.5)/Fe$_x$Cr$_{100-x}$(3)/Cr(1.5)/Fe$_x$Cr$_{100-x}$(3)/ Cr(1.5)/Fe(2), illustrated in fig.~\ref{fig_1}(b). The initial M-B data revealed no AFM alignment in any of the samples: rectangular M-B loops characteristic of structures without IEC were observed, such as that for the structure with $x = 35~\%$ in fig.~\ref{fig_1}(d). Clearly, the RKKY in the structure was too weak to transmit through the composite spacer and needed to be enhanced.
Additional experiments were performed at room temperature and confirmed a ferromagnetic state in 20-nm thick single-layer films of the Fe$_x$Cr$_{100-x}$ alloy with $x = 35$ and 40~at.~\%, but a paramagnetic state for $x = 30$~at.~\% (see also Suppl.~B for more details). This indicated that the Fe$_x$Cr$_{100-x}$ layers and the respective Fe$_x$Cr$_{100-x}$/Cr interfaces for the selected alloy compositions are at the cusp of ferromagnetic ordering and, hence, have near-vanishing interface spin polarization. In order to enhance the spin polarization at the Cr/Fe$_x$Cr$_{100-x}$ interfaces \cite{r29}, which is key for RKKY, while maintaining the chosen spacer's $T_\text{C}$, the ferromagnetic alloy layer was clad with ultrathin (0.25~nm) pure Fe layers, essentially making all six RKKY-active interfaces of the Fe/Cr type. The resulting multilayer structures, with modified spacers, Fe(2)/Cr(1.5)/Fe(0.25)/Fe$_x$Cr$_{100-x}$(3)/Fe(0.25)/Cr(1.5)/ Fe(0.25)/Fe$_x$Cr$_{100-x}$(3)/Fe(0.25)/Cr(1.5)/Fe(2) (hereafter NfN refers to spacers with interface-enhanced f-layers), were found to be highly effective in establishing strong AFM IEC in the system, as shown by the magnetization data for the NfN-sample with $x = 35~\%$ in fig.~\ref{fig_1}(d). The observed M-B loops have zero remnant magnetic moment, much like the classical RKKY for conventional F/N/F (Fe(2)/Cr(1.5)/Fe(2) in fig.~\ref{fig_1}(c)).
Fe$_x$Cr$_{100-x}$ alloys in the concentration interval $x =$ 30--40~at.~\% Fe are known to have good interatomic solubility and a ferromagnetic-to-paramagnetic transition just above room temperature \cite{r28} (fig.~\ref{fig_2}(a); $T_\text{C}$ = 20--100~$^{\circ}$C) --- the range of interest for applications. The M-B loops of the NfN-samples with $x = 30$ and 35~\% measured at 100~$^{\circ}$C are of rectangular shape, with high remanence (fig.~\ref{fig_2}(b)--(d)). In contrast, the M-B loop for $x = 40~\%$ has zero remnant magnetic moment ($M_\text{r}$) and high saturation field ($B_\text{s}$). This indicates strong AFM coupling in the structure with $x = 40~\%$, whereas for $x = 30$ and 35~\% the outer Fe layers are essentially decoupled. At room temperature, the IEC character of the M-B loops for the NfN-samples with $x = 30$ and 40~\% is the same as at 100~$^{\circ}$C (fig.~\ref{fig_2}(e)--(g)), decoupled and AFM-coupled, respectively. In contrast, the M-B loop for the structure with $x = 35~\%$ completely changes its character, to AFM-type at room temperature with zero moment at zero field. This clearly shows that in the temperature interval of 20--100~$^{\circ}$C the spacer containing the spin-enhanced Fe$_{35}$Cr$_{65}$ layer undergoes a para-to-ferromagnetic transition (its effective Curie temperature, $T_\text{C}^\text{eff}$, lies within this interval). The measured behavior confirms the proposed switching mechanism illustrated in fig.~\ref{fig_1}(b).
\begin{figure}
\centering
\includegraphics{Fig_3_color_online.eps}
\caption{(a) Magnetic moment vs. temperature for samples with $x = 30, 35\text{ and }40~\%$, measured on cool down with applied field of 1~mT. Arrows indicate the mutual orientation of the magnetic moments of the outer Fe layers. (b) Temperature evolution of the M-B loops for the structure with $x = 35~\%$. (c) Simulated temperature dependence of the RKKY exchange constant for NfN-sample with $x = 35~\%$; inset shows the calculated $M(B)$ for different representative strength of $J_\text{RKKY}$, with the vertical arrows indicating the corresponding saturation fields.}
\label{fig_3}
\end{figure}
The observed thermal switching between the parallel and antiparallel states of the above spin-valve-type structure should be attractive for device applications, provided a good switching performance can be achieved. In this regard, the temperature interval within which the switching occurs is the key characteristic. $M$ vs. $T$ was measured in a weak applied field of 1~mT after saturating the sample into its parallel magnetic state at 110~$^{\circ}$C using a field of 100~mT (fig.~\ref{fig_3}(a)). Switching between the parallel (P) and antiparallel (AP) states of the NfN-structure with $x = 35~\%$, driven by the phase transition in the RKKY-IEC at the Curie point of the spacer, occurs within an interval of $\sim15~^{\circ}$C, which is quite narrow for a thermo-magnetic transition in a multilayer system. $M(T)$ for the structures with $x = 30\text{ and }40~\%$ reveals no such transition, which is consistent with the expected behavior for these compositions, with either only AFM or no interlayer coupling present in the operating temperature range (20--100~$^{\circ}$C). These results show that the effective Curie point ($T_\text{C}^\text{eff}$) is easily tunable by choosing the appropriate Fe concentration in the Fe-Cr layer of the spacer (f). The M-B loops for the NfN-sample with $x = 35~\%$, recorded at different temperatures and shown in fig.~\ref{fig_3}(b), illustrate the thermo-magnetic transition in detail. As the temperature increases, the saturation field $B_\text{s}$ decreases, reflecting a gradual suppression of the IEC. The remnant magnetic moment, on the other hand, has a much steeper temperature profile (red line in fig.~\ref{fig_3}(a)), indicating that the P-to-AP transition at $T_\text{C}^\text{eff}$ is more of a threshold type.
Using the above data of fig.~\ref{fig_3}(b) for the temperature dependence of the hysteresis loop, one can obtain the temperature dependence of the effective RKKY exchange constant (see Suppl.~C for details). Here we use a simple phenomenological model, with the RKKY coupling energy per unit area of bi-linear form, $W = J_\text{RKKY}(\mathbf{m}_1\cdot\mathbf{m}_2) = J_\text{RKKY}\cos\Delta\phi$, where $\Delta\phi$ is the angle between the two outer Fe magnetic moments $\mathbf{m}_1$ and $\mathbf{m}_2$ \cite{r26,Kobler1992}, which is appropriate in our case of zero remanent magnetization at zero field. Including the suitable Zeeman and Fe-layer anisotropy terms and minimizing the full magnetic energy yields $M(B)$, shown in the inset to fig.~\ref{fig_3}(c). The simulated $M(B)$ curves closely resemble our experimental data shown in fig.~\ref{fig_3}(b), with all the main features explained by the interplay of the RKKY, anisotropy, and Zeeman contributions: the saturation field and the coercive step, for example, shift to lower fields as the RKKY strength is reduced (at higher $T$), while the coercivity of the minor loop increases. The model shows that the saturation field is a sum of the effective RKKY and anisotropy fields, $B_\text{s}=B_j+B_\text{a}$, with the latter readily obtained from calibration measurements on single Fe films of the same thickness and morphology (deposited under the same conditions) as those used in the RKKY multilayers --- in our case $B_\text{a}=80$~mT. Subtracting $B_\text{a}$ and converting according to $J_\text{RKKY} = 2MB_jd$ yields the temperature dependence of the effective RKKY exchange coupling strength for the structure, shown in fig.~\ref{fig_3}(c). The saturation magnetization $\mu_0 M$ and thickness $d$ of the Fe layers were taken to be 2~T and 2~nm, respectively. The obtained $J_\text{RKKY}(T)$ shows that the RKKY coupling changes by an order of magnitude in the vicinity of the Curie transition in the spacer, thus demonstrating the high efficiency of thermal control of RKKY in our heterostructure.
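To make the conversion explicit, the following short Python sketch computes $J_\text{RKKY}(T)$ from saturation fields via $B_j=B_\text{s}-B_\text{a}$ and $J_\text{RKKY}=2MB_jd$, with $\mu_0M=2$~T, $d=2$~nm and $B_\text{a}=80$~mT as quoted above. It is illustrative only: the $B_\text{s}(T)$ values below are hypothetical placeholders, not our measured data.
\begin{verbatim}
import math

# Illustrative sketch: convert saturation fields Bs(T) to the
# effective RKKY coupling J_RKKY(T) via Bj = Bs - Ba and
# J_RKKY = 2*M*Bj*d. The Bs values are hypothetical placeholders.
MU0 = 4e-7 * math.pi        # vacuum permeability [T*m/A]
M = 2.0 / MU0               # Fe magnetization, mu0*M = 2 T [A/m]
d = 2e-9                    # outer Fe layer thickness [m]
Ba = 0.080                  # anisotropy field from calibration [T]

T_degC = [20, 40, 60, 80, 100]          # temperature [deg C]
Bs = [0.50, 0.42, 0.25, 0.12, 0.09]     # saturation field [T]

for T, bs in zip(T_degC, Bs):
    Bj = max(bs - Ba, 0.0)              # effective RKKY field [T]
    J = 2 * M * Bj * d                  # coupling energy [J/m^2]
    print(f"T = {T:3d} C, Bj = {Bj:.3f} T, J = {1e3*J:.2f} mJ/m^2")
\end{verbatim}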
The following mechanism explains the observed thermal switching of the IEC in the multilayer. The indirect exchange interaction (RKKY) between the Fe and Fe-Cr layers is mediated by the conduction electrons in the Cr layers of the composite spacer. Here, the ultrathin Fe layers at the Cr/Fe-Cr interfaces provide a sufficient degree of spin polarization of the scattered electrons for creating a coherent spin-density-wave state in the Cr layers. Direct exchange within each weakly ferromagnetic Fe-Cr layer, in turn, couples its two interfaces. Thus, the indirect exchange through the Cr layers and the direct exchange through the Fe-Cr layers form a serial sequence of interactions, providing the effective RKKY/direct-exchange interlayer coupling between the outer Fe layers (fig.~\ref{fig_1}(b)). With increasing temperature, above $T_\text{C}^\text{eff}$, the weakly ferromagnetic Fe-Cr layers undergo a rather sharp FM-PM phase transition. As a result, the direct-exchange links within the Fe-Cr layers are suppressed and the effective outer Fe-Fe exchange is switched off. Only a small fraction of the available RKKY coupling ($<1~\%$ of $\sim$0.5~T in our experiment) is needed to rotate a relatively soft outer Fe layer (coercivity of about 0.01~T), so the resulting thermal transition is rather narrow.
Multiple optimization paths are straightforward: optimizing the choice of materials, compositional profiles, interfacial spin enhancement, morphology (roughness), etc., should result in still better performance, likely with sub-10-K RKKY-Curie-transition widths. This, however, goes beyond the scope of this letter, which is focused on demonstrating the effect of thermal RKKY switching in magnetic multilayer systems.
Temperature-dependent IEC has been reported for other material systems, based on \emph{direct exchange} via weakly ferromagnetic spacers between two strongly ferromagnetic outer layers \cite{Demirtas_PRB,r30,r31}. However, the direct-exchange designs exhibit strong proximity effects and the associated limitations on the multilayer design. In contrast, efficient thermal control of RKKY makes it possible to have either an AFM or an FM ground state in the structure, which is not possible using direct exchange alone.
\section{Conclusion}
In summary, we report on a new concept for thermo-magnetic switching in multilayers, exploiting a combination of indirect and direct exchange. We demonstrate such on/off IEC switching in an Fe/Cr/FeCr-based system and obtain RKKY switching as sharp as 10--20~K, essentially in any desired temperature range. High design tuneability in the physical parameter space of field-temperature-magnetization, along with the availability of a wide choice of materials, makes AFM-RKKY or no-RKKY ground states easily obtainable, unlike in synthetic antiferro- or ferri-magnets. We believe that the demonstrated effect of thermally controlled indirect exchange coupling adds a new degree of freedom to designing future spin-electronic devices, such as memory \cite{Prejbeanu_2013} and oscillators \cite{Kadigrobov_2010}.
\acknowledgments{
Support from the Swedish Research Council (VR Grant No. 2014-4548) and the Swedish Stiftelse Olle Engkvist Byggm\"astare is gratefully acknowledged.}
\section{Introduction}
This paper deals with Lagrangian cobordisms in the symplectization $(\mathbb R\times Y,d(e^t\alpha))$ of a contact manifold $(Y,\alpha)$. These cobordisms are properly embedded Lagrangian submanifolds admitting cylindrical ends on Legendrian submanifolds of $Y$, and here $Y$ will be the contactization $(P\times\mathbb R,dz+\beta)$ of a Liouville manifold $(P,\beta)$.
Our goal is to define, through the SFT techniques introduced in \cite{EGH}, a unital $A_\infty$-category $\Fuk(\mathbb R\times Y)$ whose objects are Lagrangian cobordisms equipped with an augmentation of the Chekanov-Eliashberg algebra of their negative end, and whose morphism spaces are given by certain Floer-type complexes $\Cth_+(\Sigma_0,\Sigma_1)$. In particular, when $\Sigma_0$ is a cylinder over a Legendrian equipped with an augmentation and $\Sigma_1$ is a parallel copy, the complex $\Cth_+(\Sigma_0,\Sigma_1)$ is an SFT formulation of the Lagrangian Rabinowitz Floer complex due to Merry \cite{Merry}.
There already exist several versions of Fukaya categories whose objects are (non-compact) exact Lagrangians, notably the (partially) wrapped Fukaya categories of a Liouville domain (see \cite{AS,Abouzaid,ZS}) and more recently of Liouville sectors \cite{GPS}. In this paper we instead consider Lagrangian submanifolds in a trivial Liouville cobordism, meaning a trivial cylinder over a contact manifold. The main difference from the case of Liouville domains is that we have a non-empty concave end.
It is known that additional assumptions are necessary in order to define Floer complexes in this setting (Lagrangian cobordisms with loose negative ends are known to satisfy some flexibility results). For that reason, we impose some restrictions on the Lagrangians. More precisely, we consider only exact Lagrangian cobordisms with negative cylindrical ends over Legendrian submanifolds whose CE-algebras admit an augmentation. In particular, the existence of an exact Lagrangian filling implies the existence of an augmentation \cite{EHK}.
Generalizing the structures we define in this paper to the case of Lagrangians in a more general Liouville cobordism should be possible using the latest techniques of virtual perturbations in \cite{Pardon} or the polyfold technology developed in \cite{HWZ}.
Note that Cieliebak-Oancea in \cite{OC} have defined a version of Rabinowitz Floer homology for Lagrangians in a Liouville cobordism under the assumption that this cobordism admits a filling.
The Floer complexes we define in this paper are similar to the ones defined by Cieliebak-Oancea; for instance, there is an identification on the level of generators. The main difference is that their differential is defined by Floer strips with a Hamiltonian perturbation term that corresponds to wrapping, while the differential considered here is defined in terms of honest SFT-type pseudo-holomorphic discs. It is expected that the two theories give quasi-isomorphic complexes. \\
We start by contrasting the complex considered here with the Floer-type complex for pairs of Lagrangian cobordisms considered in \cite{CDGG2}. Namely, given a pair of transverse exact Lagrangian cobordisms $(\Sigma_0,\Sigma_1)$, where $\Sigma_i$ has positive and negative cylindrical ends over Legendrians $\Lambda_i^+$ and $\Lambda_i^-$ respectively and the $\Lambda_i^-$ are equipped with augmentations, the authors of \cite{CDGG2} define the Floer complex $(\Cth(\Sigma_0,\Sigma_1),\mathfrak{d})$ whose underlying vector space is given by
\begin{alignat*}{1}
\Cth(\Sigma_0,\Sigma_1)=C(\Lambda_0^+,\Lambda_1^+)\oplus CF(\Sigma_0,\Sigma_1)\oplus C(\Lambda_0^-,\Lambda_1^-)
\end{alignat*}
where $C(\Lambda_0^\pm,\Lambda_1^\pm)$ is generated by Reeb chords from $\Lambda_1^\pm$ to $\Lambda_0^\pm$ and $CF(\Sigma_0,\Sigma_1)$ is generated by intersection points in $\Sigma_0\cap\Sigma_1$. This complex is actually the cone of a map
\begin{alignat*}{1}
f_1:CF_{-\infty}(\Sigma_0,\Sigma_1):=CF(\Sigma_0,\Sigma_1)\oplus C(\Lambda_0^-,\Lambda_1^-)\to C(\Lambda_0^+,\Lambda_1^+)
\end{alignat*}
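Equivalently, with respect to the splitting $\Cth(\Sigma_0,\Sigma_1)=C(\Lambda_0^+,\Lambda_1^+)\oplus CF_{-\infty}(\Sigma_0,\Sigma_1)$, the differential $\mathfrak{d}$ takes the upper-triangular form
\begin{alignat*}{1}
\mathfrak{d}=\left(\begin{matrix} d_{++}&f_1\\ 0&\mfm_1^{-\infty}\end{matrix}\right)
\end{alignat*}
where $d_{++}$ denotes the component of $\mathfrak{d}$ preserving $C(\Lambda_0^+,\Lambda_1^+)$ and $\mfm_1^{-\infty}$ is the induced differential on $CF_{-\infty}(\Sigma_0,\Sigma_1)$ (see Section \ref{Cth-}).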
In \cite{L} the author defined a product structure on $CF_{-\infty}(\Sigma_0,\Sigma_1)$, as well as higher order maps satisfying the $A_\infty$-equations. Moreover, in the same paper it is proved that the map $f_1$ generalizes to a family of maps $\{f_d\}_{d\geq1}$,
\begin{alignat*}{1}
f_d:\Cth(\Sigma_{d-1},\Sigma_d)\otimes\dots\otimes\Cth(\Sigma_0,\Sigma_1)\to C(\Lambda_0^+,\Lambda_d^+)
\end{alignat*}
defined for a $(d+1)$-tuple of pairwise transverse exact Lagrangian cobordisms $(\Sigma_0,\dots,\Sigma_d)$ and satisfying the $A_\infty$-functor equations, where the $A_\infty$-structure maps on $C(\Lambda_{d-1}^+,\Lambda_d^+)\otimes\dots\otimes C(\Lambda_0^+,\Lambda_1^+)$ are given by the structure maps of the augmentation category $\mathcal{A}ug_-(\Lambda_0^+\cup\dots\cup\Lambda_d^+)$, see \cite{BCh}.
However, there exists no non-trivial $A_\infty$-structure on the whole complex $\Cth(\Sigma_0,\Sigma_1)$, for degree reasons: the grading of Reeb chord generators in the positive end of $\Cth(\Sigma_0,\Sigma_1)$ is given by the Conley-Zehnder index plus $1$ (see Subsections \ref{sec:grad} and \ref{Cth-}), so a count of rigid pseudo-holomorphic discs with boundary on the positive cylindrical ends, with two negative Reeb chord asymptotics and one positive Reeb chord asymptotic, would not provide a degree $0$ order $2$ map, for example.
In this article, we use similar techniques for constructing a version
of the Rabinowitz complex $(\Cth_+(\Sigma_0,\Sigma_1),\mfm_1)$, on which it will be possible to define higher order structure maps. The underlying vector space is:
\begin{alignat*}{1}
\Cth_+(\Sigma_0,\Sigma_1)=C(\Lambda_1^+,\Lambda_0^+)\oplus CF(\Sigma_0,\Sigma_1)\oplus C(\Lambda_0^-,\Lambda_1^-)
\end{alignat*}
so the only difference with $\Cth(\Sigma_0,\Sigma_1)$ is the generators we consider in the positive end; unlike in the complex $\Cth(\Sigma_0,\Sigma_1)$ these generators consist of the chords which \emph{start} at $\Lambda_0^+$ and \emph{end} at $\Lambda_1^+$.
The differential is defined by a count of pseudo-holomorphic discs with boundary on the cobordisms and asymptotic to Reeb chords and intersection points such that
\begin{itemize}
\item $C(\Lambda_0^-,\Lambda_1^-)$ is a subcomplex which is the linearized Legendrian contact cohomology complex of $\Lambda_0^-\cup\Lambda_1^-$ restricted to chords from $\Lambda_1^-$ to $\Lambda_0^-$,
\item $C(\Lambda_1^+,\Lambda_0^+)$ is a quotient complex which is the linearized Legendrian contact homology complex of $\Lambda_0^+\cup\Lambda_1^+$ restricted to chords from $\Lambda_0^+$ to $\Lambda_1^+$.
\end{itemize}
In the case where $\Sigma_0=\mathbb R\times\Lambda_0$ and $\Sigma_1$ is a cylinder over a perturbed copy of $\Lambda_0$ translated far in the positive Reeb direction, the complex $\Cth_+(\Sigma_0,\Sigma_1)$ is the complex of the $2$-copy of $\Lambda_0$ considered in \cite{EESa}.\\
Then we proceed to investigate the properties of this complex. Some of them resemble properties satisfied by the complex $\Cth(\Sigma_0,\Sigma_1)$ but there are also some significant differences.\\
\noindent\textbf{Acyclicity:}
Contrary to $\Cth(\Sigma_0,\Sigma_1)$, the complex $\Cth_+(\Sigma_0,\Sigma_1)$ is not always acyclic.
For example, if $Y=J^1M$ for a closed manifold $M$, $\Sigma_0$ is a cylinder over the $0$-section in $J^1M$ and $\Sigma_1$ a cylinder over a Morse perturbation of the $0$-section, then the homology of $\Cth_+(\Sigma_0,\Sigma_1)$ does not vanish but instead equals the Morse homology of $M$.
However, in the case $Y=P\times\mathbb R$ where any compact subset of $P$ is Hamiltonian displaceable, for example $Y=\mathbb R^{2n+1}$, the complex $\Cth_+(\Sigma_0,\Sigma_1)$ is always acyclic. It is also always acyclic whenever $\Lambda_0^-=\Lambda_1^-=\emptyset$, as in this case the complex is actually the same as the dual complex of $\Cth(\Sigma_1,\Sigma_0)$.\\
\noindent\textbf{Structure maps and continuation element:}
The new Cthulhu complex carries structure maps which satisfy the $A_\infty$-equations. More precisely, for any $(d+1)$-tuple $(\Sigma_0,\dots,\Sigma_d)$ of pairwise transverse exact Lagrangian cobordisms, we define a map
\begin{alignat*}{1}
\mfm_d:\Cth_+(\Sigma_{d-1},\Sigma_d)\otimes\dots\otimes\Cth_+(\Sigma_0,\Sigma_1)\to\Cth_+(\Sigma_0,\Sigma_d)
\end{alignat*}
by counts of SFT-buildings consisting of pseudo-holomorphic discs with boundary on the $\Sigma_i$'s and asymptotic to Reeb chords and intersection points. Then, for any $1\leq k\leq d$ and sub-tuple $(\Sigma_{i_0},\dots,\Sigma_{i_k})$ with $0\leq i_0<\dots< i_k\leq d$, one has
\begin{alignat}{1}
\sum\limits_{j=1}^k\sum\limits_{n=0}^{k-j}\mfm_{k-j+1}\big(\id^{\otimes k-j-n}\otimes\mfm_j\otimes\id^{\otimes n}\big)=0\label{Ainf-eq}
\end{alignat}
where the inner $\mfm_j$ has domain $\Cth_+(\Sigma_{i_{n+j-1}},\Sigma_{i_{n+j}})\otimes\dots\otimes\Cth_+(\Sigma_{i_n},\Sigma_{i_{n+1}})$ and $\mfm_{k-j+1}$ has domain $\Cth_+(\Sigma_{i_{k-1}},\Sigma_{i_k})\otimes\dots\otimes\Cth_+(\Sigma_{i_{n}},\Sigma_{i_{n+j}})\otimes\dots\otimes\Cth_+(\Sigma_{i_0},\Sigma_{i_1})$.\\
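To unpack equation \eqref{Ainf-eq}, let us spell out its two lowest-order instances over $\Z_2$: for $k=1$ it reads $\mfm_1\circ\mfm_1=0$, so that $\mfm_1$ is a differential, while for $k=2$ it is the Leibniz rule
\begin{alignat*}{1}
\mfm_1\big(\mfm_2(y,x)\big)+\mfm_2\big(\mfm_1(y),x\big)+\mfm_2\big(y,\mfm_1(x)\big)=0
\end{alignat*}
for $x\in\Cth_+(\Sigma_{i_0},\Sigma_{i_1})$ and $y\in\Cth_+(\Sigma_{i_1},\Sigma_{i_2})$.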
In the case when $\Sigma_1$ is a suitable small Hamiltonian perturbation of $\Sigma_0$, one establishes the existence of a \textit{continuation element} in $\Cth_+(\Sigma_0,\Sigma_1)$ (see Section \ref{sec:unit} for a precise description of the Hamiltonian perturbation $\Sigma_1$):
\begin{teo} There exists an element $e_{\Sigma_0,\Sigma_1}\in\Cth_+(\Sigma_0,\Sigma_1)$ such that for any exact Lagrangian cobordism $\Sigma_2$ transverse to $\Sigma_0$ and $\Sigma_1$, the map
\begin{alignat*}{1}
\mfm_2(\,\cdot\,,e_{\Sigma_0,\Sigma_1}):\Cth_+(\Sigma_1,\Sigma_2)\to\Cth_+(\Sigma_0,\Sigma_2)
\end{alignat*}
is a quasi-isomorphism.
\end{teo}
Finally we use these ingredients to construct a unital $A_\infty$-category $\Fuk(\mathbb R\times Y)$ via localization, in the same spirit as the construction of the wrapped Fukaya category of Liouville sectors in \cite{GPS}:
\begin{teo}
There exists a unital $A_\infty$-category $\Fuk(\mathbb R\times Y)$ whose objects are Lagrangian cobordisms equipped with augmentations of its negative ends and whose morphism spaces in the cohomological category satisfy $H^*\hom_{\Fuk(\mathbb R\times Y)}(\Sigma_0,\Sigma_1)\cong H^*(\Cth_+(\Sigma_0,\Sigma_1),\mfm_1)$ whenever $\Sigma_0$ and $\Sigma_1$ are transverse.
\end{teo}
The homology of the Rabinowitz complex $\Cth_+(\Sigma_0,\Sigma_1)$ is invariant under Hamiltonian isotopies that are cylindrical at infinity (in particular under Legendrian isotopies of the ends). This implies that the quasi-equivalence class of the category $\Fuk(\mathbb R\times Y)$ does not depend on the choices of representatives of Hamiltonian isotopy classes of Lagrangian cobordisms involved in its construction by localization (see Section \ref{sec:Fuk}).\\
\noindent\textbf{Behaviour under concatenation:}
Given a pair of concatenated cobordisms $(V_0\odot W_0,V_1\odot W_1)$, we describe the complex $\Cth_+(V_0\odot W_0,V_1\odot W_1)$ in terms of the complexes $\Cth_+(V_0,V_1)$ and $\Cth_+(W_0,W_1)$ and some \textit{transfer maps} fitting into a diagram
\begin{alignat*}{1}
\Cth_+(V_0,V_1)\xleftarrow{\boldsymbol{\Delta}_1^W}\Cth_+(V_0\odot W_0,V_1\odot W_1)\xrightarrow{\boldsymbol{b}_1^V}\Cth_+(W_0,W_1)
\end{alignat*}
We prove that $\boldsymbol{\Delta}_1^W$ and $\boldsymbol{b}_1^V$ are chain maps which induce a Mayer-Vietoris sequence and moreover preserve the continuation element in homology.\\
In addition, the transfer maps also generalize to families of maps $\{\boldsymbol{\Delta}_d\}_{d\geq1}$ and $\{\boldsymbol{b}_d\}_{d\geq1}$ satisfying the $A_\infty$-functor equations. That is to say, for a $(d+1)$-tuple of concatenated cobordisms $(V_0\odot W_0,\dots,V_d\odot W_d)$ there are maps
\begin{alignat*}{1}
&\mfm_d^{V\odot W}:\Cth_+(V_{d-1}\odot W_{d-1},V_d\odot W_d)\otimes\dots\otimes\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(V_0\odot W_0,V_d\odot W_d)\\
&\boldsymbol{\Delta}_d^W:\Cth_+(V_{d-1}\odot W_{d-1},V_d\odot W_d)\otimes\dots\otimes\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(V_0,V_d)\\
&\boldsymbol{b}_d^V:\Cth_+(V_{d-1}\odot W_{d-1},V_d\odot W_d)\otimes\dots\otimes\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(W_0,W_d)
\end{alignat*}
such that for all $1\leq k\leq d$ and sub-tuple $(V_{i_0}\odot W_{i_0},\dots,V_{i_k}\odot W_{i_k})$ with $0\leq i_0<\dots< i_k\leq d$, the maps $\{\mfm_k^{V\odot W}\}_{1\leq k\leq d}$ satisfy the $A_\infty$-equations \eqref{Ainf-eq}, and the maps $\{\boldsymbol{\Delta}_k^W\}_{1\leq k\leq d}$ and $\{\boldsymbol{b}_k^V\}_{1\leq k\leq d}$ satisfy
\begin{alignat*}{1}
&\sum_{s=1}^k\sum_{j_1+\dots+j_s=k}\mfm_s^V\big(\boldsymbol{\Delta}_{j_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{j_1}^W\big)+\sum_{j=1}^{k}\sum_{n=0}^{k-j}\boldsymbol{\Delta}_{k-j+1}^W\big(\id^{\otimes k-j-n}\otimes\mfm_j^{V\odot W}\otimes\id^{\otimes n}\big)=0\\
&\sum_{s=1}^k\sum_{j_1+\dots+j_s=k}\mfm_s^W\big(\boldsymbol{b}_{j_s}^V\otimes\dots\otimes\boldsymbol{b}_{j_1}^V\big)+\sum_{j=1}^{k}\sum_{n=0}^{k-j}\boldsymbol{b}_{k-j+1}^V\big(\id^{\otimes k-j-n}\otimes\mfm_j^{V\odot W}\otimes\id^{\otimes n}\big)=0.
\end{alignat*}
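For instance, for $k=1$ these equations reduce to
\begin{alignat*}{1}
\mfm_1^V\circ\boldsymbol{\Delta}_1^W+\boldsymbol{\Delta}_1^W\circ\mfm_1^{V\odot W}=0,\qquad \mfm_1^W\circ\boldsymbol{b}_1^V+\boldsymbol{b}_1^V\circ\mfm_1^{V\odot W}=0,
\end{alignat*}
recovering the fact that the transfer maps $\boldsymbol{\Delta}_1^W$ and $\boldsymbol{b}_1^V$ are chain maps.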
\vspace{2mm}
\noindent\textbf{Acknowledgements} The author warmly thanks Baptiste Chantraine, Georgios Dimitroglou-Rizell and Paolo Ghiggini for helpful discussions and comments on earlier versions of the paper, as well as Alexandru Oancea for stimulating discussions. The author was partly supported by the grant KAW 2016.0198 from the Knut and Alice Wallenberg Foundation and the Swedish Research Council under the grant no. 2016-03338.
\section{Background}
\subsection{Geometric set-up}
Throughout the paper we will be working with a contact manifold $(Y,\alpha)$ given by the contactization of a \textit{Liouville manifold}. We briefly recall the definition of these terms.
A \textit{Liouville domain} $(\widehat{P},\theta)$ is the data of a $2n$-dimensional manifold with boundary $\widehat{P}$ as well as a $1$-form $\theta$ on $\widehat{P}$ such that $d\theta$ is symplectic, and the Liouville vector field $V$ defined by $\iota_Vd\theta=\theta$ is required to point outward on the boundary $\partial\widehat{P}$. In particular, $\theta_{|\partial\widehat{P}}$ is a contact form on $\partial\widehat{P}$.
The \textit{completion} of $(\widehat{P},\theta)$ is the exact symplectic manifold $(P=\widehat{P}\cup_{\partial \widehat{P}}[0,\infty)\times\partial \widehat{P},\omega=d\beta)$, where $\beta$ equals $\theta$ in $\widehat{P}$ and $e^\tau\theta_{|\partial\widehat{P}}$ on $[0,\infty)\times\partial\widehat{P}$ where $\tau$ is the coordinate on $[0,\infty)$. The Liouville vector field smoothly extends to the whole manifold $P$. We call $(P,\beta)$ a \textit{Liouville manifold}.
The \textit{contactization} of a Liouville manifold $(P,\beta)$ is the contact manifold $(Y,\alpha)$ where $Y$ is the $(2n+1)$-dimensional manifold $Y=P\times\mathbb R$ and $\alpha=dz+\beta$, where $z$ is the $\mathbb R$-coordinate. The Reeb vector field of $\alpha$ is given by $R_\alpha=\partial_z$, so in particular there are no closed Reeb orbits in $Y$. A \textit{Legendrian submanifold} of $(Y,\alpha)$ is an $n$-dimensional submanifold $\Lambda$ satisfying $\alpha_{|T\Lambda}=0$, and \textit{Reeb chords} of $\Lambda$ are trajectories of the Reeb flow starting and ending on $\Lambda$. We consider only Legendrians with a finite number of isolated Reeb chords, and denote by $\mathcal{R}(\Lambda)$ the set of Reeb chords of $\Lambda$; these are called pure Reeb chords. Given two Legendrians $\Lambda_0$ and $\Lambda_1$, we denote by $\mathcal{R}(\Lambda_1,\Lambda_0)$ the set of Reeb chords starting on $\Lambda_0$ and ending on $\Lambda_1$; these are called mixed Reeb chords.
The main objects under consideration in this article are exact Lagrangian cobordisms between Legendrian submanifolds of $Y$. These are Lagrangian submanifolds in the \textit{symplectization} of $(Y,\alpha)$ which is the symplectic manifold $(\mathbb R\times Y,d(e^t\alpha))$ where $t$ is the $\mathbb R$-coordinate.
\begin{defi}
Given $\Lambda^-,\Lambda^+\subset Y$ Legendrian, an \textit{exact Lagrangian cobordism} from $\Lambda^-$ to $\Lambda^+$ is a submanifold $\Sigma\subset\mathbb R\times Y$ such that there exists
\begin{itemize}
\item $T>0$ such that
\begin{enumerate}
\item $\Sigma\cap[T,\infty)\times Y=[T,\infty)\times\Lambda^+$,
\item $\Sigma\cap(-\infty,-T]\times Y=(-\infty,-T]\times\Lambda^-$,
\item $\Sigma\cap[-T,T]\times Y$ is compact.
\end{enumerate}
\item $f:\Sigma\to\mathbb R$ a smooth function called a \textit{primitive} of $\Sigma$, satisfying
\begin{enumerate}
\item $e^t\alpha_{|T\Sigma}=df$,
\item $f$ is constant on $[T,\infty)\times\Lambda^+$ and $(-\infty,-T]\times\Lambda^-$.
\end{enumerate}
\end{itemize}
\end{defi}
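For example, for any Legendrian $\Lambda\subset Y$ the trivial cylinder $\mathbb R\times\Lambda$ is an exact Lagrangian cobordism from $\Lambda$ to $\Lambda$: since $\alpha$ vanishes on $T\Lambda$ and $\iota_{\partial_t}\alpha=0$, the form $e^t\alpha$ restricts to zero on $T(\mathbb R\times\Lambda)$, so that any constant function is a primitive.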
Throughout the paper, we assume that the coefficient field is $\Z_2$. Moreover, we assume that $2c_1(P)=0$ and that the Legendrian submanifolds and the Lagrangian cobordisms between them have Maslov number $0$. This ensures a well-defined $\Z$-grading for the various complexes that will appear.
\subsection{Almost complex structure}
Given a family of pairwise transverse Lagrangian cobordisms $(\Sigma_0,\dots,\Sigma_d)$ with Legendrian cylindrical ends $\mathbb R\times\Lambda_i^\pm$, $0\leq i\leq d$, we consider several types of moduli spaces of pseudo-holomorphic discs with boundary on those Lagrangian cobordisms. Those discs are asymptotic to intersection points and/or Reeb chords of the links $\Lambda_0^\pm\cup\dots\cup\Lambda_d^\pm$. First, let us describe briefly the almost complex structure we consider on $\mathbb R\times Y$, in order to define the moduli spaces mentioned above and achieve transversality.
An almost complex structure $J$ on $(\mathbb R\times Y,d(e^t\alpha))$ is called \textit{cylindrical} if
\begin{itemize}
\item it is compatible with $d(e^t\alpha)$,
\item $J(\partial_t)=R_\alpha$,
\item $J(\xi)=\xi$,
\item $J$ is invariant under translation in the $t$-coordinate.
\end{itemize}
We denote by $\mathcal{J}^{cyl}(\mathbb R\times Y)$ the set of cylindrical almost complex structures on $\mathbb R\times Y$.
An almost complex structure on $P$ is called \textit{admissible} if it is cylindrical on $P\backslash\widehat{P}$ outside of a compact set.
The \textit{cylindrical lift} of an admissible almost complex structure $J_P$ on $P$ is the unique cylindrical almost complex structure $\widetilde{J}_P$ on $\mathbb R\times(P\times\mathbb R)$ making the projection $\pi_P:\mathbb R\times(P\times\mathbb R)\to P$ holomorphic.
Let $J^+$ and $J^-$ be two cylindrical almost complex structures which coincide outside of $\mathbb R\times K$ for some compact $K\subset Y$. Assuming that the cobordisms we consider are all cylindrical outside of $[-T,T]\times Y$ for some fixed $T>0$, we take an almost complex structure $J$ which is equal to $J^-$ on $(-\infty,-T)\times Y$, to $J^+$ on $(T,+\infty)\times Y$, and to the cylindrical lift of an admissible complex structure $J_P$ on $[-T,T]\times (Y\backslash K)$. We denote $\mathcal{J}_{J^+,J^-}(\mathbb R\times Y)$ this class of almost complex structures on $\mathbb R\times Y$.
In order to achieve transversality for the moduli spaces later on, we will finally need domain dependent almost complex structures with values in $\mathcal{J}_{J^+,J^-}(\mathbb R\times Y)$, i.e. families of almost complex structures in $\mathcal{J}_{J^+,J^-}(\mathbb R\times Y)$ parametrized by the domains of the pseudo-holomorphic discs (punctured Riemann discs), which is part of a \textit{universal choice of perturbation data}, see \cite[Section (9h)]{S}.
\subsection{Moduli spaces of curves with boundary on Lagrangian cobordisms}\label{sec:mod}
Let $\mathcal{R}^{d+1}$ be the space of $d+1$ cyclically ordered points $\boldsymbol{y}=(y_0,\dots,y_d)\in (S^1)^{d+1}$ quotiented by the automorphisms of the unit disc $D^2$; this is the Deligne-Mumford space. For $\boldsymbol{y}\in\mathcal{R}^{d+1}$, let us denote $S_{\boldsymbol{y}}=D^2\backslash\{y_0,\dots,y_d\}$. In a sufficiently small neighborhood of each puncture $y_i$ in the disc, we have strip-like end coordinates $(s_i,t_i)\in(0,+\infty)\times[0,1]$, $0\leq i\leq d$.
Let us denote by $\Sigma_{0...d}=(\Sigma_0,\dots,\Sigma_d)$ a $(d+1)$-tuple of Lagrangian cobordisms satisfying the following:
\begin{itemize}
\item if $\Sigma_0=\Sigma_d$, then $\Sigma_i=\Sigma_0$ for all $1\leq i\leq d$,
\item if $\Sigma_0\neq\Sigma_d$, then the ordered family $\Sigma_0,\dots,\Sigma_d$ is of the form
$$\Sigma_{i_0},\dots,\Sigma_{i_0},\Sigma_{i_1},\dots,\Sigma_{i_1},\Sigma_{i_2},\dots,\Sigma_{i_k},$$ with $\Sigma_{i_0}:=\Sigma_0$ and $\Sigma_{i_k}:=\Sigma_d$, and such that $\Sigma_{i_0},\Sigma_{i_1},\dots,\Sigma_{i_k}$ are pairwise transverse. In other words, we allow only consecutive repetitions of a given Lagrangian.
\end{itemize}
The set of \textit{asymptotics} $A(\Sigma_{i-1},\Sigma_i)$ associated to the pair $(\Sigma_{i-1},\Sigma_i)$ consists of Reeb chords in $\mathcal{R}(\Lambda_{i-1}^\pm\cup\Lambda_i^\pm)$, and intersection points in $\Sigma_{i-1}\cap\Sigma_i$ when the cobordisms are transverse.
Consider a $(d+1)$-tuple of asymptotics $(a_0,\dots,a_d)$, with $a_i\in A(\Sigma_{i-1},\Sigma_i)$, where $\Sigma_{-1}:=\Sigma_d$ and $\Lambda_{-1}^\pm:=\Lambda_d^\pm$. If $\Sigma_{i-1}=\Sigma_i$, then $a_i$ is called a \textit{pure asymptotic}, and it is a pure Reeb chord of $\Lambda_{i-1}^\pm=\Lambda_i^\pm$, while if $\Sigma_{i-1}\neq\Sigma_i$, then $a_i$ is called a \textit{mixed asymptotic}.
Given an almost complex structure $J$ on $\mathbb R\times Y$, we denote by $\mathcal{M}_{\Sigma_{0...d},J}(a_0;a_1,\dots,a_d)$ the set of pairs $(\boldsymbol{y},u)$ where
\begin{enumerate}
\item $\boldsymbol{y}\in\mathcal{R}^{d+1}$,
\item $u:(S_{\boldsymbol{y}},j)\to(\mathbb R\times Y,J)$ is a pseudo-holomorphic map (with $j$ standard a.c.s. on $D^2$),
\item $u$ maps the part of the boundary of $S_{\boldsymbol{y}}$ between $y_i$ and $y_{i+1}$, for $0\leq i\leq d$ ($y_{d+1}:=y_0$), to $\Sigma_i$,
\item $\lim_{z\to y_i}u(z)=a_i$.
\end{enumerate}
Let us specify condition (4) in the case where $a_i$ is a Reeb chord, for which we also denote by $a_i:[0,1]\to Y$ a parametrization. We say that
\begin{itemize}
\item $u$ has a \textit{positive asymptotic to $a_i$ at $y_i$} if $\lim\limits_{s_i\to+\infty}u(s_i,t_i)=a_i(t_i)$,
\item $u$ has a \textit{negative asymptotic to $a_i$ at $y_i$} if $\lim\limits_{s_i\to+\infty}u(s_i,t_i)=a_i(1-t_i)$.
\end{itemize}
In particular, near a positive (resp. negative) Reeb chord asymptotic the $t$-coordinate of $u$ tends to $+\infty$ (resp. $-\infty$).
\begin{rem}\label{rem:label}
Note that the fact that a mixed Reeb chord asymptotic is a positive or a negative asymptotic is entirely determined by the \textquotedblleft jump\textquotedblright\, of the chord. Namely, positive mixed Reeb chord asymptotics are mixed chords of $\Lambda_i^+\cup\Lambda_{i+1}^+$ from $\Lambda_i^+$ to $\Lambda_{i+1}^+$, and negative mixed Reeb chord asymptotics are mixed Reeb chords of $\Lambda_i^-\cup\Lambda_{i+1}^-$ from $\Lambda_{i+1}^-$ to $\Lambda_i^-$.
\end{rem}
\begin{nota}
From now on, we denote the Lagrangian boundary condition for discs only by the family $(\Sigma_{i_0},\Sigma_{i_1},\dots,\Sigma_{i_k})$, even though the pseudo-holomorphic discs we consider can have pure Reeb chord asymptotics too.
\end{nota}
In the following two subsections we describe the several types of moduli spaces we will make use of later.
\subsubsection{Moduli spaces of curves with cylindrical boundary conditions}\label{mod_cyl}
The Lagrangian boundary conditions we consider here are trivial
cylinders over Legendrians, and we take an almost complex structure $J\in\mathcal{J}^{cyl}(\mathbb R\times Y)$. If the boundary condition consists of only one cylinder $\mathbb R\times\Lambda$, then we denote by
\begin{alignat*}{1}
\mathcal{M}_{\mathbb R\times\Lambda,J}(\gamma^+;\gamma_1,\dots,\gamma_d)
\end{alignat*}
the moduli space of discs with boundary on $\mathbb R\times\Lambda$, with a positive asymptotic to $\gamma^+$ and negative asymptotics to $\gamma_i$ for $1\leq i\leq d$. We call the discs in such moduli spaces \textit{pure}, as all their asymptotics are pure.
In case the Lagrangian condition is a family of distinct transverse cylinders $\mathbb R\times\Lambda_{0...d}:=(\mathbb R\times\Lambda_0,\dots,\mathbb R\times\Lambda_d)$ with $d>0$, we consider the following:
\begin{enumerate}
\item Banana-type moduli spaces:
\begin{alignat*}{1}
\mathcal{M}_{\mathbb R\times\Lambda_{0...d},J}(\gamma_{d,0};\boldsymbol{\delta}_0,\gamma_1,\boldsymbol{\delta}_1,\dots,\gamma_d,\boldsymbol{\delta}_d)
\end{alignat*}
where $\gamma_{d,0}\in\mathcal{R}(\Lambda_0,\Lambda_d)$ is a mixed Reeb chord from $\Lambda_d$ to $\Lambda_0$, $\gamma_i\in\mathcal{R}(\Lambda_{i-1},\Lambda_{i})\cup\mathcal{R}(\Lambda_{i},\Lambda_{i-1})$ are mixed chords of $\Lambda_{i-1}\cup\Lambda_i$ and $\boldsymbol{\delta}_i$ are words of Reeb chords of $\Lambda_i$ and are negative asymptotics. Note that according to Remark \ref{rem:label}, $\gamma_{d,0}$ is a positive Reeb chord asymptotic and then $\gamma_i$ is a positive asymptotic if it is in $\mathcal{R}(\Lambda_{i},\Lambda_{i-1})$ and a negative one if it is in $\mathcal{R}(\Lambda_{i-1},\Lambda_{i})$.
\item $\Delta$-type moduli spaces:
\begin{alignat*}{1}
\mathcal{M}_{\mathbb R\times\Lambda_{0...d},J}(\gamma_{0,d};\boldsymbol{\delta}_0,\gamma_1,\boldsymbol{\delta}_1,\dots,\gamma_d,\boldsymbol{\delta}_d)
\end{alignat*}
where $\gamma_{0,d}\in\mathcal{R}(\Lambda_d,\Lambda_0)$ is a negative Reeb chord asymptotic and with the same condition as above on asymptotics $\gamma_i$ and $\boldsymbol{\delta}_i$.
\end{enumerate}
The discs in moduli spaces of type (1) and (2) are called \textit{mixed} as $d+1$ asymptotics are mixed.
There is an $\mathbb R$-action by translation on moduli spaces with cylindrical Lagrangian boundary conditions; we use the notation $\widetilde{\mathcal{M}}$ to denote the quotient of the moduli space $\mathcal{M}$ by this $\mathbb R$-action.
\subsubsection{Moduli spaces of curves with boundary on non-cylindrical Lagrangians}\label{mod_non_cyl}
The Lagrangian boundary conditions consist of Lagrangian cobordisms $(\Sigma_0,\dots,\Sigma_d)$ such that at least one is not a trivial cylinder. Denote by $\mathbf{J}$ a domain-dependent almost complex structure with values in $\mathcal{J}_{J^+,J^-}(\mathbb R\times Y)$. If $d=0$, $\Sigma:=\Sigma_0$ is a non-trivial cobordism from $\Lambda^-$ to $\Lambda^+$ and we denote by
\begin{alignat*}{1}
\mathcal{M}_{\Sigma,\mathbf{J}}(\gamma^+;\gamma_1,\dots,\gamma_d)
\end{alignat*}
the moduli space of discs where $\gamma^+\in\mathcal{R}(\Lambda^+)$ is a positive Reeb chord asymptotic and $\gamma_i\in\mathcal{R}(\Lambda^-)$ are negative Reeb chord asymptotics. Again, we call those discs \textit{pure}. If the Lagrangian boundary condition consists of several distinct Lagrangians $\Sigma_{0...d}=(\Sigma_0,\dots,\Sigma_d)$, where $d>0$ and $\Sigma_i$ is a cobordism from $\Lambda_i^-$ to $\Lambda_i^+$, then we consider the following \textit{mixed} moduli spaces:
\begin{enumerate}
\item Banana-type moduli space: $\mathcal{M}_{\Sigma_{0...d},\mathbf{J}}(\gamma_{d,0};\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,a_d,\boldsymbol{\delta}_d)$,
\item $\mfm_0$-type moduli space:
$\mathcal{M}_{\Sigma_{0...d},\mathbf{J}}(x;\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,a_d,\boldsymbol{\delta}_d)$,
\item $\Delta$-type moduli space
$\mathcal{M}_{\Sigma_{0...d},\mathbf{J}}(\gamma_{0,d};\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,a_d,\boldsymbol{\delta}_d)$,
\end{enumerate}
where $\gamma_{d,0}\in\mathcal{R}(\Lambda_0^+,\Lambda_d^+)$ is a positive Reeb chord asymptotic, $\gamma_{0,d}\in\mathcal{R}(\Lambda_d^-,\Lambda_0^-)$ is a negative Reeb chord asymptotic, $a_i$ are intersection points in $\Sigma_{i-1}\cap\Sigma_i$ or mixed Reeb chord asymptotics in $\mathcal{R}(\Lambda_i^+,\Lambda_{i-1}^+)\cup\mathcal{R}(\Lambda_{i-1}^-,\Lambda_{i}^-)$, and $\boldsymbol{\delta}_i$ are words of pure Reeb chords of $\Lambda_i^-$.
\subsection{Action and energy}
Consider a $(d+1)$-tuple of pairwise transverse cobordisms $(\Sigma_0,\dots,\Sigma_d)$ with cylindrical ends over $\Lambda_i^{\pm}$ and primitives $f_i$. Let $T>0$ and $\varepsilon>0$ be such that all the cobordisms are cylindrical outside of $[-T+\varepsilon,T-\varepsilon]\times Y$. The \textit{length} of a chord $\gamma$ is defined by $\ell(\gamma):=\int_\gamma\alpha$. Then, the \textit{action} of asymptotics is defined as follows, where $\mathfrak{c}_i$ denotes the (constant) value of the primitive $f_i$ on the positive cylindrical end of $\Sigma_i$:
\begin{alignat*}{1}
&\mathfrak{a}(\gamma)=e^T\ell(\gamma)+\mathfrak{c}_j-\mathfrak{c}_i\,\,\mbox{ for } \gamma\in \mathcal{R}(\Lambda_i^+,\Lambda_j^+),\\
&\mathfrak{a}(x)=f_j(x)-f_i(x)\,\,\mbox{ for }\,x\in\Sigma_i\cap\Sigma_j,\,i<j,\\
&\mathfrak{a}(\gamma)=e^{-T}\ell(\gamma)\,\,\mbox{ for } \gamma\in \mathcal{R}(\Lambda_i^-,\Lambda_j^-),\\
&\mathfrak{a}(\gamma)=e^{\pm T}\ell(\gamma)\,\,\mbox{ for } \gamma\in \mathcal{R}(\Lambda^\pm).
\end{alignat*}
Given a function $\chi:\mathbb R\to\mathbb R$ satisfying, with $T$ and $\varepsilon$ as above,
$$\chi(t)=\left\{\begin{array}{ccl}
e^T&\mbox{ for } &t\geq T\\
e^t&\mbox{ for }& -T+\varepsilon\leq t\leq T-\varepsilon\\
e^{-T}&\mbox{ for } &t\leq-T
\end{array}\right.$$
and $\chi'(t)\geq0$, one defines the \textit{energy} of a pseudo-holomorphic disc $u$ to be:
\begin{alignat*}{1}
E(u)=\int_ud(\chi(t)\alpha)
\end{alignat*}
This energy is always non-negative (each term of $d(\chi(t)\alpha)=\chi'(t)\,dt\wedge\alpha+\chi(t)\,d\alpha$ evaluates non-negatively along a pseudo-holomorphic disc), and it vanishes if and only if the disc is constant. The energy of the pseudo-holomorphic discs considered in this paper is finite and can be expressed in terms of the actions of the asymptotics.
\begin{prop} For the moduli spaces described in Sections \ref{mod_cyl} and \ref{mod_non_cyl}, we have the following:
\begin{enumerate}
\item If $u\in\mathcal{M}_{\mathbb R\times\Lambda}(\gamma^+;\gamma_1,\dots,\gamma_d)$, then
$E(u)=\mathfrak{a}(\gamma^+)-\sum_i\mathfrak{a}(\gamma_i)$.
\item Assume $\{\gamma_1,\dots,\gamma_d\}=\{\gamma_1^+,\dots,\gamma_{j^+}^+,\gamma_1^-,\dots,\gamma_{j^-}^-\}$ where $\gamma_i^+$ are positive Reeb chord asymptotics and $\gamma_i^-$ are negative Reeb chord asymptotics, then
\begin{enumerate}
\item if $u\in\mathcal{M}_{\mathbb R\times\Lambda_{0...d}}(\gamma_{d,0};\boldsymbol{\delta}_0,\gamma_1,\boldsymbol{\delta}_1,\dots,\gamma_d,\boldsymbol{\delta}_d)$, $$E(u)=\mathfrak{a}(\gamma_{d,0})+\sum_{i=1}^{j^+}\mathfrak{a}(\gamma_i^+)-\sum_{i=1}^{j^-}\mathfrak{a}(\gamma_i^-)-\sum_{i=0}^d\mathfrak{a}(\boldsymbol{\delta}_i),$$
\item if $u\in\mathcal{M}_{\mathbb R\times\Lambda_{0...d}}(\gamma_{0,d};\boldsymbol{\delta}_0,\gamma_1,\boldsymbol{\delta}_1,\dots,\gamma_d,\boldsymbol{\delta}_d)$, $$E(u)=-\mathfrak{a}(\gamma_{0,d})+\sum_{i=1}^{j^+}\mathfrak{a}(\gamma_i^+)-\sum_{i=1}^{j^-}\mathfrak{a}(\gamma_i^-)-\sum_{i=0}^d\mathfrak{a}(\boldsymbol{\delta}_i).$$
\end{enumerate}
\item If $u\in\mathcal{M}_\Sigma(\gamma^+;\gamma_1,\dots,\gamma_d)$, then $E(u)=\mathfrak{a}(\gamma^+)-\sum_i\mathfrak{a}(\gamma_i)$.
\item Assume $\{a_1,\dots,a_d\}=\{\gamma_1^+,\dots,\gamma_{j^+}^+,\gamma_1^-,\dots,\gamma_{j^-}^-,q_1,\dots,q_l\}$, where $\gamma_i^+$ are positive mixed Reeb chords of $\bigcup\Lambda_i^+$, $\gamma_i^-$ are negative mixed Reeb chords of $\bigcup\Lambda_i^-$, and $q_i$ are intersection point asymptotics; then
\begin{enumerate}
\item if $u\in\mathcal{M}_{\Sigma_{0...d}}(\gamma_{d,0};\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,a_d,\boldsymbol{\delta}_d)$,
$$E(u)=\mathfrak{a}(\gamma_{d,0})+\sum_{i=1}^{j^+}\mathfrak{a}(\gamma_i^+)-\sum_{i=1}^{j^-}\mathfrak{a}(\gamma_i^-)-\sum_{i=1}^{l}\mathfrak{a}(q_i)-\sum_{i=0}^d\mathfrak{a}(\boldsymbol{\delta}_i),$$
\item if $u\in\mathcal{M}_{\Sigma_{0...d}}(x;\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,a_d,\boldsymbol{\delta}_d)$,
$$E(u)=\mathfrak{a}(x)+\sum_{i=1}^{j^+}\mathfrak{a}(\gamma_i^+)-\sum_{i=1}^{j^-}\mathfrak{a}(\gamma_i^-)-\sum_{i=1}^{l}\mathfrak{a}(q_i)-\sum_{i=0}^d\mathfrak{a}(\boldsymbol{\delta}_i),$$
\item if $u\in\mathcal{M}_{\Sigma_{0...d}}(\gamma_{0,d};\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,a_d,\boldsymbol{\delta}_d)$,
$$E(u)=-\mathfrak{a}(\gamma_{0,d})+\sum_{i=1}^{j^+}\mathfrak{a}(\gamma_i^+)-\sum_{i=1}^{j^-}\mathfrak{a}(\gamma_i^-)-\sum_{i=1}^{l}\mathfrak{a}(q_i)-\sum_{i=0}^d\mathfrak{a}(\boldsymbol{\delta}_i),$$
\end{enumerate}
\end{enumerate}
\end{prop}
\subsection{Grading}\label{sec:grad}
Given cobordisms $(\Sigma_0,\dots,\Sigma_d)$ as above, we associate a grading to the asymptotics of pseudo-holomorphic discs with boundary on $\Sigma_{0...d}$ using the Conley-Zehnder index. We refer for example to \cite{EES1} for the definition of this index.
\begin{enumerate}
\item \textbf{Grading of Reeb chords:} Consider a connected Legendrian $\Lambda\subset Y$. Then the grading of a Reeb chord $\gamma\in\mathcal{R}(\Lambda)$ is defined to be
\begin{alignat*}{1}
|\gamma|=\CZ(\gamma)-1
\end{alignat*}
where $\CZ(\gamma)$ denotes the Conley-Zehnder index of a capping path for $\gamma$. Note that it does not depend on the choice of capping path, as by hypothesis we consider Maslov $0$ Legendrians; nor does it depend on a choice of symplectic trivialization of $TP$ along the capping path, as $2c_1(P)=0$. If the Legendrian $\Lambda$ is not connected, there are no capping paths for Reeb chords connecting two distinct components, so some additional choices are needed (see \cite{DR}). If $\gamma$ is a chord from $\Lambda_j$ to $\Lambda_i$, one fixes points $p_j\in\Lambda_j$ and $p_i\in\Lambda_i$ and a path $\Gamma_{i,j}$ from $p_i$ to $p_{j}$, as well as a path of Lagrangians from $T_{p_i}\pi_P(\Lambda_{i})$ to $T_{p_{j}}\pi_P(\Lambda_{j})$. Then, one takes as capping path for $\gamma$ a path from the ending point of $\gamma$ to $p_i$, followed by $\Gamma_{i,j}$, followed by a path from $p_{j}$ to the starting point of $\gamma$. The grading of mixed chords depends on those additional paths, but the difference in grading of two chords does not. A basic example is given after this list.
\item \textbf{Grading of intersection points:}
Let $p\in\Sigma_i\cap\Sigma_j$, for $i<j$. Generically, the immersed Lagrangian $\Sigma_i\cup\Sigma_j$ lifts to an embedded Legendrian submanifold $\widetilde{\Sigma}_i\cup\widetilde{\Sigma}_j\subset\big((\mathbb R\times Y)\times\mathbb R_u,du+e^t\alpha\big)$ and $p$ is the projection of a Reeb chord $\gamma_p$ of $\widetilde{\Sigma}_i\cup\widetilde{\Sigma}_j$. If $\gamma_p$ is a chord from $\widetilde{\Sigma}_j$ to $\widetilde{\Sigma}_i$, then $|p|=\CZ(\gamma_p)$. If $\gamma_p$ is a chord from $\widetilde{\Sigma}_i$ to $\widetilde{\Sigma}_j$, then $|p|=n+1-\CZ(\gamma_p)$. These Conley-Zehnder indices are computed after a choice of path connecting $\widetilde{\Sigma}_i$ and $\widetilde{\Sigma}_j$ as explained above for the non-connected case.
Again, the vanishing of the Maslov number for Lagrangian cobordisms, and of the first Chern class of $P$ imply that the grading of intersection points does not depend on the choices made, except paths to connect any two distinct components of the Legendrian lift.
\end{enumerate}
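As a basic example for point (1), the standard Legendrian unknot in $\mathbb R^3=J^1\mathbb R$ (the contactization of $P=T^*\mathbb R$) has a unique Reeb chord $c$, which satisfies $\CZ(c)=2$ and hence $|c|=1$.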
The expected dimension of the moduli spaces described in Sections \ref{mod_cyl} and \ref{mod_non_cyl} can then be expressed in terms of the grading of asymptotics.
Consider the set of asymptotics $(a_1,\dots,a_d)$, and assume again that it decomposes as $\{a_1,\dots,a_d\}=\{\gamma^+_j\}_{1\leq j\leq j^+}\cup\{q_j\}_{1\leq j\leq l}\cup\{\gamma^-_j\}_{1\leq j\leq j^-}$, where the $\gamma_j^+$ are positive Reeb chord asymptotics, the $q_j$ are intersection point asymptotics and the $\gamma_j^-$ are negative Reeb chord asymptotics. For cylindrical boundary conditions, we obviously have $l=0$. Moreover, we assume that the negative asymptotics to words of pure Reeb chords $\boldsymbol{\delta}_i$ have degree $0$.
\begin{prop}\label{teo:grading} Under the decomposition of the set of asymptotics as above, we have:
\begin{alignat*}{1}\displaystyle
&\dim\widetilde{\mathcal{M}}_{\mathbb R\times\Lambda}(\gamma^+;\gamma_1,\dots,\gamma_d)=|\gamma^+|-\sum|\gamma_i|-1,\\
&\dim\widetilde{\mathcal{M}}_{\mathbb R\times\Lambda_{0...d}}(\gamma_{d,0};\boldsymbol{\delta}_0,\gamma_1,\boldsymbol{\delta}_1,\dots,\gamma_d,\boldsymbol{\delta}_d)=|\gamma_{d,0}|+\sum|\gamma_j^+|-\sum|\gamma_j^-|+(2-n)j^+-1,\\
&\dim\widetilde{\mathcal{M}}_{\mathbb R\times\Lambda_{0...d}}(\gamma_{0,d};\boldsymbol{\delta}_0,\gamma_1,\boldsymbol{\delta}_1,\dots,\gamma_d,\boldsymbol{\delta}_d)=-|\gamma_{0,d}|+\sum|\gamma_j^+|-\sum|\gamma_j^-|+(2-n)(j^+-1)-1,\\
&\dim\mathcal{M}_\Sigma(\gamma^+;\gamma_1,\dots,\gamma_d)=|\gamma^+|-\sum|\gamma_i|,\\
&\dim\mathcal{M}_{\Sigma_{0...d}}(\gamma_{d,0};\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,a_d,\boldsymbol{\delta}_d)=|\gamma_{d,0}|+\sum|\gamma_j^+|-\sum|q_j|-\sum|\gamma_j^-|+(2-n)j^++l,\\
&\dim\mathcal{M}_{\Sigma_{0...d}}(x;\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,a_d,\boldsymbol{\delta}_d)=|x|+\sum|\gamma_j^+|-\sum|q_j|-\sum|\gamma_j^-|+(2-n)j^++l-2,\\
&\dim\mathcal{M}_{\Sigma_{0...d}}(\gamma_{0,d};\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,a_d,\boldsymbol{\delta}_d),\\
&\hspace{45mm}=-|\gamma_{0,d}|+\sum|\gamma_j^+|-\sum|q_j|-\sum|\gamma_j^-|+n+(2-n)j^++l-2.
\end{alignat*}
\end{prop}
\begin{nota}
Given a moduli space $\mathcal{M}_{\Sigma_{0...d}}(a_0;a_1,\dots,a_d)$, we add an exponent indicating the (expected) dimension of it as a smooth manifold: $\mathcal{M}^i_{\Sigma_{0...d}}(a_0;a_1,\dots,a_d)$. This dimension is equal to the index of the Fredholm operator obtained by linearizing $\bar\partial$ at a pseudo-holomorphic disc $u$.
\end{nota}
\subsection{Compactness}\label{sec:structure}
This section sums up what will be used to prove almost all the results in this paper. Namely, once transversality is achieved simultaneously for all the moduli spaces considered above, which is possible using a domain-dependent almost complex structure, $0$-dimensional moduli spaces (possibly after quotienting by the $\mathbb R$-action) are compact manifolds. We call the discs in these moduli spaces \textit{rigid discs}.
Then, $1$-dimensional moduli spaces are not necessarily compact and can be compactified by adding \textit{broken discs}. The goal of this section is to describe the types of broken discs one can find in the boundary of the compactification of the moduli spaces in Sections \ref{mod_cyl} and \ref{mod_non_cyl}.
Consider a $1$-dimensional moduli space $\mathcal{M}^1_{L_{0...d}}(a_0;\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,\boldsymbol{\delta}_{d-1},a_d,\boldsymbol{\delta}_d)$ of curves where $L_{0...d}=\mathbb R\times\Lambda_{0...d}$ or $\Sigma_{0...d}$ (in the first case $\mathcal{M}^1_{L_{0...d}}=\widetilde{\mathcal{M}^2}_{L_{0...d}}$), with mixed asymptotics $a_i$ and asymptotics to words of pure Reeb chords $\boldsymbol{\delta}_i$.
By results in \cite{BEHWZ} and \cite[Theorem 3.20]{Abbas}, a sequence of curves in such a moduli space admits a subsequence converging to a \textit{pseudo-holomorphic building} (see the cited references for a precise definition) consisting of several pseudo-holomorphic discs together with \textit{nodes} connecting these components, and choices of asymptotics for these nodes, satisfying the following:
\begin{itemize}
\item each disc in the pseudo-holomorphic building has positive energy, so in particular a component with only Reeb chord asymptotics must have at least one positive Reeb chord asymptotic,
\item each disc has a non-negative Fredholm index, because of the regularity of the almost complex structure,
\item if the building consists of the discs $u_1,\dots,u_k$, then the \textit{glued solution} $u$ has index
$\ind(u)=\nu+\sum_i\ind(u_i)$, where $\nu$ is the number of nodes asymptotic to intersection points.
\end{itemize}
Let us now make precise what these conditions imply in the case of the moduli spaces described in the previous sections.
\begin{enumerate}
\item \textbf{Cylindrical boundary condition:}
Consider a $1$-dimensional moduli space $\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{0...d}}(\gamma_0;\boldsymbol{\delta}_0,\gamma_1,\boldsymbol{\delta}_1,\dots,\boldsymbol{\delta}_{d-1},\gamma_d,\boldsymbol{\delta}_d)$.
The conditions described above imply that a sequence of discs in this moduli space admits a subsequence which limits to a pseudo-holomorphic building consisting of two index $1$ pseudo-holomorphic discs with boundary on cylinders, which glue together along a node asymptotic to a Reeb chord. Remark that this Reeb chord can be pure or mixed; we will later be interested only in the case of nodes asymptotic to mixed Reeb chords, see Remark \ref{rem:delta}.
\item \textbf{Non-cylindrical boundary conditions:}
Consider a $1$-dimensional moduli space $\mathcal{M}^1_{\Sigma_{0...d}}(a_0;\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,\boldsymbol{\delta}_{d-1},a_d,\boldsymbol{\delta}_d)$ of curves with boundary on the cobordisms $\Sigma_0,\dots,\Sigma_d$, with mixed asymptotics $a_i$ and asymptotics to words of pure Reeb chords $\boldsymbol{\delta}_i$.
A sequence of discs in such a moduli space admits a subsequence converging to a pseudo-holomorphic building which is
\begin{enumerate}
\item either a pseudo-holomorphic building with two index $0$ components (which are not trivial strips) with boundary on the non-cylindrical parts of the cobordisms, connected by a node asymptotic to an intersection point, or
\item a pseudo-holomorphic building consisting of some index $0$ components with boundary on the non-cylindrical parts of the cobordisms, and one index $1$ component with boundary on the positive or negative cylindrical ends of the cobordisms, connected to each index $0$ component via a node asymptotic to a Reeb chord.
\end{enumerate}
\end{enumerate}
\begin{nota}
We denote by $\partial\overline{\mathcal{M}^2}_{\mathbb R\times\Lambda_{0...d}}:=\partial\overline{\widetilde{\mathcal{M}^2}}_{\mathbb R\times\Lambda_{0...d}}$, and by $\partial\overline{\mathcal{M}^1}_{\Sigma_{0...d}}$, the sets of pseudo-holomorphic buildings arising at the boundary of the compactification of the corresponding moduli spaces.
\end{nota}
\subsection{Legendrian contact homology}\label{sec:leg}
Consider a compact Legendrian submanifold $\Lambda\subset Y$. We denote by $C(\Lambda)$ the $\Z_2$-vector space generated by Reeb chords of $\Lambda$.
The Legendrian contact homology of $\Lambda$ is an invariant of $\Lambda$ up to Legendrian isotopy, introduced by Chekanov in \cite{Che} and Eliashberg \cite{E} (see also \cite{EES1,EES2}). It is the homology of a differential graded algebra (DGA) $(\mathcal{A}(\Lambda),\partial)$ associated to $\Lambda$. The algebra $\mathcal{A}(\Lambda)$, called the Chekanov-Eliashberg algebra of $\Lambda$, is the unital tensor algebra of $C(\Lambda)$, i.e. $\mathcal{A}(\Lambda)=\bigoplus_{i\geq0}C(\Lambda)^{\otimes i}$ with $C(\Lambda)^{\otimes0}:=\Z_2$.
The grading of Reeb chords is as defined in Section \ref{sec:grad}.
Given a cylindrical almost complex structure, the differential $\partial$ is defined as follows. For $\gamma\in\mathcal{R}(\Lambda)$ we have
\begin{alignat*}{1}
\partial\gamma=\sum_{d\geq 0}\sum_{\gamma_i\in\mathcal{R}(\Lambda)}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda}(\gamma;\gamma_1,\dots,\gamma_d)\cdot\gamma_1\gamma_2\dots\gamma_d
\end{alignat*}
The differential $\partial$ extends to the whole algebra by the Leibniz rule and satisfies $\partial^2=0$. We consider here the SFT definition of the differential, which has been proved in \cite{DR} to give the same invariant as the original version, whose differential is defined by a count of discs in $P$ with boundary on $\pi_P(\Lambda)$.
For a generic cylindrical almost complex structure, the moduli spaces $\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda}(\gamma;\gamma_1,\dots,\gamma_d)$ are compact $0$-dimensional manifolds. The Legendrian contact homology of $\Lambda$, denoted $LCH_*(\Lambda)$, does not depend on a generic choice of almost complex structure and is invariant under Legendrian isotopy.
Consider now an almost complex structure $J\in\mathcal{J}_{J^+,J^-}(\mathbb R\times Y)$. It is proved in \cite{EHK} that an exact Lagrangian cobordism $\Sigma$ from $\Lambda^-$ to $\Lambda^+$ induces a DGA-map $\Phi_\Sigma:\mathcal{A}(\Lambda^+)\to\mathcal{A}(\Lambda^-)$, defined by
\begin{alignat*}{1}
\Phi_\Sigma(\gamma)=\sum_{d\geq 0}\sum_{\gamma_i\in\mathcal{R}(\Lambda^-)}\#\mathcal{M}^0_{\Sigma}(\gamma;\gamma_1,\dots,\gamma_d)\cdot\gamma_1\gamma_2\dots\gamma_d
\end{alignat*}
When $\Sigma$ is a Lagrangian filling of $\Lambda$, i.e. $\Lambda^-=\emptyset$, then $\Phi_\Sigma$ is a map from $\mathcal{A}(\Lambda)$ to $\mathcal{A}(\emptyset):=\Z_2$. It is an instance of an \textit{augmentation} of $\mathcal{A}(\Lambda)$. More generally, we have the following definition.
\begin{defi}
An augmentation of $(\mathcal{A}(\Lambda),\partial)$ over $\Z_2$ is a map $\varepsilon\colon\mathcal{A}(\Lambda)\to\Z_2$ satisfying
\begin{itemize}
\item $\varepsilon\circ\partial=0$
\item $\varepsilon(\gamma)=0$ if $|\gamma|\neq0$,
\item $\varepsilon(1)=1$,
\item $\varepsilon(\gamma_1\gamma_2)=\varepsilon(\gamma_1)\varepsilon(\gamma_2)$,
\end{itemize}
In other words, it is a unital DGA-map when considering $\Z_2$ as a DGA with a vanishing differential.
\end{defi}
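For example, the map sending every Reeb chord to $0$ (and $1$ to $1$) is an augmentation if and only if no differential $\partial\gamma$ contains the constant term $1\in\Z_2$. In general, the existence of an augmentation is a non-trivial constraint on $\Lambda$; for instance, the Chekanov-Eliashberg algebras of stabilized Legendrian knots admit no augmentations.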
Chekanov made use of augmentations to linearize the DGA $(\mathcal{A}(\Lambda),\partial)$, leading to finite-dimensional invariants called \textit{linearized Legendrian contact homologies}. Bourgeois and Chantraine \cite{BCh} generalized this idea using two augmentations instead of one:
assuming that $\mathcal{A}(\Lambda)$ admits augmentations $\varepsilon_0,\varepsilon_1$, there is a complex $(LCC_*^{\varepsilon_0,\varepsilon_1}(\Lambda),\partial^{\varepsilon_0,\varepsilon_1}_1)$, where $LCC_*^{\varepsilon_0,\varepsilon_1}(\Lambda):=C(\Lambda)$ and, for a Reeb chord $\gamma$,
\begin{alignat*}{1}
\partial_1^{\varepsilon_0,\varepsilon_1}(\gamma)=\sum_{d\geq 0}\sum_{\gamma_1,\dots,\gamma_d\in\mathcal{R}(\Lambda)}\sum_{i=1}^d\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda}(\gamma;\gamma_1,\dots,\gamma_d)\cdot\varepsilon_0(\gamma_1\dots\gamma_{i-1})\varepsilon_1(\gamma_{i+1}\dots\gamma_d)\cdot\gamma_i
\end{alignat*}
This map satisfies $(\partial_1^{\varepsilon_0,\varepsilon_1})^2=0$.
The \textit{Legendrian contact homology of $\Lambda$ bilinearized by $(\varepsilon_0,\varepsilon_1)$} is the homology of this complex, denoted $LCH_*^{\varepsilon_0,\varepsilon_1}(\Lambda)$.
The dual complex $(LCC^*_{\varepsilon_0,\varepsilon_1}(\Lambda),\mu_{\varepsilon_0,\varepsilon_1}^1)$ is the complex of the \textit{bilinearized Legendrian contact cohomology of $\Lambda$}, $LCH^*_{\varepsilon_0,\varepsilon_1}(\Lambda)$. For Reeb chords $\gamma,\beta\in\mathcal{R}(\Lambda)$, if we denote by $\langle \partial_1^{\varepsilon_0,\varepsilon_1}(\beta),\gamma\rangle$ the coefficient of $\gamma$ in $\partial_1^{\varepsilon_0,\varepsilon_1}(\beta)$, then we have
\begin{alignat*}{1}
\mu_{\varepsilon_0,\varepsilon_1}^1(\gamma)=\sum_{\beta\in\mathcal{R}(\Lambda)}\langle \partial_1^{\varepsilon_0,\varepsilon_1}(\beta),\gamma\rangle \beta
\end{alignat*}
When $\varepsilon_0=\varepsilon_1$, these complexes correspond to the linearized Legendrian contact (co)homology complexes defined by Chekanov.
Finally, given an augmentation $\varepsilon^-$ of $\mathcal{A}(\Lambda^-)$ and an exact Lagrangian cobordism $\Lambda^-\prec_\Sigma\Lambda^+$, one obtains an induced augmentation $\varepsilon^+:=\varepsilon^-\circ\Phi_\Sigma$ of $\mathcal{A}(\Lambda^+)$.
\subsection{The Cthulhu complex $\Cth$}\label{Cth-}
The Cthulhu homology is the homology of a Floer-type complex defined in \cite{CDGG2} for a pair $\Lambda_0^-\prec_{\Sigma_0}\Lambda_0^+$ and $\Lambda_1^-\prec_{\Sigma_1}\Lambda_1^+$ of transverse exact Lagrangian cobordisms in $\mathbb R\times Y$ such that the algebras $\mathcal{A}(\Lambda_0^-)$ and $\mathcal{A}(\Lambda_1^-)$ admit augmentations $\varepsilon_0^-$ and $\varepsilon_1^-$ respectively. The Cthulhu complex $\big(\Cth(\Sigma_0,\Sigma_1),\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-}\big)$ has three types of generators,
\begin{alignat*}{1}
\Cth(\Sigma_0,\Sigma_1)=C(\Lambda_0^+,\Lambda_1^+)[2]\oplus CF(\Sigma_0,\Sigma_1)\oplus C(\Lambda_0^-,\Lambda_1^-)[1]
\end{alignat*}
where $C(\Lambda_0^+,\Lambda_1^+)[2]$ denotes the $\Z_2$-vector space generated by Reeb chords from $\Lambda_1^+$ to $\Lambda_0^+$ with a grading shift, namely if $\gamma\in C(\Lambda_0^+,\Lambda_1^+)[2]$ then $|\gamma|_{\Cth(\Sigma_0,\Sigma_1)}=|\gamma|+2$; $CF(\Sigma_0,\Sigma_1)$ is the $\Z_2$-vector space generated by intersection points in $\Sigma_0\cap\Sigma_1$; and $C(\Lambda_0^-,\Lambda_1^-)[1]$ is generated by Reeb chords from $\Lambda_1^-$ to $\Lambda_0^-$, with grading shifted by $1$.
The differential is given by the matrix
\begin{alignat*}{1}
\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-}=\left(\begin{matrix} d_{++}&d_{+0}&d_{+-}\\
0&d_{00}&d_{0-}\\
0&d_{-0}&d_{--}\\
\end{matrix}\right)
\end{alignat*}
It is a degree $1$ map defined by a count of rigid pseudo-holomorphic discs with boundary on the cobordisms, as schematized in Figure \ref{fig:cthulhu_diff}. The study of the broken discs arising at the boundary of the compactification of $1$-dimensional moduli spaces gives that $\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-}$ squares to $0$, see \cite[Theorem 4.1]{CDGG2}.
\begin{figure}[ht]
\begin{center}\includegraphics[width=15cm]{Cthulhu2.eps}\end{center}
\caption{Curves contributing to the differential $\mathfrak{d}_{\varepsilon_0^-,\varepsilon_1^-}$; ``\textit{in}'' stands for input and ``\textit{out}'' for output. The ``$0$'' and ``$1$'' indicate the Fredholm index of the respective discs.}
\label{fig:cthulhu_diff}
\end{figure}
Denote by $\big(CF_{-\infty}(\Sigma_0,\Sigma_1),\mfm_1^{-\infty}\big)$ the quotient complex of the Cthulhu complex $\Cth(\Sigma_0,\Sigma_1)$, with $CF_{-\infty}(\Sigma_0,\Sigma_1)=CF(\Sigma_0,\Sigma_1)\oplus C(\Lambda_0^-,\Lambda_1^-)[1]$ and
d_{00}&d_{0-}\\
d_{-0}&d_{--}\\
\end{matrix}\right)$$
In \cite{L} the author proved that, given a triple of pairwise transverse cobordisms $\Sigma_0,\Sigma_1$ and $\Sigma_2$, there is a non-trivial map:
\begin{alignat*}{1}
\mfm_2^{-\infty}:CF_{-\infty}(\Sigma_1,\Sigma_2)\otimes CF_{-\infty}(\Sigma_0,\Sigma_1)\to CF_{-\infty}(\Sigma_0,\Sigma_2)
\end{alignat*}
satisfying the Leibniz rule $\mfm_2^{-\infty}(\mfm_1^{-\infty}\otimes\id)+\mfm_2^{-\infty}(\id\otimes\mfm_1^{-\infty})+\mfm_1^{-\infty}\circ\mfm_2^{-\infty}=0$, see Section \ref{section:def_product} for more details.
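In particular, over $\Z_2$ the Leibniz rule implies that $\mfm_2^{-\infty}$ descends to a product on homology; a short check: if $\mfm_1^{-\infty}(a)=\mfm_1^{-\infty}(b)=0$, then
\begin{alignat*}{1}
\mfm_1^{-\infty}\circ\mfm_2^{-\infty}(b\otimes a)=\mfm_2^{-\infty}\big(\mfm_1^{-\infty}(b)\otimes a\big)+\mfm_2^{-\infty}\big(b\otimes\mfm_1^{-\infty}(a)\big)=0
\end{alignat*}
so the product of two cycles is a cycle, and one checks in the same way that the product of a cycle with a boundary is a boundary.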
\section{The complex $\Cth_+$}\label{sec:Cth}
\subsection{Definition of the complex}\label{section:def_complex}
In this section, we define the Rabinowitz complex $\Cth_+(\Sigma_0,\Sigma_1)$ for a pair of transverse exact Lagrangian cobordisms $\Lambda_0^-\prec_{\Sigma_0}\Lambda_0^+$ and $\Lambda_1^-\prec_{\Sigma_1}\Lambda_1^+$. We assume again that $\mathcal{A}(\Lambda_i^-)$ admits an augmentation $\varepsilon_i^-$ for $i=0,1$, inducing an augmentation $\varepsilon_i^+$ of $\mathcal{A}(\Lambda_i^+)$. The complex $\Cth_+(\Sigma_0,\Sigma_1)$ has three types of generators:
\begin{alignat*}{1}
\Cth_+(\Sigma_0,\Sigma_1)=C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]\oplus CF(\Sigma_0,\Sigma_1)\oplus C(\Lambda_0^-,\Lambda_1^-)[1]
\end{alignat*}
If we denote by $|\cdot|_{\Cth_+}$ the grading of generators in $\Cth_+(\Sigma_0,\Sigma_1)$, then we have
\begin{alignat*}{1}
&|\gamma_{01}|_{\Cth_+}=n-1-|\gamma_{01}|,\mbox{ for }\gamma_{01}\in C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]\\
&|x|_{\Cth_+}=|x|,\mbox{ for }x\in CF(\Sigma_0,\Sigma_1)\\
&|\xi_{10}|_{\Cth_+}=|\xi_{10}|+1,\mbox{ for }\xi_{10}\in C(\Lambda_0^-,\Lambda_1^-)[1]
\end{alignat*}
The difference on generators between $\Cth_+(\Sigma_0,\Sigma_1)$ and the original Cthulhu complex $\Cth(\Sigma_0,\Sigma_1)$ in \cite{CDGG2} is that the generators that are Reeb chords in the positive end are chords from $\Lambda_0^+$ to $\Lambda_1^+$ in $\Cth_+(\Sigma_0,\Sigma_1)$, whereas they are chords from $\Lambda_1^+$ to $\Lambda_0^+$ in $\Cth(\Sigma_0,\Sigma_1)$. The differential on $\Cth_+(\Sigma_0,\Sigma_1)$ is then given by
\begin{alignat*}{1}
\mfm_1^{\varepsilon^-_0,\varepsilon^-_1}
=\left(
\begin{matrix}
\Delta_1^{+}&0&0\\
d_{0+}&d_{00}&d_{0-}\\
b_1^-\circ\Delta_1^\Sigma&b_1^-\circ\Delta_1^\Sigma&b_1^-
\end{matrix}\right)
\end{alignat*}
where:
\begin{enumerate}
\item $\Delta_1^+:C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]\to C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]$ is defined by
\begin{alignat*}{1}
&\Delta_1^+(\gamma^+_{01})=\sum\limits_{\gamma^-_{01}}\sum\limits_{\boldsymbol{\zeta}_i}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda^+_{01}}(\gamma_{01}^-;\boldsymbol{\zeta}_0,\gamma_{01}^+,\boldsymbol{\zeta}_1)\cdot\varepsilon_0^+(\boldsymbol{\zeta}_0)\varepsilon_1^+(\boldsymbol{\zeta}_1)\cdot\gamma^-_{01}
\end{alignat*}
and is of degree $1$ according to Proposition \ref{teo:grading} (see also the consistency check following this enumeration).
\item $\mfm_1^0:=d_{0+}+d_{00}+d_{0-}:\Cth_+(\Sigma_{0},\Sigma_1)\to CF(\Sigma_0,\Sigma_1)$ with
\begin{alignat*}{1}
&d_{0+}(\gamma_{01}^+)=\sum\limits_{x}\sum\limits_{\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{0},\Sigma_1}(x;\boldsymbol{\delta}_0,\gamma_{01}^+,\boldsymbol{\delta}_1)\cdot\varepsilon_0^-(\boldsymbol{\delta}_0)\varepsilon_1^-(\boldsymbol{\delta}_1)\cdot x\\
&d_{00}(q)=\sum\limits_{x}\sum\limits_{\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{0},\Sigma_1}(x;\boldsymbol{\delta}_0,q,\boldsymbol{\delta}_1)\cdot\varepsilon_0^-(\boldsymbol{\delta}_0)\varepsilon_1^-(\boldsymbol{\delta}_1)\cdot x\\
&d_{0-}(\gamma_{10}^-)=\sum\limits_{x}\sum\limits_{\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{0},\Sigma_1}(x;\boldsymbol{\delta}_0,\gamma_{10}^-,\boldsymbol{\delta}_1)\cdot\varepsilon_0^-(\boldsymbol{\delta}_0)\varepsilon_1^-(\boldsymbol{\delta}_1)\cdot x
\end{alignat*}
is also of degree $1$.
\item $\Delta_1^\Sigma:\Cth^*_+(\Sigma_{0},\Sigma_1)\to C_{n-1-*}(\Lambda_1^-,\Lambda_0^-)$ is defined, for $a\in\Cth_+(\Sigma_0,\Sigma_1)$, by:
\begin{alignat*}{1}
&\Delta_1^\Sigma(a)=\sum\limits_{\gamma_{01}}\sum\limits_{\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{0},\Sigma_1}(\gamma_{01};\boldsymbol{\delta}_0,a,\boldsymbol{\delta}_1)\cdot\varepsilon_0^-(\boldsymbol{\delta}_0)\varepsilon_1^-(\boldsymbol{\delta}_1)\cdot\gamma_{01}
\end{alignat*}
so in particular it vanishes for energy reasons on $C(\Lambda_0^-,\Lambda_1^-)$. This map is of degree $0$, i.e. $|\gamma_{01}|=n-1-|a|_{\Cth_+}$ where $|\gamma_{01}|$ is as defined in Section \ref{sec:grad}.
\item Let us denote $\mathfrak{C}^*(\Lambda_0^-,\Lambda_1^-)=C_{n-1-*}(\Lambda_1^-,\Lambda_0^-)\oplus C^{*-1}(\Lambda_0^-,\Lambda_1^-)$. One finally defines the map $b_1^-:\mathfrak{C}^*(\Lambda_0^-,\Lambda_1^-)\to C^{*-1}(\Lambda_0^-,\Lambda_1^-)$ by
\begin{alignat*}{1}
b_1^-(\gamma)=\sum\limits_{\gamma_{10}}\sum\limits_{\boldsymbol{\delta}_i}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda^-_{01}}(\gamma_{10};\boldsymbol{\delta}_0,\gamma,\boldsymbol{\delta}_1)\cdot\varepsilon_0^-(\boldsymbol{\delta}_0)\varepsilon_1^-(\boldsymbol{\delta}_1)\cdot\gamma_{10}
\end{alignat*}
where $\gamma$ is a positive asymptotic if it is in $C(\Lambda_1^-,\Lambda_0^-)$ and a negative asymptotic if it is in $C(\Lambda_0^-,\Lambda_1^-)$. This map is of degree $1$.
\end{enumerate}
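As a consistency check of the grading of $\Delta_1^+$ announced in item (1) above: if a rigid disc contributing to $\Delta_1^+$ connects chords satisfying $|\gamma_{01}^+|=|\gamma_{01}^-|+1$, as for the bilinearized Legendrian contact homology differential, then
\begin{alignat*}{1}
|\gamma^-_{01}|_{\Cth_+}=n-1-|\gamma^-_{01}|=n-1-|\gamma^+_{01}|+1=|\gamma^+_{01}|_{\Cth_+}+1
\end{alignat*}
so $\Delta_1^+$ indeed increases the grading $|\cdot|_{\Cth_+}$ by $1$.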
The pseudo-holomorphic curves contributing to the differential $\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}$ are schematized in Figure \ref{fig:curves_diff}.
\begin{figure}[ht]
\begin{center}\includegraphics[width=11cm]{curves_diff.eps}\end{center}
\caption{Curves contributing to the differential $\mfm_1^{\varepsilon^-_0,\varepsilon^-_1}$.}
\label{fig:curves_diff}
\end{figure}
\begin{rem}\label{rem:relation}
In the definition of $\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}$, all components are related to components of the differential of $\Cth(\Sigma_0,\Sigma_1)$ or $\Cth(\Sigma_1,\Sigma_0)$ as follows:
\begin{itemize}
\item the map $\Delta_1^+$ is the dual of $d_{++}$ in $\Cth(\Sigma_1,\Sigma_0)$, and it is the differential of the bilinearized Legendrian contact homology of $\Lambda_0^+\cup\Lambda_1^+$ restricted to $C(\Lambda_1^+,\Lambda_0^+)$,
\item the map $d_{0+}$ is the dual of $d_{+0}$ in $\Cth(\Sigma_1,\Sigma_0)$,
\item the map $\Delta_1^\Sigma$ restricted to the positive Reeb chords is the dual of $d_{+-}$ in $\Cth(\Sigma_1,\Sigma_0)$ and restricted to intersection points it is the dual of $d_{0-}$ in $\Cth(\Sigma_1,\Sigma_0)$,
\item $b_1^-$ restricted to $C(\Lambda_1^-,\Lambda_0^-)$ is the banana map in $\Cth(\Sigma_0,\Sigma_1)$, and restricted to $C(\Lambda_0^-,\Lambda_1^-)$ it is the map $d_{--}$ in $\Cth(\Sigma_0,\Sigma_1)$, that is to say, the differential of the Legendrian contact cohomology of $\Lambda_0^-\cup\Lambda_1^-$ restricted to $C(\Lambda_0^-,\Lambda_1^-)$.
\end{itemize}
In particular, the Floer complex $(CF_{-\infty}(\Sigma_0,\Sigma_1),\mfm_1^{-\infty})$ is a subcomplex of $\Cth_+(\Sigma_0,\Sigma_1)$.
\end{rem}
\begin{teo}\label{diff}
$\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}$ is a degree $1$ map satisfying $\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}\circ\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}=0$.
\end{teo}
\begin{rem}[$\partial$-breaking]\label{rem:delta}
Before we prove the theorem, let us be more precise about certain types of pseudo-holomorphic buildings arising as limits of a sequence of pseudo-holomorphic discs in a $1$-dimensional moduli space. Namely, the buildings containing a non-trivial pure disc are a bit special.
Consider again case 1.\ in Section \ref{sec:structure}, i.e.\ the limit of a sequence of discs with boundary on trivial cylinders. As we said, it consists of two index $1$ discs connected by a Reeb chord node. If this node is a pure Reeb chord $\gamma\in\mathcal{R}(\Lambda)$, then the (non-trivial) disc $u$ in the building for which this node is a positive Reeb chord asymptotic is a pure disc, by the conditions we impose on the Lagrangian boundary in Section \ref{sec:mod}, and thus contributes to the differential of $\gamma$ in the Legendrian contact homology DGA of $\Lambda$.
Then, applying an augmentation $\varepsilon$ of $\mathcal{A}(\Lambda)$ to all the negative pure Reeb chord asymptotics of $u$ results in a pseudo-holomorphic curve count contributing to $\varepsilon\circ\partial(\gamma)$.
The sum over all possible negative pure Reeb chord asymptotics of a pure disc with positive asymptotic $\gamma$ leads to a curve count giving the whole term $\varepsilon\circ\partial(\gamma)$, which vanishes by definition of an augmentation.
Consider now the case 2.(b) in Section \ref{sec:structure}, and the subcase where the index $1$ curve that we denote $u$ has boundary on the negative ends of the cobordisms and has one positive Reeb chord asymptotic which is a pure Reeb chord $\gamma\in\mathcal{R}(\Lambda^-)$. In this case, for the same reason as above, $u$ is a pure disc, and the contribution of such discs vanishes once we apply an augmentation to pure negative Reeb chords.
In the subcase of 2.(b) where the index $1$ curve $u$ has boundary on the positive ends of the cobordisms, $u$ cannot have a positive asymptotic to a pure Reeb chord: such a chord is not a node, and its presence would imply that the sequence of discs we started with had a positive pure Reeb chord asymptotic. However, there can be one (or several) index $0$ pure curve with a positive asymptotic to a pure chord $\gamma$ of $\Lambda^+$, and with boundary on a cobordism $\Lambda^-\prec_\Sigma\Lambda^+$. Such an index $0$ curve, call it $v$, thus contributes to $\Phi_\Sigma(\gamma)$ where $\Phi_\Sigma:\mathcal{A}(\Lambda^+)\to\mathcal{A}(\Lambda^-)$ is the map induced by the cobordism. Applying the augmentation $\varepsilon^-$ to the negative Reeb chord asymptotics of $v$ leads to a curve count contributing to $\varepsilon^-\circ\Phi_\Sigma(\gamma)=\varepsilon^+(\gamma)$ by definition of $\varepsilon^+$.
Fixing $\gamma$ and summing over all possible negative Reeb chords of $\Lambda^-$ leads to a curve count giving the term $\varepsilon^+(\gamma)$.
In this paper, every time we define a map via a count of mixed pseudo-holomorphic discs in some moduli spaces, we sum over all possible pure negative Reeb chord asymptotics, to which we then apply the given augmentations of the Legendrian negative ends. Thus,
\begin{enumerate}
\item[(A)] the contribution of broken discs having a non-trivial pure disc component with boundary on cylindrical ends will vanish,
\item[(B)] applying the augmentation $\varepsilon_i^-$ to negative pure chords in $\mathcal{R}(\Lambda_i^-)$ corresponds to applying $\varepsilon_i^+$ to the potential pure chords asymptotics in $\mathcal{R}(\Lambda_i^+)$ of a disc with boundary on the positive cylindrical ends.
\end{enumerate}
This being said, we will now ignore the broken discs of case (A), and the use of the induced augmentations $\varepsilon_i^+$ when describing the boundary of the compactification of moduli spaces refers to breakings in case (B).
\end{rem}
\begin{proof}[Proof of Theorem \ref{diff}]
The degree of $\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}$ follows from Proposition \ref{teo:grading}.
Then, we have
\begin{alignat*}{1}
&\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}\circ\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}\\
\small
&=\left(
\begin{matrix}
\Delta_1^{+}\circ\Delta_1^+&0&0\\
d_{0+}\circ\Delta_1^++d_{00}\circ d_{0+}+d_{0-}\circ b_1^-\circ\Delta_1^\Sigma&d_{00}^2+d_{0-}\circ b_1^-\circ\Delta_1^\Sigma&d_{00}\circ d_{0-}+d_{0-}\circ b_1^-\\
b_1^-\circ\Delta_1^\Sigma\circ\Delta_1^++b_1^-\circ\Delta_1^\Sigma\circ d_{0+}+(b_1^-)^2\circ\Delta_1^\Sigma&b_1^-\circ\Delta_1^\Sigma\circ d_{00}+(b_1^-)^2\circ\Delta_1^\Sigma&b_1^-\circ\Delta_1^\Sigma\circ d_{0-}+(b_1^-)^2
\end{matrix}\right)
\end{alignat*}
\normalsize
\begin{enumerate}
\item $\Delta_1^{+}\circ\Delta_1^+$ vanishes because for any $\gamma_{01}\in C(\Lambda_1^+,\Lambda_0^+)$, the discs contributing to $\Delta_1^{+}\circ\Delta_1^+(\gamma_{01})$ are in one-to-one correspondence with broken curves in the boundary of the compactification of moduli spaces
$\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda^+_{01}}(\xi_{01};\boldsymbol{\zeta}_0,\gamma_{01},\boldsymbol{\zeta}_1)$, for all possible chords $\xi_{01}\in C(\Lambda_1^+,\Lambda_0^+)$ and words of pure Reeb chords $\boldsymbol{\zeta}_i$. Observe that $\Delta_1^+$ is in fact the bilinearized Legendrian homology differential of $\Lambda_0^+\cup\Lambda_1^+$ restricted to the subcomplex $C(\Lambda_1^+,\Lambda_0^+)$.
\item For $\gamma_{01}\in C(\Lambda_1^+,\Lambda_0^+)$, the term $\big(d_{0+}\circ\Delta_1^++d_{00}\circ d_{0+}+d_{0-}\circ b_1^-\circ\Delta_1^\Sigma\big)(\gamma_{01})$ is given exactly by the count of broken curves in
$\partial\overline{\mathcal{M}^1}_{\Sigma_{0},\Sigma_1}(p;\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1)$ for all $p\in\Sigma_0\cap\Sigma_1$ and words $\boldsymbol{\delta}_i$.
\item For $\gamma_{01}\in C(\Lambda_1^+,\Lambda_0^+)$, the term $\big(b_1^-\circ\Delta_1^\Sigma\circ\Delta_1^++b_1^-\circ\Delta_1^\Sigma\circ d_{0+}+(b_1^-)^2\circ\Delta_1^\Sigma\big)(\gamma_{01})$ is given by the count of curves in
\begin{alignat*}{1}
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda^-_{01}}(\xi_{10};\boldsymbol{\delta}_0',\beta_{01},\boldsymbol{\delta}_1')\times\partial\overline{\mathcal{M}^1}_{\Sigma_{0},\Sigma_1}(\beta_{01};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1)\\
\mbox{ and }\,&\partial\overline{\mathcal{M}^2}_{\mathbb R\times\Lambda^-_{01}}(\xi_{10};\boldsymbol{\delta}_0',\beta_{01},\boldsymbol{\delta}_1')\times\mathcal{M}^0_{\Sigma_{0},\Sigma_1}(\beta_{01};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1)
\end{alignat*}
for all $\xi_{10}\in C(\Lambda_0^-,\Lambda_1^-),\beta_{01}\in C(\Lambda_1^-,\Lambda_0^-)$ and words of pure Reeb chords $\boldsymbol{\delta}_i,\boldsymbol{\delta}_i'$ of $\Lambda_i^-$. Indeed, the study of $\partial\overline{\mathcal{M}^2}_{\mathbb R\times\Lambda^-_{01}}(\xi_{10};\boldsymbol{\delta}_0',\beta_{01},\boldsymbol{\delta}_1')$ gives that the map $b_1^-$ restricted to $C(\Lambda_1^-,\Lambda_0^-)$ satisfies $(b_1^-)^2+b_1^-\circ\Delta_1^-=0$, where $\Delta_1^-$ is the obvious analogue of $\Delta_1^+$ but defined on $C(\Lambda_1^-,\Lambda_0^-)$. Then one can write
\begin{alignat*}{1}
&\big(b_1^-\circ\Delta_1^\Sigma\circ\Delta_1^++b_1^-\circ\Delta_1^\Sigma\circ d_{0+}+(b_1^-)^2\circ\Delta_1^\Sigma\big)(\gamma_{01})=b_1^-\big(\Delta_1^\Sigma\circ\Delta_1^++\Delta_1^\Sigma\circ d_{0+}+\Delta_1^-\circ\Delta_1^\Sigma\big)(\gamma_{01})
\end{alignat*}
and there is a one-to-one correspondence between broken discs contributing to $\Delta_1^\Sigma\circ\Delta_1^++\Delta_1^\Sigma\circ d_{0+}+\Delta_1^-\circ\Delta_1^\Sigma$ and broken discs in $\partial\overline{\mathcal{M}^1}_{\Sigma_{0},\Sigma_1}(\beta_{01};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1)$.
\item $\big(d_{00}^2+d_{0-}\circ b_1^-\circ\Delta_1^\Sigma\big)(x)$, for $x\in CF(\Sigma_0,\Sigma_1)$, counts broken curves in $\partial\overline{\mathcal{M}^1}_{\Sigma_{0},\Sigma_1}(p;\boldsymbol{\delta}_0,x,\boldsymbol{\delta}_1)$, for all $p\in\Sigma_0\cap\Sigma_1$ and words $\boldsymbol{\delta}_i$.
\item $\big(b_1^-\circ\Delta_1^\Sigma\circ d_{00}+(b_1^-)^2\circ\Delta_1^\Sigma\big)(x)$ counts broken curves in
\begin{alignat*}{1}
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda^-_{01}}(\xi_{10};\boldsymbol{\delta}_0',\beta_{01},\boldsymbol{\delta}_1')\times\partial\overline{\mathcal{M}^1}_{\Sigma_{0},\Sigma_1}(\beta_{01};\boldsymbol{\delta}_0,x,\boldsymbol{\delta}_1)\\
\mbox{ and }\,&\partial\overline{\mathcal{M}^2}_{\mathbb R\times\Lambda^-_{01}}(\xi_{10};\boldsymbol{\delta}_0',\beta_{01},\boldsymbol{\delta}_1')\times\mathcal{M}^0_{\Sigma_{0},\Sigma_1}(\beta_{01};\boldsymbol{\delta}_0,x,\boldsymbol{\delta}_1)
\end{alignat*}
for all $\xi_{10},\beta_{01}$ and words of Reeb chords $\boldsymbol{\delta}_i$ and $\boldsymbol{\delta}_i'$ as above.
\item $\big(d_{00}\circ d_{0-}+d_{0-}\circ b_1^-\big)(\gamma_{10})$, for $\gamma_{10}\in C(\Lambda_0^-,\Lambda_1^-)$, counts broken curves in $\partial\overline{\mathcal{M}^1}_{\Sigma_{0},\Sigma_1}(p;\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1)$.
\item $\big(b_1^-\circ\Delta_1^\Sigma\circ d_{0-}+(b_1^-)^2\big)(\gamma_{10})=(b_1^-)^2(\gamma_{10})$ for energy reasons, and vanishes as $b_1^-$ restricted to $C(\Lambda_0^-,\Lambda_1^-)$ is the bilinearized Legendrian contact cohomology differential of $\Lambda_0^-\cup\Lambda_1^-$ restricted to the subcomplex $C(\Lambda_0^-,\Lambda_1^-)$ (observe otherwise that the broken discs contributing to $(b_1^-)^2(\gamma_{10})$ are exactly the ones appearing in $\partial\overline{\mathcal{M}^2}_{\mathbb R\times\Lambda^-_{01}}(\xi_{10};\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1)$, for all $\xi_{10},\boldsymbol{\delta}_0,\boldsymbol{\delta}_1$).
\end{enumerate}
\end{proof}
\begin{nota} A few remarks about notations of maps:
\begin{enumerate}
\item Very rigorously, we should write the augmentations involved in the definition of each map all the time, but we drop them to lighten the notation.
\item Given a pair of cobordisms $(\Sigma_0,\Sigma_1)$, we will then write $\mfm_1^\Sigma$ for the differential on $\Cth_+(\Sigma_0,\Sigma_1)$, so without specifying the augmentations, and \textquotedblleft$\Sigma$\textquotedblright\, stands for the ordered pair $(\Sigma_0,\Sigma_1)$. If we want to make the order explicit, we will sometimes write $\mfm_1^{\Sigma_0,\Sigma_1}$ or $\mfm_1^{\Sigma_{01}}$. Similarly, we write $\Delta_1^\Sigma$ instead of $\Delta_1^{\Sigma_0,\Sigma_1}$, and finally $b_1^-$ is a short notation for $b_1^{\Lambda_0^-,\Lambda_1^-}$ and $\Delta_1^+$ is a short notation for $\Delta_1^{\Lambda_0^+,\Lambda_1^+}$.
\item We write $\mfm_1^{\Sigma,+}$, $\mfm_1^{\Sigma,0}$ and $\mfm_1^{\Sigma,-}$ (or simply $\mfm_1^{+}$, $\mfm_1^{0}$ and $\mfm_1^{-}$ when the pair of cobordisms is clear from the context) for the components of the differential with values in $C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]$, $CF(\Sigma_0,\Sigma_1)$ and $C(\Lambda_0^-,\Lambda_1^-)[1]$ respectively, and then $\mfm_1^{\Sigma,ij}:=\mfm_1^{\Sigma,i}+\mfm_1^{\Sigma,j}$, for $i,j\in\{+,0,-\}$ distinct.
\item We will sometimes denote $CF_{+\infty}(\Sigma_0,\Sigma_1):= C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]\oplus CF(\Sigma_0,\Sigma_1)$, but observe that contrary to $CF_{-\infty}(\Sigma_0,\Sigma_1)$, this is not a complex.
\end{enumerate}
\end{nota}
With the notations above, the components $\mfm_1^+$ and $\mfm_1^-$ of $\mfm_1^\Sigma$ can be expressed as:
\begin{alignat}{1}
&\mfm_1^+=\Delta_1^+\\
&\mfm_1^-=b_1^-\circ\boldsymbol{\Delta}_1^\Sigma
\end{alignat}
where $\boldsymbol{\Delta}_1^\Sigma:\Cth_+(\Sigma_0,\Sigma_1)\to C_{n-1-*}(\Lambda_1^-,\Lambda_0^-)\oplus C^{*-1}(\Lambda_0^-,\Lambda_1^-)$ is defined by:
\begin{alignat*}{1}
&\boldsymbol{\Delta}_1^\Sigma(a)=
\left\{\begin{array}{cl} a&\mbox{ if }a\in C(\Lambda_0^-,\Lambda_1^-)\\
\Delta_1^\Sigma(a)&\mbox{ otherwise}\end{array}\right.
\end{alignat*}
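Unwinding the definition of $\boldsymbol{\Delta}_1^\Sigma$, these compact formulas reproduce the corresponding rows of the matrix defining $\mfm_1^{\varepsilon^-_0,\varepsilon^-_1}$: for $a\in C(\Lambda_0^-,\Lambda_1^-)[1]$ one gets $\mfm_1^-(a)=b_1^-(a)$, while for $a\in C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]$ or $a\in CF(\Sigma_0,\Sigma_1)$ one gets $\mfm_1^-(a)=b_1^-\circ\Delta_1^\Sigma(a)$.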
\begin{ex}[Case of concordances]
Consider a compact non-degenerate Legendrian submanifold $\Lambda\subset Y$, admitting augmentations $\varepsilon_0,\varepsilon_1$.
Consider a 2-copy of $\Lambda$, which we denote $\overline{\Lambda^{(2)}}$, consisting of $\Lambda_0\cup\overline{\Lambda_1}$, where $\Lambda_0:=\Lambda$ and $\overline{\Lambda_1}$ is a push-off of $\Lambda$ in the positive Reeb direction lying entirely \textit{above} $\Lambda_0$ (the smallest $z$-coordinate of a point in $\overline{\Lambda_1}$ is greater than the largest $z$-coordinate of a point in $\Lambda_0$), then perturbed by a Morse function. This is the 2-copy of $\Lambda$ considered in \cite{EESa}. We have in this case $$\big(\Cth_+(\mathbb R\times\Lambda_0,\mathbb R\times\overline{\Lambda_1}),\mfm_1^{\varepsilon_0,\varepsilon_1}\big)=(C(\Lambda_1,\Lambda_0)^\dagger[n-1],\Delta_1^+)$$ and this complex is the complex of the bilinearized Legendrian contact homology of $\Lambda_0\cup\overline{\Lambda_1}$ restricted to mixed chords (chords from $\Lambda_0$ to $\overline{\Lambda_1}$), but with a choice of grading making the differential a degree $1$ map. For a horizontally displaceable $\Lambda_0$, this complex is acyclic and gives the duality long exact sequence for the Legendrian $\Lambda_0$, as proved in \cite{EESa}.
\begin{figure}[ht]
\begin{center}\includegraphics[width=5cm]{2-copy.eps}\end{center}
\caption{Left: $2$-copy $\Lambda^{(2)}$ of $\Lambda$; right: $2$-copy $\overline{\Lambda^{(2)}}$.}
\label{2-copy}
\end{figure}
Consider another $2$-copy $\Lambda^{(2)}$ of $\Lambda$ consisting of the components $\Lambda_0$ and $\Lambda_1$ such that $\Lambda_1$ is a copy of $\Lambda_0:=\Lambda$ perturbed by a small negative Morse function $f$, i.e.\ $\Lambda_1$ is identified with $j^1(f)$ in a neighborhood of $\Lambda$ identified with a neighborhood of the $0$-section of $J^1(\Lambda)$, as schematized in Figure \ref{2-copy}.
In this case we have:
\begin{alignat*}{1}
\Cth_+(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1)=C(\Lambda_1,\Lambda_0)^\dagger[n-1]\oplus C(\Lambda_0,\Lambda_1)[1]=\mathfrak{C}^*(\Lambda_0,\Lambda_1)
\end{alignat*}
with differential:
\begin{alignat*}{1}
\mfm_1^{\varepsilon_0,\varepsilon_1}=\left(
\begin{matrix}
\Delta_1^+&0\\
b_1^-\circ\Delta_1^\Sigma&b_1^-
\end{matrix}\right)=\left(
\begin{matrix}
\Delta_1^+&0\\
b_1^-&b_1^-
\end{matrix}\right)
\end{alignat*}
because the cobordisms are trivial cylinders, so the map $\Delta_1^\Sigma$ is the identity map.
There is actually a canonical isomorphism of complexes
\begin{alignat*}{1}
\big(\Cth_+(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1),\left(
\begin{matrix}
\Delta_1^+&0\\
b_1^-&b_1^-
\end{matrix}\right)\big)\xrightarrow[]{\simeq}(\Cth_+(\mathbb R\times\Lambda_0,\mathbb R\times\overline{\Lambda_1}),\Delta_1^+)
\end{alignat*}
sending chords in $C(\Lambda_1,\Lambda_0)$ to their corresponding long chords in $C(\overline{\Lambda}_1,\Lambda_0)$. The other part of the isomorphism is obtained by considering pseudo-holomorphic strips with mixed Reeb chord asymptotics and inverting the roles of input and output, see for example \cite[Proposition 5.4]{NRSSZ}.
\end{ex}
\begin{ex}[$0$-section of a jet space]\label{ex:jet}
Consider the jet space $J^1(M)=T^*M\times\mathbb R$ of a smooth manifold $M$, endowed with the standard contact form $dz-\lambda$ where $z$ is the $\mathbb R$ coordinate and $\lambda$ the canonical form on $T^*M$. Then the $0$-section is a Legendrian $\Lambda_0:=M$. Take a small push-off of $\Lambda_0$ in the positive Reeb ($\partial_z$) direction and perturb it by a small Morse function $f:\Lambda_0\to\mathbb R$. Denote this Legendrian by $\Lambda_1$. Consider the trivial augmentations $\varepsilon_i$, $i=0,1$. Then, the complex $\big(\Cth_+(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1),\mfm_1^{\varepsilon_0,\varepsilon_1}\big)$ is just the complex $(C(\Lambda_1,\Lambda_0)^\dagger[n-1],\Delta_1^{+})$, which is canonically identified with the Morse complex of $f$.
\end{ex}
\subsection{Concatenation of cobordisms}\label{sec:conc}
\subsubsection{Definition of the complex $\Cth_+(V_0\odot W_0,V_1\odot W_1)$}
Consider Legendrian submanifolds $\Lambda_i^-,\Lambda_i,\Lambda_i^+$ for $i=0,1$, and cobordisms $\Lambda_i^-\prec_{V_i}\Lambda_i$ and $\Lambda_i\prec_{W_i}\Lambda_i^+$. As the positive end of $V_i$ is a cylinder over $\Lambda_i$, as well as the negative end of $W_i$, one can perform the concatenation of $V_i$ and $W_i$, denoted $V_i\odot W_i$, which is an exact Lagrangian cobordism from $\Lambda_i^-$ to $\Lambda_i^+$, see for example \cite[Section 5.1]{CDGG2}. Assume that $\mathcal{A}(\Lambda_0^-)$ and $\mathcal{A}(\Lambda_1^-)$ admit augmentations $\varepsilon_0^-$ and $\varepsilon_1^-$. These augmentations induce augmentations $\varepsilon_0$ and $\varepsilon_1$ of $\mathcal{A}(\Lambda_0)$ and $\mathcal{A}(\Lambda_1)$ respectively, and augmentations $\varepsilon_0^+$ and $\varepsilon_1^+$ of $\mathcal{A}(\Lambda_0^+)$ and $\mathcal{A}(\Lambda_1^+)$.
Assuming that the cobordisms $V_0\odot W_0$ and $V_1\odot W_1$ intersect transversely, the Cthulhu complex of the pair $(V_0\odot W_0,V_1\odot W_1)$ has four types of generators
\begin{alignat*}{1}
\Cth_+(V_0\odot W_0,V_1\odot W_1)&=C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]\oplus CF(W_0,W_1)\oplus CF(V_0,V_1)\oplus C(\Lambda_0^-,\Lambda_1^-)[1]\\
&=CF_{+\infty}(W_0,W_1)\oplus CF_{-\infty}(V_0,V_1)
\end{alignat*}
and the differential is given by
\begin{alignat*}{2}
\mfm^{V\odot W}_1
&=\left(\begin{matrix}
\mfm_1^{W,+}&0&0&0\\
\mfm_1^{W,0}(\id+\,b_1^V\circ\Delta_1^{W})&\mfm_1^{W,0}(\id+\,b_1^V\circ\Delta_1^{W})&\mfm_1^{W,0}\circ b_1^V&\mfm_1^{W,0}\circ b_1^V\\
\mfm_1^{V,0}\circ\Delta_1^{W}&\mfm_1^{V,0}\circ\Delta_1^{W}&\mfm_1^{V,0}&\mfm_1^{V,0}\\
\mfm_1^{V,-}\circ\Delta_1^{W}&\mfm_1^{V,-}\circ\Delta_1^{W}&\mfm_1^{V,-}&\mfm_1^{V,-}
\end{matrix}\right)
\end{alignat*}
where $b_1^V:\Cth_+(V_0,V_1)\to C(\Lambda_0,\Lambda_1)[1]$ is the degree $0$ map defined by
\begin{alignat*}{1}
&b_1^V(a)=\sum\limits_{\gamma_{10}}\sum\limits_{\boldsymbol{\delta}_i}\#\mathcal{M}^0_{V_{0},V_1}(\gamma_{10};\boldsymbol{\delta}_0,a,\boldsymbol{\delta}_1)\cdot\varepsilon_0^-(\boldsymbol{\delta}_0)\varepsilon_1^-(\boldsymbol{\delta}_1)\cdot\gamma_{10}
\end{alignat*}
We will extend the definitions of the maps $b_1^V$ and $\Delta_1^V$ to $\Cth_+(V_0\odot W_0,V_1\odot W_1)$ in order to obtain a compact formula for $\mfm_1^{V\odot W}$.
Namely, we define
$\boldsymbol{b}_1^V:\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(W_0,W_1)$
by
$$\boldsymbol{b}_1^V(a)=\left\{
\begin{array}{cl}
a+b_1^V\circ\Delta_1^W(a)&\mbox{ for } a\in CF_{+\infty}(W_0,W_1)\\
b_1^V(a) &\mbox{ for } a\in CF_{-\infty}(V_0,V_1)
\end{array}\right.$$
and we define $\boldsymbol{\Delta}_1^W:\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(V_0,V_1)$
by
$$\boldsymbol{\Delta}_1^W(a)=\left\{
\begin{array}{cl}
\Delta_1^W(a)&\mbox{ for } a\in CF_{+\infty}(W_0,W_1)\\
a &\mbox{ for } a\in CF_{-\infty}(V_0,V_1)
\end{array}\right.$$
This may seem confusing because we have already defined a map $\boldsymbol{\Delta}_1^\Sigma$ for the case of a pair of cobordisms $(\Sigma_0,\Sigma_1)$ in the previous section. However, this map $\boldsymbol{\Delta}_1^\Sigma$ can be recovered from the map $\boldsymbol{\Delta}_1^W$ for the pair $(V_0\odot W_0,V_1\odot W_1)$ where $(V_0,V_1)=(\mathbb R\times\Lambda_0^-,\mathbb R\times\Lambda_1^-)$ and $(W_0,W_1)=(\Sigma_0,\Sigma_1)$, see Section \ref{special_case} for more details. In the remainder of this section, to make this clear, we write $\boldsymbol{\Delta}_1^{W\subset W}$ when we consider the map for the pair $(W_0,W_1)$ and not for the concatenation.
One can thus write the differential in the following more compact way:
\begin{alignat*}{1}
\mfm_1^{V\odot W}=\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\mfm_1^{V,0-}\circ\boldsymbol{\Delta}_1^W
\end{alignat*}
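As a consistency check, let us evaluate this formula on $a\in CF(V_0,V_1)$. In this case $\boldsymbol{b}_1^V(a)=b_1^V(a)\in C(\Lambda_0,\Lambda_1)[1]$ and $\boldsymbol{\Delta}_1^W(a)=a$, so
\begin{alignat*}{1}
\mfm_1^{V\odot W}(a)=\mfm_1^{W,+0}\circ\,b_1^V(a)+\mfm_1^{V,0-}(a)=\mfm_1^{W,0}\circ\,b_1^V(a)+\mfm_1^{V,0}(a)+\mfm_1^{V,-}(a)
\end{alignat*}
where $\mfm_1^{W,+}\circ\,b_1^V(a)=0$ since $\mfm_1^{W,+}=\Delta_1^+$ vanishes outside $C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]$; this is precisely the third column of the matrix above.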
Let us now check that $\mfm_1^{V\odot W}$ is indeed a differential.
Using the definition of $\boldsymbol{b}_1^V$ and $\boldsymbol{\Delta}_1^W$, we have:
\begin{alignat*}{2}
\big(\mfm_1^{V\odot W}\big)^2=&\Big(\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_1^W\Big)\circ\Big(\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_1^W\Big)\\
=&\mfm_1^{W,+0}\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\mfm_1^{W,+0}\circ\big(b_1^V\circ\Delta_1^W\big)\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\mfm_1^{W,+0}\circ\,b_1^V\circ\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_1^W\\
&+\mfm_1^{V,0-}\circ\,\Delta_1^W\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\mfm_1^{V,0-}\circ\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_1^W
\end{alignat*}
where the terms involving $\mfm_1^{W,+}\circ\,b_1^V$ vanish by definition, but we keep $\mfm_1^{W,+0}$ in the formula to make it more homogeneous. We then use the following
\begin{lem}\label{lem:rel}
The maps
\begin{enumerate}
\item $\Delta_1^W\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\mfm_1^{V,+}\circ\,\boldsymbol{\Delta}_1^W:\Cth_+(V_0\odot W_0,V_1\odot W_1)\to C_{n-1-*}(\Lambda_1,\Lambda_0)$\label{rel1}, and
\item $b_1^V\circ\mfm_1^{V}\circ\,\boldsymbol{\Delta}_1^W+\,b_1^\Lambda\circ\boldsymbol{\Delta}_1^{W\subset W}\circ\boldsymbol{b}_1^V:\Cth_+(V_0\odot W_0,V_1\odot W_1)\to C^{*-1}(\Lambda_0,\Lambda_1)$\label{rel2}
\end{enumerate}
vanish.
\end{lem}
\begin{proof}
\textit{1.} For $a\in CF_{-\infty}(V_0,V_1)$ we have
\begin{alignat*}{1}
\Delta_1^W\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V(a)+\mfm_1^{V,+}\circ\,\boldsymbol{\Delta}_1^W(a)=\Delta_1^W\circ\mfm_1^{W,+0}\circ\,b_1^V(a)+\mfm_1^{V,+}(a)
\end{alignat*}
and the first term vanishes for energy reasons and the second one by definition.
Then, for $a\in CF_{+\infty}(W_0,W_1)$ we have
\begin{alignat*}{1}
\Delta_1^W\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V(a)+\mfm_1^{V,+}\circ\,\boldsymbol{\Delta}_1^W(a)&=\Delta_1^W\circ\mfm_1^{W,+0}(a+b_1^V\circ\Delta_1^W(a))+\mfm_1^{V,+}\circ\,\Delta_1^W(a)\\
&=\Delta_1^W\circ\mfm_1^{W,+0}(a)+\mfm_1^{V,+}\circ\,\Delta_1^W(a)
\end{alignat*}
because $\Delta_1^W\circ\mfm_1^{W,+0}\circ b_1^V\circ\Delta_1^W(a)$ vanishes for energy reasons. Consider the boundary of the one-dimensional moduli space $\overline{\mathcal{M}^1}_{W_{0},W_1}(\beta_{01};\boldsymbol{\delta}_0,a,\boldsymbol{\delta}_1)$ for $\beta_{01}\in C(\Lambda_1,\Lambda_0)$. The broken discs arising in the boundary (schematized in Figure \ref{broken_lemma} for the case $a=\gamma_{01}\in C(\Lambda_1^+,\Lambda_0^+)$) contribute exactly to
\begin{alignat}{1}
\big\langle(\Delta_1^W\circ\mfm_1^{W,+0}+\mfm_1^{V,+}\circ\,\Delta_1^W)(a),\beta_{01}\big\rangle\label{rel}
\end{alignat}
\begin{figure}[ht]
\begin{center}\includegraphics[width=9cm]{broken_lemma.eps}\end{center}
\caption{Types of broken discs in the boundary of $\overline{\mathcal{M}^1}_{W_{0},W_1}(\beta_{01};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1)$.}
\label{broken_lemma}
\end{figure}
\textit{2.} For $a\in CF_{-\infty}(V_0,V_1)$ we have
\begin{alignat*}{1}
b_1^V\circ\mfm_1^{V}\circ\,\boldsymbol{\Delta}_1^W(a)+b_1^\Lambda\circ\boldsymbol{\Delta}_1^{W\subset W}\circ\boldsymbol{b}_1^V(a)=b_1^V\circ\mfm_1^V(a)+b_1^\Lambda\circ b_1^V(a)
\end{alignat*}
and for $a\in CF_{+\infty}(W_0,W_1)$ we have
\begin{alignat*}{1}
b_1^V\circ\mfm_1^{V}\circ\boldsymbol{\Delta}_1^W(a)+b_1^\Lambda\circ\boldsymbol{\Delta}_1^{W\subset W}\circ\boldsymbol{b}_1^V(a)&= b_1^V\circ\mfm_1^{V}\circ\Delta_1^W(a)+b_1^\Lambda\circ\boldsymbol{\Delta}_1^{W\subset W}(a+b_1^V\circ\Delta_1^W(a))\\
&=b_1^V\circ\mfm_1^{V}\circ\Delta_1^W(a)+b_1^\Lambda\circ\Delta_1^W(a)+b_1^\Lambda\circ b_1^V\circ\Delta_1^W(a)
\end{alignat*}
To conclude that this map vanishes, one has to consider the broken curves in the boundary of the compactification of moduli spaces of bananas with boundary on $V$, namely the boundary of $\overline{\mathcal{M}^1}_{V_0,V_1}(\gamma_{10};\boldsymbol{\delta}_0,a,\boldsymbol{\delta}_1)$ for $\gamma_{10}\in C(\Lambda_0,\Lambda_1)$, and $a\in\Cth_+(V_0\odot W_0,V_1\odot W_1)$, see Figure \ref{broken_bV} for the case $a=x\in CF(W_0,W_1)$.
\end{proof}
\begin{figure}[ht]
\begin{center}\includegraphics[width=11cm]{broken_bV.eps}\end{center}
\caption{Types of broken discs in $\partial\overline{\mathcal{M}^1}_{V_0,V_1}(\gamma_{10};\boldsymbol{\delta}_0,x,\boldsymbol{\delta}_1)$.}
\label{broken_bV}
\end{figure}
Thus, using part $1.$ of the lemma we can rewrite
\begin{alignat*}{2}
\big(\mfm_1^{V\odot W}\big)^2
=&\mfm_1^{W,+0}\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\mfm_1^{W,+0}\circ\,b_1^V\circ\mfm_1^{V,+}\circ\boldsymbol{\Delta}_1^W+\mfm_1^{W,+0}\circ\,b_1^V\circ\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_1^W\\
&+\mfm_1^{V,0-}\circ\mfm_1^{V,+}\circ\,\boldsymbol{\Delta}_1^W+\mfm_1^{V,0-}\circ\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_1^W\\
&=\mfm_1^{W,+0}\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\mfm_1^{W,+0}\circ\,b_1^V\circ\mfm_1^{V}\circ\boldsymbol{\Delta}_1^W+\mfm_1^{V,0-}\circ\mfm_1^{V}\circ\,\boldsymbol{\Delta}_1^W
\end{alignat*}
Now, using $\mfm_1^{W,+0}\circ\mfm_1^{W,+0}=\mfm_1^{W,+0}\circ\mfm_1^{W,-}=\mfm_1^{W,+0}\circ\,b_1^\Lambda\circ\boldsymbol{\Delta}_1^{W\subset W}$ (the first equality follows, over $\Z_2$, from projecting $\mfm_1^W\circ\mfm_1^W=0$ onto $CF_{+\infty}(W_0,W_1)$), part $2.$ of Lemma \ref{lem:rel}, and the fact that $\mfm_1^{V,0-}\circ\mfm_1^V=0$, one gets that $\big(\mfm_1^{V\odot W}\big)^2=0$.
\subsubsection{Transfer maps}
The maps $\boldsymbol{b}_1^V$ and $\boldsymbol{\Delta}_1^W$ defined in the previous section are in fact what we will call \textit{transfer maps}. In particular, they are chain maps, as we now prove.
\begin{prop}\label{prop:Phi}
$\boldsymbol{b}_1^V:\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(W_0,W_1)$ is a chain map.
\end{prop}
\begin{proof}
We need to prove that
\begin{alignat}{1}
\boldsymbol{b}_1^V\circ\mfm_1^{V\odot W}+\mfm_1^W\circ\,\boldsymbol{b}_1^V=0\label{eq:cotransfer}
\end{alignat}
By definition of $\mfm_1^{V\odot W}$ we have that the left-hand side of \eqref{eq:cotransfer} is equal to
\begin{alignat*}{1}
\boldsymbol{b}_1^V\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V&+\boldsymbol{b}_1^V\circ\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_1^W+\mfm_1^W\circ\,\boldsymbol{b}_1^V\\
&=\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+b_1^V\circ\Delta_1^W\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+b_1^V\circ\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_1^W+\mfm_1^W\circ\,\boldsymbol{b}_1^V\\
&=b_1^V\circ\Delta_1^W\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+b_1^V\circ\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_1^W+\mfm_1^{W,-}\circ\,\boldsymbol{b}_1^V
\end{alignat*}
Using the first part of Lemma \ref{lem:rel} on the first term and the second part of the lemma on the second and third terms, recalling that $\mfm_1^{W,-}=b_1^\Lambda\circ\boldsymbol{\Delta}_1^{W\subset W}$, we get that the sum above vanishes.
\end{proof}
\begin{prop}\label{prop:Delta}
$\boldsymbol{\Delta}_1^W:\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(V_0,V_1)$ is a chain map.
\end{prop}
\begin{proof}
We have to prove that
\begin{alignat}{1}
\boldsymbol{\Delta}_1^W\circ\mfm_1^{V\odot W}+\mfm_1^{V}\circ\,\boldsymbol{\Delta}_1^W=0
\end{alignat}
The left-hand side of the equation is
\begin{alignat*}{1}
\boldsymbol{\Delta}_1^W\circ\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\boldsymbol{\Delta}_1^W\circ\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_1^W+\mfm_1^{V}\circ\,\boldsymbol{\Delta}_1^W
\end{alignat*}
whose first term equals $\mfm_1^{V,+}\circ\,\boldsymbol{\Delta}_1^W$ by Lemma \ref{lem:rel} and the second term equals $\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_1^W$ by definition of $\boldsymbol{\Delta}_1^W$. Thus the sum vanishes.
\end{proof}
\subsubsection{Special cases}\label{special_case}
Let us have a look at the following two cases for the pair $(V_0\odot W_0,V_1\odot W_1)$:
\begin{enumerate}
\item $(W_0,W_1)=(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1)$,
\item $(V_0,V_1)=(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1)$
\end{enumerate}
In the first case, one has
$$\Cth_+(V_0\odot(\mathbb R\times\Lambda_0),V_1\odot(\mathbb R\times\Lambda_1))=C(\Lambda_1,\Lambda_0)^\dagger[n-1]\oplus CF(V_0,V_1)\oplus C(\Lambda_0^-,\Lambda_1^-)[1]=\Cth_+(V_0,V_1)$$
and we actually have an equality of complexes, as $\Delta_1^W$ on $C(\Lambda_1,\Lambda_0)$ is the identity map (it counts index $0$ discs with boundary on Lagrangian cylinders, so it can only count trivial strips). Thus, the map $\boldsymbol{b}_1^V$ defined above for a general pair of concatenated cobordisms gives in this case a map
\begin{alignat*}{1}
\boldsymbol{b}_1^V:\Cth_+(V_0,V_1)\to\Cth_+(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1)=\mathfrak{C}^*(\Lambda_0,\Lambda_1)
\end{alignat*}
satisfying $\boldsymbol{b}_1^V(a)=a+b_1^V(a)$ for $a\in C(\Lambda_1,\Lambda_0)$ and $\boldsymbol{b}_1^V(a)=b_1^V(a)$ for $a\in CF_{-\infty}(V_0,V_1)$.
In the second case, one has
$$\Cth_+((\mathbb R\times\Lambda_0)\odot W_0,(\mathbb R\times\Lambda_1)\odot W_1)=C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]\oplus CF(W_0,W_1)\oplus C(\Lambda_0,\Lambda_1)[1]=\Cth_+(W_0,W_1)$$
and again this equality holds in terms of complexes, as $b_1^V$ is the identity map on $C(\Lambda_0,\Lambda_1)$ and vanishes on $C(\Lambda_1,\Lambda_0)$ (there is no index $0$ banana with boundary on $\mathbb R\times(\Lambda_0\cup\Lambda_1)$ with two positive Reeb chord asymptotics), and $\Delta_1^V$ is the identity map on $C(\Lambda_1,\Lambda_0)$.
For such a pair of concatenated cobordisms, we get the map
\begin{alignat*}{1}
\boldsymbol{\Delta}_1^W:\Cth_+(W_0,W_1)\to\mathfrak{C}^*(\Lambda_0,\Lambda_1)
\end{alignat*}
satisfying $\boldsymbol{\Delta}_1^W(a)=\Delta_1^W(a)$ for $a\in CF_{+\infty}(W_0,W_1)$ and $\boldsymbol{\Delta}_1^W(a)=a$ for $a\in C(\Lambda_0,\Lambda_1)$, which recovers exactly the definition we gave at the end of Section \ref{section:def_complex}.
\begin{nota}
From now on, we use the maps $\boldsymbol{b}_1^V$ and $\boldsymbol{\Delta}_1^W$ without specifying if we are in the case of a pair $(V_0\odot W_0,V_1\odot W_1)$, $\big(V_0\odot(\mathbb R\times\Lambda_0),V_1\odot(\mathbb R\times\Lambda_1)\big)$ or $\big((\mathbb R\times\Lambda_0)\odot W_0,(\mathbb R\times\Lambda_1)\odot W_1\big)$.
\end{nota}
\subsubsection{Mayer-Vietoris long exact sequence}
From the previous special cases, one deduces a Mayer-Vietoris sequence. Consider a pair of concatenations $(V_0\odot W_0,V_1\odot W_1)$. By definition, we have
\begin{lem}\label{bcircdelta}
$\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W+\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V:\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\mathfrak{C}^*(\Lambda_0,\Lambda_1)$ vanishes.
\end{lem}
\begin{proof}
First, remember that $\boldsymbol{\Delta}_1^W:\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(V_0,V_1)$, so the term $\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W$ should be read as $\boldsymbol{b}_1^{V\subset V}\circ\boldsymbol{\Delta}_1^{W\subset V\odot W}$. Similarly,
$\boldsymbol{b}_1^V:\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(W_0,W_1)$, thus the term $\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V$ should be read as being $\boldsymbol{\Delta}_1^{W\subset W}\circ\boldsymbol{b}_1^{V\subset V\odot W}$.
Then we have for $a\in CF_{+\infty}(W_0,W_1)$:
\begin{alignat*}{1}
&\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W(a)=\boldsymbol{b}_1^V\circ\Delta_1^W(a)=\Delta_1^W(a)+b_1^V\circ\Delta_1^W(a)\\
&\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V(a)=\boldsymbol{\Delta}_1^W(a+b_1^V\circ\Delta_1^W(a))=\Delta_1^W(a)+\boldsymbol{\Delta}_1^W\circ b_1^V\circ\Delta_1^W(a)=\Delta_1^W(a)+b_1^V\circ\Delta_1^W(a)
\end{alignat*}
and for $a\in CF_{-\infty}(V_0,V_1)$,
\begin{alignat*}{1}
&\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W(a)=\boldsymbol{b}_1^V(a)=b_1^V(a)\\
&\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V(a)=\boldsymbol{\Delta}_1^W\circ b_1^V(a)=b_1^V(a)
\end{alignat*}
\end{proof}
From this, we get a short exact sequence of complexes
\begin{alignat*}{1}
0\to\Cth^*_+(V_0\odot W_0,V_1\odot W_1)\xrightarrow[]{\small(\boldsymbol{\Delta}_1^W,\boldsymbol{b}_1^V)}\Cth^*_+(V_0,V_1)\oplus\Cth^*_+(W_0,W_1)\xrightarrow[]{\small\boldsymbol{b}_1^V+\boldsymbol{\Delta}_1^W}\Cth^*_+(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1)\to0
\end{alignat*}
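Let us spell out why this is a short exact sequence; over $\Z_2$ this is a direct check on generators. The first map is injective: $\boldsymbol{\Delta}_1^W$ is the identity on $CF_{-\infty}(V_0,V_1)$, and on $CF_{+\infty}(W_0,W_1)$ one has $\boldsymbol{b}_1^V(a)=a+b_1^V\circ\Delta_1^W(a)$, where the correction term lies in the complementary summand $C(\Lambda_0,\Lambda_1)[1]$. The second map is surjective: given $\gamma_{01}+\gamma_{10}\in\mathfrak{C}^*(\Lambda_0,\Lambda_1)$, the element $\big(\gamma_{01},\gamma_{10}+b_1^V(\gamma_{01})\big)\in\Cth_+(V_0,V_1)\oplus\Cth_+(W_0,W_1)$ is a preimage, since $\boldsymbol{b}_1^V(\gamma_{01})=\gamma_{01}+b_1^V(\gamma_{01})$ and $\boldsymbol{\Delta}_1^W$ is the identity on $C(\Lambda_0,\Lambda_1)[1]$. The composition of the two maps vanishes by Lemma \ref{bcircdelta}, and exactness at the middle term follows from a similar check on generators.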
This short exact sequence gives rise to a Mayer-Vietoris sequence
\begin{alignat*}{2}
\dots\to H^*\Cth_+(V_0\odot W_0,V_1\odot W_1)&\to H^*\Cth_+(V_0,V_1)\oplus H^*\Cth_+(W_0,W_1)\\
&\to H^*\Cth_+(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1)\xrightarrow[]{g_*} H^{*+1}\Cth_+(V_0\odot W_0,V_1\odot W_1)\dots
\end{alignat*}
The connecting morphism $g_*$ is given on the chain level by
$\mfm_1^{W,0}\circ\,b_1^V+\mfm_1^{V,0-}$
on $C(\Lambda_1,\Lambda_0)^\dagger[n-1]\subset\Cth_+(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1)$ and by $\mfm_1^{W,0}$
on $C(\Lambda_0,\Lambda_1)[1]\subset\Cth_+(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1)$. Below we check that $g$ is indeed a chain map, and hence induces a well-defined map in homology, and that the sequence is exact.
We need to prove that $g\circ\mfm_1^{\mathbb R\times\Lambda}=\mfm_1^{V\odot W}\circ g$. Instead of writing big matrices, let us prove it for the two types of generators separately. Consider $\gamma_{01}\in C(\Lambda_1,\Lambda_0)$; we have
\begin{alignat*}{1}
g\circ\mfm_1^{\mathbb R\times\Lambda}(\gamma_{01})+&\mfm_1^{V\odot W}\circ g(\gamma_{01})\\
=&g\big(\Delta_1^\Lambda(\gamma_{01})+b_1^\Lambda(\gamma_{01})\big)+\big(\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\mfm_1^{V,0-}\circ\boldsymbol{\Delta}_1^W\big)\circ\big(\mfm_1^{W,0}\circ\,b_1^V+\mfm_1^{V,0-}\big)(\gamma_{01})\\
=&\big(\mfm_1^{W,0}\circ\,b_1^V+\mfm_1^{V,0-}\big)\circ\Delta_1^\Lambda(\gamma_{01})+\mfm_1^{W,0}\circ\,b_1^\Lambda(\gamma_{01})
+\mfm_1^{W,0}\circ \mfm_1^{W,0}\circ\,b_1^V(\gamma_{01})\\
&+\mfm_1^{W,0}\circ\,b_1^V\circ\mfm_1^{V,0-}(\gamma_{01})+\mfm_1^{V,0-}\circ\mfm_1^{V,0-}(\gamma_{01})
\end{alignat*}
where we have removed terms vanishing for energy reasons. Now, observe that $\mfm_1^{V,0-}\circ\Delta_1^\Lambda(\gamma_{01})+\mfm_1^{V,0-}\circ\mfm_1^{V,0-}(\gamma_{01})=0$ as $\Delta_1^\Lambda(\gamma_{01})=\mfm_1^{V,+}(\gamma_{01})$ and $\mfm_1^V$ is a differential. Finally, the remaining terms are the algebraic contributions of the broken curves arising in the boundary of products of moduli spaces of the following type
\begin{alignat*}{1}
&\overline{\mathcal{M}^1}_{W_0,W_1}(x;\boldsymbol{\xi}_0,\gamma_{10},\boldsymbol{\xi}_1)\times\mathcal{M}_{V_0,V_1}^0(\gamma_{10};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1)\\
&\mathcal{M}_{W_0,W_1}^0(x;\boldsymbol{\xi}_0,\gamma_{10},\boldsymbol{\xi}_1)\times\overline{\mathcal{M}^1}_{V_0,V_1}(\gamma_{10};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1)
\end{alignat*}
for $x\in CF(W_0,W_1)$, $\gamma_{10}\in C(\Lambda_0,\Lambda_1)$, $\boldsymbol{\delta}_i$ words of pure Reeb chords of $\Lambda_i^-$ and $\boldsymbol{\xi}_i$ words of pure Reeb chords of $\Lambda_i$.
Then, consider $\gamma_{10}\in C(\Lambda_0,\Lambda_1)$; we have
\begin{alignat*}{1}
g\circ\mfm_1^{\mathbb R\times\Lambda}(\gamma_{10})+&\mfm_1^{V\odot W}\circ g(\gamma_{10})\\
=&g\circ \,b_1^\Lambda(\gamma_{10})+\big(\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V+\mfm_1^{V,0-}\circ\boldsymbol{\Delta}_1^W\big)\circ\mfm_1^{W,0}(\gamma_{10})\\
=&\mfm_1^{W,0}\circ \,b_1^\Lambda(\gamma_{10})+\mfm_1^{W,0}\circ\mfm_1^{W,0}(\gamma_{10})
\end{alignat*}
where we have removed terms vanishing for energy reasons. Then the two remaining terms are algebraic contributions of the broken configurations in the boundary of the compactification of moduli spaces $\mathcal{M}^1_{W_0,W_1}(x;\boldsymbol{\xi}_0,\gamma_{10},\boldsymbol{\xi}_1)$.
Now we check the exactness of
\small
\begin{alignat*}{1}
&\dots\to H^{*-1}\Cth_+(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1)\xrightarrow[]{g_{*-1}} H^{*}\Cth_+(V_0\odot W_0,V_1\odot W_1)\xrightarrow[]{\small(\boldsymbol{\Delta}_1^W,\boldsymbol{b}_1^V)}H^{*}\Cth_+(V_0,V_1)\oplus H^{*}\Cth_+(W_0,W_1)\to\dots
\end{alignat*}
\normalsize
Given a cycle $\gamma_{01}+\gamma_{10}\in\Cth_+(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1)$, we need to check that in homology
$\boldsymbol{\Delta}_1^W\circ g_*(\gamma_{01}+\gamma_{10})=\boldsymbol{b}_1^V\circ g_*(\gamma_{01}+\gamma_{10})=0$.
We have
\begin{alignat*}{1}
\boldsymbol{\Delta}_1^W\circ g_*(\gamma_{01}+\gamma_{10})&=\boldsymbol{\Delta}_1^W\Big(\mfm_1^{W,0}(b_1^V(\gamma_{01})+\gamma_{10})+\mfm_1^{V,0-}(\gamma_{01})\Big)\\
&=\Delta_1^W\circ\mfm_1^{W,0}\big(b_1^V(\gamma_{01})+\gamma_{10}\big)+\mfm_1^{V,0-}(\gamma_{01})\\
&=\mfm_1^{V,0-}(\gamma_{01})
\end{alignat*}
for energy reasons, and then $\mfm_1^{V,0-}(\gamma_{01})=\mfm_1^V(\gamma_{01})$ because $\mfm_1^{V,+}(\gamma_{01})=\Delta_1^\Lambda(\gamma_{01})=0$ by assumption. Thus $\boldsymbol{\Delta}_1^W\circ g_*(\gamma_{01}+\gamma_{10})\in\Cth_+(V_0,V_1)$ is a boundary and so vanishes in homology. Then
\begin{alignat*}{1}
\boldsymbol{b}_1^V\circ g_*(\gamma_{01}+\gamma_{10})&=\boldsymbol{b}_1^V\Big(\mfm_1^{W,0}\big(b_1^V(\gamma_{01})+\gamma_{10}\big)+\mfm_1^{V,0-}(\gamma_{01})\Big)\\
&=\mfm_1^{W,0}\big(b_1^V(\gamma_{01})+\gamma_{10}\big)+b_1^V\circ\Delta_1^W\circ\mfm_1^{W,0}\big(b_1^V(\gamma_{01})+\gamma_{10}\big)+b_1^V\circ\mfm_1^{V,0-}(\gamma_{01})\\
&=\mfm_1^{W,0}\big(b_1^V(\gamma_{01})+\gamma_{10}\big)+b_1^V\circ\mfm_1^{V,0-}(\gamma_{01})
\end{alignat*}
Then, by the study of index $1$ bananas with boundary on $V_0\cup V_1$ as above, one has
$b_1^V\circ\mfm_1^{V,0-}(\gamma_{01})=b_1^\Lambda\circ b_1^V(\gamma_{01})+b_1^V\circ\Delta_1^\Lambda(\gamma_{01})+b_1^\Lambda(\gamma_{01})$. But $\Delta_1^\Lambda(\gamma_{01})=0$ by assumption, and $b_1^\Lambda(\gamma_{01})=b_1^\Lambda(\gamma_{10})=\mfm_1^{W,-}(\gamma_{10})$ since $\gamma_{01}+\gamma_{10}$ is a cycle. Thus, we get
\begin{alignat*}{1}
\boldsymbol{b}_1^V\circ g_*(\gamma_{01}+\gamma_{10})&=\mfm_1^{W,0}\circ\, b_1^V(\gamma_{01})+\mfm_1^{W,0}(\gamma_{10})+b_1^\Lambda\circ b_1^V(\gamma_{01})+\mfm_1^{W,-}(\gamma_{10})
\end{alignat*}
Finally, $b_1^\Lambda\circ b_1^V(\gamma_{01})=\mfm_1^{W,-}\circ\,b_1^V(\gamma_{01})$ by definition, and one can add the terms $\mfm_1^{W,+}\circ\,b_1^V(\gamma_{01})$ and $\mfm_1^{W,+}(\gamma_{10})$ which vanish to obtain that $\boldsymbol{b}_1^V\circ g_*(\gamma_{01}+\gamma_{10})=\mfm_1^W\big(b_1^V(\gamma_{01})+\gamma_{10}\big)$ is a boundary in $\Cth_+(W_0,W_1)$.
This proves one part of the exactness of the Mayer-Vietoris sequence; the other part is proved in an analogous way and the details are left to the reader.
\section{Acyclicity for horizontally displaceable Legendrian ends}\label{sec:acy}
The acyclicity of the complex $\Cth_+(\Sigma_0,\Sigma_1)$ is proved in the same way as the acyclicity of the complex $\Cth(\Sigma_0,\Sigma_1)$ in \cite{CDGG2}. However, in the case of $\Cth_+$ we need a horizontal displaceability assumption on at least one of the two Legendrian ends to achieve acyclicity.
\begin{defi}
Two Legendrian submanifolds $\Lambda_0,\Lambda_1\subset Y=P\times\mathbb R$ are \textit{horizontally displaceable} if there exists a Hamiltonian isotopy $\varphi_t$ of $P$ which displaces the Lagrangian projections $\Pi_P(\Lambda_0)$ and $\Pi_P(\Lambda_1)$, i.e.\ $\Pi_P(\Lambda_0)$ and $\varphi_1(\Pi_P(\Lambda_1))$ are contained in two disjoint balls.
A Legendrian is called \textit{horizontally displaceable} if it can be displaced from itself.
\end{defi}
The goal of the next subsections is to prove the following:
\begin{teo}\label{teo:acyclicity}
Let $\Lambda_0^-,\Lambda_1^-,\Lambda_0^+,\Lambda_1^+\subset Y$ be Legendrian submanifolds such that $\Lambda_0^-$ and $\Lambda_1^-$, or $\Lambda_0^+$ and $\Lambda_1^+$ are horizontally displaceable. Assume moreover that $\mathcal{A}(\Lambda_0^-)$ and $\mathcal{A}(\Lambda_1^-)$ admit augmentations $\varepsilon_0^-$ and $\varepsilon_1^-$. Then, for any pair of transverse exact Lagrangian cobordisms $\Lambda_0^-\prec_{\Sigma_0}\Lambda_0^+$ and $\Lambda_1^-\prec_{\Sigma_1}\Lambda_1^+$, the complex $\big(\Cth_+(\Sigma_0,\Sigma_1),\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}\big)$ is acyclic.
\end{teo}
The hypothesis of horizontal displaceability is necessary. Indeed, in the setting of Example \ref{ex:jet} the $0$-section of a jet space is not horizontally displaceable, and in fact the complex is not acyclic.\\
When $\Cth_+(\Sigma_0,\Sigma_1)$ is acyclic, one recovers long exact sequences obtained in \cite{CDGG2}. Indeed, the complex $\Cth_+(\Sigma_0,\Sigma_1)$ is the cone of the degree $1$ map $d_{0+}+b_1^-\circ\Delta_1^\Sigma:C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]\to CF_{-\infty}(\Sigma_0,\Sigma_1)$, which is then a quasi-isomorphism since the complex is acyclic, i.e. we have
\begin{alignat}{1}\label{eq:qiso}
H^{*}\big(C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]\big)\simeq HF_{-\infty}^{*+1}(\Sigma_0,\Sigma_1)
\end{alignat}
Assume first that the Legendrian submanifolds $\Lambda_0^+$ and $\Lambda_1^+$ are horizontally displaceable. Then, the acyclicity of $\Cth_+(\mathbb R\times\Lambda_0^+,\mathbb R\times\Lambda_1^+)$ yields
\begin{alignat}{1}\label{eq:iso2}
H^*\big(C(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]\big)\simeq H^*(C(\Lambda_0^+,\Lambda_1^+))
\end{alignat}
as there are no intersection point generators, and the Legendrians in the negative end are also $\Lambda_0^+$ and $\Lambda_1^+$.
When $(\Sigma_0,\Sigma_1)$ is a \textit{directed} pair, then $d_{0-}=0$ and $CF^{*}_{-\infty}(\Sigma_0,\Sigma_1)$ is the cone of $d_{-0}:CF^{*}(\Sigma_0,\Sigma_1)\to C^{*}(\Lambda_0^-,\Lambda_1^-)$. When $(\Sigma_0,\Sigma_1)$ is a \textit{V-shaped} pair, then $d_{-0}=0$ and $CF^{*}_{-\infty}(\Sigma_0,\Sigma_1)$ is the cone of $d_{0-}:C^{*-1}(\Lambda_0^-,\Lambda_1^-)\to CF^{*+1}(\Sigma_0,\Sigma_1)$. The long exact sequence of a cone, together with the isomorphisms \eqref{eq:qiso} and \eqref{eq:iso2}, and the fact that by definition $H^*(C(\Lambda_0^+,\Lambda_1^+))= LCH^*_{\varepsilon_0^+,\varepsilon_1^+}(\Lambda_0^+,\Lambda_1^+)$ and $H^{*}(C(\Lambda_0^-,\Lambda_1^-))=LCH^{*}_{\varepsilon_0^-,\varepsilon_1^-}(\Lambda_0^-,\Lambda_1^-)$, give
$$\begin{array}{rcl}
\dots\to LCH^{k-1}_{\varepsilon_0^+,\varepsilon_1^+}(\Lambda_0^+,\Lambda_1^+)\to& HF^{k}(\Sigma_0,\Sigma_1)& \\
&\downarrow&\\
& LCH^{k}_{\varepsilon_0^-,\varepsilon_1^-}(\Lambda_0^-,\Lambda_1^-)&\to LCH^{k}_{\varepsilon_0^+,\varepsilon_1^+}(\Lambda_0^+,\Lambda_1^+)\to\dots
\end{array}$$
for a directed pair, and
$$\begin{array}{rcl}
\dots\to LCH^{k-1}_{\varepsilon_0^+,\varepsilon_1^+}(\Lambda_0^+,\Lambda_1^+)\to&LCH^{k-1}_{\varepsilon_0^-,\varepsilon_1^-}(\Lambda_0^-,\Lambda_1^-)& \\
&\downarrow&\\
& HF^{k+1}(\Sigma_0,\Sigma_1)&\to LCH^{k}_{\varepsilon_0^+,\varepsilon_1^+}(\Lambda_0^+,\Lambda_1^+)\to\dots
\end{array}$$
for a V-shaped pair.
These are the long exact sequences in \cite[Corollary 1.3]{CDGG2}.\\
In the case where $\Lambda_0^+$ and $\Lambda_1^+$ are not horizontally displaceable but $\Lambda_0^-$ and $\Lambda_1^-$ are, one gets the same exact sequences from the acyclicity of the dual complex $\Cth_+^{dual}(\Sigma_0,\Sigma_1)$:
\begin{alignat*}{1}
\Cth_+^{dual}(\Sigma_0,\Sigma_1)=C^*(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]\oplus CF_*(\Sigma_0,\Sigma_1)\oplus C_*(\Lambda_0^-,\Lambda_1^-)
\end{alignat*}
with the degree $-1$ differential
$$\left(\begin{matrix}
b_1^+&b_1^{\Sigma_1,\Sigma_0}&b_1^{\Sigma_1,\Sigma_0}\circ b_1^-\\
0&d^{\Sigma_1,\Sigma_0}_{00}&d^{\Sigma_1,\Sigma_0}_{0-}\\
0&\Delta_1^{\Sigma_1,\Sigma_0}&\Delta_1^-
\end{matrix}\right)$$
However, as we do not especially need this dual complex in this article, we will not give more details here.
\subsection{Wrapping the ends}\label{wrapping}
Given a pair of cobordisms $(\Sigma_0,\Sigma_1)$ cylindrical outside $[-T,T]\times Y$, we will wrap the positive and negative ends of $\Sigma_1$ in order to get a pair of cobordisms such that the associated $\Cth_+$ complex has only intersection point generators. The wrapping is done by Hamiltonian isotopy. A smooth function $h:\mathbb R\to\mathbb R$ gives rise to a Hamiltonian $H:\mathbb R\times P\times\mathbb R\to\mathbb R$ defined by $H(t,p,z)=h(t)$. The corresponding Hamiltonian vector field $X_h$ is defined through the equation $d(e^t\alpha)(X_h,\,\cdot\,)=-dH$, and its Hamiltonian flow $\varphi^s_h$ takes the following simple form
\begin{alignat*}{1}
\varphi^s_h(t,p,z)=(t,p,z+se^{-t}h'(t))
\end{alignat*}
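Let us verify this formula, assuming, consistently with Example \ref{ex:jet}, that $Y=P\times\mathbb R$ carries the contact form $\alpha=dz-\lambda$ with $\lambda$ a primitive of the symplectic form on $P$. The vector field $X_h=e^{-t}h'(t)\partial_z$ satisfies $dt(X_h)=0$, $\alpha(X_h)=e^{-t}h'(t)$ and $\iota_{X_h}d\alpha=-e^{-t}h'(t)\,\iota_{\partial_z}d\lambda=0$, hence
\begin{alignat*}{1}
\iota_{X_h}d(e^t\alpha)=\iota_{X_h}\big(e^t\,dt\wedge\alpha+e^t\,d\alpha\big)=-e^t\alpha(X_h)\,dt=-h'(t)\,dt=-dH
\end{alignat*}
Integrating the autonomous field $X_h$ for time $s$ then gives the expression of $\varphi^s_h$ above.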
Moreover, the image of an exact Lagrangian cobordism $\Sigma$ with primitive $f_\Sigma$ under a Hamiltonian isotopy $\varphi_h^s$ as above is still an exact Lagrangian cobordism $\Sigma_s=\varphi_h^s(\Sigma)$, with a primitive $f_{\Sigma_s}$ given by
\begin{alignat}{1}
f_{\Sigma_s}&=f_\Sigma+s(h'-h)\circ\pi_\mathbb R
\end{alignat}
where $\pi_\mathbb R:\mathbb R\times Y\to\mathbb R$ is the projection on the symplectization coordinate $t$.
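The formula for $f_{\Sigma_s}$ follows from Cartan's formula: since $\iota_{X_h}(e^t\alpha)=h'\circ\pi_\mathbb R$ and $\iota_{X_h}d(e^t\alpha)=-dH=-d(h\circ\pi_\mathbb R)$, one has
\begin{alignat*}{1}
\mathcal{L}_{X_h}(e^t\alpha)=d\iota_{X_h}(e^t\alpha)+\iota_{X_h}d(e^t\alpha)=d\big((h'-h)\circ\pi_\mathbb R\big)
\end{alignat*}
As $\varphi^s_h$ preserves the coordinate $t$, integrating from $0$ to $s$ yields $(\varphi^s_h)^*(e^t\alpha)=e^t\alpha+s\,d\big((h'-h)\circ\pi_\mathbb R\big)$, and restricting to $\Sigma$ gives the stated primitive, under the identification of $\Sigma_s$ with $\Sigma$ provided by $\varphi^s_h$.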
Given $N>T$, consider a function $h_{T,N}^+:\mathbb R\to\mathbb R$ satisfying:
$$\left\{\begin{array}{l}
h_{T,N}^+(t)=0\,\mbox{ for }\,t\leq T+N,\\
h_{T,N}^+(t)=-e^t\,\mbox{ for }\,t\geq T+N+1,\\
(h_{T,N}^+)'(t)\leq0,\\
\end{array}\right.$$
such that the Hamiltonian vector field takes the form $\rho_{T,N}^+(t)\partial_z$ where $\rho_{T,N}^+:\mathbb R\to\mathbb R$ satisfies
$$\left\{\begin{array}{l}
\rho_{T,N}^+(t)=0\mbox{ for }t< T+N,\\
\rho_{T,N}^+(t)=-1\mbox{ for }t> T+N+1,\\
(\rho_{T,N}^+)'(t)\leq 0.\\
\end{array}\right.$$
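Such a pair exists since $X_{h_{T,N}^+}=e^{-t}(h_{T,N}^+)'(t)\partial_z$, that is, $\rho_{T,N}^+=e^{-t}(h_{T,N}^+)'$: this function vanishes where $h_{T,N}^+$ is constant, equals $-e^{-t}\cdot e^t=-1$ where $h_{T,N}^+(t)=-e^t$, and it suffices to choose the interpolation on $[T+N,T+N+1]$ so that $e^{-t}(h_{T,N}^+)'$ is non-increasing.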
\begin{figure}[ht]
\begin{center}\includegraphics[width=5cm]{wrapping+2.eps}\end{center}
\caption{Wrapping the positive end of $\Sigma_1$.}
\label{fig:wrapping+}
\end{figure}
Let $S_+>0$ be greater than the length of the longest Reeb chord from $\Lambda_0^+$ to $\Lambda_1^+$. We set $W_1:=\varphi_{h_{T,N}^+}^{S_+}(\mathbb R\times\Lambda_1^+)$ and $W_0:=\mathbb R\times\Lambda_0^+$, and consider the pair $(\Sigma_0\odot W_0,\Sigma_1\odot W_1)$, where in fact $\Sigma_0\odot W_0=\Sigma_0$, see Figure \ref{fig:wrapping+}. The complex
$\Cth_+(\Sigma_0\odot W_0,\Sigma_1\odot W_1)$ has only three types of generators, namely
$$\Cth_+(\Sigma_0\odot W_0,\Sigma_1\odot W_1)=CF(W_0,W_1)\oplus CF(\Sigma_0,\Sigma_1)\oplus C(\Lambda_0^-,\Lambda_1^-)[1]$$
Under this decomposition, the transfer map $\boldsymbol{\Delta}_1^W:\Cth_+(\Sigma_0\odot W_0,\Sigma_1\odot W_1)\to\Cth_+(\Sigma_0,\Sigma_1)$ is equal to the matrix
\begin{alignat*}{1}
\boldsymbol{\Delta}_1^W=\left(\begin{matrix}
\Delta_1^W&0&0\\
0&\id&0\\
0&0&\id\\
\end{matrix}\right)
\end{alignat*}
We have then the following
\begin{prop}\label{prop:transfer}
The transfer map $\boldsymbol{\Delta}_1^W:\Cth_+(\Sigma_0\odot W_0,\Sigma_1\odot W_1)\to\Cth_+(\Sigma_0,\Sigma_1)$ is an isomorphism.
\end{prop}
\begin{proof}
The proof is the same as the proof of \cite[Proposition 8.2]{CDGG2}. After wrapping, each Reeb chord from $\Lambda_0^+$ to $\Lambda_1^+$ creates an intersection point in $W_0\cap W_1$, and observing that the wrapping in the positive end increases the Conley-Zehnder index by $1$, there is a canonical identification of graded modules:
\begin{alignat}{1}
CF^*(W_0,W_1)=C^*(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]\subset\Cth_+(\Sigma_0,\Sigma_1)\label{ident2}
\end{alignat}
If $p\in CF(W_0,W_1)$ we denote by $\gamma_p\in C^*(\Lambda_1^+,\Lambda_0^+)^\dagger[n-1]$ the corresponding Reeb chord.
The goal is to prove that this identification also applies at the level of complexes. We will show that under the identification \eqref{ident2}, the map $\boldsymbol{\Delta}_1^W$ is the identity map.
We consider the component
$\Delta_1^{W}:CF^*(W_0,W_1)\to C_{n-1-*}(\Lambda_1^+,\Lambda_0^+)$ of $\boldsymbol{\Delta}_1^W$ which is of degree $0$. Let $u\in\mathcal{M}^0_{W_0,W_1}(\gamma_{01};\boldsymbol{\delta}_0,p,\boldsymbol{\delta}_1)$ where $p\in W_0\cap W_1$, $\gamma_{01}\in\mathcal{R}(\Lambda_1^+,\Lambda_0^+)$ is a negative Reeb chord asymptotic, and $\boldsymbol{\delta}_i$ are words of degree $0$ pure Reeb chords which are also negative asymptotics. This disc contributes to $\Delta_1^{W}(p)$. By rigidity of $u$, we have
\begin{alignat*}{1}
n-1-|\gamma_{01}|-|p|_{\Cth_+(W_0,W_1)}=0
\end{alignat*}
Now, the projection of $u$ to $P$ is a pseudo-holomorphic map in $\mathcal{M}_{\pi_P(\Lambda_0^+),\pi_P(\Lambda_1^+)}(\gamma_{01};\boldsymbol{\delta}_0,\pi_P(p),\boldsymbol{\delta}_1)$ which has dimension $|\pi_P(p)|-|\gamma_{01}|-1=|\gamma_p|-|\gamma_{01}|-1$, but we have
\begin{alignat*}{1}
0=n-1-|\gamma_{01}|-|p|_{\Cth_+(W_0,W_1)}=n-1-|\gamma_{01}|-(n-1-|\gamma_p|)=|\gamma_p|-|\gamma_{01}|
\end{alignat*}
where we have used the identification \eqref{ident2}.
This implies that $\pi_P(u)$ lies in a moduli space of dimension $-1$, so it must be constant. Hence, $\gamma_{01}=\gamma_p$. On the other hand, for each intersection point $p\in W_0\cap W_1$, the strip over $\gamma_p$ lifts to a disc in $\mathcal{M}^0_{W_0,W_1}(\gamma_p;\boldsymbol{\delta}_0,p,\boldsymbol{\delta}_1)$. We obtain that $\boldsymbol{\Delta}_1^W$ is the identity map.
\end{proof}
Next, we wrap the negative end of $\Sigma_1\odot W_1$ as schematized in Figure \ref{fig:wrapping-}, using a Hamiltonian defined by a function $h_{T,N}^-:\mathbb R\to\mathbb R$ satisfying:
$$\left\{\begin{array}{l}
h_{T,N}^-(t)=e^t\,\mbox{ for }\,t<-T-N-1,\\
h_{T,N}^-(t)=D\,\mbox{ for }\,t> -T-N,\\
(h_{T,N}^-)'(t)\geq0,\\
\end{array}\right.$$
for some positive constant $D\geq e^{-T-N}$, such that the Hamiltonian vector field is given by $\rho_{T,N}^-(t)\partial_z$ where $\rho_{T,N}^-:\mathbb R\to\mathbb R$ satisfies $\rho_{T,N}^-(t)=1$ for $t\leq -T-N-1$, $\rho_{T,N}^-(t)=0$ for $t\geq -T-N$, and $(\rho_{T,N}^-)'(t)\leq 0$.
Let $S_->0$ be greater than the length of the longest chord from $\Lambda_1^-$ to $\Lambda_0^-$ and define $V_1:=\varphi_{h_{T,N}^-}^{S_-}(\mathbb R\times\Lambda_1^-)$ and set $V_0:=\mathbb R\times\Lambda_0^-$. After concatenation, we obtain a pair $(V_0\odot\Sigma_0\odot W_0,V_1\odot\Sigma_1\odot W_1)=(\Sigma_0,V_1\odot\Sigma_1\odot W_1)$.
\begin{figure}[ht]
\begin{center}\includegraphics[width=5cm]{wrapping-2.eps}\end{center}
\caption{Wrapping the negative end of $\Sigma_1\odot W_1$.}
\label{fig:wrapping-}
\end{figure}
The Cthulhu complex of the pair $\big(\Sigma_0,V_1\odot(\Sigma_1\odot W_1)\big)$ has only intersection point generators; we have
$$\Cth_+\big(\Sigma_0,V_1\odot(\Sigma_1\odot W_1)\big)=CF(\Sigma_0,\Sigma_1\odot W_1)\oplus CF(V_0,V_1)$$
Under this decomposition, the map $\boldsymbol{b}_1^V:\Cth_+\big(\Sigma_0,V_1\odot(\Sigma_1\odot W_1)\big)\to\Cth_+(\Sigma_0,\Sigma_1\odot W_1)$
is given by
\begin{alignat*}{1}
\boldsymbol{b}_1^V=\left(\begin{matrix}
\id&0\\
b_1^V\circ\Delta_1^{\Sigma\odot W}&b_1^V\\
\end{matrix}\right)
\end{alignat*}
\begin{prop}\label{prop:transfer2}
The map $\boldsymbol{b}_1^V$ above is an isomorphism.
\end{prop}
\begin{proof}
The proof is of the same kind as that of Proposition \ref{prop:transfer}. In this case we have a canonical identification:
\begin{alignat}{1}
CF(V_0,V_1)=C(\Lambda_0^-,\Lambda_1^-)[1]\subset\Cth_+(\Sigma_0,\Sigma_1\odot W_1)\label{ident}
\end{alignat}
Let us consider the component $b_1^V:CF^*(V_0,V_1)\to C^{*-1}(\Lambda_0^-,\Lambda_1^-)$ of $\boldsymbol{b}_1^V$, which is of degree $0$. Let $u\in\mathcal{M}^0_{V_0,V_1}(\gamma_{10};\boldsymbol{\delta}_0,p,\boldsymbol{\delta}_1)$ be a disc contributing to this component, where $p\in V_0\cap V_1$, $\gamma_{10}\in\mathcal{R}(\Lambda_0^-,\Lambda_1^-)$ is a positive Reeb chord asymptotic, and the $\boldsymbol{\delta}_i$ are words of degree-$0$ pure Reeb chords which are negative asymptotics. By rigidity, we have
\begin{alignat*}{1}
(|\gamma_{10}|+1)-|p|_{CF(V_0,V_1)}=0
\end{alignat*}
The projection of $u$ to $P$ is a pseudo-holomorphic map in $\mathcal{M}_{\pi_P(\Lambda_0^-),\pi_P(\Lambda_1^-)}(\gamma_{10};\boldsymbol{\delta}_0,\pi_P(p),\boldsymbol{\delta}_1)$ which has dimension $|\gamma_{10}|-|\gamma_p|-1$.
Using the identification \eqref{ident} we have
$$0=|\gamma_{10}|-|p|_{CF(V_0,V_1)}+1=|\gamma_{10}|-(|\gamma_p|+1)+1=|\gamma_{10}|-|\gamma_p|$$
So the disk $\pi_P(u)$ must be constant and $\gamma_{10}=\gamma_p$. As in the proof of Proposition \ref{prop:transfer}, strips over the chords $\gamma_p$ lift to the corresponding rigid disks, so $b_1^V$ is the identity under the identification \eqref{ident}, and the triangular form of $\boldsymbol{b}_1^V$ then shows that it is an isomorphism.
\end{proof}
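Let us make explicit the linear-algebra step behind Propositions \ref{prop:transfer} and \ref{prop:transfer2}: a triangular block map whose diagonal entries are invertible is itself invertible. With coefficients taken modulo $2$ (an assumption consistent with the unsigned counts $\#\mathcal{M}$ used throughout; over other rings a sign would appear), one checks directly that
\begin{alignat*}{1}
\left(\begin{matrix}
\id&0\\
c&b\\
\end{matrix}\right)\left(\begin{matrix}
\id&0\\
b^{-1}\circ c&b^{-1}\\
\end{matrix}\right)=\left(\begin{matrix}
\id&0\\
c+c&\id\\
\end{matrix}\right)=\id
\end{alignat*}
and similarly for the composition in the other order. In the situation of Proposition \ref{prop:transfer2} we have $b=b_1^V$, which is the identity under the identification \eqref{ident}, and $c=b_1^V\circ\Delta_1^{\Sigma\odot W}$.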
\subsection{Invariance by compactly supported Hamiltonian isotopy}
Let us consider a pair $(\Sigma_0,\Sigma_1)$ of exact Lagrangian cobordisms and a path of exact Lagrangian cobordisms $\Sigma_0^s$, $s\in[0,1]$, induced by a compactly supported Hamiltonian isotopy, with $\Sigma_0^0:=\Sigma_0$. In particular, for all $s\in[0,1]$, $\Sigma_0^s$ has positive and negative cylindrical ends over $\Lambda_0^\pm$.
\begin{prop}\label{prop:iso}
The complexes $\big(\Cth_+(\Sigma_0^0,\Sigma_1),\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}\big)$ and $\big(\Cth_+(\Sigma_0^1,\Sigma_1),\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}\big)$ are homotopy equivalent.
\end{prop}
\begin{proof}
First, wrap the positive and negative ends of $\Sigma_1$ in the negative and positive Reeb direction respectively, as done in the previous section. One gets the pair of cobordisms $(\Sigma_0,V_1\odot\Sigma_1\odot W_1)$, whose Cthulhu complex is isomorphic to that of the pair $(\Sigma_0,\Sigma_1)$ by Propositions \ref{prop:transfer} and \ref{prop:transfer2}. Then, all along the isotopy the complex $\Cth_+(\Sigma_0^s,V_1\odot\Sigma_1\odot W_1)$ has only intersection point generators, and the bifurcation analysis explained in \cite[Proposition 8.4]{CDGG2long} (see also \cite{E1} for the case of fillings) proves that the complexes $\Cth_+(\Sigma_0^0,V_1\odot\Sigma_1\odot W_1)$ and $\Cth_+(\Sigma_0^1,V_1\odot\Sigma_1\odot W_1)$ are homotopy equivalent. Finally, unwrapping the ends of $\Sigma_1$ leads again to an isomorphism of complexes.
\end{proof}
\subsection{Proof of Theorem \ref{teo:acyclicity}}
Consider a pair of Lagrangian cobordisms $(\Sigma_0,\Sigma_1)$ satisfying the hypotheses of the theorem. We assume without loss of generality that $\Lambda_0^-$ and $\Lambda_1^-$ are horizontally displaceable (in the case where the $\Lambda_i^+$ are horizontally displaceable but the $\Lambda_i^-$ are not, the same type of argument works, with the wrapping moved to the positive end instead of the negative end; see below). By wrapping the cylindrical ends of $\Sigma_1$ we get the pair $(\Sigma_0,V_1\odot\Sigma_1\odot W_1)$ such that:
\begin{enumerate}
\item $\Sigma_0$ and $V_1\odot\Sigma_1\odot W_1$ are cylindrical outside $[-T-N,T+N]\times Y$.
\item $\Cth_+(\Sigma_0,V_1\odot\Sigma_1\odot W_1)$ has only intersection point generators.
\end{enumerate}
By a Hamiltonian isotopy $\varphi^1_{h_c}$ compactly supported in $[-T-N,T+N]\times Y$, we perturb $V_1\odot\Sigma_1\odot W_1$ in such a way that all the intersection points are in fact contained in $[-T-N,-T]\times Y$, and are in bijective correspondence with mixed chords of $\Lambda_0^-\cup\Lambda_1^-$, as schematized in Figure \ref{fig:compact_ham_iso}. For this purpose we use, for example, the Hamiltonian $H_c(t,p,z)=h_c(t)$ with $h_c:\mathbb R\to\mathbb R$ satisfying
$$\left\{\begin{array}{l}
h_c(t)=-e^t+C\mbox{ for }t\in[-T,T],\\
(-\infty,-T-N)\cup(T+N,\infty)\subset(h_c')^{-1}(0)\\
h_c'(t)\leq0
\end{array}\right.$$
with $C>0$ a constant such that $h_c(t)=0$ for $t\leq-T-N$, to ensure that the primitive of the perturbed cobordism still vanishes on the negative cylindrical end. The Hamiltonian vector field is given by $\rho_c(t)\partial_z$ with $\rho_c(t)=-1$ on $[-T,T]$ and $0$ on $(-\infty,-T-N)\cup(T+N,\infty)$.
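The same computation as for the previous wrapping (under the same conventions) gives $\rho_c(t)=e^{-t}h_c'(t)$, and indeed
\begin{alignat*}{1}
h_c'(t)=-e^t\,\mbox{ on }\,[-T,T]\,\,\Longrightarrow\,\,\rho_c(t)=-1\,\mbox{ on }\,[-T,T]
\end{alignat*}
while $\rho_c$ vanishes wherever $h_c$ is locally constant.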
\begin{figure}[ht]
\begin{center}\includegraphics[width=5cm]{compact_ham_iso.eps}\end{center}
\caption{Deformation by a compactly supported Hamiltonian isotopy.}
\label{fig:compact_ham_iso}
\end{figure}
Let us denote $\widetilde{\Sigma}_1=\varphi_{h_c}^S(V_1\odot\Sigma_1\odot W_1)$, with $S$ large enough so that there are no intersection points in $[-T,T+N]\times Y$ anymore. Such an $S$ exists since $\Sigma_0\cap([-T-N,T+N]\times Y)$ and $(V_1\odot\Sigma_1\odot W_1)\cap([-T-N,T+N]\times Y)$ are compact.
By Proposition \ref{prop:iso}, the complexes $\Cth_+(\Sigma_0,\Sigma_1)$ and $\Cth_+(\Sigma_0,\widetilde{\Sigma}_1)$ have the same homology. Now we prove that $\Cth_+(\Sigma_0,\widetilde{\Sigma}_1)$ is acyclic.
Given the Hamiltonian we used to perturb $V_1\odot\Sigma_1\odot W_1$, we have the canonical identification:
\begin{alignat*}{1}
\Cth_+(\Sigma_0,\widetilde{\Sigma}_1)=\Cth_+(\mathbb R\times\Lambda_0^-,\varphi_{h_c}^S(V_1))
\end{alignat*}
Then, we unwrap the negative end of $\varphi_{h_c}^S(V_1)$, and thus $\Cth_+(\mathbb R\times\Lambda_0^-,\varphi_{h_c}^S(V_1))$ is isomorphic to $\Cth_+(\mathbb R\times\Lambda_0^-,\mathbb R\times\widetilde{\Lambda}_1^-)$ where $\widetilde{\Lambda}_1^-$ is a translation of $\Lambda_1^-$ in the negative Reeb direction and lies entirely below $\Lambda_0^-$, see Figure \ref{concordances}.
In this case, we have $\Cth_+(\mathbb R\times\Lambda_0^-,\mathbb R\times\widetilde{\Lambda}_1^-)=C(\Lambda_0^-,\widetilde{\Lambda}_1^-)[1]$ with the differential $b_1^-$ being the Legendrian contact cohomology differential bilinearized by $\varepsilon_0^-$ and $\varepsilon_1^-$. As the pair $(\Lambda_0^-,\Lambda_1^-)$ consists of horizontally displaceable Legendrians, this complex is acyclic (observe that $\pi_P(\Lambda_1^-)=\pi_P(\widetilde{\Lambda}_1^-)$).
\begin{figure}[ht]
\begin{center}\includegraphics[width=9cm]{concordances.eps}\end{center}
\caption{Left: pair of concordances $\big(\mathbb R\times\Lambda_0^-,\varphi_{h_c}^S(V_1)\big)$; right: pair $(\mathbb R\times\Lambda_0^-,\mathbb R\times\widetilde{\Lambda}_1^-)$.}
\label{concordances}
\end{figure}
\section{Product structure}
\subsection{Definition of the map}\label{section:def_product}
Given three pairwise transverse exact Lagrangian cobordisms $\Lambda_i^-\prec_{\Sigma_i}\Lambda_i^+$, $i=0,1,2$, such that the algebras $\mathcal{A}(\Lambda_i^-)$ admit augmentations $\varepsilon_0^-$, $\varepsilon_1^-$ and $\varepsilon_2^-$, we will define a map
\begin{alignat*}{1}
\mfm_2:\Cth_+(\Sigma_1,\Sigma_2)\otimes\Cth_+(\Sigma_0,\Sigma_1)\to\Cth_+(\Sigma_0,\Sigma_2)
\end{alignat*}
and we prove that it satisfies the Leibniz rule.
Let us denote the components of the product $\mfm_2$ by $\mfm_{ij}^k$, with $i,j,k\in\{+,0,-\}$, such that $\mfm_{ij}^k$ takes as arguments a generator of type $i$ in $\Cth_+(\Sigma_1,\Sigma_2)$ and a generator of type $j$ in $\Cth_+(\Sigma_0,\Sigma_1)$, and has as output a generator of type $k$ in $\Cth_+(\Sigma_0,\Sigma_2)$. For example, $\mfm_{+-}^0$ is the component $C(\Lambda_2^+,\Lambda_1^+)^\dagger[n-1]\otimes C(\Lambda_0^-,\Lambda_1^-)[1]\to CF(\Sigma_0,\Sigma_2)$. We define $\mfm_2$ as follows. First, the eight components corresponding to the map
\begin{alignat*}{1}
CF_{-\infty}(\Sigma_1,\Sigma_2)\otimes CF_{-\infty}(\Sigma_0,\Sigma_1)\to CF_{-\infty}(\Sigma_0,\Sigma_2)
\end{alignat*}
are the same components as those defining the product $\mfm_2^{-\infty}$ in \cite{L}; we start by recalling its definition (see also Figure \ref{m_infini}). For a pair of asymptotics $(a_2,a_1)$ which is equal to $(x_{12},x_{01}),(x_{12},\gamma_{10}),(\gamma_{21},x_{01})$ or $(\gamma_{21},\gamma_{10})$ in $CF_{-\infty}(\Sigma_1,\Sigma_2)\otimes CF_{-\infty}(\Sigma_0,\Sigma_1)$ ($x_{ij}$ is an intersection point in $\Sigma_i\cap\Sigma_j$ and $\gamma_{ij}$ is a chord from $\Lambda_i^-$ to $\Lambda_j^-$) we have
\begin{alignat*}{1}
&\mfm_2^{0}(a_2,a_1)=\sum\limits_{p_{20}\in\Sigma_0\cap\Sigma_2,\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{012}}(p_{20};\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,a_2,\boldsymbol{\delta}_2)\varepsilon_0^-(\boldsymbol{\delta}_0)\varepsilon_1^-(\boldsymbol{\delta}_1)\varepsilon_2^-(\boldsymbol{\delta}_2)\cdot p_{20}
\end{alignat*}
where the sum is over all intersection points $p_{20}\in\Sigma_0\cap\Sigma_2$ and words $\boldsymbol{\delta}_i$ of pure Reeb chords of $\Lambda_i^-$. Then, for a pair $(x_{12},x_{01})\in CF(\Sigma_1,\Sigma_2)\otimes CF(\Sigma_0,\Sigma_1)$, we have
\small
\begin{alignat*}{1}
&\mfm_{00}^{-}(x_{12},x_{01})=\sum\limits_{\substack{\gamma_{20},\gamma_{02}\\\boldsymbol{\delta}_i,\boldsymbol{\delta}_i'}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda^-_{012}}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{02},\boldsymbol{\delta}_2)\#\mathcal{M}^0_{\Sigma_{012}}(\gamma_{02};\boldsymbol{\delta}_0',x_{01},\boldsymbol{\delta}_1',x_{12},\boldsymbol{\delta}_2')\cdot\varepsilon_i^-(\boldsymbol{\delta}_i\boldsymbol{\delta}_i')\cdot \gamma_{20}\\
&+\sum\limits_{\substack{\gamma_{20},\gamma_{01},\gamma_{12}\\\boldsymbol{\delta}_i,\boldsymbol{\delta}_i',\boldsymbol{\delta}_i''}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda^-_{012}}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)\#\mathcal{M}^0_{\Sigma_{01}}(\gamma_{01};\boldsymbol{\delta}_0',x_{01},\boldsymbol{\delta}_1')\#\mathcal{M}^0_{\Sigma_{12}}(\gamma_{12};\boldsymbol{\delta}_1'',x_{12},\boldsymbol{\delta}_2'')\cdot\varepsilon_i^-(\boldsymbol{\delta}_i\boldsymbol{\delta}_i'\boldsymbol{\delta}_i'')\cdot \gamma_{20}
\end{alignat*}
\normalsize
where $\varepsilon_i^-(\boldsymbol{\delta}_i)$ stands for the product of the augmentations applied to the corresponding pure chords. For a pair $(x_{12},\gamma_{10})\in CF(\Sigma_1,\Sigma_2)\otimes C(\Lambda_0^-,\Lambda_1^-)$, we have
\small
\begin{alignat*}{1}
&\mfm_{0-}^{-}(x_{12},\gamma_{10})=\sum\limits_{\substack{\gamma_{20},\gamma_{02}\\\boldsymbol{\delta}_i,\boldsymbol{\delta}_i'}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda^-_{012}}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{02},\boldsymbol{\delta}_2)\#\mathcal{M}^0_{\Sigma_{012}}(\gamma_{02};\boldsymbol{\delta}_0',\gamma_{10},\boldsymbol{\delta}_1',x_{12},\boldsymbol{\delta}_2')\cdot\varepsilon_i^-(\boldsymbol{\delta}_i\boldsymbol{\delta}_i')\cdot \gamma_{20}\\
&\hspace{2cm}+\sum\limits_{\substack{\gamma_{20},\gamma_{12}\\\boldsymbol{\delta}_i,\boldsymbol{\delta}_i'}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda^-_{012}}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)\#\mathcal{M}^0_{\Sigma_{12}}(\gamma_{12};\boldsymbol{\delta}_1',x_{12},\boldsymbol{\delta}_2')\cdot\varepsilon_i^-(\boldsymbol{\delta}_i\boldsymbol{\delta}_i')\cdot \gamma_{20}
\end{alignat*}
\normalsize
and the obvious symmetric formula for a pair $(\gamma_{21},x_{01})$,
and finally for a pair of Reeb chords $(\gamma_{21},\gamma_{10})\in C(\Lambda_1^-,\Lambda_2^-)\otimes C(\Lambda_0^-,\Lambda_1^-)$,
\begin{alignat*}{1}
&\mfm_{--}^{-}(\gamma_{21},\gamma_{10})=\sum\limits_{\gamma_{20},\boldsymbol{\delta}_i}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda^-_{012}}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1,\gamma_{21},\boldsymbol{\delta}_2)\cdot\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot \gamma_{20}
\end{alignat*}
\begin{figure}[ht]
\begin{center}\includegraphics[width=13cm]{m_infini.eps}\end{center}
\caption{Pseudo-holomorphic discs contributing to $\mfm_2^{-\infty}$.}
\label{m_infini}
\end{figure}
Then, let us define the remaining components of the map $\mfm_2$, involving Reeb chords in the positive end. First, the components $\mfm_{00}^+$, $\mfm_{0-}^+$, $\mfm_{-0}^+$, and $\mfm_{--}^+$ vanish. It remains to define $\mfm_{++}^k$, $\mfm_{+i}^k$, and $\mfm_{i+}^k$ for $i\in\{0,-\}$ and $k\in\{+,0,-\}$.
Given a pair $(\gamma_{12},\gamma_{01})\in C(\Lambda_2^+,\Lambda_1^+)\otimes C(\Lambda_1^+,\Lambda_0^+)$, we have first
\small
\begin{alignat*}{1}
\mfm_{++}^+(\gamma_{12},\gamma_{01})&=\sum\limits_{\gamma_{02},\boldsymbol{\zeta}_i}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{01},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)\varepsilon_i^+(\boldsymbol{\zeta}_i)\cdot\gamma_{02}\\
&+\sum\limits_{\substack{\gamma_{02},\gamma_{21}\\\boldsymbol{\zeta}_i,\boldsymbol{\delta}_i}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{01},\boldsymbol{\zeta}_1,\gamma_{21},\boldsymbol{\zeta}_2)\#\mathcal{M}^0_{\Sigma_{12}}(\gamma_{21};\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)\cdot\varepsilon_i^+(\boldsymbol{\zeta}_i)\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot\gamma_{02}\\
&+\sum\limits_{\substack{\gamma_{02},\gamma_{10}\\\boldsymbol{\zeta}_i,\boldsymbol{\delta}_i}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{10},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)\#\mathcal{M}^0_{\Sigma_{01}}(\gamma_{10};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1)\cdot\varepsilon_i^+(\boldsymbol{\zeta}_i)\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot\gamma_{02}\end{alignat*}
\normalsize
summing over $\gamma_{ij}\in C(\Lambda_j^+,\Lambda_i^+)$, $\boldsymbol{\zeta}_i$ words of Reeb chords of $\Lambda_i^+$, for $i=0,1,2$, and $\boldsymbol{\delta}_i$ words of Reeb chords of $\Lambda_i^-$, for $i=0,1,2$.
Then we have
\begin{alignat*}{1}
\mfm_{++}^0(\gamma_{12},\gamma_{01})=\sum\limits_{p_{20},\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{012}}(p_{20};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot p_{20}
\end{alignat*}
summing over $p_{20}\in\Sigma_0\cap\Sigma_2$ and $\boldsymbol{\delta}_i$ as above.
And finally the last component of the product for this pair of generators is:
\small
\begin{alignat*}{1}
&\mfm_{++}^-(\gamma_{12},\gamma_{01})=\sum\limits_{\substack{\gamma_{20},\xi_{02}\\\boldsymbol{\delta}_i,\boldsymbol{\delta}_i'}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{02},\boldsymbol{\delta}_2)\#\mathcal{M}^0_{\Sigma_{012}}(\xi_{02};\boldsymbol{\delta}_0',\gamma_{01},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')\varepsilon_i^-(\boldsymbol{\delta}_i)\varepsilon_i^-(\boldsymbol{\delta}_i')\cdot \gamma_{20}\\
&+\sum\limits_{\substack{\gamma_{20},\xi_{01},\xi_{12}\\ \boldsymbol{\delta}_i,\boldsymbol{\delta}_i',\boldsymbol{\delta}_i''}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{01},\boldsymbol{\delta}_1,\xi_{12},\boldsymbol{\delta}_2)\#\mathcal{M}^0_{\Sigma_{01}}(\xi_{01};\boldsymbol{\delta}_0',\gamma_{01},\boldsymbol{\delta}_1')\#\mathcal{M}^0_{\Sigma_{12}}(\xi_{12};\boldsymbol{\delta}_1'',\gamma_{12},\boldsymbol{\delta}_2'')\cdot\varepsilon_i^-(\boldsymbol{\delta}_i\boldsymbol{\delta}_i'\boldsymbol{\delta}_i'')\cdot\gamma_{20}
\end{alignat*}
\normalsize
summing over $\gamma_{20}\in C(\Lambda_0^-,\Lambda_2^-)$, $\xi_{ij}\in C(\Lambda_j^-,\Lambda_i^-)$, and $\boldsymbol{\delta}_i$, $\boldsymbol{\delta}_i'$ words of Reeb chords of $\Lambda_i^-$.
Then, for a pair of generators $(\gamma_{12},x_{01})\in C(\Lambda_2^+,\Lambda_1^+)\otimes CF(\Sigma_0,\Sigma_1)$ we define:
\small
\begin{alignat*}{1}
&\mfm_{+0}^+(\gamma_{12},x_{01})=\sum\limits_{\substack{\gamma_{02},\gamma_{10}\\\boldsymbol{\zeta}_i,\boldsymbol{\delta}_i}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{10},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)\#\mathcal{M}^0_{\Sigma_{01}}(\gamma_{10};\boldsymbol{\delta}_0,x_{01},\boldsymbol{\delta}_1)\cdot\varepsilon_i^+(\boldsymbol{\zeta}_i)\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot\gamma_{02}\\
&\mfm_{+0}^0(\gamma_{12},x_{01})=\sum\limits_{p_{20},\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{012}}(p_{20};\boldsymbol{\delta}_0,x_{01},\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot p_{20}\\
&\mfm_{+0}^-(\gamma_{12},x_{01})=\sum\limits_{\substack{\gamma_{20},\xi_{02}\\\boldsymbol{\delta}_i,\boldsymbol{\delta}_i'}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{02},\boldsymbol{\delta}_2)\#\mathcal{M}^0_{\Sigma_{012}}(\xi_{02};\boldsymbol{\delta}_0',x_{01},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')\varepsilon_i^-(\boldsymbol{\delta}_i)\varepsilon_i^-(\boldsymbol{\delta}_i')\cdot \gamma_{20}\\
&+\sum\limits_{\substack{\gamma_{20},\xi_{01},\xi_{12}\\ \boldsymbol{\delta}_i,\boldsymbol{\delta}_i',\boldsymbol{\delta}_i''}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{01},\boldsymbol{\delta}_1,\xi_{12},\boldsymbol{\delta}_2)\#\mathcal{M}^0_{\Sigma_{01}}(\xi_{01};\boldsymbol{\delta}_0',x_{01},\boldsymbol{\delta}_1')\#\mathcal{M}^0_{\Sigma_{12}}(\xi_{12};\boldsymbol{\delta}_1'',\gamma_{12},\boldsymbol{\delta}_2'')\cdot\varepsilon_i^-(\boldsymbol{\delta}_i\boldsymbol{\delta}_i'\boldsymbol{\delta}_i'')\cdot\gamma_{20}
\end{alignat*}
\normalsize
We finish by defining the product for a pair $(\gamma_{12},\gamma_{10})\in C(\Lambda_2^+,\Lambda_1^+)\otimes C(\Lambda^-_0,\Lambda^-_1)$ as follows:
\small
\begin{alignat*}{1}
&\mfm_{+-}^+(\gamma_{12},\gamma_{10})=\sum\limits_{\substack{\gamma_{02},\xi_{10}\\\boldsymbol{\zeta}_i,\boldsymbol{\delta}_i}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\xi_{10},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)\#\mathcal{M}^0_{\Sigma_{01}}(\xi_{10};\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1)\cdot\varepsilon_i^+(\boldsymbol{\zeta}_i)\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot\gamma_{02}\\
&\mfm_{+-}^0(\gamma_{12},\gamma_{10})=\sum\limits_{p_{20},\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{012}}(p_{20};\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot p_{20}\\
& \mfm_{+-}^-(\gamma_{12},\gamma_{10})=\sum\limits_{\substack {\gamma_{20},\xi_{02}\\\boldsymbol{\delta}_i,\boldsymbol{\delta}_i'}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{02},\boldsymbol{\delta}_2)\#\mathcal{M}^0_{\Sigma_{012}}(\xi_{02};\boldsymbol{\delta}_0',\gamma_{10},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')\varepsilon_i^-(\boldsymbol{\delta}_i)\varepsilon_i^-(\boldsymbol{\delta}_i')\cdot \gamma_{20}\\
&\hspace{2cm}+\sum\limits_{\substack{\gamma_{20},\xi_{12}\\ \boldsymbol{\delta}_i,\boldsymbol{\delta}_i'}}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1,\xi_{12},\boldsymbol{\delta}_2)\#\mathcal{M}^0_{\Sigma_{12}}(\xi_{12};\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')\cdot\varepsilon_i^-(\boldsymbol{\delta}_i\boldsymbol{\delta}_i')\cdot\gamma_{20}
\end{alignat*}
\normalsize
The components $\mfm_{0+}^k$ and $\mfm_{-+}^k$ for $k=+,0,-$ are defined analogously to $\mfm_{+0}^k$ and $\mfm_{+-}^k$. See Figures \ref{m++}, \ref{m+0} and \ref{m+-}.
\begin{figure}[ht]
\begin{center}\includegraphics[width=11cm]{m++.eps}\end{center}
\caption{Pseudo-holomorphic discs contributing to $\mfm_{++}^k$, $k=+,0,-$.}
\label{m++}
\end{figure}
\begin{figure}[ht]
\begin{center}\includegraphics[width=8cm]{m+0.eps}\end{center}
\caption{Pseudo-holomorphic discs contributing to $\mfm_{+0}^k$, $k=+,0,-$.}
\label{m+0}
\end{figure}
\begin{figure}[ht]
\begin{center}\includegraphics[width=8cm]{m+-.eps}\end{center}
\caption{Pseudo-holomorphic discs contributing to $\mfm_{+-}^k$, $k=+,0,-$.}
\label{m+-}
\end{figure}
\begin{teo}\label{teo_Leibniz_rule}
The map $\mfm_2$ satisfies the Leibniz rule, i.e. given three exact pairwise transverse Lagrangian cobordisms $\Lambda_i^-\prec_{\Sigma_i}\Lambda_i^+$ with augmentations $\varepsilon_i^-$ of $\mathcal{A}(\Lambda_i^-)$ for $i=0,1,2$, we have:
\begin{alignat*}{1}
\mfm_2(\mfm_1^{\varepsilon^-_1,\varepsilon^-_2},\cdot)+\mfm_2(\cdot,\mfm_1^{\varepsilon^-_0,\varepsilon^-_1})+\mfm_1^{\varepsilon_0^-,\varepsilon_2^-}\circ\mfm_2(\cdot,\cdot)=0
\end{alignat*}
\end{teo}
\begin{rem}
A \textquotedblleft complete\textquotedblright\, notation for the product would be something like $\mfm^{\Sigma_0,\Sigma_1,\Sigma_2}_{\varepsilon_0^-,\varepsilon_1^-,\varepsilon_2^-}$, as it depends on the choice of cobordisms and on the choice of augmentations of the negative ends. However, to simplify, we will just write $\mfm_2$, as the choices mentioned are clear from the context.
\end{rem}
As for $\mfm_1$, we can write the components $\mfm_2^+$ and $\mfm_2^-$ as compositions of maps; this will be convenient when describing the boundary of the compactification of $1$-dimensional moduli spaces. First, we introduce the maps
\begin{alignat*}{1}
&\Delta_2^+:\mathfrak{C}^*(\Lambda_1^+,\Lambda_2^+)\otimes\mathfrak{C}^*(\Lambda_0^+,\Lambda_1^+)\to C_{n-1-*}(\Lambda_2^+,\Lambda_0^+)\\
&\Delta_2^\Sigma:\Cth_+(\Sigma_1,\Sigma_2)\otimes\Cth_+(\Sigma_0,\Sigma_1)\to C_{n-1-*}(\Lambda_2^-,\Lambda_0^-)\\
&b_2^-:\mathfrak{C}^*(\Lambda_1^-,\Lambda_2^-)\otimes\mathfrak{C}^*(\Lambda_0^-,\Lambda_1^-)\to C^{*-1}(\Lambda_0^-,\Lambda_2^-)
\end{alignat*}
defined by
\begin{alignat*}{1}
&\Delta_2^+(\gamma_2,\gamma_1)=\sum\limits_{\gamma_{02},\boldsymbol{\zeta}_i}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_1,\boldsymbol{\zeta}_1,\gamma_2,\boldsymbol{\zeta}_2)\varepsilon_i^+(\boldsymbol{\zeta}_i)\cdot\gamma_{02}\\
&\Delta_2^\Sigma(a_2,a_1)=\sum\limits_{\gamma_{02},\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{012}}(\gamma_{02};\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,a_2,\boldsymbol{\delta}_2)\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot\gamma_{02}\\
&b_2^-(\gamma_2,\gamma_1)=\sum\limits_{\gamma_{20},\boldsymbol{\delta}_i}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_1,\boldsymbol{\delta}_1,\gamma_2,\boldsymbol{\delta}_2)\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot\gamma_{20}
\end{alignat*}
and observe that $\Delta_2^\Sigma$ vanishes on $C(\Lambda_1^-,\Lambda_2^-)\otimes C(\Lambda_0^-,\Lambda_1^-)$ for energy reasons.
Using these maps, we have
\begin{alignat}{2}
&\mfm_2^+=\Delta_2^+(\boldsymbol{b}_1^\Sigma\otimes\boldsymbol{b}_1^\Sigma)\label{defm+}\\
&\mfm_2^-=b_1^-\circ\Delta_2^\Sigma+b_2^-(\boldsymbol{\Delta}_1^\Sigma\otimes\boldsymbol{\Delta}_1^\Sigma)\label{defm-}
\end{alignat}
where the maps $b_1^-, \boldsymbol{\Delta}_1^\Sigma$ are defined in Section \ref{section:def_complex} and $\boldsymbol{b}_1^\Sigma$ in Section \ref{sec:conc} (see also Section \ref{special_case}).
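As a consistency check of \eqref{defm-}, let us evaluate it on a pair of negative Reeb chords $(\gamma_{21},\gamma_{10})\in C(\Lambda_1^-,\Lambda_2^-)\otimes C(\Lambda_0^-,\Lambda_1^-)$. The formulas for $\mfm_{0-}^-$ and $\mfm_{--}^-$ indicate that $\boldsymbol{\Delta}_1^\Sigma$ acts as the identity on negative chord generators; granting this, and using the vanishing of $\Delta_2^\Sigma$ on such pairs noted above, we get
\begin{alignat*}{1}
\mfm_2^-(\gamma_{21},\gamma_{10})=b_1^-\circ\Delta_2^\Sigma(\gamma_{21},\gamma_{10})+b_2^-\big(\boldsymbol{\Delta}_1^\Sigma(\gamma_{21}),\boldsymbol{\Delta}_1^\Sigma(\gamma_{10})\big)=b_2^-(\gamma_{21},\gamma_{10})
\end{alignat*}
which is exactly the formula for $\mfm_{--}^-$ given in Section \ref{section:def_product}.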
\subsection{Leibniz rule} \label{sec:Leibniz}
The map $\mfm_2$ restricted to $CF_{-\infty}(\Sigma_1,\Sigma_2)\otimes CF_{-\infty}(\Sigma_0,\Sigma_1)$ satisfies the Leibniz rule because $\mfm_2^{-\infty}$ satisfies it with respect to the differential $\mfm_1^{-\infty}$ (see \cite{L}) and there is no component of the differential $\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}$ from the subcomplex $CF_{-\infty}(\Sigma_0,\Sigma_1)$ to $C(\Lambda_1^+,\Lambda_0^+)$. It remains to check the Leibniz rule for each pair of generators containing at least one Reeb chord in the positive end:
\begin{enumerate}
\item[(a)] $(\gamma_{12},\gamma_{01})\in C(\Lambda_2^+,\Lambda_1^+)\otimes C(\Lambda_1^+,\Lambda_0^+)$,
\item[(b)] $(\gamma_{12},x_{01})\in C(\Lambda_2^+,\Lambda_1^+)\otimes CF(\Sigma_0,\Sigma_1)$ and $(x_{12},\gamma_{01})\in CF(\Sigma_1,\Sigma_2)\otimes C(\Lambda_1^+,\Lambda_0^+)$,
\item[(c)] $(\gamma_{12},\gamma_{10})\in C(\Lambda_2^+,\Lambda_1^+)\otimes C(\Lambda_0^-,\Lambda_1^-)$ and $(\xi_{12},\gamma_{01})\in C(\Lambda_1^-,\Lambda_2^-)\otimes C(\Lambda_1^+,\Lambda_0^+)$.
\end{enumerate}
As usual, the Leibniz rule will follow from the study of the boundary of the compactification of some (product of) moduli spaces. Recall that we described in Section \ref{sec:structure} the different types of broken discs arising in this boundary. We focus now on some particular moduli spaces and specify the algebraic contribution of each broken disc. \\
\noindent\textbf{Leibniz rule for a pair of type (a):}
For the pair of generators of type (a), we will show that the following three relations are satisfied:
\begin{alignat}{1}
&\mfm_2^+(\mfm_1^{\varepsilon_1^-,\varepsilon_2^-}(\gamma_{12}),\gamma_{01})+\mfm_2^+(\gamma_{12},\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}(\gamma_{01}))+\mfm_1^+\circ\mfm_2^+(\gamma_{12},\gamma_{01})=0\label{rel+++}\\
&\mfm_2^0(\mfm_1^{\varepsilon_1^-,\varepsilon_2^-}(\gamma_{12}),\gamma_{01})+\mfm_2^0(\gamma_{12},\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}(\gamma_{01}))+\mfm_1^0\circ\mfm_2(\gamma_{12},\gamma_{01})=0\label{rel++0}\\
&\mfm_2^-(\mfm_1^{\varepsilon_1^-,\varepsilon_2^-}(\gamma_{12}),\gamma_{01})+\mfm_2^-(\gamma_{12},\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}(\gamma_{01}))+\mfm_1^-\circ\mfm_2(\gamma_{12},\gamma_{01})=0\label{rel++-}
\end{alignat}
After adding in \eqref{rel+++} the vanishing terms $\mfm_1^+\circ\mfm_2^0(\gamma_{12},\gamma_{01})=\mfm_1^+\circ\mfm_2^-(\gamma_{12},\gamma_{01})=0$, the sum of these three relations gives the Leibniz rule for the pair $(\gamma_{12},\gamma_{01})\in C(\Lambda_2^+,\Lambda_1^+)\otimes C(\Lambda_1^+,\Lambda_0^+)$.
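Schematically, writing $\mfm_1=\mfm_1^++\mfm_1^0+\mfm_1^-$ and $\mfm_2=\mfm_2^++\mfm_2^0+\mfm_2^-$, the sum of the three relations (together with the vanishing terms just mentioned) reads
\begin{alignat*}{1}
\sum\limits_{k\in\{+,0,-\}}\Big(\mfm_2^k(\mfm_1^{\varepsilon_1^-,\varepsilon_2^-}(\gamma_{12}),\gamma_{01})+\mfm_2^k(\gamma_{12},\mfm_1^{\varepsilon_0^-,\varepsilon_1^-}(\gamma_{01}))+\mfm_1^k\circ\mfm_2(\gamma_{12},\gamma_{01})\Big)=0
\end{alignat*}
which is the Leibniz rule of Theorem \ref{teo_Leibniz_rule} evaluated on this pair.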
First, we see that relation \eqref{rel+++} follows from the study of the boundary of the compactification of the following products of moduli spaces:
\begin{alignat}{1}
&\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{01},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)\\
&\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{01},\boldsymbol{\zeta}_1,\gamma_{21},\boldsymbol{\zeta}_2)\times\mathcal{M}^0_{\Sigma_{12}}(\gamma_{21};\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)\label{mod2}\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{01},\boldsymbol{\zeta}_1,\gamma_{21},\boldsymbol{\zeta}_2)\times\mathcal{M}^1_{\Sigma_{12}}(\gamma_{21};\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)\label{mod3}\\
&\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{10},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)\times\mathcal{M}^0_{\Sigma_{01}}(\gamma_{10};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1)\label{mod4}\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{10},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)\times\mathcal{M}^1_{\Sigma_{01}}(\gamma_{10};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1)\label{mod5}
\end{alignat}
\begin{figure}[ht]
\begin{center}\includegraphics[width=11cm]{bris++1.eps}\end{center}
\caption{Types of broken discs in the boundary of $\overline{\mathcal{M}^2}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{01},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)$.}
\label{bris++1}
\end{figure}
The broken discs in $\partial\overline{\mathcal{M}^2}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{01},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)$ are schematized in Figure \ref{bris++1}. The sum of their algebraic contributions vanishes, which gives:
\begin{alignat}{1}
\Delta_2^+(\mfm_1^+(\gamma_{12}),\gamma_{01})+\Delta_2^+(\gamma_{12},\mfm_1^+(\gamma_{01}))&+\mfm_1^+\circ\Delta_2^+(\gamma_{12},\gamma_{01})\label{rel++1}\\
&+\Delta_2^+(b_1^+(\gamma_{12}),\gamma_{01})+\Delta_2^+(\gamma_{12},b_1^+(\gamma_{01}))=0\nonumber
\end{alignat}
The boundary of the compactification of \eqref{mod2}, see Figure \ref{bris++2}, gives the algebraic relation:
\begin{alignat}{1}
\Delta_2^+(b_1^\Sigma(\gamma_{12}),\mfm_1^+(\gamma_{01}))+\mfm_1^+\circ\Delta_2^+(b_1^\Sigma(\gamma_{12}),\gamma_{01})+\Delta_2^+(b_1^+\circ b_1^\Sigma(\gamma_{12}),\gamma_{01})=0\label{rel++2}
\end{alignat}
\begin{figure}[ht]
\begin{center}\includegraphics[width=8cm]{bris++2.eps}\end{center}
\caption{Broken discs in $\partial\overline{\mathcal{M}^2}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{01},\boldsymbol{\zeta}_1,\gamma_{21},\boldsymbol{\zeta}_2)\times\mathcal{M}^0_{\Sigma_{12}}(\gamma_{21};\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)$.}
\label{bris++2}
\end{figure}
One gets the symmetric relation
\begin{alignat}{1}
\Delta_2^+(\mfm_1^+(\gamma_{12}),b_1^\Sigma(\gamma_{01}))+\mfm_1^+\circ\Delta_2^+(\gamma_{12},b_1^\Sigma(\gamma_{01}))+\Delta_2^+(\gamma_{12},b_1^+\circ b_1^\Sigma(\gamma_{01}))=0\label{rel++2bis}
\end{alignat}
by studying the boundary of \eqref{mod4}.
Finally, one gets the relation
\begin{alignat}{1}
\Delta_2^+(b_1^\Sigma\circ\mfm_1^+(\gamma_{12}),\gamma_{01})&+\Delta_2^+(b_1^+\circ b_1^\Sigma(\gamma_{12}),\gamma_{01})+\Delta_2^+(b_1^+(\gamma_{12}),\gamma_{01})\label{rel++3}\\
&+\Delta_2^+(b_1^\Sigma\circ\mfm_1^0(\gamma_{12}),\gamma_{01})+\Delta_2^+(b_1^\Sigma\circ\mfm_1^-(\gamma_{12}),\gamma_{01})=0\nonumber
\end{alignat}
and the symmetric
\begin{alignat}{1}
\Delta_2^+(\gamma_{12},b_1^\Sigma\circ\mfm_1^+(\gamma_{01}))&+\Delta_2^+(\gamma_{12},b_1^+\circ b_1^\Sigma(\gamma_{01}))+\Delta_2^+(\gamma_{12},b_1^+(\gamma_{01}))\label{rel++3bis}\\
&+\Delta_2^+(\gamma_{12},b_1^\Sigma\circ\mfm_1^0(\gamma_{01}))+\Delta_2^+(\gamma_{12},b_1^\Sigma\circ\mfm_1^-(\gamma_{01}))=0\nonumber
\end{alignat}
by studying first \eqref{mod3} and then \eqref{mod5} (see Figure \ref{bris++3}). Observe that for these last two, we consider the boundary of the compactification of moduli spaces of bananas with boundary on non-cylindrical parts of the cobordisms and with two positive Reeb chord asymptotics, as we have already done in the proof of Lemma \ref{lem:rel}.
\begin{figure}[ht]
\begin{center}\includegraphics[width=10cm]{bris++3.eps}\end{center}
\caption{Broken discs in $\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\gamma_{01},\boldsymbol{\zeta}_1,\gamma_{21},\boldsymbol{\zeta}_2)\times\partial\overline{\mathcal{M}^1}_{\Sigma_{12}}(\gamma_{21};\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)$.}
\label{bris++3}
\end{figure}
Summing \eqref{rel++1}, \eqref{rel++2}, \eqref{rel++2bis}, \eqref{rel++3} and \eqref{rel++3bis}, cancelling terms appearing twice and using the definition of $\boldsymbol{b}_1^\Sigma$ and $\mfm_2^+$ given in \eqref{defm+}, one obtains relation \eqref{rel+++}.
Then, the study of the boundary of the compactification of
\begin{alignat*}{1}
\mathcal{M}^1_{\Sigma_{012}}(p_{20};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)
\end{alignat*}
gives relation \eqref{rel++0}; see Figure \ref{bris++0} for a description of the broken discs. Indeed, the algebraic contributions of those discs are (from left to right and top to bottom in the figure):
\begin{alignat*}{1}
&\mfm_2^0(\mfm_1^+(\gamma_{12}),\gamma_{01})+\mfm_2^0(\gamma_{12},\mfm_1^+(\gamma_{01}))+\mfm_1^0\circ\Delta_2^+(\gamma_{12},\gamma_{01})+\mfm_1^0\circ\Delta_2^+(b_1^\Sigma(\gamma_{12}),\gamma_{01})\\
&+\mfm_1^0\circ\Delta_2^+(\gamma_{12},b_1^\Sigma(\gamma_{01}))+\mfm_2^0(\mfm_1^0(\gamma_{12}),\gamma_{01})+\mfm_2^0(\gamma_{12},\mfm_1^0(\gamma_{01}))+\mfm_1^0\circ\mfm_2^0(\gamma_{12},\gamma_{01})\\
&+\mfm_2^0(\mfm_1^-(\gamma_{12}),\gamma_{01})+\mfm_2^0(\gamma_{12},\mfm_1^-(\gamma_{01}))+\mfm_1^0\circ b_1^-\circ\Delta_2^\Sigma(\gamma_{12},\gamma_{01})+\mfm_1^0\circ b_2^-(\Delta_1^\Sigma(\gamma_{12}),\Delta_1^\Sigma(\gamma_{01}))=0
\end{alignat*}
And using the definitions of $\mfm_2^+$ and $\mfm_2^-$ given in \eqref{defm+} and \eqref{defm-} one deduces relation \eqref{rel++0}.
\begin{figure}[ht]
\begin{center}\includegraphics[width=12cm]{bris++0bis.eps}\end{center}
\caption{Broken discs in $\partial\overline{\mathcal{M}^1}_{\Sigma_{012}}(p_{20};\boldsymbol{\delta}_0,\gamma_{01},\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)$.}
\label{bris++0}
\end{figure}
Finally, analogously to the previous cases, the broken curves in the boundary of the compactification of
\begin{alignat}{1}
&\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{02},\boldsymbol{\delta}_2)\times\mathcal{M}^0_{\Sigma_{012}}(\gamma_{02};\boldsymbol{\delta}_0',\gamma_{01},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{02},\boldsymbol{\delta}_2)\times\mathcal{M}^1_{\Sigma_{012}}(\gamma_{02};\boldsymbol{\delta}_0',\gamma_{01},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')\\
&\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{01},\boldsymbol{\delta}_1,\xi_{12},\boldsymbol{\delta}_2)\times\mathcal{M}^0_{\Sigma_{01}}(\xi_{01};\boldsymbol{\delta}_0',\gamma_{01},\boldsymbol{\delta}_1')\times\mathcal{M}^0_{\Sigma_{12}}(\xi_{12};\boldsymbol{\delta}_1'',\gamma_{12},\boldsymbol{\delta}_2'')\label{prod1}\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{01},\boldsymbol{\delta}_1,\xi_{12},\boldsymbol{\delta}_2)\times\mathcal{M}^1_{\Sigma_{01}}(\xi_{01};\boldsymbol{\delta}_0',\gamma_{01},\boldsymbol{\delta}_1')\times\mathcal{M}^0_{\Sigma_{12}}(\xi_{12};\boldsymbol{\delta}_1'',\gamma_{12},\boldsymbol{\delta}_2'')\label{prod2}\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{01},\boldsymbol{\delta}_1,\xi_{12},\boldsymbol{\delta}_2)\times\mathcal{M}^0_{\Sigma_{01}}(\xi_{01};\boldsymbol{\delta}_0',\gamma_{01},\boldsymbol{\delta}_1')\times\mathcal{M}^1_{\Sigma_{12}}(\xi_{12};\boldsymbol{\delta}_1'',\gamma_{12},\boldsymbol{\delta}_2'')\label{prod3}
\end{alignat}
give relation \eqref{rel++-}. First, there are two types of broken discs arising in $\partial\overline{\mathcal{M}^2}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{02},\boldsymbol{\delta}_2)$ giving the algebraic relation
\begin{alignat*}{1}
b_1^-\circ b_1^-(\gamma_{02})+b_1^-\circ\Delta_1^-(\gamma_{02})=0
\end{alignat*}
Then, the broken discs in $\partial\overline{\mathcal{M}^1}_{\Sigma_{012}}(\gamma_{02};\boldsymbol{\delta}_0',\gamma_{01},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')$ are schematized in Figure \ref{leibniz++-}. From this, the broken discs in
\begin{alignat*}{1}
\partial\overline{\mathcal{M}^2}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{02},\boldsymbol{\delta}_2)\times\mathcal{M}^0_{\Sigma_{012}}(\gamma_{02};\boldsymbol{\delta}_0',\gamma_{01},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')
\end{alignat*}
contribute algebraically to
\begin{alignat}{1}
b_1^-\circ b_1^-\circ\Delta_2^\Sigma(\gamma_{12},\gamma_{01})+b_1^-\circ\Delta_1^-\circ\Delta_2^\Sigma(\gamma_{12},\gamma_{01})\label{rel_banane}
\end{alignat}
and the broken discs in the boundary of
\begin{alignat*}{1}
\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{02},\boldsymbol{\delta}_2)\times\partial\overline{\mathcal{M}^1}_{\Sigma_{012}}(\gamma_{02};\boldsymbol{\delta}_0',\gamma_{01},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')
\end{alignat*}
to (from top to bottom and left to right in Figure \ref{leibniz++-})
\small
\begin{alignat*}{1}
&b_1^-\circ\Delta_2^\Sigma(\mfm_1^+(\gamma_{12}),\gamma_{01})+b_1^-\circ\Delta_2^\Sigma(\gamma_{12},\mfm_1^+(\gamma_{01}))+\mfm_1^-\circ\Delta_2^+(\gamma_{12},\gamma_{01})+\mfm_1^-\circ\Delta_2^+(b_1^\Sigma(\gamma_{12}),\gamma_{01})+\mfm_1^-\circ\Delta_2^+(\gamma_{12},b_1^\Sigma(\gamma_{01}))\\
&+\,b_1^-\circ\Delta_2^\Sigma(\mfm_1^0(\gamma_{12}),\gamma_{01})+b_1^-\circ\Delta_2^\Sigma(\gamma_{12},\mfm_1^0(\gamma_{01}))+\mfm_1^-\circ\mfm_2^0(\gamma_{12},\gamma_{01})\\
&+\,b_1^-\circ\Delta_2^\Sigma(\mfm_1^-(\gamma_{12}),\gamma_{01})+b_1^-\circ\Delta_2^\Sigma(\gamma_{12},\mfm_1^-(\gamma_{01}))+b_1^-\circ\Delta_2^-(\Delta_1^\Sigma(\gamma_{12}),\Delta_1^\Sigma(\gamma_{01}))+b_1^-\circ \Delta_1^-\circ\Delta_2^\Sigma(\gamma_{12},\gamma_{01})
\end{alignat*}
\normalsize
\begin{figure}[ht]
\begin{center}\includegraphics[width=12cm]{leibniz++-.eps}\end{center}
\caption{Broken discs in $\partial\overline{\mathcal{M}^1}_{\Sigma_{012}}(\gamma_{02};\boldsymbol{\delta}_0',\gamma_{01},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')$.}
\label{leibniz++-}
\end{figure}
Note that the last three terms on the first line give $\mfm_1^-\circ\mfm_2^+(\gamma_{12},\gamma_{01})$ by definition of $\mfm_2^+$. Then all the terms starting with $b_1^-\circ\Delta_2^\Sigma$ contribute to $\mfm_2^-(\mfm_1(\gamma_{12}),\gamma_{01})+\mfm_2^-(\gamma_{12},\mfm_1(\gamma_{01}))$, since $\mfm_2^-=b_1^-\circ\Delta_2^\Sigma+b_2^-(\boldsymbol{\Delta}_1^\Sigma\otimes\boldsymbol{\Delta}_1^\Sigma)$. Then, the configuration contributing to $b_1^-\circ\Delta_2^-(\Delta_1^\Sigma(\gamma_{12}),\Delta_1^\Sigma(\gamma_{01}))$ also appears in the boundary of the compactification of \eqref{prod1}. Finally, after summing with \eqref{rel_banane}, the last term $b_1^-\circ \Delta_1^-\circ\Delta_2^\Sigma(\gamma_{12},\gamma_{01})$ appears twice and thus cancels, and we are left with the term $b_1^-\circ b_1^-\circ\Delta_2^\Sigma(\gamma_{12},\gamma_{01})$, which contributes to $\mfm_1^-\circ\mfm_2^-(\gamma_{12},\gamma_{01})$. The study of the boundary of \eqref{prod1}, \eqref{prod2} and \eqref{prod3} gives the other terms of the Leibniz rule.\\
\noindent\textbf{Leibniz rule for a pair of type (b):}
Let us consider a pair $(\gamma_{12},x_{01})$ of generators of type (b). The Leibniz rule for such a pair decomposes into the following three relations:
\begin{alignat}{1}
&\mfm_2^+(\Delta_1^+(\gamma_{12}),x_{01})+\mfm^+_2(\gamma_{12},\mfm_1(x_{01}))+\mfm_1^+\circ\mfm^+_2(\gamma_{12},x_{01})=0\label{rel+0+}\\
&\mfm_2^0(\mfm_1(\gamma_{12}),x_{01})+\mfm_2^0(\gamma_{12},\mfm_1(x_{01}))+\mfm_1^0\circ\mfm_2(\gamma_{12},x_{01})=0\label{rel+00}\\
&\mfm_2^-(\mfm_1(\gamma_{12}),x_{01})+\mfm_2^-(\gamma_{12},\mfm_1(x_{01}))+\mfm_1^-\circ\mfm_2(\gamma_{12},x_{01})=0\label{rel+0-}
\end{alignat}
where for \eqref{rel+0+} we make use of the fact that $\mfm_2^+(\mfm_1^0(\gamma_{12}),x_{01})$ and $\mfm_2^+(\mfm_1^-(\gamma_{12}),x_{01})$ vanish by definition ($\mfm_{00}^+=\mfm_{-0}^+=0$).
The study of the boundary of the compactification of the products
\begin{alignat*}{1}
&\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\xi_{10},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)\times\mathcal{M}^0_{\Sigma_{01}}(\xi_{10};\boldsymbol{\delta}_0,x_{01},\boldsymbol{\delta}_1)\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\xi_{10},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)\times\mathcal{M}^1_{\Sigma_{01}}(\xi_{10};\boldsymbol{\delta}_0,x_{01},\boldsymbol{\delta}_1)
\end{alignat*}
gives relation \eqref{rel+0+}. In order to get relation \eqref{rel+00} we need to study the boundary of
\begin{alignat*}{1}
\mathcal{M}^1_{\Sigma_{012}}(p_{20};\boldsymbol{\delta}_0,x_{01},\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)
\end{alignat*}
and finally for relation \eqref{rel+0-}, we study
\begin{alignat*}{1}
&\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{02},\boldsymbol{\delta}_2)\times\mathcal{M}^0_{\Sigma_{012}}(\xi_{02};\boldsymbol{\delta}_0',x_{01},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{02},\boldsymbol{\delta}_2)\times\mathcal{M}^1_{\Sigma_{012}}(\xi_{02};\boldsymbol{\delta}_0',x_{01},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')\\
&\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{01},\boldsymbol{\delta}_1,\xi_{12},\boldsymbol{\delta}_2)\times\mathcal{M}^0_{\Sigma_{01}}(\xi_{01};\boldsymbol{\delta}_0',x_{01},\boldsymbol{\delta}_1')\times\mathcal{M}^0_{\Sigma_{12}}(\xi_{12};\boldsymbol{\delta}_1'',\gamma_{12},\boldsymbol{\delta}_2'')\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{01},\boldsymbol{\delta}_1,\xi_{12},\boldsymbol{\delta}_2)\times\mathcal{M}^1_{\Sigma_{01}}(\xi_{01};\boldsymbol{\delta}_0',x_{01},\boldsymbol{\delta}_1')\times\mathcal{M}^0_{\Sigma_{12}}(\xi_{12};\boldsymbol{\delta}_1'',\gamma_{12},\boldsymbol{\delta}_2'')\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{01},\boldsymbol{\delta}_1,\xi_{12},\boldsymbol{\delta}_2)\times\mathcal{M}^0_{\Sigma_{01}}(\xi_{01};\boldsymbol{\delta}_0',x_{01},\boldsymbol{\delta}_1')\times\mathcal{M}^1_{\Sigma_{12}}(\xi_{12};\boldsymbol{\delta}_1'',\gamma_{12},\boldsymbol{\delta}_2'')
\end{alignat*}
\noindent\textbf{Leibniz rule for a pair of type (c):}
Finally, for a pair $(\gamma_{12},\gamma_{10})$ of generators of type (c), we decompose the Leibniz rule into:
\begin{alignat}{1}
&\mfm_2^+(\Delta_1^+(\gamma_{12}),\gamma_{10})+\mfm_2^+(\gamma_{12},\mfm_1(\gamma_{10}))+\mfm_1^+\circ\mfm_2^+(\gamma_{12},\gamma_{10})=0\label{rel+-+}\\
&\mfm_2^0(\mfm_1(\gamma_{12}),\gamma_{10})+\mfm_2^0(\gamma_{12},\mfm_1(\gamma_{10}))+\mfm_1^0\circ\mfm_2(\gamma_{12},\gamma_{10})=0\label{rel+-0}\\
&\mfm_2^-(\mfm_1(\gamma_{12}),\gamma_{10})+\mfm_2^-(\gamma_{12},\mfm_1(\gamma_{10}))+\mfm_1^-\circ\mfm_2(\gamma_{12},\gamma_{10})=0\label{rel+--}
\end{alignat}
and observe that one of the two terms contributing to $\mfm_2^-(\gamma_{12},\mfm_1^0(\gamma_{10}))$, namely $b_2^-(\Delta_1^\Sigma(\gamma_{12}),\Delta_1^\Sigma\circ\mfm_1^0(\gamma_{10}))$, vanishes for energy reasons. Relations \eqref{rel+-+}, \eqref{rel+-0} and \eqref{rel+--} are obtained respectively by studying the boundary of the compactification of
\begin{alignat*}{1}
&\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\xi_{10},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)\times\mathcal{M}^0_{\Sigma_{01}}(\xi_{10};\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1)\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^+}(\gamma_{02};\boldsymbol{\zeta}_0,\xi_{10},\boldsymbol{\zeta}_1,\gamma_{12},\boldsymbol{\zeta}_2)\times\mathcal{M}^1_{\Sigma_{01}}(\xi_{10};\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1)
\end{alignat*}
of $\mathcal{M}^1_{\Sigma_{012}}(p_{20};\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1,\gamma_{12},\boldsymbol{\delta}_2)$,
and of
\begin{alignat*}{1}
&\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{02},\boldsymbol{\delta}_2)\times\mathcal{M}^0_{\Sigma_{012}}(\xi_{02};\boldsymbol{\delta}_0',\gamma_{10},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{02}^-}(\gamma_{20};\boldsymbol{\delta}_0,\xi_{02},\boldsymbol{\delta}_2)\times\mathcal{M}^1_{\Sigma_{012}}(\xi_{02};\boldsymbol{\delta}_0',\gamma_{10},\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')\\
&\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1,\xi_{12},\boldsymbol{\delta}_2)\times\mathcal{M}^0_{\Sigma_{12}}(\xi_{12};\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')\\
&\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{012}^-}(\gamma_{20};\boldsymbol{\delta}_0,\gamma_{10},\boldsymbol{\delta}_1,\xi_{12},\boldsymbol{\delta}_2)\times\mathcal{M}^1_{\Sigma_{12}}(\xi_{12};\boldsymbol{\delta}_1',\gamma_{12},\boldsymbol{\delta}_2')
\end{alignat*}
\section{Product in the concatenation}\label{sec:prod_conc}
\subsection{Definition of the product}
Given a pair of concatenations $(V_0\odot W_0,V_1\odot W_1)$, we denote by $\mfm_1^V$ and $\mfm_1^W$ the differentials of the complexes $\Cth_+(V_0,V_1)$ and $\Cth_+(W_0,W_1)$ respectively. Given a third concatenation $V_2\odot W_2$, we denote again by $\mfm_1^V$ and $\mfm_1^W$ the differentials on the complexes $\Cth_+(V_i,V_j)$ and $\Cth_+(W_i,W_j)$ respectively, for $0\leq i\neq j\leq2$, without specifying the pair of cobordisms when it is clear from the context.
Moreover, we will use the transfer maps $\boldsymbol{b}_1^{V_i,V_j}:\Cth_+(V_i\odot W_i,V_j\odot W_j)\to\Cth_+(W_i,W_j)$ and $\boldsymbol{\Delta}_1^{W_i,W_j}:\Cth_+(V_i\odot W_i,V_j\odot W_j)\to\Cth_+(V_i,V_j)$ and will shorten the notations to $\boldsymbol{b}_1^V$ and $\boldsymbol{\Delta}_1^W$ as there should not be any risk of confusion about which pair of cobordisms is involved in the domain and codomain.
Finally, we denote by $\mfm_2^V$ and $\mfm_2^W$ the products $\Cth_+(V_1,V_2)\otimes\Cth_+(V_0,V_1)\to\Cth_+(V_0,V_2)$ and $\Cth_+(W_1,W_2)\otimes\Cth_+(W_0,W_1)\to\Cth_+(W_0,W_2)$ respectively. We now define a product:
\begin{alignat*}{1}
\mfm_2^{V\odot W}:\Cth_+(V_1\odot W_1,V_2\odot W_2)\otimes\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(V_0\odot W_0,V_2\odot W_2)
\end{alignat*}
Using maps we have already defined, as well as the two-input banana $b_2^V$ with boundary on $V_0\cup V_1\cup V_2$ (we encountered in Section \ref{section:def_product} the two-input banana $b_2^-$ with boundary on cylindrical ends), defined by
\begin{alignat*}{1}
&b_2^V:\Cth_+(V_1,V_2)\otimes\Cth_+(V_0,V_1)\to C^{*-1}(\Lambda_0,\Lambda_2)\\
&b_2^V(a_2,a_1)=\sum_{\gamma_{20},\boldsymbol{\delta}_i}\#\mathcal{M}^0_{V_{012}}(\gamma_{20};\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,a_2,\boldsymbol{\delta}_2)\cdot\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot\gamma_{20}
\end{alignat*}
we set:
\begin{alignat*}{2}
\mfm_2^{V\odot W}&=\mfm_2^{W,+0}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\\
&\,+\mfm_2^{V,0-}\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+\mfm_1^{V,0-}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)
\end{alignat*}
where $\mfm_i^{W,+0}=\mfm_i^{W,+}+\mfm_i^{W,0}$, for $i=1,2$, is the component of $\mfm_i^W$ with values in $C(\Lambda_2^+,\Lambda_0^+)\oplus CF(W_0,W_2)$, and $\mfm_i^{V,0-}=\mfm_i^{V,0}+\mfm_i^{V,-}$, for $i=1,2$, is the component of $\mfm_i^V$ with values in $CF(V_0,V_2)\oplus C(\Lambda_0^-,\Lambda_2^-)$. Observe that $\mfm_1^{W,+}\circ\,b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)$ and $\mfm_1^{W,+}\circ\,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)$ vanish, but we keep them in the formula to make it look more homogeneous, which helps a bit when checking the Leibniz rule in the next section.
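To keep track of where each of the five terms above takes values, note (a bookkeeping consequence of the definitions just given) that the first three terms land in $C(\Lambda_2^+,\Lambda_0^+)\oplus CF(W_0,W_2)$ and the last two in $CF(V_0,V_2)\oplus C(\Lambda_0^-,\Lambda_2^-)$; in other words,
\begin{alignat*}{1}
&\mfm_2^{V\odot W,+0_W}=\mfm_2^{W,+0}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\\
&\mfm_2^{V\odot W,0_V-}=\mfm_2^{V,0-}\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+\mfm_1^{V,0-}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)
\end{alignat*}
This decomposition is the one used in Equations \eqref{Leibniz_conc1} and \eqref{Leibniz_conc2} below.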
\subsection{Leibniz rule}\label{sec:LCconc}
This section is dedicated to proving that the map $\mfm_2^{V\odot W}$ satisfies the Leibniz rule with respect to $\mfm_1^{V\odot W}$. This is a direct computation.
We want to show
\begin{alignat*}{1}
\mfm_2^{V\odot W}(\mfm_1^{V\odot W}\otimes\id)+\mfm_2^{V\odot W}(\id\otimes\mfm_1^{V\odot W})+\mfm_1^{V\odot W}\circ\mfm_2^{V\odot W}=0
\end{alignat*}
We will actually decompose it into two equations:
\begin{alignat}{1}
&\mfm_2^{V\odot W,+0_W}(\mfm_1^{V\odot W}\otimes\id)+\mfm_2^{V\odot W,+0_W}(\id\otimes\mfm_1^{V\odot W})+\mfm_1^{V\odot W,+0_W}\circ\mfm_2^{V\odot W}=0\label{Leibniz_conc1}\\
&\mfm_2^{V\odot W,0_V-}(\mfm_1^{V\odot W}\otimes\id)+\mfm_2^{V\odot W,0_V-}(\id\otimes\mfm_1^{V\odot W})+\mfm_1^{V\odot W,0_V-}\circ\mfm_2^{V\odot W}=0\label{Leibniz_conc2}
\end{alignat}
The first one corresponds to the components of the Leibniz rule taking values in $C(\Lambda_2^+,\Lambda_0^+)\oplus CF(W_0,W_2)$, and the second one to the components taking values in $CF(V_0,V_2)\oplus C(\Lambda_0^-,\Lambda_2^-)$.\\
In the proof of the Leibniz rule, we will refer to the following equations:
\begin{alignat}{1}
&\mfm_2^{W,+0}(\mfm_1^W\otimes\id)+\mfm_2^{W,+0}(\id\otimes\mfm_1^W)+\mfm_1^{W,+0}\circ\mfm_2^W=0\label{eq000}\\
&\mfm_2^{V,0-}(\mfm_1^V\otimes\id)+\mfm_2^{V,0-}(\id\otimes\mfm_1^V)+\mfm_1^{V,0-}\circ\mfm_2^V=0\label{eq0000}\\
&\Delta_2^W(\mfm_1^W\otimes\id)+\Delta_2^W(\id\otimes\mfm_1^W)+\Delta_1^W\circ\mfm_2^W+\Delta_2^\Lambda(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W)+\Delta_1^\Lambda\circ\Delta_2^W=0\label{eq3}\\
&b_2^V(\mfm_1^V\otimes\id)+b_2^V(\id\otimes\mfm_1^V)+b_1^V\circ\mfm_2^V+\,b_2^\Lambda(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V)+b_1^\Lambda\circ b_2^V=0\label{eq4}\\
&\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W=\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\label{eq5}
\end{alignat}
Equations \eqref{eq000} and \eqref{eq0000} come from the fact that $\mfm_2^W$ and $\mfm_2^V$ satisfy the Leibniz rule.
Equations \eqref{eq3} and \eqref{eq4} (for other Lagrangian boundary conditions) appear implicitly in Section \ref{sec:Leibniz}: they come respectively from the study of the boundary of the compactification of the moduli spaces
\begin{alignat*}{1}
\mathcal{M}^1_{W_{012}}(\gamma_{02};\boldsymbol{\delta}_0^W,a_1^W,\boldsymbol{\delta}_1^W,a_2^W,\boldsymbol{\delta}_2^W)\,\,\mbox{ and }\,\,\mathcal{M}^1_{V_{012}}(\gamma_{20};\boldsymbol{\delta}_0^V,a_1^V,\boldsymbol{\delta}_1^V,a_2^V,\boldsymbol{\delta}_2^V),
\end{alignat*}
for $\gamma_{02}\in C(\Lambda_2,\Lambda_0)$, $\gamma_{20}\in C(\Lambda_0,\Lambda_2)$, $(a_2^W,a_1^W)\in\Cth_+(W_1,W_2)\otimes\Cth_+(W_0,W_1)$, $(a_2^V,a_1^V)\in\Cth_+(V_1,V_2)\otimes\Cth_+(V_0,V_1)$, $\boldsymbol{\delta}_i^W$ words of pure Reeb chords of $\Lambda_i$, $\boldsymbol{\delta}_i^V$ words of pure Reeb chords of $\Lambda_i^-$.
Finally, Equation \eqref{eq5} is the content of Lemma \ref{bcircdelta}.
\subsubsection{Equation \eqref{Leibniz_conc1}}\label{sec:LC1}
Let us write the left-hand side of Equation \eqref{Leibniz_conc1} as (LR1), i.e. \eqref{Leibniz_conc1} $\Leftrightarrow$ (LR1)=0.
We start by developing the first term of (LR1), using the definition of $\mfm_2^{V\odot W}$ and the fact that $\boldsymbol{b}_1^V$ and $\boldsymbol{\Delta}_1^W$ are chain maps:
\begin{alignat*}{1}
\mfm&_2^{V\odot W,+0_W}(\mfm_1^{V\odot W}\otimes\id)\\
=&\big(\mfm_2^{W,+0}+\mfm_1^{W,+0}\circ\, b_1^V\circ\Delta_2^W\big)\Big[\boldsymbol{b}_1^V\circ\mfm_1^{V\odot W}\otimes\,\boldsymbol{b}_1^V\Big]+\mfm_1^{W,+0}\circ\,b_2^V\Big[\boldsymbol{\Delta}_1^W\circ\mfm_1^{V\odot W}\otimes\boldsymbol{\Delta}_1^W\Big]\\
=&\big(\mfm_2^{W,+0}+\mfm_1^{W,+0}\circ\, b_1^V\circ\Delta_2^W\big)\Big[\mfm_1^W\circ\,\boldsymbol{b}_1^V\otimes\,\boldsymbol{b}_1^V\Big]+\mfm_1^{W,+0}\circ\,b_2^V\Big[\mfm_1^V\circ\,\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\Big]\\
=&\mfm_2^{W,+0}\big(\mfm_1^{W}\otimes\id\big)\big[\boldsymbol{b}_1^V\otimes\,\boldsymbol{b}_1^V\big]+\mfm_1^{W,+0}\circ\, b_1^V\circ\Delta_2^W\big(\mfm_1^{W}\otimes\id\big)\big[\boldsymbol{b}_1^V\otimes\,\boldsymbol{b}_1^V\big]\\
&+\mfm_1^{W,+0}\circ\,b_2^V\big(\mfm_1^{V}\otimes\id\big)\big[\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big]
\end{alignat*}
One decomposes similarly the symmetric term $\mfm_2^{V\odot W,+0_W}(\id\otimes\mfm_1^{V\odot W})$.
Now let us take a look at $\mfm_1^{V\odot W,+0_W}\circ\mfm_2^{V\odot W}$. We have
\begin{alignat*}{1}
\mfm&_1^{V\odot W,+0_W}\circ\mfm_2^{V\odot W}=\mfm_1^{W,+0}\circ\boldsymbol{b}_1^V\circ\mfm_2^{V\odot W}\\
=&\mfm_1^{W,+0}\circ\,\boldsymbol{b}_1^V\big(\mfm_2^{V\odot W,+0_W}+\mfm_2^{V\odot W,0_V-}\big)\\
=&\mfm_1^{W,+0}\big(\mfm_2^{V\odot W,+0_W}+b_1^V\circ\Delta_1^W\circ\mfm_2^{V\odot W,+0_W}+b_1^V\circ\mfm_2^{V\odot W,0_V-}\big)\\
=&\big(\mfm_1^{W,+0}+\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_1^W\big)\Big[\mfm_2^{W,+0}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\, b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\\
&+\mfm_1^{W,+0}\circ \,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\Big]+\mfm_1^{W,+0}\circ\,b_1^V\Big[\mfm_2^{V,0-}\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+\mfm_1^{V,0-}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\Big]
\end{alignat*}
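In the third equality we used the block form of the transfer map with respect to the decomposition of $\Cth_+(V_0\odot W_0,V_2\odot W_2)$ into generators of type $+0_W$ and of type $0_V-$ (compare the matrix form appearing in Proposition \ref{prop:transfer2}):
\begin{alignat*}{1}
\boldsymbol{b}_1^V=\id+\,b_1^V\circ\Delta_1^W\,\,\mbox{ on generators of type }+0_W,\qquad\boldsymbol{b}_1^V=b_1^V\,\,\mbox{ on generators of type }0_V-
\end{alignat*}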
The term $\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_1^W\circ\mfm_1^{W,+0}\circ\, b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)$ vanishes for energy reasons, as does $\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_1^W\circ\mfm_1^{W,+0}\circ \,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)$; hence we finally get:
\begin{alignat*}{1}
\mfm_1^{V\odot W,+0_W}\circ\mfm_2^{V\odot W}=&\mfm_1^{W,+0}\circ\mfm_2^{W,+0}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_1^W\circ\mfm_2^{W,+0}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\\
+&\mfm_1^{W,+0}\circ\mfm_1^{W,+0}\circ\, b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\mfm_1^{W,+0}\circ \,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\\
+&\mfm_1^{W,+0}\circ\,b_1^V\circ\mfm_2^{V,0-}\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+\mfm_1^{W,+0}\circ\,b_1^V\circ\mfm_1^{V,0-}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)
\end{alignat*}
Summing all together gives:
\begin{alignat*}{1}
&\text{(LR1)}=\\
&\big[\mfm_1^{W,+0}\circ\mfm_2^{W,+0}+\mfm_2^{W,+0}(\mfm_1^{W}\otimes\id)+\mfm_2^{W,+0}(\id\otimes\mfm_1^{W})\big]\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L1}\\
&+\big[\mfm_1^{W,+0}\circ\,b_1^V\big]\big[\Delta_1^W\circ\mfm_2^{W,+0}+\Delta_2^W(\mfm_1^{W}\otimes\id)+\Delta_2^W(\id\otimes\mfm_1^{W})\big]\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L2}\\
&+\mfm_1^{W,+0}\circ\mfm_1^{W,+0}\circ\, b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L3}\\
&+\mfm_1^{W,+0}\circ\mfm_1^{W,+0}\circ \,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\tag{L4}\\
&+\mfm_1^{W,+0}\big[b_1^V\circ\mfm_2^{V,0-}+b_2^V(\mfm_1^{V}\otimes\id)+b_2^V(\id\otimes\mfm_1^{V})\big]\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\tag{L5}\\
&+\mfm_1^{W,+0}\circ\,b_1^V\circ\mfm_1^{V,0-}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L6}
\end{alignat*}
Now we use Equation \eqref{eq000} on (L1), Equation \eqref{eq3} on (L2), the fact that $\mfm_1^{W,+0}\circ\mfm_1^{W,+0}=\mfm_1^{W,+0}\circ\mfm_1^{W,-}$ (justified after the display below) on (L3) and (L4), and finally Equation \eqref{eq4} on (L5), to write
\begin{alignat*}{1}
&\text{(LR1)}=\\
&\mfm_1^{W,+0}\circ\mfm_2^{W,-}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L1'}\\
&+\mfm_1^{W,+0}\circ\,b_1^V\big[\Delta_1^W\circ\mfm_2^{W,-}+\Delta_2^\Lambda\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W)+\Delta_1^\Lambda\circ\Delta_2^W\big]\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L2'}\\
&+\mfm_1^{W,+0}\circ\mfm_1^{W,-}\circ\, b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L3'}\\
&+\mfm_1^{W,+0}\circ\mfm_1^{W,-}\circ \,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\tag{L4'}\\
&+\mfm_1^{W,+0}\big[b_1^V\circ\mfm_2^{V,+}+b_2^\Lambda(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V)+\,b_1^\Lambda\circ b_2^V\big]\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\tag{L5'}\\
&+\mfm_1^{W,+0}\circ\,b_1^V\circ\mfm_1^{V,0-}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L6'}
\end{alignat*}
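The identity $\mfm_1^{W,+0}\circ\mfm_1^{W,+0}=\mfm_1^{W,+0}\circ\mfm_1^{W,-}$ invoked on (L3) and (L4) is a direct consequence of $\mfm_1^W\circ\mfm_1^W=0$: projecting onto the components of type $+0$ and writing $\mfm_1^W=\mfm_1^{W,+0}+\mfm_1^{W,-}$ gives
\begin{alignat*}{1}
0=\mfm_1^{W,+0}\circ\mfm_1^{W}=\mfm_1^{W,+0}\circ\mfm_1^{W,+0}+\mfm_1^{W,+0}\circ\mfm_1^{W,-}
\end{alignat*}
so that the two compositions agree (with the mod-$2$ coefficients assumed here, no sign appears).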
We then apply the following modifications:
\begin{enumerate}
\item on (L1') we write $\mfm_2^{W,-}=b_1^\Lambda\circ\Delta_2^W+b_2^\Lambda(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W)$,
\item on (L2') observe that $\Delta_1^W\circ\mfm_2^{W,-}$ vanishes for energy reasons,
\item on (L3') and (L4') we have $\mfm_1^{W,-}\circ\,b_1^V=b_1^\Lambda\circ\boldsymbol{\Delta}_1^W\circ b_1^V=b_1^\Lambda\circ b_1^V$ and $\mfm_1^{W,-}\circ\,b_2^V=b_1^\Lambda\circ\boldsymbol{\Delta}_1^W\circ b_2^V=b_1^\Lambda\circ b_2^V$,
\item on (L5'), we write $\mfm_2^{V,+}=\Delta_2^\Lambda(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V)$,
\item finally, in the last term of (L2') we have $\Delta_1^\Lambda=\mfm_1^{V,+}$ so adding it to (L6') gives $\mfm_1^{W,+0}\circ\,b_1^V\circ\mfm_1^{V}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)$. But observe that by definition of $\boldsymbol{\Delta}_1^W$ one has $b_1^V\circ\mfm_1^{V}\circ\,\Delta_2^W=b_1^V\circ\mfm_1^{V}\circ\,\boldsymbol{\Delta}_1^W\circ\Delta_2^W$, which gives by Lemma \ref{lem:rel} $b_1^\Lambda\circ\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\circ\Delta_2^W$. We thus get
$$\mfm_1^{W,+0}\circ\,b_1^V\circ\mfm_1^{V}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)=\mfm_1^{W,+0}\circ\,b_1^\Lambda\circ\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)$$
which, using Equation \eqref{eq5}, is equal to:
\begin{alignat*}{1}
\mfm_1^{W,+0}\circ\,b_1^\Lambda\circ\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)&=\mfm_1^{W,+0}\circ\,b_1^\Lambda\circ\boldsymbol{b}_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\\
&=\big[\mfm_1^{W,+0}\circ\,b_1^\Lambda\big]\big[\Delta_2^W+b_1^V\circ\Delta_2^W\big]\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)
\end{alignat*}
\end{enumerate}
So finally we have:
\begin{alignat*}{1}
\text{(LR1)}&=\mfm_1^{W,+0}\circ b_1^\Lambda\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{R1}\\
&+\mfm_1^{W,+0}\circ b_2^\Lambda\big(\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\otimes\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\big)\tag{R2}\\
&+\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_2^\Lambda\big(\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\otimes\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\big)\tag{R3}\\
&+\mfm_1^{W,+0}\circ b_1^\Lambda\circ\, b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{R4}\\
&+\mfm_1^{W,+0}\circ b_1^\Lambda\circ \,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\tag{R5}\\
&+\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_2^\Lambda\big(\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W\otimes\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W\big)\tag{R6}\\
&+\mfm_1^{W,+0}\circ\,b_2^\Lambda\big(\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W\otimes\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W\big)\tag{R7}\\
&+\mfm_1^{W,+0}\circ\,b_1^\Lambda\circ b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\tag{R8}\\
&+\mfm_1^{W,+0}\circ\,b_1^\Lambda\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,b_1^\Lambda\circ\,b_1^{V}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{R9}
\end{alignat*}
We have (R1)+(R4)+(R9)=0 and (R5)+(R8)=0. Then, using Equation \eqref{eq5} gives (R2)+(R7)=0 and (R3)+(R6)=0. Thus (LR1)=0.
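For instance, let us spell out the cancellation (R2)+(R7): Equation \eqref{eq5}, which as used above exchanges $\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V$ and $\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W$, gives
$$\mfm_1^{W,+0}\circ\,b_2^\Lambda\big(\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\otimes\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\big)=\mfm_1^{W,+0}\circ\,b_2^\Lambda\big(\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W\otimes\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W\big),$$
so these two terms cancel modulo $2$; the pair (R3)+(R6) cancels in exactly the same way.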
\subsubsection{Equation \eqref{Leibniz_conc2}}\label{sec:LC2}
Denote by (LR2) the left-hand side of Equation \eqref{Leibniz_conc2}, so that this equation is equivalent to (LR2)=0. Using again the fact that $\boldsymbol{b}_1^V$ and $\boldsymbol{\Delta}_1^W$ are chain maps, the first term of (LR2) is:
\begin{alignat*}{1}
\mfm&_2^{V\odot W,0_V-}(\mfm_1^{V\odot W}\otimes\id)\\
=&\mfm_2^{V,0-}\big(\boldsymbol{\Delta}_1^W\circ\mfm_1^{V\odot W}\otimes\,\boldsymbol{\Delta}_1^W\big)
+\mfm_1^{V,0-}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\circ\mfm_1^{V\odot W}\otimes\boldsymbol{b}_1^V\big)\\
=&\mfm_2^{V,0-}\big(\mfm_1^V\circ\,\boldsymbol{\Delta}_1^W\otimes\,\boldsymbol{\Delta}_1^W\big)
+\mfm_1^{V,0-}\circ\,\Delta_2^W\big(\mfm_1^W\circ\,\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\\
=&\mfm_2^{V,0-}\big(\mfm_1^V\otimes\id\big)\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)
+\mfm_1^{V,0-}\circ\,\Delta_2^W\big(\mfm_1^W\otimes\id\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)
\end{alignat*}
The symmetric term $\mfm_2^{V\odot W,0_V-}(\id\otimes\mfm_1^{V\odot W})$ is written analogously.
Now let us consider the third term of (LR2):
\begin{alignat*}{2}
\mfm&_1^{V\odot W,0_V-}\circ\mfm_2^{V\odot W}=\mfm_1^{V,0-}\circ\boldsymbol{\Delta}_1^W\circ\mfm_2^{V\odot W}\\
=&\mfm_1^{V,0-}\circ\,\Delta_1^W\circ\mfm_2^{V\odot W,+0_W}+\mfm_1^{V,0-}\circ\mfm_2^{V\odot W,0_V-}\\
=&\mfm_1^{V,0-}\circ\,\Delta_1^W\Big[\mfm_2^{W,+0}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\Big]\\
&+\mfm_1^{V,0-}\circ\mfm_2^{V,0-}\big[\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big]+\mfm_1^{V,0-}\circ\mfm_1^{V,0-}\circ\Delta_2^W\big[\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big]
\end{alignat*}
The term $\mfm_1^{V,0-}\circ\,\Delta_1^W\Big[\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\Big]$ vanishes for energy reasons. Then, observe that $\mfm_1^{V,0-}\circ\mfm_1^{V,0-}=\mfm_1^{V,0-}\circ\mfm_1^{V,+}$ and $\Delta_1^W\circ\mfm_2^{W,+0}=\Delta_1^W\circ\mfm_2^{W}$ because $\Delta_1^W\circ\mfm_2^{W,-}=0$, so we have:
\begin{alignat*}{2}
\mfm_1^{V\odot W,0_V-}\circ\mfm_2^{V\odot W}&=\mfm_1^{V,0-}\circ\boldsymbol{\Delta}_1^W\circ\mfm_2^{W}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{V,0-}\circ\mfm_2^{V,0-}\big[\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big]\\
&+\mfm_1^{V,0-}\circ\mfm_1^{V,+}\circ\,\Delta_2^W\big[\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big]
\end{alignat*}
Summing all together gives
\begin{alignat*}{1}
\text{(LR2)}=&
\mfm_1^{V,0-}\big[\boldsymbol{\Delta}_1^W\circ\mfm_2^{W}+\Delta_2^W(\mfm_1^W\otimes\id)+\Delta_2^W(\id\otimes\mfm_1^W)\big]\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L1}\\
+&\big[\mfm_1^{V,0-}\circ\mfm_2^{V,0-}+\mfm_2^{V,0-}(\mfm_1^V\otimes\id)+\mfm_2^{V,0-}(\id\otimes\mfm_1^V)\big]\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\tag{L2}\\
&+\mfm_1^{V,0-}\circ\mfm_1^{V,+}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L3}
\end{alignat*}
We then use Equation \eqref{eq3} on (L1) and Equation \eqref{eq0000} on (L2) to get
\begin{alignat*}{1}
\text{(LR2)}=&\mfm_1^{V,0-}\big[\Delta_2^\Lambda(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W)+\Delta_1^\Lambda\circ\Delta_2^W\big]\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L1'}\\
&+\mfm_1^{V,0-}\circ\mfm_2^{V,+}\big[\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big]\tag{L2'}\\
&+\mfm_1^{V,0-}\circ\mfm_1^{V,+}\circ\Delta_2^W\big[\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big]\tag{L3'}
\end{alignat*}
But remark that $\mfm_1^{V,+}=\Delta_1^\Lambda$ and $\mfm_2^{V,+}=\Delta_2^\Lambda(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V)$ by definition, so by Equation \eqref{eq5} we get (LR2)=0.
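In detail, using these identities together with Equation \eqref{eq5}, (L2') becomes the first summand of (L1') and (L3') the second one:
\begin{alignat*}{1}
\mfm_1^{V,0-}\circ\mfm_2^{V,+}\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)&=\mfm_1^{V,0-}\circ\Delta_2^\Lambda\big(\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W\otimes\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_1^W\big)\\
&=\mfm_1^{V,0-}\circ\Delta_2^\Lambda\big(\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\otimes\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\big),\\
\mfm_1^{V,0-}\circ\mfm_1^{V,+}\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)&=\mfm_1^{V,0-}\circ\Delta_1^\Lambda\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big),
\end{alignat*}
so the three lines sum to zero modulo $2$.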
\subsection{Functoriality of the transfer maps}\label{sec:functoriality}
In this section, we prove that the product structures behave well under the transfer maps $\boldsymbol{b}_1^V$ and $\boldsymbol{\Delta}_1^W$. Namely, we first have the following:
\begin{prop}\label{prop:transferf1}
The map induced by $\boldsymbol{b}_1^V$ in homology preserves the product structures, in other words we have:
$\boldsymbol{b}_1^{V_{02}}\circ\mfm_2^{V\odot W}=\mfm_2^W\big(\boldsymbol{b}_1^{V_{12}}\otimes\boldsymbol{b}_1^{V_{01}}\big)$ in homology.
\end{prop}
\begin{proof}
Given a triple $(V_0\odot W_0,V_1\odot W_1,V_2\odot W_2)$, we define a map
\begin{alignat*}{1}
\boldsymbol{b}_2^V:\Cth_+(V_1\odot W_1,V_2\odot W_2)\otimes \Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(W_0,W_2)
\end{alignat*}
by $$\boldsymbol{b}_2^V=b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)$$
In order to prove the proposition, we prove that the following relation is satisfied:
\begin{alignat}{1}
\boldsymbol{b}_2^V\big(\mfm_1^{V\odot W}\otimes\id\big)+\boldsymbol{b}_2^V\big(\id\otimes\mfm_1^{V\odot W}\big)+\boldsymbol{b}_1^V\circ\mfm_2^{V\odot W}+\mfm_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^W\circ\,\boldsymbol{b}_2^V=0\label{phi_2}
\end{alignat}
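Note right away why \eqref{phi_2} implies the proposition: if $a_1$ and $a_2$ are $\mfm_1^{V\odot W}$-cycles, the first two terms of \eqref{phi_2} vanish and the relation reduces to
$$\boldsymbol{b}_1^V\circ\mfm_2^{V\odot W}(a_2,a_1)+\mfm_2^W\big(\boldsymbol{b}_1^V(a_2),\boldsymbol{b}_1^V(a_1)\big)=\mfm_1^W\circ\,\boldsymbol{b}_2^V(a_2,a_1),$$
so the two sides of the claimed identity differ by an $\mfm_1^W$-boundary, i.e.\ they agree in homology.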
Let us first consider $\boldsymbol{b}_2^V\big(\mfm_1^{V\odot W}\otimes\id\big)$. We have
\begin{alignat*}{2}
\boldsymbol{b}_2^V\big(\mfm_1^{V\odot W}\otimes\id\big)=&\,b_1^V\circ\Delta_2^W\big[\boldsymbol{b}_1^V\circ\mfm_1^{V\odot W}\otimes\,\boldsymbol{b}_1^V\big]+b_2^V\big[\boldsymbol{\Delta}_1^W\circ\mfm_1^{V\odot W}\otimes\boldsymbol{\Delta}_1^W\big]\\
=&b_1^V\circ\Delta_2^W\big(\mfm_1^W\circ\,\boldsymbol{b}_1^V\otimes\,\boldsymbol{b}_1^V\big)+b_2^V\big(\mfm_1^V\circ\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\\
=&b_1^V\circ\Delta_2^W\big(\mfm_1^{W}\otimes\id\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+b_2^V\big(\mfm_1^{V}\otimes\id\big)\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)
\end{alignat*}
Then we consider $\boldsymbol{b}_1^V\circ\mfm_2^{V\odot W}$. Observe that we have already computed this term in Section \ref{sec:LC1} when considering the term $\mfm_1^{V\odot W,+0_W}\circ\mfm_2^{V\odot W}$. So recall that we have
\begin{alignat*}{2}
\boldsymbol{b}_1^V\circ\mfm_2^{V\odot W}&=\mfm_2^{W,+0}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\\
&+b_1^V\circ\Delta_1^W\circ\mfm_2^{W,+0}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\,b_1^V\circ\mfm_2^{V,0-}\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+b_1^V\circ\mfm_1^{V,0-}\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)
\end{alignat*}
The left-hand side of \eqref{phi_2}, rearranging terms according to the decompositions above, is thus given by
\begin{alignat*}{1}
&b_1^V\big(\Delta_2^W\big(\mfm_1^{W}\otimes\id\big)+\Delta_2^W\big(\id\otimes\mfm_1^{W}\big)+\Delta_1^W\circ\mfm_2^{W,+0}\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L1}\\
&+\big(b_2^V\big(\mfm_1^{V}\otimes\id\big)+b_2^V\big(\id\otimes\mfm_1^{V}\big)+b_1^V\circ\mfm_2^{V,0-}\big)\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\tag{L2}\\
&+\big(\mfm_2^{W,+0}+\mfm_2^W\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L3}\\
&+\big(\mfm_1^{W,+0}\circ\,b_1^V\circ\Delta_2^W+\mfm_1^W\circ\,b_1^V\circ\Delta_2^W\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L4}\\
&+\big(\mfm_1^{W,+0}\circ\,b_2^V+\mfm_1^W\circ\,b_2^V\big)\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\tag{L5}\\
&+b_1^V\circ\mfm_1^{V,0-}\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L6}\\
\end{alignat*}
We now use Equation \eqref{eq3} on (L1), Equation \eqref{eq4} on (L2) and the same modification as in point (5) of Section \ref{sec:LC1} on line (L6) to rewrite:
\begin{alignat*}{1}
&\big(b_1^V\circ\Delta_2^\Lambda\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+b_1^V\circ\Delta_1^\Lambda\circ\Delta_2^W\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\\
&+\big(b_1^V\circ\mfm_2^{V,+}+\,b_2^\Lambda(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V)+b_1^\Lambda\circ b_2^V\big)\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\\
&+\mfm_2^{W,-}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\\
&+\mfm_1^{W,-}\circ\,b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\\
&+\mfm_1^{W,-}\circ\,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\\ &+\big(b_1^V\circ\mfm_1^{V,+}\circ\,\Delta_2^W+\,b_1^\Lambda\circ\boldsymbol{b}_1^V\circ\Delta_2^W\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)
\end{alignat*}
Finally, using
\begin{enumerate}
\item $\mfm_2^{V,+}=\Delta_2^\Lambda(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V)$ and $\mfm_1^{V,+}=\Delta_1^\Lambda$,
\item $\mfm_1^{W,-}\circ\,b_1^V=b_1^\Lambda\circ\boldsymbol{\Delta}_1^W\circ\,b_1^V=b_1^\Lambda\circ b_1^V$, and also $\mfm_1^{W,-}\circ\,b_2^V=b_1^\Lambda\circ b_2^V$,
\item $b_1^\Lambda\circ\boldsymbol{b}_1^V\circ\Delta_2^W=b_1^\Lambda\circ\Delta_2^W+b_1^\Lambda\circ b_1^V\circ\Delta_2^W$
\end{enumerate}
we rewrite
\begin{alignat*}{1}
&\big(b_1^V\circ\Delta_2^\Lambda\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+b_1^V\circ\Delta_1^\Lambda\circ\Delta_2^W\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\\
&+\big(b_1^V\circ\Delta_2^\Lambda(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V)+\,b_2^\Lambda(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V)+b_1^\Lambda\circ b_2^V\big)\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\\
&+\big(b_1^\Lambda\circ\Delta_2^W+b_2^\Lambda(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W)\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\\
&+b_1^\Lambda\circ\,b_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\\
&+b_1^\Lambda\circ\,b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)\\ &+\big(b_1^V\circ\Delta_1^\Lambda\circ\,\Delta_2^W+b_1^\Lambda\circ\Delta_2^W+b_1^\Lambda\circ b_1^V\circ\Delta_2^W\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)
\end{alignat*}
and making use of Equation \eqref{eq5}, all the terms in the sum cancel in pairs.
\end{proof}
The same functorial property applies to the map $\boldsymbol{\Delta}_1^W:\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(V_0,V_1)$. Indeed, we have
\begin{prop}\label{prop:transferf2}
The map induced by $\boldsymbol{\Delta}_1^W$ in homology preserves the product structures, that is to say:
$\boldsymbol{\Delta}_1^{W_{02}}\circ\mfm_2^{V\odot W}=\mfm_2^V\big(\boldsymbol{\Delta}_1^{W_{12}}\otimes\boldsymbol{\Delta}_1^{W_{01}}\big)$ in homology.
\end{prop}
\begin{proof}
Given a triple $(V_0\odot W_0,V_1\odot W_1,V_2\odot W_2)$, we define a map
\begin{alignat*}{1}
\boldsymbol{\Delta}_2^W:\Cth_+(V_1\odot W_1,V_2\odot W_2)\otimes \Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(V_0,V_2)
\end{alignat*}
by
$$\boldsymbol{\Delta}_2^W=\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)$$
where the map $\Delta_2^W$ was defined in Section \ref{section:def_product} for the case of three pairwise transverse Lagrangian cobordisms. In order to prove the proposition, we prove that the following relation is satisfied:
\begin{alignat}{1}
\boldsymbol{\Delta}_2^W\big(\mfm_1^{V\odot W}\otimes\id\big)+\boldsymbol{\Delta}_2^W\big(\id\otimes\mfm_1^{V\odot W}\big)+\boldsymbol{\Delta}_1^W\circ\mfm_2^{V\odot W}+\mfm_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+\mfm_1^V\circ\boldsymbol{\Delta}_2^W=0\label{phi_22}
\end{alignat}
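As in the proof of Proposition \ref{prop:transferf1}, on $\mfm_1^{V\odot W}$-cycles $a_1,a_2$ this relation reduces to
$$\boldsymbol{\Delta}_1^W\circ\mfm_2^{V\odot W}(a_2,a_1)+\mfm_2^V\big(\boldsymbol{\Delta}_1^W(a_2),\boldsymbol{\Delta}_1^W(a_1)\big)=\mfm_1^V\circ\,\boldsymbol{\Delta}_2^W(a_2,a_1),$$
which gives the statement in homology.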
First, we have
\begin{alignat*}{1}
\boldsymbol{\Delta}_2^W\big(\mfm_1^{V\odot W}\otimes\id\big)=&\Delta_2^W\big(\boldsymbol{b}_1^V\circ\mfm_1^{V\odot W}\otimes\boldsymbol{b}_1^V\big)=\Delta_2^W\big(\mfm_1^W\otimes\id\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)
\end{alignat*}
Then, we have already computed the term $\boldsymbol{\Delta}_1^W\circ\mfm_2^{V\odot W}$ when considering $\mfm_1^{V\odot W,0_V-}\circ\mfm_2^{V\odot W}$ in Section \ref{sec:LC2}. Recall that we have
\begin{alignat*}{1}
\boldsymbol{\Delta}_1^W\circ\mfm_2^{V\odot W}=&\Delta_1^W\circ\mfm_2^{W,+0}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_2^{V,0-}\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+\mfm_1^{V,0-}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)
\end{alignat*}
Hence, the left-hand side of Equation \eqref{phi_22} is equal to:
\begin{alignat*}{2}
&\big(\Delta_2^W(\mfm_1^{W}\otimes\id)+\Delta_2^W(\id\otimes\mfm_1^{W})+\Delta_1^W\circ\mfm_2^{W,+0}\big)\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L1}\\
+&\mfm_2^{V,0-}\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+\mfm_1^{V,0-}\circ\,\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L2}\\
+&\mfm_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+\mfm_1^V\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\tag{L3}
\end{alignat*}
Using Equation \eqref{eq3} on line (L1) and summing (L2) and (L3) gives
\begin{alignat*}{2}
&\Delta_1^W\circ\mfm_2^{W,-}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\Delta_2^\Lambda\big(\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\otimes\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_1^V\big)+\Delta_1^\Lambda\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)\\
&+\mfm_2^{V,+}\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+\mfm_1^{V,+}\circ\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)
\end{alignat*}
Observe that $\Delta_1^W\circ\mfm_2^{W,-}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)=0$ for energy reasons. Then, $\mfm_1^{V,+}=\Delta_1^\Lambda$ and $\mfm_2^{V,+}=\Delta_2^\Lambda\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)$, so using Equation \eqref{eq5} one gets that the terms sum to $0$.
\end{proof}
Observe that given the maps $\boldsymbol{b}_2^V$ and $\boldsymbol{\Delta}_2^W$ defined in the proofs of Propositions \ref{prop:transferf1} and \ref{prop:transferf2}, we can rewrite the formula of the product $\mfm_2^{V\odot W}$ as follows:
\begin{alignat*}{2}
\mfm_2^{V\odot W}&=\mfm_2^{W,+0}\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big)+\mfm_1^{W,+0}\circ\,\boldsymbol{b}_2^V+\mfm_2^{V,0-}\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big)+\mfm_1^{V,0-}\circ\,\boldsymbol{\Delta}_2^W
\end{alignat*}
Moreover, if we restrict again to the special cases where the triples $(W_0,W_1,W_2)$ or $(V_0,V_1,V_2)$ are trivial cylinders, one has
\begin{enumerate}
\item $(W_0,W_1,W_2)=(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1,\mathbb R\times\Lambda_2)$: The map $\boldsymbol{b}_2^V$ becomes a map
\begin{alignat*}{1}
\boldsymbol{b}_2^V:\Cth_+(V_1,V_2)\otimes\Cth_+(V_0,V_1)\to C(\Lambda_0,\Lambda_2)
\end{alignat*}
which is equal to the map $b_2^V$ for the case of three pairwise transverse Lagrangian cobordisms $(V_0,V_1,V_2)$, and $\boldsymbol{\Delta}_2^W$ vanishes.
\item $(V_0,V_1,V_2)=(\mathbb R\times\Lambda_0,\mathbb R\times\Lambda_1,\mathbb R\times\Lambda_2)$: in this case the map $\boldsymbol{b}_2^V$ vanishes and $\boldsymbol{\Delta}_2^W$ is a map
\begin{alignat*}{1}
\boldsymbol{\Delta}_2^W:\Cth_+(W_1,W_2)\otimes\Cth_+(W_0,W_1)\to C(\Lambda_2,\Lambda_0)
\end{alignat*}
which is actually equal to the map $\Delta_2^W$ for the case of three pairwise transverse Lagrangian cobordisms $(W_0,W_1,W_2)$.
\end{enumerate}
\section{Continuation element}\label{sec:unit}
Again let $\Sigma_0$ be an exact Lagrangian cobordism from $\Lambda_0^-$ to $\Lambda_0^+$ with $\mathcal{A}(\Lambda_0^-)$ admitting an augmentation $\varepsilon_0^-$.
In this section we prove that there is a \textit{continuation element} $e\in\Cth_+(\Sigma_0,\Sigma_1)$, where $\Sigma_1$ is a suitable small Hamiltonian perturbation of $\Sigma_0$.
Assume $\Sigma_0$ is cylindrical outside $[-T,T]\times Y$. Fix $\eta>0$ smaller than the length of any chord of $\Lambda_0^-$ and $\Lambda_0^+$, and $N>0$. Then we set $\Sigma_1:=\varphi^\eta_{\widetilde{H}}(\Sigma_0)$, where $\widetilde{H}:\mathbb R\times(P\times\mathbb R)\to\mathbb R$ is a small perturbation of the Hamiltonian $H(t,p,z)=h_{T,N}(t)$, for $h_{T,N}:\mathbb R\to\mathbb R$ satisfying
$$\left\{\begin{array}{l}
h_{T,N}(t)=-e^t\ \mbox{ for }t<-T-N\\
h_{T,N}(t)=-e^t+C\ \mbox{ for }t>T+N\\
h_{T,N}'(t)\leq 0\\
{[-T,T]}\subset(h_{T,N}')^{-1}(0)
\end{array}\right.$$
for a positive constant $C$, and whose corresponding Hamiltonian vector field is given by $\rho_{T,N}\partial_z$, with $\rho_{T,N}:\mathbb R\to\mathbb R$ satisfying $\rho_{T,N}(t)=-1$ for $t\leq -T-N$ and $t\geq T+N$, $\rho_{T,N}(t)=0$ for $t\in[-T,T]$, $\rho_{T,N}'\geq0$ in $[-T-N,-T]$ and $\rho_{T,N}'\leq0$ in $[T,T+N]$. Moreover, under an appropriate identification of a tubular neighborhood of $\Sigma_0$ with a standard neighborhood of the $0$-section in $T^*\Sigma_0$, see \cite[Section 6.2.2]{DR}, we assume that $\Sigma_1$ is given by the graph of $dF$ in $T^*\Sigma_0$, where $F:\Sigma_0\to\mathbb R$ is a Morse function satisfying, under this identification, the following properties:
\begin{enumerate}
\item the critical points of $F$ (in one-to-one correspondence with intersection points in $\Sigma_0\cap\Sigma_1$) are all contained in $(-T,T)\times Y$.
\item on the cylindrical ends of $\Sigma_0$, $F$ is equal to $e^t(f_\pm-\eta)$, for $f_\pm:\Lambda_0^\pm\to\mathbb R$ Morse functions such that the $C^0$-norm of $f_\pm$ is much smaller than $\eta$. In other words, the cylindrical ends of $\Sigma_0\cup\Sigma_1$ are cylinders over the $2$-copy $\Lambda_0^\pm\cup\Lambda_1^\pm$, where $\Lambda_1^\pm$ is a Morse perturbation of $\Lambda_0^\pm-\eta\partial_z$ (the translation of $\Lambda_0^\pm$ by $\eta$ in the negative Reeb direction). Moreover, we assume that $f_-$ admits a unique minimum on each connected component.
\item $F$ admits a unique minimum on each filling component of $\Sigma_0$ and has no minimum on any component of $\Sigma_0$ with a non-empty negative end.
\end{enumerate}
See Figure \ref{fig:unit_cycle} for a schematization of the $2$-copy $\Sigma_0\cup\Sigma_1$.
\begin{figure}[ht]
\begin{center}\includegraphics[width=6cm]{unit_cycle.eps}\end{center}
\caption{Schematization of the perturbation $\Sigma_1$ of $\Sigma_0$.}
\label{fig:unit_cycle}
\end{figure}
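Let us sketch the computation behind the formula for the Hamiltonian vector field; we carry it out under the convention $\iota_{X_H}\omega=-dH$ for the symplectization form $\omega=d(e^t\alpha)$ on $\mathbb R\times(P\times\mathbb R)$, assuming $\alpha(\partial_z)=1$ and $\iota_{\partial_z}d\alpha=0$, i.e.\ that $\partial_z$ is the Reeb vector field of $\alpha$ (with the opposite convention, $\rho_{T,N}$ changes by a global sign). For a vector field of the form $X=f(t)\partial_z$ one has
$$\iota_X\omega=\iota_X\big(e^t\,dt\wedge\alpha+e^t\,d\alpha\big)=-e^tf(t)\,dt,$$
so $\iota_{X_H}\omega=-dH=-h_{T,N}'(t)\,dt$ forces $f(t)=e^{-t}h_{T,N}'(t)$, i.e.\ $\rho_{T,N}=e^{-t}h_{T,N}'$. In particular $\rho_{T,N}\equiv-1$ on the regions where $h_{T,N}(t)=-e^t$ or $-e^t+C$, and $\rho_{T,N}\equiv0$ on $[-T,T]$, in accordance with the properties of $\rho_{T,N}$ listed above.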
The Chekanov-Eliashberg algebras $\mathcal{A}(\Lambda_0^-)$ and $\mathcal{A}(\Lambda_1^-)$ are canonically identified and thus an augmentation $\varepsilon_0^-$ of $\mathcal{A}(\Lambda_0^-)$ can be seen as an augmentation of $\mathcal{A}(\Lambda_1^-)$. Moreover, for $\eta$ small enough and Morse functions $F,f_\pm$ such that $\Sigma_1$ is sufficiently $C^1$-close to $\Sigma_0$, one has $\varepsilon_0^-\circ\Phi_{\Sigma_0}=\varepsilon_0^-\circ\Phi_{\Sigma_1}$, see \cite[Theorem 2.15]{CDGG1}.\\
Let us denote $e=e^0+e^-$, where $e^0=\sum e^0_i$ is the sum of the minima $e_i^0$ of $F$ and $e^-=\sum e_i^-$ is the sum of the minima $e_i^-$ of $f_-$, the sums being indexed over the connected components of $\Sigma_0$. Each $e_i^-$ corresponds to a Reeb chord from $\Lambda_1^-$ to $\Lambda_0^-$.
\begin{prop}\label{unit_cycle}
We have $\mfm_1^{\Sigma_{01}}(e)=0$, i.e. $e$ is a cycle.
\end{prop}
\begin{proof}
We expand $\mfm_1^{\Sigma_{01}}(e)=\sum\mfm_1^0(e_i^0)+\sum\mfm_1^0(e_i^-)+\sum\mfm_1^-(e_i^0)+\sum\mfm_1^-(e_i^-)$.
First, for all $i$ we have $\mfm_1^-(e_i^0)=0$ for energy reasons: given the Hamiltonian perturbation we chose, all intersection points have positive action (we assume the perturbation from $H$ to $\widetilde{H}$ is $C^1$-small).
Then, we prove that $\sum\mfm_1^-(e_i^-)=0$. This results from the analysis of pseudo-holomorphic discs with boundary on a $2$-copy of a Legendrian done in \cite{EESa}, as well as the fact that when the almost complex structure is the cylindrical lift of an admissible complex structure on $P$, there is a bijection between rigid pseudo-holomorphic discs with boundary on $\pi_P(\Lambda)$ and rigid pseudo-holomorphic discs with boundary on $\mathbb R\times\Lambda$, as proved in \cite{DR}. Recall that $\mfm_1^-(e_i^-)$ is defined by a count of rigid pseudo-holomorphic discs with boundary on $\mathbb R\times(\Lambda_0^-\cup\Lambda_1^-)$, with a negative asymptotic to $e_i^-$ and a positive asymptotic to an output Reeb chord from $\Lambda_1^-$ to $\Lambda_0^-$. In particular we distinguish two cases: either the output Reeb chord, call it $\beta_{10}$, is a Morse chord, or it is not.
If it is Morse, then such a disc has no pure Reeb chord asymptotics for action reasons and corresponds to a gradient flow line of the Morse function $f_-$ flowing down from the critical point corresponding to $\beta_{10}$ to the one corresponding to $e_i^-$. There are exactly two such flow lines.
If $\beta_{10}$ is not a Morse chord, we refer to
\cite[Theorem 5.5]{EESa}: a rigid disc with positive asymptotic at $\beta_{10}$ and negative asymptotic at $e_i^-$ corresponds to a rigid generalized disc with boundary on $\mathbb R\times\Lambda^-$. By rigidity, this generalized disc consists of a constant disc $u$ at $\beta_0$ (the pure chord of $\Lambda_0^-$ corresponding to the chord $\beta_{10}$ of the $2$-copy) together with a gradient flow line starting on the boundary of $u$ and flowing down to $e_i^-$. There are two ways this descending gradient flow line can be attached to $u$: either at the starting point of $\beta_0$, or at its ending point.
Thus we get that the contribution of $\beta_{10}$ to $\sum\mfm_1^-(e_i^-)$ is given by $\varepsilon_0^-(\beta_0)+\varepsilon_1^-(\beta_0)=0$, since $\varepsilon_1^-$ is identified with $\varepsilon_0^-$ and we work modulo $2$.
Finally, we prove $\sum\mfm_1^0(e_i^0)+\sum\mfm_1^0(e_i^-)=0$.
Wrap the negative end of $\Sigma_1$ slightly in the positive Reeb direction using the Hamiltonian vector field $\rho^-_{T+N,N}\partial_z$ (see Section \ref{wrapping}). Let $V_1$ be the image of $\mathbb R\times\Lambda_1^-$ by the corresponding time-$s_-$ flow, where $s_-$ is bigger than the longest Morse chord from $\Lambda_1^-$ to $\Lambda_0^-$ but much smaller than the shortest non-Morse chord from $\Lambda_1^-$ to $\Lambda_0^-$. We set $V_0=\mathbb R\times\Lambda_0^-$.
Observe that each Morse chord becomes an intersection point in $V_0\cap V_1$. We denote $m_i$ the intersection point corresponding to $e_i^-$, see Figure \ref{wrap_unit3}.
\begin{figure}[ht]
\begin{center}\includegraphics[width=4cm]{wrap_unit3.eps}\end{center}
\caption{Schematization of the wrapping of the negative end of $\Sigma_1$.}
\label{wrap_unit3}
\end{figure}
Consider the pair of concatenations $(V_0\odot\Sigma_0,V_1\odot\Sigma_1)$.
By projecting curves on $P$ as done in Section \ref{wrapping}, one can prove that $b_1^V(m_i)=e_i^-$. Then, by definition of the differential in a concatenation,
\begin{alignat*}{1}
\mfm_1^{V\odot\Sigma}(m_i)=\mfm_1^{\Sigma,+0}\circ\boldsymbol{b}_1^V(m_i)+\mfm_1^{V,0-}\circ\boldsymbol{\Delta}_1^\Sigma(m_i)
\end{alignat*}
which gives $\mfm_1^{V\odot\Sigma,0_\Sigma}(m_i)=\mfm_1^{\Sigma,0}\circ\,b_1^V(m_i)=\mfm_1^{\Sigma,0}(e_i^-)$.
The Hamiltonian used to wrap the negative end of $\Sigma_1$ is assumed to be sufficiently small so that $V_1\odot\Sigma_1$ is a small Morse perturbation of $V_0\odot\Sigma_0$ by a function $\widetilde{F}$ which equals $F$ on $([-T,T]\times Y)\cap\Sigma_0$. Then, there is a one-to-one correspondence between curves contributing to $\mfm_1^{V\odot\Sigma,0_\Sigma}(m_i)$ and gradient flow lines of $\widetilde{F}$ from a critical point in $\Sigma_0$ to the critical point of $\widetilde{F}$ corresponding to $m_i$.
Now, each intersection point in $\Sigma_0\cap\Sigma_1$ which corresponds to an index $1$ critical point of $F$ and so also of $\widetilde{F}$, is the starting point of two descending gradient flow lines flowing down to a minimum $e_i^0$ or $m_i$.
Thus, modulo $2$ we have $\sum\mfm_1^0(e_i^0)+\sum\mfm_1^0(e_i^-)=0$.
\end{proof}
\begin{teo}\label{teo_unit}
Consider $\Sigma_0$ and $\Sigma_1$ as above, and let $\Lambda_2^-\prec_{\Sigma_2}\Lambda_2^+$ be another exact Lagrangian cobordism such that the intersection of $\Sigma_2$ with a small standard neighborhood of $\Sigma_0$, identified with $D_\varepsilon T^*\Sigma_0$ and containing also $\Sigma_1$, consists of a union of fibres. Then the map
\begin{alignat}{1}
\mfm_2^{\Sigma_{012}}(\,\cdot\,,e):\Cth_+(\Sigma_1,\Sigma_2)\to\Cth_+(\Sigma_0,\Sigma_2)\label{map_prod}
\end{alignat} is an isomorphism.
\end{teo}
\begin{rem}
Given $\Sigma_0$ and a transverse cobordism $\Sigma_2$, one can always find a sufficiently small perturbation $\Sigma_1$ of $\Sigma_0$ such that the intersection of $\Sigma_2$ with $D_\varepsilon T^*\Sigma_0$, which contains $\Sigma_1$, consists of a union of fibres. This way, there is a canonical identification of vector spaces $\Cth_+(\Sigma_1,\Sigma_2)\cong\Cth_+(\Sigma_0,\Sigma_2)$, and for a generator $\gamma_{12}$, $x_{12}$ or $\gamma_{21}$ in $\Cth_+(\Sigma_1,\Sigma_2)$, one denotes respectively by $\gamma_{02}$, $x_{02}$ or $\gamma_{20}$ the corresponding generator in $\Cth_+(\Sigma_0,\Sigma_2)$.
\end{rem}
\begin{rem}
Proposition \ref{unit_cycle} states that the element $e$ is a cycle. Given Theorem \ref{teo_unit} one gets that it is a boundary if and only if $\Cth_+(\Sigma_1,\Sigma_2)$ is acyclic for every cobordism $\Sigma_2$ satisfying the hypothesis of the theorem, as proved in \cite[Lemma 4.17]{CDGG3}.
\end{rem}
\begin{proof}[Proof of Theorem \ref{teo_unit}]
First we write
\begin{alignat}{1}
\Cth_+(\Sigma_1,\Sigma_2)=C(\Lambda_2^+,\Lambda_1^+)^\dagger[n-1]\oplus CF^-(\Sigma_1,\Sigma_2)\oplus C(\Lambda_1^-,\Lambda_2^-)\oplus CF^+(\Sigma_1,\Sigma_2)\label{decomp}
\end{alignat}
where $CF^\pm(\Sigma_1,\Sigma_2)\subset CF(\Sigma_1,\Sigma_2)$ is the sub-vector space generated by the intersection points of positive, resp. negative, action. According to this decomposition, ordering Reeb chords in $C(\Lambda_2^+,\Lambda_1^+)^\dagger[n-1]$ from biggest to smallest action and intersection points in $CF^-(\Sigma_1,\Sigma_2)\oplus CF^+(\Sigma_1,\Sigma_2)$ from smallest to biggest action, we will show that the matrix of the map \eqref{map_prod} is lower triangular with identity terms on the diagonal.
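Schematically, and anticipating the computations of points \textbf{1.}, \textbf{2.} and \textbf{3.} below, with respect to the decomposition \eqref{decomp} ordered as above (the target $\Cth_+(\Sigma_0,\Sigma_2)$ being decomposed and ordered in the same way), the matrix of \eqref{map_prod} will take the block form
$$\mfm_2(\,\cdot\,,e)=\begin{pmatrix} I+N_1 & 0 & 0 & 0\\ \ast & I+N_2 & 0 & 0\\ \ast & \ast & I & 0\\ \ast & \ast & \ast & I+N_3\end{pmatrix}$$
where the blocks correspond to $C(\Lambda_2^+,\Lambda_1^+)^\dagger[n-1]$, $CF^-$, $C(\Lambda_1^-,\Lambda_2^-)$ and $CF^+$ in this order, and $N_1,N_2,N_3$ are strictly lower triangular for the action orderings fixed above; such a matrix is invertible.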
\noindent\textbf{1.} For $\gamma_{12}\in C(\Lambda_2^+,\Lambda_1^+)$, we have
\begin{alignat*}{2}
\mfm_2(\gamma_{12},e)&=\mfm_2^+(\gamma_{12},e)+\mfm_2^0(\gamma_{12},e)+\mfm_2^-(\gamma_{12},e)\\
&=\Delta_2^+\big(\gamma_{12},b_1^\Sigma(e)\big)+\mfm_2^0(\gamma_{12},e)+b_1^-\circ\Delta_2^\Sigma(\gamma_{12},e)+b_2^-\big(\Delta_1^\Sigma(\gamma_{12}),e^-\big)+b_2^-\big(\Delta_1^\Sigma(\gamma_{12}),\Delta_1^\Sigma(e^0)\big)\\
&=\Delta_2^+\big(\gamma_{12},b_1^\Sigma(e)\big)+\mfm_2^0(\gamma_{12},e)+b_1^-\circ\Delta_2^\Sigma(\gamma_{12},e)+b_2^-\big(\Delta_1^\Sigma(\gamma_{12}),e^-\big)
\end{alignat*}
where the last equality is because $\Delta_1^\Sigma(e_i^0)$ vanishes for energy reasons. In Figure \ref{unit3} we schematize the pseudo-holomorphic configurations (potentially) contributing to $\mfm_2(\gamma_{12},e)$. Let $i$ denote the index of the connected component of $\Sigma_1$ containing the starting point of $\gamma_{12}$. Note that by the hypothesis on the Morse function $F$, if this component has a non-empty negative end only the configurations A, B, C, and D are relevant, whereas if it is a filling component then only the configurations A', B' and C' are. We consider only the first case, the proof for the other one being similar. We will prove that $$\mfm_2(\gamma_{12},e)=\gamma_{02}+\boldsymbol{\zeta}_{02}+\boldsymbol{y}_{02}^-+\boldsymbol{\xi}_{20}+\boldsymbol{y}_{02}^+$$
where $\gamma_{02}\in C(\Lambda_2^+,\Lambda_0^+)$ is the Reeb chord canonically identified with $\gamma_{12}$, $\boldsymbol{\zeta}_{02}\in C(\Lambda_2^+,\Lambda_0^+)$ is a linear combination of Reeb chords whose actions are smaller than the action of $\gamma_{02}$, $\boldsymbol{y}_{02}^\pm\in CF^\pm(\Sigma_0,\Sigma_2)$ and $\boldsymbol{\xi}_{20}\in C(\Lambda_0^-,\Lambda_2^-)$.
\begin{figure}[ht]
\begin{center}\includegraphics[width=12cm]{unit3.eps}\end{center}
\caption{Types of curves (potentially) contributing to $\mfm_2(\gamma_{12},e)$.}
\label{unit3}
\end{figure}
Let us consider the configurations of type A. Denote by $v$ a rigid disc with boundary on the positive cylindrical ends, with a positive asymptotic to $\gamma_{12}$, a negative Reeb chord asymptotic $\beta_{10}\in C(\Lambda_0^+,\Lambda_1^+)$, and an output negative Reeb chord asymptotic $\gamma_{out}\in C(\Lambda_2^+,\Lambda_0^+)$, and by $u$ a rigid disc with boundary on $\Sigma_0\cup\Sigma_1$ with a positive asymptotic to $\beta_{10}$ and a negative asymptotic to a minimum Morse Reeb chord $e_i^-$.
We distinguish two cases: either $\beta_{10}$ is a Morse chord, or it is not.
\begin{enumerate}
\item[(a)] If $\beta_{10}$ is a Morse chord. First, rigidity implies that $|\beta_{10}|=|e_i^-|=-1$, and thus $\beta_{10}$ corresponds to the (by assumption unique) minimum of $f_+$ on the component of $\Lambda_0^+$ containing the starting point of $\gamma_{02}$. Then for action reasons the disc $u$ has no pure Reeb chord asymptotics.
Similarly as in the proof of Proposition \ref{unit_cycle}, we show that the count of such discs $u$ coincides with the count of some rigid gradient flow lines of a Morse function $\widetilde{F}$ which equals $F$ on $\Sigma_0\cap([-T,T]\times Y)$.
To get this correspondence, wrap the negative and the positive ends of $\Sigma_1$ slightly in the positive Reeb direction:
take $V_1$, resp.\ $W_1$, to be the image of $\mathbb R\times\Lambda_1^-$, resp.\ $\mathbb R\times\Lambda_1^+$, by the time-$s_-$, resp.\ time-$s_+$, flow of the Hamiltonian vector field $\rho^-_{T+N,N}\partial_z$, resp.\ $-\rho^+_{T+N,N}\partial_z$, with $s_\pm$ bigger than the longest Morse chord from $\Lambda_1^\pm$ to $\Lambda_0^\pm$ but smaller than the shortest non-Morse chord from $\Lambda_1^\pm$ to $\Lambda_0^\pm$. See Figure \ref{wrap_unit2} for a schematization of the perturbation.
\begin{figure}[ht]
\begin{center}\includegraphics[width=4cm]{wrap_unit2.eps}\end{center}
\caption{Schematization of wrapping of the negative and positive ends of $\Sigma_1$.}
\label{wrap_unit2}
\end{figure}
This way, $e_i^-$ corresponds canonically to an intersection point $m_i\in CF(V_0,V_1)$ and $\beta_{10}$ corresponds to an intersection point $x_\beta\in CF(W_0,W_1)$.
As before, by projecting discs on $P$ one can prove that
$b_1^V(m_i)=e_i^-$ and $\mfm_1^{W,0}(\beta_{10})=x_\beta$.
Now, by definition of the differential for the pairs of concatenated cobordisms $(V_0\odot(\Sigma_0\odot W_0),V_1\odot(\Sigma_1\odot W_1))$ and $(\Sigma_0\odot W_0,\Sigma_1\odot W_1)$ one has
\begin{alignat}{1}
\mfm_1^{V\odot(\Sigma\odot W)}(m_i)=\mfm_1^{\Sigma\odot W,+0}\circ\boldsymbol{b}_1^V(m_i)+\mfm_1^{V,0-}\circ\boldsymbol{\Delta}_1^{\Sigma\odot W}(m_i)\label{correspond}
\end{alignat}
and
\begin{alignat*}{1}
\mfm_1^{\Sigma\odot W,+0}\circ\boldsymbol{b}_1^V(m_i)=\mfm_1^{W,+0}\circ\boldsymbol{b}_1^\Sigma\circ\boldsymbol{b}_1^V(m_i)+\mfm_1^{\Sigma,0}\circ\boldsymbol{\Delta}_1^{W}\circ\boldsymbol{b}_1^V(m_i)
\end{alignat*}
Considering the components with values in $CF(W_0,W_1)$ on both sides of \eqref{correspond} gives
$$\mfm_1^{V\odot(\Sigma\odot W),0_W}(m_i)=\mfm_1^{W,0}\circ\boldsymbol{b}_1^\Sigma\circ\boldsymbol{b}_1^V(m_i)=\mfm_1^{W,0}\circ b_1^\Sigma\circ b_1^V(m_i)=\mfm_1^{W,0}\circ b_1^\Sigma(e_i^-)$$
The coefficient of $\beta_{10}$ in $b_1^\Sigma(e_i^-)$ is thus equal to the coefficient of $x_\beta$ in $\mfm_1^{V\odot(\Sigma\odot W),0_W}(m_i)$.
The wrapping of $\Sigma_1$ being sufficiently small, one can view $V_1\odot\Sigma_1\odot W_1$ as a Morse perturbation of $\Sigma_0$ by a Morse function $\widetilde{F}$ which is equal to $F$ in the non-cylindrical part of $\Sigma_0$.
Thus, pseudo-holomorphic strips asymptotic to $m_i$ and $x_\beta$ are in one-to-one correspondence with gradient flow lines of $\widetilde{F}$ from the critical point corresponding to $x_\beta$ to the one corresponding to $m_i$, and there exists exactly one such gradient flow line. Indeed, $x_\beta$ is the starting point of two descending gradient flow lines of $\widetilde{F}$, but according to the perturbation we performed one of them has its $t$-coordinate going to $+\infty$ while the other one flows down to $m_i$.
It remains to understand the pseudo-holomorphic disc $v$ with boundary on $\mathbb R\times(\Lambda_0^+\cup\Lambda_1^+\cup\Lambda_2^+)$ with a positive asymptotic to $\gamma_{12}$ and negative asymptotics to a minimum Morse chord $\beta_{10}$ and a chord $\gamma_{out}\in C(\Lambda_2^+,\Lambda_0^+)$. Again by \cite[Theorem 5.5]{EESa}, such a rigid disc corresponds to a rigid generalized disc, which in this case is a disc with boundary on $\mathbb R\times(\Lambda_0^+\cup\Lambda_2^+)$ with a gradient flow line of $f_+$ flowing from a point on the boundary of the disc in $\mathbb R\times\Lambda_0^+$ to the minimum corresponding to the chord $\beta_{10}$. By rigidity,
one has $\gamma_{out}=\gamma_{02}$.
Conversely, following the flow of $df_+$ from the starting point of $\gamma_{02}$ leads to the minimum of $f_+$ on the corresponding connected component. Such a flow line is a generalized disc which corresponds to such a disc $v$ with boundary on $\mathbb R\times(\Lambda_0^+\cup\Lambda_1^+\cup\Lambda_2^+)$.
We have proved that the coefficient of $\gamma_{02}$ in $\mfm_2(\gamma_{12},e)$ is $1$.
\item[(b)] If $\beta_{10}$ is not a Morse chord. Given $R\geq0$ such that the three cobordisms $\Sigma_0,\Sigma_1$ and $\Sigma_2$ are cylindrical outside of $[-R,R]\times Y$, the energy of the disc $v$ with boundary on the positive cylindrical ends is given by
\begin{alignat*}{1}
E(v)=\mathfrak{a}(\gamma_{12})-\mathfrak{a}(\beta_{10})-\mathfrak{a}(\gamma_{out})-\sum\limits_{i=0}^2\mathfrak{a}(\boldsymbol{\delta}_i)
\end{alignat*}
with $\mathfrak{a}(\gamma_{12})=e^R\ell(\gamma_{12})+\mathfrak{c}_2-\mathfrak{c}_1$, $\mathfrak{a}(\beta_{10})=e^R\ell(\beta_{10})+\mathfrak{c}_0-\mathfrak{c}_1$ and $\mathfrak{a}(\gamma_{out})=e^R\ell(\gamma_{out})+\mathfrak{c}_2-\mathfrak{c}_0$.
One can check that
\begin{alignat*}{1}
&|\mathfrak{a}(\gamma_{12})-\mathfrak{a}(\gamma_{02})|\leq e^R(\max\|f_+\|_{\mathcal{C}^0}+\eta)+\mathfrak{c}_0-\mathfrak{c}_1\\
&|\mathfrak{a}(\beta_{10})-\mathfrak{a}(\beta_0)|\leq e^R(\max\|f_+\|_{\mathcal{C}^0}+\eta)+\mathfrak{c}_0-\mathfrak{c}_1
\end{alignat*}
and thus, solving the energy identity above for $\mathfrak{a}(\gamma_{out})$ and using $\mathfrak{a}(\boldsymbol{\delta}_i)\geq0$,
\begin{alignat*}{1}
\mathfrak{a}(\gamma_{02})-\mathfrak{a}(\gamma_{out})\geq E(v)+\mathfrak{a}(\beta_0)-2\big(e^R(\max\|f_+\|_{\mathcal{C}^0}+\eta)+\mathfrak{c}_0-\mathfrak{c}_1\big)
\end{alignat*}
and for $\eta$ sufficiently small, the right-hand side is strictly positive, so the action of $\gamma_{out}$ is strictly smaller than that of $\gamma_{02}$.
\end{enumerate}
Thus, together with the curves of type B, C, and D (actually one can prove that the configuration D never happens, by projecting on $P$ the curve with boundary on the negative cylindrical ends), we obtain as expected
\begin{alignat*}{1}
\mfm_2(\gamma_{12},e)=\gamma_{02}+\boldsymbol{\zeta}_{02}+\boldsymbol{y}_{02}^-+\boldsymbol{\xi}_{20}+\boldsymbol{y}_{02}^+
\end{alignat*}
\textbf{2.} For $x_{12}\in CF(\Sigma_1,\Sigma_2)=CF^+(\Sigma_1,\Sigma_2)\oplus CF^-(\Sigma_1,\Sigma_2)$ we have
\begin{alignat*}{1}
\mfm_2(x_{12},e)&=\mfm_2^0(x_{12},e)+\mfm_2^-(x_{12},e)\\
&=\mfm_2^0(x_{12},e)+b_1^-\circ\Delta_2^\Sigma(x_{12},e)+b_2^-\big(\Delta_1^\Sigma(x_{12}),e^-\big)+b_2^-\big(\Delta_1^\Sigma(x_{12}),\Delta_1^\Sigma(e^0)\big)\\
&=\mfm_2^0(x_{12},e)+b_1^-\circ\Delta_2^\Sigma(x_{12},e)+b_2^-\big(\Delta_1^\Sigma(x_{12}),e^-\big)
\end{alignat*}
see Figure \ref{unit6}, and we will prove that for $x_{12}^+\in CF^+(\Sigma_1,\Sigma_2)$ and $x_{12}^-\in CF^-(\Sigma_1,\Sigma_2)$, one has
\begin{alignat*}{1}
&\mfm_2(x^+_{12},e)=x^+_{02}+\boldsymbol{y}^+_{02},\mbox{ and }\\
&\mfm_2(x^-_{12},e)=x^-_{02}+\boldsymbol{z}^-_{02}+\boldsymbol{\xi}_{20}+\boldsymbol{z}^+_{02}
\end{alignat*}
where $\boldsymbol{y}_{02}^+, \boldsymbol{z}_{02}^+\in CF^+(\Sigma_0,\Sigma_2)$, $\boldsymbol{z}_{02}^-\in CF^-(\Sigma_0,\Sigma_2)$ and $\boldsymbol{\xi}_{20}\in C(\Lambda_0^-,\Lambda_2^-)$, and each intersection point in $\boldsymbol{y}_{02}^+$, resp. $\boldsymbol{z}^-_{02}$, has action strictly bigger than that of $x_{02}^+$, resp. $x_{02}^-$.
\begin{figure}[ht]
\begin{center}\includegraphics[width=10cm]{unit6.eps}\end{center}
\caption{Curves contributing to $\mfm_2(x_{12},e)$.}
\label{unit6}
\end{figure}
As for the previous case, we assume that $x_{12}$ is an intersection point of $\Sigma_2$ with the $i$-th connected component of $\Sigma_1$, which has a non-empty negative end. Then, only configurations E, F and G are relevant.
We consider first configurations of type E. Let $u$ be a pseudo-holomorphic disc with boundary on $\Sigma_0\cup\Sigma_1\cup\Sigma_2$ negatively asymptotic to $e_i^-$, and asymptotic to $x_{12}\in CF(\Sigma_1,\Sigma_2)$ and an output $x_{out}\in CF(\Sigma_0,\Sigma_2)$.
The actions of intersection points are assumed to be much smaller than those of pure Reeb chords, so $u$ has no pure Reeb chord asymptotics.
To understand what can be the output of such a disc, we wrap as before the negative end of $\Sigma_1$ slightly in the positive Reeb direction to get the pair $(V_0\odot\Sigma_0,V_1\odot\Sigma_1)$ where the Morse Reeb chords $e_j^-$ in $C(\Lambda_0^-,\Lambda_1^-)$ correspond to intersection points $m_j$ in $CF(V_0,V_1)$ and $\boldsymbol{b}_1^V(m_j)=b_1^V(m_j)=e_j^-$. By definition of the product for a pair of concatenated cobordisms (see Section \ref{sec:prod_conc}) we have:
\begin{alignat*}{2}
\mfm_2&^{V\odot \Sigma,0_\Sigma}(x_{12},m_i)\\
&=\mfm_2^{\Sigma,0}(\boldsymbol{b}_1^V(x_{12}),\boldsymbol{b}_1^V(m_i))
+\mfm_1^{\Sigma,0}\circ\,b_1^V\circ\Delta_2^\Sigma(\boldsymbol{b}_1^V(x_{12}),\boldsymbol{b}_1^V(m_i))+\mfm_1^{\Sigma,0}\circ\,b_2^V(\boldsymbol{\Delta}_1^\Sigma(x_{12}),\boldsymbol{\Delta}_1^\Sigma(m_i))\\
&=\mfm_2^{\Sigma,0}(\boldsymbol{b}_1^V(x_{12}),e_i^-)
+\mfm_1^{\Sigma,0}\circ\,b_1^V\circ\Delta_2^\Sigma(\boldsymbol{b}_1^V(x_{12}),e_i^-)+\mfm_1^{\Sigma,0}\circ\,b_2^V(\Delta_1^\Sigma(x_{12}),m_i)\\
&=\mfm_2^{\Sigma,0}(x_{12},e_i^-)+\mfm_2^{\Sigma,0}(b_1^V\circ\Delta_1^\Sigma(x_{12}),e_i^-)
+\mfm_1^{\Sigma,0}\circ\,b_1^V\circ\Delta_2^\Sigma(x_{12},e_i^-)+\mfm_1^{\Sigma,0}\circ\,b_2^V(\Delta_1^\Sigma(x_{12}),m_i)
\end{alignat*}
See Figure \ref{unit2}. All these terms except the first one involve bananas with two positive Reeb chord asymptotics and with boundary on $V_0\cup V_1\cup V_2$, where $V_0=\mathbb R\times\Lambda_0^-$, $V_1$ is a wrapping of $\mathbb R\times\Lambda_1^-$ and $V_2:=\mathbb R\times\Lambda_2^-$. These rigid bananas project to rigid discs with boundary on $\pi_P(\Lambda_0^-\cup\Lambda_1^-\cup\Lambda_2^-)$ and for dimension reasons they must be constant. This is not possible as they all have two distinct positive Reeb chord asymptotics (a constant curve with boundary on $\pi_P(\Lambda_0^-\cup\Lambda_1^-\cup\Lambda_2^-)$ lifts not to a banana with two positive asymptotics but to a trivial strip).
So we are left with $\mfm_2^{V\odot \Sigma,0_\Sigma}(x_{12},m_i)=\mfm_2^{\Sigma,0}(x_{12},e_i^-)$.
Let us denote again by $\widetilde{F}$ the Morse function such that $V_1\odot\Sigma_1$ is viewed as a $1$-jet perturbation of $\Sigma_0$ by $\widetilde{F}$, where $\widetilde{F}$ equals $F$ on $\Sigma_0\cap([-T,T]\times Y)$. The intersection point $m_i$ is a minimum of $\widetilde{F}$ and the gradient flow line of $\widetilde{F}$ flowing from $x_{02}$ to $m_i$ corresponds to a pseudo-holomorphic triangle asymptotic to $x_{20}$, $m_i$ and $x_{12}$ (if the component of $\Sigma_1$ containing $x_{12}$ is a filling, then we do not need to wrap the negative end and consider instead the one-to-one correspondence between gradient flow lines of $F$ from $x_{02}$ to $e_i^0$ and pseudo-holomorphic triangles with vertices $x_{20}$, $e_i^0$ and $x_{12}$). Thus the coefficient of $x_{02}$ in $\mfm_2^{\Sigma,0}(x_{12},e_i^-)$ is $1$.
Note also that the energy of this triangle is given by
\begin{alignat}{1}
E(u)=\mathfrak{a}(x_{20})-\mathfrak{a}(e_i^-)-\mathfrak{a}(x_{12})\label{energy}
\end{alignat}
and by definition of the action one can check that it can be made arbitrarily small by taking $\eta$ smaller.
Now suppose there is another pseudo-holomorphic triangle with asymptotics $x_{12}$, $e_i^-$ and $y_{02}\neq x_{02}$, contributing to the coefficient of $y_{02}$ in $\mfm_2(x_{12},e)$. This triangle necessarily leaves a small neighborhood of the gradient flow line from $x_{20}$ to $m_i$, and thus according to the relation \eqref{energy} between the energy of such a triangle and the action of its asymptotics, the action of $y_{02}$ is strictly bigger than the action of $x_{02}$, independently of how small $\eta$ is.
\begin{figure}[ht]
\begin{center}\includegraphics[width=10cm]{unit2.eps}\end{center}
\caption{Curves contributing to $\mfm_2^{V\odot \Sigma,0_\Sigma}(x,m_i)$.}
\label{unit2}
\end{figure}
Then, about configurations of type F and G, observe that a disc with boundary on the non-cylindrical parts in such configurations exists only if the action of $x_{12}$ is negative.
To sum up, the configurations of type E, F, and G (but for the same reasons as in \textbf{1.} it can be proved that G configurations never happen) give that
\begin{alignat*}{1}
&\mfm_2(x^+_{12},e)=x^+_{02}+\boldsymbol{y}^+_{02},\mbox{ and }\\
&\mfm_2(x^-_{12},e)=x^-_{02}+\boldsymbol{z}^-_{02}+\boldsymbol{\xi}_{20}+\boldsymbol{z}^+_{02}
\end{alignat*}
\textbf{3.} Finally, for $\xi_{21}\in C(\Lambda_1^-,\Lambda_2^-)$ we have
\begin{alignat*}{1}
\mfm_2(\xi_{21},e)&=\mfm_2^0(\xi_{21},e)+\mfm_2^-(\xi_{21},e)\\
&=\mfm_2^0(\xi_{21},e^-)+b_2^-(\xi_{21},e^-)
\end{alignat*}
because the Morse function $F$ has no minima $e_i^0$ on the component of $\Sigma_1$ involved, as this component has a non-empty negative end. See Figure \ref{unit7}. For energy reasons, if a disc of type H exists then the output intersection point must have positive action. Then a disc of type I is such that $\xi_{out}$ is the chord in $C(\Lambda_0^-,\Lambda_2^-)$ canonically identified with $\xi_{21}\in C(\Lambda_1^-,\Lambda_2^-)$, by \cite[Theorem 5.5]{EESa}. So we have $\mfm_2(\xi_{21},e)=\xi_{20}+\boldsymbol{y}^+_{02}$. Altogether, points \textbf{1.}, \textbf{2.} and \textbf{3.} show that the matrix of \eqref{map_prod} has the block form announced after \eqref{decomp}: it is lower triangular with unitriangular diagonal blocks, hence \eqref{map_prod} is an isomorphism.
\begin{figure}[ht]
\begin{center}\includegraphics[width=5cm]{unit7.eps}\end{center}
\caption{Curves contributing to $\mfm_2(\xi_{21},e)$.}
\label{unit7}
\end{figure}
\end{proof}
\begin{rem}
Generalizing the conjectural \cite[Lemma 4.10]{E1} to the case of cobordisms, one could probably prove that with the choice of basis given above the matrix of $\mfm_2(\,\cdot\,,e)$ is actually the identity matrix. However, we do not need such a strong statement at the chain level here: what we get is that the map is the identity in homology (see the details at the end of this section).
\end{rem}
We will now apply the previous theorem to a $3$-copy $(\Sigma_0,\Sigma_1,\Sigma_2)$ of $\Sigma_0$. By $3$-copy we mean that $\Sigma_1$, resp.\ $\Sigma_2$, is viewed as the graph of $dF_{01}$, resp.\ $dF_{02}$, in a standard neighborhood of $\Sigma_0$, and $\Sigma_2$ is viewed as the graph of $dF_{12}$ in a standard neighborhood of $\Sigma_1$, for Morse functions $F_{01}, F_{02}, F_{12}$ satisfying the properties listed at the beginning of the section.
We have the following:
\begin{coro}\label{coro:unit}
Given the $3$-copy $(\Sigma_0,\Sigma_1,\Sigma_2)$ described above, we have
$$\mfm_2(e_{\Sigma_1,\Sigma_2},e_{\Sigma_0,\Sigma_1})=e_{\Sigma_0,\Sigma_2}$$
\end{coro}
\begin{proof}
It is enough to consider the case where $\Sigma_0$ is connected. The case of a filling is already known, see for example \cite{GPS}. We recall a proof in our setting. Assume $\Sigma_0$ is a connected filling of $\Lambda_0^+$, then we have $e_{\Sigma_0,\Sigma_1}=e_{\Sigma_0,\Sigma_1}^0$ and $e_{\Sigma_1,\Sigma_2}=e_{\Sigma_1,\Sigma_2}^0$ and according to Theorem \ref{teo_unit}:
\begin{alignat*}{1}
\mfm_2(e_{\Sigma_1,\Sigma_2}^0,e_{\Sigma_0,\Sigma_1}^0)=e_{\Sigma_0,\Sigma_2}^0+\boldsymbol{y}_{02}^+
\end{alignat*}
where $\boldsymbol{y}_{02}^+\in CF^+(\Sigma_0,\Sigma_2)$, and each element in $\boldsymbol{y}_{02}^+$ has action bigger than the action of $e_{\Sigma_1,\Sigma_2}^0$. Observe then that any triangle asymptotic to $e_{\Sigma_1,\Sigma_2}^0$, $e_{\Sigma_0,\Sigma_1}^0$ and some $y_{02}^+\neq e_{\Sigma_0,\Sigma_2}^0$ would have to leave a small neighborhood of a gradient flow line of $F_{01}$ from $e_{\Sigma_1,\Sigma_2}^0$ to $e_{\Sigma_0,\Sigma_1}^0$. But for a sufficiently small perturbation no $y_{02}^+\neq e_{\Sigma_0,\Sigma_2}^0$ has action big enough for such a triangle to exist.
Suppose now that $\Sigma_0$ is a connected cobordism from $\Lambda_0^-\neq\emptyset$ to $\Lambda_0^+$. Then $e_{\Sigma_0,\Sigma_1}=e_{\Sigma_0,\Sigma_1}^-$ and $e_{\Sigma_1,\Sigma_2}=e_{\Sigma_1,\Sigma_2}^-$ and according to Theorem \ref{teo_unit}:
\begin{alignat*}{1}
\mfm_2(e_{\Sigma_1,\Sigma_2}^-,e_{\Sigma_0,\Sigma_1}^-)=e_{\Sigma_0,\Sigma_2}^-+\boldsymbol{y}_{02}^+
\end{alignat*}
The proof is the same as in the filling case after wrapping slightly the negative ends of $\Sigma_1$ and $\Sigma_2$ in the positive Reeb direction. We wrap so that the negative end becomes a cylinder over $\Lambda_0^-\cup\widetilde{\Lambda}_1^-\cup\widetilde{\Lambda}_2^-$, where $\widetilde{\Lambda}_1^-$ is a small push-off of $\Lambda_0^-$ in the positive Reeb direction and $\widetilde{\Lambda}_2^-$ is a small push-off of $\widetilde{\Lambda}_1^-$ in the positive Reeb direction. The pseudo-holomorphic disc asymptotic to $e_{\Sigma_0,\Sigma_2}^-$, $e_{\Sigma_0,\Sigma_1}^-$ and $e_{\Sigma_1,\Sigma_2}^-$ corresponds after wrapping to a triangle asymptotic to the corresponding intersection points. So then, for the same reasons as before, a disc asymptotic to $y_{02}^+$, $e_{\Sigma_0,\Sigma_1}^-$ and $e_{\Sigma_1,\Sigma_2}^-$ cannot exist.
\end{proof}
We end this section by proving that the transfer maps preserve the continuation element. Consider a pair $(V_0\odot W_0,V_1\odot W_1)$ such that $V_1\odot W_1$ is a small perturbation of $V_0\odot W_0$ in the same way as we perturbed $\Sigma_0$ to get $\Sigma_1$ previously; in particular $\Lambda_1^\pm$ is a perturbation of a push-off of $\Lambda_0^\pm$ in the negative Reeb direction. We assume moreover that the Morse function $F$ used to perturb the compact part of $V_0\odot W_0$ is such that $(V_0,V_1)$ and $(W_0,W_1)$ are also pairs of cobordisms of the same type, so $\Lambda_1$ is a perturbation of a push-off of $\Lambda_0$ in the negative Reeb direction.
Given this, by what we did previously there are continuation elements $e_V\in\Cth_+(V_0,V_1)$, $e_W\in\Cth_+(W_0,W_1)$ and $e_{V\odot W}\in\Cth_+(V_0\odot W_0,V_1\odot W_1)$, described as follows:
\begin{alignat*}{1}
&e_V=e_V^0+e_V^-=\sum e_{V,i}^0+\sum e_{V,i}^-\\
&e_W=e_W^0+e_W^-=\sum e_{W,i}^0+\sum e_{W,i}^-\\
&e_{V\odot W}=e_W^0+e_V^0+e_V^-
\end{alignat*}
\begin{prop}\label{prop:cont1}
The transfer map $\boldsymbol{\Delta}_1^W$ preserves the continuation element.
\end{prop}
\begin{proof}
Directly from the definition, one has
\begin{alignat*}{1}
\boldsymbol{\Delta}_1^W(e_{V\odot W})&=\boldsymbol{\Delta}_1^W(e_W^0+e_V^0+e_V^-)=\Delta_1^W(e_W^0)+e_V^0+e_V^-=e_V^0+e_V^-
\end{alignat*}
where the last equality holds for energy reasons.
\end{proof}
\begin{prop}\label{prop:cont2}
The transfer map $\boldsymbol{b}_1^V$ preserves the continuation element in homology, i.e. $[\boldsymbol{b}_1^V(e_{V\odot W})]=[e_W]$.
\end{prop}
\begin{proof}
Observe first that
\begin{alignat*}{1}
\boldsymbol{b}_1^V(e_{V\odot W})&=\boldsymbol{b}_1^V(e_{W}^0+e_{V}^0+e_{V}^-)=e_{W}^0+b_1^V\circ\Delta_1^W(e_{W}^0)+b_1^V(e_{V}^0+e_{V}^-)=e_{W}^0+b_1^V(e_{V}^0+e_{V}^-)
\end{alignat*}
for energy reasons. Now, wrapping slightly the positive and negative cylindrical ends of $V_1$ in the positive Reeb direction, one can prove that $b_1^V(e_{V}^0+e_{V}^-)=e_{W}^-+E_{W}^-$,
where $E_{W}^-\in C(\Lambda_0,\Lambda_1)$ is a linear combination of non-Morse chords (same type of argument as in the proof of Theorem \ref{teo_unit}).
Now, take a third copy $V_2\odot W_2$ as in Corollary \ref{coro:unit} such that $(V_0,V_1,V_2)$ and $(W_0,W_1,W_2)$ are also $3$-copies.
We claim that the map
\begin{alignat}{1}
\mfm_2^W(\,\cdot\,,\boldsymbol{b}_1^V(e_{V\odot W_{01}})):\Cth_+(W_1,W_2)\to\Cth_+(W_0,W_2)
\label{map1}
\end{alignat}
is a quasi-isomorphism. Again this follows from studying the pseudo-holomorphic curves involved, repeating some arguments of the proof of Theorem \ref{teo_unit}. As we are working over a field, it admits an inverse up to homotopy.
Consider finally a fourth copy $V_3\odot W_3$ being a perturbation of $V_2\odot W_2$ using the same type of perturbation as before. From the third $A_\infty$-relation satisfied by $\mfm^W$ (see Section \ref{A-inf maps}), the fact that $\mfm_2^{V\odot W}(e_{V\odot W_{12}},e_{V\odot W_{01}})=e_{V\odot W_{02}}$, and the fact that $\boldsymbol{b}_1^V$ preserves the product structures in homology, we get that the maps
\begin{alignat*}{1}
&\mfm_2^W\big(\mfm_2^W(\,\cdot\,,\boldsymbol{b}_1^V(e_{V\odot W_{12}})),\boldsymbol{b}_1^V(e_{V\odot W_{01}})\big):\Cth_+(W_2,W_3)\to\Cth_+(W_0,W_3)\\
&\mfm_2^W(\,\cdot\,,\boldsymbol{b}_1^V(e_{V\odot W_{02}})):\Cth_+(W_2,W_3)\to\Cth_+(W_0,W_3)
\end{alignat*}
are homotopic.
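Let us sketch the algebra behind this claim. Write $\beta_{ij}=\boldsymbol{b}_1^V(e_{V\odot W_{ij}})$; these are $\mfm_1^W$-cycles since $\boldsymbol{b}_1^V$ is a chain map and the continuation elements are cycles. Evaluating the $d=3$ relation of Section \ref{A-inf maps} on $(\,\cdot\,,\beta_{12},\beta_{01})$, the terms involving $\mfm_1^W(\beta_{ij})$ vanish and we are left with
$$\mfm_1^W\circ\mfm_3^W(\,\cdot\,,\beta_{12},\beta_{01})+\mfm_3^W\big(\mfm_1^W(\,\cdot\,),\beta_{12},\beta_{01}\big)=\mfm_2^W\big(\mfm_2^W(\,\cdot\,,\beta_{12}),\beta_{01}\big)+\mfm_2^W\big(\,\cdot\,,\mfm_2^W(\beta_{12},\beta_{01})\big),$$
so $\mfm_3^W(\,\cdot\,,\beta_{12},\beta_{01})$ is a chain homotopy between the two maps on the right-hand side. Since $\mfm_2^W(\beta_{12},\beta_{01})$ and $\beta_{02}$ differ by an $\mfm_1^W$-boundary (by Proposition \ref{prop:transferf1} together with $\mfm_2^{V\odot W}(e_{V\odot W_{12}},e_{V\odot W_{01}})=e_{V\odot W_{02}}$), multiplication by this boundary is null-homotopic by the $d=2$ relation, which gives the claimed homotopy.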
It implies that the map \eqref{map1} is homotopic to the identity map (after canonical identification of the generators of the complexes $\Cth_+(W_1,W_2)$ and $\Cth_+(W_0,W_2)$).
Finally, as $e_{W_{12}}$ is a continuation element we have
\begin{alignat*}{1}
\mfm_2^W(e_{W_{12}},\boldsymbol{b}_1^V(e_{V\odot W_{01}}))&=\mfm_2^W(e_{W_{12}},e_{W_{01}}+E_{W_{01}}^-)=\mfm_2^W(e_{W_{12}},e_{W_{01}})+\mfm_2^W(e_{W_{12}},E_{W_{01}}^-)\\
&=e_{W_{02}}+\mfm_2^{W,0}(e_{W_{12}},E_{W_{01}}^-)+\mfm_2^{W,-}(e_{W_{12}},E_{W_{01}}^-)\\
&=e_{W_{02}}+\mfm_2^{W,-}(e_{W_{12}}^-,E_{W_{01}}^-)\\
&=e_{W_{02}}+E_{W_{02}}^-
\end{alignat*}
where the second-to-last equality comes from the fact that the connected components of the cobordisms $W_0,W_1$ have non-empty negative ends, so there is no minimum of the perturbing Morse function and thus $\mfm_2^{W,0-}(e_W^0,E_W^-)=0$, while $\mfm_2^{W,0}(e_W^-,E_W^-)=0$ for action reasons. We thus have $\mfm_2^W(e_W,\boldsymbol{b}_1^V(e_{V\odot W}))=\boldsymbol{b}_1^V(e_{V\odot W})$. As \eqref{map1} is the identity in homology, we get $[e_W]=[\boldsymbol{b}_1^V(e_{V\odot W})]$.
\end{proof}
\begin{rem} Observe that the same arguments show that
\begin{alignat*}{1}
\mfm_2^{\Sigma_{012}}(\,\cdot\,,e):\Cth_+(\Sigma_1,\Sigma_2)\to\Cth_+(\Sigma_0,\Sigma_2)
\end{alignat*}
is the identity in homology as we have proved that it is an isomorphism (Theorem \ref{teo_unit}) and that $\mfm_2(e_{\Sigma_1,\Sigma_2},e_{\Sigma_0,\Sigma_1})=e_{\Sigma_0,\Sigma_2}$ (Corollary \ref{coro:unit}).
\end{rem}
\begin{rem}
In Sections \ref{sec:higher} and \ref{sec:higher_conc} we will extend the algebraic structures we have encountered to $A_\infty$ ones. In particular we will define an $A_\infty$-category of cobordisms in $\mathbb R\times Y$, $\Fuk(\mathbb R\times Y)$, and generalize the transfer maps to families of maps satisfying the $A_\infty$-functor equations.
Once the technical details to extend our algebraic constructions to Lagrangian cobordisms in a more general Liouville cobordism are carried out, the transfer maps will provide $A_\infty$-functors $\Fuk^{dec}(X_0\odot X_1)\to\Fuk(X_i)$ from the full subcategory $\Fuk^{dec}(X_0\odot X_1)\subset\Fuk(X_0\odot X_1)$ generated by decomposable Lagrangian cobordisms to the Fukaya category of each cobordism. By Propositions \ref{prop:cont1} and \ref{prop:cont2} these functors will be cohomologically unital.
\end{rem}
\section{An $A_\infty$-category of Lagrangian cobordisms}\label{sec:higher}
\subsection{Higher order maps}\label{A-inf maps}
In this section, we extend the differential $\mfm_1^\Sigma$ and the product $\mfm_2^\Sigma$ to families of maps $\mfm_d^\Sigma$ defined for each $(d+1)$-tuple of pairwise transverse exact Lagrangian cobordisms $(\Sigma_0,\dots,\Sigma_d)$, for all $d\geq1$. Remember that we denote $\mathfrak{C}(\Lambda^\pm_i,\Lambda^\pm_j)=C_{n-1-*}(\Lambda^\pm_i,\Lambda^\pm_j)\oplus C^{*-1}(\Lambda^\pm_j,\Lambda^\pm_i)$. We first define six families of maps, $b_d^+$, $b_d^-$, $\Delta_d^+$, $\Delta_d^-$, $b_d^\Sigma$ and $\Delta_d^\Sigma$:
\begin{alignat*}{1}
&b_d^\pm:\mathfrak{C}(\Lambda_{d-1}^\pm,\Lambda_d^\pm)\otimes\mathfrak{C}(\Lambda_{d-2}^\pm,\Lambda_{d-1}^\pm)\otimes\dots\otimes\mathfrak{C}(\Lambda_0^\pm,\Lambda_1^\pm)\to C^{*-1}(\Lambda_0^\pm,\Lambda_d^\pm)\\
&\Delta_d^\pm:\mathfrak{C}(\Lambda_{d-1}^\pm,\Lambda_d^\pm)\otimes\mathfrak{C}(\Lambda_{d-2}^\pm,\Lambda_{d-1}^\pm)\otimes\dots\otimes\mathfrak{C}(\Lambda_0^\pm,\Lambda_1^\pm)\to C_{n-1-*}(\Lambda_d^\pm,\Lambda_0^\pm)\\
&b^\Sigma_d:\Cth_+(\Sigma_{d-1},\Sigma_d)\otimes\Cth_+(\Sigma_{d-2},\Sigma_{d-1})\otimes\dots\otimes\Cth_+(\Sigma_0,\Sigma_1)\to C^{*-1}(\Lambda_0^+,\Lambda_d^+)\\
&\Delta^\Sigma_d:\Cth_+(\Sigma_{d-1},\Sigma_d)\otimes\Cth_+(\Sigma_{d-2},\Sigma_{d-1})\otimes\dots\otimes\Cth_+(\Sigma_0,\Sigma_1)\to C_{n-1-*}(\Lambda_d^-,\Lambda_0^-)
\end{alignat*}
as follows:
\begin{alignat*}{1}
&b_d^+(a_d,\dots,a_1)=\sum\limits_{\gamma_{d,0}}\sum\limits_{\boldsymbol{\zeta}_i}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{0,\dots,d}^+}(\gamma_{d,0};\boldsymbol{\zeta}_0,a_1,\dots,a_d,\boldsymbol{\zeta}_d)\cdot\varepsilon_i^+(\boldsymbol{\zeta}_i)\cdot\gamma_{d,0}\\
&b_d^-(a_d,\dots,a_1)=\sum\limits_{\gamma_{d,0}}\sum\limits_{\boldsymbol{\delta}_i}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{0,\dots,d}^-}(\gamma_{d,0};\boldsymbol{\delta}_0,a_1,\dots,a_d,\boldsymbol{\delta}_d)\cdot\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot\gamma_{d,0}\\
&\Delta_d^+(a_d,\dots,a_1)=\sum\limits_{\gamma_{0,d}}\sum\limits_{\boldsymbol{\zeta}_i}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{0,\dots,d}^+}(\gamma_{0,d};\boldsymbol{\zeta}_0,a_1,\dots,a_d,\boldsymbol{\zeta}_d)\cdot\varepsilon_i^+(\boldsymbol{\zeta}_i)\cdot\gamma_{0,d}\\
&\Delta_d^-(a_d,\dots,a_1)=\sum\limits_{\gamma_{0,d}}\sum\limits_{\boldsymbol{\delta}_i}\#\widetilde{\mathcal{M}^1}_{\mathbb R\times\Lambda_{0,\dots,d}^-}(\gamma_{0,d};\boldsymbol{\delta}_0,a_1,\dots,a_d,\boldsymbol{\delta}_d)\cdot\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot\gamma_{0,d}\\
&b_d^\Sigma(a_d,\dots,a_1)=\sum\limits_{\gamma_{d,0}}\sum\limits_{\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{0,\dots,d}}(\gamma_{d,0};\boldsymbol{\delta}_0,a_1,\dots,a_d,\boldsymbol{\delta}_d)\cdot\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot\gamma_{d,0}\\
&\Delta_d^\Sigma(a_d,\dots,a_1)=\sum\limits_{\gamma_{0,d}}\sum\limits_{\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{0,\dots,d}}(\gamma_{0,d};\boldsymbol{\delta}_0,a_1,\dots,a_d,\boldsymbol{\delta}_d)\cdot\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot\gamma_{0,d}
\end{alignat*}
Observe that for $d=1$ these maps have already been considered in Section \ref{sec:Cth}, while $\Delta_2^+$, $\Delta_2^\Sigma$ and $b_2^-$ were defined in Section \ref{section:def_product}.
Given these families of maps, we define the higher order maps $\mfm_d$ as being the sum $\mfm_d=\mfm_d^++\mfm_d^0+\mfm_d^-$, where each component is defined by:
\begin{alignat}{1}
&\mfm_d^+(a_d,\dots,a_1)=\sum\limits_{j=1}^d\sum\limits_{i_1+\dots+ i_j=d}\Delta^+_j\big(\boldsymbol{b}_{i_j}^\Sigma(a_d,\dots,a_{d-i_j+1}),\dots,\boldsymbol{b}_{i_1}^\Sigma(a_{i_1+1},\dots,a_1)\big)\label{defmap+}\\
&\mfm_d^0(a_d,\dots,a_1)=\sum\limits_{x\in\Sigma_0\cap\Sigma_d}\sum\limits_{\boldsymbol{\delta}_i}\#\mathcal{M}^0_{\Sigma_{0,...,d}}(x;\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,a_d,\boldsymbol{\delta}_d)\cdot\varepsilon_i^-(\boldsymbol{\delta}_i)\cdot x\label{defmap0}\\
&\mfm_d^-(a_d,\dots,a_1)=\sum\limits_{j=1}^d\sum\limits_{i_1+\dots+ i_j=d}b^-_j\big(\boldsymbol{\Delta}_{i_j}^\Sigma(a_d,\dots,a_{d-i_j+1}),\dots,\boldsymbol{\Delta}_{i_1}^\Sigma(a_{i_1+1},\dots,a_1)\big)\label{defmap-}
\end{alignat}
where the maps $\boldsymbol{b}_1^\Sigma$ and $\boldsymbol{\Delta}_1^\Sigma$ are special cases of transfer maps as explained in Section \ref{special_case}, and for $j\geq2$ one has $\boldsymbol{b}_j^\Sigma:=b_j^\Sigma$ and $\boldsymbol{\Delta}_j^\Sigma:=\Delta_j^\Sigma$.
In the formulas above, for $1\leq j\leq d$ fixed and an index $i_s$, the maps $\boldsymbol{b}_{i_s}^\Sigma$ and $\boldsymbol{\Delta}_{i_s}^\Sigma$ are defined on (with convention $i_0=-1$):
\begin{alignat*}{1}
\Cth_+(\Sigma_{i_{s}+\dots+i_1},\Sigma_{1+i_{s}+\dots+i_1})\otimes\dots\otimes\Cth_+(\Sigma_{1+i_{s-1}+\dots+i_1},\Sigma_{2+i_{s-1}+\dots+ i_1})
\end{alignat*}
and the maps $\Delta_j^+$ and $b_j^-$ on
\begin{alignat*}{1}
\Cth_+(\Sigma_{1+i_{j-1}+\dots+i_1},\Sigma_d)\otimes\dots\otimes\Cth_+(\Sigma_0,\Sigma_{i_1+1})
\end{alignat*}
For $d=1,2$, Formulas \eqref{defmap+}, \eqref{defmap0} and \eqref{defmap-} recover the definitions of the differential $\mfm_1$ and the product $\mfm_2$ given in Sections \ref{section:def_complex} and \ref{section:def_product}.
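For instance, unwinding Formulas \eqref{defmap+} and \eqref{defmap-} for $d=1$ (where the only partition is $i_1=1$) yields the decomposition
\begin{alignat*}{1}
\mfm_1=\Delta_1^+\circ\boldsymbol{b}_1^\Sigma+\mfm_1^0+b_1^-\circ\boldsymbol{\Delta}_1^\Sigma.
\end{alignat*}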
\begin{rem}
Observe that, for energy reasons, many terms in Formulas \eqref{defmap+} and \eqref{defmap-} may vanish, depending on the $d$-tuple of asymptotics; if, however, $a_{i+1}$ is a Reeb chord in $C(\Lambda_{i+1}^+,\Lambda_i^+)$ for each $i=0,\dots,d-1$, then none of them vanish.
\end{rem}
\begin{rem}
The maps $b_d^+$ and $\Delta_d^-$ defined previously are not needed to define the maps $\mfm_d$, but they appear naturally in the proof of the $A_\infty$-equations; see Sections \ref{proofrelinf+} and \ref{proofrelinf-} below.
\end{rem}
Now we want to show that the maps $\{\mfm_d\}_{d\geq1}$ satisfy the $A_\infty$-equations, i.e. for all $k\geq1$ and every $(k+1)$-tuple of transverse cobordisms $(\Sigma_0,\dots,\Sigma_k)$, we want to check that for every $1\leq d\leq k$ and every $(d+1)$-sub-tuple $(\Sigma_{i_0},\dots,\Sigma_{i_d})$ with $i_0<\dots<i_d$, we have:
\begin{alignat*}{1}
\sum\limits_{j=1}^d\sum\limits_{n=0}^{d-j}\mfm_{d-j+1}(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n})=0
\end{alignat*}
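For example, for $d=2$ this equation specializes (over $\mathbb{Z}_2$, so without signs) to the Leibniz rule
\begin{alignat*}{1}
\mfm_1\big(\mfm_2(a_2,a_1)\big)+\mfm_2\big(\mfm_1(a_2),a_1\big)+\mfm_2\big(a_2,\mfm_1(a_1)\big)=0,
\end{alignat*}
expressing that the product $\mfm_2$ is a chain map with respect to the differential $\mfm_1$.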
To simplify notation, in the following we assume that the $(d+1)$-tuple $(\Sigma_{i_0},\dots,\Sigma_{i_d})$ is $(\Sigma_0,\dots,\Sigma_d)$.
As usual, we decompose this equation into three equations to be checked:
\begin{alignat}{1}
&\sum\limits_{j=1}^d\sum\limits_{n=0}^{d-j}\mfm^+_{d-j+1}(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n})=0\label{relinf+}\\
&\sum\limits_{j=1}^d\sum\limits_{n=0}^{d-j}\mfm^0_{d-j+1}(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n})=0\label{relinf0}\\
&\sum\limits_{j=1}^d\sum\limits_{n=0}^{d-j}\mfm^-_{d-j+1}(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n})=0\label{relinf-}
\end{alignat}
\subsubsection{Proof of Equation \eqref{relinf+}}\label{proofrelinf+}
Consider the boundary of the compactification of $\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{0,\dots,d}^+}(\gamma_{0,d};\boldsymbol{\zeta}_0,a_1,\dots,a_d,\boldsymbol{\zeta}_d)$. According to the compactness results for one-dimensional moduli spaces of pseudo-holomorphic discs with cylindrical Lagrangian boundary conditions recalled in Section \ref{sec:structure}, the non-trivial components of broken discs in the boundary consist of two index $1$ discs glued along a node asymptotic to a Reeb chord. If this chord is a positive asymptotic for the index $1$ disc not containing the output puncture, that disc contributes to a map $b_j^+$; if it is a negative asymptotic, it contributes to a map $\Delta_j^+$. Hence we get the following:
\begin{lem}\label{relDelta-}
For all $1\leq d\leq k$, we have $\sum\limits_{j=1}^d\sum\limits_{n=0}^{d-j}\Delta^+_{d-j+1}\big(\id^{\otimes d-j-n}\otimes(b_j^++\Delta_j^+)\otimes\id^{\otimes n}\big)=0$
\end{lem}
Then, we also have:
\begin{lem}\label{relbSigma}
For all $1\leq d\leq k$, we have
\begin{alignat*}{1}
\sum_{j=1}^d\sum_{n=0}^{d-j}b_{d-j+1}^\Sigma\big(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n}\big)+\sum\limits_{j=1}^d\sum\limits_{i_1+\dots+i_j=d}b^+_{j}\big(\boldsymbol{b}^\Sigma_{i_j}\otimes\dots\otimes\boldsymbol{b}_{i_1}^\Sigma\big)=0
\end{alignat*}
\end{lem}
\begin{proof}
This time we have to consider the boundary of the compactification of a moduli space
$\mathcal{M}^1_{\Sigma_{0,\dots,d}}(\gamma_{d,0};\boldsymbol{\delta}_0,a_1,\dots,a_d,\boldsymbol{\delta}_d)$. Again, as recalled in Section \ref{sec:structure}, the broken discs are of two types. A broken disc can first consist of two index $0$ discs glued at a common intersection point. In this case, the one not containing the output puncture asymptotic to $\gamma_{d,0}$ contributes to $\mfm_j^0$, and the disc containing the output contributes to a banana $b_{d-j+1}^\Sigma$. The other type of possible broken disc consists of several (possibly zero!) non-trivial index $0$ components and an index $1$ disc with boundary on the negative or positive cylindrical ends, such that each index $0$ disc is connected to the index $1$ disc via a Reeb chord. Observe that the output puncture is asymptotic to a chord in the positive end, so there are two subcases:
\begin{enumerate}
\item the output puncture is contained in the index $1$ disc. In this case, this disc has boundary on the positive ends and contributes to $b_j^+$, while the index $0$ discs must then have at least one positive Reeb chord asymptotic (connecting them to the index $1$ disc), and so each of them contributes to a banana $b^\Sigma$. Note that if among the asymptotics $a_1,a_2,\dots,a_d$ there is a chord in the positive end, this chord could be an asymptotic of the index $1$ disc or of an index $0$ banana $b^\Sigma$; this is why the bold-symbol maps $\boldsymbol{b}^\Sigma$ appear in the formula.
\item the output puncture is contained in an index $0$ disc. This disc thus contributes to a map $b_{d-j+1}^\Sigma$. Then, if the index $1$ disc has boundary on the positive ends, it contributes, together with the index $0$ discs not containing the output, to $\mfm_j^+$. If the index $1$ disc has boundary on the negative ends, it contributes, together with the index $0$ discs not containing the output, to $\mfm_j^-$.
\end{enumerate}
Summing the algebraic contributions of all the different types of broken discs described above gives the relation.
\end{proof}
Now we can compute
\begin{alignat*}{1}
&\sum\limits_{j=1}^d\sum\limits_{n=0}^{d-j}\mfm^+_{d-j+1}(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n})\\
&=\sum_{j=1}^d\sum_{n=0}^{d-j}\sum_{k=1}^{d-j+1}\sum_{s=1}^k\sum_{\substack{i_1+...+i_k=d-j+1\\0\leq r=n-i_1-...-i_{s-1}\leq i_s}}\Delta^+_{k}\big(\boldsymbol{b}^\Sigma_{i_k}\otimes\dots\otimes\boldsymbol{b}^\Sigma_{i_s}(\id^{\otimes i_s-j-r}\otimes\mfm_j\otimes\id^{\otimes r})\otimes\dots\otimes\boldsymbol{b}^\Sigma_{i_1}\big)
\end{alignat*}
In this sum, we first fix the number $j$ of entries for the map $\mfm_j$, and then a partition of $d-j+1$ for the maps $\boldsymbol{b}^\Sigma$. Note that if $i_s<j$ the terms $\boldsymbol{b}^\Sigma_{i_s}(\id^{\otimes i_s-j-r}\otimes\mfm_j\otimes\id^{\otimes r})$ vanish. We could also first choose a partition of $d$ and then the number of entries for the $\mfm$ ``in the middle''. Thus, the sum above is equal to:
\begin{alignat*}{1}
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\sum_{j=1}^{i_s}\sum_{n=0}^{i_s-j}\Delta^+_{k}\big(\boldsymbol{b}^\Sigma_{i_k}\otimes\dots\otimes\boldsymbol{b}^\Sigma_{i_s-j+1}(\id^{\otimes i_s-j-n}\otimes\mfm_j\otimes\id^{\otimes n})\otimes\dots\otimes\boldsymbol{b}^\Sigma_{i_1}\big)\\
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\Delta^+_{k}\big(\boldsymbol{b}^\Sigma_{i_k}\otimes\dots\otimes\sum_{j=1}^{i_s}\sum_{n=0}^{i_s-j}\boldsymbol{b}^\Sigma_{i_s-j+1}(\id^{\otimes i_s-j-n}\otimes\mfm_j\otimes\id^{\otimes n})\otimes\dots\otimes\boldsymbol{b}^\Sigma_{i_1}\big)
\end{alignat*}
Using the definition of $\boldsymbol{b}$ and then Lemma \ref{relbSigma}, we have
\begin{alignat*}{1}
\sum_{j=1}^{i_s}\sum_{n=0}^{i_s-j}\boldsymbol{b}^\Sigma_{i_s-j+1}(\id^{\otimes i_s-j-n}\otimes\mfm_j\otimes\id^{\otimes n})&=\sum_{j=1}^{i_s}\sum_{n=0}^{i_s-j}b^\Sigma_{i_s-j+1}(\id^{\otimes i_s-j-n}\otimes\mfm_j\otimes\id^{\otimes n})+\mfm_{i_s}^+\\
&=\sum\limits_{u=1}^{i_s}\sum\limits_{t_1+\dots+t_u=i_s}b^+_{u}\big(\boldsymbol{b}^\Sigma_{t_u}\otimes\dots\otimes\boldsymbol{b}_{t_1}^\Sigma\big)+\mfm_{i_s}^+
\end{alignat*}
Given this, we rewrite
\begin{alignat*}{1}
&\sum\limits_{j=1}^d\sum\limits_{n=0}^{d-j}\mfm^+_{d-j+1}(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n})\\
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\Delta^+_{k}\big(\boldsymbol{b}^\Sigma_{i_k}\otimes\dots\otimes\sum\limits_{u=1}^{i_s}\sum\limits_{t_1+\dots+t_u=i_s}b^+_{u}\big(\boldsymbol{b}^\Sigma_{t_u}\otimes\dots\otimes\boldsymbol{b}_{t_1}^\Sigma\big)\otimes\dots\otimes\boldsymbol{b}^\Sigma_{i_1}\big)\\
&\hspace{45mm}+\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\Delta^+_{k}\big(\boldsymbol{b}^\Sigma_{i_k}\otimes\dots\otimes\mfm_{i_s}^+\otimes\dots\otimes\boldsymbol{b}^\Sigma_{i_1}\big)
\end{alignat*}
and we finally use Lemma \ref{relDelta-} to obtain
\begin{alignat*}{1}
&\sum\limits_{j=1}^d\sum\limits_{n=0}^{d-j}\mfm^+_{d-j+1}(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n})\\
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\Delta^+_{k}\big(\boldsymbol{b}^\Sigma_{i_k}\otimes\dots\otimes\sum\limits_{u=1}^{i_s}\sum\limits_{t_1+\dots+t_u=i_s}\Delta^+_{u}\big(\boldsymbol{b}^\Sigma_{t_u}\otimes\dots\otimes\boldsymbol{b}_{t_1}^\Sigma\big)\otimes\dots\otimes\boldsymbol{b}^\Sigma_{i_1}\big)\\
&\hspace{3cm}+\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\Delta^+_{k}\big(\boldsymbol{b}^\Sigma_{i_k}\otimes\dots\otimes\mfm_{i_s}^+\otimes\dots\otimes\boldsymbol{b}^\Sigma_{i_1}\big)\\
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\Delta^+_{k}\big(\boldsymbol{b}^\Sigma_{i_k}\otimes\dots\otimes\mfm_{i_s}^+\otimes\dots\otimes\boldsymbol{b}^\Sigma_{i_1}\big)\\
&\hspace{3cm}+\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\Delta^+_{k}\big(\boldsymbol{b}^\Sigma_{i_k}\otimes\dots\otimes\mfm_{i_s}^+\otimes\dots\otimes\boldsymbol{b}^\Sigma_{i_1}\big)\\
&=0
\end{alignat*}
\subsubsection{Proof of Equation \eqref{relinf0}}
This equation is obtained by describing the broken discs in the boundary of the compactification of $\mathcal{M}^1_{\Sigma_{0,...,d}}(x;\boldsymbol{\delta}_0,a_1,\boldsymbol{\delta}_1,\dots,a_d,\boldsymbol{\delta}_d)$. As in the proof of Lemma \ref{relbSigma}, there are different types of broken discs, depending on whether or not they contain an index $1$ component, but the total algebraic contribution of all of them gives Relation \eqref{relinf0}.
\subsubsection{Proof of Equation \eqref{relinf-}}\label{proofrelinf-}
Finally, to get Equation \eqref{relinf-}, we study the broken discs in the boundary of the compactification of the moduli spaces
\begin{alignat*}{1}
\widetilde{\mathcal{M}^2}_{\mathbb R\times\Lambda_{0,\dots,d}^-}(\gamma_{d,0};\boldsymbol{\delta}_0,a_1,\dots,a_d,\boldsymbol{\delta}_d)
\end{alignat*}
and
\begin{alignat*}{1}
\mathcal{M}^1_{\Sigma_{0,\dots,d}}(\gamma_{0,d};\boldsymbol{\delta}_0,a_1,\dots,a_d,\boldsymbol{\delta}_d)
\end{alignat*}
This gives us respectively the following lemmas:
\begin{lem}\label{relb-}
For all $d\geq1$, we have $\sum\limits_{j=1}^d\sum\limits_{n=0}^{d-j}b^-_{d-j+1}\big(\id^{\otimes d-j-n}\otimes(b_j^-+\Delta_j^-)\otimes\id^{\otimes n}\big)=0$
\end{lem}
and
\begin{lem}\label{relDeltaSigma}
For all $d\geq1$, we have
\begin{alignat*}{1}
\sum\limits_{j=1}^d\sum\limits_{n=0}^{d-j}\Delta^\Sigma_{d-j+1}\big(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n}\big)+\sum\limits_{j=1}^d\sum\limits_{i_1+\dots+i_j=d}\Delta^-_{j}\big(\boldsymbol{\Delta}^\Sigma_{i_j}\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^\Sigma\big)=0
\end{alignat*}
\end{lem}
We can now prove Equation \eqref{relinf-} for $d\geq1$ in a direct way:
\begin{alignat*}{1}
&\sum_{j=1}^d\sum_{n=0}^{d-j}\mfm^-_{d-j+1}\big(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n}\big)\\
&=\sum_{j=1}^d\sum_{n=0}^{d-j}\sum_{k=1}^{d-j+1}\sum_{s=1}^k\sum_{\substack{i_1+...+i_k=d-j+1\\0\leq r=n-i_1-...-i_{s-1}\leq i_s}}b^-_{k}\big(\boldsymbol{\Delta}^\Sigma_{i_k}\otimes\dots\otimes\boldsymbol{\Delta}^\Sigma_{i_s}(\id^{\otimes i_s-j-r}\otimes\mfm_j\otimes\id^{\otimes r})\otimes\dots\otimes\boldsymbol{\Delta}^\Sigma_{i_1}\big)\\
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\sum_{j=1}^{i_s}\sum_{n=0}^{i_s-j}b^-_{k}\big(\boldsymbol{\Delta}^\Sigma_{i_k}\otimes\dots\otimes\boldsymbol{\Delta}^\Sigma_{i_s-j+1}(\id^{\otimes i_s-j-n}\otimes\mfm_j\otimes\id^{\otimes n})\otimes\dots\otimes\boldsymbol{\Delta}^\Sigma_{i_1}\big)\\
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^kb^-_{k}\big(\boldsymbol{\Delta}^\Sigma_{i_k}\otimes\dots\otimes\sum_{j=1}^{i_s}\sum_{n=0}^{i_s-j}\boldsymbol{\Delta}^\Sigma_{i_s-j+1}(\id^{\otimes i_s-j-n}\otimes\mfm_j\otimes\id^{\otimes n})\otimes\dots\otimes\boldsymbol{\Delta}^\Sigma_{i_1}\big)
\end{alignat*}
Observe that using the definition of $\boldsymbol{\Delta}$, then adding the vanishing term $\Delta_1^\Sigma\circ\mfm_{i_s}^-$, and then applying Lemma \ref{relDeltaSigma}, we have the following consecutive equalities:
\begin{alignat*}{1}
\sum_{j=1}^{i_s}&\sum_{n=0}^{i_s-j}\boldsymbol{\Delta}^\Sigma_{i_s-j+1}(\id^{\otimes i_s-j-n}\otimes\mfm_j\otimes\id^{\otimes n})\\
&=\sum_{j=1}^{i_s-1}\sum_{n=0}^{i_s-j}\Delta^\Sigma_{i_s-j+1}(\id^{\otimes i_s-j-n}\otimes\mfm_j\otimes\id^{\otimes n})+\Delta_1^\Sigma\circ\mfm_{i_s}^++\Delta_1^\Sigma\circ\mfm_{i_s}^0+\mfm_{i_s}^-\\
&=\sum_{j=1}^{i_s}\sum_{n=0}^{i_s-j}\Delta^\Sigma_{i_s-j+1}(\id^{\otimes i_s-j-n}\otimes\mfm_j\otimes\id^{\otimes n})+\mfm_{i_s}^-\\
&=\sum\limits_{u=1}^{i_s}\sum\limits_{t_1+\dots+t_u=i_s}\Delta^-_{u}\big(\boldsymbol{\Delta}^\Sigma_{t_u}\otimes\dots\otimes\boldsymbol{\Delta}_{t_1}^\Sigma\big)+\mfm_{i_s}^-
\end{alignat*}
If we plug it into the expression above, we get:
\begin{alignat*}{1}
&\sum_{j=1}^d\sum_{n=0}^{d-j}\mfm^-_{d-j+1}\big(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n}\big)\\
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^kb^-_{k}\big(\boldsymbol{\Delta}^\Sigma_{i_k}\otimes\dots\otimes\big(\sum\limits_{u=1}^{i_s}\sum\limits_{t_1+\dots+t_u=i_s}\Delta^-_{u}\big(\boldsymbol{\Delta}^\Sigma_{t_u}\otimes\dots\otimes\boldsymbol{\Delta}_{t_1}^\Sigma\big)\big)\otimes\dots\otimes\boldsymbol{\Delta}^\Sigma_{i_1}\big)\\
&\hspace{3cm}+\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^kb^-_{k}\big(\boldsymbol{\Delta}^\Sigma_{i_k}\otimes\dots\otimes\mfm_{i_s}^-\otimes\dots\otimes\boldsymbol{\Delta}^\Sigma_{i_1}\big)
\end{alignat*}
Finally we apply Lemma \ref{relb-} and we obtain
\begin{alignat*}{1}
&\sum_{j=1}^d\sum_{n=0}^{d-j}\mfm^-_{d-j+1}\big(\id^{\otimes d-j-n}\otimes\mfm_j\otimes\id^{\otimes n}\big)\\
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^kb^-_{k}\big(\boldsymbol{\Delta}^\Sigma_{i_k}\otimes\dots\otimes\big(\sum\limits_{u=1}^{i_s}\sum\limits_{t_1+\dots+t_u=i_s}b^-_{u}\big(\boldsymbol{\Delta}^\Sigma_{t_u}\otimes\dots\otimes\boldsymbol{\Delta}_{t_1}^\Sigma\big)\big)\otimes\dots\otimes\boldsymbol{\Delta}^\Sigma_{i_1}\big)\\
&\hspace{3cm}+\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^kb^-_{k}\big(\boldsymbol{\Delta}^\Sigma_{i_k}\otimes\dots\otimes\mfm_{i_s}^-\otimes\dots\otimes\boldsymbol{\Delta}^\Sigma_{i_1}\big)\\
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^kb^-_{k}\big(\boldsymbol{\Delta}^\Sigma_{i_k}\otimes\dots\otimes\mfm_{i_s}^-\otimes\dots\otimes\boldsymbol{\Delta}^\Sigma_{i_1}\big)\\
&\hspace{3cm}+\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^kb^-_{k}\big(\boldsymbol{\Delta}^\Sigma_{i_k}\otimes\dots\otimes\mfm_{i_s}^-\otimes\dots\otimes\boldsymbol{\Delta}^\Sigma_{i_1}\big)=0
\end{alignat*}
\subsection{Fukaya category of Lagrangian cobordisms}\label{sec:Fuk}
We define an $A_\infty$-category $\Fuk(\mathbb R\times Y)$ whose objects are exact Lagrangian cobordisms whose negative ends are cylinders over Legendrians admitting augmentations. We define this category by localization, in the same spirit as the definition of the wrapped Fukaya category of a Liouville sector in \cite{GPS}, to which we refer for details about quotients and localization, as well as to \cite{LO}.
\begin{defi}
A Hamiltonian isotopy $\varphi_h^s$ of $\mathbb R\times Y$ is called \textit{cylindrical at infinity} if there exists $R>0$ such that $\varphi_h^s$ does not depend on the symplectization coordinate $t$ on $(-\infty,-R)\times Y$ and $(R,\infty)\times Y$.
\end{defi}
Let $E$ be a countable set of exact Lagrangian cobordisms in $\mathbb R\times Y$, with negative cylindrical ends on Legendrian submanifolds of $Y$ admitting an augmentation. Assume that any exact Lagrangian cobordism $\Lambda^-\prec_\Sigma\Lambda^+$ such that $\Lambda^-$ admits an augmentation is isotopic to one in $E$ through a cylindrical at infinity Hamiltonian isotopy.
For each cobordism $\Lambda^-\prec_\Sigma\Lambda^+$ in $E$, we choose a sequence $\Sigma^\bullet$ of cobordisms
\begin{alignat*}{1}
\Sigma^\bullet=(\Sigma^{(0)},\Sigma^{(1)},\Sigma^{(2)},\dots)
\end{alignat*}
as follows. First $\Sigma^{(0)}=\Sigma$, and then we need to make several choices:
\begin{enumerate}
\item a sequence $\{\eta_i\}_{i\geq1}$ of positive real numbers such that $\sum\limits_{j>0}\eta_j$ is strictly smaller than the length of the shortest Reeb chord of $\Lambda^+\cup\Lambda^-$, and denote $\tau_i=\sum\limits_{j=1}^i\eta_j$,
\item given that $\Sigma$ is cylindrical outside $[-T,T]\times Y$, and given $N>0$, we choose Hamiltonians $H_i:\mathbb R\times Y\to\mathbb R$, $i\geq1$, which are small perturbations of $h_{T,N}$ (see Section \ref{sec:unit}), and set $\Sigma^{(i)}=\varphi_{H_i}^{\tau_i}(\Sigma)$ such that $\Sigma^{(i)}$ is the graph of $dF_i$ in a standard neighborhood of the $0$-section in $T^*\Sigma$, where $F_i:\Sigma\to\mathbb R$ is a Morse function satisfying the following:
\begin{enumerate}
\item on $\Sigma\cap\big([T+N,\infty)\times Y\big)$, resp. $\Sigma\cap\big((-\infty,-T-N]\times Y\big)$, $F_i$ is equal to $e^t(f_i^+-\tau_i)$, resp. $e^t(f_i^--\tau_i)$, where $f_i^\pm:\Lambda^\pm\to\mathbb R$ are Morse functions such that the $\mathcal{C}^0$-norm of $f_i^\pm$ is strictly smaller than $\min\{\eta_i,\eta_{i+1}\}/2$,
\item the functions $F_i-F_j$, $f_i^\pm-f_j^\pm$ are Morse for $i\neq j$,
\item the functions $f_i^-$ and $f_i^\pm-f_j^\pm$ admit a unique minimum on each connected component, while the functions $F_i$ and $F_i-F_j$ admit a unique minimum on each filling connected component and no minimum on each connected component of $\Sigma$ with a non-empty negative end.
\end{enumerate}
\end{enumerate}
We call such a sequence of cobordisms \textit{cofinal}. Note that an augmentation of $\Lambda^-$ canonically induces an augmentation of the negative end of $\Sigma^{(i)}$ for every $i\geq1$.
The construction is inductive and the different choices above are made so that for any $(d+1)$-tuple of cobordisms $\Sigma_0,\Sigma_1,\dots,\Sigma_d$ in $E$ (not necessarily distinct), and any strictly increasing sequence of integers $i_0<i_1<\dots<i_d$, the cobordisms $\Sigma_0^{(i_0)},\Sigma_1^{(i_1)},\dots,\Sigma_d^{(i_d)}$ are pairwise transverse.
Let us construct now a strictly unital $A_\infty$-category $\mathcal{O}$ as follows:
\begin{itemize}
\item Obj($\mathcal{O}$): pairs $(\Sigma^{(i)},\varepsilon^-)$ where $\Sigma\in E$ is an exact Lagrangian cobordism from $\Lambda^-$ to $\Lambda^+$, and $\varepsilon^-$ is an augmentation of $\Lambda^-$,
\item $\hom_\mathcal{O}\big((\Sigma_0^{(i)},\varepsilon_0^-),(\Sigma_1^{(j)},\varepsilon_1^-)\big)=\left\{
\begin{array}{ll}
\Cth_+\big(\Sigma_0^{(i)},\Sigma_1^{(j)}\big)&\mbox{ if }i<j\\
\mathbb{Z}_2e_{\varepsilon_0^-}^{(i)}&\mbox{ if }\Sigma_0=\Sigma_1, i=j \mbox{ and } \varepsilon_0^-=\varepsilon_1^-\\
0&\mbox{otherwise}
\end{array}\right.$
\end{itemize}
where $e_{\varepsilon_0^-}^{(i)}$ is a formal degree $0$ element.
The $A_\infty$-operations are given by the maps defined in Section \ref{A-inf maps} for each $(d+1)$-tuple of cobordisms $\Sigma_0^{(i_0)},\Sigma_1^{(i_1)},\dots,\Sigma_d^{(i_d)}$ with $i_0<i_1<\dots<i_d$, i.e. for such a tuple we have a map
\begin{alignat*}{1}
\mfm_d:\Cth_+(\Sigma_{d-1}^{(i_{d-1})},\Sigma_d^{(i_d)})\otimes\dots\otimes\Cth_+(\Sigma_0^{(i_0)},\Sigma_1^{(i_1)})\to\Cth_+(\Sigma_0^{(i_0)},\Sigma_d^{(i_d)})
\end{alignat*}
These maps extend to maps defined for any $(d+1)$-tuple $\Sigma_0^{(i_0)},\Sigma_1^{(i_1)},\dots,\Sigma_d^{(i_d)}$ by requiring that the elements $e_{\varepsilon_j^-}^{(i)}\in\hom_\mathcal{O}((\Sigma_j^{(i)},\varepsilon_j^-),(\Sigma_j^{(i)},\varepsilon_j^-))$ behave as strict units.
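Concretely, with the usual convention for strict units (and since we work over $\mathbb{Z}_2$, no signs appear), this means
\begin{alignat*}{1}
\mfm_1\big(e_{\varepsilon_j^-}^{(i)}\big)=0,\qquad\mfm_2\big(a,e_{\varepsilon_j^-}^{(i)}\big)=\mfm_2\big(e_{\varepsilon_j^-}^{(i)},a\big)=a,\qquad\mfm_d\big(\dots,e_{\varepsilon_j^-}^{(i)},\dots\big)=0\text{ for }d\geq3,
\end{alignat*}
for every composable morphism $a$ and every position of the unit among the inputs.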
We finally define the Fukaya category $\Fuk(\mathbb R\times Y)$ of Lagrangian cobordisms in $\mathbb R\times Y$ by localizing $\mathcal{O}$ at the set of continuation elements, as follows.
Consider $\Sigma\in E$ together with an augmentation $\varepsilon^-$ of $\Lambda^-$. For all $i<j$, there is a continuation element $e_{\Sigma^{(i)},\Sigma^{(j)}}\in\hom_\mathcal{O}\big((\Sigma^{(i)},\varepsilon^-),(\Sigma^{(j)},\varepsilon^-)\big)$ as described in Section \ref{sec:unit}, which is a cycle in $\mathcal{O}$.
Let $Tw(\mathcal{O})$ denote the $A_\infty$-category of twisted complexes of $\mathcal{O}$ and $\mathcal{C}$ the full subcategory of $Tw(\mathcal{O})$ generated by cones of the continuation elements. We define $\Fuk(\mathbb R\times Y):=\mathcal{O}[\mathcal{C}^{-1}]$ to be the image of $\mathcal{O}$ in the quotient $Tw(\mathcal{O})/\mathcal{C}$.
Defined in this way, the category $\Fuk(\mathbb R\times Y)$ depends on various choices, namely:
\begin{enumerate}
\item the choice, for each $\Sigma$ in $E$, of a cofinal sequence $\Sigma^{\bullet}=(\Sigma^{(0)},\Sigma^{(1)},\dots)$,
\item the choice of the countable set $E$ of representatives of Hamiltonian isotopy classes of exact Lagrangian cobordisms with negative end admitting an augmentation.
\end{enumerate}
The fact that the quasi-equivalence class of the category does not depend on the choice of a cofinal sequence for each element in $E$ is purely algebraic.
Assume $\Sigma^{\widetilde\bullet}$ is a cofinal sequence for $\Sigma$ which is a subsequence of a bigger cofinal sequence $\Sigma^\bullet$. Denote by $\widetilde{\mathcal{O}}$ the category constructed using the cofinal sequence $\Sigma^{\widetilde\bullet}$, and by $\mathcal{O}$ the one constructed using $\Sigma^{\bullet}$. The inclusion functor $\widetilde{\mathcal{O}}\to\mathcal{O}$ is full and faithful, and if $\widetilde{\mathcal{C}}\subset Tw\widetilde{\mathcal{O}}$ denotes the full subcategory generated by cones of continuation elements, one gets a cohomologically full and faithful functor $\widetilde{\mathcal{O}}[\widetilde{\mathcal{C}}^{-1}]\to\mathcal{O}[\mathcal{C}^{-1}]$. Moreover, the continuation elements become quasi-isomorphisms in $\mathcal{O}[\mathcal{C}^{-1}]$, so this functor is a quasi-equivalence.
Now if $\Sigma^{\bullet,1}$ and $\Sigma^{\bullet,2}$ are two cofinal sequences for $\Sigma$, then one can find a cofinal sequence $\Sigma^{\widetilde{\bullet}}$ such that $\Sigma^{\bullet,i}\cup\Sigma^{\widetilde{\bullet}}$, $i=1,2$, are also cofinal sequences. As $\Sigma^{\widetilde{\bullet}}$ and $\Sigma^{\bullet,i}$ are subsequences of $\Sigma^{\bullet,i}\cup\Sigma^{\widetilde{\bullet}}$, the Fukaya category constructed using the cofinal sequence $\Sigma^{\widetilde{\bullet}}$ is quasi-equivalent to the one using $\Sigma^{\bullet,i}\cup\Sigma^{\widetilde{\bullet}}$ which is quasi-equivalent to the one using $\Sigma^{\bullet,i}$.
Then, the fact that the category does not depend (up to quasi-equivalence) on the choice of representatives of cylindrical at infinity Hamiltonian isotopy classes follows from the invariance result:
\begin{prop}
Let $\Sigma_0$ be an exact Lagrangian cobordism and $(\varphi^s_h)_{s\in[0,1]}$ a cylindrical at infinity Hamiltonian isotopy such that $\Sigma_0$ and $\Sigma_1:=\varphi_h^1(\Sigma_0)$ are transverse. Then, for any exact Lagrangian cobordism $T$ transverse to $\Sigma_0$ and $\Sigma_1$, the complexes $\Cth_+(\Sigma_0,T)$ and $\Cth_+(\Sigma_1,T)$ are homotopy equivalent.
\end{prop}
\begin{proof}
All the ingredients needed to prove this proposition have already appeared in Section \ref{sec:acy}.
Observe first that if $\Sigma_0$ is a cobordism from $\Lambda_0^-$ to $\Lambda_0^+$ then $\Sigma_1$ is a cobordism from $\Lambda_1^-$ to $\Lambda_1^+$ with $\Lambda_1^\pm$ Legendrian isotopic to $\Lambda_0^\pm$.
The isotopy from $\Sigma_0$ to $\Sigma_1$ can be decomposed as a cylindrical at infinity isotopy of $\Sigma_0$ giving a cobordism $\widetilde{\Sigma}_0$ from $\Lambda_1^-$ to $\Lambda_1^+$, followed by a compactly supported Hamiltonian isotopy from $\widetilde{\Sigma}_0$ to $\Sigma_1$. The proof of the proposition now goes as follows.
Start from $\Sigma_0$ and wrap its positive, resp. negative, end in the positive, resp. negative, Reeb direction to obtain the cobordism $\Sigma_0^s$, $s\geq0$, having cylindrical ends over $\Lambda_{0,-s}^-$ and $\Lambda_{0,s}^+$, where $\Lambda_{0,\pm s}^\pm=\Lambda_0^\pm\pm s\partial_z$. Take $s$ big enough so that the Cthulhu complex $\Cth_+(\Sigma_0^s,T)$ has only intersection point generators.
Denote also $\Lambda_{1,a}^+:=\Lambda_1^++a\partial_z$ and $\Lambda_{1,-b}^-:=\Lambda_1^--b\partial_z$ for $a,b\geq0$ so that $\Lambda_{1,a}^+$ lies entirely above $\Lambda_T^+$ and $\Lambda_{1,-b}^-$ lies entirely below $\Lambda_T^-$.
Denote $C^+$ a concordance from $\Lambda_{0,s}^+$ to $\Lambda_{1,a}^+$ and $C^-$ a concordance from $\Lambda_{1,-b}^-$ to $\Lambda_{0,-s}^-$. We can assume that $C^+\cap(\mathbb R\times\Lambda_T^+)=C^-\cap(\mathbb R\times\Lambda_T^-)=\emptyset$.
Concatenating the concordances $C^-$ and $C^+$ with $\Sigma_0^s$ gives $C^-\odot\Sigma_0^s\odot C^+$, which is an exact Lagrangian cobordism from $\Lambda_{1,-b}^-$ to $\Lambda_{1,a}^+$ satisfying $\Cth_+(C^-\odot\Sigma_0^s\odot C^+,T)=\Cth_+(\Sigma_0^s,T)$ by construction, where this is an equality of complexes. Finally, wrap the ends of $C^-\odot\Sigma_0^s\odot C^+$ in such a way that it \textquotedblleft translates back\textquotedblright\, $\Lambda_{1,a}^+$ to $\Lambda_1^+$ and $\Lambda_{1,-b}^-$ to $\Lambda_1^-$, to obtain a cobordism $\widetilde{\Sigma}_0$ from $\Lambda_1^-$ to $\Lambda_1^+$, Hamiltonian isotopic to $\Sigma_1$ by a compactly supported Hamiltonian isotopy. Invariance of the Cthulhu complex under wrapping of the ends and under compactly supported Hamiltonian isotopy ends the proof.
\end{proof}
\section{Higher order maps in the concatenation}\label{sec:higher_conc}
In the case of a $(d+1)$-tuple of concatenated cobordisms $(V_0\odot W_0,\dots,V_d\odot W_d)$, one can also define higher order maps $\mfm_d^{V\odot W}$, which recover the maps $\mfm_d^\Sigma$ defined in the previous section in the case of the concatenation of a cobordism with a trivial cylinder.
We define recursively, for $d\geq1$, the maps
\begin{alignat*}{1}
&\boldsymbol{\Delta}_d^W:\Cth_+(V_{d-1}\odot W_{d-1},V_d\odot W_d)\otimes\dots\otimes\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(V_0,V_d)\\
&\boldsymbol{b}_d^V:\Cth_+(V_{d-1}\odot W_{d-1},V_d\odot W_d)\otimes\dots\otimes\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(W_0,W_d)
\end{alignat*}
as follows. First, $\boldsymbol{b}_1^V$ and $\boldsymbol{\Delta}_1^W$ are the transfer maps from Section \ref{sec:conc}, and then for $d\geq2$ one sets:
\begin{alignat*}{1}
&\boldsymbol{\Delta}_d^W=\sum_{s=2}^d\sum_{\substack{1\leq i_1,\dots,i_s\\i_1+\dots+i_s=d}}\Delta_s^W\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\\
&\boldsymbol{b}_d^V=\sum_{s=1}^d\sum_{\substack{1\leq i_1,\dots,i_s\\i_1+\dots+i_s=d}}b_s^V\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)
\end{alignat*}
where $\Delta_s^W$ and $b_s^V$ are the maps from Section \ref{A-inf maps}. Observe that the maps $\boldsymbol{b}_2^V$ and $\boldsymbol{\Delta}_2^W$ already appeared in Section \ref{sec:functoriality}.
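To illustrate the recursion, for $d=2$ the definitions above unwind to
\begin{alignat*}{1}
\boldsymbol{\Delta}_2^W=\Delta_2^W\big(\boldsymbol{b}_1^V\otimes\boldsymbol{b}_1^V\big),\qquad\boldsymbol{b}_2^V=b_1^V\circ\boldsymbol{\Delta}_2^W+b_2^V\big(\boldsymbol{\Delta}_1^W\otimes\boldsymbol{\Delta}_1^W\big).
\end{alignat*}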
Given this, we define
\begin{alignat*}{1}
\mfm_d^{V\odot W}:\Cth_+(V_{d-1}\odot W_{d-1},V_d\odot W_d)\otimes\dots\otimes\Cth_+(V_0\odot W_0,V_1\odot W_1)\to\Cth_+(V_0\odot W_0,V_d\odot W_d)
\end{alignat*}
by
\begin{alignat*}{1}
\mfm_d^{V\odot W}=\sum_{s=1}^d\sum_{\substack{1\leq i_1,\dots,i_s\\i_1+\dots+i_s=d}}\mfm_s^{W,+0}\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)+\mfm_s^{V,0-}\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)
\end{alignat*}
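In particular, for $d=1$ this definition reads
\begin{alignat*}{1}
\mfm_1^{V\odot W}=\mfm_1^{W,+0}\circ\boldsymbol{b}_1^V+\mfm_1^{V,0-}\circ\boldsymbol{\Delta}_1^W,
\end{alignat*}
which expresses the differential of the concatenation in terms of the two transfer maps.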
We will first prove that the maps $\boldsymbol{\Delta}_j^W$ and $\boldsymbol{b}_j^V$ satisfy the $A_\infty$-functor equations, and then we will prove that the maps $\mfm_j^{V\odot W}$ satisfy the $A_\infty$ equations. We start by proving the following:
\begin{lem}\label{lem:deltabanane}
For all $d\geq1$,
\begin{alignat*}{1}
\sum_{j=1}^d\sum_{i_1+\dots+i_j=d}\boldsymbol{b}_j^V\big(\boldsymbol{\Delta}_{i_j}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\boldsymbol{\Delta}_j^W\big(\boldsymbol{b}_{i_j}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)=0
\end{alignat*}
\end{lem}
\begin{proof}
This holds by definition of the maps. Observe that we have already made use of the case $d=1$ in Section \ref{sec:LCconc}. For $d\geq2$, one has
\begin{alignat*}{1}
&\sum_{j=1}^d\sum_{i_1+\dots+i_j=d}\boldsymbol{b}_j^V\big(\boldsymbol{\Delta}_{i_j}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\boldsymbol{\Delta}_j^W\big(\boldsymbol{b}_{i_j}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\\
&=\boldsymbol{b}_1^V\circ\boldsymbol{\Delta}_d^W+\boldsymbol{\Delta}_1^W\circ\boldsymbol{b}_d^V+\sum_{j=2}^d\sum_{i_1+\dots+i_j=d}\boldsymbol{b}_j^V\big(\boldsymbol{\Delta}_{i_j}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\boldsymbol{\Delta}_j^W\big(\boldsymbol{b}_{i_j}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)
\end{alignat*}
Observe that $\boldsymbol{\Delta}_{i_j}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W$ takes values in $\Cth_+(V_{d-i_j},V_d)\otimes\dots\otimes\Cth_+(V_0,V_{i_1})$ and $\boldsymbol{b}_{i_j}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V$ takes values in $\Cth_+(W_{d-i_j},W_d)\otimes\dots\otimes\Cth_+(W_0,W_{i_1})$, hence the maps $\boldsymbol{b}_j^V$ and $\boldsymbol{\Delta}_j^W$ appearing here are the ones corresponding to a single cobordism, and not to the concatenation, as defined in Section \ref{A-inf maps}. So the sum above equals
\begin{alignat*}{1}
&\boldsymbol{\Delta}_d^W+b_1^V\circ\boldsymbol{\Delta}_d^W+\boldsymbol{b}_d^V+\sum_{j=2}^d\sum_{i_1+\dots+i_j=d}b_j^V\big(\boldsymbol{\Delta}_{i_j}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\Delta_j^W\big(\boldsymbol{b}_{i_j}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\\
&=\boldsymbol{\Delta}_d^W+\boldsymbol{b}_d^V+\sum_{j=1}^d\sum_{i_1+\dots+i_j=d}b_j^V\big(\boldsymbol{\Delta}_{i_j}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\Delta_j^W\big(\boldsymbol{b}_{i_j}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)
\end{alignat*}
where we note that in the sum on the right we have $\Delta_1^W\circ\boldsymbol{b}_d^V=0$. Then, this gives $0$ by definition of $\boldsymbol{b}_d^V$ and $\boldsymbol{\Delta}_d^W$.
\end{proof}
Now we can prove:
\begin{lem}\label{lem:functoriality} For all $d\geq1$
\begin{alignat*}{1}
&\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\mfm_s^V\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\sum_{j=1}^d\sum_{n=0}^{d-j}\boldsymbol{\Delta}_{d-j+1}^W\big(\id\otimes\dots\otimes\id\otimes\mfm_j^{V\odot W}\otimes\underbrace{\id\otimes\dots\otimes\id}_{n}\big)=0
\end{alignat*}
and
\begin{alignat*}{1}
&\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\mfm_s^W\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)+\sum_{j=1}^d\sum_{n=0}^{d-j}\boldsymbol{b}_{d-j+1}^V\big(\id\otimes\dots\otimes\id\otimes\mfm_j^{V\odot W}\otimes\underbrace{\id\otimes\dots\otimes\id}_{n}\big)=0
\end{alignat*}
\end{lem}
\begin{proof}
We prove it by induction on $d$. For $d=1$, the relations above mean that $\boldsymbol{b}_1^V$ and $\boldsymbol{\Delta}_1^W$ are chain maps, which is the content of Propositions \ref{prop:Phi} and \ref{prop:Delta}. Note that we have also already proved the case $d=2$ in Section \ref{sec:functoriality}.
For $d\geq2$, we have
\begin{alignat*}{1}
&\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\mfm_s^V\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\sum_{j=1}^d\sum_{n=0}^{d-j}\boldsymbol{\Delta}_{d-j+1}^W\big(\id\otimes\dots\otimes\id\otimes\mfm_j^{V\odot W}\otimes\underbrace{\id\otimes\dots\otimes\id}_{n}\big)\\
&=\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\mfm_s^V\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\boldsymbol{\Delta}_1^W\circ\mfm_d^{V\odot W}\\
&\hspace{5mm}+\sum_{j=1}^{d-1}\sum_{n=0}^{d-j}\boldsymbol{\Delta}_{d-j+1}^W\big(\id\otimes\dots\otimes\id\otimes\mfm_j^{V\odot W}\otimes\underbrace{\id\otimes\dots\otimes\id}_{n}\big)
\end{alignat*}
but by definition of $\mfm_d^{V\odot W}$:
\begin{alignat*}{1}
&\boldsymbol{\Delta}_1^W\circ\mfm_d^{V\odot W}\\
&=\boldsymbol{\Delta}_1^W\circ\Big(\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\mfm_s^{W,+0}\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)+\mfm_s^{V,0-}\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)\Big)\\
&=\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\Delta_1^W\circ\mfm_s^{W,+0}\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)+\mfm_s^{V,0-}\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)
\end{alignat*}
So we get
\begin{alignat*}{1}
&\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\mfm_s^V\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\sum_{j=1}^d\sum_{n=0}^{d-j}\boldsymbol{\Delta}_{d-j+1}^W\big(\id\otimes\dots\otimes\id\otimes\mfm_j^{V\odot W}\otimes\id\otimes\dots\otimes\id\big)\\
&=\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\mfm_s^{V,+}\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\Delta_1^W\circ\mfm_s^{W,+0}\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\\
&\hspace{5mm}+\sum_{j=1}^{d-1}\sum_{n=0}^{d-j}\boldsymbol{\Delta}_{d-j+1}^W\big(\id\otimes\dots\otimes\id\otimes\mfm_j^{V\odot W}\otimes\id\otimes\dots\otimes\id\big)
\end{alignat*}
Then one has
\begin{alignat*}{1}
&\sum_{j=1}^{d-1}\sum_{n=0}^{d-j}\boldsymbol{\Delta}_{d-j+1}^W\big(\id\otimes\dots\otimes\id\otimes\mfm_j^{V\odot W}\otimes\id\otimes\dots\otimes\id\big)\\
&=\sum_{j=1}^{d-1}\sum_{n=0}^{d-j}\sum_{s=2}^{d-j+1}\sum_{i_1+\dots+i_s=d-j+1}\Delta_{s}^W\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\big(\id\otimes\dots\otimes\id\otimes\mfm_j^{V\odot W}\otimes\id\otimes\dots\otimes\id\big)\\
&=\sum_{j=1}^{d-1}\sum_{n=0}^{d-j}\sum_{s=2}^{d-j+1}\sum_{i_1+\dots+i_s=d-j+1}\sum_{l=1}^s\Delta_{s}^W\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\boldsymbol{b}_{i_l}^V(\id\otimes\dots\otimes\mfm_j^{V\odot W}\otimes\dots\otimes\id)\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\\
&=\sum_{s=2}^{d}\sum_{i_1+\dots+i_s=d}\sum_{l=1}^s\sum_{j=1}^{i_l}\sum_{n=0}^{i_l-j}\Delta_{s}^W\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\boldsymbol{b}_{i_l-j+1}^V(\id\otimes\dots\otimes\mfm_j^{V\odot W}\otimes\dots\otimes\id)\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\\
&=\sum_{s=2}^{d}\sum_{i_1+\dots+i_s=d}\sum_{l=1}^s\Delta_{s}^W\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\sum_{j=1}^{i_l}\sum_{n=0}^{i_l-j}\boldsymbol{b}_{i_l-j+1}^V(\id\otimes\dots\otimes\mfm_j^{V\odot W}\otimes\dots\otimes\id)\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)
\end{alignat*}
Observe that $i_l\leq d-1$ so by recursion we have that
\begin{alignat*}{1}
&\sum_{j=1}^{d-1}\sum_{n=0}^{d-j}\boldsymbol{\Delta}_{d-j+1}^W\big(\id\otimes\dots\otimes\id\otimes\mfm_j^{V\odot W}\otimes\id\otimes\dots\otimes\id\big)\\
&=\sum_{s=2}^{d}\sum_{i_1+\dots+i_s=d}\sum_{l=1}^s\Delta_{s}^W\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\sum_{u=1}^{i_l}\sum_{t_1+\dots+t_u=i_l}\mfm_u^W(\boldsymbol{b}_{t_u}^V\otimes\dots\otimes\boldsymbol{b}_{t_1}^V)\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)
\end{alignat*}
So we get
\begin{alignat*}{1}
&\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\mfm_s^V\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\sum_{j=1}^d\sum_{n=0}^{d-j}\boldsymbol{\Delta}_{d-j+1}^W\big(\id\otimes\dots\otimes\id\otimes\mfm_j^{V\odot W}\otimes\id\otimes\dots\otimes\id\big)\\
&=\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\mfm_s^{V,+}\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)+\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\Delta_1^W\circ\mfm_s^{W,+0}\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\\
&\hspace{5mm}+\sum_{s=2}^{d}\sum_{i_1+\dots+i_s=d}\sum_{l=1}^s\Delta_{s}^W\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\sum_{u=1}^{i_l}\sum_{t_1+\dots+t_u=i_l}\mfm_u^W(\boldsymbol{b}_{t_u}^V\otimes\dots\otimes\boldsymbol{b}_{t_1}^V)\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\\
&=\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\mfm_s^{V,+}\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)\\
&\hspace{5mm}+\sum_{s=1}^{d}\sum_{i_1+\dots+i_s=d}\sum_{l=1}^s\Delta_{s}^W\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\sum_{u=1}^{i_l}\sum_{t_1+\dots+t_u=i_l}\mfm_u^W(\boldsymbol{b}_{t_u}^V\otimes\dots\otimes\boldsymbol{b}_{t_1}^V)\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)
\end{alignat*}
where we have used the fact that $\Delta_1^W\circ\mfm_s^{W,-}=0$.
By definition of $\mfm_s^{V,+}$, it gives
\begin{alignat*}{1}
&=\sum_{s=1}^d\sum_{i_1+\dots+i_s=d}\sum_{u=1}^s\sum_{t_1+\dots+t_u=s}\Delta_u^\Lambda\big(\boldsymbol{b}_{t_u}^V\otimes\dots\otimes\boldsymbol{b}_{t_1}^V\big)\big(\boldsymbol{\Delta}_{i_s}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)\\
&\hspace{5mm}+\sum_{s=1}^{d}\sum_{i_1+\dots+i_s=d}\sum_{l=1}^s\Delta_{s}^W\big(\boldsymbol{b}_{i_s}^V\otimes\dots\otimes\sum_{u=1}^{i_l}\sum_{t_1+\dots+t_u=i_l}\mfm_u^W(\boldsymbol{b}_{t_u}^V\otimes\dots\otimes\boldsymbol{b}_{t_1}^V)\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)
\end{alignat*}
and finally, using Lemma \ref{relDeltaSigma} which states in this case that
\begin{alignat*}{1}
\sum\limits_{j=1}^d\sum\limits_{n=0}^{d-j}\Delta^W_{d-j+1}\big(\id^{\otimes d-j-n}\otimes\mfm_j^W\otimes\id^{\otimes n}\big)+\sum\limits_{j=1}^d\sum\limits_{i_1+\dots+i_j=d}\Delta^\Lambda_{j}\big(\boldsymbol{\Delta}^W_{i_j}\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)=0
\end{alignat*}
and Lemma \ref{lem:deltabanane}, one obtains that the sum vanishes, and the maps $\boldsymbol{\Delta}_j^W$ satisfy the $A_\infty$-functor equations.
One proves analogously that the maps $\boldsymbol{b}_j^V$ satisfy the $A_\infty$-functor equations.
\end{proof}
\begin{prop}
The maps $\mfm_d^{V\odot W}$ satisfy the $A_\infty$-equations.
\end{prop}
\begin{proof} For all $d\geq1$, one has
\begin{alignat*}{1}
&\sum_{j=1}^d\sum_{n=0}^{d-j}\mfm_{d-j+1}^{V\odot W}\big(\id\otimes\dots\otimes\id\otimes\mfm_j^{V\odot W}\otimes\overbrace{\id\otimes\dots\otimes\id}^{n}\big)\\
&=\sum_{j,n}\Big(\sum_{k=1}^{d-j+1}\sum_{i_1+\dots+i_k=d-j+1}\mfm_k^{W,+0}\big(\boldsymbol{b}_{i_k}^V\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)+\mfm_k^{V,0-}\big(\boldsymbol{\Delta}_{i_k}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)\Big)\big(\id\otimes\dots\otimes\mfm_j^{V\odot W}\otimes\dots\otimes\id\big)\\
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\sum_{j=1}^{i_s}\sum_{n=0}^{i_s-j}\mfm_k^{W,+0}\big(\boldsymbol{b}_{i_k}^V\otimes\dots\otimes\boldsymbol{b}_{i_s-j+1}^V(\id\otimes\dots\otimes\mfm_j^{V\odot W}\otimes\dots\otimes\id)\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\\
&\hspace{4cm}+\mfm_k^{V,0-}\big(\boldsymbol{\Delta}_{i_k}^W\otimes\dots\otimes\boldsymbol{\Delta}_{i_s-j+1}^W(\id\otimes\dots\otimes\mfm_j^{V\odot W}\otimes\dots\otimes\id)\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)\\
&=\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\mfm_k^{W,+0}\big(\boldsymbol{b}_{i_k}^V\otimes\dots\otimes\sum_{j=1}^{i_s}\sum_{n=0}^{i_s-j}\boldsymbol{b}_{i_s-j+1}^V(\id\otimes\dots\otimes\mfm_j^{V\odot W}\otimes\dots\otimes\id)\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\\
&+\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\mfm_k^{V,0-}\big(\boldsymbol{\Delta}_{i_k}^W\otimes\dots\otimes\sum_{j=1}^{i_s}\sum_{n=0}^{i_s-j}\boldsymbol{\Delta}_{i_s-j+1}^W(\id\otimes\dots\otimes\mfm_j^{V\odot W}\otimes\dots\otimes\id)\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)
\end{alignat*}
Using Lemma \ref{lem:functoriality}, the sum above is equal to:
\begin{alignat*}{1}
&\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\mfm_k^{W,+0}\big(\boldsymbol{b}_{i_k}^V\otimes\dots\otimes\sum_{u=1}^{i_s}\sum_{t_1+\dots+t_u=i_s}\mfm_u^W(\boldsymbol{b}_{t_u}^V\otimes\dots\otimes\boldsymbol{b}_{t_1}^V)\otimes\dots\otimes\boldsymbol{b}_{i_1}^V\big)\\
&+\sum_{k=1}^d\sum_{i_1+...+i_k=d}\sum_{s=1}^k\mfm_k^{V,0-}\big(\boldsymbol{\Delta}_{i_k}^W\otimes\dots\otimes\sum_{u=1}^{i_s}\sum_{t_1+\dots+t_u=i_s}\mfm_u^V(\boldsymbol{\Delta}_{t_u}^W\otimes\dots\otimes\boldsymbol{\Delta}_{t_1}^W)\otimes\dots\otimes\boldsymbol{\Delta}_{i_1}^W\big)\\
&=\sum_{j=1}^d\sum_{r_1+\dots+r_j=d}\sum_{u=1}^j\sum_{s=1}^{j-u+1}\mfm_{j-u+1}^{W,+0}\big(\underbrace{\id\otimes\dots\otimes\id}_{j-u+1-s}\otimes\mfm_u^W\otimes\underbrace{\id\otimes\dots\otimes\id}_{s-1}\big)\big(\boldsymbol{b}_{r_j}^V\otimes\dots\otimes\boldsymbol{b}_{r_1}^V\big)\\
&+\sum_{j=1}^d\sum_{r_1+\dots+r_j=d}\sum_{u=1}^j\sum_{s=1}^{j-u+1}\mfm_{j-u+1}^{V,0-}\big(\underbrace{\id\otimes\dots\otimes\id}_{j-u+1-s}\otimes\mfm_u^V\otimes\underbrace{\id\otimes\dots\otimes\id}_{s-1}\big)\big(\boldsymbol{\Delta}_{r_j}^W\otimes\dots\otimes\boldsymbol{\Delta}_{r_1}^W\big)\\
&=0
\end{alignat*}
as the maps $\mfm_j^W$ and $\mfm_j^V$ satisfy the $A_\infty$-equations.
\end{proof}
\normalsize
\bibliographystyle{alpha}
We are currently in an exciting era of high-precision cosmology,
with most of the ``standard'' cosmological model parameters
now known to within a few percent. In particular, the recent analysis
of the cosmic microwave background (CMB) temperature fluctuations
recorded by the \textit{Planck} satellite has found that the present day
density of baryons, $\Omega_{\rm b,0}$, contributes just $(2.205\pm0.028)/$$\,h^2$\
per cent of the critical density \citep{Efs13}, where $h$ is the Hubble constant
in units of 100 km s$^{-1}$ Mpc$^{-1}$. \textit{Planck}'s determination of $\Omega_{\rm b,0}$$\,h^2$,
which is now limited by cosmic variance, is the most precise measure of
this fundamental physical quantity for the foreseeable future
(when derived from the CMB).
For a long time (e.g. \citealt{WagFowHoy67}), it has been appreciated that a complementary measurement of
$\Omega_{\rm b,0}$$\,h^2$\ can be deduced from the relative abundances of the
light elements that were created during big bang nucleosynthesis (BBN). Aside
from protons, the only stable nuclei that were produced in astrophysically accessible
abundances are $^2$H (a.k.a. deuterium, D), $^3$He, $^4$He, and trace amounts
of $^7$Li (see \citealt{Ste07} for a comprehensive review).
In particular, considerable effort
has been devoted to measuring the abundance of deuterium (D/H), the mass
fraction of $^4$He (${Y}_{\rm P}$), and the abundance of $^7$Li. Of these, the primordial abundance of
deuterium is generally accepted as the best `baryometer', owing to its sensitivity and
monotonic relationship to $\Omega_{\rm b,0}$$\,h^2$.
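As a rough guide (the precise dependence must be computed numerically with a BBN code), the primordial deuterium abundance scales approximately as
\begin{equation*}
{\rm (D/H)_{p}}\propto\left(\Omega_{\rm b,0}\,h^{2}\right)^{-1.6},
\end{equation*}
so that, for example, a D/H measurement accurate to 2 per cent constrains $\Omega_{\rm b,0}$$\,h^2$\ at the level of $2/1.6\simeq1.2$ per cent.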
The mass fraction of $^4$He, on the other hand, is relatively insensitive to
$\Omega_{\rm b,0}$$\,h^2$, but depends quite strongly on the expansion rate
of the early Universe (typically parameterized in non-standard models of
BBN as an effective number of neutrino species, $N_{\rm eff}$). At present,
this measurement is primarily limited by systematic uncertainties in
converting the He and H emission lines of metal-poor \textrm{H}\,\textsc{ii}\ regions
into an estimate of ${Y}_{\rm P}$\ (see e.g. \citealt{IzoStaGus13,Ave13}). Unlike ${Y}_{\rm P}$, the
$^7$Li/H\ ratio depends modestly on both $\Omega_{\rm b,0}$$\,h^2$\ and $N_{\rm eff}$. However, the
observationally inferred value for the ``primordial'' level, derived from
metal-poor Galactic halo stars \citep{SpiSpi82,Mel10}, is discrepant
by a factor of $\sim3$ compared with the standard model predictions,
becoming even more discrepant at the lowest metallicities \citep{Sbo10}.
As pointed out recently by \citet{NolHol11} (see also \citealt{Cyb04}),
\textit{precise} measures of the primordial D/H ratio can provide interesting
bounds on $N_{\rm eff}$, when combined with a measure of $\Omega_{\rm b,0}$$\,h^2$\ from the CMB.
The promise of this method was recently demonstrated by \citet{PetCoo12},
who obtained the most precise measure of the D/H ratio to date, using a
metal-poor ([Fe/H]$=-2.33$) damped Lyman~$\alpha$ system (DLA), seen
in absorption against a bright background quasar.
The prospect of measuring the D/H ratio in gas clouds seen in absorption
along the line-of-sight to a high-redshift quasar was first pointed out by \citet{Ada76}.
This vision was only realized much later, with the advent of 8-10\,m class telescopes
equipped with high resolution echelle spectrographs \citep{BurTyt98a,BurTyt98b}.
Since these first discoveries, a handful of additional cases have been identified
(see \citealt{PetCoo12} for the `\textit{top-ten}' list), including one system that appears
to be chemically pristine \citep{FumOmePro11}. One lingering concern with this
prime set of measurements is that the dispersion in the reported D/H measures
is significantly larger than the quoted errors (first noted by \citealt{Ste01}).
Indeed, a simple $\chi^{2}$ test reveals that the errors for all D/H measurements
would need to be scaled upwards by $\sim33\%$, if the observed dispersion were due
to chance alone. An alternative possibility, if the quoted random errors are in fact
realistic, is that the analyses may suffer from small systematic biases that
are now important to recognize and correct for.
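As a simple illustration of the scaling involved: for $N$ measurements $x_i\pm\sigma_i$ with weighted mean $\bar{x}$, the reduced chi-squared about the mean is
\begin{equation*}
\chi^{2}_{\nu}=\frac{1}{N-1}\sum_{i=1}^{N}\frac{(x_{i}-\bar{x})^{2}}{\sigma_{i}^{2}},
\end{equation*}
and multiplying all the $\sigma_i$ by $\sqrt{\chi^{2}_{\nu}}$ restores $\chi^{2}_{\nu}=1$; an upward scaling of $\sim33\%$ therefore corresponds to $\chi^{2}_{\nu}\simeq(1.33)^{2}\simeq1.8$.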
\begin{figure*}
\centering
\includegraphics[angle=0,width=17.0cm]{rcooke_f1.ps}
\caption{
Flux-calibrated spectrum of J1358$+$6522 (black histogram). The zero-level and
error spectrum are shown as the green dashed and solid blue lines respectively.
The \textrm{H}\,\textsc{i}\ Lyman series lines of the DLA are indicated by red tick marks. Since the
SDSS data do not extend below 3800\,\AA, we extrapolated the response curve
for fluxing the data a further 100\,\AA\ to the blue.
}
\label{fig:fluxqso}
\end{figure*}
This concern has prompted us to identify the rare handful of systems which
afford \textit{precise} measurements of the primordial abundance of deuterium
together with realistic error estimates. This effort is part
of our ongoing survey to study
the chemistry of the most metal-poor
DLAs, described in more detail
in \citet{Pet08a} and \citet{Coo11}.
In this paper, we present the \textit{Precision Sample} of
primordial deuterium abundance measurements, consisting
of the current best estimates of D/H in QSO
absorption line systems. We also report a DLA where a new,
precise measurement of the primordial deuterium abundance
could be obtained. All of these systems have been reanalyzed
in a self-consistent manner, taking into account the dominant
sources of systematic uncertainty. In the following section, we provide
the details of the observations and data reduction procedures for the
newly observed DLA, and discuss the selection criteria we have adopted
to define the Precision Sample. In Section~\ref{sec:profanalysis}, we
describe the analysis of the new system with deuterium absorption,
and estimate its D/H ratio. In Section~\ref{sec:cosmology}
we derive the value of (D\,/\,H)$_{\rm p}$\ from the Precision Sample,
discuss its cosmological implications, and consider
current limitations in using D/H
abundance measurements for cosmology.
Section~\ref{sec:newphysics} deals with the
implications of our results for physics beyond
the standard model, considering in particular bounds
on the effective
number of neutrino families and on
the lepton asymmetry.
Finally, in Section~\ref{sec:conc}, we summarize the
main findings from this work.
\section{Observations and data reduction}
\label{sec:obs}
\subsection{The DLA towards J1358$+$6522}
Most newly discovered DLAs are now identified using the low-resolution spectra
provided by the Sloan Digital Sky Survey (SDSS). Recent searches for DLAs have
yielded several thousand systems \citep{ProWol09,Not12}.
At the modest resolution ($R\sim 2000$) and
signal-to-noise ratio (S/N$\,\sim20$) of the SDSS spectra, the metal absorption
lines of the most metal-poor DLAs are unresolved and undetected. Follow-up,
high spectral resolution observations are thus required to pin-down their chemical
abundances (see \citealt{Coo11}).
\citet{Pen10} were the first to recognize the very metal-poor DLA at redshift
$z_{\rm abs}\simeq3.0674$ towards the $z_{\rm em}\simeq3.173$ quasar
SDSS J1358$+$6522. On the basis of their intermediate resolution
observations (60\,km s$^{-1}$ full-width at half maximum; FWHM) with the
Keck Echellette Spectrograph and Imager, these authors concluded
that this DLA has a metallicity of [Fe/H] $\le-3.02$, and is thus among the most
metal-poor known \citep{Coo11}.
We re-observed this QSO with the Keck High Resolution Echelle
Spectrometer (HIRES; \citealt{Vog94}) using the red cross-disperser to
perform an accurate chemical abundance analysis \citep{Coo12}.
The HIRES data pinned down the metallicity of the DLA ([Fe/H]\,$=-2.84\pm0.03$)
and provided the first indication for the presence of a handful of
resolved \textrm{D}\,\textsc{i}\ absorption lines.
Taking advantage of the high efficiency of the HIRES UV cross-disperser
at blue wavelengths, we then performed dedicated HIRES observations
with the goal of obtaining a precise measure of the deuterium abundance.
Our observations included a total of $24\,300$\,s on 2012 May 11 (divided
equally into nine exposures) and an additional $29\,750$\,s on 2013 May 3 and 5.
An identical instrument setup was employed for both runs; we used the
C1 decker ($7''\times0.861''$, well-matched to the seeing conditions of $0.6''-0.8''$),
which delivers a nominal spectral resolution of
$R\sim48\,000$ ($\equiv6.3$ km s$^{-1}$ FWHM)
for a uniformly illuminated slit. Our setup covers the wavelength
range $3650-6305$\,\AA, with small gaps near 4550\,\AA\ and 5550\,\AA\
due to the gaps between the HIRES detector chips.
The exposures were binned on-chip $2\times2$.
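For reference, the resolving power $R$ and the velocity resolution are related by
\begin{equation*}
v_{\rm FWHM}=\frac{c}{R},
\end{equation*}
so that $R\sim48\,000$ corresponds to the $\simeq6.3$ km s$^{-1}$ FWHM quoted above.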
The data were reduced following the standard procedures of bias subtraction,
flat-fielding, order definition and extraction. The data were mapped onto a
vacuum, heliocentric wavelength scale from observations of a ThAr lamp
which bracketed the science exposures. To test the quality of the data reduction,
we reduced the data using two pipelines: \textsc{makee} which is maintained by
T.~Barlow\footnote{\textsc{makee} is available
at:\\ \texttt{http://www.astro.caltech.edu/$\sim$tb/makee/}},
and \textsc{HIRedux} which is maintained by
J.~X.~Prochaska\footnote{\textsc{HIRedux} can be obtained
from:\\ \texttt{http://www.ucolick.org/$\sim$xavier/HIRedux/index.html}}.
We found that the skyline subtraction performed by \textsc{HIRedux} was superior
to that of \textsc{makee}, and substantially improved the S/N
of the spectra at blue wavelengths. This was very important for the removal of
telluric features near the redshifted \textrm{D}\,\textsc{i}\ and \textrm{H}\,\textsc{i}\ transitions of the DLA. All data
analyzed and presented herein for J1358$+$6522 were reduced with \textsc{HIRedux}.
The individual orders were combined with \textsc{UVES\_popler}, which is maintained
by M.~T.~Murphy\footnote{\textsc{UVES\_popler} can be downloaded
from:\\ \texttt{http://astronomy.swin.edu.au/$\sim$mmurphy/UVES\_popler/}}.
As a final step, we flux-calibrated the
echelle data by comparison to the SDSS data.
The combined S/N ratio of the final spectrum per 2.5 km s$^{-1}$ pixel
is $\sim30$ near 4000\,\AA, $\sim45$ at 5000\,\AA\ and $\sim20$ near 6000\,\AA.
The reduced, flux-calibrated
spectrum is presented in Figure~\ref{fig:fluxqso}, where the red tick marks
indicate the wavelengths of the redshifted \textrm{H}\,\textsc{i}\ Lyman series transitions.
\subsection{Literature D/H QSO Absorption Line Systems}
\label{sec:litsyst}
\subsubsection{Selection Criteria}
\label{sec:criteria}
In this section, we outline the strict set of rules that we have
used to define the Precision Sample. Our goal is to identify
the small handful of systems currently known where the most
accurate and precise measures of D/H can potentially be
obtained. The set of restrictions we applied are as follows:
\begin{itemize}
\item We require that the \textrm{H}\,\textsc{i}\ Ly$\alpha$\ absorption line must exhibit
Lorentzian damping wings. This criterion is satisfied for absorption
line systems with log $N(\textrm{H}\,\textsc{i})$/cm$^{-2}\, \ge19$. Such absorption
lines lie on the damping regime of the curve-of-growth,
where $N(\textrm{H}\,\textsc{i})$\ can be derived independently of the cloud model;
the damping wings uniquely determine the total \textrm{H}\,\textsc{i}\ column density (see the schematic relation displayed after this list).
\item Due to the power of the \textrm{H}\,\textsc{i}\ Ly$\alpha$\ transition in establishing
the total \textrm{H}\,\textsc{i}\ column density, we require that the wings of this
transition are not strongly blended with nearby, unrelated,
strong \textrm{H}\,\textsc{i}\ absorption systems. If there exist additional
strong \textrm{H}\,\textsc{i}\ absorption systems nearby, we impose
that the edges of their Ly$\alpha$\ absorption troughs
should be separated by at least 500 km s$^{-1}$,
and that all such absorption systems should be modelled simultaneously.
\item At least two, resolved and apparently unblended, optically
thin transitions of \textrm{D}\,\textsc{i}\ must be available. This ensures that the
total \textrm{D}\,\textsc{i}\ column density can be measured accurately, and
independently of the cloud model.
\item The data must have been obtained with a high resolution
echelle spectrograph, with
$R\ge30\,000$ (i.e. $v_{\rm FWHM} \le 10$ km s$^{-1}$), to
resolve the broadening of the \textrm{D}\,\textsc{i}\ lines, and recorded at
S/N $\gtrsim 10$ per pixel ($\sim3$ km s$^{-1}$) at both
Ly$\alpha$\ and the weakest \textrm{D}\,\textsc{i}\ absorption line used in the analysis.
\item Several unblended metal lines with a range of oscillator
strengths for a given species (ideally O\,\textsc{i}\ and Si\,\textsc{ii}) must be
present if there is only one optically thin \textrm{D}\,\textsc{i}\ transition
in order to determine the velocity structure of the absorbing
cloud, and ensure that the presence of any partly ionized gas
(which may contribute to the \textrm{H}\,\textsc{i}\ column density) is
accurately modelled. However, if there exist at least
two optically thin \textrm{D}\,\textsc{i}\ transitions, metal lines are not strictly
required.
\end{itemize}
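To see schematically why the first criterion is so powerful: far from the line centre, the Voigt profile of Ly$\alpha$\ is dominated by its Lorentzian damping wings, where the optical depth behaves approximately as
\begin{equation*}
\tau(\Delta\nu)\propto\frac{N(\textrm{H}\,\textsc{i})\,f\,\Gamma}{\Delta\nu^{2}},
\end{equation*}
with $f$ the oscillator strength, $\Gamma$ the damping constant, and $\Delta\nu$ the frequency offset from line centre. The wing shape is therefore insensitive to the Doppler parameter and velocity structure of the absorber, so fitting the wings returns the total \textrm{H}\,\textsc{i}\ column density directly.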
Of course, we are also limited by data that we, and others,
have access to. Data that are not publicly available on archives
could not be used in this study. In total, there are four systems in
the literature that meet the above criteria. We now give a
brief description of these systems.
\subsubsection{HS0105$+$1619, $z_{\rm abs}=2.53651$}
Keck+HIRES observations of the QSO HS\,0105$+$1619
were obtained as follows:
1800\,s in 1999 (Program ID: U05H, PI: A.~Wolfe),
85\,000\,s in 1999--2000 by \citet{OMe01},
and 21\,500\,s in 2005 (Program ID: G10H, PI: D.~Tytler).
The data obtained by \citet{OMe01} are not available
from the Keck Observatory Archive.
Thus, we have reanalysed only the last dataset which,
in any case, was obtained with
a much improved HIRES detector.
As reported by \citet{OMe01},
the spectrum shows nine clean \textrm{H}\,\textsc{i}\ transitions in the Lyman series
together with four optically thin \textrm{D}\,\textsc{i}\ absorption lines;
the absorption is confined to
a single velocity component at $z_{\rm abs}=2.536509$.
The first ions exhibit slightly asymmetric profiles,
arising from ionized gas blueshifted by $7.1$ km s$^{-1}$
relative to the main component
(see Figure~\ref{fig:HS0105p1619}).
The \textrm{H}\,\textsc{i}\ column density of this ionized
component contributes just 0.01 dex to the total column
density $\log N$(H\,{\textsc i})/cm$^{-2} = 19.42 \pm 0.01$.
The data and best-fitting
model are shown in Figure~\ref{fig:HS0105p1619}.
\subsubsection{Q0913$+$072, $z_{\rm abs}=2.61829$}
The quasar Q0913$+$072 intersects a DLA
with $\log N$(H\,{\textsc i})/cm$^{-2} = 20.34 \pm 0.04$ at
$z_{\rm abs}=2.61829$.
Q0913$+$072 was observed with the Ultraviolet and Visual
Echelle Spectrograph (UVES) on the European Southern
Observatory's (ESO) Very Large Telescope (VLT) facility
in 2002 (Program ID: 68.B-0115(A), PI: P.~Molaro),
and then again in 2007 to obtain UV coverage down to the
Lyman limit (Program ID: 078.A-0185(A), PI: M.~Pettini),
resulting in a total exposure time of 77\,500\,s.
The details of this system's chemical composition
were reported by \citet{Pet08a} (see also \citealt{Ern06}),
and the analysis of the D\,{\textsc i} lines was presented
by \citet{Pet08b}; with [O/H]\,$= -2.40$ this
is currently the most metal-poor DLA known with resolved \textrm{D}\,\textsc{i}\ lines.
There is a small column density of ionized gas at velocities near that
of the DLA; we have carefully modelled
its contribution to the overall absorption
in the metal lines and the \textrm{H}\,\textsc{i}\ Lyman
series (see Figure~\ref{fig:Q0913p072}).
Since Q0913$+$072 does not have an SDSS spectrum from which we
could perform a flux calibration, we obtained a set of VLT+FORS2
(Focal Reducer/low dispersion Spectrograph 2)
exposures from the VLT archive (Program ID: 70.A-0425(C),
PI: L.~Wisotzki) to flux calibrate the UVES spectrum. The FORS2 data
were reduced following standard procedures, using the
software routines from the ESO FORS2 data reduction
pipeline\footnote{We used version 3.9.6, obtained
from:\\ \texttt{http://www.eso.org/sci/software/pipelines/}}.
The UVES data and the corresponding model fits are shown in
Figure~\ref{fig:Q0913p072}.
\subsubsection{J1419$+$0829, $z_{\rm abs}=3.04973$}
The chemical properties of this DLA were discussed by \citet{Coo11},
while \citet{PetCoo12} reported the analysis of the deuterium lines.
This system currently holds the record for the most precise
determination of the D/H ratio. The analysis technique
developed in \citet{PetCoo12} is the same as that employed here
(see Section~\ref{sec:analysis}), except for two aspects:
(1) in the present work all QSO spectra were analysed ``blind,''
i.e., without knowledge of the value of D/H until
the line fitting procedure was concluded; and
(2) we now allow for an arbitrary continuum fitting with
Legendre polynomials, rather than the combination
of power-law continuum and Gaussian
emission lines used by \citet{PetCoo12}.
To maintain consistency with the other DLAs investigated here,
we have reanalyzed this system with the new prescription for
the continuum definition, and the analysis was performed blind.
We found that the final value of D/H from this new analysis
deviated by just 0.005 dex from that reported by \citet{PetCoo12},
with an identical estimate for the error. This small shift is due to
our new approach for dealing with the systematic uncertainty
in the continuum level.
\subsubsection{J1558$-$0031, $z_{\rm abs}=2.70242$}
The data for J1558$-$0031 consist of a 11\,300\,s
Keck+HIRES spectrum obtained in 2006 (Program ID: U152Hb, PI: J.~X.~Prochaska),
first reported by \citet{OMe06}.
With $\log N$(H\,{\textsc i})/cm$^{-2} = 20.67 \pm 0.05$,
this $z_{\rm abs}=2.70242$
DLA is the highest \textrm{H}\,\textsc{i}\ column
density system currently known with at least one optically thin \textrm{D}\,\textsc{i}\
absorption feature. There exists another metal-poor DLA along this
sightline, at $z_{\rm abs}=2.629756$, that also exhibits damped Ly$\alpha$\ absorption
(log\,$N(\textrm{H}\,\textsc{i})$/cm$^{-2}$ = 19.726$\pm0.007$).
This second system contains a single absorption component, as evidenced
by several Si\,\textsc{ii}\ absorption lines
(see bottom-right panel of Figure~\ref{fig:J1558m0031}).
Although \textrm{D}\,\textsc{i}\ absorption is only seen
in the higher redshift DLA, we include the \textrm{H}\,\textsc{i}\ and
metal lines of both DLAs in the fitting procedure,
since the damping wings of the two DLAs
overlap slightly.
Portions of the HIRES spectrum, together with the best-fitting model,
are reproduced in
Figure~\ref{fig:J1558m0031}.
Unlike the \citet{OMe06} analysis, which uses data acquired
with both Magellan+MIKE and Keck+HIRES, our analysis
exclusively uses the Keck+HIRES data described above.
\begin{figure*}
\centering
\includegraphics[angle=0, width=17.0cm]{rcooke_f2a.ps}
\includegraphics[angle=0, width=17.0cm]{rcooke_f2b.ps}
\caption{
The top panel displays a portion of the flux-calibrated HIRES
spectrum near the damped
Ly$\alpha$\ line at $z_{\rm abs}=3.06726$ toward
J1358$+$6522 (black histogram) together with the error spectrum
(continuous blue line). The dashed green line marks the best-fitting
zero level of the data and the dashed blue line shows the best-fitting
continuum level. The solid red line shows the overall best-fitting model
to the DLA. The bottom panel shows a zoom-in of the data and model;
the weak absorption feature that we have modelled on the blue wing of
Ly$\alpha$ is Si\,\textsc{iii}\,$\lambda 1206.5$ at the redshift of the DLA
($\sim4907$\,\AA\ in the observed frame).
}
\label{fig:lya}
\end{figure*}
\section{Profile Analysis}
\label{sec:profanalysis}
\begin{table*}
\centering
\begin{minipage}[c]{0.75\textwidth}
\caption{\textsc{Best-fitting Model Parameters for the DLA at $z_{\rm abs}=3.067259$ toward the QSO J1358$+$6522}}
\begin{tabular}{@{}crrrcccc}
\hline
\hline
\multicolumn{1}{c}{Comp.}
& \multicolumn{1}{c}{$z_{\rm abs}$}
& \multicolumn{1}{c}{$T$}
& \multicolumn{1}{c}{$b_{\rm turb}$}
& \multicolumn{1}{c}{$\log N$\/(H\,{\sc i})}
& \multicolumn{1}{c}{$\log {\rm (D\,\textsc{i}/H\,\textsc{i})}$}
& \multicolumn{1}{c}{$\log N$\/(C\,{\sc ii})}
& \multicolumn{1}{c}{$\log N$\/(N\,{\sc i})}\\
\multicolumn{1}{c}{}
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{(K)}
& \multicolumn{1}{c}{(km~s$^{-1}$)}
& \multicolumn{1}{c}{(cm$^{-2}$)}
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{(cm$^{-2}$)}
& \multicolumn{1}{c}{(cm$^{-2}$)}\\
\hline
1a & $3.0672594$ & $5\,700$ & $2.4$ & $20.21$ & $-4.587^{\rm a}$ & $14.32$ & $12.86$ \\
& $\pm 0.0000005$ & $\pm 1\,000$ &$\pm 0.2$ & $\pm 0.06$ & $\pm 0.012$ & $\pm 0.06$ & $\pm 0.06$ \\
1b & $3.0672591$ & $4\,000$ & $8.8$ & $20.16$ & $-4.587^{\rm a}$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ \\
& $\pm 0.0000018$ & $\pm 500$ &$\pm 0.5$ & $\pm 0.07$ & $\pm 0.012$ & & \\
2 & $3.06702$ & 13\,100 & $8.5$ & $17.4$ & $-4.587^{\rm a}$ & $12.30$ & \ldots$^{\rm b}$ \\
& $\pm 0.00001$ & $\pm1\,500$ &$\pm 1.4$ & $\pm 0.03$ & $\pm 0.012$ & $\pm0.15$ & \\
\hline
\end{tabular}
\smallskip\smallskip\smallskip
\hspace{0.5cm}\begin{tabular}{@{}ccccccc}
\hline
\hline
\multicolumn{1}{c}{Comp.}
& \multicolumn{1}{c}{$\log N$\/(O\,{\sc i})}
& \multicolumn{1}{c}{$\log N$\/(Al\,{\sc ii})}
& \multicolumn{1}{c}{$\log N$\/(Si\,{\sc ii})}
& \multicolumn{1}{c}{$\log N$\/(Si\,{\sc iii})}
& \multicolumn{1}{c}{$\log N$\/(S\,{\sc ii})}
& \multicolumn{1}{c}{$\log N$\/(Fe\,{\sc ii})}\\
\multicolumn{1}{c}{}
& \multicolumn{1}{c}{(cm$^{-2}$)}
& \multicolumn{1}{c}{(cm$^{-2}$)}
& \multicolumn{1}{c}{(cm$^{-2}$)}
& \multicolumn{1}{c}{(cm$^{-2}$)}
& \multicolumn{1}{c}{(cm$^{-2}$)}
& \multicolumn{1}{c}{(cm$^{-2}$)}\\
\hline
1a & $14.85$ & $11.93$ & $13.44$ & $11.87$ & 13.00 & $13.09$ \\
& $\pm 0.02$ & $\pm 0.03$ & $\pm 0.02$ & $\pm0.04$ & $\pm0.08$ & $\pm 0.02$ \\
1b & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ \\
& & & & & & \\
2 & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ & $11.53$ & $11.80$ & \ldots$^{\rm b}$ & \ldots$^{\rm b}$ \\
& & & $\pm0.06$ & $\pm0.06$ & & \\
\hline
\end{tabular}
\smallskip
\hspace{0.5cm}$^{\rm a}${Forced to be the same for all components.}
\hspace{0.5cm}$^{\rm b}${Absorption is undetected for this ion in this component.}\\
\label{tab:compstruct}
\end{minipage}
\end{table*}
The analysis we have adopted to measure the D/H ratio is
an improvement over previous studies, with two main
differences:
(1) the profile fitting and $\chi^2$ minimization process was
performed with new, purpose-built software routines; and (2) we
have tried to identify the dominant systematics, and have
attempted to account for these effects. Before we discuss the details
of our line profile analysis, we briefly describe the important aspects
of the new software we have written to measure D/H
in metal-poor DLAs.
We will focus our discussion on the newly discovered system
with a precise value of the D/H abundance
(J1358$+$6522), but we stress that all five DLAs were analyzed in
an identical manner.
\subsection{Line Profile Fitting Software}
To account for the dominant systematics that could potentially affect
the measurement of D/H, we have developed a new suite of
software routines written in the \textsc{python} software environment.
Our Absorption LIne Software (\textsc{alis}; described in more detail by
R.~Cooke 2014, in preparation) uses a modified version of the \textsc{mpfit}
package \citep{Mar09}. \textsc{mpfit} employs a Levenberg-Marquardt
least-squares minimization algorithm to derive the model parameters
that best fit the data (i.e. the parameters that minimize the difference
between the data and the model, weighted by the error in the data).
Unlike the software packages that have previously been used to
measure D/H, \textsc{alis}\ has the advantage of being able to
fit an arbitrary emission profile for the quasar whilst simultaneously
fitting to the absorption from the DLA; any uncertainty in the continuum
is therefore automatically folded into the final uncertainty in the D/H ratio.
For all transitions, we use the atomic data compiled by \citet{Mor03}.
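As a hedged illustration of this weighted least-squares step (and not
the \textsc{alis} implementation itself, which relies on
\textsc{mpfit}), the fit can be sketched with SciPy's
Levenberg-Marquardt driver, where \texttt{model} stands in for the full
continuum\,$\times$\,absorption prescription:
\begin{verbatim}
# Minimal sketch (not ALIS): error-weighted Levenberg-Marquardt fit.
import numpy as np
from scipy.optimize import least_squares

def residuals(p, wave, flux, err, model):
    # model(wave, *p) stands in for continuum x exp(-tau)
    return (flux - model(wave, *p)) / err

# fit = least_squares(residuals, p0, method='lm',
#                     args=(wave, flux, err, model))
\end{verbatim}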
\begin{figure*}
\centering
\includegraphics[angle=0, height=21.65cm]{rcooke_f3.ps}
\caption{
A montage of the full Lyman series absorption in the DLA
at $z_{\rm abs}=3.067259$
toward J1358$+$6522.
The black histogram shows the data,
fully adjusted to the best-fitting continuum and zero levels,
while the red continuous line is
the model fit.
The minimum $\chi^{2}$/dof for this fit is $6282.3/6401$.
Tick marks above the spectrum indicate
the location of the velocity components
(red ticks for \textrm{H}\,\textsc{i}, green ticks for \textrm{D}\,\textsc{i}).
}
\label{fig:lyman}
\end{figure*}
\subsection{Analysis Technique for Measuring D/H}
\label{sec:analysis}
\begin{figure*}
\centering
\includegraphics[angle=0, width=0.99\textwidth]{rcooke_f4.ps}
\caption{
A selection of metal absorption lines for the DLA at $z_{\rm abs}=3.067259$
toward the QSO J1358$+$6522 (black histogram, cf. \citealt{Coo12}). The
best-fitting model is shown as the solid red line, while fitted blends are shown
as the thick orange lines. In the top panels, the red tick marks above the spectrum
indicate the location of the two absorption components, while the bottom panels
only display the tick mark for the main absorption component (i.e. component 1a+1b).
The long-dashed blue and dashed green lines show the continuum and zero level
respectively. The absorption feature near $+25$ km s$^{-1}$
in the Si\,\textsc{ii}\,$\lambda1260$ panel is Fe\,\textsc{ii}\,$\lambda1260.5$.
}
\label{fig:metals}
\end{figure*}
We first highlight one of the main strengths of our analysis which,
to our knowledge, has not been implemented in previous D/H studies.
In order to remove human bias from the results,
we have adopted a blind analysis strategy,
whereby the only information revealed during the profile fitting process is the
best-fitting $\chi^2$ value, and plots of the corresponding model fits
to the data; all of the parameter fitting results are entirely hidden from view.
Only after the analysis converged on the best-fitting model
were the full results of the model fit uncovered, and
these are the final values quoted here. No
further changes were made to the model fit
nor data reduction after the results were
revealed.
To allow for an efficient computation, a small wavelength interval
(typically $\pm300$ km s$^{-1}$) around each absorption line of interest
was extracted. For each of these intervals, we manually identified the
regions to be used in the fitting process by selecting pixels that we
attribute to either the quasar continuum or the DLA's absorption
lines. In some cases, it was also necessary to model unrelated
absorption lines in the fitting procedure if there was significant line-blending.
We have also attempted to account for the main systematics and limitations
that might affect the reported D/H value. The dominant systematics likely include
the instrumental broadening function, the cloud model, and the placement
of the quasar continuum and zero-level. We marginalize over the first of these
systematics
by assuming that the instrumental profile is a Gaussian\footnote{We confirmed that
emission lines from the ThAr wavelength calibration spectrum were well-fit by a
Gaussian profile.}, where the Gaussian FWHM
was allowed to be a free model parameter during the
$\chi^2$ minimization process.
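Schematically, this smoothing step reduces to the following sketch, in
which the FWHM (in km s$^{-1}$) is simply one more entry in the
parameter vector passed to the minimizer; the constant velocity pixel
size assumed here is illustrative.
\begin{verbatim}
# Sketch of the instrumental smoothing with a free Gaussian FWHM.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth(model_flux, fwhm_kms, pix_kms=3.0):
    sigma_pix = fwhm_kms / (2.0*np.sqrt(2.0*np.log(2.0))) / pix_kms
    return gaussian_filter1d(model_flux, sigma_pix)
\end{verbatim}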
Any uncertainty or bias in the choice of the cloud model is overcome by our
selection criteria listed in Section~\ref{sec:criteria}.
Specifically, DLAs
have two advantages over other absorption line systems
for the determination of the D/H ratio, both stemming
from their high \textrm{H}\,\textsc{i}\ column densities:
(1) The \textrm{H}\,\textsc{i}\ Ly$\alpha$\ line exhibits well-defined Lorentzian damping
wings\footnote{\citet{Lee13} recently suggested that
the wings of the damped Ly$\alpha$\ absorption line exhibit an
asymmetric deviation from a Lorentzian. Such a deviation
from a Lorentzian profile would be marginalized over during
the arbitrary continuum fitting process described below, and
is thus expected to have a negligible impact on our results.}
from which the total \textrm{H}\,\textsc{i}\ column
density can be deduced precisely and
independently of the cloud model; and
(2) many \textrm{D}\,\textsc{i}\ transitions in the Lyman series are normally
detectable. With their widely varying
oscillator strengths these multiple \textrm{D}\,\textsc{i}\ lines can pin down
the \textrm{D}\,\textsc{i}\ column density very precisely;
the availability of unsaturated, high-order lines
ensures that the value of D/H thus deduced does
not depend on the detailed kinematic structure
of the gas (i.e. the cloud model).
Neither of these two advantages applies to Lyman limit
systems whose \textrm{H}\,\textsc{i}\ column density can be three
orders of magnitude lower than that of DLAs.
Finally, if the background level is not accurately determined,
a small offset can systematically
affect the derived D/H value.
We have overcome this concern by allowing the zero level
to be fit during the minimization process
(assuming the offset to be the same at all wavelengths).
Arguably, the above systematic effects
are sub-dominant compared to the choice of the
continuum level. One of the improvements
over previous analyses is that we have not
specified \textit{a priori} the level of the quasar continuum.
Instead, we have marginalized
over the continuum level during the fitting process
by fitting Legendre polynomials to the
flux-calibrated quasar continuum.
The order of the polynomial was chosen to be sufficiently
large to allow flexibility in arbitrarily defining the continuum level,
whilst ensuring that the continuum was not being over-fit to the noise
(examples are shown in the top panels of Figs.~\ref{fig:lya},
\ref{fig:HS0105p1619}, \ref{fig:Q0913p072}, and \ref{fig:J1558m0031}).
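A minimal sketch of this continuum parameterization is given below;
the Legendre coefficients are fitted alongside the absorption
parameters, and the polynomial order is set by the considerations just
described.
\begin{verbatim}
# Sketch of the Legendre continuum evaluated on a rescaled wavelength
# axis; the coefficients are free parameters of the chi^2 fit.
import numpy as np

def continuum(wave, coeffs):
    x = 2.0*(wave - wave.min())/(wave.max() - wave.min()) - 1.0
    return np.polynomial.legendre.legval(x, coeffs)
\end{verbatim}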
We emphasize that by including the above systematics into the fitting process,
the uncertainties in the continuum, zero level, and instrumental
broadening are incorporated
into the final uncertainty in the derived D/H value.
To summarize, we have
marginalized over the continuum, zero level and instrumental
FWHM, whilst simultaneously fitting for all of the DLA's
unblended metal lines, the DLA's
Ly$\alpha$\ transition, and the entire \textrm{H}\,\textsc{i}\ and \textrm{D}\,\textsc{i}\ Lyman series.
Our analysis therefore includes all of the available information
that can help pin down the
D/H value, and it accounts for the systematics that
are currently thought to contribute the most to the
overall error budget.
We finally point out that every \textrm{H}\,\textsc{i}\ component used
to fit the DLA was forced to have the same \textrm{D}\,\textsc{i}\,/\,\textrm{H}\,\textsc{i}\ ratio.
We have therefore fit directly to the \textrm{D}\,\textsc{i}\,/\,\textrm{H}\,\textsc{i}\ value,
rather than deriving separately the column densities of \textrm{D}\,\textsc{i}\ and \textrm{H}\,\textsc{i}.
All absorption lines were modeled as Voigt profiles
where the line broadening has both
thermal and turbulent contributions.
All of the DLAs in our survey were first modeled
with a single absorption component. If the metal lines
exhibited an asymmetric profile, an additional component
was included to improve the quality of the fit.
To confirm that the model parameters had
converged to their best-fitting values, we performed
a convergence test; once the difference in the
$\chi^{2}$ between successive iterations was $<0.01$,
the parameter values were stored and the $\chi^{2}$
minimization recommenced with a tolerance of $10^{-3}$.
Once a successive iteration reduced the $\chi^{2}$ by
$<10^{-3}$, the minimization was terminated and the
parameter values from the two convergence
criteria were compared. If all parameter values differed by
$<0.2\sigma$ (i.e. 20 per cent of the parameter error), then
the model fit was deemed to have converged.
As a final step, we repeated the $\chi^{2}$ minimization process 20 times,
perturbing the starting parameters of each run by the covariance matrix. This
exercise ensures that our choice of starting parameters does not influence the
final result. We found that the choice of starting parameters has a negligible
contribution to the error on \textrm{D}\,\textsc{i}\,/\,\textrm{H}\,\textsc{i}\ (typically 0.002 dex), but can introduce
a small bias (again, typically 0.002 dex). We have accounted
for this small bias in all of the results quoted herein.
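In outline, the restart procedure is as follows, where
\texttt{minimize\_chi2}, \texttt{best\_params} and \texttt{covar} stand
in for the converged \textsc{alis} fit, its parameter vector and its
covariance matrix:
\begin{verbatim}
# Sketch of the 20 perturbed restarts used to test for starting-point
# bias; `minimize_chi2`, `best_params` and `covar` are stand-ins.
import numpy as np
rng = np.random.default_rng()
starts = rng.multivariate_normal(best_params, covar, size=20)
runs = np.array([minimize_chi2(p0) for p0 in starts])
bias = runs.mean(axis=0) - best_params   # typically ~0.002 dex in D/H
\end{verbatim}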
\subsection{Component Structure}
\label{sec:compstruc}
Most of the narrow, low-ionization metal lines of the DLA toward
J1358$+$6522 consist of a single component at
$z_{\rm abs}=3.067259$.
A second weaker component,
blueshifted by $17.4$ km s$^{-1}$ ($z_{\rm abs} = 3.06702$)
contributes to
Si\,\textsc{iii}~$\lambda 1206.5$ and to the strongest C\,\textsc{ii}\ and Si\,\textsc{ii}\
lines. Evidently, this weaker absorption arises in nearby ionized gas.
In fitting the absorption lines, we tied the redshift, turbulent Doppler parameter
and kinetic temperature of the gas to be the same for the metal,
\textrm{D}\,\textsc{i}\ and \textrm{H}\,\textsc{i}\ absorption lines. We allowed all of the cloud model
parameters to vary, whilst simultaneously fitting for the continuum
near every absorption line. Relevant parameters of the
best-fitting cloud model so determined are
collected in Table~\ref{tab:compstruct}.
Figures~\ref{fig:lya}, \ref{fig:lyman} and \ref{fig:metals}
compare the data and model fits for, respectively, the damped
Ly$\alpha$\ line, the full Lyman series, and selected metal lines.
[Since the metal lines analyzed here are the same as those
shown in Figure~1 of \citet{Coo12}, albeit now with
a higher S/N ratio, we only present a small selection of them in
Figure~\ref{fig:metals} to avoid repetition].
The best-fitting chi-squared value for
this fit is also provided for completeness\footnote{We
caution that the quoted chi-squared value is likely underestimated
in our analysis because: (1) there is some degree of correlation
between neighbouring pixels that is not accounted for in the
error spectrum, and (2) the continuum regions selected tend
to have lower fluctuations about the mean than average.}.
\begin{figure*}
\centering
{\hspace{-0.25cm} \includegraphics[angle=0,width=8.5cm]{rcooke_f5a.ps}}
{\hspace{0.25cm} \includegraphics[angle=0,width=8.5cm]{rcooke_f5b.ps}}
\caption{
Values of D/H for the Precision Sample of DLA measurements
analyzed in this paper. The orange
point represents the new case reported here (J1358+6522). The left and right panels
show respectively the D/H measures as a function of the DLA oxygen abundance and
\textrm{H}\,\textsc{i}\ column density. The dark and light green bands are the 1$\sigma$ and
2$\sigma$ determinations of $\Omega_{\rm b,0}$$\,h^2$\ from the analysis
of the CMB temperature fluctuations recorded by the \textit{Planck}
satellite \citep{Efs13} assuming the standard model of physics.
The conversion from D/H to $\Omega_{\rm b,0}$$\,h^2$\ is given by eqs.~\ref{eqn:dhptoetad}
and \ref{eqn:etad}.
}
\label{fig:dhsample}
\end{figure*}
Returning to Table~\ref{tab:compstruct}, it can be seen
that we found it necessary to separate the main absorption
into two separate components,
labeled 1a and 1b in the Table.
A statistically acceptable fit\footnote{The addition of an extra absorption
component (i.e. three components as opposed to two, and four additional
free parameters) reduces the minimum
chi-squared value by $\Delta\chi^{2}_{\rm min}\simeq660$, which is
\textit{highly} significant (see e.g. \citealt{LamMarBow76}).} to the metal, \textrm{D}\,\textsc{i}\ and \textrm{H}\,\textsc{i}\
lines could not be achieved with a single absorbing cloud in which
the turbulent broadening is the same for all species and
the thermal broadening is inversely proportional to the square root
of the ion mass (i.e. $b_{\rm th}^2 = 2k_{\rm B}T/m$, where $k_{\rm B}$
is the Boltzmann constant).
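As a worked example of this broadening law, the parameters of component
1a in Table~\ref{tab:compstruct} ($T=5700$\,K, $b_{\rm turb}=2.4$
km s$^{-1}$) imply the following total Doppler parameters, with the
\textrm{D}\,\textsc{i}\ mass approximated as $2m_{\rm H}$:
\begin{verbatim}
# Worked example: total b-parameter for component 1a
# (T = 5700 K, b_turb = 2.4 km/s); SI constants.
import numpy as np
k_B, m_H = 1.380649e-23, 1.6735575e-27      # J/K, kg

def b_total_kms(T, b_turb, mass):
    b_th = np.sqrt(2.0*k_B*T/mass) / 1.0e3   # thermal part in km/s
    return np.hypot(b_th, b_turb)

print(b_total_kms(5700.0, 2.4, m_H))         # H I: ~10.0 km/s
print(b_total_kms(5700.0, 2.4, 2.0*m_H))     # D I: ~7.3 km/s
\end{verbatim}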
The main absorption component of this DLA appears
to consist of two `clouds' with very similar redshifts,
temperatures and \textrm{H}\,\textsc{i}\ column densities, but with
significantly different turbulence parameters
(see Table~\ref{tab:compstruct}).
The turbulent broadening for component
1a is bounded by the metal-lines, whereas the thermal
broadening is bounded by the relatively narrow \textrm{H}\,\textsc{i}\
line profiles. This combination of turbulent and thermal
broadening is unable to reproduce the observed widths
of the strongest \textrm{D}\,\textsc{i}\ lines, which require an additional
component with a larger contribution of turbulent broadening.
Surprisingly, metal absorption is only seen in the low-turbulence
cloud (component 1a); component 1b has less than 1/100
of the metallicity of component 1a.
We propose three possible interpretations for this unusual finding:
(1) there exists an essentially pristine cloud of gas near the
very metal-poor DLA; (2) the metals see a different potential
to \textrm{D}\,\textsc{i}\ and \textrm{H}\,\textsc{i}\ (i.e. the metals have not been entirely mixed into
the \textrm{H}\,\textsc{i}); (3) the assumption of a constant kinetic temperature for
all elements through the entire sightline is incorrect.
While any of these explanations is interesting in its own right,
we stress here
that \textit{the choice of the cloud model does not affect the derived
value of D/H} in this system, for the reasons discussed in
Section~\ref{sec:analysis}. Provided that there exists a series of
optically thin \textrm{D}\,\textsc{i}\ lines, and \textrm{H}\,\textsc{i}\ Ly$\alpha$\ exhibits
damping wings
(as indeed is the case for the
DLA towards J1358+6522---see Figures~\ref{fig:lya}
and \ref{fig:lyman}), the derived total \textrm{D}\,\textsc{i}\ and \textrm{H}\,\textsc{i}\ column densities will
be independent of the cloud model. We therefore only require
that the model provides a good fit to the data.
The final, best-fitting value of the deuterium abundance for the DLA
towards J1358$+$6522 is (expressed as
a logarithmic and linear quantity):
\begin{equation}
\log\,(\textrm{D}\,\textsc{i}/\textrm{H}\,\textsc{i}) = -4.588 \pm 0.012
\end{equation}
\begin{equation}
10^5\,\textrm{D}\,\textsc{i}/\textrm{H}\,\textsc{i} = 2.58 \pm 0.07
\end{equation}
where the quoted error term includes the random (observational)
error in addition to the systematic uncertainties that were
marginalized over during the fitting process.
\section{The Primordial Abundance of Deuterium from the Precision Sample}
\label{sec:cosmology}
\begin{table*}
\begin{minipage}{0.9\textwidth}
\caption{\textsc{The Precision Sample of D/H Measurements in QSO Absorption Line Systems}}
\begin{tabular}{lccc|cc|cc|l}
\hline
\hline
\multicolumn{4}{c}{}
& \multicolumn{2}{c}{Literature}
& \multicolumn{2}{c}{This work}
& \multicolumn{1}{c}{}\\
\multicolumn{1}{c}{QSO}
& \multicolumn{1}{c}{$z_{\rm em}$}
& \multicolumn{1}{c}{$z_{\rm abs}$}
& \multicolumn{1}{c}{[O/H]$^{\rm a}$}
& \multicolumn{1}{c}{$\log N$\/(H\,{\sc i})}
& \multicolumn{1}{c}{$\log {\rm (D/H)}$}
& \multicolumn{1}{c}{$\log N$\/(H\,{\sc i})}
& \multicolumn{1}{c}{$\log {\rm (D/H)}$}
& \multicolumn{1}{c}{Ref.$^{\rm b}$}\\
\multicolumn{1}{c}{}
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{(cm$^{-2}$)}
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{(cm$^{-2}$)}
& \multicolumn{1}{c}{}
& \multicolumn{1}{c}{}\\
\hline
HS\,0105+1619 & 2.652 & 2.53651 & $-1.77$ & $19.42 \pm 0.01$ & $-4.60 \pm 0.04$ & $19.426 \pm 0.006$ & $-4.589 \pm 0.026$ & 1, 2 \\
Q0913+072 & 2.785 & 2.61829 & $-2.40$ & $20.34 \pm 0.04$ & $-4.56 \pm 0.04$ & $20.312 \pm 0.008$ & $-4.597 \pm 0.018$ & 1, 3, 4 \\
SDSS~J1358$+$6522 & 3.173 & 3.06726 & $-2.33$ & $\ldots$ & $\ldots$ & $20.495 \pm 0.008$ & $-4.588 \pm 0.012$ & 1 \\
SDSS~J1419$+$0829 & 3.030 & 3.04973 & $-1.92$ & $20.391 \pm 0.008$ & $-4.596 \pm 0.009$ & $20.392 \pm 0.003$ & $-4.601 \pm 0.009$ & 1, 5, 6 \\
SDSS~J1558$-$0031 & 2.823 & 2.70242 & $-1.55$ & $20.67 \pm 0.05$ & $-4.48 \pm 0.06$ & $20.75 \pm 0.03$ & $-4.619 \pm 0.026$ & 1, 7 \\
\hline
\end{tabular}
\smallskip
$^{\rm a}${We adopt the solar value $\log ({\rm O/H})_{\odot} + 12 = 8.69$ \citep{Asp09}.}\\
$^{\rm b}${References -- (1)~This work,
(2)~\citet{OMe01},
(3)~\citet{Pet08a},
(4)~\citet{Pet08b},
(5)~\citet{PetCoo12},
(6)~\citet{Coo11},
(7)~\citet{OMe06}.
}\\
\label{tab:precision}
\end{minipage}
\end{table*}
Including the new D\,/\,H case reported herein, there are just
five metal-poor absorption line systems in which a \textit{precise}
measure of the primordial abundance of deuterium can be obtained,
based on the criteria outlined in Section~\ref{sec:litsyst}.
Relevant details of these
systems are collected in Table~\ref{tab:precision}.
The four DLAs from the literature were all reanalyzed in
an identical manner to that described above (Section 3)
for J1358$+$6522. In particular, we adopted the same blind
analysis strategy and marginalized over the
important systematic uncertainties. We refer
to this sample of five high quality measurements
as the \textit{Precision Sample}.
In Table~\ref{tab:precision}, we provide a measure of
the \textit{total} \textrm{H}\,\textsc{i}\ column density, along with the associated error.
Many of our systems contain more than one component in
\textrm{H}\,\textsc{i}\ and the column density estimates for these multiple
components are correlated with one another.
To calculate the error on the total \textrm{H}\,\textsc{i}\ column density,
we have drawn 10\,000 realizations of the component column
densities from the covariance matrix. We then calculated the total
column density for each realization; in Table~\ref{tab:precision},
we provide the mean and $1\sigma$ error derived from
this Monte Carlo analysis.
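In outline, this Monte Carlo reduces to the following sketch, where
\texttt{logN\_comps} and \texttt{covar\_logN} stand in for the fitted
component column densities and their covariance matrix:
\begin{verbatim}
# Sketch of the total-N(HI) error: draw correlated component column
# densities, sum them linearly, and summarize the distribution.
import numpy as np
rng = np.random.default_rng()
draws = rng.multivariate_normal(logN_comps, covar_logN, size=10000)
logN_tot = np.log10((10.0**draws).sum(axis=1))
print(logN_tot.mean(), logN_tot.std())       # quoted value, 1-sigma
\end{verbatim}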
We consider the five measures of \textrm{D}\,\textsc{i}\,/\,\textrm{H}\,\textsc{i}\ in these DLAs
as five independent determinations of the primordial
abundance of deuterium, (D\,/\,H)$_{\rm p}$, for the following reasons:
(1) we are not aware of any physical mechanism that would
alter the ionization balance of D compared to H. Thus, to our
knowledge, \textrm{D}\,\textsc{i}\,/\,\textrm{H}\,\textsc{i}\,$\equiv$\,D\,/\,H;
(2) the degree of astration of D (i.e. its destruction
when gas is turned into stars) is expected to be
negligible at the low metallicities ([O/H]\,$< -1.5$)
of the DLAs considered here (e.g., see
Figure 2 of Romano et al. 2006);
thus (D\,/\,H)$_{\rm DLA}$ = (D\,/\,H)$_{\rm p}$;
(3) the lack of dust in metal-poor DLAs
makes it extremely unlikely that
selective depletion of D onto grains
occurs in the cases considered here
[it has been proposed that such a mechanism
may be responsible for the local variations in (D\,/\,H)$_{\rm ISM}$---see
Linsky et al. 2006];
(4) the five DLAs sample entirely independent
sites in the distant Universe.
As can be seen from Table~\ref{tab:precision}
and Figure~\ref{fig:dhsample}, the five measures of D/H
in the Precision Sample are in very good mutual agreement
and the dispersion of the measurements is consistent with
the errors estimated with our improved analysis.
A $\chi^{2}$ test indeed confirms that the five measurements
are consistent, within $2\sigma$, with being drawn from a single
value of D/H.
We can therefore combine the five independent determinations
of (D\,/\,H)$_{\rm DLA}$ to deduce the weighted mean value
of the primordial abundance of deuterium:
\begin{equation}
{ \log\,{\rm (D/H)}_{\rm p}}=-4.597\pm0.006
\label{eqn:primdh_log}
\end{equation}
\begin{equation}
\label{eqn:primdh}
10^5\,{\rm (D/H)_{\rm p}}=2.53\pm0.04
\end{equation}
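This weighted mean follows directly from the ``This work'' column of
Table~\ref{tab:precision}; the simple check below reproduces
eqs.~\ref{eqn:primdh_log} and \ref{eqn:primdh}:
\begin{verbatim}
# Inverse-variance weighted mean of the five log(D/H) measurements
# in the "This work" column of the Precision Sample table.
import numpy as np
logdh = np.array([-4.589, -4.597, -4.588, -4.601, -4.619])
sig   = np.array([ 0.026,  0.018,  0.012,  0.009,  0.026])
w = 1.0/sig**2
mean, err = (w*logdh).sum()/w.sum(), w.sum()**-0.5
print(round(mean, 3), round(err, 3))         # -> -4.597  0.006
\end{verbatim}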
This value of (D\,/\,H)$_{\rm p}$\ is not markedly different from
other recent estimates
\citep{Pet08b, FumOmePro11, PetCoo12},
but its precision is significantly better than that
achieved in earlier papers that considered a
more heterogeneous set of
${\rm (D/H)_{DLA}}$ determinations.
For completeness, we have recalculated the weighted mean for
all the known D/H measurements listed in Table~2 of
\citet{PetCoo12}, after updating the D/H values of the systems we
have reanalyzed here. The resulting weighted mean value of the
primordial deuterium abundance is $\log\,{\rm (D/H)}_{\rm p} = -4.596\pm0.006$.
This compares very well with
the value derived from the Precision Sample (eqs.~\ref{eqn:primdh_log}
and \ref{eqn:primdh}).
Perhaps this is not surprising, since
the literature systems that
did not meet our selection criteria (see Section~\ref{sec:criteria})
have larger uncertainties, and thus their contribution to the weighted mean
value of D\,/\,H is relatively low.
\subsection{The Cosmic Density of Baryons}
\label{sec:omega_b}
Using the most up-to-date calculations of the network of
nuclear reactions involved in BBN,
the primordial abundance of deuterium is related to the
cosmic density of baryons (in units of the critical density),
$\Omega_{\rm b,0}$, via the following relations
(\citealt{Ste12}; G.~Steigman 2013, private communication):
\begin{equation}
\label{eqn:dhptoetad}
{\rm (D\,/\,H)}_{\rm p} = 2.55\times10^{-5}\,(6/\eta_{\rm D})^{1.6}\times(1\pm0.03)
\end{equation}
\begin{equation}
\label{eqn:etad}
\eta_{\rm D} = \eta_{\rm 10} - 6(S-1) + 5\xi/4
\end{equation}
where $\eta_{10}=273.9\,\Omega_{\rm b,0}\,h^{2}$,
$S = [1 + 7(N_{\rm eff}-3.046)/43]^{1/2}$ is the expansion factor
and $\xi$ is the neutrino degeneracy parameter (related to the
lepton asymmetry by Equation 14 from \citealt{Ste12}). The rightmost term in
eq.~\ref{eqn:dhptoetad} represents the current 3\%
uncertainty in the conversion of (D\,/\,H)$_{\rm p}$\ to $\eta_{\rm D}$
due to the uncertainties in the relevant nuclear reaction rates
(see Section~\ref{sec:limitation}).
For the standard model,
$N_{\rm eff}$$\,\simeq3.046$ and $\xi=0$.
In this case, the Precision Sample of D/H measurements
implies a cosmic density of baryons:
\begin{equation}
\label{eq:dhobary}
100\,\Omega_{\rm b,0}\,h^2 ({\rm BBN})
= 2.202\pm0.020~~(\textrm{random})~\pm0.041~~(\textrm{systematic})
\end{equation}
where we have decoupled the error terms from our measurement
(i.e. the random error term)
and the systematic uncertainty in converting the D abundance into
the baryon density parameter.
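For reference, the central value in eq.~\ref{eq:dhobary} follows from a
direct inversion of eqs.~\ref{eqn:dhptoetad} and \ref{eqn:etad} in the
standard-model limit ($S=1$, $\xi=0$):
\begin{verbatim}
# Standard-model inversion of the conversion relations above
# (S = 1, xi = 0, so eta_D = eta_10 = 273.9 Omega_b h^2).
DH_p = 2.53e-5
eta_D = 6.0 * (2.55e-5/DH_p)**(1.0/1.6)
print(100.0*eta_D/273.9)                     # -> ~2.20
\end{verbatim}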
As can be seen from Figure~\ref{fig:dhsample},
this value of $\Omega_{\rm b,0}$$\,h^2$\ is in excellent agreement
with that derived from the analysis of the CMB temperature
fluctuations measured by the \textit{Planck} satellite \citep{Efs13}:
\begin{equation}
\label{eq:dhobary_CMB}
100 \, \Omega_{\rm b,0} \, h^2 {\rm (CMB)} =2.205 \pm 0.028 .
\end{equation}
\subsection{The Current Limitation}
\label{sec:limitation}
In the era of high-precision cosmology, we feel that it is important to
highlight the main limitations affecting the use
of (D\,/\,H)$_{\rm p}$\ in the estimation of cosmological parameters.
As can be seen from eq.~\ref{eq:dhobary},
the main source of error is in the conversion of
(D\,/\,H)$_{\rm p}$\ to the baryon density parameter ($\eta_{\rm D}$,
and hence $\Omega_{\rm b,0}$$\,h^2$).
In large part, this systematic uncertainty is due to
the relative paucity of experimental measures for several nuclear
cross-sections that are important in the network of
BBN reactions, particularly deuteron--deuteron reactions
and the $d$($p,\gamma$)$^3$He\ reaction rate at the relevant energies
\citep{Fio98, NolBur00, Cyb04, Ser04}.
Since these studies, estimates for
the deuteron--deuteron reaction cross-sections \citep{Leo06} have
improved and their contribution to the error
budget has been reduced.
The main lingering concern involves the reaction rate $d$($p,\gamma$)$^3$He,
for which only a single reliable dataset of the {\it S}-factor is
currently available in the relevant energy range
\citep{Ma97}.\footnote{To avoid possible confusion
caused by the unfortunate
nomenclature, we point out that the
{\it S}-factor in question is directly related to the reaction
cross-section as a function of energy, $\sigma(E)$.
It has nothing to do with the expansion factor
$S$ in eq.~\ref{eqn:etad}.}
The concern for $d$($p,\gamma$)$^3$He\ is made
worse by the fact that the theoretical and experimental
values of the {\it S}-factor do not agree.
This paucity of data, in addition to the difficulties of obtaining
an accurate and precise measure of $d$($p,\gamma$)$^3$He\ at BBN energies, led
\citet{NolHol11} to adopt a theoretical curve for the {\it S}-factor
between 50 and 500\,keV. This has resulted in the improved
conversion given in eq.~\ref{eqn:dhptoetad}.
It is worth pointing out here that a further reduction
by a modest factor of two in the uncertainty of the conversion from
(D\,/\,H)$_{\rm p}$\ to $\eta_{\rm D}$ would make the precision of
$\Omega_{\rm b,0}$$\,h^2$(BBN) from the DLA Precision Sample
comparable to that of $\Omega_{\rm b,0}$$\,h^2$(CMB)
achieved by the \textit{Planck} mission
(cf. eqs.~\ref{eq:dhobary} and \ref{eq:dhobary_CMB}).
\begin{figure*}
\centering
{\hspace{-0.25cm} \includegraphics[angle=0,width=8.8cm]{rcooke_f6a.ps}}
{\hspace{0.25cm} \includegraphics[angle=0,width=8.8cm]{rcooke_f6b.ps}}
\caption{
The $1\sigma$ and $2\sigma$ confidence contours (dark and light shades respectively)
for $N_{\rm eff}$\ and $\Omega_{\rm b,0}$$\,h^2$\ derived from the primordial
deuterium abundance (blue), the
CMB (green), and the combined confidence contours (red).
The left panel illustrates the current situation,
while the right panel shows the effect of reducing the
uncertainty in the conversion from (D\,/\,H)$_{\rm p}$\ to $\Omega_{\rm b,0}$$\,h^2$\
by a factor of two (see discussion in Section~\ref{sec:limitation}).
Dashed and dotted lines indicate the
hidden contour lines for BBN and CMB bounds respectively.
}
\label{fig:neff}
\end{figure*}
\section{Beyond the Standard Model}
\label{sec:newphysics}
In this section we combine the results presented here
with those of the CMB analysis by the
\textit{Planck} Collaboration to place bounds
on some parameters denoting new physics
beyond the standard model of cosmology and
particle physics.
In particular, we use the Markov-Chain
Monte Carlo chains from the combined analysis of
the \textit{Planck} temperature fluctuations \citep{Efs13},
the low-multipole \textit{WMAP} polarization data
($l<23$; \citealt{Ben12}), and high-resolution CMB
data (see \citealt{Efs13} for an appropriate list of
references). Throughout, we refer to this combined
dataset as \textit{Planck}+WP+highL, for consistency
with the work of \citet{Efs13}. In what follows, we
have assumed that $N_{\rm eff}$\ and the
baryon-to-photon ratio remained unchanged from the
epoch of BBN to the epoch of recombination.
\subsection{Combined CMB+BBN Measure of $N_{\rm eff}$ and $\Omega_{\rm b,0}$$\,h^2$}
\label{sec:neffobary}
It has long been known that the mass fraction of $^4$He, ${Y}_{\rm P}$,
is potentially a very sensitive
probe of additional light degrees of freedom \citep{SteSchGun77}.
Unfortunately, systematic uncertainties in the determination
of ${Y}_{\rm P}$\ have limited its use as an effective probe
of $N_{\rm eff}$\ (see Figure~8 of \citealt{Ste12}).
However, \citet{NolHol11} (see also \citealt{Cyb04}) have
recently highlighted the potential of using precise measurements
of the primordial deuterium abundance in conjunction
with observations of the CMB to place a strong,
independent bound on $N_{\rm eff}$.
In the left panel of Figure~\ref{fig:neff}, we show the
current $1\sigma$ and $2\sigma$
confidence contours for $N_{\rm eff}$\ and $\Omega_{\rm b,0}$$\,h^2$\ derived from the
\textit{Planck}+WP+highL CMB analysis\footnote{We used the base
cosmology set with $N_{\rm eff}$\ added as a free parameter
(see Section 6.4.4 of \citealt{Efs13}).} (green), and from the
BBN-derived (D\,/\,H)$_{\rm p}$\ reported here (blue).
The combined confidence bounds are
overlaid as red contours.
In what follows, it is instructive to recall that
the CMB-only bounds are \citep{Efs13}:
\begin{equation}
100\,\Omega_{\rm b,0}\,h^2 = 2.23\pm0.04
\label{eq:dhobary_CMB_ns}
\end{equation}
\begin{equation}
N_{\rm eff} = 3.36\pm 0.34 \, .
\end{equation}
(Note that solving simultaneously for $\Omega_{\rm b,0}$$\,h^2$\
and $N_{\rm eff}$\ leads to a slightly different best-fitting
value of $\Omega_{\rm b,0}$$\,h^2$\ than that obtained
for the standard model; cf. eqs.~\ref{eq:dhobary_CMB} and
\ref{eq:dhobary_CMB_ns}).
For comparison, from the joint BBN+CMB analysis
we deduce:
\begin{equation}
100\,\Omega_{\rm b,0}\,h^2 = 2.23\pm0.04
\label{eq:dhobary_ns}
\end{equation}
\begin{equation}
N_{\rm eff} = 3.28 \pm 0.28 \, .
\end{equation}
Thus, combining (D\,/\,H)$_{\rm p}$\ with the CMB does not significantly
change the uncertainty in $\Omega_{\rm b,0}$$\,h^2$, but does reduce the error
on $N_{\rm eff}$\ by $\sim20$ per cent. The uncertainty
on $N_{\rm eff}$\ could be reduced further by an improvement
in the cross-section of the $d$($p,\gamma$)$^3$He\ reaction
(see right panel of Figure~\ref{fig:neff},
and Section~\ref{sec:limitation}).
Based on the current bound on $N_{\rm eff}$\ from CMB+(D\,/\,H)$_{\rm p}$, we can nevertheless
rule out the existence of an additional (sterile) neutrino
(i.e. $N_{\rm eff}$\, = 4.046) at $99.3\%$
confidence (i.e. $\sim 2.7\sigma$),
provided that $N_{\rm eff}$\ and $\eta_{10}$
remained unchanged between
BBN and recombination.
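Treating the combined $N_{\rm eff}$\ constraint as Gaussian reproduces
this significance only approximately (the quoted $99.3\%$ figure comes
from the full, non-Gaussian posterior):
\begin{verbatim}
# Approximate significance of the sterile-neutrino exclusion,
# assuming the combined N_eff constraint is Gaussian.
from scipy.stats import norm
z = (4.046 - 3.28) / 0.28                    # ~2.7 sigma
print(z, norm.cdf(z) - norm.cdf(-z))         # ~0.99 two-sided
\end{verbatim}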
However, as noted recently by \citet{Ste13}, if the CMB photons
are heated after the neutrinos have decoupled [for example, by
a weakly interacting massive particle that annihilates
to photons], $N_{\rm eff}$\ will be less than $3.046$ for three standard model
neutrinos; a sterile neutrino can in principle exist even when $N_{\rm eff}$~$<4.046$.
Looking to the future, ${Y}_{\rm P}$\ has contours that are almost orthogonal
to those of the CMB and (D\,/\,H)$_{\rm p}$\ (see e.g. \citealt{Ste07}). Thus,
measures of ${Y}_{\rm P}$\ that are not limited by systematic uncertainties
could potentially provide a very strong bound,
when combined with (D\,/\,H)$_{\rm p}$, on the number of equivalent neutrinos
during the epoch of BBN, independently of CMB observations.
Following improvements in the He\,{\textsc i} emissivity
calculations (Porter et al. 2012, 2013), there have been
two recent reassessments of $Y_{\rm P}$, by
\citet{IzoStaGus13} and
by \citet{Ave13}.
\citet{IzoStaGus13} find $Y_{\rm P} = 0.253 \pm 0.003$;
this value includes a small correction of $-0.001$ to reflect
the fact that their \textit{Cloudy} modelling overpredicts
$Y_{\rm P}$ by this amount at the lowest metallicities --- see
their Figure~7(a). \citet{Ave13} find $Y_{\rm P} = 0.254 \pm 0.004$
from the average of all the low metallicity \textrm{H}\,\textsc{ii}\
regions in their sample.
Thus, the values deduced by these two teams
are in good mutual agreement; in the analysis that
follows we adopt $Y_{\rm P} = 0.253 \pm 0.003$
from \citet{IzoStaGus13}. Using the following conversion
relation for ${Y}_{\rm P}$\ (Steigman 2012; G.~Steigman 2013, private communication):
\begin{equation}
\label{eqn:yptoetad}
{Y}_{\rm P} = 0.2469 \pm 0.0006 + 0.0016\,(\eta_{\rm He} - 6)
\end{equation}
\begin{equation}
\label{eqn:etahe}
\eta_{\rm He} = \eta_{\rm 10} + 100(S-1) - 575\xi/4
\end{equation}
combined with the \citet{IzoStaGus13} measure of ${Y}_{\rm P}$,
we derive the following BBN-only bound on the
baryon density and the effective number of neutrino
species:
\begin{equation}
100\,\Omega_{\rm b,0}\,h^2 = 2.28\pm0.05
\label{eq:ypobary_ns}
\end{equation}
\begin{equation}
N_{\rm eff} = 3.50 \pm 0.20 \, .
\end{equation}
The corresponding contours are shown in
Figure~\ref{fig:ypdhcont}. Thus, it appears
that even with the most recent reappraisals
of the primordial abundance of $^4$He by \citet{IzoStaGus13} and \citet{Ave13},
there is better agreement (within the standard model) between
(D/H)$_{\rm p}$ and the CMB, than between (D/H)$_{\rm p}$ and ${Y}_{\rm P}$.
\begin{figure}
\centering
\includegraphics[angle=0,width=8.8cm]{rcooke_f7.ps}
\caption{
The $1\sigma$ and $2\sigma$ confidence contours (dark and light shades respectively)
for $N_{\rm eff}$\ and $\Omega_{\rm b,0}$$\,h^2$\ derived from the primordial
deuterium abundance (blue), the
primordial He mass fraction (green), and the combined confidence contours (red).
Dashed and dotted lines indicate the
hidden contour lines for (D\,/\,H)$_{\rm p}$\ and ${Y}_{\rm P}$\ bounds respectively.
}
\label{fig:ypdhcont}
\end{figure}
\subsection{Deuterium and the Lepton Asymmetry}
In the past, the primordial deuterium abundance has been commonly
used as a tool for measuring the present-day universal density
of baryons (see e.g. \citealt{Ste07}), and more recently as a probe of the
effective number of neutrino families
\citep[][see also Section~\ref{sec:neffobary}]{Cyb04,NolHol11,PetCoo12}.
Here, we demonstrate that \textit{precise} measures of the primordial
deuterium abundance (in combination with the CMB)
can also be used to estimate the neutrino degeneracy parameter, $\xi$,
which is related to the lepton asymmetry by Equation 14
from \citet{Ste12}.
\citet{Ste12} recently suggested that combined estimates for
(D\,/\,H)$_{\rm p}$, ${Y}_{\rm P}$, and a measure of $N_{\rm eff}$\ from the CMB, can provide interesting
limits on the neutrino degeneracy parameter ($\xi \le 0.079$, $2\sigma$; see also,
\citealt{SerRaf05}; \citealt{PopVas08}; and \citealt{SimSte08}). By combining
(D\,/\,H)$_{\rm p}$\ and ${Y}_{\rm P}$, this approach effectively removes the dependence on $\Omega_{\rm b,0}$$\,h^2$.
Using the conversion relations for (D\,/\,H)$_{\rm p}$\ and ${Y}_{\rm P}$\ (eqs.~\ref{eqn:dhptoetad}--\ref{eqn:etad}
and \ref{eqn:yptoetad}--\ref{eqn:etahe})
and the current best determination of ${Y}_{\rm P}$\ ($0.253\pm0.003$; \citealt{IzoStaGus13}),
in addition to the \textit{Planck}+WP+highL\footnote{We used the base
cosmology set with $N_{\rm eff}$\ and ${Y}_{\rm P}$\ added as free parameters
(see Section 6.4.5 of \citealt{Efs13}).} constraint on $N_{\rm eff}$\ and
the precise determination of (D\,/\,H)$_{\rm p}$\ reported here, we derive a $2\sigma$
upper limit on the neutrino degeneracy parameter, $|\xi|\le0.064$, based on the
approach by \citet{Ste12}.
We propose that an equally powerful technique for estimating $\xi$
does \textit{not} involve removing the dependence on $\Omega_{\rm b,0}$$\,h^2$\ by
combining (D\,/\,H)$_{\rm p}$\ and ${Y}_{\rm P}$, as in \citet{Ste12}. Instead, one can obtain
a measure of both $\Omega_{\rm b,0}$$\,h^2$\ and $N_{\rm eff}$\ from the CMB, and use \textit{either}
(D\,/\,H)$_{\rm p}$\ or ${Y}_{\rm P}$\ to obtain two separate measures of $\xi$. This has the clear
advantage of decoupling (D\,/\,H)$_{\rm p}$\ and ${Y}_{\rm P}$; any systematic biases in either
of these two values could potentially bias the measure of $\xi$.
Separating (D\,/\,H)$_{\rm p}$\ and ${Y}_{\rm P}$\ also allows one to check that the two
estimates agree with one another.
Our calculation involved a Monte Carlo
technique, whereby we generated random values from the Gaussian-distributed
primordial D/H abundance measurements, whilst simultaneously drawing
random values from the (correlated) distribution between
$\Omega_{\rm b,0}$$\,h^2$\ and $N_{\rm eff}$\ from the \textit{Planck}+WP+highL CMB data
\citep{Efs13}\footnote{Rather than drawing values of $\Omega_{\rm b,0}$$\,h^2$\ and $N_{\rm eff}$\
from the appropriate distribution, we instead used the
Markov-Chain Monte Carlo chains provided by the \textit{Planck} science
team, which are available at:\\ \texttt{http://www.sciops.esa.int/wikiSI/planckpla/index.php?\\title=Cosmological\_Parameters\&instance=Planck\_Public\_PLA}}.
Using Equation~19 from \citet[][equivalent to eq.~\ref{eqn:etad} here]{Ste12},
we find $\xi_{\rm D}=+0.05\pm0.13$ for
(D\,/\,H)$_{\rm p}$, leading to a $2\sigma$ upper limit of
$|\xi_{\rm D}|\le0.31$.
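In outline, the calculation is as follows; \texttt{chain\_obh2} and
\texttt{chain\_neff} stand in for the $\Omega_{\rm b,0}\,h^2$ and
$N_{\rm eff}$\ columns of the public \textit{Planck} chains, and the
final line should recover $\xi_{\rm D}$ to within Monte Carlo noise.
\begin{verbatim}
# Sketch of the Monte Carlo bound on xi_D.  `chain_obh2` and
# `chain_neff` stand in for the public Planck MCMC columns.
import numpy as np
rng = np.random.default_rng()
n = len(chain_obh2)
dh = 10.0**rng.normal(-4.597, 0.006, n)         # draws of (D/H)_p
conv = 2.55e-5*(1.0 + 0.03*rng.normal(size=n))  # 3% conversion error
eta_D = 6.0*(conv/dh)**(1.0/1.6)
eta10 = 273.9*chain_obh2
S = np.sqrt(1.0 + 7.0*(chain_neff - 3.046)/43.0)
xi = 0.8*(eta_D - eta10 + 6.0*(S - 1.0))        # invert eta_D relation
print(xi.mean(), xi.std())                      # ~ +0.05 +/- 0.13
\end{verbatim}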
With the technique outlined above, we have also computed the
neutrino degeneracy parameter from the current observational bound on ${Y}_{\rm P}$.
For this calculation, we have used the MCMC chains from the
\textit{Planck}+WP+highL CMB base cosmology with $N_{\rm eff}$\ and
${Y}_{\rm P}$\ added as free parameters. In this case, the CMB distribution
was weighted by the observational bound on ${Y}_{\rm P}$\
(${Y}_{\rm P}$\,=\,$0.253\pm0.003$; \citealt{IzoStaGus13}).
Using Equations~19--20 from
\citet[][equivalent to eqs.~\ref{eqn:etad} and \ref{eqn:etahe} here]{Ste12},
we find $\xi_{\rm D}=+0.04\pm0.15$ for
(D\,/\,H)$_{\rm p}$\ and $\xi_{\rm He}=-0.010\pm0.027$ for ${Y}_{\rm P}$.
These values translate into
corresponding $2\sigma$ upper limits
$|\xi_{\rm D}|\le0.34$ and
$|\xi_{\rm He}|\le0.064$.
Combining these two constraints then gives $\xi=-0.008\pm0.027$,
or $|\xi|\le0.062$ ($2\sigma$).
Alternatively, if we assume that the effective number of neutrino
species is consistent with three standard model neutrinos (i.e. $N_{\rm eff}$~$\simeq3.046$),
we obtain the following BBN-only bound on the neutrino degeneracy parameter
by combining (D\,/\,H)$_{\rm p}$\ and ${Y}_{\rm P}$,
$\xi=-0.026\pm0.015$, or $|\xi|\le0.056$ ($2\sigma$).
We therefore conclude that all current estimates of the
neutrino degeneracy parameter, and hence the lepton
asymmetry, are consistent with the standard model
value, $\xi=0$.
From the above calculations, it is clear that ${Y}_{\rm P}$\ is the more sensitive
probe of any lepton asymmetry, as is already appreciated.
However, (D\,/\,H)$_{\rm p}$\ offers an additional bound on
$\xi$ that is complementary to that obtained
from ${Y}_{\rm P}$. We note further that, if the uncertainty in the conversion of
(D\,/\,H)$_{\rm p}$\ to $\eta_{\rm D}$ could be reduced by a factor of two,
the bound on $\xi_{\rm D}$ would be reduced by 15\%,
corresponding to a $1\sigma$ uncertainty on $\xi$ of $\sim0.11$
from the CMB and deuterium measurements alone.
\section{Summary and Conclusions}
\label{sec:conc}
We have reported a new precise measurement
of the primordial abundance of deuterium
in a very metal-poor
damped Lyman\,$\alpha$ system at $z_{\rm abs} = 3.06726$
towards the QSO SDSS~J1358+6522.
Furthermore, we have reanalyzed self-consistently
all literature systems that meet a set of strict criteria
(four cases). These criteria were imposed to identify
the systems where both accurate and precise
measures of D\,/\,H could potentially be obtained.
The new system, plus the four from the literature,
constitute the \textit{Precision Sample} of DLAs
that are best suited for a precise determination of (D\,/\,H)$_{\rm p}$.
Our reanalysis was performed blind (to remove human
bias), and took advantage of a new software package
that we have specifically developed for the measurement of
D\,/\,H in QSO absorption line systems.
From the analysis of this sample,
we draw the following conclusions.
\smallskip
\noindent ~~(i) The very metal-poor DLA
([Fe/H]$\,=-2.88\pm0.03$) towards SDSS~J1358+6522
provides a strong bound on
the primordial abundance of deuterium, (D\,/\,H)$_{\rm p}$\ $= (2.58 \pm 0.07)\times10^{-5}$.
A weighted mean of the five systems in the Precision Sample gives
the most stringent limit on (D\,/\,H)$_{\rm p}$\ to date:
(D\,/\,H)$_{\rm p}$\ $= (2.53 \pm 0.04) \times10^{-5}$.
The corresponding baryon density is
100\,$\Omega_{\rm b,0}$$\,h^2$\ $=2.202 \pm 0.046$,
assuming the standard model of particle physics
with three families of neutrinos and no lepton asymmetry.
This value is in excellent agreement with
100\,$\Omega_{\rm b,0}$$\,h^2$\,(CMB)\,$=2.205 \pm 0.028$
deduced from the analysis of \textit{Planck}
observations of the CMB.
\smallskip
\noindent ~~(ii) The main limitation in using (D\,/\,H)$_{\rm p}$\ for
cosmological parameter estimation is the conversion of
(D\,/\,H)$_{\rm p}$\ into the ratio of baryons-to-photons. In particular,
modern measurements of the cross-section for the reaction $d$($p,\gamma$)$^3$He\
at energies between 50 and 500 keV, where there currently exists
a paucity of reliable data, are highly desirable. We
estimate that a factor of two improvement in the conversion
from (D\,/\,H)$_{\rm p}$\ to $\Omega_{\rm b,0}$$\,h^2$\
would provide a measurement of the universal baryon density
from D/H as precise as that derived
from the published \textit{Planck} data.
\smallskip
\noindent ~~(iii) By combining our D/H determination
with \textit{Planck} observations of the CMB,
we can place a tight bound
on both $\Omega_{\rm b,0}$$\,h^2$\ and $N_{\rm eff}$. The best-fitting parameters
so derived are 100\,$\Omega_{\rm b,0}$$\,h^2$=$2.23\pm0.04$ and
$N_{\rm eff}$=$3.28\pm0.28$. We therefore rule out the existence
of an additional (sterile) neutrino at $99.3$\% confidence,
provided that $N_{\rm eff}$\ and $\eta_{10}$ remained unchanged
from BBN to recombination.
The uncertainty on $N_{\rm eff}$\ could be reduced further by
a more accurate set of reaction
cross-sections for $d$($p,\gamma$)$^3$He\ and, to a lesser degree, $d+d$.
\smallskip
\noindent ~~(iv) For the first time, we have combined the
\textit{Planck} CMB results with our measure of (D\,/\,H)$_{\rm p}$\ to place a
bound on the neutrino degeneracy parameter, $\xi_{\rm D}=+0.05\pm0.13$.
When including the current best-estimate of the
$^4$He mass fraction, ${Y}_{\rm P}$, derived from metal-poor H\,\textsc{ii} regions,
the combined bound on the neutrino degeneracy parameter is
$\xi=-0.008\pm0.027$, or $|\xi|<0.062$ at $2\sigma$.
\smallskip
We conclude by re-emphasizing that the most
metal-poor DLAs are potentially the best
environments where both accurate and precise
measures of the primordial abundance of deuterium
can be obtained. A combined effort to measure anew
several important nuclear reaction
cross-sections, together with dedicated searches
for the rare metal-poor DLAs that exhibit resolved
deuterium absorption, is the next step necessary to pin
down further the value of (D\,/\,H)$_{\rm p}$.
This approach offers a
promising and exciting avenue to test for departures
from the standard model of cosmology
and particle physics.
\section*{Acknowledgments}
We are grateful to
the staff astronomers at the ESO VLT and Keck
Observatory for their assistance
with the observations.
We are indebted to Gary Steigman
for kindly communicating ahead of publication
the latest fitting formulae used in this work, and
for providing valuable comments on the manuscript.
We also thank an anonymous referee who provided
valuable comments that improved the presentation of this work.
Discussions, advice and help with various
aspects of the work described in this
paper were provided by Antony Lewis, Jordi Miralda-Escud{\'e},
Paolo Molaro, Ken Nollett, Pasquier Noterdaeme, John O'Meara,
Jason X. Prochaska, Signe Riemer-S{\o}renson, Donatella Romano,
and John Webb. We thank the Hawaiian
people for the opportunity to observe from Mauna Kea;
without their hospitality, this work would not have been possible.
R.~J.~C. is partially supported by NSF grant AST-1109447.
R.~A.~J. is supported by an NSF Astronomy and Astrophysics
Postdoctoral Fellowship under award AST-1102683.
M.~T.~M. thanks the Australian Research Council
for a QEII Research Fellowship (DP0877998)
and a Discovery Project grant (DP130100568).
\section{Introduction}
Latent variable models have become ubiquitous in statistical data
analysis, spanning a diverse set of applications ranging from text
\citep{BleNgJor02} and images \citep{QuaColDar04} to user behavior
\citep{AlyHatJosNar12}. In these works, latent variables are introduced to
represent unobserved properties or hidden causes of the observed
data. In particular, Bayesian Nonparametrics such as the Dirichlet mixture
models \citep{Neal98b}, the Indian Buffet Process (IBP) \citep{GriGha11}
and the Hierarchical Dirichlet Process (HDP) \citep{TehJorBeaBle06} allow
for flexible representation and adaptation in terms of model complexity.
In recent years spectral methods have become a credible alternative to
sampling \citep{GriSte04} and variational methods
\citep{BleJor05,DemLaiRub77} for the inference of such structures.
In particular, the work of
\cite{AnaGeHsuKakTel12,AnaChaHsuKakSonZha2011,BooGreGeo13,HsuKakZha09,SonBooSidGorSmo10}
demonstrates that it is possible to infer latent variable structure
accurately, despite the problem being nonconvex, thus exhibiting many
local minima. A particularly attractive aspect of spectral methods is
that they allow for efficient means of inferring the model complexity
in the same way as the remaining parameters, simply by thresholding
an eigenvalue decomposition appropriately. This makes them suitable for
nonparametric Bayesian approaches.
While the issue of spectral inference with the Dirichlet distribution is
largely settled \citep{AnaGeHsuKakTel12, AnaGeHsuKak13}, the domain of
nonparametric tools is much richer and it is therefore desirable to
see whether the methods can be extended to popular nonparametric models
such as the IBP. As sampling-based methods are computationally
expensive for models with complicated hierarchical structure, another attractive
direction is to apply spectral methods to nonparametric hierarchical models such
as the HDP.
By using the count-sketch FFT technique for fast tensor decomposition
\citep{WanTunSmoAna15}, the spectral method for Latent Dirichlet Allocation (LDA), which can be viewed as the simplest case of the spectral algorithm for the HDP, already outperforms sampling-based algorithms significantly both in terms of perplexity and speed.
Since the time complexity of the proposed spectral method for the HDP
does not scale with the number of layers, the algorithm enjoys
a significant improvement in runtime over HDP samplers. In a nutshell, this work
contributes to completing the tool set of spectral methods.
This is an important goal to ensure that entire models can be
translated wholly into spectral algorithms, rather than just parts.
We provide a full analysis of the tensors arising from the IBP and the HDP.
For the IBP, we show how spectral algorithms need to be modified, since a
degeneracy in the third order tensor requires fourth order terms, to
successfully infer all the hidden variables.
For the HDP, we derive the generalized form in obtaining tensors for any
arbitrary hierarchical structure. To recover the
parameters and latent factors, we use Excess Correlation Analysis
(ECA) \citep{AnaFosHsuKakLiu12} to whiten the higher order tensors and
to reduce their dimensionality. Subsequently we employ the power
method to obtain symmetric factorization of the higher-order
terms. The methods provided in this work are simple to implement and have
high efficiency in recovering the latent factors and related
parameters. We demonstrate how this approach can be used in inferring
an IBP structure in the models discussed in \cite{GriGha11} and
\cite{KnoGha07} and the generalized spectral method for the HDP, which can
be used in modeling problems involving grouped data such that mixture
components are shared across all the groups.
Moreover, we show that empirically the spectral algorithms
outperform sampling-based algorithms and variational approaches
both in terms of perplexity and speed. Statistical guarantees for recovery and
stability of the estimates conclude the paper.
{\bf Outline:}
The key idea of spectral methods is to use the method of moments to solve for the
underlying parameters, which involves the following steps (a schematic sketch in code follows the list):
\begin{itemize}
\item Construct equations for obtaining diagonalized tensors using moments of
the latent variables defined in the probabilistic graphical model.
\item Replace the theoretical moments with the empirical moments and obtain
an empirical version of the diagonalized tensor.
\item Use tensor decomposition solvers to decompose the empirical diagonalized
tensor and obtain its eigenvalues/eigenvectors, which correspond to the desired
hidden vectors/topics.
\end{itemize}
In order to use a tensor decomposition solver, a decomposable symmetric tensor
must be constructed.
A tensor is decomposable and symmetric if it can be written as a summation of
the outer products of its eigenvectors weighted by their corresponding eigenvalues.
In the two-dimensional case (i.e., as a matrix), a rank-$k$ symmetric tensor is
decomposable and symmetric since it can be decomposed as $M = \sum_{i=1}^k \lambda_i v_i v_i^T,$ where $\lambda_i/v_i$ are the eigenvalue/eigenvector pairs.
In the first step, we construct a tensor that has such properties using theoretical
moments so that the tensor can be further estimated using empirical moments
and decomposed by tensor decomposition tools.
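For concreteness, the following Python sketch illustrates this pipeline in the matrix case (all parameter values are illustrative): we form a rank-$k$ symmetric matrix $M = \sum_i \lambda_i v_i v_i^T$, perturb it to mimic an empirical estimate, and recover $k$ and the eigenpairs by thresholding an eigendecomposition.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 3

# Ground-truth orthonormal factors and eigenvalues (illustrative values).
V, _ = np.linalg.qr(rng.standard_normal((d, d)))
V = V[:, :k]
lam = np.array([3.0, 2.0, 1.0])
M = (V * lam) @ V.T               # M = sum_i lam_i v_i v_i^T

# Mimic an empirical estimate by adding small symmetric noise.
E = 1e-3 * rng.standard_normal((d, d))
M_hat = M + (E + E.T) / 2

# Decompose and threshold eigenvalues to estimate k and the eigenpairs.
w, U = np.linalg.eigh(M_hat)
keep = np.abs(w) > 0.1
print(keep.sum(), np.sort(np.abs(w[keep]))[::-1])   # 3, approx [3. 2. 1.]
\end{verbatim}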
The paper is structured as follows: In Section \ref{sec:model} we introduce the IBP and the HDP models.
In Section \ref{sec:moments} we construct equations for obtaining the diagonalized tensors using moments of the IBP and apply them to two applications, the linear Gaussian latent factor model and infinite sparse factor analysis. We also derive the generalized tensors for the HDP that are applicable to any arbitrary hierarchical structure.
In Section \ref{sec:alg} the spectral algorithms for the IBP and the HDP are proposed.
We also list several tensor decomposition tools that can be used to solve our problem.
In Section \ref{sec:concentration} we show the concentration measure of moments
and tensors for these two models and provide overall guarantees on $L_2$ distance
between the recovered latent vectors and the ground truth.
In Section \ref{sec:experiment} we demonstrate the power of the spectral IBP by
showing that the method is able to produce results comparable to those of
variational approaches in much less time.
We also apply it to image data and gene expression data to show that the algorithm is able to infer meaningful patterns in real data.
For the spectral method for the HDP, we show that (1) computation time does not increase with the number of layers using our method, whereas this factor significantly affects Gibbs sampling,
and (2) when the number of samples underneath each node in a hierarchical structure
is highly unbalanced, the spectral method for the HDP is able to obtain solutions better than those of spectral LDA in terms of perplexity.
\section{Model Settings}
\label{sec:model}
We begin with defining the models of the IBP and the HDP.
\subsection{The Indian Buffet Process}
\label{sec:ibp}
The Indian Buffet Process defines a distribution over equivalence
classes of binary matrices $Z$ with a finite number of rows and a
(potentially) infinite number of columns \citep{GriGha06,GriGha11}. The
idea is that this allows for automatic adjustment of the number of
binary entries, corresponding to the number of independent sources,
underlying causes, etc.\ This is a very useful strategy and it has led
to many applications including structuring Markov transition
matrices \citep{FoxSudJorWil10}, learning hidden causes with a bipartite
graph \citep{WooGriGha06} and finding latent features in link prediction \citep{MilGriJor09}.
Denote by $n \in \mathbb{N}$ the number of rows of $Z$, i.e.\ the number of
customers sampling dishes from the ``Indian Buffet'', let $m_k$ be the
number of customers who have sampled dish $k$, let $K_+$ be the total number
of dishes sampled, and denote by $K_h$ the number of dishes with a
particular selection history $h \in \cbr{0,1}^n$. That is, $K_h > 1$
only if there are two or more dishes that have been selected by
exactly the same set of customers. Then the probability of
generating a particular matrix $Z$ is given by \cite{GriGha11}
\begin{align}
\label{eq:pz}
p(Z) = \frac{\alpha^{K_+}}{\prod_h K_h!} \exp\Biggl[{-\alpha
\sum_{j=1}^n \textstyle \frac{1}{j}}\Biggr] \prod_{k=1}^{K_+} \frac{(n-m_k)! (m_k-1)!}{n!}
\end{align}
Here $\alpha$ is a parameter determining the expected number of
nonzero columns in $Z$. Due to the conjugacy of the prior, an
alternative way of viewing $p(Z)$ is that each column (aka dish)
contains nonzero entries $Z_{ij}$ that are drawn from the Bernoulli
distribution $Z_{ij} \sim \mathrm{Bernoulli}(\pi_i)$. That is, if we
\emph{knew} $K_+$, i.e.\ if we knew how many nonzero features $Z$
contains, and if we knew the probabilities $\pi_i$, we could draw $Z$
efficiently from it. We take this approach in our analysis: determine
$K_+$ and infer the probabilities $\pi_i$ directly from the data. This
is more reminiscent of the model used to derive the IBP --- a
hierarchical Beta-Binomial model, albeit with a variable number of entries:
$$
\begin{tikzpicture}[>=latex,text height=1.5ex,text depth=0.25ex]
\matrix[row sep=0.8cm,column sep=0.6cm] {
\node (alpha) [observed]{$\alpha$}; &
\node (pi) [latent]{$\pi_i$}; &
\node (z) [latent]{$Z_{ij}$};
\\
};
\path[->]
(alpha) edge[thick] (pi)
(pi) edge[thick] (z)
;
\begin{pgfonlayer}{background}
\node (customers) [plate, fit=(z)] {\
\\[9.5mm] \tiny $j \in \cbr{n}$};
\node (dishes) [plate, fit=(pi) (customers)] {\
\\[16.5mm]\tiny $i \in \cbr{K_+}$};
\end{pgfonlayer}
\end{tikzpicture}
$$
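As an aside, the generative process behind \eq{eq:pz} admits the usual ``restaurant'' construction: customer $j$ takes each existing dish $k$ with probability $m_k/j$ and then tries $\mathrm{Poisson}(\alpha/j)$ new dishes. A minimal Python sketch of this sampler (for illustration only; our algorithm never samples $Z$):
\begin{verbatim}
import numpy as np

def sample_ibp(n, alpha, rng):
    """Draw a binary matrix Z ~ IBP(alpha) with n customers (rows)."""
    counts, rows = [], []             # counts[k] = m_k so far
    for j in range(1, n + 1):
        # Existing dishes are taken with probability m_k / j.
        row = [rng.random() < m / j for m in counts]
        counts = [m + int(t) for m, t in zip(counts, row)]
        # Customer j then tries Poisson(alpha / j) new dishes.
        new = rng.poisson(alpha / j)
        counts += [1] * new
        rows.append(row + [True] * new)
    Z = np.zeros((n, len(counts)), dtype=int)
    for j, row in enumerate(rows):
        Z[j, :len(row)] = row
    return Z

Z = sample_ibp(n=20, alpha=2.0, rng=np.random.default_rng(1))
print(Z.shape, Z.sum(axis=0))         # K_+ columns and the counts m_k
\end{verbatim}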
In general, the binary attributes $Z_{ij}$ are \emph{not}
observed. Instead, they capture auxiliary structure pertinent to a
statistical model of interest. To make matters more concrete, consider
the following two models proposed by \cite{GriGha11} and
\cite{KnoGha07}. They also serve to showcase the algorithm design in
our paper.
\paragraph{Linear Gaussian Latent Feature Model \citep{GriGha11}.}
The assumption is that we observe vectorial data $x$. It is generated
by linear combination of dictionary atoms $\Phi$ and an associated unknown
number of binary causes $z$, all corrupted by some additive noise
$\epsilon$. That is, we assume that
\begin{align}
\label{eq:lingalaf}
x = \Phi z + \epsilon
\text{ where } \epsilon \sim \Ncal(0, \sigma^2\one)
\text{ and } z \sim \mathrm{IBP}(\alpha).
\end{align}
The dictionary matrix $\Phi$ is considered to be fixed but unknown. In
this model our goal is to infer $\Phi$, $\sigma^2$ and the
probabilities $\pi_i$ associated with the IBP model. Given that, a
maximum-likelihood estimate of $Z$ can be obtained efficiently.
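For intuition, the following sketch generates synthetic data from \eq{eq:lingalaf} using the Beta-Bernoulli view described above, with hand-picked (hypothetical) values for $\pi$, $\Phi$ and $\sigma$; the first moment of $x$ already identifies $\Phi \pi$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, K, n = 30, 4, 5000
Phi = rng.standard_normal((d, K))        # unknown dictionary atoms
pi = np.array([0.9, 0.6, 0.4, 0.2])      # per-feature probabilities
sigma = 0.1

Z = (rng.random((n, K)) < pi).astype(float)          # z_i ~ Bernoulli(pi)
X = Z @ Phi.T + sigma * rng.standard_normal((n, d))  # x = Phi z + eps

print(np.allclose(X.mean(axis=0), Phi @ pi, atol=0.1))  # M_1 ~ Phi pi
\end{verbatim}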
\paragraph{Infinite Sparse Factor Analysis \citep{KnoGha07}.}
A second model is that of sparse independent component analysis. In a
way, it extends \eq{eq:lingalaf} by replacing binary attributes with
sparse attributes. That is, instead of $z$ we use the entry-wise
product $z \mathrm{.*} y$. This leads to the model
\begin{align}
\label{eq:linzy}
x = \Phi (z \mathrm{.*} y) + \epsilon
\text{ where } \epsilon \sim \Ncal(0, \sigma^2 \one)
\text{ , } z \sim \mathrm{IBP}(\alpha)
\text{ and } y_i \sim p(y)
\end{align}
Again, the goal is to infer the dictionary $\Phi$, the probabilities $\pi_i$ and then to
associate likely values of $Z_{ij}$ and $Y_{ij}$ with the data. In
particular, \cite{KnoGha07} make a number of alternative assumptions
on $p(y)$, namely either that it is iid Gaussian or that it is iid
Laplacian. Note that the scale of $y$ itself is not so important since
an equivalent model can always be found by re-scaling matrix $\Phi$ suitably.
Note that in \eq{eq:linzy} we used the shorthand $\mathrm{.*}$ to
denote point-wise multiplication of two vectors in 'Matlab'
notation. While \eq{eq:lingalaf} and \eq{eq:linzy} appear rather
similar, the latter model is considerably more complex since it not
only amounts to a sparse signal but also to an additional
multiplicative scale. \cite{KnoGha07} refer to the model as Infinite
Sparse Factor Analysis (isFA) or Infinite Independent Component
Analysis (iICA), depending on the choice of $p(y)$.
\subsection{The Hierarchical Dirichlet Process (HDP)}
\label{sec:hdp}
The HDP mixture models are useful in modeling problems involving groups of
data, where each observation within a group is drawn from a mixture
model and it is desirable to share mixture components across all the
groups. A natural application with this property is topic modeling
for documents, possibly supplemented by an ontology.
The HDP \citep{TehJorBeaBle06} uses a Dirichlet Process (DP) \citep{Antoniak74,
Ferguson73} $G_j$ for each group $j$ of data to handle uncertainty
in the number of mixture components. At the same time, in order to share mixture
components and clusters across groups, each of these DPs is drawn from a
global DP $G_0$. The associated graphical model is given below:
\begin{center}
\begin{tikzpicture}[>=latex,text height=1.5ex,text depth=0.25ex]
\matrix[row sep=7mm,column sep=0.4cm] {
&
\node (gamma) [observed] {$\gamma_0$}; &
\node (alpha) [observed] {$\gamma_1$}; &
\\
\node (H) [observed] {$H$}; &
\node (G0) [latent] {$G_0$}; &
\node (Gj) [latent] {$G_i$}; &
\node (theta) [latent] {$\theta_{ij}$}; &
\node (x) [observed] {$x_{ij}$}; &
\\
};
\path[->]
(gamma) edge[thick] (G0)
(alpha) edge[thick] (Gj)
(H) edge[thick] (G0)
(G0) edge[thick] (Gj)
(Gj) edge[thick] (theta)
(theta) edge[thick] (x)
;
\begin{pgfonlayer}{background}
\node (child) [plate, fit=(theta) (x)] {\
\\[8.5mm]\tiny for all $j \in \cbr{N_i}$};
\node (parent) [plate, fit=(child) (Gj)] {\
\\[16.0mm]\tiny for all $i \in \cbr{I}$};
\end{pgfonlayer}
\end{tikzpicture}
\end{center}
More formally, we have the following statistical description of a two
level HDP. Extensions to more than two levels are straightforward (we
provide a general multilevel HDP spectral inference algorithm).
\begin{enumerate*}
\item Sample $G_0|\gamma_0,H \sim \mathrm{DP}(\gamma_0, H)$
\item For each $i \in \cbr{I}$ do
\begin{enumerate*}
\item Sample $G_i|\gamma_1, G_0 \sim \mathrm{DP}(\gamma_1,G_0)$
\item For each $j \in \cbr{N_i}$ do
\begin{enumerate*}
\item Sample $\theta_{ij} \sim G_i$
\item Sample $x_{ij}|\theta_{ij} \sim F(\theta_{ij}),$
\end{enumerate*}
\end{enumerate*}
\end{enumerate*}
Here $H$ is the base distribution which governs the a priori
distribution over data items, $\gamma_0$ is a concentration parameter
which controls the amount of sharing across groups, $\gamma_1$ is
a concentration parameter which governs the a priori number of clusters,
and $F(\theta)$ is a parametric observation distribution. This process can be
repeated to achieve deeper hierarchies, as needed.
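For illustration, a truncated stick-breaking sketch of this two-level generative process (the truncation level $T$ and all parameter values are arbitrary choices for the example):
\begin{verbatim}
import numpy as np

def stick_breaking(rng, gamma, T):
    """Truncated GEM(gamma) stick-breaking weights of length T."""
    beta = rng.beta(1.0, gamma, size=T)
    beta[-1] = 1.0                       # close the stick at truncation
    tail = np.concatenate(([1.0], np.cumprod(1.0 - beta[:-1])))
    return beta * tail

rng = np.random.default_rng(3)
T, V = 50, 100                           # truncation level, vocabulary size
phi = rng.dirichlet(np.ones(V), size=T)  # shared atoms phi_v ~ H
pi0 = stick_breaking(rng, gamma=5.0, T=T)     # global weights of G_0
pi_i = rng.dirichlet(3.0 * pi0 + 1e-8)        # group weights, ~ DP(gamma_1, G_0)

v = rng.choice(T, p=pi_i)                # theta_ij picks a shared atom
x = rng.choice(V, p=phi[v])              # x_ij ~ F(theta_ij)
\end{verbatim}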
More formally, we have the following statistical description of an $L$-level HDP.
\begin{center}
\begin{figure}
\centering
\begin{tikzpicture}[>=latex,text height=1.4ex,text depth=0.25ex]
\matrix[row sep=5mm,column sep=3mm] {
\node (0) {$0$}; & & &
\node (G0) [latent] {$G_0$}; &
\\
\node (1) {$1$}; &
\node (G00) [latent] {$G_{00}$}; &
\node (G01) [latent] {$G_{01}$}; & &
\node (G02) [latent] {$G_{02}$}; &
\\
\node (2) {$2$}; &
\node (G010) [latent] {$G_{010}$}; &
\node (G011) [latent] {$G_{011}$}; &
\node (G020) [latent] {$G_{020}$}; &
\node (G021) [latent] {$G_{021}$}; &
\node (G022) [latent] {$G_{022}$}; &
\\[-2mm]
\node {$\vdots$}; &
\node {$\vdots$}; &
\node {$\vdots$}; &
\node {$\vdots$}; &
\node {$\vdots$}; &
\node {$\vdots$}; &
\\[-4mm]
\node (L) {$L-1$}; &
\node (Gi) [latent] {$G_{{\bf i}}$}; &
\node {$\ldots$}; &
\node (Gii) [latent] {$G_{{\bf i}'}$}; &
\node {$\ldots$}; &
\node (Giii) [latent] {$G_{{\bf i}''}$}; &
\\
\node (docs) {docs}; &
\node (thetaij) [latent] {$\theta_{{\bf i} j}$}; &
\node {$\ldots$}; &
\node (thetaiij) [latent] {$\theta_{{\bf i}' j}$}; &
\node {$\ldots$}; &
\node (thetaiiij) [latent] {$\theta_{{\bf i}'' j}$}; &
\\
&
\node (xij) [observed] {$x_{{\bf i} j}$}; &
\node {$\ldots$}; &
\node (xiij) [observed] {$x_{{\bf i}' j}$}; &
\node {$\ldots$}; &
\node (xiiij) [observed] {$x_{{\bf i}'' j}$}; &
\\
};
\path[->]
(G0) edge[thick] (G00)
(G0) edge[thick] (G01)
(G0) edge[thick] (G02)
(G01) edge[thick] (G010)
(G01) edge[thick] (G011)
(G02) edge[thick] (G020)
(G02) edge[thick] (G021)
(G02) edge[thick] (G022)
(Gi) edge[thick] (thetaij)
(thetaij) edge[thick] (xij)
(Gii) edge[thick] (thetaiij)
(thetaiij) edge[thick] (xiij)
(Giii) edge[thick] (thetaiiij)
(thetaiiij) edge[thick] (xiiij)
;
\begin{pgfonlayer}{background}
\node (childi) [plate, fit=(thetaij) (xij)] {~};
\node (childii) [plate, fit=(thetaiij) (xiij)] {~};
\node (childiii) [plate, fit=(thetaiiij) (xiiij)] {~};
\end{pgfonlayer}
\end{tikzpicture}
\vspace{-2mm}
\caption{Hierarchical Dirichlet Process with observations at the leaf
nodes.
\label{fig:tree}}
\end{figure}
\end{center}
\paragraph{Trees}
Denote by $\Tcal = (V,E)$ a tree of depth $L$. For any vertex ${\bf i} \in
\Tcal$ we use $p({\bf i}) \in V$, $c({\bf i}) \subset V$ and
$l({\bf i})\in\cbr{0,1,\ldots,L-1}$ to denote the parent, the set of children,
and the level of the vertex, respectively. When needed, we enumerate the
vertices of $\Tcal$ in dictionary order. For instance, the root node
is denoted by ${\bf i} = (0)$, whereas ${\bf i} = (0,4,2)$ is the node
obtained by picking the fourth child of the root node and then the
second child thereof. Finally, we have sets of
observations $X_{{\bf i}}$ associated with the vertices ${\bf i}$ (in some
cases only the leaf nodes may contain observations). This yields
\begin{align*}
G_0 & \sim \mathrm{DP}(H, \gamma_0) &
\theta_{{\bf i} j} & \sim G_{{\bf i}} \\
G_{{\bf i}} & \sim \mathrm{DP}\rbr{G_{p({\bf i})}, \gamma_{l({\bf i})}} &
x_{{\bf i} j} & \sim \mathrm{Categorical}(\theta_{{\bf i} j})
\end{align*}
Here $\gamma_{l}$ denotes the concentration parameter at level
$l$ and $H$ is the base distribution which governs the a priori
distribution over data items. Figure~\ref{fig:tree} illustrates the full model.
As explained earlier, the distributions $G_{\bf i}$ have a stick breaking representation sharing common atoms:
\begin{equation}
G_{\bf i} = \sum_{v=1}^\infty \pi_{{\bf i} v}\delta_{\phi_v} \text{ with } \phi_v
\sim H.
\end{equation}
\section{Spectral Characterization}
\label{sec:moments}
We are now in a position to define the moments of the IBP and the HDP.
Our analysis begins by deriving moments
for the IBP proper. Subsequently we apply this to the two models
described above. Next, following a similar procedure, we derive the moments for the HDP. All proofs are deferred to the Appendix. For
notational convenience we denote by $\symm$ the symmetrized version of
a tensor where care is taken to ensure that existing multiplicities
are satisfied. That is, for a generic third order tensor
we set $\symm_6[A]_{ijk} = A_{ijk} + A_{kij} + A_{jki} + A_{jik} + A_{kji} +
A_{ikj}$. However, if e.g.\ $A = B \otimes C$ with $B_{ij} = B_{ji}$,
we only need $\symm_3[A]_{ijk} = A_{ijk} + A_{jki}+ A_{kij}$ to
obtain a symmetric tensor.
\subsection{Tensorial Moments for the IBP}
In our approach we assume that $Z \sim \mathrm{IBP}(\alpha)$ and that
the number of nonzero attributes
$k$ is unknown (but fixed).
In our derivation, a degeneracy in the third order tensor requires that we compute a
fourth order moment. We can exclude the cases of $\pi_i = 0$ and
$\pi_i = 1$ since the former amounts to a nonexistent feature and the
latter to a constant offset. We use $M_i$ to denote moments of order
$i$ and $S_i$ to denote diagonal(izable) tensors of order
$i$. Finally, we use $\pi \in \RR^{K_+}$ to denote the vector of probabilities $\pi_i$.
\begin{description*}
\item[Order 1] This is straightforward, since we have
\begin{align}
\label{eq:tensor-1}
M_1 := \Eb_z\sbr{z} = \pi =: S_1.
\end{align}
\item[Order 2] The second order tensor is given by
\begin{align}
M_2 & := \Eb_z \sbr{z \otimes z} = \pi \otimes \pi +
\mathrm{diag}\rbr{\pi -\pi^2} = {S}_1 \otimes
{S}_1 + \mathrm{diag}\rbr{\pi -\pi^2}.
\intertext{Solving for the diagonal tensor we have}
\label{eq:tensor-2}
S_2 & := {M}_2 - {S}_1 \otimes {S}_1 =
\mathrm{diag}\rbr{\pi - \pi^2}.
\end{align}
The degeneracies $\cbr{0, 1}$ of $\pi - \pi^2 = (1-\pi) \pi$ can be
ignored since they amount to non-existent and
degenerate probability distributions.
\item[Order 3] The third order moments yield
\begin{align}
{M}_3 := & \Eb_z \sbr{z \otimes z \otimes z} = \pi \otimes \pi
\otimes \pi + \symm_3 \sbr{\pi \otimes \mathrm{diag}\rbr{\pi -
\pi^2}} + \mathrm{diag}\rbr{\pi - 3 \pi^2 + 2 \pi^3} \\
= & {S}_1 \otimes {S}_1 \otimes
{S}_1 + \symm_3 \sbr{ {S}_1 \otimes
{S}_2} + \mathrm{diag}\rbr{\pi - 3 \pi^2 + 2 \pi^3}. \\
{S}_3 := & {M}_3- \symm_3\sbr{{S}_1 \otimes {S}_2} - {S}_1 \otimes {S}_1 \otimes {S}_1 = \mathrm{diag}\rbr{\pi - 3 \pi^2 + 2 \pi^3} .
\label{eq:tensor-3}
\end{align}
Note that the polynomial $\pi - 3 \pi^2 + 2 \pi^3 = \pi (2\pi - 1)
(\pi - 1)$ vanishes for $\pi = \frac{1}{2}$. This is undesirable for
the power method --- we need to compute a
fourth order tensor to exclude this.
\item[Order 4] The fourth order moments are
\begin{align}
{M}_4 := & \Eb_z \sbr{z \otimes z \otimes z \otimes z} = {S}_1
\otimes {S}_1 \otimes {S}_1 \otimes {S}_1 + \symm_6 \sbr{ {S}_2
\otimes {S}_1 \otimes {S}_1} + \symm_3 \sbr{ {S}_2
\otimes {S}_2} \nonumber\\
& + \symm_4 \sbr{ {S}_3 \otimes {S}_1} + \mathrm{diag} \rbr{
\pi- 7\pi^2 + 12\pi^3 -6 \pi^4 } \nonumber \\
\label{eq:tensor-4}
{S}_4 := & {M}_4 - {S}_1 \otimes {S}_1 \otimes {S}_1
\otimes {S}_1 - \symm_6 \sbr{ {S}_2 \otimes {S}_1 \otimes
{S}_1} - \symm_3 \sbr{ {S}_2 \otimes {S}_2} - \symm_4
\sbr{ {S}_3 \otimes {S}_1} \nonumber \\
=& \mathrm{diag} \rbr{ \pi- 7\pi^2 + 12\pi^3 -6 \pi^4 }.
\end{align}
The roots of the polynomial are $\cbr{0, \frac{1}{2} - 1/\sqrt{12},
\frac{1}{2} + 1/\sqrt{12}, 1}$. Hence the latent factors and
their corresponding $\pi_k$ can be inferred either by ${S}_3$ or
by ${S}_4$.
\end{description*}
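These identities are easy to verify numerically. A quick Monte Carlo check of \eq{eq:tensor-2} and of the $S_3$ degeneracy at $\pi = \frac{1}{2}$, with arbitrary test probabilities:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(4)
pi = np.array([0.9, 0.5, 0.2])
n = 200_000
Z = (rng.random((n, len(pi))) < pi).astype(float)

# Empirical S2 = M2 - S1 (x) S1 should be close to diag(pi - pi^2).
M1 = Z.mean(axis=0)
S2 = Z.T @ Z / n - np.outer(M1, M1)
print(np.round(np.diag(S2), 3))                  # ~ [0.09, 0.25, 0.16]
print(np.abs(S2 - np.diag(np.diag(S2))).max())   # off-diagonal ~ 0

# The S3 eigenvalue pi - 3 pi^2 + 2 pi^3 vanishes at pi = 1/2
# (second feature), which is why the fourth-order tensor is needed.
print(pi - 3 * pi**2 + 2 * pi**3)                # [-0.072, 0., 0.096]
\end{verbatim}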
\subsection{Applications of the IBP}
The above derivation showed that if we were able to access $z$ directly, we
could infer $\pi$ from it by reading off terms from a diagonal
tensor. Unfortunately, this is not quite so easy in practice since $z$
generally acts as a \emph{latent} attribute in a more complex
model. In the following we show how the models of \eq{eq:lingalaf} and
\eq{eq:linzy} can be converted into spectral form. We need some
notation to indicate multiplications of a tensor $M$ of order $k$ by a set of
matrices $A_i$.
\begin{align}
\label{eq:tensor-mat}
\sbr{T(M, A_1, \ldots, A_k)}_{i_1, \ldots i_k} := \sum_{j_1, \ldots
j_k} M_{j_1, \ldots j_k} \sbr{A_1}_{i_1 j_1} \cdot \ldots \cdot
\sbr{A_k}_{i_k j_k}.
\end{align}
Note that this includes matrix multiplication. For instance, $A_1
M A_2^\top = T(M, A_1, A_2)$. Also note that in the special case where the
matrices $A_i$ are vectors, this amounts to a reduction to a
scalar. Any such reduced dimensions are assumed to be dropped
implicitly. The latter will become useful in the context of the tensor
power method in \cite{AnaGeHsuKakTel12}.
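A direct implementation of \eq{eq:tensor-mat}, including the implicit dropping of modes reduced by vectors, takes only a few lines of numpy (a sketch, not an optimized routine):
\begin{verbatim}
import numpy as np

def T(M, *mats):
    """[T(M, A_1,...,A_k)]_{i_1..i_k}
       = sum_{j_1..j_k} M_{j_1..j_k} prod_s [A_s]_{i_s, j_s}.
    Vector arguments contract their mode away entirely."""
    out = np.asarray(M)
    for s, A in enumerate(mats):
        A2 = np.atleast_2d(A)                     # vectors become 1 x d
        out = np.tensordot(out, A2, axes=([s], [1]))
        out = np.moveaxis(out, -1, s)
    for s in reversed(range(len(mats))):          # drop vector-reduced modes
        if np.ndim(mats[s]) == 1:
            out = np.squeeze(out, axis=s)
    return out

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))
A, u = rng.standard_normal((3, 4)), rng.standard_normal(4)
assert np.allclose(T(M, A, A), A @ M @ A.T)   # matrix multiplication
assert np.allclose(T(M, u, u), u @ M @ u)     # reduction to a scalar
\end{verbatim}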
Here are two tensor operations that are frequently used in the derivations for linear applications of the IBP. First, for $x = Az$ (e.g.\ observation $x$ is a linear combination of some columns in matrix $A$ indicated by the IBP binary vector $z$), the $i$-th order moment
$M_i^x$, where the superscript denotes the variable for the moments,
can be obtained by multiplying the $i$-th order moment of $z,$ $M_i^z,$ with the matrix $A$ on all dimensions, i.e.,
\begin{align}
M_i^x = T(M_i^z, A, \cdots, A)
\end{align}
Another property concerns addition. Suppose $y = x + \sigma$ (e.g.\ there is some additive noise). Then, by the addition rule of expectation, we have
\begin{align}
M_1^y = M_1^x + M_1^\sigma
\end{align}
Higher order moments can be obtained by expanding the polynomial expression $(x + \sigma)^{\otimes k},$ which yields
\begin{align}
&M_2^y = \Eb[(x+\sigma) \otimes (x+\sigma)] = M_2^x + M_2^\sigma + \symm_2\rbr{\Eb \sbr{x \otimes \sigma}} \\
&M_3^y = M_3^x + M_3^\sigma + \symm_3 \sbr{\Eb\sbr{x \otimes x \otimes \sigma} + \Eb\sbr{\sigma \otimes \sigma \otimes x}} \\
&M_4^y = M_4^x + M_4^\sigma + \symm_4 \sbr{ \Eb\sbr{\sigma \otimes \sigma \otimes \sigma \otimes x} + \Eb\sbr{\sigma \otimes x \otimes x \otimes x} } + \symm_6 \sbr{\Eb\sbr{\sigma \otimes \sigma \otimes x \otimes x}}
\end{align}
If $\sigma$ is Gaussian or some other symmetric random variable, then its odd moments vanish, and the third order moment becomes $M_3^y = M_3^x + \symm_3 \sbr{ \Eb\sbr{\sigma \otimes \sigma \otimes x}}.$ Similarly, the fourth-order moment
reduces to $M_4^y = M_4^x + M_4^\sigma + \symm_6 \sbr{\Eb\sbr{\sigma \otimes \sigma \otimes x \otimes x}}.$
\noindent {\bfseries Linear Gaussian Latent Factor Model.}
When dealing with \eq{eq:lingalaf} our goal is to infer both $\Phi$ and
$\pi$. The main difference is that rather
than observing $z$ we have $\Phi z$, hence all tensors are
colored. Moreover, we also need to deal with the terms arising from
the additive noise $\epsilon$. This yields
\begin{align}
\label{eq:gaussian-tensor-1}
S_1 := & M_1 = T(\pi, \Phi) \\
\label{eq:gaussian-tensor-2}
S_2 := & M_2 - S_1 \otimes S_1 - \sigma^2 \one =
T(\mathrm{diag}(\pi - \pi^2), \Phi, \Phi) \\
\label{eq:gaussian-tensor-3}
S_3 := & M_3 - S_1 \otimes S_1 \otimes S_1 -\symm_3 \sbr{ S_1 \otimes
S_2}- \symm_3 \sbr{ {m_1 \otimes \one}} = T\rbr{\mathrm{diag}\rbr{\pi - 3\pi^2+2\pi^3}, \Phi, \Phi, \Phi}
\\
\label{eq:gaussian-tensor-4}
S_4 := & M_4 - S_1 \otimes S_1 \otimes S_1 \otimes S_1 - \symm_6 \sbr{S_2 \otimes S_1 \otimes S_1} - \symm_3 \sbr{S_2 \otimes S_2} - \symm_4 \sbr{S_3 \otimes S_1} \\
&- \sigma^2\symm_6 \sbr{S_2 \otimes \one} - m_4 \symm_3 \sbr{ \one \otimes \one}
\nonumber \\
= & T\rbr{\mathrm{diag}\rbr{-6 \pi^4 +12 \pi^3 -7 \pi^2 +
\pi}, \Phi, \Phi, \Phi, \Phi}
\nonumber
\end{align}
Here we used the auxiliary statistics $m_1$ and $m_4$. Denote by
$v$ the eigenvector with the smallest eigenvalue of the covariance
matrix of $x$. Then the auxiliary variables are defined as
\begin{align}
\label{eq:gaussian-variance-m1}
m_1 :=& \Eb_x \sbr{x \inner{v}{\rbr{x-\Eb\sbr{x}}}^2} = \sigma^2 T(\pi, \Phi) \\
\label{eq:gaussian-variance-m4}
m_4 :=& \Eb_x \sbr{\inner{v}{\rbr{x-\Eb_x\sbr{x}}}^4} /3 = \sigma^4.
\end{align}
These terms are used in a tensor power method to infer both $\Phi$ and
$\pi.$
\begin{proof}
To easily apply the addition property of moments, we define $y = \Phi z.$
\begin{description*}
\item[Order 1 tensor:] By using Equation \eq{eq:tensor-1}, we have
\begin{align}
S_1 := M_1 &= \Eb_x \sbr{x} = M_1^y + M_1^\epsilon = T(\Eb[z], \Phi)= T( \pi, \Phi),
\end{align}
where we apply the addition property of moments (note $M_1^\epsilon = 0$) in the second equality and the linear transformation property in the third.
To infer the \emph{number} of latent variables $k$ and deal with the
noise term, we need to determine the rank of the covariance matrix $\Eb_x
\sbr{\rbr{x- \Eb_x{\sbr{x}}}\otimes \rbr{x-
\Eb_x{\sbr{x}}}}$. Because there is additive noise, the smallest
$(d-k)$ eigenvalues will not be exactly zero. Instead, they amount to
the variance arising from $\epsilon$ since
\begin{align}
\mathrm{cov}[\Phi z + \epsilon] = \Phi\, \mathrm{cov}[z]\, \Phi^\top + \mathrm{cov}[\epsilon].
\end{align}
Consequently the smallest eigenvalues of the covariance matrix of $x$
allow us to read off the variance $\sigma^2$: for any unit-norm eigenvector
$v$ corresponding to the $d-k$ smallest eigenvalues we have
\begin{align}
\label{eq:gaussian-variance-1}
\Eb_x \sbr{ \rbr{ v^\top \rbr{x-\Eb \sbr{x}}}^2}=
v^\top \Phi\, \mathrm{cov}[z]\, \Phi^\top v + v^\top \mathrm{cov}[\epsilon] v
= \sigma^2.
\end{align}
\item[Order 2 tensor:]
Here we plug in Equation \eq{eq:tensor-2} and use
independence of $z$ and $\epsilon$. Linear terms in $\epsilon$
vanish. Thus we get
\begin{align}
\nonumber
M_2 =& M_2^y + M_2^\epsilon + \symm_2\rbr{\Eb[y \otimes \epsilon]}
= T\rbr{ \Eb_z \sbr{z\otimes z},\Phi,\Phi} + \sigma^2 \one \\
=& T\rbr{ \pi \otimes \pi + \mathrm{diag}\rbr{\pi -\pi^2}, \Phi,\Phi}
+ \sigma^2 \one \\
=& S_1 \otimes S_1 + T\rbr{ \mathrm{diag}\rbr{\pi -
\pi^2}, \Phi, \Phi} + \sigma^2 \one,
\end{align}
where the cross term $\Eb[y \otimes \epsilon]$ vanishes and the second equality follows from the linear transformation property of moments.
This yields the statement in Equation \eq{eq:gaussian-tensor-2}.
\item[Order 3 tensor:] As before, denote by $v$ an eigenvector
corresponding to the $(d-k)$ smallest eigenvalues, i.e.\ $v^\top \Phi=
0$. We first define an auxiliary term
\begin{align}
\nonumber
m_1 :=& \Eb_x \sbr{x \rbr{v^\top \rbr{x-\Eb{\sbr{x}}}}^2} = \Eb_x
\sbr{x\rbr{v^\top \rbr{\Phi\rbr{z-\pi}+\epsilon}}^2} \\
=& \Eb_x\sbr{x\rbr{v^\top \epsilon}^2} = \sigma^2 T(\pi, \Phi).
\end{align}
Since the Normal Distribution is symmetric, only even moments of
$\epsilon$ survive. Using \eq{eq:tensor-3}, the third order moments yield
\begin{align}
M_3 & = M_3^y + \Eb \sbr{\symm_3 \sbr{\Phi z \otimes \epsilon \otimes \epsilon} }\\
& = T\rbr{\Eb_z[z\otimes z \otimes z],\Phi,\Phi,\Phi}
+ \symm_3 \rbr{m_1\otimes \one } \\
&=S_1 \otimes S_1 \otimes S_1 +\symm_3 \sbr{ S_1 \otimes S_2}
+T\rbr{ \mathrm{diag}\rbr{\pi - 3\pi^2+2\pi^3},\Phi,\Phi,\Phi}
+ \symm_3\rbr{m_1\otimes \one }
\nonumber
\end{align}
Thus, we get Equation \eq{eq:gaussian-tensor-3}.
\item[Order 4 tensor:] We obtain the fourth-order tensor by first
calculating an auxiliary variable related to the additive noise term
\begin{align}
m_4 :=& \Eb_x \sbr{\rbr{v^\top \rbr{x-\Eb_x\sbr{x}}}^4}/3 = \Eb[\rbr{v^\top \epsilon}^4]/3= \sigma^4.
\end{align}
Here the last equality followed from the isotropy of Gaussians.
With Equation \eq{eq:tensor-4}, the fourth order moments are
\begin{align}
\nonumber
M_4 =& M_4^y + M_4^\epsilon + \Eb \sbr{\symm_6 \sbr{y \otimes y \otimes \epsilon \otimes \epsilon} }\\
=& T\rbr{ \Eb_z \sbr{z \otimes z \otimes z \otimes z}, \Phi, \Phi,\Phi,\Phi}
+\sigma^2 \symm_6\sbr{S_2 \otimes \one}
+ \sigma^4 \symm_3 \sbr{\one\otimes \one} \nonumber\\
=& S_1 \otimes S_1 \otimes S_1 \otimes S_1 \nonumber
+ \symm_6 \sbr{S_2 \otimes S_1 \otimes S_1}
+ \symm_3 \sbr{S_2 \otimes S_2} \nonumber
+ \symm_4 \sbr{S_3 \otimes S_1} \\
& +T\rbr{ \mathrm{diag}\rbr{-6 \pi^4 +12 \pi^3 -7 \pi^2 + \pi}, \Phi, \Phi, \Phi}
+ \sigma^2\symm_6\sbr{S_2 \otimes \one}
+ m_4 \symm_3 \sbr{\one\otimes \one}. \nonumber
\end{align}
\end{description*}
\end{proof}
\noindent {\bfseries Infinite Sparse Factor Analysis (isFA).}
Using the model of \eq{eq:linzy} it follows that $z \odot y$ has a
\emph{symmetric} distribution with mean $0$ provided that $p(y)$ has
this property.
Here we state the relevant moment property for such a prior: for $x = z \odot y,$
$M_i^x = M_i^z \odot M_i^y.$ If $y$ is symmetric, so that its first and third
order moments vanish, we have $M_1^x = M_3^x = 0.$ It follows that
the first and third order
moments and tensors vanish, i.e.\ $S_1=0$ and $S_3=0$. We have the
following statistics:
\begin{align}
\label{eq:isfa-tensor-1}
S_2 := & M_2 -\sigma^2 \one = T\rbr{c \cdot \mathrm{diag}(\pi), \Phi, \Phi}\\
\label{eq:isfa-tensor-4}
S_4 := & M_4 -\symm_3 \sbr{S_2 \otimes S_2} -
\sigma^2\symm_6 \sbr{S_2 \otimes \one} - m_4 \symm_3 \sbr{\one \otimes \one}
= T\rbr{\mathrm{diag}(f(\pi)), \Phi, \Phi, \Phi, \Phi}.
\end{align}
Here $m_4$ is defined as in \eq{eq:gaussian-variance-m4}. Whenever $p(y)$ in
\eq{eq:linzy} is Gaussian, we have $c = 1$ and $f(\pi) = 3\pi -
3\pi^2$. Moreover, whenever $p(y)$ follows the Laplace distribution, we
have $c=2$ and $f(\pi) = 24 \pi - 12 \pi^2$.
\begin{proof}
Since both $Y$ and $\epsilon$ are symmetric and have zero mean, the
odd order tensors vanish. That is $M_1 = 0$ and $M_3 = 0$. It suffices
for us to focus on the even terms.
\begin{description*}
\item[Order 2 tensor:] Using covariance matrix of \eq{eq:tensor-2} yields
\begin{align}
M_2 &= \Eb_x \sbr{x \otimes x} = T\rbr{ \Eb_z{ \sbr{ \rbr{z \odot y} \otimes \rbr{z \odot y} }},\Phi,\Phi }+ \sigma^2\one \\
&= T\rbr{ \rbr{ \Eb_z{\sbr{z \otimes z}} \odot \Eb_y \sbr{y^2} \one},\Phi, \Phi } + \sigma^2\one \\
&=T \rbr{ \rbr{ \pi \otimes \pi + \mathrm{diag}\rbr{\pi -\pi^2}} \odot \Eb_y \sbr{y^2} \one, \Phi,\Phi} + \sigma^2\one \\
\label{eq:y2}
& = T\rbr{\Eb_y \sbr{y^2} \mathrm{diag}\rbr{\pi}, \Phi,\Phi} +\sigma^2\one = T\rbr{\mathrm{diag}\rbr{\pi}, \Phi,\Phi} +\sigma^2\one,
\end{align}
where the second equation follows the element-wise multiplication property of the moment.
As before, the variance $\sigma^2$ of $\epsilon$ can be inferred by
Equation \eq{eq:gaussian-variance-1}. Here we get Equation
\eq{eq:isfa-tensor-1}.
\item[Order 4 tensor:] With Equation \eq{eq:tensor-4} and $\Eb_y\sbr{y^4} = 3$, we have
\begin{align}
M_4 =& \Eb_x \sbr{x \otimes x \otimes x \otimes x} \nonumber\\
=& \Eb_z{ \sbr{ \Phi \rbr{z \odot y} \otimes \Phi \rbr{z \odot y} \otimes
\Phi \rbr{z \odot y} \otimes \Phi \rbr{z \odot y} }} \nonumber\\
\nonumber
& +\Eb_z{ \sbr{ \symm_6\sbr{ \Phi \rbr{z \odot y} \otimes \Phi \rbr{z \odot y} \otimes \epsilon \otimes \epsilon} }} + \Eb \sbr{\epsilon\otimes \epsilon \otimes \epsilon\otimes \epsilon} \nonumber\\
=& T\rbr{\Eb_z \sbr{z \otimes z \otimes z \otimes z} \odot \Eb_y \sbr{y^4} \one,\Phi , \Phi, \Phi, \Phi} + \sigma^2\symm_6\sbr{S_2 \otimes \one} + \sigma^4 \symm_3 \sbr{\one\otimes \one}\nonumber\\
=& \symm_3 \sbr{S_2 \otimes S_2} +T\rbr{ \mathrm{diag}
\rbr{\Eb_y\sbr{y^4} \pi_i - 3\Eb_y\sbr{y^2}^2 \pi_i^2}, \Phi, \Phi, \Phi, \Phi }
\nonumber\\
\label{eq:y4}
& + \sigma^2 \symm_6\sbr{S_2 \otimes \one} + \sigma^4 \symm_3 \sbr{\one\otimes \one} \\
=& \symm_3 \sbr{S_2 \otimes S_2} +T\rbr{3\,\mathrm{diag}\rbr{ \pi - \pi^2}, \Phi, \Phi, \Phi, \Phi }
+ \sigma^2 \symm_6\sbr{S_2 \otimes \one}+ m_4 \symm_3 \sbr{\one\otimes \one} \nonumber
\end{align}
where $m_4$ can be inferred by \eq{eq:gaussian-variance-m4}.
\end{description*}
If the prior on $Y$ is drawn from a Laplace distribution the model is
called infinite Independent Component Analysis (iICA) \citep{KnoGha07}. The lower-order moments are similar to those of isFA, except that
$\Eb_y\sbr{y^2} = 2$ and $\Eb_y\sbr{y^4} = 24$. Replacing these terms
in Equations \eq{eq:y2} and \eq{eq:y4} yields the claim.
\end{proof}
\begin{lemma}
\label{lem:bound}
Any linear model of the form \eq{eq:lingalaf} or \eq{eq:linzy} in which
$\epsilon$ is symmetric and satisfies
$\Eb[\epsilon^i] = 0$ for $i \in \{1,3, 5,\cdots\},$ and in which $y$ has the same
properties, will yield the same moments.
\end{lemma}
\begin{proof}
This follows directly from the fact that $z$, $\epsilon$ and $y$
are independent and that the latter two have zero mean and are
symmetric. Hence the expectations carry through regardless of the
actual underlying distribution.
\end{proof}
\subsection{Tensorial Moments for the HDP}
\label{sec:spec}
To construct tensors for the HDP, a crucial step is to derive the orthogonally
decomposable tensors from the moments.
\begin{description*}
\item[Order 1 tensor:] The first-order moment is equivalent to the weighted sum of latent topics using a topic distribution under node ${\bf i},$ so it is simply the weighted combination of $\Phi$ where the weight vector is $\pi_0,$ i.e,
\begin{align}
M_1 := \Eb \sbr{x} = \Eb \sbr{\Phi h_{{\bf i} j}} = \Eb \sbr{\Phi \pi_{{\bf i}}} = \Phi \pi_{0}.
\end{align}
The last equation uses the fact that, for $\pi \sim \mathrm{Dirichlet}( \gamma_0 \pi_0), $ $\Eb[\pi] = \pi_0.$
\item[Order 2 tensor:] For such a variable $\pi,$ using standard Dirichlet moments, we have $\Eb[[\pi\pi^T]_{ii}] = \frac{1}{\gamma_0 + 1} \pi_{0i} (\gamma_0\pi_{0i}+1)$ and $\Eb[[\pi\pi^T]_{ij}] = \frac{\gamma_0}{\gamma_0 + 1} \pi_{0i} \pi_{0j}$ for $i \neq j.$ The second-order moment thus becomes
\begin{align}
M_2 :=\Eb \sbr{x_1 \otimes x_2} = \Eb \sbr{\Phi h_{{\bf i} 1} h_{{\bf i} 2}^T \Phi^T } = \Phi \Eb \sbr{ \pi_{{\bf i}} \pi_{{\bf i}}^T }\Phi^T = \Phi A \Phi^T,
\end{align}
where $[A]_{ii} = \frac{1}{\gamma_0 + 1} \pi_{0i} (\gamma_0\pi_{0i}+1) $ and $[A]_{ij} = \frac{\gamma_0}{\gamma_0 + 1} \pi_{0i} \pi_{0j}.$ Matrix $A$ can be decomposed as the sum of a diagonal matrix and the rank-one matrix $\pi_0 \otimes \pi_0.$ By replacing $A$ with these two matrices, the second-order moment can be re-written as
\begin{align}
M_2 = \Phi A \Phi^T &= \Phi \rbr{\frac{\gamma_0}{\gamma_0 + 1}\pi_0 \otimes \pi_0 + \frac{1}{\gamma_0 + 1} \mathrm{diag}(\pi_0) }\Phi^T,
\end{align}
where $\Phi \pi_0$ in the first term can be further replaced with $M_1.$ Thus, we define the second term as the second-order tensor, which is a rank-$k$ matrix,
\begin{align}
S_2 := M_2 - \frac{\gamma_0}{\gamma_0 + 1} M_1 \otimes M_1 = \Phi \rbr{\frac{1}{\gamma_0 + 1} \mathrm{diag}(\pi_0) }\Phi^T = \sum \limits_{i = 1}^K \frac{\pi_{0i}}{\gamma_0+1} \phi_i \otimes \phi_i.
\end{align}
\item[Order 3 tensor:] The third-order tensor is defined in the form of $S_3 := \sum_{i = 1}^K C_6 \cdot \pi_{0i} \cdot \phi_i \otimes \phi_i \otimes \phi_i,$ and can be derived using $M_1,$ $M_2$ and $M_3$ by applying the same technique of decomposing the matrix or tensor into a sum of symmetric tensors and a diagonal tensor. The derivation details for the multi-layer HDP tensors are provided in the Appendix.
\end{description*}
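The Dirichlet moment computation above is again easy to sanity-check by simulation (with arbitrary choices of $K$, $\gamma_0$ and $\pi_0$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(6)
K, g0, n = 5, 2.0, 400_000
pi0 = rng.dirichlet(np.ones(K))

P = rng.dirichlet(g0 * pi0, size=n)          # pi ~ Dirichlet(g0 * pi0)
A_emp = P.T @ P / n                          # empirical E[pi pi^T]
A = (g0 * np.outer(pi0, pi0) + np.diag(pi0)) / (g0 + 1.0)
print(np.abs(A_emp - A).max())               # small, e.g. ~1e-3
\end{verbatim}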
\noindent Before stating the generalized tensors for the HDP, we define $M_r^{{\bf i}}$ as the $r$-th moment at node ${\bf i}$. The moment can be obtained by averaging corresponding moments of its child nodes.
\begin{align}
\label{eq:mi-general}
M_r^{{\bf i}} := \frac{1}{|c({\bf i})|} \sum \limits_{{\bf j} \in c({\bf i})} M_r^{{\bf j}}
\end{align}
starting with $M_r^{{\bf i}} = \Eb \sbr{\otimes_{s=1}^r x_{{\bf i} s}} $ whenever ${\bf i}$ represents a leaf node. In other words, for an $L$-layer model, after obtaining moments at the leaf nodes (e.g.\ moments on layer $L-1$), we are able to calculate the moments $M_r^{{\bf i}}$ for a node ${\bf i}$ on layer $L-2$ by averaging the associated moments over all of its children.
Lemma \ref{lem:hdp3moments_general} gives the generalized tensors for the HDP with an arbitrary number of layers. Using Lemma \ref{lem:hdp3moments_general}, the coefficients and moments for different hierarchical trees can be derived recursively in a bottom-up fashion, i.e., the coefficients for a $k$-layer HDP can be derived from the coefficients of a $(k-1)$-layer HDP, and the moments at a node ${\bf i}$ can be derived from the moments calculated at its children, $c({\bf i})$. The recursive rule is provided in Lemma \ref{lem:hdp3moments_general}.
\vspace{-2mm}
\begin{lemma}[Symmetric Tensors of the HDP]
\label{lem:hdp3moments_general}
Given an $L$-level HDP with hyperparameters $\gamma_{l}$, the symmetric tensors for a node ${\bf i}$ at layer $l$ can be expressed as:
\begin{align*}
S_1^{{\bf i}} &:= M_1^{{\bf i}} = T(\pi_{{\bf i}}, \Phi ),\,\, S_2^{{\bf i}} := M_2^{{\bf i}} - C_2^{(l)} \cdot S_1^{{\bf i}} {S_1^{{\bf i}}}^T = T( C_3^{(l)} \cdot \mathrm{diag}\rbr{\pi_{{\bf i}} }, \Phi,\Phi),\\
S_3^{{\bf i}} & := M_3^{{\bf i}} - C_4^{(l)} \cdot S_1^{{\bf i}} \otimes S_1^{{\bf i}} \otimes S_1^{{\bf i}} -C_5^{(l)} \cdot \symm_3\sbr{ S_2^{{\bf i}} \otimes M_1^{{\bf i}}}= T( C_6^{(l)} \cdot \mathrm{diag}\rbr{\pi_{{\bf i}}}, \Phi,\Phi,\Phi),
\end{align*}
where
\begin{align}
C_2^{(l)} &= \frac{\gamma_{l+1} }{ \gamma_{l+1} +1 } C_2^{(l+1)},\,\, C_3^{(l)} = C_3^{(l+1)} + \frac{C_2^{(l)}}{\gamma_{l+1} }, \,\,C_4^{(l)} = \frac{{\gamma_{l+1}}^2 }{ \rbr{\gamma_{l+1} +1}\rbr{\gamma_{l+1} +2} } C_4^{(l+1)} \nonumber\\
C_5^{(l)} &= \frac{\gamma_{l+1} }{ \rbr{\gamma_{l+1} +1} } \frac{C_3^{(l+1)}}{C_3^{(l)}} C_5^{(l+1)} + \frac{1}{\gamma_{l+1}C_3^{(l)}}C_4^{(l)}, \,\,C_6^{(l)} = C_6^{(l+1)} + 3 \cdot \frac{ C_5^{(l)}C_3^{(l)}}{\gamma_{l+1}} - \frac{C_4^{(l)}}{\gamma_{l+1}^2} \nonumber
\end{align}
with initialization on the bottom layer (the $(L - 1)$-th layer) being $C_2^{(L-1)} = 1, $ $C_3^{(L-1)} =0 ,$ $C_4^{(L-1)} =1 ,$ $C_5^{(L-1)} =0,$
and $C_6^{(L-1)} = 0.$
\end{lemma}
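The coefficient recursion is straightforward to implement. The following sketch (the function name is ours) computes $C_2^{(l)}, \ldots, C_6^{(l)}$ bottom-up exactly as stated in the lemma; the indexing of the $\gamma_l$ follows the recursion above.
\begin{verbatim}
def hdp_coefficients(gammas):
    """Bottom-up recursion of the lemma above.
    gammas[l] is the concentration parameter at level l, l = 0..L-1."""
    L = len(gammas)
    C2, C3, C4, C5, C6 = ({L - 1: v} for v in (1.0, 0.0, 1.0, 0.0, 0.0))
    for l in range(L - 2, -1, -1):
        g = gammas[l + 1]
        C2[l] = g / (g + 1) * C2[l + 1]
        C3[l] = C3[l + 1] + C2[l] / g
        C4[l] = g**2 / ((g + 1) * (g + 2)) * C4[l + 1]
        C5[l] = g / (g + 1) * (C3[l + 1] / C3[l]) * C5[l + 1] \
                + C4[l] / (g * C3[l])
        C6[l] = C6[l + 1] + 3 * C5[l] * C3[l] / g - C4[l] / g**2
    return C2, C3, C4, C5, C6

# Two-level HDP with (gamma_0, gamma_1) = (2, 3): coefficients at the root.
C2, C3, C4, C5, C6 = hdp_coefficients([2.0, 3.0])
print(C2[0], C3[0], C6[0])
\end{verbatim}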
\section{Spectral Algorithms for the IBP and the HDP}
\vspace{-2mm}
\label{sec:alg}
Here we introduce a way to estimate moments on the leaf nodes, which
are used to estimate the diagonalized tensors.
Next, we provide two simple methods for estimating the number of topics, $k.$
Finally we review Excess Correlation Analysis (ECA) and several tensor
decomposition techniques that are used to obtain the estimated topic vectors.
\paragraph{Moment estimation}
For the IBP, we can directly estimate the moments by replacing the theoretical moments with their empirical version. The interesting part comes in the moment estimation for the multi-layer HDP.
An $L$-level HDP can be viewed as an $L$-level tree, where each node
represents a DP. The estimated moments for the whole model can
be calculated recursively by Equation \eqref{eq:mi-general} and the empirical
$r$-th order moments at the leaf node ${\bf i}$ which are defined as:
\begin{align*}
\hat{M}_r^{{\bf i}} & := \varphi_r\rbr{\mathbf{x}_{{\bf i}} } \text{ where }
\varphi_r(\mathbf{x}_{{\bf i}}) := \frac{(m_{\bf i}-r) !}{m_{\bf i} !}\sum \limits_{j_1 \neq j_2 \neq \cdots \neq j_r} x_{{\bf i} j_1} \otimes x_{{\bf i} j_2} \cdots \otimes x_{{\bf i} j_r},
\end{align*}
where $m_{\bf i}$ is the number of words in the observation $x_{{\bf i}}$.
Here $(x_{{\bf i} j_1}, x_{{\bf i} j_2}, \cdots, x_{{\bf i} j_r})$ denote the ordered tuples of distinct indices in
$\mathbf{x_{{\bf i}}}$, with $x_{{\bf i} j}$ encoded as a binary vector, i.e.\ $x_{{\bf i} j}= e_k$
iff the $j$-th token is word $k$. The empirical tensors are obtained by plugging these empirical
moments into the tensor equations derived in the previous section.
The concentration of measure bounds for these estimated
quantities are given in Section \ref{sec:concentration}.
\paragraph{Inferring the number of mixtures}
We first present the method of inferring the number of
latent features, $K$, which can be viewed as the rank of the covariance
matrix, for models with additive noise.
An efficient way of avoiding eigen decomposition on a $d \times
d$ matrix is to find a low-rank approximation $R \in \mathbb{R}^{d \times K'}$
such that $K < K' \ll d$ and $R$ spans the same space as the covariance
matrix. One efficient way to find such matrix is to set $R$ to be
\begin{equation}
R = \rbr{M_2 - M_1 \otimes M_1}\Theta,
\end{equation}
where $\Theta \in \mathbb{R}^{d \times K'}$ is a random matrix with
entries sampled independently from a standard normal. This is
described, e.g.\ by \cite{HalMarTro09}. Since there is noise in the
data, we will not get exactly $K$ non-zero eigenvalues
with the remainder being constant at the noise floor $\sigma^2$. An
alternative strategy to thresholding by $\sigma^2$ is to determine $K$
by seeking the largest slope on the curve of sorted eigenvalues.
For the HDP, in contrast to the Chinese Restaurant Franchise where the number
of mixture components, $k,$ is
settled by means of repeated sampling in the sampling-based algorithms, we use an approach that
directly infers $k$ from data itself.
The concatenation of all the first-order moments spans the space of $\Phi$ with high probability. Thus, the number of
linearly independent mixtures $k$ is close to the rank of the matrix
$\tilde{M}_1,$ where each column corresponds to the first order
moment on one of the leaf nodes.
While direct calculation of the rank of $\tilde{M}_1$
is expensive, one can estimate $k$ by the following procedure:
draw a random matrix $R \in \mathbb{R}^{n_l \times k'}$ for some $k'
\geq k,$ and examine the eigenvalues of $ \tilde{M_1'} =
\rbr{\tilde{M_1}R}^T\rbr{\tilde{M_1}R} \in \mathbb{R}^{k' \times
k'}$. We estimate the rank of $\tilde{M_1}$ to be the point where the magnitude of the eigenvalues decreases abruptly.
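A sketch of this rank estimation on a synthetic low-rank matrix, using a Gaussian random projection and the largest drop in the sorted spectrum (the helper name is ours):
\begin{verbatim}
import numpy as np

def estimate_rank(M, k_prime, rng):
    """Estimate rank(M) via random projection and the largest eigengap."""
    R = rng.standard_normal((M.shape[1], k_prime))
    G = (M @ R).T @ (M @ R)                  # k' x k' Gram matrix
    w = np.sort(np.linalg.eigvalsh(G))[::-1]
    w = np.sqrt(np.maximum(w, 0))            # singular values of M R
    drops = w[:-1] / (w[1:] + 1e-12)         # successive spectral ratios
    return int(np.argmax(drops)) + 1

rng = np.random.default_rng(7)
M = rng.standard_normal((100, 4)) @ rng.standard_normal((4, 60))
M += 1e-3 * rng.standard_normal((100, 60))   # noise floor
print(estimate_rank(M, k_prime=10, rng=rng))  # -> 4
\end{verbatim}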
\paragraph{Excess Correlation Analysis (ECA)}
We then apply Excess Correlation Analysis (ECA) to infer the hidden topics, $\Phi.$
Dimensionality reduction and whitening are then performed on the diagonalized tensor
at the root node, i.e., $\hat{S}_r^0,$ to make its eigenvectors orthogonal and
to project to a lower dimensional space. We whiten the observations by multiplying the
data with a whitening matrix $W \in \mathbb{R}^{d \times K}$.
This is computationally efficient, since we can apply this
directly to $x$, thus yielding third and fourth order tensors $W_3$
and $W_4$ of size $k \times k \times k$ and
$k \times k \times k \times k$, respectively.
Moreover, approximately factorizing $S_2$ is a
consequence of the decomposition and random projection techniques
arising from \cite{HalMarTro09}.
To find the singular vectors of $W_3$ and $W_4,$ we use tensor decomposition
techniques to obtain their eigenvectors. From the eigenvectors
found in the last step, $\Phi$ can be recovered by multiplying with the weighted
pseudo-inverse $W^\dag$. The fact that this algorithm
only needs projected tensors makes it very efficient. Streaming
variants of the robust tensor power method are a subject of future
research. Below we introduce
the tensor decomposition techniques required by our algorithms.
\paragraph{Tensor Decomposition}
With the derived symmetric tensors, we need to separate the hidden
vectors $\Phi,$ the latent distribution $\pi$, and the additive
noise, as appropriate. In a nutshell the approach is as follows: we
first identify the noise floor using the assumption that the number of
nonzero probabilities in $\pi$ is lower than the dimensionality of the
data. Secondly, we use the noise-corrected second order tensor to
whiten the data. This is akin to methods used in ICA
\citep{Cardoso98}. Finally, we perform tensor decomposition on the
data to obtain \smash{$S_3$ and $S_4$,} or rather, their applications to
data. Note that the eigenvalues in the re-scaled tensors differ slightly
since we use {$S_2^{\dag \frac{1}{2}} x$} directly rather than $x$.
There are several tensor decomposition algorithms that can be applied.
\cite{AnaGeHsuKakTel12} showed that the robust tensor power method has
nice theoretical convergence properties. However, this approach is slow in
practice. An alternative is alternating least squares (ALS), which unfolds the
third order tensor into a matrix and treats the tensor decomposition as
a least squares problem. However, ALS is not stable and is not guaranteed
to converge to the global minimum. Recently, \cite{WanTunSmoAna15} proposed
a fast tensor power method using count sketches with the FFT. The method is
shown to be faster than the robust tensor power method by a factor of $10$ to $100$.
In this work, we show how different solvers affect the performance in both time and perplexity. We briefly review these solvers.
\paragraph{Tensor Decomposition 1: Robust Tensor Power Method}
\label{sec:rptm}
Our reasoning follows that of \cite{AnaGeHsuKakTel12}. It is our goal
to obtain an \emph{orthogonal} decomposition of the tensors $S_i$
into an orthogonal matrix $V$ together with a set of corresponding
eigenvalues $\lambda$ such that $S_i = T[\mathrm{diag}(\lambda),
V^\top, \ldots, V^\top]$. This is accomplished by generalizing the
Rayleigh quotient and power iterations
described in \citep[Algorithm 1]{AnaGeHsuKakTel12}:
\begin{align}
\label{eq:tensorpower}
\theta \leftarrow {T[S, \one, \theta, \ldots,
\theta]}
\text{ and }
\theta \leftarrow \nbr{\theta}^{-1} \theta.
\end{align}
In a nutshell, we use a suitable number of random initializations $L$,
perform a few iterations ($T$) and then proceed with the most
promising candidate for another $T$ iterations. The rationale for
picking the best among $L$ candidates is that we need a high
probability guarantee that the selected initialization is
non-degenerate. After finding a good candidate and normalizing its
length we deflate (i.e.\ subtract) the term from the tensor $S$.
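A compact sketch of one round of this procedure for a dense, explicitly stored symmetric third-order tensor ($L$ restarts, $T$ iterations, followed by deflation; function names are ours):
\begin{verbatim}
import numpy as np

def power_iteration(S, L=10, T=20, rng=None):
    """Extract one (eigenvalue, eigenvector) pair of a symmetric,
    orthogonally decomposable 3-tensor S via random restarts."""
    rng = rng or np.random.default_rng()
    k = S.shape[0]
    best, best_val = None, -np.inf
    for _ in range(L):
        theta = rng.standard_normal(k)
        theta /= np.linalg.norm(theta)
        for _ in range(T):
            theta = np.einsum('ijk,j,k->i', S, theta, theta)
            theta /= np.linalg.norm(theta)
        val = np.einsum('ijk,i,j,k->', S, theta, theta, theta)
        if val > best_val:
            best, best_val = theta, val
    for _ in range(T):                       # polish the best candidate
        best = np.einsum('ijk,j,k->i', S, best, best)
        best /= np.linalg.norm(best)
    lam = np.einsum('ijk,i,j,k->', S, best, best, best)
    return lam, best

def deflate(S, lam, v):
    """Subtract the recovered rank-one term from S."""
    return S - lam * np.einsum('i,j,k->ijk', v, v, v)
\end{verbatim}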
\paragraph{Tensor Decomposition 2: Alternating Least Square (ALS)}
\label{sec:td2}
Another commonly used method for
solving tensor decompositions is the alternating least squares method. The
main idea is to unfold the tensor into a matrix and then
minimize the Frobenius norm of the difference:
\begin{align}
\label{eq:als}
&\min \nbr{ [S_{3}(W,W,W)]_{(1)} - V \mathrm{diag}({\bf \lambda}) (V\odot V)^T}_F
\end{align}
where the definition of the operators used are:
\begin{align}
&S_{(1)} = \Big[ S[:,:,1] \,\,\, S[:,:,2] \, \cdots \, S[:,:,K]\Big] \\
& V \odot V = [v_1 \boxplus v_1 \,\,\, v_2 \boxplus v_2 \, \cdots \, v_K \boxplus v_K ].
\end{align}
The notation $\odot$ denotes the Khatri-Rao product and $\boxplus$ denotes
the Kronecker product. Taking the second and third $V$ in the objective function
\eqref{eq:als} as some fixed matrices, we get the closed form solution of the
optimization problem as:
\begin{align}
\nonumber
V \mathrm{diag}({\bf \lambda}) = [S_{3}(W,W,W)]_{(1)}\rbr{V \odot V}\rbr{(V^TV).\wedge2 }^{\dag}
\end{align}
where the notation $.\wedge$ denotes point-wise power. By iteratively updating the matrix $V$ until it converges, we solve the optimization problem in \eqref{eq:als}.
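One such ALS update in numpy, assuming a symmetric whitened tensor so that the choice of unfolding is immaterial (a sketch only, without convergence safeguards):
\begin{verbatim}
import numpy as np

def als_step(S_unfold, V):
    """One ALS update: solve for V diag(lambda) in closed form with the
    other two factors fixed, then re-normalize the columns.
    S_unfold is the k x k^2 mode-1 unfolding of the whitened tensor."""
    k = V.shape[1]
    KR = np.einsum('ir,jr->ijr', V, V).reshape(-1, k)  # Khatri-Rao V (.) V
    G = (V.T @ V) ** 2                                 # (V^T V) .^ 2
    VL = S_unfold @ KR @ np.linalg.pinv(G)             # V diag(lambda)
    lam = np.linalg.norm(VL, axis=0)
    return VL / lam, lam
\end{verbatim}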
\paragraph{Tensor Decomposition 3: Fast Tensor via sketching (FC) }
\label{sectd3}
\cite{WanTunSmoAna15} introduced a tensor CANDECOMP/PARAFAC (CP)
decomposition algorithm based on tensor sketching. Tensor sketches are constructed
by hashing elements into fixed length sketches by their index. With the special property
of count sketch, power iteration described in Equation \eqref{eq:tensorpower} is
transformed into convolution operations and can be calculated using the FFT and
inverse FFT. The method is faster than the standard robust tensor power method by a factor of $10$ to $100$.
\noindent {\bfseries Further Details on the projected tensor power method.}
Explicitly calculating tensors $M_2,M_3,M_4$ is not practical in high
dimensional data. It may not even be desirable to compute the
projected variants of $M_3$ and $M_4$, that is, $W_3$ and $W_4$ (after
suitable shifts). Instead, we can use kernel tricks to
simplify the tensor power iterations to
\begin{align}
\nonumber
W^\top T(M_l, \one, Wu, \ldots, W u)
= \frac{1}{m} \sum_{i=1}^m {W^\top x_i}
\inner{ x_i}{W u}^{l-1}
= \frac{W^\top}{m} \sum_{i=1}^m {x_i}
\inner{W^\top x_i}{u}^{l-1}
\end{align}
By using incomplete expansions, memory complexity and storage are
reduced to $O(d)$ per term. Moreover, precomputation is $O(d^2)$ and
can be accomplished in the first pass through the data.
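In code, one such implicit step against the raw moment $M_l$ is a single pass over the data; the shift terms of $S_l$ are handled analogously (a sketch under the same notation, with $X \in \mathbb{R}^{m \times d}$ and $W \in \mathbb{R}^{d \times k}$):
\begin{verbatim}
import numpy as np

def implicit_power_step(X, W, u, l=3):
    """W^T T(M_l, 1, Wu, ..., Wu) computed without forming M_l:
    (1/m) sum_i (W^T x_i) <W^T x_i, u>^(l-1)."""
    P = X @ W                    # m x k; precomputable in one data pass
    return P.T @ (P @ u) ** (l - 1) / X.shape[0]
\end{verbatim}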
The overall spectral algorithms for linear-Gaussian models with an IBP
prior and for the HDP are shown in Algorithm \ref{alg:eca} and
Algorithm \ref{alg:hdpeca}, respectively.
\begin{algorithm}[t]
\caption{Excess Correlation Analysis for Linear-Gaussian model with IBP prior
\label{alg:eca}}
\textbf{Inputs: } the moments $M_1, M_2, M_3, M_4$.
\begin{algorithmic}[1]
\STATE \textbf{Infer K and $\sigma^2$:}
\STATE Optionally find a subspace $R \in \mathbb{R}^{d \times K'}$ with
$K<K'$ by random projection.
\[
\mathsf{Range}\rbr{R} = \mathsf{Range}\rbr{M_2 - M_1 \otimes M_1}
\text{ and project down to } R
\]
\STATE Set $\sigma^2 := \lambda_{\mathrm{min}} \rbr{M_2 - M_1 \otimes M_1}$
\STATE Set $S_2 = \rbr{M_2 - M_1 \otimes M_1 - \sigma^2 \one}_\epsilon$
by truncating to eigenvalues larger than $\epsilon$
\STATE Set $K = \mathop{\mathrm{rank}} S_2$
\STATE Set $W = U\Sigma^{-\frac{1}{2}}$, where $[U, \Sigma] = \mathrm{svd}(S_2)$
\STATE \textbf{Whitening:} (best carried out by preprocessing $x$)
\STATE Set $W_3 := T(S_3, W, W, W)$ and $W_4 := T(S_4, W, W,
W, W)$
\STATE \textbf{Tensor Decomposition: }
\STATE Compute the top $K_1$ (eigenvalue, eigenvector) pairs of $W_3$ such that all the eigenvalues have absolute value larger than $1$.
\STATE Deflate $W_4$ with $(\lambda_i, v_i)$ for all $i \leq K_1$ and obtain top $K - K_1$ (eigenvalue, eigenvector) pairs $(\lambda_i, v_i)$ of deflated $W_4$
\STATE \textbf{Reconstruction: } With corresponding eigenvalues $\cbr{\lambda_1, \cdots, \lambda_K}$ and the set $\Lambda$ of recovered eigenvectors, return the set $\Phi$:
\begin{align}
\Phi= \cbr{\frac{1}{Z_i} \rbr{W^{\dag}}^\top v_i: v_i \in \Lambda} \label{eq:findA}
\end{align}
where $Z_i = \sqrt{\pi_i - \pi_i^2}$ with $\pi_i = f^{-1}(\lambda_i)$. $f(\pi) = \frac{-2\pi+1}{\sqrt{\pi - \pi^2}} $ if $i \in \sbr{K_1}$ and $f(\pi)= \frac{6\pi^2-6\pi+1}{\pi-\pi^2} $ otherwise. (The proof of Equation \eq{eq:findA} is provided in the Appendix.)
\end{algorithmic}
\end{algorithm}
\begin{algorithm}
\caption{Spectral Algorithm for HDP}
\label{alg:hdpeca}
\begin{algorithmic}[1]
\REQUIRE Observations $\mathbf{x}$
\STATE \textbf{Inferring mixture number} \\
Using all leaf nodes ${\bf i}_j$ of the HDP tree compute the rank $k$ of
$\tilde{M}_1:=\sbr{ M_1^{{\bf i}_1}, M_1^{{\bf i}_2},\cdots
M_1^{{\bf i}_{n^{L-2}}}}$.
\STATE \textbf{Moment estimation} \\
Compute moment estimates $\hat{M}_r^{0}$ and tensors $\hat{S}_r^{0}$.
\STATE \textbf{Dimensionality reduction and whitening} \\
Find $W\in \mathbb{R}^{d\times k}$ such that $W^T\hat{S}_2^0W = I_k$.
\STATE \textbf{Tensor decomposition} \\
Obtain eigenvectors $v_i$ and eigenvalues $\lambda_i$ of $\hat{S}_3^0$.
\STATE \textbf{Reconstruction}
\begin{align}
\label{eq:find0}
\text{Result set }
\hat{\Phi} = \cbr{ \lambda_i \frac{C_3}{C_6} \rbr{W^{\dag}}^Tv_i},
\end{align}
where $C_3$ and $C_6$ are coefficients defined in Lemma
\ref{lem:hdp3moments_general}.
\end{algorithmic}
\end{algorithm}
\section{Concentration of Measure Bounds}
\label{sec:concentration}
There exist a number of concentration of measure inequalities for
\emph{specific} statistical models using rather specific moments
\citep{AnaFosHsuKakLiu12}. In the following we derive a general tool
for bounding such quantities, both for the case where the statistics
are bounded and for unbounded quantities alike. Our analysis borrows
from \cite{AltSmo06} for the bounded case, and from the average-median
theorem, see e.g. \cite{AloMatSze99}, for dealing with unbounded random variables with bounded higher order moments.
\subsection{Concentration measure of moments}
\subsubsection{Bounded Moments}
We begin with the analysis for bounded moments. Denote by $\phi: \Xcal
\to \Fcal$ a set of statistics on $\Xcal$ and let $\phi_r$ be the
$r$-times tensorial moments obtained from $x$.
\begin{align}
\phi_1(x) & := \phi(x); \; \;\; \phi_2(x) := \phi(x) \otimes
\phi(x); \; \;\; \phi_r(x) := \phi(x) \otimes \ldots \otimes \phi(x)
\end{align}
In this case we can define inner
products via
\begin{align*}
k_r(x,x') := \inner{\phi_r(x)}{\phi_r(x')} =
T[\phi_r(x), \phi(x'), \ldots, \phi(x')] =
\inner{\phi(x)}{\phi(x')}^r = k^r(x,x')
\end{align*}
as reductions of the statistics of order $r$ for a kernel $k(x,x') :=
\inner{\phi(x)}{\phi(x')}$. Finally, denote by
\begin{align}
\label{eq:mi-general2}
M_r := \Eb_{x \sim p(x)}[\phi_r(x)]
\text{ and }
\hat{M}_r := \frac{1}{m} \sum_{j=1}^m \phi_r(x_j)
\end{align}
the expectation and empirical averages of $\phi_r$. Note that these
terms are identical to the statistics used in
\cite{GreBorRasSchetal12} whenever a polynomial kernel is used. It is
therefore not surprising that an analogous concentration of measure
inequality to the one proven by \cite{AltSmo06} holds:
\begin{theorem}
\label{th:momentbounds}
Assume that the sufficient statistics are bounded via $\nbr{\phi(x)}
\leq R$ for all $x \in \Xcal$. Then for any $\delta \in (0,1)$
the following guarantee holds:
\begin{align}
\nonumber
\Pr\cbr{\sup_{u: \nbr{u} \leq 1} \abr{ T(M_r, u,\cdots, u) -T(\hat{M}_r, u, \cdots, u)} > \epsilon_r} \leq \delta
\text{ where }
\epsilon_r \leq \frac{\sbr{2 + \sqrt{-2\log\delta}} R^r}{\sqrt{m}}.
\end{align}
\end{theorem}
\begin{proof}
Denote by $X$ the $m$-sample used in generating $\hat{M}_r$. Moreover,
denote by
\begin{align}
\label{eq:xi-dev}
\Xi[X] := \sup_{u: \nbr{u} \leq 1} \abr{T[M_r, u,\cdots, u] -T[\hat{M}_r, u, \cdots, u]}
\end{align}
the largest deviation between empirical and expected moments, when
applied to the test vectors $u$. Bounding this quantity directly is
desirable since it allows us to avoid having to derive
\emph{pointwise} bounds with regard to $M_r$. We prove that
$\Xi[X]$ is concentrated using the bound of
\cite{McDiarmid89}. Substituting single observations in $\Xi[X]$ yields
\begin{align}
\abr{\Xi[X] - \Xi[(X \backslash \cbr{x_j}) \cup \cbr{x'}]}
& \leq \frac{1}{m} \sbr{T\sbr{\phi_r(x_j) - \phi_r(x'), u, \ldots u}} \\
& \leq \frac{1}{m} \sbr{\nbr{\phi(x_j)}^r + \nbr{\phi(x')}^r}
\leq \frac{2}{m} R^r.
\end{align}
Plugging the bound of $2 R^r/m$ into McDiarmid's theorem shows that
the random variable $\Xi[X]$ is concentrated for $\Pr\cbr{\Xi[X] -
\Eb_{X}[\Xi[X]] > \epsilon} \leq \delta$ with probability $\delta
\leq \exp\rbr{-\frac{m \epsilon^2}{2 R^{2r}}}$. Solving the bound for
$\epsilon$ shows that with probability at least $1-\delta$ we have
that $\epsilon \leq \sqrt{-2 \log \delta/m} R^r$.
The next step is to bound the expectation of $\Xi[X]$. For this we
exploit the ghost sample trick and the convexity of expectations. This
leads to the following:
\begin{align}
\Eb_{X}\sbr{\Xi[X]}
\leq& \Eb_{X, X'} \sbr{\sup_{u: \nbr{u} \leq 1}
\abr{T[M_r, u,\cdots, u] -T[\hat{M}_r, u, \cdots, u]}} \nonumber \\
=& \Eb_{\sigma} \Eb_{X, X'} \sbr{\sup_{u: \nbr{u} \leq 1}
\abr{\frac{1}{m} \sum_{j=1}^m \sigma_j \rbr{T[\phi_r(x_j),u,\cdots, u] -
T[\phi_r(x_j')u,\cdots, u ]}}} \nonumber \\
\leq & \frac{2}{m} \Eb_{\sigma} \Eb_{X} \sbr{\sup_{u: \nbr{u} \leq 1}
\abr{\sum_{j=1}^m \sigma_j T[\phi_r(x_j), u,\cdots, u]}} \\
\leq & \frac{2}{m} \Eb_{\sigma} \Eb_{X} \sbr{\nbr{\sum_{j=1}^m
\sigma_j \phi_r(x_j)}}
\leq \frac{2}{m} \Eb_{X} \sbr{\Eb_{\sigma} \sbr{\nbr{\sum_{j=1}^m
\sigma_j \phi_r(x_j)}^2}}^{\frac{1}{2}}
\leq \frac{2 R^r}{\sqrt{m}}
\end{align}
Here the first inequality follows from convexity of the
argument. The subsequent equality is a consequence of the fact that
$X$ and $X'$ are drawn from the same distribution, hence a swapping
permutation with the ghost sample leaves the terms unchanged; the
following inequality is an application of the triangle
inequality. Next we use the Cauchy-Schwarz inequality, convexity,
and finally the fact that $\nbr{\phi(x)} \leq R$. Combining both bounds
yields $\epsilon \leq \sbr{2 + \sqrt{-2 \log \delta}} R^r/\sqrt{m}$.
\end{proof}
\vspace{-5mm}
Using tensor equations derived in Section \ref{sec:moments}, this means that we have concentration of
measure immediately for the symmetric tensors $S_1, \ldots S_4$.
In particular, we need a chaining result that allows us to compute bounds for products of terms
efficiently.
To prove the guarantees for tensors, we rely on the triangle
inequality on tensorial reductions
\begin{align}
\sup_u \abr{T(A+B,u) -T(A'+B',u)}
\leq
\sup_u \abr{T(A,u) - T(A',u)} +
\sup_u \abr{T(B,u) - T(B',u)} \nonumber
\end{align}
and moreover, the fact that for products of bounded random variables
the guarantees are additive, as stated in the lemma below:
\begin{lemma}
\label{lem:chaining}
Denote by $f_i$ random variables and by $\hat{f}_i$ their
estimates. Moreover, assume that each of them is bounded via $|f_i|
\leq R_i$ and $|\hat{f}_i| \leq R_i$ and
\begin{align}
\label{eq:bound-eff}
\Pr\cbr{|\Eb[f_i] - \hat{f}_i| > \epsilon_i} \leq \delta_i.
\end{align}
In this case the product is bounded via
\begin{align}
\label{eq:product}
\Pr\cbr{\abr{\prod_i \Eb[f_i] - \prod_i \hat{f}_i} > \epsilon} \leq
\sum_i \delta_i \;\;\;
\text{ where }
\epsilon = \sbr{\prod_i R_i} \sbr{\sum_i \frac{\epsilon_i}{R_i}}
\end{align}
\end{lemma}
\begin{proof}
We prove the claim for two variables, say $f_1$ and $f_2$. We have
\begin{align*}
\abr{\Eb[f_1] \Eb[f_2] - \hat{f}_1 \hat{f}_2} &\leq
\abr{(\Eb[f_1] - \hat{f}_1) \Eb[f_2]} + \abr{\hat{f}_1 (\Eb[f_2] - \hat{f}_2)}
\leq
\epsilon_1 R_2 + R_1 \epsilon_2
\end{align*}
with probability at least $1 - \delta_1 - \delta_2$, when applying
the union bound over $\Eb[f_1] - \hat{f}_1$ and $\Eb[f_2] -
\hat{f}_2$ respectively. Rewriting terms yields the claim for $n =
2$. To see the claim for $n > 2$ simply use the fact that we can
decompose the bound into a chain of inequalities involving exactly
one difference, say $\Eb[f_i] - \hat{f}_i$ and $n-1$ instances of
$\Eb[f_j]$ or $\hat{f}_j$ respectively. We omit details since they
are straightforward to prove (and tedious).
\end{proof}
By utilizing an approach similar to
\cite{AnaFosHsuKakLiu12}, overall guarantees for
reconstruction accuracy can be derived.
\subsubsection{Unbounded Moments}
We are interested in proving concentration of measure for the four tensors in \eq{eq:gaussian-tensor-1}, \eq{eq:gaussian-tensor-2}, \eq{eq:gaussian-tensor-3},
\eq{eq:gaussian-tensor-4} and the scalar in
\eq{eq:gaussian-variance-1}. Whenever the statistics are unbounded,
concentration of moment bounds are less trivial and require the use of
subgaussian and gaussian inequalities \citep{HsuKakZha09}. We derive a
bound for fourth-order subgaussian random variables (previous work
only derived bounds up to third order). Lemmas \ref{lem:submoment} and
\ref{lem:moments} detail how to obtain such guarantees.
\paragraph{Concentration measure of unbounded moments for the spectral IBP}
Here we demonstrate an example for the linear model with Gaussian noise. The concentration behavior is more complicated than that of the bounded moments in Theorem \ref{th:momentbounds} due to the additive Gaussian noise. Here we restate the model as
\begin{align}
x= \Phi z + \epsilon.
\end{align}
In order to utilize the bounds for Gaussian random vectors, we need to bound the difference between empirical moments and expectations. The bounds for observations generated by different $z$ are examined separately. Let $B = \cbr{x_1, x_2, \cdots, x_n}$ and, for a specific $z_i \in \{0,1\}^K$, write $B_{z_i} := \cbr{x \in B: z = z_i}$ and $\hat{w}_{z_i} = \abr{B_{z_i}}/\abr{B}$ for $i \in \cbr{0,1 \cdots 2^K-1}$ and $z_i = \mathrm{binary} (i)$. Define the conditional moments and their corresponding empirical moments as
\begin{align}
M_{r, z_i} &:= \Eb \sbr{x^{\otimes r} | z = z_i}, \,\,\, \hat{M}_{r, z_i} := \abr{B_{z_i}}^{-1} \sum \limits_{x \in B_{z_i}} x^{\otimes r}.
\end{align}
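For concreteness, the empirical conditional moments $\hat{M}_{r, z_i}$ can be computed as in the following NumPy sketch; the data and binary codes below are hypothetical placeholders.
\begin{verbatim}
import numpy as np

def empirical_conditional_moment(X, Z, z, r):
    # X: (n, d) observations; Z: (n, K) binary codes; z: (K,) code.
    # Returns the order-r empirical moment of the group B_z.
    group = X[np.all(Z == z, axis=1)]
    M = np.zeros((X.shape[1],) * r)
    for x in group:
        T = x
        for _ in range(r - 1):
            T = np.multiply.outer(T, x)  # builds the r-fold power of x
        M += T
    return M / max(len(group), 1)

X = np.random.randn(100, 3)
Z = np.random.randint(0, 2, size=(100, 2))
M2_hat = empirical_conditional_moment(X, Z, np.array([1, 0]), r=2)
\end{verbatim}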
\begin{lemma}{(Concentration of conditional empirical moments)}
\label{lem:submoment}
Given scalars $r, K, \delta, w, n,$ and $l,$ we define four functions
\begin{align*}
b_1(r, K, \delta, w, n, l) &= \sqrt{ \frac{r +2\sqrt{r \ln \rbr{l \cdot 2^K/\delta} }
+ 2 \ln \rbr {l \cdot 2^K/\delta} }{w \cdot n} },\\
b_2(r, K, \delta, w, n, l) &= \sqrt{\frac{128 \rbr{r \ln 9 + \ln \rbr{l \cdot 2^{K+1}/\delta}}}{w \cdot n} }+
\frac{4 \rbr{r \ln9+ \ln \rbr{l \cdot 2^{K+1}/\delta}}}{w \cdot n} ,\\
b_3(r, K, \delta, w, n, l) &= \sqrt{ \frac{108e^3 \lceil r \ln 13 + \ln \rbr{l \cdot 2^K/\delta} \rceil^3}{w \cdot n} } ,\\
b_4(r, K, \delta, w, n, l) &= \sqrt{\frac{8192 \rbr{r\ln 17 +\ln \rbr{l \cdot 2^{K+1} /\delta}}^2 }{(w \cdot n)^2} +\frac{32\rbr{r\ln 17 +\ln \rbr{l\cdot 2^{K+1}/\delta}}^3 }{(w \cdot n)^3} } ,
\end{align*}
Pick any $\delta \in \rbr{0,1}$ and any random matrix $V \in \mathbb{R}^{d \times r}$ of rank $r$. Then, with probability greater than $1 - \delta$, the following guarantees hold.\\
1. For the first-order moments, we have, for $ i \in \cbr{0,1, \cdots, 2^K-1},$
\begin{align*}
\nbr{T \rbr{\hat{M}_{1,z_i} - M_{1,z_i}, V}}_2 \leq \sigma \nbr{V}_2 b_1(r, K, \delta, \hat{w}_{z_i}, n, 1).
\end{align*}
2. For the second-order moments, we have, for $ i \in \cbr{0,1, \cdots, 2^K-1},$
\begin{align*}
\nbr{ T\rbr{\hat{M}_{2,z_i} - M_{2,z_i},V,V}}_2 \leq
& \sigma^2 \nbr{V}_2^2 b_2(r, K, \delta, \hat{w}_{z_i}, n, 2)\\
&+ 2 \sigma \nbr{V}_2 \nbr{V^\top M_{1,z_i}}_2 b_1(r, K, \delta, \hat{w}_{z_i}, n, 2).
\end{align*}
3. For the third-order moments, we have, for $ i \in \cbr{0,1, \cdots, 2^K-1},$
\begin{align*}
\nbr{ T\rbr{\hat{M}_{3,z_i} - M_{3,z_i},V,V,V}}_2 \leq &\sigma^3 \nbr{V}_2^3 b_3(r, K, \delta, \hat{w}_{z_i}, n, 3) \\
&+ 3 \sigma^2 \nbr{V^\top M_{1,z_i}}_2 \nbr{V}_2^2 b_2 (r, K, \delta, \hat{w}_{z_i}, n, 3)\\
& + 3\sigma \nbr{V^\top M_{1,z_i}}_2^2 \nbr{V}_2 b_1 (r, K, \delta, \hat{w}_{z_i}, n, 3).
\end{align*}
4. For the fourth-order moments, we have, for $ i \in \cbr{0,1, \cdots, 2^K-1},$
\begin{align*}
\nbr{ T \rbr{\hat{M}_{4,z_i} - M_{4,z_i},V,V,V,V}}_2 \leq& \sigma^4 \nbr{V}_2^4 b_4(r, K, \delta, \hat{w}_{z_i}, n, 4)\\
&+ 4 \sigma^3 \nbr{V^\top M_{1,z_i}}_2 \nbr{V}_2^3 b_3(r, K, \delta, \hat{w}_{z_i}, n, 4) \nonumber\\
& + 6\sigma^2 \nbr{V^\top M_{1,z_i}}_2^2 \nbr{V}_2^2 b_2(r, K, \delta, \hat{w}_{z_i}, n, 4)\nonumber\\
& + 4 \sigma \nbr{V^\top M_{1,z_i}}_2^3 \nbr{V}_2 b_1(r, K, \delta, \hat{w}_{z_i}, n, 4).
\end{align*}
\end{lemma}
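For reference, the four bound functions above can be transcribed directly into code, which is convenient for checking how the guarantees scale with the effective group size $w \cdot n$:
\begin{verbatim}
import numpy as np

# Direct transcription of b_1, ..., b_4 from the lemma above.
def b1(r, K, delta, w, n, l):
    t = np.log(l * 2.0**K / delta)
    return np.sqrt((r + 2*np.sqrt(r*t) + 2*t) / (w*n))

def b2(r, K, delta, w, n, l):
    t = r*np.log(9) + np.log(l * 2.0**(K+1) / delta)
    return np.sqrt(128*t / (w*n)) + 4*t / (w*n)

def b3(r, K, delta, w, n, l):
    t = np.ceil(r*np.log(13) + np.log(l * 2.0**K / delta))
    return np.sqrt(108 * np.e**3 * t**3 / (w*n))

def b4(r, K, delta, w, n, l):
    t = r*np.log(17) + np.log(l * 2.0**(K+1) / delta)
    return np.sqrt(8192*t**2 / (w*n)**2 + 32*t**3 / (w*n)**3)
\end{verbatim}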
The proof is provided in the Appendix; it is completed by summing the bounds for every term. Using these inequalities for the conditional moments, we obtain bounds for the full moments via the following lemma.
\begin{lemma}{(Lemma 6 in \cite{HsuKak12}; Concentration of empirical moments)}
\label{lem:moments}
For a fixed matrix $V \in \mathbb{R}^{d \times r}$,
\begin{align*}
&\nbr{T\rbr{\hat{M}_i - M_i,V, \cdots, V}}_2 \\
&\leq (1+ 2^{K/2} \varepsilon_w) \max_{z_j } \nbr{T\rbr{\hat{M}_{i,z_j} - M_{i,z_j},V, \cdots, V}}_2 + 2^{K/2}\max_{z_j} \nbr{T\rbr{M_{i,z_j}, V, \cdots, V}}_2\varepsilon_{w} \\
&\,\,\, \forall i \in [4],\ \forall j \in \cbr{0,1, \cdots, 2^K-1}
\end{align*}
where $\varepsilon_{w} = \rbr{ \sum \limits_{z_j} \rbr{ \hat{w}_{z_j}-w_{z_j} }^2}^{\frac{1}{2}} \leq \frac{1+\sqrt{\ln \rbr{1/\delta}}}{\sqrt{n}}$.
\end{lemma}
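A small sketch of how Lemma \ref{lem:moments} combines the conditional bounds into a bound on the full moments (all inputs below are hypothetical placeholders):
\begin{verbatim}
import numpy as np

def moment_bound(cond_errs, cond_norms, K, n, delta):
    # cond_errs[j]:  bound on ||T(M_hat_{i,z_j} - M_{i,z_j}, V, ..., V)||_2
    # cond_norms[j]: ||T(M_{i,z_j}, V, ..., V)||_2
    eps_w = (1 + np.sqrt(np.log(1/delta))) / np.sqrt(n)
    return ((1 + 2.0**(K/2) * eps_w) * max(cond_errs)
            + 2.0**(K/2) * max(cond_norms) * eps_w)
\end{verbatim}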
\subsection{Concentration of Measure of the IBP}
Using the results for unbounded moments, we further obtain bounds for the
tensors based on the moment concentration results in Lemmas \ref{lem:sigbounds} and
\ref{lem:spectralbounds}. Bounds for the reconstruction accuracy of our
algorithm are also provided. The full proof is given in the Appendix.
\begin{theorem}{(Reconstruction Accuracy)}
\label{th:reconstruction} Let $\varsigma_k \sbr{S_2}$ be the $k$-th largest singular value of $S_2$. Define $\pi_{min} = \argmax_{i \in [K]} \abr{\pi_i - 0.5}$, $\pi_{max} = \argmax_{i \in [K]} \pi_i$ and $\tilde{\pi} = \prod_{\{i:\pi_i \leq 0.5\}} \pi_i \prod_{\{i:\pi_i > 0.5\}} (1-\pi_i)$. Pick any $\delta, \epsilon \in (0,1)$. There exists a polynomial $\mathrm{poly}(\cdot)$ such that if the sample size $m$ satisfies
\begin{align*}
m \geq \mathrm{poly}\Biggl({d, K, \frac{1}{\epsilon}, \log(1/\delta), \frac{1}{\tilde{\pi}},\frac{ \varsigma_1\sbr{S_2}}{ \varsigma_K\sbr{S_2}}, \frac{\sum \limits_{i=1}^K \nbr{\Phi_i}_2^2}{ \varsigma_K\sbr{S_2} }, \frac{\sigma^2}{ \varsigma_K\sbr{S_2}},\frac{1}{\sqrt{\pi_{min}-\pi_{min}^2}}, \frac{\pi_{max}}{\sqrt{\pi_{max}-\pi_{max}^2 }}}\Biggr)
\end{align*}
with probability greater than $1-\delta$, there is a permutation $\tau$ on $[K]$ such that the $\hat{\Phi}$ returned by Algorithm \ref{alg:eca} satisfies
$\displaystyle
\nbr{\hat{\Phi}_{\tau(i)} - \Phi_i} \leq \rbr{\nbr{\Phi_i}_2 + \sqrt{ \varsigma_1 \sbr{S_2}}}\epsilon
\text{ for all } i \in [K]$.
\end{theorem}
\subsection{Concentration of Measure of the HDP}
\label{sec:conc}
We derive theoretical guarantees for the spectral inference algorithms
in an HDP. Specifically we provide guarantees for moments $M_r^{{\bf i}}$,
tensors $S_r^{{\bf i}}$, and latent factors $\Phi$. The technical
challenge relative to conventional models is that the data are not
drawn iid. Instead, they are drawn from a predefined hierarchy and
they are only exchangeable within the hierarchy. We address this by
introducing a more refined notion of effective sample size which
borrows from its counterpart in particle filtering \citep{DouFreGor01}.
We define $n_{\bf i}$ to be the effective sample
size, obtained by hierarchical averaging over the HDP tree. This yields
\begin{align}
\label{eq:nibr}
n_{\bf i} := \begin{cases}
1 & \text{ for leaf nodes}
\\
|c({\bf i})|^2 \sbr{\sum_{{\bf j} \in c({\bf i})} \frac{1}{n_{\bf j}}}^{-1}
& \text{ otherwise}
\end{cases}
\end{align}
One may check that in the case where all leaves have an equal number
of samples and where each vertex in the tree has an equal number of
children, $n_{{\bf i}}$ is the overall sample size. The intuition is that, for a balanced tree, every leaf node should contribute equally to the overall moments, which can be viewed as a two-layer model with all the leaf nodes connected directly to the root node. Using an approach similar to the one used for obtaining concentration of measure for bounded moments, we extend those results to moments under different hierarchical structures, as in Theorem \ref{theorem:momentbounds}.
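The recursion \eq{eq:nibr} is straightforward to evaluate bottom-up, as in the following sketch; the dictionary encoding of the tree is a hypothetical convention for illustration, and the last line evaluates the bound of Theorem \ref{theorem:momentbounds} below.
\begin{verbatim}
import numpy as np

def effective_sample_size(children, i):
    # children: dict mapping node -> list of children ([] for leaves).
    c = children[i]
    if not c:
        return 1.0  # leaf node
    return len(c)**2 / sum(1.0 / effective_sample_size(children, j)
                           for j in c)

# Balanced 2-layer tree: root with 4 leaves -> n_root = 16/4 = 4,
# i.e. the overall number of leaves, as claimed above.
tree = {"root": ["a", "b", "c", "d"],
        "a": [], "b": [], "c": [], "d": []}
n_root = effective_sample_size(tree, "root")
delta = 0.05
bound = (2 + np.sqrt(-np.log(delta))) / np.sqrt(n_root)
\end{verbatim}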
\begin{theorem}
\label{theorem:momentbounds}
For any node ${\bf i}$ in a $L$-layer HDP with $r$-th order moment
$M_r^{{\bf i}}$ and for any $\delta \in (0,1)$ the following bound holds
for the tensorial reductions $M_r(u) :=T(M_r^{{\bf i}}, u, \cdots, u)$ and
their empirical estimates $\hat{M}_r(u) := T(\hat{M}_r^{{\bf i}}, u, \cdots, u)$.
\begin{align}
\nonumber
\Pr\cbr{\sup_{u: \nbr{u} \leq 1} \abr{ M_r(u)-\hat{M}_r(u)} \leq
\frac{2 + \sqrt{-\ln \delta}}{\sqrt{n_{\bf i}}}} \geq 1 - \delta
\end{align}
\end{theorem}
As indicated, $n_{\bf i}$ plays the role of an effective sample
size. Note that an unbalanced tree has a smaller effective sample size
compared to a balanced one with the same number of leaves.
\begin{theorem}
\label{theorem:tensorbounds}
Given an $L$-layer HDP with symmetric tensor $S_r^{{\bf i}}$. Assume that
$\delta \in (0, 1)$ and denote the tensorial reductions as before by
$S_r(u) := T(S_r^{{\bf i}}, u, \cdots, u)$ and $\hat{S}_r(u) :=
T(\hat{S}_r^{{\bf i}}, u, \cdots, u)$. Then we have, for $r \in \cbr{2, 3}$ and any node ${\bf i}$ in the $L$-layer HDP,
\begin{align}
\label{eq:bounddeviation2}
\Pr\cbr{\sup_{u: \nbr{u} \leq 1} \abr{ S_r(u) -\hat{S}_r(u)} \leq c_r n_{{\bf i}}^{-\frac{1}{2}} \sbr{2+\sqrt{\ln
(3/\delta)}}} \geq 1-\delta
\end{align}
where $c_2 = 3$ and $c_3$ is some constant.
\end{theorem}
This shows that not only the moments but also the symmetric tensors
related to the statistics $\Phi$ can be estimated accurately.
The following theorem guarantees the accurate reconstruction of the
latent feature factors in $\Phi$. Again, a detailed proof is relegated
to Appendix \ref{proof:reconstruction}.
\begin{theorem}
\label{theorem:reconstruction}
Given an $L$-layer HDP with hyperparameter $\gamma_{{\bf i}}$ at node
${\bf i}$. Let $\sigma_k \rbr{\Phi}$ denote the smallest non-zero singular
value of $\Phi$, and $\phi_i$ denote the $i$-th column of $\Phi$.
For sufficiently large sample size, and for suitably chosen $\delta
\in (0,1)$, i.e.
$$3 n_{{\bf i}}^{-\frac{1}{2}}
\sbr{2+\sqrt{\ln (3/\delta)}}\leq\frac{ C_3 \gamma_0 \min_{j}
\pi_{0j} \sigma_k \rbr{\Phi}^2}{6}$$
we have
$\displaystyle \Pr\cbr{\nbr{\Phi_i -\hat{\Phi}_{\sigma\left(i\right)}}
\leq \epsilon} \geq 1-\delta$ where
\begin{align*}
\epsilon :=\frac{ck^3 \rbr{\min_{{\bf i}} \gamma_{{\bf i}}+2}^2}{ \delta
\min_j \pi_{0j} \sigma_k \rbr{\Phi}^3 } n_{\bf i}^{-\frac{1}{2}} \sbr{2+\sqrt{\ln(3/\delta)}}
\end{align*}
Here $\cbr{ \hat{\Phi}_1, \hat{\Phi}_2, \cdots, \hat{\Phi}_k}$ is the
set that Algorithm \ref{alg:hdpeca} returns, for some permutation $\sigma$
of $\cbr{1,2,\cdots, k}$, $i \in \cbr{1,2, \cdots, k}$, and some constant
$c$.
\end{theorem}
The theorem gives the guarantees on $l_2$ norm accuracy for the
reconstruction of latent factors. Note that all the bounds above are
functions of the effective sample sizes $n_{\bf i}$. The latter are a
function of both the amount of data and the structure of the
tree.
\section{Experiments}
\label{sec:experiment}
\subsection{IBP}
We evaluate the algorithm on a number of problems suitable for the two
models of \eq{eq:lingalaf} and \eq{eq:linzy}. The problems are largely
identical to those put forward in \cite{GriGha11} in order to keep our
results comparable with a more traditional inference approach. We demonstrate
that our algorithm is faster, simpler, and achieves comparable or superior accuracy.
\paragraph{Synthetic data}
Our goal is to demonstrate the ability to recover latent structure of
generated data. Following \cite{GriGha11} we generate images via
linear noisy combinations of $6 \times 6$ templates. That is, we use
the binary additive model of \eq{eq:lingalaf}. The goal is both to recover
the latent template images and to assess their respective presence in
observed data. Using an additive noise variance of $\sigma^2 = 0.5$ we
are able to recover the original signal quite accurately (from left to
right: true signal, signal inferred from 100 samples, signal inferred
from 500 samples). Furthermore, as the second row indicates, our
algorithm also correctly infers the attributes present in the images.
\noindent
\includegraphics[width=0.29\columnwidth]{image66_origin}
\hspace{0.05\columnwidth}
\includegraphics[width=0.29\columnwidth]{66image_100}
\hspace{0.05\columnwidth}
\includegraphics[width=0.29\columnwidth]{66image_300} \\[-5mm]
\noindent
\hspace{0.34\columnwidth}
\includegraphics[width=0.29\columnwidth]{66image_4data}
\hspace{0.05\columnwidth}
\includegraphics[width=0.30\columnwidth]{66image_100_regen_check}
For a more quantitative evaluation we compared our results to the
infinite variational algorithm of \cite{DosMilGaeTeh09}. The data is
generated using $\sigma \in \cbr{0.1, 0.2, 0.3, 0.4, 0.5}$ and with
sample size $n \in \cbr{100, 200, 300, 400, 500}$. Figure
\ref{fig:compare} shows that our algorithm is faster and comparatively accurate.
\begin{figure}[ht]
\begin{center}
\centerline{\includegraphics[width=1.2\columnwidth]{num_500_cpu_png}}
\caption{ Comparison to the infinite variational approach. The first plot compares the test negative log-likelihood when training on $N=500$ samples with different $\sigma$. The second plot shows the CPU time as a function of the data size $N$ for the two methods.
}
\label{fig:compare}
\end{center}
\vskip -0.2in
\end{figure}
\paragraph{Image Source Recovery}
We repeated the same test using $100$ photos from
\cite{GriGha11}. We first reduce dimensionality on the data set by
representing the images with 100 principal components and apply our
algorithm on the 100-dimensional dataset (see Algorithm~\ref{alg:eca}
for details). Figure~\ref{fig:griff} shows the result. We used $10$ initial
iterations, $50$ random seeds and $30$ final
iterations in the Robust Tensor Power Method. The total runtime was $0.3$s on an Intel Core i7 processor (3.2GHz).
\begin{figure}[tb]
\vspace{-5mm}
\begin{center}
\includegraphics[width=0.6\columnwidth]{tom_griffths}
\end{center}
\hfill
\vspace{-5mm}
\caption{Results of modeling 100 images from \cite{GriGha11} of size $240 \times 320$
by model \eq{eq:lingalaf}. Row 1: four sample
images containing up to four objects (\$20 bill, Klein bottle,
prehistoric handaxe, cellular phone). Each object basically
appears in the same location, but some small variation noise is
present because the items are placed into the scene by hand;
%
Row 2: Independent attributes, as determined by infinite
variational inference of \cite{DosMilGaeTeh09} (note, the
results in \cite{GriGha11} are black and white only);
Row 3: Independent attributes, as determined by spectral IBP;
Row 4: Reconstruction of the images via spectral IBP. The binary
superscripts indicate the items identified in the image.
\label{fig:griff}}
\vspace{-3mm}
\end{figure}
\paragraph{Gene Expression Data}
As a first sanity check of the feasibility of our model for
\eq{eq:linzy}, we generated synthetic data using $x \in \RR^7$ with
$k=4$ sources and $n=500$ samples, as shown in
Figure~\ref{fig:results_syn}.
\begin{figure}[tbh]
\vspace{-1mm}
\begin{center}
\includegraphics[width=0.4\columnwidth]{isfa_synthesize_copy}
\end{center}
\hfill
\vspace{-10mm}
\caption{Recovery of the source matrix $A$ in model \eq{eq:linzy}
when comparing MCMC sampling and spectral methods. MCMC sampling
required $1.72$ seconds and yielded a Frobenius distance $\nbr{A
- A_{\mathrm{MCM}}}_F = 0.77$. Our spectral
algorithm required $0.77$ seconds to achieve a distance $\nbr{A
- A_{\mathrm{Spectral}}}_F = 0.31$.
\label{fig:results_syn}
}
\vspace{-2mm}
\begin{center}
\includegraphics[width=0.7\columnwidth]{ibp500_3var}
\end{center}
\vspace{-10mm}
\hfill
\caption{Gene signatures derived by the spectral IBP. They show that
there are common hidden causes in the observed expression levels,
thus offering a considerably simplified representation.
\label{fig:ibp500}}
\vspace{-5mm}
\end{figure}
For a more realistic analysis we used a microarray dataset. The data
consisted of 587 mouse liver samples detecting 8565 gene probes,
available as dataset GSE2187 as part of NCBI's Gene Expression Omnibus
\url{www.ncbi.nlm.nih.gov/geo}. There are four main types of
treatments, including Toxicant, Statin, Fibrate and Azole. Figure
\ref{fig:ibp500} shows the inferred latent factors arising from
expression levels of samples on 10 derived gene signatures. According
to the result, the group of fibrate-induced samples and a small group
of toxicant-induced samples can be classified accurately by their
distinctive patterns. Azole-induced samples have strong positive signals
on gene signatures 4 and 8, while statin-induced samples
have strong positive signals only on gene signature 9.
\subsection{HDP}
\label{sec:exp}
An attractive application of the HDP is topic modelling in a corpus
wherein documents are grouped naturally. We use the Enron email corpus \citep{KliYan04} and the Multi-Domain Sentiment Dataset \citep{BliDrePer07} to validate our algorithm. After the usual cleaning steps (removal of stop words, numbers, and infrequent
words), our training dataset for Enron consisted of $167,851$ emails with a vocabulary size of $10,000$ and an average of $91$ words per email. Among these, $126,697$ emails were sent internally within Enron
and $41,154$ came from external sources. In order to show that the inferred topics cover both external and internal sources and are not biased toward the larger group, our test data contain $537$ internal emails and $463$ external emails. To evaluate the computational efficiency of the spectral algorithms
using fast count sketch tensor decomposition (FC) \citep{WanTunSmoAna15}, the robust tensor power method (RB) and alternating least squares
(ALS), we compare the CPU time and per-word likelihood among these approaches.
\vspace{-3mm}
\begin{table}[tbh]\centering
\footnotesize
\caption{Results on Enron
dataset with different tree structures and different solvers: the spectral method for the HDP using the fast count sketch method (sketch length set to $10$), alternating least squares (ALS) and the robust tensor power method (RB).
\label{tab:runtime} }
\begin{tabular}{@{}lccccc@{}}
Tree &K & &sHDP (FC) &sHDP (ALS) & sHDP (RB) \\
\hline
Enron 2-layer &50 & like./time &8.09/{\bf 67} &7.86/119& 7.86/2641 \\
&100 &like./time &8.16/{\bf 104} & 7.82/668& 7.82/5841 \\
Enron 3-layer &50 & like./time &7.93/{\bf 68} & 7.78/121& {\bf 7.77}/2710 \\
&100 &like./time &8.18/{\bf 101} &7.69/852 &{\bf 7.68}/5782 \\
\end{tabular}
\end{table}
We further compare the spectral method under balanced/unbalanced tree structures of the data on the Multi-Domain Sentiment Dataset. The dataset contains Amazon reviews that fall into four categories: books, DVD, electronics and kitchen. We generate $2$ training datasets, where one has a balanced number of reviews under each category (1900 reviews per category) and the other has a highly unbalanced number of examples at the leaf nodes (1900/1500/700/300 reviews for the four categories), while the test dataset consists of 100 reviews per category. The results in Table \ref{tab:senti} show that the spectral algorithm with a multi-layer structure performs even better than the flat model when the tree structure is unbalanced.
\begin{table}[tbh]\centering
\footnotesize
\caption{Results on the Multi-Domain Sentiment dataset. Train data 1 is selected so that there are balanced numbers of reviews under each category. Train data 2 is selected to have a highly unbalanced number of children at the leaf nodes.
\label{tab:senti} }
\begin{tabular}{@{}lcccccc@{}}
Tree & train data 1 & K=50 & K=100 & train data 2 & K=50 & K=100 \\
\hline
Sentiment 2-layer & like./time &{\bf 7.9}/38 & 7.99/151 & like./time &8.23/36 & 8.14/147\\
Sentiment 3-layer &like./time &7.92/37 & {\bf 7.96}/150 & like./time &{\bf 8.17}/38& {\bf 8.07}/148\\
\end{tabular}
\end{table}
The results of the experiments throw light on two key points. First, leveraging the
information in the form of the hierarchical structure of documents, instead of
blindly grouping the documents into a single-layer model like LDA,
results in better performance (i.e. higher log-likelihood) under
different settings. The tree structure is able to eliminate the
pernicious effects caused by unbalanced data. For example, a 2-layer model like LDA
considers every email to be equally important, and so for a topic
related to external events it will perform worse, as most of the
emails are exchanged within the company and are unlikely to
contain topics related to the external emails. Second, although the spectral method cannot obtain a solution with higher performance in perplexity, it can be used as a tool for picking a good initial point.
\section{Conclusion}
The IBP and the HDP mixture models are useful and popular nonparametric
Bayesian tools. Unfortunately, the computational complexity of the inference
algorithms is high. We therefore propose spectral algorithms to alleviate
this burden. We first derived the low-order moments for both mixture
models, and then described the algorithms to recover the latent factors
of interest. Concentration of measure for these methods is also
provided. We demonstrated the advantages of utilizing structural information.
High-performance numerical linear algebra and more advanced
optimization algorithms will improve matters further.
\acks{We would like to acknowledge support for this project
from Oracle and Microsoft. }
\vskip 0.2in
\section{Introduction}
Deep learning and Convolutional Neural Networks (ConvNets) have shown impressive state-of-the-art results in the last years on various visual recognition tasks, \textit{e.g.} image classification \cite{krizhevsky2012imagenet,he2016deep,Durand_WILDCAT_CVPR_2017}, object localization \cite{dai2016r,redmon2016you,Mordan2017}, image segmentation \cite{deeplab} and even multimodal embedding \cite{Martin2018,Carvalho2018,benyounescadene2017mutan}. Some key elements are the use of very deep models with a huge number of parameters and the availability of large-scale datasets such as ImageNet. When dealing with smaller datasets, however, the need for proper regularization methods becomes more crucial to control overfitting~\cite{weightdecay,batchnorm,dropout,Blot2018}. An appealing direction to tackle this issue is to take advantage of the huge number of unlabeled data by developing semi-supervised learning techniques.
Many approaches attempt at designing semi-supervised techniques where the unsupervised cost produces encoders that have high data-likelihood or small reconstruction error \cite{bengio2007greedy}. This strategy has been followed by historical deep learning approaches \cite{hinton2006reducing}, but also in some promising recent results with modern Conv\-Nets~\cite{Zhao2016a,Zhang2016a}. However, the unsupervised term in reconstruction-based approaches arguably conflicts with the supervised loss, which requires intra-class invariant representations. This is the motivation for designing auto-encoders that are able to discard information, such as the Ladder Networks~\cite{Rasmus2015}.
Another interesting regularization criterion relies on stability. Prediction functions which are stable under small input variations are likely to generalize well, especially when training with small amounts of data. Theoretical works have shown the stability properties of some deep models, \textit{e.g.} by using harmonic analysis for scattering transforms~\cite{Mallat2011,Bruna:2013:ISC:2498740.2498892} or for Convolution Kernel Machines~\cite{BiettiNIPS17}. In addition, recent semi-supervised models incorporate a stability-based regularizer on the prediction~\cite{Sajjadi2016,Laine2016,Tarvainen2017}.
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\textwidth]{images/overview}
\caption{Illustration of HybridNet behavior: the input image is processed by two network paths of weights $W_c$ and $W_u$; each path produces a partial reconstruction, and both are summed to produce the final reconstruction, while only one path is used to produce a classification prediction. Thanks to a joint training of both tasks, the weights $W_c$ and $W_u$ influence each other to cooperate}
\label{fig:intro}
\end{figure}
In this paper, we propose a new approach for regularizing ConvNets using unlabeled data. The behavior of our model, called HybridNet, is illustrated in Fig.~\ref{fig:intro}. It consists in a ``hybrid'' auto-encoder with the feature extraction path decomposed into two branches.
The top branch encoder, of parameters $W_c$, is connected to a classification layer that produces class predictions
while the decoder from this branch is used to partly reconstruct the input image from the discriminative features, leading to $\hat{\mathbf{x}}_c$. Since those features are expected to extract invariant class-specific patterns, information is lost and exact reconstruction is not possible. To complement it, a second encoder-decoder branch of parameters $W_u$ is added to produce a complementary reconstruction $\hat{\mathbf{x}}_u$ such that the sum $\hat{\mathbf{x}} = \hat{\mathbf{x}}_c+\hat{\mathbf{x}}_u$ is the final complete reconstruction.
During training, the supervised classification cost impacts $W_c$, while an unsupervised reconstruction cost is applied to both $W_c$ and $W_u$ to properly reconstruct the input image. The main assumption behind HybridNet is that the two-path architecture helps in making classification and reconstruction cooperate. To encourage this, we use additional costs and training techniques, namely a stability regularization in the discriminative branch and a branch-complementarity training method.
\section{Related Work}
Training deep models with relatively small annotated datasets is a crucial issue nowadays.
To this end, the design of proper regularization techniques plays a central role.
In this paper, we address the problem of taking advantage of unlabeled data for improving generalization performances of deep ConvNets with semi-supervised learning~\cite{Zhu2005}.
One standard goal followed when training deep models with unlabeled data consists in designing models which fit input data well. Reconstruction error is the standard criterion used in (possibly denoising) Auto-Encoders~\cite{bengio2007greedy,Ranzato2008,Ranzato2007b,vincent2008extracting}, while maximum likelihood is used with generative models, \textit{e.g.} Restricted Boltzmann Machines, Deep Belief Networks or Deep Generative Models \cite{hinton2006reducing,Ranzato2007,Larochelle2008,kingma2014semi}. This unsupervised training framework was generally used as a pre-training before supervised learning with back-propagation \cite{erhan2010does}, potentially with an intermediate step \cite{Goh_NIPS13}. The currently very popular Generative Adversarial Networks~\cite{NIPS2014_5423} also fall into this category. With modern ConvNets, regularization with unlabeled data is generally formulated as a multi-task learning problem, where reconstruction and classification objectives are combined during training~\cite{Zhao2016a,Zhang2016a,Makhzani2016}. In these architectures, the encoder used for classification is regularized by a decoder dedicated to reconstruction.
This strategy of classification and reconstruction with an Auto-Encoder is however questionable, since classification and reconstruction may play contradictory roles in terms of feature extraction. Classification arguably aims at extracting invariant class-specific features, improving sample complexity of the learned model~\cite{hastie_09_elements-of.statistical-learning}, therefore inducing an information loss which prevents exact reconstruction. Ladder Networks~\cite{Rasmus2015} have historically been designed to overcome the previously mentioned conflict between reconstruction and classification, by designing Auto-Encoders capable of discarding information. Reconstruction is produced using higher-layer representation and a noisy version of the reconstruction target. However, it is not obvious that providing a noisy version of the target and training the network to remove the noise allows the encoder to lose some information since it must be able to correct low-level errors that require details.
Another interesting regularization criterion relies on stability or smoothness of the prediction function, which is at the basis of interesting unsupervised training methods, \textit{e.g.} Slow Feature Analysis~\cite{TheriaultCVPR13}. Adding stability to the prediction function was studied in Adversarial Training~\cite{goodfellow2014explaining} for supervised learning and further extended to semi-supervised learning in the Virtual Adversarial Training method \cite{miyato2015distributional}. Other recent semi-supervised models incorporate a stability-based regularizer on the prediction. The idea was first introduced by~\cite{Sajjadi2016} and proposes to make the prediction vector stable toward data augmentation (translation, rotation, shearing, noise, \textit{etc.}) and model stochasticity (dropout) for a given input. Following work~\cite{Laine2016,Tarvainen2017} slightly improves upon it by proposing variants on the way to compute stability targets to increase their consistency and better adapt to the model's evolution over training.
When using large modern ConvNets, the problem of designing decoders able to invert the encoding still is an open question~\cite{Wojna2017}. The usual solution is to mirror the architecture of the encoder by using transposed convolutions~\cite{dumoulin2016guide}.
This problem is exacerbated with irreversible pooling operations such as max-pooling that must be reversed by an upsampling operation. In~\cite{Zhao2016a,Zhang2016a}, the authors use unpooling operations to bring back spatial information from the encoder to the decoder, reusing pooling switch locations for upsampling.
Another interesting option is to explicitly create models which are reversible by design. This is the option followed by recent works such as RevNet~\cite{NIPS2017_6816} and i-RevNet~\cite{jacobsen:hal-01712808}, being inspired by second generation of bi-orthogonal multi-resolution analysis and wavelets~\cite{swe:spie95} from the signal processing literature.
To sum up, using reconstruction as a regularization cost added to classification is an appealing idea but the best way to efficiently use it as a regularizer is still an open question. As we have seen, when applied to an auto-encoding architecture~\cite{Zhao2016a,Zhang2016a}, reconstruction and classification would compete. To overcome the aforementioned issues, we propose HybridNet, a new framework for semi-supervised learning. Presented on Fig.~\ref{fig:general-archi}, this framework can be seen as an extension of the popular auto-encoding architecture.
In HybridNet, the usual auto-encoder that does both classification and reconstruction is assisted by an additional auto-encoder so that the first one is allowed to discard information in order to produce intra-class invariant features while the second one retains the lost information. The combination of both branches then produces the reconstruction.
This way, our architecture prevents the conflict between classification and reconstruction and allows the two branches to cooperate and accomplish both classification and reconstruction tasks.
Compared to Ladder Networks~\cite{Rasmus2015}, our two-branch approach without direct skip connection allows for a finer and learned information separation and is thus expected to have a more favorable impact in terms of discriminative encoder regularization. Our HybridNet model also has conceptual connections with wavelet decomposition~\cite{wavelets}: the first branch can be seen as extracting discriminative low-pass features from input images, and the second branch acting as a high-pass filter to restore the lost information. HybridNet also differs from reversible models~\cite{NIPS2017_6816,jacobsen:hal-01712808} by the explicit decomposition between supervised and unsupervised signals, enabling the discriminative encoder to have fewer parameters and better sample complexity.
In this paper, our contributions with the HybridNet framework are twofold: first, in Section~\ref{sec:archi}, we propose an architecture designed to efficiently allow both reconstruction and classification losses to cooperate; second, in Section~\ref{sec:training}, we design a training loss adapted to it that includes reconstruction, stability in the discriminative branch and a branch complementarity technique. In Section~\ref{sec:experiements}, we perform experiments to show that HybridNet is able to outperform state-of-the-art results in various semi-supervised settings on CIFAR-10, SVHN and STL-10. We also provide ablation studies validating the favorable impact of our contributions. Finally, we show several visualizations on CIFAR-10 and STL-10 datasets analogous to Fig.~\ref{fig:intro} to validate the behavior of both branches, with a discriminative branch that loses information that is restored by the second branch.
\section{HybridNet: a semi-supervised learning framework}
\label{sec:architecture}
In this section, we detail the proposed HybridNet model: the chosen architecture to mix supervised and unsupervised information efficiently in Section~\ref{sec:archi}, and the semi-supervised training method adapted to this particular architecture in Section~\ref{sec:training}.
\subsection{Designing the HybridNet architecture}
\label{sec:archi}
\begin{figure}[tb]
\centering
\includegraphics[width=0.7\textwidth]{images/losses}
\caption{General description of the HybridNet framework. $E_c$ and $C$ correspond to a classifier, $E_c$ and $D_c$ form an autoencoder that we call \textit{discriminative path}, and $E_u$ and $D_u$ form a second autoencoder called \textit{unsupervised path}. The various loss functions used to train HybridNet are also represented in yellow}
\label{fig:general-archi}
\end{figure}
\subsubsection{General architecture.}
As we have seen, classification requires intra-class invariant features while reconstruction needs to retain all the information. To circumvent this issue, HybridNet is composed of two auto-encoding paths, the \textit{discriminative path} ($E_c$ and $D_c$) and the \textit{unsupervised path} ($E_u$ and $D_u$). Both encoders $E_c$ and $E_u$ take an input image $\mathbf{x}$ and produce representations $\mathbf{h}_c$ and $\mathbf{h}_u$, while decoders $D_c$ and $D_u$ take respectively $\mathbf{h}_c$ and $\mathbf{h}_u$ as input to produce two partial reconstructions $\hat{\mathbf{x}}_c$ and $\hat{\mathbf{x}}_u$. Finally, a classifier $C$ produces a class prediction using discriminative features only: $\hat{\mathbf{y}} = C(\mathbf{h}_c)$. Even if the two paths can have similar architectures, they should play different and complementary roles. The discriminative path must extract discriminative features $\mathbf{h}_c$ that should eventually be well crafted to perform a classification task effectively, and produce a purposely partial reconstruction $\hat{\mathbf{x}}_c$ that should not be perfect since preserving all the information is not a behavior we want to encourage. Consequently, the role of the unsupervised path is to be complementary to the discriminative branch by retaining in $\mathbf{h}_u$ the information lost in $\mathbf{h}_c$. This way, it can produce a complementary reconstruction $\hat{\mathbf{x}}_u$ so that, when merging $\hat{\mathbf{x}}_u$ and $\hat{\mathbf{x}}_c$, the final reconstruction $\hat{\mathbf{x}}$ is close to $\mathbf{x}$. The HybridNet architecture, visible on Fig.~\ref{fig:general-archi}, can be described by the following equations:
\begin{equation}\arraycolsep=8pt
\begin{array}{ccc}
\mathbf{h}_c = E_c(\mathbf{x}) & \hat{\mathbf{x}}_c = D_c(\mathbf{h}_c) & \hat{\mathbf{y}} = C(\mathbf{h}_c)\\
\mathbf{h}_u = E_u(\mathbf{x}) & \hat{\mathbf{x}}_u = D_u(\mathbf{h}_u) & \hat{\mathbf{x}} = \hat{\mathbf{x}}_c + \hat{\mathbf{x}}_u
\end{array}
\end{equation}
Note that the end-role of reconstruction is just to act as a regularizer for the discriminative encoder.
The main challenge and contribution of this paper is to find a way to ensure that the two paths will in fact behave in this desired way. The two main issues that we tackle are the fact that we want the discriminative branch to focus on discriminative features, and that we want both branches to cooperate and contribute to the reconstruction. Indeed, with such an architecture, we could end up with two paths that work independently: a classification path $\hat{\mathbf{y}} = C(E_c(\mathbf{x}))$ and a reconstruction path $\hat{\mathbf{x}} = \hat{\mathbf{x}}_u = D_u(E_u(\mathbf{x}))$ and $\hat{\mathbf{x}}_c = 0$. We address both those issues through the design of the architecture of the encoders and decoders as well as an appropriate loss and training procedure.
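For illustration, the forward pass corresponding to these equations can be written as the following minimal PyTorch sketch (not our exact implementation); the five modules are placeholders to be instantiated as in the branches design below.
\begin{verbatim}
import torch.nn as nn

class HybridNet(nn.Module):
    # Minimal sketch of the HybridNet forward pass; Ec, Dc, C, Eu, Du
    # are placeholder modules (see the branches design below).
    def __init__(self, Ec, Dc, C, Eu, Du):
        super().__init__()
        self.Ec, self.Dc, self.C = Ec, Dc, C  # discriminative path
        self.Eu, self.Du = Eu, Du             # unsupervised path

    def forward(self, x):
        h_c = self.Ec(x)
        h_u = self.Eu(x)
        x_c = self.Dc(h_c)   # partial reconstruction
        x_u = self.Du(h_u)   # complementary reconstruction
        y = self.C(h_c)      # class prediction uses h_c only
        return y, x_c, x_u, x_c + x_u
\end{verbatim}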
\subsubsection{Branches design.}
To design the HybridNet architecture, we start with a convolutional architecture adapted to the targeted dataset, for example a state-of-the-art ResNet architecture for CIFAR-10. This architecture is split into two modules: the discriminative encoder $E_c$ and the classifier $C$. On top of this model, we add the discriminative decoder $D_c$.
The location of the splitting point in the original network is free, but $C$ will not be directly affected by the reconstruction loss. In our experiments, we choose $\mathbf{h}_c$ ($E_c$'s output) to be the last intermediate representation before the final pooling that aggregates all the spatial information, leaving in $C$ a global average pooling followed by one or more fully-connected layers. The decoder $D_c$ is designed to be a ``mirror'' of the encoder's architecture, as commonly done in the literature, \textit{e.g.}~\cite{Zhao2016a,Rasmus2015,zeiler2014visualizing}.
After constructing the discriminative branch, we add an unsupervised complementary branch. To ensure that both branches are ``balanced'' and behave in a similar way, the internal architecture of $E_u$ and $D_u$ is mostly the same as for $E_c$ and $D_c$. The only difference remains in the mirroring of pooling layers, that can be reversed either by upsampling or unpooling. An upsampling will increase the spatial size of a feature map without any additional information while an unpooling, used in~\cite{Zhao2016a,Zhang2016a}, will use spatial information (\textit{pooling switches}) from the corresponding max-pooling layer to do the upsampling. In our architecture, we propose to use upsampling in the discriminative branch because we want to encourage spatial invariance, and use unpooling in the unsupervised branch to compensate this information loss and favor the learning of spatial-dependent low-level information. An example of HybridNet architecture is presented in Fig.~\ref{fig:cifar10-archi}.
As mentioned previously, one key problem to tackle is to ensure that this model will behave as expected, \textit{i.e.} by learning discriminative features in the discriminative encoder and non-discriminative features in the unsupervised one.
This is encouraged in different ways by the design of the architecture. First, the fact that only $\mathbf{h}_c$ is used for classification means that $E_c$ will be pushed by the classification loss to produce discriminative features. Thus, the unsupervised branch will naturally focus on information lost by $E_c$. Using upsampling in $D_c$ and unpooling in $D_u$ also encourages the unsupervised branch to focus on low-level information. In addition to this, the design of an adapted loss and training protocol is a major contribution to the efficient training of HybridNet.
\subsection{Training HybridNet}
\label{sec:training}
The HybridNet architecture has two information paths with only one producing a class prediction and both producing partial reconstructions that should be combined. In this section, we will address the question of training this architecture efficiently. The complete loss is composed of various terms as illustrated on Fig.~\ref{fig:general-archi}. It comprises terms for classification with $\mathcal L_\textrm{cls}$; final reconstruction with ${\mathcal L_\textrm{rec}}$; intermediate reconstructions with ${\mathcal L_\textrm{rec-inter}}_{b,l}$ (for layer $l$ and branch $b$); and stability with $\mathrm{\Omega}_\textrm{stability}$. It is also accompanied by a branch complementarity training method. Each term is weighted by a corresponding parameter $\lambda$:
\begin{equation}\textstyle
\mathcal L = \lambda_\textrm{c} \mathcal L_\textrm{cls} + \lambda_\textrm{r} \mathcal L_\textrm{rec} + \sum_{b\in \{c,u\},l} {\lambda_\textrm{r}}_{b,l} {\mathcal L_\textrm{rec-inter}}_{b,l} + \lambda_\textrm{s} \mathrm{\Omega}_\textrm{stability} \,.
\label{eq:full-loss}
\end{equation}
HybridNet can be trained on a partially labeled dataset, \textit{i.e.} one composed of labeled pairs $\mathcal D_\textrm{sup} = \{(x^{(k)}, y^{(k)})\}_{k=1..N_\textrm{s}}$ and unlabeled images $\mathcal D_\textrm{unsup} = \{x^{(k)}\}_{k=1..N_\textrm{u}}$.
Each batch is composed of $n$ samples, divided into $n_\textrm{s}$ image-label pairs from $\mathcal D_\textrm{sup}$ and $n_\textrm{u}$ unlabeled images from $\mathcal D_\textrm{unsup}$.
\subsubsection{Classification.}
The classification term is a regular cross-entropy term, that is applied only on the $n_s$ labeled samples of the batch and averaged over them:
\begin{equation}
\ell_\mathrm{cls} = \ell_\mathrm{CE}(\hat{\mathbf{y}}, \mathbf{y}) = -\sum_i \mathbf{y}_i \log \hat{\mathbf{y}}_i \, , \qquad \mathcal L_\textrm{cls}=\frac{1}{n_s} \sum_k \ell_\mathrm{cls}(\hat{\mathbf{y}}^{(k)}, \mathbf{y}^{(k)}) \ .
\end{equation}
\subsubsection{Reconstruction losses.}
In HybridNet, we chose to keep discriminative and unsupervised paths separate so that they produce two complementary reconstructions $(\hat{\mathbf{x}}_u, \hat{\mathbf{x}}_c)$ that we combine with an addition into $\hat{\mathbf{x}} = \hat{\mathbf{x}}_u + \hat{\mathbf{x}}_c$. Keeping the two paths independent until the reconstruction in pixel space, as well as the merge-by-addition strategy allows us to apply different treatments to them and influence their behavior efficiently. The merge by addition in pixel space is also analogous to wavelet decomposition where the signal is decomposed into low- and high-pass branches that are then decoded and summed in pixel space. The reconstruction loss that we use is a simple mean-squared error between the input and the sum of the partial reconstructions:
\begin{equation}
\ell_\mathrm{rec} = ||\hat{\mathbf{x}} - \mathbf{x}||_2^2 = ||\hat{\mathbf{x}}_u + \hat{\mathbf{x}}_c - \mathbf{x}||_2^2\,, \qquad \mathcal L_\textrm{rec}=\frac{1}{n} \sum_k \ell_\mathrm{rec}(\hat{\mathbf{x}}^{(k)}, \mathbf{x}^{(k)}) \ .
\end{equation}
In addition to the final reconstruction loss, we also add reconstruction costs between intermediate representations in the encoders and the decoders which is possible since encoders and decoders have mirrored structure. We apply these costs to the representations $\mathbf{h}_{b,l}$ (for branch $b$ and layer $l$) produced just after pooling layers in the encoders and reconstructions $\hat{\mathbf{h}}_{b,l}$ produced just before the corresponding upsampling or unpooling layers in the decoders. This is common in the literature \cite{Zhao2016a,Zhang2016a,Rasmus2015} but is particularly important in our case: in addition to guiding the model to produce the right final reconstruction, it pushes the discriminative branch to produce a reconstruction and avoid the undesired situation where only the unsupervised branch would contribute to the final reconstruction. This is applied in both branches ($b \in \{c,u\}$):
\begin{equation}
{\mathcal L_\textrm{rec-inter}}_{b,l} = \frac{1}{n} \sum_k ||\hat{\mathbf{h}}_{b,l}^{(k)} - \mathbf{h}_{b,l}^{(k)}||_2^2\,.
\end{equation}
\subsubsection{Branch cooperation.} As described previously, we want to ensure that both branches contribute to the final reconstruction, otherwise this would mean that the reconstruction is not helping to regularize $E_c$, which is our end-goal. Having both branches produce a partial reconstruction and using intermediate reconstructions already help with this goal. In addition, to balance their training even more, we propose a training technique such that the reconstruction loss is only backpropagated to the branch that contributes less to the final reconstruction of each sample. This is done by comparing $||\hat{\mathbf{x}}_c - \mathbf{x}||_2^2$ and $||\hat{\mathbf{x}}_u - \mathbf{x}||_2^2$ and only applying the final reconstruction loss to the branch with the higher error.
This can be implemented either in the gradient descent or simply by preventing gradient propagation in one branch or the other using features like \texttt{tf.stop\_gradient} in Tensorflow or \texttt{.detach()} in PyTorch:
\begin{equation}
\ell_\textrm{rec-balanced} = \begin{cases}
||\hat{\mathbf{x}}_u + \mathrm{stopgrad}(\hat{\mathbf{x}}_c) - \mathbf{x}||_2^2&\text{if }||\hat{\mathbf{x}}_u - \mathbf{x}||_2^2 \geq ||\hat{\mathbf{x}}_c - \mathbf{x}||_2^2\\
||\mathrm{stopgrad}(\hat{\mathbf{x}}_u) + \hat{\mathbf{x}}_c - \mathbf{x}||_2^2&\text{otherwise}\\
\end{cases} .
\end{equation}
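For instance, a per-sample PyTorch version of this rule can be sketched as follows (a sketch for illustration, not our exact implementation):
\begin{verbatim}
def rec_balanced(x_c, x_u, x):
    # Per-sample squared errors, flattened over pixel dimensions.
    e_c = ((x_c - x) ** 2).flatten(1).sum(1)
    e_u = ((x_u - x) ** 2).flatten(1).sum(1)
    # Mask is 1 where the unsupervised branch is worse, sample-wise.
    m = (e_u >= e_c).float().view(-1, *([1] * (x.dim() - 1)))
    # Stop gradients into the better branch for each sample.
    x_c_eff = m * x_c.detach() + (1 - m) * x_c
    x_u_eff = m * x_u + (1 - m) * x_u.detach()
    return ((x_c_eff + x_u_eff - x) ** 2).flatten(1).sum(1).mean()
\end{verbatim}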
\subsubsection{Encouraging invariance in the discriminative branch.}
We have seen that an important issue that needs to be addressed when training this model is to ensure that the discriminative branch will filter out information and learn invariant features. For now, the only signal that pushes the model to do so is the classification loss. However, in a semi-supervised context, when only a small portion of our dataset is labeled, this signal can be fairly weak and might not be sufficient to make the discriminative encoder focus on invariant features.
In order to further encourage this behavior, we propose to use a \textit{stability regularizer}. Such a regularizer is currently at the core of the models that give state-of-the-art results in semi-supervised setting on the most common datasets~\cite{Sajjadi2016,Laine2016,Tarvainen2017}. The principle is to encourage the classifier's output prediction $\hat{\mathbf{y}}^{(k)}$ for sample $k$ to be invariant to different sources of randomness applied on the input (translation, horizontal flip, random noise, \textit{etc.}) and in the network (\textit{e.g.} dropout). This is done by minimizing the squared euclidean distance between the output $\hat{\mathbf{y}}^{(k)}$ and a ``stability'' target $\mathbf{z}^{(k)}$. Multiple methods have been proposed to compute such a target~\cite{Sajjadi2016,Laine2016,Tarvainen2017}, for example by using a second pass of the sample in the network with a different draw of random factors that will therefore produce a different output. We have:
\begin{equation}
\mathrm{\Omega}_\mathrm{stability} = \frac{1}{n} \sum_k ||\hat{\mathbf{y}}^{(k)} - \mathbf{z}^{(k)}||_2^2\,.
\end{equation}
By applying this loss on $\hat{\mathbf{y}}$, we encourage $E_c$ to find invariant patterns in the data, patterns that have more chances of being discriminative and useful for classification. Furthermore, this loss has the advantage of being applicable to both labeled and unlabeled images.
In the experiments, we tried both the Temporal Ensembling~\cite{Laine2016} and Mean Teacher~\cite{Tarvainen2017} methods and did not see a major difference. In Temporal Ensembling, the target $\mathbf{z}^{(k)}$ is a moving average of the $\hat{\mathbf{y}}^{(k)}$ over the previous passes of $\mathbf{x}^{(k)}$ through the network during training, while in Mean Teacher, $\mathbf{z}^{(k)}$ is the output of a secondary model whose weights are a moving average of the weights of the model being trained.
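As an illustration, the Mean Teacher variant can be sketched as follows (here \texttt{student} and \texttt{teacher} are plain classification networks $x \mapsto \hat{\mathbf{y}}$, and \texttt{alpha} is the EMA decay, a hyperparameter):
\begin{verbatim}
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_teacher(student, teacher, alpha=0.99):
    # Teacher weights are an exponential moving average (EMA)
    # of the student weights.
    for p_s, p_t in zip(student.parameters(), teacher.parameters()):
        p_t.mul_(alpha).add_(p_s, alpha=1 - alpha)

def stability_loss(student, teacher, x):
    y_hat = student(x)       # one draw of augmentation/dropout
    with torch.no_grad():
        z = teacher(x)       # stability target, no gradient
    return F.mse_loss(y_hat, z)
\end{verbatim}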
\begin{figure}[tb]
\centering
\includegraphics[width=0.75\textwidth]{images/convlarge}
\caption{Example of HybridNet architecture where an original classifier (ConvLarge) constitutes $E_c$ and has been mirrored to create $D_c$ and duplicated for $E_u$ and $D_u$, with the addition of unpooling in the discriminative branch}
\label{fig:cifar10-archi}
\end{figure}
\section{Experiments}
\label{sec:experiements}
In this section, we will study and validate the behavior of our novel framework. We first perform ablation studies to validate the architecture and loss terms of the model. We also propose visualizations of the behavior of the model in various configurations, before demonstrating the capability of HybridNet to obtain state-of-the-art results.
In these experiments, we use three image datasets: SVHN~\cite{netzer2011reading}, CIFAR-10~\cite{cifar10} and STL-10~\cite{Coates2011}.
Both SVHN and CIFAR-10 are 10-classes datasets of $32\times 32$ pixels images. SVHN has 73,257 images for training, 26,032 for testing and 531,131 extra images used only as unlabeled data. CIFAR-10 has 50,000 training images and 10,000 testing images. For our semi-supervised experiments, we only keep $N$ labeled training samples (with $\nicefrac{N}{10}$ samples per class) while the rest of the data is kept unlabeled, as is commonly done.
STL-10 has the same 10 classes as CIFAR-10 with
$96\times 96$ pixels images. It is designed for semi-supervised learning since it contains 10 folds of 1,000 labeled training images, 100,000 unlabeled training images
and 8,000 test images with labels.
\subsection{HybridNet framework validation}
\label{sec:as}
First, we propose a thorough analysis of the behavior of our model at two different levels: first by comparing it to baselines that we obtain when disabling parts of the architecture, and second by analyzing the contribution of the different terms of the training loss of HybridNet both quantitatively and through visualizations.
This study was mainly performed using the ConvLarge architecture~\cite{Rasmus2015} on CIFAR-10 since it is a very common setup used in recent semi-supervised experiments~\cite{Sajjadi2016,Laine2016,Tarvainen2017}. The design of the HybridNet version of this architecture follows Section~\ref{sec:architecture} (illustrated in Fig.~\ref{fig:cifar10-archi}) and uses Temporal Ensembling to produce stability targets $\mathbf{z}$. Additional results are provided using an adapted version of ConvLarge for STL-10 with added blocks of convolutions and pooling.
Models are trained with Adam with a learning rate of 0.003 for 600 epochs with batches of 20 labeled images and 80 unlabeled ones. The various loss-weighting terms $\lambda$ of the general loss (Eq. (\ref{eq:full-loss})) could have been optimized on a validation set but for these experiments they were simply set so that the different loss terms have values of the same order of magnitude.
Thus, all $\lambda$ were set to either 0 or 1 if activated or not, except $\lambda_s$ set to 0 or 100.
All the details of the experiments --\,exact architecture, hyperparameters, optimization, \textit{etc.}\,-- are provided in the appendix.
\subsubsection{Ablation study of the architecture.}
We start this analysis by validating our architecture with an ablation study on CIFAR-10 with various number of labeled samples. By disabling parts of the model and training terms, we compare HybridNet to different baselines and validate the importance of combining both contributions of the paper: the architecture and the training method.
Results are presented in Table~\ref{table:ablation}. The classification and auto-encoder results are obtained with the same code and hyperparameters by simply disabling different losses and parts of the model: the classifier only uses $E_c$ and $C$, and the auto-encoder (similar to~\cite{Zhao2016a}) only $E_c$, $D_c$ and $C$. For both, we can add the stability loss. The HybridNet architecture only uses the classification and reconstruction loss terms while the second result uses the full training loss.
\begin{table}[tb]
\setlength{\tabcolsep}{4pt}
\caption{Ablation study performed on CIFAR-10 with ConvLarge architecture}
\label{table:ablation}
\centering
\begin{tabular}{llll}
\toprule
& \multicolumn{3}{c}{Labeled samples} \\ \cmidrule{2-4}
Model & 1000 & 2000 & 4000 \\ \midrule
Classification & 63.4 & 71.5 & 79.0 \\
Classification and stability & 65.6 & 74.6 & 81.3 \\ \cmidrule{1-4}
Auto-encoder & 65.0 & 73.6 & 79.8 \\
Auto-encoder and stability & 71.8 & 80.4 & 84.9 \\ \cmidrule{1-4}
HybridNet architecture & 63.2 & 74.0 & 80.3 \\
HybridNet architecture and full training loss & \textbf{74.1} & \textbf{81.6} & \textbf{86.6} \\ \bottomrule
\end{tabular}
\end{table}
First, we can see that the HybridNet architecture alone already yields an improvement over the baseline and the auto-encoder, except at 1000 labels. This could be explained by the fact that with very few labels, the model fails to correctly separate the information between the two branches because of the faint classification signal, and the additional loss terms that control the training of HybridNet are even more necessary. Overall, the architecture alone does not provide an important gain since it is not guided to efficiently take advantage of the two branches, indeed, we see that the addition of the complete HybridNet loss allows the model to provide much stronger results, with an improvement of 6-7\,pts over the architecture alone, around 5-6\,pts better than the stability or auto-encoding baseline, and 7-10\,pts more than the supervised baseline. The most challenging baseline is the stabilized auto-encoder that manages to take advantage of the stability loss but from which we still improve by 1.2-2.8\,pts.
This ablation study demonstrates the capability of the HybridNet framework to surpass the different architectural baselines, and shows the importance of the complementarity between the two-branch architecture and the complete training loss.
\subsubsection{Importance of the various loss terms.} We now propose a more fine-grained study to look at the importance of each loss term of the HybridNet training described in Section~\ref{sec:training}, both through classification results and visualizations.
First, in Table~\ref{table:as-scores} we show classification accuracy on CIFAR-10 with 2000 labels and STL-10 with 1000 labels for numerous combinations of loss terms. These results demonstrate that each loss term has its importance and that all of them cooperate to reach the final best result of the full HybridNet model. In particular, the stability loss is an important element of the training but is not sufficient, as shown by lines \textit{b} and \textit{f-h}, while the other terms bring an equivalent gain, as shown by lines \textit{c-e}. Both of those $\sim$5\,pts gains can be combined to work in concert and reach the final score of line \textit{i}, a $\sim$10\,pts gain.
Second, to interpret how the branches behave, we propose to visualize the different reconstructions $\hat{\mathbf{x}}_c$, $\hat{\mathbf{x}}_u$ and $\hat{\mathbf{x}}$ for different combinations of loss terms in Table~\ref{table:asviz}. With only the final reconstruction term (lines \textit{c}), the discriminative branch does not contribute to the reconstruction and is thus barely regularized by the reconstruction loss, showing little gain over the classification baseline. The addition of the intermediate reconstruction terms helps the discriminative branch to produce a weak reconstruction (lines \textit{d}) and is complemented by the branch balancing technique (lines \textit{e}) to produce balanced reconstructions in both branches. The stability loss (lines \textit{i}) adds little visual impact on $\hat{\mathbf{x}}_c$; it probably has more impact on the quality of the latent representation $\mathbf{h}_c$ and seems to help in making the discriminative features and classifier more robust, with a large improvement in accuracy.
\newcolumntype{R}[2]{%
>{\adjustbox{angle=#1,lap=\width-(#2)}\bgroup}%
l%
<{\egroup}%
}
\colorlet{verylightgray}{gray!20}
\newcommand*\OK{\ding{51}}
\captionsetup[table]{skip=0pt}
\begin{table}[tb]
\caption{Detailed ablation studies when activating different terms and techniques of the HybridNet learning. These results are obtained with ConvLarge on CIFAR-10 with 2000 labeled samples and ConvLarge-like on STL-10 with 1000 labeled samples}
\label{table:abl}
%
\begin{subfigure}[t]{0.32\textwidth}
\caption{Test accuracy (\%)}
\label{table:as-scores}
\centering
\renewcommand{\arraystretch}{1.1}
\begin{tabular}{@{}clllllcc}
\toprule
&\multicolumn{1}{R{65}{1em}}{$\mathcal L_\textrm{classif}$} & \multicolumn{1}{R{65}{1em}}{$\Omega_\textrm{stability}$} & \multicolumn{1}{R{65}{1em}}{$\mathcal L_\textrm{rec}$ {\scriptsize(hybrid)}} & \multicolumn{1}{R{65}{1em}}{$\mathcal L_\textrm{rec-inter}$} & \multicolumn{1}{R{65}{1em}}{$\mathcal L_\textrm{rec-balanced}$} & \multicolumn{1}{R{65}{1em}}{CIFAR-10} & \multicolumn{1}{R{65}{1em}}{STL-10} \\ \midrule
\scriptsize \textit{a} & \OK & & & & & 71.5 & 65.6 \\
\scriptsize \textit{b} &\OK & \OK & & & & 74.6 & 69.8 \\
\arrayrulecolor{verylightgray}\cmidrule{1-8}\arrayrulecolor{black}
\scriptsize \textit{c} &\OK & & \OK & & & 72.4 & 67.8 \\
\scriptsize \textit{d} &\OK & & \OK & \OK & & 74.0 & -- \\
\scriptsize \textit{e} &\OK & & \OK & \OK & \OK & 75.2 & -- \\
\arrayrulecolor{verylightgray}\cmidrule{1-8}\arrayrulecolor{black}
\scriptsize \textit{f} &\OK & \OK & \OK & & & 77.7 & 71.5 \\
\scriptsize \textit{g} &\OK & \OK & \OK & & \OK & 77.4 & -- \\
\scriptsize \textit{h} &\OK & \OK & \OK & \OK & & 80.8 & 72.2 \\
\scriptsize \textit{i} &\OK & \OK & \OK & \OK & \OK & \textbf{81.6} & \textbf{74.1} \\
\bottomrule
\end{tabular}
\end{subfigure}
\hfill
\begin{subfigure}[t]{0.675\textwidth}
\caption{Visualization of partial and combined reconstructions}
\label{table:asviz}
\centering
\setlength{\tabcolsep}{1.1pt}
\renewcommand{\arraystretch}{1.28}
\begin{tabular}{@{}cllllc@{}}
\toprule
&\multicolumn{1}{R{65}{1em}}{$\mathcal L_\textrm{rec}$ {\scriptsize(hybrid)}} & \multicolumn{1}{R{65}{1em}}{$\mathcal L_\textrm{rec-inter}$} & \multicolumn{1}{R{65}{1em}}{$\mathcal L_\textrm{rec-balanced}$} & \multicolumn{1}{R{65}{1em}}{$\Omega_\textrm{stability}$} &
$\mathbf{x} \ \; \hat{\mathbf{x}}_c \ \; \hat{\mathbf{x}}_u \ \; \hat{\mathbf{x}} \quad\; \mathbf{x} \ \; \hat{\mathbf{x}}_c \ \; \hat{\mathbf{x}}_u \ \; \hat{\mathbf{x}} \quad\; \mathbf{x} \ \; \hat{\mathbf{x}}_c \ \; \hat{\mathbf{x}}_u \ \; \hat{\mathbf{x}} $ \\ \midrule
\scriptsize \textit{c} &\OK & & & & \multirow{4}{*}{\includegraphics[width=6.2cm]{images/viz-as-simple.pdf}\!} \\
\scriptsize \textit{d} &\OK & \OK & & & \\
\scriptsize \textit{e} &\OK & \OK & \OK & & \\
\scriptsize \textit{i} &\OK & \OK & \OK & \OK & \\
\grayruleA
\scriptsize \textit{c} & \OK & & & & \\
\scriptsize \textit{d} &\OK & \OK & & & \\
\scriptsize \textit{e} &\OK & \OK & \OK & & \\
\scriptsize \textit{i} &\OK & \OK & \OK & \OK & \\
\bottomrule
\end{tabular}
\end{subfigure}
\end{table}
\captionsetup[table]{skip=10pt}
\subsubsection{Visualization of information separation on CIFAR-10 and STL-10.} Overall, we can see in Table~\ref{table:asviz} lines \textit{i} that thanks to the full HybridNet training loss, the information is correctly separated between $\hat{\mathbf{x}}_c$ and $\hat{\mathbf{x}}_u$, so that both contribute somewhat equally while specializing in different types of information. For example, for the blue car, $\hat{\mathbf{x}}_c$ produces a blurry car with approximate colors, while $\hat{\mathbf{x}}_u$ provides both shape details and exact color information. For nicer visualizations, we also show reconstructions of the full HybridNet model trained on STL-10, which has larger images, in Fig.~\ref{fig:viz-stl}. These confirm the observations on CIFAR-10, with a very good final reconstruction composed of a rough reconstruction that lacks texture and color details from the discriminative branch, completed by low-level details of shape, texture, writings, color correction and background information from the unsupervised branch.
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{images/viz-stl}
\caption{Visualizations of input, partial and final reconstructions of STL-10 images using a HybridNet model derived from a ConvLarge-like architecture}
\label{fig:viz-stl}
\end{figure}
\subsection{State-of-the-art comparison}
\label{sec:cifar10sota}
After studying the behavior of this novel architecture, we propose to demonstrate its effectiveness and capability to produce state-of-the-art results for semi-supervised learning on three datasets: SVHN, CIFAR-10 and STL-10.
We use ResNet architectures to constitute the supervised encoder $E_c$ and classifier $C$; and augment them with a mirror decoder $D_c$ and an unsupervised second branch containing an encoder $E_u$ and a decoder $D_u$ using the same architecture. For SVHN and CIFAR-10, we use the small ResNet from~\cite{Gastaldi2017}, which is used in Mean Teacher~\cite{Tarvainen2017} and currently achieves state-of-the-art results on CIFAR-10. For STL-10, we upscale the images to 224$\times$224\,px and use a regular ResNet-50 pretrained on the Places dataset.
We trained HybridNet with the training method described in Section~\ref{sec:training}, including Mean Teacher to produce stability targets $\mathbf{z}^{(k)}$. The training protocol follows exactly the protocol of Mean Teacher~\cite{Tarvainen2017} for CIFAR-10 and a similar one for SVHN and STL-10, for which \cite{Tarvainen2017} does not report results with ResNet. The hyperparameters added in HybridNet, \textit{i.e.} the weights of the reconstruction terms (final and intermediate), were coarsely adjusted on a validation set (we tried the values 0.25, 0.5 and 1.0 for both of them). Details are in the appendix.
The results of these experiments are presented in Table~\ref{table:cifar10}.
We can see the huge performance boost obtained by HybridNet compared to the ResNet baselines, in particular with CIFAR-10 with 1000 labels where the error rate goes from 45.2\% to 8.81\%, which demonstrates the large benefit of our regularizer. HybridNet also improves over the strong Mean Teacher baseline~\cite{Tarvainen2017}, with an improvement of 1.29\,pt with 1000 labeled samples on CIFAR-10, and 0.9\,pt on STL-10. We also significantly improve over other stability-based approaches~\cite{Sajjadi2016,Laine2016}, and over the Ladder Networks~\cite{Rasmus2015} and GAN-based techniques~\cite{Springenberg2015,Salimans2016}.
These results demonstrate the capability of HybridNet to apply to large residual architectures --\,which are very common nowadays\,-- and to improve over baselines that already provided very good performance.
\begin{table}[tb]
\setlength{\tabcolsep}{4pt}
\caption{Results on CIFAR-10, STL-10 and SVHN using a ResNet-based HybridNet. ``Mean Teacher ResNet'' is our classification \& stability baseline; results marked with ${}^*$ are not reported in the original paper and were obtained by us
}
\label{table:cifar10}
\centering
\begin{tabular}{@{}lrrrrrr@{}}
\toprule
Dataset & \multicolumn{3}{c}{CIFAR-10} & \multicolumn{2}{c}{SVHN} & $\!\!\!$STL-10 \\
\cmidrule{2-7}
Nb. labeled images & 1000 & 2000 & 4000 & 500 & 1000 & \multicolumn{1}{c}{1000} \\
\midrule
SWWAE~\cite{Zhao2016a} & & & & & 23.56 & 25.67 \\
Ladder Network~\cite{Rasmus2015} && & 20.40 \\
Improved GAN~\cite{Salimans2016} & 21.83 & 19.61 & 18.63 & 18.44 & 8.11 & \\
CatGAN~\cite{Springenberg2015} && & 19.58 \\
Stability regularization~\cite{Sajjadi2016} && & 11.29 & 6.03 & & \\
Temporal Ensembling~\cite{Laine2016} && & 12.16 & 5.12 & 4.42 \\
Mean Teacher ConvLarge~\cite{Tarvainen2017} & 21.55 & 15.73 & 12.31 & 4.18 & 3.95 \\
Mean Teacher ResNet~\cite{Tarvainen2017} & 10.10 & & 6.28 & ${}^*$2.33 & ${}^*$2.05 & ${}^*$16.8 \\
\cmidrule{1-7}
ResNet baseline~\cite{Gastaldi2017} & 45.2 & 24.3 & 15.45 & 12.27 & 9.56 & 18.0 \\
\textbf{HybridNet [ours]} & \textbf{8.81} & \textbf{7.87} & \textbf{6.09} & \textbf{1.85} & \textbf{1.80} & \textbf{15.9} \\
\bottomrule
\end{tabular}
\end{table}
\section{Conclusion}
In this paper, we described a novel semi-supervised framework called HybridNet that proposes an auto-encoder-based architecture with two distinct paths that separate the discriminative information useful for classification from the remaining information that is only useful for reconstruction. This architecture is accompanied by a loss and training technique that allows the architecture to behave in the desired way. In the experiments, we validate the significant performance boost brought by HybridNet in comparison with several other common architectures that use reconstruction losses and stability. We also show that HybridNet is able to produce state-of-the-art results on multiple datasets.
With two latent representations that explicitly encode classification information on one side and the remaining information on the other side, our model may be seen as a competitor to the recently proposed fully reversible RevNet models, which implicitly encode both types of information.
We plan to further explore the relationships between these approaches.
\textbf{Acknowledgements.} This work was funded by grant DeepVision (ANR-15-CE23-0029-02, STPGP-479356-15), a joint French/Canadian call by ANR \& NSERC.
\bibliographystyle{splncs}
\section{Introduction}
\label{secIntroduction}
\input{intro.tex}
\section{Background}
\label{secRel}
\input{related.tex}
\section{Keeping the potato hot}
\label{secProblem}
\input{problem.tex}
\section{OPTIC}
\label{secOptic}
\input{optic.tex}
\section{Data Plane Scalability Analysis}
\label{secResults}
\input{results.tex}
\section{Conclusion}
\label{secConclusion}
\input{conclusion.tex}
\bibliographystyle{IEEEtran}
\subsection{Sorting and Rounding BGP Routes}
\label{sortround}
First, we show that optimal protecting sets can be efficiently computed and maintained by sorting and \textit{rounding} BGP routes in a specific way.
We start by explaining this concept in a high-level fashion before formally detailing our solution.
\subsubsection{General idea}
\label{sec:genid}
Using our $\beta$ (inter-domain attributes) and $\alpha$ (IGP distance) attribute separation, we can compute optimal protecting sets easily.
Indeed, $\beta$ is of higher importance than $\alpha$ within the BGP decision process, and IGP events can only affect $\alpha$, leaving $\beta$ unchanged.
Thus, given the current optimal route, denoted $R^{st}$, with $\beta(R^{st}) = \beta^{st}$, the new optimal route after an IGP event is among the ones
with the same best $\beta^{st}$ \CR{-- we simply need to find the one with the new best $\alpha$}.
We can then easily avoid predicting which gateway will be the optimal one for
a specific event; whatever the IGP event is (except the gateway failure possibly requiring to look for more gateways), the new optimal route is among $\{R ~|~ \beta(R) = \beta^{st}\}$. We thus create a \textit{rounded} set that includes all routes sharing the same $\beta$.
After the IGP event, since $\beta$ attributes are unaffected, we simply need to consider that $\alpha$ may have changed and find within this rounded set the gateway with the lowest $\alpha$ (\ie, with a simple min-search).
In Fig.~\ref{fig:mainexample}, such a set would be composed of
n\tss{1}, n\tss{2}, n\tss{3} as they share the same (best) $\beta$ attributes. We indeed showed in Section~\ref{secDecoupling} that any of these three gateways may become the new optimal gateway.
This is however not sufficient to deal with all failures. In particular, if the first rounded set only contains one gateway, a single failure may render all routes within the set unusable.
If this scenario occurs (because there are no two node-disjoint paths towards the external prefix -- see Section~\ref{secOinDataPlan}), more
gateways are needed to optimally protect the prefix.
Since $\beta$ attributes are unchanged by an internal event, the new best route is, a priori, among the ones with second-to-best $\beta$ attributes $\beta^{nd}$.
To form an optimal protecting set, we add rounded sets of $\beta$-tied gateways up until there is enough path diversity
to ensure that no single failure may render all of them unreachable \CR{(there are two node-disjoint paths between the border routers and prefix p)}.
By never adding less at a time than all gateways sharing the same $\beta$, we ensure that the final set contains all potential optimal gateways (as only $\alpha$ can be affected by internal events).
This final set (composed of the union of rounded sets) is thus optimally protecting, and the new optimal gateway can be found through a simple walk-through of this set after any IGP event.
If two prefixes share an equal optimal protecting set, they belong in the same group and share the same set in memory, reducing both the memory consumption and the number of entries to go through and update upon an event (as covering all shared sets covers all prefixes).
In Fig.~\ref{fig:mainexample}, n\tss{1}, n\tss{2}, and n\tss{3} provided enough path-diversity to ensure the prefix was protected, \CR{and shared the same best $\beta$}. \CR{Thus, the optimal gateway after any internal event is bound to be one of these three, which then form an optimal protecting set (for all single possible failures).}
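As an illustration of this rounding-and-min-search idea, the short Python sketch below groups the routes of Fig.~\ref{fig:mainexample} by their $\beta$ attributes and recovers the new optimal gateway after an IGP event with a single min-search; the tuple encoding of $\beta$ and the post-failure distances are assumptions made for the example.
\begin{verbatim}
# Routes as (beta, gateway) pairs; beta is encoded as a tuple ordered
# so that smaller means more preferred (e.g. (-local_pref, as_path_len)).
routes = [((-10, 1), "n1"), ((-10, 1), "n2"),
          ((-10, 1), "n3"), ((-10, 2), "n4")]

# Rounding: keep every gateway sharing the same (best) beta.
routes.sort(key=lambda r: r[0])
best_beta = routes[0][0]
rounded = [gw for beta, gw in routes if beta == best_beta]

# After any IGP event, only alpha (the IGP distance) changes:
alpha = {"n1": 9, "n2": 8, "n3": 6}   # post-failure distances
print(min(rounded, key=alpha.get))    # -> n3, the new optimal gateway
\end{verbatim}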
We can now present formally the data structures allowing OPTIC to compute and maintain optimal protecting sets easily.
Our solution requires to re-design both
the control- and the data-plane. The control-plane refers to all learned BGP routes. It is restructured
to ease the handling of the routes, their comparison in particular, for efficient computation of optimal protecting sets. The data-plane
only contains the information necessary for the optimal forwarding and
protection of all prefixes (\ie, the optimal protecting sets).
The resulting
structures are illustrated in Fig.~\ref{fig:optictree}, which shows how the
network depicted in Fig.~\ref{fig:mainexample} would be translated
within OPTIC's control-plane (left) and data-plane (right). \CR{To better illustrate our data structures, we assume here that n$_4$ has a better MED than n$_5$, while other routes do not possess any MED.}
The next sections describe the control-plane structure, how we construct optimal protecting sets from it, and how they are used in the data-plane.
\subsubsection{OPTIC's control-plane}
\label{secRadix}
\begin{figure}
\hspace{-5mm}
\centering
\includegraphics[scale=0.55]{new-new-OPTIC-tree.pdf}
\caption{OPTIC's data-plane and control-plane data structures.
In the control-plane, routes are sorted within a prefix tree $T$ whose leaves form a linked list $L$ of structured BGP NH. MED-tied routes from the same AS are chained within a linked list inside their leaf.
Only a sufficient optimally protecting subset $O$ of routes is pushed to the data-plane.}
\label{fig:optictree}
\end{figure}
At the control-plane level, OPTIC stores every BGP route learned within a sorted
prefix-tree referred to as $T$, whose leaves form an ordered linked-list $L$,
which contains rounded sets of routes sorted by decreasingly preferred $\beta$ attributes.
Both $T$ and $L$ are per-prefix structures. The set of all trees, for all prefixes,
is referred to as $\mathbb{T}$.
It is important to observe that, since $\alpha$ is not considered, the tree and the list stay stable upon IGP changes and that
routes sharing the same $\beta$ attributes are stored within the same leaf.
\CR{This observation implies that when an IGP event occurs, the BGP NH of the new optimal route belongs to the first leaf of the list $L$ that contains at least one reachable gateway.}
In addition, a route from another leaf
cannot be preferred to the routes of the first leaf.
\textbf{While any route within the first leaf can become optimal after an internal change, the order of the leaves themselves can not be modified by an IGP change}.
The MED attribute can only be used to compare
routes originated from the same AS, hence we cannot use it as a global, generic
attribute. One can only consider a route with a greater MED if the
route with a better one from the same AS becomes unreachable.
Thus, routes discriminated by their MED (\textit{MED-tied} routes) for each
AS are stored within a sub-linked-list inside their leaf. This is illustrated in
Fig.~\ref{fig:optictree} with n\tss{4} and n\tss{5}. Both BGP NH share the
same three first BGP attributes and are thus stored within the same blue
leaf of $T$ (MR2). As they originated from the same AS, we store them in a sorted
linked list depending on their MED attribute.
By doing so, we consider only the first route in the \textit{MED-tied} list that is reachable (referred to as $M_{top}$), respecting the MED's semantics. \CR{Peculiar situations, such as routes not having a MED while others do, can be resolved by applying the standard ISP practices (\eg, ignoring such routes or using a default MED value)}.
The leaves of the tree $T$ thus form a sequence of rounded sets of gateways
\CR{stable upon IGP changes}. We call each set a \textit{MED-aware Rounded set}.
\begin{mydef}\label{def:RS} MED-Aware Rounded sets (MR)\quad \hrulefill\\% \hrulefill\\
For a given prefix, a leaf of its prefix tree $T$ is called a MED-Aware Rounded set. In particular, it contains all the routes having the same $\beta$ attributes (MED-Excluded).\quad \hrulefill
\end{mydef}
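For concreteness, a possible in-memory rendering of these control-plane structures (one leaf per MR set, MED-tied routes chained per AS inside a leaf) is sketched below in Python; the class layout and field names are assumptions chosen for clarity, not OPTIC's actual implementation.
\begin{verbatim}
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Route:
    gateway: str
    med: int = 0          # only compared between routes of the same AS
    alpha: float = 0.0    # IGP distance, maintained by the data plane

@dataclass
class MRSet:
    """Leaf of T: all routes sharing the same MED-excluded beta.
    Each chain holds the MED-tied routes of one AS, sorted by MED;
    the first reachable route of a chain plays the role of M_top."""
    beta: Tuple
    chains: List[List[Route]] = field(default_factory=list)

# L: leaves sorted by decreasingly preferred beta (stable upon IGP events).
L = [MRSet((-10, 1), [[Route("n1")], [Route("n2")], [Route("n3")]]),
     MRSet((-10, 2), [[Route("n4", med=10), Route("n5", med=20)]])]
\end{verbatim}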
\subsubsection{Getting optimal-protecting sets from the control-plane
\label{secModel}
We now explain how this construct eases the computation of optimal protecting sets.
For each prefix p, the first MR set contains, by construction, the optimal
pre-convergence route. As stated in Section~\ref{secRadix}, any BGP NH within the same MR set may offer the new optimal post-convergence route after an internal event.
However, this first MR set is not always sufficient to protect p.
In this case, the new optimal BGP
NH is bound to be within the first MR set in $L$ which contains a gateway that is still reachable.
Consequently, OPTIC constructs an optimally protecting set by considering the union of the best MR sets, in order,
until the destination prefix p is protected from any failure, \ie, there exist two node-disjoint paths. The union of such MR
sets is referred to as an Optimal-Protecting Rounded (OPR) set for p.
The formal definition is given in Theorem~\ref{th:OPR}. Due to lack of space and its intuitive design, its proof is not presented in this paper (but is available in \cite{masterthesis}).
\begin{myth}\label{th:OPR} Optimal-Protecting Rounded sets (\textbf{OPR})\quad \hrulefill\\
Let $p$ be a prefix, and $M_1,M_2,\ldots$ be the sequence of MR sets in the list $L$.
Let $\set{o} = \bigcup_{i=1}^{x}M_{i}$ with $x$ minimal such that there exist two node-disjoint
paths towards $p$ (passing through $\set{o}$).
\textbf{Then, $\set{o}$, called the Optimal-Protecting Rounded set of $p$, is optimally protecting $p$}. \quad \hrulefill
\end{myth}
Adding MR sets until the
prefix p is protected means that there now exists
enough path diversity to protect p from any single event.
The number of routes necessary to protect a prefix depends on the resilience of the network. In bi-connected networks, two gateways are enough.
The computation of OPR sets does not require any prior knowledge of the IGP graph to cover
all possible IGP events. Verifying the existence of two node-disjoint paths between the border router and p via $\set{o}$ is sufficient, and it is the lightest possible test of the protection property. Unless the protection property
is affected by the event, OPR sets stay stable.
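A compact sketch of this construction is given below, using the node-disjoint-path search of the networkx library as the protection test; the toy topology (in which a is a cut vertex for n\tss{1} and n\tss{2}) and the MR sets are assumptions loosely inspired by Fig.~\ref{fig:mainexample}.
\begin{verbatim}
import networkx as nx
from networkx.algorithms.connectivity import node_disjoint_paths

def extract_opr(mr_sets, graph, border, prefix="p"):
    """Union MR sets, best beta first, until two node-disjoint
    paths reach a virtual node standing for the remote prefix."""
    g = graph.copy()
    g.add_node(prefix)                   # virtual node for prefix p
    opr = []
    for mr in mr_sets:                   # MR sets sorted by beta
        for gw in mr:
            opr.append(gw)
            g.add_edge(gw, prefix)       # connect the whole MR set
        try:
            if len(list(node_disjoint_paths(g, border, prefix))) >= 2:
                return opr               # p is optimally protected
        except nx.NetworkXNoPath:
            pass                         # p not even reachable yet
    return opr                           # best effort otherwise

G = nx.Graph([("s", "a"), ("a", "c"), ("c", "n1"),
              ("a", "n2"), ("s", "b"), ("b", "n3")])
print(extract_opr([["n1", "n2"], ["n3"]], G, "s"))  # all three gateways
\end{verbatim}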
\subsubsection{Using OPR sets in the data-plane
\label{secOinDataPlan}
Once OPR sets are extracted from the control-plane, we push them to the data-plane.
The bottom part of
Fig.~\ref{fig:optictree} shows OPTIC's data-plane.
For a given prefix, only the OPR set O (and not the whole list $L$) that optimally protects p is pushed to the data-plane. The data-plane contains the meta-set \metaset{O}{} of all OPR sets for all groups of prefixes,
indexed by their hash, as shown in Fig.~\ref{fig:optictree}.
Prefixes, when sharing the same OPR set, point towards the same set O.
The hash index is content-based (see next sections for more details) and eases the management of \metaset{O}{}. Allowing prefixes to share the same O reduces the amount of data that has to be
stored within the data-plane, as well as the scale of the updates. Note that,
since O is constructed from a subset of $L$, prefixes can share the same OPR set O while having different control-plane structures $L$.
\begin{algorithm}\caption{update\_OPR}\label{alg:upOPR}
\footnotesize
\hrulefill
\SetAlgoLined
\SetAlgoVlined
\KwData{ L, \metaset{O}, oldH, p, G, D
}
\KwResult{ updates OPR sets and P\tss{BGP}}
\vspace{-2mm}
\hrulefill
\SetKwProg{Fn}{Function}{ $\rightarrow$}{end}
\Fn{update\_OPR}{
\nl \set{o}~ = \kwEOS(L, G)\; \label{alg:uOR:extract}
\nl \For{$M_{top} \in \set{o}$}{
\nl\While{M$_{top}$ $\neq \emptyset$\label{alg:uOR:reachable?}}{
\nl M$_{top}$.$\alpha${} = D[M$_{top}$]\;
\nl M$_{top}$ = M$_{top}$.next\; \label{alg:uOR:med}
}
}
\nl \set{o}$_{top}$ = \textbf{min}$_\alpha$(\set{o})\; \label{alg:uOR:topg}
\nl newH = \hash(\set{o})\;
\nl \If{\set{o}~$\notin$ \metaset{O}}{
\nl \metaset{O}[newH]~= \set{o}\; \label{alg:uOR:insert}
}
\nl P\tss{BGP}(p) = newH\;
%
\nl \If{$\not\exists$~ p~ $|$ P\tss{BGP}(p) = oldH}{
\nl \remove \metaset{O}[oldH] \from \metaset{O};\label{alg:uOR:remove}
}
}
\hrulefill\vspace{2mm}
\end{algorithm}
\changed{
Algorithm~\ref{alg:upOPR} shows how the OPR sets are updated in the data-plane when necessary. The optimal protection property may require adding gateways to the data-plane structure \set{o}{} (while removals are performed for efficiency).
We start by extracting the OPR set \set{o}{} from the control-plane structure L{} (Line~\ref{alg:uOR:extract}).
The required MR sets are computed by first \emph{(i)} creating a graph $G'$ from $G$ where we add a virtual node representing a remote prefix, then \emph{(ii)} connecting in $G'$ the gateways from MR sets, MR per MR to this virtual node, until there exist two node-disjoint paths towards the virtual node.
\textbf{ExtractOPRSet} thus returns an OPR set as defined by Theorem~\ref{th:OPR}. We then add the IGP distances towards each gateway (contained within $D$)
to the structure (Line~\ref{alg:uOR:reachable?}). This is done for each gateway, including MED-tied ones (Line~\ref{alg:uOR:med}).
}
Finally, OPTIC retrieves the current optimal gateway within O, \set{o}$_{top}$, \ie, the one with the lowest IGP distance (Line~\ref{alg:uOR:topg}).
Once the OPR set O is updated, we compute its hash to check its existence within \metaset{O}{} and
insert it if required (Line~\ref{alg:uOR:insert}).
Finally, if no prefixes still use \CR{the previous} O, it is removed from the data-plane (Line~\ref{alg:uOR:remove}).
This algorithm maintains the data-plane in an optimal-protecting state.
Its limited complexity is often bypassed (after the bootstrap), as we expect
OPR sets to stay stable in bi-connected networks. The complexity of \textbf{ExtractOPRSet} scales linearly in the IGP dimensions.
Unused OPR sets could be kept transiently to mitigate the effects of intermittent failures.
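A Python rendering of this procedure, reusing the extract\_opr sketch above and a content-based hash over the set of gateways, could look as follows; the dictionary-based structures are assumptions for illustration.
\begin{verbatim}
def update_opr(L, O_sets, old_hash, prefix, graph, D, P_bgp, border):
    """Sketch of update_OPR. L: MR sets (lists of gateways) for this
    prefix; O_sets: hash -> (gateways, alpha); P_bgp: prefix -> hash."""
    opr = extract_opr(L, graph, border)                  # rebuild the set
    alpha = {gw: D.get(gw, float("inf")) for gw in opr}  # refresh alpha
    top = min(opr, key=alpha.get)                        # min-search
    new_hash = hash(frozenset(opr))                      # content-based
    O_sets.setdefault(new_hash, (opr, alpha))            # insert if new
    P_bgp[prefix] = new_hash
    if old_hash not in P_bgp.values():                   # unused set?
        O_sets.pop(old_hash, None)                       # garbage-collect
    return top
\end{verbatim}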
\subsection{Dealing with BGP and IGP events}
\label{secAlgs}
We describe here how OPR sets are
updated upon a BGP or an IGP event to achieve
optimal protection of all destinations.
\begin{algorithm}\caption{BGP\_Update}\label{alg:bgp}
\footnotesize
\hrulefill
\SetAlgoLined
\SetAlgoVlined
\KwData{ \metaset{T}, \metaset{O}, P\tss{BGP}, R~=~(p, n, $\beta$), G, D}
\KwResult{ Update of \metaset{T}, \metaset{O}, P\tss{BGP}\\}
\vspace{-2mm}
\hrulefill
%
\SetKwProg{Fn}{Function}{ $\rightarrow$}{end}
\Fn{BGP\_Update}{
\nl T~ = \metaset{T}(p, tree)\; \label{alg:bgp:getT}
\nl H = P\tss{BGP}(p); \tcp{H = \hash(\set{o})} \label{alg:bgp:hash}
\nl L~ = \metaset{T}(p, leaves)\; \label{alg:bgp:getL}
\nl \uIf{Event = Add}{
\nl rMR = \add R~\kwin T \; \label{alg:bgp:add}
}
\nl \Else
\nl rMR = \remove R~ \from T\; \label{alg:bgp:remove}
}
\nl \If{rMR $\in$ \kwusefulMR(L)}{\label{alg:bgp:useful}
\nl \upOP(L, \metaset{O}, H, p, G, D)\;\label{alg:bgp:recomp1}
}
}
\hrulefill\vspace{2mm}
\end{algorithm}
\subsubsection{BGP updates}
Algorithm~\ref{alg:bgp} showcases how to maintain OPR sets upon a BGP update, being either an \textit{Add} (\ie, a new route is learned) or a \textit{Down} event (\ie, a
withdraw that cancels a route)\footnote{A modified route can
be handled through a \textit{Down} followed by an \textit{Add}.}.
As a BGP update concerns a
given prefix, only one OPR set O (the one that optimally protects p) is modified when necessary. Intuitively,
checking whether the route R belongs (or should belong) to the leaves of $T$ extracted to create the current O (\ie, if R belongs to the current O) is enough to know if the update is necessary.
First, Alg.~\ref{alg:bgp} retrieves the route-tree $T$
of the updated prefix p (Line~\ref{alg:bgp:getT}).
Depending on the nature of the update,
we update the control-plane structure $T$ (and implicitly $L$) by
either adding (Line~\ref{alg:bgp:add}) or removing
(Line~\ref{alg:bgp:remove}) the updated route.
When performing these operations, we store the rank of the MR set containing the route R, \textit{rMR}.
Using rMR, one can check whether R belongs (or should belong) to O (Line~\ref{alg:bgp:useful}), \eg, by memorizing the number of MR sets used to form O.
If R is not good enough to belong to the current OPR set, there is no need
to consider it and the algorithm ends.
Otherwise, if R is a newly added (resp. withdrawn) route, it must be added
(resp. removed) from the data-plane structure \set{o}{} which can be found in $\mathbb{O}$ through its hash.
In both cases, \set{o}{} has to be updated (Line~\ref{alg:bgp:recomp1}).
One can see that OPTIC's behavior when dealing with a BGP update is pretty
straightforward and that BGP events are likely to have no bearing on the data-plane.
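The following self-contained Python sketch mimics this logic on a simplified control plane, where $L$ is reduced to a sorted list of $(\beta,\,\text{set of gateways})$ pairs; the representation is an assumption made for brevity.
\begin{verbatim}
import bisect

def bgp_update(L, beta, gateway, event, opr_depth):
    """Sketch of the BGP update. L: sorted (beta, gateways) pairs;
    opr_depth: number of leading MR sets forming the current OPR set.
    Returns True when the OPR set must be recomputed."""
    i = bisect.bisect_left([b for b, _ in L], beta)
    if event == "add":
        if i < len(L) and L[i][0] == beta:
            L[i][1].add(gateway)            # join an existing leaf of T
        else:
            L.insert(i, (beta, {gateway}))  # create a new MR set
    elif i < len(L) and L[i][0] == beta:    # withdraw
        L[i][1].discard(gateway)
        if not L[i][1]:
            del L[i]                        # drop the now-empty leaf
    return i < opr_depth                    # does R touch the OPR set?

L = [((-10, 1), {"n1", "n2", "n3"}), ((-10, 2), {"n4"})]
print(bgp_update(L, (-10, 3), "n6", "add", opr_depth=1))  # False
\end{verbatim}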
\begin{algorithm}\caption{IGP\_Change}\label{alg:igpup}
\hrulefill
\SetAlgoLined
\SetAlgoVlined
\footnotesize
\KwData{ \metaset{T}, \metaset{O}, P\tss{BGP}, G$=(E,V)$, l, w }
\KwResult{ Update of \metaset{T}, \metaset{O}, P\tss{BGP}, G\\}
\vspace{-2mm}
\hrulefill
\SetKwProg{Fn}{Function}{ $\rightarrow$}{end}
\Fn{Change}{
\nl D~ = \spf(G, l, w)\;
\nl \ForAll{\set{o}~ $\in$ \metaset{O}}{ \label{alg:igp:allO}
\nl\ForAll{M$_{top}$ $\in$ \set{o}}{\label{alg:igp:through}
\nl\While{M$_{top}$ $\neq \emptyset$ $\wedge$ D[M$_{top}$] $= \infty$\label{alg:igp:reachable??}}{
\nl M$_{top}$ = M$_{top}$.next\; \label{alg:igp:med}
}
\nl\uIf{M$_{top}$ $= \emptyset$}{
\nl \remove M$_{top}$~ \from \set{o}\; \label{alg:igp:remove}
}
\nl\Else{
\nl M$_{top}$.$\alpha$ = D[M$_{top}$]\;\label{alg:igp:weight}
}
}
\nl \set{o}$_{top}$ = \textbf{min}$_\alpha$(\set{o})\; \label{alg:igp:opt}
\nl\uIf{l~ $\in E \wedge~ w \neq \infty$}{ \label{alg:igp:cond2}
\nl \kwcontinue\;
}
\nl\ElseIf{\ispro(\set{o}, G) $\wedge~ \kwismin(\set{o})$}{{\label{alg:igp:test2disjoint_routes}
\nl \kwcontinue\;
}
}
\nl \add \hash(\set{o}) \kwin OPR\_to\_update\;
}
\nl\ForAll{$H \in OPR\_to\_update$}{\label{alg:igp:broke}
\nl \ForAll{$p~ |~ P\tss{BGP}(p) =$ H}{
\nl L~ = \metaset{T}(p, list)\;
\nl\upOP(L, \metaset{O}, H, p, G, D)\;
}
}
}
\hrulefill\vspace{2mm}
\end{algorithm}
\subsubsection{IGP changes}
Algorithm~\ref{alg:igpup} showcases the behavior of OPTIC upon an IGP
change, which can be a modification of the existence (insertion or
deletion -- modeled by an infinite weight) or of the weight w{} of a link l{} (a node-wide change can be modeled through its multiple outgoing links).
Upon a change, the new IGP distances D{} are
recovered. OPTIC then considers each \set{o}, thus covering every BGP prefix
(Line~\ref{alg:igp:allO}).
For each relevant gateway (with the best MED for each AS, $M_{top}$) within \set{o}, we first check
whether it is still reachable (Line~\ref{alg:igp:reachable??}). Unreachable
gateways are replaced by the next MED-tied route when possible (Line~\ref{alg:igp:med}) or removed (Line~\ref{alg:igp:remove}) otherwise. Reachable gateways are
first updated with their new best IGP distances (Line~\ref{alg:igp:weight}).
\changed{The whole group of prefixes using the set benefits from its new optimal path as soon as possible (Line~\ref{alg:igp:opt}).
Afterward, if necessary, we update OPTIC's structures in the background to anticipate any future internal event.}
If the updated link l{} is a
valuation change (Line~\ref{alg:igp:cond2}), there is no loss of reachability. Thus, \set{o}{} still contains two disjoint paths towards p and remains stable.
For other kinds of events, \set{o}{} may need to be updated, as
connectivity may have evolved due to the insertion or deletion of a link.
If a link was added, the network connectivity may have increased and useless MR sets can be removed
if \set{o}{} is not already minimal (\eg, containing two gateways).
If a link was removed, \set{o}{} may have lost its protecting property
and may have to be updated. These two scenarios, leading to the update of
\set{o}{}, are visible in the condition Line~\ref{alg:igp:test2disjoint_routes}.
This update is used to prepare for a future event.
We perform it in the background afterwards (Line~\ref{alg:igp:broke}) and continue to walk through \metaset{O}{} to restore the optimal forwarding state of all groups of prefixes quickly.
The aforementioned update (Line~\ref{alg:igp:broke}) is performed at the prefix granularity (\ie, for each prefix that used an \set{o}{} that will be updated).
Indeed, while these prefixes share the same O before the change,
they do not necessarily share the same $L$. Since O may be updated by fetching
information from $L$, they may point to distinct OPR sets after the update.
Recall that this is a background processing phase where OPTIC may fall back to the prefix granularity to anticipate the next change, only if node-bi-connectivity is not granted anymore.
The fast switch to the new optimal post-convergence gateway was already performed at Line~\ref{alg:igp:opt}.
This switch is not done at the prefix granularity; it is performed once per \set{o}{} instead.
In short, most BGP and IGP updates both result in simple operations.
A BGP update just triggers a prefix tree manipulation: a single OPR set is re-computed only if the updated route is, or should be, part of the set.
An IGP weight-change
results in the walk-through of all OPR sets ($\metaset{O}$) \CR{and a min-search} to converge to the new optimal forwarding state \CR{followed by a background processing if necessary}.
We argue that the cardinal of $\metaset{O}$ will be orders of magnitude lower than the number of BGP prefixes in most networks.
The failure or addition of a link or node results in the same walk-through, but could also require the background update of some OPR sets to prepare for a future event.
More precisely, only when the network gains or loses its bi-connected property could some OPR sets be affected. New OPR sets then need to be re-computed for the prefixes of the groups that depended on the affected OPR sets.
Instead of the number of prefixes, OPTIC convergence scales with the number and the size of the OPR sets.
Consequently, to assess the viability of OPTIC, we aim at limiting
their size (and so number). While Sec.~\ref{secResults} analyzes that aspect in
detail, the next subsection explores conditions on the graph properties allowing the use of smaller optimally protecting sets.
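To summarize the fast path of the IGP convergence, the sketch below restores the optimal forwarding state with one min-search per OPR set; the dictionary representation is again an assumption, and the background update of unprotected sets is only hinted at in a comment.
\begin{verbatim}
def igp_change(O_sets, D):
    """After the new IGP distances D are known, walk through all OPR
    sets and pick the reachable gateway with the lowest distance."""
    new_tops = {}
    for h, opr in O_sets.items():
        alive = {gw: D[gw] for gw in opr if gw in D}  # reachable only
        if alive:
            new_tops[h] = min(alive, key=alive.get)   # min-search
        # else: protection lost; schedule a background update_OPR
    return new_tops

O_sets = {1: {"n1", "n2", "n3"}}
print(igp_change(O_sets, {"n1": 9, "n2": 8, "n3": 6}))  # {1: 'n3'}
\end{verbatim}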
\subsection{Optimizations}
\label{ssec:opti}
In this section, we introduce some
optimizations that reduce the size of the OPR sets used by OPTIC.
Let us start with a fairly reasonable assumption: well-designed networks should
offer bi-connectivity between border routers.
Based on this realistic hypothesis, we can consider two kinds of
reductions: $(i)$ removing MED-tied entries from an OPR set and $(ii)$ discarding
all gateways in the second MR set except the best one (when the first MR set includes only one gateway).
As the first optimization will not be mentioned further on, we will not dwell on it.
Intuitively, since the MED attribute is of higher importance than the IGP cost,
it may allow us to remove routes with a greater (\ie, less preferred) MED from the set, as an IGP cost change alone cannot make these routes optimal.
This second optimization will be evaluated in our theoretical analysis and allows
to keep at most one gateway from the second MR set when the first one contains a single gateway.
If the current optimal gateway is not part of the path towards the first gateway of the second MR set, adding this
second gateway is enough to form an OPR set. When the first MR set is made of only one gateway and the network is bi-connected, OPTIC only needs to consider other gateways for the specific case of the optimal gateway failure (as other changes cannot make it less preferred than gateways from the following MR sets).
If the second-best gateway does not use the first to reach p, its IGP
distance will not be impacted by the current gateways' failure, and it will become, after the failure,
the best gateway overall. This allows OPTIC to create many OPR sets containing only two routes.
\subsection{BGP/IGP: an Intimate Relationship}
\label{secDecoupling}
\begin{table}
\centering
\caption{Simplified BGP route selection}
\begin{tabular}{@{} r L L}
\toprule
\emph{Step} & \emph{Criterion} & \\ \midrule
\tikzmark[xshift=-10pt,yshift=1ex]{x}1 & Highest \texttt{local-pref} LP & (economical relationships)\\
\rowcolor{black!5}[0pt][0pt] 2 & Shortest \texttt{as-path} & \\
3 & Lowest \texttt{origin} & \\
\rowcolor{black!5}[0pt][0pt] \tikzmark[xshift=-10pt,yshift=-1.2ex]{y}4 & Lowest MED & (cold-potato routing) \\
\hline
\hline
\tikzmark[xshift=-10pt,yshift=1ex]{w}5 & eBGP over iBGP & \\
\rowcolor{black!5}[0pt][0pt] 6 & Lowest IGP cost & (hot-potato routing)\\
\tikzmark[xshift=-10pt,yshift=-1ex]{z}7 & Lowest \texttt{router-id} rid & (arbitrary tie-break)\\
\bottomrule
\hline
\drawbrace[brace mirrored, thick]{x}{y}
\drawbrace[brace mirrored, thick]{w}{z}
\annote[left]{brace-1}{$\beta$~ }
\annote[left]{brace-2}{$\alpha$~ }
\end{tabular}\label{fig:bgp-ranking}
\vspace{-2mm}
\end{table}
We start by showcasing the IGP-BGP interplay, resulting
in the need to go through all BGP prefixes after IGP events.
BGP routes are characterized by a collection of attributes of decreasing importance (Table~\ref{fig:bgp-ranking})
that can be locally modified by each router. Each attribute comes into
play whenever routes cannot be
differentiated through the previous ones. This route selection is
called the \textit{BGP decision process}. \CR{Note that the MED differs from the other attributes.
It can be used in different ways, but should only be used to compare routes that originate from the same AS, breaking the total order of the decision process. While our solution can be adapted to any MED flavors, we consider this most general one.}
\begin{figure}
\centering
\includegraphics[scale=0.45]{OPTIC-base-example.pdf}
\caption{This example consists of an AS that learns routes towards p{} via several border routers, focusing on the point of view of s. Each link from an internal border router to the BGP NH is labeled with the type of relation between the two ASes (p2c means provider-to-customer, p2p and c2p, peer-to-peer and customer-to-provider respectively, modeled in practice by a decreasing local preference). A route R\tss{x} is advertised by the BGP NH n\tss{x}. The routes announced by n\tss{4} and n\tss{5} are discriminated through the MED attribute. Unlabeled edges have weight one.}
\label{fig:mainexample}
\end{figure}
When the inter-domain related attributes (local-pref, as-path \CR{and MED}) of two routes are equal, routers choose
the route with the closest exit point in terms of IGP distance (hot-potato routing, Line 6 in
Table~\ref{fig:bgp-ranking}).
This criterion is at the core of the
dependency between BGP and the IGP. To exhibit this interplay,
we separate the BGP attributes into two sub-lists: $\beta$ and $\alpha$,
as shown in Table~\ref{fig:bgp-ranking}. $\beta$ is composed of the
attributes purely related to inter-domain routing. They usually remain unchanged
within an AS but some operators may configure them to change in iBGP~\cite{vissicchio2014ibgp}.
Our work remains valid in both cases. Thus, for the sake of simplicity, we assume they are constant inside an AS.
The attributes $\alpha$, on the other hand, are, by construction,
router-dependent and focus on intra-domain routing.
Thus, a route R{} towards a
prefix p{} and advertised by a gateway or BGP next-hop (BGP NH) n{}
is characterized by the vector of attributes $\beta \circ \alpha$, with $\beta
= $ [LP, as-path length, origin, med] and $\alpha = $ [ibgp/ebgp, igp
cost, router id]. Since attributes after the IGP cost are simple tie-breaks,
and since rule 5 can be seen as an extension to the IGP cost (an
eBGP route has an IGP cost of 0), we can refer to $\alpha$ simply
as the IGP distance towards the BGP NH.
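This lexicographic behavior can be made explicit with a few lines of Python, reproducing the example of Fig.~\ref{fig:mainexample}; the numeric encoding of $\beta$ (negated local-pref first) is an assumption chosen so that Python's tuple order matches the decision process of Table~\ref{fig:bgp-ranking}.
\begin{verbatim}
# Each route is ranked by the vector beta o alpha; smaller tuples win.
routes = {
    "n1": ((-10, 1), 4),   # (beta, alpha) as seen from s
    "n2": ((-10, 1), 5),
    "n3": ((-10, 1), 6),
    "n4": ((-10, 2), 2),
}
rank = lambda n: routes[n][0] + (routes[n][1],)
print(min(routes, key=rank))            # n1 before the failure

routes["n1"] = (routes["n1"][0], 9)     # an IGP event changes alpha...
routes["n2"] = (routes["n2"][0], 8)     # ...but never beta
print(min(routes, key=rank))            # n3 after the failure of a->c
\end{verbatim}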
It now becomes clear that IGP events may affect the ranking of BGP routes. Let us consider Fig.~\ref{fig:mainexample}. We state that R\tss{x} $\prec$ R\tss{y} if R\tss{x} is better than R\tss{y} according to the BGP decision process. We
consider the routes R\tss{1}, R\tss{2}, R\tss{3} and R\tss{4} towards the prefix p announced by
n\tss{1}, n\tss{2}, n\tss{3} and n\tss{4} respectively.
\CR{The MED being irrelevant to the point, we consider that these routes have no MED for now.}
R\tss{4} originates from a client and has an as-path length of 2, leading to the attributes $\beta(\text{R\tss{4}}) = $ [p2c, 2, -, -]. R\tss{1}, R\tss{2} and R\tss{3} are all characterized by the same $\beta(\text{R\tss{1},R\tss{2}, R\tss{3}}) = $ [p2c, 1, -, -] and so are discriminated through their $\alpha$ distances (4 vs 5 vs 6).
All have a better $\beta$ than R\tss{4}. Thus, overall, R\tss{1} $\prec$ R\tss{2} $\prec$ R\tss{3} $\prec$ R\tss{4} from the point of view of s.
While the inequality R\tss{1} $\prec$ R\tss{2} $\prec$ R\tss{3} holds initially,
this order is reversed after the failure of the link a$\rightarrow$c as the IGP distances, taken into account by BGP,
go from 4, 5 and 6 to 9, 8 and 6 respectively. After the failure, R\tss{3} $\prec$ R\tss{2} $\prec$ R\tss{1}, \CR{requiring the BGP decision process to run before the new best route is found.}
However, note that inter-domain related attributes ($\beta$) are left unaffected by an IGP event, meaning that R\tss{4} will remain less preferred than the other three routes after \textbf{any} IGP event.
\subsection{Fast BGP re-routing upon IGP Changes}
\label{secOptProc}
We now detail why current solutions are not sufficient to guarantee fast BGP convergence.
The state-of-the-art solution would be a combination of BGP PIC~\cite{filsfils2011bgp}
(implemented on many routers) and BGP Add-Path (for path diversity).
PIC relies on the use of a Hierarchical FIB (HFIB): for each prefix, a router maintains a pointer to the BGP NH which in turn points to the IGP NH (instead of
simply memorizing the associated outgoing interface).
To protect against the failure of the gateway, PIC stores at least the two best current BGP NH, which can be learned through Add-path.
However, PIC only ensures partial, sub-optimal protection and requires running the usual BGP convergence afterward, as illustrated in
Fig.~\ref{fig:mainexample}.\changed{
After the failure of the link a$\rightarrow$c, PIC's HFIB restores the connectivity to n\tss{1} by updating the IGP NH used to reach n$_1$, allowing the transit traffic to go through n\tss{1} again.
However, this IGP event leads to a change in the $\alpha$ ranking of some BGP routes.
As seen in Sec.~\ref{secDecoupling}, after this failure, R\tss{3} becomes the new best route.
Restoring the connectivity to n\tss{1} without considering the changes
on $\alpha$ leads to the use of a sub-optimal route until BGP re-converges, thus violating the hot-potato routing policy.
Besides, the traffic may first be re-directed after the IGP convergence, and then re-directed once again after
the BGP convergence, potentially leading to flow disruptions~\cite{disrupt}.
Leaving aside the optimality issues, storing the two best BGP NHs is not enough to protect the transit traffic against all failures when the network is poorly connected. Even if both n\tss{1} and n\tss{2} were stored, both become unreachable if $a$ fails (due to the network not being node-biconnected), leading to a loss of connectivity until BGP re-converges and finds the new best available gateway, n\tss{3}.}
In both scenarios, retrieving the correct new optimal path requires the BGP decision process which does not scale well\footnote{
Studies proposed ways to store reduced sets of routes to
enhance update times~\cite{sobrinho_distributed_2014,
thorup2001compact} but not specifically to deal with IGP changes for transit traffic.}.
\subsection{How to Reach a Symbiotic Coupling?}
We present here the necessary operational condition to untie the
BGP convergence from IGP events. The question to address is: how to efficiently pre-compute the subset composed of every BGP route that may become the new \textit{best} route upon \textit{any} IGP change?
We state that prefixes need to be \textit{optimally protected}, as per Definition~\ref{def:opt}.
\begin{mydef}{Optimal Protection}\label{def:opt}\quad \hrulefill\\%\hrulefill\\
Let p{} be an external destination. We state that p{} is \textbf{optimally protected} by a set \set{o},
if both pre- and post-convergence BGP NHs are stored within \set{o}.
More precisely, \set{o}{} should verify the two following properties for any IGP change $c$:
\begin{itemize}
\item (i) It contains the best BGP NH n~towards p~before $c$ occurs (pre-convergence NH for p)
\item (ii) It contains the BGP NH of the new best path
towards p{} after $c$ occurs (post-convergence NH for p). It should be true for any kind of $c$: link or node event, n{} included, such as an insertion, deletion or weight update.\quad \hrulefill
\end{itemize}
\end{mydef}
Computing such sets naively may look costly, as predicting the optimal
gateway for each specific possible failure and weight change is time-consuming.
However, OPTIC computes and maintains these sets efficiently by \textit{rounding} them.
We will show that the size and number of such rounded sets are limited in most cases.
Finding the new optimal gateway among these sets is performed through a simple
min-search within each set (with no additional computation), thus updating
each group of prefixes depending on this set, and hence every prefix.
Depending on how such sets of gateways (per group of prefixes) are designed, OPTIC can protect the traffic transiting in a BGP network against any link, node, or even SRLG \CR{(\ie, links sharing a common fate)} single failure.
\subsection{Preliminary Model: counting \#OPR sets}
\label{secComplexity}
\newif\ifmodellong
\modellongfalse
\newif\ifmodelcourt
\modelcourttrue
\ifmodellong
\modelcourtfalse
\fi
We consider an AS (or a portion of it), with $B$ bi-connected gateways advertising $P$ prefixes in total.
Each prefix p{} is advertised by a subset of $b \leq B$ of those gateways, chosen uniformly at random.
The $\beta$ of each prefix is represented by a value between $1$ and $ps$ (policy spreading) also chosen uniformly at random.
For a given p, this implies that any subset of gateways of a given size $n \leq b$ all have the same probability to be the OPR set for p.
Our model analyzes the number $|\metaset{O}| = |{\metaset{O}}_{B,P,ps}|$ of unique OPR sets depending on the number $B$ of gateways, the number $P$ of prefixes, and on the policy spreading $ps$. In practice, we decide to set $b$ to a constant value (\eg, $b=5$) greater than the median in \cite{luckie_as_2013}.
The quantity $|{\metaset{O}}_{B, P,ps}|$ can be seen as the sum of distinct OPR sets of different sizes.
\ifmodelcourt
From our assumptions, each OPR set of size $n$ (${2 \leq n \leq b}$) is in ${\metaset{O}}_{B, P,ps}$ with the same probability, that we denote $\P_{B, P, ps, n}$. Since there are $\binom{B}{n}$ such possible sets of size $n$, we have:
\begin{equation}\label{eq:simple model main formula}
|{\metaset{O}}_{B,P,ps}| = \sum_{n=2}^{b} \binom{B}{n} \P_{B, P, ps, n}.
\end{equation}
For a given prefix, let $p_n$ be the probability that the size of its OPR set is $n$. We obtain
$
{\P_{B, P, ps, n} = 1-\left(1-\binom{B}{n}^{-1}\right)^{p_nP}}
$.
Since we assume that each prefix is learned by $b$ gateways, and the weight associated with each gateway is chosen uniformly at random between $1$ and $ps$, we use ``Balls into bins'' analysis to compute $p_n$:
\[
p_n = \sum\limits_{i=1}^{ps}
\frac{1}{ps^{n}}\left(1-\frac{i}{ps}\right)^{b-n}
\left(
(i-1)b\binom{b-1}{n-1} + \binom{b}{n}
\right)
\]
\fi
\ifmodellong
From our assumptions, OPR sets of size $n$ (${2 \leq n \leq b}$) are in ${\metaset{O}}_{B, P,ps}$ with the same probability, that we denote $\P_{B, P, ps, n}$.
In other words, $\P_{B, P, ps, n}$ is the probability for a particular subset of gateways of size $n$ to be the OPR set of at least one prefix (among all the $P$ prefixes). Since there are $\binom{B}{n}$ such possible sets of size $n$, we have:
\begin{equation}\label{eq:simple model main formula}
|{\metaset{O}}_{B,P,ps}| = \sum_{n=2}^{b} \binom{B}{n} \P_{B, P, ps, n}.
\end{equation}
For a given prefix, let $p_n$ be the probability that the size of its OPR set is $n$. Hence, the number of prefixes associated with OPR sets of size $n$ is $p_nP$ in average. Thus, the probability that a given OPR set of size $n$ is not associated with any prefix is $\left(1-\binom{B}{n}^{-1}\right)^{p_nP}$. So, the probability that a given OPR set is associated with at least one prefix is
$
{\P_{B, ps, P, n} = 1-\left(1-\binom{B}{n}^{-1}\right)^{p_nP}}
$.
We now compute the probability $p_n$ for a given prefix to have an OPR set of size $n$.
Since we assume that each prefix is learned by $b$ gateways, and the weight associated with each gateway is chosen uniformly at random between $1$ and $ps$, we can use ``Balls into bins'' techniques to count how many gateways have minimal weight and to obtain \emph{(i)} the probability that only one or two gateways have minimal weight, in which case the OPR set is of size 2, thanks to our optimization and \emph{(ii)} the probability that $n \geq 2$ gateways have minimal weight, in which case the OPR set is of size $n$:
\[\scriptstyle
p_n=\left\{
\begin{array}{ll}
\sum\limits_{i=1}^{ps}\frac{b}{ps}\left(1-\frac{i}{ps}\right)^{b-1} + \binom{b}{2}\frac{1}{ps^{2}}\left(1-\frac{i}{ps}\right)^{b-2}&\text{if }n{=}2\\
\sum\limits_{i=1}^{ps}\binom{b}{n}\frac{1}{ps^{n}}\left(1-\frac{i}{ps}\right)^{b-n} &\text{if }n{\geq}3
\end{array}
\right.
\]
\fi
A similar reasoning can be done for our optimization looking for sets having 2 gateways.
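For readers who wish to evaluate the model numerically, the short Python sketch below implements Equation~(\ref{eq:simple model main formula}) together with the ``balls into bins'' expression for $p_n$; the function names are ours, and the printed value depends on the chosen parameters.
\begin{verbatim}
from math import comb

def p_n(n, b, ps):
    """Probability that a prefix's OPR set has size n."""
    return sum((1 / ps**n) * (1 - i / ps)**(b - n)
               * ((i - 1) * b * comb(b - 1, n - 1) + comb(b, n))
               for i in range(1, ps + 1))

def expected_opr_sets(B, P, ps, b=5):
    """Expected number of distinct OPR sets |O_{B,P,ps}| (Eq. 1)."""
    return sum(comb(B, n)
               * (1 - (1 - 1 / comb(B, n))**(p_n(n, b, ps) * P))
               for n in range(2, b + 1))

print(round(expected_opr_sets(B=50, P=800_000, ps=5)))
\end{verbatim}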
We now take into account the specificity of the local-pref attribute.
\subsection{Towards a Realistic Evaluation}
\label{secAnalysis}
\begin{table*}
\centering
\caption{Number of distinct OPR sets ($|\mathbb{O}|$) for several scenarios.}
\input{as-decomposition-table.tex}
\label{tab:number of OPR sets per category}
\end{table*}
\subsubsection{Breakdown Into Classes}
In practice, neighboring ASes are partitioned in several classes (\emph{eg.}, clients, peers, and providers), usually represented by the local-pref attribute.
At the end of the decision process, we know that a prefix p~is associated with a single class.
Indeed, the local-pref depends only on the set of advertising neighbors for p: it belongs to the class of the neighbor having the highest local-pref.
This allows us to split the analysis by class.
With this assumption, OPR sets are included inside a unique class of gateways, but as a counterpart, the policy spreading in each class is reduced (because gateways have the same local-pref inside a class).
We use our former model to compute the number of distinct OPR sets in each class with $ps=b=5$.
This calibration is pessimistic enough as it only takes into account a limited AS length dispersion and always 5 learning gateways in the best class.
$B_1$, $B_2$ and $B_3$, denote respectively the number of gateways with local-pref 1, 2 and 3.
Similarly, $P_1$, $P_2$ and $P_3$, denote respectively the number of prefixes originating from a gateway with local-pref 1, 2 and 3.
We have $P_1 + P_2 + P_3 = P =\num{800000}$ and $B_1 + B_2 + B_3 = B$.
We can now compute $|\metaset{O}|$ by assuming each class follows our basic model:
\[|\metaset{O}| = |{\metaset{O}}_{B_1,P_1,5}| + |{\metaset{O}}_{B_2, P_2,5}| + |{\metaset{O}}_{B_3, P_3, 5}|
\]
This sum gives the theoretical performance of OPTIC as it is the number of OPR sets each router has to manage.
\subsubsection{Definition of the Lower Bound}
We define here the best theoretical performance an optimally protecting scheme could reach, to compare it with OPTIC. Such a scheme would have to store sets of at least two gateways (fewer cannot ensure protection).
This lower bound also provides an estimation of the performances of techniques just aiming at providing (non-optimal) protection like~\cite{filsfils2011bgp}.
In other words, with $P_i$ prefixes and $B_i$ gateways in a given class, the average minimum number of optimally protecting sets is the average number of distinct sets obtained when choosing $P_i$ random subsets of two gateways (such sets are chosen uniformly at random).
\subsubsection{Evaluation on a Fixed Breakdown}
We now compute $|\metaset{O}|$ for several AS categories; a \emph{Stub} has few peers and even fewer providers, from where most of the prefixes originate; a \emph{Transit} (Tier 2, 3 and 4) has a limited number of providers, from where the majority of the prefixes originate, more peers and possibly numerous customers; a \emph{Tier 1} has few peers and a large number of customers. For \emph{Transit} and \emph{Tier 1}, we present different class and prefix breakdowns.
Note that our model is pessimistic, as, for \emph{Tier 1} in particular, ASes may have more classes with $ps > 5$ (\eg, gateways can be geographically grouped).
The number of gateways, and their partition into classes, are rounded upper bounds of realistic values obtained from~\cite{luckie_as_2013}.
Moreover, we did not assume any specific popularity of certain gateways.
Using our complementary material~\cite{zenodo}, $|\metaset{O}|$ can be computed for any parameters.
Table~\ref{tab:number of OPR sets per category} shows that the number of OPR sets is more than reasonable for Stubs and small Transit. For large transit, the distribution of the prefixes into classes has a great impact on $|\metaset{O}|$.
As expected, for Tier 1, the number of OPR sets is high, but OPTIC is close to the lower bound (there is not much room for possible improvements).
The number of routes contained within each
OPR set is limited, meaning that the min-search applied upon
an IGP event has a limited computational cost.
Finally, it is worth recalling that our analysis is pessimistic because uniform. Regional preferences or gateway popularity can strongly reduce the size of \metaset{O}{} in practice.
\newcommand{\delta}{\delta}
\subsubsection{Evaluation on a Variable Breakdown}
In addition to the previous specific cases, we show here how $|\metaset{O}|$ evolves depending on the sizes of the classes and their relation.
For that, we introduce the variable $\delta$ that represents the ratio between the sizes of two successive classes.
More precisely, when a breakdown considers a ratio of $\delta$, it means $(B_1, B_2, B_3) = (B_1,B_1\times \delta,B_1\times \delta^2)$.
Similarly, we also assume that the number of prefixes learned by each class verifies $(P_1, P_2, P_3) = (P_1,P_1/\delta,P_1/ \delta^2)$.
We present in Figure~\ref{fig:memory usage per ratio} the number of distinct OPR sets depending on $\delta$ for $B=500$; $\delta$ varies from 1 (all the classes have the same size) to 15 (each class has 15 times more gateways than the previous class, but learns 15 times fewer prefixes).
When the number of gateways is low, the management cost is obviously limited (even when all the classes have the same size).
With $B = 500$, OPTIC's performance is limited for small $\delta$ but gets better quickly.
When
$\delta \geq 8$, our optimization performs as well as the lower bound.
We now investigate the case where $\delta$ equals 5 and look at how $|\metaset{O}|$ evolves depending on $B$.
We see in Fig.~\ref{fig:memory usage per number of gateways} that our optimized OPR reduction outperforms the non-optimized one.
For less than 1500 gateways, the number of OPR sets is smaller than \num{100000} with our optimized version. Then, $|\metaset{O}|$ increases quickly to reach \num{200000} when there are around 4000 gateways, with a lower bound at \num{125000} sets. Afterwards, the growth is linear, so the proportional overhead of our solution compared to the lower bound tends to one.
The management cost of OPTIC is reasonable, especially for networks having a limited number of border gateways where OPTIC exhibits very good performances.
For large networks having numerous gateways, the size of our data-plane structures remains limited relative to the number of prefixes, and there is little room for theoretical improvement.
\begin{figure}
\centering
\includegraphics[width=6cm]{number-of-groups-per-ratio-B=500.png}
\caption{$|\metaset{O}|$ depending on the ratio $\delta$ between classes with $B=500$.
}\label{fig:memory usage per ratio}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=6cm]{number-of-groups-per-number-of-gateways-r=5.png}
\caption{$|\metaset{O}|$ depending on the number of gateways, with $\delta = 5$.
}\label{fig:memory usage per number of gateways}
\end{figure}
\section{Introduction}
Deep learning pipelines are typically designed to generalize, that is, to perform well on new data.
Consider pairs of inputs and outputs $(\boldsymbol{x}_1,y_1),\dots, (\boldsymbol{x}_n,y_n)\in\mathbb{R}^p\times\mathbb{R}$ and a corresponding loss function $\ell\,:\,\mathbb{R}\times\mathbb{R}\to[0,\infty)$.
In practice,
generalization is typically measured through sample splitting:
after calibrating the parameters of the pipeline on a training set $\mathcal{T}\subset\{1,\dots,n\}$,
the corresponding network~$\mathfrak{g}$ is tested on the validation set $\mathcal{V}:= \{1,\dots,n\}\setminus\mathcal{T}$:
\begin{equation*}
\text{empirical generalization error}~:= ~\sum_{i\in\mathcal{V}}\ell\bigl[\mathfrak{g}[\boldsymbol{x}_i],y_i\bigr]\,.
\end{equation*}
In theory,
generalization is typically measured through statistical risks~\cite{bartlett_mendelson_2001,lederer2020risk}:
\begin{equation*}
\text{theoretical generalization error}~:= ~\mathbb{E}_{(\boldsymbol{w},v)}\,\ell\bigl[\mathfrak{g}[\boldsymbol{w}],v\bigr]\,,
\end{equation*}
where $\mathbb{E}_{(\boldsymbol{w},v)}$ is the expectation over a random sample $(\boldsymbol{w},v)\in\mathbb{R}^p\times\mathbb{R}$.
Both empirical and theoretical generalization errors essentially describe a network's average error on new data.
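As a minimal illustration, the empirical criterion can be computed in a few lines of Python; the toy data, network, and squared loss below are placeholder assumptions.
\begin{verbatim}
def empirical_generalization_error(g, data, val_idx, loss):
    """Sum of the loss over the validation set V."""
    total = 0.0
    for i in val_idx:
        x, y = data[i]
        total += loss(g(x), y)
    return total

data = [([0.0], 0.1), ([1.0], 0.9), ([2.0], 2.2)]  # (x_i, y_i) pairs
g = lambda x: x[0]                                 # trivial "network"
sq = lambda yhat, y: (yhat - y) ** 2
print(empirical_generalization_error(g, data, val_idx=[2], loss=sq))
\end{verbatim}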
The strength of generalization is that it captures, or at least tries to capture, a network's performance on \emph{unknown} inputs.
For example, generalization seems appropriate for training fraud-detection systems,
which might face all sorts of unforeseen input~$\boldsymbol{w}$.
But in some cases,
the relevant inputs are already \emph{known}.
For example,
assume that we have covariate vectors~$\boldsymbol{x}_i$ and corresponding diagnoses~$y_i$ for $n$~patients,
and that we want to use these data to diagnose a new group of patients with known but unlabeled covariates~$\boldsymbol{w}_1,\dots,\boldsymbol{w}_g$.
We call $\boldsymbol{w}_1,\dots,\boldsymbol{w}_g$ the \emph{target vectors}.
We could, of course, still train a deep learning pipeline towards generalization as usual,
but this does not seem optimal at all,
because it completely disregards the information about the unlabeled target vectors~$\boldsymbol{w}_1,\dots,\boldsymbol{w}_g$.
Our objective in this paper is, therefore, to train networks for optimal performance for individual or groups of known but unlabeled target vectors.
For further reference,
we call this objective \emph{targeted deep learning}.
The idea of including target vectors into data-driven decision-making processes has already gained traction recently in medical statistics under the names ``precision medicine'' and ``personalized medicine''~\cite{harvey_brand_holgate_kristiansen_lehrach_palotie_prainsack_2012, huang2019tuning,hellton2020penalized}.
A main motivation for this paper is to highlight the potential of this general idea in deep learning.
Moreover,
we devise a concrete approach for deep learning that is surprisingly simple and yet can be used in a wide variety of frameworks.
Rather than changing the pipelines themselves,
as done previously in medical statistics,
we include the targeting into the optimization routines.
We finally illustrate our framework and methods in applications in medicine, engineering, and other fields.
Our three main contributions are:
\begin{itemize}
\item We establish the concept of targeted deep learning.
\item We devise a simple yet effective approach to targeted deep learning.
\item We highlight the benefits of targeted deep learning in practice.
\end{itemize}
\paragraph{Outline}
In Section~\ref{sec:frameworkmethods},
we develop a statistical framework and a concrete algorithm for targeted deep learning.
In Section~\ref{sec:applications},
we show the effectiveness of our approach on real-world data.
In Section~\ref{sec:discussion},
we highlight the potential impact of our paper on precision/personalized medicine and beyond.
\paragraph{Related work}
The framework of targeted deep learning as defined above has not been studied in deep learning before.
But other frameworks that deviate from the standard setup of generalization have been studied already.
We briefly discuss some of them in the following.
The arguably most related one is \emph{personalized deep learning}~\cite{rudovic_lee_dai_schuller_picard_2018,schneider2020personalization}.
Personalized deep learning assumes that each sample corresponds to one of~$m$ separate entities (such as patients, customers, countries, ...).
The goal is then to train separate networks for each of these entities.
There is a range of approaches to personalized deep learning:
Networks are trained on the whole dataset first and then trained further on the subset of data that corresponds to the entity of interest;
this approach is called \emph{transfer learning}~\cite{transfer1993, zhuang2020comprehensive}.
Alternatively, networks are first trained on the samples of the entity of interest and then trained further on the entire data;
this approach is called \emph{early shaping}~\cite{schneider2020personalization,pmlr-v28-sutskever13}.
Or, instead, networks are trained on the samples of the entity of interest combined with similar samples from the entire dataset;
this approach is called \emph{data grouping}~\cite{schneider2020personalization,shorten_khoshgoftaar_2019}.
Networks can also be trained on features specifically designed for personalization~\cite{gupta2017learning}.
In addition,
there are approaches designed for specific tasks, such as tag recommendation~\cite{nguyen_wistuba_grabocka_drumond_schmidt-thieme_2017}.
While personalized deep learning and targeted deep learning can be used on similar types of data,
they differ substantially in their goals and requirements:
personalized deep learning seeks optimal generalization for a given entity (``person''),
while targeted deep learning seeks optimal prediction for an input or a group of inputs---whether these inputs belong to a common entity or not;
personalized deep learning requires a (sufficiently large) number of labeled samples for the entity at hand,
while targeted deep learning works even for single, unlabeled target inputs.
Hence, none of these mentioned methods applies to our framework.
But then there are also other, less related frameworks.
One very well-known framework is \emph{unsupervised learning} (such as \emph{autoencoders}~\cite{kramer_1991}),
where
labels are missing altogether,
and where the goal is not prediction but instead a useful representation of the inputs.
There is also supervised learning with \emph{unlabeled data in the presence of domain knowledge},
where labels are missing but replaced by constraint functions,
and where the goal is generalization~\cite{stewart2016labelfree}.
Another framework is \emph{multimodal learning},
where data from different sources is used~\cite{8103116}.
One more is \emph{adaptive learning},
where the algorithm can query a label of a previously unlabeled sample~\cite{cortes2017adanet}.
Targeted deep learning does not compete with these frameworks and methods:
our concepts and methods simply provide an approach to a type of application that has not yet been considered in deep learning.
\section{Framework and methods}
\label{sec:frameworkmethods}
We first establish a framework for targeted deep learning and contrast targeted deep learning with personalized deep learning.
We then devise a general yet very simple approach to gear standard deep learning pipelines toward targeted deep learning.
\newcommand{\drawmyplot}{
\scalebox{0.7}{\begin{tikzpicture}
\node[draw, cloud, cloud puffs=10, cloud puff arc=120, aspect=3, line width=1.1] (model) at (0, 0) {deep learning model};
\node[draw, text width=30mm, align=center, line width=1.1] (inputdata) at (-6, 0) {labeled~training~data\\
$(\boldsymbol{x}_1,y_1)$\\
$(\boldsymbol{x}_2,y_2)$\\
$\cdots$};
\node[draw, text width=30mm, align=center, line width=1.1] (targets) at (6, 0) {unlabeled test data\\
$(\boldsymbol{w}_1,?)$\\
$(\boldsymbol{w}_2,?)$\\
$\cdots$};
\node[red, draw, text width=30mm, align=center, line width=1.1] (targets2) at (-6, -2) {unlabeled test data\\
$(\boldsymbol{w}_1,?)$\\
$(\boldsymbol{w}_2,?)$\\
$\cdots$};
\node (trainlow) at (-3.25, 0) {};
\draw[->, line width=1.1] (inputdata) -- (model);
\draw[->, line width=1.1] (model) -- (targets);
\draw[->, red, line width=1.1] (targets2) -| (trainlow);
\node (train) at (-3.25, 0.22) {train};
\node (apply) at (3.25, 0.22) {apply to};
\end{tikzpicture}}
}
\newcommand{\drawmyfigure}[1]{\begin{figure}[#1]
\centering
\drawmyplot
\caption{
our methods adjust standard deep learning (black parts) for targeted deep learning by using the unlabeled test data (red box) to focus the training algorithm.
}
\label{fig:overview}
\end{figure}
}
\subsection{A framework for targeted deep learning}
\label{sec:framework}
Consider an arbitrary (non-empty) class of neural networks $\mathcal{G}:= \{\mathfrak{g}_{\boldsymbol{\theta}}:\mathbb{R}^p\to\mathbb{R}~\text{with}~\boldsymbol{\theta}\in\Theta\}$ parameterized by $\Theta\subset\mathbb{R}^q$ and training data $(\boldsymbol{x}_1,y_1),\dots ,(\boldsymbol{x}_n,y_n)\in\mathbb{R}^p\times\mathbb{R}$.
The usual goal of deep learning is to find a $\widehat{\boldsymbol{\theta}}\equiv\widehat{\boldsymbol{\theta}}[(\boldsymbol{x}_1,y_1),\dots, (\boldsymbol{x}_n,y_n)]$ to achieve a low risk or generalization error:
\begin{equation*}
\mathbb{E}_{(\boldsymbol{w},v)}\bigl[\ell[\mathfrak{g}_{\boldsymbol{\theta}}[\boldsymbol{w}],v]\bigr]
\end{equation*}
where $\mathbb{E}_{(\boldsymbol{w},v)}$ is the expectation over a random sample $(\boldsymbol{w},v)\in\mathbb{R}^p\times\mathbb{R}$ that describes the training data, and $\ell:\mathbb{R}\times\mathbb{R}\to[0,\infty)$ is a loss function.
This goal is called \emph{generalization}.
Recently,
there has also been research on targeting the performances to entities that correspond to a subset of the data.
In other words,
one wants to maximize the performance of the deep learning pipeline for an entity (patient, client, ...) that corresponds---without loss of generality---to the first~$m$ training samples $\{(\boldsymbol{x}_1,y_1),\dots, (\boldsymbol{x}_m,y_m)\}$.
The goal is then to find a parameter $\widehat{\boldsymbol{\theta}}\equiv\widehat{\boldsymbol{\theta}}[(\boldsymbol{x}_1,y_1),\dots, (\boldsymbol{x}_n,y_n)]$ (which can depend on the entire data set) that generalizes well for that individual entity:
\begin{equation*}
\mathbb{E}_{(\boldsymbol{w}^{\operatorname{ent}},v^{\operatorname{ent}})}\bigl[\ell[\mathfrak{g}_{\boldsymbol{\theta}}[\boldsymbol{w}^{\operatorname{ent}}],v^{\operatorname{ent}}]\bigr]
\end{equation*}
where $\mathbb{E}_{(\boldsymbol{w}^{\operatorname{ent}},v^{\operatorname{ent}})}$ is the expectation over a random sample $(\boldsymbol{w}^{\operatorname{ent}},v^{\operatorname{ent}})\in\mathbb{R}^p\times\mathbb{R}$ that describes the distribution of the entity in question, that is, the first $m$~training samples (rather than all samples).
Of course, if all samples are independent and identically distributed,
it holds that $\mathbb{E}_{(\boldsymbol{w},v)}[\ell[\mathfrak{g}_{\boldsymbol{\theta}}[\boldsymbol{w}],v]]=\mathbb{E}_{(\boldsymbol{w}^{\operatorname{ent}},v^{\operatorname{ent}})}[\ell[\mathfrak{g}_{\boldsymbol{\theta}}[\boldsymbol{w}^{\operatorname{ent}}],v^{\operatorname{ent}}]]$,
but the basic notion here is that the individual entity can very well be different from the others.
The number of training samples~$m$ is usually considered much smaller than the total number of samples~$n$ (otherwise, there would hardly be a difference to the usual framework of deep learning),
but $m$ is also assumed to be much greater than~$1$ (because the known approaches involve steps that only have the first $m$ samples at their disposal): $1\ll m\ll n$.
The described framework is called \emph{personalized deep learning}.
A generic application of personalized deep learning is recommender systems~\cite{aggarwal_2018}.
In other applications, however, we have both training data and a (potentially very small) group of unlabeled target inputs.
Our goal is then to design a deep learning pipeline that works well for those targets.
Mathematically speaking,
we assume we are given a group of $g$ target vectors~$\boldsymbol{w}_1,\dots,\boldsymbol{w}_g$ and try to find a parameter $\widehat{\boldsymbol{\theta}}\equiv\widehat{\boldsymbol{\theta}}[(\boldsymbol{x}_1,y_1),\dots, (\boldsymbol{x}_n,y_n),\boldsymbol{w}_1,\dots,\boldsymbol{w}_g]$ that has a low empirical loss over these targets:
\begin{equation*}
\sum_{i=1}^g\ell[\mathfrak{g}_{\boldsymbol{\theta}}[\boldsymbol{w}_i],v_i]\,,
\end{equation*}
where $v_1,\dots,v_g\in\mathbb{R}$ are the unknown labels of the target vectors.
We call this framework \emph{targeted deep learning}.
A generic application is precision medicine,
where we attempt to select a diagnosis/treatment/... for an individual patient that has not been diagnosed/treated/... beforehand.
We see in particular that personalized deep learning and targeted deep learning are quite different:
personalized deep learning is about heterogeneity in the data,
while targeted deep learning is about targeting the learning to prespecified inputs.
More broadly speaking,
targeted deep learning provides a framework for tasks not covered by existing frameworks but increasingly relevant in practice---see our Section~\ref{sec:applications}, for example.
\subsection{Methods for targeted deep learning}
\label{sec:methods}
Having specified the goal of targeted deep learning,
we introduce methods for training the parameters accordingly.
As described in Section~\ref{sec:framework}, methods for personalized deep learning require (typically large groups of) labeled data for each individual and have an entirely different goal;
similarly, methods for unsupervised deep learning, multimodal learning, and so forth are not suitable here (compare to our previous section on related literature).
Our methods, motivated by~\cite{bu_lederer_2021,huang2019tuning},
instead treat the target vectors as additional knowledge that can be integrated into the training procedure.
An overview of this concept is provided in Figure~\ref{fig:overview}.
Specifically,
we integrate the target vectors into the optimization algorithm in a way that requires minimal changes to existing deep-learning pipelines and yet is very effective.
\drawmyfigure{t}
The starting point of our approach is a measure for the similarity between a training sample and the target vectors.
In mathematical terms,
we assume a function
\begin{align*}
\mathfrak{s}~:~\mathbb{R}^p\times\mathbb{R}^p\times\mathbb{R}^p\times\dots\,&\to\,[0,\infty)\\
\boldsymbol{x},\boldsymbol{w}_1,\boldsymbol{w}_2,\dots\,&\mapsto\,\mathfrak{s}[\boldsymbol{x},\boldsymbol{w}_1,\boldsymbol{w}_2,\dots]\,.
\end{align*}
The first argument of the function is for the input of the training sample under consideration;
the remaining arguments are for the target vectors.
Formally, the function can have arbitrarily many inputs to account for arbitrarily many target vectors,
but in practice,
the number of target vectors is typically limited.
In any case,
we call the function~$\mathfrak{s}$ the \emph{similarity measure}.
One can take whatever similarity measure seems suitable for a specific application.
But it turns out that the ``classical choice'' (see Appendix~\ref{sec:max} for further considerations)
\begin{equation*}
\mathfrak{s}[\boldsymbol{x},\boldsymbol{w}_1,\boldsymbol{w}_2,\dots]~:= ~\max\biggl\{0,\frac{\langle\boldsymbol{x},\boldsymbol{w}_1\rangle}{|\!|\boldsymbol{x}|\!|_2|\!|\boldsymbol{w}_1|\!|_2},\frac{\langle\boldsymbol{x},\boldsymbol{w}_2\rangle}{|\!|\boldsymbol{x}|\!|_2|\!|\boldsymbol{w}_2|\!|_2},\dots\biggr\}\,,
\end{equation*}
where $\langle\cdot,\cdot\rangle$ is the standard inner product on~$\mathbb{R}^p$ and $|\!|\cdot|\!|_2$ the Euclidean norm on~$\mathbb{R}^p$,
works for a wide range of applications---see especially our applications in Section~\ref{sec:applications}.
Recall from basic trigonometry that
\begin{equation*}
\frac{\langle\boldsymbol{x},\boldsymbol{w}_i\rangle}{|\!|\boldsymbol{x}|\!|_2|\!|\boldsymbol{w}_i|\!|_2}~=~\operatorname{cos}[\measuredangle_{\boldsymbol{x},\boldsymbol{w}_i}]
\end{equation*}
is the cosine of the angle between the vectors $\boldsymbol{x}$ and $\boldsymbol{w}_i$;
hence, geometrically speaking,
our choice of~$\mathfrak{s}$ uses angles to describe the similarity between an input and the target vectors (the additional $0$ in the maximum is just a threshold for the angles to ensure that the function~$\mathfrak{s}$ is nonnegative).
On the other hand,
we can also think of the elements of~$\boldsymbol{x}$ and~$\boldsymbol{w}_i$ as random variables and, assuming the vectors are centered, interpret
\begin{equation*}
\frac{\langle\boldsymbol{x},\boldsymbol{w}_i\rangle}{|\!|\boldsymbol{x}|\!|_2|\!|\boldsymbol{w}_i|\!|_2}~=~\frac{\sum_{m=1}^p x_m(w_i)_m}{\sqrt{\sum_{m=1}^p (x_m)^2}\sqrt{\sum_{m=1}^p ((w_i)_m)^2}}~=~\operatorname{cor}[\boldsymbol{x},\boldsymbol{w}_i]
\end{equation*}
as their empirical correlation~\cite{stigler_1989};
hence, statistically speaking,
our choice of~$\mathfrak{s}$ uses correlations to describe the similarity of an input and the target vectors.
Since the correlation has a long-standing tradition in measuring similarities~\cite{6355687,london_1895,stigler_1989,kendall_1938,goodman_kruskal_1979},
we can indeed call our~$\mathfrak{s}$ the classical choice.
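To make this concrete, the following minimal Python sketch implements the classical choice above (the function name and the NumPy-based implementation are our own illustrative choices, and all vectors are assumed to be nonzero):
\begin{verbatim}
import numpy as np

def similarity(x, *targets):
    # Largest non-negative cosine between the input x and any target
    # vector; for centered vectors this equals the empirical correlation.
    cosines = (float(np.dot(x, w)) / (np.linalg.norm(x) * np.linalg.norm(w))
               for w in targets)
    return max(0.0, max(cosines))
\end{verbatim}
For example, \texttt{similarity(x, w1, w2)} returns a value in $[0,1]$, with $1$ indicating that the input points in exactly the same direction as one of the targets.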
Most deep learning pipelines optimize the parameters based on stochastic-gradient descent (SGD)~\cite{Bottou98onlinelearning}.
In the usual mini-batch version,
SGD draws $b$~samples $(\widetilde{\boldsymbol{x}}_1,\widetilde y_1),\dots,(\widetilde{\boldsymbol{x}}_b,\widetilde y_b)$ uniformly at random without replacement from the training data $(\boldsymbol{x}_1,y_1),\dots, (\boldsymbol{x}_n,y_n)$,
calculates the gradient with respect to the selected batch of data of size~$b$,
and then updates the parameters accordingly.
We propose to use this strategy with a small twist:
instead of drawing samples uniformly at random,
we propose to draw samples in a way that accounts for the similarity between a sample and the targets.
Specifically,
we propose to draw the samples in each mini-batch from the dataset with probabilities
\begin{equation*}
\mathbb{P}\bigl\{(\widetilde{\boldsymbol{x}}_j,\widetilde y_j)=({\boldsymbol{x}}_i, y_i)\bigr\}~=~\frac{\mathfrak{s}[\boldsymbol{x}_i,\boldsymbol{w}_1,\boldsymbol{w}_2,\dots]}{\sum_{m=1}^n\mathfrak{s}[\boldsymbol{x}_m,\boldsymbol{w}_1,\boldsymbol{w}_2,\dots]}~~~~~~j\in\{1,\dots,b\},i\in\{1,\dots,n\}\,.
\end{equation*}
This means that we weight the drawing mechanism according to the similarities between the data points and the targets:
the more similar a sample is to the targets,
the more often it tends to appear in a batch.
We thus propose to replace the standard uniform drawing scheme in stochastic-gradient descent by a weighted drawing scheme.
In short, we perform a form of \emph{importance sampling}~\cite{importsample}.
The only difference to other stochastic-gradient descent algorithms with weighted drawing schemes is the drawing distribution:
our distribution directs the training towards the target inputs,
while the distributions applied in other contexts usually aim at fast numerical convergence~\cite{needell2015stochastic}.
The drawing scheme can, of course, be implemented very easily,
and the computational complexity of a single gradient update is the same as in the original setup.
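For instance, in PyTorch the weighted drawing can be realized with the built-in \texttt{WeightedRandomSampler}; the sketch below uses toy stand-ins for the data and target vectors (our assumptions, for illustration only), and a pipeline would have to fall back to uniform sampling in the corner case that all similarities are zero:
\begin{verbatim}
import numpy as np
import torch
from torch.utils.data import (DataLoader, TensorDataset,
                              WeightedRandomSampler)

X = np.random.randn(1000, 25).astype(np.float32)   # toy training inputs
y = np.random.randn(1000).astype(np.float32)       # toy training labels
targets = [np.random.randn(25) for _ in range(5)]  # toy target vectors

def similarity(x, ws):
    # the correlation-based measure defined above (vectors nonzero)
    return max(0.0, max(float(x @ w)
                        / (np.linalg.norm(x) * np.linalg.norm(w))
                        for w in ws))

# Weights proportional to the similarities; the sampler normalizes them.
weights = torch.tensor([similarity(x, targets) for x in X],
                       dtype=torch.double)
sampler = WeightedRandomSampler(weights, num_samples=len(X),
                                replacement=True)
loader = DataLoader(TensorDataset(torch.from_numpy(X),
                                  torch.from_numpy(y)),
                    batch_size=32, sampler=sampler)
\end{verbatim}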
But to avoid adjusting the code of every deep learning pipeline that one wants to use for targeted deep learning,
or simply for testing reasons,
we can also proceed slightly differently:
instead of altering SGD,
we can simply alter the data it operates on.
More precisely,
we can generate a new dataset with samples $(\boldsymbol{x}'_1,y_1'),\dots, (\boldsymbol{x}_{\lfloor t\cdot n\rfloor}',y_{\lfloor t\cdot n\rfloor}')\in\mathbb{R}^p\times\mathbb{R}$ by sampling from the original data with the following probabilities:
\begin{equation*}
\mathbb{P}\bigl\{(\boldsymbol{x}'_j, y_j')=(\boldsymbol{x}_i, y_i)\bigr\}~=~\frac{\mathfrak{s}[\boldsymbol{x}_i,\boldsymbol{w}_1,\boldsymbol{w}_2,\dots]}{\sum_{m=1}^n\mathfrak{s}[\boldsymbol{x}_m,\boldsymbol{w}_1,\boldsymbol{w}_2,\dots]}\,.
\end{equation*}
The parameter~$t\in(0,\infty)$ scales the number of samples of the new dataset;
in our experience, it is of very minor importance;
for example, $t=10$ works for all practical purposes---see Appendix~\ref{sec:additional}.
In any case, using the standard batching scheme (with uniform sampling) on the new dataset $\{(\boldsymbol{x}'_1,y_1'),\dots, (\boldsymbol{x}_{\lfloor t\cdot n\rfloor}',y_{\lfloor t\cdot n\rfloor}')\}$ then yields the desired distribution of the samples in each batch.
Thus, rather than changing the implementation of SGD,
we can apply the original pipeline to the new dataset.
While we allow for $t>1$,
the purpose of our data-preprocessing scheme is to simplify the batching---not to augment the data.
In particular,
in contrast to data-augmentation schemes designed for deep learning in data-scarce applications~\cite{9186097},
we do not generate new, artificial samples.
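A minimal sketch of this preprocessing step could read as follows (the function and argument names are ours; the similarity measure is the one sketched earlier):
\begin{verbatim}
import numpy as np

def targeted_resample(X, y, targets, t=10, seed=0):
    # Draw floor(t*n) samples from (X, y) with probabilities
    # proportional to their similarity to the target vectors.
    rng = np.random.default_rng(seed)
    def sim(x):
        return max(0.0, max(float(x @ w)
                            / (np.linalg.norm(x) * np.linalg.norm(w))
                            for w in targets))
    s = np.array([sim(x) for x in X])
    p = s / s.sum()  # assumes at least one positive similarity
    idx = rng.choice(len(X), size=int(t * len(X)), replace=True, p=p)
    return X[idx], y[idx]
\end{verbatim}
The returned pair can then be fed to any unmodified pipeline in place of the original training data.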
In conclusion,
our approach to targeted deep learning boils down to a simple data-preprocessing step.
This means that
adjusting existing deep learning pipelines and implementations to targeted deep learning is extremely easy.
\section{Applications}
\label{sec:applications}
We now show that targeted deep learning trains networks faster and more accurately than standard deep learning when given individual or groups of targets.
We consider six applications:
two regressions,
two general classifications,
and two image classifications.
We first explain the general setup, then detail all applications,
and finally provide and discuss the results.
\paragraph{General setup}
Our goal is to compare targeted deep learning with standard deep learning in a way that might generalize to other applications;
in particular,
excessive tailoring of the deep learning setup to the specific data as well as artefacts from specific deep learning supplements,
such as dropouts~\cite{hinton2012improving}, special activation functions~\cite{lederer2021activation}, complex layering, and so forth,
should be avoided.
More generally,
our interest is not to achieve optimal performance on a given data set (by fine-tuning the system, for example)
but to show that targeted deep learning can improve standard deep learning---irrespective of the specific pipeline---when target vectors are known beforehand.
We focus on small groups ($g=1$ and $g=5$),
because this is the most realistic scenario in the majority of the discussed applications.
However, we show in Appendix~\ref{sec:group} that our approach also works for much larger groups.
We use standard stochastic-gradient descent with a constant learning rate of~0.005.
Targeted deep learning is implemented as described in Section~\ref{sec:methods}.
The deep learning ecosystem is PyTorch~\cite{NEURIPS2019_9015} on Nvidia K80s GPUs;
the entire set of analyses can be computed within less than half a day on a single GPU.
\paragraph{Regression application I}
The first regression-type application is the prediction of the unified Parkinson's disease rating scale (UPDRS) based on telemonitoring data.
The UPDRS is regularly used to assess the progression of the disease.
But measuring a UPDRS is expensive and inconvenient for patients,
because it requires time-consuming physical examinations by qualified medical personnel in a clinic.
A promising alternative to these direct, in-person measurements is telemonitoring.
The data~\cite{DuaUCI2019,tsanas_little_mcsharry_ramig_2010,little_mcsharry_hunter_spielman_ramig_2009,little_mcsharry_roberts_costello_moroz_2007} consists of $n=5875$ samples that each comprise a UPDRS score and $p=25$ associated patient attributes as discussed in~\cite{tsanas_little_mcsharry_ramig_2010,little_mcsharry_hunter_spielman_ramig_2009}.
As usual, we standardize the data column-wise by subtracting the means and dividing through the standard deviations.
Our task is to predict the UPDRS of a new patient with attributes~$\boldsymbol{w}_1\in\mathbb{R}^{25}$ or of a group of new patients with attributes~$\boldsymbol{w}_1,\dots,\boldsymbol{w}_g\in\mathbb{R}^{25}$.
To test the performances of different approaches,
the inputs of samples (or groups of samples) selected uniformly at random are considered the ``new patient'' (or ``new group of patients''),
that is, they take the role of the unlabeled target vector (or vectors) $\boldsymbol{w}_1$ (or $\boldsymbol{w}_1,\dots,\boldsymbol{w}_g$),
and the corresponding outcome/outcomes are used for testing;
the remaining data is used as labeled training data $(\boldsymbol{x}_1,y_1),\dots ,(\boldsymbol{x}_n,y_n)$ (``previously telemonitored patients'').
We fit these data by a two-layer ReLU network with 150 and 50 neurons, respectively.
The training performances and the prediction performances on the targets are assessed by using the usual squared-error loss.
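For orientation, a PyTorch sketch of such a setup might look as follows; the single linear output neuron and the plain optimizer setup are our assumptions beyond the stated layer widths, loss, and learning rate:
\begin{verbatim}
import torch
import torch.nn as nn

model = nn.Sequential(            # p = 25 input attributes
    nn.Linear(25, 150), nn.ReLU(),
    nn.Linear(150, 50), nn.ReLU(),
    nn.Linear(50, 1),             # scalar UPDRS prediction
)
loss_fn = nn.MSELoss()            # squared-error loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.005)
\end{verbatim}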
\begin{figure}[!h]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{parkinsons_loss.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_parkinsons_loss.jpg}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{parkinsons_test.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_parkinsons_test.jpg}
\end{minipage}
\caption{training (top) and testing (bottom) losses for the Parkinson's telemonitoring data with single (left)/five (right) target vector/vectors.
Targeted deep learning yields faster and more accurate training than standard deep learning.}
\label{fig:parkinsons}
\end{figure}
\paragraph{Regression application II}
The second regression-type application is the prediction of concrete strengths.
High-strength concrete is one of the most wide-spread building materials.
However, the production process of high-strength concrete is complicated:
it involves a large number of ingredients that can interact in intricate ways.
Hence, a mathematical model is very useful to predict the strength of a given mixture beforehand.
The data~\cite{DuaUCI2019,yeh_1998} consists of $n=1030$ samples that each comprises the compressive strength of a given mixture and $p=8$ associated attributes as discussed in~\cite{yeh_1998}.
The data are standardized as described above.
The task is to predict the compressive strengths of new mixtures of concrete.
Again, in line with our notion of targeted deep learning,
we assume that the target mixtures are given beforehand.
The performances are evaluated as described above.
\begin{figure}[!h]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{concrete_loss.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_concrete_loss.jpg}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{concrete_test.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_concrete_test.jpg}
\end{minipage}
\caption{training (top) and testing (bottom) losses for the concrete compression strength data with single (left)/five (right) target vector/vectors.
Targeted deep learning yields faster and more accurate training than standard deep learning.}
\label{fig:concrete}
\end{figure}
\paragraph{Classification application I}
The first classification-type application is the detection of diabetic retinopathy.
Diabetic retinopathy is a medical condition of the eye caused by diabetes.
It affects many long-term diabetes patients,
and it usually has only mild symptoms until it becomes a significant threat to the patient's vision.
Thus, early diagnosis can improve healthcare considerably.
The data~\cite{DuaUCI2019,messiDR} consists of $n=1151$ samples extracted from the publicly accessible Messidor database of patient fundus images~\cite{messiDR}.
Each sample comprises a binary observation of the existence of diabetic retinopathy and $p=19$~image attributes as discussed in~\cite{antal_hajdu_2014}.
We standardize the features by subtracting the overall means and dividing through the overall standard deviations.
Our task is to classify a new patient or a group of patients in terms of diabetic retinopathy.
The data are fitted by a two-layer ReLU network with a cross-entropy output layer, where the layers have 150 and 50 neurons, respectively.
The training and testing accuracies are computed in terms of cross-entropy loss and classification error, respectively.
\begin{figure}[!h]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{diabetic_loss.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_diabetic_loss.jpg}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{diabetic_test.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_diabetic_test.jpg}
\end{minipage}
\caption{training losses (top) and testing accuracies (bottom) for the diabetic retinopathy data with single (left)/five (right) target vector/vectors.
Targeted deep learning yields faster and more accurate training than standard deep learning.}
\label{fig:diabetes}
\end{figure}
\paragraph{Classification application II}
The second classification-type application is the classification of fetal cardiotocograms.
A cardiotocogram is a continuous electronic record that monitors the fetal heart rate and uterine contractions of a pregnant woman.
It is the most common diagnostic technique for evaluating the well-being of a fetus and its mother during pregnancy and right before delivery.
The data~\cite{DuaUCI2019,ayres-de-campos_2000} consists of $n=2126$~cardiotocograms of different pregnant women.
Each sample comprises a categorical quantity that denotes one of ten classes of fetal heart-rate patterns and
$p=21$ cardiotocographic attributes as described in~\cite{ayres-de-campos_2000}.
The data are standardized as before.
The task is to classify (groups of) pregnant women regarding their fetal heart-rate patterns.
We test the performances at this task in the same way as in the diabetic retinopathy data set.
\begin{figure}[!h]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{card_loss.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_card_loss.jpg}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{card_test.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_card_test.jpg}
\end{minipage}
\caption{training losses (top) and test accuracies (bottom) for the cardiotocogram data with single (left)/five (right) target vector/vectors.
Targeted deep learning yields faster and more accurate training than standard deep learning.}
\label{fig:card}
\end{figure}
\paragraph{Image classification I}
The first image application is the classification of images of handwritten Japanese characters into 10 classes from the well-known Kuzushiji-MNIST (KMNIST) data set~\cite{rois-ds,kmnist}.
These data contain $n=60\,000$ samples for training and $10\,000$ samples for testing, where each sample is a $28 \times 28$-pixel grayscale image.
The data are standardized as described above.
Our task is to classify a new image.
The data are fitted by a two-layer ReLU convolutional neural network, where each convolutional layer is equipped with a max-pooling layer, and the kernel sizes for each convolution layer are 3 and 5, respectively.
We stress once more that we are not interested in the overall performance on the test data but instead in the performance on a (small) predefined set of images.
These performances are assessed as described above.
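For concreteness, one instance of such a network in PyTorch could be the following sketch; the channel counts, pooling sizes, and classifier head are our assumptions beyond the stated kernel sizes of 3 and 5:
\begin{verbatim}
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 4 * 4, 10),  # 28x28 input -> 4x4 feature maps
)
\end{verbatim}
As in the previous classification applications, the network is trained with a cross-entropy loss on the (resampled) training data.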
\begin{figure}[!h]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{kmnist_loss.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_kmnist_loss.jpg}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{kmnist_test.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_kmnist_test.jpg}
\end{minipage}
\caption{training losses (top) and test accuracies (bottom) for the KMNIST data with single (left)/five (right) target image/images.
Targeted deep learning yields faster and more accurate training than standard deep learning.}
\label{fig:kmnist}
\end{figure}
\paragraph{Image classification II}
The second image application is the classification of images of fashion items into 10 classes from the well-known Fashion-MNIST data set~\cite{xiao2017/online}.
These data contain $n=60\,000$ samples for training and $10\,000$ samples for testing, where each sample is a $28 \times 28$-pixel grayscale image.
We proceed as in the previous application.
\begin{figure}[!h]
\centering
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{mnist_loss.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_mnist_loss.jpg}
\end{minipage}
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{mnist_test.jpg}
\end{minipage}%
\begin{minipage}{.5\textwidth}
\centering
\includegraphics[width=0.72\linewidth]{group_mnist_test.jpg}
\end{minipage}
\caption{training losses (top) and test accuracies (bottom) for the Fashion-MNIST data with single (left)/five (right) target image/images.
Targeted deep learning yields faster and more accurate training than standard deep learning.}
\label{fig:mnist}
\end{figure}
\paragraph{Results}
Figures~\ref{fig:parkinsons}--\ref{fig:mnist} contain the results for individual targets and groups of $g=5$ targets.
The solid lines depict the means over 20~splits into training and testing samples and, where it makes sense,
the corresponding interquartile ranges as measures of uncertainty (recall that the means can be outside of the interquartile ranges).
Across all applications, targeted deep learning renders the parameter training faster (indeed, the training and testing curves converge faster) and more accurate (testing curves converge to a higher level).
\paragraph{Conclusions}
The results indicate that our framework and methods can gear network training very effectively to given target inputs.
Of course,
targeted deep learning only makes sense if the goal is indeed to optimize deep learning for certain inputs;
for other goals, unsupervised learning, multimodal learning, personalized deep learning, or just standard deep learning might be more appropriate.
More generally speaking,
it is important to select a suitable framework for a given task,
and targeted deep learning is a valuable addition to the set of possible frameworks.
\section{Discussion}
\label{sec:discussion}
Targeted deep learning can improve on standard deep learning when the goal is prediction or classification for target covariates that are known beforehand.
It can be implemented very easily either by adjusting the sampling strategy of SGD or by resampling the training data.
Targeted deep learning is not restricted to specific types of architecture;
for example, it applies to convolutional neural networks equally well as to fully-connected neural networks.
Our approach already works very well with the simple correlation-based similarity measure discussed in Section~\ref{sec:methods}.
Similar measures have been used in personalized deep learning~\cite{9078031}
and, of course, are common in statistics and machine learning more generally.
But our approach could also incorporate any other similarity measure,
including deep learning-based measures from the personalized deep learning literature~\cite{8217759,che_xiao_liang_jin_zho_wang_2017}.
Targeted deep learning does not replace other paradigms,
such as minimizing overall generalization errors or personalized deep learning,
because it is sharply focused on situations where the target covariates are known beforehand.
But if this is the case,
targeted deep learning can make a substantial difference,
leading, for example, to faster and more accurate predictions in medicine and engineering.
\ifarXiv
\paragraph{Acknowledgments}
We thank Mike Laszkiewicz, Yannick D\"uren, and Mahsa Taheri for their insightful suggestions.
\else
\pagebreak
\fi
\bibliographystyle{plain}
Two methods have been responsible for the vast majority of extrasolar planet discoveries. Radial velocity observations are used to determine the planetary mass from the amplitude of the shift in spectral lines due to the planet's gravitational pull on the star, while transit photometry gives the planetary radius based on the amount of light blocked as the planet transits across the host star. Both of these methods, however, are vulnerable to unresolved light from nearby stars, whether the contaminating star is a bound companion or a chance alignment. When close companions fall within the same spectral slit or photometric aperture as the target, the resulting blend distorts the derived planetary parameters and sometimes creates false positive signals.
Accounting for nearby stars is particularly important for transiting planets that lack corresponding radial velocity measurements and hence have no confirmation of planetary mass. This paper focuses on the transiting planet candidates identified by the \emph{Kepler} space mission, currently numbering over 2300 \citep{Batalha2012}. Many of these objects do not have mass estimates from radial velocity measurements because of the amount of observing time required, particularly for small planets around relatively faint stars. Current radial velocity instruments cannot detect planets around the size of the Earth, and thus are not useful for confirming objects of this size that have recently been found with transit photometry \citep[e.g.][]{Fressin2012a}. Another method is needed to confirm these types of planets.
The best way to account for nearby stars is provided by high-resolution imaging. A variety of methods have been used to examine planet-hosting stars, such as lucky imaging \citep[e.g.][]{Daemgen2009}, speckle imaging \citep[e.g.][]{Howell2011}, and adaptive optics (this work). High-resolution images are used to (1) rule out false positive scenarios caused by background blends \citep[e.g.][]{Hartman2011a}, (2) estimate the dilution of the transit light curve caused by additional faint stars in the aperture \citep[e.g.][]{Buchhave2011,Demory2011}, and (3) confirm statistically that a candidate is probably a planet even without radial velocity measurements, by calculating the likelihood of all false positive scenarios using the BLENDER method \citep[e.g.][]{Torres2011, Fressin2011, Ballard2011, Fressin2012a}.
\subsection{False positives}
For transiting planet detections, the main source of false positives is a blend of the target star with an eclipsing binary star. The \emph{Kepler} team searches for many signatures of such blends, including examining the light curves for V-shapes, discrepancies between odd and even transits, and signs of secondary occultations that are too deep to be planetary \citep{Batalha2010}. In addition, the center of light for each target is tracked in and out of transit. If the centroid shifts significantly, this indicates that the source of the transit signal is around a different star. At a minimum, the derived planetary parameters must be completely reevaluated, sometimes resulting in a false positive \citep{Jenkins2010}. This extensive vetting effort is partly why the candidates announced by the \emph{Kepler} team are thought to have a low false positive rate, variously estimated from $5-20\%$ \citep{Borucki2011, Morton2011, Wolfgang2011}. Finding which of the announced candidates are actually false positives requires extensive follow-up efforts, including ground- and space-based high-resolution imaging.
For the purposes of this paper, a false positive is defined as a transit signal that is not produced by a planetary-mass object around the proposed target star. Some false positives may actually be larger planets around a fainter star that is blended with the brighter target star.
\subsection{Dilution corrections}
Additional stars in the photometric aperture will dilute the transit signal and distort the measured planetary radius. The amount of dilution depends on the brightness of the background star and how much of its light falls in the aperture. A \emph{Kepler} pixel is about 4\arcsec\ on a side, while a typical \emph{Kepler} aperture is 12\arcsec\ across, and existing catalogs typically do not contain companions more than a few magnitudes fainter or closer than a few arcseconds. Thus, it is critical to probe as close to the star as possible to find nearby companions that might challenge the identity and parameters of the star hosting the transit signature.
It is particularly useful to obtain dilution corrections in more than one wavelength. An important component of the \emph{Kepler} follow-up is transit observations with the near-infrared \emph{Spitzer} satellite. Most false-positive blend scenarios produce a color-dependent transit-like signature. Thus, an object is more likely to be a planet if it has the same depth at both the \emph{Kepler} bandpass and, for instance, at 4.5 $\mu m$ with \emph{Spitzer}. If the depth of the transit does vary with wavelength, however, it is vital to have resolved images of nearby stellar companions to test whether the depth variation is more consistent with a false positive or with a genuine planetary transit that is being diluted by the companion star(s).
\subsection{Planetary validation}
If no companions are detected with direct imaging, we can place strict limits on any remaining companions using BLENDER. The BLENDER method has been developed to combine the constraints on false positive scenarios placed by all available radial velocity, photometric, and imaging measurements \citep{Torres2011}. Strong limits on the allowed magnitude difference of undetected stars within a few tenths of an arcsecond are particularly useful. The probability of a false positive scenario scales directly with the area in which a background blend can exist within the limits placed by AO imaging. If the combined probability of any allowed background blend is much less than the planet occurrence frequency, the candidate is said to be statistically validated.
Validation is particularly critical for the smallest objects, which cannot be confirmed using radial velocity techniques. Several planets from 1-2 $R_E$ have been statistically validated using the BLENDER method, including Kepler-9d \citep{Torres2011}, Kepler-10c \citep{Fressin2011}, Kepler-19b \citep{Ballard2011}, Kepler-22b \citep{Borucki2012}, and Kepler-20e and f \citep{Fressin2012a}.
This paper reports on observations of 90 candidate planetary objects, or Kepler Objects of Interest (KOIs). The observations are described in Section~\ref{section:obs}. The limits placed on observable companions are reported in Section~\ref{section:limits}. A list of all companion stars within 6\arcsec\ of the targets is reported in Section~\ref{section:freq}, along with a discussion of the frequency and implications of the companion stars.
\section{Observations and data analysis}
\label{section:obs}
All observations were made during the 2009 and 2010 seasons (roughly May to November). Data were obtained over four nights on the MMT and seven nights on Palomar, as listed in Table~\ref{table:obs}, with observations of 90 unique KOIs.
When possible, objects were observed in both $J$ and $Ks$ bands; many of these objects have also been observed using speckle imaging in optical wavelengths \citep{Howell2011}. Observations in multiple wavelengths are particularly useful for estimating the spectral type of any detected companions, and for converting the observed delta-magnitude into \emph{Kepler's} broad optical band ($Kp$). All observations in this paper are relative photometry, so the absolute $J$ and $Ks$ magnitudes are found by adding the 2MASS catalog magnitude to the relative magnitudes observed by ARIES and PHARO. The resulting absolute magnitudes are converted to $Kp$-band magnitudes using the fifth-order polynomial fits for dwarf stars presented in Appendix A of \citet{Howell2012}:
\begin{equation}
Kp-J =\begin{cases}
\begin{split}
-398.04666+149.08127J\\
- 21.952130 J^2+1.5968619 J^3\\
- 0.057478947 J^4+0.00082033223J^5,
\end{split}
& J \le 16.7, \\
0.1918 + 0.08156J, & J > 16.7
\end{cases}
\label{eqnJ}
\end{equation}
and
\begin{equation}
Kp-Ks =\begin{cases}
\begin{split}
-643.05169+246.00603\emph{Ks}\\
-37.136501\emph{Ks}^2 +2.7802622\emph{Ks}^3\\
- 0.10349091\emph{Ks}^4 +0.0015364343\emph{Ks}^5,
\end{split}
& Ks \le 15.4, \\
-2.7284 + 0.3311\emph{Ks}, & Ks > 15.4.
\end{cases}
\label{eqnKs}
\end{equation}
The single-filter conversions in Equations~\ref{eqnJ} and \ref{eqnKs} yield $Kp$ magnitude estimates that are accurate to approximately $0.6$-$0.8$ mag, and are used for all magnitude limit calculations. For objects detected in both $J$ and $Ks$, a better estimate of the $Kp$ magnitude can be obtained by using the dual-filter conversions, which yield $Kp$ magnitudes accurate to about $0.05$ mag \citep{Howell2012}.
\begin{equation}
Kp - Ks = \begin{cases}
\begin{split}
0.314377 + 3.85667x\\
+ 3.176111x^2 - 25.3126x^3 \\
+ 40.7221x^4 - 19.2112x^5,
\end{split}
& \mbox{Dwarfs}, \\
\begin{split}
0.42443603 + 3.7937617x\\
- 2.3267277x^2 + 1.4602553x^3,
\end{split}
&\mbox{Giants}
\end{cases}
\label{eqnBoth}
\end{equation}
where $x = J - \emph{Ks}$.
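In code, these conversions are straightforward to apply; the following Python sketch implements Eq.~(\ref{eqnJ}) and the dual-filter conversion of Eq.~(\ref{eqnBoth}) (Eq.~\ref{eqnKs} is analogous; the function names are ours):
\begin{verbatim}
import numpy as np

def _poly(x, coeffs):
    # evaluate sum_i coeffs[i] * x**i
    return sum(c * x**i for i, c in enumerate(coeffs))

def kp_from_j(J):
    # Single-filter estimate (Eq. 1), accurate to ~0.6-0.8 mag.
    if J <= 16.7:
        return J + _poly(J, [-398.04666, 149.08127, -21.952130,
                             1.5968619, -0.057478947, 0.00082033223])
    return J + 0.1918 + 0.08156 * J

def kp_from_j_ks(J, Ks, dwarf=True):
    # Dual-filter estimate (Eq. 3), accurate to ~0.05 mag.
    x = J - Ks
    if dwarf:
        return Ks + _poly(x, [0.314377, 3.85667, 3.176111,
                              -25.3126, 40.7221, -19.2112])
    return Ks + _poly(x, [0.42443603, 3.7937617, -2.3267277, 1.4602553])
\end{verbatim}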
\subsection{ARIES}
Observations with ARIES were taken on four nights between Nov 2009 and Sep 2010. The Arizona Infrared imager and Echelle Spectrograph (ARIES) on the 6.5m MMT telescope can provide diffraction-limited imaging in the $J$ and $Ks$ bands \citep{McCarthy1998}. ARIES is fed by the adaptive secondary AO system. Most KOIs were imaged in the f/30 mode, with a field of view of 20\arcsec\ $\times$ 20\arcsec\ and a resolution of 0.02085\arcsec\ per pixel. In poor seeing, the f/15 mode is used, with a field of view of 40\arcsec\ $\times$ 40\arcsec\ and a resolution of 0.0417\arcsec\ per pixel. The adaptive optics system in all cases guided on the primary target. The median width of the central cores of the point spread functions was $0.25\arcsec$ at $J$ and $0.14\arcsec$ at $Ks$, with a best resolution of 0.1\arcsec\ at J and 0.09\arcsec\ at Ks. Under good conditions in May 2010 (uncorrected seeing of 0.5\arcsec\ at Ks), the Strehl ratios were measured at 0.3 in Ks and 0.05 in J.
For each KOI, at least one set of 16 images on a 4-point dither pattern was observed in both $J$ and $Ks$. In May and Sep 2010, a random jitter was added to the dither position, which had steps of 2 or 4\arcsec. Integration times varied from 0.9 to 30 s depending on stellar magnitude; in some cases, more than one set of 16 images was taken. The images for each filter were calibrated using standard IRAF procedures\footnote{http://iraf.noao.edu/}, and combined and sky-subtracted using the \emph{xmosaic} function in the \emph{xdimsum} package. The images taken in 2009-2010 have a slight differential rotation in them, which was too small to require correction near the target, but causes stars near the edges of the field to be smeared out when stacked. The orientations of the fields are estimated from the dither pattern, and are only accurate to within a few degrees.
\subsection{PHARO}
Near-infrared adaptive optics imaging was obtained on the nights of
07-10 Sep 2009 and 01-03 July 2010 UT with the Palomar Hale 200-inch
telescope and the PHARO near-infrared camera \citep{Hayward2001} behind
the Palomar adaptive optics system \citep{Troy2000}. PHARO, a
$1024\times1024$ HgCdTe infrared array, was used in 0.0251\arcsec\ pixel$^{-1}$
mode yielding a field of view of $25\arcsec$. The KOIs observed
in 2009 were imaged only in the $J$ filter ($\lambda_0 = 1.25~\mu$m)
while the KOIs observed in 2010 were imaged in both the $J$ and
$K_{s}$ ($\lambda_0 = 2.145~\mu$m) filters. All the data were
collected in a standard 5-point quincunx dither pattern of 5\arcsec\
steps interlaced with an off-source ($60\arcsec$) sky dither
pattern. Individual integrations times varied depending on the brightness of
the KOIs, from 1.4 to 69 s, and were aimed at detecting sources 9 magnitudes fainter than
the target in $J$ and 8 magnitudes fainter in $Ks$ ($5\sigma$). The
individual frames were reduced with a custom set of IDL routines written
for the PHARO camera and were combined into a single final image. In
all cases, the adaptive optics system guided on the primary target
itself. The median width of the central cores of the point spread functions was 0.08\arcsec\ at $J$ and 0.1\arcsec\ at $Ks$, with a best resolution of 0.05\arcsec\ at J and 0.09\arcsec\ at Ks. The Strehl ratio for good images is 0.1-0.15 in $J$ and 0.35-0.5 in $Ks$.
\section{Detection limits}
\label{section:limits}
All objects were identified by manual inspection, which was more efficient at weeding out spurious signals and artifacts than automatic detection methods. The magnitude of a companion was estimated with the IRAF routine \emph{phot}, using a 5 pixel aperture (large enough to capture most of the point spread function, or PSF, without also including light from all but the closest companions). In a few cases, PSF fitting was used on very close companions (such as K00098, separation=0.3\arcsec).
Limits on undetected stars are estimated as follows. A series of concentric annuli are drawn around the star, and the standard deviation of the background counts is calculated for each annulus. A star is considered detectable if its peak signal is more than 5 times the standard deviation above the background level. The magnitude of this star is reported as the detection limit at the distance of the center of the annulus. Limits are reported for distances from 0.1-4\arcsec\ in Table~\ref{table:limits}. The 4\arcsec\ level can also be applied toward more distant objects.
The innermost detectable object is a function of the observed PSF of the target star. The best FWHM achieved for targets in $J$ band was 0.1\arcsec\ for ARIES and 0.05\arcsec\ for PHARO, while both instruments reached 0.09\arcsec\ in $Ks$. However, poor weather and problems with the AO systems often caused excursions well above that level. The magnitude limits for each KOI are shown in Table~\ref{table:limits}. The limits on a few examples are shown in Figure~\ref{fig:complimits}, along with a scatter plot of the companions detected near all targets.
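A sketch of the annulus procedure in Python is given below; the annulus width and the use of the target's peak pixel value as the reference brightness are simplifying assumptions of this illustration, not details of the actual reduction:
\begin{verbatim}
import numpy as np

def delta_mag_limits(image, center, radii, dr=2.0):
    # 5-sigma detection limits, in magnitudes relative to the target,
    # in concentric annuli of width dr (pixels) centered on `center`
    # (x, y); `radii` holds the annulus center distances in pixels.
    yy, xx = np.indices(image.shape)
    r = np.hypot(xx - center[0], yy - center[1])
    peak = image[int(round(center[1])), int(round(center[0]))]
    limits = []
    for r0 in radii:
        sigma = np.std(image[(r > r0 - dr / 2) & (r <= r0 + dr / 2)])
        limits.append(2.5 * np.log10(peak / (5.0 * sigma)))
    return limits
\end{verbatim}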
\section{Frequency and implications of companions}
\label{section:freq}
Additional faint stars are common near \emph{Kepler} targets. Over half (53/90, or nearly 60\%) of the targets imaged have at least one companion within 6\arcsec. All of the stars with companions are listed in Table~\ref{table:stats}, while a list of the relative magnitudes, distances and position angles is in Table~\ref{table:compstars}. Many of these objects are very faint (down to 10 magnitudes fainter in $Ks$, and typically even fainter visible magnitudes), and so have little dilution effect on the \emph{Kepler} light curves. Being able to say for certain that there are no brighter objects present lends confidence to the stated planetary parameters.
Close companions, within 2\arcsec, are of particular concern, since they are within the same \emph{Kepler} pixel as the target, and may not produce a detectable centroid shift. Of the objects presented here, 20\% of objects imaged have at least one companion within 2\arcsec, and 7\% have one within 0.5\arcsec. The images of twelve KOIs with detected companions between 0.5-2\arcsec\ are shown in Figure~\ref{fig:comps}, while six objects with companions closer than 0.5\arcsec\ are shown in Figure~\ref{fig:compsZoom}.
This subset of planetary candidates with close stellar companions should be carefully examined for false positives; the list in Table~\ref{table:compstars} in fact contains several objects that have since been identified as likely false positives by the \emph{Kepler} team, including K00068, K00076, K00088, K00264, and K00266.
On the other hand, several transiting candidates around stars with close stellar companions have been confirmed to be planetary. Knowledge of the additional star has been incorporated into the derivation of the correct planetary parameters. Several examples of confirmed planetary systems are K00098 \citep[aka Kepler-14,][]{Buchhave2011}, K00097 \citep[aka Kepler-7,][]{Demory2011}, and K00975 \citep[aka Kepler-21,][]{Howell2012}. For some objects, such as K00097, the corrections were relatively minor, at the level of a few percent. However, for K00098, the companion was within 0.3\arcsec\ and only a few tenths of a magnitude fainter, and the dilution corrections were substantial: the radius increased by 10\%, the mass by 60\% (since the stellar spectra were also blended), and the density changed by 25\% \citep{Buchhave2011}. Without high resolution images, we would have had a very inaccurate picture of this planet.
Not surprisingly, given the generally red colors of faint objects, more apparent binaries occur in the infrared than in the visible. High-resolution optical speckle imaging by \citet{Howell2011} found that only 10/156 stars, or 6.4\%, had companions within 2\arcsec\ and down to 4 magnitudes fainter. (No attempts have been made to correct for the selection biases in the objects that were selected for follow-up observations.)
It is unknown which, if any, of the detected companions are bound to their targets. At present, we lack the time baseline needed to detect proper motion for any of the closest companions. Two statistical arguments suggest that many of the closest stars are likely to be bound. The first is to note that if all of the companions detected were unconnected background or foreground objects, then we would expect to find 9 times as many objects within 6\arcsec\ as within 2\arcsec. However, we actually find only 3 times as many objects (53 vs. 18), and the ratio would be smaller if it included the closest objects that are missed because they are within the stellar PSF.
The second statistical argument is to compare the Galactic latitudes of KOIs with detected companions within 2 and 6\arcsec, as shown in Figure~\ref{fig:lat}. The AO targets with companions within 6\arcsec\ are somewhat more likely to appear at low Galactic latitudes, indicating that some of them are likely background blends, but no such correlation can be seen with the (admittedly small) sample of close companions (within 2\arcsec). Thus many of the closest objects may be part of physical binary systems, but further observations are required to determine which ones they may be.
\section{Conclusions}
\label{section:ogle56conclusions}
High-resolution, adaptive optics images of 90 \emph{Kepler} planetary candidates have been obtained. A list of all companions within 6\arcsec\ is provided, with measured magnitudes in $J$ and/or $Ks$ band and calculated \emph{Kepler} magnitudes. Limits on additional companions from 0.1 to 4\arcsec\ are also given for each target, and can be used to calculate the probability of a remaining undetected blend.
Roughly 20\% of the objects imaged have at least one companion within 2\arcsec, and 7\% have one within 0.5\arcsec. Over half have a companion within 6\arcsec. Although small number statistics apply, the objects within 2\arcsec\ appear uncorrelated with Galactic latitude, making it more likely that they represent physically bound (though still distant) companion stars.
Even if false positive blends can be ruled out, corrections to the planetary parameters based on nearby stars can range from a few to tens of percent, making high-resolution images an important tool for understanding the true sizes of other discovered worlds.
\acknowledgements
With the pioneering observation of the ``wave of translation'' made by John S. Russell in 1834 on the Union Canal in Scotland, the field of non-linear wave phenomena was born. Russell continued his investigations in water channel experiments by triggering waves with translating vertical plates placed in the water \cite{Russel1838,Russel1845}. With his experiments he was able to determine some properties of the single (``solo'', or, as later called, ``solitary'') waves, e.g., that they are stable features that can travel a long distance, and that the speed of the wave depends on its amplitude and on the water depth. He also found that two waves do not disturb each other, so that, e.g., they can overtake each other.
Due to the lack of a proper theoretical description of non-linear wave phenomena the field did not experience much progress for several decades. This changed with the derivation of the Korteweg--de Vries (KdV) equation in 1896 \cite{Korteweg1896}, which is the simplest equation embodying non-linearity and dispersion. Despite its apparent simplicity it is very rich and has a large variety of solutions, including spatially localized solitary waves and periodic cnoidal waves. In the meantime, solitons have been identified in many different fields beyond hydrodynamics, including optics (optical fibers and non-linear media) \cite{Hasegawa73}, magnetism \cite{Kosevich1998}, nuclear physics \cite{Iwata19}, and Bose-Einstein condensates \cite{Frantzeskakis10}.
Solitons are salient non-linear features in plasmas too. A review of the early experimental findings of ion-acoustic solitons was published in \cite{Tran79}. The combination of large family of systems we call plasmas and the richness of the KdV equation and its variants provides seemingly unlimited possibilities for theoretical investigations. Just to mention some of the most recent ones, the collision properties of overtaking small-amplitude super-solitons, as well as solitons with opposite polarities in a plasma consisting of cold ions and electrons with two-temperature Boltzmann energy distributions were investigated in \cite{Olivier18,Verheest19}. The effect of relativistic corrections to the electron kinetics on wave propagation was discussed in \cite{EL-Shamy19}. Ion acoustic solitons in multi-ion plasmas have been analyzed in \cite{Ur-Rehman19,Alam19}. Multiple soliton solutions have been presented in (3+1) dimensions in \cite{Wazwaz15}. The effect of magnetic field acting on the dust particle motion was taken into account in \cite{Saini16,Atteya18,Yahia19}, while solitary waves and rogue waves in a plasma with non-thermal electrons featuring Tsallis distribution have been derived in \cite{Wang13}. Bending of solitons in weak and slowly varying inhomogeneous plasmas was shown in \cite{Mukherjee15} and the application of solitary waves for particle acceleration has been discussed in \cite{Ishihara18}.
More closely related to the topic of this paper, in the field of strongly coupled dusty plasma research solitons became of high interest in the last decade after the pioneering experiments of Samsonov et al.\ \cite{Samsonov02}. Numerous single layer dusty plasma experiments have followed, providing more data and insight \cite{Nosenko04,Nosenko06,Sheridan08}. The connection between wave amplitude, width and propagation velocity has been explored in \cite{Bandyopadhyay08}, the existence of dissipative dark (rarefactive) solitons has been reported in \cite{Heidemann09,Zhdanov10}. Experiments on the collision of solitons were presented in \cite{Boruah15}. Experimental observations on the modifications in the propagation characteristics of precursor solitons due to the different shapes and sizes of obstacles over which the dust fluid had to flow were presented in \cite{Arora19}. In three-dimensional dusty plasmas under micro-gravity conditions solitons could be launched as well, as reported in \cite{Usachev14}. Solitary waves in one-dimensional dust particle chains were studied in \cite{Sheridan17}.
Supporting and extending experimental possibilities concerning solitons in dusty plasma, theoretical studies focused on the effects of charge-varying dusty plasma with nonthermal electrons \cite{Berbri09}, of an external magnetic field \cite{Nouri12}, of the external periodic perturbations and damping \cite{Chatterjee18}, as well as of the presence of dust particles with opposite polarities \cite{Rahman18}. The effects of dust--ion collision \cite{Paul19}, and the possibility of cylindrical solitary waves \cite{Gao19} were also addressed.
Numerical simulations have also become very useful for exploring the physics of strongly coupled dusty plasmas. The most widely used approach has been the molecular dynamics (MD) method, which solves the equations of motion of the particles with forces originating from the specified pairwise inter-particle potential and from any external forces, if present. In many settings, the validity of the Newtonian equations of motion can be assumed. In the case of an appreciable interaction between the particles and the embedding gaseous system, the Langevin simulation approach \cite{Langevin} can be adopted. Recent molecular dynamics simulations have shown the presence of solitary waves and their compatibility with the predictions of the KdV equation \cite{Avinash03,Kumar17}, the possibility of excitation of solitons with moving charged objects \cite{Tiwari16}, and the presence of rarefactive solitons \cite{Tiwari15}.
Since the first laboratory plasma crystal experiments in 1994 \cite{Chu94,Thomas94,Melzer94} strongly coupled two-dimensional dusty plasmas and the corresponding single layer Yukawa one-component plasma model became popular model systems for the investigation of various structural, transport, and dynamical properties of many-body systems. These systems provide the unique possibility to observe collective and many-body phenomena on the microscopic level of individual particles. Most recent works on transport processes include studies of the effect of external magnetic fields on the different transport parameters \cite{Baalrud17,Feng17,Hartmann19} and testing different thermal conductivity models with equilibrium molecular dynamics simulation \cite{Scheiner19}. Related to waves and instabilities, the coupling of non-crossing wave modes has been addressed in \cite{Meyer17}, spiral waves in driven strongly coupled Yukawa systems were analysed in \cite{Kumar18}, the effect of periodic substrates were studied in \cite{Li18,Wong18}, thermoacoustic instabilities in the liquid phase were shown in \cite{Kryuchkov18}, and microscopic acoustic wave turbulence was analyzed in \cite{Hu19}. Studies on structural phase transitions included freezing \cite{Hartmann10,Su12} and melting \cite{Petrov2015,Jaiswal_2019}.
Dusty plasmas also provide an easily accessible playground for testing new theoretical approaches and concepts. E.g., the shear modulus for the solid phase was obtained from the viscoelasticity in the liquid phase \cite{Wang19}, a survival-function analysis was performed to identify multiple timescales in strongly coupled dusty plasma \cite{Wong18}, and the applicability of the configurational temperature in dusty plasmas was investigated \cite{Himpel19}. In Ref.~\cite{Choi19} high-precision molecular dynamics results were used for testing theoretical models of ion-acoustic wave-dispersion relations and related quantities. Studies at the mesoscopic scale of finite dust particle clusters are probably at the most fundamental level, as the contribution of every individual particle is significant. Most recently, amplitude instability, phase transitions, and dynamic properties were studied \cite{Lisina19}. In addition to fundamental many-body physics, dusty plasmas provide a very sensitive tool for the detailed investigation of mutual plasma--surface interactions. Recent studies include the investigations of the effect of external fields on the local plasma properties around a dust particle \cite{Sukhinin17}, and measurements of the sputtering rate of the solid dust surface with nanometer resolution \cite{Hartmann17}.
In this work, we present molecular dynamics investigations of the propagation of solitons in a two-dimensional strongly coupled many-body system, characterized by the Yukawa pair potential. The solitons are created by electric field pulses. Their propagation and their collisions are traced at various system parameters. The effect of an external magnetic field is also addressed. In Sec.~\ref{sec:model} we describe our simulation technique and the generation and characterization of the solitons. In Sec.~\ref{sec:results} we report the results of our studies, while in Sec.~\ref{sec:summary} a brief summary of our findings is given.
\section{Simulation technique}
\label{sec:model}
The system that we investigate consists of $N$ particles that have the same charge, $Q$, and mass, $m$, and reside within a square computational box, to which we apply periodic boundary conditions. The edge length of the box is $L$, which results in a surface density of the particles $n = N / L^2$ and a Wigner--Seitz radius of $a = (\pi n)^{-1/2}$. The particles interact via the screened Coulomb (Debye-H\"uckel, or Yukawa) potential
\begin{equation}
\phi(r) = \frac{Q}{4 \pi \varepsilon_0}~\frac{\exp(-r / \lambda_{\rm D})}{r},
\label{eq:pot}
\end{equation}
where $\lambda_{\rm D}$ is the screening (Debye) length, which depends on the characteristics of the plasma environment (densities and temperatures of the electrons and ions) that embeds the dust system. We account for the presence of the plasma environment solely by taking into account its screening effect, i.e., we assume that the friction force that originates from this plasma environment is negligible. Correspondingly, we use the Newtonian equation of motion to follow the trajectories of the particles ($i=1,2,\dots,N$),
\begin{equation}
m \ddot{\bf r}_i = {\bf F}_i = Q\,{\bf v}_i \times {\bf B} + \sum_{j \neq i} {\bf F}_{ij}(r_{ij}) + Q\,{\bf E}^\ast,
\label{eq:eom}
\end{equation}
where ${\bf B}$ is an external magnetic field that is perpendicular to the simulation plane, ${\bf F}_{ij}(r_{ij})$ is the force that acts on particle $i$ due to its interaction with particle $j$ situated at a distance $r_{ij}$, while the last term represents the force originating from the electric field ${\bf E}^\ast$ that is used to create the density perturbations. Our studies cover both $\textbf{B}=0$ and $\textbf{B} \neq 0$ cases. The characteristics of ${\bf E}^\ast$ will be defined below.
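For illustration, the pair force entering eq.~(\ref{eq:eom}) follows from eq.~(\ref{eq:pot}) as ${\bf F}_{ij}=\frac{Q^2}{4\pi\varepsilon_0}\,e^{-r_{ij}/\lambda_{\rm D}}\left(\frac{1}{r_{ij}^2}+\frac{1}{r_{ij}\lambda_{\rm D}}\right)\hat{\bf r}_{ij}$, with $\hat{\bf r}_{ij}$ pointing from particle $j$ towards particle $i$. A minimal sketch of this force evaluation, in reduced units where $Q^2/4\pi\varepsilon_0=1$ and $a=1$ (an illustration only, not our production code):
\begin{verbatim}
import numpy as np

def yukawa_force(r_vec, kappa):
    # Force on particle i due to particle j for phi(r) = exp(-kappa r)/r,
    # in reduced units; r_vec = r_i - r_j.
    r = np.linalg.norm(r_vec)
    # F = -Q grad(phi)  ->  magnitude exp(-kappa r) (1/r^2 + kappa/r)
    magnitude = np.exp(-kappa * r) * (1.0 / r**2 + kappa / r)
    return magnitude * r_vec / r
\end{verbatim}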
The summation for the inter-particle forces is carried out within a domain limited by a cutoff distance, $r_{ij} \leq r_{\rm c}$. The exponential decay of the pair potential allows us to assume that force contributions from particles at $r_{ij} > r_{\rm c}$ are negligible. $r_{\rm c}$ is set in a way to ensure that $F(r_{\rm c}) / F(2a) \sim 10^{-5}$ (recall that the nearest neighbour distance is $\approx 2a$). Finding the particles that give a contribution to the force acting on a given particle is aided by the chaining mesh technique. The resolution of the chaining mesh is set to be equal to $r_{\rm c}$.
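A minimal sketch of the chaining-mesh bookkeeping (the function names and the explicit $3\times3$ cell scan are illustrative assumptions, not the actual implementation):
\begin{verbatim}
import numpy as np

def build_chaining_mesh(pos, L, r_cut):
    # Sort particle indices into square cells of edge >= r_cut, so that
    # all partners within the cutoff lie in the own or adjacent cells.
    n_cells = max(int(L // r_cut), 1)
    size = L / n_cells
    cells = {}
    for i, (x, y) in enumerate(pos):
        key = (int(x / size) % n_cells, int(y / size) % n_cells)
        cells.setdefault(key, []).append(i)
    return cells, n_cells, size

def candidates(i, pos, cells, n_cells, size):
    # Possible interaction partners of particle i: the 3 x 3 block of
    # cells around its own cell, with periodic wrap-around.
    cx = int(pos[i][0] / size) % n_cells
    cy = int(pos[i][1] / size) % n_cells
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for j in cells.get(((cx + dx) % n_cells, (cy + dy) % n_cells), []):
                if j != i:
                    yield j
\end{verbatim}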
In the cases when no magnetic field is applied, the equations of motion are integrated using the Velocity-Verlet scheme with a time step $\Delta t$ that provides a good resolution and accuracy over the time scale of the inverse plasma frequency $\omega_{\rm p}^{-1} = (n Q^2 / 2 \varepsilon_0 m a)^{-1/2}$ (by setting $\omega_{\rm p} \Delta t = 1 /30$). In the presence of magnetic field, the integration of the equations of motion is based on the method described in \cite{SPREITER1999102}.
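For the ${\bf B}=0$ case one integration step can be sketched as follows (the \texttt{total\_force} callback, which would sum the pair forces within the cutoff plus any applied $Q{\bf E}^\ast$, is a placeholder):
\begin{verbatim}
def velocity_verlet_step(pos, vel, force, total_force, dt, m=1.0, L=1.0):
    # One Velocity-Verlet step; pos, vel, force are (N, 2) numpy arrays.
    vel_half = vel + 0.5 * dt * force / m   # half-step velocity update
    pos_new = (pos + dt * vel_half) % L     # drift + periodic wrap-around
    force_new = total_force(pos_new)        # forces at the new positions
    vel_new = vel_half + 0.5 * dt * force_new / m
    return pos_new, vel_new, force_new
\end{verbatim}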
The system is characterised by three dimensionless parameters: the coupling parameter
\begin{equation}
\Gamma = \frac{Q^2}{4 \pi \varepsilon_0}~\frac{1}{a k_{\rm B} T},
\end{equation}
where $T$ is the temperature, the screening parameter
\begin{equation}
\kappa = a / \lambda_{\rm D},
\end{equation}
and the normalised magnetic field strength
\begin{equation}
\beta = \Omega_{\rm c} / \omega_{\rm p},
\end{equation}
where $\Omega_{\rm c} = QB/m$ is the cyclotron frequency.
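In reduced units (lengths measured in $a$, times in $\omega_{\rm p}^{-1}$, energies in $Q^2/4\pi\varepsilon_0 a$, and $k_{\rm B}=m=1$) these three numbers fix all remaining simulation inputs; a hypothetical helper illustrating the mapping:
\begin{verbatim}
def reduced_inputs(Gamma, kappa, beta):
    # Gamma = 1 / (a kT), kappa = a / lambda_D, beta = Omega_c / omega_p
    kT = 1.0 / Gamma              # temperature
    debye_length = 1.0 / kappa    # lambda_D in units of a
    cyclotron_freq = beta         # Omega_c in units of omega_p
    return kT, debye_length, cyclotron_freq
\end{verbatim}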
Upon the initialisation of the simulations the particles are placed at random positions within the simulation box and their velocities are sampled from a Maxwell-Boltzmann distribution that corresponds to the temperature defined by $\Gamma$. During the first 20\,000 time steps the system is equilibrated by re-scaling the velocities of the particles in each time step to match the desired system temperature. As this type of 'thermalisation' produces a non-Maxwellian velocity distribution, no measurements on the system are taken during this initial phase of the simulation. This phase is followed by a 'free run' period (consisting of 10\,000 time steps), during which the system is no longer thermostated and is allowed to equilibrate due to the interaction between the particles.
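A minimal sketch of the re-scaling step (2D, reduced units with $m=k_{\rm B}=1$; array names are illustrative):
\begin{verbatim}
import numpy as np

def rescale_velocities(vel, T_target):
    # Kinetic temperature in 2D with m = k_B = 1: T = <vx^2 + vy^2> / 2.
    T_kin = np.mean(np.sum(vel**2, axis=1)) / 2.0
    return vel * np.sqrt(T_target / T_kin)
\end{verbatim}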
Following this phase, solitons in the system are created by applying an electric field pulse with a duration $\tau$ such that $\omega_{\rm p}\,\tau = 1/6$ (i.e., five integration time steps) and having the spatial form
\begin{equation}
E^\ast(x) = \widetilde{E}_0 \, \frac{Q}{4 \pi \varepsilon_0 a^2} \exp \biggl[-\frac {(x - x_0)^2}{2 w^2} \biggr],
\label{eq:field}
\end{equation}
where $x_0$ is the position where the soliton is to be generated, and $w$ the width of the perturbation region. We set this value to $w = 0.01 L$. The pulse is spatially homogeneous in the $y$ direction, i.e. particles with a given $x$ coordinate experience the same force regardless of their $y$ coordinates. $\widetilde{E}_0$ in eq.\,(\ref{eq:field}) is a dimensionless scaling factor that controls the strength of the perturbation: the factor $Q / (4 \pi \varepsilon_0 a^2)$ ensures that at $\widetilde{E}_0 = 1$ the peak value of the perturbing force acting on a particle at $x_0$, $E^\ast Q$, equals the Coulomb force between two particles separated by a distance $r=a$.
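In reduced units the peak force on a particle at $x_0$ is then simply $\widetilde{E}_0$; a sketch of the corresponding force evaluation (homogeneous along $y$, Gaussian along $x$; illustrative only):
\begin{verbatim}
import numpy as np

def pulse_force(pos, E0_tilde, x0, w):
    # x-directed force with the Gaussian profile of the perturbing field;
    # returns an (N, 2) force array, zero in the y direction.
    fx = E0_tilde * np.exp(-(pos[:, 0] - x0)**2 / (2.0 * w**2))
    return np.column_stack([fx, np.zeros(len(pos))])
\end{verbatim}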
\begin{figure}[!h]
\begin{center}
\footnotesize(a)\includegraphics[width=0.4\textwidth]{sol-snapshot-1.png}\\
\footnotesize(b)\includegraphics[width=0.4\textwidth]{sol-snapshot-2.png}\\
\footnotesize(c)\includegraphics[width=0.4\textwidth]{sol-snapshot-3.png}\\
\footnotesize(d)\includegraphics[width=0.4\textwidth]{sol-snap-density.png}
\caption{Snapshots of the system at (a) $\omega_{\rm p} t$ = $-66.7$ (equilibrium phase, before perturbation), (b) $\omega_{\rm p} t$ = 2.67 (right after perturbation), and (c) $\omega_{\rm p} t$ = 66.7 (well after perturbation). The perturbation electric field is applied at the time $t$ = 0 and at the position $x_0 = L/2$ with an amplitude $\widetilde{E}_0$ = 8.3. The red arrows in (c) indicate the direction of propagation of the density fronts. A positive density perturbation propagates in the $+x$ direction, while a negative density perturbation propagates in the $-x$ direction. (d) The density of the particles (averaged over the $y$ direction) at $\omega_{\rm p} t$ = 2.67 and 66.7. The plots in (a-c) show only half of the system in the $y$ direction. $\Gamma=100$ and $\kappa=1$. To visualise the phenomenon studied, the system is perturbed very strongly here. Note the significantly higher propagation velocity of the positive density perturbation.}
\label{fig:small}
\end{center}
\end{figure}
The application of a pulse given by eq.\,(\ref{eq:field}) causes a compression of the particles on the right side and a rarefaction of the particles on the left side of the interaction region of width $w$. The positive density perturbation propagates in the $+x$ direction, while the negative density perturbation propagates in the $-x$ direction, as will be shown later.
The propagation of these structures is traced by recording the time evolution of the spatial density distribution of the particles. To facilitate this, the simulation box is split into $M = 200$ 'stripes' along the $x$ axis and the density of the particles is measured within these stripes in each time step. Density data are saved in every 20th time step for further analysis.
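A sketch of this diagnostic (a \texttt{numpy} histogram is one straightforward realisation):
\begin{verbatim}
import numpy as np

def stripe_density(pos, L, M=200):
    # Particle density (per unit area) in M stripes along the x axis;
    # each stripe has area (L / M) * L.
    counts, _ = np.histogram(pos[:, 0], bins=M, range=(0.0, L))
    return counts / ((L / M) * L)
\end{verbatim}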
The operation of the simulation method is illustrated using a small system consisting of 40\,000 particles. (The results in Sec.~\ref{sec:results} will be given for systems consisting of 4\,000\,000 particles.) For this case we set $\Gamma=100$ and $\kappa=1$, and a very high value of the perturbation field, $\widetilde{E}_0 = 8.3$, which generates density perturbations that can easily be observed by eye in the particle snapshots. The perturbation is applied at time $t=0$ and at the position $x_0 = L/2$.
Figure~\ref{fig:small}(a) shows a snapshot of the particle positions at a time ($\omega_{\rm p} t = - 66.7$) that belongs to the equilibrium phase, before the application of the electric field pulse at $t=0$. Here, we find a homogeneous density distribution of the particles. Panel (b) shows a snapshot right after the perturbing pulse, at $\omega_{\rm p} t = 2.67$. A strong negative/positive density perturbation ($\delta n / n \approx 20$\%) is created left/right from the middle of the simulation box, $x/L=0.5$, as can also be seen in panel (d) of Figure~\ref{fig:small}. These perturbations propagate in opposite directions and acquire specific shapes, see e.g. panel (c), which shows a snapshot of the particle configuration at $\omega_{\rm p} t = 66.7$. At this time, the positive density peak has a sharp leading edge, which is followed by a slow decay of the density. For the negative density peak, on the other hand, we find a slow change of the density at the leading edge and a very sharp trailing edge (also well seen in panel (d)). There is an obvious difference between the propagation velocities of the two structures: the velocity of the positive peak is about three times that of the negative peak. We note that these properties are consequences of the very high degree of perturbation; for most of our studies we use significantly lower amplitudes of the perturbing field, resulting in density perturbations of the order of $\delta n/n \approx$ 1\%.
\section{Results}
\label{sec:results}
The results presented below are derived from simulations that use $N$ = 4\,000\,000 particles with a chaining mesh of $400 \times 400$ cells. The cutoff distance is chosen as $r_{\rm c} = L / 400 \cong 9 \, a$.
The width of the electric field pulse used for the generation of the solitons is set to $w \approx 35\,a$. The perturbation is applied at $x_0=L/2$ unless specified otherwise.
In Sec.~\ref{sec:res1}, simulation results are reported for non-magnetised systems, for different $\Gamma$ and $\kappa$ values, as well as various perturbing electric field strengths, $\widetilde{E}_0$. "Collisions" of two solitons are also investigated. In Sec.~\ref{sec:res2} the effect of an external magnetic field on the propagation of the solitons is studied.
\subsection{Non-magnetized systems}
\label{sec:res1}
Figure~\ref{fig:maps1} shows the density of the particles as a function of normalized space ($x/L$) and time ($\omega_{\rm p}t$) coordinates, for different degrees of perturbation applied at $t=0$ and $x/L=0.5$. At the smallest perturbation amplitude, $\widetilde{E}_0$ = 0.277, we observe that the density changes on the scale of $\sim 1$\%. The propagating positive and negative perturbations of the density show up as red and blue lines on this plot. The slopes of these lines are the same, i.e. both features exhibit the same velocity of propagation. At such low perturbation, low-amplitude spontaneous propagating density fluctuations are also visible to some extent. These features have the same propagation speed as the generated solitons. Similarly to the solitons, the spontaneous features also have two branches, which are created upon the initialization of the simulations. $\delta n / n$ in these two branches is, however, the same, unlike in the pairs of solitons that are created by the electric field pulse defined by eq.~(\ref{eq:field}). At higher degrees of perturbation (Figures~\ref{fig:maps1}(b)--(d)) these features are no longer observed due to the broader range of $\delta n / n$ of interest. At these conditions, the propagation velocities of the "$+$" and "$-$" solitons become unequal: while at low $\widetilde{E}_0$ (see Figure~\ref{fig:maps1}(a)) the features "meet" at the sides of the simulation box, at higher $\widetilde{E}_0$ the positive peak propagates with a higher velocity compared to the negative peak.
The strength of the perturbation, $\widetilde{E}_0$, also influences the shapes of the propagating density perturbations. This is shown in Figure~\ref{fig:dens1}, which displays cuts of the density profiles at given times. For the case of $\widetilde{E}_0$ = 0.277 the two peaks propagate symmetrically in both directions and have similar shapes. The different propagation velocities are confirmed here, too, for the amplitudes $\widetilde{E}_0$ = 0.554, 1.662, and 2.77 (panels (b), (c), and (d), respectively). At high perturbations the shapes of the two density peaks become significantly different. The positive density peak acquires a sharp leading edge and an extended trailing edge, while the opposite happens for the negative density peaks. At $\widetilde{E}_0$ = 0.277 and 0.554 (Figure~\ref{fig:dens1}(a) and (b)) the amplitude of the propagating peaks decreases only slightly with time, which is due to the broadening of the density "pulses". At higher perturbations a significant broadening as well as a remarkable change of the pulse shapes is seen in panels (c) and (d) of Figure~\ref{fig:dens1}. These effects can also be discerned in the spatio-temporal distributions shown previously in Figure~\ref{fig:maps1}.
\begin{figure}[H]
\begin{center}
\footnotesize(a)\includegraphics[width=0.9\columnwidth]{sol-01-map.png}\\
\footnotesize(b)\includegraphics[width=0.9\columnwidth]{sol-02-map.png}\\
\footnotesize(c)\includegraphics[width=0.9\columnwidth]{sol-31-map.png}\\
\footnotesize(d)\includegraphics[width=0.9\columnwidth]{sol-45-map.png}\\
\caption{Density of the system as a function of space and time. The perturbation electric field is applied at $t$ = 0, at the position $x/L = 0.5$, with amplitudes $\widetilde{E}_0$ = 0.277 (a), 0.554 (b), 1.662 (c), and 2.77 (d). $\Gamma=100$ and $\kappa=1$. As an effect of the periodic boundary conditions, the solitons that leave the simulation box at either side re-enter the box at the opposite side. Note that while at low $\widetilde{E}_0$ the positive and negative density peaks propagate with the same velocity, with increasing $\widetilde{E}_0$ the velocity of the positive density peak becomes higher, and the velocity of the negative peak becomes lower. The additional features seen in (a) correspond to low-amplitude, spontaneous, propagating density fluctuations in the system.}
\label{fig:maps1}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\footnotesize(a)\includegraphics[width=0.85\columnwidth]{sol-01-density.png}\\
\footnotesize(b)\includegraphics[width=0.85\columnwidth]{sol-02-density.png}\\
\footnotesize(c)\includegraphics[width=0.85\columnwidth]{sol-31-density.png}\\
\footnotesize(d)\includegraphics[width=0.85\columnwidth]{sol-45-density.png}\\
\caption{Density of the system as a function of space at different times following the application of the perturbation electric field, $\widetilde{E}_0$ = 0.277 (a), 0.554 (b), 1.662 (c), and 2.77 (d). The perturbation electric field pulse is applied at $t$ = 0, at the position $x/L = 0.5$. $\Gamma=100$ and $\kappa=1$.}
\label{fig:dens1}
\end{center}
\end{figure}
Our further studies are conducted with an electric field amplitude of $\widetilde{E}_0$ = 0.554, which represents a compromise between the signal-to-noise ratio and a small change of the density peak shapes with time. At lower $\widetilde{E}_0$ we have observed density peaks of the order of 1\%, which is not much higher than the "natural" fluctuations of the density in the slabs where the density is measured. (As we use 200 slabs, the average number of particles in each is 20\,000, resulting in a fluctuation level of $\sim 1/\sqrt{20\,000} \approx 0.7$\%.) At high $\widetilde{E}_0$ values we have observed a significant change of the shapes of the density peaks over an extended domain of time.
Figure~\ref{fig:maps-kappa}, together with Figure~\ref{fig:maps1}(b), illustrates the effect of the screening parameter $\kappa$ on the propagation of the solitons. The softening of the potential (i.e., an increasing $\kappa$) clearly results in a decrease of the propagation velocity. In the limit of small amplitudes the solitons are found to propagate with the longitudinal sound speed. This is confirmed in Figure~\ref{fig:speed}(a), which shows the measured propagation velocities as a function of $\kappa$, in comparison with the theoretical curve derived from lattice summation calculations. The measured data are shown for both the positive and the negative density peaks; the former always indicates a slightly higher velocity. Figure~\ref{fig:speed}(b) shows the propagation velocity of the solitons as a function of the density perturbation $\delta n/n$ (which, in turn, depends on $\widetilde{E}_0$). The data indicate a linear dependence of the velocity on $\delta n/n$.
\begin{figure}[H]
\begin{center}
\footnotesize(a)\includegraphics[width=0.45\textwidth]{sol-05-map.png}\\
\footnotesize(b)\includegraphics[width=0.45\textwidth]{sol-07-map.png}
\caption{Density of the system as a function of space and time for $\Gamma=100$ and (a) $\kappa=2$ and (b) $\kappa=3$. $\widetilde{E}_0$ = 0.554. The corresponding plot for $\kappa=1$ was shown in Figure \ref{fig:maps1}(b).}
\label{fig:maps-kappa}
\end{center}
\end{figure}
\begin{figure}[h]
\begin{center}
\footnotesize(a)\includegraphics[width=0.45\textwidth]{soliton_sound.png}\\
\footnotesize(b)\includegraphics[width=0.45\textwidth]{soliton_sound_kappa1.png}\\
\caption{Propagation velocity of the solitons: (a) dependence on the screening parameter $\kappa$ at a fixed pulse amplitude $\widetilde{E}_0$ = 0.554 (symbols, data taken from Figure~\ref{fig:maps-kappa}). The line shows the 2D Yukawa lattice sound speed as reference (approximate formula taken from Ref. \cite{Khrapak15}). The normalization factor is $c_0 = \omega_{\rm p}a$. (b) Dependence on density modulation amplitude including both compressional (positive) and rarefactional (negative) waves (symbols, data taken from Figure~\ref{fig:dens1}) at $\kappa=1$. The dashed line is a linear fit to the data points, shown to guide the eye. }
\label{fig:speed}
\end{center}
\end{figure}
Next, we investigate the scenario when the perturbing electric field is applied simultaneously at two locations in the simulation box (at $x/L$ = 0.25 and 0.75). Figure~\ref{fig:maps_2_sol}(a) shows the case when the electric field has the same polarity at the two different locations, while Figure~\ref{fig:maps_2_sol}(b) corresponds to the case when the electric field has opposite polarity at the two selected locations. In both cases, two pairs of solitons are generated. The plots of the density distributions confirm that the solitons cross each other without influencing each other's propagation.
\begin{figure}[h]
\begin{center}
\footnotesize(a)\includegraphics[width=0.45\textwidth]{sol-11-map.png}\\
\footnotesize(b)\includegraphics[width=0.45\textwidth]{sol-12-map.png}
\caption{Density of the system as a function of space and time for the cases of two pairs of solitons, generated at $x/L$ = 0.25 and at $x/L$ = 0.75. $|\widetilde{E}_0|$ = 0.554, $\Gamma=100$, $\kappa=1$. In (a) the solitons are created at the two selected spatial positions by electric field pulse having the same polarity, while in (b) the polarity of the electric field pulses is opposite at the two locations.}
\label{fig:maps_2_sol}
\end{center}
\end{figure}
\subsection{The effect of the magnetic field}
\label{sec:res2}
Finally, we address the effect of an external magnetic field on the propagation of the solitons.
In the first case, an external magnetic field with a strength of $\beta=0.1$ is turned on in the simulation at the time $\omega_{\rm p}t = 1333.\dot{3}$. Figure~\ref{fig:maps_magnetic} shows that both the positive and negative peaks become trapped; they neither propagate nor dissolve by diffusion in the system over the time scale of the simulation. This behavior is a consequence of the well-known scenario that diffusion can be strongly blocked by a magnetic field in strongly coupled plasmas \cite{Ott14,Ott15,Begum16,Feng17,Karasev19}. In the trapped state the particles undergo a cyclotron-type motion characterized by the strength of the magnetic field ($\beta$).
\begin{figure}[h]
\begin{center}
\includegraphics[width=0.45\textwidth]{sol-13-map.png}
\caption{Effect of magnetic field on the propagation of the solitons: a magnetic field with $\beta=0.1$ is turned on at $\omega_{\rm p}t = 1333.\dot{3}$. $\Gamma=100$, $\kappa=1$, $\widetilde{E}_0$ = 0.554, $x_0/L = 0.5$. Upon the application of the magnetic field the propagation of the pulses is blocked as the cyclotron motion converts propagation into localized oscillations.}
\label{fig:maps_magnetic}
\end{center}
\end{figure}
\begin{figure}[H]
\begin{center}
\footnotesize(a)\includegraphics[width=0.45\textwidth]{sol-1001-off-map.png}\\
\footnotesize(b)\includegraphics[width=0.45\textwidth]{sol-1003-off-map.png}\\
\footnotesize(c)\includegraphics[width=0.45\textwidth]{sol-1005-off-map.png}\\
\footnotesize(d)\includegraphics[width=0.45\textwidth]{sol-1007-off-map.png}\\
\footnotesize(e)\includegraphics[width=0.45\textwidth]{sol-1009-off-map.png}\\
\caption{Effect of a magnetic field pulse on soliton propagation: (a) $\beta=0.104$, (b) $\beta=0.108$, (c) $\beta=0.112$, (d) $\beta=0.116$, and (e) $\beta=0.120$. $\Gamma=100$, $\kappa=1$, $\widetilde{E}_0$ = 0.554. The magnetic field pulse has a duration of $\omega_{\rm p}T = 333.\dot{3}$, subsequently the solitons are released and can propagate in their original, opposite, or both direction(s) depending on the magnetic field strength.}
\label{fig:maps_magnetic_pulse}
\end{center}
\end{figure}
\begin{table}
\caption{Parameters of the cases shown in Figure~\ref{fig:maps_magnetic_pulse}. $\beta$ is the normalized magnetic field strength, $N_{\rm c} = \Omega_{\rm c}\,T\,/\, 2\,\pi$ is the number of cyclotron oscillation cycles during the on phase of the magnetic field, and $\phi$ is the phase angle corresponding to the fractional part of $N_{\rm c}$.}
\begin{ruledtabular}
\begin{tabular}{ccccc}
$\beta$ & $N_{\rm c}$ & $\phi$\,[deg] & Effect & Figure \\
\hline
0.102 & 5.411 & 148.1 & Reflection & \\
0.104 & 5.517 & 186.3 & Reflection & 8(a) \\
0.106 & 5.623 & 224.4 & Splitting & \\
0.108 & 5.730 & 262.6 & Splitting & 8(b) \\
0.110 & 5.836 & 300.8 & Transmission & \\
0.112 & 5.942 & 339.0 & Transmission & 8(c) \\
0.114 & 6.048 & 17.2 & Transmission & \\
0.116 & 6.154 & 55.4 & Splitting & 8(d) \\
0.118 & 6.260 & 93.6 & Splitting & \\
0.120 & 6.366 & 131.8 & Reflection & 8(e) \\
0.122 & 6.472 & 170.0 & Reflection & \\
0.124 & 6.578 & 208.2 & Reflection & \\
\end{tabular}
\end{ruledtabular}
\label{tab:1}
\end{table}
It is interesting to note, however, that if the magnetic field is switched off after a certain time, the temporarily blocked solitons can be "released". What happens at this moment is defined by the phase of the cyclotron motion. Figure~\ref{fig:maps_magnetic_pulse} displays and Table~\ref{tab:1} lists a sequence of cases with small differences in the magnetic field strength ($\beta$).
Depending on the phase of the cyclotron oscillation of the trapped particles, the solitons can (i) continue propagating in their original directions, termed "transmission" in Table~\ref{tab:1}, (ii) propagate in the opposite directions, termed "reflection", or (iii) split into a pair of solitons having the same polarity, termed "splitting".
For the cases studied, the on phase of the magnetic field pulse has a fixed duration of $\omega_{\rm p}\,T = 333.\dot{3}$. During this time the particles undergo a number of cyclotron oscillations $N_{\rm c} = \Omega_{\rm c} T / 2 \pi = 333.\dot{3}\, (\Omega_{\rm c}/\omega_{\rm p}) / 2\pi = 333.\dot{3}\, \beta / 2\pi$. Along with the values of $\beta$, Table~\ref{tab:1} also gives the values of $N_{\rm c}$ as defined above and its fractional part converted to a phase angle $0^\circ \leq \phi \leq 360^\circ$. Our data show that whenever $\phi$ is close to $0^\circ$ or $360^\circ$, the solitons are transmitted after the temporary trapping by the magnetic field pulse. Reflection occurs whenever $\phi$ is in the vicinity of $180^\circ$, as expected because of the phase delay of the localized cyclotron oscillations. At intermediate values of $\phi$, close to $90^\circ$ and $270^\circ$, splitting occurs, with nearly equal or different amplitudes depending on the exact value of $\phi$. It is expected that the localization time (the duration of the magnetic field pulse) has an effect similar to that of the magnetic field strength, as $N_{\rm c}$, and consequently $\phi$, are proportional to $T$.
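The entries of Table~\ref{tab:1} follow from this simple arithmetic; a short sketch reproducing $N_{\rm c}$ and $\phi$ (the association of the phase ranges with transmission, reflection and splitting is our empirical reading of the table, not a sharp criterion):
\begin{verbatim}
import numpy as np

def cyclotron_phase(beta, omega_p_T=1000.0 / 3.0):
    # Cyclotron cycles during the field-on phase and the phase angle
    # of the fractional part (omega_p T = 333.333... by default).
    N_c = omega_p_T * beta / (2.0 * np.pi)
    phi = (N_c % 1.0) * 360.0
    return N_c, phi

# e.g. beta = 0.104 -> N_c = 5.517, phi = 186.3 deg ("reflection")
\end{verbatim}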
\section{Summary}
\label{sec:summary}
This work reported our investigations on the propagation of solitons, created by electric field pulses, in a two-dimensional strongly coupled many-body system characterized by the Yukawa potential. The electric field pulses created a pair of solitons, with positive/negative density peaks (referenced to the density of the unperturbed system), that were found to propagate in opposite directions. The propagation speed of both features in the limit of small density perturbations was found to be equal to the longitudinal sound speed in the Yukawa liquid. With increasing perturbation the propagation speed of the positive peak was found to increase, while the propagation speed of the negative peak was found to decrease.
An external magnetic field was found to "freeze" the positions of the density peaks (solitons) due to the largely reduced self-diffusion in the system at $\beta > 0$. Upon the termination of this magnetic field, however, the solitons were found to be released from these "traps" and to continue propagating in directions that depend on the strength of the magnetic field and the trapping time. These observations call for further studies at the microscopic level of individual particles and for the exploration of multiple solitons trapped simultaneously by magnetic field pulses.
\acknowledgements
This work has been supported by the Hungarian Office for Research, Development, and Innovation (NKFIH 119357 and 115805) and by the grant AP05132665 of the Ministry of Education and Science of the Republic of Kazakhstan.
Uncertainty principles appear in harmonic analysis and signal theory in a variety of different forms
involving not only the signal $\varphi$ and its Fourier transform $\mathcal{F}(\varphi)$, but essentially every representation of a signal in the time-frequency space. They are mathematical results that give limitations on the simultaneous concentration of a signal and its Fourier transform and they have implications in signal analysis and quantum physics. In quantum physics they tell us that a particle's speed and position cannot both be measured with infinite precision. In signal analysis they tell us that if we observe a signal only for a finite period of time, we will lose information about the frequencies the signal consists of. Timelimited functions and bandlimited functions are basic tools of signal and image
processing. Like, the simple form of the uncertainty principle tells us that a signal cannot be simultaneously time and bandlimited. This leads to the investigation
of the set of almost time and almost bandlimited functions, which has been initially
by Landau, Pollak \cite{landau1975szego, landau1962prolate} and then by Donoho, Stark \cite{donoho1989uncertainty}.
Motivated by the work of Laeng and Morpurgo \cite{laeng1999uncertainty, morpurgo2001extremals} and the work of Soltani and Ghazwani \cite{zbMATH06504466, zbMATH06692262}, we propose an extension of the techniques and results of Ghobber \cite{ghobber2013uncertainty, ghobber2014variations} to establish a variation of the $L^p$ uncertainty principles for the Weinstein transform.
Many uncertainty principles have already been proved for the Weinstein transform $\mathcal{F}_{W,\alpha}$, see \cite{mejjaoli2012weinstein, mejjaoli2011uncertainty, salem2015heisenberg}. The authors of \cite{mejjaoli2011uncertainty} have established the Heisenberg-Pauli-Weyl inequality for the Weinstein transform, by showing that, for every $\varphi$ in $L^2_\alpha(\mathbb{R}^{d+1}_+)$,
\begin{equation}\label{firstuncert}
\|\varphi\|_{\alpha, 2}^2\leq \frac{2}{2\alpha+d+2}
\||x|\varphi\|_{\alpha, 2}\||y|\mathcal{F}_{W,\alpha}(\varphi)\|_{\alpha, 2}.
\end{equation}
Our first result will be the following variation of Heisenberg-Pauli-Weyl-type inequality.
\medskip
\noindent{\bf Theorem A.}\
{\sl Let $1<p\leq 2$, $q=p/(p-1)$, $0<s<(2\alpha+d+2)/q$ and $t>0$, then for all $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$,
\begin{equation*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q} \leq K(s,t) \left\||x|^s\varphi\right\|_{\alpha,p}^\frac{t}{s+t}\left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_
{\alpha,q}^\frac{s}{s+t},
\end{equation*}
where $K(s,t)$ is a positive constant.}
This theorem implies in particular that if $\varphi$ is highly localized in the neighbourhood of $x=0$,
then $\mathcal{F}_{W,\alpha}(\varphi)$ cannot be concentrated in the neighbourhood of $y=0$. In particular, for $p=q=2$, we obtain the general case of Heisenberg-Pauli-Weyl-type inequality (\ref{firstuncert})
\begin{equation*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,2} \leq K(s,t) \left\||x|^s\varphi\right\|_{\alpha,2}^\frac{t}{s+t}\left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_
{\alpha,2}^\frac{s}{s+t}.
\end{equation*}
The second and third results are two continuous-time uncertainty principles of concentration type that depend on the sets of concentration $\Omega$ and $\Sigma$, and on the time function $\varphi$.
\medskip
\noindent{\bf Theorem B.}\
{\sl Let $\Omega$, $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$ and $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$, $1\leq p\leq 2$. If $\varphi$ is $\varepsilon_\Omega$-concentrated to $\Omega$ in $ L^p_\alpha(\mathbb{R}^{d+1}_+)$-norm and $\mathcal{F}_{W,\alpha}(\varphi)$ is $\varepsilon_\Sigma$-concentrated to $\Sigma$ in $ L^q_\alpha(\mathbb{R}^{d+1}_+)$-norm, $q=p/(p-1)$, then we have
\begin{equation*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\leq \frac{ \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{q} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{q}+\varepsilon_\Omega}{1-\varepsilon_\Sigma}\left\|\varphi
\right\|_{\alpha,p}.
\end{equation*} }
The statement of Theorem B depends on the time function $\varphi$. However, for $p=q=2$ the continuous-time uncertainty principle becomes
\begin{equation*}
1-\varepsilon_\Omega-\varepsilon_\Sigma\leq \mu_\alpha(\Omega)^\frac{1}{2}\mu_\alpha(\Sigma)^\frac{1}{2},
\end{equation*}
which is independent of the time function $\varphi$.
\medskip
\noindent{\bf Theorem C.}\
{\sl Let $\Omega$, $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$ and $\varphi\in (L^1_\alpha\cap L^p_\alpha)(\mathbb{R}^{d+1}_+)$, $1\leq p\leq 2$. If $\varphi$ is $\varepsilon_\Omega$-concentrated to $\Omega$ in $ L^p_\alpha(\mathbb{R}^{d+1}_+)$-norm and $\mathcal{F}_{W,\alpha}(\varphi)$ is $\varepsilon_\Sigma$-concentrated to $\Sigma$ in $ L^q_\alpha(\mathbb{R}^{d+1}_+)$-norm, $q=p/(p-1)$, then we have
\begin{equation*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\leq \frac{ \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{q} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{q}}{(1-\varepsilon_\Omega)(1-\varepsilon_\Sigma)}\left\|\varphi
\right\|_{\alpha,p}.
\end{equation*} }
Likewise, the statement of the previous theorem depends on the time function $\varphi$; like the first continuous-time uncertainty principle (Theorem B), it becomes independent of $\varphi$ for $p=q=2$, and we have
\begin{equation*}
(1-\varepsilon_\Omega)(1-\varepsilon_\Sigma)\leq \mu_\alpha(\Omega)^\frac{1}{2}\mu_\alpha(\Sigma)^\frac{1}{2}.
\end{equation*}
The last result is a continuous-bandlimited uncertainty principle of
concentration type that depends on the sets of concentration
$\Omega$ and $\Sigma$, but is independent of the bandlimited function.
\medskip
\noindent{\bf Theorem D.}\
Let $\Omega$, $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$ and let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$ with $1\leq p\leq 2$. Then if $\varphi$ is $\varepsilon_\Omega$-concentrated to $\Omega$ and $\varepsilon_\Sigma$-bandlimited to $\Sigma$ in $L^p_\alpha$-norm, we have
\begin{equation*}
1-\varepsilon_\Omega-\varepsilon_\Sigma\leq (1+\varepsilon_\Sigma)\left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{p} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{p}.
\end{equation*}
The main body of the paper is organized as follows. In Section 2, we recall some harmonic analysis results related to the Weinstein operator. In Section 3, we prove a variation of the Heisenberg-Pauli-Weyl uncertainty principle for the Weinstein operator. Finally, in Section 4, we establish three continuous uncertainty principles of concentration type in $L^p_\alpha$-norm. These estimates depend on the sets of concentration $\Omega$ and $\Sigma$ and on the band-limited function $\varphi$; only the last estimate is independent of $\varphi$.
\section{Preliminaries}
The Weinstein operator $\Delta_{W,\alpha}^d$ is defined on $\mathbb{R}_{+}^{d+1}=\mathbb{R}^d\times(0, \infty)$ by
\begin{equation*}
\Delta_{W,\alpha}^d=\sum_{j=1}^{d+1}\frac{\partial^2}{\partial x_j^2}+\frac{2\alpha+1}{x_{d+1}}\frac{\partial}{\partial x_{d+1}}=\Delta_d+L_\alpha,\;\alpha>-1/2,
\end{equation*}
where $\Delta_d$ is the Laplacian operator for the $d$ first variables and $L_\alpha$ is the Bessel operator for the last variable defined on $(0,\infty)$ by
$$L_\alpha u=\frac{\partial^2 u}{\partial x_{d+1}^2}+\frac{2\alpha+1}{x_{d+1}}\frac{\partial u}{\partial x_{d+1}}.$$
The Weinstein operator $\Delta_{W,\alpha}^d$ has several applications in pure and applied mathematics, especially in fluid mechanics \cite{brelot1978equation, weinstein1962singular}.
For all $\lambda=(\lambda_1,...,\lambda_{d+1})\in\mathbb{C}^{d+1}$, the system
\begin{equation}
\begin{gathered}
\frac{\partial^2u}{\partial x_{j}^2}( x)
=-\lambda_{j} ^2u(x), \quad\text{if } 1\leq j\leq d \\
L_{\alpha}u( x) =-\lambda_{d+1}^2u( x), \\
u( 0) =1, \quad \frac{\partial u}{\partial
x_{d+1}}(0)=0,\quad \frac{\partial u}{\partial
x_{j}}(0)=-i\lambda_{j}, \quad \text{if } 1\leq j\leq d
\end{gathered}
\end{equation}
has a unique solution denoted by $\Lambda_{\alpha}^d(\lambda,.),$ and given by
\begin{equation}\label{wkernel}
\Lambda_{\alpha}^d(\lambda,x)=e^{-i<x^\prime,\lambda^\prime>}j_\alpha(x_{d+1}\lambda_{d+1})
\end{equation}
where $x=(x^\prime,x_{d+1}),\; \lambda=(\lambda^\prime,\lambda_{d+1})$ and $j_\alpha$ is the normalized Bessel function of index $\alpha$ defined by
$$j_\alpha(x)=\Gamma(\alpha+1)\sum_{k=0}^\infty\frac{(-1)^k x^{2k}}{2^{2k}\, k!\,\Gamma(\alpha+k+1)}.$$
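For instance, for $\alpha=1/2$ the series sums to an elementary function, $j_{1/2}(x)=\frac{\sin x}{x}$, so that (\ref{wkernel}) takes the explicit form
$$\Lambda_{1/2}^d(\lambda,x)=e^{-i<x^\prime,\lambda^\prime>}\,\frac{\sin(x_{d+1}\lambda_{d+1})}{x_{d+1}\lambda_{d+1}}.$$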
The function $(\lambda,x)\mapsto\Lambda_{\alpha}^d(\lambda,x)$ is called the Weinstein kernel; it has a unique extension to $\mathbb{C}^{d+1}\times\mathbb{C}^{d+1}$ and satisfies the following properties.
\begin{itemize}
\item[(i)] For all $(\lambda,x)\in \mathbb{C}^{d+1}\times\mathbb{C}^{d+1}$ we have
\begin{equation*}
\Lambda_{\alpha}^d(\lambda,x)=\Lambda_{\alpha}^d(x,\lambda).
\end{equation*}
\item[(ii)] For all $(\lambda,x)\in \mathbb{C}^{d+1}\times\mathbb{C}^{d+1}$ we have
\begin{equation*}
\Lambda_{\alpha}^d(\lambda,-x)=\Lambda_{\alpha}^d(-\lambda,x).
\end{equation*}
\item[(iii)] For all $(\lambda,x)\in \mathbb{C}^{d+1}\times\mathbb{C}^{d+1}$ we get
\begin{equation*}
\Lambda_{\alpha}^d(\lambda,0)=1.
\end{equation*}
\item[(iv)] For all $\nu\in\mathbb{N}^{d+1},\;x\in\mathbb{R}^{d+1}$ and $\lambda\in\mathbb{C}^{d+1}$ we have
\begin{equation*}\label{klk}
\left|D_\lambda^\nu\Lambda_{\alpha}^d(\lambda,x)\right|\leq\left\|x\right\|^{\left|\nu\right|}e^{\left\|x\right\|\left\|\Im \lambda\right\|}
\end{equation*}
\end{itemize}
where $D_\lambda^\nu=\partial^\nu/(\partial\lambda_1^{\nu_1}...\partial\lambda_{d+1}^{\nu_{d+1}})$ and $\left|\nu\right|=\nu_1+...+\nu_{d+1}.$ In particular, for all $(\lambda,x)\in \mathbb{R}^{d+1}\times\mathbb{R}^{d+1}$, we have
\begin{equation}\label{normLambda}
\left|\Lambda_{\alpha}^d(\lambda,x)\right|\leq 1.
\end{equation}
In the following we denote by
\begin{itemize}
\item[(i)] $C_*(\mathbb{R}^{d+1})$, the space of continuous functions on $\mathbb{R}^{d+1},$ even with respect to the last variable.
\item[(ii)] $S_*(\mathbb{R}^{d+1})$, the space of the $C^\infty$ functions, even with respect to the last variable, and rapidly decreasing together with their derivatives.
\item[(iii)] $L^p_\alpha(\mathbb{R}^{d+1}_+),\;1\leq p\leq \infty,$ the space of measurable functions $f$ on $\mathbb{R}^{d+1}_+$ such that
$$\left\|f\right\|_{\alpha,p}=\left(\int_{\mathbb{R}^{d+1}_+}\left|f(x)\right|^pd\mu_\alpha(x)\right)^{1/p}<\infty, \;p\in[1,\infty),$$
$$\left\|f\right\|_{\alpha,\infty}=\textrm{ess}\sup_{x\in\mathbb{R}^{d+1}_+}\left|f(x)\right|<\infty,$$
where $d\mu_{\alpha}(x)$ is the measure on $\mathbb{R}_{+}^{d+1}=\mathbb{R}^d\times(0,\infty)$ given by
\begin{equation*}\label{mesure}
d\mu_\alpha(x)=\frac{x^{2\alpha+1}_{d+1}}{(2\pi)^d2^{2\alpha}\Gamma^2(\alpha+1)}dx.
\end{equation*}
\end{itemize}
For a radial function $\varphi\in L_{\alpha}^{1}(\mathbb{R}_{+} ^{d+1})$ the function $\tilde{\varphi}$ defined on $\mathbb{R}_+$ such that $\varphi(x)=\tilde{\varphi}(|x|)$, for all
$x\in\mathbb{R}_{+} ^{d+1}$, is integrable with respect to the measure $r^{2\alpha+d+1}dr$, and we have
\begin{equation}\label{radialweinstein}
\int_{\mathbb{R}_{+}^{d+1}}\varphi(x)d\mu_{\alpha}(x)=a_\alpha\int_{0}^{\infty}
\tilde{\varphi}(r)r^{2\alpha+d+1}dr,
\end{equation}
where $$a_\alpha=\frac{1}{2^{\alpha+\frac{d}{2}}\Gamma(\alpha+\frac{d}{2}+1)}.$$
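As a simple illustration of (\ref{radialweinstein}), for the ball $B_\rho=\{x\in\mathbb{R}^{d+1}_+: |x|<\rho\}$ one gets
$$\mu_\alpha(B_\rho)=a_\alpha\int_0^\rho r^{2\alpha+d+1}\,dr=\frac{a_\alpha\,\rho^{2\alpha+d+2}}{2\alpha+d+2};$$
computations of this type will be used repeatedly in the sequel.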
The Weinstein transform, generalizing the usual Fourier transform, is given for
$\varphi\in L_{\alpha}^{1}(\mathbb{R}_{+} ^{d+1})$ and $\lambda\in\mathbb{R}_{+}^{d+1}$ by
$$
\mathcal{F}_{W,\alpha}
(\varphi)(\lambda)=\int_{\mathbb{R}_{+}^{d+1}}\varphi(x)\Lambda_{\alpha}^d(x, \lambda
)d\mu_{\alpha}(x).
$$
Some known basic properties of the Weinstein transform are as follows; for the proofs, we refer to \cite{nahia1996mean, nahia1996spherical}.
\begin{itemize}
\item[(i)] For all $\varphi\in L^1_\alpha(\mathbb{R}^{d+1}_+)$, the function $\mathcal{F}_{W,\alpha}(\varphi)$ is continuous on $\mathbb{R}^{d+1}_+$ and we have
\begin{equation}\label{L1-Linfty}
\left\|\mathcal{F}_{W,\alpha}\varphi\right\|_{\alpha,\infty}\leq\left\|\varphi\right\|_{\alpha,1}.
\end{equation}
\item[(ii)] The Weinstein transform is a topological isomorphism from $\mathcal{S}_*(\mathbb{R}^{d+1}_+)$ onto itself. The inverse transform is given by
\begin{equation}\label{inversionweinstein}
\mathcal{F}_{W,\alpha}^{-1}\varphi(\lambda)= \mathcal{F}_{W,\alpha}\varphi(-\lambda),\;\textrm{for\;all}\;\lambda\in\mathbb{R}^{d+1}_+.
\end{equation}
\item[(iii)] Parseval formula: For all $\varphi, \phi\in \mathcal{S}_*(\mathbb{R}^{d+1}_+)$, we have
\begin{equation*}\label{MM} \int_{\mathbb{R}^{d+1}_+}\varphi(x)\overline{\phi(x)}d\mu_\alpha(x)=\int_{\mathbb{R}^{d+1}_+}\mathcal{F}_{W,\alpha}
(\varphi)(x)\overline{\mathcal{F}_{W,\alpha}(\phi)(x)}d\mu_\alpha(x).
\end{equation*}
\item[(iv)] Plancherel formula: For all $\varphi\in L^2_\alpha(\mathbb{R}^{d+1}_+)$, we have
\begin{equation}\label{Plancherel formula}
\left\|\mathcal{F}_{W,\alpha}\varphi\right\|_{\alpha,2}=\left\|\varphi\right\|_{\alpha,2}.
\end{equation}
\item[(v)] Plancherel Theorem: The Weinstein transform $\mathcal{F}_{W,\alpha}$ extends uniquely to an isometric isomorphism on $L^2_\alpha(\mathbb{R}^{d+1}_+).$
\item[(vi)] Inversion formula: Let $\varphi\in L^1_\alpha(\mathbb{R}^{d+1}_+)$ such that $\mathcal{F}_{W,\alpha}\varphi\in L^1_\alpha(\mathbb{R}^{d+1}_+)$, then we have
\begin{equation}\label{inv}
\varphi(\lambda)=\int_{\mathbb{R}^{d+1}_+}\mathcal{F}_{W,\alpha}\varphi(x)\Lambda_{\alpha}^d(-\lambda,x)d\mu_\alpha(x),\;\textrm{a.e. }\lambda\in\mathbb{R}^{d+1}_+.
\end{equation}
\end{itemize}
Using relations (\ref{L1-Linfty}) and (\ref{Plancherel formula}) together with the Marcinkiewicz interpolation theorem \cite{zbMATH03367521}, we deduce that for every $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$ with $1\leq p\leq 2$, the function $\mathcal{F}_{W,\alpha}(\varphi)$ belongs to $L^q_\alpha(\mathbb{R}^{d+1}_+)$, $q=p/(p-1)$, and
\begin{equation}\label{Lp-Lq}
\left\|\mathcal{F}_{W,\alpha}\varphi\right\|_{\alpha,q}\leq\left\|\varphi\right\|_{\alpha,p}.
\end{equation}
\section{$L^p$-Heisenberg-Pauli-Weyl inequality}
In this section, we extend the Heisenberg-Pauli-Weyl uncertainty principle (\ref{firstuncert})
to a more general setting for the Weinstein operator. We use the method of Ciatti et al. \cite{ciatti2007heisenberg}, which handles the Euclidean counterpart. We first need the following lemma.
\begin{lem}
Let $1<p\leq 2$, $q=p/(p-1)$ and $0<s<(2\alpha+d+2)/q$. Then for all $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$ and $z>0$,
\begin{equation}\label{lemexpo}
\left\|e^{-z|y|^2}\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\leq \left(1+\frac{K_\alpha}{(2q)^{(\alpha+\frac{d}{2}+1)\frac{1}{q}}}\right)z^{-s/2} \left\||x|^s\varphi\right\|_{\alpha, p},
\end{equation}
where
$$K_\alpha=\left((2\alpha+d+2-qs)\,2^{\alpha+\frac{d}{2}}\,\Gamma(\alpha+\tfrac{d}{2}+1)\right)^{-1/q}.$$
\begin{proof}
Let $\varphi \in L^p_{\alpha}(\mathbb{R}^{d+1}_+)$. The inequality (\ref{lemexpo}) trivially holds if
$\left\||x|^s\varphi\right\|_{\alpha, p}=\infty$, so we may assume that $\left\||x|^s\varphi\right\|_{\alpha, p}<\infty.$ For $\rho>0$, we put $B_\rho=\{x\in\mathbb{R}^{d+1}_+: |x|<\rho\}$ and $B_\rho^c=\mathbb{R}^{d+1}_+\backslash B_\rho$, and denote by $\chi_{B_\rho}$ and $\chi_{B_\rho^c}$ the corresponding characteristic functions. Since
$$|\varphi\chi_{B_\rho^c}(x)|\leq \rho^{-s}|x|^s|\varphi(x)|,$$
inequality (\ref{Lp-Lq}) gives
\begin{align*}
\left\|e^{-z|y|^2}\mathcal{F}_{W,\alpha}(\varphi\chi_{B_\rho^c})\right\|_{\alpha,q}\leq & \left\|e^{-z|y|^2}\right\|_{\alpha,\infty}\left\|\mathcal{F}_{W,\alpha}(\varphi\chi_{B_\rho^c})
\right\|_{\alpha,q} \\
\leq & \left\|\varphi\chi_{B_\rho^c}
\right\|_{\alpha,p} \\
\leq & \rho^{-s}\left\||x|^s\varphi
\right\|_{\alpha,p},
\end{align*}
since $\left\|e^{-z|y|^2}\right\|_{\alpha,\infty}=1$.
On the other hand, according to (\ref{L1-Linfty}) and H\"older's inequality, we obtain
\begin{align*}
\left\|e^{-z|y|^2}\mathcal{F}_{W,\alpha}(\varphi\chi_{B_\rho})\right\|_{\alpha,q}\leq & \left\|e^{-z|y|^2}\right\|_{\alpha,q}\left\|\mathcal{F}_{W,\alpha}(\varphi\chi_{B_\rho})
\right\|_{\alpha,\infty} \\
\leq & \left\|e^{-z|y|^2}\right\|_{\alpha,q}\left\|\varphi\chi_{B_\rho}\right\|_{\alpha,1}\\
\leq & \left\|e^{-z|y|^2}\right\|_{\alpha,q}\left\||x|^{-s}\chi_{B_\rho}\right\|_{\alpha,q} \left\||x|^s\varphi\right\|_{\alpha,p}.
\end{align*}
According to the integral formula (\ref{radialweinstein}), we have the identities
$$ \left\|e^{-z|y|^2}\right\|_{\alpha,q}=(2qz)^{-(\alpha+\frac{d}{2}+1)\frac{1}{q}}
\quad\text{and}\quad \left\||x|^{-s}\chi_{B_\rho}\right\|_{\alpha,q}=K_\alpha\,\rho^{-s+(2\alpha+d+2)/q}.$$
Hence, we get
$$\left\|e^{-z|y|^2}\mathcal{F}_{W,\alpha}(\varphi\chi_{B_\rho})\right\|_{\alpha,q}\leq \frac{K_\alpha}{(2q)^{(\alpha+\frac{d}{2}+1)\frac{1}{q}}}\,z^{-(\alpha+\frac{d}{2}+1)\frac{1}{q}}\,\rho^{-s+(2\alpha+d+2)/q} \left\||x|^s\varphi\right\|_{\alpha,p},$$
and
\begin{align*}
\left\|e^{-z|y|^2}\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q} \leq & \left\|e^{-z|y|^2}\mathcal{F}_{W,\alpha}(\varphi\chi_{B_\rho})\right\|_{\alpha,q} + \left\|e^{-z|y|^2}\mathcal{F}_{W,\alpha}(\varphi\chi_{B_\rho^c})\right\|_{\alpha,q} \\
\leq & \rho^{-s}\left(1+\frac{K_\alpha}{(2q)^{(\alpha+\frac{d}{2}+1)\frac{1}{q}}}\rho^{(2\alpha+d+2)/q}
z^{-(\alpha+\frac{d}{2}+1)\frac{1}{q}}\right) \left\||x|^s\varphi\right\|_{\alpha,p}.
\end{align*}
By choosing $\rho=z^{1/2}$ we get the result.
\end{proof}
\begin{thm}
Let $1<p\leq 2$, $q=p/(p-1)$, $0<s<(2\alpha+d+2)/q$ and $t>0$, then for all $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$,
\begin{equation}\label{LpHPWU}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q} \leq K(s,t) \left\||x|^s\varphi\right\|_{\alpha,p}^\frac{t}{s+t}\left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_
{\alpha,q}^\frac{s}{s+t},
\end{equation}
where $K(s,t)$ is a positive constant.
\end{thm}
\begin{proof}
Let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$, $1<p\leq 2$, be such that
$$\left\||x|^s\varphi\right\|_{\alpha,p}+\left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_
{\alpha,q}<\infty.$$
Suppose first that $0<s<(2\alpha+d+2)/q$ and $t\leq 2$. Then, by inequality (\ref{lemexpo}) we have, for all $z>0$,
\begin{align*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q} \leq & \left\|e^{-z|y|^2}\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q} + \left\|(1-e^{-z|y|^2})\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q} \\
\leq & \left(1+\frac{K_\alpha}{(2q)^{(\alpha+\frac{d}{2}+1)\frac{1}{q}}}\right)z^{-s/2} \left\||x|^s\varphi\right\|_{\alpha, p} + \left\|(1-e^{-z|y|^2})\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}.
\end{align*}
On the other hand, we have
$$\left\|(1-e^{-z|y|^2})\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}=
z^{t/2}\left\|(z|y|^2)^{-t/2}(1-e^{-z|y|^2})\,|y|^t\,\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}.$$
Since $(1-e^{-u})u^{-t/2}$ is bounded for all $u\geq 0$ when $t\leq 2$, it follows that
$$ \left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q} \leq K\left(z^{-s/2}\left\||x|^s\varphi\right\|_{\alpha, p} +z^{t/2}\left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\right).$$
By taking $$z=\left(\frac{s\left\||x|^s\varphi\right\|_{\alpha, p}}{t\left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}}\right)^{\frac{2}{s+t}},$$
we get the result for all $t\leq 2.$
It remains to show the result for $t>2$. Since, for all $\varepsilon>0$,
$|y|\leq \varepsilon+\varepsilon^{1-t}|y|^t,$
it follows that
\begin{equation}\label{ineq1}
\left\||y|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\leq \varepsilon\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q} + \varepsilon^{1-t} \left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}.
\end{equation}
By choosing
$$ \varepsilon=(t-1)^\frac{1}{t}\left(\frac{ \left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}}{\left\|\mathcal{F}_{W,\alpha}(\varphi)
\right\|_{\alpha,q}}\right)^\frac{1}{t},$$
we get
\begin{equation}\label{inq2}
\left\||y|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\leq \frac{t}{t-1}(t-1)^\frac{1}{t} \left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}^\frac{t-1}{t} \left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}^\frac{1}{t}.
\end{equation}
Applying the case $t=1$ established above, together with (\ref{inq2}), we obtain
\begin{align*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q} & \leq K\left\||x|^s\varphi\right\|_{\alpha,p}^\frac{1}{1+s}\left\||y|\mathcal{F}_{W,\alpha}(\varphi)\right\|_
{\alpha,q}^\frac{s}{1+s} \\
& \leq K \left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}^\frac{s(t-1)}{t(s+1)} \left\||x|^s\varphi\right\|_{\alpha,p}^\frac{1}{1+s}\left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_
{\alpha,q}^\frac{s}{t(s+1)},
\end{align*}
which implies
\begin{equation*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}^\frac{s+t}{t(s+1)}
\leq K \left\||x|^s\varphi\right\|_{\alpha,p}^\frac{1}{1+s} \left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_
{\alpha,q}^\frac{s}{t(s+1)}.
\end{equation*}
Raising both sides to the power $t(s+1)/(s+t)$ gives the result for $t>2$.
\end{proof}
\begin{rem}
Let $p=q=2$. According to the Plancherel formula (\ref{Plancherel formula}), we get
\begin{equation*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,2} \leq K(s,t) \left\||x|^s\varphi\right\|_{\alpha,2}^\frac{t}{s+t}\left\||y|^t\mathcal{F}_{W,\alpha}(\varphi)\right\|_
{\alpha,2}^\frac{s}{s+t},
\end{equation*}
which is the general case of the Heisenberg-Pauli-Weyl inequality (\ref{firstuncert}) proved by Mejjaoli \cite{mejjaoli2011uncertainty}.
\end{rem}
\section{$L^p$-Donoho-Stark uncertainty principles}
\begin{defn}
Let $\Omega$ and $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$. We define the timelimiting
operator $P_\Omega$ by $P_\Omega\varphi:=\varphi\chi_\Omega$ and the Weinstein integral operator $Q_\Sigma$ by $\mathcal{F}_{W,\alpha}(Q_\Sigma\varphi)=\mathcal{F}_{W,\alpha}(\varphi)\chi_\Sigma.$
\end{defn}
\begin{prop}\label{repQ}
Let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$, $1\leq p\leq 2.$ If $\mu_\alpha(\Sigma)<\infty$, then we have
\begin{equation*}
Q_\Sigma\varphi(x)=\int_{\Sigma}\Lambda_{\alpha}^d(x, \lambda)\mathcal{F}_{W,\alpha}(\varphi)(\lambda)d\mu_{\alpha}(\lambda).
\end{equation*}
\end{prop}
\begin{proof}
Let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$, $1\leq p\leq 2$, and put $q=p/(p-1)$. According to H\"older's inequality and (\ref{Lp-Lq}) we obtain
\begin{align*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\chi_\Sigma\right\|_{\alpha,1} & = \int_{\Sigma}|\mathcal{F}_{W,\alpha}(\varphi)(x)|d\mu_{\alpha}(x) \\
& \leq \left(\mu_{\alpha}(\Sigma)\right)^{1/p} \left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\\
& \leq \left(\mu_{\alpha}(\Sigma)\right)^{1/p} \left\|\varphi\right\|_{\alpha,p}.
\end{align*}
In the same way, we get
\begin{align*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\chi_\Sigma\right\|_{\alpha,2} & =
\left(\int_{\Sigma}|\mathcal{F}_{W,\alpha}(\varphi)(x)|^2d\mu_{\alpha}(x)\right)^\frac{1}{2} \\
& \leq \left(\mu_{\alpha}(\Sigma)\right)^\frac{q-2}{2q} \left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\\
& \leq \left(\mu_{\alpha}(\Sigma)\right)^\frac{q-2}{2q} \left\|\varphi\right\|_{\alpha,p}.
\end{align*}
Hence, $\mathcal{F}_{W,\alpha}(\varphi)\chi_\Sigma\in L^1_\alpha\cap L^2_\alpha(\mathbb{R}^{d+1}_+) $ and by the definition of the Weinstein integral operator $Q_\Sigma$ we obtain
$$Q_\Sigma\varphi=\mathcal{F}_{W,\alpha}^{-1}(\mathcal{F}_{W,\alpha}(\varphi)\chi_\Sigma).$$
Finally, the inversion formula (\ref{inv}) gives the result.
\end{proof}
The following result is an easy consequence of inequality (\ref{Lp-Lq}).
\begin{lem}
Let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$, $1< p\leq 2$ and let $q=p/(p-1)$. Then we have
$$\left\|\mathcal{F}_{W,\alpha}(Q_\Sigma\varphi)\right\|_{\alpha,q} \leq
\left\|\varphi\right\|_{\alpha,p}.$$
\end{lem}
\begin{thm}\label{FPQ}
Let $\Omega$ and $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$. Let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$, $1< p\leq 2$ and $q=p/(p-1)$. Then we have
\begin{equation*}
\left\|\mathcal{F}_{W,\alpha}(Q_\Sigma P_\Omega\varphi)\right\|_{\alpha,q} \leq \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{q} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{q}
\left\|\varphi\right\|_{\alpha,p}.
\end{equation*}
\end{thm}
\begin{proof}
Assume that $\mu_{\alpha}(\Sigma)<\infty$ and $\mu_{\alpha}(\Omega)<\infty$. Let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$, $1< p\leq 2$, and $q=p/(p-1)$. From the definition of the Weinstein integral operator $Q_\Sigma$, we get
$$\mathcal{F}_{W,\alpha}(Q_\Sigma P_\Omega\varphi)=\mathcal{F}_{W,\alpha}(P_\Omega\varphi)\chi_\Sigma.$$
Therefore
\begin{equation}\label{NPQ}
\left\|\mathcal{F}_{W,\alpha}(Q_\Sigma P_\Omega\varphi)\right\|_{\alpha,q} = \left(\int_{\Sigma}\left|\mathcal{F}_{W,\alpha}(P_\Omega\varphi)(\lambda)\right|^q d\mu_{\alpha}(\lambda)\right)^\frac{1}{q}.
\end{equation}
Since
\begin{equation*}
\mathcal{F}_{W,\alpha}(P_\Omega\varphi)(\lambda)=\int_{\Omega}\varphi(x)\Lambda_{\alpha}^d(x, \lambda
)d\mu_{\alpha}(x),
\end{equation*}
then according to H\"older's inequality and (\ref{normLambda}), we obtain
\begin{align*}
\left|\mathcal{F}_{W,\alpha}(P_\Omega\varphi)(\lambda)\right| & \leq \left(\int_{\Omega}\left|\Lambda_{\alpha}^d(x, \lambda)\right|^q d\mu_{\alpha}(x)\right)^\frac{1}{q} \left(\int_{\Omega}\left|\varphi(x)\right|^p d\mu_{\alpha}(x)\right)^\frac{1}{p}
\\
& \leq \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{q} \left\|\varphi\right\|_{\alpha,p}.
\end{align*}
Hence, by (\ref{NPQ}), we get
$$\left\|\mathcal{F}_{W,\alpha}(Q_\Sigma P_\Omega\varphi)\right\|_{\alpha,q} \leq
\left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{q} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{q}
\left\|\varphi\right\|_{\alpha,p}.$$
\end{proof}
\subsection{Concentration uncertainty principle}
In this section we present two continuous-time uncertainty
principles of concentration type, and we show that they depend on the sets
of concentration $\Omega$ and $\Sigma$, and on the time function $\varphi$.
\begin{defn}
Let $\Omega$, $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$ and $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$, $1\leq p\leq 2$.
\begin{enumerate}
\item[(i)] We say that $\varphi$ is $\varepsilon_\Omega$-concentrated to $\Omega$ in $ L^p_\alpha(\mathbb{R}^{d+1}_+)$-norm, if
\begin{equation}\label{omegaconc}
\left\|\varphi-P_\Omega\varphi\right\|_{\alpha,p}\leq \varepsilon_\Omega\left\|\varphi\right\|_{\alpha,p}.
\end{equation}
\item[(ii)] $\mathcal{F}_{W,\alpha}(\varphi)$ is $\varepsilon_\Sigma$-concentrated to $\Sigma$ in $ L^q_\alpha(\mathbb{R}^{d+1}_+)$-norm, $q=p/(p-1)$,
if
\begin{equation}\label{Sigmaconc}
\left\|\mathcal{F}_{W,\alpha}(\varphi)-\mathcal{F}_{W,\alpha}(Q_\Sigma\varphi)\right\|_{\alpha,q}\leq \varepsilon_\Sigma\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}.
\end{equation}
\end{enumerate}
\end{defn}
The following theorem states the first continuous-time uncertainty
principle of concentration type for the $L^p_\alpha$-theory.
\begin{thm}\label{unc1}
Let $\Omega$, $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$ and $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$, $1< p\leq 2$. If $\varphi$ is $\varepsilon_\Omega$-concentrated to $\Omega$ in $ L^p_\alpha(\mathbb{R}^{d+1}_+)$-norm and $\mathcal{F}_{W,\alpha}(\varphi)$ is $\varepsilon_\Sigma$-concentrated to $\Sigma$ in $ L^q_\alpha(\mathbb{R}^{d+1}_+)$-norm, $q=p/(p-1)$, then we have
\begin{equation*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\leq \frac{ \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{q} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{q}+\varepsilon_\Omega}{1-\varepsilon_\Sigma}\left\|\varphi
\right\|_{\alpha,p}.
\end{equation*}
\end{thm}
\begin{proof}
Let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$, $1< p\leq 2$. According to (\ref{omegaconc}), (\ref{Sigmaconc}) and the preceding lemma, it follows that
\begin{align*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)-\mathcal{F}_{W,\alpha}(Q_\Sigma P_\Omega\varphi)\right\|_{\alpha,q} \leq & \left\|\mathcal{F}_{W,\alpha}(\varphi)-\mathcal{F}_{W,\alpha}(Q_\Sigma \varphi)\right\|_{\alpha,q}\\ & + \left\|\mathcal{F}_{W,\alpha}(Q_\Sigma \varphi)-\mathcal{F}_{W,\alpha}(Q_\Sigma P_\Omega\varphi)\right\|_{\alpha,q}\\
\leq & \varepsilon_\Sigma \left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}+ \left\|\varphi-P_\Omega\varphi\right\|_{\alpha,p}\\
\leq &\varepsilon_\Sigma \left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}+ \varepsilon_\Omega\left\|\varphi\right\|_{\alpha,p}.
\end{align*}
Applying the triangle inequality together with Theorem \ref{FPQ}, we obtain
\begin{align*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q} & \leq \left\|\mathcal{F}_{W,\alpha}(Q_\Sigma P_\Omega\varphi)\right\|_{\alpha,q}+\left\|\mathcal{F}_{W,\alpha}(\varphi)-\mathcal{F}_{W,\alpha}(Q_\Sigma P_\Omega\varphi)\right\|_{\alpha,q}\\
& \leq \left( \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{q} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{q}+\varepsilon_\Omega\right)\left\|\varphi
\right\|_{\alpha,p}+\varepsilon_\Sigma\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}.
\end{align*}
Rearranging gives the desired result.
\end{proof}
The following result gives the second continuous-time uncertainty principle of
concentration type for the $L^1_\alpha\cap L^p_\alpha$ theory.
\begin{thm}\label{unc2}
Let $\Omega$, $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$ and $\varphi\in (L^1_\alpha\cap L^p_\alpha)(\mathbb{R}^{d+1}_+)$, $1< p\leq 2$. If $\varphi$ is $\varepsilon_\Omega$-concentrated to $\Omega$ in $ L^p_\alpha(\mathbb{R}^{d+1}_+)$-norm and $\mathcal{F}_{W,\alpha}(\varphi)$ is $\varepsilon_\Sigma$-concentrated to $\Sigma$ in $ L^q_\alpha(\mathbb{R}^{d+1}_+)$-norm, $q=p/(p-1)$, then we have
\begin{equation*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\leq \frac{ \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{q} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{q}}{(1-\varepsilon_\Omega)(1-\varepsilon_\Sigma)}\left\|\varphi
\right\|_{\alpha,p}.
\end{equation*}
\end{thm}
\begin{proof}
Let $\Omega$, $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$ such that $\mu_{\alpha}(\Omega)<\infty$ and $\mu_{\alpha}(\Sigma)<\infty$. Assume that $\varphi\in (L^1_\alpha\cap L^p_\alpha)(\mathbb{R}^{d+1}_+)$, $1< p\leq 2$. Since $\mathcal{F}_{W,\alpha}(\varphi)$ is $\varepsilon_\Sigma$-concentrated to $\Sigma$ in $ L^q_\alpha(\mathbb{R}^{d+1}_+)$-norm, $q=p/(p-1)$, we get
\begin{align*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q} &\leq \varepsilon_\Sigma\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}+ \left\|\mathcal{F}_{W,\alpha}(Q_\Sigma\varphi)\right\|_{\alpha,q}\\
& \leq \varepsilon_\Sigma\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}+ \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{q}\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,\infty}.
\end{align*}
According to inequality (\ref{L1-Linfty}), it follows that
\begin{equation}\label{conc1}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\leq \frac{ \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{q}} {1-\varepsilon_\Sigma}\left\|\varphi
\right\|_{\alpha,1}.
\end{equation}
On the other hand, since $\varphi$ is $\varepsilon_\Omega$-concentrated to $\Omega$ in $ L^p_\alpha(\mathbb{R}^{d+1}_+)$-norm, we have
\begin{align*}
\left\|\varphi\right\|_{\alpha,1} &\leq \varepsilon_\Omega\left\|\varphi\right\|_{\alpha,1}+ \left\|P_\Omega\varphi\right\|_{\alpha,1}\\
& \leq \varepsilon_\Omega\left\|\varphi\right\|_{\alpha,1}+ \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{q}\left\|\varphi\right\|_{\alpha,p}.
\end{align*}
Hence
\begin{equation}\label{conc2}
\left\|\varphi\right\|_{\alpha,1} \leq \frac{\left(\mu_{\alpha}(\Omega)\right)^\frac{1}{q}}{1-\varepsilon_\Omega}\left\|\varphi\right\|_{\alpha,p}.
\end{equation}
Combining (\ref{conc1}) and (\ref{conc2}), we obtain
\begin{equation*}
\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,q}\leq \frac{ \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{q}}{1-\varepsilon_\Sigma}\cdot\frac{\left(\mu_{\alpha}(\Omega)\right)^\frac{1}{q}}{1-\varepsilon_\Omega}\left\|\varphi\right\|_{\alpha,p},
\end{equation*}
which is the result of this theorem.
\end{proof}
\begin{rem}
We observe first that the statement of Theorem \ref{unc1} depends on the time function $\varphi$. However, for $p=q=2$ the continuous-time uncertainty principle becomes
\begin{equation*}
1-\varepsilon_\Omega-\varepsilon_\Sigma\leq \mu_\alpha(\Omega)^\frac{1}{2}\mu_\alpha(\Sigma)^\frac{1}{2},
\end{equation*}
which is independent of the time function $\varphi$. Likewise, the statement of Theorem \ref{unc2} depends on the time function $\varphi$; as with the first continuous-time uncertainty principle, it becomes independent of $\varphi$ for $p=q=2$, and we have
\begin{equation*}
(1-\varepsilon_\Omega)(1-\varepsilon_\Sigma)\leq \mu_\alpha(\Omega)^\frac{1}{2}\mu_\alpha(\Sigma)^\frac{1}{2}.
\end{equation*}
\end{rem}
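To make the derivation of the first of these bounds explicit, recall that for $p=q=2$ the transform satisfies the Plancherel-type identity $\left\|\mathcal{F}_{W,\alpha}(\varphi)\right\|_{\alpha,2}=\left\|\varphi\right\|_{\alpha,2}$ of the $L^2_\alpha$-theory; this identity is the only ingredient added here. With it, Theorem \ref{unc1} reads
\begin{equation*}
\left\|\varphi\right\|_{\alpha,2}\leq \frac{ \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{2} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{2}+\varepsilon_\Omega}{1-\varepsilon_\Sigma}\left\|\varphi\right\|_{\alpha,2},
\end{equation*}
and dividing by $\left\|\varphi\right\|_{\alpha,2}\neq 0$ and rearranging yields the first bound. The second bound follows from Theorem \ref{unc2} in the same way.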
\subsection{Continuous-bandlimited uncertainty principle}
In this section, we establish a continuous-bandlimited uncertainty principle of
concentration type. This principle depends on the sets of concentration
$\Omega$ and $\Sigma$, but it is independent of the bandlimited function $\varphi$.
\begin{defn}
Let $1\leq p\leq 2$ and $\varphi,\psi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$.
\begin{enumerate}
\item[(i)] We say that $\psi$ is bandlimited to $\Sigma$ if
$Q_\Sigma\psi=\psi$, and we denote by $\mathcal{B}_\alpha^p(\Sigma)$ the set of functions $\psi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$ that are bandlimited to $\Sigma$.
\item [(ii)] We say that $\varphi$ is $\varepsilon_\Sigma$-bandlimited to $\Sigma$ in $L^p_\alpha$-norm if there exists $\psi\in \mathcal{B}_\alpha^p(\Sigma)$ such that
\begin{equation*}
\left\|\varphi-\psi\right\|_{\alpha,p}\leq \varepsilon_\Sigma \left\|\varphi\right\|_{\alpha,p}.
\end{equation*}
\end{enumerate}
\end{defn}
The space of bandlimited functions $\mathcal{B}_\alpha^p(\Sigma)$ satisfies the following property.
\begin{prop}\label{propband}
Let $\Omega$, $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$. For all $\psi\in\mathcal{B}_\alpha^p(\Sigma)$, $1\leq p\leq 2$, we have
\begin{equation*}
\left\|P_\Omega\psi\right\|_{\alpha,p}\leq \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{p} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{p} \left\|\psi\right\|_{\alpha,p}.
\end{equation*}
\end{prop}
\begin{proof}
The estimate is trivial if the measure of $\Omega$ or $\Sigma$ is infinite. Suppose that $ \mu_{\alpha}(\Sigma)<\infty$ and $ \mu_{\alpha}(\Omega)<\infty$, and let $\psi\in \mathcal{B}_\alpha^p(\Sigma)$ with $1\leq p\leq 2$. Then from Proposition \ref{repQ}, we have
\begin{equation*}
\psi(x)=\int_{\Sigma}\Lambda_{\alpha}^d(x, \lambda)\mathcal{F}_{W,\alpha}(\psi)(\lambda)d\mu_{\alpha}(\lambda).
\end{equation*}
From the property (\ref{normLambda}) and H\"older's inequality, we obtain, with $q=p/(p-1)$,
\begin{align*}
|\psi(x)| & \leq \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{p} \left\|\mathcal{F}_{W,\alpha}(\psi)\right\|_{\alpha,q}\\
& \leq \left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{p} \left\|\psi\right\|_{\alpha,p}.
\end{align*}
Applying now the time-limiting operator $P_\Omega$ to the bandlimited function $\psi$ and taking the $L^p_\alpha$-norm, we obtain
\begin{align*}
\left\|P_\Omega\psi\right\|_{\alpha,p} & =\left( \int_{\Omega}|\psi(x)|^pd\mu_{\alpha}(x)\right)^\frac{1}{p}\\
& \leq \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{p}\left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{p} \left\|\psi\right\|_{\alpha,p}.
\end{align*}
This yields the desired result.
\end{proof}
\begin{thm}\label{secc}
Let $\Omega$, $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$ and let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$ with $1\leq p\leq 2$. If $\varphi$ is $\varepsilon_\Sigma$-bandlimited to $\Sigma$ in $L^p_\alpha$-norm, then we have
\begin{equation*}
\left\|P_\Omega\varphi\right\|_{\alpha,p}\leq \left((1+\varepsilon_\Sigma)\left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{p} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{p} +\varepsilon_\Sigma \right) \left\|\varphi\right\|_{\alpha,p}.
\end{equation*}
\end{thm}
\begin{proof}
Let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$ with $1\leq p\leq 2$. Since $\varphi$ is $\varepsilon_\Sigma$-bandlimited to $\Sigma$ in $L^p_\alpha$-norm, there exists $\psi\in \mathcal{B}_\alpha^p(\Sigma)$ such that
\begin{equation*}
\left\|\varphi-\psi\right\|_{\alpha,p}\leq \varepsilon_\Sigma \left\|\varphi\right\|_{\alpha,p}
\end{equation*}
and we have
\begin{equation}\label{Pfi}
\left\|P_\Omega\varphi\right\|_{\alpha,p} \leq \left\|P_\Omega\psi\right\|_{\alpha,p}+ \left\|P_\Omega(\varphi-\psi)\right\|_{\alpha,p}
\leq \left\|P_\Omega\psi\right\|_{\alpha,p}+\varepsilon_\Sigma\left\|\varphi\right\|_{\alpha,p}.
\end{equation}
Combining this with the result of Proposition \ref{propband} and the fact that
\begin{equation*}
\left\|\psi\right\|_{\alpha,p}\leq (1+\varepsilon_\Sigma)\left\|\varphi\right\|_{\alpha,p}
\end{equation*}
we obtain the desired result.
\end{proof}
\begin{thm}
Let $\Omega$, $\Sigma$ be measurable subsets of $\mathbb{R}^{d+1}_+$ and let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$ with $1\leq p\leq 2$. If $\varphi$ is $\varepsilon_\Omega$-concentrated to $\Omega$ and $\varepsilon_\Sigma$-bandlimited to $\Sigma$ in $L^p_\alpha$-norm, then we have
\begin{equation*}
1-\varepsilon_\Omega-\varepsilon_\Sigma\leq (1+\varepsilon_\Sigma)\left(\mu_{\alpha}(\Sigma)\right)^\frac{1}{p} \left(\mu_{\alpha}(\Omega)\right)^\frac{1}{p}.
\end{equation*}
\end{thm}
\begin{proof}
Let $\varphi\in L^p_\alpha(\mathbb{R}^{d+1}_+)$ with $1\leq p\leq 2$. Since $\varphi$ is $\varepsilon_\Omega$-concentrated to $\Omega$ in $L^p_\alpha$-norm, the estimate (\ref{omegaconc}) gives
\begin{equation*}
\left\|\varphi\right\|_{\alpha,p}\leq \varepsilon_\Omega\left\|\varphi\right\|_{\alpha,p} +\left\|P_\Omega\varphi\right\|_{\alpha,p}.
\end{equation*}
Therefore
\begin{equation*}
\left\|\varphi\right\|_{\alpha,p}\leq \frac{1}{1-\varepsilon_\Omega}\left\|P_\Omega\varphi\right\|_{\alpha,p}.
\end{equation*}
Finally, we deduce the desired estimate from inequality (\ref{Pfi}) and Theorem \ref{secc}.
\end{proof}
\begin{rem}
The continuous-bandlimited uncertainty principle of concentration type for the $L^p_\alpha$-norm given by the previous theorem is independent of the bandlimited function $\varphi$ for every $1\leq p\leq 2$.
\end{rem}
\section{Introduction}
Random walks on finite graphs play key roles in the analysis of electrical networks~(e.g., \cite{DS}) and of cut-off phenomena~(e.g., \cite{Diaconis,LevinPeres}).
Quantum walks (QWs) are known as quantum counterparts of such random walks~\cite{AmbainisEtAl,FH}.
Although the application of quantum walks to quantum search algorithms is one of their most remarkable applications~\cite{Ambainis2003, Childs, Portugal},
we also anticipate that quantum versions of the above applications of random walks will be developed.
It is well known that an irreducible random walk on a finite graph converges to the stationary state in the long time limit.
This is very fundamental to such applications of random walks.
The convergence of random walks to the stationary state is supported by the fact that every eigenvalue except $\pm 1$ has absolute value strictly smaller than $1$, so the contributions of these eigenvalues converge to $0$ under the exponentiation by the time steps.
On the other hand, the eigenvalues of quantum walks live on the unit circle in the complex plane. Thus every eigenvalue having an overlap with the initial state ``asserts" its existence even in the long time limit in general, since the absolute value of the eigenvalue is unit.
Indeed, in the quantum search algorithm,
a high probability at the marked vertices is obtained with an asymptotic periodicity of the time evolution when the number of vertices is sufficiently large.
This derives from the eigenvalues having a large overlap with the initial state.
Then if we ``miss" the optimal timing of the observation, we may observe a very low probability of finding the marked vertex~\cite{Brassard}.
Thus it is natural to consider finding a stationary state of a quantum walk as a fixed point of a dynamical system~\cite{Grover,YLC}. In~\cite{FelHil1,FelHil2,HS}, such a quantum walk model where the dynamics converges to a stationary state is proposed by considering a semi-infinite system.
In this model, the boundary of the graph and the initial state are set so that the unitary time evolution on the whole space, which includes the outside of the graph, describes that some quantum walkers penetrate as the inflow from the outside of the graph to the boundaries and some quantum walkers go out from the boundaries to the outside of the graph as the outflow at every time step.
Then it is mathematically shown that the dynamical system based on this quantum walk model converges to a fixed point as a stationary state, since the inflow to the graph and the outflow to the outside are balanced in the long time limit~\cite{FelHil1,FelHil2,HS}.
For example, it is shown that the stationary state of the Szegedy walk induced by reversible random walk with a constant inflow can be expressed by using the current of an electrical circuit~\cite{MHS}.
Then, because discrete-time quantum walks have been implemented on the one- and two-dimensional lattices (see, for example, \cite{ZhaoEtAl,SchreiberEtAl}, \cite{MW} and the references therein), and continuous-time quantum walks have been implemented on the circulant graph~\cite{QLMAZOWM} and so on, we attempt to consider an implementation of this quantum walk, which has a stationary state as a dynamical system, on a general graph.
In~\cite{MMOS}, the stationary state of a discrete-time quantum walk model on the one-dimensional lattice gives the stationary Schr{\"o}dinger equation with delta potentials on $\mathbb{R}$~\cite{Albe},
and a possible approach to the implementation of this quantum walk model using an optical circuit is suggested; the internal graph corresponds to a finite path graph in the setting of our quantum walk treated here. A more detailed mathematical discussion of \cite{MMOS} from the viewpoint of spectral and scattering theory can be found in~\cite{Morioka}.
In this paper, based on an idea inspired by \cite{ZhaoEtAl,MMOS} in particular, we propose the design of an optical circuit implementing our quantum walk model on a general graph which converges to a stationary state.
Moreover we mathematically find a useful setting of this quantum walk model where the stationary states can be implemented by using this optical circuit, although technical difficulties remain in terms of the implementation.
The quantum walk model we target for the implementation is called the circulant quantum walk. The quantum coin assigned at each vertex, which describes the manner of scattering at each vertex $u\in V$, is given by a $d(u)$-dimensional circulant matrix. Here $d(u)$ is the degree of the vertex $u$.
The circulant matrix is diagonalized by the discrete Fourier matrix and is related to coding theory~\cite{Davis}. The circulant matrix assigned at each vertex coincides with the scattering matrix describing the response of our proposed optical circuit (see Fig.~\ref{fig:a}).
The main idea of the implementation of the quantum coin is that we embed this local optical circuit into each vertex as the quantum coin and we regard each boundary of the circuit as the ``gateway" leading into one of the adjacent vertices (see Figs.~\ref{fig:GOGBU}, \ref{fig:GprimeO}).
We heuristically show that this resulting large optical circuit can be represented by the stationary state of ``another" quantum walk model, namely an optical quantum walk. The underlying graph of the optical quantum walk is a blow-up digraph of the original graph which is $2$-regular. This $2$-regularity is represented by the horizontal and vertical polarizations of the light, $|H\rangle$ and $|V\rangle$, in our proposed idealized design of the optical circuit.
To implement the original quantum walk by this optical circuit, the theoretical problem is to clarify the relation between the stationary states of the circulant quantum walk and the optical quantum walk.
So if such a stationary state of the circulant quantum walk coincides with that of the optical quantum walk, we say that ``the optical quantum walk implements the underlying circulant quantum walk".
In this paper, we mathematically show a useful sufficient condition for the implementation.
The sufficient condition provides a concrete setting of the implementing optical circuit. The setting breaks a kind of symmetry (see Theorem~\ref{cor:graph} and Figs.~\ref{fig:setting1} and \ref{fig:setting2}) with respect to the orientation of the circuit or with respect to the quantum coins.
The rest of this paper is organized as follows.
In Section~2, the circulant quantum walk on a graph $G=(V,A)$ is introduced. The quantum coin assigned at each vertex is a circulant matrix induced by a two-dimensional unitary matrix $H_u$. The circulant quantum walk is determined by the sequence $\{H_u\}_{u\in V}$ and the labeling $\{\xi_u\}_{u\in V}$, where $\xi_u: A_u \to \{0,\dots,{\rm deg}(u)-1\}$ is a bijection map. Here $A_u$ is the set of all the arcs of $G$ whose terminal vertices are $u\in V$. In Section~3, we introduce the optical quantum walk on the blow-up directed graph induced by the original graph of the circulant quantum walk. The optical quantum walk is determined by the same parameters as the circulant quantum walk. Our target is to find a useful condition on the setting of the circulant quantum walk under which the stationary state of the circulant quantum walk can be obtained by referring to that of the optical quantum walk. The motivation for this target derives from the expectation that the optical quantum walk can be implemented by an optical circuit using the polarizing elements proposed in Section~4.
In particular, the design of the optical circuit is proposed for the circulant quantum walk on an arbitrary connected graph in Section~4.
In Section~5, we demonstrate numerically the case of the complete graph with $10$ vertices, $K_{10}$. The first example is a case where the implementation works, while the second example is a case where the implementation does not work. In Section~6, a concrete labeling scheme $\{\xi_u\}_{u\in V}$ for the implementation is proposed using a key proposition.
The key proposition gives a sufficient condition for the implementation: if the induced optical quantum walk does not have eigenvalue $1$, then the implementation works.
In Section~7, we present the useful conditions for the implementation in the complete graph $K_N$ case.
In Section~8, we give the proof of the key proposition. Finally, we provide a summary and discussion of our results.
The important notations are listed in Table~\ref{table:notation}.
\begin{table}[htb]\label{table:notation}
\caption{Notations in this paper}
\begin{tabular}{r|l}
Symmetric digraph
& a digraph where every arc has the inverse arc\\
$t(a)$, $o(a)$
& terminal and origin vertices of arc $a$, respectively \\
$\bar{a}$
& the inverse arc of arc $a$ \\
$G_0=(V_0,A_0)$
& internal graph (symmetric digraph) \\
tail
& semi-infinite length path whose root connects to a vertex of $G_0$ \\
$\tilde{G}_0=(\tilde{V}_0,\tilde{A}_0)$
& semi-infinite graph obtained by adding the tail to every vertex of $G_0$\\
${\rm deg}(u)$
& degree of $u\in \tilde{V}_0$ in $\tilde{G}_0$ \\
$\partial A_{\pm}$
& the set of arcs of tails \\
& \quad whose terminus($+$) or origin($-$) belongs to $G_0$. \\
$A_u$
& the set of arcs of $\tilde{G}_0$ whose terminal vertices are commonly $u\in \tilde{V}_0$. \\
$\bs{\xi}=(\xi_u)_{u\in \tilde{V}_0}$
& labeling of arcs in $\tilde{G}_0$ (Definition~\ref{def:label}) \\
$\tilde{G}_0^{(BU,\xi)}=(\tilde{V}_0^{(BU,\xi)},\tilde{A}_0^{(BU,\xi)})$
& blow-up graph of $\tilde{G}_0$ with the labeling $\bs{\xi}$ (Definition~\ref{def:BUG}) \\
island $u$ $(\subset \tilde{G}_0^{(BU,\xi)})$
& directed cycle induced by $u\in V_0$ obtained by \\
& \quad the blowing up procedure (1) in Definition~\ref{def:BUG}\\
$A_0^{cycle} \;(\subset \tilde{A}_0^{(BU,\xi)})$
& set of arcs of all the islands \\
$A_{0,u}^{cycle}$
& arc set of the island $u$ \\
$(u;\xi_u(v)) \;(\in \tilde{V}_0^{(BU,\xi)})$
& vertex in the island $u$ connected to the island $v$ \\
$G'=(HWP\sqcup PBS,A')$
& the graph for the optical circuit design induced by $\tilde{G}_0^{(BU,\xi)}$ (Sect.~\ref{subsect:optcircuit})\\
&\\
$H_u$
& two-dimensional unitary matrix assigned at $u\in V_0$ \\
$\mathrm{Circ}(H_u)$
& ${\rm deg}(u)$-dimensional circulant matrix defined by (\ref{eq:circulantmat}) \\
$QW(G_0;(H_u)_{u\in V_0};(\xi)_{u\in V_0})$
& circulant quantum walk on $\tilde{G}_0$ (Definition~\ref{def:CQW})\\
$\mathrm{Opt}(QW(G_0;(H_u)_{u\in V_0};(\xi)_{u\in V_0}))$
& optical quantum walk on $\tilde{G}_0^{(BU,\xi)}$ (Definition~\ref{def:OQW})\\
$\tilde{U}_0$
& time evolution operator of the circulant quantum walk on $\tilde{G}_0$ \\
$U^{(BU)}$
&
time evolution operator of the optical quantum walk on $\tilde{G}_0^{(BU,\xi)}$\\
& \\
$\psi_\infty$
& the stationary state of the circulant quantum walk \\
$\psi_\infty^{(BU)}$
& the stationary state of the optical quantum walk \\
& \\
$\chi_u: \mathbb{C}^{\tilde{A}_0}\to \mathbb{C}^{[{\rm deg}(u)]}$
& restriction to $\mathbb{C}^{\{a\in \tilde{A}_0 \;|\; t(a)=u\}}\cong\mathbb{C}^{[{\rm deg}(u)]}$ \\
& \quad following the labeling $\xi$ (Sect.~\ref{subsec:CQW})\\
$\iota: \mathbb{C}^{\tilde{A}_0^{(BU,\xi)}}\to \mathbb{C}^{\tilde{A}_0}$
& restriction to $\mathbb{C}^{\tilde{A}_0}$ (Sect.~\ref{set:proof})\\
$\eta_u: \mathbb{C}^{\tilde{A}^{(BU,\xi)}_0}\to \mathbb{C}^{A^{cycle}_{0,u} }$
& restriction to the island $u$; $\mathbb{C}^{A^{cycle}_{0,u}}$ (Sect.~\ref{set:proof})\\
$\zeta: \mathbb{C}^{\tilde{A}_0} \to \mathbb{C}^{\tilde{A}_0\setminus tails}$
& restriction to $\mathbb{C}^{\tilde{A}_0\setminus tails}$ (Sect.~\ref{set:proof})
\end{tabular}
\end{table}
\section{Circulant quantum walk on graphs}
\subsection{Setting of the graph and labeling}
Let $G=(V, A)$ be a connected digraph where $V$ is the set of the vertices and $A$ is the set of arcs. If every arc $a\in A$ has the inverse arc $\bar{a}\in A$, we call this graph a symmetric digraph. The terminal and origin vertices of $a\in A$ are denoted by $t(a)$ and $o(a)$, respectively.
Let $G_0=(V_0, A_0)$ be a finite and connected symmetric digraph. To this original graph $G_0$, in this paper, we connect the semi-infinite path to {\it every} vertex. This resulting infinite graph is denoted by $\tilde{G}_0=(\tilde{V}_0,\tilde{A}_0)$.
We set the degree of vertex $u\in \tilde{V}_0$ by \[{\rm deg}(u):=|\{a\in \tilde{A}_0 \;|\; t(a)=u\}|=|\{a\in \tilde{A}_0 \;|\; o(a)=u\}|.\]
The sets of boundary arcs of $G_0$, denoted by $\partial A_{\pm}$, are defined by
\[ \partial A_+=\{a\in\tilde{A}_0\;|\;t(a)\in V_0,\;o(a)\notin V_0\},\quad \partial A_-=\{a\in\tilde{A}_0\;|\;o(a)\in V_0,\;t(a)\notin V_0\}. \]
In the following, we introduce the concept of the labeling, which plays an important role in describing the time evolution of the quantum walks treated here. Let $A_u\subset \tilde{A}_0$ be the set of arcs whose terminal vertices are commonly $u\in \tilde{V}_0$; that is, $A_u=\{a\in \tilde{A}_0\;|\;t(a)=u\}$.
\begin{definition}\label{def:label}Labeling of arcs {\rm :}
The labeling of the arcs of $\tilde{G}_0=(\tilde{V}_0,\tilde{A}_0)$ is defined by the series of the bijection maps $(\xi_u)_{u\in \tilde{V}_0}$. Here $\xi_u$ is a bijection map such that
\[ A_u \to \{ 0,\dots,{\rm deg}(u)-1 \}. \]
\end{definition}
Note that the number of choices of the labeling $\xi=(\xi_u)_u$ of $\tilde{G}_0$ is $\prod_{u\in V_0}{\rm deg}(u)!$; we choose a labeling from one of these choices.
\subsection{Circulant quantum walk}\label{subsec:CQW}
In this subsection, we introduce the circulant quantum walk on the infinite graph $\tilde{G}_0=(\tilde{V}_0,\tilde{A}_0)$ with the labeling $\xi=(\xi_u)_{u\in \tilde{V}_0}$ which has tails defined as described in the previous subsection.
For a discrete set $\Omega$, we define $\mathbb{C}^\Omega$ as the vector space whose standard basis is labeled by the elements of $\Omega$; that is, $\mathbb{C}^\Omega=\mathrm{span}\{\delta_\omega\;|\;\omega\in\Omega\}$.
Here $\delta_\omega$ is
\[ \delta_\omega(\omega')=\begin{cases} 1 & \text{: $\omega=\omega'$,} \\ 0 &\text{: $\omega\neq \omega '$.} \end{cases} \]
For a $2$-dimensional unitary matrix assigned at each vertex $u\in V_0$,
\[ H_u=\begin{bmatrix}a_u & b_u \\ c_u & d_u \end{bmatrix}, \] we introduce the following ${\rm deg}(u)\times {\rm deg}(u)$- circulant matrix $\mathrm{Circ}(H_u)$ induced by the $2\times 2$-matrix $H_u$, such that
$[\mathrm{Circ}(H_u)]_{i,j=0}^{{\rm deg}(u)-1}=w^{(u)}_{i-j}$, where $i-j$ is the modulus of ${\rm deg}(u)$; that is,
\begin{equation}\label{eq:circulantmat}
\mathrm{Circ}(H_u)=
\begin{bmatrix}
w_0^{(u)} & w_{{\rm deg}(u)-1}^{(u)} & w_{{\rm deg}(u)-2}^{(u)} & \cdots & w_1^{(u)} \\
w_1^{(u)} & w_0^{(u)} & w_{{\rm deg}(u)-1}^{(u)} & \cdots & w_2^{(u)} \\
w_2^{(u)} & w_1^{(u)} & w_0^{(u)} & \cdots & w_3^{(u)} \\
\vdots & \ddots & \ddots & \ddots & \vdots \\
w_{{\rm deg}(u)-1}^{(u)} & w_{{\rm deg}(u)-2}^{(u)} & w_{{\rm deg}(u)-3}^{(u)} & \cdots & w_0^{(u)}
\end{bmatrix}.
\end{equation}
Here, with $\kappa:={\rm deg}(u)$, the coefficient $w^{(u)}_\ell$ is defined by
\begin{align*}
w_0^{(u)}=d_u+\frac{b_uc_u}{1-a_u^{\kappa}}a_u^{\kappa-1}, \;\;
w_{\ell}^{(u)}=\frac{b_uc_u}{1-a_u^\kappa}a_u^{\ell-1}\;\;(\ell=1,\dots,\kappa-1).
\end{align*}
Throughout this paper, we assume $a_ub_uc_ud_u\neq 0$ to avoid a trivial dynamics of the quantum walk.
\begin{assumption}\label{assump:1}
For any $u\in V_0$, we assume
$a_ub_uc_ud_u\neq 0$.
\end{assumption}
The ${\rm deg}(u)\times{\rm deg}(u)$ matrix $\mathrm{Circ}(H_u)$ is a unitary matrix;
see Lemma~\ref{lem:1} for more detail.
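For the reader's convenience, the following minimal numerical sketch (ours, not part of the original construction; the Hadamard-type coin and the dimension $5$ are merely illustrative) builds $\mathrm{Circ}(H_u)$ from a $2\times 2$ unitary matrix following (\ref{eq:circulantmat}) and checks its unitarity.
\begin{verbatim}
import numpy as np

def circ(H, kappa):
    # build the kappa x kappa circulant matrix Circ(H) from
    # H = [[a, b], [c, d]] with a*b*c*d != 0
    a, b, c, d = H.ravel()
    w = np.empty(kappa, dtype=complex)
    w[0] = d + b * c * a**(kappa - 1) / (1 - a**kappa)
    for l in range(1, kappa):
        w[l] = b * c * a**(l - 1) / (1 - a**kappa)
    # [Circ(H)]_{i,j} = w_{i-j}, the index i-j taken modulo kappa
    return np.array([[w[(i - j) % kappa] for j in range(kappa)]
                     for i in range(kappa)])

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # a Hadamard-type coin
C = circ(H, 5)
print(np.allclose(C.conj().T @ C, np.eye(5)))  # True: Circ(H) is unitary
\end{verbatim}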
To explain exactly how the quantum walk iterates the time evolution on the graph $\tilde{G}_0$ with the labeling $\xi$ driven by the circulant matrices, let us introduce $\chi_u: \mathbb{C}^{\tilde{A}_0}\to \mathbb{C}^{[{\rm deg}(u)]}$ as the restriction to $\mathbb{C}^{A_u}\cong\mathbb{C}^{[{\rm deg}(u)]}$ such that
$(\chi_u\psi)(j)=\psi(\xi_u^{-1}(j))$ for $j\in[{\rm deg}(u)]$. The adjoint operator is described by
\[ (\chi^*_uf)(a) =\begin{cases} f(\xi_u(a)) & \text{: $a\in A_u$,}\\ 0 & \text{: $a\notin A_u$.} \end{cases} \]
A matrix representation of the map $\chi_u$ is expressed by the ${\rm deg}(u)\times \infty$ matrix;
\[ \chi_u\cong [I_{{\rm deg}(u)}\;|\; 0\;], \]
under the decomposition of the set of arcs into $A_u \sqcup (\tilde{A}_0\setminus A_u)$.
Now we are ready to give the definition of the circulant quantum walk on the graph $\tilde{G}_0$ with the labeling $\xi$.
\begin{definition}\label{def:CQW} Circulant quantum walk on $\tilde{G}_0$:
$QW(G_0;\{H_u\}_{u\in V_0};\{\xi_u\}_{u\in \tilde{V}_0})$
\noindent
\begin{enumerate}
\item The total vector space: $\mathbb{C}^{\tilde{A}_0}$
\item The time evolution operator: $\tilde{U}_0=U(G_0;\{H_u\}_{u\in V_0};\{\xi_u\}_{u\in \tilde{V}_0})=SC$.
Here $S$ is the flip-flop shift operator such that $(S\psi)(a)=\psi(\bar{a})$ for any $\psi\in \mathbb{C}^{\tilde{A}_0}$, $a\in \tilde{A}_0$, and $C$ is defined by
\[ C=\bigoplus_{u\in \tilde{V}_0} \chi_u^* C_u \chi_u\]
under the decomposition of $\mathbb{C}^{\tilde{A}_0}=\oplus_{u\in \tilde{V}_0}\mathbb{C}^{A_u}$, where
\[ C_u=\begin{cases} \mathrm{Circ}(H_u) & \text{: $u\in V_0$} \\
\sigma_X & \text{: $u\notin V_0$,}
\end{cases} \]
and $\sigma_X$ is the Pauli matrix.
\item The initial state:
\[\psi_0(a)=
\begin{cases}
1 & \text{: $a\notin A_0$, ${\rm dist}(o(a),V_0)>{\rm dist}(t(a),V_0)$,}\\
0 & \text{: otherwise.}
\end{cases}\]
\end{enumerate}
We call this quantum walk the circulant quantum walk on $G_0$.
\end{definition}
Let us explain the important points of the time iteration of this quantum walk.
Let $\psi_n$ be the $n$-th iteration by $\psi_{n+1}=\tilde{U}_0\psi_n$.
The dynamics on the tails is ``free" such that if the arcs of a tail are labeled by $a_0,a_1,a_2,\dots$ with $o(a_0)\in V_0$, $t(a_j)=o(a_{j+1})$ ($j=0,1,2,\dots$), then
\begin{equation}\label{eq:tail}
\begin{bmatrix}
\psi_{n+1}(a_{j+1}) \\ \psi_{n+1}(\bar{a}_{j})
\end{bmatrix}
=\sigma_X
\begin{bmatrix}
\psi_{n}(\bar{a}_{j+1}) \\ \psi_{n}(a_{j})
\end{bmatrix}
=\begin{bmatrix}
\psi_{n}(a_{j}) \\ \psi_{n}(\bar{a}_{j+1})
\end{bmatrix}.
\end{equation}
This means that a quantum walker is perfectly transmitting at each vertex on the tails. Note that the free quantum walk on the tails is independent of the labeling of the vertices. On the other hand, the quantum walk in the internal graph depends on the labeling. At each vertex on the internal graph $G_0$, a quantum walker is scattered by $\mathrm{Circ}(H_u)$ as follows:
\begin{equation}
\begin{bmatrix}
\psi_{n+1}(\overline{\xi_u^{-1}(0)}) \\
\vdots \\
\psi_{n+1}(\overline{\xi_u^{-1}({\rm deg}(u)-1)})
\end{bmatrix}
= \mathrm{Circ}(H_u) \begin{bmatrix}
\psi_{n}(\xi_u^{-1}(0)) \\
\vdots \\
\psi_{n}(\xi_u^{-1}({\rm deg}(u)-1))
\end{bmatrix}
\end{equation}
for any $u\in V_0$.
The initial state $\psi_0$ is set so that $\psi_n(a)=\psi_0(a)$ for any $n\geq 0$ and $a\in \partial A_+$.
Therefore a quantum walker is provided to the internal graph $G_0$ as the inflow from $\partial A_+$ while a quantum walker is consumed as the outflow to $\partial A_-$ at every time step.
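As an illustration of this dynamics, the following sketch (ours, illustrative only) iterates the circulant quantum walk on the complete graph $K_N$ with the labeling $\xi_i((i,j))=j$ used later in Section~\ref{sect:demonstration}. The tails are handled by feeding the constant unit inflow and discarding the outflow, which is equivalent to the free dynamics (\ref{eq:tail}) with the above initial state.
\begin{verbatim}
import numpy as np

def circ(H, kappa):  # Circ(H) as in the previous sketch
    a, b, c, d = H.ravel()
    w = [d + b*c*a**(kappa-1)/(1 - a**kappa)] + \
        [b*c*a**(l-1)/(1 - a**kappa) for l in range(1, kappa)]
    return np.array([[w[(i - j) % kappa] for j in range(kappa)]
                     for i in range(kappa)])

N = 10
C = circ(np.array([[1, 1], [-1, 1]]) / np.sqrt(2), N)  # a uniform coin

# psi[i, j]: amplitude on the arc with terminus i and label j; for
# j != i this is the arc from j to i, and label j = i is the boundary
# arc of the tail attached to i.
psi = np.zeros((N, N), dtype=complex)

def step(psi):
    new = np.zeros_like(psi)
    for i in range(N):
        inflow = psi[i].copy()
        inflow[i] = 1.0             # constant inflow on the tail arc
        out = C @ inflow            # scattering by Circ(H_i) at vertex i
        for j in range(N):
            if j != i:
                new[j, i] = out[j]  # flip-flop shift onto the arc i -> j
            # out[i] is the outflow to the tail and leaves the graph
    return new

for n in range(300):                # iterate toward the stationary state
    psi = step(psi)
\end{verbatim}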
\section{Optical quantum walk on the blow-up graphs}
In this section, we introduce another quantum walk on the blow-up graph induced by $\tilde{G}_0=(\tilde{V}_0, \tilde{A}_0)$ with the labeling $\xi$.
This quantum walk is called the optical quantum walk. As we will see, the optical quantum walk is implemented by a circuit of the optical polarizing elements in theory.
Moreover, the stationary state of the optical quantum walk coincides with that of the circulant quantum walk under some conditions.
To explain the implementation design and the condition in greater detail, let us first introduce the definitions of the blow-up graph and the optical quantum walk precisely.
\subsection{Blow-up graph}\label{def:BlowupGraph}
Let $\tilde{G}_0=(\tilde{V}_0,\tilde{A}_0)$ be the original graph with the tails. Recall that the bijection map $\xi_u:A_u\to [{\rm deg}(u)]:=\{0,\dots,{\rm deg}(u)-1\}$ is assigned at each vertex $u$ as defined by the previous section. The labeling is denoted by $\xi:=(\xi_u)_{u\in \tilde{V}_0}$.
Under this setting, the blow-up graph of $\tilde{G}_0$ is defined as follows. See also Fig.~\ref{fig:GOGBU}.
\begin{definition}\label{def:BUG}
Blow-up graph of $\tilde{G}_0$ with the labeling $\xi$ {\rm :} $\tilde{G}_0^{(BU,\xi)}=(\tilde{V}_0^{(BU,\xi)},\tilde{A}_0^{(BU,\xi)})$. \\
The vertex and arc sets are defined as follows.
\begin{align}
\tilde{V}_0^{(BU,\xi)} &= V_0^{(BU)}\cup (\tilde{V}_0\setminus V_0), \\
\tilde{A}_0^{(BU,\xi)} &= A_0^{(BU,\xi)} \cup (\tilde{A}_0\setminus A_0).
\end{align}
Here
\[V_0^{(BU)}:=\sqcup_{u\in V_0}\{ (u;\xi_u(a)) \;|\;t(a)=u \}\] and
$A_0^{(BU,\xi)}$ is defined as follows.
There is an arc from $(u;j)\in V_0^{(BU)}$ to $(v;\ell)\in V_0^{(BU)}$ in $\tilde{G}_0^{(BU,\xi)}$; that is,
$((u;j),(v;\ell))\in A_0^{(BU,\xi)}$, if and only if either of the following two conditions holds.
\begin{enumerate}
\item $u=v$ and $\ell=j+1$ in the modulus of ${\rm deg}(u)$, \\
or
\item $v=o(\xi_u^{-1}(j))$ and $u=o(\xi^{-1}_v(\ell))$ in $\tilde{G}_0$.
\end{enumerate}
Each tail connecting to the vertex $u\in V_0$ in the original graph $\tilde{G}_0$ is connected to the vertex $(u;\xi_u(a))$ in the blow-up graph $\tilde{G}_0^{(BU,\xi)}$, where $a\in \partial A_+$ is the boundary arc of this tail.
\end{definition}
The blow-up graph is constructed by (1) blowing up each vertex $u\in V_0$ into a {\it directed} cycle following the labeling $\xi_u$, and by (2) connecting it to the other directed cycles by symmetric arcs following the original connection in $G_0$.
Then if vertices $u$ and $v$ are connected in $G_0$, there are symmetric arcs between the directed cycle of $u$ obtained by (1) and that of $v$ in $\tilde{G}_0^{(BU,\xi)}$. We call the directed cycle of $u$ in the new graph $\tilde{G}_0^{(BU,\xi)}$ obtained by (1) the island of $u$.
The set of all the arcs in all the islands is denoted by $A_0^{cycle}$, which is the set of the arcs of the oriented cycles by the blowing up.
On the other hand, since the arcs obtained by (2) in Definition~\ref{def:BUG} are isomorphic to $\tilde{A}_0$, we denote this set by $\tilde{A}_0$ itself to reduce the number of notations.
\begin{remark}
The set of arcs $\tilde{A}_0^{(BU,\xi)}$ is divided into
\[ \tilde{A}_0^{(BU,\xi)}=A_0^{cycle} \sqcup \tilde{A}_0. \]
\end{remark}
The blow-up graph is not a symmetric digraph; that is, the existence of the inverse arc is not ensured, and it has more vertices and arcs than the original graph, which would seem to suggest additional complexity. On the other hand, it has the following nice property.
\begin{remark}\noindent
The blow-up graph is a $2$-regular digraph; that is,
for every vertex $u\in \tilde{V}_0^{(BU,\xi)}$, the in-degree and the out-degree are $2$;
one pair of in-arc and out-arc belongs to $A_0^{cycle}$, and the other belongs to $\tilde{A}_0$.
\end{remark}
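The blowing-up procedure can also be stated algorithmically. The following sketch (ours; the dictionary-based graph format and the marker 'tail' are our own conventions) generates the arc sets $A_0^{cycle}$ and $\tilde{A}_0$ of $\tilde{G}_0^{(BU,\xi)}$ from a simple graph $G_0$ and a labeling $\bs{\xi}$, following Definition~\ref{def:BUG}; the example reproduces the graph and labeling of Fig.~\ref{fig:GOGBU}.
\begin{verbatim}
def blow_up(adj, xi):
    # adj[u]: list of the neighbors of u in G_0; xi[u]: dict mapping
    # each neighbor of u and the marker 'tail' bijectively onto
    # {0, ..., deg(u)-1}.
    deg = {u: len(adj[u]) + 1 for u in adj}          # +1 for the tail
    cycle_arcs, conn_arcs = [], []
    for u in adj:
        for j in range(deg[u]):
            # (1) island u: a directed cycle following the labeling
            cycle_arcs.append(((u, j), (u, (j + 1) % deg[u])))
        for v in adj[u]:
            # (2) the arc from (u; xi_u(v)) to (v; xi_v(u)); running
            # over all ordered pairs yields the symmetric arcs
            conn_arcs.append(((u, xi[u][v]), (v, xi[v][u])))
    return cycle_arcs, conn_arcs

adj = {'u': ['v', 'z'], 'v': ['u', 'w', 'z'],
       'w': ['v', 'z'], 'z': ['u', 'v', 'w']}
xi = {'u': {'tail': 0, 'v': 1, 'z': 2},
      'v': {'u': 0, 'tail': 1, 'w': 2, 'z': 3},
      'w': {'z': 0, 'v': 1, 'tail': 2},
      'z': {'u': 0, 'v': 1, 'w': 2, 'tail': 3}}
cyc, con = blow_up(adj, xi)
print((('v', 2), ('w', 1)) in con)  # True; cf. example (ii) in the caption
\end{verbatim}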
The reason for replacing the original vertex $u$ in $\tilde{G}_0$ with the island $u$ in $\tilde{G}_0^{(BU,\xi)}$ is to represent the two polarizations of the optics as follows.
A quantum optical state is driven by ``$2$" polarizations represented by $|V\rangle=[1,0]^\top$ and $|H\rangle=[0,1]^\top$. In exchange for such a ``complexity" of the blow-up graph, we obtain a representation of the ``$2$" internal degrees of freedom on every vertex in the blow-up graph because of the ``$2$"-regularity. The subset $\tilde{A}_0$ keeps the fundamental structure of the original graph. This fact will play an important role in the implementation of the circulant quantum walk on the graph $\tilde{G}_0$ by quantum optics.
\subsection{Optical quantum walk and the motivation}
The time evolution operator of the optical quantum walk is determined by the parameters of the previous circulant quantum walk. The graph of the optical quantum walk is the blow-up graph $\tilde{G}_0^{(BU,\xi)}$.
Recall that the in- and out- degrees of the blow-up graph are $2$.
The vector space of the time evolution is represented by $\mathbb{C}^{\tilde{A}_0^{(BU,\xi)}}$. The scattering at each vertex $(u;j)$ is expressed by $H_u$. More precisely, we define the optical quantum walk as follows.
\begin{definition}\label{def:OQW}
Optical quantum walk on $\tilde{G}_0^{(BU,\xi)}$: $\mathrm{Opt}(QW(G_0;\bs{H};\xi))$.
\begin{enumerate}
\item The vector space: $\mathbb{C}^{\tilde{A}_0^{(BU,\xi)}}$.
\item The time evolution:
Let $a_{in}\in A_0^{cycle}$, $b_{in}\in \tilde{A}_0$ be the arcs whose terminal vertices are $(u;j)$, and $a_{out}\in A_0^{cycle}$, $b_{out}\in \tilde{A}_0$ be the arcs whose origins are also $(u;j)$. Then the time evolution operator $U^{(BU)}$ is defined as follows:
\begin{equation}\label{eq:OQW} \begin{bmatrix}(U^{(BU)}\psi)(a_{out}) \\ (U^{(BU)}\psi)(b_{out}) \end{bmatrix}=H_u \begin{bmatrix}\psi(a_{in}) \\ \psi(b_{in}) \end{bmatrix}
\end{equation}
for any $\psi\in \mathbb{C}^{\tilde{A}_0^{(BU,\xi)}}$.
On the tails, the dynamics of the quantum walk is free; that is, it follows (\ref{eq:tail}).
\item The initial state:
\[\psi_0(a)=
\begin{cases}
1 & \text{: $a\notin A_0^{(BU,\xi)}$, ${\rm dist}(o(a),V_0^{(BU)})>{\rm dist}(t(a),V_0^{(BU)})$,}\\
0 & \text{: otherwise.}
\end{cases}\]
\end{enumerate}
\end{definition}
Our interest is in how the optical quantum walk ``imitates" the original circulant quantum walk, since the optical quantum walk can be implemented by quantum optics in theory. An experimental implementation of $\psi_\infty^{(BU)}$ by optical polarizing elements is proposed in Sect.~\ref{sect:ei}.
According to \cite{HS}, both of the stationary states for the circulant quantum walk and its induced optical quantum walk exist:
\begin{theorem}[\cite{HS}]\label{thm:stationary}
Let $\psi_n$ and $\psi_n^{(BU)}$ be the $n$-th iterations of the circulant quantum walk and its induced optical quantum walk. Then we have
\[ \exists \lim_{n\to\infty}\psi_n=:\psi_\infty,\; \exists\lim_{n\to\infty}\psi_n^{(BU)}=:\psi_\infty^{(BU)}. \]
\end{theorem}
We can then focus on their stationary states and the condition for the two stationary states to coincide.
\begin{definition}\label{def:implement} Notion of the implementation in this paper {\rm: }
We say that the optical quantum walk implements the underlying circulant quantum walk if
\[ \psi_\infty^{(BU)}(a)=\psi_\infty(a)\text{ for any $a\in \tilde{A}_0$.} \]
\end{definition}
In Sect.~\ref{sect:demonstration}, we give examples by numerical simulation.
\section{A circuit of optical polarizing elements for the optical quantum walk }\label{sect:ei}
In this section, we propose the design of the optical circuit implementing the optical quantum walk in an ideal environment where the phase is matched in each interference and there is no attenuation. Improvement points for a more realistic design are discussed in the final section.
First we introduce our idea for the implementation of the island in $\tilde{G}_0^{(BU,\xi)}$ by using half wave plates (HWP) and polarizing beam splitters (PBS) so that the output of this optical circuit for an arbitrary input is represented by our circulant matrix.
Secondly, we explain how to connect the implemented circuits so as to reproduce the dynamics of the optical QW.
\subsection{Design of $U^{(BU)}|_{\text{island} \;u }$}\label{subsect:island}
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=150mm]{fig1.eps}
\caption{The island of the blow-up graph (left figure), the island mounted with optical elements (middle figure) and the simplified version of the island mounted with optical elements (right figure): This figure corresponds to the islands of $\tilde{G}^{(BU,\xi)}$ for $N=3$, where each island of $\tilde{G}^{(BU,\xi)}$ is a directed graph with two inputs (i, s) and two outputs (o, s) (for example, in the left figure, we can see the vertices where i1 and s3 are inputs and o1 and s1 are outputs). The island on the right is a simplified equivalent of the middle optical island.}
\label{fig:a}
\end{figure}
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=150mm]{fig2.eps}
\caption{The island implemented with the optical elements shown in the right of Figure 1, generalized to arbitrary $N$ (the figure shown here is for $N=8$, but it can be extended to general circumscribed polygons): Mirrors are eliminated, and the angles of incidence and reflection at the PBS are not taken into account (originally the angle of incidence and reflection would be 90°, but that is just a matter of changing the direction of the optical path). Each location of an HWP corresponds to an individual vertex of the island. Each PBS plays two roles, both receiving the inflows with a superposition of H and V polarizations from the backward HWP and with H polarization from the outside, and sending the outflow with V polarization to the forward HWP and with H polarization to the outside.}
\label{fig:b}
\end{figure}
In this subsection, we design the ``parts" of the circuit for the optical QW which will be placed on the islands in the blow-up graph.
The island $u$ is represented by a directed cycle with the tails. Let $\vec{C}_N$ be such a directed graph with $N$ vertices. See Fig.~\ref{fig:a} for the $N=3$ case.
We introduce an implementation of the quantum walk restricted to the island $\vec{C}_N$ driven by $2\times 2$ matrix $H$ by optical polarizing elements. Let us explain the implementation by the following three steps. \\
\noindent {\bf Direct implementation ($N=3$).}\\
First, let us focus on the implementation for $N=3$ case.
The stationary state of the quantum walk on the left figure in Fig.~\ref{fig:a}, $\vec{C}_3$, is described as
\begin{equation}\label{eq:cycle} \begin{bmatrix} s_{k} \\ o_{k} \end{bmatrix}=H \begin{bmatrix} s_{k-1} \\ i_{k} \end{bmatrix} \end{equation}
for any $k=1,2,3$, where the index $k-1$ is understood modulo $3$.
The middle figure in Fig.~\ref{fig:a} depicts a direct implementation approach in which the dynamics of the quantum walk on the island is mounted with optical elements. Each HWP in the middle figure is sandwiched between two PBSs (surrounded by dotted lines) and corresponds to one of the vertices of $\vec{C}_3$.
There are two modes of polarization, H polarization and V polarization.
The fundamental idea of our implementation is that we establish a correspondence between the arcs of $\vec{C}_3$ and the modes of polarization; that is,
each arc of the triangle in the left figure corresponds to the V polarization while each arc of the tail in the left figure corresponds to the H polarization.
Since PBS is responsible for transmitting an H polarization and reflecting a V polarization,
the first PBS in the vertex $k$ ($k=1,2,3$) represents the situation that the vertex $k$ receives the inflow of quantum walkers from both inside ($s_{k-1}$) and outside ($i_k$) while the second PBS represents the situation that the vertex $k$ sends the outflow to both inside ($s_k$) and outside ($o_k$). We have set the inflow vectors $[s_{k-1},i_{k}]^\top$ and the outflow vectors $[s_k,o_k]^\top$ ($k=1,2,3$) in (\ref{eq:cycle}).
With PBS alone, there is no factor that can affect the polarization state. Then we set the HWP between the two PBSs, which shifts the phase of each polarization; as a consequence, input H polarization results in a superposition of the H and V polarizations. Thus the sandwiched HWP represents the unitary operator $H$ in (\ref{eq:cycle}).
In summary, the sandwiched HWP plays the role of the operation of the quantum coin $H$, while the first and second PBSs play the roles of giving the inflow of a quantum walker to the vertex $k$, represented by $[s_{k-1}\;i_k]^\top$, and the outflow of a quantum walker from the vertex $k$, represented by $[s_k\;o_k]^\top$, in (\ref{eq:cycle}), respectively.
Note that it is possible to realize an arbitrary unitary matrix $H$ by setting additional quarter wave plates \cite{polaoptics}.
Also note that there are fixed-end and free-end reflections in $PBS$.
In the situation we are considering here, the V polarization appears only on the $s_k$ corresponding to the vertex $k$ of the island. Therefore, at $i_k$ and $o_k$, where the V polarization does not need to be taken into account, there is no reflection at the PBS and the phase $\pi$ of the fixed end does not come into play. For these reasons, the island shown in the right figure of Fig.~\ref{fig:a} should be mounted so that the fixed end faces the outside of the island.\\
\noindent {\bf Economical implementation $(N=3)$}\\
\noindent In the second step, to reduce the number of optical elements, we introduce an ``economical" design as depicted in the right figure of Fig.~\ref{fig:a}.
Let us explain why we can omit the route between the first PBS of the forward vertex and the second PBS of the backward vertex in the middle figure of Fig.~\ref{fig:a}, which means that we combine the PBS for the inflow placed at the vertex $k$ with the PBS for the outflow placed at the vertex $k-1$ ($k=1,2,3$, indices modulo $3$).
In the middle figure of Fig.~\ref{fig:a}, let us focus on the optical route denoted by the arc from the HWP to the second PBS placed at the vertex $3$. Let us denote the state on this arc by $w_3$. This state $w_3$ is described by a superposition of the polarizations H and V. The outflow $o_3$ corresponds to the H polarization while $s_3$ corresponds to the V polarization.
The state $s_3$ will be reflected at the first PBS placed at the vertex $1$. Then the state between the first PBS and the HWP placed at the vertex $1$ is described by $s_3+i_1$. Next, let us focus on the corresponding optical route between the HWP and the combined PBS in the right figure. The state before the combined PBS in the right figure is $w_{3}$, and the state after the combined PBS is split into $s_{3}+i_{1}$ as the inflow to the vertex $1$ and $o_3$ as the outflow from the vertex $3$. Then the response to the inflows in the right figure is isomorphic to that in the middle figure. Note that the in- and out-flows on each PBS derive from different vertices in this implementation. \\
\noindent {\bf Extension to $N\geq 3$}\\
As a third step, the idea for $N=3$ can be extended to a general $N\geq 3$.
In Fig.~\ref{fig:b}, we draw the resulting design for general $N$.
The island with $N$ vertices, $\vec{C}_N$, can be treated in the same way as the right figure in Fig.~\ref{fig:a}.
Each location of an HWP corresponds to a vertex of the island. Each PBS plays both roles, receiving the inputs from the backward vertex and from the outside, and sending the outputs to the forward vertex and to the outside. Then, the input from the outside corresponds to a quantum walker from the outside to the {\it forward} vertex, while the output to the outside corresponds to a quantum walker to the outside from the {\it backward} vertex. This means that at each PBS, the vertex of the output to the outside is shifted by one from that of the input from the outside. This observation will be important for designing the whole circuit.
\subsection{Drawing the circuit}\label{subsect:optcircuit}
We build the circuit by combining the above parts (islands).
We introduce the method of drawing the circuit so that each outflow from an island is switched to an inflow to a neighboring island.
In the following, the blow-up graph $\tilde{G}_0^{(BU,\xi)}$ is deformed to $G'=(HWP\sqcup PBS, A')$ to draw the circuit. After drawing the graph $G'$, we place the half-wave plate on each $HWP$ vertex in the island $u$ and the polarizing beam splitters on each $PBS$ vertex in $G'$. Now let us explain how to draw the graph $G'$ from the blow-up graph $\tilde{G}_0^{(BU,\xi)}$. \\
\noindent{\bf Vertex set}:
We begin by drawing the ``circumscribed polygon" of each original island in $\tilde{G}_0^{(BU,\xi)}$. The vertex set of $G'$ is constructed from all the corners of the circumscribed polygons and all the sides of the circumscribed polygons. The vertex subset on the corners is denoted by $PBS$, while the vertex subset on the sides is denoted by $HWP$. The $HWP$ vertex set is in one-to-one correspondence with the vertex set of the blow-up graph $\tilde{G}_0^{(BU,\xi)}$. \\
\noindent{\bf Arc set}:
Following the orientation of each island, the sides of the circumscribed polygon with the subdivision by $HWP$ vertices are replaced with the arcs in $G'$.
Next, let us define the remaining arcs in $G'$.
Let us set the arc
from the islands $u$ to $v$ in $\tilde{G}_0^{(BU,\xi)}$ by $a\in \tilde{A}_0$.
The origin and terminal vertices of $a$ in $\tilde{G}_0^{(BU,\xi)}$ are denoted by $(u;\xi_u(v))$ and $(v,\xi_v(u))$, respectively.\footnote{Originally, the domain of $\xi_u$ was the arc set whose terminal vertices are $u$ in $\tilde{G}_0$, but since there is a one-to-one correspondence between the above arcs and their origin vertices if $G_0$ is a simple graph, we change the domain to the vertex set for readability. }
In $G'$, the arc $a$ is replaced with the arc whose origin vertex is the $PBS$ vertex located in the corner between the $(u;\xi_u(v))$ and $(u;\xi_u(v)+1)$ and the terminus vertex is the $PBS$ vertex located in the corner between $(v;\xi_v(u))$ and $(v;\xi_v(u)-1)$. The same reconnection procedure is done to every arc in $\tilde{A}_0$. Then the new graph $G'$ is obtained. \\
\noindent The reason for the reconnection procedure is as follows. From the consideration of the economical design in subsection~\ref{subsect:island} (see also Fig.~\ref{fig:b}), we know that
the $PBS$ vertex between the $HWP$ vertices $(u;\xi_u(v))$ and $(u;\xi_u(v)+1)$ must play the two roles of receiving the inflow from the island $\xi_u^{-1}(\xi_u(v)+1)$ and sending the outflow to the island $v$.
This implies that the forward $PBS$ vertex of the $HWP$ vertex located at $(u;\xi_u(v))$ sends the outflow to the island $v$, while the backward $PBS$ vertex of the $HWP$ vertex located at $(u;\xi_u(v))$, that is, the forward $PBS$ vertex of $(u;\xi_u(v)-1)$, receives the inflow from the island $v$.
By switching the roles of the islands $u$ and $v$, we see that the corresponding forward $PBS$ vertex sends the outflow to the island $u$ while the backward one receives the inflow from the island $u$. Thus in the optical circuit graph $G'$,
the arc from $(u;\xi_u(v))$ to $(v;\xi_v(u))$ in $\tilde{G}_0^{(BU,\xi)}$ must be replaced with the arc from ``the $PBS$ vertex between the $HWP$ vertices $\xi_u(v)$ and $\xi_u(v)+1$" to ``the $PBS$ vertex between the $HWP$ vertices $\xi_v(u)-1$ and $\xi_v(u)$" in the optical design graph $G'$. Then we realize the situation that each outflow from an island is switched to an inflow to a neighboring island by using the designs of the $U^{(BU)}|_{\text{island}\;u}$'s.
A more realistic implementation and the problems arising there will be discussed in the final section.
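The reconnection rule above can be summarized compactly. In the following sketch (ours; we name the $PBS$ vertex lying between the $HWP$ vertices $(u;k)$ and $(u;k+1)$ by the pair $(u,k)$), the function returns the arcs of $G'$ that replace the arcs of $\tilde{A}_0$; the inputs are as in the sketch of the previous section.
\begin{verbatim}
def circuit_arcs(adj, xi):
    # PBS (u, k) denotes the corner between HWP (u;k) and (u;k+1).
    # The arc from (u; xi_u(v)) to (v; xi_v(u)) in the blow-up graph
    # becomes the arc from PBS (u, xi_u(v)) to PBS (v, xi_v(u) - 1).
    deg = {u: len(adj[u]) + 1 for u in adj}
    return [((u, xi[u][v]), (v, (xi[v][u] - 1) % deg[v]))
            for u in adj for v in adj[u]]
\end{verbatim}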
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=150mm]{G0GBU.eps}
\caption{The original graph $G_0$ (left figure) and its blow-up graph $\tilde{G}_0^{(BU,\xi)}$ (right figure): The labeling $\xi$ is set as follows: $\xi_u(tail)=0$, $\xi_u((v,u))=1$, $\xi_u((z,u))=2$; $\xi_v(tail)=1$, $\xi_v((w,v))=2$, $\xi_v((z,v))=3$, $\xi_v((u,v))=0$; $\xi_w(tail)=2$, $\xi_w((z,w))=0$, $\xi_w((v,w))=1$; $\xi_z(tail)=3$, $\xi_z((u,z))=0$, $\xi_z((v,z))=1$, $\xi_z((w,z))=2$. For example: (i) there is an arc from $(u;2)$ to $(u;0)$ because the terminal vertices are commonly $u$ and $2+1=0$ in the modulus of ${\rm deg}(u)=3$, which satisfies the connected condition (1) in Definition~\ref{def:BUG}; (ii) there are symmetric arcs between the vertices $(v;2)$ and $(w;1)$ in $\tilde{G}_0^{(BU,\xi)}$, because we can check that $o(\xi_v^{-1}(2))=o((w,v))=w$ and $o(\xi_w^{-1}(1))=o((v,w))=v$, which satisfies the connected condition (2) in Definition~\ref{def:BUG}.}
\label{fig:GOGBU}
\end{figure}
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=150mm]{GdashGoptical.eps}
\caption{$G'$ (left figure) and the design of the optical circuit (right figure): To highlight the act of drawing the circuit, we include $\tilde{G}_0^{(BU,\xi)}$ in gray color in the left figure. We draw the ``circumscribed polygons" around each island (blue arcs) and place the vertices on the corners and at the midpoint of each side. Then, the sides of the circumscribed polygon correspond to $A_0^{cycle}$. On the other hand, all the arcs in $\tilde{A}_0$ are reconnected as depicted in the left figure. For example, the symmetric arcs between $(u;1)$ and $(v;0)$ are reconnected by changing the origin and terminal vertices to the corresponding corners of the circumscribed polygon. We set the half wave plate on each black vertex and the polarizing beam splitter on each white vertex as depicted in the right figure. }
\label{fig:GprimeO}
\end{figure}
\section{Demonstration by numerical simulations}\label{sect:demonstration}
In this section, we examine whether a circulant quantum walk on the complete graph with $10$ vertices, $K_{10}$, is implemented by the corresponding optical quantum walk by changing ${\bs H}$.
The arc whose terminus is $i$ and origin is $j$ is denoted by $(i,j)$, and the arc whose terminus is $i$ and origin is located in the tail is denoted by $(i,i)$ in $\tilde{G}_0$. The labeling $\xi=(\xi_0,\dots,\xi_{9})$ is given as follows:
\begin{align}\label{eq:label}
\xi_i((i,j)) &= j
\end{align}
for every $i,j=0,\dots,9$.
We consider the following two examples. First, we consider a case in which the implementation is realized, and then we consider a case in which the implementation fails.
For this purpose, we define a relative probability of the circulant QW by
\[ \mu_n(j)=\sum_{a\in \tilde{A}_0:t(a)=j} |\psi_n(a)|^2. \]
We also set
\[\mu_n^{BU}(j)=\sum_{a\in \tilde{A}_0:t(a)=j} |\psi_n^{BU}(a)|^2,\]
and chase their time courses $\mu_n$ and $\mu_n^{BU}$ simultaneously.
Note that the summation is taken over $\tilde{A}_0$ in the definition of $\mu_n^{BU}$ because the ``implementation" is determined on the original arcs of $\tilde{G}_0$ by Definition~\ref{def:implement}.
The existence of limits of $n\to\infty$ to $\mu_n$ and $\mu_n^{BU}$ is ensured by Theorem~\ref{thm:stationary}. We call $\lim_{n\to\infty}\mu_n$ a stationary measure of the circulant QW. \\
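The following sketch (ours, illustrative only) iterates the optical quantum walk on the blow-up graph of $K_{10}$ with the labeling (\ref{eq:label}) and the coins of Example~1 below (the vertex $0$ is marked), and computes $\mu_n^{BU}$; the tails are again handled by the constant unit inflow.
\begin{verbatim}
import numpy as np

N = 10
def coin(u, marked=0):
    if u == marked:
        return np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    return np.array([[1, 1], [-1, 1]]) / np.sqrt(2)

# cyc[u, j]: amplitude on the island-u cycle arc into the vertex (u; j);
# con[u, j]: amplitude on the connection arc from island j into (u; j);
# with the labeling xi_i((i,j)) = j, the tail is attached at (u; u).
cyc = np.zeros((N, N), dtype=complex)
con = np.zeros((N, N), dtype=complex)

def step(cyc, con):
    new_cyc, new_con = np.zeros_like(cyc), np.zeros_like(con)
    for u in range(N):
        H = coin(u)
        for j in range(N):
            b_in = 1.0 if j == u else con[u, j]  # unit tail inflow
            a_out, b_out = H @ np.array([cyc[u, j], b_in])
            new_cyc[u, (j + 1) % N] = a_out      # cycle arc (u;j)->(u;j+1)
            if j != u:
                new_con[j, u] = b_out            # arc (u;j) -> (j;u)
            # for j == u, b_out is the outflow to the tail
    return new_cyc, new_con

def mu_BU(con):
    # arcs of the original graph with terminus in island j; the tail
    # inflow arc contributes the constant |1|^2
    return np.array([np.sum(np.abs(con[j])**2) + 1.0 for j in range(N)])

for n in range(300):
    cyc, con = step(cyc, con)
print(mu_BU(con))
\end{verbatim}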
\noindent {\bf Example 1.} {\it $K_{10}$ with a marked vertex} :
We choose the vertex $0$ in the set of vertices $\{0,\dots,9\}$ as the marked vertex.
The circulant coin assigned at each vertex is denoted by $\mathrm{Circ}(H_j)$.
Here $H_j$ is set so that a perturbation is given only at the marked vertex $0$ as follows.
\[ H_j=
\begin{cases}
\begin{bmatrix}
1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2}
\end{bmatrix} & \text{: $j=0$} \\
\\
\begin{bmatrix}
1/\sqrt{2} & 1/\sqrt{2} \\ -1/\sqrt{2} & 1/\sqrt{2}
\end{bmatrix} & \text{: otherwise}
\end{cases} \]
for $j=0,\dots,9$.
Figure~\ref{Fig:1} shows the time course of the relative probability at each vertex $0,\dots,9$. The blue curve describes the time course of the relative probability at the vertex $0$, which is the marked vertex. We observe that, although the time scales of the convergence are different, the relative probabilities of the circulant QW and of the optical QW converge to the same value at every vertex. This is theoretically supported by Theorem~\ref{cor:graph}, because there are arcs in $A_0$ satisfying condition (2) in the theorem; these arcs are $(0,1)$ and $(0,9)$ and their inverses.
\begin{figure}
\begin{tabular}{cc}
\begin{minipage}[t]{0.45\hsize}
\centering
\includegraphics[keepaspectratio, width=70mm]{K10_defect_circ.eps}
\caption{Time course of $\mu_n$ for the circulant QW with a marked vertex in the setting of Example~1: the horizontal and vertical axes are the time step and the relative probability of each vertex. The blue line is the time course of the relative probability of the marked vertex; the other lines are the time courses of the other vertices. }
\label{Fig:1}
\end{minipage} &
\begin{minipage}[t]{0.45\hsize}
\centering
\includegraphics[keepaspectratio, width=70mm]{K10_defect_opt.eps}
\caption{Time course of $\mu_n^{BU}$ for the optical QW with a marked vertex induced by the left circulant QW}
\label{Fig:2}
\end{minipage}
\end{tabular}
\end{figure}
\noindent {\bf Example 2.} ($K_{10}$ with the uniform setting): We assign the $H_j$'s uniformly by
\[ H_j=\begin{bmatrix} 1/\sqrt{2} & 1/\sqrt{2} \\ 1/\sqrt{2} & -1/\sqrt{2} \end{bmatrix} \]
for any $j=0,\dots,9$.
From the symmetry of the time evolution with respect to each vertex, the stationary measure on each vertex is the same.
We observe that although both the circulant QW and its induced optical QW converge to some stationary measures, the convergence values are different.
\begin{figure}
\begin{tabular}{cc}
\begin{minipage}[t]{0.45\hsize}
\centering
\includegraphics[keepaspectratio, width=70mm]{K10_circ.eps}
\caption{Time course of the relative probabilities of the uniform circulant QW in the setting of Example~2.}
\label{Fig:1'}
\end{minipage} &
\begin{minipage}[t]{0.45\hsize}
\centering
\includegraphics[keepaspectratio, width=70mm]{K10_opt.eps}
\caption{Time course of the relative probabilities of the uniform optical QW induced by the left circulant QW}
\label{Fig:2'}
\end{minipage}
\end{tabular}
\end{figure}
\section{Mathematical results}
The following sufficient condition for the accomplishment of the implementation is the key to our main results.
\begin{proposition}\label{thm:key}
Let $\bs{H}=(H_u)_{u\in V_0}$ and $\xi=(\xi_u)_{u\in V}$ in the setting of the circulant quantum walk $QW(G_0;\bs{H};\xi)$ on $\tilde{G}_0$.
If $\ker(1-U^{(BU)})=\{\bs{0}\}$,
then
$\mathrm{Opt}(QW(G_0;\bs{H};\xi ))$ on $\tilde{G}_0^{(BU,\xi)}$ implements $QW(G_0;\bs{H};\xi)$ on $\tilde{G}_0$ .
\end{proposition}
\begin{proof}
See Section~\ref{set:proof}.
\end{proof}
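Since the hypothesis of Proposition~\ref{thm:key} concerns an operator on an infinite-dimensional space, let us note how it can be tested numerically. As shown at the beginning of the proof of Theorem~\ref{cor:graph} below, any eigenvector of eigenvalue $1$ vanishes on the tails, so it is supported on the internal arcs of $\tilde{G}_0^{(BU,\xi)}$. Hence, if the finite matrix obtained by restricting $U^{(BU)}$ to the internal arcs, with the tail inflow set to $0$ and the outflow discarded, has no eigenvalue $1$, then $\ker(1-U^{(BU)})=\{\bs{0}\}$ and Proposition~\ref{thm:key} applies. The following sketch (ours) performs this check in the setting of Example~1 of Section~\ref{sect:demonstration}.
\begin{verbatim}
import numpy as np

N = 10
def coin(u, marked=0):
    if u == marked:
        return np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    return np.array([[1, 1], [-1, 1]]) / np.sqrt(2)

# compact indices for the internal arcs of the blow-up graph of K_N
ids = {}
for u in range(N):
    for j in range(N):
        ids[('cyc', u, j)] = len(ids)        # cycle arc into (u; j)
        if j != u:
            ids[('con', u, j)] = len(ids)    # arc from (j; u) into (u; j)

M = np.zeros((len(ids), len(ids)), dtype=complex)
for u in range(N):
    H = coin(u)
    for j in range(N):
        ins = [ids[('cyc', u, j)]]
        if j != u:
            ins.append(ids[('con', u, j)])   # tail inflow truncated to 0
        outs = [(ids[('cyc', u, (j + 1) % N)], 0)]       # row 0 of H
        if j != u:
            outs.append((ids[('con', j, u)], 1))         # row 1 of H
        for o, row in outs:                  # tail outflow discarded
            for k, i in enumerate(ins):
                M[o, i] = H[row, k]

eigs = np.linalg.eigvals(M)
print(np.min(np.abs(eigs - 1)))  # bounded away from 0: trivial kernel
\end{verbatim}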
The following theorem gives a sufficient condition for $\ker(1-U^{(BU)})= \{\bs{0}\}$, which illustrates two kinds of settings of the optical quantum walk. See Figures~\ref{fig:setting1} and \ref{fig:setting2}; a kind of symmetry is broken around the boundary because (1) the rotational orientations of the connected islands are opposite each other or (2) the assigned coins of the connected islands are different.
\begin{theorem}\label{cor:graph} {\rm (The symmetry breaking designs for the implementation)}\\
Assume that
\[H_u=\begin{bmatrix} a_u & b_u \\ c_u & d_u \end{bmatrix}\] satisfies $a_ub_uc_ud_u\neq 0$ for any $u\in V_0$. If there is an arc $e\in A_0$ satisfying one of the following conditions:
\begin{enumerate}
\item $\xi_{o(e)}(\bar{e})-\xi_{o(e)}(t)=\xi_{t(e)}(e)-\xi_{t(e)}(\tau)=\pm 1$, or
\item $\xi_{o(e)}(\bar{e})-\xi_{o(e)}(t)=-(\xi_{t(e)}(e)-\xi_{t(e)}(\tau))=\pm 1$ and $d_{o(e)}\neq d_{t(e)}^*$,
\end{enumerate}
where $t,\tau\in \partial A_+$ are the boundary arcs whose terminal vertices are located in the islands $o(e)$ and $t(e)$, respectively,
then $\mathrm{Opt}(QW(G_0;\bs{H};\xi ))$ implements $QW(G_0;\bs{H};\xi)$.
Here for a complex number $z$, $z^*$ is the conjugate of $z$.
\end{theorem}
\begin{figure}
\begin{tabular}{cc}
\begin{minipage}[t]{0.45\hsize}
\centering
\includegraphics[keepaspectratio, width=75mm]{Fig3.eps}
\caption{The symmetry breaking design in Theorem~\ref{cor:graph} (1) for the implementation }
\label{fig:setting1}
\end{minipage} &
\begin{minipage}[t]{0.45\hsize}
\centering
\includegraphics[keepaspectratio, width=75mm]{Fig4.eps}
\caption{The symmetry breaking design in Theorem~\ref{cor:graph} (2) for the implementation}
\label{fig:setting2}
\end{minipage}
\end{tabular}
\end{figure}
Example~1 in Section~\ref{sect:demonstration} matches the setting of case (2) in Theorem~\ref{cor:graph}. This is the reason that the stationary state of the circulant QW coincides with that of its optical QW.
Now we give the proof using Proposition~\ref{thm:key}.
\begin{proof}
The arc in $\partial A_+$ whose terminal vertex is $u\in V_0$ is denoted by $\tau_u$.
We set $B_u\subset A^{(BU,\xi)}_0$ as the set of arcs whose terminal or origin vertices are $(u,\xi_u(\tau_u))$ (see Fig.~\ref{fig:Thm6.1}):
\[ B_u= \{a\in A^{(BU,\xi)}_0 \;|\; o(a)=(u,\xi_u(\tau_u))\text{\;or\;}t(a)=(u,\xi_u(\tau_u))\}. \]
The eigenvector of eigenvalue $1$ is denoted by $\psi$. Note that $\psi$ satisfies not only $U^{(BU)}\psi =\psi$ but also $||\psi||^2<\infty$.
First, let us see the following fact holds:
\begin{equation}\label{eq:condition1}
{\rm supp}(\psi) \cap (\cup_{u\in V_0}B_u) =\emptyset.
\end{equation}
In the tails, $\psi(\tau_u)=\psi(a_1)=\psi(a_2)=\cdots$ and $\psi(\bar{\tau}_u)=\psi(\bar{a}_1)=\psi(\bar{a}_2)=\cdots$ hold, where $o(\tau_u)=t(a_1),o(a_1)=t(a_2),\dots$.
Then, to ensure $||\psi||<\infty$, the values $\psi(\tau_u)$ and $\psi(\bar{\tau}_u)$ must be $0$.
From the definition of the time evolution operator of this quantum walk, letting $e_{in}',e_{out}'\in B_u$, we have
\[ \begin{bmatrix} \psi(e_{out}') \\ \psi(\bar{\tau}_u) \end{bmatrix}=\begin{bmatrix} a_u & b_u \\ c_u & d_u \end{bmatrix}\begin{bmatrix} \psi(e_{in}') \\ \psi(\tau_u) \end{bmatrix}, \]
which is equivalent to
\[ \psi(\tau_u)=(\psi(e_{out}')-a_u\psi(e_{in}'))b_u^{-1},\;\psi(\bar{\tau}_u)=c_u\psi(e_{in}')+(\psi(e_{out}')-a_u\psi(e_{in}'))b_u^{-1}d_u. \]
This implies that $\psi(\tau_u)=\psi(\bar{\tau}_u)= 0$ if and only if $\psi(e_{out}')=\psi(e_{in}')=0$. Hence (\ref{eq:condition1}) holds.
By Proposition~\ref{thm:key}, it suffices to show that $\ker(1-U^{BU})=\{\bs{0}\}$; taking the contraposition, it is enough to show that if $\ker(1-U^{BU})\neq\{\bs{0}\}$, then
there are no arcs in $A_0$ satisfying (1) or (2).
For $a\in A_0$, let us take $e_1,e_2\in A_0^{cycle}$ with $t(e_1)=(o(a);\xi_{o(a)}(\bar{a}))$ and $t(e_2)=(t(a);\xi_{t(a)}(a))$ (see Figure~\ref{fig:Thm6.1}).
Assume (1) holds, and consider the $+1$ case. By (\ref{eq:condition1}), we have
\[ \begin{bmatrix} 0 \\ \psi(a) \end{bmatrix} = \begin{bmatrix} a_{o(a)} & b_{o(a)} \\ c_{o(a)} & d_{o(a)}\end{bmatrix}\begin{bmatrix} \psi(e_1) \\ \psi(\bar{a}) \end{bmatrix}\text{ and } \begin{bmatrix} 0 \\ \psi(\bar{a}) \end{bmatrix} = \begin{bmatrix} a_{t(a)} & b_{t(a)} \\ c_{t(a)} & d_{t(a)}\end{bmatrix}\begin{bmatrix} \psi(e_2) \\ \psi(a) \end{bmatrix}.\]
From the unitarity of $H_{o(a)}$ and $H_{t(a)}$, we have $\psi(a),\psi(\bar{a})\neq 0$. Applying the inverses of $H_{o(a)}$ and $H_{t(a)}$, which are their adjoints, to both equations, we also have $\psi(\bar{a})=d_{o(a)}^*\psi(a)$ and $\psi(a)=d_{t(a)}^*\psi(\bar{a})$.
This implies $|d_{o(a)}d_{t(a)}|=1$. The unitarity of the $H_u$'s leads to $|d_{o(a)}|=|d_{t(a)}|=1$, which induces $b_u=c_u=0$ $(u\in \{o(a),t(a)\})$. This contradicts Assumption~\ref{assump:1}. The case of $-1$ in (1) can be proved in the same way.
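Here the step from $|d_{o(a)}|=|d_{t(a)}|=1$ to $b_u=c_u=0$ is the standard norm computation for a unitary $2\times 2$ matrix: the row and column norm conditions of $H_u$ give
\[ |c_u|^2+|d_u|^2=1=|b_u|^2+|d_u|^2, \]
so $|d_u|=1$ forces $b_u=c_u=0$.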
Next, let us assume that (2) holds, and consider the $+1$ case. Then we have
\[ \begin{bmatrix} 0 \\ \psi(a) \end{bmatrix} = \begin{bmatrix} a_{o(a)} & b_{o(a)} \\ c_{o(a)} & d_{o(a)}\end{bmatrix}\begin{bmatrix} \psi(e_1) \\ \psi(\bar{a}) \end{bmatrix}\text{ and } \begin{bmatrix} \psi(\bar{e}_2) \\ \psi(\bar{a}) \end{bmatrix} = \begin{bmatrix} a_{t(a)} & b_{t(a)} \\ c_{t(a)} & d_{t(a)}\end{bmatrix}\begin{bmatrix} 0 \\ \psi(a) \end{bmatrix}.\]
Applying the inverse of $H_{o(a)}$, which is its adjoint, to the first equation, we have $d_{o(a)}^*\psi(a)=\psi(\bar{a})$, while computing the second equation directly, we have $\psi(\bar{a})=d_{t(a)}\psi(a)$; this implies $\psi(\bar{a})/\psi(a)=d^*_{o(a)}=d_{t(a)}$, which contradicts the assumption $d_{o(a)}\neq d_{t(a)}^*$ of (2).
The $-1$ case can be handled in the same way.
\end{proof}
\begin{figure}
\centering
\includegraphics[keepaspectratio, width=100mm]{Thm6.1.eps}
\caption{The vertex colored white is the vertex connecting to a tail. The eigenvector of eigenvalue $1$ does not overlap with $B_u$ for any island $u$. }
\label{fig:Thm6.1}
\end{figure}
\section{The condition $\ker(1-U^{BU})=\{\bs{0}\}$ in the $K_N$ case}
In this section, we consider the optical quantum walk induced by the circulant QW with the uniform circulant coin on the complete graph $K_N$ with $N$ vertices, $QW(K_N;\bs{H};\xi)$. Here the labeling $\xi=(\xi_i)_{i=0}^{N-1}$ is the same as in (\ref{eq:label}) and the unitary matrices $\bs{H}=(H_i)_{i=0}^{N-1}$ are denoted by
\[ H_i=H=\begin{bmatrix}a& b \\ c & d \end{bmatrix} \]
with $abcd\neq 0$.
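A concrete instance satisfying this assumption, which is the coin of Example~2 below, is the Hadamard matrix
\[ H=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}, \]
for which $abcd=-1/4\neq 0$ and $\det(H)=-1$.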
In this setting, we obtain a useful sufficient condition on the parameters $a,b,c,d$ of the circulant QW for the implementation, as follows.
\begin{theorem}
Let the circulant QW on $K_N$ be set as described above.
If $d\notin \mathbb{R}$, or $\det(H)^{2N}\neq 1$ for $N>3$, or $\det(H)^{N}\neq -1$ for $N=3$, then the induced optical QW implements the circulant QW.
\end{theorem}
In the setting of Example~2, the above condition is not satisfied because $d\in\mathbb{R}$ and $\det(H)=-1$ ($a=b=c=-d=1/\sqrt{2}$). Now let us move to the proof.
\begin{proof}
Let us first prepare the following key lemma for the proof.
\begin{lemma}\label{lem:Kn}
Let $N\geq 3$. We have
\[\dim\ker(1-U^{BU}) =
\begin{cases}
N/2-1 & \text{: $d\in \mathbb{R}$, $(-\Delta)^{2N}=1$, $N$ is even,} \\
(N-3)/2 & \text{: $d\in \mathbb{R}$, $(-\Delta)^{2N}=1$, $(-\Delta)^{N}\neq 1$, $N$ is odd,} \\
(N-1)/2 & \text{: $d\in \mathbb{R}$, $(-\Delta)^{N}= 1$, $N$ is odd,} \\
0 & \text{: otherwise.}
\end{cases}\]
\end{lemma}
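For example, the lemma says that for $N=4$ with $d\in\mathbb{R}$ and $(-\Delta)^{8}=1$, the kernel of $1-U^{BU}$ is one-dimensional, so the implementation can fail; whereas for any coin with $d\notin\mathbb{R}$ or $(-\Delta)^{8}\neq 1$ the kernel is trivial.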
By Lemma~\ref{lem:Kn}, if $d\notin \mathbb{R}$, or $(-\Delta)^{2N}\neq 1$ for $N>3$, or $(-\Delta)^{N}\neq 1$ for $N=3$, then $\ker(1-U^{BU})=\{\bs{0}\}$, and Proposition~\ref{thm:key} implies that the optical QW implements the underlying circulant QW.
\end{proof}
Now we focus on the proof of Lemma~\ref{lem:Kn}.\\
\noindent{\bf Proof of Lemma~\ref{lem:Kn}}.
Assume $\ker(1-U^{BU})\neq\{\bs{0}\}$. The labeling $\xi$ satisfies the sign condition of case (2) in Theorem~\ref{cor:graph}; hence the $(2,2)$-element of $H$, namely $d$, must be a real number.
Let $U^{BU}\Psi=\Psi$ with ${\rm supp}(\Psi)$ included in the internal blow-up graph. Each vertex in $\tilde{G}_0^{BU,\xi}$ is labeled by $(\ell,m)$. Here $\ell$ represents the island and $m$ represents the island toward which the walker heads from island $\ell$. The arc whose origin is $(\ell,m)$ and terminus is $(\ell,m+1)$ belongs to $A_0^{cycle}$, where $m+1$ is taken modulo $N$, while the arc whose origin and terminus are $(\ell,m)$ and $(m,\ell)$, respectively, belongs to $\tilde{A}_0$. We define $(\ell,\ell)$ to be the vertex connecting to the tail, and $((\ell,\ell),(\ell,\ell))$ to be the arc from $(\ell,\ell)$ to the tail.
We set
\[ \Psi((\ell,m-1),(\ell,m))=:z_m^{(\ell)}, \text{ and } \Psi((\ell,m),(m,\ell))=:x_{\ell,m}. \]
Note that since the support of $\Psi$ is included in the internal graph, $x_{\ell,\ell}=0$ for any $\ell=0,\dots,N-1$.
All the indices of $z$ and $x$ are taken modulo $N$.
Then we have the following useful lemma.
\begin{lemma}
Put $\Delta:=\det H$. We have
\begin{align}
\begin{bmatrix}
x_{\ell,m} \\ x_{m,\ell}
\end{bmatrix}
&= \frac{1}{b} \begin{bmatrix}
d & 1 \\ 1 & d
\end{bmatrix}
\begin{bmatrix}
z_{m+1}^{(\ell)} \\ -\Delta z_{m}^{(\ell)}
\end{bmatrix},\label{eq:1} \\
\begin{bmatrix}z_{m+1}^{(\ell)} \\ -\Delta z_m^{(\ell)}\end{bmatrix} &= \sigma_X \begin{bmatrix}z_{\ell+1}^{(m)} \\ -\Delta z_\ell^{(m)}\end{bmatrix} \label{eq:z}
\end{align}
for any $\ell,m=0,\dots,N-1$.
\end{lemma}
\begin{proof}
By inserting the time evolution of the optical QW in Definition~\ref{def:OQW} into the eigenequation $U^{BU}\Psi=\Psi$, we have
\begin{align}
\begin{bmatrix}z_{m+1}^{(\ell)}\\x_{\ell,m}\end{bmatrix}=\begin{bmatrix}a & b \\ c & d\end{bmatrix}\begin{bmatrix}z_m^{(\ell)} \\ x_{m,\ell}\end{bmatrix},\;\;
\begin{bmatrix}z_{\ell+1}^{(m)}\\x_{m,\ell}\end{bmatrix}=\begin{bmatrix}a & b \\ c & d\end{bmatrix}\begin{bmatrix}z_\ell^{(m)} \\ x_{\ell,m}\end{bmatrix}.
\end{align}
Solving $x_{\ell,m}$ and $x_{m,\ell}$ from each equation, we obtain
\begin{align}\label{eq:xz}
\begin{bmatrix} x_{\ell,m} \\ x_{m,\ell} \end{bmatrix} = \frac{1}{b} \begin{bmatrix} d & -\Delta \\ 1 & -\Delta d \end{bmatrix} \begin{bmatrix} z_{m+1}^{(\ell)} \\ z_{m}^{(\ell)} \end{bmatrix},\;
\begin{bmatrix} x_{m,\ell} \\ x_{\ell,m} \end{bmatrix} = \frac{1}{b} \begin{bmatrix} d & -\Delta \\ 1 & -\Delta d \end{bmatrix} \begin{bmatrix} z_{\ell+1}^{(m)} \\ z_{\ell}^{(m)} \end{bmatrix}.
\end{align}
Here we used $a=\Delta \bar{d}=\Delta d$, which follows from the unitarity of $H$ and the fact $d\in\mathbb{R}$ derived from Theorem~\ref{cor:graph}.
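In more detail, the first row of the first eigenequation above gives $x_{m,\ell}=(z_{m+1}^{(\ell)}-a z_m^{(\ell)})/b$, and substituting this into its second row yields
\[ x_{\ell,m}=c z_m^{(\ell)}+d\,x_{m,\ell}=\frac{1}{b}\left(d z_{m+1}^{(\ell)}+(bc-ad)z_m^{(\ell)}\right)=\frac{1}{b}\left(d z_{m+1}^{(\ell)}-\Delta z_m^{(\ell)}\right), \]
which is the first row of (\ref{eq:xz}); the second equation is treated in the same way.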
Then putting $\bs{z}_m^{(\ell)}:=[z_{m+1}^{(\ell)},\;z_{m}^{(\ell)}]^\top$, we have
\begin{align*}
\frac{1}{b} \begin{bmatrix} d & -\Delta \\ 1 & -\Delta d \end{bmatrix} \bs{z}_{m}^{(\ell)} = \sigma_X \frac{1}{b} \begin{bmatrix} d & -\Delta \\ 1 & -\Delta d \end{bmatrix} \bs{z}_{\ell}^{(m)}
& \Leftrightarrow
\begin{bmatrix} d & 1 \\ 1 & d \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -\Delta \end{bmatrix}\bs{z}_{m}^{(\ell)} = \sigma_X \begin{bmatrix} d & 1 \\ 1 & d \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -\Delta \end{bmatrix} \bs{z}_{\ell}^{(m)} \\
& \Leftrightarrow
\begin{bmatrix} 1 & 0 \\ 0 & -\Delta \end{bmatrix}\bs{z}_{m}^{(\ell)} = \begin{bmatrix} d & 1 \\ 1 & d \end{bmatrix}^{-1} \sigma_X \begin{bmatrix} d & 1 \\ 1 & d \end{bmatrix} \begin{bmatrix} 1 & 0 \\ 0 & -\Delta \end{bmatrix} \bs{z}_{\ell}^{(m)} \\
& \Leftrightarrow
\begin{bmatrix} 1 & 0 \\ 0 & -\Delta \end{bmatrix}\bs{z}_{m}^{(\ell)} = \sigma_X \begin{bmatrix} 1 & 0 \\ 0 & -\Delta \end{bmatrix} \bs{z}_{\ell}^{(m)}.
\end{align*}
Here we used the commutation relation
\[ \begin{bmatrix}d & 1 \\ 1 & d\end{bmatrix}\sigma_X= \sigma_X\begin{bmatrix}d & 1 \\ 1 & d\end{bmatrix} \]
in the last equivalence.
\end{proof}
From this Lemma, we obtain two observations. \\
\noindent{\bf Observation 1.}\\
Inserting $\ell=m$ in (\ref{eq:xz}), we have $z_{\ell+1}^{(\ell)}=z_{\ell}^{(\ell)}=0$ since $x_{\ell,\ell}=0$. This is consistent with case (2) of Theorem~\ref{cor:graph}. \\
\noindent{\bf Observation 2.}\\
Since $\ell,m$ are arbitrarily chosen from $0,\dots,N-1$, equation (\ref{eq:z}) is equivalent to $z_{m}^{(\ell)}=(-\Delta) z_\ell^{(m-1)}$ for any $\ell,m=0,\dots,N-1$: indeed, the first component of (\ref{eq:z}) reads $z_{m+1}^{(\ell)}=-\Delta z_{\ell}^{(m)}$, and shifting $m\mapsto m-1$ gives the stated form.
Starting from this, we recursively obtain
\begin{align}
z_m^{(\ell)} &= (-\Delta)z_{\ell}^{(m-1)} = (-\Delta)^2 z_{m-1}^{(\ell-1)} \notag\\
&= (-\Delta)^3 z_{\ell-1}^{(m-2)} = (-\Delta)^4 z_{m-2}^{(\ell-2)} \notag\\
&= \cdots \notag\\
&= (-\Delta)^{2N-1}z_{\ell-N+1}^{(m-N)} = (-\Delta)^{2N}z_{m-N}^{(\ell-N)} = (-\Delta)^{2N}z_{m}^{(\ell)}. \label{eq:orbit}
\end{align}
Then we have $(-\Delta)^{2N}=1$ or $z_{m}^{(\ell)}=0$. If $(-\Delta)^{2N}\neq 1$, then $z_m^{(\ell)}=0$ for all $\ell,m$, and (\ref{eq:1}) implies $x_{\ell,m}=0$ for all $\ell,m$; this contradicts $\Psi\neq \bs{0}$. Thus, at least, $(-\Delta)^{2N}=1$ must be satisfied.
In the set of pairs of superscript and subscript $[i,j]\in\{ [\ell-k,m-k]\;|\;k=0,\dots,N \}$ of ``$z$'' in the third column of the above equations, the pair $[\ell,m]$ first appears again as $[\ell-N,m-N]$; on the other hand, in the second column, there might be a $k<N$ such that $[\ell-k,m-k-1]=[m,\ell]$ modulo $N$. But such a $k$ must satisfy $k=\ell-m=(N-1)/2$. Thus if such an early return happens, the size $N$ must be odd and the arc of $z_{m}^{(\ell)}$ is located in the ``front'' of the vertex $(\ell,\ell)$. In particular, if $N=3$, since $z_{\ell+1}^{(\ell)}=z_\ell^{(\ell)}=0$ by Observation~1, every arc in the support of $\Psi$ living in $A_0^{cycle}$ is located in the front of some vertex $(\ell,\ell)$ ($\ell=0,1,2$). Therefore $(-\Delta)^{2k+1}=(-\Delta)^{N}$ must be $1$ for $N=3$. If $N$ is even, the length of such a cycle is $2N$ from (\ref{eq:orbit}). This means that we obtain an eigenvector constructed from the orbit of a $2N$-length closed path starting from the arc of $z_m^{(\ell)}$ and returning back to this arc. The number of arcs in $A_0^{cycle}$ is $N^2$, but Observation~1 implies that $2N$ arcs are eliminated from the support of $\Psi$. Then we have $(N^2-2N)/(2N)=N/2-1$ linearly independent eigenvectors of eigenvalue $1$; that is, $\dim\ker(1-U^{BU})=N/2-1$ if $N$ is even and $(-\Delta)^{2N}=1$.
On the other hand, if $N>3$ is odd and $\ell-m\neq (N-1)/2$, then the length of the orbit of the closed path represented by (\ref{eq:orbit}) is $2N$. Then if $N>3$ is odd and $(-\Delta)^{2N}=1$ but $(-\Delta)^{N}\neq 1$, we have $(N^2-(2N+N))/(2N)=(N-3)/2$ linearly independent eigenvectors of eigenvalue $1$; while if $N>3$ is odd and $(-\Delta)^N=1$, we have $(N^2-(2N+N))/(2N)+1=(N-1)/2$ linearly independent eigenvectors of eigenvalue $1$. We have completed the proof of Lemma~\ref{lem:Kn}.\;$\square$
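As an illustration of the early return for odd $N$, take $N=3$ and $(\ell,m)=(2,1)$, so that $\ell-m=(N-1)/2=1$; repeatedly applying $z_{m}^{(\ell)}=(-\Delta)z_{\ell}^{(m-1)}$ with indices modulo $3$ gives
\[ z_1^{(2)}=(-\Delta)z_2^{(0)}=(-\Delta)^2 z_0^{(1)}=(-\Delta)^3 z_1^{(2)}, \]
so the orbit closes after $N=3$ steps and forces $(-\Delta)^{N}=1$ whenever $z_1^{(2)}\neq 0$.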
\section{Proof of key statement (Proposition~\ref{thm:key})}\label{set:proof}
Let us consider the stationary state of the optical quantum walk $\psi^{BU}$ that satisfies $U^{BU}\psi^{BU}=\psi^{BU}$.
The arcs of the island $u$ in $\tilde{G}_0^{BU,\xi}$ are denoted by
\[A^{cycle}_{0,u}:=\{a\in A^{cycle}_0 \;|\; t(a)=u\text{ in $\tilde{G}_0$}\}=\{ f_0,\dots,f_{{\rm deg}_{\tilde{G}_0}(u)-1} \}\subset \tilde{A}_0^{(BU,\xi)}.\]
Here we set $t(f_j)=(u;j)$ in $\tilde{G}_0^{BU,\xi}$ for $j=0,\dots,{\rm deg}_{\tilde{G}_0}(u)-1$ (see Fig.~\ref{fig:Lem8.1}).
The restriction to the island $u$ is defined by $\eta_u: \mathbb{C}^{\tilde{A}^{(BU,\xi)}_0}\to \mathbb{C}^{A^{cycle}_{0,u} }$ such that $(\eta_u\psi)(a)=\psi(a)$ for any $a\in A^{cycle}_{0,u}$ and $\psi\in \mathbb{C}^{\tilde{A}_0^{(BU,\xi)}}$; the adjoint is
\[ (\eta_u^*f)(a) = \begin{cases} f(a) & \text{: $a\in A_{0,u}^{cycle}$,}\\ 0 & \text{: otherwise.}\end{cases} \]
Note that $\eta_u\eta_u^{*}$ is the identity operator on $\mathbb{C}^{A^{cycle}_{0,u}}$, while $\eta^*_u\eta_u$ is the projection onto the states supported on $A^{cycle}_{0,u}$.
Then we have
\begin{align*}
U^{BU}\psi^{BU}=\psi^{BU} & \Rightarrow \eta_u U^{BU}\psi^{BU}=\eta_u \psi^{BU} \\
& \Leftrightarrow \eta_uU^{BU}(\eta_u^*\eta_u+(1-\eta_u^*\eta_u))\psi^{BU}=\eta_u\psi^{BU} \\
& \Leftrightarrow E_u\varphi_{u,0}+\rho_u = \varphi_{u,0},
\end{align*}
where $E_u:=\eta_u U^{BU}\eta_u^*$, $\varphi_{u,0}:=\eta_u\psi^{BU}$, and $\rho_u:= \eta_u U^{BU}(1-\eta_u^*\eta_u)\psi^{BU}$.
Let $P_u$ be the cyclic permutation matrix on $\mathbb{C}^{[\kappa]}$ such that $(P_u\phi)(j)=\phi(j+1)$ modulo $\kappa={\rm deg}_{\tilde{G}_0}(u)$. Then it is easy to see that $E_u$ is isomorphic to $a_uP_u$. Since $|a_u|<1$, the matrix $1-E_u$ is invertible. Then we have
\begin{equation}
\varphi_{u,0}=(1-E_u)^{-1}\rho_{u},
\end{equation}
where if we label the $\kappa:={\rm deg}_{\tilde{G}_0}(u)$ arcs from the outside of the island $u$ whose terminal vertices in $\tilde{G}^{BU,\xi}_0$ are $(u;0),(u;1),\dots,(u;\kappa-1)$ by $e_0,\dots,e_{\kappa-1}$, respectively, (see Fig.~\ref{fig:Lem8.1}) then the inflow penetrating into $A_u^{cycle}$ is represented by
\[\rho_u=[\;b_u\;\psi^{BU}(e_0),\dots,b_u\;\psi^{BU}(e_{\kappa-1})\;]^\top.\]
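Expanding $(1-E_u)^{-1}=\sum_{n\geq 0}E_u^{n}$, which converges since $|a_u|<1$, this has a transparent interpretation: the amplitude on an island arc is the sum over all paths along which the inflow $b_u\,\psi^{BU}(e_j)$ enters the island and then circulates around the cycle, acquiring a factor $a_u$ at each step,
\[ \varphi_{u,0}=\sum_{n\geq 0}E_u^{n}\rho_u. \]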
Recall that the arcs in the island $u$ are denoted by $f_0,\dots,f_{\kappa-1}$ and the arcs from the outside of the island $u$ are denoted by $e_0,\dots,e_{\kappa-1}$.
Then we have the following lemma.
\begin{lemma}\label{lem:1}
Let $\psi^{BU}$ be a generalized eigenvector of $U^{BU}$ satisfying $U^{BU}\psi^{BU}=\psi^{BU}$.
Set $\varphi_{u,in}=[\;\psi^{BU}(e_0),\dots,\psi^{BU}(e_{\kappa-1})\;]^\top$, $\varphi_{u,out}=[\;\psi^{BU}(\bar{e}_0),\dots,\psi^{BU}(\bar{e}_{\kappa-1})\;]^\top$, and $\varphi_{u,0}=[\; \psi^{BU}(f_0),\dots,\psi^{BU}(f_{\kappa-1}) \;]^\top$. Then we have
\begin{align}
\varphi_{u,out}&=\mathrm{Circ}(H_u) \varphi_{u,in}, \label{eq:w1}\\
\varphi_{u,0}&=\frac{1}{c_u}(\mathrm{Circ}(H_u)-d_uI_u)\varphi_{u,in}. \label{eq:naka0}
\end{align}
\end{lemma}
\begin{proof}
It holds that
\begin{align}
\varphi_{u,out}(j) &= c_u\varphi_{u,0}(j)+d_u\varphi_{u,in}(j) \label{eq:naka}\\
&= c_u ((1-E_u)^{-1}\rho_u)(j)+d_u \varphi_{u,in}(j) \notag \\
&= c_u ((1-E_u)^{-1} b_u\varphi_{u,in})(j)+d_u \varphi_{u,in}(j).\notag
\end{align}
Then we have
\[ \varphi_{u,out}=(b_uc_u(1-E_u)^{-1}+d_u)\varphi_{u,in}. \]
The inverse of $(1-E_u)$ can be expressed by
\begin{align*}
(1-E_u)^{-1} &= 1+a_uP_u+(a_uP_u)^2+\cdots \\
&= (1+a_u^{\kappa}+a_u^{2\kappa}+\cdots)+(a_u+a_u^{\kappa+1}+a_u^{2\kappa+1}+\cdots)P_u+(a_u^2+a_u^{\kappa+2}+a_u^{2\kappa+2}+\cdots)P_u^2+\\
&\quad\cdots+(a_u^{\kappa-1}+a_u^{2\kappa-1}+a_u^{3\kappa-1}+\cdots)P_u^{\kappa-1} \\
&= \frac{1}{1-a_u^{\kappa}}\left(1+a_u P_u+a_u^2 P_u^2+\cdots+a_u^{\kappa-1}P_u^{\kappa-1}\right), \end{align*}
where we used $P_u^{\kappa}=1$ to group the powers.
Then, inserting the above expression for $(1-E_u)^{-1}$ into $b_uc_u(1-E_u)^{-1}$, we obtain $\mathrm{Circ}(H_u)=b_uc_u(1-E_u)^{-1}+d_uI_u$, which completes the proof of (\ref{eq:w1}). We obtain (\ref{eq:naka0}) by combining (\ref{eq:naka}) with (\ref{eq:w1}).
\end{proof}
\begin{figure}[h]
\centering
\includegraphics[width=60mm]{Lem8.1.eps}
\caption{Labeling of the arcs of the island $u$ in the proof of Lemma~\ref{lem:1}. }
\label{fig:Lem8.1}
\end{figure}
Thus $\mathrm{Circ}(H_u)$ represents the local scattering matrix of the directed cycle of length $\kappa$ with $\kappa$ boundaries for the in- and out-flow. The unitarity of $\mathrm{Circ}(H_u)$ is ensured by \cite{FelHil1,FelHil2} in more general settings.
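For instance, for an island of degree $\kappa=3$, the expression for $(1-E_u)^{-1}$ computed in the proof of Lemma~\ref{lem:1} gives the operator form
\[ \mathrm{Circ}(H_u)=d_uI_u+\frac{b_uc_u}{1-a_u^{3}}\left(1+a_uP_u+a_u^{2}P_u^{2}\right). \]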
Next, let us introduce the restriction $\iota: \mathbb{C}^{\tilde{A}_0^{(BU,\xi)}}\to \mathbb{C}^{\tilde{A}_0}$ such that $(\iota \psi)(a)=\psi(a)$ for any $a\in \tilde{A}_0$.
A matrix representation of $\iota$ is given by
\[ \iota \cong [\; I_{\tilde{A}_0} | 0\;] \]
under the decomposition of $\tilde{A}_0^{(BU,\xi)}$ into $\tilde{A}_0\sqcup A^{cycle}_0$.
Then we have the following proposition.
\begin{proposition}\label{prop:1}
For any $\psi\in \mathbb{C}^{\tilde{A}_0^{(BU,\xi)}}$ satisfying
the generalized eigenequation $U^{BU}\psi=\psi$, we have
\[U_0\iota \psi =\iota \psi.\]
\end{proposition}
\begin{proof}
By (\ref{eq:w1}) in Lemma~\ref{lem:1},
we immediately obtain the conclusion.
\end{proof}
We expect that $\iota \psi$ is the stationary state of the circulant quantum walk, but unfortunately this is not true in general. See Example~2 in Section~\ref{sect:demonstration}. So in the following, let us consider when $\iota \psi$ coincides with the stationary state of the circulant quantum walk. To this end, we prepare the following two propositions.
\begin{proposition}\label{prop:2}
$\ker(1-U_0)\neq \{\bs{0}\}$ if and only if $\ker(1-U^{BU})\neq \{\bs{0}\}$.
\end{proposition}
\begin{proof}
Assume $\ker(1-U_0)\neq \{\bs{0}\}$, that is, there exists $\phi \neq \bs{0}$ such that $U_0\phi=\phi$ and ${\rm supp}(\phi)\subset A_0$.
Let $\{e_0,\dots,e_{\kappa-1}\}=\{a\in \tilde{A}_0 \;|\;t(a)=u\}$ and
$\varphi_{u,in}=[\phi(e_0),\dots,\phi(e_{\kappa-1})]^\top$.
Then let us define $\psi\in \mathbb{C}^{\tilde{A}_0^{(BU,\xi)}}$ by
\begin{equation}\label{eq:extension}
\psi(a)=\begin{cases} \phi(a) & \text{: $a\in \tilde{A}_0$,} \\
\frac{1}{c_u}((\mathrm{Circ}(H_u)-d_uI_u)\varphi_{u,in})(a) & \text{: $a\in A^{cycle}_{0,u}$, $u\in V_0$.}\end{cases}
\end{equation}
Then by Lemma~\ref{lem:1}, we have $U^{BU}\psi=\psi$, and the support of $\psi$ does not overlap with the tails, which implies that $||\psi||<\infty$.
Next, let us consider the converse direction.
Note that $\mathrm{spec}(E_u)\subset \{z\in\mathbb{C} \;|\; |z|<1\}$. Then the support of any eigenvector of eigenvalue $1$ must have an overlap with $\tilde{A}_0$.
Then from Proposition~\ref{prop:1}, we obtain the converse direction.
\end{proof}
\begin{remark}
Proposition~\ref{prop:2} implies that the eigenvectors of $\ker(1-U_0)$ are in one-to-one correspondence with those of $\ker(1-U^{BU})$.
\end{remark}
\begin{proposition}\label{prop:3}
Assume $\ker(1-U_0)=\{\bs{0}\}$. Then for an arbitrary $s_0\in \mathbb{C}^{\partial A_+}$, the solution $\psi$ of the following generalized eigenequation with the boundary condition $\psi|_{\partial A_+}=s_0$ is uniquely determined:
\begin{align*}
(1-U_0)\psi &= 0,\;
\psi|_{\partial A_+}=s_0.
\end{align*}
\end{proposition}
\begin{proof}
Let $\zeta: \mathbb{C}^{\tilde{A}_0} \to \mathbb{C}^{\tilde{A}_0\setminus tails}$ be the restriction such that $(\zeta \phi)(a)=\phi(a)$ for any $a\in \tilde{A}_0\setminus tails$.
Putting $\zeta U_0 \zeta^*=E$, we have
\[ (1-E)\zeta \psi = \rho, \]
where $\rho=\zeta U_0 (1-\zeta^*\zeta)\psi$.
Let us see that $(1-E)^{-1}$ exists as follows.
Assume there is a $f\neq 0$ such that $Ef=f$.
Applying $\zeta^*$ to both sides, we have $||\zeta^*\zeta U_0\zeta^*f||^2=||\zeta^*f||^2$.
On the other hand, by the unitarity of $U_0$, we have
$||U_0\zeta^*f||^2=||\zeta^*f||^2$.
Then we have $|| U_0\zeta^*f ||=||\zeta^*\zeta U_0\zeta^*f||$. Since $\zeta^*\zeta$ is a projection onto the internal graph, this implies that the support of $U_0\zeta^*f$ is included in the internal graph; that is, $(1-\zeta^*\zeta)U_0\zeta^*f=0$.
This is equivalent to $U_0\zeta^*f-\zeta^*Ef=0$. Since $Ef=f$, we have $(1-U_0)\zeta^*f=0$.
Thus $\zeta^*f\;(\neq 0)$ is a $(+1)$-eigenvector of $U_0$, which contradicts the assumption $\ker(1-U_0)=\{\bs{0}\}$. Therefore we have $\zeta \psi=(1-E)^{-1}\rho$, which implies that $\zeta^*\zeta \psi=\zeta^* (1-E)^{-1}\rho$. Thus the restriction of $\psi$ to the internal graph is uniquely determined. On the other hand, the restriction of $\psi$ to the tails is uniquely determined as $s_0$ for the inflow and $(1-\zeta^*\zeta)U_0\zeta^*\zeta\psi$ for the outflow.
\end{proof}
Now we are ready for the proof of Proposition~\ref{thm:key}: if $\ker(1-U^{BU})=\{\bs{0}\}$, then $\mathrm{Opt}(QW(G_0;\bs{H};\xi))$ implements $QW(G_0;\bs{H};\xi)$.\\
\noindent{\bf Proof of Proposition~\ref{thm:key}.}
Let $\Psi$ be the stationary state of the optical quantum walk.
We set the boundary condition by $\Psi|_{\partial A_+}=s_0$.
The assumption of Proposition~\ref{thm:key} is equivalent to $\ker(1-U_0)=\{\bs{0}\}$ by Proposition~\ref{prop:2}, which is the assumption of Proposition~\ref{prop:3}.
Then by Proposition~\ref{prop:3}, the unique solution of $(1-U_0)\psi=0$ with $\psi|_{\partial A_+}=s_0$ is nothing but the stationary state of the circulant quantum walk with the inflow represented by $s_0$.
On the other hand,
by Proposition~\ref{prop:1}, we have $U_0\iota\Psi=\iota\Psi$. In particular, $\iota \Psi|_{\partial A_+}=\Psi|_{\partial A_+}=s_0$.
Therefore, the unique solution $\psi$ is described by $\iota \Psi$, which implies that the stationary state for the circulant quantum walk is equivalent to $\psi=\iota \Psi$. \begin{flushright}
$\square$
\end{flushright}
\begin{remark}
In general, the stationary state of a quantum walk must be orthogonal to the eigenspaces of the time evolution operator in the whole space~\cite{HS}. Let $\psi_\infty^{(BU)}$ be the stationary state of the optical QW. Then
for any $\phi\in \ker(1-U^{(BU,\xi)})$, we have
\[ \langle \psi_\infty^{(BU)}, \phi \rangle=0. \]
However, in general, it is not ensured that
\begin{equation}\label{eq:orthogonal}
\langle \iota\psi_\infty^{(BU)}, \iota\phi \rangle=0.
\end{equation}
This is why the optical QW does not implement the underlying QW in Example~2: there the optical QW has eigenvalue $1$ and (\ref{eq:orthogonal}) is not satisfied.
\end{remark}
\begin{remark}
Conversely, if $\ker(1-U_0)=\{\bs{0}\}$, then the stationary state $\psi_\infty$ implements the stationary state $\psi_\infty^{(BU)}$ via the deformation of $\psi_\infty$ given by (\ref{eq:extension}) in Proposition~\ref{prop:2}.
This means that the two stationary states on $\tilde{G}_0$ and $\tilde{G}^{(BU,\xi)}$ implement each other under the condition $\ker(1-U_0)=\{\bs{0}\}$.
\end{remark}
\section{Summary and discussion}
In this paper, we considered an optical implementation of a quantum walk on a graph driven by circulant matrices, namely the circulant quantum walk. To this end, we introduced another kind of quantum walk on the blow-up graph of the original graph induced by the circulant quantum walk; this is the optical quantum walk. The blow-up graph is a $2$-regular directed graph. Then, making a correspondence between the two incoming edges to each vertex and the vertical and horizontal polarizations,
we heuristically showed that the optical quantum walk can in theory be implemented by an optical circuit, and also proposed a design of the optical circuit for a general graph. We suggested a kind of search for a perturbed vertex using our circulant quantum walk. A high relative probability at the perturbed vertex is asymptotically stable, while such a probability is asymptotically periodic in the usual quantum search algorithm driven by quantum walks.
The analysis of this convergence speed has the potential to lead to a quantum walk version of the cut-off phenomenon~\cite{Diaconis,LevinPeres} in the future.
We also mathematically showed a sufficient condition for the coincidence of the stationary states of the circulant quantum walk and its induced optical quantum walk. From this condition, we gave a useful setting for the circulant quantum walk that can be implemented by the induced optical quantum walk.
Finally, let us discuss the design of the optical circuit in Section~4 as an experimental approach to potential problems in the future. In Section~4, we designed the optical circuit under ideal conditions where the phases are matched, but experimentally the phases are not matched due to noise arising from complex environmental fluctuations, so the expected operation does not occur in the HWPs. To address this problem, we propose an experimental method.
In an optical circuit such as that in subsection 4.1, it is necessary to make the optical path length of one round trip of V-polarized light an integer multiple of the wavelength in order to obtain constructive interference with the light from previous laps. It is also necessary to match the phases of the waves at each PBS. To meet these requirements, we stabilize the optical path length between each pair of PBS's. Experimentally, stabilization of the length of the optical circuit can be achieved by feedback control using a reference laser and a piezo-electric transducer attached to a mirror of the optical circuit~\cite{quantumoptics}. If necessary, the phase of the incoming signal is also stabilized by a similar procedure. Also, in subsection 4.2, each pair of islands is connected following the original graph connection; the resulting design is described by $G'$. The ideal design $G'$ is implemented with optical elements as shown in the right figure of Fig.~4. The polarized light that flows out from $G'$ does not return to $G'$, so there is no need to consider its optical path length. On the other hand, H-polarized light flowing out from one island to another interferes with V-polarized light at the PBS of the destination island, so the optical path length needs to be stabilized by the same feedback control. However, in the above method, we should measure the outflow from each PBS in order to perform feedback control on the optical path between each pair of PBS's. Then a trade-off problem remains: the more accurately we try to obtain the interference inside the island, the more we lose the output outside the island. We expect that such realistic experimental problems, based on our proposed optical circuit under these very ideal conditions, will be addressed in the future.
\noindent\\
\noindent {\bf Acknowledgments}
Yu.H. acknowledges financial support from the Grant-in-Aid for Scientific Research (C) of the Japan Society for the Promotion of Science (Grant No.~18K03401).
E.S. acknowledges financial support from the Grant-in-Aid for Scientific Research (C) of the Japan Society for the Promotion of Science (Grant No.~19K03616) and Research Origin for Dressed Photon.
\begin{small}
\bibliographystyle{jplain}
\section{Introduction}
The class of neutron stars collectively known as ``Anomalous X-ray
Pulsars'' \citep[AXPs;][]{ms95} has many properties that have been
enigmatic since the discovery of the first example over 20 years ago
\citep{fg81}. Foremost among puzzles was the nature of their energy
source, as they show no evidence of being either accretion- or
rotation-powered. Following extensive theoretical and
observational work \citep[see][for a review]{wt06}, it is clear that
AXPs share a common nature with another unusual class of neutron
stars, the ``Soft Gamma Repeaters'' (SGRs), with both best identified
with young, isolated neutron stars that are powered by the decay of an
enormous ($\ga$10$^{14}$~G) internal magnetic field. As such, they are
called ``magnetars'' \citep{dt92a,td95,td96a}.
Recently, transient X-ray pulsars with properties otherwise unique to
the AXPs have been discovered. The one established transient AXP
(TAXP) is XTE~J1810$-$197, a 5.5-s X-ray
pulsar discovered in 2003 \citep{ims+04} during a period of dramatic
X-ray enhancement and subsequent flux decay on roughly a year
timescale. The source's spectrum at the time
of the outburst was soft in the 2--10~keV band, well characterized by
a combined two-component spectrum (power law plus blackbody, or 2
temperature blackbody model) with parameters similar to those seen in
classical, i.e., non-transient, AXPs \citep{ims+04,ghbb04}. This,
together with the observed secular spin down and implied
magnetar-strength magnetic field, as well as an observed X-ray
luminosity in excess of the implied rotational spin-down luminosity
$\dot{E}$, makes an AXP interpretation for XTE~J1810$-$197 difficult
to escape \citep{ims+04}. Yet \citet{ghbb04} showed from archival
X-ray data that in quiescence, the observed source flux was nearly two
orders of magnitude fainter than at the time of the outburst and in
subsequent months, and much fainter than any of the non-transient
AXPs. TAXPs also open the question of how many more quiescent AXPs
there are in the Galaxy. This question is particularly
interesting as the magnetar birthrate could be a substantial fraction
of the total neutron star birthrate, possibly even comparable to that
of classical radio pulsars, whose much greater longevity makes them
much more numerous in the Galaxy. On the other hand, \citet{gmo+05}
consider the growing evidence that magnetars have unusually massive
progenitors, and thus argue that the magnetar birthrate is $\sim$10\%
of the total neutron star birthrate. The study of TAXPs in
quiescence is important for constraining their true luminosity
function, and hence the size of the Galactic magnetar population.
The 6.97-s X-ray pulsar \psr\ was discovered during a periodicity
search for X-ray sources in the {\it ASCA} archive
\citep{gv98,tkk+98}. Strong X-ray pulsations having a sinusoidal
pulse profile were seen in data obtained in 1993 from a Galactic Plane
point source that was subsequently shown to be near the center of the
shell supernova remnant G29.6+0.1 \citep{ggv99}. The long pulse
period and possible association with a young remnant strongly
suggested that \psr\ is an AXP. Additional support for this
interpretation came from the soft, highly absorbed X-ray spectrum,
which was well described by the Wien tail of a blackbody having $kT
\sim 0.64$~keV, similar to that seen in other AXPs
\citep{gv98,tkk+98}. The pulsar was not detected in a serendipitous
observation of the region obtained in 1997 as part
of the \textit{ASCA} Galactic Plane Survey \citep{tkk+98}.
Interestingly, follow-up observations in 1999 revealed the source
\axj\footnote{The updated coordinate of this \textit{ASCA} source,
using the final correction for the systematic coordinate offset
derived in \citet{guf+00}, is $18^{\mathrm{h}}44^{\mathrm{m}}54\fs4,
-02\degr56\arcmin37\farcs7$.} in the original 3$'$ radius
\textit{ASCA} positional uncertainty region, whose flux was smaller by
a factor of $\sim$10 relative to that of \psr\ in 1993, precluding the
measurement of pulsations or spectral information \citep{vgtg00}. The
rate of change of the spin frequency has therefore not been measured,
rendering the source as yet unconfirmed as a {\it bona fide} AXP. It
is plausible that the 1993 observation was obtained shortly after a
major outburst like that seen for XTE~J1810$-$197 and that the source
faded subsequently back to its quiescent level. X-ray observations in
2001--2003 with \textit{BeppoSAX} MECS, \textit{Chandra} HRC-I and
\textit{XMM-Newton} EPIC reported by \citet{isc+04} revealed a faint
source consistent with \axj\ in position and brightness; a concurrent
attempt to detect its optical/IR counterpart produced an H-band upper
limit of 21 mag.
Here, we report on a series of {\it Chandra X-ray Observatory}
observations of \psr\ in apparent quiescence. We attempt to re-detect
pulsations and hence constrain the spin-down rate in order to test the
AXP interpretation for this source. We characterize the behaviour of
this candidate TAXP in a low flux state by determining its spectral
properties, and by searching for low-level flux variability on a
variety of time scales. Finally, we discuss the likelihood and
implications for a non-detection of the AXP counterpart.
\section{Observations and Analysis}
Seven observations with \textit{Chandra} ACIS-S were obtained between
2003 June~26 and September~14 in timed exposure mode.
Table~\ref{tab:obs} summarizes the observing characteristics. The
first six were taken in 1/8 subarray mode on the chip ACIS-S3, with a
time resolution of 0.441~s, sufficient to resolve the pulsar signal.
The subarray's small field of view does not cover the full 3$'$ radius
\textit{ASCA} error circle. Therefore, we used as aim point the
position of the counterpart supplied to us from \textit{Chandra} HRC
observations
($18^{\mathrm{h}}44^{\mathrm{m}}54\fs6, -02\degr56\arcmin53\arcsec$
(J2000); G. Israel, private communication). The seventh observation
was in ACIS-S full frame mode, for which the time resolution was
3.241~s. The total exposure length at the above position was
$\sim$80~ks.
Data processing was performed with the CIAO~3.2.2 and CALDB~3.0.3
software packages. We re-performed some steps in the standard
processing pipeline with updated calibration files, using the
tool \texttt{acis\_process\_events}.
\subsection{Imaging}
\label{sec:image}
One source at the position $18^{\mathrm{h}}44^{\mathrm{m}}54\fs68,
-02\degr56\arcmin53\farcs1$ (J2000) is detected in all seven
observations, which we designate \cxou; this is the likely counterpart
to \axj\ and possible counterpart to \psr. As seen in
Figure~\ref{fig:image}, it falls within the error circles of both
objects. We find no evidence of extended emission. The observations
were aligned and summed using the nominal {\it Chandra} astrometric
information. Systematic uncertainties in {\it Chandra} absolute
positions are expected to be $0.6''$ at the 90\%
level\footnote{http://cxc.harvard.edu/cal/ASPECT/celmon}. Although
these systematic errors in general can be reduced by aligning other
sources, given that some of our subarray fields contain none, the
nominal astrometry must suffice. To confirm that co-addition had no
adverse effects on our source's radial profile, we directly compared
it to the simulated PSF produced by the Chandra Ray Tracer (ChaRT) at
the source chip position and found it consistent with an unresolved
point source. The final position was determined from the combined
image and is consistent with that measured with \textit{Chandra} HRC.
Since the absence of pulsations precludes confirming the AXP nature of
\cxou\ (see \S\ref{sec:time}), and similarly \axj, the true
counterpart could conceivably lie anywhere in the original 3$'$ radius
\textit{ASCA} error circle. We searched the combined event file,
unfiltered in energy, for additional significant point sources using
\texttt{celldetect}. One source, CXOU~J184507.2--025657, was found at
$18^{\mathrm{h}}45^{\mathrm{m}}07\fs27, -02\degr56\arcmin57\farcs3$,
located 3$\farcm$1 away from \cxou, at the 3$\sigma$ level. Its
coincidence with the near-IR source
2MASS~J18450724--0256571\footnote{See
http://www.ipac.caltech.edu/2mass/ for information on the 2MASS All
Sky Survey} of magnitude $K=12.7$ may suggest that
CXOU~J184507.2--025657 is an unlikely counterpart to \psr, since a
highly absorbed AXP candidate is expected to have a near-IR magnitude
$K \gg 20$ \citep[for a summary of AXP IR magnitudes and X-ray
absorptions, see][]{dv05}. A considerably fainter source,
CXOU~J184509.7--025715 located at
$18^{\mathrm{h}}45^{\mathrm{m}}09\fs76, -02\degr57\arcmin15\farcs0$,
was found at the 2$\sigma$ level. All of these sources are indicated
in Figure~\ref{fig:image}. We also inspected an archival
\textit{XMM-Newton} observation, taken 2003 March~3 \citep{isc+04},
but found no additional significant point sources in the error
region.
\subsection{Timing}
\label{sec:time}
In an attempt to perform phase-coherent timing, we observed in 1/8
subarray mode to acquire high-time-resolution data and identify a
pulsed signal. Light curves for \cxou\ were
extracted from a $2\farcs5$ radius circle in each data set at the
maximum allowable time resolution (0.441 s for observations
3891$-$3896) in 3 energy ranges: 1$-$10~keV, 1$-$3~keV and
3$-$10~keV. Event times were corrected to solar system
barycenter arrival times. We performed a fast fourier transform (FFT)
on each data set; no evidence for pulsations was found in the
resulting power density spectra. Using the longest of the
observations (Obs. ID 3891), for the frequency range 0.0880--0.1436~Hz,
we set a 95\% confidence upper limit on the pulsed amplitude of 80\%
in 1$-$10~keV, using the method outlined in \citet{vvw+94}. The above
frequency range allows for a 10-year change in frequency corresponding
to magnetic fields $\sim$10$^{16}$~G and lower.
Our detections of the faint point sources CXOU~J184507.2--025657 and
CXOU~J184509.7--025715 have far too few counts ($\leq$12 in 1$-$10
keV) to make detecting pulsations possible.
\subsection{Spectrum}
We used the \texttt{psextract} script and \texttt{mkacisrmf} tool to
extract \cxou's spectra from a $2\farcs5$ radius circle and the
background spectra from a 3$''$ to 22$''$ annulus centered on the
point source, and compute instrumental response files.
Background-subtracted count rates for the point source at each
observing epoch are given in Table~\ref{tab:obs}, where uncertainties
assume Poisson statistics. At each epoch there were too few counts to
allow a meaningful spectral fit; therefore, we combined the individual
data sets into a summed spectrum containing $550\pm24$
background-subtracted counts (0.5$-$10~keV). We excluded channels at
energies below 0.5~keV, where the effective area of ACIS-S falls off
significantly, and grouped the remainder so that a minimum of 12
counts fell in each spectral bin. The spectral fitting package
XSPEC~11.3.1 produced equally acceptable fits to single-component
thermal blackbody or power-law models with photoelectric absorption;
model parameters are presented in Table~\ref{tab:spec}. We found a
best-fit temperature of $kT = 2.0^{+0.4}_{-0.3}$~keV and an absorption
of $N_H=5.6^{+1.6}_{-1.2} \times 10^{22}$~cm$^{-2}$ assuming a
blackbody spectrum, and a photon index of $\Gamma = 1.0^{+0.5}_{-0.3}$
and absorption $N_H=7.8^{+2.3}_{-1.8} \times 10^{22}$~cm$^{-2}$
assuming a power-law spectrum (uncertainties reflect 90\% confidence).
The measured absorptions are consistent with the 1993 \textit{ASCA}
values within uncertainties.
Assuming that the shape of the combined spectrum is also
characteristic of the spectra at each epoch, we determined those
fluxes by holding the spectral parameters fixed at the values in
Table~\ref{tab:spec}. Since neither model is preferred based on
goodness of fit, we arbitrarily chose the blackbody model for the rest
of our analysis. We measured the 2$-$10~keV flux of the seven
individual data sets by grouping spectra in the same way as for the
combined spectrum, freezing $N_H$ and $kT$ at the above best-fit
values, and allowing only the normalization to vary. We found that
the data are consistent with the source's flux being stable over the
12-week observing window to within statistical uncertainties: fitting
to a constant flux resulted in a reduced $\chi^2=1.0$ for 6 degrees of
freedom. The inset plot of Figure~\ref{fig:flux} shows the
\textit{Chandra} fluxes assuming the best-fit blackbody model
parameters.
The combined 2$-$10~keV flux of \cxou, assuming the blackbody
spectrum, is $2.6 \pm 0.2 \times 10^{-13}$~erg~s$^{-1}$~cm$^{-2}$.
Although subject to large uncertainties on the measurement of $N_H$,
we estimate that the unabsorbed flux is $2.5-4.0\times
10^{-13}$~erg~s$^{-1}$~cm$^{-2}$. See Table~\ref{tab:spec} for an
explanation of the uncertainties. If the source is indeed the
counterpart to \psr, this implies that the flux in 2003 is a factor of
$\sim$13 fainter than that measured in 1993 with \textit{ASCA} GIS
\citep{gv98}.
What if \cxou\ is unrelated to \psr? We next looked at the two
fainter point sources coincident with the error region as possible
counterparts. We extracted spectra for CXOU~J184507.2$-$025657 from a
4$''$ radius circle, using the same background area as earlier. This
source, which was visible in only 3 of the 7 observations, produced
37 background-subtracted counts (0.5$-$10~keV) in its combined
spectrum, insufficient to adequately fit a spectral model. However, we
observed that the majority of counts fell below 2~keV, contrary to
what one would expect from a highly-absorbed source such as \psr\ that
previously exhibited $N_H \ga 6 \times 10^{22}$~cm$^{-2}$. This
evidence, combined with the probable 2MASS association we mentioned
earlier, strongly suggests that CXOU~J184507.2$-$025657 is unrelated
to \psr. From CXOU~J184509.7--025715, which appeared in 4 of 7
observations, we extracted counts from a 4$\farcs$4 radius circle;
this gave a combined spectrum containing 20 background-subtracted
counts (0.5$-$10~keV). Again, the paucity of counts prevented us from
drawing any conclusive results about the spectrum of this source.
Finally, we considered the case that \psr\ was not at all redetected,
and determined the $3\sigma$ upper limit on the absorbed flux for a
hypothetical point source. We measured the background count rate from
our only full-frame data set (Obs. ID 3897), which was the only
observation whose field was large enough to contain the full 3$'$
\textit{ASCA} error circle. The range $\sim8-13\times
10^{-15}$~erg~s$^{-1}$~cm$^{-2}$ (2--10~keV) encompasses results
assuming several likely models based on the outburst spectrum of \psr\
and the spectrum of XTE~J1810$-$197 in quiescence (see
Figure~\ref{fig:flux}). If the true counterpart were off-axis by
3$'$, the difference in effective area and PSF would not dramatically
affect our ability to detect a point source unless it were at the
limiting flux.
\section{Discussion}
\label{sec:disc}
Our observations reveal that \cxou, whether the counterpart or not, is
significantly fainter than was \psr\ in 1993 \citep{gv98} by a factor
of $\sim$13. If \cxou\ is not the AXP counterpart, this factor
increases significantly: \psr\ must now be at least 260--430 times
fainter than it was in 1993. This would be an unprecedented
range of variability in AXPs. \cxou's flux is consistent with that of
\axj\ in 1999 \citep{vgtg00}; therefore, we may well have detected the
same source. Figure~\ref{fig:flux} summarizes the flux history of
\psr.
Such variability on long time scales, seen here and in
XTE~J1810$-$197, presents a challenge to the magnetar model, which
posits that the decay of the internal field is continual during the
source's youth. This decay results in continual internal heating and
crustal stresses. Thus, the behaviour exhibited by TAXPs raises the
following important question: if they are magnetars, what causes
the dramatic difference in intrinsic brightness between active and
quiescent states? Estimates of the crustal temperatures heated by
internal magnetic dissipation predict X-ray luminosities like those
observed for non-transient AXPs \citep{td96a}. Those same estimates
are consistent with the expected stresses that result in the crustal
yields that produce bursts like that seen in XTE~J1810$-$197
\citep{wkg+05} and also in the non-transient AXP 1E~2259+586
\citep{kgw+03}. Thus TAXPs and non-transients have much in common
physically, but are apparently sufficiently dissimilar that their
quiescent X-ray luminosities differ by orders of magnitude.
The spectrum of \cxou\ raises doubt that this is indeed the pulsar
counterpart. For the blackbody model, the temperature of 2~keV is
much higher than the 0.18~keV measured for XTE~J1810$-$197 in
quiescence \citep{ghbb04} and in fact much higher than for any known
AXP or SGR\footnote{See online magnetar catalog
http://www.physics.mcgill.ca/$\sim$pulsar/magnetar/main.html for a
summary of AXP properties.}. Evidence for inconsistent spectral
behaviour may have already been seen in 2001--2003 by
\citet{isc+04}. Indeed the {\it Chandra} point source is much
harder than was the pulsar when in
outburst in 1993 \citep{gv98,tkk+98}, in stark contrast to
XTE~J1810$-$197 which greatly hardened ($kT=0.67$~keV) when bright.
Thus if \cxou\ is the pulsar counterpart, its spectral properties in
quiescence are puzzling. The quiescent spectrum is more in line with
that seen from magnetospheric emission in rotation-powered pulsars
\citep[see][for a review]{krh06}, however, no such object has ever
shown even a small variation in its X-ray luminosity, much less orders
of magnitude. Moreover, the 7-s periodicity is much longer than has
been seen in any rotation-powered magnetospheric X-ray
emission. Our 80\% upper limit on the pulsed fraction is well above the
pulsed fractions seen in other AXPs, and is therefore unconstraining.
If the {\it Chandra} source is not the pulsar counterpart, what could
it be? The source's salient properties are its hard spectrum, its
approximate luminosity ($L_x \simeq 10^{33}(d/5 \; {\rm kpc})^2$), and
its absence of variability on time scales of days to weeks. Given the
photon index in the power-law spectral model, an active galactic
nuclei (AGN) interpretation is plausible
\citep[e.g.][]{woau04,nla+05}. We estimate the
probability of our object being a background AGN from the predicted
number density as a function of 2$-$10~keV flux according to
\textit{Chandra} ACIS-I deep observations of an ``empty'' Galactic
plane region by \citet{etp+05}. Coincidentally, their field of view
is centered only $\sim$1$\degr$ from our target, so it is likely
that our fields share many common properties, such as absorption
column. From Figure~24 of \citet{etp+05}, the number of extragalactic
point sources per square degree with flux greater than $3\times
10^{-13}$~erg~s$^{-1}$~cm$^{-2}$ is $\sim$2. We expect $\sim$0.02 AGN
per circular region of radius 3$'$; hence, there is a $\sim$2\% chance
that this source is an AGN.
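Explicitly, this expected number is the surface density multiplied by the solid angle of the error circle:
\[ 2~\mathrm{deg}^{-2}\times \pi\left(\tfrac{3}{60}~\mathrm{deg}\right)^{2}\approx 2\times 7.9\times 10^{-3}\approx 0.016. \]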
\citet{etp+05} estimate $JHK_S$ magnitudes of 21$-$23~mag for AGN in
their survey of the Galactic plane, where at least $A_{K_S}\sim 4$~mag
of extinction may be present \citep{ps95,rl85}.
Even if we make the simple assumption that the X-ray and near-IR
emission of AGN are part of a single power law spectrum, and that
\cxou\ should be 1$-$2 orders of magnitude brighter in the near-IR
than their survey sample as it is in X-rays, it will still be
difficult to confirm or rule out this possibility using IR
observations, given its predicted faintness.
On the other hand, several types of Galactic objects could have
properties similar to those of this source \citep[see][for a similar
discussion]{mab+04}. Winds from massive stars have similar spectral
and flux properties, as do some high-mass X-ray binaries. However,
these would tend to be IR-bright, in conflict with the $H$-band limit
of 21 mag reported by \citet{isc+04}. A very small number of
millisecond pulsars with similar X-ray luminosities are shown to
possess comparably hard X-ray spectra \citep{kh04}, although until
pulsations are seen from \cxou, it will be impossible to test this.
One source class whose
properties are similar to that of the point source in question are
cataclysmic variables, specifically the class of intermediate polars.
The observed near-IR emission is thought to be dominated
by their dwarf companion and may be very faint given the
absorption to this source; \citet{mab+04} estimate $K\approx
22-25$~mag for sources at the Galactic center, at comparable distance
and suffering comparable extinction. This would be hard to detect.
Thus it seems clear that simply obtaining deeper near-IR observations
will not be sufficient to determine whether this source is the
counterpart. The most promising avenues for doing so therefore are
either obtaining very deep X-ray observations in the hope of
redetecting pulsations, or else waiting patiently for the pulsar to
grace us with another outburst bright enough for follow-up with other
observatories.
We conclude that no matter what, \psr\ is interesting:
if the counterpart is the detected source, then either \psr\ is
not an AXP or AXPs can have a much wider range of spectral properties
in quiescence than has been thought. If this is not the counterpart
and the AXP
identification is correct, then AXPs are capable of $>$2
order-of-magnitude flux variations, an interesting challenge to the
magnetar model, and also further evidence for a large as-yet-undetected
population.
\acknowledgements
We thank G. Israel for providing the HRC position and
\textit{BeppoSAX} flux and uncertainties of the possible
counterpart to \psr. V.M.K. acknowledges funding from NSERC via a
Discovery Grant and Steacie Supplement, the FQRNT, and CIAR. B.M.G
acknowledges support from Chandra GO grant GO3-4089X, awarded by the
SAO.
\bibliographystyle{apj}
\section{Introduction}
A 0-1 matrix $M$ \emph{contains} a 0-1 matrix $P$ if some submatrix of $M$ either equals $P$ or can be turned into $P$ by changing some ones to zeroes. Otherwise $M$ \emph{avoids} $P$. The function $\ex(n, P)$ is the maximum number of ones in any 0-1 matrix of dimensions $n \times n$ that avoids $P$.
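For example,
\[ M=\begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix} \]
contains $P=\begin{bmatrix} 1 & 1 \end{bmatrix}$ (its first row), but avoids $\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$, since producing the latter would require changing a zero of $M$ to a one, which containment does not allow.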
The function $\ex(n, P)$ has been used for many applications, including resolving the Stanley-Wilf conjecture \cite{MT} and bounding the maximum number of unit distances in a convex $n$-gon \cite{Fure}, the complexity of algorithms for minimizing rectilinear path distance while avoiding obstacles \cite{Mit}, the maximum number of edges in ordered graphs on $n$ vertices avoiding fixed ordered subgraphs \cite{KM, PT, weid}, and the maximum lengths of sequences that avoid certain subsequences \cite{Pet}.
It is easy to see that $\ex(n, P) = \ex(n, P')$ if $P'$ is obtained from $P$ by reflections over horizontal or vertical lines or ninety-degree rotations. It is also obvious that if $P'$ contains $P$, then $\ex(n, P) \leq \ex(n, P')$.
If $P$ has at least two entries and at least one $1$-entry, then $\ex(n, P) \geq n$ since there exists a matrix with ones only in a single column or a single row that avoids $P$. For example, $\ex(n, \begin{bmatrix}
1 & 1
\end{bmatrix}) = n$ since the $n \times n$ matrix with ones only in the first column and zeroes elsewhere avoids $\begin{bmatrix}
1 & 1
\end{bmatrix}$ and every 0-1 matrix of dimensions $n \times n$ with $n+1$ ones has a row with at least two ones. It is also easy to see that $\ex(n, P) = (k-1)n$ when $P$ is a $1 \times k$ matrix with all ones: the $n \times n$ matrix with ones only in the first $k-1$ columns and zeroes elsewhere avoids $P$, while every 0-1 matrix with dimensions $n \times n$ and $(k-1)n+1$ ones has a row with at least $k$ ones.
Since the 0-1 matrix extremal function has a linear lower bound for all 0-1 matrices except those with all zeroes or just one entry, it is natural to ask which 0-1 matrices have linear upper bounds on their extremal functions. F\"{u}redi and Hajnal posed the problem of finding all 0-1 matrices $P$ such that $\ex(n, P) = O(n)$ \cite{FH}. Their problem has only been partially answered.
Marcus and Tardos proved that $\ex(n, P) = O(n)$ for every permutation matrix $P$ \cite{MT}. This linear bound was extended in \cite{G} to tuple permutation matrices, which are obtained by replacing every column of a permutation matrix with multiple adjacent copies of itself.
Keszegh \cite{Ke}, Tardos \cite{gtar} and F\"{u}redi and Hajnal \cite{FH} found multiple operations that can be used to construct new linear 0-1 matrices (matrices $P$ for which $\ex(n, P) = O(n)$) from known linear 0-1 matrices. No one has found a way to determine whether an arbitrary 0-1 matrix is linear just by looking at it. However, one approach that might eventually resolve the F\"{u}redi-Hajnal problem is to identify all minimally non-linear 0-1 matrices.
A 0-1 matrix $P$ is called \emph{minimally non-linear} if $\ex(n, P) = \omega(n)$ but $\ex(n, P') = O(n)$ for every $P'$ that is strictly contained in $P$. If $M$ contains a minimally non-linear 0-1 matrix, then $\ex(n, M)$ is non-linear. If $M$ avoids all minimally non-linear 0-1 matrices, then $\ex(n, M)$ is linear. Thus identifying all minimally non-linear 0-1 matrices is equivalent to solving F\"{u}redi and Hajnal's problem.
Keszegh \cite{Ke} constructed a class $H_{k}$ of 0-1 matrices for which $\ex(n, H_{k}) = \Theta(n \log{n})$ and conjectured the existence of infinitely many minimally non-linear 0-1 matrices contained in the class. This conjecture was confirmed in \cite{G}, without actually constructing an infinite family of minimally non-linear 0-1 matrices.
There are only seven minimally non-linear 0-1 matrices with $2$ rows. These matrices include $\begin{bmatrix}
1 & 1\\
1 & 1
\end{bmatrix}, \begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 1
\end{bmatrix}, \begin{bmatrix}
0 & 1 & 1 \\
1 & 0 & 1
\end{bmatrix}, \begin{bmatrix}
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1
\end{bmatrix}$, and reflections of the last three over a vertical line.
In this paper, we bound the number of minimally non-linear 0-1 matrices with $k$ rows for $k > 2$. In order to obtain upper bounds for this number, we bound the ratio between the length and width of a minimally non-linear 0-1 matrix. We also investigate similar problems for sequences and ordered graphs.
In Section \ref{seq}, we bound the lengths as well as the number of minimally non-linear sequences with $k$ distinct letters. These bounds are easier to obtain than the bounds on minimally non-linear 0-1 matrices and ordered graphs, since they rely mainly on the fact that every minimally non-linear sequence not isomorphic to $ababa$ must avoid $ababa$.
In Section \ref{01mat}, we bound the number of minimally non-linear 0-1 matrices with $k$ rows. We also prove that the ratio between the length and width of a minimally non-linear 0-1 matrix is at most $4$ and that a minimally non-linear 0-1 matrix with $k$ rows has at most $5k-3$ ones. In Section \ref{ordg}, we find corresponding bounds for extremal functions of forbidden ordered graphs.
\section{Minimally non-linear patterns in sequences}\label{seq}
A sequence $u$ contains a sequence $v$ if some subsequence of $u$ is isomorphic to $v$. Otherwise $u$ avoids $v$. If $u$ has $r$ distinct letters, then the function $\Ex(u, n)$ is the maximum possible length of any sequence that avoids $u$ with $n$ distinct letters in which every $r$ consecutive letters are distinct.
Like the extremal function $\ex(n, P)$ for forbidden 0-1 matrices, $\Ex(u, n)$ has been used for many applications in combinatorics and computational geometry. These applications include upper bounds on the complexity of lower envelopes of sets of polynomials of bounded degree \cite{3}, the complexity of faces in arrangements of arcs with a limited number of crossings \cite{1}, and the maximum possible number of edges in $k$-quasiplanar graphs on $n$ vertices with no pair of edges intersecting in more than $t$ points \cite{5, 10}.
Minimal non-linearity for $\Ex(u, n)$ is defined as for $\ex(n, P)$. Only the sequences equivalent to $ababa$, $abcacbc$, or its reversal are currently known to be minimally non-linear, but a few other minimally non-linear sequences are known to exist \cite{petmnl}.
In order to bound the number of minimally non-linear sequences with $k$ distinct letters, we bound the length of such sequences in terms of the extremal function $\Ex(a b a b a, k)$, which satisfies $\Ex(ababa,k) \sim 2k\alpha(k)$ \cite{7, 8}.
In the next proof, we use a well-known fact about the function $\Ex(u, n)$ \cite{7}: If $u$ is a linear sequence and $u'$ is obtained from $u$ by inserting the letter $a$ between two adjacent occurrences of $a$ in $u$, then $u'$ is linear.
\begin{lem}
The maximum possible length of a minimally non-linear sequence with $k$ distinct letters is at most $2 \Ex(a b a b a, k)$.
\end{lem}
\begin{proof}
First we claim that there is no immediate repetition of letters of length greater than $2$ in a minimally non-linear sequence. Suppose for contradiction that there is a minimally non-linear sequence $u$ with a repetition of length at least $3$.
Remove one of the letters in the repetition to get $u'$. Since $u'$ is properly contained in $u$, it is linear by minimality, but then inserting the letter back still gives a linear sequence by the well-known fact stated before this lemma, a contradiction.
If $u$ is not isomorphic to $ababa$, then $u$ avoids $ababa$, since $ababa$ is non-linear and every pattern properly contained in the minimally non-linear sequence $u$ is linear. Hence the number of segments of repeated letters in $u$ is at most $\Ex(ababa, k)$, and $u$ has length at most $2 \Ex(ababa, k)$ since each segment has length at most $2$.
\end{proof}
\begin{cor}
The number of minimally non-linear sequences with $k$ distinct letters is at most $2k \sum_{i=1}^{\Ex(ababa,k)}(2k-2)^{i-1}$.
\end{cor}
\begin{proof}
The number of segments of repeated letters is at most $\Ex(ababa, k)$. Each segment can be filled with one of at most $k$ letters, with length $1$ or $2$, with no adjacent segments having the same letters.
Thus there are at most $2k$ choices for the first segment and at most $2k-2$ choices for the remaining segments. So the number of such sequences is bounded by $2k \sum_{i=1}^{\Ex(ababa,k)}(2k-2)^{i-1}$.
\end{proof}
\section{Minimally non-linear patterns in 0-1 matrices}\label{01mat}
Although the existence of infinitely many minimally non-linear 0-1 matrices was proved in \cite{G}, only finitely many minimally non-linear 0-1 matrices have been identified. It is an open problem to identify an infinite family of minimally non-linear 0-1 matrices.
In this section, we prove an upper bound of $\sum_{i=\lceil (k+2)/4
\rceil}^{4k-2}(i^{k}-(i-1)^{k}) k^{i-1}$ on the number of minimally non-linear 0-1 matrices with $k$ rows. In order to obtain this bound, we first show that any minimally non-linear 0-1 matrix with $k$ rows has at most $4k-2$ columns. Next, we bound the number of minimally non-linear 0-1 matrices with $k$ rows and $c$ columns. We prove this bound by showing that no column of a minimally non-linear 0-1 matrix has multiple ones after leftmost ones are removed from each row, unless the matrix is the $2 \times 2$ matrix of all ones, $\begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 1
\end{bmatrix}$, or its reflection over a horizontal line.
In order to bound the ratio between the length and width of any minimally non-linear 0-1 matrix, we use a few well-known lemmas about 0-1 matrix extremal functions. These facts are proved in \cite{FH, gtar}.
\begin{lem}
\begin{enumerate}
\item If $P$ has two adjacent ones $x$ and $y$ in the same row in columns $c$ and $d$, and $P'$ is obtained from $P$ by inserting a new column between $c$ and $d$ with a single one between $x$ and $y$ and zeroes elsewhere, then $\ex(n, P) \leq \ex(n, P') \leq 2 \ex(n, P)$.
\item If $P'$ is obtained by inserting columns or rows with all zeroes into $P$, then $\ex(n, P') = O(\ex(n, P)+n)$.
\item If $P = \begin{bmatrix}
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1
\end{bmatrix}$, then $\ex(n, P) = \Theta(n \alpha(n))$, where $\alpha(n)$ denotes the inverse Ackermann function.
\end{enumerate}
\end{lem}
The next theorem shows that a minimally non-linear 0-1 matrix cannot be more than four times as long as it is wide. The greatest known ratio between the length and width of a minimally non-linear 0-1 matrix is $2$, for the matrix $\begin{bmatrix}
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1
\end{bmatrix}$.
\begin{thm}\label{main01}
The ratio of width over height of any minimally non-linear 0-1 matrix is between $0.25$ and $4$.
\end{thm}
\begin{proof}
Since the theorem holds for $\begin{bmatrix}
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1
\end{bmatrix}$ and its reflections, suppose that $P$ is a minimally non-linear 0-1 matrix with $k$ rows that is not equal to $\begin{bmatrix}
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1
\end{bmatrix}$ or its reflections.
Let $P'$ be obtained by scanning the columns of $P$ from left to right, keeping a single one in each column. The first column of $P'$ has its one in the first row in which the first column of $P$ has a one. For $i > 1$, if the $i^{th}$ column of $P$ has a single one, in row $r$, then the $i^{th}$ column of $P'$ has its one in row $r$; otherwise, the $i^{th}$ column of $P'$ has its one in the first row in which the $i^{th}$ column of $P$ has a one and the $(i-1)^{st}$ column of $P'$ does not.
The reduction produces a 0-1 matrix with a single one in each column. Let each of the rows $1, \ldots, k$ of $P$ and $P'$ correspond to a letter $a_{1}, \ldots, a_{k}$, and construct a sequence $S$ from $P'$ so that the $i^{th}$ letter of $S$ is $a_{j}$ if and only if $P'$ has a one in row $j$ and column $i$.
By definition, $|S|$ equals the number of columns of the minimally non-linear pattern $P$. No letter occurs three times in immediate succession in $S$: three adjacent equal letters would imply that $P$ has a column with a single one whose immediate left and right neighbors in the same row are also ones, and deleting that column would yield a smaller non-linear matrix by the first part of the preceding lemma, contradicting the minimality of $P$. Also $S$ avoids $abab$ because otherwise $P$ contains
$\begin{bmatrix}
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1
\end{bmatrix}$ or its reflection, which are non-linear. So $|S|\leq 2\Ex(abab, k)=4k-2$, and hence the ratio of width over height of $P$ is less than $4$. Applying the same argument to the transpose of $P$, which is also minimally non-linear, yields the lower bound of $0.25$.
\end{proof}
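The column scan used in the proof above is easy to make concrete. The following Python sketch is our own illustration; encoding $P$ as a list of columns, each a sorted list of row indices of ones, is an assumption of the sketch:
\begin{verbatim}
def reduce_to_sequence(cols):
    # cols[i] is the sorted list of row indices of the ones
    # in column i of P; returns the sequence S of chosen rows
    seq, prev = [], None
    for col in cols:
        if len(col) == 1:              # a lone one: no choice to make
            pick = col[0]
        else:                          # topmost one differing from prev
            pick = next(r for r in col if r != prev)
        seq.append(pick)
        prev = pick
    return seq

# the columns of [[1,0,1,0],[0,1,0,1]] reduce to 0,1,0,1, i.e. abab
print(reduce_to_sequence([[0], [1], [0], [1]]))
\end{verbatim}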
Using the bound that we obtained on the number of columns in a minimally non-linear 0-1 matrix with $k$ rows, next we prove that the number of ones in a minimally non-linear 0-1 matrix with $k$ rows is at most $5k-3$. Note that any minimally non-linear 0-1 matrix with $k$ rows has at least $k$ ones since it has no rows with all zeroes.
In order to bound the number of ones in a minimally non-linear 0-1 matrix with $k$ rows, we first prove a more general bound on the number of ones in a minimally non-linear 0-1 matrix with $k$ rows and $c$ columns, assuming that it is not the $2 \times 2$ matrix of all ones.
\begin{lem}\label{01edge}
The number of ones in any minimally non-linear 0-1 matrix with $k$ rows and $c$ columns, besides the $2 \times 2$ matrix of all ones, is at most $k+c-1$.
\end{lem}
\begin{proof}
The result is true for $Q = \begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 1
\end{bmatrix}$, so suppose that $P$ is a minimally non-linear 0-1 matrix with $k$ rows that is not equal to $Q$, its reflection $\bar{Q}$ over a horizontal line, or the $2 \times 2$ matrix $R$ of all ones. Then $P$ must avoid $Q$, $\bar{Q}$, and $R$.
If $P$ has $k$ rows and $c$ columns, then remove the leftmost one in each row to obtain a new matrix $P'$. Matrix $P'$ cannot have any column with multiple ones, since otherwise $P$ would contain $Q$, $\bar{Q}$, or $R$. Moreover, the first column of $P'$ has no ones, since every one in the first column of $P$ is the leftmost one of its row. Thus $P'$ has at most $c-1$ ones, so $P$ has at most $k+c-1$ ones.
\end{proof}
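The reduction in this proof is straightforward to state in code. The sketch below (ours; it assumes every row of the input contains a one, which holds for minimally non-linear patterns) removes the leftmost one of each row; the example matrix avoids $Q$, $\bar{Q}$, and $R$, and indeed no column of the output has two ones:
\begin{verbatim}
def remove_leftmost_ones(M):
    # delete the leftmost one of each row of the 0-1 matrix M
    M = [row[:] for row in M]
    for row in M:
        row[row.index(1)] = 0
    return M

P = [[1, 0, 1, 0],
     [0, 1, 0, 1]]
print(remove_leftmost_ones(P))  # [[0, 0, 1, 0], [0, 0, 0, 1]]
\end{verbatim}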
\begin{cor}
The number of ones in any minimally non-linear 0-1 matrix with $k$ rows is at most $5k-3$.
\end{cor}
\begin{proof}
Suppose that the minimally non-linear 0-1 matrix $P$ has $k$ rows and $c$ columns. If $P$ is the $2 \times 2$ matrix of all ones, then it has $4 \leq 5 \cdot 2 - 3$ ones. Otherwise, Lemma \ref{01edge} and the bound $c \leq 4k-2$ from the proof of Theorem \ref{main01} show that $P$ has at most $k+c-1 \leq 5k-3$ ones.
\end{proof}
Using the bound on the number of columns in a minimally non-linear 0-1 matrix with $k$ rows, combined with the technique that we used to bound the number of ones in a minimally non-linear 0-1 matrix with $k$ rows, we prove an upper bound on the number of minimally non-linear 0-1 matrices with $k$ rows.
\begin{cor}
For $k > 2$, the number of minimally non-linear 0-1 matrices with $k$ rows is at most $\sum_{i=\lceil (k+2)/4 \rceil}^{4k-2}(i^{k}-(i-1)^{k})k^{i-1}$.
\end{cor}
\begin{proof}
In a minimally non-linear 0-1 matrix with $k$ rows and $i$ columns, there are at most $i^k-(i-1)^{k}$ possible placements of the leftmost ones of the $k$ rows, since placing all leftmost ones in the rightmost $i-1$ columns would make the first column empty, which is impossible. After the leftmost ones are deleted from each row, each column except the first has at most a single one. If no one was removed from a column, then the column remains non-empty, so its single one lies in one of $k$ possible rows. If at least one one was removed from a column, say from row $r$, then the reduced column is either empty or has its single one in a row other than $r$, again at most $k$ possibilities. In either case, every column except the first has at most $k$ possibilities, leaving at most $k^{i-1}$ possible reduced matrices. Moreover, the number of columns $i$ of a minimally non-linear 0-1 matrix with $k$ rows satisfies $\lceil (k+2)/4 \rceil \leq i \leq 4k-2$.
\end{proof}
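The resulting bound is simple to evaluate; the following Python sketch (the function name is ours) computes it for a given $k$:
\begin{verbatim}
from math import ceil

def mnl_matrix_bound(k):
    # sum over the admissible column counts i
    return sum((i ** k - (i - 1) ** k) * k ** (i - 1)
               for i in range(ceil((k + 2) / 4), 4 * k - 1))

print(mnl_matrix_bound(3))  # the bound for k = 3
\end{verbatim}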
\section{Minimally non-linear patterns in ordered graphs}\label{ordg}
In this section, we prove bounds on parameters of minimally non-linear ordered graphs. The definitions of avoidance, extremal functions, and minimal non-linearity for ordered graphs are analogous to the corresponding definitions for 0-1 matrices.
If $H$ and $G$ are any ordered graphs, then $H$ avoids $G$ if no subgraph of $H$ is order isomorphic to $G$. The extremal function $\ex_{<}(n, G)$ is the maximum possible number of edges in any ordered graph with $n$ vertices that avoids $G$.
Past research on $\ex_{<}$ has identified similarities with the 0-1 matrix extremal function $\ex$. For example, Klazar and Marcus \cite{KM} proved that $\ex_{<}(n, G) = O(n)$ for every ordered bipartite matching $G$ with interval chromatic number $2$. This is analogous to the result of Marcus and Tardos \cite{MT} that $\ex(n, P) = O(n)$ for every permutation matrix $P$. Weidert also identified several parallels between $\ex_{<}$ and $\ex$ \cite{weid}, including linear bounds on extremal functions of forbidden tuple matchings with interval chromatic number $2$. These bounds are analogous to the linear bounds for tuple permutation matrices proved in \cite{G}.
In order to prove results about minimally non-linear ordered graphs, we use two lemmas about $\ex_{<}(n, G)$. The first is from \cite{weid}:
\begin{lem}
\cite{weid} If $G'$ is created from $G$ by inserting a single vertex $v$ of degree one between two consecutive vertices that are both adjacent to $v$'s neighbor, then $\ex_{<}(n, G') \leq 2\ex_{<}(n, G)$.
\end{lem}
The second lemma and its proof are due to Gabor Tardos (private communication) \cite{gtardos}.
\begin{lem}
\cite{gtardos} If $G'$ is an ordered graph obtained from $G$ by adding an edgeless vertex, then $\ex_{<}(n, G') = O(\ex_{<}(n, G)+n)$.
\end{lem}
\begin{proof}
For simplicity, assume the new isolated vertex in $G'$ is neither first nor last. Let $H'$ be an ordered graph avoiding $G'$. Take a uniform random sample $R$ of the vertices of $H'$, then select a subset $S$ of $R$ deterministically by discarding the second vertex of every pair of consecutive vertices of $V(H')$ that are both in $R$. Now $S$ contains no pair of consecutive vertices, so $H=H'[S]$ avoids $G$, since an isolated vertex can be inserted between any two vertices of $S$. Every edge of $H'$ has probability at least $1/16$ of being in $H$, except the edges connecting consecutive vertices, which have no chance. Thus $w(H')<16\,E[w(H)]+n$ and we are done.
\end{proof}
Most of the results that we prove in this section about minimal non-linearity for the extremal function $\ex_{<}$ are analogous to the results that we proved in the last section about minimal non-linearity for the 0-1 matrix extremal function $\ex$. First we prove that the number of edges in any minimally non-linear ordered graph with $k$ vertices is at most $2k-2$. Since a minimally non-linear ordered graph has no isolated vertices, every vertex has degree at least one, which gives a lower bound of $k/2$ on the number of edges.
\begin{thm}\label{allord}
Any minimally non-linear ordered graph with $k$ vertices has at most $2k-2$ edges.
\end{thm}
\begin{proof}
For a 0-1 matrix $P$, define $G_o\left(P\right)$ to be the family of all bipartite ordered graphs that decompose uniquely into two independent sets forming a 0-1 matrix equivalent to $P$ when the vertices of one set are arranged as columns and the vertices of the other as rows, in either increasing or decreasing order, with edges corresponding to ones. Then every element of $G_o\left(\begin{bmatrix}
1 & 0 & 1\\
0 & 1 & 1
\end{bmatrix}\right) \cup G_o\left(\begin{bmatrix}
1 & 1\\
1 & 1
\end{bmatrix}\right)$ is non-linear for $\ex_<$, since $\begin{bmatrix}
1 & 0 & 1\\
0 & 1 & 1
\end{bmatrix}$ and $\begin{bmatrix}
1 & 1\\
1 & 1
\end{bmatrix}$ are non-linear for $\ex$ and any ordered graph with interval chromatic number more than $2$ is non-linear for $\ex_<$ \cite{weid}.
The theorem is true for every element of $G_o\left(\begin{bmatrix}
1 & 0 & 1\\
0 & 1 & 1
\end{bmatrix}\right) \cup G_o\left(\begin{bmatrix}
1 & 1\\
1 & 1
\end{bmatrix}\right)$, so let $G$ be a minimally non-linear ordered graph that is not equal to any element of $G_o\left(\begin{bmatrix}
1 & 0 & 1\\
0 & 1 & 1
\end{bmatrix}\right) \cup G_o\left(\begin{bmatrix}
1 & 1\\
1 & 1
\end{bmatrix}\right)$.
Thus $G$ avoids every element of $G_o\left(\begin{bmatrix}
1 & 0 & 1\\
0 & 1 & 1
\end{bmatrix}\right) \cup G_o\left(\begin{bmatrix}
1 & 1\\
1 & 1
\end{bmatrix}\right)$. Write each edge as $e=(v_i,v_j)$ with $v_i<v_j$. For each vertex $v_i$, remove the edge $(v_i,v_j)$ in which $v_j$ is the smallest vertex $t$ such that $(v_i,t)\in E(G)$. At most $k-1$ edges are removed, since the largest vertex is not the left endpoint of any edge.
The resulting graph $G'$ cannot have two edges $(v_i, v_k)$ and $(v_j, v_k)$ ending at the same vertex $v_k$. Indeed, if it did, then there would exist $v_i<v_a<v_k$ and $v_j<v_b<v_k$ such that $(v_i,v_a)$, $(v_i,v_k)$, $(v_j,v_b)$, and $(v_j,v_k)$ are all in $E(G)$, and therefore $G$ would contain some element of $G_o\left(\begin{bmatrix}
1 & 0 & 1\\
0 & 1 & 1
\end{bmatrix}\right) \cup G_o\left(\begin{bmatrix}
1 & 1\\
1 & 1
\end{bmatrix}\right)$, a contradiction. Note that $v_a$ and $v_b$ may coincide. Hence every vertex is the right endpoint of at most one edge of $G'$, and the minimal vertex $v_1$ is the right endpoint of no edge. Thus $|E(G')| \leq k-1$, so $|E(G)|\leq 2k-2$.
\end{proof}
The next result is analogous to the ratio bound for 0-1 matrices in Theorem \ref{main01}, except rows and columns are replaced by the parts of a bipartite ordered graph.
\begin{thm}
Any minimally non-linear bipartite ordered graph with $k$ vertices in one part has at most $4k-2$ vertices in the other part.
\end{thm}
\begin{proof}
Given a minimally non-linear bipartite ordered graph $G$, assume without loss of generality that the first part $U$ has $k$ vertices. For each vertex $v_i$ of the second part $V$, we choose a neighbor in the first part by a process analogous to the one used for 0-1 matrices: if $v_i$ has only one neighbor, then pick it; otherwise, pick the smallest neighbor different from the one chosen for $v_{i-1}$.
This yields a sequence over at most $k$ distinct letters with no immediate repetition of length more than $2$, since a longer repetition would contradict the minimal non-linearity of $G$. Moreover, the sequence avoids $abab$, or else $G$ would contain some element of
$G_o\left(\begin{bmatrix}
1 & 0 & 1 & 0 \\
0 & 1 & 0 & 1
\end{bmatrix}\right)$, all of whose elements are non-linear. Hence the sequence, and thus the second part $V$, has length at most $2\Ex(abab,k)=4k-2$.
\end{proof}
Next we obtain an upper bound of $k-1$ on the number of edges in minimally non-linear bipartite ordered graphs with $k$ vertices unless the underlying graph is $K_{2,2}$. This bound is half the upper bound for minimally non-linear ordered graphs in Theorem \ref{allord}. The lemma that we use to obtain this bound is analogous to Lemma \ref{01edge}, which we used to bound the number of ones in minimally non-linear 0-1 matrices.
\begin{lem}
The number of edges in any minimally non-linear bipartite ordered graph with $w$ vertices in one part and $h$ vertices in the other part, besides ordered graphs whose underlying graph is $K_{2,2}$, is at most $w+h-1$.
\end{lem}
\begin{proof}
The result is clear if $G$ is an element of $G_{o}\left(
\begin{bmatrix}
1 & 0 & 1\\
0 & 1 & 1
\end{bmatrix}
\right)$, so suppose that $G$ is a minimally nonlinear bipartite ordered graph that is not an element of $G_{o}\left(
\begin{bmatrix}
1 & 0 & 1\\
0 & 1 & 1
\end{bmatrix}
\right)\cup G_{o}\left(
\begin{bmatrix}
1 & 1\\
1 & 1
\end{bmatrix}
\right)$. For each vertex $u\in U$, remove the edge $(u,v)\in E(G)$ with the smallest possible $v\in V$, regardless of whether $u>v$ or $u<v$. Since $G$ has no isolated vertices, we remove exactly $|U|$ edges.
Each $v_k\in V$ has at most one neighbor in the resulting graph $G'$. Indeed, if it had two, say via edges $(u_a,v_k)$ and $(u_b,v_k)$, then there would be $v_i$ and $v_j$, possibly identical, such that $v_i<v_k$, $v_j<v_k$, and $(u_a,v_i), (u_b,v_j)\in E(G)$. Then $G$ would contain some element of $G_{o}\left(
\begin{bmatrix}
1 & 0 & 1\\
0 & 1 & 1
\end{bmatrix}
\right)\cup G_{o}\left(
\begin{bmatrix}
1 & 1\\
1 & 1
\end{bmatrix}
\right)$, a contradiction. Moreover, the smallest vertex of $V$ has no neighbor in $G'$, since any edge incident to it is the smallest edge of its endpoint in $U$ and was therefore removed. Hence $|E(G')|\leq |V|-1$, so $|E(G)|\leq |U|+|V|-1=|V(G)|-1$.
\end{proof}
\begin{cor}
The number of edges in any minimally non-linear bipartite ordered graph with $k$ total vertices is at most $k-1$ unless the underlying graph is $K_{2,2}$, and the number of edges in any minimally non-linear bipartite ordered graph with $k$ vertices in one part is at most $5k-3$.
\end{cor}
\begin{cor}
For $k > 2$, the number of minimally non-linear bipartite ordered graphs with $k$ nodes in one part is at most $\sum_{i=\lceil (k+2)/4 \rceil}^{4k-2}\binom{k+i}{k}(i^{k}-(i-1)^{k})k^{i-1}$.
\end{cor}
\section{Open Problems}
We proved bounds for the following problems, but none of these problems are completely resolved.
\begin{enumerate}
\item
\begin{enumerate}
\item For each $k > 0$, what is the maximum possible length of a minimally non-linear sequence with $k$ distinct letters?
\item How many minimally non-linear sequences have $k$ distinct letters?
\item Characterize all minimally non-linear sequences with $k$ distinct letters.
\end{enumerate}
\item
\begin{enumerate}
\item What is the maximum possible ratio between the length and width of a minimally non-linear 0-1 matrix?
\item For each $k > 0$, what is the maximum possible number of columns in a minimally non-linear 0-1 matrix with $k$ rows?
\item What is the maximum possible number of ones in a minimally non-linear 0-1 matrix with $k$ rows?
\item How many minimally non-linear 0-1 matrices have $k$ rows?
\item Characterize all minimally non-linear 0-1 matrices with $k$ rows.
\end{enumerate}
\item
\begin{enumerate}
\item What is the maximum possible ratio between the sizes of the parts of a minimally non-linear bipartite ordered graph?
\item For each $k > 0$, what is the maximum possible number of vertices in the second part in a minimally non-linear bipartite ordered graph with $k$ vertices in the first part?
\item What is the maximum possible number of edges in a minimally non-linear bipartite ordered graph with $k$ total vertices?
\item What is the maximum possible number of edges in a minimally non-linear bipartite ordered graph with $k$ vertices in one part?
\item How many minimally non-linear bipartite ordered graphs have $k$ vertices in one part?
\item Characterize all minimally non-linear bipartite ordered graphs with $k$ vertices in one part.
\end{enumerate}
\item
\begin{enumerate}
\item For each $k > 0$, what is the maximum possible number of edges in a minimally non-linear ordered graph with $k$ vertices?
\item Characterize all minimally non-linear ordered graphs with $k$ vertices.
\end{enumerate}
\end{enumerate}
\section{Acknowledgments}
CrowdMath is an open program created by the MIT Program for Research in Math, Engineering, and Science (PRIMES) and Art of Problem Solving that gives high school and college students all over the world the opportunity to collaborate on a research project. The 2016 CrowdMath project is online at http://www.artofproblemsolving.com/polymath/mitprimes2016. The authors thank Gabor Tardos for proving that if $G'$ is an ordered graph obtained from $G$ by adding an edgeless vertex, then $\ex_{<}(n, G') = O(\ex_{<}(n, G)+n)$.
\label{sec:intro}
3D object detection is one of the most basic tasks in autonomous driving.
It aims to localize objects in 3D space and classify them simultaneously.
In recent years, LiDAR-based 3D object detection methods have achieved significant success due to the increasing amount of labeled training data~\cite{sun2020waymo, nuscenes}.
However, existing LiDAR-based outdoor 3D object detection methods often adopt the paradigm of training from scratch, which has two drawbacks.
First, the training-from-scratch paradigm relies heavily on extensive labeled data.
For 3D object detection, annotating precise bounding boxes and classification labels is costly and time-consuming, \textit{e.g.}, it takes around 114$s$ to annotate one object~\cite{meng2020weakly} on KITTI\cite{geiger2012we}.
Second, in many practical scenarios, self-driving vehicles generate massive amounts of unlabeled point cloud data daily, which cannot be exploited in the training-from-scratch paradigm.
\begin{figure}[!t]
\centering
\includegraphics[width=0.98\linewidth]{images/illustration.pdf}
\caption{\textbf{Illustration of several masking strategies in the masked modeling.}
MAE\cite{DBLP:conf/cvpr/mae} masks non-overlapping image patches. BERT\cite{devlin2018bert} masks words or sentences. Point-MAE\cite{DBLP:journals/corr/pointmae} uses furthest point sampling and k-nearest neighbors to create overlapping point patches for masking.
Our method (right) projects point clouds into a BEV plane and masks point clouds in non-overlapping BEV grids.
}
\label{fig:illustration}
\vspace{-10pt}
\end{figure}
A simple and desirable solution to alleviate the above two problems is self-supervised pre-training, which is widely used in fields such as computer vision~\cite{he2020momentum, chen2020simple} and natural language processing~\cite{DBLP:conf/nips/Brownlanguagefewshot}.
It can learn a general and transferable feature representation on large-scale unlabeled data by solving pre-designed pretext tasks\cite{huang2022survey}.
One of the mainstream approaches in self-supervised learning is masked modeling, as illustrated in \figref{fig:illustration}.
Specifically, researchers adopt masked image modeling~\cite{DBLP:conf/cvpr/mae,DBLP:conf/cvpr/simmim} and masked language modeling~\cite{devlin2018bert} to pre-train networks by reconstructing images, words, and sentences from masked inputs.
Recently, works~\cite{DBLP:conf/cvpr/pointbert, DBLP:journals/corr/pointmae} propose masked point modeling for small and dense point clouds, achieving promising performance on shape classification, shape segmentation, and indoor 3D object detection.
However, these works mainly focus on synthetic or indoor datasets, such as ShapeNet~\cite{DBLP:journals/corr/shapenet}, ModelNet40~\cite{DBLP:conf/cvpr/modelnet}, and ScanNet~\cite{dai2017scannet}.
When applied to outdoor point clouds, where the scenes are much larger and the points much sparser, their results are unsatisfactory.
Only a few works\cite{liang2021exploring, DBLP:conf/eccv/proposal} have explored self-supervised pre-training for outdoor point clouds.
Moreover, since data augmentation for outdoor point clouds is not as diverse as for 2D images, these contrastive-learning-based methods need complicated pre-processing to produce pseudo instances\cite{liang2021exploring} or region proposals\cite{DBLP:conf/eccv/proposal} to avoid model collapse\cite{chen2020simple,DBLP:conf/iclr/simsiam}.
In this paper, we present bird's eye view masked autoencoders, dubbed BEV-MAE, designed specifically for pre-training 3D object detectors on outdoor point clouds.
In particular, instead of randomly masking point clouds or voxels, we propose a BEV guided masking strategy (right part of \figref{fig:illustration}) for two benefits.
First, we force the 3D encoder to learn feature representation in a BEV perspective by reconstructing masked information on the BEV plane.
Therefore, during fine-tuning, the pre-trained 3D encoder can facilitate the training process of 3D detectors in the BEV perspective.
Second, current 3D encoders of LiDAR-based 3D object detectors often downsample the resolution of points or voxels to save the computational overhead, \textit{e.g.}, GPU memory and training time.
By using the BEV guided masking strategy, we do not need to design complicated decoders with upsampling operations\cite{shi2020unet}, since the size of the masked grids matches the resolution of the BEV features.
We find that BEV-MAE can achieve promising results with a simple one-layer 3$\times$3 convolution as the decoder.
Besides, since the commonly used sparse 3D convolutions only perform computation near the occupied areas, the receptive field size of the 3D encoder may become smaller with masked point inputs, resulting in low learning efficiency and poor transferability of the 3D encoder.
To alleviate this issue, during pre-training, we replace the masked points with a shared learnable point token to maintain a consistent receptive field size of the 3D encoder with fine-tuning.
The shared learnable point token enables communication between unmasked areas without providing additional information that would reduce the difficulty of the pre-training task.
Moreover, in addition to using coordinates of masked point clouds as the reconstruction target~\cite{DBLP:journals/corr/pointmae, DBLP:journals/corr/m2ae}, we introduce point density prediction for masked areas.
Since the point clouds become sparse when they are far from the LiDAR sensor for outdoor scenes, the density of point clouds can reflect the distance between points and the central LiDAR sensor.
Naturally, density prediction can guide models to learn location information, which is critical for object detection.
Our main contributions can be summarized as follows:
\begin{itemize}
\vspace{-1pt}
\item We present a simple and effective self-supervised pre-training method, BEV-MAE, tailored for 3D object detectors on outdoor point clouds.
%
By using the proposed BEV guided masking strategy, the 3D encoder of 3D object detectors can directly learn feature representation in a BEV perspective with a simple and light decoder.
\vspace{-1pt}
\item We introduce a shared learnable point token to alleviate the inconsistency of the receptive field size for the 3D encoder during pre-training and fine-tuning, and propose density prediction to better learn location information for the 3D encoder.
\vspace{-1pt}
\item Compared with existing self-supervised methods, the proposed BEV-MAE achieves new state-of-the-art performance on two popular outdoor 3D object detection benchmarks, \textit{i.e.}, Waymo\cite{sun2020waymo} and nuScenes\cite{nuscenes},
with less than 1/4 pre-training epochs.
\end{itemize}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.99\linewidth]{images/pipeline.pdf}
\caption{\textbf{The overall pipeline of BEV-MAE.} We first mask point clouds with the BEV guided masking strategy. Then, the masked points are replaced with a shared learnable point token.
After extracting BEV features by a 3D encoder from visible points, we send the features to a light decoder to reconstruct masked point clouds and predict the point density of masked grids.
}
\label{fig:pipeline}
\vspace{-3pt}
\end{figure*}
\section{Related Work}
\label{sec:related work}
\subsection{3D Object Detection}
The objective of 3D object detection is to localize objects of interest with 3D bounding boxes and classify the detected objects.
Due to the large domain gap between indoor and outdoor datasets, the corresponding 3D object detection methods in the two cases have been developing almost independently. Here we focus on the works about outdoor 3D object detection.
Recent outdoor 3D object detection approaches\cite{yan2018second, DBLP:conf/cvpr/voxelnet, lang2019pointpillars, DBLP:journals/corr/pvrcnnpp} based on BEV representation attract much attention for their convenience in fusing different information, including multi-view\cite{DBLP:journals/corr/bevdet,DBLP:conf/nips/perception}, multi-modality\cite{DBLP:journals/corr/bevfusion, DBLP:journals/corr/bevfusionmit}, and temporal inputs\cite{DBLP:journals/corr/bevdet4d}.
To obtain BEV features, LiDAR-based methods first extract point features by a 3D encoder, such as PointPillar\cite{lang2019pointpillars} and VoxelNet\cite{DBLP:conf/cvpr/voxelnet}, and then project the features onto a BEV plane according to the point coordinates.
For camera-based approaches, they extract the 2D features from multi-view images by a prevalent 2D backbone, including ResNet\cite{he2016deep} and SwinTransformer\cite{liu2021swin}, and utilize geometry-based view transformation\cite{DBLP:conf/eccv/lift, DBLP:journals/corr/bev} to construct BEV features from multi-view image features.
After that, for both types of methods, a 2D encoder and a detection head\cite{yan2018second, yin2020center} are applied to process the BEV features and predict the final detection results.
In essence, this pipeline reduces the task of 3D object detection to 2D object detection.
Hence, these methods can take full advantage of the highly developed 2D object detection algorithms.
Similarly, our work pre-trains the 3D encoder of 3D object detectors in the BEV space and can directly facilitate the 3D detection task in the BEV perspective.
\subsection{Self-supervised Learning for Point Clouds}
The main idea of self-supervised learning is to train networks on fully unlabeled data with pretext tasks.
Since no prior human knowledge is introduced, the pre-trained networks can learn a more general feature representation and have an excellent transfer ability.
Recently, 3D representation learning of point clouds has attempted to adopt the core idea of self-supervised learning.
PointContrast\cite{xie2020pointcontrast} samples different views from one scene and utilizes contrastive learning to discriminate the positive views.
DepthContrast\cite{DBLP:conf/iccv/depthcontrast} transforms RGB-D images into point clouds and further extends PointContrast by using two different types of point encoders.
STRL\cite{DBLP:conf/iccv/strl} constructs positive samples by using spatial augmentation from videos.
In addition to contrastive learning methods, masked modeling for 3D point clouds has been studied in recent years for its simplicity and generality across different fields.
OcCo\cite{DBLP:conf/iccv/occo} proposes to drop points from a camera view and recover the occlusion part of point clouds.
Point-BERT\cite{DBLP:conf/cvpr/pointbert} introduces BEiT-style\cite{DBLP:conf/iclr/beit} masked modeling for 3D point clouds with a pre-trained dVAE\cite{DBLP:conf/iclr/dvae} and achieves competitive performance on shape classification, shape segmentation, and indoor 3D object detection.
Point-MAE\cite{DBLP:journals/corr/pointmae} removes the training stage of dVAE in Point-BERT and reconstructs the masked points directly following MAE\cite{DBLP:conf/cvpr/mae}.
Point-M2AE\cite{DBLP:journals/corr/m2ae} proposes a U-Net transformer architecture and adopts a multi-scale masking strategy to learn the multi-scale spatial geometries.
However, these methods are mainly applied to the synthetic or indoor dataset.
They fail to be directly transferred to large-scale outdoor point clouds due to the large domain gap between datasets.
To address this issue, more recently, GCC-3D\cite{liang2021exploring} proposes geometry-aware contrast and harmonized cluster to learn both geometry and semantic information from outdoor point clouds.
ProposalConstrast\cite{DBLP:conf/eccv/proposal} samples region proposals for each point cloud with farthest point sampling (FPS) and jointly optimizes inter-proposal discrimination and inter-cluster separation for outdoor point clouds.
However, these works need complex pre-processing on input point clouds to produce pseudo instances or region proposals.
In this paper, we propose a simple masked modeling framework, BEV-MAE, for pre-training large-scale outdoor point clouds.
Without any pre-processing on point clouds, BEV-MAE adopts a BEV guided masking strategy to learn BEV feature representation and achieves new state-of-the-art self-supervised learning performance.
\section{Method}
In this section, we introduce the proposed BEV-MAE in detail. The overall pipeline is shown in \figref{fig:pipeline}.
BEV-MAE first uses a BEV guided masking strategy to mask point clouds. Then, these masked points are replaced with a shared learnable point token.
After that, we send the processed points into a 3D encoder and a light decoder sequentially.
Finally, the light decoder will reconstruct the masked points and predict the point density of the masked area.
\subsection{BEV Guided Masking Strategy}
In outdoor LiDAR-based 3D object detection, the point clouds are often divided into regular voxels.
Naturally, a straightforward masking strategy is to mask the voxels like masking patches in vision\cite{DBLP:conf/cvpr/mae, DBLP:conf/cvpr/simmim}.
However, this simple voxel masking strategy makes the subsequent decoder design difficult. Besides, it takes little account of the type of feature representation used in mainstream outdoor 3D object detection methods, \textit{i.e.}, BEV feature representation.
To this end, we propose a BEV guided masking strategy to mask points in the BEV plane.
Specifically, assuming the resolution of the features in the BEV perspective after encoding and transformation is $X\times Y\times C$, we first pre-define a grid-shaped BEV plane with the size of $X\times Y$.
Then, we project each LiDAR point $p_k$ into a corresponding BEV grid $g_{i,j}$ of the pre-defined BEV plane according to its point coordinates $(x_{p_k},y_{p_k})$.
Obviously, each BEV grid will contain a various number of points:
\begin{equation}
g_{i,j} = \{p_k~|~ \lfloor x_{p_k}/d \rfloor = i,~\lfloor y_{p_k}/d \rfloor = j\},
\end{equation}
where $d$ is the downsample ratio of the 3D encoder and $\lfloor x \rfloor$ denotes rounding down of $x$.
After that, we randomly select a large fraction of non-empty BEV grids, \textit{i.e.}, $g_{i,j}\neq \varnothing$, as the masked grids $\{g^m_i\}$ and denote the remaining BEV grids as visible grids $\{g^v_i\}$.
Finally, we obtain the visible and masked point clouds by merging the points in $\{g^m_i\}$ and $\{g^v_i\}$, formulated as $\{p_k^v\} = \mathop{\cup}\limits_{\tiny{i}} g^v_i$ and $\{p_k^m\} = \mathop{\cup}\limits_{\tiny{i}} g^m_i$ respectively.
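As a concrete illustration, the masking step above can be sketched in a few lines of NumPy. All names below are ours, and the sketch is simplified (it operates on raw points rather than voxelized features), so it should not be read as the released implementation:
\begin{verbatim}
import numpy as np

def bev_guided_mask(points, d, mask_ratio=0.7, rng=None):
    # points: (N, C) array whose first two channels are x, y;
    # d: BEV grid size matching the encoder's downsample ratio
    rng = rng or np.random.default_rng()
    grid_ids = np.floor(points[:, :2] / d).astype(np.int64)
    _, inverse = np.unique(grid_ids, axis=0, return_inverse=True)
    inverse = inverse.reshape(-1)       # map each point to its grid id
    n_grids = inverse.max() + 1         # number of non-empty BEV grids
    n_mask = int(mask_ratio * n_grids)  # mask a large fraction of them
    masked = rng.choice(n_grids, size=n_mask, replace=False)
    is_masked = np.isin(inverse, masked)
    return points[~is_masked], points[is_masked]  # visible, masked
\end{verbatim}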
\subsection{Learnable Point Token}
The 3D encoder of recent voxel-based 3D object detectors typically consists of several sparse convolution operations, which only process the features near the non-empty voxels.
Thus, when only taking visible point clouds $\{p_k^v\}$ as input, the size of the receptive field of the 3D encoder becomes smaller.
To address this issue, we replace the masked point clouds $\{p_k^m\}$ with a shared learnable point token.
In detail, we use coordinates of full point clouds as the input indexes\cite{yan2018second} of sparse convolution and replace the feature of masked point clouds with the shared learnable point token in the first sparse convolution layer.
We keep the other sparse convolutional layers unchanged.
The only purpose of the proposed shared learnable point token is to pass the information from one point or voxel to another to maintain the size of the receptive field.
It does not introduce any additional information, including the coordinates of masked points.
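A minimal PyTorch sketch of this replacement step is given below; the module name and interface are our assumptions, and a real implementation would operate on the sparse voxel features fed to the first sparse convolution:
\begin{verbatim}
import torch
import torch.nn as nn

class MaskedPointInput(nn.Module):
    # replace the features of masked entries by one shared learnable
    # token; coordinates are still used as sparse indexes downstream,
    # but masked features carry no information about the masked points
    def __init__(self, feat_dim):
        super().__init__()
        self.token = nn.Parameter(torch.zeros(feat_dim))

    def forward(self, feats, is_masked):
        # feats: (N, C) point/voxel features; is_masked: (N,) bool
        out = feats.clone()
        out[is_masked] = self.token
        return out
\end{verbatim}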
\subsection{Decoder Design}
Due to the BEV guided masking strategy, the size of masked areas is aligned with the resolution of the BEV features.
Thus, we can directly predict the reconstruction results of one masked grid from the corresponding BEV features without any upsampling operations.
The decoder of BEV-MAE is only used during pre-training to solve the masking task.
Naturally, the design of the decoder is flexible and independent of the 3D encoder architecture.
We test three types of decoders, \textit{i.e.}, one-layer 3$\times$3 convolution, transformer block\cite{vaswani2017attention,dosovitskiy2020vit, DBLP:conf/cvpr/mae}, and residual convolution block\cite{he2016deep}, and find the highly lightweight decoder, a one-layer 3$\times$3 convolution, can achieve impressive performance while reducing pre-training time and GPU memory cost.
More details can be found in \secref{ablation}.
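For illustration, the lightweight decoder can be sketched as follows in PyTorch; the class name and head widths are our assumptions rather than the exact released architecture:
\begin{verbatim}
import torch.nn as nn

class BEVMAEDecoder(nn.Module):
    # one 3x3 convolution over the BEV feature map, followed by two
    # linear heads: L reconstructed points (3 coordinates each) and
    # one density value per masked grid
    def __init__(self, c, L):
        super().__init__()
        self.conv = nn.Conv2d(c, c, kernel_size=3, padding=1)
        self.coord_head = nn.Linear(c, 3 * L)
        self.density_head = nn.Linear(c, 1)

    def forward(self, bev):                     # bev: (B, C, X, Y)
        f = self.conv(bev).permute(0, 2, 3, 1)  # -> (B, X, Y, C)
        return self.coord_head(f), self.density_head(f)
\end{verbatim}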
\subsection{Reconstruction Target}
The proposed BEV-MAE is supervised by two tasks, \textit{i.e.}, point cloud reconstruction and density prediction.
For each task, a separate linear layer is applied as the prediction head to predict results.
We describe each task below.
\subsubsection{Point Cloud Reconstruction}
Like previous works\cite{DBLP:journals/corr/pointmae, DBLP:journals/corr/m2ae}, BEV-MAE reconstructs the masked input by predicting the coordinate of masked points.
However, each masked grid in the pre-defined BEV plane contains a different number of points, which complicates the design of the prediction head.
To address this issue, we propose to reconstruct the local structure of the masked point clouds with a set-to-set prediction.
Specifically, we apply a linear layer to predict a set of 3D points with a fixed number, denoted as $P_i=\{p_l~|~l=1,2,...,L\}$, for each masked grid $g_i^m$.
Given the original points $\hat{P}_i=\{\hat{p_k}~|~k=1,2,...,N\}$ in $g_i^m$, where $N$ varies with the grid, we utilize Chamfer Distance\cite{DBLP:conf/cvpr/chamfer} between predictions $P_i$ and the ground-truth $\hat{P}_i$ as the reconstruction loss:
\begin{equation}
\mathcal{L}_c^i = \frac{1}{L}\sum_{p_l\in P_i}{\min_{\hat{p_k}\in \hat{P_i}} \lvert\lvert p_l-\hat{p_k} \rvert\rvert_2^2} + \frac{1}{N}\sum_{\hat{p_k}\in \hat{P_i}}{\min_{p_l\in P_i} \lvert\lvert \hat{p_k}-p_l \rvert\rvert_2^2}.
\end{equation}
Then, we average the loss over all the masked grids as the final reconstruction loss:
\begin{equation}
\mathcal{L}_c = \frac{1}{n_m} \sum_{i=1}^{n_m}\mathcal{L}_c^i,
\end{equation}
where $n_m$ is the number of masked grids.
The Chamfer distance measures the distance between two sets with different cardinalities.
Thus, it forces the shape of predicted point clouds to mimic the local structure of masked inputs\cite{DBLP:conf/cvpr/chamfer}.
More analysis can be found in \secref{sec:vis}.
Moreover, since the coordinate values of the ground-truth points vary greatly across different grids, directly predicting the absolute coordinates of each point may cause instability during pre-training.
To alleviate this issue, we propose to predict the normalized coordinate in point cloud reconstruction.
Specifically, we first calculate the coordinate offset of each ground-truth point to the center of its corresponding BEV grid and then normalize the offset value by the size of the BEV grid.
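A minimal PyTorch sketch of the normalization and the per-grid loss $\mathcal{L}_c^i$ is given below; it is unbatched for clarity, and the helper names are ours:
\begin{verbatim}
import torch

def normalized_targets(points, grid_center, grid_size):
    # offset of each ground-truth point to its BEV grid center,
    # scaled by the grid size
    return (points - grid_center) / grid_size

def chamfer_loss(pred, gt):
    # symmetric Chamfer distance between pred (L, 3) and gt (N, 3)
    d2 = torch.cdist(pred, gt) ** 2  # (L, N) squared distances
    return d2.min(dim=1).values.mean() + d2.min(dim=0).values.mean()
\end{verbatim}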
\begin{table*}[!t]
\begin{center}
\caption{\textbf{Comparisons between BEV-MAE and the state-of-the-art self-supervised learning methods on Waymo.}
All detectors are fine-tuned with 20\% of the training samples on Waymo following the OpenPCDet\cite{openpcdet2020} configuration. $^{*}$ denotes results taken from the paper\cite{liang2021exploring}.
$^{\dagger}$ presents pre-training both 3D encoder and 2D encoder.
`Epochs' indicates the effective pre-training epochs accounting for actual trained samples/views\cite{zhou2021ibot}.
`Dataset fraction' means the data fraction of Waymo training set used for pre-training.
`Time' refers to the pre-training time estimated by an 8-GPU P40 machine.
}
\label{tab:main result waymo}
\resizebox{1.0\textwidth}{!}{
\begin{tabular}{l|c|c|c|c|c|c|c}
\toprule
\multicolumn{1}{c|}{\multirow{2}{*}[-0.5ex]{{Method}}} &
\multirow{2}{*}[-0.5ex]{\shortstack{Epochs}}
&
\multirow{2}{*}[-0.5ex]{\shortstack{Time}}
&
\multirow{2}{*}[-0.5ex]{\shortstack{Dataset \vspace{1.4pt}\\ fraction}}
& \multicolumn{4}{c}{L2 (mAP/APH)} \\
\cmidrule[\lightrulewidth](r{0.1em}){5-8}
& & & & Overall & Vehicle & Pedestrian & Cyclist \\
\midrule
SECOND & - & - & - & 60.62 / 56.86 & 64.26 / 63.73 & 59.72 / 50.38 & 57.87 / 56.48 \\
ProposalContrast (SECOND) & 2$\times$50 & 64h & 100\% & 60.74$\color{red}{{^{+0.12}}}$ / 57.01$\color{red}{{^{+0.15}}}$ & 64.40 / 63.83& 60.17 / 50.82 & 57.87 / 56.45 \\
ProposalContrast (SECOND)$^\dagger$ & 2$\times$50 & 72h & 100\% & 60.90$\color{red}{{^{+0.28}}}$ / 57.17$\color{red}{{^{+0.31}}}$ & 64.50 / 63.90& 60.33 / 51.00 & 57.90 / 56.60 \\
BEV-MAE (SECOND) & 20 & 5h & 20\% & 60.86$\color{red}{{^{+0.24}}}$ / 57.15$\color{red}{{^{+0.29}}}$ & 64.38 / 63.85& 60.10 / 50.84 & 58.11 / 56.76 \\
BEV-MAE (SECOND) & 20 & 25h & 100\% & \textbf{61.03}$\color{red}{\bm{^{+0.43}}}$ \textbf{/} \textbf{57.30}$\color{red}{\bm{^{+0.44}}}$ & \textbf{64.42 / 63.87}& \textbf{59.97 / 50.65} & \textbf{58.69 / 57.39}\\
\midrule
CenterPoint$^*$& - & - & - & 63.46 / 60.95 & 61.81 / 61.30 & 63.62 / 57.79 & 64.96 / 63.77 \\
GCC-3D (CenterPoint)$^*$ & 2$\times$40 & - & 100\% & 65.29$\color{red}{^{+1.83}}$ / 62.79$\color{red}{^{+1.84}}$ & 63.97 / 63.47 & 64.23 / 58.47 & 67.68 / 66.44 \\
CenterPoint & -&- & -& 65.60 / 63.21 & 64.18 / 63.69 & 65.22 / 59.68 & 67.41 / 66.25 \\
ProposalContrast (CenterPoint) & 2$\times$50 & 64h & 100\% & 66.41$\color{red}{^{+0.81}}$ / 63.84$\color{red}{^{+0.63}}$ & 65.03 / 64.53 & 65.93 / 59.95 & 68.26 / 67.04 \\
ProposalContrast (CenterPoint)$^\dagger$ & 2$\times$50 & 72h & 100\% & 66.67$\color{red}{^{+1.07}}$ / 64.20$\color{red}{^{+0.99}}$ & 65.22 / 64.80 & 66.40 / 60.49 & 68.48 / 67.38 \\
BEV-MAE (CenterPoint) & 20 & 5h & 20\% & 66.70$\color{red}{^{+1.10}}$ / 64.25$\color{red}{^{+1.04}}$ & 64.71 / 64.22 & 66.21 / 60.59 & 69.11 / 67.93\\
BEV-MAE (CenterPoint) & 20 & 25h & 100\% & \textbf{66.92}$\color{red}{\bm{^{+1.32}}}$ \textbf{/} \textbf{64.45}$\color{red}{\bm{^{+1.24}}}$ & \textbf{64.78 / 64.29} & \textbf{66.25 / 60.53} & \textbf{69.73 / 68.52} \\
\midrule
PV-RCNN++ & -&- & - & 69.97 / 67.58 & 69.18 / 68.75 & 70.88 / 65.21 & 69.84 / 68.77 \\
ProposalContrast (PV-RCNN++) & 2$\times$50 & 64h & 100\% & 70.30$\color{red}{^{+0.33}}$ / 67.78$\color{red}{^{+0.20}}$ & 69.45 / 69.00 & 71.42 / 65.68 & 70.04 / 69.05\\
ProposalContrast (PV-RCNN++)$^\dagger$ & 2$\times$50 & 72h & 100\% & 70.49$\color{red}{^{+0.52}}$ / 67.98$\color{red}{^{+0.40}}$ & 69.47 / 68.95 & 71.28 / 65.31 & 70.73 / 69.59\\
BEV-MAE (PV-RCNN++) & 20 & 5h & 20\% & 70.45$\color{red}{^{+0.48}}$ / 67.96$\color{red}{^{+0.38}}$ & 69.44 / 69.02 & 71.14 / 65.21 & 70.77 / 69.65 \\
BEV-MAE (PV-RCNN++) & 20 & 25h & 100\% & \textbf{70.54}$\color{red}{\bm{^{+0.57}}}$ \textbf{/} \textbf{68.11}$\color{red}{\bm{^{+0.53}}}$ & \textbf{69.53 / 69.07} & \textbf{71.50 / 65.69} & \textbf{70.60 / 69.56} \\
\bottomrule
\end{tabular}
}
\vspace{-5pt}
\end{center}
\end{table*}
\begin{table*}[!t]
\begin{center}
\caption{\textbf{Comparisons between BEV-MAE and GCC-3D on nuScenes.} `Epochs' indicates the effective pre-training epochs\cite{zhou2021ibot}.
}
\label{tab:main result nus}
\resizebox{1.0\textwidth}{!}{
\begin{tabular}{l|c|cc|cccccccccc}
\toprule
Method &
Epochs
& mAP & NDS & Car & Truck & CV. & Bus & Trailer & Barrier & Motor. & Bicycle & Ped. &TC. \\
\midrule
CenterPoint& - &56.2 & 64.5 & 84.8 & 53.9 & 16.8 & 67.0 &35.9& 64.8& 55.8& 36.4& 83.1& 63.4
\\
GCC-3D (CenterPoint) & 2$\times$40 & \textbf{57.3$\color{red}{\bm{^{+1.1}}}$} & 65.0$\color{red}{{^{+0.5}}}$ & 85.0 & 54.7 & 17.6 & 67.2 & 35.7 & 65.0 & 56.2 & 36.0 & 82.9 & 63.7 \\
BEV-MAE (CenterPoint) & 20 & 57.2$\color{red}{{^{+1.0}}}$ & \textbf{65.1$\color{red}{\bm{^{+0.6}}}$} & 84.9 & 54.9 & 16.5 & 67.2 & 35.9 & 65.2 & 56.0 & 36.2 & 83.2 & 63.5 \\
\bottomrule
\end{tabular}
}
\vspace{-10pt}
\end{center}
\end{table*}
\subsubsection{Point Density Prediction}
Different from images, language, and indoor point clouds, outdoor point clouds have the property that their density decreases as the distance from the LiDAR sensor increases.
Consequently, the density can reflect the location of each point or object.
Besides, for object detection, the localization ability of detectors is essential.
Based on the above analysis, we propose another task for BEV-MAE, \textit{i.e.}, point density prediction, to guide the 3D encoder to achieve a better localization ability.
Concretely, for each masked grid $g_i^m$, we count the number of points in the grid and compute the ground-truth density $\hat{\rho_i}$ by dividing this number by the occupied volume in 3D space.
Then, we use a linear layer as the prediction head to obtain the density prediction $\rho_i$.
We supervise this task with the \textit{Smooth}-$\ell_1$\cite{girshick2015fast} loss:
\begin{equation}
\mathcal{L}_d^i = Smooth\text{-}{\ell_1}(\rho_i-\hat{\rho_i}).
\end{equation}
Similarly, we average the loss over all the masked grids as the final density prediction loss:
\begin{equation}
\mathcal{L}_d = \frac{1}{n_m} \sum_{i=1}^{n_m}\mathcal{L}_d^i.
\end{equation}
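The density target and loss amount to a few lines; the following PyTorch sketch (names are ours, and the per-grid occupied volume is assumed to be supplied by the caller) illustrates them:
\begin{verbatim}
import torch.nn.functional as F

def density_loss(pred_density, n_points, occupied_volume):
    # Smooth-L1 between the predicted density and the
    # points-per-volume target, averaged over masked grids
    target = n_points / occupied_volume
    return F.smooth_l1_loss(pred_density, target)
\end{verbatim}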
\subsubsection{Loss Function}
The overall loss function for BEV-MAE is:
\begin{equation}
\mathcal{L}=\alpha_c\mathcal{L}_c + \alpha_d\mathcal{L}_d,
\end{equation}
where $\alpha_c$ and $\alpha_d$ are balance weights for each loss. In our experiments, both $\alpha_c$ and $\alpha_d$ are set to 1.
\begin{table*}[!t]
\begin{center}
\caption{\textbf{Results about data efficiency on Waymo.}
We first pre-train the 3D encoder on the full Waymo dataset and then fine-tune it on various fractions of Waymo dataset with CenterPoint as the 3D object detector. Consistent improvements are obtained under each setting.
}
\label{tab:data efficiency}
\begin{tabular}{l|c|c|c|c|c}
\toprule
\multicolumn{1}{c|}{\multirow{2}{*}[-0.5ex]{{Method}}} &
\multirow{2}{*}[-0.5ex]{\shortstack{Dataset fraction \\
for fine-tuning}}
& \multicolumn{2}{c|}{LEVEL\_1} & \multicolumn{2}{c}{LEVEL\_2}\\
\cmidrule[\lightrulewidth](r{0.1em}){3-6}
& & mAP & APH & mAP & APH \\
\midrule
From scratch & \multirow{2}{*}[-0.3ex]{1\%} & 29.80&25.58 & 26.27&22.62 \\
BEV-MAE & & 35.27$\color{red}{{^{+5.47}}}$ & 31.63$\color{red}{{^{+6.05}}}$ & 31.26$\color{red}{{^{+4.99}}}$ & 28.08$\color{red}{{^{+5.46}}}$ \\
\midrule
From scratch & \multirow{2}{*}[-0.3ex]{5\%} & 58.02&54.94 & 52.33&49.57 \\
BEV-MAE & & 60.74$\color{red}{{^{+2.71}}}$&57.55$\color{red}{{^{+2.61}}}$ & 54.92$\color{red}{{^{+2.59}}}$&52.06$\color{red}{{^{+2.49}}}$ \\
\midrule
From scratch & \multirow{2}{*}[-0.3ex]{20\%} & 68.42&65.56 & 62.27&59.65 \\
BEV-MAE & & 70.00$\color{red}{{^{+1.58}}}$&67.19$\color{red}{{^{+1.63}}}$ & 63.85$\color{red}{{^{+1.58}}}$&61.26$\color{red}{{^{+1.61}}}$ \\
\midrule
From scratch & \multirow{2}{*}[-0.3ex]{50\%} & 71.65 & 69.02 & 65.45&63.02 \\
BEV-MAE && 72.20$\color{red}{{^{+0.55}}}$ & 69.50$\color{red}{{^{+0.48}}}$ & 66.00$\color{red}{{^{+0.55}}}$ & 63.50$\color{red}{{^{+0.48}}}$ \\
\midrule
From scratch & \multirow{2}{*}[-0.3ex]{100\%} & 73.28&70.67 & 67.11&64.69 \\
BEV-MAE & &73.50$\color{red}{{^{+0.22}}}$&70.90$\color{red}{{^{+0.23}}}$ & 67.36$\color{red}{{^{+0.25}}}$&64.95$\color{red}{{^{+0.26}}}$ \\
\bottomrule
\end{tabular}
\vspace{-7pt}
\end{center}
\end{table*}
\section{Experimental Results}
\subsection{Implementation Details}
We evaluate the proposed BEV-MAE on two popular large-scale autonomous driving datasets, \textit{i.e.}, nuScenes and Waymo Open Dataset.
We mainly focus on the evaluation metrics of mAP and NDS for nuScenes and the more difficult LEVEL\_2 metric (L2 mAP and L2 APH) for Waymo.
We adopt VoxelNet as the 3D encoder.
We fine-tune the pre-trained 3D encoder with three prevalent 3D detectors, \textit{i.e.}, SECOND\cite{yan2018second}, CenterPoint\cite{yin2020center}, and PV-RCNN++\cite{DBLP:journals/corr/pvrcnnpp}.
During pre-training, we train BEV-MAE for 20 epochs and use the Adam optimizer with a one-cycle schedule.
The maximum learning rate is set to 0.0003 with a batch size of 4.
The masking ratio is 0.7 and the number of predicted points $L$ in point cloud reconstruction is 20 for each masked grid.
For fine-tuning, we adopt the default training schedule as training-from-scratch baselines in OpenPCDet.
The maximum learning rate is set to 0.006 with a batch size of 4.
For Waymo, all detectors are fine-tuned with 20\% of the training samples.
\subsection{Main Results}
We compare the proposed BEV-MAE with the state-of-the-art self-supervised learning methods for LiDAR-based 3D Object Detection in \tabref{tab:main result waymo} and \tabref{tab:main result nus}.
On the Waymo validation set, \tabref{tab:main result waymo} shows that BEV-MAE substantially improves the performance of training-from-scratch baselines and outperforms previous state-of-the-art self-supervised learning methods with three types of 3D detectors.
Besides, we observe that, with only 20\% of the training data and 7\% of the computation cost, BEV-MAE achieves performance similar to ProposalContrast pre-trained with 100\% of the data, further demonstrating the efficiency of BEV-MAE pre-training.
Specifically, based on SECOND, BEV-MAE brings 0.43 mAP and 0.44 APH improvement for the training-from-scratch baseline.
For CenterPoint, BEV-MAE improves the training-from-scratch baseline by 1.32 mAP and 1.24 APH on average, outperforming the state-of-the-art ProposalContrast by 0.25 mAP and 0.25 APH.
Moreover, our method reaches 68.11 APH with PV-RCNN++, which surpasses the training-from-scratch baseline and ProposalContrast by 0.53 APH and 0.13 APH, respectively. %
For nuScenes, in \tabref{tab:main result nus}, BEV-MAE achieves a higher NDS with only 20 pre-training epochs compared to the state-of-the-art method GCC-3D.
Overall, the results demonstrate BEV-MAE is more effective and efficient on outdoor point clouds compared to the state-of-the-art self-supervised learning methods based on contrastive learning.
In addition, the pre-trained encoder shows excellent cross-model generalizability over several 3D detectors.
\subsection{Data Efficiency}
To assess the data efficiency of BEV-MAE, we train the 3D detectors with different amounts of labeled data.
Concretely, we first pre-train the 3D encoder of CenterPoint with BEV-MAE on the full Waymo dataset and then fine-tune it on various fractions of the Waymo training data, \textit{i.e.}, 1\%, 5\%, 20\%, 50\%, and 100\%.
To save computation, the 3D detectors are trained for 12 epochs\cite{yin2020center}.
As shown in \tabref{tab:data efficiency}, pre-training with BEV-MAE can consistently improve the detection results on different fractions of the training data.
Especially, our method can obtain more significant gains when the labeled data is less.
For example, BEV-MAE surpasses the training-from-scratch baseline by 6.05 APH on LEVEL\_1 and 5.46 APH on LEVEL\_2 under 1\% labeled data setting.
This suggests the potential of BEV-MAE in using large amounts of unlabeled data.
\subsection{Transfer Learning}
We evaluate the transferability of the pre-trained encoder, which indicates the cross-dataset generalization of the feature representation learned by BEV-MAE, by fine-tuning the encoder on a different dataset.
We only use the coordinates as the input feature of point clouds for compatibility across different datasets.
The results are presented in \tabref{tab:transfer}.
We find that BEV-MAE consistently improves the detection performance across different datasets.
Besides, we find that CenterPoint achieves better results when pre-trained and fine-tuned on the same dataset.
We suspect that the domain gap between the two datasets may hurt transfer performance, \textit{e.g.}, the density of point clouds on Waymo is five times higher than that of nuScenes.
Moreover, when pre-trained on the combined dataset of Waymo and nuScenes, the detection performance on both datasets can be further improved, which indicates that BEV-MAE benefits from a larger amount of data.
\begin{table}[!t]
\begin{center}
\caption{\textbf{Results of transfer learning.} The column and row headers show the datasets used for pre-training and fine-tuning, respectively.}
\label{tab:transfer}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{c|c|c|c|c}
\toprule
\multirow{2}{*}{\diagbox{Pre-train}{Fine-tune}} & \multicolumn{2}{c|}{nuScenes} & \multicolumn{2}{c}{Waymo} \\
\cmidrule[\lightrulewidth](r{0.1em}){2-5}
& mAP & NDS & L2 mAP & L2 APH \\
\midrule
Random init. & 48.6 & 58.4 & 63.97 & 61.53 \\
nuScenes & {49.7$\color{red}{{^{+1.1}}}$} & {58.9$\color{red}{{^{+0.5}}}$} & 64.79$\color{red}{^{+0.82}}$ & 62.28$\color{red}{^{+0.75}}$ \\
Waymo & {49.4$\color{red}{{^{+0.8}}}$} & {58.8$\color{red}{{^{+0.4}}}$} & 65.13$\color{red}{^{+1.16}}$ & 62.63$\color{red}{^{+1.10}}$ \\
nuScenes + Waymo & \textbf{50.1}$\color{red}{^{\bm{+1.5}}}$ & \textbf{59.1}$\color{red}{^{\bm{+0.7}}}$ & \textbf{65.36}$\color{red}{^{\bm{+1.39}}}$ & \textbf{62.89}$\color{red}{^{\bm{+1.36}}}$ \\
\bottomrule
\end{tabular}
}
\vspace{-5pt}
\end{center}
\end{table}
\begin{table}[!t]
\begin{center}
\caption{\textbf{Ablation on main components of BEV-MAE.} `Learnable token' denotes the shared learnable point token. Each component brings performance improvement for BEV-MAE.
}
\label{tab:main component}
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{c|c|c|c|c}
\toprule
Pre-train & Reconstruction target & Learnable token & L2 mAP & L2 APH \\
\midrule
None & - & - & 65.60 & 63.21 \\
\midrule
\multirow{5}{*}[-0.7ex]{BEV-MAE} &
Coord. (w/o norm) & $\checkmark$ & 65.66&63.09 \\
& Coord. (w norm) & $\checkmark$ & 66.20&63.71 \\
& Density & $\checkmark$ & 65.80&63.27 \\
\cmidrule[\lightrulewidth](r{0.1em}){2-5}
& Coord. (w norm) + Density & $\times$ & 66.49 & 63.99\\
\cmidrule[\lightrulewidth](r{0.1em}){2-5}
& Coord. (w norm) + Density & $\checkmark$ & \textbf{66.70} & \textbf{64.25}\\
\bottomrule
\end{tabular}
}
\vspace{-7pt}
\end{center}
\end{table}
\subsection{Ablation Study}
\label{ablation}
In this section, we conduct ablation experiments to analyze the effectiveness of each setting of BEV-MAE, including the main components and hyper-parameters.
In the following experiments, we use CenterPoint as the 3D detector.
We first pre-train the 3D encoder on the 20\% training data of Waymo with BEV-MAE, and then evaluate its performance by fine-tuning with CenterPoint on the same 20\% training data of Waymo.
\vspace{-3pt}
\subsubsection{Main Component}
We evaluate the effectiveness of each component of the proposed BEV-MAE, as shown in \tabref{tab:main component}.
We can see that pre-training with normalized coordinate prediction alone or with density prediction alone already improves the 3D detection performance.
For coordinate prediction, normalizing the coordinates of the points is essential.
Besides, the detection results can be further improved by taking both coordinate and density prediction as the pre-training task, achieving 66.49 mAP and 63.99 APH.
Moreover, pre-training with the shared learnable token brings additional performance improvement of 0.43 mAP and 0.46 APH.
\vspace{-3pt}
\subsubsection{Decoder Design}
In addition to the one-layer 3$\times$3 convolution, we also test two decoders with more complex designs, \textit{i.e.}, a Transformer block\cite{vaswani2017attention, dosovitskiy2020vit, DBLP:conf/cvpr/mae} and a residual convolution block\cite{he2016deep}, as shown in \tabref{tab:decoder}.
From the table, we can find that the detection performance drops significantly with more complex decoders.
The performance decreases more when using the Transformer block as the decoder, which we attribute to the architectural mismatch between the encoder and decoder (sparse convolution \textit{vs.} Transformer).
Besides, we also observe that the complex decoders bring additional training cost, \textit{e.g.}, the residual convolution block incurs 1.2$\times$ the training cost of the one-layer 3$\times$3 convolution.
These results indicate that exploring the decoder design may not be essential for BEV-MAE; a simple one-layer 3$\times$3 convolution is adequate in practice.
Similar conclusions can be found in SimMiM\cite{DBLP:conf/cvpr/simmim}.
\begin{table}[!t]
\begin{center}
\caption{\textbf{Ablation on different decoders.} The one-layer 3$\times$3 convolution achieves the best results with the least training cost.}
\label{tab:decoder}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{l|c|c|c}
\toprule
Decoder & L2 mAP & L2 APH & Training cost \\
\midrule
One-layer 3$\times$3 Conv & \textbf{66.70} & \textbf{64.25} & \textbf{1$\times$}\\
Residual Conv block & 66.61 & 64.09 & 1.2$\times$\\
Transformer block & 65.80 & 63.26 &1.4$\times$\\
\bottomrule
\end{tabular}
}
\vspace{-5pt}
\end{center}
\end{table}
\begin{table}[!t]
\begin{center}
\caption{\textbf{Ablation on the masking strategy.} Pre-training with the BEV guided masking strategy performs better on the downstream 3D detection task with less GPU memory consumption and pre-training cost.}
\label{tab:mask strategy}
\resizebox{0.48\textwidth}{!}{
\begin{tabular}{l|c|c|c|c}
\toprule
Masking strategy & L2 mAP & L2 APH & Memory &Training cost \\
\midrule
Voxel masking & 66.63 & 64.16 & 12.6G & 1.4$\times$\\
BEV guided masking & \textbf{66.70} &\textbf{64.25} & \textbf{4.1G} & \textbf{1$\bm\times$}\\
\bottomrule
\end{tabular}
}
\vspace{-10pt}
\end{center}
\end{table}
\begin{figure*}[!t]
\centering
\includegraphics[width=0.98\linewidth]{images/vis.pdf}
\caption{
\textbf{Results of point cloud reconstruction on Waymo validation set.}
For each triplet, we show the masked point cloud input (left), our BEV-MAE reconstruction (middle), and the ground-truth (right).
BEV-MAE can predict the reasonable shape of most objects and capture the height of the points.
Best viewed by zooming in.
}
\vspace{-5pt}
\label{fig:vis}
\end{figure*}
\vspace{-3pt}
\subsubsection{Masking Strategy}
We study how different masking strategies, including a simple voxel masking strategy and the proposed BEV guided masking strategy, affect the effectiveness of representation learning.
For the voxel masking strategy, we randomly mask non-empty voxels and use sparse deconvolution layers\cite{shi2020unet} as the decoder to recover the points in masked voxels.
In \tabref{tab:mask strategy}, we observe that the proposed BEV guided masking strategy achieves better transfer performance on downstream 3D object detection tasks.
In addition, compared with the voxel masking strategy, the BEV guided masking strategy significantly reduces the GPU memory (4.1G \textit{vs.} 12.6G) and training costs (1$\times$ \textit{vs.} 1.4$\times$) during pre-training.
\begin{table}[!t]
\begin{center}
\caption{\textbf{Ablation on masking ratio.}
The fine-tuning results are relatively insensitive to the masking ratio.
}
\label{tab:mask ratio}
\begin{tabular}{c|c|c}
\toprule
Masking ratio & L2 mAP & L2 APH \\
\midrule
50\% & 66.45 & 64.00 \\
60\% & 66.62 & 64.13 \\
70\% & \textbf{66.70} & \textbf{64.25}\\
80\% & 66.52 & 64.06 \\
\bottomrule
\end{tabular}
\vspace{-15pt}
\end{center}
\end{table}
\subsubsection{Analysis of Hyper-parameters}
\noindent \textbf{Masking ratio.}
\tabref{tab:mask ratio} ablates the influence of the masking ratio.
It shows that the proposed BEV-MAE works well within a wide range of masking ratios (50\%-80\%).
The best results are achieved when the masking ratio is 70\%.
\vspace{1mm}
\noindent \textbf{Number of predicted points.}
\tabref{tab:number of point} ablates the influence of a varying number of predicted points in point cloud reconstruction.
We notice that the performance is optimal with 20 predicted points and drops significantly when the number of predicted points is too small or too large.
We attribute this to the fact that, when the number of predicted points is small, the reconstructed points are insufficient to represent the local structure of the masked points.
Thus, the optimization objective does not converge well.
By contrast, with a large number of predicted points, the reconstructed points are redundant, leading to over-fitting on the masked points.
\begin{table}[!t]
\begin{center}
\caption{\textbf{Ablation on the number of predicted points in point cloud reconstruction.}
The fine-tuning performance drops when the number of predicted points is too large or small.
}
\label{tab:number of point}
\begin{tabular}{c|c|c}
\toprule
\multirow{2}{*}[-0.3ex]{\shortstack{Number of \\ predicted points}}
& \multirow{2}{*}[-0.2ex]{L2 mAP} & \multirow{2}{*}[-0.2ex]{L2 APH} \\
&& \\
\midrule
10 & 66.25 & 63.71 \\
20 & \textbf{66.70} &\textbf{64.25} \\
30 & 66.43 & 63.92\\
40 & 66.06 & 63.54\\
\bottomrule
\end{tabular}
\vspace{-10pt}
\end{center}
\end{table}
\subsection{Visualization}
\label{sec:vis}
We visualize the reconstruction results of BEV-MAE in \figref{fig:vis}.
The pre-trained BEV-MAE can predict plausible shapes for most objects and reasonable boundaries of the surrounding scenes.
Moreover, the model can capture the height of the points, which indicates that the 3D encoder has learned feature representations carrying rich 3D location information in the BEV perspective.
\section{Conclusions}
In this work, we address the problem of self-supervised pre-training on outdoor point clouds.
We present BEV-MAE to pre-train the 3D encoder of LiDAR-based 3D object detectors.
Instead of simply masking points or voxels, we propose a BEV guided masking strategy for better BEV representation learning and to avoid complex decoder design.
Besides, we introduce a shared learnable point token to maintain the receptive field size of the encoder during pre-training and fine-tuning.
By leveraging the properties of outdoor point clouds, we propose point density prediction to guide the encoder to learn location information.
Experimental results show that BEV-MAE achieves new state-of-the-art self-supervised results on both Waymo and nuScenes with diverse prevalent 3D object detectors.
Moreover, with only 20\% of the pre-training data and 7\% of the cost, BEV-MAE achieves performance comparable to ProposalContrast, which shows that the proposed BEV-MAE is more efficient than previous self-supervised learning methods based on contrastive learning.
\renewcommand\thesection{\Alph{section}}
\renewcommand\thefigure{\Alph{figure}}
\renewcommand\thetable{\Alph{table}}
\setcounter{section}{0}
\setcounter{figure}{0}
\setcounter{table}{0}
\section*{Supplementary}
\section{Implementation Details}
\label{sec:details}
\subsection{Waymo Setup}
Waymo Open Dataset is a large-scale autonomous-driving dataset for 3D detection, consisting of 798 training, 202 validation, and 150 testing sequences. Each sequence contains about 200 frames of LiDAR points and camera images.
In our experiments, we keep the point clouds within the range [-75.2m, 75.2m] along the X and Y axes, and [-2m, 4m] along the Z axis as the inputs.
The voxel size is set to (0.1m, 0.1m, 0.15m) for VoxelNet.
We use the XYZ coordinates, intensity, and elongation as the initial point features for both pre-training and fine-tuning.
\subsection{nuScenes Setup}
The nuScenes dataset contains 28,130 training samples and 6,019 validation samples collected from 850 scenes.
For experiments on nuScenes, we set the detection range to [-51.2m, 51.2m] along the X and Y axes, and [-5m, 3m] along the Z axis. The voxel size is set to (0.1m, 0.1m, 0.2m).
During pre-training and fine-tuning, we concatenate 10 consecutive LiDAR sweeps as the input and use the XYZ coordinates, intensity, and timestamp as the initial point features.
\subsection{Decoder Architecture}
\subsubsection{One-layer 3$\times$3 Conv}
We use one 3$\times$3 convolutional layer followed by a batch normalization layer and a ReLU as the default decoder. The number of output channels of the 3$\times$3 convolutional layer is the same as the number of input channels.
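For reference, a minimal PyTorch-style sketch of this default decoder is given below. This is our illustration only: the class and argument names are hypothetical, and only the layer composition follows the description above.
\begin{verbatim}
import torch.nn as nn

class ConvDecoder(nn.Module):
    # One 3x3 convolution + BN + ReLU; the channel count C is preserved.
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True))

    def forward(self, bev_feature):   # bev_feature: (B, C, H, W) BEV map
        return self.block(bev_feature)
\end{verbatim}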
\subsubsection{Residual Conv block}
We adopt the bottleneck block in ResNet\cite{he2016deep} as the Residual Conv block.
Specifically, the initial BEV feature is passed through three convolutional layers.
The resulting feature is then added to the initial feature and fed to a one-layer MLP.
Batch normalization and ReLU are applied right after each convolution.
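Under the same caveats, a sketch of this block follows; the bottleneck width (one quarter of the channel count) and the use of a 1$\times$1 convolution as the per-location one-layer MLP are our assumptions.
\begin{verbatim}
import torch.nn as nn

class ResidualConvDecoder(nn.Module):
    # Bottleneck: 1x1 reduce -> 3x3 -> 1x1 expand, skip connection,
    # then a one-layer MLP applied per BEV location.
    def __init__(self, channels, mid_channels=None):
        super().__init__()
        mid = mid_channels or channels // 4
        def cbr(cin, cout, k):
            return nn.Sequential(
                nn.Conv2d(cin, cout, k, padding=k // 2),
                nn.BatchNorm2d(cout),
                nn.ReLU(inplace=True))
        self.branch = nn.Sequential(
            cbr(channels, mid, 1), cbr(mid, mid, 3), cbr(mid, channels, 1))
        self.mlp = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, x):
        return self.mlp(x + self.branch(x))
\end{verbatim}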
\subsubsection{Transformer block}
We adopt a ViT-style strategy\cite{dosovitskiy2020vit} to divide the BEV feature into several patches.
Each patch is flattened into a $C$-dimensional vector and mapped to a 256-dimensional feature vector by a linear projection layer.
These vectors are then fed to a standard Transformer block to update their features.
Finally, another linear projection layer maps the feature vectors back to $C$-dimensional vectors, which are reshaped into the 2D BEV feature.
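The following sketch illustrates this patchify--transform--unpatchify pipeline; the patch size and the number of attention heads are our assumptions, while the 256-dimensional token width follows the description above.
\begin{verbatim}
import torch.nn as nn

class TransformerDecoder(nn.Module):
    # ViT-style decoder: patchify the BEV map, apply one standard
    # transformer block, then project back and unpatchify.
    def __init__(self, channels, patch=4, dim=256, heads=8):
        super().__init__()
        self.patch = patch
        in_dim = channels * patch * patch
        self.proj_in = nn.Linear(in_dim, dim)
        self.block = nn.TransformerEncoderLayer(
            d_model=dim, nhead=heads, batch_first=True)
        self.proj_out = nn.Linear(dim, in_dim)

    def forward(self, x):                       # x: (B, C, H, W)
        B, C, H, W = x.shape
        p = self.patch
        t = x.reshape(B, C, H // p, p, W // p, p)
        t = t.permute(0, 2, 4, 1, 3, 5).reshape(B, -1, C * p * p)
        t = self.proj_out(self.block(self.proj_in(t)))
        t = t.reshape(B, H // p, W // p, C, p, p)
        t = t.permute(0, 3, 1, 4, 2, 5).reshape(B, C, H, W)
        return t
\end{verbatim}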
\section{More Results}
\label{sec:result}
\subsection{Pre-training Epochs}
Self-supervised learning methods often benefit from longer pre-training\cite{chen2020simple,DBLP:conf/cvpr/mae}. We show the results of BEV-MAE with different numbers of pre-training epochs in \tabref{tab:longer}. We use CenterPoint as the 3D detector for fine-tuning. The 3D encoder is pre-trained on the full Waymo dataset.
We find that the downstream detection performance is further improved by longer pre-training. Besides, we observe that BEV-MAE saturates at 50 epochs.
We attribute this to the fact that the amount of data in Waymo may be insufficient for large-scale self-supervised learning.
\begin{table}[!h]
\begin{center}
\caption{\textbf{Results of BEV-MAE with different pre-training epochs.} All detectors are fine-tuned with 20\% of the training samples on Waymo.
}
\label{tab:longer}
\resizebox{0.49\textwidth}{!}{
\begin{tabular}{c|c|c|c}
\toprule
Method & Epochs & L2 mAP & L2 APH \\
\midrule
CenterPoint & - & 65.60 & 63.21 \\
BEV-MAE (CenterPoint) & 10 & 66.66 & 64.16 \\
BEV-MAE (CenterPoint) & 20 & 66.92 & 64.45 \\
BEV-MAE (CenterPoint) & 50 & 66.95 & 64.47 \\
\bottomrule
\end{tabular}
}
\vspace{-15pt}
\end{center}
\end{table}
\subsection{More Diverse 3D Detector}
To further evaluate the cross-model generalization of BEV-MAE, we pre-train more diverse 3D detectors with BEV-MAE, \textit{i.e.}, PointPillar\cite{lang2019pointpillars} (a pillar-based method) and PointAugmenting\cite{DBLP:conf/cvpr/pointaug} (a multi-modality method). The results are presented in \tabref{tab:diverse}.
We find that BEV-MAE brings consistent improvements for each detector.
\begin{table}[!h]
\begin{center}
\caption{\textbf{Results of diverse 3D detectors pre-trained with BEV-MAE.} All detectors are fine-tuned with 20\% of the training samples on Waymo.
}
\label{tab:diverse}
\begin{tabular}{c|c|c|c}
\toprule
Method & Pre-training & L2 mAP & L2 APH \\
\midrule
PointPillar & None & 63.40 & 59.38 \\
PointPillar & BEV-MAE & 63.76 & 59.69 \\
\midrule
PointAugmenting & None & 66.72 & 64.23 \\
PointAugmenting & BEV-MAE & 67.22 & 64.70 \\
\bottomrule
\end{tabular}
\vspace{-15pt}
\end{center}
\end{table}
{\small
\bibliographystyle{ieee_fullname}
\section*{Appendix}
The uniqueness proof in section \ref{inverse_problem} requires results on the density of
a certain subset of functions and we give two ways to look at this through
different formulations; namely the Stone-Weierstrass and M\"untz-Sz\'asz
theorems.
We give the statements of these results below.
The Stone-Weierstrass theorem is a generalization of Weierstrass' result of
1885 that the polynomials are dense in $C[0,1]$ and was proved by
Stone some 50 years later, \cite{Stone:1948}.
If $X$ is a compact Hausdorff space and $C(X)$ those real-valued continuous
functions on $X$, with the topology of uniform convergence, then the question
is when is a subalgebra $A(X)$ dense?
A crucial notion is that of separation of points;
a set $A$ of functions defined on $X$ is said to {\it separate points\/} if,
for every $x,y\in X$, $x\not=y$,
there exists a function $f\in A$ such that $f(x) \not= f(y)$.
Then we have
\begin{theorem}(Stone--Weierstrass).
Suppose $X$ is a compact Hausdorff space and $A$ is a subalgebra of
$C(X)$ which contains a non-zero constant function.
Then $A$ is dense in $C(X)$ if and only if it separates points.
\end{theorem}
The proof can be found in standard references, for example,
\cite[Theorem 4.45]{Folland:2013}.
The M\"untz-Sz\'asz theorem, (1914-1916) is also a generalization of
the Weierstrass approximation theorem; it gives a condition under which
one can ``thin out'' the polynomials and still maintain a dense set.
\begin{theorem}(M\"untz--Sz\'asz)
Let $\Lambda := \{\lambda_j\}_1^\infty$ be a sequence of real positive numbers.
Then the span of $\{1,x^{\lambda_1},x^{\lambda_2},\ldots\,\}$
is dense in $C[0,1]$ if and only if
$\sum_1^\infty\frac{1}{\lambda_j} = \infty$.
\end{theorem}
This result can be generalized to the $L^p[0,1]$ spaces for $1\leq p\leq \infty$,
see \cite{BorweinErdelyi:1996}.
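For instance, the choice $\lambda_j=j$ recovers the classical Weierstrass theorem, since $\sum_{j} 1/j$ diverges, whereas the lacunary exponents $\lambda_j=2^j$ span a proper closed subspace of $C[0,1]$, since $\sum_{j} 2^{-j}=1<\infty$.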
\section{Preliminary material}\label{sec:eur}
Let $\Omega$ be an open bounded domain in $\mathbb{R}^d$
with a smooth ($C^2$ will be more than sufficient) boundary $\partial\Omega$
and let $T>0$ be a fixed constant.
$\mathcal{L}$ is a strongly elliptic, self-adjoint operator with
smooth coefficients defined on $\Omega$,
\begin{equation*}
\mathcal{L} u = \sum_{i,j=1}^d a_{ij}(x)\frac{\partial^2 u}{\partial x_i\partial x_j}
+ c(x) u
\end{equation*}
where $a_{ij}(x)\in C^1(\overline{\Omega})$, $c(x)\in C(\overline{\Omega})$, $a_{ij}(x)=a_{ji}(x)$ and
$\sum_{i,j=1}^d a_{ij}\xi_i\xi_j \geq \delta \sum_{i=1}^d \xi_i^2$ for some
$\delta>0$, all $x\in\overline\Omega$ and all
$\xi=(\xi_1,\,\ldots\,,\xi_d)\in \mathbb{R}^d$.
To avoid unnecessary complications for the main theme we will make the
assumption of homogeneous Dirichlet boundary conditions on $\partial\Omega$
so that the natural domain for $\mathcal{L}$ is $H^2(\Omega)\cap H_0^1(\Omega)$.
Then $-\mathcal{L}$ has a complete, orthonormal system
of eigenfunctions $\{\psi_n\}_1^\infty$ in $L^2(\Omega)$
with $\psi_n\in H^2(\Omega)\cap H_0^1(\Omega)$
and with corresponding eigenvalues $\{\lambda_n\}$ such that
$0<\lambda_1\leq \lambda_2\leq\dots \leq\lambda_n \to \infty$ as $n\to\infty$.
The nonhomogeneous term will be taken to satisfy $f(x,t)\in C(0,T; H^2(\Omega))$.
This can be weakened to assume only $L^p$ regularity in time, but as shown in
\cite{LiLiuYamamoto:2015} this requires more delicate analysis.
The initial value $u_0(x)\in H^2(\Omega)$.
We will use $\langle\cdot,\cdot\rangle$ to denote the inner product in
$L^2(\Omega)$.
Throughout this paper we will, following \cite{Kochubei:2008}, make the following assumptions on the
distributed derivative parameter $\mu$.
\noindent
\begin{assumption}\label{mu_assumption}
$$\mu\in C^1[0,1],\ \mu(\alpha)\ge 0,\ \mu(1)\ne 0.$$
\end{assumption}
\begin{remark}
From these conditions it follows that
there exists a constant $C_\mu>0$ and an interval
$(\beta_0,\beta)\subset (0,1)$ such that
$\mu(\alpha)\ge C_\mu$ on $(\beta_0,\beta)$.
This will be needed in our proof of the representation theorem in
Section~\ref{sect:representation}.
\end{remark}
We will use the Djrbashian-Caputo version for $D^{(\mu)}$:
$D^{(\mu)} u=\int_0^1 \mu(\alpha){}\partial_t^\alpha u {\rm d}\alpha$
with
$\partial_t^\alpha u=\frac{1}{\Gamma(1-\alpha)}\int_0^t(t-\tau)^{-\alpha}
\frac{d}{d\tau}u(x,\tau){\rm d}\tau$
and so
\begin{equation}\label{eqn:dist_frac_der}
D^{(\mu)}u=\int_0^t \left[\int_0^1 \frac{\mu(\alpha)}{\Gamma(1-\alpha)}
(t-\tau)^{-\alpha} {\rm d}\alpha\right]\frac{d}{d\tau}u(x,\tau){\rm d}\tau
:= \int_0^t \eta(t-\tau)\frac{d}{d\tau}u(x,\tau){\rm d}\tau,
\end{equation}
where
\begin{equation}\label{eqn:dist_frac_eta}
\eta(s)=\int_0^1 \frac{\mu(\alpha)}{\Gamma(1-\alpha)}
s^{-\alpha} {\rm d}\alpha.
\end{equation}
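As a consistency check (formally, with a weight outside the class permitted by Assumption~\ref{mu_assumption}), the single-order choice $\mu(\alpha)=\delta(\alpha-\alpha_0)$ collapses the kernel to
\begin{equation*}
\eta(s)=\frac{s^{-\alpha_0}}{\Gamma(1-\alpha_0)},\qquad\text{so that}\qquad
D^{(\mu)}u=\partial_t^{\alpha_0}u,
\end{equation*}
recovering the usual single-order Djrbashian-Caputo model.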
Thus our distributed differential equation (DDE) model in this paper will be
\begin{equation}\label{eqn:model_pde}
\begin{aligned}
D^{(\mu)} u(x,t) - \mathcal{L} u(x,t) &= f(x,t), &&\quad x\in\Omega,\quad t\in(0,T);\\
u(x,t) &= 0, &&\quad x\in\partial\Omega,\quad t\in(0,T);\\
u(x,0) &= u_0(x), &&\quad x\in\Omega.\\
\end{aligned}
\end{equation}
\subsection{A Distributional ODE}
Our first task is to analyze the ordinary distributed fractional order equation
\begin{equation}\label{eqn:dODE}
D^{(\mu)} u(t)=-\lambda u(t),\ u(0)=1,\ t\in (0,T)
\end{equation}
and to show there exists a unique solution.
We will need some preliminary analysis to determine the integral operator
that serves as the inverse for $D^{(\mu)}$ in analogy with
the Riemann-Liouville derivative being inverted by the Abel operator.
If we now take the Laplace transform of $\eta$ in \eqref{eqn:dist_frac_eta}
then we have
\begin{equation}\label{eqn:Phi}
(\L\eta) (z)=\frac{\Phi(z)}{z},\quad \text{where }\
\Phi(z)=\int_0^1 \mu(\alpha)z^\alpha {\rm d}\alpha.
\end{equation}
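For example, the uniform weight $\mu\equiv 1$ gives the explicit expression
\begin{equation*}
\Phi(z)=\int_0^1 e^{\alpha\ln z}\,{\rm d}\alpha=\frac{z-1}{\ln z},
\end{equation*}
which already exhibits the $|z|/\ln|z|$ growth established for general admissible $\mu$ in Lemma~\ref{lem:phi}.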
\par The next lemma introduces an operator $I^{(\mu)}$ to analyze
the distributed ODE \eqref{eqn:dODE}.
\begin{lemma}\label{lem:kappa}
Define the operator $I^{(\mu)}$ as
$$
I^{(\mu)} \phi(t)=\int_0^t \kappa(t-s)\phi(s){\rm d}s,\quad \text{where }\
\kappa (t)=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}
\frac{e^{zt}}{\Phi(z)}{\rm d}z.
$$
Then the following conclusions hold:
\begin{itemize}
\item [(1)] $D^{(\mu)}I^{(\mu)} \phi(t)=\phi(t),\ I^{(\mu)}D^{(\mu)} \phi(t)=\phi(t)-\phi(0)$
for $\phi\in C^1(0,T);$
\item [(2)] $\kappa(t)\in C^\infty (0,\infty)$ and
\begin{equation}\label{eqn:kappa_inequality}
\kappa(t)=|\kappa(t)|\le C \ln {\frac{1}{t}}\ \ \text{for sufficiently small}\ t>0.
\end{equation}
\end{itemize}
\end{lemma}
\begin{proof}
This is \cite[Proposition~3.2]{Kochubei:2008}.
We remark that the results in that paper include further estimates
on $\kappa$ that require additional regularity on $\mu$.
However, for the bound \eqref{eqn:kappa_inequality} only $C^1$ regularity on
$\mu$ is needed.
\end{proof}
\begin{remark}
In \cite[Proposition~3.2]{Kochubei:2008}, if the condition either
$\mu(0)\ne 0$ or $\mu(\alpha)\sim a\alpha^v,\ a>0,\ v>0$ is added,
then $\kappa$ is completely monotone.
This property is not explicitly used in this paper, however as we remark after the
uniqueness result, this condition on $\kappa$ could be a useful basis for
a reconstruction algorithm.
\end{remark}
\par With $I^{(\mu)}$, we have the following results.
\begin{lemma}\label{lem:existence_uniqueness_u_n}
For each $\lambda>0$ there exists a unique $u(t)$ which satisfies
\eqref{eqn:dODE}.
\end{lemma}
\begin{proof}
Lemma~\ref{lem:kappa} implies that \eqref{eqn:dODE} is
equivalent to
$$u(t)=-\lambda I^{(\mu)} u(t)+1 =: A_1 u.$$
Now the asymptotic and smoothness results for $\kappa(t)$ in Lemma~\ref{lem:kappa}
give $\kappa\in L^1(0,T)$, so that $\|\kappa\|_{L^1(0,t)}\to 0$ as $t\to 0^+$;
hence there exists $t_1\in (0,T)$ such that
$$\|\kappa\|_{L^1(0,t_1)}<\frac{1}{\lambda}.$$
Hence, given $\phi_1, \phi_2\in L^1(0,t_1)$,
\begin{equation*}
\begin{split}
\|A_1(\phi_1)-A_1(\phi_2)\|_{L^1(0,t_1)}
&\le \lambda \int_0^{t_1} \int_0^t |\kappa(t-s)|\cdot |\phi_1(s)-\phi_2(s)|\,
ds dt\\
&= \lambda\int_0^{t_1} |\phi_1(s)-\phi_2(s)| \int_s^{t_1} |\kappa(t-s)|\,
dt ds\\
&\le \lambda\int_0^{t_1} |\phi_1(s)-\phi_2(s)|\cdot \|\kappa\|_{L^1(0,t_1)} ds\\
&= \lambda \|\kappa\|_{L^1(0,t_1)} \cdot\|\phi_1-\phi_2\|_{L^1(0,t_1)}.
\end{split}
\end{equation*}
From the fact that $0<\lambda\|\kappa\|_{L^1(0,t_1)}<1$,
$A_1$ is a contraction map on $L^1(0,t_1)$ and so by the
Banach fixed point theorem, there exists a
unique $u_{1}(t)\in L^1(0,t_1)$ that satisfies $u_{1}=A_1u_{1}$.
For each $t\in (t_1,2t_1)$, we have
\begin{equation*}
u(t) = 1 -\lambda I^{(\mu)} u(t)
= 1 -\lambda \int_{t_1}^{t} \kappa(t-s) u(s)\,ds
-\lambda \int_{0}^{t_1} \kappa(t-s) u(s)\,ds.
\end{equation*}
Since $u=u_{1}$ on $(0, t_1)$ is now known, we obtain
\begin{equation*}
\begin{split}
u(t)=-\lambda \int_{t_1}^{t} \kappa(t-s) u(s)\,ds+1
-\lambda \int_{0}^{t_1} \kappa(t-s) u_1(s)\,ds:=A_2 u
\end{split}
\end{equation*}
for each $t\in (t_1,2t_1)$.
Given $\phi_1, \phi_2\in L^1(t_1,2t_1)$, it holds
\begin{equation*}
\begin{split}
\|A_2(\phi_1)-A_2(\phi_2)\|_{L^1(t_1,2t_1)}
&\le \lambda \int_{t_1}^{2t_1} \int_{t_1}^{t}
|\kappa(t-s)|\cdot |\phi_1(s)-\phi_2(s)| ds dt\\
&=\lambda \int_{t_1}^{2t_1} |\phi_1(s)-\phi_2(s)|
\int_{s}^{2t_1} |\kappa(t-s)| {\rm d} t {\rm d} s\\
&\le \lambda \int_{t_1}^{2t_1} |\phi_1(s)-\phi_2(s)|
\cdot \|\kappa\|_{L^1(0,t_1)} {\rm d} s\\
&=\lambda \|\kappa\|_{L^1(0,t_1)} \cdot
\|\phi_1-\phi_2\|_{L^1(t_1,2t_1)}.
\end{split}
\end{equation*}
Hence, $A_2$ is also a contraction map on $L^1(t_1,2t_1)$,
which shows that there exists a unique
$u_{2}(t)\in L^1(t_1,2t_1)$ such that $u_{2}=A_2 u_{2}$.
Repeating this argument yields that there exists a unique solution
$u\in L^1(0,T)$ of the distributed ODE \eqref{eqn:dODE}, which completes
the proof.
\end{proof}
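The contraction argument above is constructive. The following minimal NumPy sketch (our illustration; the function name and discretization choices are not from the literature) implements the resulting Picard iteration $u\mapsto 1-\lambda I^{(\mu)} u$ on a uniform grid. For simplicity the iteration is run on the whole interval, so $T$ should be small enough that $\lambda\|\kappa\|_{L^1(0,T)}<1$; the restarting on successive subintervals used in the proof is omitted. The kernel $\kappa$ is assumed to be available as a callable (e.g.\ from a numerical inversion of $1/\Phi$) and is sampled at midpoints to avoid its logarithmic singularity at $t=0$.
\begin{verbatim}
import numpy as np

def solve_dode(kappa, lam, T, N, tol=1e-12, max_iter=500):
    # Picard iteration for u(t) = 1 - lam * int_0^t kappa(t - s) u(s) ds.
    h = T / N
    t = np.linspace(0.0, T, N + 1)
    k = kappa((np.arange(N) + 0.5) * h)   # midpoint samples of kappa
    u = np.ones(N + 1)                    # initial guess u = 1
    for _ in range(max_iter):
        # rectangle rule for the Volterra convolution (I^mu u)(t_n)
        conv = h * np.convolve(k, u[:N])[:N]
        u_new = np.ones(N + 1)
        u_new[1:] = 1.0 - lam * conv
        if np.max(np.abs(u_new - u)) < tol:
            break
        u = u_new
    return t, u
\end{verbatim}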
\begin{lemma}\label{lem:u_is_cm}
$u(t)\in C^{\infty}(0,T)$ is completely monotone, which gives
$0\le u(t)\le 1$ on $[0,T]$.
\end{lemma}
\begin{proof}
This lemma is a special case of \cite[Theorem 2.3]{Kochubei:2008}.
\end{proof}
\section{Existence, uniqueness and regularity}
\subsection{Existence and uniqueness of weak solution for DDE \eqref{eqn:model_pde}
\label{sect:model_general}}
\par We state the definition of a weak solution as follows.
\begin{definition}
$u(x,t)$ is a weak solution to
DDE \eqref{eqn:model_pde} in $L^2(\Omega)$ if
$u(\cdot,t)\in H_0^1(\Omega)$ for $t\in(0,T)$ and
for any $\psi(x)\in H^2(\Omega)\cap H_0^1(\Omega)$,
\begin{equation*}
\begin{split}
&\langle D^{(\mu)} u(x,t),\psi(x)\rangle-\langle\mathcal{L} u(x,t),\psi(x)\rangle=\langle f(x,t),\psi(x)\rangle,
\ t\in(0,T);\\
&\langle u(x,0),\psi(x)\rangle = \langle u_0(x),\psi(x)\rangle.
\end{split}
\end{equation*}
\end{definition}
\par Then Lemma \ref{lem:existence_uniqueness_u_n} gives the
following corollary.
\begin{corollary}\label{cor:existence_uniqueness}
There exists a unique weak solution $u^*(x,t)$ of
DDE \eqref{eqn:model_pde} and the representation of $u^*(x,t)$ is
\begin{equation}\label{eqn:weak solution}
\begin{aligned}
u^*(x,t)=&\sum_{n=1}^{\infty} \Big[\langle u_0,\psi_n\rangle u_n(t)
+\langle f(\cdot,0),\psi_n\rangle I^{(\mu)} u_n(t)\\
&+\int_0^t \langle\frac{\partial}{\partial t}
f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\Big]\psi_n(x),
\end{aligned}
\end{equation}
where $u_n(t)$ is the unique solution of the distributed ODE \eqref{eqn:dODE}
with $\lambda=\lambda_n$.
\end{corollary}
\begin{proof}
\par Completeness of $\{\psi_n(x):\N+\}$ in $L^2(\Omega)$ and
direct calculation show that the representation
\eqref{eqn:weak solution} is a weak solution of DDE \eqref{eqn:model_pde};
while the uniqueness of $u^*$ follows from Lemma \ref{lem:existence_uniqueness_u_n}.
\end{proof}
\subsection{Regularity}
\par The next two lemmas concern the regularity of
$u^*$ and $D^{(\mu)} u^*$.
\begin{lemma}\label{lem:regularity_u}
\begin{equation*}
\|u^*(x,t)\|_{C([0,T];H^2(\Omega))}
\le C\big(\|u_0\|_{H^2(\Omega)}+\|f(\cdot, 0)\|_{H^2(\Omega)}
+T^{1/2} |f|_{H^1([0,T];H^2(\Omega))}\big)
\end{equation*}
where $C>0$ depends on $\mu$, $\mathcal{L}$ and $\Omega$,
and $|f|_{H^1([0,T];H^2(\Omega))}=
\|\frac{\partial f}{\partial t}\|_{L^2([0,T];H^2(\Omega))}$.
\end{lemma}
\begin{proof}
\par Fix $t\in(0,T)$,
\begin{equation*}
\begin{aligned}
\|u^*(x,t)\|_{H^2(\Omega)}
\le &\ \big\|\sum_{n=1}^{\infty}\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)\big\|_{H^2(\Omega)}
&&:=I_1\\
&+\big\|\sum_{n=1}^{\infty}\langle f(\cdot,0),\psi_n\rangle I^{(\mu)} u_n(t)\psi_n(x)\big\|_{H^2(\Omega)}
&&:=I_2\\
&+\big\|\sum_{n=1}^{\infty}\int_0^t \langle\frac{\partial}{\partial t}
f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\ \psi_n(x)\big\|_{H^2(\Omega)}
&&:=I_3.
\end{aligned}
\end{equation*}
We estimate each of $I_1$, $I_2$, and $I_3$ in turn using Lemmas~\ref{lem:kappa} and \ref{lem:u_is_cm}
where in each case $C>0$ is a generic constant that depends only on
$\mu$, $\mathcal{L}$ and $\Omega$.
\begin{equation*}\label{eqn:I_1}
\begin{aligned}
I_1^2&=\big\|\sum_{n=1}^{\infty}\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)\big\|_{H^2(\Omega)}^2
\le C\big\|\mathcal{L}\big(\sum_{n=1}^{\infty}\langle u_0,\psi_n\rangle
u_n(t)\psi_n(x)\big)\big\|_{L^2(\Omega)}^2\\
&=C\big\|\sum_{n=1}^{\infty}\lambda_n\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)\big\|_{L^2(\Omega)}^2
=C\sum_{n=1}^{\infty} \lambda_n^2\langle u_0,\psi_n\rangle^2 u^2_n(t)\\
&\le C\sum_{n=1}^{\infty} \lambda_n^2\langle u_0,\psi_n\rangle^2
=C\big\|\mathcal{L} u_0\big\|_{L^2(\Omega)}^2\le C\|u_0\|_{H^2(\Omega)}^2.
\end{aligned}
\end{equation*}
\begin{equation*}
\begin{aligned}
I_2^2&=\big\|\sum_{n=1}^{\infty}\langle f(\cdot,0),\psi_n\rangle
I^{(\mu)} u_n(t)\psi_n(x)\big\|_{H^2(\Omega)}^2
\le C\sum_{n=1}^{\infty}\lambda_n^2\langle f(\cdot,0),\psi_n\rangle^2
(I^{(\mu)} u_n(t))^2\\
&\le C\sum_{n=1}^{\infty}\lambda_n^2\langle f(\cdot,0),\psi_n\rangle^2
\Bigl(\int_0^t|\kappa(\tau)|
\cdot|u_n(t-\tau)|{\rm d}\tau\Bigr)^2 \\
&\le C \sum_{n=1}^{\infty}\lambda_n^2\langle f(\cdot,0),\psi_n\rangle^2
\Bigl(\int_0^t |\kappa(\tau)|{\rm d}\tau\Bigr)^2\\
&\le C\sum_{n=1}^{\infty}\lambda_n^2\langle f(\cdot,0),\psi_n\rangle^2 \|\kappa\|_{L^1(0,T)}^2
\le C\|\kappa\|_{L^1(0,T)}^2 \|f(\cdot, 0)\|_{H^2(\Omega)}^2.
\end{aligned}
\end{equation*}
\begin{equation*}\label{eqn:I_3}
\begin{aligned}
\quad I_3^2&=\big\|\sum_{n=1}^{\infty}\int_0^t \langle\frac{\partial}{\partial t}
f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\ \psi_n(x)\big\|_{H^2(\Omega)}^2\\
&\le C\sum_{n=1}^{\infty}\left[\int_0^t \lambda_n \langle\frac{\partial}{\partial t}
f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\right]^2\\
&\le C\sum_{n=1}^{\infty}\left[\int_0^t \lambda_n|\langle\frac{\partial}{\partial t}
f(\cdot,\tau),\psi_n\rangle|\cdot |I^{(\mu)} u_n(t-\tau)|{\rm d}\tau\right]^2\\
&\le C\|\kappa\|_{L^1(0,T)}^2 \sum_{n=1}^{\infty}
\int_0^t \lambda_n^2|\langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle|^2
{\rm d}\tau \cdot \int_0^t 1^2 {\rm d}\tau\\
&\le C T\|\kappa\|_{L^1(0,T)}^2 \int_0^T \sum_{n=1}^{\infty}
\lambda_n^2|\langle\frac{\partial}{\partial t} f(\cdot,\tau),\psi_n\rangle|^2{\rm d}\tau
\le CT\|\kappa\|_{L^1(0,T)}^2 \int_0^T
\big\|\frac{\partial}{\partial t} f(\cdot,\tau)\big\|_{H^2(\Omega)}^2{\rm d}\tau\\
&=CT\|\kappa\|_{L^1(0,T)}^2 |f|^2_{H^1([0,T];H^2(\Omega))}.
\end{aligned}
\end{equation*}
Hence,
\begin{equation*}
\begin{split}
\|u^*(x,t)\|_{C([0,T];H^2(\Omega))}
&\le C\|u_0\|_{H^2( \Omega)}+C\|\kappa\|_{L^1(0,T)} \|f(\cdot, 0)\|_{H^2(\Omega)}\\
&\qquad+CT^{1/2}\|\kappa\|_{L^1(0,T)} |f|_{H^1([0,T];H^2(\Omega))}\\
&\le C\big(\|u_0\|_{H^2(\Omega)}+\|f(\cdot, 0)\|_{H^2(\Omega)}
+T^{1/2} |f|_{H^1([0,T];H^2(\Omega))}\big).
\end{split}
\end{equation*}
Due to the fact that $\kappa$ is determined by $\mu$,
the constant $C$ above only depends on $\mu$, $\mathcal{L}$ and $\Omega$.
\end{proof}
\begin{lemma}\label{lem:regularity_Dmu}
\begin{equation*}
\begin{split}
\|D^{(\mu)} u^*\|_{C([0,T];L^2(\Omega))}
\le C\left(\|u_0\|_{H^2(\Omega)}+T^{1/2}|f|_{H^1([0,T];H^2(\Omega))}
+\|f\|_{C([0,T];H^2(\Omega))}\right),
\end{split}
\end{equation*}
where $C>0$ only depends on $\mu$, $\mathcal{L}$ and $\Omega$.
\end{lemma}
\begin{proof}
\par For each $t\in(0,T)$,
\begin{equation*}
\begin{split}
D^{(\mu)} u^*(x,t)&=-\sum_{n=1}^{\infty}\lambda_n\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)
-\sum_{n=1}^{\infty}\lambda_n\langle f(\cdot,0),\psi_n\rangle I^{(\mu)} u_n(t)\psi_n(x)\\
&\quad-\sum_{n=1}^{\infty}\lambda_n\int_0^t \langle\frac{\partial}{\partial t}
f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\ \psi_n(x)
+f(x,t),
\end{split}
\end{equation*}
which implies
\begin{equation*}
\begin{aligned}
\qquad\|D^{(\mu)} u^*\|_{L^2(\Omega)}& \le
\|\sum_{n=1}^{\infty}\lambda_n\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)\|_{L^2(\Omega)}
+\|\sum_{n=1}^{\infty}\lambda_n\langle f(\cdot,0),\psi_n\rangle I^{(\mu)} u_n(t)\psi_n(x)\|_{L^2(\Omega)}\\
&\quad+\|\sum_{n=1}^{\infty}\lambda_n\int_0^t \langle\frac{\partial}{\partial t}
f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\ \psi_n(x)\|_{L^2(\Omega)}
+\|f(\cdot,t)\|_{L^2(\Omega)}.
\end{aligned}
\end{equation*}
By the same computations as for $I_1$, $I_2$ and $I_3$ we obtain
\begin{equation*}
\|\sum_{n=1}^{\infty}\lambda_n\langle u_0,\psi_n\rangle u_n(t)\psi_n(x)\|_{L^2(\Omega)}^2
=\sum_{n=1}^{\infty} \lambda_n^2\langle u_0,\psi_n\rangle^2u_n^2(t)
\le C\|u_0\|_{H^2(\Omega)}^2,
\end{equation*}
\begin{equation*}
\begin{split}
\|\sum_{n=1}^{\infty}\lambda_n\langle f(\cdot,0),\psi_n\rangle
I^{(\mu)} u_n(t)\psi_n(x)\|_{L^2(\Omega)}^2
&=\sum_{n=1}^{\infty}\lambda_n^2\langle f(\cdot,0),\psi_n\rangle^2 (I^{(\mu)} u_n(t))^2\\
&\le C\|\kappa\|_{L^1(0,T)}^2 \|f(\cdot, 0)\|_{H^2(\Omega)}^2\\
&\le C\|\kappa\|_{L^1(0,T)}^2 \|f\|_{C([0,T];H^2(\Omega))}^2
\end{split}
\end{equation*}
and
\begin{equation*}
\begin{split}
&\|\sum_{n=1}^{\infty}\lambda_n\int_0^t\langle\frac{\partial}{\partial t}
f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\ \psi_n(x)\|_{L^2(\Omega)}^2\\
=&\sum_{n=1}^{\infty}\left[\int_0^t \lambda_n\langle\frac{\partial}{\partial t}
f(\cdot,\tau),\psi_n\rangle I^{(\mu)} u_n(t-\tau){\rm d}\tau\right]^2
\le CT\|\kappa\|_{L^1(0,T)}^2 |f|^2_{H^1([0,T];H^2(\Omega))}.
\end{split}
\end{equation*}
Therefore,
\begin{equation*}
\begin{split}
\|D^{(\mu)} u^*\|_{C([0,T];L^2(\Omega))}
\le C\left(\|u_0\|_{H^2(\Omega)}+T^{1/2}|f|_{H^1([0,T];H^2(\Omega))}
+\|f\|_{C([0,T];H^2(\Omega))}\right),
\end{split}
\end{equation*}
where $C$ is dependent only on $\mu$, $\mathcal{L}$ and $\Omega$.
\end{proof}
The main theorem of this section follows from
Corollary~\ref{cor:existence_uniqueness},
Lemmas~\ref{lem:regularity_u} and \ref{lem:regularity_Dmu}.
\begin{theorem}[Main theorem for the direct problem]\label{main}
There exists a unique weak solution $u^*(x,t)$ in $L^2(\Omega)$ of the
DDE~\eqref{eqn:model_pde} with the representation \eqref{eqn:weak solution}
and the following regularity estimate
\begin{equation*}
\begin{aligned}
\quad\|u^*\|_{C([0,T];H^2(\Omega))} &+ \|D^{(\mu)} u^*\|_{C([0,T];L^2(\Omega))}\\
&\le C\Big(\|u_0\|_{H^2(\Omega)}+T^{1/2}|f|_{H^1([0,T];H^2(\Omega))}
+\|f\|_{C([0,T];H^2(\Omega))}\Big),
\end{aligned}
\end{equation*}
where $C>0$ depends only on $\mu$, $\mathcal{L}$ and $\Omega$.
\end{theorem}
\section*{Acknowledgment}
The authors were partially supported by NSF Grant DMS-1620138.
\bibliographystyle{abbrv}
\section{Introduction}\label{sec:intro}
Classical Brownian motion as formulated in
Einstein's 1905 paper \cite{Einstein:1905b}
can be viewed as a random walk
in which the dynamics are governed by an uncorrelated,
Markovian, Gaussian stochastic process.
The key assumption is that a change in the direction
of motion of a particle is random and that the mean-squared displacement
over many changes is proportional to time
$\langle x^2\rangle = C t$.
This easily leads to the derivation of the underlying differential equation
being the heat equation.
In fact we can generalize this situation to the case of a
continuous time random walk ({\sc ctrw}) where
the length of a given jump, as well as the waiting time elapsing
between two successive jumps follow a given probability density function.
In one spatial dimension, the picture is as follows:
a walker moves along the $x$-axis, starting at a position $x_0$ at time $t_0=0$.
At time $t_1$, the walker jumps to $x_1$, then at time $t_2$ jumps to $x_2$,
and so on.
We assume that the temporal and spatial increments
$\,\Delta t_n = t_n - t_{n-1}$, $\,\Delta x_n = x_n-x_{n-1}$
are independent, identically distributed random variables
with probability density functions $\psi(t)$ and $\lambda (x)$,
known as the waiting time distribution and the jump length distribution,
respectively.
Namely, the probability of $\Delta t_n$ lying in any interval $[a,b]\subset
(0,\infty)$ is
$ P(a<\Delta t_n<b) = \int_a^b \psi(t)\,dt$
and the probability of $\Delta x_n$ lying in any interval
$[a,b]\subset \mathbb{R}$ is $P(a<\Delta x_n <b) = \int_a^b \lambda(x)\,dx$.
For given $\psi$ and $\lambda$, the position $x$ of the walker
can be regarded as a step function of $t$.
It is easily shown using the Central Limit Theorem
that provided the first moment, or characteristic waiting time $T$, defined by
$T = \mu_1(\psi)=\int_0^\infty t\psi(t)\,dt$, and the second moment,
or jump length variance $\Sigma$,
$\Sigma=\mu_2(\lambda)=\int_{-\infty}^\infty x^2\lambda(x)\,dx$, are finite,
then the long-time limit again corresponds to Brownian motion.
On the other hand,
when the random walk involves correlations,
non-Gaussian statistics or a non-Markovian process
(for example, due to ``memory'' effects)
the diffusion equation will fail to describe the macroscopic limit.
For example, if we retain the assumption that $\Sigma$ is finite but
relax the condition of a finite characteristic waiting time so that
$\psi(t)\sim A/t^{1+\alpha}$ as $t\to\infty$, where
$0<\alpha\leq 1$, then we get very different results.
Such probability density functions are often referred to as
``heavy-tailed.''
If in fact we take
\begin{equation}\label{eqn:frac_dist}
\psi(t) = \frac{A_\alpha}{B_\alpha+t^{1+\alpha}}
\end{equation}
then again it can be shown, \cite{MontrollWeiss:1965,KlafterSokolov:2011},
that the effect is to modify the Einstein formulation
$\langle x^2\rangle = C t$ to $\langle x^2\rangle = C t^\alpha$.
This leads to a {\it subdiffusive\/} process and, importantly, provides
a tractable model where the partial differential equation is replaced by
one with a fractional derivative in time of order $\alpha$.
Such objects have been a steady source of investigation over the last
almost 200 years beginning in the 1820s with the work of Abel
and continuing first by Liouville then by Riemann.
The fractional derivative operator can take several forms, the most usual
being either the Riemann-Liouville $^R\!D_0^\alpha$ based on Abel's original
singular integral operator,
or the Djrbashian-Caputo $^C\!D^\alpha_0$ version,
\cite{Djrbashian:1989}, which reverses the order of the
Riemann-Liouville formulation
\begin{equation}\label{eqn:frac_ders}
\begin{aligned}
^R\!D^\alpha_0 u &=\frac{1}{\Gamma(n-\alpha)}\frac{d^n}{dx^n}\int_0^x(x-t)^{n-\alpha-1}u(t)\,dt,\\
^C\!D^\alpha_0u &= \frac{1}{\Gamma(n-\alpha)}\int_0^x(x-t)^{n-\alpha-1}u^{(n)}(t)\,dt.\\
\end{aligned}
\end{equation}
The Djrbashian-Caputo derivative tends to be more favored by
practitioners since it allows the
specification of initial conditions in the usual way.
Nonetheless, the Riemann-Liouville derivative enjoys certain analytic
advantages, including being defined for a wider class of functions
and possessing a semigroup property.
Thus the fractional-anomalous diffusion model gives rise to the fractional
differential equation
\begin{equation} \label{eqn:basic_one_term}
\partial_t^\alpha u - \mathcal{L} u = f(x,t),\qquad
x\in\Omega, t\in (0,T)
\end{equation}
where $\mathcal{L}$ is a uniformly elliptic differential operator on
an open domain $\Omega\subset\mathbb{R}^d$ and $\partial_t^\alpha$ is
one of the above fractional derivatives.
The governing function for the fractional derivative becomes
the Mittag-Leffler function $E_{\alpha,\beta}(z)$
which generalizes the exponential function that forms the key component
for the fundamental solution in the classical case when $\alpha=\beta=1$.
\begin{equation}\label{eqn:mlf}
E_{\alpha,\beta}(z) = \sum_{k=0}^\infty \frac{z^k}{\Gamma(\alpha k + \beta)}
\end{equation}
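In particular, $E_{1,1}(z)=e^z$ and $E_{2,1}(z)=\cosh(\sqrt{z})$, so \eqref{eqn:mlf} indeed interpolates between familiar special functions.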
For the typical examples described here we have $0<\alpha\leq 1$ and
$\beta$ a positive real number although further generalization is certainly
possible.
See, for example, \cite{GorenfloKilbasMainardiRogosin:2014}.
During the past two decades, differential equations involving
fractional-order derivatives have received increasing attention in
applied disciplines.
Such models are known to capture more faithfully the dynamics of anomalous
diffusion processes in amorphous materials,
e.g., viscoelastic materials, porous media, diffusion on domains with
fractal geometry and option pricing models.
These models also describe certain diffusion processes more accurately
than Gaussian-based Brownian motion and have particular relevance
to materials exhibiting memory effects.
As a consequence, we can obtain fundamentally different physics.
There has been significant progress on both
mathematical methods and numerical algorithm design and, more recently,
attention has been paid to inverse problems.
This has shed considerable light on the new physics appearing,
\cite{JinRundell:2015,SokolovKlafterBlumen:2002}
Of course, such a specific form for $\psi(t)$ as given by \eqref{eqn:frac_dist}
is rather restrictive as it assumes a quite specific scaling factor between
space and time distributions and there is no reason to expect nature
is so kind to only require a single value for $\alpha$.
One approach around this is to take a finite sum of such
terms each corresponding to a different value of $\alpha$.
This leads to a model where the time derivative is replaced by
a finite sum of fractional derivatives of orders $\alpha_j$ and
by analogy leads to the law $\langle x^2\rangle = g(t,\alpha)$
where $g$ is a finite sum of fractional powers.
This formulation replaces the single value fractional derivative by a finite sum
$\sum_1^m q_j\partial_t^{\alpha_j} u$ where a linear combination of
$m$ fractional powers has been taken.
Physically this represents a fractional diffusion model
that assumes diffusion takes place in a medium in
which there is no single scaling exponent; for example,
a medium in which there are memory effects over multiple time scales.
This seemingly simple device leads to considerable complications.
For one, we have to use the so-called multi-index Mittag-Leffler function
$E_{\alpha_1,\,\ldots\,\alpha_m,\beta_1,\,\ldots\,\beta_m}(z)$ in place
of the two parameter $E_{\alpha,\beta}(z)$ and this adds complexity not only
notationally but in proving required regularity results for the basic
forwards problem of knowing $\Omega$, $\mathcal{L}$, $f$, $u_0$
and recovering $u(x,t)$, see \cite{LiYamamoto:2015,LiLiuYamamoto:2015} and the references within.
It is also possible to generalize beyond the finite sum by taking the so-called
distributed fractional derivative,
\begin{equation}\label{eqn:distributional_der-def}
\partial_t^{(\mu)} u(t) = \int_0^1 \mu(\alpha) \partial_t^\alpha u(t) \,d\alpha.
\end{equation}
Thus the finite sum derivative can be obtained by taking
$\mu(\alpha) = \sum_{j=1}^m q_j\delta(\alpha-\alpha_j)$.
See \cite{Naber:2003,Kochubei:2008,MMPG:2008,Luchko:2009b,li2016analyticity},
for several studies incorporating this extension.
This in turn allows a more general probability
density function $\psi$ in \eqref{eqn:frac_dist} and hence
a more general form for $g(t,\alpha)$.
The purpose of this paper is to analyze this distributed model extension to
equation~\eqref{eqn:basic_one_term} and the paper is organized as follows.
First, we demonstrate existence, uniqueness and regularity results for the
solution of the distributed fractional derivative model on a cylindrical region
in space-time $\Omega\times[0,T]$ where $\Omega$ is a bounded, open set in
$\mathbb{R}^d$. Second, in the case of one spatial variable, $d=1$,
we set up representation theorems for the solution analogous to that for
the heat equation itself, \cite{Cannon:1984}, and extended to the
case of a single fractional derivative in \cite{RundellXuZuo:2013}.
Section~\ref{sec:eur} looks at the assumptions to be made on the various terms
in \eqref{eqn:distributional_der-def} and utilizes these to show existence,
uniqueness and regularity results for the direct problem;
namely, to be given $\Omega$, $\mathcal{L}$, $f$, $u_0$ and the function
$\mu=\mu(\alpha)$, then to solve \eqref{eqn:distributional_der-def}
for $u(x,t)$.
Section~\ref{sect:representation} will derive several representation
theorems for this solution and these will be used in the final section
to formulate and prove a uniqueness result for the associated inverse problem
to be discussed below.
\medskip
However, there is the obvious question for all of these models:
what is the value of $\alpha$?
Needless to say there has been much work done on this; experiments have
been set up to collect additional information that allows a best fit
for $\alpha$ in a given setting.
One of the earliest works here is from 1975, \cite{Scher_Montroll:1975} and
in part was based on the Montroll-Weiss random walk model
\cite{MontrollWeiss:1965}.
See also \cite{HatanoHatano:1998}.
Mathematically the recovery in models with
a single value for $\alpha$ turns out to be relatively
straightforward provided we are able to choose the type of data being measured.
This would be chosen to allow us to rely on the known asymptotic behavior
of the Mittag-Leffler function for both small and large arguments.
An exception here is when we also have to determine $\alpha$ as well
as an unknown coefficient in which case the combination problem can
be decidedly much more complex.
See, for example, \cite{cheng2009uniqueness, Li2013Simultaneous, RundellXuZuo:2013}.
Amongst the first papers in this direction with a rigorous existence and
uniqueness analysis is \cite{HatanoNakagawaWangYamamoto:2013}.
The multi-term case, although similar in concept,
is quite nontrivial; the corresponding analysis is carried out in
\cite{LiYamamoto:2015,LiLiuYamamoto:2015}.
In these papers the authors were able to prove an important uniqueness
theorem: if given the additional data consisting of the value of the normal
derivative $\frac{\partial u}{\partial\nu}$ at a fixed point
$x_0\in\partial\Omega$ for all $t$ then the sequence pair
$\{q_j,\alpha_j\}_{j=1}^m$ can be uniquely recovered.
The main result of the current paper in this direction is
in Section~\ref{inverse_problem} where we show that
the uniqueness results of
\cite{LiYamamoto:2015,LiLiuYamamoto:2015}
can be extended to recover a suitably defined exponent function $\mu(\alpha)$.
\section{Determining the distributed coefficient $\mu(\alpha)$}
\label{inverse_problem}
In this section we state and prove two uniqueness theorems for the recovery
of the distributed derivative $\mu$.
We show that by measuring the solution along a time trace from a fixed location
$x_0$ one can use this data to uniquely recover $\mu(\alpha)$.
This time trace can be one where the sampling point is located within the
interior of $\Omega=(0,1)$ and we measure $u(x_0,t)$, or we measure
the flux at $x^\star$; $u_x(x^\star,t)$ where $0<x^\star\leq 1$.
This latter case therefore includes measuring the flux on the right-hand
boundary $x=1$.
First we give the definition of the admissible set $\Psi$
according to Assumption~\ref{mu_assumption}.
\begin{definition}\label{def:Psi}
Define the set $\Psi$ by
\begin{equation*}
\Psi:=\{\mu\in C^1[0,1]:\ \mu\ge 0,
\ \mu(1)\ne 0,\ \mu(\alpha)\ge C_{\Psi}>0 \ \text{on}\ (\beta_0, \beta_1)\},
\end{equation*}
where the constant $C_{\Psi}>0$ and the interval
$(\beta_{0}, \beta_{1})\subset(0,1)$
only depend on $\Psi.$
\end{definition}
We introduce the functions
$F(y;x_0)$ and $F_f(y;x^\star)$ in the next two lemmas.
\begin{lemma}\label{lem:F}
Define the function $F(y;x_0)\in C^1((0,\infty),\mathbb{R})$ as
$$ F(y;x_0)=\frac{e^{(x_0-2)y}-e^{-x_0 y}}{2(1-e^{-2y})},$$
where $x_0\in (0,1)$ is a constant.
Then the function $F(y;x_0)$ is strictly increasing on the interval
$(\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}, \infty)\subset (0,\infty).$
\end{lemma}
\begin{proof}
Since $x_0\in (0,1)$,
$e^{(x_0-2)y}-e^{-x_0 y}<0$ and $2(1-e^{-2y})>0$ on $(0,\infty)$.
A direct calculation now yields
$$\frac{d}{dy}(e^{(x_0-2)y}-e^{-x_0 y})=(x_0-2)e^{(x_0-2)y}+x_0e^{-x_0 y}>0$$
for $y\in (\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}, \infty)$.
Hence the numerator $e^{(x_0-2)y}-e^{-x_0 y}$ is negative and strictly increasing on
$(\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}, \infty)$,
while the denominator $2(1-e^{-2y})$ is obviously both positive and strictly
increasing there.
Since a negative, strictly increasing numerator divided by a positive, strictly
increasing denominator yields a strictly increasing quotient, the function
$F(y;x_0)$ is strictly increasing on
$(\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}, \infty)$, which completes
the proof.
\end{proof}
\begin{lemma}\label{lem:F_f}
For the inverse problem with flux data,
define the function $F_f(y;x^\star)\in C^1((0,\infty),\mathbb{R})$ as
$$F_f(y;x^\star)=\frac{y e^{(x^\star-2)y}+ye^{-x^\star y}}
{2(1-e^{-2y})},$$
where $x^\star\in (0,1]$ is a constant.
Then the function $F_f(y;x^\star)$ is strictly decreasing on the interval
$(1/x^\star,\infty)\subset (0,\infty).$
\end{lemma}
\begin{proof}
\begin{equation*}
\begin{split}
\frac{\partial F_f}{\partial y}(y;x^\star )
&=\frac{((x^\star -2)y+1)e^{(x^\star -2)y}+(1-x^\star y)e^{-x^\star y}}{2(1-e^{-2y})^2}\\
&\quad +\frac{(-x^\star y-1)e^{(x^\star -4)y}
+((x^\star -2)y-1)e^{(-x^\star -2)y}}{2(1-e^{-2y})^2},
\end{split}
\end{equation*}
For $y>1/x^\star$ we have $x^\star y>1$ and, since $x^\star\le 1$, also $y>1$;
therefore each of the four factors $(x^\star -2)y+1$, $1-x^\star y$,
$-x^\star y-1$ and $(x^\star -2)y-1$ in the numerator is negative.
Hence $\frac{\partial F_f}{\partial y}(y;x^\star )<0$ for
$y\in (1/x^\star,\infty)$ and the proof is complete.
\end{proof}
For the important lemmas to follow, we need the Stone--Weierstrass and the
M\"untz--Sz\'asz Theorems.
See the appendix for statements and references for these results.
The next result shows that the set
$\{(n r)^x:\N+\}$ is complete in $L^2[0,1]$ for any positive integer $r$.
We give two proofs of this important lemma.
\begin{lemma}\label{lem:dense}
For each $r\in \mathbb{N}^+,$ the vector space spanned by the
set of functions $\{(nr)^x:\ \N+\}$ is dense in the space
$L^2[0,1],$ i.e.
$$\overline{span\{(nr)^x:\N+\}}=L^2[0,1]$$
with respect to the $L^2$ norm.
In other words, the set $\{(nr)^x:\N+\}$ is complete in $L^2[0,1].$
\end{lemma}
\begin{proof}
Clearly, $span\{(nr)^x:\N+\}$ is a subalgebra of $C[0,1]$ that separates points, so by the Stone--Weierstrass Theorem its closure with respect to the
uniform norm is either $C[0,1]$ or $\{f\in C[0,1]:f(x_0)=0\}$ for some $x_0\in[0,1].$
Both alternatives yield that
$span\{(nr)^x:\ \N+\}$ is dense in $C[0,1]$ with respect to the $L^2$ norm,
which together with the fact that $C[0,1]$ is dense in $L^2[0,1]$ gives that
$span\{(nr)^x:\N+\}$ is dense in $L^2[0,1]$, completing the first proof.
As a second proof,
if for some $h\in C[0,1]$, $\int_0^1 (n r)^x h(x)\,dx = 0$ for all
$n\in \mathbb{N}^+$ then
$\int_0^1 e^{x\log(r n) } h(x)\,dx = 0$ and with the change of variables
$y = e^{x}$ this becomes
$\int_1^e y^{\log(r n)}\tilde h(y)\,dy = 0$ for all $n\in \mathbb{N}^+$
where $\tilde h(y) = h(\log(y))/y$.
Since $\sum_{n=1}^\infty 1/\log(r n)$ diverges, the M\"untz-Sz\'asz theorem
shows that $\tilde h =0$ and hence $h(x) = 0$.
\end{proof}
We now have the main result of this paper.
\begin{theorem}[Uniqueness theorem for the inverse problem]\label{thm:uniqueness_mu}
In the DDE~\eqref{eqn:one_dim_model}, set $u_0=g_1=f=0$ and let $g_0$
satisfy the following condition
\begin{equation*}\label{eqn:condition_g_0_v1}
(\L g_0)(z)\ne 0\ \text{for}\ z\in(0,\infty).
\end{equation*}
Given $\mu_1$, $\mu_2\in \Psi$, denote the two weak solutions
with respect to
$\mu_1$ and $\mu_2$ by $u(x,t;\mu_1)$ and $u(x,t;\mu_2)$ respectively.
Then for any $x_0\in (0,1)$ and $x^\star\in(0,1]$, either
\begin{equation*}\label{interior_data}
u(x_0,t;\mu_1)=u(x_0,t;\mu_2)
\end{equation*}
or
\begin{equation*}\label{flux_data}
\frac{\partial u}{\partial x}
(x^\star,t;\mu_1)=\frac{\partial u}{\partial x}(x^\star,t;\mu_2),\ t\in(0,\infty)
\end{equation*}
implies $\mu_1=\mu_2$ on $[0,1]$.
\end{theorem}
\begin{proof}
For the first case, $u(x_0,t;\mu_1)=u(x_0,t;\mu_2)$, fix $x_0\in (0,1)$;
Theorem~\ref{thm:representation} then yields
\begin{equation*}
u(x_0,t;\mu_k)=-2\int_0^t \overline{\theta}_{(\mu_k)}(x_0,t-s)g_0(s) \,ds,
\qquad k=1,\; 2
\end{equation*}
which implies
$$
\int_0^t \overline{\theta}_{(\mu_1)}(x_0,t-s)g_0(s)\,ds
=\int_0^t \overline{\theta}_{(\mu_2)}(x_0,t-s)g_0(s)\,ds.
$$
Taking the Laplace transform in $t$ on both sides of the above equality gives
$$\Big(\L(\overline{\theta}_{(\mu_1)}(x_0,\cdot))\Big)(z)\cdot (\L g_0)(z)
=\Big(\L(\overline{\theta}_{(\mu_2)}(x_0,\cdot))\Big)(z)\cdot (\L g_0)(z).$$
Since $(\L g_0)(z)\ne 0\ \text{on}\ (0,\infty),$
it follows that
$$\Big(\L(\overline{\theta}_{(\mu_1)}(x_0,\cdot))\Big)(z)
=\Big(\L(\overline{\theta}_{(\mu_2)}(x_0,\cdot))\Big)(z),
\ \text{for}\ z\in (0,\infty).$$
This result and \eqref{eqn:L_theta} then give
$$
\frac{e^{(x_0-2)\Phi_1^{1/2}(z)}-e^{-x_0\Phi_1^{1/2}(z)}}{2(1-e^{-2\Phi_1^{1/2}(z)})}
=\frac{e^{(x_0-2)\Phi_2^{1/2}(z)}-e^{-x_0\Phi_2^{1/2}(z)}}
{2(1-e^{-2\Phi_2^{1/2}(z)})},
\ z\in(0,\infty),
$$
where
$$\Phi_j (z)=\int_0^1 \mu_j(\alpha)z^\alpha {\rm d}\alpha,\quad j=1,2.$$
The definition of $\Psi$ and the fact $z\in (0,\infty)$ yield
$\Phi_j^{1/2}(z)\in (0,\infty)$ and hence we can rewrite the above
equality as
\begin{equation}\label{eqn:equality_F}
F(\Phi_1^{1/2}(z);x_0)=F(\Phi_2^{1/2}(z);x_0),\ z\in (0,\infty),
\end{equation}
where the function $F$ comes from Lemma~\ref{lem:F}.
Since $x_0\in (0,1)$, it is obvious that
$\displaystyle{\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}>0}$.
Then we can pick a large $N^*\in\mathbb{N}^+$ such that
\begin{equation*}
\int_{\beta_0}^{\beta_1} C_\Psi\cdot (N^*)^\alpha {\rm d}\alpha
>\left(\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}\right)^2,
\end{equation*}
which together with the definition of $\Psi$ gives that
for each $z\in (0,\infty)$ with $z\ge N^*,$
$\Phi_j(z)\in (0,\infty)$ and
$$\Phi_j^{1/2}(z)>\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)},\quad j=1,2.$$
This result means that
\begin{equation}\label{eqn:inequality_Phi}
\Phi_j^{1/2}(nN^*)>\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)},\ j=1,2,\ \N+.
\end{equation}
Lemma~\ref{lem:F} shows that $F(\cdot;x_0)$ is strictly increasing on the
interval $\bigl(\frac{\ln (2-x_0)-\ln x_0}{2(1-x_0)}, \infty\bigr)$,
which together with \eqref{eqn:equality_F} and \eqref{eqn:inequality_Phi}
yields
$$\Phi_1^{1/2}(nN^*)=\Phi_2^{1/2}(nN^*),\ \N+,$$
that is
$\Phi_1(nN^*)=\Phi_2(nN^*),\ \N+$,
consequently, we have
\begin{equation*}
\int_0^1 (\mu_1(\alpha)-\mu_2(\alpha)) (nN^*)^\alpha {\rm d}\alpha=0,
\ \N+.
\end{equation*}
We can rewrite the above result as
$\,\langle \mu_1(\alpha)-\mu_2(\alpha),(nN^*)^\alpha\rangle=0$ for $\N+$.
From the completeness of $\{(nN^*)^\alpha:\N+\}$ in $L^2[0,1]$ which is
ensured by Lemma~\ref{lem:dense}, we have $\mu_1-\mu_2=0$ in $L^2[0,1]$,
that is,
$\,\|\mu_1-\mu_2\|_{L^2[0,1]}=0$,
which together with the continuity of $\mu_1$ and $\mu_2$ shows that
$\mu_1=\mu_2$ on $[0,1].$
For the case of $\frac{\partial u}{\partial x}
(x^\star,t;\mu_1)=\frac{\partial u}{\partial x}(x^\star,t;\mu_2),$
following \eqref{eqn:L_theta} we have
\begin{equation*}\label{eqn:L_theta_x}
\begin{split}
&\quad\ \L\left(\frac{\partial \overline{\theta}_{(\mu)}}{\partial x}(x,t)\right)
=\L \left[\kappa * \left(\frac{\partial ^3}{\partial t\partial x^2}
\sum_{m=-\infty}^{\infty} G_{(\mu)}(x,t)\right)\right]\\
&=\L \left[\kappa *\L^{-1}\left(\sum_{m=-1}^{-\infty}
\frac{\Phi^{3/2}(z)}{2} e^{\Phi^{1/2}(z)(x+2m)}
+\sum_{m=0}^{\infty}\frac{\Phi^{3/2}(z)}{2}
e^{-\Phi^{1/2}(z)(x+2m)}\right)\right]\\
&=\frac{1}{\Phi(z)}\left(\sum_{m=-1}^{-\infty} \frac{\Phi^{3/2}(z)}{2}e^{\Phi^{1/2}(z)(x+2m)}
+\sum_{m=0}^{\infty} \frac{\Phi^{3/2}(z)}{2}e^{-\Phi^{1/2}(z)(x+2m)}\right)\\
&=\frac{\Phi^{1/2}(z)e^{(x-2)\Phi^{1/2}(z)}+\Phi^{1/2}(z)e^{-x\Phi^{1/2}(z)}}
{2(1-e^{-2\Phi^{1/2}(z)})}.
\end{split}
\end{equation*}
Following the proof for the case $u(x_0,t;\mu_1)=u(x_0,t;\mu_2)$,
we can deduce $\mu_1=\mu_2$ from the above result and Lemmas \ref{lem:F_f}
and \ref{lem:dense}.
\end{proof}
\begin{remark}
In this paper we have considered only the uniqueness question for the function
$\mu(\alpha)$.
Certainly, one would like to know under what conditions this function
can be effectively recovered from the given data.
Clearly this is an important question, but we caution there are many
difficulties, especially with a mathematical analysis of the stability issue
of $\mu$ in terms of the overposed data either $u(x_0,t)$ or $\frac{\partial u}{\partial x}(x^\star,t)$.
One can certainly employ the representation result of
section~\ref{sect:representation} to obtain a nonlinear integral equation for
$\mu$ but the analysis of this is unclear.
An alternative approach would be to restrict the function $\mu$ as in
Lemma~\ref{lem:kappa} to ensure that $\kappa$ is completely monotone and hence
use Bernstein's theorem to obtain an integral representation for this function.
We hope to address some of these questions in subsequent work.
\end{remark}
\section{Representation of the DDE solution for one spatial variable}
\label{sect:representation}
In this section, we will establish a representation result for the
special case $\Omega=(0,1)$, $\mathcal{L}u=u_{xx}$ in \eqref{eqn:model_pde}
\begin{equation}\label{eqn:one_dim_model}
\begin{cases}
D^{(\mu)} u-u_{xx}=f(x,t),\ 0<x<1,\ 0<t<\infty;\\
u(x,0)=u_0(x),\ 0<x<1;\\
u(0,t)=g_0(t),\ 0\le t< \infty;\\
u(1,t)=g_1(t),\ 0\le t< \infty,
\end{cases}
\end{equation}
where $g_0,g_1\in L^2(0,\infty)$ and $f(x,\cdot)\in L^1(0,\infty)$ for each $x\in(0,1)$.
We can obtain the fundamental solution by Laplace and Fourier transforms.
First, we extend the finite domain to an infinite one and impose a homogeneous
right-hand side, i.e. we consider the following model
\begin{equation*}
\begin{cases}
D^{(\mu)} u-u_{xx}=0,\ -\infty<x<\infty,\ 0<t<\infty;\\
u(x,0)=u_0(x),\ -\infty<x<\infty.
\end{cases}
\end{equation*}
Next we take the Fourier transform $\mathcal{F}$
with respect to $x$ and, denoting
$(\mathcal{F} u)(\xi,t)$ by $\tilde{u}(\xi,t)$, obtain
\begin{equation*}
D^{(\mu)} \tilde{u}(\xi,t) +\xi^2\tilde{u}(\xi,t)=0.
\end{equation*}
Then, taking the Laplace transform $\L$ with respect to $t$ and denoting
$(\L \tilde{u})(\xi,z)$ by $\hat{\tilde{u}}(\xi,z)$,
we obtain
\begin{equation*}
\int_0^1 \mu(\alpha)\left(z^\alpha\hat{\tilde{u}}(\xi,z)-
z^{\alpha-1}\tilde{u}_0(\xi)\right)\,d\alpha
+\xi^2\hat{\tilde{u}}(\xi,z)=0,
\end{equation*}
that is,
\begin{equation*}
\hat{\tilde{u}}(\xi,z)=\frac{\Phi(z)/z}{\Phi(z)+\xi^2}\tilde{u}_0(\xi),
\end{equation*}
where $\Phi(z)$ comes from \eqref{eqn:Phi}.
Then we have
\begin{equation*}
\begin{aligned}
u(x,t)=\mathcal{F}^{-1}\!\circ\L^{-1}( \hat{\tilde{u}}(\xi,z))
&=\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{i x\xi}\frac{1}{2\pi i}
\int_{\gamma-i\infty}^{\gamma+i\infty} e^{zt}
\frac{\Phi(z)/z}{\Phi(z)+\xi^2}\tilde{u}_0(\xi) \,dz \,d\xi\\
&=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} e^{zt}
\int_{-\infty}^{+\infty} \frac{1}{2\pi}e^{ i x\xi}\frac{\Phi(z)/z}{\Phi(z)+\xi^2}
\tilde{u}_0(\xi) \,d\xi\,dz\\
&=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} e^{zt}
\big(\mathcal{F}^{-1}(\frac{\Phi(z)/z}{\Phi(z)+\xi^2})*u_0\big)(x)\,dz,
\end{aligned}
\end{equation*}
where the integral above is the usual Bromwich path, that is, a line in the complex plane parallel to the imaginary axis $z=\gamma + it$, $-\infty<t<\infty$,
see \cite{WhittakerWatson:1962}. The last equality follows from the Fourier transform formula
on convolutions, and $\gamma$ can be an arbitrary positive number
because $z=0$ is the only singular point of
the function $\frac{\Phi(z)/z}{\Phi(z)+\xi^2}$ in the closed right half-plane.
Throughout the remainder of this paper we will use $\gamma$ to denote a
strictly positive constant which is larger than $e^{1/\beta}$; the number
$e^{1/\beta}$ arises in the proof of Lemma \ref{lem:phi}.
We shall assume the angle of variation $z$ for the Laplace transforms is
from $-\pi$ to $\pi$, that is
$z\in\Lambda:=\{z\in \mathbb{C}:\arg(z)\in (-\pi, \pi]\}$.
For $\Phi(z)$, we have the following result which will be central to
the rest of the paper.
It can be shown by using the Cauchy-Riemann equations in polar form.
\begin{lemma}
$\Phi(z)$ is analytic on $\mathbb{C}\setminus\!\{0\}$.
\end{lemma}
\par In the next two lemmas, we obtain important properties of $\Phi(z).$
\begin{lemma}\label{lem:re_phi}
$\;{\displaystyle
\operatorname{Re}(\Phi^{1/2}(z))\ge \frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|,\ \operatorname{Re} z=\gamma>0}$.
\end{lemma}
\begin{proof}
$\gamma>0$ implies that $\operatorname{Re} z>0$, i.e.
$\arg(z)\in (-\frac{\pi}{2},\frac{\pi}{2})$,
which together with $0<\alpha<1$ and $\mu(\alpha)\ge 0$ yields
$\operatorname{Re} \Phi(z)\ge 0$, i.e. $\arg(\Phi(z))\in
[-\frac{\pi}{2},\frac{\pi}{2}]$. This gives $\arg(\Phi^{1/2}(z))\in
[-\frac{\pi}{4},\frac{\pi}{4}]$. Hence,
$$\operatorname{Re}(\Phi^{1/2}(z))=\cos(\arg(\Phi^{1/2}(z)))|\Phi^{1/2}(z)|
\ge\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|,$$
which completes the proof.
\end{proof}
\begin{lemma}\label{lem:phi}
$$\;{\displaystyle
C_{\mu, \beta}\frac{\gamma^\beta-\gamma^{\beta_0}}{\ln \gamma} \le
C_{\mu, \beta} \frac{|z|^\beta-|z|^{\beta_0}}{\ln |z|}
\le|\Phi(z)|\le C\frac{|z|-1}{\ln |z|}},$$ for $z$ such that
$\operatorname{Re} z=\gamma>e^{1/\beta}>0$.
\end{lemma}
\begin{proof}
For the right-hand side of the inequality,
$\mu(\alpha)\in C^1[0,1]$ obviously implies that there exists a
$C>0$ such that $|\mu(\alpha)|\le C$ on $[0,1]$.
Hence,
\begin{equation*}
\begin{aligned}
|\Phi(z)|\le \int_0^1 |\mu(\alpha)|\cdot |z|^\alpha \,d\alpha
\le C \int_0^1 |z|^\alpha \,d\alpha=C\frac{|z|-1}{\ln|z|}.
\end{aligned}
\end{equation*}
\par
For the left-hand side, write $z=r e^{i\theta}$. Since $\operatorname{Re} z=\gamma>0$,
$\theta\in(-\frac{\pi}{2},\frac{\pi}{2})$, then
\begin{equation*}
\begin{aligned}
|\Phi (z)|&\ge \operatorname{Re}(\Phi(z))=\int_0^1 \mu(\alpha) r^\alpha
\cos(\theta \alpha) \,d\alpha \\
&\ge C_\mu \int_{\beta_0}^\beta r^\alpha \cos(\theta\alpha)\,d\alpha
\ge C_\mu \cos(\beta\theta) \int_{\beta_0}^\beta r^\alpha \,d\alpha\\
&\ge C_\mu \cos(\frac{\beta\pi}{2}) \int_{\beta_0}^\beta |z|^\alpha
\,d\alpha
=C_{\mu, \beta} \frac{|z|^\beta-|z|^{\beta_0}}{\ln |z|}.
\end{aligned}
\end{equation*}
Recalling $|z|\ge \gamma>e^{1/\beta}$, we have
$\frac{|z|^\beta-|z|^{\beta_0}}{\ln |z|}
\ge \frac{\gamma^\beta-\gamma^{\beta_0}}{\ln \gamma}$ due to the function
$\frac{x^\beta-x^{\beta_0}}{\ln x} $ being increasing on the interval
$(e^{1/\beta},+\infty)$.
\end{proof}
Now we are in a position to calculate the complex integral
$\mathcal{F}^{-1}\bigl(\frac{\Phi(z)/z}{\Phi(z)+\xi^2}\bigr)$.
\begin{lemma}\label{lem:inversefourier}
$\;{\displaystyle
\mathcal{F}^{-1}(\frac{\Phi(z)/z}{\Phi(z)+\xi^2})
=\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x|}}$.
\end{lemma}
\begin{proof}
From the inverse Fourier transform formula we have
\begin{equation*}
\begin{aligned}
\mathcal{F}^{-1}\Bigl(\frac{\Phi(z)/z}{\Phi(z)+\xi^2}\Bigr)
=\frac{1}{2\pi}\int_{-\infty}^{+\infty} e^{i x\xi}
\frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi.
\end{aligned}
\end{equation*}
We denote the contour from $-R$ to $R$ by
$C_0$, the semicircle with radius $R$ in the upper and lower half plane
by $C_{R^+}$ and $C_{R^{-}}$, respectively.
Also, let $C_+$, $C_-$ be the closed contours which consist of
$C_0, C_{R^+}$ and $C_0, C_{R^-}$ respectively.
For the case of $x>0$, working on the closed contour $C_+$, we have
\begin{equation*}
\begin{aligned}
\quad\frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\!\!e^{ i x\xi}
\frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi
&=\lim_{R\to \infty} \frac{1}{2\pi}\oint_{C_+}\!\! e^{ i x\xi}
\frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi
-\lim_{R\to \infty} \frac{1}{2\pi}\int_{C_{R^+}} \!\!e^{ i x\xi}
\frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi\\
&=\lim_{R\to \infty} \frac{1}{2\pi}\oint_{C_+}\!\!e^{ i x\xi}
\frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi,
\end{aligned}
\end{equation*}
where the second limit is $0$ as follows from Jordan's Lemma.
Since $0<\alpha<1$ and $\gamma>0$, by our assumptions
we have $\operatorname{Re}(\Phi(z))\ge 0$, which in turn leads to $\operatorname{Re}(\Phi^{1/2}(z))\ge 0$.
Then there is only one singular point $\xi=i\Phi^{1/2}(z)$ in
$C_+$ which is contained by the upper half plane.
By the residue theorem \cite{WhittakerWatson:1962}, we have
\begin{equation*}
\lim_{R\to \infty} \frac{1}{2\pi}\oint_{C_+}e^{ i x\xi}
\frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi
=2\pi i\cdot\frac{1}{2\pi}\,
e^{-x\Phi^{1/2}(z)}\, \frac{\Phi(z)/z}{2i\Phi^{1/2}(z)}
=\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)x}.
\end{equation*}
For the case of $x<0$, we choose the closed contour
$C_-$.
Since $\operatorname{Re}(\Phi^{1/2}(z))\ge 0$, it follows that
$\xi=-i\Phi^{1/2}(z)$ is the unique singular point in $C_-$.
Then a similar calculation gives
\begin{equation*}
\begin{aligned}
\quad\frac{1}{2\pi}\int_{-\infty}^{+\infty}\!\!\! e^{ i x\xi}
\frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi
&=-\lim_{R\to \infty} \frac{1}{2\pi}\oint_{C_-}\!\! e^{ i x\xi}
\frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi
+\lim_{R\to \infty} \frac{1}{2\pi}\int_{C_R^-}\!\! e^{ i x\xi}
\frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi\\
&=-\lim_{R\to \infty} \frac{1}{2\pi}\oint_{C_-}\!\! e^{ i x\xi}
\frac{\Phi(z)/z}{\Phi(z)+\xi^2} \,d\xi\\
&=\lim_{R\to \infty} \frac{\Phi^{1/2}(z)}{2z}e^{\Phi^{1/2}(z)x}
= \frac{\Phi^{1/2}(z)}{2z}e^{\Phi^{1/2}(z)x}.
\end{aligned}
\end{equation*}
Therefore,
$$
\mathcal{F}^{-1}\Bigl(\frac{\Phi(z)/z}{\Phi(z)+\xi^2}\Bigr)
=\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x|},
$$
which completes the proof.
\end{proof}
\subsection{The fundamental solution $\,G_{(\mu)}(x,t)$}
With the above lemma, we have
\begin{equation*}
\begin{aligned}
u(x,t)&=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty} e^{zt}
\int_{-\infty}^{+\infty} \frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x-y|}
u_0(y)\,dy\,dz\\
&=\int_{-\infty}^{+\infty} \left[\frac{1}{2\pi i}
\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi^{1/2}(z)}{2z}
e^{zt-\Phi^{1/2}(z)|x-y|}\,dz\right] u_0(y)\,dy.
\end{aligned}
\end{equation*}
Then we can define the fundamental solution $G_{(\mu)}(x,t)$ as
\begin{equation}\label{G_mu}
G_{(\mu)}(x,t)=\frac{1}{2\pi i}
\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi^{1/2}(z)}{2z}
e^{zt-\Phi^{1/2}(z)|x|}\,dz.
\end{equation}
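In practice, the Bromwich integral \eqref{G_mu} can be approximated by
truncating the contour to $z=\gamma+is$, $s\in[-Z,Z]$, and applying a
quadrature rule. The following Python sketch is purely illustrative and not
part of the analysis; the constant weight $\mu\equiv 1$ and all numerical
parameters are placeholder assumptions.
\begin{verbatim}
import numpy as np

def trapz(y, x):
    # trapezoidal rule for (possibly complex) samples y on the grid x
    return np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0

def Phi(z, mu=lambda a: np.ones_like(a), n=400):
    # Phi(z) = int_0^1 mu(alpha) z^alpha d alpha
    a = np.linspace(0.0, 1.0, n)
    return trapz(mu(a) * z**a, a)

def G_mu(x, t, gamma=2.0, Z=300.0, m=6001):
    # (1/(2 pi i)) int_{gamma-iZ}^{gamma+iZ} Phi^{1/2}(z)/(2z)
    #   * exp(z t - Phi^{1/2}(z) |x|) dz, with dz = i ds on z = gamma + i s
    s = np.linspace(-Z, Z, m)
    z = gamma + 1j * s
    r = np.sqrt(np.array([Phi(w) for w in z]))  # principal branch
    f = r / (2 * z) * np.exp(z * t - r * abs(x))
    return (trapz(f, s) / (2 * np.pi)).real
\end{verbatim}
The principal branch of the complex square root is consistent with the
condition $\operatorname{Re}(\Phi^{1/2}(z))\ge 0$ used throughout this section.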
The following three lemmas provide some important properties of
$G_{(\mu)}(x,t)$.
\begin{lemma}\label{lem:pointwise}
The integral for $G_{(\mu)}(x,t)$ is convergent for each
$(x,t)\in (0,\infty)\times(0,\infty)$.
\end{lemma}
\begin{proof}
\par Given $(x,t)\in (0,\infty)\times(0,\infty)$, with Lemmas
\ref{lem:re_phi} and \ref{lem:phi}, we have
\begin{equation*}
\begin{aligned}
|G_{(\mu)}(x,t)|
&\le \frac{1}{4\pi} \int_{\gamma-i\infty}^{\gamma+i\infty}
|\frac{\Phi^{1/2}(z)}{z}|\cdot|e^{zt}|\cdot|e^{-\Phi^{1/2}(z)|x|}|\,dz\\
&= \frac{1}{4\pi} \int_{\gamma-i\infty}^{\gamma+i\infty}
\frac{|\Phi^{1/2}(z)|}{|z|}e^{\gamma t}
e^{-\operatorname{Re}(\Phi^{1/2}(z))|x|} \,dz\\
&\le \frac{1}{4\pi} \int_{\gamma-i\infty}^{\gamma+i\infty}
\frac{|\Phi^{1/2}(z)|}{|z|}e^{\gamma t}
e^{-\frac{\sqrt{2}}{2}|x||\Phi^{1/2}(z)|} \,d z\\
&\le\frac{Ce^{\gamma t}}{4\pi}\int_{\gamma-i\infty}^{\gamma+i\infty}
(|z|\ln|z|)^{-1/2}
e^{-C_{\mu, \beta}|x|(\frac{|z|^\beta-|z|^{\beta_0}}{\ln |z|})^{1/2}}
\,dz\\
&\le \frac{Ce^{\gamma t}}{4\pi (\ln\gamma)^{1/2}}
\int_{\gamma-i\infty}^{\gamma+i\infty} |z|^{-1/2}
e^{-C_{\mu, \beta}|x|(\frac{C|z|^\beta}{\ln |z|})^{1/2}}
\,dz<\infty.
\end{aligned}
\end{equation*}
\end{proof}
\begin{lemma}\label{eqn:G_smooth}
$G_{(\mu)}(x,t)\in C^\infty((0,\infty)\times(0,\infty))$.
\end{lemma}
\begin{proof}
Fix $(x,t)\in (0,\infty)\times(0,\infty)$.
Then for small $|\epsilon_x|, |\epsilon_t|$ we have
\begin{equation*}
\begin{aligned}
\quad |G_{(\mu)}(x+\epsilon_x,t+\epsilon_t)-G_{(\mu)}(x,t)|
&\le |G_{(\mu)}(x+\epsilon_x,t+\epsilon_t)-G_{(\mu)}(x,t+\epsilon_t)|\\
&\quad+ |G_{(\mu)}(x,t+\epsilon_t)-G_{(\mu)}(x,t)|.
\end{aligned}
\end{equation*}
For $|G_{(\mu)}(x+\epsilon_x,t+\epsilon_t)-G_{(\mu)}(x,t+\epsilon_t)|$,
the following holds
\begin{equation*}
\begin{aligned}
&\quad|G_{(\mu)}(x+\epsilon_x,t+\epsilon_t)-G_{(\mu)}(x,t+\epsilon_t)|\\
&\le \frac{1}{2\pi} \int_{\gamma-i\infty}^{\gamma+i\infty}
|\frac{\Phi^{1/2}(z)}{2z}|\cdot|e^{zt+z\epsilon_t}|
\cdot|e^{-\Phi^{1/2}(z)|x/2|}|\cdot
|e^{-\Phi^{1/2}(z)(\frac{x}{2}+\epsilon_x)}
-e^{-\Phi^{1/2}(z)(x/2)}|\ \,dz.
\end{aligned}
\end{equation*}
From the proof of Lemma~\ref{lem:pointwise}, we have
\begin{equation*}
\begin{aligned}
|e^{-\Phi^{1/2}(z)(\frac{x}{2}+\epsilon_x)}-e^{-\Phi^{1/2}(z)(x/2)}|
&\le |e^{-\Phi^{1/2}(z)(\frac{x}{2}+\epsilon_x)}|+|e^{-\Phi^{1/2}(z)(x/2)}|\\
&\le e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(\frac{x}{2}+\epsilon_x)}
+e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(x/2)} \le 2,
\end{aligned}
\end{equation*}
and
$$
\frac{1}{2\pi} \int_{\gamma-i\infty}^{\gamma+i\infty}
|\frac{\Phi^{1/2}(z)}{2z}|\cdot|e^{zt+z\epsilon_t}|
\cdot|e^{-\Phi^{1/2}(z)|x/2|}|\ \,dz<\infty.
$$
Hence, after setting $e_1(z,\epsilon_x)=|e^{-\Phi^{1/2}(z)(\frac{x}{2}+\epsilon_x)}
-e^{-\Phi^{1/2}(z)(x/2)}|,$
we can apply Lebesgue's dominated convergence theorem to deduce that
\begin{equation*}
\begin{aligned}
&\lim_{\epsilon_x\to 0}|G_{(\mu)}(x+\epsilon_x,t+\epsilon_t)
-G_{(\mu)}(x,t+\epsilon_t)|\\
\le& \lim_{\epsilon_x\to 0}
\frac{1}{2\pi} \int_{\gamma-i\infty}^{\gamma+i\infty}
|\frac{\Phi^{1/2}(z)}{2z}|\cdot|e^{zt+z\epsilon_t}|
\!\cdot\!|e^{-\Phi^{1/2}(z)|x/2|}|\!\cdot e_1(z,\epsilon_x)\ \,dz\\
=& \frac{1}{2\pi} \int_{\gamma-i\infty}^{\gamma+i\infty}
|\frac{\Phi^{1/2}(z)}{2z}|\!\cdot\!|e^{zt+z\epsilon_t}|
\!\cdot\!|e^{-\Phi^{1/2}(z)|x/2|}|\!\cdot\!
\lim_{\epsilon_x\to 0}e_1(z,\epsilon_x)\ \,dz=0.
\end{aligned}
\end{equation*}
A similar argument also shows that
$\lim_{\epsilon_t\to 0}|G_{(\mu)}(x,t+\epsilon_t)-G_{(\mu)}(x,t)|=0$.
From this we deduce that
$\lim_{\epsilon_x,\ \epsilon_t\to 0}
|G_{(\mu)}(x+\epsilon_x,t+\epsilon_t)-G_{(\mu)}(x,t)|=0$,
which shows that $G_{(\mu)}(x,t)\in C((0,\infty)\times(0,\infty))$.
Similarly, following from the proof of Lemma~\ref{lem:pointwise}
and the above limiting argument, we obtain
$$G_{(\mu)}(x,t) \in C^n((0,\infty)\times(0,\infty))\ \text{for all}\ n\in\mathbb{N},$$
which leads to $G_{(\mu)}(x,t)\in C^\infty((0,\infty)\times(0,\infty))$
and this completes the proof.
\end{proof}
\begin{lemma}\label{lem:delta}
\begin{equation*}
\lim_{t\to 0} G_{(\mu)}(x,t)=\delta (x).
\end{equation*}
\end{lemma}
\begin{proof}
\par Fix $x\ne 0$. For each $t\in (0,\infty)$,
\begin{equation*}
\left|\frac{\Phi^{1/2}(z)}{2z}\right|\cdot |e^{zt-\Phi^{1/2}(z)|x|}|
\le e^{\gamma t} \left|\frac{\Phi^{1/2}(z)}{2z}\right|\cdot|e^{-\Phi^{1/2}(z)|x|}|.
\end{equation*}
The proof of Lemma \ref{lem:pointwise} shows that
$$\int_{\gamma-i\infty}^{\gamma+i\infty} \left|\frac{\Phi^{1/2}(z)}{2z}\right|
\cdot|e^{-\Phi^{1/2}(z)|x|}|\,dz<\infty,$$
so by the dominated convergence theorem we can deduce that
\begin{equation}\label{eqn:equality_2}
\begin{aligned}
\lim_{t\to 0} G_{(\mu)}(x,t)&=
\lim_{t\to 0}\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}
\frac{\Phi^{1/2}(z)}{2z} e^{zt-\Phi^{1/2}(z)|x|}\,dz\\
&=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}
\frac{\Phi^{1/2}(z)}{2z} \lim_{t\to 0}e^{zt-\Phi^{1/2}(z)|x|}\,dz\\
&=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}
\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x|}\,dz,
\end{aligned}
\end{equation}
for each $x\ne 0$. Letting $z=\gamma+mi$, we have
\begin{equation}\label{eqn:equality_1}
\begin{aligned}
\lim_{t\to 0} G_{(\mu)}(x,t) =\frac{1}{4\pi}
\int_{-\infty}^{+\infty}\frac{\Phi^{1/2}(\gamma+mi)}{\gamma+mi}
e^{-\Phi^{1/2}(\gamma+mi)|x|}\,dm.
\end{aligned}
\end{equation}
Recalling the definition of the closed contour $C_-$ and
the proof of Lemma~\ref{lem:inversefourier}, we see that the function
$\frac{\Phi^{1/2}(\gamma+mi)}{\gamma+mi}e^{-\Phi^{1/2}(\gamma+mi)|x|}$
is analytic inside $C_-$.
Then
\begin{equation*}
\begin{aligned}
\int_{-\infty}^{+\infty}\frac{\Phi^{1/2}(\gamma+mi)}{\gamma+mi}
e^{-\Phi^{1/2}(\gamma+mi)|x|}\,dm
&=\lim_{R\to \infty}\int_{C_{R^-}}\!\!\!\frac{\Phi^{1/2}(\gamma+mi)}{\gamma+mi}
e^{-\Phi^{1/2}(\gamma+mi)|x|}\,dm\\
&=\lim_{R\to \infty}\int_{-\pi}^0 Rie^{i\theta}\frac{\Phi^{1/2}(\gamma+Rie^{i\theta})}
{\gamma+Rie^{i\theta}} e^{-\Phi^{1/2}(\gamma+Rie^{i\theta})|x|}\,d\theta,
\end{aligned}
\end{equation*}
where $m=Re^{i\theta}$.
Since $\operatorname{Re} (\gamma+Rie^{i\theta})=\gamma-R\sin\theta\ge 0$, following from
the proofs of Lemmas~\ref{lem:re_phi} and \ref{lem:phi}, we can deduce that
\begin{equation*}
\begin{aligned}
\operatorname{Re} (\Phi^{1/2}(\gamma+Rie^{i\theta}))
&\ge \frac{\sqrt{2}}{2} |\Phi^{1/2}(\gamma+Rie^{i\theta})|\\
&\ge C_{\mu, \beta} \frac{|\gamma+Rie^{i\theta}|^\beta
-|\gamma+Rie^{i\theta}|^{\beta_0}}{\ln |\gamma+Rie^{i\theta}|}
\ge C \frac{R^\beta-R^{\beta_0}}{\ln R},
\end{aligned}
\end{equation*}
and
$$|\Phi^{1/2}(\gamma+Rie^{i\theta})|
\le C\frac{|\gamma+Rie^{i\theta}|-1}{\ln |\gamma+Rie^{i\theta}|}
\le C\frac{|R|-1}{\ln |R|}$$
for large $R$.
Hence, as $R\to \infty$,
\begin{equation*}
\begin{aligned}
\Bigl|Rie^{i\theta}\frac{\Phi^{1/2}(\gamma+Rie^{i\theta})}
{\gamma+Rie^{i\theta}}&e^{-\Phi^{1/2}(\gamma+Rie^{i\theta})|x|}\Bigr|\\
&\le |\frac{Rie^{i\theta}}{\gamma+Rie^{i\theta}}|
\!\cdot\! |\Phi^{1/2}(\gamma+Rie^{i\theta})|
\!\cdot\! |e^{-\Phi^{1/2}(\gamma+Rie^{i\theta})|x|}|\\
&\le C\frac{|R|-1}{\ln |R|}\!\cdot\! e^{-C \frac{R^\beta-R^{\beta_0}}{\ln R}|x|}
\to 0,
\end{aligned}
\end{equation*}
which implies
$$\left|\int_{-\infty}^{+\infty}\frac{\Phi^{1/2}(\gamma+mi)}{\gamma+mi}
e^{-\Phi^{1/2}(\gamma+mi)|x|}\,dm\right|
\le \lim_{R\to\infty}\pi\cdot C\frac{R-1}{\ln R}\cdot
e^{-C \frac{R^\beta-R^{\beta_0}}{\ln R}|x|}=0.$$
The above result and \eqref{eqn:equality_1} show that
\begin{equation}\label{eqn:equality_3}
\lim_{t\to 0} G_{(\mu)}(x,t)=0\ \text{for}\ x\ne 0.
\end{equation}
Now we are in a position to calculate
$\int_{-\infty}^\infty \lim_{t\to 0} G_{(\mu)}(x,t) \,dx$.
Equation~\eqref{eqn:equality_2} gives
\begin{equation*}
\begin{aligned}
\int_{-\infty}^\infty \lim_{t\to 0} G_{(\mu)}(x,t) \,dx
&=\int_{-\infty}^0 \lim_{t\to 0} G_{(\mu)}(x,t) \,dx
+\int_0^\infty \lim_{t\to 0} G_{(\mu)}(x,t) \,dx\\
&=\int_{-\infty}^0 \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}
\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x|}\,dz \,dx\\
&\qquad+\int_0^\infty \frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}
\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)|x|}\,dz \,dx\\
&=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\int_{-\infty}^0
\frac{\Phi^{1/2}(z)}{2z}e^{\Phi^{1/2}(z)x}\,dx \,dz\\
&\qquad+\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}\int_0^\infty
\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)x}\,dx \,dz.
\end{aligned}
\end{equation*}
Now Lemma~\ref{lem:re_phi} and the fact that $\operatorname{Re} z=\gamma>0$ show that
\begin{equation*}
\begin{aligned}
&\int_{-\infty}^0\frac{\Phi^{1/2}(z)}{2z}e^{\Phi^{1/2}(z)x}\,dx
=\frac{e^{\Phi^{1/2}(z)x}}{2z}\Big |_{-\infty}^0=\frac{1}{2z},\\
&\int_{0}^\infty\frac{\Phi^{1/2}(z)}{2z}e^{-\Phi^{1/2}(z)x}\,dx
=\frac{e^{-\Phi^{1/2}(z)x}}{2z}\Big |_\infty^0=\frac{1}{2z}.\\
\end{aligned}
\end{equation*}
Therefore,
${\displaystyle\;
\int_{-\infty}^\infty \lim_{t\to 0} G_{(\mu)}(x,t) \,dx
=\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}
\frac{1}{2z}\cdot 2\,dz=1
}$,
which together with \eqref{eqn:equality_3} yields the conclusion.
\end{proof}
Lemma~\ref{lem:delta} allows us to make the definition
\begin{equation}\label{initial_G_mu}
G_{(\mu)}(x,0)=\lim_{t\to 0} G_{(\mu)}(x,t)=\delta (x).
\end{equation}
\subsection{The Theta functions:
$\theta_{(\mu)}(x,t)$ and $\overline{\theta}_{(\mu)}(x,t)$}
One very useful way to represent solutions to initial value problems
for a parabolic equation is through the $\theta$-function \cite{Cannon:1984}.
For the case of the heat equation, if we let $K(x,t)$ denote the
fundamental solution, then we set
$\theta(x,t) = \sum_{m=-\infty}^{\infty}K(x+2m,t)$.
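Recall that for the heat equation the fundamental solution is the Gaussian kernel
$$K(x,t)=\frac{1}{\sqrt{4\pi t}}\,e^{-x^2/(4t)},$$
so that $\theta(x,t)$ is smooth for $t>0$ and $2$-periodic in $x$.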
The value of this function lies in the following result.
If $u_t-u_{xx}=0$, $u(0,t)=f_0(t)$, $u(1,t)=f_1(t)$, $u(x,0)=u_0(x)$, then
$u(x,t)$ has the representation
\begin{equation}
\begin{aligned}
u(x,t) &= \int_0^1[\theta(x-\xi,t)-\theta(x+\xi,t)]u_0(\xi)\,d\xi\\
&\quad -2\int_0^t \frac{\partial\theta}{\partial x}(x,t-\tau)f_0(\tau)\,d\tau
+2\int_0^t \frac{\partial\theta}{\partial x}(x-1,t-\tau)f_1(\tau)\,d\tau.
\end{aligned}
\end{equation}
A generalization to the case of the fractional equation
$D_t^\alpha u -u_{xx} = 0$ for a fixed $\alpha$, $0<\alpha\leq 1$, can be found
in \cite{RundellXuZuo:2013}.
Our aim is to
extend this representation result to the distributed fractional order case.
\begin{definition}\label{def:theta_func}
We define for each $\mu(\alpha)$ which satisfies Assumption \ref{mu_assumption},
$${\displaystyle\;
\theta_{(\mu)}(x,t)=\sum_{m=-\infty}^{\infty} G_{(\mu)}(x+2m,t)}.$$
\end{definition}
\par The uniform convergence and smoothness properties
of $\theta_{(\mu)}(x,t)$ are established by the next lemma.
\begin{lemma}\label{lem:uniform_theta}
$\theta_{(\mu)}(x,t)$ is an even function of $x$, and the series defining it
converges uniformly on $(0,2)\times (0,T)$ for any positive $T$.
Consequently, $\theta_{(\mu)}(x,t)\in C^\infty((0,2)\times (0,\infty))$.
\end{lemma}
\begin{proof}
The even symmetry follows directly from the definitions
of $G_{(\mu)}(x,t)$ and $\theta_{(\mu)}(x,t)$.
\par Given a positive $T$, fix $(x,t)\in (0,2)\times (0,T)$. By Lemma \ref{lem:re_phi} we have
\begin{equation}\label{leftsum}
\begin{aligned}
\sum_{|m|>N}|G_{(\mu)}(x+2m,t)|
&= \sum_{|m|>N}\left|\frac{1}{2\pi i} \int_{\gamma-i\infty}^{\gamma+i\infty}
\frac{\Phi^{1/2}(z)}{2z} e^{zt-\Phi^{1/2}(z)|x+2m|} \,dz\right|\\
&\le \frac{1}{2\pi } \int_{\gamma-i\infty}^{\gamma+i\infty}
\big|\frac{\Phi^{1/2}(z)}{2z}\big|e^{\gamma t}\sum_{|m|>N}
e^{-\operatorname{Re}(\Phi^{1/2}(z))|x+2m|} \,dz\\
&\le \frac{1}{2\pi } \int_{\gamma-i\infty}^{\gamma+i\infty}
\big|\frac{\Phi^{1/2}(z)}{2z}\big|e^{\gamma t}\sum_{|m|>N}
e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)||x+2m|} \,dz.
\end{aligned}
\end{equation}
For the series $\sum_{|m|>N}
e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)||x+2m|}$,
Lemma~\ref{lem:phi} shows that
\begin{equation*}
\begin{aligned}
&\sum_{|m|>N}e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)||x+2m|}\\
=&\ (1-e^{-\sqrt{2}|\Phi^{1/2}(z)|})^{-1}
(e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(2N+2+x)}+
e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(2N+2-x)})\\
=&\ \frac{e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(2N-2)}}
{1-e^{-\sqrt{2}|\Phi^{1/2}(z)|}}
e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|}
(e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(3+x)}+
e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|(3-x)})\\
\le&\ 2 (1-e^{-\sqrt{2}(C_{\mu, \beta}\frac{\gamma^\beta-\gamma^{\beta_0}}
{\ln \gamma})^{1/2}})^{-1}
(e^{-\frac{\sqrt{2}}{2}(C_{\mu, \beta}\frac{\gamma^\beta-\gamma^{\beta_0}}
{\ln \gamma})^{1/2}})^{2N-2}
e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|}\\
\le&\ A_\gamma C_\gamma^{2N-2}e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|}
\end{aligned}
\end{equation*}
where
$$A_\gamma=2 (1-e^{-\sqrt{2}(C_{\mu, \beta}\frac{\gamma^\beta-\gamma^{\beta_0}}
{\ln \gamma})^{1/2}})^{-1},
\quad
0<C_\gamma=e^{-\frac{\sqrt{2}}{2}(C_{\mu,\beta}\frac{\gamma^\beta-\gamma^{\beta_0}}
{\ln \gamma})^{1/2}}<1$$
depend only on $\gamma>0$ (with $\mu$, $\beta_0$ and $\beta$ fixed).
Inserting the above result into \eqref{leftsum} yields
$$
\sum_{|m|>N}|G_{(\mu)}(x+2m,t)|
\le \frac{1}{2\pi} \int_{\gamma-i\infty}^{\gamma+i\infty}
\big|\frac{\Phi^{1/2}(z)}{2z}\big|e^{\gamma t}
A_\gamma C_\gamma^{2N-2}e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|}\,dz.
$$
Meanwhile, from the proof of Lemma~\ref{lem:pointwise}, we have
$$\int_{\gamma-i\infty}^{\gamma+i\infty} \big|\frac{\Phi^{1/2}(z)}{2z}\big|
e^{-\frac{\sqrt{2}}{2}|\Phi^{1/2}(z)|}\,dz<\infty.$$
Therefore,
$$
\sum_{|m|>N}|G_{(\mu)}(x+2m,t)|
\le CC_\gamma^{2N-2}
$$
where the constant $C$ only depends on
$T$, $\gamma$ and $0<C_\gamma<1$ only depends on $\gamma$.
We conclude from this that for each $\epsilon>0$ there exists a sufficiently large
$N\in\mathbb{N}$, independent of $x$ and $t$, such that
$$
\sum_{|m|>N}|G_{(\mu)}(x+2m,t)|<\epsilon
\ \text{for each}\ (x,t)\in (0,2)\times (0,T),
$$
which implies the uniform convergence of the series.
Then the smoothness results follow from Lemma~\ref{eqn:G_smooth}
and the uniform convergence.
\end{proof}
\par Now we introduce the definition of $\overline{\theta}_{(\mu)}(x,t)$
and state some of its properties.
\begin{definition}
\begin{equation*}\label{eqn:theta_bar}
\overline{\theta}_{(\mu)}(x,t)
=\left(I^{(\mu)} \frac{\partial ^2 \theta_{(\mu)}}{\partial t \partial x}
\right)(x,t),\ (x,t)\in (0,2)\times(0,\infty).
\end{equation*}
\end{definition}
\begin{lemma}\label{lem:dmu_xx}
$D^{(\mu)}\theta_{(\mu)}(x,t)=(\theta_{(\mu)}(x,t))_{xx}$,\quad
$D^{(\mu)}\overline{\theta}_{(\mu)}(x,t)=(\overline{\theta}_{(\mu)}(x,t))_{xx}$.
\end{lemma}
\begin{proof}
\par The first equality follows from the fact that
$D^{(\mu)} G_{(\mu)}(x,t)=(G_{(\mu)}(x,t))_{xx}$
and the uniform convergence of the series representation.
For the second equality, Lemma~\ref{lem:kappa} yields
$D^{(\mu)}\overline{\theta}_{(\mu)}=D^{(\mu)}I^{(\mu)} \frac{\partial ^2
\theta_{(\mu)}}{\partial t \partial x}
=\frac{\partial ^2 \theta_{(\mu)}}{\partial t \partial x}$ and this together
with the first equality and Lemma~\ref{lem:uniform_theta} then gives
\begin{equation*}
\begin{aligned}
(\overline{\theta}_{(\mu)})_{xx}
&=I^{(\mu)} \frac{\partial ^2 }{\partial t \partial x}(\frac{\partial^2\theta_{(\mu)}}{\partial x^2})
=I^{(\mu)} \frac{\partial ^2 }{\partial t \partial x} D^{(\mu)}\theta_{(\mu)}
=I^{(\mu)} \frac{\partial }{\partial t} D^{(\mu)} (\frac{\partial \theta_{(\mu)}}{\partial x})\\
&=\kappa * \frac{\partial }{\partial t}
[\eta * \frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} ]
=\kappa * \eta * \frac{\partial^3 \theta_{(\mu)}}{\partial t^2\partial x}
+\kappa * \eta\cdot \frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} (x,0)\\
&=\int_0^t \frac{\partial^3 \theta_{(\mu)}}{\partial s^2 \partial x}(x,s) \,ds
+\frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} (x,0)\\
&=\frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} (x,t)
-\frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} (x,0)
+\frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x} (x,0)
=\frac{\partial^2 \theta_{(\mu)}}{\partial t\partial x},
\end{aligned}
\end{equation*}
which shows that the second equality holds.
\end{proof}
\begin{lemma}\label{lem:boundary_overline_theta}
For each $\psi(t)\in L^2(0,\infty)$, we have
\begin{equation*}
\begin{split}
&\int_0^t \overline{\theta}_{(\mu)}(0+,t-s)\psi(s)\,ds=-\frac{1}{2} \psi(t),
\quad
\int_0^t \overline{\theta}_{(\mu)}(1-,t-s)\psi(s)\,ds = 0,\\
&\int_0^t \overline{\theta}_{(\mu)}(0-,t-s)\psi(s)\,ds=\frac{1}{2} \psi(t),
\quad
\int_0^t \overline{\theta}_{(\mu)}(-1+,t-s)\psi(s)\,ds = 0,\quad t\in (0,\infty).
\end{split}
\end{equation*}
\end{lemma}
\begin{proof}
Fix $(x,t)\in (0,1)\times(0,\infty)$. Then computing the Laplace
transform yields
\begin{equation}\label{eqn:L_theta}
\begin{aligned}
\L(\overline{\theta}_{(\mu)}(x,t))
&=\L \Bigl[\kappa * \Bigl(\frac{\partial ^2}{\partial t\partial x}
\sum_{m=-\infty}^{+\infty} G_{(\mu)}(x+2m,t)\Bigr)\Bigr]\\
&=\L \Bigl[\kappa *\Bigl(\sum_{m=-1}^{-\infty} \frac{1}{2\pi i}
\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2}
e^{zt+\Phi^{1/2}(z)(x+2m)}\,dz\\
&\quad-\sum_{m=0}^{+\infty} \frac{1}{2\pi i}
\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2}
e^{zt-\Phi^{1/2}(z)(x+2m)}\,dz\Bigr)\Bigr]\\
&=\L (\kappa)\cdot \L\Bigl(\sum_{m=-1}^{-\infty} \frac{1}{2\pi i}
\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2}
e^{zt+\Phi^{1/2}(z)(x+2m)}\,dz\\
&\quad-\sum_{m=0}^{+\infty} \frac{1}{2\pi i}
\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2}
e^{zt-\Phi^{1/2}(z)(x+2m)}\,dz\Bigr)\\
&=\frac{1}{\Phi(z)}\Bigl(\sum_{m=-1}^{-\infty} \frac{\Phi(z)}{2}e^{\Phi^{1/2}(z)(x+2m)}
-\sum_{m=0}^{+\infty} \frac{\Phi(z)}{2}e^{-\Phi^{1/2}(z)(x+2m)}\Bigr)\\
&=\frac{e^{(x-2)\Phi^{1/2}(z)}-e^{-x\Phi^{1/2}(z)}}
{2(1-e^{-2\Phi^{1/2}(z)})},
\end{aligned}
\end{equation}
where the last equality follows from the fact that
$\operatorname{Re} (\Phi^{1/2}(z))>0$, which is in turn ensured by Lemma~\ref{lem:re_phi}.
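In particular, letting $x\to 0+$ and $x\to 1-$ in \eqref{eqn:L_theta} gives
$$\L(\overline{\theta}_{(\mu)}(0+,t))
=\frac{e^{-2\Phi^{1/2}(z)}-1}{2(1-e^{-2\Phi^{1/2}(z)})}=-\frac{1}{2},
\qquad
\L(\overline{\theta}_{(\mu)}(1-,t))
=\frac{e^{-\Phi^{1/2}(z)}-e^{-\Phi^{1/2}(z)}}{2(1-e^{-2\Phi^{1/2}(z)})}=0.$$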
Therefore,
\begin{equation*}
\begin{aligned}
&\L\left(\int_0^t \overline{\theta}_{(\mu)}(0+,t-s)\psi(s) \,ds\right)
=\L(\overline{\theta}_{(\mu)}(0+,t))\L(\psi(t))
=-\frac{1}{2}\L(\psi(t));\\
&\L\left(\int_0^t \overline{\theta}_{(\mu)}(1-,t-s)\psi(s) \,ds\right)
=\L(\overline{\theta}_{(\mu)}(1-,t))\L(\psi(t))=0.
\end{aligned}
\end{equation*}
\par For $(x,t)\in (-1,0)\times(0,\infty)$, we have
\begin{equation*}
\begin{aligned}
\L(\overline{\theta}_{(\mu)}(x,t))
&=\L \Bigl[\kappa *\Bigl(\sum_{m=0}^{-\infty} \frac{1}{2\pi i}
\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2}
e^{zt+\Phi^{1/2}(z)(x+2m)}\,dz\\
&\quad-\sum_{m=1}^{+\infty} \frac{1}{2\pi i}
\int_{\gamma-i\infty}^{\gamma+i\infty}\frac{\Phi(z)}{2}
e^{zt-\Phi^{1/2}(z)(x+2m)}\,dz\Bigr)\Bigr]\\
&=\frac{1}{\Phi(z)}\Bigl(\sum_{m=0}^{-\infty} \frac{\Phi(z)}{2}e^{\Phi^{1/2}(z)(x+2m)}
-\sum_{m=1}^{+\infty} \frac{\Phi(z)}{2}e^{-\Phi^{1/2}(z)(x+2m)}\Bigr)\\
&=\frac{e^{x\Phi^{1/2}(z)}-e^{-(x+2)\Phi^{1/2}(z)}}
{2(1-e^{-2\Phi^{1/2}(z)})},
\end{aligned}
\end{equation*}
which gives
$\L(\overline{\theta}_{(\mu)}(0-,t))=\frac{1}{2}$ and
$\L(\overline{\theta}_{(\mu)}(-1+,t))=0,$ and completes the proof.
\end{proof}
\subsection{Representation of the solution to the initial-boundary value problem}
We will build the representation of the solution in this subsection from
four terms expressed via the theta functions:
the initial condition, the values of $u$ at the boundaries $x=0$ and $x=1$, and the
nonhomogeneous term $f$.
\begin{definition}\label{eqn:theta_kernels}
\begin{equation*}
\begin{aligned}
u_1(x,t)&=\int_0^1(\theta_{(\mu)}(x-y,t)-\theta_{(\mu)}(x+y,t))u_0(y)
\,dy;\\
u_2(x,t)&=-2\int_0^t \overline{\theta}_{(\mu)}(x,t-s)g_0(s)
\,ds;\\
u_3(x,t)&=2\int_0^t \overline{\theta}_{(\mu)}(x-1,t-s)
g_1(s)\,ds;\\
u_4(x,t)&=\int_0^1\int_0^t[\theta_{(\mu)}(x-y,t-s)-
\theta_{(\mu)}(x+y,t-s)]\cdot
[\frac{\partial}{\partial t}I^{(\mu)} f(y,s)] \,ds\,dy.
\end{aligned}
\end{equation*}
\end{definition}
\par The following four lemmas give some properties of $u_j,\ j=1,2,3,4$.
\begin{lemma}\label{lem:u1u2u3u4}
$\;{\displaystyle D^{(\mu)} u_j=\frac{\partial^2 u_j}{\partial x^2},\ j=1,2,3}$,
$\;{\displaystyle
D^{(\mu)} u_4=\frac{\partial^2 u_4}{\partial x^2}+f(x,t)}$,
where $(x,t)\in (0,1)\times(0,\infty)$.
\end{lemma}
\begin{proof}
For $u_1$, by Lemma~\ref{lem:dmu_xx}, we have
\begin{equation*}
\begin{aligned}
D^{(\mu)} u_1
&= \int_0^1(D^{(\mu)}\theta_{(\mu)}(x-y,t)-D^{(\mu)}\theta_{(\mu)}(x+y,t))u_0(y)
\,dy\\
&=\int_0^x(D^{(\mu)}\theta_{(\mu)}(x-y,t)-D^{(\mu)}\theta_{(\mu)}(x+y,t))u_0(y)
\,dy\\
&\quad+
\int_x^1(D^{(\mu)}\theta_{(\mu)}(x-y,t)-D^{(\mu)}\theta_{(\mu)}(x+y,t))u_0(y)
\,dy\\
&=\int_0^x\Big[\theta_{(\mu)}(x-y,t)-\theta_{(\mu)}(x+y,t)\Big]_{xx}u_0(y)
\,dy\\
&\quad+\int_x^1\Big[\theta_{(\mu)}(x-y,t)-\theta_{(\mu)}(x+y,t)\Big]_{xx}u_0(y)
\,dy\\
&=\int_0^1\Big[\theta_{(\mu)}(x-y,t)-\theta_{(\mu)}(x+y,t)\Big]_{xx}u_0(y)
\,dy= \frac{\partial^2 u_1}{\partial x^2}.
\end{aligned}
\end{equation*}
For $u_2$,
\begin{equation*}
\begin{aligned}
D^{(\mu)} u_2 &=\eta * \frac{\partial u_2}{\partial t}
=-2\eta * \frac{\partial}{\partial t}(\overline{\theta}_{(\mu)}*g_0)
=-2\eta *(\frac{\partial}{\partial t}\overline{\theta}_{(\mu)})*g_0
-2(\eta *g_0)\cdot \overline{\theta}_{(\mu)}(x,0)\\
&=-2D^{(\mu)} \overline{\theta}_{(\mu)} *g_0
=-2(\overline{\theta}_{(\mu)})_{xx}*g_0
=(-2\overline{\theta}_{(\mu)}*g_0)_{xx}
=(u_2)_{xx}.
\end{aligned}
\end{equation*}
In an analogous fashion to the above argument, we deduce that
$D^{(\mu)} u_3=(u_3)_{xx}$.
For $u_4$, using Lemmas~\ref{lem:delta}, \ref{lem:kappa} and
\ref{lem:uniform_theta} we obtain
\begin{equation*}
\begin{aligned}
D^{(\mu)} u_4&=\eta * \frac{\partial u_4}{\partial t}
=\eta * \frac{\partial}{\partial t}
\Bigl(\int_0^1 [\theta_{(\mu)}(x-y,\cdot)-\theta_{(\mu)}(x+y,\cdot)]
* [\frac{\partial}{\partial t}I^{(\mu)} f(y,\cdot)]\,dy\Bigr)\\
&=\eta *\Bigl(\int_0^1 \frac{\partial}{\partial t}[\theta_{(\mu)}(x-y,\cdot)-\theta_{(\mu)}(x+y,\cdot)]
* [\frac{\partial}{\partial t}I^{(\mu)} f(y,\cdot)]\,dy\Bigr)\\
&\quad+ \eta *\Bigl(\int_0^1 [\theta_{(\mu)}(x-y,0)-\theta_{(\mu)}(x+y,0)]
\cdot[\frac{\partial}{\partial t}I^{(\mu)} f(y,t)]\,dy\Bigr)\\
&=\int_0^1 \eta*\frac{\partial}{\partial t}[\theta_{(\mu)}(x-y,\cdot)-\theta_{(\mu)}(x+y,\cdot)]
* [\frac{\partial}{\partial t}I^{(\mu)} f(y,\cdot)]\,dy\\
&\quad +\eta *\Bigl(\int_0^1 [\delta(x-y)-\delta(x+y)]
\cdot[\frac{\partial}{\partial t}I^{(\mu)} f(y,t)]\,dy\Bigr)\\
&=\int_0^1D^{(\mu)} [\theta_{(\mu)}(x-y,\cdot)-\theta_{(\mu)}(x+y,\cdot)]
* [\frac{\partial}{\partial t}I^{(\mu)} f(y,\cdot)]\,dy
+\eta *\frac{\partial}{\partial t}I^{(\mu)} f(x,t)\\
&=\int_0^1[\theta_{(\mu)}(x-y,\cdot)-\theta_{(\mu)}(x+y,\cdot)]_{xx}
* [\frac{\partial}{\partial t}I^{(\mu)} f(y,\cdot)]\,dy
+D^{(\mu)}I^{(\mu)} f(x,t)\\
&=(u_4)_{xx}+f(x,t).
\end{aligned}
\end{equation*}
\end{proof}
\begin{lemma}\label{lem:initial_u}
$\;{\displaystyle \lim_{t\to 0} u_1(x,t)=u_0(x)}$,
$\;{\displaystyle \lim_{t\to 0} u_j(x,t)=0}$
for $j=2,3,4$, $x\in (0,1)$.
\end{lemma}
\begin{proof}
\par For each $x\in (0,1)$, Lemma~\ref{lem:uniform_theta} and
Eq.~\eqref{initial_G_mu} yield that
\begin{equation*}
\begin{aligned}
\lim_{t\to 0} u_1&= \int_0^1(\theta_{(\mu)}(x-y,0)-\theta_{(\mu)}(x+y,0))u_0(y)\,dy\\
&=\int_0^1 \sum_{m=-\infty}^\infty(\delta(x-y+2m)-\delta(x+y+2m))u_0(y)\,dy
=\int_0^1 \delta(x-y)u_0(y)\,dy
=u_0(x).\\
\end{aligned}
\end{equation*}
The other results follow directly from the definitions of $u_2$, $u_3$ and $u_4$.
\end{proof}
\begin{lemma}\label{lem:boundary_u14}
$\;u_j(0,t)=u_j(1,t)=0$, for $\,j=1,4$ and $t\in (0,\infty)$.
\end{lemma}
\begin{proof}
Since $\theta_{(\mu)}(x,t)$ is even in $x$, as stated
in Lemma~\ref{lem:uniform_theta}, we have
\begin{equation*}
u_1(0,t)=\int_0^1(\theta_{(\mu)}(0-y,t)-\theta_{(\mu)}(0+y,t))
u_0(y)\,dy=0.
\end{equation*}
We also have
\begin{equation*}
\begin{aligned}
u_1(1,t)&=\int_0^1(\theta_{(\mu)}(1-y,t)-\theta_{(\mu)}(1+y,t))
u_0(y)\,dy\\
&=\int_0^1(\theta_{(\mu)}(y-1,t)-\theta_{(\mu)}(1+y,t))
u_0(y)\,dy\\
&=\int_0^1\!\Big[\!\sum_{m=-\infty}^\infty G_{(\mu)}(y-1+2m,t)-
\!\!\sum_{m=-\infty}^{\infty}G_{(\mu)}(y+1+2m,t)\Big]u_0(y)\,dy\\
&=\int_0^1\!\Big[\!\sum_{q=-\infty}^\infty G_{(\mu)}(y+1+2q,t)-
\!\!\sum_{m=-\infty}^{\infty}G_{(\mu)}(y+1+2m,t)\Big]u_0(y)\,dy=0,
\end{aligned}
\end{equation*}
where $q=m-1$.
The same argument yields the conclusion for $u_4$.
\end{proof}
\begin{lemma}\label{lem:boundary_u23}
$u_2(0,t)=g_0(t)$, $u_2(1,t)=0$,
$u_3(0,t)=0$, $u_3(1,t)=g_1(t)$,
for $t\in (0,\infty)$.
\end{lemma}
\begin{proof}
The proof follows from Lemma \ref{lem:boundary_overline_theta} directly.
\end{proof}
\par Now we can state
\begin{theorem}[Representation theorem]\label{thm:representation}
There exists a unique solution $u(x,t)$ of Equations~\eqref{eqn:one_dim_model},
which has the representation
$\;{\displaystyle u(x,t)=\sum_{j=1}^4 u_j}$.
\end{theorem}
\begin{proof}
The existence follows from Lemmas~\ref{lem:u1u2u3u4}, \ref{lem:initial_u},
\ref{lem:boundary_u14} and \ref{lem:boundary_u23}; while the uniqueness
is ensured by Corollary \ref{cor:existence_uniqueness}.
\end{proof}
\section{Introduction}
The problem of a polaron---a mobile particle interacting with a host medium---has a long history dating back to Landau's seminal work on an electron inducing local distortion of a crystal lattice~\cite{landau1933electron}. Polarons are ubiquitous in many-body systems, especially in solid-state~\cite{alexandrov2010advances,devreese_polaron_09} and atomic physics \cite{bloch2012quantum,massignan2014polarons,schmidt2018universal}, and provide one of the key paradigms of modern quantum theory. Recent experimental progress in cold atoms and ion-based quantum simulators brings new motivation for studying polaronic phenomena ~\cite{nascimbene2009collective,schirotzek2009observation,palzer_impurity_transport_09,catani_impurity_dynamics_11,kohstall2012metastability,stojanovic2012quantum,fukuhara_spin_impurity_2013,hu2016bose,jorgensen2016observation,cetina2016ultrafast,meinert2017bloch,yan2019boiling,yan2020bose}, since these platforms offer a high-degree of isolation, tunability of the interaction strength and dispersion, and control of dimensionality~\cite{bloch2008many}.
These setups are particularly well suited for accurate studies of far-from-equilibrium dynamics~\cite{meinert2017bloch,catani_impurity_dynamics_11} since system parameters can be modified much faster than intrinsic timescales of the many-body Hamiltonians. On the theoretical side, recent progress in understanding polarons has come from using such powerful techniques as variational ansatzes~\cite{chevy2006universal,parish2013highly,peotta_mobile_impurity_TDMRG_2013,massel_mobile_impurity_TDMRG_2013,knap2014quantum,li2014variational,shchadilova2016quantum,kain2017hartree,shi2018variational,mistakidis2019repulsive}, renormalization-group calculations~\cite{grusdt2015renormalization,grusdt2018strong}, Monte-Carlo simulations~\cite{prokof2008fermi,ardila2015impurity,grusdt2017bose}, diagrammatic technique~\cite{rosch1995heavy,schmidt2012fermi,rath2013field,burovski_impurity_momentum_2014,gamayun_kinetic_impurity_TG_14,gamayun_quantum_boltzmann_14}, exact Bethe ansatz (BA) calculations for integrable models~\cite{mcguire1965interacting,mcguire1966interacting,yang_fermions_spinful_67,Gamayun_correlation_2016,gamayun_impact_18,gamayun2019zero}, and approaches based on non-linear Luttinger liquids~\cite{imambekov_review_12}. Analysis of equilibrium and dynamical properties of polarons has played an important role in developing new ideas and concepts, and in testing theoretical methods and approaches.
One of the surprising recent discoveries in the far-from-equilibrium dynamics of Fermi polarons has been the prediction of the effect called quantum flutter~\cite{mathy2012quantum,knap2014quantum}: When a repulsive mobile impurity with large momentum is injected into a one-dimensional Fermi gas, it undergoes long-lived oscillations of its velocity. This should be contrasted to the classical situation in which the impurity gradually slows down while transferring its momentum to the host atoms. It has also been found that the quantum flutter frequency does not depend on the initial conditions. The robustness of these oscillations naturally suggests that they represent an incarnation of a fundamental yet unknown property of the polaronic system at equilibrium.
In the present work, we investigate collective modes---elementary excitations describing small deviations from an equilibrium state---in the system of a mobile impurity interacting repulsively with a one-dimensional Fermi gas. Remarkably, we find that the density of states of these modes displays a sharp peak when the total momentum of the system equals the Fermi momentum.
This peak signals the emergence of a distinct collective excitation with a frequency $\omega_{k_F}$, representing a ``breathing mode'' of a polaronic cloud surrounding the impurity. The frequency $\omega_{k_F}$ matches the magnon-plasmon energy-difference at the Fermi momentum, as has been checked for an arbitrary range of model parameters. As we demonstrate below, modern cold-atom techniques, including Ramsey interferometry and radio-frequency (rf) spectroscopy, can be used to detect this mode. Specifically, we find that the impurity absorption spectra at the Fermi momentum exhibit a double-peak structure, with the second peak corresponding to the frequency $\omega_{k_F}$. Our study provides a natural interpretation of such a complex far-from-equilibrium phenomenon as recently discovered quantum flutter in terms of basic equilibrium properties. In particular, we argue that flutter oscillations, as well as their robustness, represent nothing but the signatures of the collective mode $\omega_{k_F}$.
\section{Theoretical framework}
\label{sec:formalism}
A Fermi-polaron model represents a non-trivial many-body problem with the Hamiltonian consisting of three parts: $\hat{H} = \hat{H}_{f} + \hat{H}_{\rm imp} + \hat{H}_{\rm int}$, where $\hat{H}_{f} = \sum_{k} \frac{k^2}{2m} \hat{c}_{k}^\dagger\hat{c}_{k}$ is the fermionic kinetic energy, $\hat{H}_{\rm imp} = \sum_{k} \frac{k^2}{2M} \hat{d}_{ k}^\dagger\hat{d}_{k}$ is the kinetic energy of the impurity, and $\hat{H}_{\rm int}=\frac{g}{L} \sum_{k, k', q} \hat{d}^{\dagger}_{k + q}\hat{d}_{k}\hat{c}^{\dagger}_{ k' - q} \hat{c}_{ k'}$ describes contact interaction between the two species of particles. The Planck constant is set to $\hbar = 1$ throughout the paper. Operator $\hat{d}^\dagger_k$ ($\hat{d}_k$) creates (annihilates) the impurity with momentum $k$; operators $\hat{c}^\dagger_k$ and $\hat{c}_k$ represent the host gas. Throughout this work, we assume periodic boundary conditions with the system size $L$, so that $k = \frac{2\pi}{L}n$ with $n$ being integer. The total number of host-gas particles $N$ is fixed via the chemical potential $\mu = \frac{k_F^2}{2m}$, and $k_F =\frac{\pi N}{L}$ is the Fermi momentum. In our calculations, we set $k_F=\frac{\pi}{2}$, which implicitly defines the unit of length. The case of a single impurity restricts the Hilbert space to states with $\sum_{k} \hat{d}_{k}^\dagger\hat{d}_{k} = 1$. The dimensionless interaction strength between the impurity and medium is $\gamma = \frac{ \pi m g}{k_F} $. We use the following convention for the Fourier transform: $\hat{c}_x = \frac{1}{\sqrt{L}} \sum_k e^{ikx}\hat{c}_k$. We choose a sufficiently large UV momentum cutoff $\Lambda\gg k_F$ in our numerical simulations.
A challenge one encounters when solving a many-body problem is that the Hilbert space grows exponentially with the system size, limiting direct numerical simulations to relatively small systems. One approach to overcoming this difficulty is to employ a variational method, where a limited number of parameters is used to parameterize a class of many-body states. In this approach, the complexity of computations typically grows polynomially with the system size, allowing for efficient numerical analysis. However, one needs to ensure that the variational wave function contains the right class of quantum states that can reliably capture the many-body correlations. More specifically, a variational family of states is required to satisfy the following criteria: (i) it contains a manageable number of variational parameters, (ii) it accurately predicts ground-state properties, (iii) it captures real-time dynamics including the spectrum of collective modes, and (iv) it can be used to compute observables relevant for experiments.
We employ recent developments of approaches based on non-Gaussian states (NGS) to realize this program~\cite{shi2018variational,hackl2020geometry}. In this work, we deal with zero-temperature situations, and finite-temperature ensembles can be studied using the formalism developed in Ref.~\cite{shi2019variational}. For the Fermi-polaron problem, one first performs a unitary transformation to the impurity reference frame~\cite{lee1953motion,kain2017hartree}, $\hat{{\cal S}} = \exp(- i \hat{x}_{\rm imp} \hat{ P}_f )$, where $\hat{P}_f = \sum_{ k} k \,\hat{c}^\dagger_{ k} \hat{c}_{k}$ is the total fermionic momentum and $\hat{x}_{\rm imp}$ is the impurity position operator, and then invokes the Hartree-Fock approximation. The unitary transformation $\hat{{\cal S}}$ plays a two-fold role: (i) it provides sufficient entanglement between the impurity and the medium so that the Hartree-Fock approximation becomes accurate, and (ii) it takes advantage of the total momentum conservation and decouples the impurity from the rest of the system. In the impurity frame, the transformed Hamiltonian is parametrized by the total momentum $Q$:
\begin{equation}
\hat{H}_{Q} = \sum_{ k, k'}\hat{c}_{k}^\dagger \left[\frac{k^2}{2m}\delta_{k k'} + \frac{g}{L}\right] \hat{c}_{k'} + \frac{(Q - \hat{P}_f)^2}{2M}. \label{eqn::LLP_H}
\end{equation}
Note that only the degrees of freedom of the host gas enter Eq.~\eqref{eqn::LLP_H}. The first term is the fermionic kinetic energy, the second term describes scattering off the impurity, and the third term corresponds to its recoil energy.
\subsection{Equations of motion}
The Hartree-Fock approximation can be conveniently cast into a Gaussian wave function~\cite{shi2018variational}, which we write as
\begin{equation}
\Ket{\psi (t)} = {\rm e}^{-i\theta} \exp( i \hat{c}^\dagger \xi \hat{c}) \Ket{\rm FS}, \label{eqn:wf}
\end{equation}
where $\xi = \xi^\dagger$ and $\Ket{\rm FS} \equiv \prod_{|k|\leq k_F} \hat{c}^\dagger_k\Ket{0}$ is the wave function of the filled Fermi sea ($\Ket{0}$ corresponds to the vacuum state). The information about the state~\eqref{eqn:wf} is then encoded in $\theta$ and $U \equiv {\rm e}^{i\xi}$. Let us define the covariance matrix as:
$\Gamma_{k,k'} \equiv \langle \hat{c}^\dagger_k \hat{c}_{k'} \rangle_{\rm GS} = U^* \Gamma_0 U^T,$
where $\Gamma_0$ is the covariance matrix of the filled Fermi sea. We emphasize that even though we choose the state to be Gaussian in the impurity reference frame, it is non-Gaussian in the laboratory frame due to the unitary operator ${\cal \hat{S}}$.
To find the (momentum-dependent) ground state wave function, we employ the imaginary-time dynamics~\cite{shi2018variational}:
\begin{equation}
d_{\tau} \Gamma = 2 \Gamma h \Gamma - \left\{ h, \Gamma \right\},\label{eqn:im_Gamma}
\end{equation}
where
\begin{align}
E[\Gamma] & \equiv \langle \hat{H}_{ Q}\rangle = \sum_{k,k'} \Gamma_{kk'}\left[ \Big(\epsilon_k +\frac{k^2}{2M} -\frac{Q\cdot k}{M}\Big)\delta_{kk'} + \frac{g}{L} \right] \notag\\
& \qquad\qquad - \sum_{k,k'}\frac{k \cdot k'}{2M} |\Gamma_{k k'}|^2 +\frac{P_f^2}{2M} + \frac{Q^2}{2M}, \\
h_{k k'} & \equiv \frac{\delta E[\Gamma]}{\delta \Gamma_{k'k}} = \Big(\epsilon_k + \frac{k^2}{2M} + \frac{k \cdot ( P_f - Q)}{M} \Big) \delta_{k k'} \notag\\
& \qquad \qquad \qquad \qquad\qquad
+\frac{g}{L}-\frac{k\cdot k'}{M}\Gamma_{ k k'}.\label{eqn::h}
\end{align}
Here $\epsilon_k = \frac{k^2}{2m} - \mu$. Note that initially pure states, with $\Gamma^2 = \Gamma$, will remain pure under the imaginary-time evolution: $d_{\tau} (\Gamma^2 - \Gamma) = 0$. For these states, the total number of fermions $N = \sum_k {\Gamma}_{kk}$ is conserved.
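Indeed, if $\Gamma^2=\Gamma$, then Eq.~\eqref{eqn:im_Gamma} gives
\begin{equation*}
d_\tau(\Gamma^2)=(d_\tau\Gamma)\Gamma+\Gamma(d_\tau\Gamma)
=(\Gamma h\Gamma-h\Gamma)+(\Gamma h\Gamma-\Gamma h)
=2\Gamma h\Gamma-\{h,\Gamma\}=d_\tau\Gamma,
\end{equation*}
so that $d_\tau(\Gamma^2-\Gamma)=0$, as stated.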
The equations of motion for the real-time dynamics are obtained from the Dirac's variational principle:
\begin{equation}
\partial_t \theta = E[\Gamma] - \mathrm{tr}(h\, \Gamma),\quad i\partial_t U = h^* U. \label{eqn::dt_U}
\end{equation}
From Eqs.~\eqref{eqn::dt_U} one can derive an equation of motion solely on the covariance matrix~\cite{shi2018variational}:
\begin{equation}
d_t \Gamma = i \left[ h,\Gamma \right]. \label{eqn::real_dyn_Gamma}
\end{equation}
During the real-time dynamics, the state remains pure, and the total number of fermions is also conserved.
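As a simple illustration of how Eq.~\eqref{eqn::real_dyn_Gamma} can be
integrated in practice, the following Python sketch (a minimal example, not
our production code; \texttt{h\_of\_Gamma} is assumed to implement
Eq.~\eqref{eqn::h}) performs one fourth-order Runge--Kutta step:
\begin{verbatim}
import numpy as np

def rk4_step(Gamma, dt, h_of_Gamma):
    # one RK4 step of d_t Gamma = i [h(Gamma), Gamma]
    def F(G):
        h = h_of_Gamma(G)
        return 1j * (h @ G - G @ h)
    k1 = F(Gamma)
    k2 = F(Gamma + 0.5 * dt * k1)
    k3 = F(Gamma + 0.5 * dt * k2)
    k4 = F(Gamma + dt * k3)
    return Gamma + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
\end{verbatim}
Monitoring hermiticity and purity of $\Gamma$, which are conserved by the
exact flow, provides a convenient check of the time step.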
\subsection{Analysis of collective modes}
\label{subsec:CM}
Collective excitations represent low-energy small-amplitude fluctuations on top of an equilibrium state, in our case on top of a ground state, with the covariance matrix $\Gamma_Q$, previously computed via the imaginary-time dynamics. To obtain their spectrum within the NGS approach, we write $\Gamma = \Gamma_Q + \delta \Gamma$, and, assuming that $\delta \Gamma$ is small, we linearize the real-time equation of motion~\eqref{eqn::real_dyn_Gamma}:
\begin{align}
d_t \delta \Gamma &= i\left[ h_Q,\delta \Gamma \right] + \frac{i}{M}\left[\,\mathrm{tr}({P}\delta \Gamma) {P} - {P} \delta \Gamma {P},\Gamma_Q \right], \label{eqn::dt_delta_Gamma}
\end{align}
where $h_Q = h[\Gamma_Q]$ and ${P}_{k,k'} \equiv k\delta_{k,k'}$. Importantly, these fluctuations $\delta \Gamma$ are constrained to satisfy: i) hermiticity $\delta \Gamma = \delta \Gamma^\dagger$, ii) particle number conservation $\,\mathrm{tr} \, \delta \Gamma = 0$, and iii) purity $\{ \Gamma_Q, \delta \Gamma \} = \delta \Gamma$.
To see the implications of these conditions, we switch to the basis where the matrix $\Gamma_Q$ is diagonal:
\begin{align}
\Gamma_Q = \begin{pmatrix} 0 & 0\\
0 & I_{N\times N}
\end{pmatrix} \Longrightarrow \delta \Gamma = \begin{pmatrix}
0 & K\\
K^\dagger & 0
\end{pmatrix}, \label{eqn:K_def}
\end{align}
where $K$ is an arbitrary $(N_{\rm sp}-N)\times N$ matrix that fully parametrizes all possible physical fluctuations $\delta \Gamma$. $N_{\rm sp}$ is the total number of single-particle modes in the fermionic system, and it is determined by the system size $L$ and by the UV-cutoff $\Lambda$. We, therefore, conclude that the above constraints reduce the total number of degrees of freedom in $\delta \Gamma$ from $2N_{\rm sp}^2$ to only $2N(N_{\rm sp}-N)$. Plugging in Eq.~\eqref{eqn:K_def} into Eq.~\eqref{eqn::dt_delta_Gamma}, one obtains:
\begin{align}
d_t K & = i (h_{11}K - K h_{22}) + \frac{i}{M} P_{12} \,\mathrm{tr}( P_{12} K^\dagger + P_{21} K ) \notag \\
&\qquad - \frac{i}{M} (P_{11}KP_{22} + P_{12}K^\dagger P_{12}), \label{eqn::dt_K}
\end{align}
where we used that in the chosen basis $h_Q = \begin{pmatrix}
h_{11} & 0\\
0 & h_{22}
\end{pmatrix}$, since $h_Q$ and $\Gamma_Q$ commute (in principle, one can always choose a basis where both $h_Q$ and $\Gamma_Q$ are diagonal). The eigenenergies $\omega_i^Q$ of Eq.~\eqref{eqn::dt_K} constitute the spectrum of collective excitations (here $Q$ is the total momentum of the system, and $i$ labels excitations). From $\omega_i^Q$ and corresponding eigenvectors, one can compute standard linear response functions such as the density response function~\cite{kamenev2011field}. For a bosonic system, similar fluctuation analysis has been proven to be equivalent to the generalized random phase approximation~\cite{guaita2019gaussian} and successfully applied to reproduce the Goldstone zero-mode naturally without imposing the Hugenholtz-Pines condition.
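Concretely, since Eq.~\eqref{eqn::dt_K} is linear in $K$ but also involves
$K^\dagger$, it defines a real-linear map on $(\operatorname{Re}K,
\operatorname{Im}K)$, and the mode frequencies follow from its purely
imaginary eigenvalue pairs $\pm i\omega$. A minimal Python sketch
(illustrative only; the blocks $h_{11}$, $h_{22}$ and $P_{ab}$ are assumed to
be precomputed in the basis of Eq.~\eqref{eqn:K_def}) reads:
\begin{verbatim}
import numpy as np

def mode_frequencies(h11, h22, P11, P12, P21, P22, M):
    n, m = P12.shape  # n = N_sp - N empty modes, m = N occupied modes

    def rhs(K):
        # right-hand side of Eq. (dt_K)
        out = 1j * (h11 @ K - K @ h22)
        out += (1j / M) * P12 * np.trace(P12 @ K.conj().T + P21 @ K)
        out -= (1j / M) * (P11 @ K @ P22 + P12 @ K.conj().T @ P12)
        return out

    # assemble the real-linear map acting on (Re K, Im K)
    d = n * m
    A = np.zeros((2 * d, 2 * d))
    for k in range(d):
        B = np.zeros((n, m)); B[k // m, k % m] = 1.0
        dR, dI = rhs(B), rhs(1j * B)
        A[:d, k], A[d:, k] = dR.real.ravel(), dR.imag.ravel()
        A[:d, d + k], A[d:, d + k] = dI.real.ravel(), dI.imag.ravel()
    w = np.linalg.eigvals(A)  # eigenvalues come in pairs +/- i*omega
    return np.sort(np.abs(w.imag))
\end{verbatim}
A histogram of the resulting frequencies then approximates the density of
states $\nu_\omega$ discussed below.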
Using the outlined theoretical framework, we obtained our main results, which we turn to discuss in the next section.
\begin{figure*}[t!]
\centering
\includegraphics[width=1\linewidth]{Figure1.pdf}
\caption{(a) Polaron energy-momentum relation---the magnon branch---for the case of equal masses $M = m$ and the plasmon branch of the host Fermi sea. Note the plasmon-magnon excitation energy $\omega_{\rm pm}$ at $k_F$. Our variational NGS approach remarkably reproduces the exact BA result (solid line), adopted from Refs.~\cite{mcguire1965interacting,gamayun2019zero} (here we used $\Lambda = 10 k_F$). (b) Density of states (DOS) of collective excitations as a function of frequency $\omega$ and total momentum $Q$. Here $t_F = 1/E_F$ is the Fermi time. Note that the spectral signal becomes particularly pronounced at $Q = k_F$. (c) and (d) Cuts of the DOS at $Q=0$ and $Q=k_F$, respectively. Dashed line in (d) (shown also in (b)) indicates the emergence of a sharp mode. (e) The collective mode $\omega_{k_F}$, as a function of the mass ratio $M/m$, matches the plasmon-magnon mode $\omega_{\rm pm}$ from (a). Parameters used: $\gamma = 5$, $N = 51$, and $\Lambda = 5k_F$.}
\label{fig::Main}
\end{figure*}
\section{Main Results and Discussion}
\label{sec:results}
\subsection{Emergent collective mode and its origin}
An important feature of the polaron system is momentum conservation, which after the Lee-Low-Pines transformation and via the imaginary-time dynamics, allows us to compute not only the ground-state energy but rather the entire polaron energy-momentum dispersion. An example of such a calculation for the integrable case of equal masses $M = m$ is shown in Fig.~\ref{fig::Main} (a), where we compare momentum-dependent ground-state energies to the exact Bethe ansatz (BA) results~\cite{mcguire1965interacting,gamayun2019zero}. The agreement is excellent, and it becomes even better for a larger total number of particles $N$ and/or larger momentum cutoff $\Lambda$. Note that in the thermodynamic limit $N\to\infty$, this energy-momentum relation is $2k_F$-periodic, since at $Q = 2k_F$, one can always excite a zero-energy particle-hole pair across the Fermi surface. At finite $N$, this is no longer true, and to excite such a pair costs energy proportional to $1/N$, explaining the discrepancy in Fig.~\ref{fig::Main} (a) at large momenta $Q\simeq 2k_F$. In Appendix~\ref{Appendix_BA_static}, we demonstrate that our approach also reproduces exact many-body correlation functions. Below we also investigate a generic situation of not equal masses, where no known exact solutions are available.
Using the approach outlined in Sec.~\ref{subsec:CM}, we turn to investigate the spectrum of collective modes on top of momentum-dependent ground states $\Gamma_Q$. Figure~\ref{fig::Main}~(b) shows the density of states (DOS) of these excitations, $\nu_\omega = \sum_i \delta(\omega - \omega_i^Q)$.
Most spectacularly, we discover a sharp peak at $Q = k_F$ [Fig.~\ref{fig::Main}~(c)], which signals the onset of a new distinct collective mode. Our primary goal below is to elucidate its physical origin and investigate the feasibility of experimental verification with ultracold atoms.
\begin{figure}[t!]
\centering
\includegraphics[width=1\linewidth]{Figure2.pdf}
\caption{Linear-response dynamics in the impurity frame after a soft quench of the coupling strength $\gamma$: from $\gamma = 6$ to $\gamma = 5$. The wave function at $t=0$ corresponds to the ground state at $Q = k_F$ and $\gamma = 6$. (a) Evolution of $\delta G_2(x,t) = G_2(x,t)-G_2(x,0)$ showing oscillatory behavior of the fermionic density surrounding the impurity. (b) Dynamics of $\delta G_2(x,t)$ at $x = x_0$ (dashed line in (a)). Notably, the Fourier transform of this signal (inset) matches the frequency $\omega_{k_F}$ extracted from Fig.~\ref{fig::Main}~(b).}
\label{fig::G2_xt}
\end{figure}
We turn to discuss the physical mechanism behind the emergence of this mode $\omega_{k_F}$. We first identify which states in the many-body spectrum determine the frequency $\omega_{k_F}$. Let us define the plasmon as the lowest-energy excitation of the Fermi gas in the absence of the impurity. Its dispersion, $E_p(Q)$, has a familiar inverse-parabolic shape, shown in Fig.~\ref{fig::Main}~(a) with a dashed line. The magnon is the lowest-energy excitation of the entire interacting system. The magnon dispersion $E_m(Q)$---the polaron energy-momentum relation---is illustrated in Fig.~\ref{fig::Main}~(a) with a solid line. In the presence of the impurity, the plasmon still exists, but no longer represents the lowest-energy excitation. Note that both magnon and plasmon group velocities evaluated at $Q=k_F$---at the same wave vector where the mode $\omega_{k_F}$ emerges---are zero, suggesting that these two states can form a correlated long-lived excitation, with frequency $\omega_\mathrm{pm}(k_F) = E_p(k_F)-E_m(k_F)$. Interestingly, our variational calculations show that
\begin{equation}
\omega_{k_F} = \omega_\mathrm{pm}(k_F)
\end{equation}
for any impurity-gas mass ratios $M/m$ and coupling strengths $\gamma$, as illustrated in Fig.~\ref{fig::Main}~(e).
Now we demonstrate that the collective excitation $\omega_{k_F}$ represents oscillations of the polaronic cloud surrounding the impurity. To that end, we take the initial many-body wave function $|\psi_\mathrm{lab}(0)\rangle = |\mathrm{GS}_Q\rangle$ to be the ground state of the interacting Fermi polaron model with the total momentum $Q = k_F$, and then suddenly change the interaction strength. In response to such a quench, we find that the fermionic density in the vicinity of the impurity
\begin{align*}
G_2(x,t)= \frac{L}{N} \int\limits_0^L dy \Bra{\psi_{\rm lab}(t)} \hat{d}^\dagger_y\hat{d}_y \hat{c}^\dagger_{x+y}\hat{c}_{x + y}\Ket{\psi_{\rm lab}(t)} \label{eqn:G2_xt}
\end{align*}
demonstrates damped oscillatory behavior, illustrated in Fig.~\ref{fig::G2_xt}, with the frequency $\omega_{k_F}$. These real-time dynamical correlations are, in principle, accessible with ultracold-atom setups. One can see, however, that the amplitude of the signal shown in Fig.~\ref{fig::G2_xt} is rather small because the system is close to the linear-response regime. We find that the amplitude of oscillations remains small even for stronger quenches. To overcome this issue, below we suggest a complementary experimental verification of our findings by computing observables accessible with rf spectroscopy and Ramsey-type interferometry.
\begin{figure*}[htb!]
\centering
\includegraphics[scale=0.44]{Combined.pdf}
\caption{(a) and (b) Possible cold-atom setups. The initial wave function corresponds to $\Ket{\rm FS}\otimes\Ket{0,\downarrow}$, where the hyperfine state $\Ket{\downarrow}$ does not interact with the fermionic medium. (a) The impurity is first accelerated such that it acquires momentum $Q$; subsequent rf-pulse drives it into the hyperfine state $\Ket{\uparrow}$ strongly interacting with the host gas. (b) Alternatively, the two states $\Ket{0,\downarrow}$ and $\Ket{Q,\uparrow}$ can be directly coupled by a two-photon Raman process. (c) to (f) The dynamical overlap function ${\cal S}(t)$ and the impurity absorption spectra ${\cal A}_\omega$ for $Q = 0$ (panels (c) and (e)) and for $Q = k_F$ (panels (d) and (f)). We shifted frequencies in (e) and (f) such that the zero value in both panels represents the corresponding ground-state oscillations. For $Q = k_F$, the Ramsey contrast $|{\cal S}(t)|$ demonstrates switching from initial rapid decay for times $t\lesssim 10 t_F$ to the lasting regime of slow dynamics. This behavior reflects in ${\cal A}_\omega$ as it acquires a double-peak structure: The frequency of the first peak is close to that of the ground state oscillations, whereas the second peak corresponds to the collective mode $\omega_{k_F}=\omega_{\rm pm}$. Parameters are the same as in Fig.~\ref{fig::Main}, except $N = 251$.}
\label{fig::Combined}
\end{figure*}
\subsection{Cold-atom setups}
Possible experimental setups for investigating the physics of a mobile impurity coupled to a Fermi bath are shown in Fig.~\ref{fig::Combined}~(a) and~(b). We assume that the impurity has two hyperfine states: $\Ket{\downarrow}$ is decoupled from the Fermi sea, whereas $\Ket{\uparrow}$ strongly interacts with the host gas. We start from an initial many-body wave function prepared in the state $\Ket{\rm FS}\otimes\Ket{0,\downarrow}$. $\Ket{Q,\downarrow}$ labels the impurity state with the total momentum $Q$. To reach a given total momentum sector $Q$, we suggest the quenching protocol illustrated in Fig.~\ref{fig::Combined}~(a). The impurity is first accelerated---for example, by application of an external force as in Ref.~\cite{meinert2017bloch}---such that its momentum becomes $Q$. Then an rf-pulse is used to couple the two hyperfine states. Similar to the case of a static impurity discussed in Ref.~\cite{knap2012time}, Ramsey interferometry can probe the dynamical overlap function, which in our case is written as ${\cal S}(t) = \Bra{\rm FS} {\rm e}^{it\hat{H}_Q^{(0)}}{\rm e}^{-it\hat{H}_Q} \Ket{\rm FS}$, where $\hat{H}_Q^{(0)}$ is given by Eq.~\eqref{eqn::LLP_H} with $g = 0$. The impurity absorption spectrum is obtained as ${\cal A}_\omega = \frac{1}{\pi} \text{Re} \int_0^{\infty}dt\,{\rm e}^{i\omega t} {\cal S}(t)$. Figure~\ref{fig::Combined}~(b) shows an alternative experimental setup~\cite{ness2020observation}, where one employs two-photon Bragg spectroscopy~\cite{stenger1999bragg,stamper1999excitation,ozeri2005colloquium}. In this latter situation, the dynamical overlap function is modified by a non-essential phase factor.
The overlap function ${\cal S}(t)$ is computed analytically:
\begin{equation}
{\cal S}(t) = {\rm e}^{-i(\theta(t)-E[\Gamma_0]t)} \det (1 - (1-U(t))\Gamma_0^T).
\end{equation}
This expression is a generalization of the approach used for the case of static impurity~\cite{knap2012time}. A numerical simulation of Eqs.~\eqref{eqn::dt_U} indicates that ${\cal S}(t)$ exhibits long-time revivals (roughly at $t \simeq L/k_F$) associated with the finite system size $L$. Below we, therefore, choose a sufficiently large system such that these revivals do not appear up to the largest simulation times.
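Given $\theta(t)$ and $U(t)$ from Eqs.~\eqref{eqn::dt_U}, each evaluation of
${\cal S}(t)$ reduces to a single determinant; schematically (Python sketch,
with \texttt{Gamma0} the covariance matrix of the Fermi sea and \texttt{E0}
$=E[\Gamma_0]$):
\begin{verbatim}
import numpy as np

def overlap(theta, U, Gamma0, E0, t):
    # S(t) = exp(-i(theta - E0 t)) det(1 - (1 - U) Gamma0^T)
    n = U.shape[0]
    return (np.exp(-1j * (theta - E0 * t))
            * np.linalg.det(np.eye(n) - (np.eye(n) - U) @ Gamma0.T))
\end{verbatim}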
For the case $Q=0$, shown in Fig.~\ref{fig::Combined}~(c) and~(e), the Ramsey contrast $\lvert{\cal S}(t)\rvert$ demonstrates a slow monotonic decay; at long times it saturates around $R_0 = \lvert \braket{\rm GS_0| \rm FS}\rvert^2$, which equals $R_0 \simeq 0.6$ for the parameters used in Fig.~\ref{fig::Combined}. Note, however, that in the thermodynamic limit, $L\to \infty$, this quasiparticle residue $R_0$ should vanish -- an analog of the Anderson orthogonality catastrophe for the case of a static impurity. This latter statement we explicitly verify numerically in Appendix~\ref{Appendix_Residue}, where we show that $R_0$ decays with the system size $L$ as a power-law. We note in passing that this decay is much slower than in the case $M=\infty$. In the frequency domain, ${\cal A}_\omega$ displays a sharp peak at the polaron binding energy $E_0$, as it should be.
Figure~\ref{fig::Combined}~(d) and~(f) show the results for $Q=k_F$. We find that ${\cal S}(t)$ demonstrates a qualitative change in its dynamics roughly at $t\simeq 10t_F$: The initial quick decay of $|{\cal S}(t)|$, associated with the fact that the initial wave function represents a far-from-equilibrium state for $Q=k_F$, turns into a much slower power-law decay at longer times. This behavior resembles the non-Markovian spontaneous emission of a two-level atom coupled to a non-flat photon bath~\cite{cohen1998atom}, in which case the initial fast decay is associated with the large DOS of the collective mode, cf. Fig.~\ref{fig::Main}~(b). For the impurity absorption spectrum ${\cal A}_\omega$, we find that it acquires a double-peak structure.
Importantly, the second broad peak corresponds to the collective mode $\omega_{k_F}$ -- the dashed line in Fig.~\ref{fig::Combined} (f) denotes the discussed plasmon-magnon mode $\omega_{\rm pm}(k_F)$. The position of the first peak is close to the frequency of the ground state at $Q=k_F$. There are a few reasons for the small mismatch between them. First, we find that the overlap $R_{k_F}= \lvert \braket{{\rm GS}_{k_F}| \rm FS}\rvert^2$ is suppressed: it equals $4\times 10^{-3}$ for the parameters used in Fig.~\ref{fig::Combined}. Therefore, the first maximum in ${\cal A}_\omega$ is shifted towards higher frequencies, where the overlap of the initial wave function and an excited state is more pronounced. Second, such a small value of $R_{k_F}$ further indicates that the intrinsic dynamics is far-from-equilibrium. However, in an out-of-equilibrium setting, our method is not expected to be exact. Indeed, an explicit comparison in Appendix~\ref{Appendix_dyn} of dynamics in the NGS approach to that of the BA indicates that our method displays a similar small discrepancy with the exact result. Qualitatively, the method provides correct predictions even for non-equilibrium problems. Finally, the small mismatch could potentially be reduced by increasing the frequency resolution, which requires a simulation of an even larger system and for longer times.
\subsection{Quantum flutter and its robustness}
Equipped with our main results outlined above, we turn to discuss the phenomenon of quantum flutter. While the quantum flutter embodies a strongly out-of-equilibrium character, its phenomenology, surprisingly, shares a lot in common with the equilibrium collective mode $\omega_{k_F}$ discovered here. Indeed, essentially arbitrary quenching of the polaronic system~\cite{mathy2012quantum,knap2014quantum} results in the development at long times of long-lived oscillations of the polaronic cloud surrounding the impurity. For the integrable situations, it was shown that the frequency of those oscillations matches the plasmon-magnon energy difference at $k_F$. The robustness of quantum flutter oscillations to quenching conditions, together with the results of our work, suggests the following interpretation. The initial (strong) perturbation of the polaronic system results in exciting all kinds of possible excitations. Only the ones with small relative group velocities contribute to the impurity dynamics at long times, while the role of the rest of them is to carry energy and momentum away from the impurity. Although the system is out of equilibrium, near the impurity and at long times the dynamics should resemble the equilibrium situation, such as in Fig.~\ref{fig::G2_xt}. This is i) because both the plasmon and magnon group velocities are zero at $k_F$ and ii) due to the enhanced density of states of the associated collective mode $\omega_{k_F}$, cf. Fig.~\ref{fig::Main}~(b). This physical picture explains long-time flutter oscillations and their robustness.
We turn to discuss new insights that our work brings to the physics of quantum flutter. The plasmon and magnon at $k_F$ form a long-lived collective mode $\omega_{k_F}$, although the plasmon lives in the continuum of excitations of the interacting system. The sharp collective mode is responsible for a robust, long-lived oscillatory dynamics, independent of quenching conditions. More generally, we expect the development of such a collective mode provided there exist two branches of excitations, which are required to have zero group velocities simultaneously (as such, the corresponding density of states are enhanced due to Van Hove singularities). In contrast to existing theoretical approaches, our variational states enable an accurate computation of the magnon dispersion away from the integrable points. We remark that the plasmon-magnon energy difference at $k_F$ shown in Fig.~\ref{fig::Main}~(e) is consistent with the flutter frequency for $M \geq m$ obtained from DMRG simulations~\cite{knap2014quantum}. In the regime of light impurity, $M \lesssim m$, the two approaches start to deviate, which warrants further investigation. A probable reason for this is that it becomes notoriously difficult to extract the flutter frequency reliably in this regime (see Appendix~\ref{appendix:CM}).
\section{Conclusion and Outlook}
\label{sec:discussion}
The Fermi polaron Hamiltonian represents an instance of an interacting many-body system without a small parameter. As such, to establish the validity of any conclusions, it is crucial to verify that our chosen variational states are sufficient to capture both equilibrium and out-of-equilibrium properties. Importantly, the case $M=m$ is Bethe ansatz solvable, providing the necessary testing ground for our approach. Specifically, in Appendix~\ref{Appendix_BA_static}, we demonstrate that our non-Gaussian states reproduce both the ground-state energies and the many-body correlation functions, for repulsive as well as attractive interactions. Benchmarking the out-of-equilibrium situation is more challenging. In Appendix~\ref{Appendix_dyn}, we show that the variational approach developed here captures the essential features of the quantum flutter dynamics, although not perfectly. Our work, therefore, establishes the reliability of non-Gaussian states in one spatial dimension, where fluctuations are expected to be the strongest. Owing to their efficient numerical implementation, these variational states are promising for solving interacting many-body problems in higher dimensions, where no numerical methods as powerful as DMRG exist and Monte Carlo-based approaches are the only alternative. They can be used for computing ground-state properties, including many-body correlations and collective modes, as well as out-of-equilibrium real-time dynamics.
Having established the reliability of the non-Gaussian approach, we investigated the collective excitations of the polaronic system at equilibrium. Our main result is that their spectrum contains a sharp peak when the total momentum of the system equals $k_F$, signaling the onset of a new, distinct collective mode. By analyzing the polaron dispersion over a broad range of mass ratios $M/m$ and interaction strengths $\gamma$, we concluded that this mode represents nothing but the plasmon-magnon excitation at $k_F$. Our work suggests a close connection between the far-from-equilibrium Fermi-polaron dynamics and equilibrium collective excitations. This relation explains the robustness of the phenomenon of quantum flutter to changes in model parameters and initial conditions. The theoretical predictions made in this paper can be tested with currently available experimental systems of ultracold atoms. Specifically, one can search for the following features that should appear when the momentum of the impurity relative to the host atoms reaches $k_F$: (i) long-lived oscillations of the polaronic cloud, (ii) an abrupt change in the time evolution of the Ramsey contrast, and (iii) the development of a double-peak structure in the impurity absorption spectra at the Fermi momentum. The frequency of oscillations can be tuned experimentally by varying either the interaction strength $\gamma$ or the mass ratio $M/m$. From a broader perspective, our work is inspired by recent developments in designing and studying controlled quantum systems on both solid-state and cold-atom platforms. In particular, as modern semiconductor technologies approach the quantum domain, with current feature sizes of a few nanometers, understanding the far-from-equilibrium dynamics of interacting fermionic systems will be crucial for the design and operation of future electronic devices.
\section{Acknowledgements}
\noindent We thank I.~Bloch, M.~Zwierlein, Z.~Yan, I.~Cirac, D.~Wild, F.~Grusdt, K.~Seetharam, O.~Gamayun, and D.~Sels for fruitful discussions. P.~E.~D. and E.~D. are supported by the Harvard-MIT Center of Ultracold Atoms, ARO grant number W911NF-20-1-0163, and NSF EAGER-QAC-QSA. T.~S. is supported by the NSFC (Grant No.~11974363). The work of M.~B.~Z. is supported by Grant No.~ANR-16-CE91-0009-01 and CNRS grant PICS06738.
\section{Numerical results}
\label{appendixResults}
The numerical results for the measurements of the \ensuremath{R_{\Lbar}}\xspace and \ensuremath{R_{\antihyp}}\xspace ratios, presented in Secs.~\ref{sec:exclusive} and \ref{sec:inclusive}, are reported in Table~\ref{tab:Results_1} in intervals of the antiproton momentum and transverse momentum.
\input{Table_results}
\section{Introduction}
\label{sec:Introduction}
In recent years, the space-borne experiments
PAMELA~\cite{pamela_antiprotons} and AMS-02~\cite{ams_results2021} greatly improved measurements of the abundance of the antiproton, \antiproton, component in cosmic rays, which is sensitive to a possible dark matter contribution~\cite{Winkler_WIMP,Boudaud_2019,Cuoco_2019}.
In the 10--100\aunit{Ge\kern -0.1em V}\xspace~\antiproton energy range, the interpretation of their measurements requires accurate knowledge of the \antiproton production cross-sections in the spallation of cosmic rays in the
interstellar medium~\cite{theo_2015}, which is mainly composed of hydrogen and helium.
The \mbox{LHCb}\xspace experiment has the unique ability to study collisions of the LHC beams with fixed gaseous targets, including helium, reaching the $100\aunit{Ge\kern -0.1em V}\xspace$ scale for the nucleon-nucleon centre-of-mass energy, \ensuremath{\protect\sqrt{s_{\scriptscriptstyle\rm NN}}}\xspace, unprecedented for fixed-target experiments~\cite{LHCb-PUB-2018-015}.
Using a sample of proton-helium (\ensuremath{\proton {\rm He}}\xspace) collisions collected in 2016, the production of prompt antiprotons directly in the collisions or through decays of excited states was measured by the \mbox{LHCb}\xspace collaboration~\cite{LHCb-PAPER-2018-031}. These results were the first to use a helium target, and, covering an energy scale where significant violation of Feynman scaling~\cite{FeynmanScaling} occurs, contributed to a better modelling of the secondary \antiproton cosmic flux~\cite{Winkler_2017,Donato_2018,Boudaud_2019}.
The uncertainties on \antiproton production from weak decays still
limit the interpretation of cosmic \antiproton data~\cite{Winkler_2017}. The largest of these contributions
is due to antineutron decays, which cannot be directly observed in \mbox{LHCb}\xspace but can be estimated from the antiproton measurements and the assumption of isospin symmetry. Another significant contribution, which is less constrained theoretically, comes from decays of antihyperons, \ensuremath{\offsetoverline{H}}\xspace. Antiprotons produced in this way are referred to as \textit{detached} in the following, as they can be distinguished experimentally from prompt antiprotons in the \mbox{LHCb}\xspace experiment by the separation between their production vertex and the primary \ensuremath{\proton {\rm He}}\xspace collision vertex (PV).
This paper reports a determination of the ratio
\begin{equation}
\ensuremath{R_{\antihyp}}\xspace \equiv \dfrac{\sigma(\ensuremath{\proton {\rm He}}\xspace \ensuremath{\rightarrow}\xspace \ensuremath{\offsetoverline{H}}\xspace X \ensuremath{\rightarrow}\xspace \antiproton X)}
{\sigma(\ensuremath{\proton {\rm He}}\xspace \ensuremath{\rightarrow}\xspace \antiproton\ped{prompt} X) }
\end{equation}
of detached to prompt antiprotons in \ensuremath{\proton {\rm He}}\xspace collisions at $\ensuremath{\protect\sqrt{s_{\scriptscriptstyle\rm NN}}}\xspace = 110\aunit{Ge\kern -0.1em V}\xspace$ with momentum, \ensuremath{p}\xspace, ranging from $12$ to $110$\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace and transverse momentum, \ensuremath{p_{\mathrm{T}}}\xspace, between $0.4$ and $4$\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace, where \textit{X} stands for an arbitrary set of unreconstructed particles.
Two approaches to the measurement, presented in Secs.~\ref{sec:exclusive}~and~\ref{sec:inclusive}, are followed as described below.
The dominant process, namely \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace with promptly produced {\ensuremath{\offsetoverline{\PLambda}}}\xspace particles, is measured relying only on the secondary vertex displacement from the PV and on the decay kinematics.
The ratio
\begin{equation}
\ensuremath{R_{\Lbar}}\xspace \equiv \dfrac{\sigma(\ensuremath{\proton {\rm He}}\xspace \ensuremath{\rightarrow}\xspace {\ensuremath{\offsetoverline{\PLambda}}}\xspace X \ensuremath{\rightarrow}\xspace \antiproton {\ensuremath{\pion^+}}\xspace X)}
{\sigma(\decay{\ensuremath{\proton {\rm He}}\xspace}{\antiproton\ped{prompt} X}) }
\end{equation}
is then determined using the prompt production result~\cite{LHCb-PAPER-2018-031}, obtained from
the same dataset.
In the second approach, an inclusive measurement of detached antiprotons is performed by exploiting the particle identification (PID) capabilities of the \mbox{LHCb}\xspace detector. Prompt and detached \antiproton are
distinguished by the minimum distance of their reconstructed track
to the PV, the impact parameter~(IP).
As $\ensuremath{R_{\Lbar}}\xspace/\ensuremath{R_{\antihyp}}\xspace$ is predicted more reliably than the single ratios,
the consistency of these two complementary approaches is verified.
The available measurements of the \ensuremath{R_{\Lbar}}\xspace ratio, though affected by large uncertainties, hint at a significant increase of this ratio for $\ensuremath{\protect\sqrt{s_{\scriptscriptstyle\rm NN}}}\xspace > 100 \aunit{Ge\kern -0.1em V}\xspace$~\cite{Winkler_2017}. The \mbox{LHCb}\xspace fixed-target configuration is capable of exploring the energy scale where the \ensuremath{R_{\Lbar}}\xspace enhancement occurs.
The contribution to \antiproton production from charm and beauty hadron decays
is estimated to be three orders of magnitude smaller than the prompt one, using the measured {\ensuremath{\cquark\cquarkbar}}\xspace cross-section in the
same fixed-target configuration at \mbox{LHCb}\xspace~\cite{LHCb-PAPER-2018-023} and the known charm branching fractions to baryons~\cite{fragm}. This is negligible compared to the accuracy of this measurement.
\section{The \mbox{LHCb}\xspace detector and its fixed-target operation}
\label{sec:Detector}
The \mbox{LHCb}\xspace detector~\cite{LHCb-DP-2008-001,LHCb-DP-2014-002} is a
single-arm forward spectrometer covering the \mbox{pseudorapidity}
range $2<\eta <5$, designed for the study of particles containing
{\ensuremath{\Pb}}\xspace or {\ensuremath{\Pc}}\xspace quarks. The detector includes a high-precision
tracking system consisting of a silicon-strip vertex detector (VELO\xspace)
surrounding the proton-proton (\proton\proton) interaction
region~\cite{LHCb-DP-2014-001}, a large-area silicon-strip
detector located upstream of a dipole magnet with a bending power of
about 4\aunit{Tm}\xspace, and three stations of silicon-strip
detectors and straw drift tubes~\cite{LHCb-DP-2017-001} placed
downstream of the magnet. The tracking system provides a measurement
of the momentum of charged particles with a relative
uncertainty that varies from 0.5\% at low momentum to 1.0\% at
200\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace. The IP is measured with a resolution
of $(15+29/\ensuremath{p_{\mathrm{T}}}\xspace)\ensuremath{\,\upmu\nospaceunit{m}}\xspace$, where \ensuremath{p_{\mathrm{T}}}\xspace is measured in \ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace.
Different types of charged hadrons
are distinguished by using information from two ring-imaging Cherenkov (RICH)
detectors~\cite{LHCb-DP-2012-003}, whose acceptance and performance define the \antiproton kinematic range accessible to this study. The first RICH detector has an inner acceptance limited to $\eta<4.4$ and is used to identify antiprotons with momenta between $12$ and $60\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace$. The second RICH detector, whose Cherenkov threshold for protons is $30\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace$,
covers the range $3<\eta<5$ and is used for antiproton momenta up to $110\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace$.
The scintillating-pad detector (SPD) of the calorimeter system is also used in this study. The SMOG (System for Measuring Overlap with Gas) system~\cite{BGI_thesis, LHCb-PAPER-2014-047}
enables the injection of noble gases with pressure of $\mathcal{O}(10^{-7})$\aunit{mbar}\xspace in the beam pipe section crossing the VELO\xspace, allowing \mbox{LHCb}\xspace to be operated as a fixed-target experiment.
The online event selection
is performed by a
trigger~\cite{LHCb-DP-2012-004}, which consists of a hardware stage,
requiring any activity in the SPD detector, and a software stage
asking for at least one reconstructed track in the VELO\xspace.
To avoid background from \proton\proton collisions, fixed-target events are acquired only when a bunch in the beam pointing toward \mbox{LHCb}\xspace crosses the nominal interaction region without a corresponding colliding bunch in the other beam.
\section{Data sample and simulation}
\label{sec:Data}
This measurement is performed on data specifically collected for
\antiproton production studies in May 2016. Helium gas was injected
when the two beams circulating in the LHC accelerator consisted
of proton bunches separated by at least 1~\ensuremath{\,\upmu\nospaceunit{s}}\xspace, 40 times the nominal value. In this configuration, spurious \proton\proton collisions are suppressed.
A sample of \ensuremath{\proton {\rm He}}\xspace collisions with a $6.5$\aunit{Te\kern -0.1em V}\xspace proton-beam energy ($\ensuremath{\protect\sqrt{s_{\scriptscriptstyle\rm NN}}}\xspace=110.5$\aunit{Ge\kern -0.1em V}\xspace) and corresponding to an integrated luminosity of about $0.5\ensuremath{\nb^{-1}}\xspace$ was collected~\cite{LHCb-PAPER-2018-031}.
Selected events are required to have a reconstructed PV within the fiducial region \mbox{$-700 < z < +100$\aunit{mm}\xspace}, where the $z$ axis is along the beam direction and $z=0$\aunit{mm}\xspace corresponds to the \mbox{LHCb}\xspace nominal collision
point in the central part of the VELO\xspace. The fiducial region is chosen
to achieve a high efficiency for PV reconstruction in fixed-target collisions and a significant probability that antihyperon decays occur within the VELO\xspace.
Antiproton candidates are reconstructed in the full tracking system exploiting the excellent {\ensuremath{\offsetoverline{\PLambda}}}\xspace invariant-mass resolution and IP determination. The PV position is required to be compatible with the beam profile and events must have fewer than 5 tracks reconstructed in the VELO\xspace with negative pseudorapidity. This selection suppresses to a negligible level the background from interactions with material, decays, and particle showers produced in beam-gas collisions occurring upstream of the VELO\xspace. A sample of $33.7$ million reconstructed \ensuremath{\proton {\rm He}}\xspace collisions satisfying these requirements is obtained from the data.
Simulated data samples of \ensuremath{\proton {\rm He}}\xspace collisions are produced with the
\mbox{\textsc{Epos-lhc}}\xspace generator~\cite{epos-lhc}.
The interaction of the generated particles with the detector, and its response, are implemented by using the \mbox{\textsc{Geant4}}\xspace toolkit~\cite{Allison:2006ve,
*Agostinelli:2002hh} as described in Ref.~\cite{LHCb-PROC-2011-006}.
The collisions are uniformly distributed along $z$ in the range
$-1000 < z < +300$\aunit{mm}\xspace, wide enough to cover the fiducial region. When estimating efficiencies, a $z$-dependent weight is applied to simulated events to account for the measured gas pressure variation.
This study uses a sample of unbiased simulated inelastic collisions
and several \antiproton-enriched samples, where the simulation of the
detector response is performed only if the event generated by \mbox{\textsc{Epos-lhc}}\xspace
contains a suitable \antiproton candidate. In the sample used for the
inclusive analysis, events must include at least one \antiproton with $\ensuremath{p_{\mathrm{T}}}\xspace>0.3\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace$
and $1.9 <\eta < 5.4$. In the sample used for the exclusive analysis,
the antiproton must also come from a {\ensuremath{\offsetoverline{\PLambda}}}\xspace decay occurring within the acceptance of the VELO\xspace. To study the cascade baryon contribution, a sample where the {\ensuremath{\offsetoverline{\PLambda}}}\xspace decay follows from a \ensuremath{\decay{\Xibarp}{\Lbar\pip}}\xspace decay is also simulated.
\section{Exclusive \texorpdfstring{{\boldmath \ensuremath{R_{\Lbar}}\xspace}}{RLbar} measurement}
\label{sec:exclusive}
About 70\%~\cite{CRMC} of the detached antiprotons are expected to originate
from decays of promptly produced {\ensuremath{\offsetoverline{\PLambda}}}\xspace baryons and can be selected in the \mbox{LHCb}\xspace detector by exploiting the detached decay vertex and the invariant-mass resolution. The decay kinematics allow the antiproton to be identified from the charge and the asymmetry of the longitudinal momenta of the final-state particles with respect to the {\ensuremath{\offsetoverline{\PLambda}}}\xspace flight direction ($p_{ \text{L}{\ensuremath{\offsetoverline{\PLambda}}}\xspace}$),
\begin{equation}
\alpha_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace} \equiv \dfrac{ p_{\text{L} {\ensuremath{\offsetoverline{\PLambda}}}\xspace}({\ensuremath{\pion^+}}\xspace) -p_{ \text{L}{\ensuremath{\offsetoverline{\PLambda}}}\xspace}(\antiproton)}{p_{ \text{L}{\ensuremath{\offsetoverline{\PLambda}}}\xspace}({\ensuremath{\pion^+}}\xspace)+p_{ \text{L}{\ensuremath{\offsetoverline{\PLambda}}}\xspace}(\antiproton)},
\end{equation}
which is always negative for {\ensuremath{\offsetoverline{\PLambda}}}\xspace decays~\cite{Armenteros}.
Therefore, the RICH detectors are not used in this approach.
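For illustration only, the following Python sketch evaluates these kinematic quantities and the corresponding elliptical requirement; it is not part of the analysis software, the function names are ours, and the numerical constants are those of the Armenteros-Podolanski requirement listed in Table~\ref{tab:Lambda0Selection}.
\begin{verbatim}
import numpy as np

def armenteros_variables(p_pip, p_pbar):
    """Longitudinal asymmetry alpha and pion transverse momentum with
    respect to the Lambda-bar flight direction, approximated here by
    the summed daughter momenta (three-vectors in GeV/c)."""
    p_lb = p_pip + p_pbar
    n = p_lb / np.linalg.norm(p_lb)          # unit flight direction
    pl_pip, pl_pbar = np.dot(p_pip, n), np.dot(p_pbar, n)
    alpha = (pl_pip - pl_pbar) / (pl_pip + pl_pbar)
    pt_pip = np.linalg.norm(p_pip - pl_pip * n)
    return alpha, pt_pip

def passes_ap_requirement(alpha, pt_pip):
    """Elliptical requirement of Table 1 (pt_pip given in GeV/c and
    converted to MeV/c)."""
    return abs(((alpha + 0.69) / 0.18) ** 2
               + (1000.0 * pt_pip / 100.4) ** 2 - 1.0) < 0.39
\end{verbatim}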
In order to minimise systematic uncertainties in the measurement of \ensuremath{R_{\Lbar}}\xspace, the selection follows as much as possible that used for the prompt measurement~\cite{LHCb-PAPER-2018-031}. In particular, the same fiducial volume, where the PV reconstruction efficiency cancels in the ratio, and the same kinematic region for the \antiproton candidate, $12 < \ensuremath{p}\xspace < 110$\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace and $0.4 < \ensuremath{p_{\mathrm{T}}}\xspace < 4$\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace, are required. The analysis is performed in intervals of \ensuremath{p}\xspace and \ensuremath{p_{\mathrm{T}}}\xspace. These intervals are aligned with those used in the prompt measurement, except that some are merged to improve the statistical accuracy.
\subsection{Selection and invariant-mass fit}
\begin{table}
\caption{ Selection requirements for \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace decays. Symbols are defined in the text.}
\label{tab:Lambda0Selection}
\centering
\begin{tabular}{lc}
\toprule
\multirow{3}{*}{Detector acceptance} &
$2 < \eta (\antiproton) < 5$ \\
& $2 < \eta ({\ensuremath{\pion^+}}\xspace) < 5.5$ \\
& $2 < \eta ({\ensuremath{\offsetoverline{\PLambda}}}\xspace) < 5.5;~ \ensuremath{p_{\mathrm{T}}}\xspace ({\ensuremath{\offsetoverline{\PLambda}}}\xspace) > 0.3\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace$ \\
\midrule
\multirow{3}{*}{Decay geometry} &
IP({\ensuremath{\offsetoverline{\PLambda}}}\xspace) $< 5$\aunit{mm}\xspace; \ensuremath{\chisq_{\text{DOCA}}}\xspace $<$ 10 \\
& $\log[ \ensuremath{\chi^2_{\text{IP}}}\xspace(\antiproton)] > 1$; $\log[\ensuremath{\chi^2_{\text{IP}}}\xspace({\ensuremath{\pion^+}}\xspace)] > 2$\\
& $\ensuremath{\mathcal{F}\ped{IP}}\xspace > 1.5;~ \ensuremath{\mathcal{F}\ped{\chisqip}}\xspace > 4$ \\
\midrule
{\ensuremath{\kaon^0_{\mathrm{S}}}}\xspace veto & $\ensuremath{M_{\proton \rightarrow \pi}}\xspace < 490 \ensuremath{\aunit{Me\kern -0.1em V\!/}c^2}\xspace$ or $\ensuremath{M_{\proton \rightarrow \pi}}\xspace > 511 \ensuremath{\aunit{Me\kern -0.1em V\!/}c^2}\xspace$ \\
\midrule
Armenteros-Podolanski & $\Big| \Big(\frac{\alpha_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace} + 0.69}{0.18}\Big)^{2} + \Big(\frac{p_{\text{T}{\ensuremath{\offsetoverline{\PLambda}}}\xspace}({\ensuremath{\pion^+}}\xspace)}{100.4 \ensuremath{\aunit{Me\kern -0.1em V\!/}c}\xspace}\Big)^{2} - 1 \Big| < 0.39 $\\
\bottomrule
\end{tabular}
\end{table}
The {\ensuremath{\offsetoverline{\PLambda}}}\xspace decay candidates are reconstructed from two oppositely charged tracks, which comprise segments in the VELO and in the downstream tracking stations, have a good fit quality and are incompatible with being produced at the PV.
The two-track combinations are selected only if their distance of closest approach is compatible with zero
using a \ensuremath{\chi^2}\xspace test (\ensuremath{\chisq_{\text{DOCA}}}\xspace). Following previous {\ensuremath{\PLambda}}\xspace production studies in \mbox{LHCb}\xspace~\cite{LHCb-PAPER-2011-005}, large discrimination against combinatorial background
is obtained by combining the IP information of the {\ensuremath{\offsetoverline{\PLambda}}}\xspace and the final-state
particles into the linear discriminant
\begin{equation}
\ensuremath{\mathcal{F}\ped{IP}}\xspace \equiv \log\left(\frac{\text{IP}(\antiproton)}{1\aunit{mm}\xspace}\right) + \log\left(\frac{\text{IP}({\ensuremath{\pion^+}}\xspace)}{1\aunit{mm}\xspace}\right) - \log\left( \frac{\text{IP}({\ensuremath{\offsetoverline{\PLambda}}}\xspace)}{1\aunit{mm}\xspace}\right).
\label{eq:fisher}
\end{equation}
To take into account the uncertainty on their measurements, a second discriminant \ensuremath{\mathcal{F}\ped{\chisqip}}\xspace is constructed by replacing IP in Eq.~(\ref{eq:fisher}) with the \ensuremath{\chi^2_{\text{IP}}}\xspace variable, defined as
the difference in the vertex-fit \ensuremath{\chi^2}\xspace of the PV reconstructed with and
without the track(s) under consideration.
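Schematically, and purely as an illustration (the names are ours and the code is not the analysis implementation), the two discriminants can be written as
\begin{verbatim}
import numpy as np

def f_ip(ip_pbar, ip_pip, ip_lambda):
    """Linear discriminant built from impact parameters (in mm):
    large when both daughters are displaced while their combination
    points back to the primary vertex."""
    return np.log(ip_pbar) + np.log(ip_pip) - np.log(ip_lambda)

def f_chi2ip(chi2_pbar, chi2_pip, chi2_lambda):
    """Analogous discriminant built from the chi2_IP quantities,
    which fold in the per-track IP measurement uncertainties."""
    return np.log(chi2_pbar) + np.log(chi2_pip) - np.log(chi2_lambda)
\end{verbatim}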
To veto \ensuremath{\decay{\KS}{\pim \pip}}\xspace decays, the misreconstructed invariant-mass \ensuremath{M_{\proton \rightarrow \pi}}\xspace, obtained by assigning the pion mass to both final-state particles, is required to be incompatible with the {\ensuremath{\kaon^0_{\mathrm{S}}}}\xspace mass. Finally, a requirement on the Armenteros-Podolanski plane~\cite{Armenteros}
$\left(\alpha_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace},\, p_{\text{T}{\ensuremath{\offsetoverline{\PLambda}}}\xspace}({\ensuremath{\pion^+}}\xspace) \right)$, where $ p_{\text{T}{\ensuremath{\offsetoverline{\PLambda}}}\xspace}$ is the transverse momentum with respect to the {\ensuremath{\offsetoverline{\PLambda}}}\xspace direction, is used.
The selection requirements are listed in Table~\ref{tab:Lambda0Selection}.
The purity of the selected sample is above $90\%$ in the
unbiased simulation. To subtract the residual background, the invariant-mass distribution of the {\ensuremath{\offsetoverline{\PLambda}}}\xspace candidates is fitted with the sum of
one Voigtian~\cite{Voigtian} and two Gaussian functions for the signal and a second-order polynomial for the background. This model, validated with simulation, takes into account the bias to the background distribution
from the Armenteros-Podolanski plot requirement and is able to describe the data in all kinematic intervals. The invariant-mass distribution for selected \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace candidates is shown in Fig.~\ref{fig:LambdaMass} together with a fit integrated over all \ensuremath{p}\xspace and \ensuremath{p_{\mathrm{T}}}\xspace intervals, which results in a yield of $(50.7 \pm 0.3)\times 10^3$ \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace decays.
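For concreteness, a schematic Python version of such a fit model is given below. It is a sketch only: the assumption that the Voigtian and the two Gaussians share a common mean is ours, and SciPy's \texttt{voigt\_profile} is used as a stand-in for the Voigtian line shape.
\begin{verbatim}
import numpy as np
from scipy.special import voigt_profile

def signal_pdf(m, mu, sig_v, gam_v, sig1, sig2, f1, f2):
    """One Voigtian plus two Gaussians with a common mean mu;
    f1 and f2 are the fractions of the two Gaussian components."""
    gauss = lambda s: (np.exp(-0.5 * ((m - mu) / s) ** 2)
                       / (s * np.sqrt(2.0 * np.pi)))
    return ((1.0 - f1 - f2) * voigt_profile(m - mu, sig_v, gam_v)
            + f1 * gauss(sig1) + f2 * gauss(sig2))

def fit_model(m, n_sig, sig_pars, a0, a1, a2):
    """Signal plus second-order polynomial background (unnormalised)."""
    return n_sig * signal_pdf(m, *sig_pars) + a0 + a1 * m + a2 * m ** 2
\end{verbatim}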
\begin{figure}[tb]
\centering
\includegraphics[width=\linewidth]{figs/Fig1.pdf}
\caption{Invariant-mass distribution for the \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace
candidates selected in the \ensuremath{\proton {\rm He}}\xspace data. The fit model is overlaid on the data.}
\label{fig:LambdaMass}
\end{figure}
\subsection{Tracking efficiency}
\label{sec:tracking}
The yields of the selected candidates in each kinematic interval are corrected for the total reconstruction and selection efficiencies. These are
determined as the ratio of signal yields obtained by the invariant-mass fit of the {\ensuremath{\offsetoverline{\PLambda}}}\xspace candidates in the \antiproton-enriched simulated sample to the number of actual candidates generated by the \mbox{\textsc{Epos-lhc}}\xspace model in the same interval of \ensuremath{p}\xspace and \ensuremath{p_{\mathrm{T}}}\xspace. With this procedure, the efficiency takes into account the resolution effects resulting in migration across kinematic intervals. The largest inefficiency comes from decays occurring downstream of the VELO\xspace and can be accurately predicted. For the upstream decays, the average track reconstruction efficiency is determined in simulation to be $(95.84 \pm 0.04)\%$ for the antiprotons and $(85.40 \pm 0.06)\%$ for the pions, which tend to have a lower momentum. The quoted uncertainties are only due to the finite simulated sample size. These efficiencies are corrected by factors determined from calibration samples in \proton\proton data, which are consistent with unity in all kinematic intervals within their systematic uncertainty of $0.8\%$~\cite{LHCB-DP-2013-002}.
As illustrated in Fig.~\ref{fig:z}, the tracks considered in this study exhibit a different topology, notably in the VELO\xspace, with respect to the prompt tracks from \proton\proton collisions used for calibration, because of the
larger spread of the fixed-target collision position and the long {\ensuremath{\offsetoverline{\PLambda}}}\xspace flight distance.
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{figs/Fig2.pdf}
\caption{Normalised distributions of the production vertex $z$ coordinate for simulated prompt and detached \antiproton in \ensuremath{\proton {\rm He}}\xspace collisions and for prompt \antiproton in simulated \proton\proton collisions in the kinematic range explored in this paper. For \ensuremath{\proton {\rm He}}\xspace collisions, all antiprotons originate in the $-700 < z < +100$\aunit{mm}\xspace PV fiducial region.}
\label{fig:z}
\end{figure}
The validation of the VELO\xspace tracking efficiency is therefore extended using partially reconstructed \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace decay candidates in the \ensuremath{\proton {\rm He}}\xspace collision sample, where the candidate \antiproton is reconstructed in the tracking stations upstream and downstream of the magnet but ignoring the information from the VELO\xspace. To take into account the degraded resolution of the decay vertex,
the selection is loosened by requiring $\ensuremath{\mathcal{F}\ped{IP}}\xspace > 1$ and $\ensuremath{\mathcal{F}\ped{\chisqip}}\xspace > 3$, while
the pion, reconstructed using the whole tracking system, is required to be identified by the RICH\xspace detectors to compensate for the larger background. The VELO\xspace tracking efficiency is estimated from the fraction of candidates in this sample where the partially reconstructed track satisfies the quality requirements for a fully reconstructed track when including the information from the VELO\xspace. The VELO\xspace efficiencies measured in data and simulation with the same analysis are compared in Fig.~\ref{fig:VELOeff}, notably as a function of the \antiproton production vertex position. No significant differences are observed.
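Schematically, the per-interval estimate reduces to a tag-and-probe ratio; the following minimal sketch (hypothetical names, simple binomial uncertainty) summarises the procedure, which is evaluated as a function of momentum, transverse momentum, vertex position and track multiplicity in Fig.~\ref{fig:VELOeff}.
\begin{verbatim}
import numpy as np

def velo_tracking_efficiency(n_matched, n_probes):
    """Fraction of partially reconstructed antiproton probes that
    acquire a good track when the VELO information is included,
    with a simple binomial uncertainty."""
    eff = n_matched / n_probes
    return eff, np.sqrt(eff * (1.0 - eff) / n_probes)
\end{verbatim}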
\begin{figure}[tb]
\centering
\includegraphics[width = 0.48\textwidth]{figs/Fig3a.pdf}
\includegraphics[width = 0.48\textwidth]{figs/Fig3b.pdf}
\includegraphics[width = 0.48\textwidth]{figs/Fig3c.pdf}
\includegraphics[width = 0.48\textwidth]{figs/Fig3d.pdf}
\caption{VELO\xspace tracking efficiency for \antiproton in \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace decays as a function of (top left) the particle momentum, (top right) the transverse momentum, (bottom left) the production vertex $z$ coordinate and (bottom right) the number of reconstructed long tracks in the event.}
\label{fig:VELOeff}
\end{figure}
The uncertainty on the \antiproton reconstruction efficiency in each kinematic interval is expected to cancel in the \ensuremath{R_{\Lbar}}\xspace ratio, so that only the {\ensuremath{\pion^+}}\xspace tracking efficiency contributes; a corresponding 0.8\% systematic uncertainty on \ensuremath{R_{\Lbar}}\xspace is assigned.
\subsection{Other systematic uncertainties}
A fit model uncertainty is evaluated by repeating the fit with an additional Gaussian component in the
signal model or with an Argus function~\cite{Argus} for the background model. In both cases, the variation of the result is smaller than the statistical uncertainty in both data and simulation. The online selection requirements are found to be fully efficient in a control sample with randomly selected events. Each of the offline selection requirements listed in Table~\ref{tab:Lambda0Selection} has an efficiency larger than 90\%, with a total selection efficiency of 70\%. The largest inefficiencies are attributed to the requirements on \ensuremath{\mathcal{F}\ped{IP}}\xspace, \ensuremath{\mathcal{F}\ped{\chisqip}}\xspace and \ensuremath{M_{\proton \rightarrow \pi}}\xspace. The normalised distributions of the two $\mathcal{F}$ discriminants are compared between data and simulated signal. The background contamination is statistically subtracted from the data by modelling the {\ensuremath{\offsetoverline{\PLambda}}}\xspace invariant-mass distributions and by applying the \mbox{\em sPlot}\xspace technique~\cite{Pivk:2004ty} with $m(\antiproton {\ensuremath{\pion^+}}\xspace)$ as discriminating variable. The efficiencies of the requirements in Table~\ref{tab:Lambda0Selection} are measured on the resulting distributions and the difference between data and simulation, amounting to 1\%, is assigned as systematic uncertainty. As a further cross-check of the reliability of the simulation in the wide fiducial region for fixed-target collisions, the analysis is repeated in four equally populated intervals of the PV $z$ position. The efficiency-corrected signal yields are found to agree within the statistical uncertainties.
To check that the \antiproton-enriched simulated sample does not bias
the efficiency estimation, a simulated sample with a looser {\ensuremath{\offsetoverline{\PLambda}}}\xspace selection is used. The total signal efficiencies are found to agree in all kinematic intervals.
\subsection{Results}
The ratio \ensuremath{R_{\Lbar}}\xspace is determined in each kinematic interval from the measured yield $N_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace}$ of \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace decays, the total efficiency $\epsilon_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace}$ and the corresponding quantities for prompt \antiproton production~\cite{LHCb-PAPER-2018-031} as
\begin{equation}
\ensuremath{R_{\Lbar}}\xspace = \dfrac{N_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace}}{N\ped{\antiproton}} \dfrac{\epsilon\ped{\antiproton}}{\epsilon_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace}}.
\end{equation}
The $N_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace}$ yields determined from the fits to the {\ensuremath{\offsetoverline{\PLambda}}}\xspace invariant-mass distributions are corrected by $0.6\%$ to account for the contribution from collisions on the residual gas of the LHC vacuum contaminating the helium target, as estimated in Ref.~\cite{LHCb-PAPER-2018-031}. The related uncertainty is expected to cancel in the ratio. All significant sources of systematic uncertainty on the \ensuremath{R_{\Lbar}}\xspace ratio are listed in Table~\ref{tab:exclSyst}. The leading contributions relate to the particle identification of prompt antiprotons and to the limited size of the produced {\ensuremath{\offsetoverline{\PLambda}}}\xspace sample. The \ensuremath{R_{\Lbar}}\xspace results are illustrated in Fig.~\ref{fig:Rexc2d} and reported in Appendix~\ref{appendixResults} for the kinematic intervals that are common to this and the prompt antiproton production analysis. The results as a function of \ensuremath{p}\xspace (\ensuremath{p_{\mathrm{T}}}\xspace), integrated over the $0.55< \ensuremath{p_{\mathrm{T}}}\xspace < 1.2$\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace ($12 < \ensuremath{p}\xspace < 50.5$\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace) region, are shown in Fig.~\ref{fig:Rexc} and compared to widely used hadronic collision models included in the \mbox{\textsc{Crmc}}\xspace package~\cite{CRMC}.
The data indicate that all considered generators significantly underestimate the {\ensuremath{\offsetoverline{\PLambda}}}\xspace contribution to the \antiproton production.
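As an illustration of the per-interval combination, the following sketch propagates the four uncertainties assuming they are uncorrelated; this is a simplification (some components are correlated or cancel in the ratio), and the names are ours.
\begin{verbatim}
import numpy as np

def r_lbar(n_lb, dn_lb, n_p, dn_p, e_p, de_p, e_lb, de_lb):
    """R = (N_Lbar / N_pbar) * (eps_pbar / eps_Lbar), with the four
    relative uncertainties added in quadrature (illustrative only)."""
    r = (n_lb / n_p) * (e_p / e_lb)
    rel = np.sqrt((dn_lb / n_lb) ** 2 + (dn_p / n_p) ** 2
                  + (de_p / e_p) ** 2 + (de_lb / e_lb) ** 2)
    return r, r * rel
\end{verbatim}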
\begin{table}
\centering
\caption{Relative uncertainties on the \ensuremath{R_{\Lbar}}\xspace measurement.}
\label{tab:exclSyst}
\begin{tabular}{lc}
\toprule
Particle identification ($N\ped{\antiproton}$) & $0\%-36\%$ ($<5\%$ for most intervals)\\
Statistical uncertainty ($N_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace}$) & $2.2\%-11\%$ ($<4\%$ for most intervals)\\
Statistical uncertainty ($N\ped{\antiproton}$) & $0.5\%-11\%$ ($<2\%$ for most intervals)\\
Simulated sample size ($\epsilon_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace}$) & $1.8\%-4.1\%$ ($<2\%$ for most intervals)\\
Simulated sample size ($\epsilon\ped{\antiproton}$) & $0.4\%-11\%$ ($<2\%$ for most intervals)\\
Background subtraction ($N\ped{\antiproton}$) & $1.1\%$ \\
Selection efficiency ($\epsilon_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace}$) & $1\%$ \\
Tracking efficiency for {\ensuremath{\pion^+}}\xspace ($\epsilon_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace}$) & $0.8\%$ \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[tb]
\centering
\includegraphics[width = \textwidth]{figs/Fig4.pdf}
\caption{ Measured \ensuremath{R_{\Lbar}}\xspace in each of the considered \ensuremath{p}\xspace and \ensuremath{p_{\mathrm{T}}}\xspace intervals.}
\label{fig:Rexc2d}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width = 0.95\textwidth]{figs/Fig5a.pdf}
\includegraphics[width = 0.95\textwidth]{figs/Fig5b.pdf}
\caption{Measured \ensuremath{R_{\Lbar}}\xspace as a function of (top)
the \antiproton momentum for \mbox{$0.55<\ensuremath{p_{\mathrm{T}}}\xspace<1.2$\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace} and (bottom) the \antiproton transverse momentum for \mbox{$12<\ensuremath{p}\xspace<50.5$\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace}. The measurement is compared to the predictions, in the same kinematic regions, from the \mbox{\textsc{Epos~1.99}}\xspace~\cite{EPOS99}, \mbox{\textsc{Epos-lhc}}\xspace~\cite{epos-lhc}, \mbox{\textsc{Hijing~1.38}}\xspace~\cite{HIJING} and \mbox{\textsc{Pythia~6}}\xspace~\cite{pythia} models, included in the \mbox{\textsc{Crmc}}\xspace package~\cite{CRMC}.}
\label{fig:Rexc}
\end{figure}
\section{Inclusive \texorpdfstring{{\boldmath \ensuremath{R_{\antihyp}}\xspace}}{RHbar} measurement}
\label{sec:inclusive}
An alternative inclusive approach to the measurement of the detached \antiproton yield relies on the PID capabilities of the RICH detectors and on the IP resolution of the VELO\xspace, rather than on the reconstruction of the \ensuremath{\offsetoverline{H}}\xspace decays. In this second analysis, a high-purity \antiproton sample is selected through a tight PID requirement. Prompt and detached antiprotons are statistically resolved through a template fit to the distribution of the \ensuremath{\chi^2_{\text{IP}}}\xspace variable. Figure~\ref{fig:threepeaks} shows the $\log(\ensuremath{\chi^2_{\text{IP}}}\xspace)$ distribution for all simulated \antiproton in the \antiproton-enriched sample. Three contributions can be clearly distinguished, corresponding mainly to prompt antiprotons, detached antiprotons, and antiprotons produced in secondary collisions with the detector material. A non-Gaussian tail of the prompt distribution, extending towards the detached \antiproton region, is attributed to scattering
in the material separating the primary \mbox{LHC}\xspace vacuum and the VELO\xspace,
as further discussed in Section~\ref{sec:RFfoilSyst}.
\begin{figure}[tb]
\centering
\includegraphics[width=\textwidth]{figs/Fig6.pdf}
\caption{Distribution of the $\log(\ensuremath{\chi^2_{\text{IP}}}\xspace)$ variable for all simulated
antiprotons in the \antiproton-enriched simulated sample. The
contributions from prompt antiprotons, detached antiprotons and antiprotons produced in the detector material are shown separately.}
\label{fig:threepeaks}
\end{figure}
Antiproton candidates are selected from negatively charged tracks
reconstructed with a high-quality fit including segments in the VELO\xspace and in the tracking stations upstream and downstream of the magnet. The analysis is performed in the (\ensuremath{p}\xspace, \ensuremath{p_{\mathrm{T}}}\xspace) plane, with ten momentum intervals between $12$ and $110$\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace and five for \ensuremath{p_{\mathrm{T}}}\xspace between $0.4$ and $4\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace$. The \antiproton identification is based on two quantities determined from the response of the RICH detectors: the difference between the logarithm of the likelihood of the proton and pion hypotheses, \ensuremath{\mathrm{DLL}_{\proton\pion}}\xspace, and that between the proton and kaon hypotheses, \ensuremath{\mathrm{DLL}_{\proton\kaon}}\xspace~\cite{LHCb-DP-2012-003}. A tight selection is used, requiring $\ensuremath{\mathrm{DLL}_{\proton\pion}}\xspace > 20$ and $\ensuremath{\mathrm{DLL}_{\proton\kaon}}\xspace> 10$, to suppress contamination from misidentified particles. Kinematic intervals at the
boundaries of the RICH capabilities, where the \antiproton purity
predicted in simulation is below 80\%, are removed from the analysis.
The overall predicted purity of the resulting \antiproton sample is 97\%.
The numbers of reconstructed prompt and detached
\antiproton, $N_{\text{prompt}}$ and $N_{\text{det}}$, are determined from the fit and then corrected for the corresponding efficiencies as estimated
from simulation. These are then used to calculate
\begin{equation}
\ensuremath{R_{\antihyp}}\xspace = \dfrac{N_{\text{det}}} { N_{\text{prompt}}} \dfrac{\epsilon_{\text{prompt}}}{\epsilon_{\text{det}}}.
\end{equation}
Efficiencies are determined from the simulation as the ratio between the number of selected candidates in each interval of reconstructed \ensuremath{p}\xspace and \ensuremath{p_{\mathrm{T}}}\xspace, and the number generated by the \mbox{\textsc{Epos-lhc}}\xspace model in the same kinematic interval.
\subsection{Template fit}
Templates for the \ensuremath{\chi^2_{\text{IP}}}\xspace distributions are drawn from the \antiproton-enriched simulation in each kinematic interval for different categories of candidates:
three templates for the prompt, detached and secondary \antiproton and four templates for misidentified particles, consisting of pions, kaons, electrons and fake tracks. Smoothed curves are obtained through a parametrisation of the probability density functions as a sum of Gaussian functions, whose parameters are obtained from a fit to the simulated event distributions, as illustrated in Fig.~\ref{fig:templateDrawing}.
For each template, the number of Gaussian components, whose parameters
are initialised to random values in the appropriate ranges, is increased up to a maximum of 15 until a good fit is obtained.
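A hypothetical Python stand-in for this smoothing step is sketched below; it uses an unbinned expectation-maximisation mixture fit with a BIC-based stopping rule instead of the binned fit with randomly initialised components described above.
\begin{verbatim}
import numpy as np
from sklearn.mixture import GaussianMixture

def smooth_template(log_chi2ip, max_components=15):
    """Fit Gaussian mixtures of increasing size to the simulated
    log(chi2_IP) values and stop when the BIC no longer improves;
    best.score_samples then provides the smoothed log-density."""
    x = np.asarray(log_chi2ip).reshape(-1, 1)
    best, best_bic = None, np.inf
    for n in range(1, max_components + 1):
        gm = GaussianMixture(n_components=n, n_init=3).fit(x)
        bic = gm.bic(x)
        if bic >= best_bic:
            break
        best, best_bic = gm, bic
    return best
\end{verbatim}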
\begin{figure}[tb]
\centering
\includegraphics[width = \textwidth]{figs/Fig7a.pdf}
\includegraphics[width = \textwidth]{figs/Fig7b.pdf}
\caption{Distributions of $\log(\ensuremath{\chi^2_{\text{IP}}}\xspace)$ for (top) prompt
and (bottom) detached antiprotons in the \antiproton-enriched
simulated sample for a kinematic interval in the central region of the considered phase-space. The fit model is overlaid on the data.}
\label{fig:templateDrawing}
\end{figure}
The template fits are performed with the fractions of the three \antiproton components left free to float, while the small contributions from misidentified particles are fixed to the values predicted by the unbiased simulation. The procedure is validated by performing the fit to the unbiased simulated sample and verifying that the obtained abundance for each of the three \antiproton categories agrees with the actual value within the statistical uncertainties.
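A compact sketch of such a binned fit is given below; the Poisson likelihood and all names are ours, not the collaboration's implementation.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def fit_component_yields(data, templates, misid):
    """Binned Poisson maximum-likelihood template fit: `templates` is
    a (3, n_bins) array of unit-normalised prompt/detached/secondary
    shapes and `misid` the frozen misidentification counts per bin."""
    def nll(yields):
        mu = np.clip(misid + yields @ templates, 1e-9, None)
        return np.sum(mu - data * np.log(mu))
    x0 = np.full(3, max(data.sum(), 1.0) / 3.0)
    res = minimize(nll, x0, method="L-BFGS-B",
                   bounds=[(0.0, None)] * 3)
    return res.x  # fitted prompt, detached and secondary yields
\end{verbatim}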
The fit is then applied to the data. Figure~\ref{fig:fitResult}
shows the fit result
integrated over all kinematic intervals.
The raw ratio of detached to prompt reconstructed candidates is found to be
$R_{\text{raw}}\equiv N_{\text{det}}/N_{\text{prompt}} = 0.1247 \pm 0.0005$, where the uncertainty is statistical only. This is larger by a factor of about $1.5$ than the value predicted by the unbiased simulated sample,
$0.0848 \pm 0.0014$, confirming a sizeable underestimation
of the antihyperon component by the \mbox{\textsc{Epos-lhc}}\xspace generator.
\begin{figure}[tb]
\centering
\includegraphics[width = \textwidth]{figs/Fig8.pdf}
\caption{Distributions of the $\log(\ensuremath{\chi^2_{\text{IP}}}\xspace)$ variable in the data sample integrated over all kinematic intervals. The fit model is overlaid on the data.
}
\label{fig:fitResult}
\end{figure}
\subsection{Scattered prompt antiprotons}
\label{sec:RFfoilSyst}
Figure~\ref{fig:threepeaks} shows that a significant fraction of simulated prompt \antiproton candidates are reconstructed with an IP value well above the expected resolution, compatible with the detached \antiproton typical values. As illustrated in Fig.~\ref{fig:PromptTemplate_Phi}, this is due to scattering that may change the track trajectory of the prompt \antiproton when the particle crosses the aluminium foil, shown in Fig.~\ref{fig:VELO_RFfoils}, separating the primary \mbox{LHC}\xspace vacuum from the VELO\xspace sensor volume.
The tail is indeed found to be strongly dependent on the azimuthal angle $\phi$. The simulation of the material geometry and of the scattering cross-section is therefore critical to the determination of \ensuremath{R_{\antihyp}}\xspace.
\begin{figure}[tb]
\centering
\includegraphics[width = 1\textwidth]{figs/Fig9.pdf}
\caption{Normalised distributions of the prompt \antiproton $\log(\ensuremath{\chi^2_{\text{IP}}}\xspace)$ variable in different ranges of the azimuthal angle $\phi=\text{atan}(p_y/p_x)$.}
\label{fig:PromptTemplate_Phi}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width = .55\textwidth]{figs/Fig10.jpg}
\caption{Sketch of the VELO\xspace~\cite{ALEXANDER2013184}, where the aluminium foil crossed by the particles before entering the VELO\xspace volume is visible. The amount of material crossed is largest for $|\phi| > 1$.}
\label{fig:VELO_RFfoils}
\end{figure}
A validation of the predicted prompt \antiproton template is performed using data, by selecting \ensuremath{\decay{\Lbar(1520)}{\pbar \Kp}}\xspace decays. These decays occur at the primary interaction vertex and, tagged by the invariant mass of the parent particle and by the kaon identification, provide a prompt \antiproton sample in data. Since only a small sample of these decays can be selected in the \ensuremath{\proton {\rm He}}\xspace collision sample, the analysis is performed on the largest fixed-target sample collected during LHC Run 2, namely a sample of proton-neon~(\ensuremath{\proton {\rm Ne}}\xspace) collisions with a beam energy of 2.5\aunit{Te\kern -0.1em V}\xspace acquired in 2017. The \antiproton candidate is selected with the same requirements as for the inclusive study, while the tagging kaon must satisfy $\ensuremath{\mathrm{DLL}_{\kaon\pion}}\xspace \equiv \ensuremath{\mathrm{DLL}_{\proton\pion}}\xspace - \ensuremath{\mathrm{DLL}_{\proton\kaon}}\xspace > 20$, $\ensuremath{\mathrm{DLL}_{\proton\kaon}}\xspace < 0$
and $\log(\ensuremath{\chi^2_{\text{IP}}}\xspace) < 3$ to enforce prompt decays.
Events are weighted according to the \antiproton transverse momentum and
the SPD hit multiplicity to equalise these distributions with those
observed for the prompt \antiproton candidates in \ensuremath{\proton {\rm He}}\xspace data.
The \ensuremath{\decay{\Lbar(1520)}{\pbar \Kp}}\xspace yield is determined in intervals of the \antiproton $\log(\ensuremath{\chi^2_{\text{IP}}}\xspace)$.
The background is subtracted by fitting the $\antiproton{\ensuremath{\kaon^+}}\xspace$
invariant-mass distribution with a Voigtian function for the signal and an exponential function for the background, as illustrated
in Fig.~\ref{fig:LambdaStar_Fit}.
The mass resolution of the signal is left as an independent parameter in each interval, as it degrades for increasing values of \ensuremath{\chi^2_{\text{IP}}}\xspace. Figure~\ref{fig:LambdaStar_Template} shows the resulting reconstructed $\log(\ensuremath{\chi^2_{\text{IP}}}\xspace)$ distribution for the prompt \antiproton candidates. A reasonable agreement with the template from simulation is found, though differences are expected due to the simplified material geometry in the simulation. To estimate the related systematic uncertainty, the inclusive template fit for the sample integrated over all intervals is repeated using this data-driven template for the prompt \antiproton component. The relative variation of $R\ped{raw}$ is 4.8\% and is assigned as a systematic uncertainty.
\begin{figure}[tb]
\centering
\includegraphics[width = .48\textwidth]{figs/Fig11a.pdf}
\includegraphics[width = .48\textwidth]{figs/Fig11b.pdf}
\caption{Invariant-mass distributions for the \ensuremath{\decay{\Lbar(1520)}{\pbar \Kp}}\xspace
candidates selected in the \ensuremath{\proton {\rm Ne}}\xspace data within two intervals of the \antiproton $\log(\ensuremath{\chi^2_{\text{IP}}}\xspace)$ variable. The fit model is overlaid on the data.}
\label{fig:LambdaStar_Fit}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width = \textwidth]{figs/Fig12.pdf}
\caption{Distributions of the number of signal candidates determined in each interval of the antiproton $\log(\ensuremath{\chi^2_{\text{IP}}}\xspace)$ compared to the prediction of the \ensuremath{\proton {\rm He}}\xspace simulation for selected prompt antiprotons.}
\label{fig:LambdaStar_Template}
\end{figure}
\subsection{Antihyperon decays}
\label{sec:hyperonCocktail}
The detached \antiproton efficiency, in particular the fraction of \ensuremath{\offsetoverline{H}}\xspace
decays occurring within the VELO\xspace, strongly depends on the assumed relative production yields of the different antihyperons contributing to the inclusive \antiproton yield. The \mbox{\textsc{Epos-lhc}}\xspace model predicts that, in the measured kinematic range, 72\% of the \antiproton candidates originate from \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace decays of promptly produced {\ensuremath{\offsetoverline{\PLambda}}}\xspace particles, 17\% from \ensuremath{\decay{\Sigmabarm}{\pbar\piz}}\xspace decays, 11\% from \ensuremath{\decay{\Xibarp}{\Lbar\pip}}\xspace and \ensuremath{\decay{\Xibarz}{\Lbar\piz}}\xspace cascade decays, and less than 1\% from {\ensuremath{\Omegaresbar^+}}\xspace decays. These predictions are expected to be accurate within a relative uncertainty of approximately $20\%$~\cite{Becattini:2010sk}. The assumed values of ${\ensuremath{\Sigmaresbar{}^-}}\xspace/\ensuremath{\offsetoverline{H}}\xspace$ and ${\ensuremath{\Xiresbar^+}}\xspace/{\ensuremath{\offsetoverline{\PLambda}}}\xspace$ ratios are verified with the collision data.
The template fit is expected to have sensitivity to the
contribution of {\ensuremath{\Sigmaresbar{}^-}}\xspace decays, as illustrated in Fig.~\ref{fig:fitSigma}. Indeed, when compared to {\ensuremath{\offsetoverline{\PLambda}}}\xspace decays, the \ensuremath{\decay{\Sigmabarm}{\pbar\piz}}\xspace decay Q-value is larger and antiprotons show on average a larger IP. The fit is repeated with two independent detached
\antiproton components: the {\ensuremath{\Sigmaresbar{}^-}}\xspace decays and all other decays. The best fit
fraction of {\ensuremath{\Sigmaresbar{}^-}}\xspace is larger than the \mbox{\textsc{Epos-lhc}}\xspace prediction by a factor $1.13 \pm 0.02$. This correction factor, compatible with the expected theoretical uncertainty, is applied to the simulated sample to recompute the efficiency and correct the detached \antiproton templates. While the fit results for $R\ped{raw}$ change less than 1\%, the variation of $\epsilon\ped{det}$ implies a change to \ensuremath{R_{\antihyp}}\xspace between $1.2\%$ and $3.8\%$, depending on the kinematic interval. This is assigned as a systematic uncertainty on \ensuremath{R_{\antihyp}}\xspace due to the relative {\ensuremath{\Sigmaresbar{}^-}}\xspace production.
\begin{figure}[tb]
\centering
\includegraphics[width = \textwidth]{figs/Fig13.pdf}
\caption{Distribution of the $\log(\ensuremath{\chi^2_{\text{IP}}}\xspace)$ variable in the data sample integrated over all kinematic intervals modelled with independent components for {\ensuremath{\Sigmaresbar{}^-}}\xspace decays (labelled as \antiproton from \Sigmabarm) and all other antihyperon decays (\antiproton from {\ensuremath{\offsetoverline{\PLambda}}}\xspace).}
\label{fig:fitSigma}
\end{figure}
To check the cascade contribution, the \ensuremath{\decay{\Xibarp}{\Lbar\pip}}\xspace yield is directly measured and compared to the \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace one. Candidates are selected combining a reconstructed {\ensuremath{\offsetoverline{\PLambda}}}\xspace decay with a {\ensuremath{\pion^+}}\xspace candidate track with a distance of closest approach to the {\ensuremath{\offsetoverline{\PLambda}}}\xspace trajectory compatible with zero.
Similarly to the prompt {\ensuremath{\offsetoverline{\PLambda}}}\xspace selection, prompt {\ensuremath{\Xiresbar^+}}\xspace decays are selected using a linear discriminant: $\log[\ensuremath{\chi^2_{\text{IP}}}\xspace({\ensuremath{\offsetoverline{\PLambda}}}\xspace)] +\log[\ensuremath{\chi^2_{\text{IP}}}\xspace({\ensuremath{\pion^+}}\xspace)]-\log[\ensuremath{\chi^2_{\text{IP}}}\xspace({\ensuremath{\Xiresbar^+}}\xspace)]>0$. To minimise the systematic bias in the {\ensuremath{\Xiresbar^+}}\xspace/{\ensuremath{\offsetoverline{\PLambda}}}\xspace ratio, the final-state {\ensuremath{\offsetoverline{\PLambda}}}\xspace selection
follows the same requirements as the prompt {\ensuremath{\offsetoverline{\PLambda}}}\xspace candidates, except that on \ensuremath{\chi^2_{\text{IP}}}\xspace, and a loose selection is chosen for the {\ensuremath{\pion^+}}\xspace candidate, without any PID requirement. The $z$ distribution of the decays is also equalised, by weighting the prompt {\ensuremath{\offsetoverline{\PLambda}}}\xspace candidates to reproduce the observed distribution of {\ensuremath{\offsetoverline{\PLambda}}}\xspace decay vertices from the reconstructed {\ensuremath{\Xiresbar^+}}\xspace decays. The invariant-mass distribution of the \ensuremath{\decay{\Xibarp}{\Lbar\pip}}\xspace candidates is displayed in
Fig.~\ref{fig:Ximass}, where the fit to determine the signal yield is also shown. The fit model, verified on simulation, uses a Voigtian function for the signal and an exponential function for the background.
The same analysis is performed on the simulated sample and the
yield ratio $\sigma({\ensuremath{\Xiresbar^+}}\xspace)/\sigma({\ensuremath{\offsetoverline{\PLambda}}}\xspace)$ is found to be
larger in data with respect to the \mbox{\textsc{Epos-lhc}}\xspace model by a factor $1.09 \pm0.09$.
This factor is used to weight the relative production yield of both
{\ensuremath{\Xiresbar^+}}\xspace and {\ensuremath{\Xiresbar^0}}\xspace baryons in the simulation. The related uncertainty
corresponds to a systematic uncertainty on \ensuremath{R_{\antihyp}}\xspace from cascade production, varying between $0.6\%$ and $0.9\%$, depending on the kinematic interval.
\begin{figure}[tb]
\centering
\includegraphics[width = 0.95\textwidth]{figs/Fig14.pdf}
\caption{Invariant-mass distribution for the \ensuremath{\decay{\Xibarp}{\Lbar\pip}}\xspace
candidates selected in the \ensuremath{\proton {\rm He}}\xspace data. The fit model is overlaid on the data.}
\label{fig:Ximass}
\end{figure}
\subsection{Other systematic uncertainties}
\label{sec:inclFitSyst}
The fit model uncertainty is estimated by repeating the fit with the raw binned distributions as templates and is found, in most intervals, to be below 2\%.
Once the probability that antihyperon decays occur within the VELO\xspace is taken into account, the reconstruction and selection efficiencies are expected to mostly cancel in the \ensuremath{R_{\antihyp}}\xspace ratio.
Residual differences are still expected from the different distributions of the production vertex position, due to the flight distance of the decaying antihyperons. The overall tracking efficiency is measured in the simulation as $(94.85 \pm 0.01)\%$ for prompt and $(93.47 \pm 0.05)\%$ for detached antiprotons, with the quoted uncertainties due to the finite simulated sample size. The small difference is mainly due to the lower average number of hits in the VELO\xspace for the latter.
As discussed in Section~\ref{sec:tracking}, these geometrical effects are verified to be predicted reliably and no significant systematic uncertainty is assumed.
A larger bias could be induced by the tight PID selection. In the simulation, its efficiency is measured to be $(64.90 \pm 0.03)\%$ and $(57.74 \pm 0.11)\%$ for prompt and detached antiprotons, respectively. The quoted uncertainties correspond to the finite simulated sample size. These predictions are validated using high-purity \antiproton samples from \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace decays, where the antiproton is identified without using the RICH. A large sample of these decays is selected from the \ensuremath{\proton {\rm Ne}}\xspace collision sample acquired in 2017. A machine-learning-based approach, documented in Ref.~\cite{PID4SMOG}, is used to model the PID response as a function of 12 variables related to the particle trajectory, its reconstruction quality and the event occupancy. This model, applied to the simulated events, is able to reproduce the predicted PID efficiency for the two \antiproton categories within the statistical uncertainties, which are lower than 1\%. This demonstrates that the predicted difference is due to geometrical effects and that the RICH response for a given track topology and detector occupancy is accurately simulated. On the other hand, the RICH detector response is affected by low-energy background
that is not accurately simulated. For the selected \mbox{\ensuremath{\Lbar\to\pbar\pip}}\xspace decays in the \ensuremath{\proton {\rm He}}\xspace sample, the distributions of RICH hit multiplicities differ from those predicted in simulation, and the efficiency of the PID requirement is found to be larger by a relative 8\% than the predicted value. When the PID requirement is loosened to increase its efficiency by this 8\% for the detached component, the \ensuremath{R_{\antihyp}}\xspace value changes by 0.9\%, which is assigned as the systematic uncertainty on the PID selection. The systematic uncertainty on the predicted fraction of misidentified particles is also evaluated from this check and its effect on \ensuremath{R_{\antihyp}}\xspace is found to be negligible.
A systematic uncertainty on the assumed longitudinal profile of the gas target density is assigned from the change of the $\varepsilon\ped{det}/\varepsilon\ped{prompt}$ ratio when introducing weights to equalise the PV $z$ distribution in data and simulation. It amounts to less than 0.5\% in most kinematic intervals.
\subsection{Results}
\begin{table}
\centering
\caption{Relative uncertainties on the \ensuremath{R_{\antihyp}}\xspace measurement. }
\label{tab:inclSyst}
\begin{tabular}{lc}
\toprule
Prompt \antiproton template & $4.8\%$ \\
Statistical uncertainty & $1.9\%-6.2\%$ ($<2.5\%$ for most intervals) \\
Template parametrisation & $0\%-5.3\%$ ($<2\%$ for most intervals) \\
Simulated sample size & $1.5\%-5.8\%$ ($<1.8\%$ for most intervals) \\
Production of \Sigmabarm & $1.2\%-3.8\%$ \\
Particle identification & $0.9\%$ \\
Production of {\ensuremath{\offsetoverline{\Xires}}}\xspace & $0.6\%-0.9\%$ \\
Gas $z$ profile simulation & $0.1\%-1.5\%$ ($<0.5\%$ for most intervals) \\
\bottomrule
\end{tabular}
\end{table}
\begin{figure}[tb]
\centering
\includegraphics[width = \textwidth]{figs/Fig15.pdf}
\caption{Measured \ensuremath{R_{\antihyp}}\xspace in each of the considered \ensuremath{p}\xspace and \ensuremath{p_{\mathrm{T}}}\xspace intervals.}
\label{fig:Rinc2d}
\end{figure}
\begin{figure}[tb]
\centering
\includegraphics[width = \textwidth]{figs/Fig16a.pdf}
\includegraphics[width = \textwidth]{figs/Fig16b.pdf}
\caption{Measured \ensuremath{R_{\antihyp}}\xspace as a function of (top) the \antiproton momentum for \mbox{$0.4<\ensuremath{p_{\mathrm{T}}}\xspace<4\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace$} and (bottom) the \antiproton transverse momentum for \mbox{$12<\ensuremath{p}\xspace<110\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace$}. The measurement is compared to predictions, in the same kinematic regions, from the \mbox{\textsc{Epos~1.99}}\xspace~\cite{EPOS99}, \mbox{\textsc{Epos-lhc}}\xspace~\cite{epos-lhc}, \mbox{\textsc{Hijing~1.38}}\xspace~\cite{HIJING}, \mbox{\textsc{Pythia~6}}\xspace~\cite{pythia} and \mbox{\textsc{Qgsjet-ii04}}\xspace~\cite{QGSJET-II} models, included in the \mbox{\textsc{Crmc}}\xspace package~\cite{CRMC}.}
\label{fig:Rinc}
\end{figure}
The results for each kinematic
interval are illustrated in
Fig.~\ref{fig:Rinc2d} and the numerical values are provided in Appendix~\ref{appendixResults}. Table~\ref{tab:inclSyst} summarises the
uncertainties in the \ensuremath{R_{\antihyp}}\xspace measurement. The inclusive results as a function of \ensuremath{p}\xspace or \ensuremath{p_{\mathrm{T}}}\xspace, integrated in the other variable, are shown in Fig.~\ref{fig:Rinc}. As already observed in the \ensuremath{R_{\Lbar}}\xspace measurement, the most commonly used hadronic collision generators are shown to underestimate the antihyperon contribution to \antiproton production at $\ensuremath{\protect\sqrt{s_{\scriptscriptstyle\rm NN}}}\xspace=110\aunit{Ge\kern -0.1em V}\xspace$.
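As a schematic reminder, up to the corrections and background subtractions detailed in the previous sections, the measured quantity in each interval is the efficiency-corrected yield ratio
\begin{equation*}
\ensuremath{R_{\antihyp}}\xspace = \frac{N\ped{det}/\varepsilon\ped{det}}{N\ped{prompt}/\varepsilon\ped{prompt}},
\end{equation*}
where $N\ped{det}$ and $N\ped{prompt}$ are the detached and prompt \antiproton yields from the template fit, and $\varepsilon\ped{det}$ and $\varepsilon\ped{prompt}$ the corresponding efficiencies.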
\section{Conclusions}
\label{sec:results}
\begin{figure}[ptb]
\centering
\includegraphics[width=\textwidth]{figs/Fig17a.pdf}
\includegraphics[width=\textwidth]{figs/Fig17b.pdf}
\caption{Fraction of antiprotons from decays of promptly produced {\ensuremath{\offsetoverline{\PLambda}}}\xspace particles to the total yield of detached antiprotons as a function of (top) their momentum for $0.55<\ensuremath{p_{\mathrm{T}}}\xspace<1.2\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace$ and (bottom) their transverse momentum for $12<\ensuremath{p}\xspace<50.5\ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace$. The data are compared to the \mbox{\textsc{Epos-lhc}}\xspace~\cite{epos-lhc} prediction for this quantity.}
\label{fig:Rratio}
\end{figure}
The ratio $\ensuremath{R_{\Lbar}}\xspace/\ensuremath{R_{\antihyp}}\xspace$ measured with the inclusive and exclusive approaches is compared with the \mbox{\textsc{Epos-lhc}}\xspace prediction in Fig.~\ref{fig:Rratio}. As this ratio is predicted more reliably than the inclusive detached \antiproton yield, the good agreement between the measured and predicted values provides a mutual validation of the results of the two complementary approaches followed in this paper.
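Schematically, the quantity displayed in Fig.~\ref{fig:Rratio} relates the two measurements as
\begin{equation*}
f_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace} = \frac{\ensuremath{R_{\Lbar}}\xspace}{\ensuremath{R_{\antihyp}}\xspace},
\end{equation*}
that is, the fraction of the detached \antiproton yield originating from promptly produced {\ensuremath{\offsetoverline{\PLambda}}}\xspace decays; the symbol $f_{{\ensuremath{\offsetoverline{\PLambda}}}\xspace}$ is introduced here only for illustration.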
In conclusion, the production of antiprotons from antihyperon decays relative to the prompt \antiproton production
is measured in the
fixed-target configuration of the \mbox{LHCb}\xspace experiment from \ensuremath{\proton {\rm He}}\xspace
collisions at $\ensuremath{\protect\sqrt{s_{\scriptscriptstyle\rm NN}}}\xspace = 110\aunit{Ge\kern -0.1em V}\xspace$.
The results confirm previous findings from colliders~\cite{STAR:2006nmo,ALICE:2010vtz,CMS:2011jlm} of an increased \ensuremath{\offsetoverline{H}}\xspace contribution with
respect to the $\ensuremath{\protect\sqrt{s_{\scriptscriptstyle\rm NN}}}\xspace \sim 10\aunit{Ge\kern -0.1em V}\xspace$ scale probed in past fixed-target experiments, and indicate a sizeable underestimation of this contribution in most hadronic production models used in cosmic-ray physics.
A significant dependence of \ensuremath{R_{\antihyp}}\xspace on the \antiproton momentum is observed. This
effect is not usually considered in the modelling
of the secondary \antiproton component in cosmic rays, where \ensuremath{R_{\antihyp}}\xspace is assumed to depend only on \ensuremath{\protect\sqrt{s_{\scriptscriptstyle\rm NN}}}\xspace~\cite{Winkler_2017,Donato_2018}.
These results are thus expected to provide a valuable input to improve the predictions for the secondary \antiproton cosmic flux.
\section{List of all symbols}
\label{sec:listofsymbols}
\subsection{Experiments}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash lhcb} & \mbox{LHCb}\xspace & \texttt{\textbackslash atlas} & \mbox{ATLAS}\xspace & \texttt{\textbackslash cms} & \mbox{CMS}\xspace \\
\texttt{\textbackslash alice} & \mbox{ALICE}\xspace & \texttt{\textbackslash babar} & \mbox{BaBar}\xspace & \texttt{\textbackslash belle} & \mbox{Belle}\xspace \\
\texttt{\textbackslash belletwo} & \mbox{Belle~II}\xspace & \texttt{\textbackslash besiii} & \mbox{BESIII}\xspace & \texttt{\textbackslash cleo} & \mbox{CLEO}\xspace \\
\texttt{\textbackslash cdf} & \mbox{CDF}\xspace & \texttt{\textbackslash dzero} & \mbox{D0}\xspace & \texttt{\textbackslash aleph} & \mbox{ALEPH}\xspace \\
\texttt{\textbackslash delphi} & \mbox{DELPHI}\xspace & \texttt{\textbackslash opal} & \mbox{OPAL}\xspace & \texttt{\textbackslash lthree} & \mbox{L3}\xspace \\
\texttt{\textbackslash sld} & \mbox{SLD}\xspace & \texttt{\textbackslash cern} & \mbox{CERN}\xspace & \texttt{\textbackslash lhc} & \mbox{LHC}\xspace \\
\texttt{\textbackslash lep} & \mbox{LEP}\xspace & \texttt{\textbackslash tevatron} & Tevatron\xspace & \texttt{\textbackslash bfactories} & \mbox{{\ensuremath{\PB}}\xspace Factories}\xspace \\
\texttt{\textbackslash bfactory} & \mbox{{\ensuremath{\PB}}\xspace Factory}\xspace & \texttt{\textbackslash upgradeone} & \mbox{Upgrade~I}\xspace & \texttt{\textbackslash upgradetwo} & \mbox{Upgrade~II}\xspace \\
\end{tabular*}
\subsubsection{LHCb sub-detectors and sub-systems}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash velo} & VELO\xspace & \texttt{\textbackslash rich} & RICH\xspace & \texttt{\textbackslash richone} & RICH1\xspace \\
\texttt{\textbackslash richtwo} & RICH2\xspace & \texttt{\textbackslash ttracker} & TT\xspace & \texttt{\textbackslash intr} & IT\xspace \\
\texttt{\textbackslash st} & ST\xspace & \texttt{\textbackslash ot} & OT\xspace & \texttt{\textbackslash herschel} & \mbox{\textsc{HeRSCheL}}\xspace \\
\texttt{\textbackslash spd} & SPD\xspace & \texttt{\textbackslash presh} & PS\xspace & \texttt{\textbackslash ecal} & ECAL\xspace \\
\texttt{\textbackslash hcal} & HCAL\xspace & \texttt{\textbackslash MagUp} & \mbox{\em Mag\kern -0.05em Up}\xspace & \texttt{\textbackslash MagDown} & \mbox{\em MagDown}\xspace \\
\texttt{\textbackslash ode} & ODE\xspace & \texttt{\textbackslash daq} & DAQ\xspace & \texttt{\textbackslash tfc} & TFC\xspace \\
\texttt{\textbackslash ecs} & ECS\xspace & \texttt{\textbackslash lone} & L0\xspace & \texttt{\textbackslash hlt} & HLT\xspace \\
\texttt{\textbackslash hltone} & HLT1\xspace & \texttt{\textbackslash hlttwo} & HLT2\xspace & \\
\end{tabular*}
\subsection{Particles}
\subsubsection{Leptons}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash electron} & {\ensuremath{\Pe}}\xspace & \texttt{\textbackslash en} & \en & \texttt{\textbackslash ep} & {\ensuremath{\Pe^+}}\xspace \\
\texttt{\textbackslash epm} & \epm & \texttt{\textbackslash emp} & \emp & \texttt{\textbackslash epem} & {\ensuremath{\Pe^+\Pe^-}}\xspace \\
\texttt{\textbackslash muon} & {\ensuremath{\Pmu}}\xspace & \texttt{\textbackslash mup} & {\ensuremath{\Pmu^+}}\xspace & \texttt{\textbackslash mun} & \mun \\
\texttt{\textbackslash mupm} & \mupm & \texttt{\textbackslash mump} & \mump & \texttt{\textbackslash mumu} & {\ensuremath{\Pmu^+\Pmu^-}}\xspace \\
\texttt{\textbackslash tauon} & {\ensuremath{\Ptau}}\xspace & \texttt{\textbackslash taup} & {\ensuremath{\Ptau^+}}\xspace & \texttt{\textbackslash taum} & {\ensuremath{\Ptau^-}}\xspace \\
\texttt{\textbackslash taupm} & {\ensuremath{\Ptau^\pm}}\xspace & \texttt{\textbackslash taump} & {\ensuremath{\Ptau^\mp}}\xspace & \texttt{\textbackslash tautau} & {\ensuremath{\Ptau^+\Ptau^-}}\xspace \\
\texttt{\textbackslash lepton} & {\ensuremath{\ell}}\xspace & \texttt{\textbackslash ellm} & {\ensuremath{\ell^-}}\xspace & \texttt{\textbackslash ellp} & {\ensuremath{\ell^+}}\xspace \\
\texttt{\textbackslash ellpm} & {\ensuremath{\ell^\pm}}\xspace & \texttt{\textbackslash ellmp} & {\ensuremath{\ell^\mp}}\xspace & \texttt{\textbackslash ellell} & \ensuremath{\ell^+ \ell^-}\xspace \\
\texttt{\textbackslash neu} & {\ensuremath{\Pnu}}\xspace & \texttt{\textbackslash neub} & {\ensuremath{\overline{\Pnu}}}\xspace & \texttt{\textbackslash neue} & {\ensuremath{\neu_e}}\xspace \\
\texttt{\textbackslash neueb} & {\ensuremath{\neub_e}}\xspace & \texttt{\textbackslash neum} & {\ensuremath{\neu_\mu}}\xspace & \texttt{\textbackslash neumb} & {\ensuremath{\neub_\mu}}\xspace \\
\texttt{\textbackslash neut} & {\ensuremath{\neu_\tau}}\xspace & \texttt{\textbackslash neutb} & {\ensuremath{\neub_\tau}}\xspace & \texttt{\textbackslash neul} & {\ensuremath{\neu_\ell}}\xspace \\
\texttt{\textbackslash neulb} & {\ensuremath{\neub_\ell}}\xspace & \\
\end{tabular*}
\subsubsection{Gauge bosons and scalars}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash g} & {\ensuremath{\Pgamma}}\xspace & \texttt{\textbackslash H} & {\ensuremath{\PH^0}}\xspace & \texttt{\textbackslash Hp} & {\ensuremath{\PH^+}}\xspace \\
\texttt{\textbackslash Hm} & {\ensuremath{\PH^-}}\xspace & \texttt{\textbackslash Hpm} & {\ensuremath{\PH^\pm}}\xspace & \texttt{\textbackslash W} & {\ensuremath{\PW}}\xspace \\
\texttt{\textbackslash Wp} & {\ensuremath{\PW^+}}\xspace & \texttt{\textbackslash Wm} & {\ensuremath{\PW^-}}\xspace & \texttt{\textbackslash Wpm} & {\ensuremath{\PW^\pm}}\xspace \\
\texttt{\textbackslash Z} & {\ensuremath{\PZ}}\xspace & \\
\end{tabular*}
\subsubsection{Quarks}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash quark} & {\ensuremath{\Pq}}\xspace & \texttt{\textbackslash quarkbar} & {\ensuremath{\overline \quark}}\xspace & \texttt{\textbackslash qqbar} & {\ensuremath{\quark\quarkbar}}\xspace \\
\texttt{\textbackslash uquark} & {\ensuremath{\Pu}}\xspace & \texttt{\textbackslash uquarkbar} & {\ensuremath{\overline \uquark}}\xspace & \texttt{\textbackslash uubar} & {\ensuremath{\uquark\uquarkbar}}\xspace \\
\texttt{\textbackslash dquark} & {\ensuremath{\Pd}}\xspace & \texttt{\textbackslash dquarkbar} & {\ensuremath{\overline \dquark}}\xspace & \texttt{\textbackslash ddbar} & {\ensuremath{\dquark\dquarkbar}}\xspace \\
\texttt{\textbackslash squark} & {\ensuremath{\Ps}}\xspace & \texttt{\textbackslash squarkbar} & {\ensuremath{\overline \squark}}\xspace & \texttt{\textbackslash ssbar} & {\ensuremath{\squark\squarkbar}}\xspace \\
\texttt{\textbackslash cquark} & {\ensuremath{\Pc}}\xspace & \texttt{\textbackslash cquarkbar} & {\ensuremath{\overline \cquark}}\xspace & \texttt{\textbackslash ccbar} & {\ensuremath{\cquark\cquarkbar}}\xspace \\
\texttt{\textbackslash bquark} & {\ensuremath{\Pb}}\xspace & \texttt{\textbackslash bquarkbar} & {\ensuremath{\overline \bquark}}\xspace & \texttt{\textbackslash bbbar} & {\ensuremath{\bquark\bquarkbar}}\xspace \\
\texttt{\textbackslash tquark} & {\ensuremath{\Pt}}\xspace & \texttt{\textbackslash tquarkbar} & {\ensuremath{\overline \tquark}}\xspace & \texttt{\textbackslash ttbar} & {\ensuremath{\tquark\tquarkbar}}\xspace \\
\end{tabular*}
\subsubsection{Light mesons}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash hadron} & {\ensuremath{\Ph}}\xspace & \texttt{\textbackslash pion} & {\ensuremath{\Ppi}}\xspace & \texttt{\textbackslash piz} & {\ensuremath{\pion^0}}\xspace \\
\texttt{\textbackslash pip} & {\ensuremath{\pion^+}}\xspace & \texttt{\textbackslash pim} & {\ensuremath{\pion^-}}\xspace & \texttt{\textbackslash pipm} & {\ensuremath{\pion^\pm}}\xspace \\
\texttt{\textbackslash pimp} & {\ensuremath{\pion^\mp}}\xspace & \texttt{\textbackslash rhomeson} & {\ensuremath{\Prho}}\xspace & \texttt{\textbackslash rhoz} & {\ensuremath{\rhomeson^0}}\xspace \\
\texttt{\textbackslash rhop} & {\ensuremath{\rhomeson^+}}\xspace & \texttt{\textbackslash rhom} & {\ensuremath{\rhomeson^-}}\xspace & \texttt{\textbackslash rhopm} & {\ensuremath{\rhomeson^\pm}}\xspace \\
\texttt{\textbackslash rhomp} & {\ensuremath{\rhomeson^\mp}}\xspace & \texttt{\textbackslash kaon} & {\ensuremath{\PK}}\xspace & \texttt{\textbackslash Kbar} & {\ensuremath{\offsetoverline{\PK}}}\xspace \\
\texttt{\textbackslash Kb} & {\ensuremath{\Kbar}}\xspace & \texttt{\textbackslash KorKbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \PK}{}\xspace & \texttt{\textbackslash Kz} & {\ensuremath{\kaon^0}}\xspace \\
\texttt{\textbackslash Kzb} & {\ensuremath{\Kbar{}^0}}\xspace & \texttt{\textbackslash Kp} & {\ensuremath{\kaon^+}}\xspace & \texttt{\textbackslash Km} & {\ensuremath{\kaon^-}}\xspace \\
\texttt{\textbackslash Kpm} & {\ensuremath{\kaon^\pm}}\xspace & \texttt{\textbackslash Kmp} & {\ensuremath{\kaon^\mp}}\xspace & \texttt{\textbackslash KS} & {\ensuremath{\kaon^0_{\mathrm{S}}}}\xspace \\
\texttt{\textbackslash Vzero} & {\ensuremath{V^0}}\xspace & \texttt{\textbackslash KL} & {\ensuremath{\kaon^0_{\mathrm{L}}}}\xspace & \texttt{\textbackslash Kstarz} & {\ensuremath{\kaon^{*0}}}\xspace \\
\texttt{\textbackslash Kstarzb} & {\ensuremath{\Kbar{}^{*0}}}\xspace & \texttt{\textbackslash Kstar} & {\ensuremath{\kaon^*}}\xspace & \texttt{\textbackslash Kstarb} & {\ensuremath{\Kbar{}^*}}\xspace \\
\texttt{\textbackslash Kstarp} & {\ensuremath{\kaon^{*+}}}\xspace & \texttt{\textbackslash Kstarm} & {\ensuremath{\kaon^{*-}}}\xspace & \texttt{\textbackslash Kstarpm} & {\ensuremath{\kaon^{*\pm}}}\xspace \\
\texttt{\textbackslash Kstarmp} & {\ensuremath{\kaon^{*\mp}}}\xspace & \texttt{\textbackslash KorKbarz} & \ensuremath{\KorKbar^0}\xspace & \texttt{\textbackslash etaz} & \ensuremath{\ensuremath{\upeta}\xspace}\xspace \\
\texttt{\textbackslash etapr} & \ensuremath{\ensuremath{\upeta}\xspace^{\prime}}\xspace & \texttt{\textbackslash phiz} & \ensuremath{\Pphi}\xspace & \texttt{\textbackslash omegaz} & \ensuremath{\Pomega}\xspace \\
\end{tabular*}
\subsubsection{Charmed mesons}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash Dbar} & {\ensuremath{\offsetoverline{\PD}}}\xspace & \texttt{\textbackslash D} & {\ensuremath{\PD}}\xspace & \texttt{\textbackslash Db} & {\ensuremath{\Dbar}}\xspace \\
\texttt{\textbackslash DorDbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \PD}\xspace & \texttt{\textbackslash Dz} & {\ensuremath{\D^0}}\xspace & \texttt{\textbackslash Dzb} & {\ensuremath{\Dbar{}^0}}\xspace \\
\texttt{\textbackslash Dp} & {\ensuremath{\D^+}}\xspace & \texttt{\textbackslash Dm} & {\ensuremath{\D^-}}\xspace & \texttt{\textbackslash Dpm} & {\ensuremath{\D^\pm}}\xspace \\
\texttt{\textbackslash Dmp} & {\ensuremath{\D^\mp}}\xspace & \texttt{\textbackslash DpDm} & \ensuremath{\Dp {\kern -0.16em \Dm}}\xspace & \texttt{\textbackslash Dstar} & {\ensuremath{\D^*}}\xspace \\
\texttt{\textbackslash Dstarb} & {\ensuremath{\Dbar{}^*}}\xspace & \texttt{\textbackslash Dstarz} & {\ensuremath{\D^{*0}}}\xspace & \texttt{\textbackslash Dstarzb} & {\ensuremath{\Dbar{}^{*0}}}\xspace \\
\texttt{\textbackslash theDstarz} & {\ensuremath{\D^{*}(2007)^{0}}}\xspace & \texttt{\textbackslash theDstarzb} & {\ensuremath{\Dbar^{*}(2007)^{0}}}\xspace & \texttt{\textbackslash Dstarp} & {\ensuremath{\D^{*+}}}\xspace \\
\texttt{\textbackslash Dstarm} & {\ensuremath{\D^{*-}}}\xspace & \texttt{\textbackslash Dstarpm} & {\ensuremath{\D^{*\pm}}}\xspace & \texttt{\textbackslash Dstarmp} & {\ensuremath{\D^{*\mp}}}\xspace \\
\texttt{\textbackslash theDstarp} & {\ensuremath{\D^{*}(2010)^{+}}}\xspace & \texttt{\textbackslash theDstarm} & {\ensuremath{\D^{*}(2010)^{-}}}\xspace & \texttt{\textbackslash theDstarpm} & {\ensuremath{\D^{*}(2010)^{\pm}}}\xspace \\
\texttt{\textbackslash theDstarmp} & {\ensuremath{\D^{*}(2010)^{\mp}}}\xspace & \texttt{\textbackslash Ds} & {\ensuremath{\D^+_\squark}}\xspace & \texttt{\textbackslash Dsp} & {\ensuremath{\D^+_\squark}}\xspace \\
\texttt{\textbackslash Dsm} & {\ensuremath{\D^-_\squark}}\xspace & \texttt{\textbackslash Dspm} & {\ensuremath{\D^{\pm}_\squark}}\xspace & \texttt{\textbackslash Dsmp} & {\ensuremath{\D^{\mp}_\squark}}\xspace \\
\texttt{\textbackslash Dss} & {\ensuremath{\D^{*+}_\squark}}\xspace & \texttt{\textbackslash Dssp} & {\ensuremath{\D^{*+}_\squark}}\xspace & \texttt{\textbackslash Dssm} & {\ensuremath{\D^{*-}_\squark}}\xspace \\
\texttt{\textbackslash Dsspm} & {\ensuremath{\D^{*\pm}_\squark}}\xspace & \texttt{\textbackslash Dssmp} & {\ensuremath{\D^{*\mp}_\squark}}\xspace & \texttt{\textbackslash DporDsp} & {\ensuremath{\D_{(\squark)}^+}}\xspace \\
\texttt{\textbackslash DmorDsm} & {\ensuremath{\D{}_{(\squark)}^-}}\xspace & \texttt{\textbackslash DpmorDspm} & {\ensuremath{\D{}_{(\squark)}^\pm}}\xspace & \\
\end{tabular*}
\subsubsection{Beauty mesons}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash B} & {\ensuremath{\PB}}\xspace & \texttt{\textbackslash Bbar} & {\ensuremath{\offsetoverline{\PB}}}\xspace & \texttt{\textbackslash Bb} & {\ensuremath{\Bbar}}\xspace \\
\texttt{\textbackslash BorBbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \PB}\xspace & \texttt{\textbackslash Bz} & {\ensuremath{\B^0}}\xspace & \texttt{\textbackslash Bzb} & {\ensuremath{\Bbar{}^0}}\xspace \\
\texttt{\textbackslash Bd} & {\ensuremath{\B^0}}\xspace & \texttt{\textbackslash Bdb} & {\ensuremath{\Bbar{}^0}}\xspace & \texttt{\textbackslash BdorBdbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \Bd}\xspace \\
\texttt{\textbackslash Bu} & {\ensuremath{\B^+}}\xspace & \texttt{\textbackslash Bub} & {\ensuremath{\B^-}}\xspace & \texttt{\textbackslash Bp} & {\ensuremath{\Bu}}\xspace \\
\texttt{\textbackslash Bm} & {\ensuremath{\Bub}}\xspace & \texttt{\textbackslash Bpm} & {\ensuremath{\B^\pm}}\xspace & \texttt{\textbackslash Bmp} & {\ensuremath{\B^\mp}}\xspace \\
\texttt{\textbackslash Bs} & {\ensuremath{\B^0_\squark}}\xspace & \texttt{\textbackslash Bsb} & {\ensuremath{\Bbar{}^0_\squark}}\xspace & \texttt{\textbackslash BsorBsbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \Bs}\xspace \\
\texttt{\textbackslash Bc} & {\ensuremath{\B_\cquark^+}}\xspace & \texttt{\textbackslash Bcp} & {\ensuremath{\B_\cquark^+}}\xspace & \texttt{\textbackslash Bcm} & {\ensuremath{\B_\cquark^-}}\xspace \\
\texttt{\textbackslash Bcpm} & {\ensuremath{\B_\cquark^\pm}}\xspace & \texttt{\textbackslash Bds} & {\ensuremath{\B_{(\squark)}^0}}\xspace & \texttt{\textbackslash Bdsb} & {\ensuremath{\Bbar{}_{(\squark)}^0}}\xspace \\
\texttt{\textbackslash BdorBs} & \Bds & \texttt{\textbackslash BdorBsbar} & \Bdsb & \\
\end{tabular*}
\subsubsection{Onia}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash jpsi} & {\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi}}}\xspace & \texttt{\textbackslash psitwos} & {\ensuremath{\Ppsi{(2S)}}}\xspace & \texttt{\textbackslash psiprpr} & {\ensuremath{\Ppsi(3770)}}\xspace \\
\texttt{\textbackslash etac} & {\ensuremath{\Peta_\cquark}}\xspace & \texttt{\textbackslash psires} & {\ensuremath{\Ppsi}}\xspace & \texttt{\textbackslash chic} & {\ensuremath{\Pchi_\cquark}}\xspace \\
\texttt{\textbackslash chiczero} & {\ensuremath{\Pchi_{\cquark 0}}}\xspace & \texttt{\textbackslash chicone} & {\ensuremath{\Pchi_{\cquark 1}}}\xspace & \texttt{\textbackslash chictwo} & {\ensuremath{\Pchi_{\cquark 2}}}\xspace \\
\texttt{\textbackslash chicJ} & {\ensuremath{\Pchi_{\cquark J}}}\xspace & \texttt{\textbackslash Upsilonres} & {\ensuremath{\PUpsilon}}\xspace & \texttt{\textbackslash OneS} & {\Y1S} \\
\texttt{\textbackslash TwoS} & {\Y2S} & \texttt{\textbackslash ThreeS} & {\Y3S} & \texttt{\textbackslash FourS} & {\Y4S} \\
\texttt{\textbackslash FiveS} & {\Y5S} & \texttt{\textbackslash chib} & {\ensuremath{\Pchi_{b}}}\xspace & \texttt{\textbackslash chibzero} & {\ensuremath{\Pchi_{\bquark 0}}}\xspace \\
\texttt{\textbackslash chibone} & {\ensuremath{\Pchi_{\bquark 1}}}\xspace & \texttt{\textbackslash chibtwo} & {\ensuremath{\Pchi_{\bquark 2}}}\xspace & \texttt{\textbackslash chibJ} & {\ensuremath{\Pchi_{\bquark J}}}\xspace \\
\texttt{\textbackslash theX} & {\ensuremath{\Pchi_{c1}(3872)}}\xspace & \\
\end{tabular*}
\subsubsection{Light Baryons}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash proton} & {\ensuremath{\Pp}}\xspace & \texttt{\textbackslash antiproton} & {\ensuremath{\overline \proton}}\xspace & \texttt{\textbackslash neutron} & {\ensuremath{\Pn}}\xspace \\
\texttt{\textbackslash antineutron} & {\ensuremath{\overline \neutron}}\xspace & \texttt{\textbackslash Deltares} & {\ensuremath{\PDelta}}\xspace & \texttt{\textbackslash Deltaresbar} & {\ensuremath{\overline \Deltares}}\xspace \\
\texttt{\textbackslash Lz} & {\ensuremath{\PLambda}}\xspace & \texttt{\textbackslash Lbar} & {\ensuremath{\offsetoverline{\PLambda}}}\xspace & \texttt{\textbackslash LorLbar} & \kern \thebaroffset\optbar{\kern -\thebaroffset \PLambda}\xspace \\
\texttt{\textbackslash Lambdares} & {\ensuremath{\PLambda}}\xspace & \texttt{\textbackslash Lambdaresbar} & {\ensuremath{\Lbar}}\xspace & \texttt{\textbackslash Sigmares} & {\ensuremath{\PSigma}}\xspace \\
\texttt{\textbackslash Sigmaz} & {\ensuremath{\Sigmares{}^0}}\xspace & \texttt{\textbackslash Sigmap} & {\ensuremath{\Sigmares{}^+}}\xspace & \texttt{\textbackslash Sigmam} & {\ensuremath{\Sigmares{}^-}}\xspace \\
\texttt{\textbackslash Sigmaresbar} & {\ensuremath{\offsetoverline{\Sigmares}}}\xspace & \texttt{\textbackslash Sigmabarz} & {\ensuremath{\Sigmaresbar{}^0}}\xspace & \texttt{\textbackslash Sigmabarp} & {\ensuremath{\Sigmaresbar{}^+}}\xspace \\
\texttt{\textbackslash Sigmabarm} & {\ensuremath{\Sigmaresbar{}^-}}\xspace & \texttt{\textbackslash Xires} & {\ensuremath{\PXi}}\xspace & \texttt{\textbackslash Xiz} & {\ensuremath{\Xires^0}}\xspace \\
\texttt{\textbackslash Xim} & {\ensuremath{\Xires^-}}\xspace & \texttt{\textbackslash Xiresbar} & {\ensuremath{\offsetoverline{\Xires}}}\xspace & \texttt{\textbackslash Xibarz} & {\ensuremath{\Xiresbar^0}}\xspace \\
\texttt{\textbackslash Xibarp} & {\ensuremath{\Xiresbar^+}}\xspace & \texttt{\textbackslash Omegares} & {\ensuremath{\POmega}}\xspace & \texttt{\textbackslash Omegaresbar} & {\ensuremath{\offsetoverline{\POmega}}}\xspace \\
\texttt{\textbackslash Omegam} & {\ensuremath{\Omegares^-}}\xspace & \texttt{\textbackslash Omegabarp} & {\ensuremath{\Omegaresbar^+}}\xspace & \\
\end{tabular*}
\subsubsection{Charmed Baryons}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash Lc} & {\ensuremath{\Lz^+_\cquark}}\xspace & \texttt{\textbackslash Lcbar} & {\ensuremath{\Lbar{}^-_\cquark}}\xspace & \texttt{\textbackslash Xic} & {\ensuremath{\Xires_\cquark}}\xspace \\
\texttt{\textbackslash Xicz} & {\ensuremath{\Xires^0_\cquark}}\xspace & \texttt{\textbackslash Xicp} & {\ensuremath{\Xires^+_\cquark}}\xspace & \texttt{\textbackslash Xicbar} & {\ensuremath{\Xiresbar{}_\cquark}}\xspace \\
\texttt{\textbackslash Xicbarz} & {\ensuremath{\Xiresbar{}_\cquark^0}}\xspace & \texttt{\textbackslash Xicbarm} & {\ensuremath{\Xiresbar{}_\cquark^-}}\xspace & \texttt{\textbackslash Omegac} & {\ensuremath{\Omegares^0_\cquark}}\xspace \\
\texttt{\textbackslash Omegacbar} & {\ensuremath{\Omegaresbar{}_\cquark^0}}\xspace & \texttt{\textbackslash Xicc} & {\ensuremath{\Xires_{\cquark\cquark}}}\xspace & \texttt{\textbackslash Xiccbar} & {\ensuremath{\Xiresbar{}_{\cquark\cquark}}}\xspace \\
\texttt{\textbackslash Xiccp} & {\ensuremath{\Xires^+_{\cquark\cquark}}}\xspace & \texttt{\textbackslash Xiccpp} & {\ensuremath{\Xires^{++}_{\cquark\cquark}}}\xspace & \texttt{\textbackslash Xiccbarm} & {\ensuremath{\Xiresbar{}_{\cquark\cquark}^-}}\xspace \\
\texttt{\textbackslash Xiccbarmm} & {\ensuremath{\Xiresbar{}_{\cquark\cquark}^{--}}}\xspace & \texttt{\textbackslash Omegacc} & {\ensuremath{\Omegares^+_{\cquark\cquark}}}\xspace & \texttt{\textbackslash Omegaccbar} & {\ensuremath{\Omegaresbar{}_{\cquark\cquark}^-}}\xspace \\
\texttt{\textbackslash Omegaccc} & {\ensuremath{\Omegares^{++}_{\cquark\cquark\cquark}}}\xspace & \texttt{\textbackslash Omegacccbar} & {\ensuremath{\Omegaresbar{}_{\cquark\cquark\cquark}^{--}}}\xspace & \\
\end{tabular*}
\subsubsection{Beauty Baryons}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash Lb} & {\ensuremath{\Lz^0_\bquark}}\xspace & \texttt{\textbackslash Lbbar} & {\ensuremath{\Lbar{}^0_\bquark}}\xspace & \texttt{\textbackslash Sigmab} & {\ensuremath{\Sigmares_\bquark}}\xspace \\
\texttt{\textbackslash Sigmabp} & {\ensuremath{\Sigmares_\bquark^+}}\xspace & \texttt{\textbackslash Sigmabz} & {\ensuremath{\Sigmares_\bquark^0}}\xspace & \texttt{\textbackslash Sigmabm} & {\ensuremath{\Sigmares_\bquark^-}}\xspace \\
\texttt{\textbackslash Sigmabpm} & {\ensuremath{\Sigmares_\bquark^\pm}}\xspace & \texttt{\textbackslash Sigmabbar} & {\ensuremath{\Sigmaresbar_\bquark}}\xspace & \texttt{\textbackslash Sigmabbarp} & {\ensuremath{\Sigmaresbar_\bquark^+}}\xspace \\
\texttt{\textbackslash Sigmabbarz} & {\ensuremath{\Sigmaresbar_\bquark^0}}\xspace & \texttt{\textbackslash Sigmabbarm} & {\ensuremath{\Sigmaresbar_\bquark^-}}\xspace & \texttt{\textbackslash Sigmabbarpm} & {\ensuremath{\Sigmaresbar_\bquark^-}}\xspace \\
\texttt{\textbackslash Xib} & {\ensuremath{\Xires_\bquark}}\xspace & \texttt{\textbackslash Xibz} & {\ensuremath{\Xires^0_\bquark}}\xspace & \texttt{\textbackslash Xibm} & {\ensuremath{\Xires^-_\bquark}}\xspace \\
\texttt{\textbackslash Xibbar} & {\ensuremath{\Xiresbar{}_\bquark}}\xspace & \texttt{\textbackslash Xibbarz} & {\ensuremath{\Xiresbar{}_\bquark^0}}\xspace & \texttt{\textbackslash Xibbarp} & {\ensuremath{\Xiresbar{}_\bquark^+}}\xspace \\
\texttt{\textbackslash Omegab} & {\ensuremath{\Omegares^-_\bquark}}\xspace & \texttt{\textbackslash Omegabbar} & {\ensuremath{\Omegaresbar{}_\bquark^+}}\xspace & \\
\end{tabular*}
\subsection{Physics symbols}
\subsubsection{Decays}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash BF} & {\ensuremath{\mathcal{B}}}\xspace & \texttt{\textbackslash BR} & \BF & \texttt{\textbackslash BRvis} & {\ensuremath{\BR_{\mathrm{{vis}}}}} \\
\texttt{\textbackslash ra} & \ensuremath{\rightarrow}\xspace & \texttt{\textbackslash to} & \ensuremath{\rightarrow}\xspace & \\
\end{tabular*}
\subsubsection{Lifetimes}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash tauBs} & {\ensuremath{\tau_{{\ensuremath{\B^0_\squark}}\xspace}}}\xspace & \texttt{\textbackslash tauBd} & {\ensuremath{\tau_{{\ensuremath{\B^0}}\xspace}}}\xspace & \texttt{\textbackslash tauBz} & {\ensuremath{\tau_{{\ensuremath{\B^0}}\xspace}}}\xspace \\
\texttt{\textbackslash tauBu} & {\ensuremath{\tau_{{\ensuremath{\Bu}}\xspace}}}\xspace & \texttt{\textbackslash tauDp} & {\ensuremath{\tau_{{\ensuremath{\D^+}}\xspace}}}\xspace & \texttt{\textbackslash tauDz} & {\ensuremath{\tau_{{\ensuremath{\D^0}}\xspace}}}\xspace \\
\texttt{\textbackslash tauL} & {\ensuremath{\tau_{\mathrm{ L}}}}\xspace & \texttt{\textbackslash tauH} & {\ensuremath{\tau_{\mathrm{ H}}}}\xspace & \\
\end{tabular*}
\subsubsection{Masses}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash mBd} & {\ensuremath{m_{{\ensuremath{\B^0}}\xspace}}}\xspace & \texttt{\textbackslash mBp} & {\ensuremath{m_{{\ensuremath{\Bu}}\xspace}}}\xspace & \texttt{\textbackslash mBs} & {\ensuremath{m_{{\ensuremath{\B^0_\squark}}\xspace}}}\xspace \\
\texttt{\textbackslash mBc} & {\ensuremath{m_{{\ensuremath{\B_\cquark^+}}\xspace}}}\xspace & \texttt{\textbackslash mLb} & {\ensuremath{m_{{\ensuremath{\Lz^0_\bquark}}\xspace}}}\xspace & \\
\end{tabular*}
\subsubsection{EW theory, groups}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash grpsuthree} & {\ensuremath{\mathrm{SU}(3)}}\xspace & \texttt{\textbackslash grpsutw} & {\ensuremath{\mathrm{SU}(2)}}\xspace & \texttt{\textbackslash grpuone} & {\ensuremath{\mathrm{U}(1)}}\xspace \\
\texttt{\textbackslash ssqtw} & {\ensuremath{\sin^{2}\!\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash csqtw} & {\ensuremath{\cos^{2}\!\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash stw} & {\ensuremath{\sin\theta_{\mathrm{W}}}}\xspace \\
\texttt{\textbackslash ctw} & {\ensuremath{\cos\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash ssqtwef} & {\ensuremath{{\sin}^{2}\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash csqtwef} & {\ensuremath{{\cos}^{2}\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace \\
\texttt{\textbackslash stwef} & {\ensuremath{\sin\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash ctwef} & {\ensuremath{\cos\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash gv} & {\ensuremath{g_{\mbox{\tiny V}}}}\xspace \\
\texttt{\textbackslash ga} & {\ensuremath{g_{\mbox{\tiny A}}}}\xspace & \texttt{\textbackslash order} & {\ensuremath{\mathcal{O}}}\xspace & \texttt{\textbackslash ordalph} & {\ensuremath{\mathcal{O}(\alpha)}}\xspace \\
\texttt{\textbackslash ordalsq} & {\ensuremath{\mathcal{O}(\alpha^{2})}}\xspace & \texttt{\textbackslash ordalcb} & {\ensuremath{\mathcal{O}(\alpha^{3})}}\xspace & \\
\end{tabular*}
\subsubsection{QCD parameters}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash as} & {\ensuremath{\alpha_s}}\xspace & \texttt{\textbackslash MSb} & {\ensuremath{\overline{\mathrm{MS}}}}\xspace & \texttt{\textbackslash lqcd} & {\ensuremath{\Lambda_{\mathrm{QCD}}}}\xspace \\
\texttt{\textbackslash qsq} & {\ensuremath{q^2}}\xspace & \\
\end{tabular*}
\subsubsection{CKM, \boldmath {\ensuremath{C\!P}}\xspace violation}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash eps} & {\ensuremath{\varepsilon}}\xspace & \texttt{\textbackslash epsK} & {\ensuremath{\varepsilon_K}}\xspace & \texttt{\textbackslash epsB} & {\ensuremath{\varepsilon_B}}\xspace \\
\texttt{\textbackslash epsp} & {\ensuremath{\varepsilon^\prime_K}}\xspace & \texttt{\textbackslash CP} & {\ensuremath{C\!P}}\xspace & \texttt{\textbackslash CPT} & {\ensuremath{C\!PT}}\xspace \\
\texttt{\textbackslash T} & {\ensuremath{T}}\xspace & \texttt{\textbackslash rhobar} & {\ensuremath{\overline \rho}}\xspace & \texttt{\textbackslash etabar} & {\ensuremath{\overline \eta}}\xspace \\
\texttt{\textbackslash Vud} & {\ensuremath{V_{\uquark\dquark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vcd} & {\ensuremath{V_{\cquark\dquark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vtd} & {\ensuremath{V_{\tquark\dquark}^{\phantom{\ast}}}}\xspace \\
\texttt{\textbackslash Vus} & {\ensuremath{V_{\uquark\squark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vcs} & {\ensuremath{V_{\cquark\squark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vts} & {\ensuremath{V_{\tquark\squark}^{\phantom{\ast}}}}\xspace \\
\texttt{\textbackslash Vub} & {\ensuremath{V_{\uquark\bquark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vcb} & {\ensuremath{V_{\cquark\bquark}^{\phantom{\ast}}}}\xspace & \texttt{\textbackslash Vtb} & {\ensuremath{V_{\tquark\bquark}^{\phantom{\ast}}}}\xspace \\
\texttt{\textbackslash Vuds} & {\ensuremath{V_{\uquark\dquark}^\ast}}\xspace & \texttt{\textbackslash Vcds} & {\ensuremath{V_{\cquark\dquark}^\ast}}\xspace & \texttt{\textbackslash Vtds} & {\ensuremath{V_{\tquark\dquark}^\ast}}\xspace \\
\texttt{\textbackslash Vuss} & {\ensuremath{V_{\uquark\squark}^\ast}}\xspace & \texttt{\textbackslash Vcss} & {\ensuremath{V_{\cquark\squark}^\ast}}\xspace & \texttt{\textbackslash Vtss} & {\ensuremath{V_{\tquark\squark}^\ast}}\xspace \\
\texttt{\textbackslash Vubs} & {\ensuremath{V_{\uquark\bquark}^\ast}}\xspace & \texttt{\textbackslash Vcbs} & {\ensuremath{V_{\cquark\bquark}^\ast}}\xspace & \texttt{\textbackslash Vtbs} & {\ensuremath{V_{\tquark\bquark}^\ast}}\xspace \\
\end{tabular*}
\subsubsection{Oscillations}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash dm} & {\ensuremath{\Delta m}}\xspace & \texttt{\textbackslash dms} & {\ensuremath{\Delta m_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash dmd} & {\ensuremath{\Delta m_{{\ensuremath{\Pd}}\xspace}}}\xspace \\
\texttt{\textbackslash DG} & {\ensuremath{\Delta\Gamma}}\xspace & \texttt{\textbackslash DGs} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash DGd} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Pd}}\xspace}}}\xspace \\
\texttt{\textbackslash Gs} & {\ensuremath{\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash Gd} & {\ensuremath{\Gamma_{{\ensuremath{\Pd}}\xspace}}}\xspace & \texttt{\textbackslash MBq} & {\ensuremath{M_{{\ensuremath{\PB}}\xspace_{\ensuremath{\Pq}}\xspace}}}\xspace \\
\texttt{\textbackslash DGq} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Pq}}\xspace}}}\xspace & \texttt{\textbackslash Gq} & {\ensuremath{\Gamma_{{\ensuremath{\Pq}}\xspace}}}\xspace & \texttt{\textbackslash dmq} & {\ensuremath{\Delta m_{{\ensuremath{\Pq}}\xspace}}}\xspace \\
\texttt{\textbackslash GL} & {\ensuremath{\Gamma_{\mathrm{ L}}}}\xspace & \texttt{\textbackslash GH} & {\ensuremath{\Gamma_{\mathrm{ H}}}}\xspace & \texttt{\textbackslash DGsGs} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Ps}}\xspace}/\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace \\
\texttt{\textbackslash Delm} & {\mbox{$\Delta m $}}\xspace & \texttt{\textbackslash ACP} & {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace & \texttt{\textbackslash Adir} & {\ensuremath{{\mathcal{A}}^{\mathrm{ dir}}}}\xspace \\
\texttt{\textbackslash Amix} & {\ensuremath{{\mathcal{A}}^{\mathrm{ mix}}}}\xspace & \texttt{\textbackslash ADelta} & {\ensuremath{{\mathcal{A}}^\Delta}}\xspace & \texttt{\textbackslash phid} & {\ensuremath{\phi_{{\ensuremath{\Pd}}\xspace}}}\xspace \\
\texttt{\textbackslash sinphid} & {\ensuremath{\sin\!\phid}}\xspace & \texttt{\textbackslash phis} & {\ensuremath{\phi_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash betas} & {\ensuremath{\beta_{{\ensuremath{\Ps}}\xspace}}}\xspace \\
\texttt{\textbackslash sbetas} & {\ensuremath{\sigma(\beta_{{\ensuremath{\Ps}}\xspace})}}\xspace & \texttt{\textbackslash stbetas} & {\ensuremath{\sigma(2\beta_{{\ensuremath{\Ps}}\xspace})}}\xspace & \texttt{\textbackslash stphis} & {\ensuremath{\sigma(\phi_{{\ensuremath{\Ps}}\xspace})}}\xspace \\
\texttt{\textbackslash sinphis} & {\ensuremath{\sin\!\phis}}\xspace & \\
\end{tabular*}
\subsubsection{Tagging}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash edet} & {\ensuremath{\varepsilon_{\mathrm{ det}}}}\xspace & \texttt{\textbackslash erec} & {\ensuremath{\varepsilon_{\mathrm{ rec/det}}}}\xspace & \texttt{\textbackslash esel} & {\ensuremath{\varepsilon_{\mathrm{ sel/rec}}}}\xspace \\
\texttt{\textbackslash etrg} & {\ensuremath{\varepsilon_{\mathrm{ trg/sel}}}}\xspace & \texttt{\textbackslash etot} & {\ensuremath{\varepsilon_{\mathrm{ tot}}}}\xspace & \texttt{\textbackslash mistag} & \ensuremath{\omega}\xspace \\
\texttt{\textbackslash wcomb} & \ensuremath{\omega^{\mathrm{comb}}}\xspace & \texttt{\textbackslash etag} & {\ensuremath{\varepsilon_{\mathrm{tag}}}}\xspace & \texttt{\textbackslash etagcomb} & {\ensuremath{\varepsilon_{\mathrm{tag}}^{\mathrm{comb}}}}\xspace \\
\texttt{\textbackslash effeff} & \ensuremath{\varepsilon_{\mathrm{eff}}}\xspace & \texttt{\textbackslash effeffcomb} & \ensuremath{\varepsilon_{\mathrm{eff}}^{\mathrm{comb}}}\xspace & \texttt{\textbackslash efftag} & {\ensuremath{\etag(1-2\omega)^2}}\xspace \\
\texttt{\textbackslash effD} & {\ensuremath{\etag D^2}}\xspace & \texttt{\textbackslash etagprompt} & {\ensuremath{\varepsilon_{\mathrm{ tag}}^{\mathrm{Pr}}}}\xspace & \texttt{\textbackslash etagLL} & {\ensuremath{\varepsilon_{\mathrm{ tag}}^{\mathrm{LL}}}}\xspace \\
\end{tabular*}
\subsubsection{Key decay channels}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash BdToKstmm} & \decay{\Bd}{\Kstarz\mup\mun} & \texttt{\textbackslash BdbToKstmm} & \decay{\Bdb}{\Kstarzb\mup\mun} & \texttt{\textbackslash BsToJPsiPhi} & \decay{\Bs}{\jpsi\phi} \\
\texttt{\textbackslash BdToJPsiKst} & \decay{\Bd}{\jpsi\Kstarz} & \texttt{\textbackslash BdbToJPsiKst} & \decay{\Bdb}{\jpsi\Kstarzb} & \texttt{\textbackslash BsPhiGam} & \decay{\Bs}{\phi \g} \\
\texttt{\textbackslash BdKstGam} & \decay{\Bd}{\Kstarz \g} & \texttt{\textbackslash BTohh} & \decay{\B}{\Ph^+ \Ph'^-} & \texttt{\textbackslash BdTopipi} & \decay{\Bd}{\pip\pim} \\
\texttt{\textbackslash BdToKpi} & \decay{\Bd}{\Kp\pim} & \texttt{\textbackslash BsToKK} & \decay{\Bs}{\Kp\Km} & \texttt{\textbackslash BsTopiK} & \decay{\Bs}{\pip\Km} \\
\texttt{\textbackslash Cpipi} & \ensuremath{C_{\pip\pim}}\xspace & \texttt{\textbackslash Spipi} & \ensuremath{S_{\pip\pim}}\xspace & \texttt{\textbackslash CKK} & \ensuremath{C_{\Kp\Km}}\xspace \\
\texttt{\textbackslash SKK} & \ensuremath{S_{\Kp\Km}}\xspace & \texttt{\textbackslash ADGKK} & \ensuremath{A^{\DG}_{\Kp\Km}}\xspace & \\
\end{tabular*}
\subsubsection{Rare decays}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash BdKstee} & \decay{\Bd}{\Kstarz\epem} & \texttt{\textbackslash BdbKstee} & \decay{\Bdb}{\Kstarzb\epem} & \texttt{\textbackslash bsll} & \decay{\bquark}{\squark \ell^+ \ell^-} \\
\texttt{\textbackslash AFB} & \ensuremath{A_{\mathrm{FB}}}\xspace & \texttt{\textbackslash FL} & \ensuremath{F_{\mathrm{L}}}\xspace & \texttt{\textbackslash AT\#1 \textbackslash AT2} & \AT2 \\
\texttt{\textbackslash btosgam} & \decay{\bquark}{\squark \g} & \texttt{\textbackslash btodgam} & \decay{\bquark}{\dquark \g} & \texttt{\textbackslash Bsmm} & \decay{\Bs}{\mup\mun} \\
\texttt{\textbackslash Bdmm} & \decay{\Bd}{\mup\mun} & \texttt{\textbackslash Bsee} & \decay{\Bs}{\epem} & \texttt{\textbackslash Bdee} & \decay{\Bd}{\epem} \\
\texttt{\textbackslash ctl} & \ensuremath{\cos{\theta_\ell}}\xspace & \texttt{\textbackslash ctk} & \ensuremath{\cos{\theta_K}}\xspace & \\
\end{tabular*}
\subsubsection{Wilson coefficients and operators}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash C\#1 \textbackslash C9} & \C9 & \texttt{\textbackslash Cp\#1 \textbackslash Cp7} & \Cp7 & \texttt{\textbackslash Ceff\#1 \textbackslash Ceff9 } & \Ceff9 \\
\texttt{\textbackslash Cpeff\#1 \textbackslash Cpeff7} & \Cpeff7 & \texttt{\textbackslash Ope\#1 \textbackslash Ope2} & \Ope2 & \texttt{\textbackslash Opep\#1 \textbackslash Opep7} & \Opep7 \\
\end{tabular*}
\subsubsection{Charm}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash xprime} & \ensuremath{x^{\prime}}\xspace & \texttt{\textbackslash yprime} & \ensuremath{y^{\prime}}\xspace & \texttt{\textbackslash ycp} & \ensuremath{y_{\CP}}\xspace \\
\texttt{\textbackslash agamma} & \ensuremath{A_{\Gamma}}\xspace & \texttt{\textbackslash dkpicf} & \decay{\Dz}{\Km\pip} & \\
\end{tabular*}
\subsubsection{QM}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash bra[1] \textbackslash bra\{a\}} & \bra{a} & \texttt{\textbackslash ket[1] \textbackslash ket\{b\}} & \ket{b} & \texttt{\textbackslash braket[2] \textbackslash braket\{a\}\{b\}} & \braket{a}{b} \\
\end{tabular*}
\subsection{Units (these macros add a small space in front)}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash unit[1] \textbackslash unit\{kg\} } & \unit{kg} & \\
\end{tabular*}
\subsubsection{Energy and momentum }
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash tev} & \aunit{Te\kern -0.1em V}\xspace & \texttt{\textbackslash gev} & \aunit{Ge\kern -0.1em V}\xspace & \texttt{\textbackslash mev} & \aunit{Me\kern -0.1em V}\xspace \\
\texttt{\textbackslash kev} & \aunit{ke\kern -0.1em V}\xspace & \texttt{\textbackslash ev} & \aunit{e\kern -0.1em V}\xspace & \texttt{\textbackslash gevgev} & \gevgev \\
\texttt{\textbackslash mevc} & \ensuremath{\aunit{Me\kern -0.1em V\!/}c}\xspace & \texttt{\textbackslash gevc} & \ensuremath{\aunit{Ge\kern -0.1em V\!/}c}\xspace & \texttt{\textbackslash mevcc} & \ensuremath{\aunit{Me\kern -0.1em V\!/}c^2}\xspace \\
\texttt{\textbackslash gevcc} & \ensuremath{\aunit{Ge\kern -0.1em V\!/}c^2}\xspace & \texttt{\textbackslash gevgevcc} & \gevgevcc & \texttt{\textbackslash gevgevcccc} & \gevgevcccc \\
\end{tabular*}
\subsubsection{Distance and area (these macros add a small space)}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash km} & \aunit{km}\xspace & \texttt{\textbackslash m} & \aunit{m}\xspace & \texttt{\textbackslash ma} & \ensuremath{\aunit{m}^2}\xspace \\
\texttt{\textbackslash cm} & \aunit{cm}\xspace & \texttt{\textbackslash cma} & \ensuremath{\aunit{cm}^2}\xspace & \texttt{\textbackslash mm} & \aunit{mm}\xspace \\
\texttt{\textbackslash mma} & \ensuremath{\aunit{mm}^2}\xspace & \texttt{\textbackslash mum} & \ensuremath{\,\upmu\nospaceunit{m}}\xspace & \texttt{\textbackslash muma} & \ensuremath{\,\upmu\nospaceunit{m}^2}\xspace \\
\texttt{\textbackslash nm} & \aunit{nm}\xspace & \texttt{\textbackslash fm} & \aunit{fm}\xspace & \texttt{\textbackslash barn} & \aunit{b}\xspace \\
\texttt{\textbackslash mbarn} & \aunit{mb}\xspace & \texttt{\textbackslash mub} & \ensuremath{\,\upmu\nospaceunit{b}}\xspace & \texttt{\textbackslash nb} & \aunit{nb}\xspace \\
\texttt{\textbackslash invnb} & \ensuremath{\nb^{-1}}\xspace & \texttt{\textbackslash pb} & \aunit{pb}\xspace & \texttt{\textbackslash invpb} & \ensuremath{\pb^{-1}}\xspace \\
\texttt{\textbackslash fb} & \ensuremath{\aunit{fb}}\xspace & \texttt{\textbackslash invfb} & \ensuremath{\fb^{-1}}\xspace & \texttt{\textbackslash ab} & \ensuremath{\aunit{ab}}\xspace \\
\texttt{\textbackslash invab} & \ensuremath{\ab^{-1}}\xspace & \\
\end{tabular*}
\subsubsection{Time }
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash sec} & \ensuremath{\aunit{s}}\xspace & \texttt{\textbackslash ms} & \ensuremath{\aunit{ms}}\xspace & \texttt{\textbackslash mus} & \ensuremath{\,\upmu\nospaceunit{s}}\xspace \\
\texttt{\textbackslash ns} & \ensuremath{\aunit{ns}}\xspace & \texttt{\textbackslash ps} & \ensuremath{\aunit{ps}}\xspace & \texttt{\textbackslash fs} & \aunit{fs} \\
\texttt{\textbackslash mhz} & \ensuremath{\aunit{MHz}}\xspace & \texttt{\textbackslash khz} & \ensuremath{\aunit{kHz}}\xspace & \texttt{\textbackslash hz} & \ensuremath{\aunit{Hz}}\xspace \\
\texttt{\textbackslash invps} & \ensuremath{\ps^{-1}}\xspace & \texttt{\textbackslash invns} & \ensuremath{\ns^{-1}}\xspace & \texttt{\textbackslash yr} & \aunit{yr}\xspace \\
\texttt{\textbackslash hr} & \aunit{hr}\xspace & \\
\end{tabular*}
\subsubsection{Temperature}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash degc} & \ensuremath{^\circ}{\text{C}}\xspace & \texttt{\textbackslash degk} & \aunit{K}\xspace & \\
\end{tabular*}
\subsubsection{Material lengths, radiation}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash Xrad} & \ensuremath{X_0}\xspace & \texttt{\textbackslash NIL} & \ensuremath{\lambda_{\rm int}}\xspace & \texttt{\textbackslash mip} & MIP\xspace \\
\texttt{\textbackslash neutroneq} & \ensuremath{n_\nospaceunit{eq}}\xspace & \texttt{\textbackslash neqcmcm} & \ensuremath{\neutroneq/\nospaceunit{cm}^2}\xspace & \texttt{\textbackslash kRad} & \aunit{kRad}\xspace \\
\texttt{\textbackslash MRad} & \aunit{MRad}\xspace & \texttt{\textbackslash ci} & \aunit{Ci}\xspace & \texttt{\textbackslash mci} & \aunit{mCi}\xspace \\
\end{tabular*}
\subsubsection{Uncertainties}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash sx} & \sx & \texttt{\textbackslash sy} & \sy & \texttt{\textbackslash sz} & \sz \\
\texttt{\textbackslash stat} & \aunit{(stat)}\xspace & \texttt{\textbackslash syst} & \aunit{(syst)}\xspace & \texttt{\textbackslash lumi} & \aunit{(lumi)}\xspace \\
\end{tabular*}
\subsubsection{Maths}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash order} & {\ensuremath{\mathcal{O}}}\xspace & \texttt{\textbackslash chisq} & \ensuremath{\chi^2}\xspace & \texttt{\textbackslash chisqndf} & \ensuremath{\chi^2/\mathrm{ndf}}\xspace \\
\texttt{\textbackslash chisqip} & \ensuremath{\chi^2_{\text{IP}}}\xspace & \texttt{\textbackslash chisqvs} & \ensuremath{\chi^2_{\text{VS}}}\xspace & \texttt{\textbackslash chisqvtx} & \ensuremath{\chi^2_{\text{vtx}}}\xspace \\
\texttt{\textbackslash chisqvtxndf} & \ensuremath{\chi^2_{\text{vtx}}/\mathrm{ndf}}\xspace & \texttt{\textbackslash chisqfd} & \ensuremath{\chi^2_{\text{FD}}}\xspace & \texttt{\textbackslash gsim} & \gsim \\
\texttt{\textbackslash lsim} & \lsim & \texttt{\textbackslash mean[1] \textbackslash mean\{x\}} & \mean{x} & \texttt{\textbackslash abs[1] \textbackslash abs\{x\}} & \abs{x} \\
\texttt{\textbackslash Real} & \ensuremath{\mathcal{R}e}\xspace & \texttt{\textbackslash Imag} & \ensuremath{\mathcal{I}m}\xspace & \texttt{\textbackslash PDF} & PDF\xspace \\
\texttt{\textbackslash sPlot} & \mbox{\em sPlot}\xspace & \texttt{\textbackslash sFit} & \mbox{\em sFit}\xspace & \texttt{\textbackslash deriv} & \ensuremath{\mathrm{d}} \\
\end{tabular*}
\subsection{Kinematics}
\subsubsection{Energy, Momenta}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash Ebeam} & \ensuremath{E_{\mbox{\tiny BEAM}}}\xspace & \texttt{\textbackslash sqs} & \ensuremath{\protect\sqrt{s}}\xspace & \texttt{\textbackslash sqsnn} & \ensuremath{\protect\sqrt{s_{\scriptscriptstyle\rm NN}}}\xspace \\
\texttt{\textbackslash pt} & \ensuremath{p_{\mathrm{T}}}\xspace & \texttt{\textbackslash ptsq} & \ensuremath{p_{\mathrm{T}}^2}\xspace & \texttt{\textbackslash ptot} & \ensuremath{p}\xspace \\
\texttt{\textbackslash et} & \ensuremath{E_{\mathrm{T}}}\xspace & \texttt{\textbackslash mt} & \ensuremath{M_{\mathrm{T}}}\xspace & \texttt{\textbackslash dpp} & \ensuremath{\Delta p/p}\xspace \\
\texttt{\textbackslash msq} & \ensuremath{m^2}\xspace & \texttt{\textbackslash dedx} & \ensuremath{\mathrm{d}\hspace{-0.1em}E/\mathrm{d}x}\xspace & \\
\end{tabular*}
\subsubsection{PID}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash dllkpi} & \ensuremath{\mathrm{DLL}_{\kaon\pion}}\xspace & \texttt{\textbackslash dllppi} & \ensuremath{\mathrm{DLL}_{\proton\pion}}\xspace & \texttt{\textbackslash dllepi} & \ensuremath{\mathrm{DLL}_{\electron\pion}}\xspace \\
\texttt{\textbackslash dllmupi} & \ensuremath{\mathrm{DLL}_{\muon\pi}}\xspace & \\
\end{tabular*}
\subsubsection{Geometry}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash degrees} & \ensuremath{^{\circ}}\xspace & \texttt{\textbackslash murad} & \ensuremath{\,\upmu\nospaceunit{rad}}\xspace & \texttt{\textbackslash mrad} & \aunit{mrad}\xspace \\
\texttt{\textbackslash rad} & \aunit{rad}\xspace & \\
\end{tabular*}
\subsubsection{Accelerator}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash betastar} & \ensuremath{\beta^*} & \texttt{\textbackslash lum} & \lum & \texttt{\textbackslash intlum[1] \textbackslash intlum\{2 \,\ensuremath{\fb^{-1}}\xspace\}} & \intlum{2 \,\ensuremath{\fb^{-1}}\xspace} \\
\end{tabular*}
\subsection{Software}
\subsubsection{Programs}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash bcvegpy} & \mbox{\textsc{Bcvegpy}}\xspace & \texttt{\textbackslash boole} & \mbox{\textsc{Boole}}\xspace & \texttt{\textbackslash brunel} & \mbox{\textsc{Brunel}}\xspace \\
\texttt{\textbackslash davinci} & \mbox{\textsc{DaVinci}}\xspace & \texttt{\textbackslash dirac} & \mbox{\textsc{Dirac}}\xspace & \texttt{\textbackslash evtgen} & \mbox{\textsc{EvtGen}}\xspace \\
\texttt{\textbackslash fewz} & \mbox{\textsc{Fewz}}\xspace & \texttt{\textbackslash fluka} & \mbox{\textsc{Fluka}}\xspace & \texttt{\textbackslash ganga} & \mbox{\textsc{Ganga}}\xspace \\
\texttt{\textbackslash gaudi} & \mbox{\textsc{Gaudi}}\xspace & \texttt{\textbackslash gauss} & \mbox{\textsc{Gauss}}\xspace & \texttt{\textbackslash geant} & \mbox{\textsc{Geant4}}\xspace \\
\texttt{\textbackslash hepmc} & \mbox{\textsc{HepMC}}\xspace & \texttt{\textbackslash herwig} & \mbox{\textsc{Herwig}}\xspace & \texttt{\textbackslash moore} & \mbox{\textsc{Moore}}\xspace \\
\texttt{\textbackslash neurobayes} & \mbox{\textsc{NeuroBayes}}\xspace & \texttt{\textbackslash photos} & \mbox{\textsc{Photos}}\xspace & \texttt{\textbackslash powheg} & \mbox{\textsc{Powheg}}\xspace \\
\texttt{\textbackslash pythia} & \mbox{\textsc{Pythia}}\xspace & \texttt{\textbackslash resbos} & \mbox{\textsc{ResBos}}\xspace & \texttt{\textbackslash roofit} & \mbox{\textsc{RooFit}}\xspace \\
\texttt{\textbackslash root} & \mbox{\textsc{Root}}\xspace & \texttt{\textbackslash spice} & \mbox{\textsc{Spice}}\xspace & \texttt{\textbackslash urania} & \mbox{\textsc{Urania}}\xspace \\
\end{tabular*}
\subsubsection{Languages}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash cpp} & \mbox{\textsc{C\raisebox{0.1em}{{\footnotesize{++}}}}}\xspace & \texttt{\textbackslash ruby} & \mbox{\textsc{Ruby}}\xspace & \texttt{\textbackslash fortran} & \mbox{\textsc{Fortran}}\xspace \\
\texttt{\textbackslash svn} & \mbox{\textsc{svn}}\xspace & \texttt{\textbackslash git} & \mbox{\textsc{git}}\xspace & \texttt{\textbackslash latex} & \mbox{\LaTeX}\xspace \\
\end{tabular*}
\subsubsection{Data processing}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash kbit} & \aunit{kbit}\xspace & \texttt{\textbackslash kbps} & \aunit{kbit/s}\xspace & \texttt{\textbackslash kbytes} & \aunit{kB}\xspace \\
\texttt{\textbackslash kbyps} & \aunit{kB/s}\xspace & \texttt{\textbackslash mbit} & \aunit{Mbit}\xspace & \texttt{\textbackslash mbps} & \aunit{Mbit/s}\xspace \\
\texttt{\textbackslash mbytes} & \aunit{MB}\xspace & \texttt{\textbackslash mbyps} & \aunit{MB/s}\xspace & \texttt{\textbackslash gbit} & \aunit{Gbit}\xspace \\
\texttt{\textbackslash gbps} & \aunit{Gbit/s}\xspace & \texttt{\textbackslash gbytes} & \aunit{GB}\xspace & \texttt{\textbackslash gbyps} & \aunit{GB/s}\xspace \\
\texttt{\textbackslash tbit} & \aunit{Tbit}\xspace & \texttt{\textbackslash tbps} & \aunit{Tbit/s}\xspace & \texttt{\textbackslash tbytes} & \aunit{TB}\xspace \\
\texttt{\textbackslash tbyps} & \aunit{TB/s}\xspace & \texttt{\textbackslash dst} & DST\xspace & \\
\end{tabular*}
\subsection{Detector related}
\subsubsection{Detector technologies}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash nonn} & \ensuremath{\mathrm{{ \mathit{n^+}} \mbox{-} on\mbox{-}{ \mathit{n}}}}\xspace & \texttt{\textbackslash ponn} & \ensuremath{\mathrm{{ \mathit{p^+}} \mbox{-} on\mbox{-}{ \mathit{n}}}}\xspace & \texttt{\textbackslash nonp} & \ensuremath{\mathrm{{ \mathit{n^+}} \mbox{-} on\mbox{-}{ \mathit{p}}}}\xspace \\
\texttt{\textbackslash cvd} & CVD\xspace & \texttt{\textbackslash mwpc} & MWPC\xspace & \texttt{\textbackslash gem} & GEM\xspace \\
\end{tabular*}
\subsubsection{Detector components, electronics}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash tell1} & TELL1\xspace & \texttt{\textbackslash ukl1} & UKL1\xspace & \texttt{\textbackslash beetle} & Beetle\xspace \\
\texttt{\textbackslash otis} & OTIS\xspace & \texttt{\textbackslash croc} & CROC\xspace & \texttt{\textbackslash carioca} & CARIOCA\xspace \\
\texttt{\textbackslash dialog} & DIALOG\xspace & \texttt{\textbackslash sync} & SYNC\xspace & \texttt{\textbackslash cardiac} & CARDIAC\xspace \\
\texttt{\textbackslash gol} & GOL\xspace & \texttt{\textbackslash vcsel} & VCSEL\xspace & \texttt{\textbackslash ttc} & TTC\xspace \\
\texttt{\textbackslash ttcrx} & TTCrx\xspace & \texttt{\textbackslash hpd} & HPD\xspace & \texttt{\textbackslash pmt} & PMT\xspace \\
\texttt{\textbackslash specs} & SPECS\xspace & \texttt{\textbackslash elmb} & ELMB\xspace & \texttt{\textbackslash fpga} & FPGA\xspace \\
\texttt{\textbackslash plc} & PLC\xspace & \texttt{\textbackslash rasnik} & RASNIK\xspace & \texttt{\textbackslash elmb} & ELMB\xspace \\
\texttt{\textbackslash can} & CAN\xspace & \texttt{\textbackslash lvds} & LVDS\xspace & \texttt{\textbackslash ntc} & NTC\xspace \\
\texttt{\textbackslash adc} & ADC\xspace & \texttt{\textbackslash led} & LED\xspace & \texttt{\textbackslash ccd} & CCD\xspace \\
\texttt{\textbackslash hv} & HV\xspace & \texttt{\textbackslash lv} & LV\xspace & \texttt{\textbackslash pvss} & PVSS\xspace \\
\texttt{\textbackslash cmos} & CMOS\xspace & \texttt{\textbackslash fifo} & FIFO\xspace & \texttt{\textbackslash ccpc} & CCPC\xspace \\
\end{tabular*}
\subsubsection{Chemical symbols}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash cfourften} & \ensuremath{\mathrm{ C_4 F_{10}}}\xspace & \texttt{\textbackslash cffour} & \ensuremath{\mathrm{ CF_4}}\xspace & \texttt{\textbackslash cotwo} & \cotwo \\
\texttt{\textbackslash csixffouteen} & \csixffouteen & \texttt{\textbackslash mgftwo} & \mgftwo & \texttt{\textbackslash siotwo} & \siotwo \\
\end{tabular*}
\subsection{Special text}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash eg} & \mbox{\itshape e.g.}\xspace & \texttt{\textbackslash ie} & \mbox{\itshape i.e.}\xspace & \texttt{\textbackslash etal} & \mbox{\itshape et al.}\xspace \\
\texttt{\textbackslash etc} & \mbox{\itshape etc.}\xspace & \texttt{\textbackslash cf} & \mbox{\itshape cf.}\xspace & \texttt{\textbackslash ffp} & \mbox{\itshape ff.}\xspace \\
\texttt{\textbackslash vs} & \mbox{\itshape vs.}\xspace & \\
\end{tabular*}
\subsubsection{Helpful to align numbers in tables}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash phz} & \phantom{0} & \\
\end{tabular*}
\section*{\secdef\my@section{\mbox{\LaTeX}\xspace@section*}}
\def\my@section[#1]#2{}
\makeatother
\input{lhcb-symbols-def}
\usepackage{longtable}
\begin{document}
\preprint{LHCb-PAPER-20XX-YYY}
\title{Template for writing LHCb papers}
\author{LHCb collaboration}
\affiliation{%
Authors are listed at the end of this Letter.}%
\date{\today}
\begin{abstract}
Guidelines for the preparation of LHCb documents are given. This is
a ``living'' document that should reflect our current practice. It
is expected that these guidelines are implemented for papers
before they go into the first collaboration wide review. Please
contact the Editorial Board chair if you have suggestions for
modifications.
This is the title page for journal publications (PAPER).
For a CONF note or ANA note, switch to the appropriate template
by uncommenting the corresponding line in the file \verb!main.tex!.
\end{abstract}
\pacs{}
\maketitle
\input{introduction}
\input{principles}
\input{layout}
\input{typography}
\input{detector}
\input{figures}
\input{reference}
\input{supplementary}
\begin{acknowledgments}
\input{acknowledgements}
\end{acknowledgments}
\ifthenelse{\boolean{wordcount}}%
{ \bibliographystyle{plain}
\nobibliography{main,LHCb-PAPER,LHCb-CONF,LHCb-DP,LHCb-TDR}}
{
|
2,877,628,088,376 | arxiv | \section{Introduction and Main Result}
The classical Schwarz lemma says that an analytic function
$f$ from the unit disk $\mathbb{D}=\{z\in
\mathbb{C}:|z|<1\}$ into itself with $f(0)=0$ must map
each smaller disk $\{z\in \mathbb{C}:\; |z|<r<1\}$ into itself. Also, $|f'(0)|\le 1$, and $|f'(0)|= 1$
if and only if $f$ is a rotation of $\mathbb{D}$.
This is a very powerful tool in complex analysis.
An elementary consequence of the Schwarz lemma is that if $f$ extends
continuously to some boundary point $\alpha$ with
$|f(\alpha)|=1$, and if $f$ is differentiable at $\alpha$, then $|f'(\alpha)|\ge 1$ (see, for example,
\cite{Gar-book,Oss00}).
Establishing various versions of the Schwarz lemma
and of the boundary Schwarz lemma has
attracted many researchers in recent years. In \cite{BK94}, Burns and Krantz obtained a Schwarz lemma at the boundary for holomorphic mappings defined on $\mathbb{D}$ as well as on balls in $\mathbb{C}^n$. They also obtained similar results for holomorphic mappings on strongly convex and strongly pseudoconvex domains in $\mathbb{C}^n$. Liu and Tang in \cite{LT15} obtained the boundary Schwarz lemma for holomorphic mappings defined on the unit ball in $\mathbb{C}^n$. We refer to the survey article by Krantz \cite{Kra11} for a brief history of the Schwarz lemma at the boundary.
The Schwarz lemma at the boundary plays an important
role in complex analysis. For example, by using the Schwarz lemma at the boundary,
Bonk improved the previously known lower bound for the Bloch constant in \cite{Bonk90}.
The boundary Schwarz lemma is also a fundamental tool in the study of the geometric properties of functions of several complex variables; see \cite{LT15,LT16,LWT15}.
In this paper, we are interested in establishing a boundary Schwarz lemma for functions which satisfy certain partial differential equations, namely,
the non-homogeneous biharmonic equations.
We now introduce the notation and preliminaries required
to state our result.
We denote by $\mathbb{T}=\partial\mathbb{D}$ the boundary of
$\mathbb{D}$, and by $\overline{\mathbb{D}}=\mathbb{D}\cup \mathbb{T}$, the closure of $\mathbb{D}$. For any subset $\Omega$ of $\mathbb{C}$, we denote by $\mathcal{C}^{m}(\Omega)$, the set of all complex-valued $m$-times continuously differentiable functions from
$\Omega$ into $\mathbb{C}$, where $m\in \mathbb{N}\cup\{0\}$. In
particular, $\mathcal{C}(\Omega):=\mathcal{C}^{0}(\Omega)$ denotes the
set of all continuous functions in $\Omega$.
For a real $2\times2$ matrix $A$, we use the matrix norm
$$\|A\|=\sup\{|Az|:\,z\in \mathbb{T}\}$$ and the matrix function
$$\lambda(A)=\inf\{|Az|:\, z\in \mathbb{T}\}.$$
For $z=x+iy\in\mathbb{C}$ with $x$, $y\in \mathbb{R}$, the
formal derivative of a complex-valued function $f=u+iv$ is given
by
$$D_{f}=\left(\begin{array}{cccc}
\displaystyle u_{x}\;~~ u_{y}\\[2mm]
\displaystyle v_{x}\;~~ v_{y}
\end{array}\right),
$$
so that
$$\|D_{f}\|=|f_{z}|+|f_{\overline{z}}| ~\mbox{ and }~ \lambda(D_{f})=\big| |f_{z}|-|f_{\overline{z}}|\big |,
$$
where $$f_{z}=\frac{1}{2}\big(
f_x-if_y\big)\;\;\mbox{and}\;\; f_{\overline{z}}=\frac{1}{2}\big(f_x+if_y\big).$$ We use
$$J_{f}:=\det D_{f} =|f_{z}|^{2}-|f_{\overline{z}}|^{2}
$$
to denote the {\it Jacobian} of $f$.
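The expressions for $\|D_{f}\|$ and $\lambda(D_{f})$ follow from a standard computation: the image under $D_{f}$ of the unit vector $(\cos\theta,\sin\theta)$, viewed as a complex number, is the directional derivative
$$f_{x}\cos\theta+f_{y}\sin\theta=f_{z}e^{i\theta}+f_{\overline{z}}e^{-i\theta},
$$
and, as $\theta$ runs over $[0,2\pi)$, its modulus attains the maximum $|f_{z}|+|f_{\overline{z}}|$ and the minimum $\big||f_{z}|-|f_{\overline{z}}|\big|$.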
Let $f^{\ast}, g\in\mathcal{C}(\overline{\mathbb{D}})$, $\varphi \in \mathcal{C}(\mathbb{T})$
and
$f\in\mathcal{C}^{4}(\mathbb{D})$. We are interested in
the following {\it non-homogeneous biharmonic equation} defined in
$\mathbb{D}$:
\begin{equation}\label{eq-ch-1.0}\Delta(\Delta f)=g\end{equation} with the following
associated Dirichlet boundary value: \begin{equation}\label{eq-ch-1}
\begin{cases}
\displaystyle f_{\overline{z}}=\varphi &\mbox{ on }\, \mathbb{T},\\
\displaystyle f=f^{\ast}&\mbox{ on }\, \mathbb{T},
\end{cases}
\end{equation} where $$\Delta f=f_{xx}+f_{yy}=4f_{z \overline{z}}$$ is the
{\it Laplacian} of $f$.
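We note in passing that the identity $\Delta f=4f_{z\overline{z}}$ is immediate from the definitions of $f_{z}$ and $f_{\overline{z}}$: for $f\in\mathcal{C}^{2}(\mathbb{D})$,
$$4f_{z\overline{z}}=\big(f_{x}+if_{y}\big)_{x}-i\big(f_{x}+if_{y}\big)_{y}=f_{xx}+f_{yy}+i(f_{yx}-f_{xy})=f_{xx}+f_{yy},
$$
since the mixed partial derivatives of a $\mathcal{C}^{2}$ function coincide.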
In particular, if $g\equiv0$, then any
solution to (\ref{eq-ch-1.0}) is {\it biharmonic}. For the
properties of biharmonic mappings, see
\cite{CWY99,Str03}.
Chen et al. in \cite{CLW} discussed the Schwarz-type lemma, Landau-type theorems and bi-Lipschitz properties of the solutions to the
non-homogeneous biharmonic equation \eqref{eq-ch-1.0} satisfying \eqref{eq-ch-1}.
Suppose that
\begin{equation}\label{G-1}G(z,w)=|z-w|^{2}\log\left|\frac{1-z\overline{w}}{z-w}\right|^{2}-(1-|z|^{2})(1-|w|^{2})\end{equation}
and
$$P(z,e^{i\theta})=\frac{1-|z|^{2}}{|1-ze^{-i\theta}|^{2}} \quad (\theta\in[0,2\pi])
$$
denote the {\it biharmonic Green function} and {\it (harmonic)
Poisson kernel} in $\mathbb{D}$, respectively. It follows from \cite[Theorem 2]{Beg05} that all the
solutions to the equation (\ref{eq-ch-1.0}) satisfying the boundary conditions (\ref{eq-ch-1}) are given by
\begin{eqnarray}\label{eq-ch-3}
f(z)&=&\mathcal{P}_{f^{\ast}}(z)+\frac{1}{2\pi}(1-|z|^{2})\int_{0}^{2\pi}f^{\ast}(e^{it})\frac{\overline{z}e^{it}}{(1-\overline{z}e^{it})^{2}}dt\\
\nonumber
&&-(1-|z|^{2})\mathcal{P}_{\varphi_{1}}(z)-\frac{1}{8}G[g](z),
\end{eqnarray}
where
\begin{equation}\label{mon-001}\mathcal{P}_{f^{\ast}}(z)=\frac{1}{2\pi}\int_{0}^{2\pi}P(z,e^{it})f^{\ast}(e^{it})dt,
\quad
\mathcal{P}_{\varphi_{1}}(z)=\frac{1}{2\pi}\int_{0}^{2\pi}P(z,e^{it})\varphi_{1}(e^{it})dt,
\end{equation}
\begin{equation}\label{mon-002} \varphi_{1}(e^{it})=\varphi(e^{it})e^{-it}\quad \mbox{and}\quad G[g](z)=\frac{1}{2\pi} \int_{\mathbb{D}} g(w)G(z,w)dA(w).
\end{equation}
Here $dA(w)$ denotes the Lebesgue area measure in $\mathbb{D}$.
The solvability of the non-homogeneous biharmonic equations has also been studied in \cite{MM09}.
Let us recall the following version of the boundary Schwarz lemma for analytic functions, which was proved in \cite{LT15}.
\begin{Thm}{\rm (\cite[Theorem~$1.1'$]{LT15})}\label{MWZ} Suppose that $f$ is an analytic function from $\mathbb{D}$ into itself. If $f(0)=0$ and $f$ is analytic at $z=\alpha\in \mathbb{T}$ with $f(\alpha)=\beta\in \mathbb{T}$, then
\begin{enumerate}
\item $\overline{\beta}f'(\alpha)\alpha \geq 1$.
\item $\overline{\beta}f'(\alpha)\alpha = 1$ if and only if $f(z)\equiv e^{i\theta} z$, where $e^{i\theta}=\beta \alpha^{-1}$ and $\theta\in \mathbb{R}$.
\end{enumerate}
\end{Thm}
This useful result has attracted much attention and has been
generalized in various forms (see, e.g., \cite{CK,Zhu18-fil}). Recently, Wang et al. obtained a boundary Schwarz lemma for
the solutions to Poisson's equation \cite{WZ18}.
By analogy with the studies in \cite{WZ18}, we discuss the boundary Schwarz lemma for functions of the form \eqref{eq-ch-3}. Our result is as follows. Note that a different form of the boundary Schwarz lemma
for functions of the form \eqref{eq-ch-3} was proved in \cite{CLW}.
\begin{theorem}\label{bsl-2}
Suppose $f\in \mathcal{C}^4(\mathbb{D})$ and $g\in \mathcal{C}(\overline{\mathbb{D}})$ satisfy the following equations:
$$\left\{\begin{array}{ll}
\Delta (\Delta f)=g & \mbox{ in } \mathbb{D},\\
f_{\overline{z}}=\varphi & \mbox{ on } \mathbb{T},\\
f=f^\ast & \mbox{ on } \mathbb{T},
\end{array}\right.
$$
where $\varphi\in \mathcal{C}(\mathbb{T})$, $f^\ast \in \mathcal{C}(\overline{\mathbb{D}})$, $f^\ast $ is analytic in $\mathbb{D}$ and $f(\mathbb{D})\subset \mathbb{D}$. If $f$ is differentiable at $z=\alpha\in \mathbb{T}$, $f(\alpha)=\beta\in \mathbb{T}$ and $f(0)=0$, then
\begin{equation}\label{wen-1} {\rm Re} [\overline{\beta}(f_z(\alpha)\alpha+f_{\overline{z}}(\alpha)\overline{\alpha})]\ge
\frac{2}{\pi}-3\|\mathcal{P}_{\varphi_1}\|_\infty-\frac{1}{64} \|g\|_{\infty},
\end{equation}
where $\mathcal{P}_{\varphi_1}$ and $\varphi_1$ are defined in \eqref{mon-001} and \eqref{mon-002}, respectively.
In particular, when $\|\mathcal{P}_{\varphi_1}\|_\infty=\|g\|_{\infty}=0$, the following inequality is sharp:
\begin{equation}\label{mana-1}{\rm Re} [\overline{\beta}(f_z(\alpha)\alpha+f_{\overline{z}}(\alpha)\overline{\alpha})]\ge
\frac{2}{\pi}.\end{equation}
\end{theorem}
We have the following two remarks.
\begin{enumerate}
\item
For analytic functions, the value of $\overline{\beta}f'(\alpha)\alpha$ in Theorem A is
a real number. However, this is not true in general for solutions to the equation
\eqref{eq-ch-1.0} (see Example~\ref{existence} below). Hence, in Theorem \ref{bsl-2}, we consider the real part of the quantity $\overline{\beta}(f_z(\alpha)\alpha+f_{\overline{z}}(\alpha)\overline{\alpha})$.
\item The obtained lower bound for the quantity ${\rm Re} [\overline{\beta}(f_z(\alpha)\alpha+f_{\overline{z}}(\alpha)\overline{\alpha})]$ in \eqref{wen-1} is always positive for all $\varphi_1$ and $g$ with $(\|\mathcal{P}_{\varphi_1}\|_{\infty}, \|g\|_{\infty})\in \{(x,y):\; x\geq 0,\; y\geq 0,\; 3x+\frac{1}{64}y<\frac{2}{\pi}\}$.
\end{enumerate}
\section{Proof of Theorem \ref{bsl-2}}
We start with the following lemma.
\begin{lemma}\label{bsl-1}
Suppose $\mathfrak{g}\in \mathcal{C}(\overline{\mathbb{D}})$ and $h \in \mathcal{C}^4(\mathbb{D})$ satisfy the following equations:
$$\left\{\begin{array}{ll}
\Delta (\Delta h)=\mathfrak{g} & \mbox{ in } \mathbb{D},\\
h_{\overline{z}}=\psi & \mbox{ on } \mathbb{T},\\
h=h^\ast & \mbox{ on } \mathbb{T},
\end{array}\right.
$$
where $\psi \in \mathcal{C}(\mathbb{T})$, $h^\ast \in \mathcal{C}(\overline{\mathbb{D}})$, $h^\ast $ is analytic in $\mathbb{D}$ and $h(\mathbb{D})\subset \mathbb{D}$. If $h$ is differentiable at $z=1$, $h(1)=1$ and $h(0)=0$, then
\[ {\rm Re} [h_z(1)+h_{\overline{z}}(1)]\ge
\frac{2}{\pi}-3\|\mathcal{P}_{\psi_1}\|_\infty-\frac{1}{64} \|\mathfrak{g}\|_{\infty},
\]
where $\psi_1(e^{it})=\psi(e^{it}) e^{-it}$.
In particular, when $\|\mathcal{P}_{\psi_1}\|_\infty=\|\mathfrak{g}\|_{\infty}=0$, the following inequality is sharp:
\begin{equation}\label{mana-2}{\rm Re} \left[h_z(1)+h_{\overline{z}}(1)\right]\ge
\frac{2}{\pi}.\end{equation}
\end{lemma}
\begin{proof}
The assumptions of the lemma ensure that $h$ has the form \eqref{eq-ch-3}, i.e.,
$$
h(z)=\mathcal{P}_{h^{\ast}}(z)+\frac{1}{2\pi}(1-|z|^{2})\int_{0}^{2\pi}h^{\ast}(e^{it})\frac{\overline{z}e^{it}}{(1-\overline{z}e^{it})^{2}}dt-(1-|z|^{2})
\mathcal{P}_{\psi_{1}}(z)-\frac{1}{8}G[\mathfrak{g}](z).
$$
Since the analyticity of $h^\ast$ in $\mathbb{D}$ gives
\begin{equation}\label{eq-anal}
\frac{1}{2\pi}\int_{0}^{2\pi}\overline{z}e^{it}h^{\ast}(e^{it})\frac{1-|z|^{2}}{(1-\overline{z}e^{it})^{2}}dt=0,
\end{equation}
we obtain that
\begin{eqnarray}\label{wed-001}
|h(z)| &=& \left| \mathcal{P}_{h^\ast}(z)-(1-|z|^2) \mathcal{P}_{\psi_1}(z)-\frac{1}{8} G[\mathfrak{g}](z)\right|\\ \nonumber
&\le & \left| \mathcal{P}_{h^\ast}(z) -\frac{1-|z|^{2}}{1+|z|^{2}}\mathcal{P}_{h^{\ast}}(0)\right|+
(1-|z|^2)\left|\mathcal{P}_{\psi_1}(z)- \frac{1-|z|^{2}}{1+|z|^{2}}\mathcal{P}_{\psi_{1}}(0)\right|\\ \nonumber
&& +\frac{1-|z|^2}{1+|z|^2} \left(\left|\mathcal{P}_{h^{\ast}}(0)-\mathcal{P}_{\psi_1}(0)\right|+|z|^2\left|\mathcal{P}_{\psi_1}(0)\right|\right)+\left|\frac{1}{8} G[\mathfrak{g}](z)\right|.
\end{eqnarray}
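Here the identity \eqref{eq-anal} can be verified by expanding the kernel: for $z\in\mathbb{D}$,
$$\frac{\overline{z}e^{it}}{(1-\overline{z}e^{it})^{2}}=\sum_{n=1}^{\infty}n\,\overline{z}^{\,n}e^{int},
$$
and since $h^{\ast}$ is analytic in $\mathbb{D}$ and continuous on $\overline{\mathbb{D}}$, its boundary values have vanishing Fourier coefficients of negative index, i.e., $\int_{0}^{2\pi}h^{\ast}(e^{it})e^{int}dt=0$ for every $n\geq 1$; integrating the series term by term then gives \eqref{eq-anal}.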
By the proof of Theorem $1.1$ in \cite{CLW}, we have the following estimates:
$$ \left| \mathcal{P}_{h^\ast}(z) -\frac{1-|z|^{2}}{1+|z|^{2}}\mathcal{P}_{h^{\ast}}(0)\right|\le
\frac{4}{\pi} \|\mathcal{P}_{h^{\ast}}\|_{\infty}\arctan|z|,
$$
$$\left|\mathcal{P}_{\psi_1}(z)- \frac{1-|z|^{2}}{1+|z|^{2}}\mathcal{P}_{\psi_{1}}(0)\right| \le
\frac{4}{\pi}\|\mathcal{P}_{\psi_{1}}\|_{\infty}\arctan|z|
$$
and
$$
\left|G[\mathfrak{g}](z)\right|\le \frac{1}{8}\|\mathfrak{g}\|_\infty (1-|z|^2)^2.
$$
Moreover, it follows from the assumption $h(0)=0$ that
$$ \mathcal{P}_{h^{\ast}}(0)-\mathcal{P}_{\psi_1}(0)=\frac{1}{8} G[\mathfrak{g}](0),
$$ and so, we get
$$
|\mathcal{P}_{h^{\ast}}(0)-\mathcal{P}_{\psi_1}(0)|\le \frac{1}{64} \|\mathfrak{g}\|_{\infty}.
$$
Based on the above estimates, together with the fact that $\|\mathcal{P}_{h^\ast}\|_{\infty}\le 1$, the inequality \eqref{wed-001} becomes
\begin{eqnarray}
\label{eqn-f-bound} |h(z)| &\le& \frac{4}{\pi}
\arctan|z|+ \frac{1-|z|^2}{1+|z|^2}
\left(\frac{1}{64} \|\mathfrak{g}\|_{\infty}+|z|^2 \|\mathcal{P}_{\psi_1}\|_\infty \right)
\\
&&\nonumber +\frac{4}{\pi}\|\mathcal{P}_{\psi_{1}}\|_{\infty}(1-|z|^{2})\arctan|z| +\frac{1}{64}\|\mathfrak{g}\|_\infty (1-|z|^2)^2\\
&=:&\nonumber M(|z|).
\end{eqnarray}
Since $h$ is differentiable at $z=1$, we have
\begin{equation}\nonumber
h(z)=1+h_z(1)(z-1)+h_{\overline{z}}(1)(\overline{z}-1)+o(|z-1|),
\end{equation}
where $o(x)$ means a function with $\lim_{x\to 0} o(x)/x=0$. Then we deduce from \eqref{eqn-f-bound} that
$$ 2 {\rm Re} [h_z(1)(1-z)+h_{\overline{z}}(1)(1-\overline{z})]\ge 1-M^2(|z|)-o(|z-1|).
$$
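Indeed, by \eqref{eqn-f-bound} and the above expansion,
$$M^{2}(|z|)\geq |h(z)|^{2}=1+2\,{\rm Re}\,[h_{z}(1)(z-1)+h_{\overline{z}}(1)(\overline{z}-1)]+o(|z-1|),
$$
where the quadratic term $|h_{z}(1)(z-1)+h_{\overline{z}}(1)(\overline{z}-1)|^{2}=O(|z-1|^{2})$ has been absorbed into $o(|z-1|)$; rearranging yields the displayed inequality.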
By letting $z=r\in (0,1)$ and $r\to 1^-$, we get
$${\rm Re} [h_z(1)+h_{\overline{z}}(1)]\ge \lim_{r\to 1^-} M'(r)=
\frac{2}{\pi}-3\|\mathcal{P}_{\psi_1}\|_\infty-\frac{1}{64} \|\mathfrak{g}\|_{\infty}.
$$
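For the reader's convenience we indicate how this limit is evaluated. Since $M(1)=1$, l'H\^{o}pital's rule gives $\lim_{r\to 1^{-}}\frac{1-M^{2}(r)}{2(1-r)}=M(1)M^{\prime}(1)=M^{\prime}(1)$, and differentiating $M$ term by term at $r=1$ yields
$$M^{\prime}(1)=\frac{2}{\pi}-\Big(\frac{1}{64}\|\mathfrak{g}\|_{\infty}+\|\mathcal{P}_{\psi_{1}}\|_{\infty}\Big)-2\|\mathcal{P}_{\psi_{1}}\|_{\infty}
=\frac{2}{\pi}-3\|\mathcal{P}_{\psi_{1}}\|_{\infty}-\frac{1}{64}\|\mathfrak{g}\|_{\infty},
$$
the three contributions coming from $\frac{4}{\pi}\arctan r$, from the factor $\frac{1-r^{2}}{1+r^{2}}$ (whose derivative at $r=1$ equals $-1$), and from $\frac{4}{\pi}\|\mathcal{P}_{\psi_{1}}\|_{\infty}(1-r^{2})\arctan r$, while the term $\frac{1}{64}\|\mathfrak{g}\|_{\infty}(1-r^{2})^{2}$ contributes nothing at $r=1$.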
To finish the proof of the lemma, it remains to check the sharpness of the inequality
\eqref{mana-2}. For this, we borrow the following function from \cite[Page 127]{ABR}:
\begin{equation}\label{mana-3}
\mathfrak{h}(z)=\left\{\begin{array}{ll}
\displaystyle\frac{2}{\pi} \arctan \frac{z+\overline{z}}{1-|z|^2} & \mbox{ if }\; z\in \mathbb{D},\\[2mm]
1 & \mbox{ if }\; z\in\mathbb{T}.
\end{array}\right.
It can be seen that $\mathfrak{h}$ is harmonic in $\mathbb{D}$ with $\mathfrak{h}(0)=0$ and $\mathfrak{h}(1)=1$.
Since
\begin{equation}\label{lem-sh} \mathfrak{h}_z(z)=\frac{2}{\pi} \frac{1+\overline{z}^2}{(1-|z|^2)^2+(z+\overline{z})^2}\;\; \mbox { and }\;\;
\mathfrak{h}_{\overline{z}}(z)= \frac{2}{\pi} \frac{1+{z}^2}{(1-|z|^2)^2+(z+\overline{z})^2},
\end{equation} we know that both $\mathfrak{h}_z$ and $\mathfrak{h}_{\overline{z}}$ are continuous at $z=1$. This
guarantees the differentiability of $\mathfrak{h}$ at this point.
Let
$$\mathfrak{h}^\ast(z)=1
$$ in $\overline{\mathbb{D}}$.
It is clear that $\mathfrak{h}^\ast$ is analytic in $\mathbb{D}$ and $\mathfrak{h}=\mathfrak{h}^\ast$ on $\mathbb{T}$. Further, the harmonicity of $\mathfrak{h}$ in $\mathbb{D}$, together with \cite[(1.5)]{CLW} and \eqref{eq-anal}, ensures that $$\mathcal{P}_{\psi_1}=0.$$ Since \eqref{lem-sh} leads to
$$ {\rm Re} \left[\mathfrak{h}_z(1)+\mathfrak{h}_{\overline{z}}(1)\right]=\frac{2}{\pi},
$$ we see that $\mathfrak{h}$ is the required extremal function for the sharpness of \eqref{mana-2}.
The proof of the lemma is complete.
\end{proof}
\begin{proof}[\bf Proof of Theorem \ref{bsl-2}]
Let $$h(z)=\overline{\beta} f(\alpha z)$$ in $\mathbb{D}$, and let
$$\mathfrak{g}(z)=\overline{\beta}g(\alpha z)$$ in $\mathbb{D}$,
$$\psi(\xi)=\overline{\beta}\overline{\alpha}\varphi(\alpha \xi)$$ on $\mathbb{T}$, and
$$h^*(z)=\overline{\beta}f^*(\alpha z)$$ in $\overline{\mathbb{D}}$.
Then we know from Lemma \ref{bsl-1} that
$$ {\rm Re} [h_z(1)+h_{\overline{z}}(1)]\ge
\frac{2}{\pi}-3\|\mathcal{P}_{\varphi_1}\|_\infty-\frac{1}{64} \|g\|_{\infty},
$$
from which the inequality \eqref{wen-1} in Theorem \ref{bsl-2} follows since
$${\rm Re} [\overline{\beta}(f_z(\alpha)\alpha+f_{\overline{z}}(\alpha)\overline{\alpha})]={\rm Re} [h_z(1)+h_{\overline{z}}(1)].
$$
The inequality \eqref{mana-1} is obvious. For its sharpness, let
$$\mathfrak{f}(z)=\frac{2 \beta}{\pi} \arctan \frac{\overline{\alpha}z+\alpha \overline{z}}{1-|z|^2}
$$
in $\mathbb{D}$. Then $$\mathfrak{f}(z) =\beta \mathfrak{h}(\overline{\alpha} z),$$
where the function $\mathfrak{h}$ is defined in \eqref{mana-3}. By the discussions on the sharpness of the inequality \eqref{mana-2} in the proof of Lemma \ref{bsl-1},
we see that $\mathfrak{f}$ is the required function for the sharpness of the inequality \eqref{mana-1}.
Now, the theorem is proved.
\end{proof}
\section{An example}
In this section, we construct an example to show that, in Theorem \ref{bsl-2}, it is reasonable for us to consider the real part of the quantity $\overline{\beta}(f_z(\alpha)\alpha+f_{\overline{z}}(\alpha)\overline{\alpha})$.
\begin{example}\label{existence}
Assume that
$$g(z)=32 M\left[2-3i(z^2+\overline{z}^2)\right]$$ and
$$f(z)= (1-M)z^2+\frac{Mi}{4} (1-|z|^4)(z^2+\overline{z}^2)+M|z|^4$$ in $\overline{\mathbb{D}}$,
where $0<M<\frac{2}{35\pi}\sqrt{5}(3-\sqrt{2})$.
Then
\begin{enumerate}
\item
$f$ and $g$ satisfy the following non-homogeneous biharmonic equation $$\Delta^2 f=g,$$ and all other assumptions in Theorem \ref{bsl-2} with $\alpha=\beta=1$;
\item
${\rm Re} \big(f_z(1)+f_{\overline{z}}(1)\big)=2(1+M)$, $\|\mathcal{P}_{\varphi_1}\|_\infty=\sqrt{5}M$, $\|g\|_{\infty}=64\sqrt{10}M$ and
$${\rm Im} \big(f_z(1)+f_{\overline{z}}(1)\big)=-2M\not=0,$$
where $\varphi_{1}(\zeta)=\frac{M}{2} \left(4-i (\zeta^2+\overline{\zeta}^2)\right )$ on $\mathbb{T}$.
\end{enumerate}
\end{example}
\begin{proof} Elementary computations yield
\begin{equation}\label{mw-1} f_z(z)=2(1-M)z+\frac{Mi}{2} \left[z(1-|z|^4)-z\overline{z}^2(z^2+\overline{z}^2)\right]+2Mz\overline{z}^2,
\end{equation}
\begin{equation}\label{mw-2}f_{\overline{z}}(z)=\frac{Mi}{2} \left[\overline{z}(1-|z|^4)-z^2\overline{z}(z^2+\overline{z}^2)\right]+
2Mz^2\overline{z}
\end{equation}
and $$\Delta^2 f=g.$$
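The last identity can be checked term by term. Writing $\Delta^{2}=16\,\partial_{z}^{2}\partial_{\overline{z}}^{2}$ and expanding
$\frac{Mi}{4}(1-|z|^{4})(z^{2}+\overline{z}^{2})=\frac{Mi}{4}(z^{2}+\overline{z}^{2}-z^{4}\overline{z}^{2}-z^{2}\overline{z}^{4})$,
the terms $(1-M)z^{2}$, $z^{2}$ and $\overline{z}^{2}$ are annihilated, while
$$16\,\partial_{z}^{2}\partial_{\overline{z}}^{2}\big(Mz^{2}\overline{z}^{2}\big)=64M
\quad\mbox{and}\quad
16\,\partial_{z}^{2}\partial_{\overline{z}}^{2}\Big(-\frac{Mi}{4}\big(z^{4}\overline{z}^{2}+z^{2}\overline{z}^{4}\big)\Big)=-96Mi(z^{2}+\overline{z}^{2}),
$$
so that $\Delta^{2}f=64M-96Mi(z^{2}+\overline{z}^{2})=32M\left[2-3i(z^{2}+\overline{z}^{2})\right]=g(z)$.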
Obviously, $f(0)=0$ and $f(1)=1$. Let
$$\varphi(\zeta)=\frac{M}{2}\zeta \left(4-i (\zeta^2+\overline{\zeta}^2)\right )$$ on $\mathbb{T}$,
and
$$f^\ast(z)=(1-M)z^2+M$$ in $\overline{\mathbb{D}}.$ Then $$f_{\overline{z}}=\varphi$$ on $\mathbb{T}$, $f^*$ is analytic in $\mathbb{D}$, and
$$f^*=f$$ on $\mathbb{T}$.
Since for $z\in \mathbb{D}$,
$$ |f(z)|\le |z|^2 \left(1-\frac{M}{2} (1-|z|^2)^2 \right)<1,
$$
we see that $f(\mathbb{D})\subset \mathbb{D}$.
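Here the displayed bound is just the triangle inequality together with $|z^{2}+\overline{z}^{2}|\leq 2|z|^{2}$:
\begin{eqnarray*}
|f(z)| &\leq& (1-M)|z|^{2}+\frac{M}{2}(1-|z|^{4})|z|^{2}+M|z|^{4}\\
&=&|z|^{2}\Big(1-\frac{M}{2}\big(1-2|z|^{2}+|z|^{4}\big)\Big)
=|z|^{2}\Big(1-\frac{M}{2}(1-|z|^{2})^{2}\Big).
\end{eqnarray*}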
Moreover, the differentiability of $f$ at $z=1$ can be seen from the continuity of
its partial derivatives (cf. \eqref{mw-1} and \eqref{mw-2}).
Now, we have proved that the first conclusion of the example is true.
The equalities
$$ {\rm Re} \big(f_z(1)+f_{\overline{z}}(1)\big)=2(1+M)\;\;\mbox{and}\;\; {\rm Im} [f_z(1)+ f_{\overline{z}}(1)]=-2M\not=0
$$ easily follow from \eqref{mw-1} and \eqref{mw-2}, and elementary computations give
$$\|\mathcal{P}_{\varphi_1}\|_\infty=\max_{z\in \overline{\mathbb{D}}}\left\{\frac{M}{2} \left |4-i (z^2+\overline{z}^2)\right |\right\}=\sqrt{5}M$$
and
$$\|g\|_{\infty}=\max_{z\in \overline{\mathbb{D}}}\left\{32 M\left |2-3i(z^2+\overline{z}^2)\right |\right\}=64\sqrt{10}M.$$
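Both maxima are elementary: for $z\in\overline{\mathbb{D}}$ the quantity $t=z^{2}+\overline{z}^{2}=2\,{\rm Re}(z^{2})$ ranges over $[-2,2]$, whence
$$\max_{|t|\leq 2}|4-it|=\sqrt{16+4}=2\sqrt{5}
\quad\mbox{and}\quad
\max_{|t|\leq 2}|2-3it|=\sqrt{4+36}=2\sqrt{10},
$$
both attained at $z=\pm 1$; multiplication by $\frac{M}{2}$ and by $32M$ gives the stated values $\sqrt{5}M$ and $64\sqrt{10}M$.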
Hence the second conclusion of the example is true too, and so, the proof of the example is complete.
\end{proof}
\begin{remark}
The purpose of adding the condition $0<M<\frac{2}{35\pi}\sqrt{5}(3-\sqrt{2})$ in Example \ref{existence} is to guarantee that $$3\|\mathcal{P}_{\varphi_1}\|_\infty+\frac{1}{64} \|g\|_{\infty}<\frac{2}{\pi},$$ i.e., that the quantity $\frac{2}{\pi}-3\|\mathcal{P}_{\varphi_1}\|_\infty-\frac{1}{64} \|g\|_{\infty}$ is positive.
\end{remark}
\vspace*{5mm}
\noindent {\bf Acknowledgments}.
The research was partly supported by NSFs of China (Nos 11571216, 11671127 and 11720101003) and STU SRFT.
The third author was supported by NSF of Fujian Province (No. 2016J01020)
and the Promotion Program for Young and Middle-aged Teachers in Science and Technology Research of Huaqiao University (ZQN-PY402).
|
2,877,628,088,377 | arxiv | \section{Introduction}\label{intro}
It has been known\cite{BarklaPM53} since the early days of
ferroelectric KH$_2$PO$_4$ (KDP) that the material exhibits a retarded
response to a disturbance as well as a prompt one. In particular, if
its dielectric permittivity,
$\epsilon=\epsilon^\prime-i\epsilon^{\prime\prime}$, is recorded using
a low amplitude $ac$ field in the additional presence of a $dc$ bias field
that is changed abruptly at time $t=0$, then $\epsilon^\prime$
responds by promptly taking a new value, then by decreasing with $\log
t$ as shown in Fig.~\ref{fig1}, curve (a).
Zimmer, Engert, and Hegenbarth\cite{ZimmerFL87} reported
this for a single crystal with the fields parallel to the
ferroelectric $c$-axis, at temperatures $T$~= 4.2, 20.4, 77.8, and
300~K. The property is shared by many
disordered ferroelectrics and will be referred to as ``the
normal after-effect''.
Part of the interest of such studies lies in comparison with
dielectric glasses which also exhibit a ``normal after-effect''
at $T<1$~K.\cite{SalvinoPRL94} It is also well
known that ferroelectrics with diffuse transitions and relaxor
ferroelectrics display low
temperature thermal properties somewhat similar to structural
glasses.\cite{DeYoreoPRB85,HegenbarthF95} A $T^{3/2}$ specific heat
term at $T<5$~K
was also at one time reported for KDP\cite{LawlessPRL76,LawlessPRB76}
though KDP has a normal, sharp ferroelectric transition. It was later
clarified that large, pure KDP crystals can have almost Debye-like
specific heat,\cite{LawlessF87} and lack a ``glassy'' thermal
conductivity anomaly.\cite{DeYoreoPRB85}
The present article reports a study of the dielectric after-effect in
KDP, mostly as pressed powders. These would be certain to have a
strong $T^{3/2}$
specific heat term, and the after-effects were found to be stronger,
and less variable from one sample to another than with single
crystals. In the course of this study, a curious anomaly was
found.\cite{GilchristSSC98} In a narrow temperature range around
7.5~K the effect of a bias switch was a prompt decrease of
$\epsilon^\prime$ followed by an upward relaxation
according to a stretched exponential law. Curves (b), (c) and (d) in
Fig.~\ref{fig1} show schematically what was observed at 9.0, 6.0 and
7.5~K respectively. The retarded response appeared to be the sum of
two terms, the usual downward relaxation as $\log t$ (the normal
after-effect) and an upward relaxation with a characteristic time
constant that depended on $T$. Near 7.5~K, the upward relaxation
dominated throughout the range 6~s~$<t<2000$~s, at 9.0~K only at the
shorter times, at 6.0~K the longer ones. This term will
be referred to as ``the anomalous after-effect''. Two electroded single
crystal samples exhibited the anomalous after effect, as well as the
normal one,\cite{GilchristSSC98} but both effects were more
precisely measurable using pressed powders.
Among many previous dielectric studies of KDP, ones by Holste,
Lawless and Samara\cite{HolsteF76} and by Motegi, Kuramoto, and
Nakamura\cite{MotegiJPSJ83,KuramotoJPSJ84} paid particular attention
to temperatures below 25~K. Over a continuous background of absorption
and dispersion rising regularly with $T$, Motegi {\it et al.} observed
two specific dispersions, one sensitive to fields parallel to the
ferroelectric axis ($E\parallel c$), that required the presence of
domain walls, the other sensitive to $E\parallel a$ and equally
present in poly- or mono-domain crystals. The $E\parallel c$
dispersion (hereafter ``KMN-C'') obeyed an Arrhenius law with
activation energy $A$~= 19~meV and pre-exponential frequency, $f_0$~=
45~GHz,\cite{KuramotoJPSJ84} i.e., it was centered at frequency
$f=f_0\exp(-A/kT)$. It looks like the typical effect of a reorientable
point defect species in a crystalline environment. A link between
the anomalous after-effect
and KMN-C was suggested\cite{GilchristSSC98} by the
finding that the time constant of the stretched exponential obeyed the
same Arrhenius law as KMN-C. As it will be shown in
Sec.~\ref{singlecrystals} below, the agreement is not as exact as
initially supposed, but it will also be reported (Sec.~\ref{particlesize})
that the strengths of the two effects were closely correlated, when
samples of different qualities were compared.
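As a rough consistency check, these parameters place the KMN-C loss peak, at a measuring frequency $f$, at the temperature where $f=f_0\exp(-A/kT)$, i.e.,
\begin{eqnarray*}
T_{\rm peak}=\frac{A/k}{\ln(f_0/f)}\approx\frac{220~{\rm K}}{\ln(4.5\times 10^{10}/1.2\times 10^{3})}\approx 12.6~{\rm K}
\end{eqnarray*}
for $f$~= 1.2~kHz (using $A/k\approx 220$~K for $A$~= 19~meV), consistent with the position of the 12~K loss peak described in Sec.~\ref{weakfield} below.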
The main aim of the present article is to report the anomalous
after-effect and to suggest an explanation for it. Since it appears to be
related both to the normal after effect and to KMN-C, a report and
discussion in some detail of each of these effects is also necessary.
It is notorious that a dielectric study of any solid sample that is
not a properly electroded single crystal can lead to spurious results
that bear little relation to bulk material properties. It is argued
below in Sec.~\ref{results} and \ref{particlesize} that serious errors
are avoided by restricting attention to $T<25$~K, and interpreting
the measurements cautiously. The general features of the data are
also described in Sec.~\ref{results}, by reference to one
representative KDP sample. In Sec.~\ref{behavior} the normal
after-effect is reported. This allows scaling parameters to be
defined, that are useful in the subsequent (Sec.~\ref{T25K} and
\ref{particlesize}) description of the anomalous after-effect. Results
obtained with four single crystal samples are reported in
Sec.~\ref{singlecrystals}, and the three effects, their interrelations
and possible origins are discussed in Sec.~\ref{discussion}.
\section{Samples and measurements}\label{measurements}
The pressed powder samples were derived from Aldrich 99+\%~ACS
reagent. In most cases the product was ground
manually to $\sim~3~\mu$m particle size (as examined by swept-beam
electron microscopy at 20~kV) then pressed ($\sim~50$~MPa) into
pills of diameter 10~mm and thickness normally 250--400~$\mu$m,
density~$>$~80\% of crystal density. The pills were pressed
($\sim~100$~kPa) between lead or indium electrodes. Four single
crystal samples were also
studied, and these are described in Sec.~\ref{singlecrystals}, where
their results are also reported.
Capacitance and
conductance were measured using a General Radio 1621 transformer
bridge system, and expressed as $\epsilon^\prime$ and
$\epsilon^{\prime\prime}$ based on the external dimensions of the
pills and electrodes. The variations of $\epsilon^\prime$ and
$\epsilon^{\prime\prime}$ during a 360~s interval were obtained from
a chart recording of the bridge off-balance signals. At longer times
the bridge was rebalanced for each point. The temperature was controlled
using a carbon resistor sensor (Allen Bradley 390~$\Omega$), that was
calibrated periodically by substituting
a Pt and a Ge resistor for the sample capacitance. Thermometry errors
were of two types, a short and a long term error. The short term error
was caused by the carbon resistor varying with time following a
temperature change. This was noticeable below 10~K and
non-negligible below 6~K. It was allowed for by assuming that
\begin{eqnarray*}
\frac{d\epsilon^\prime}{dt}=\left(\frac{\partial\epsilon^\prime}{\partial
t}\right)_T+\left(\frac{\partial\epsilon^\prime}{\partial
T}\right)_t\frac{dT}{dt},
\end{eqnarray*}
where $d\epsilon^\prime/dt$ was measured and
$(\partial\epsilon^\prime/\partial t)_T$ was required. By waiting long
enough before switching the bias field, the last term became a slow
drift that could be subtracted confidently. The ``long term'' error
had various contributing causes. In routine work with different
samples, the absolute $T$ was known to $\pm 0.5$~K, but for the
extensive work on Sample~2, to $\pm 0.2$~K. For the detailed study around
7.5~K (Fig.~\ref{fig9}) the relative error was $\pm 0.05$~K (rapid
succession of measurements without heating above 122~K). For
Fig.~\ref{fig12}, absolute $T$ was known to $\pm 0.01$~K.
\section{Results, Generalities}\label{results}
The results reported here and in Sec.~\ref{behavior} were obtained
with a typical pressed powder sample (Sample~1) prepared from KDP as
received (not recrystallized).
\subsection{Weak $ac$ field, no bias}\label{weakfield}
Figure~\ref{fig2}
shows the response of Sample~1 to
alternating fields
of different strengths and a fixed frequency. At ambient $T$, sample
impedance was limited by conduction, but this diminished rapidly
on cooling, and became
undetectably small already well above the ferroelectric transition
temperature, $T_c$, and outside the range of Fig.~\ref{fig2}. In
Fig.~\ref{fig2}, $T_c$
is marked by a peak of $\epsilon^\prime$ at the usual value of 122~K.
Well below $T_c$ there are $\epsilon^{\prime\prime}$ peaks near 12~K
and near 60~K.
The 12~K peak is a genuine property of the material but the 60~K
peak is not. A spurious peak is to
be expected near 60~K for the following reason. For the sake of
argument, suppose each grain had an anisotropic permittivity as
measured with a properly electroded, unclamped single crystal. Both
$\epsilon^\prime_c$ and $\epsilon^{\prime\prime}_c$ rise rapidly
with $T$ in this range,\cite{BarklaPM53,MotegiJPSJ83}
$\epsilon^\prime_c$ changing in order of magnitude from $\approx 10$
to $>10^4$. At $\epsilon^\prime_c\approx 10$, there would be
appreciable, though nonuniform penetration of $E\parallel c$ field
component into suitably oriented powder grains, but at
$\epsilon^\prime_c>10^4$ such penetration would be negligible, the
applied field being confined to the vacuum gaps, and to $E\parallel
a$ within the grains ($\epsilon^\prime_a$ remains moderate, rising to
a peak value $\approx 60$ at $T_c$, while $\epsilon^{\prime\prime}_a$
remains low). The best compromise between the rising
$\epsilon^{\prime\prime}_c(T)$ and the falling $E\parallel c$
penetration then locates a loss peak as in Fig.~\ref{fig2}(a). It is
not a Maxwell-Wagner effect and does not involve $dc$ conduction.
On the other hand at $T<25$~K, crystal $\epsilon_a$ and $\epsilon_c$
values are both moderate and depend weakly on $T$. At the moderate
field values used, the dielectric response did not depend markedly on
field strength (Fig.~\ref{fig2}(a)), while $dc$ conductance was totally
insignificant. Insofar as the constituents of a composite dielectric
material can be regarded as continuous media, the $\epsilon$ value of
the composite is a mean value of the $\epsilon$ of the constituent
parts. The possible complex mean values lie within rigorously
determined
bounds.\cite{BergmanPR78,BergmanPRL80,MiltonAPL80,MiltonJAP81} In
these conditions, the 12~K loss peak in Fig.~\ref{fig2} can be
assigned as a peak in $\epsilon_c$ or $\epsilon_a$ or both. In
absolute value, it is distinctly stronger than either the
$\epsilon_c$ or $\epsilon_a$ dispersions reported by Motegi {\it et
al.}\cite{MotegiJPSJ83,KuramotoJPSJ84} in single crystals, but in
position it is nearer KMN-C (see Sec.~\ref{singlecrystals} below and
Fig.~\ref{fig12}). The
background value of $\epsilon^{\prime\prime}$ in Fig.~\ref{fig2} is
also an order of magnitude higher than the background
$\epsilon^{\prime\prime}_c$ in Motegi's crystals, so the relative
strength of the special absorption effect is quite similar. It is to
be noted that the background $\epsilon^{\prime\prime}_c$ value as well
as KMN-C depended on the presence of domain
walls,\cite{KuramotoJPSJ84} so the relative strength could be used for
comparing the KMN dispersion in samples of different qualities. It
will be useful to characterize this relative strength for different
samples by
expressing the ratio of the peak $\epsilon^{\prime\prime}(T)$ near
12~K to the minimum near 15~K, the ratio $a/b$ in Fig.~\ref{fig2}(b),
as a percentage. For Sample~1, $a/b$~= 126\% and in absolute value,
$c\approx 0.042$.
\subsection{Effects of bias switches}\label{switches}
Figure~\ref{fig3} shows typically how $\epsilon^\prime$ behaved
following bias steps of different magnitudes at any $T$ value not near
7.5~K. ``Step'' here means a
single abrupt change of applied field.
Normally after each
step, and after the subsequent changes of $\epsilon$ had been recorded
the sample was heated to $T_c$ and recooled. As Fig.~\ref{fig3}
shows, after a small bias
step, $\epsilon^\prime$ promptly rose then gradually returned towards
its original value, but after a larger step it gradually moved to new,
lower values. A curve like (a) of Fig.~\ref{fig1} would be found after
a bias step of intermediate magnitude. In each case, taking $t=0$ at
the bias step,
$\epsilon^\prime(t>0)\approx\epsilon^\prime(\infty)+Ct^{-p}$, where
$0<p<0.1$ and $C$ is a constant. To estimate $\epsilon^\prime(\infty)$
would require a long, uncertain extrapolation, and over a limited $t$
range the power law differs little from a logarithmic variation, so
it is more useful to define $s^\prime=d\epsilon^\prime/d\ln t$, and
find $s^\prime$, which was usually a slowly varying function of $t$.
The variations
of $\epsilon^{\prime\prime}$ (not shown in Fig.~\ref{fig3}) were
similar, but scaled down by a factor of $\approx 4$. Defining
$s^{\prime\prime}=d\epsilon^{\prime\prime}/d\ln t$, $\vert
s^{\prime\prime}\vert$ was like $\vert s^\prime\vert$, usually a slowly
diminishing function of $t$. In cases like curves (b) and (c) of
Fig.~\ref{fig1}, $s^\prime$ depended more strongly on $t$ and even
changed sign. In every case the final value of $\epsilon^\prime$, for
$t\rightarrow\infty$, was less than or equal to the initial value
$\epsilon^\prime(t<0)$. The strength of the $ac$ measuring field was
unimportant provided it did not exceed $0.4\Delta E$, otherwise $\vert
s^\prime\vert$ and $\vert s^{\prime\prime}\vert$ were underestimated.
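The near-logarithmic form of the relaxation follows from the smallness of $p$: since
\begin{eqnarray*}
Ct^{-p}=C\exp(-p\ln t)\approx C(1-p\ln t) \quad\mbox{when } p\ln t\ll 1,
\end{eqnarray*}
a power law with $p<0.1$ is barely distinguishable from a straight line in $\log t$ over the accessible range of $t$, and $s^\prime=-pCt^{-p}$ is indeed a slowly varying function of $t$.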
Figure~\ref{fig3} shows two relaxations of $\epsilon^\prime(t)$ that
start from a same value, $\epsilon^\prime(t<0)$ even though in one
case the sample had been cooled from $T_c$ in a bias field, the other
in zero field. At $T\leq 25$~K, $\epsilon^\prime(t<0)$ was generally
reproduced to within 0.1\% over a series of thermal cycles up to $T_c$
and back, even though the bias field was sometimes zero, sometimes
$\approx 1$~MV/m and the cooling rate also was variable. Similarly the
value of $\epsilon^{\prime\prime}$ prior to any bias step was
reproduced to within 1\%. This is quite different from single crystal
behavior. Three types of bias switches were studied. Either the sample was
cooled in zero field, and field switched on at $t=0$, or it was field
cooled and at $t=0$, the field switched off or reversed. Repeated
measurements at 25~K with bias field switched from 0 to 920~kV/m,
separated by thermal cycles to $T_c$ yielded standard deviations $\pm$
4\% for $s^\prime$ and $\pm$2\% for $s^{\prime\prime}$, excluding
data from the first few cycles. If instead the sample was cooled in
920~kV/m and this bias switched off, $\vert s^\prime\vert$ was
3$\pm$4\% higher and $\vert s^{\prime\prime}\vert$, 2$\pm$2\% higher.
These are not significant differences, and the same applies if the
sample was cooled in 460~kV/m and this switched to -460~kV/m. Only the
magnitude $\vert\Delta E\vert$ of the bias change was important. In
this respect also the pressed powders behaved quite differently from
single crystals. This point established, the three types of bias
switches were used indifferently. Since also the sign of $\Delta E$ is
irrelevant, it will be taken to be positive.
A spurious after-effect could have been caused by space charge
migration. If charge were to gradually migrate and accumulate, the
effective penetration of the bias field into the grains would
gradually diminish. This might cause $\epsilon$ to return towards its
original value, but not to acquire a new, lower value. It will be
reported (Sec.~\ref{particlesize}) that samples of
very different qualities always had limiting $s^\prime$ and
$s^{\prime\prime}$ values of the same order of magnitude, which argues
against charge migration.
Sequences of bias changes without change of $T$ were not studied
systematically, but schematically, if the bias field was switched
periodically between
values $E_1$ and $E_2$, starting at $t=0$, without changing $T$, each
switch initiated a new relaxation. In that case
$\epsilon^\prime(t)\approx\epsilon^\prime(\infty)+C(t-t_i)^{-p}$,
where $t_i$ is the time of the most recent bias switch. For small
$\vert E_2-E_1\vert$,
$\epsilon^\prime(\infty)\approx\epsilon^\prime(t<0)$, while for
large $\vert E_2-E_1\vert$, after several switches
$\epsilon^\prime(\infty)$ approached a constant lower value.
\section{Permittivity changes following a bias
step}\label{permittivity}
A first systematic study of the after-effect at $T\approx 25$~K is
reported because this is far from any special feature in
Fig.~\ref{fig2}(a), and the normal after-effect could be observed
without the anomalous effect.
\subsection{Behavior near 25~K}\label{behavior}
Figure~\ref{fig4} shows $-s^\prime$ and $-s^{\prime\prime}$ vs. the
magnitude $\Delta E$ of the bias step. Each pair of
data points corresponds to a first bias step after a thermal cycle
to $T_c$ and back to 25~K. $s^\prime$ and $s^{\prime\prime}$ are the
slopes of the best logarithmic fits to $\epsilon^\prime(t)$ curves as
in Fig.~\ref{fig3} and corresponding
$\epsilon^{\prime\prime}(t)$ data for $t$ between 6 and 360~s.
It is already clear from
Fig.~\ref{fig3} that $s^\prime$ was not always proportional to
$\Delta E$. Figure~\ref{fig4} shows that $s^\prime\propto
s^{\prime\prime}\propto\Delta E$ only at small values. At larger
$\Delta E$ values $s^\prime$ and $s^{\prime\prime}$ reached limits.
The limiting value of $-s^\prime$ will be written $s^\prime_0$,
and another independent scaling
parameter $\Delta E^\prime_0$ will be defined by putting
$s^\prime/s^\prime_0=-\Delta E/
\Delta E^\prime_0$ for small $\Delta E$. $s^{\prime\prime}_0$ and
$\Delta E^{\prime\prime}_0$ similarly define the scale of the
$s^{\prime\prime}(\Delta E)$ curve. The values of the four parameters
for each of the three $f$ values of Fig.~\ref{fig4} are given in
Table~\ref{tab1}. Their precise absolute values are not significant on
account of the incomplete and nonuniform penetration of the applied
bias fields into the grains of KDP. It is more useful to note that
$s^\prime_0$ and $s^{\prime\prime}_0$ are decreasing functions of
$f$, while $\Delta E^\prime_0$ and $\Delta E^{\prime\prime}_0$ are
increasing functions. Also, at given $f$, $\Delta E^\prime_0\approx
1.5\Delta E^{\prime\prime}_0$.
\vbox{
\begin{table}
\caption{Scaling parameters for the rate of change of permittivity of
a typical pressed powder (Sample~1) at 25~K
measured between 6 and 360~s after a step, $\Delta E$ of bias
field. $f$ is the frequency of the low-amplitude measuring field.
The rates of change are expressed as $s^\prime=t{\rm d}\epsilon^\prime/
{\rm d}t$, $s^{\prime\prime}=t{\rm d}\epsilon^{\prime\prime}/{\rm
d}t$. Small steps, $\Delta E$ caused proportionate changes,
$s^\prime/s^\prime_0=-\Delta E/\Delta E^\prime_0$,
$s^{\prime\prime}/s^{\prime\prime}_0=-\Delta E/\Delta
E_0^{\prime\prime}$, but $-s^\prime_0$ and $-s^{\prime\prime}_0$
represent limiting values for $s^\prime$ and $s^{\prime\prime}$. See
also Fig.~\ref{fig4}.}
\begin{tabular}{ccccc}
$f$ (kHz) & $\Delta E^\prime_0$ (kV/m) &
$s^\prime_0$ & $\Delta E^{\prime\prime}_0$ (kV/m) &
$s^{\prime\prime}_0$\\
0.12 & 31 & 0.0077 & 20 & 0.0018 \\
1.2 & 37 & 0.0055 & 24 & 0.0013 \\
12 & 43 & 0.0039 & 28 & 0.0009\\
\end{tabular}
\label{tab1}
\end{table}}
The curves in Fig.~\ref{fig4} drawn to fit $s^\prime$ at 120~Hz and
12~kHz correspond to an empirical formula
$s^\prime/s^\prime_0=-[1+(\Delta E^\prime_0/\Delta E)^2]^{-1/2}$ with
the $s^\prime_0$ and $\Delta E^\prime_0$ values given in
Table~\ref{tab1}. The four other curves do not correspond to
Table~\ref{tab1}, but were obtained from these two by supposing that
for any given $\Delta E$ value, $s^\prime$ depends on the frequency of
the measuring field according to a power law, and the response obeys
the Kronig-Kramers relation. The appeal to Kronig-Kramers is based
on the reasoning
that whereas the response to $\Delta E$ on the time-scale of minutes
is essentially nonlinear, $\epsilon$ represents an approximately
linear response to a small field on the millisecond time-scale, and a
set of $\epsilon(f)$ data provides a ``snapshot'' for given $\Delta E$
and $t$. The same is true at $t+\delta t$, and so of the derivative
$s^\prime-is^{\prime\prime}$. It shows there is a link between the
observations that $\Delta E^\prime_0$ and $\Delta E^{\prime\prime}_0$
increase with $f$, and that $\Delta E^{\prime\prime}_0$ is smaller
than the corresponding $\Delta E^\prime_0$.
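It may be noted that the empirical formula interpolates correctly between the two regimes used to define the scaling parameters:
\begin{eqnarray*}
\frac{s^\prime}{s^\prime_0}\approx -\frac{\Delta E}{\Delta E^\prime_0}\quad (\Delta E\ll\Delta E^\prime_0),
\qquad
\frac{s^\prime}{s^\prime_0}\rightarrow -1\quad (\Delta E\gg\Delta E^\prime_0),
\end{eqnarray*}
reproducing both the initial proportionality and the saturation seen in Fig.~\ref{fig4}.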
\subsection{$T<25$~K}\label{T25K}
The results reported here were obtained with another typical pressed
powder (Sample~2) prepared from material that had been recrystallized
in bidistilled water, the solution microfiltered. Referring to
Fig.~\ref{fig2}(b), $a/b$~= 130\%, $c\approx 0.028$.
Figure~\ref{fig5} shows $s^\prime(\Delta E)$ and
$s^{\prime\prime}(\Delta E)$ for Sample~2 at a fixed $f$, fixed $t$
interval and three
temperatures. In this linear plot the details near the origin are not
clearly seen, but the curves have two straight-line
sections joined by a curved section. In a similar linear plot the
same would be true of the 25~K data of Fig.~\ref{fig4}, but at these lower $T$ values
the straight-line section representing $s^\prime(\Delta E)$ at high
$\Delta E$ is not horizontal. It has a distinct positive slope at
each $T$, steep at 7.5~K. In the case of
$s^{\prime\prime}(\Delta E)$ the effect is much less pronounced.
It is necessary to generalize the
definitions of the scaling parameters introduced in
Sec.~\ref{behavior}: the parameters are now defined so that the
straight lines intersect at $(\Delta E^\prime_0,-s^\prime_0)$ and
$(\Delta E^{\prime\prime}_0,-s^{\prime\prime}_0)$ respectively, as
shown in Fig.~\ref{fig6}.
The parameters so defined for Sample~2 at various $T$ are shown in
Fig.~\ref{fig7}. $s^\prime_0$ and $s^{\prime\prime}_0$ are
increasing functions of $T$, but $\Delta E^\prime_0$ and $\Delta
E^{\prime\prime}_0$ peak near 5~K. Below 5~K, $s^\prime_0$ and $\Delta
E^\prime_0$, $s^{\prime\prime}_0$ and $\Delta E^{\prime\prime}_0$ vary
in the same proportions, because the response to a small $\Delta E$
became temperature independent.
In a linear plot like Fig.~\ref{fig5}, the slopes of the straight
lines at high $\Delta E$ can be expressed
in dimensionless units by using $s^\prime_0$ and $\Delta E^\prime_0$
or $s^{\prime\prime}_0$ and $\Delta E_0^{\prime\prime}$.
In such units the initial slope is always -1 by definition. The values
obtained for these positive slopes of $s^\prime(\Delta E)$ were 0.01,
0.02, 0.086, 0.32, 0.04, and 0.017 respectively at $T$~= 1.37, 2.17,
4.9, 7.5, 9.9, and 12.4~K. For $s^{\prime\prime}(\Delta E)$ the slopes
were 0.04, 0.09, 0.003, and 0.000 respectively at $T$~= 4.9, 7.5,
9.9, and 12.4~K. A closer scrutiny of the 25~K data showed a
significant positive slope there also, for $s^\prime$ at $\Delta
E>\Delta E^\prime_0$. Statistical treatment of all data extending
to $\Delta E>20\Delta E^\prime_0$ (five pressed powder samples) yielded a
dimensionless slope 0.003$\pm$0.001.
Figure~\ref{fig8} shows $s^\prime(T)$ and $s^{\prime\prime}(T)$, at
fixed $f$ and fixed $t$ interval, for
Sample~2 and another similar sample at three different $\Delta E$ values.
The $\Delta E$
values are such that $\Delta E>\Delta E^\prime_0(T)$ always, so that
far away from 7.5~K, $s^\prime\approx -s^\prime_0$ and
$s^{\prime\prime}\approx -s^{\prime\prime}_0$. The position of the
positive peak is independent of $\Delta E$. In relative as well as in
absolute value it is weaker in $s^{\prime\prime}$ than in $s^\prime$.
Figure~\ref{fig9} shows $s^\prime(T)$ in the peak region
at two frequencies and two time lapse intervals. The position depends
distinctly on the time lapse but not at all on $f$. The magnitude is
a very slowly diminishing function of $f$, which is consistent with
the weakness of the $s^{\prime\prime}$ peak (Kronig-Kramers).
Figure~\ref{fig10} illustrates another method of studying the upward
relaxation effect. Two data sets are shown, using different
procedures. In one case all the $\epsilon^\prime$ measurements were
made at 4.9~K, where the change $\Delta\epsilon^\prime$ caused by the
bias step could be measured with little ambiguity because $s^\prime$
was relatively very small. The sample was then annealed at 5.45~K for
2~mn and returned to 4.9~K for another $\epsilon^\prime$ measurement
without further bias change, annealed at 6.0~K, at 6.6~K and so on.
The $x$ coordinate in Fig.~\ref{fig10} is the highest anneal temperature
prior to each measurement. The steepest slope near 7~K corresponds to
the $s^\prime$ peaks in Figs.~\ref{fig8} and \ref{fig9}. Anneals up to
20~K continued
to have some effect, but then a plateau extended to 40~K, at which
$\sim 75$\% of the $\epsilon^\prime$ shift had been annealed out.
The original $\epsilon^\prime$ value was almost entirely restored by
an anneal to 65~K. For the other data set, $\epsilon^\prime$ was
always measured at 9.9~K, where the bias step was effected, but after
each 2~min anneal at different $T$, and after returning to 9.9~K the
sample was cycled to 122~K. Where they can be compared, i.e., at $T>9.9$~K,
the curves are similar apart from a scaling factor. The scaling factor
suggests bias step $\Delta E$ was more effective at 9.9~K than at
4.9~K, doubtless because $\Delta E/\Delta E^\prime_0$ was larger.
A similar procedure was used to compare the strength of the anomalous
after-effect of different samples after bias steps of different
magnitudes. The bias was stepped at 6.0~K and the
sample was annealed for 3~min at 9.0~K before remeasuring $\epsilon$ at
6.0~K. This anneal generally restored $\epsilon^\prime$ half way back
to its original value. The $\Delta\epsilon^\prime$ recorded in these
cases (and plotted in Fig.~\ref{fig11}) was the change caused by the
anneal alone, without reference to the original value before the bias
step.
\subsection{Particle size and impurity effects}\label{particlesize}
The variations of the scaling parameters from one sample to another
are reported here, and also of the KMN dispersion strength and the
strength of the upward relaxation of $\epsilon^\prime$ corresponding
to the anomalous after-effect. The principal results are also
summarized in Table~\ref{tab2}.
\widetext
\begin{table}
\caption{Summary of the effects of smaller particle size and of added
impurities compared with ``standard, pure'' pressed powder samples.
Different impurity species all had qualitatively similar effects.}
\begin{tabular}{p{6cm}p{6cm}p{3cm}}
& Smaller particle size & Added impurities\\
$s^\prime_0$, $s^{\prime\prime}_0$ & little changed
(slightly~increased) & unchanged\\
$\Delta E^\prime_0$, $\Delta E^{\prime\prime}_0$ & increased &
increased\\
KMN dispersion & unchanged & weakened\\
strength of anomalous after-effect & unchanged (at~equivalent $\Delta E/\Delta E^\prime_0$~values) & weakened\\
\end{tabular}
\label{tab2}
\end{table}
\narrowtext
Firstly it is necessary to consider variations amongst nominally pure
samples prepared as in Sec.~\ref{measurements}. This category includes
Samples~1 and 2 that were said to be ``typical''. It was found that
$s^\prime_0$, $\Delta E^\prime_0$, $s^{\prime\prime}_0$, and $\Delta
E^{\prime\prime}_0$ for such samples might vary by as much as a
factor 2, but often much less. This would be due to accidental
variations of density, homogeneity, and particle size distribution of
the pressed powders. Therefore when comparing different categories
of samples, any variations of these parameters exceeding a factor 2
are considered significant. Neither
the scaling parameters nor the KMN strength depended significantly on
whether the material was used as received or recrystallized, and
whether in bidistilled water (solution microfiltered) or in deionized
water. There was no apparent difference between moderately pure and
highly pure samples. Mean values at $T$~= 25~K and $f$~=
1.2~kHz were $s^\prime_0\approx 0.005$ and $\Delta E^\prime_0\approx
35$~kV/m. Referring to Fig.~\ref{fig2}(b), the KMN dispersion strength
was characterized by 125\%$<a/b<$145\% and 0.025$<c<$0.045.
To investigate the effects of particle size, two samples were prepared
from powders more thoroughly ground than usual, one of ``as received''
material, the other recrystallized. Examination showed many particles
of globular shape and diameter $\approx 1~\mu$m. With the smaller
average particle size, these samples undoubtedly contained a higher
proportion of severely damaged and nonferroelectric material. Both
had slightly higher $s^\prime_0$ and $s^{\prime\prime}_0$ values, but
markedly higher $\Delta E^\prime_0$ and $\Delta E^{\prime\prime}_0$
values than standard samples. At $T$~= 25~K and $f$~= 1.2~kHz,
$s^\prime_0\approx 0.007$ and $\Delta E^\prime_0\approx 170$~kV/m. On
the other hand two samples with larger than normal particles (loose
powder contained angular shaped particles of dimensions $\approx
10~\mu$m) yielded parameter values similar to those of standard samples. The
larger particles would be likely to have broken up during pressing.
Similarly, a normally ground sample pressed at 500~MPa had $\Delta
E^\prime_0$ and $\Delta E^{\prime\prime}_0$ values typical of the more
thoroughly ground samples pressed as usual at 50~MPa. The KMN
dispersion strength was found not to depend significantly on particle
size.
A series of samples was prepared in the usual way from material
recrystallized from nonstoichiometric or impure solutions. Whatever
the impurity species, the results were abnormally high $\Delta
E^\prime_0$ and $\Delta E^{\prime\prime}_0$ values, unchanged
$s^\prime_0$ or $s^{\prime\prime}_0$ and weakened KMN dispersions. In
particular, when the solution contained 0.2~H$_3$PO$_4$, 0.1~KHSO$_4$,
0.2~NH$_4$H$_2$PO$_4$ or 2~RbH$_2$PO$_4$ per 100~KDP, or 2~D$_2$O per
98~H$_2$O (2\%~d), $s^\prime_0$ ranged from 0.0035 to 0.0077 as usual,
but $\Delta E^\prime_0$ from 85 to 315~kV/m (always at $T$~= 25~K
and $f$~= 1.2~kHz) while, referring to Fig.~\ref{fig2}(b),
101\%$<a/b<$108\% and 0.014$<c<$0.018. Increased impurity
concentration (0.5~H$_3$PO$_4$, 0.3~KHSO$_4$ or 11~RbH$_2$PO$_4$ per
100~KDP) caused further increase of $\Delta E^\prime_0$, still no change
to $s^\prime_0$ but further weakening or disappearance of the KMN
dispersion. Between 5~K and 25~K the temperature variations of the
four parameters were roughly similar for pure or impure samples with
coarse or fine grains (as Fig.~\ref{fig7}).
The impure samples also had weaker anomalous after-effects.
Figure~\ref{fig11} demonstrates a correlation between strength of
anomalous after-effect and KMN dispersion strength. Two groups of
samples are featured. The first group comprises seven nominally pure
samples. These were prepared as usual (Sec.~\ref{measurements}) or
were more thoroughly ground than usual so they had a variety of
$\Delta E^\prime_0$ values, but all had normal KMN dispersions
($a/b>$125\% and $c>0.025$). The second group included the five impure
samples mentioned above, and one other that fell into the same
category (101\%$<a/b<$108\% and 0.014$<c<$0.018). As mentioned in
Sec.~\ref{T25K}, the strength of the anomalous after-effect was
characterized by $\Delta\epsilon^\prime$, the change in
$\epsilon^\prime$ caused by anneal at 9.0~K, following a bias step at
6.0~K. To allow for variation of sample densities and
homogeneities, $\Delta\epsilon^\prime$ was normalized with respect to
$s^\prime_0$ (as measured at $T\approx 25$~K and $f$~= 1.2~kHz). If
only the ``normal KMN'' samples are selected,
$\Delta\epsilon^\prime/s^\prime_0$ shows a strong linear correlation
with $\Delta E$, but only if the latter is also normalized with
respect to $\Delta E^\prime_0$. The line does not pass through the
origin, but below it. For $\Delta E<6\Delta E^\prime_0$ ($\Delta
E^\prime_0$ at 25~K, 1.2~kHz), or equivalently $\Delta E<2\Delta
E^\prime_0$ ($\Delta E^\prime_0$ at 7.5~K), the normal after-effect
(downward relaxation) still dominated. The points in Fig.~\ref{fig11}
for the ``weak KMN'' samples fall near another line of lower slope,
indicating a weaker anomalous after-effect.
One other impure sample deserves a special comment. Following a known
example,\cite{NakamuraJJAP81,AbeJPSJ84} KDP was recrystallized from
solution containing a large excess of base, 58~KOH for 100~KDP. The
crystals were very hygroscopic and when removed from the dessicator
and pressed into pills, fluid was expelled and filled the space
between grains (pill density = 96\% of crystal density). Several
other samples were more or less conductive at room temperature, but
in this respect, this one was an extreme case. Nevertheless, it
behaved at low $T$ almost like a usual, slightly impure sample. At
25~K, $\epsilon^\prime$, $\epsilon^{\prime\prime}$, $s^\prime_0$ and
$s^{\prime\prime}_0$ were rather higher than usual. The $ac$ field
would have penetrated the grains more effectively, on account of the
higher permittivity of the intergranular space. The KMN dispersion
was observed near 12~K (at 1.2~kHz) as usual, as also the anomalous
after-effect centered around 7.5~K. This demonstrates conclusively
that none of the reported low $T$ effects is a purely surface effect,
and that the KMN dispersion is unlikely to be a Maxwell-Wagner
effect, as already argued by Kuramoto {\it et al.}\cite{KuramotoJPSJ84}
The 2\% deuterated sample was classed amongst the impure samples
because of its properties outlined above. It is logical to suppose
that protonic impurity in KD$_2$PO$_4$ (DKDP) would have a similar
effect, so that if an analog to the KMN dispersion occurs in DKDP it
would be necessary to look for it in a $d>98$\% sample. Such an effect
was searched for at 5~K$<T<$80~K, but not found with a commercial
(Aldrich) 98\% $d$ sample nor with another KDP sample recrystallized
twice from 99.8\% D$_2$O in an atmosphere free of natural humidity.
\section{Single crystals}\label{singlecrystals}
Four single-crystal samples were studied. SC~1 consisted of two
slabs, cut normal to the $c$-axis and silver electroded by evaporation:
total area 72~mm$^2$, mean thickness 0.73~mm. The two pieces were
connected in parallel. SC~2 consisted of two other $c$-cut slabs,
gold electroded, also connected in parallel: 38~mm$^2\times 0.35$~mm.
SC~3 idem but the two pieces were cut normal to an
$a$-axis and silver painted: 75~mm$^2\times 0.94$~mm. SC~4 was a single
$c$-cut plate of irregular shape 61~mm$^2\times 0.30$~mm, gold
electroded.
Both the normal (at 25~K) and the anomalous (near 7.5~K) after-effects
were observed with SC~1 and SC~2 ($E\parallel c$). All low $T$
dielectric properties were sensitive to cooling speed through $T_c$,
and to whether the sample was cooled in a field or in zero field. They were less accurately
reproducible from one thermal cycle to another than with pressed
powders. With SC~1, after fast zero-field
cooling, $\vert s^\prime\vert$ and $\vert s^{\prime\prime}\vert$ were
typically several times smaller than with pressed powders,
\cite{GilchristSSC98} but so
also were $\epsilon^{\prime\prime}$ and $d\epsilon^\prime/dT$. At
25~K, attempts to apply fields $>200$~kV/m always caused an
instability. At lesser $\Delta E$ values, $\vert s^\prime\vert$ varied
roughly as $\Delta E^{0.4}$, and no $s^\prime_0$ or $\Delta
E^\prime_0$ value could be estimated. Near 7.5~K, the anomalous
after-effect was characterized by a lower $\alpha$ constant in the
stretched exponential law. Thus if
$\epsilon^\prime(t)=\epsilon^\prime(\infty)-C\exp\!\left[-(t/\tau)^\alpha\right]$,
$\alpha$ took the value 0.45 for pressed powder Sample~1 at 7.5~K,
but 0.34 for SC~1 at the same $T$. With SC~2, also fast
zero-field cooled, all the low $T$ dielectric properties were several
times weaker than with SC~1.
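As an illustration, the stretched-exponential fit quoted above can be reproduced with a standard least-squares routine (\texttt{scipy} assumed; the starting values below are guesses of ours, not the ones actually used):
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

def stretched_exp(t, eps_inf, C, tau, alpha):
    return eps_inf - C * np.exp(-(t / tau) ** alpha)

def fit_after_effect(t, eps_t):
    # t, eps_t: a measured anneal record at fixed T (e.g. 7.5 K)
    p0 = [eps_t[-1], eps_t[-1] - eps_t[0], np.median(t), 0.4]
    popt, _ = curve_fit(stretched_exp, t, eps_t, p0=p0, maxfev=10000)
    return dict(zip(["eps_inf", "C", "tau", "alpha"], popt))
\end{verbatim}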
With SC~3 very little after-effect was found following a bias
step, $\Delta E$~= 300~kV/m at 25~K. This puts an upper limit on
$\vert s^\prime\vert$ of $1.5\times 10^{-4}$, so if any effect exists
for $E\parallel a$ it is of a lower order of magnitude than for
$E\parallel c$. The inference is that the after-effects of the
pressed powders are essentially caused by the $E\parallel c$ field
component.
SC~4 was used to check the Arrhenius law of the KMN-C absorption.
For this purpose the calibration of the carbon resistor thermometer
was not relied on, but a germanium resistor was placed in close
thermal contact with the sample. As KMN noted,\cite{KuramotoJPSJ84}
the background absorption and dispersion that have to be subtracted
depend on $f$ as well as $T$. Some plausible choice has to be made
as to how this is done. KMN assumed, at each $T$, a Cole-Cole law plus a
linear $\epsilon^{\prime\prime}(\log f)$ background. For the present,
the background to be subtracted at each $T$ and $f$ was assumed to be a
linear interpolation of the data at $T$~= 6.14 and 19.75~K (i.e. well
below and above the KMN effect) and the same $f$. This also
subtracted an instrumental
and circuit error that becomes serious at $f>10$~kHz, and it resulted
in a symmetric $\epsilon^{\prime\prime}(\log f)$. The
$\epsilon^{\prime\prime}$ peak position was found at each $T$. Two
field values were used, 3.5 and 7.0~kV/m (cf.~1.0~kV/m~
\cite{KuramotoJPSJ84}) and the
data sets were analysed separately. The 3.5~kV/m results are shown in
Fig.~\ref{fig12} and yielded Arrhenius parameters $f_0$~= 173~GHz,
$A$~= 20.09~meV. The 7.0~kV/m results would not be distinguishable in
the plot and gave $f_0$~= 170~GHz, $A$~= 20.03~meV. If the 6.14~K
data alone had been subtracted as background, the figures would have
been $f_0$~=
605(640)~GHz, $A$~= 21.65(21.68)~meV. Assuming the linear
interpolation is more
appropriate, the present result is not significantly different from
the published\cite{KuramotoJPSJ84} result and it may be concluded that
$A=19.5\pm 0.7$~meV and 40~GHz~$<f_0<200$~GHz.
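A minimal sketch of this analysis chain, with the linear-in-$T$ background interpolation and the Arrhenius fit, is given below (\texttt{numpy} assumed; array layout and routine names are ours):
\begin{verbatim}
import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def subtract_background(eps2, T, eps2_614, eps2_1975,
                        T_lo=6.14, T_hi=19.75):
    # Background interpolated linearly in T between the 6.14 K and
    # 19.75 K data, taken at the same frequencies as eps2.
    w = (T - T_lo) / (T_hi - T_lo)
    return eps2 - ((1.0 - w) * eps2_614 + w * eps2_1975)

def arrhenius_fit(T_peaks, f_peaks):
    # ln f = ln f0 - A / (k T): linear in 1/T
    slope, ln_f0 = np.polyfit(1.0 / np.asarray(T_peaks),
                              np.log(np.asarray(f_peaks)), 1)
    return np.exp(ln_f0), -slope * K_B  # f0 in Hz, A in eV
\end{verbatim}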
Also shown in Fig.~\ref{fig12} are analogous results for pressed powder
Sample~2, at 3.8~kV/m, using the same background subtraction. It might
be expected that $\epsilon^{\prime\prime}(f)$ at 6~K would have a peak
near 100~Hz corresponding to the $E\parallel a$ dispersion,\cite{KuramotoJPSJ84} and
so give a false baseline. Such an effect was searched for between 5 and
7~K but not detected, which means it could have been no more than 3\%
as strong as the absorption and dispersion near 12~K in this sample.
Best Arrhenius parameters for the latter were $f_0$~= 880~GHz, and
$A$~= 21.16~meV, and it is concluded from its position in
Fig.~\ref{fig12}, and absence of $E\parallel a$ effect (at 5--7~K)
that it corresponds almost entirely to the KMN-C effect.
Figure~\ref{fig12} also shows two points representing the time constant
of the anomalous after-effect. The time constant $\tau(T)$ was taken
to equal $t$ at the point of maximum positive slope of
$\epsilon^\prime(\log t)$. It was found at $T$~=
7.44 and 7.82~K, and ($2\pi\tau)^{-1}$ is plotted. The points fall
distinctly below the Arrhenius law, whichever of the three data sets
is extrapolated, but the law derived from the same pressed powder
sample comes nearest. If an estimated normal after-effect contribution
of the form $Ct^{-p}$ had been subtracted from the data, the
discrepancy would have been smaller.
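The time-constant extraction just described amounts to locating the maximum slope of $\epsilon^\prime$ against $\log t$; a short sketch (our notation):
\begin{verbatim}
import numpy as np

def tau_from_record(t, eps1):
    # tau = t at the point of maximum positive slope of eps'(log t);
    # (2*pi*tau)^-1 is the equivalent frequency plotted in Fig. 12.
    slope = np.gradient(eps1, np.log10(t))
    tau = t[np.argmax(slope)]
    return tau, 1.0 / (2.0 * np.pi * tau)
\end{verbatim}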
\section{Discussion}\label{discussion}
The after-effect in KDP at $T<25$~K consists of two parts that are
readily distinguished. Both parts have been reported for single
crystal samples,\cite{ZimmerFL87,GilchristSSC98} as well as pressed
powders and the KMN-C dispersion\cite{MotegiJPSJ83,KuramotoJPSJ84} is
also a single-crystal property. The discussion of their origins must
be broad enough to encompass both types of sample.
\subsection{The normal after-effect}\label{normalafter-effect}
A first salient feature of the normal after-effect is that it has been
observed at $T$ ranging from 1.4 to 300~K, so that its $T$ range
extends well above $T_c$ and well below
the usual domain freezing temperature in crystals ($\sim~100$~K). A
second is that it has no characteristic relaxation time, and a third
is that its magnitude reaches a limit ($s^\prime_0$,
$s^{\prime\prime}_0$) at very moderate values ($\Delta E^\prime_0$,
$\Delta E^{\prime\prime}_0$) of applied field step, with pressed
powders at least.
Below 25~K, it conforms to a general law for ``glassy''
properties in that $\epsilon$ is a simple,
near-linear function of $T\log t$.\cite{PrejeanJP80} The low $\Delta
E^\prime_0$ and $\Delta E^{\prime\prime}_0$ values that decrease with $T$
suggest a high dipole moment value that increases with $T$. All this suggests
microdomains, which were present accidentally in the single crystals, but
more systematically in the powders. The notion of microdomains was
developed\cite{YokosukaJJAP86} to explain the diffuse nature of the phase
transition in disordered ferroelectrics such as PLZT ceramics.
The sizes and shapes are widely distributed. R. H\"ohler {\it et al.}
\cite{HohlerPRB91} invoked this notion to explain their
finding of a retarded dielectric response at low $T$ in a PLZT ceramic.
Although a different $\epsilon$ was measured, related to the polarisation
and not the $ac$ polarisability, a $T\log t$ law was also found,
between 20 and 80~K, in conditions corresponding to $s^\prime=-s^\prime_0$
(retarded response independent of
$\Delta E$). This possibly suggested independent thermally activated Debye
processes with a uniform distribution of activation energies, which would
imply a uniform spectral density of $\log\tau$ values ($\tau$ is a
relaxation time) that varies as $T$. Strongly interacting systems that
relaxed according to a hierarchical sequence\cite{PalmerPRL84} would also
have been possible, and more plausible. The authors
suggested\cite{HohlerPRB91,HohlerJNCS91} a discrimination in favour
of the latter, based on results of applying a $T$ step as well as a
$\Delta E$ pulse, which indicated an
abnormally low frequency prefactor. For the present, it is also more
plausible to suppose strongly interacting systems in the pressed powder
samples. This would explain why $s^\prime_0$ and $s^{\prime\prime}_0$
always took similar values at a given $T$. In the single crystals
there may not have been enough active microdomains to reach the
strong-interaction limit. Apparently $\Delta E^\prime_0$ and $\Delta
E^{\prime\prime}_0$ represent a threshold bias-step value for a
prompt and widespread
microdomain polarization rearrangement that limits the extent to which
the system can be out of equilibrium. It is also most likely that
microdomains are responsible for the ``background'' dispersion and
absorption at low $T$, both in single crystals ($E\parallel c$) and
powders. They also explain naturally why $\Delta E^\prime_0$ and
$\Delta E^{\prime\prime}_0$ decreased with $T$, but increased with $f$.
Either increased $T$ or decreased $f$ would bring into play larger
microdomains.
The microdomains that are active at low $T$ are probably ones that
have less than full orthorhombic distortion, so they are easily
switched. In single crystals they are likely to be associated with
crystal defects, in pressed powders likely to be much influenced by
the damaged grain surface layers. In the powders they must be subject
to a very broad distribution of stresses, as witnessed by the
equivalence of field-cooled and zero-field-cooled properties.
On the other hand, the active microdomains at ambient $T$
are perhaps to be identified with the orthorhombic inclusions that
have been reported in the tetragonal phase.\cite{SuvorovaSPC91}
A ``normal'' after-effect of a bias step is also characteristic of
disordered
dielectrics generally at very low $T$. The immediate rise of
$\epsilon^\prime$ and $\epsilon^{\prime\prime}$ followed by decay as
$\log t$ was observed with hydroxyl-doped KCl at $T<1$~K,
\cite{Saint-PaulJPC86} and several structural glasses, also at
$T<1$~K.\cite{SalvinoPRL94} The results could be displayed in plots
like the present Figs.~\ref{fig3} and \ref{fig4}, though the samples were not
thermally cycled between measurements as in the present study. The
behaviour of the structural glasses below 1~K was
explained\cite{CarruzzoPRB94} by reference to a random Ising model of
dipoles with long-range interactions.\cite{KirkpatrickSSC78} One difference
from the present results
was that the $\Delta E^\prime_0$ parameter was an increasing function of $T$.
In accordance with the model, it was found that $\Delta E^\prime_0\approx kT/p$,
where $p$ is the (fixed) relevant dipole moment. Another difference
was that $s^\prime_0$ decreased sharply with $T$. The other main point in
common is that $\epsilon^\prime$ and $\epsilon^{\prime\prime}$ relaxed
downwards, and this can be understood as the system
of interacting dipoles gradually self-trapping into deeper potential wells,
from which it can respond less actively to weak applied fields.
\subsection{The anomalous after-effect}\label{anormalafter-effect}
The most obvious distinguishing features are that $\epsilon^\prime$
relaxed upwards, not
down, and that although some effect was detectable at all $T$ between 1.4 and
25~K, there was a pronounced peak near 7 or 8~K, depending on the time lapse.
Also, the anomalous effect was only observed when $\Delta E>\Delta E^\prime_0$,
and then its strength as represented for example in Fig.~\ref{fig11}
increased as $(\Delta E/\Delta E^\prime_0-a)$, where
$a\approx 1$. The $\Delta E^\prime_0$ parameter needed to describe
the normal after-effect appears as a threshold for the anomalous
effect. More precisely there was a tendency for the strength
to level off and possibly saturate at some high $\Delta E$ value not
reached in this work.\cite{GilchristSSC98} This suggests that the unit
dipole moment involved
here is much smaller than the moments of the microdomains. Since the
anomalous after-effect is also linked to KMN-C, it may be
supposed that the same species of dipole (``the KMN dipoles'') is responsible.
A phenomenological model can then be formulated
as follows. Two subsystems (the microdomains and the KMN dipoles)
coexist and interact, each having its own relaxation dynamics. The KMN
dipoles only interact weakly with one another, but certainly
interact with the microdomain system. After a bias step that is big enough
to unsettle the microdomain system, this settles into a new
configuration that is metastable, with the KMN dipoles in their present
states. These dipoles then relax with their characteristic time constant,
$\tau$, perpetually changing the local strain fields and electric
fields. This affects the microdomain system qualitatively as would a
random series of applied field changes. Referring to the effect of
repeated bias changes that was mentioned in
Sec.~\ref{switches}, and summing over the sample volume, it might be
expected that after a single applied field step, $\Delta E>\Delta
E^\prime_0$,
\begin{eqnarray*}
\epsilon^\prime(t)=\epsilon^\prime(\infty)+Ce^{-t/\tau}t^{-p}+
\frac{C^\prime}{\tau}\int^t_0e^{-(t-t_i)/\tau}(t-t_i)^{-p}dt_i
\end{eqnarray*}
For simplicity, unstretched exponentials have been written here.
The term in $C$ represents regions of the sample where
$\epsilon^\prime(t)$ is still relaxing downwards due to the applied
field step. Its volume diminishes exponentially. The $C^\prime$ term
represents the sum of micro-regions that have been affected by a
subsequent KMN dipole flip, that
occurred at $t_i$. The expression allows an upward relaxation of
$\epsilon^\prime(t)$ only if $C^\prime>C$. This is possible because
if $\Delta E>\Delta E^\prime_0$, $C$ takes its limiting value related
to $s^\prime_0$, while $C^\prime$ can plausibly exceed this limit. The
field changes caused by the dipole flips may be very strong, but they
are localized and random, so they do not cause a prompt and widespread
microdomain rearrangement like a strong applied field change. The
microdomain system can therefore be driven further away from
equilibrium.
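The expression above is straightforward to evaluate numerically; the toy routine below (parameter values would have to be taken from the fits, and the quadrature grid is arbitrary) reproduces the qualitative behaviour, an upward relaxation appearing once $C^\prime>C$:
\begin{verbatim}
import numpy as np

def model_eps(t, eps_inf, C, Cp, tau, p, n_grid=2000):
    # Unstretched-exponential version of the two-subsystem model.
    ti = np.linspace(0.0, t, n_grid)      # KMN dipole flip times
    dt = np.clip(t - ti, 1e-9, None)      # avoid the t_i = t pole
    kmn = (Cp / tau) * np.trapz(np.exp(-dt / tau) * dt ** (-p), ti)
    return eps_inf + C * np.exp(-t / tau) * t ** (-p) + kmn
\end{verbatim}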
\subsection{Possible origin of KMN-C and anomalous
after-effects}\label{KMN-C}
KMN suggested\cite{KuramotoJPSJ84} that a peculiar mode of motion
related to the domain wall structure may be responsible. If some of
the atoms located at the domain boundary were in shallow double
potential wells, movement of these atoms across the barriers would be
a possible origin. This brings to mind the hydrogen atoms and their
O\ldots H-O bonds. At the time of KMN, the potential barrier to
H-bond reversal was thought to be well over
100~meV,\cite{LawrenceJPC80} but newer
work\cite{SugimotoPRL91,Yamada94,YamadaJPSJ94,IkedaPB96} based on
neutron incoherent scattering data has yielded the much lower estimate
of 37.1~meV,\cite{Yamada94} and again a precise potential function
with a barrier height near 30~meV (for DKDP, 135~meV) has been
calculated for the system comprising a hydrogen atom and coupled
lattice mode, based on 30~K data.\cite{IkedaPB96} The principal aim
was to account for $T_c$ and its deuteration shift by considering
lattice dynamics at $T>T_c$, but the result might also apply to a
special case where a single H bond reversal could occur at low $T$
without the energy cost of creating an HPO$_4^{2-}$ ion and an
adjacent H$_3$PO$_4$ group.
It is necessary to examine possible domain-wall structures. Wall width
increases with $T$ towards $T_c$,\cite{AndrewsJPC86} but at $T\ll T_c$,
walls are plausibly approximated by the vanishingly thin models
proposed by Barkla and
Finlayson.\cite{BarklaPM53} There were two such structures, the
``polarized'' and the ``neutral'' domain wall. Both respect the
Slater rule (no HPO$_4$ and no H$_3$PO$_4$ groups). It is
not known for certain which is the more accurate approximation
to real walls at
$T\ll T_c$ but Bjorkstam and Oettel calculated that the polarized wall
would be the more stable.\cite{Bjorkstam66} Bornarel\cite{BornarelJAP72}
showed that either of these
walls, normally planar and perpendicular to an $a$-axis, can have any
number of lateral step displacements. The minimum displacement is a
half lattice parameter. The step displacements, or quasi-dislocations,
like the planar walls, respect the Slater rule, but only if they run
straight across the entire crystal, parallel to the $c$-axis. They are
associated with an intense local strain field. Minimum domain wall
movement therefore involves glide of a quasi-dislocation along its
entire length, or the presence of HPO$_4$ and H$_3$PO$_4$ groups.
Otherwise, if the walls have finite width, a minimum movement without
HPO$_4$ or H$_3$PO$_4$ groups consists of six simultaneous H-bond
reversals.\cite{Schmidtref34} The simplest case with HPO$_4$ or
H$_3$PO$_4$ is a unit jog, where a quasidislocation shifts by one
lattice parameter. This requires a single HPO$_4$ or H$_3$PO$_4$ group
and its illustration\cite{BornarelF75} is reproduced in
Fig.~\ref{fig13} for the case of a polarized
domain wall. The case of a neutral wall is almost entirely equivalent.
By a sequence of single H-bond reversals, the vacancy (or the excess
H) can move from one PO$_4$ to the next within a $c$-stack of unit
cells, and so, in principle right across the crystal, together with
the associated jog. In practice it may get pinned somewhere along the
line, by a crystal dislocation or an impurity. A few such pinned jogs
may persist as the material is cooled down to temperatures where
their creation by thermal activation would be virtually impossible.
Also, progression of a jog requires successive reversals of
differently oriented H-bonds, first one that lies near the plane of
the domain wall, then one nearly perpendicular to it and so on. It is
plausible that sometimes, because of the local fields and strains, one
of these, but not the other, remains possible at low $T$, because in
just one case the two states are near energetically equivalent. This
would constitute the double potential well system.
Yamada and Ikeda\cite{Yamada94} considered the ``cluster tunneling
mode'' or protonic polaron\cite{YamadaJCSJ98} model, but found that
their purpose of predicting hydrogen dynamics at $T>T_c$ was better
served by a model of incoherent tunneling between self-trapped states.
At low $T$ this would become coherent phonon-assisted tunneling, with
an expected transition probability $\propto T^7$. On the other hand,
the protonic polaron would have an extremely small tunneling
splitting, so that thermal activation down to 12~K, or even 7~K,
would be more plausible. Moreover, the tunneling mode would be
overdamped at higher $T$, and in that condition the predictions of the
two models would be experimentally indistinguishable.
In another development since KMN, evidence has been
reported\cite{MeloBJP92} of an orthorhombic-monoclinic phase
transition in KDP near 60~K. This would mean the H-bonds within a
ferroelectric domain are not all equivalent, as previously supposed,
but not enough is known about the new phase to draw any other
conclusions.
Meanwhile, a different origin for the KMN-C effect cannot be totally
rejected. This would attribute it to a defect species that is
intrinsic to solution grown KDP (for example a growth dislocation or
an included water molecule). At low $T$ this defect would only be
dielectrically activated by the presence of a domain wall.
\section{Conclusions}\label{conclusions}
A new dielectric property of ferroelectric KDP has been reported,
``the anomalous after-effect''. It has been shown to be related to
the ``normal'' after-effect\cite{ZimmerFL87} and also to the KMN-C
dispersion.\cite{MotegiJPSJ83,KuramotoJPSJ84} The normal after-effect
is a well-known property of $c$-cut KDP crystals, and of many other
ferroelectric materials. Like the $T^{3/2}$ specific heat term in
microcrystalline KDP, it is reminiscent of such an effect in
structural and dipole glasses, but differs in certain important details.
It is attributable to microdomains with a wide distribution of sizes,
shapes and stresses. Study of the normal after-effect with pressed
powders has allowed a set of scaling parameters to be determined for
a series of samples of different qualities. As shown in
Fig.~\ref{fig11}, these scaling parameters also apply to the anomalous
after-effect, which therefore also involves microdomains.
The KMN-C dispersion is another known property of $c$-cut KDP crystals.
Such dispersions are generally caused by point defects in crystals,
and are unknown in structural glasses at low temperatures. Before the
present work there was no apparent link between KMN-C and the normal
after-effect. The KMN-C dispersion possibly owes its origin to rare,
isolated HPO$_4$ and H$_3$PO$_4$ groups associated with jogs on
lateral steps (quasidislocations) of domain walls. The activation
energy of 19~meV, which is also shared by the anomalous after-effect,
would then be related to the energy barrier for the reversal of a
single H-bond in this particular environment. It is not clear how
closely this should be assimilated to the hypothetical barrier for
the reversal of a single H-bond in tetragonal KDP, that has been the
object of recent estimates.\cite{Yamada94,IkedaPB96} A correlation
might be expected, if different ferroelectric compounds of the KDP
type could be compared. The most interesting comparison would be with
DKDP, but so far, as mentioned in Sec.~\ref{particlesize} no
dispersion analogous to the KMN effect has been observed with DKDP.
For the other isostructural compounds the situation can be summarized
as follows. As pressed powders, RbH$_2$PO$_4$ and the arsenates all
behaved analogously to KDP, but the arsenates had first to be
crystallized with excess base (solution pH$>$6). Each compound exhibited
a low-field dispersion that obeyed an Arrhenius law, and a
corresponding anomalous after-effect. With RbH$_2$PO$_4$ the
activation energy was close to the value for KDP, perhaps 1~meV
higher. With KH$_2$AsO$_4$ it was near 30~meV, with CsH$_2$AsO$_4$,
44~meV and with RbH$_2$AsO$_4$ between these two. However, no
independent estimates of the barrier heights are available.
The anomalous after-effect results from an interaction between two
coexisting subsystems. One subsystem relaxes with a definite
relaxation time, the other with a very broad distribution of
relaxation times. The result of the interaction is an apparent
tendency of the system to evolve temporarily away from its stable
equilibrium. The lines of an explanation sketched in
Sec.~\ref{anormalafter-effect} need to be developed into a model.
\acknowledgments
This work owes much to Jean~Bornarel, who supplied copious advice and
background knowledge together with the electroded single crystals.
Comments by Jean~Souletie, and information supplied by Andr\'e~Durif
and Marie-Th\'er\`ese~Averbuch were also appreciated.
\section{Implementation Choices for Initialization and Seq2seq Models}
We describe the modeling choices for initialization models (Pr-Thr), (DBAE), and ($\bm{\mu}$:1). All hyper-parameters for each of these systems are set based on the models' \textsc{rouge-l} score on the validation set. Unless otherwise stated, all models use Skipgram FastText\footnote{\url{https://fasttext.cc/}} word embeddings which are shared across the input and output layers. The dimension 512 embeddings are trained on the concatenation of the full text and summary sequences ${\mathcal{D}^F \cup \mathcal{D}^S}$. $\mathcal{V}$ is the full vocabulary, and $\mathcal{V}^F$ and $\mathcal{V}^S$ are the vocabularies of $\mathcal{D}^F$ and $\mathcal{D}^S$ respectively. All trained models use the Adam optimizer with learning rate $5e-4$. The convolutional seq2seq models use the \emph{fconv\_iwslt\_de\_en} architecture provided in Fairseq\footnote{\url{https://fairseq.readthedocs.io/en/latest/models.html}} with pre-trained input and output word embeddings, a vocabulary size of 50K for the full text and of 15K for the summaries. For the expander generations, we collapse contiguous UNK tokens, and cut the sentence at the first full stop even when the model did not generate an EOS token, yielding outputs that are sometimes shorter than 16 words.
\paragraph{Procrustes Thresholded Alignment (Pr-Thr)} For this model, we train two sets of word embeddings on $\mathcal{D}^F$ and $\mathcal{D}^S$ separately, and compute aligned vectors using the FastText implementation of the \citep{DBLP:journals/corr/abs-1805-11222} algorithm\footnote{\url{https://github.com/facebookresearch/fastText/tree/master/alignment}}. We then map each word in an input sequence to its closest word in $\mathcal{V}^S$ in the aligned space, unless the nearest neighbor is the EOS token or the distance to the nearest neighbor in the aligned space is greater than a threshold $\eta$. The output sequence then consists of the first $N$ mapped words in the order of the input sequence. We found that using embeddings of dimension $256$, threshold $\eta = 0.9$, and maximum output length $N = 12$ yields the best validation \textsc{rouge-l}.
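A minimal sketch of this prediction step follows (dictionary-based lookup with Euclidean distance for concreteness; the actual implementation operates on the aligned FastText matrices):
\begin{verbatim}
import numpy as np

def pr_thr_summarize(words, emb_F, emb_S, vocab_S, eta=0.9, N=12):
    # Map each input word to its nearest neighbor in the aligned
    # summary space, skipping EOS and matches farther than eta.
    out = []
    for w in words:
        if w not in emb_F:
            continue
        d = {s: np.linalg.norm(emb_F[w] - emb_S[s]) for s in vocab_S}
        nn = min(d, key=d.get)
        if nn != "</s>" and d[nn] <= eta:
            out.append(nn)
        if len(out) == N:
            break
    return out
\end{verbatim}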
We compare (Pr-Thr) to a PBSMT baseline in Table~\ref{tab:base-bootstrap}. We use the UnsupervisedMT codebase\footnote{\url{https://github.com/facebookresearch/UnsupervisedMT/tree/master/PBSMT}} of \citep{DBLP:conf/emnlp/LampleOCDR18} with the same pre-trained embedding, and also perform a hyper-parameter search over maximum length, which sets $N=15$.
\paragraph{Denoising Bag-of-Word Auto-Encoder (DBAE)} The DBAE is trained on all sentences in $\mathcal{D}^S$. The encoder of the DBAE averages the input word embeddings and applies a linear transformation, followed by a Batch Normalization layer \citep{DBLP:conf/icml/IoffeS15}. The decoder is a 2-layer GRU recurrent neural network with hidden dimension 256. The encoder output is concatenated to the initial hidden state of both layers, then projected back down to the hidden dimension.
To use the model for summarization, we perform two changes from the auto-encoding setting. First, we perform a weighted instead of a standard average, where words that are less likely to appear in $\mathcal{D}^S$ than in $\mathcal{D}^F$ are down-weighted (and words that are in $\mathcal{V}^F$ but not in $\mathcal{V}^S$ are dropped). Specifically, given a word $v \in \mathcal{V}^S$, its weight $w_v$ in the summarization weighted BoW encoder is given as:
\begin{align}
\mu_v^F = \frac{\sum_{s \in \mathcal{D}^F} \mathbbm{1}_{v \in s} }{|\mathcal{D}^F|} \;\; \text{and} \;\; \mu_v^S = \frac{\sum_{s \in \mathcal{D}^s} \mathbbm{1}_{v \in s} }{|\mathcal{D}^S|} \label{eq:moments_apdx}
\end{align}
\begin{align}
w_v = \max(\frac{\mu_v^S}{\mu_v^F}, 1)
\end{align}
Secondly, we implement something like a pointer mechanism by adding $\lambda$ to the score of each of the input words in the output of the GRU, before the softmax. At test time and when creating artificial data, we decode with beam search with a beam of size 5, maximum output length $N=15$, and input word bias $\lambda=2$.
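Both test-time modifications can be sketched as follows (PyTorch-style pseudo-code; tensor shapes and function names are ours):
\begin{verbatim}
import torch

def weighted_bow(embeds, word_ids, w):
    # embeds: [L, d] input embeddings; w: per-word weights w_v,
    # set to zero for words outside the summary vocabulary.
    weights = w[word_ids].unsqueeze(-1)
    return (weights * embeds).sum(0) / weights.sum().clamp(min=1e-6)

def biased_logits(logits, input_ids, lam=2.0):
    # Pointer-like bias: raise the score of every input word
    # before the softmax.
    logits = logits.clone()
    logits[..., input_ids] += lam
    return logits
\end{verbatim}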
\paragraph{First-Order Word Moments Matching ($\bm{\mu}$:1)} The moments matching model uses the same encoder as the (DBAE) followed by a linear mapping to the summary vocabulary, followed by a sigmoid layer (the log-score of all words that do not appear in the input is set to $-1e6$). Unfortunately, computing the output probabilities for all sentences in the corpus before computing the Binary Cross-Entropy is impractical, and so we implement a batched version of the algorithm. Let corpus-level moments $\mu_v^F$ and $\mu_v^S$ be defined as in Equation~\ref{eq:moments_apdx}. Let $\mathcal{B}^F$ be a batch of full text sequences, we define:
\begin{align}
\hat{\mu}_v^F = \frac{\sum_{s \in \mathcal{B}^F} \mathbbm{1}_{v \in s} }{|\mathcal{B}^F|} \;\; \text{and} \;\; \hat{\mu}_v^S = \frac{\hat{\mu}_v^F }{\mu_v^F } . \mu_v^S
\end{align}
For each batch, the algorithm then takes a gradient step for the loss:
\begin{align*}
\hat{\mathcal{L}}(\mathcal{B}^F; \theta) = \sum_{v \in \mathcal{V}^S} \text{BCE} \Big ( \frac{\sum_{s \in \mathcal{B}^F} f_\theta^\mu(s, v) }{|\mathcal{B}^F|}, \hat{\mu}_v^S \Big )
\end{align*}
The prediction is similar to that for the (Pr-Thr) system except that we threshold on $f_\theta^\mu(s, v)$ rather than on the nearest neighbor distance, with threshold ${\eta=0.3}$ (the maximum output length is also ${N=12}$).
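One gradient step of this batched procedure might look as follows (batch averages are used throughout, and the clamping constants are ours, added for numerical safety):
\begin{verbatim}
import torch
import torch.nn.functional as F

def batched_moments_loss(probs, presence, mu_F, mu_S):
    # probs: [B, V] outputs f_theta(s, v); presence: [B, V] 0/1
    # indicators of v in s; mu_F, mu_S: corpus-level moments.
    mu_hat_F = presence.float().mean(0)
    mu_hat_S = (mu_hat_F / mu_F.clamp(min=1e-9)) * mu_S
    pred = probs.mean(0).clamp(1e-6, 1.0 - 1e-6)
    return F.binary_cross_entropy(pred, mu_hat_S.clamp(0.0, 1.0))
\end{verbatim}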
\begin{table*}
\centering
\begin{tabular}{l}
{\bf over N,NNN ancient graves found in greek metro dig} \\
\toprule
\bottomrule
(Pr-Thr) over N,NNN ancient graves were found in a greek metro -lrb- UNK -rrb-. \\
\midrule
(DBAE) the remains of N,NNN graves on ancient greek island have been found in three ancient \\
graves in the past few days, a senior police officer said on friday. \\
\midrule
($\bm{\mu}:1$) over N,NNN ancient graves have been found in the greek city of alexandria in the northern \\
greek city of salonika in connection with the greek metro and dig deep underground. \\
\multicolumn{1}{c}{} \\
{\bf ukraine's crimea dreams of union with russia} \\
\toprule
\bottomrule
(Pr-Thr) ukraine 's crimea UNK of the union with russia. \\
\midrule
(DBAE) ukraine has signed two agreements with ukraine on forming its european union and \\
ukraine as its membership. \\
\midrule
($\bm{\mu}:1$) ukraine's crimea peninsula dreams of UNK, one of the soviet republic's most UNK country \\
with russia, the itar-tass news agency reported. \\
\multicolumn{1}{c}{} \\
{\bf malaysian opposition seeks international help to release detainees} \\
\toprule
\bottomrule
(Pr-Thr) the malaysian opposition thursday sought international help to release detainees. \\
the malaysian opposition, news reports said. \\
\midrule
(DBAE) malaysian prime minister abdullah ahmad badawi said tuesday that the government's \\
decision to release NNN detainees, a report said wednesday. \\
\midrule
($\bm{\mu}:1$) malaysian opposition parties said tuesday it seeks to ``help'' the release of detainees. \\
\multicolumn{1}{c}{} \\
{\bf russia to unify energy transport networks with georgia rebels} \\
\toprule
\bottomrule
(Pr-Thr) russia is to unify energy transport networks with georgia rebels. \\
\midrule
(DBAE) russian government leaders met with representatives of the international energy giant said \\
monday that their networks have been trying to unify their areas with energy supplies. \\
\midrule
($\bm{\mu}:1$) russia is to unify its energy and telecommunication networks to cope with georgia's \\
separatist rebels and the government. \\
\multicolumn{1}{c}{} \\
{\bf eu losing hope of swift solution to treaty crisis} \\
\toprule
\bottomrule
(Pr-Thr) the eu has been losing hope of a UNK solution to the maastricht treaty crisis. \\
\midrule
(DBAE) the european union is losing hope it will be a swift solution to the crisis of the eu \\
-lrb- eu -rrb-, hoping that it's in an ``urgent'' referendum. \\
\midrule
($\bm{\mu}:1$) eu governments have already come under hope of a swift solution to a european union treaty \\
that ended the current financial crisis. \\
\end{tabular}
\caption{More examples of artificial data after the first back-translation iteration.}
\label{tab:examples_more}
\end{table*}
\begin{table*}[t]
\centering
\begin{tabular}{l}
(Original) \emph{malaysia has drafted its first legislation aimed at punishing computer hackers,} \\
\emph{an official said wednesday.} \\
\midrule
\bottomrule
(Pr-Thr)-1 malaysia has enacted a draft, the first law on a UNK computer hacking. \\
(Pr-Thr)-3 malaysia has issued a draft of the law on computer hacking. \\
(Pr-Thr)-5 malaysia has drafted a first law on the computer hacking and internet hacking. \\
\midrule
\bottomrule
(DBAE)-1 malaysia's parliament friday signed a bill to allow computer users to \\
monitor UNK law. \\
(DBAE)-3 the country has been submitted to parliament in NNNN passed a bill wednesday \\
in the first reading of the computer system, officials said monday. \\
(DBAE)-5 malaysia's national defense ministry has drafted a regulation of computer \\
hacking in the country, the prime minister said friday. \\
\midrule
($\bm{\mu}:1$)-1 malaysia will have drafts the first law on computer hacking. \\
($\bm{\mu}:1$)-3 malaysia has started drafts to be the first law on computer hacking. \\
($\bm{\mu}:1$)-5 malaysia today presented the nation's first law on computer hacking in the \\
country, news reports said wednesday. \\
\midrule
\bottomrule
{\bf (Title) malaysia drafts first law on computer hacking} \\
\end{tabular}
\caption{Evolution of generated full text sequences across iterations.}
\label{tab:examples_evolve}
\end{table*}
\begin{table*}[t]
\centering
\begin{tabular}{l}
(Article) \emph{chinese permanent representative to the united nations wang guangya on wednesday urged} \\
\emph{the un and the international community to continue supporting timor-leste.} \\
(Pred) chinese permanent representative urges un to continue supporting timor-leste \\
(Title) china stresses continued international support for timor-leste \\
\midrule
(Article) \emph{macedonian president branko crvenkovski will spend orthodox christmas this weekend with } \\
\emph{the country's troops serving in iraq, his cabinet said thursday.} \\
(Pred) macedonian president to spend orthodox christmas with troops in iraq \\
(Title) macedonian president to visit troops in iraq \\
\midrule
(Article) \emph{televangelist pat robertson, it seems, isn't the only one who thinks he can see god's } \\
\emph{purpose in natural disasters.} \\
(Pred) evangelist pat robertson thinks he can see god's purpose in disasters \\
(Title) editorial: blaming god for disasters \\
\midrule
(Article) \emph{the sudanese opposition said here thursday it had killed more than NNN government } \\
\emph{soldiers in an ambush in the east of the country.} \\
(Pred) sudanese opposition kills N government soldiers in ambush \\
(Title) sudanese opposition says NNN government troops killed in ambush \\
\end{tabular}
\caption{Example of model predicitons for $f_{F \rightarrow S}^{(\text{All}, 6)}$.}
\label{tab:examples_prediction}
\end{table*}
\section{More Examples of Model Predictions}
We present more examples of the expander and summarizer models' outputs in Tables~\ref{tab:examples_more}, \ref{tab:examples_evolve}, and \ref{tab:examples_prediction}. Table~\ref{tab:examples_more} shows more expander generations for all three initial models after one back-translation epoch. They follow the patterns outlined in Section~\ref{sec:experiments}, with (DBAE) showing more variety but being less faithful to the input. Table~\ref{tab:examples_evolve} shows generations from the expander models at different back-translation iterations. It is interesting to see that each of the three models slowly overcomes its initial limitations: the (DBAE) expander's third version is much more faithful to the input than its first, while the moments-based approach starts using paraphrases and modeling vocabulary shift. The Procrustes method seems to benefit less from the successive iterations, but still starts to produce longer outputs. Finally, Table~\ref{tab:examples_prediction} provides summaries produced by the final model. While the model does produce likely summaries, we note that aside from the occasional synonym use or verbal tense change, and even though we do not use an explicit pointer mechanism beyond the standard seq2seq attention, the model's outputs are mostly extractive.
\section{Introduction}
Machine summarization systems have made significant progress in recent years, especially in the domain of news text. This has been made possible among other things by the popularization of the neural sequence-to-sequence (seq2seq) paradigm \citep{DBLP:conf/emnlp/KalchbrennerB13,DBLP:conf/nips/SutskeverVL14,DBLP:conf/ssst/ChoMBB14}, the development of methods which combine the strengths of extractive and abstractive approaches to summarization \citep{DBLP:conf/acl/SeeLM17,DBLP:conf/emnlp/GehrmannDR18}, and the availability of large training datasets for the task, such as Gigaword or the CNN-Daily Mail corpus which comprise over 3.8M shorter and 300K longer articles and aligned summaries respectively.
Unfortunately, the lack of datasets of similar scale for other text genres remains a limiting factor when attempting to take full advantage of these modeling advances using supervised training algorithms.
In this work, we investigate the application of back-translation to training a summarization system in an unsupervised fashion from unaligned full text and summaries corpora. Back-translation has been successfully applied to unsupervised training for other sequence to sequence tasks such as machine translation \citep{DBLP:conf/emnlp/LampleOCDR18} or style transfer \citep{DBLP:journals/corr/SubramanianLSDRB18}.
We outline the main differences between these settings and text summarization, devise initialization strategies which take advantage of the asymmetrical nature of the task, and demonstrate the advantage of combining varied initializers.
Our approach outperforms the previous state-of-the-art on unsupervised text summarization while using less training data, and even matches the \textsc{rouge} scores of recent semi-supervised methods.
\section{Related Work}
\citet{DBLP:conf/emnlp/RushCW15}'s work on applying neural seq2seq systems to the task of text summarization has been followed by a number of works improving upon the initial model architecture.
These have included changing the base encoder structure \citep{DBLP:conf/naacl/ChopraAR16}, adding a pointer mechanism to directly re-use input words in the summary \citep{DBLP:conf/conll/NallapatiZSGX16,DBLP:conf/acl/SeeLM17}, or explicitly pre-selecting parts of the full text to focus on \citep{DBLP:conf/emnlp/GehrmannDR18}.
While there have been comparatively few attempts to train these models with less supervision, auto-encoding based approaches have met some success \citep{DBLP:conf/emnlp/MiaoB16,DBLP:conf/emnlp/WangL18a}.
\citet{DBLP:conf/emnlp/MiaoB16}'s work endeavors to use summaries as a discrete latent variable for a text auto-encoder. They train a system on a combination of the classical log-likelihood loss of the supervised setting and a reconstruction objective which requires the full text to be mostly recoverable from the produced summary. While their method is able to take advantage of unlabelled data, it relies on a good initialization of the encoder part of the system which still needs to be learned on a significant number of aligned pairs. \citet{DBLP:conf/emnlp/WangL18a} expand upon this approach by replacing the need for supervised data with adversarial objectives which encourage the summaries to be structured like natural language, allowing them to train a system in a fully unsupervised setting from unaligned corpora of full text and summary sequences. Finally, \citep{DBLP:journals/corr/SongTQLL19} uses a general purpose pre-trained text encoder to learn a summarization system from fewer examples. Their proposed MASS scheme is shown to be more efficient than BERT \citep{DBLP:journals/corr/DevlinCLT18} or Denoising Auto-Encoders (DAE) \citep{DBLP:conf/icml/VincentLBM08,DBLP:conf/aaai/FuTPZY18}.
This work proposes a different approach to unsupervised training based on back-translation. The idea of using an initial weak system to create and iteratively refine artificial training data for a supervised algorithm has been successfully applied to semi-supervised \citep{DBLP:conf/acl/SennrichHB16} and unsupervised machine translation \citep{DBLP:conf/emnlp/LampleOCDR18} as well as style transfer \citep{DBLP:journals/corr/SubramanianLSDRB18}. We investigate how the same general paradigm may be applied to the task of summarizing text.
\section{Mixed Model Back-Translation}
\begin{table*}[t]
\centering
\normalsize
\begin{tabular}{l}
(Original) \emph{france took an important step toward power market liberalization monday, braving } \\
\emph{union anger to announce the partial privatization of state-owned behemoth electricite de france.} \\
\bottomrule
(Pr-Thr) france launched a partial UNK of state-controlled utility, the privatization agency said. \\
\bottomrule
(DBAE) france's state-owned gaz de france sa said tuesday it was considering partial partial \\
privatization of france's state-owned nuclear power plants. \\
\bottomrule
($\bm{\mu}:1$) france launches an initial public announcement wednesday as the european union announced \\
it would soon undertake a partial privatization. \\
\midrule
\bottomrule
{\bf (Title) france launches partial edf privatization} \\
\end{tabular}
\caption{Full text sequences generated by $f_{S\rightarrow F}^{(\text{Pr-Thr}), 1}$, $f_{S\rightarrow F}^{(\text{DBAE}), 1}$, and $f_{S\rightarrow F}^{(\bm{\mu}:1), 1}$ during the first back-translation loop.}
\label{tab:art-examples}
\end{table*}
Let us consider the task of transforming a sequence in domain $A$ into a corresponding sequence in domain $B$ (e.g. sentences in two languages for machine translation). Let $\mathcal{D}_A$ and $\mathcal{D}_B$ be corpora of sequences in $A$ and $B$, without any mapping between their respective elements. The back-translation approach starts with initial seq2seq models $f^0_{A \rightarrow B}$ and $f^0_{B \rightarrow A}$, which can be hand-crafted or learned without aligned pairs, and uses them to create artificial aligned training data:
\begin{align}
\mathcal{D}^0_{A' \rightarrow B} &= \Big \{ \big( f^0_{B \rightarrow A}(b), b \big); \;\; \forall b \in \mathcal{D}_B \Big \} \\
\mathcal{D}^0_{B' \rightarrow A} &= \Big \{ \big( f^0_{A \rightarrow B}(a), a \big); \;\; \forall a \in \mathcal{D}_A \Big \}
\end{align}
Let $\mathcal{S}$ denote a supervised learning algorithm, which takes a set of aligned sequence pairs and returns a mapping function. This artificial data can then be used to train the next iteration of seq2seq models, which in turn are used to create new artificial training sets ($A$ and $B$ can be switched here):
\begin{align}
f^{i+1}_{A \rightarrow B} &= \mathcal{S}(\mathcal{D}^{i}_{A' \rightarrow B}) \\
\mathcal{D}^{i+1}_{B' \rightarrow A} &= \Big \{ \big( f^{i+1}_{A \rightarrow B}(a), a \big); \;\; \forall a \in \mathcal{D}_A \Big \}
\end{align}
The model is trained at each iteration on artificial inputs and real outputs, then used to create new training inputs. Thus, if the initial system isn't too far off, we can hope that the training pairs get closer to the true data distribution with each step, in turn allowing better models to be trained.
In the case of summarization, we consider the domains of full text sequences $\mathcal{D}^F$ and of summaries $\mathcal{D}^S$, and attempt to learn \emph{summarization} ($f_{F\rightarrow S}$) and \emph{expansion} ($f_{S\rightarrow F}$) functions. However, contrary to the translation case, $\mathcal{D}^F$ and $\mathcal{D}^S$ are not interchangeable. Considering that a summary typically has less information than the corresponding full text, we choose to only define initial ${F\rightarrow S}$ models. We can still follow the proposed procedure by alternating directions at each step.
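Schematically, the alternating procedure reduces to the following loop, where \texttt{train} stands for the supervised learner $\mathcal{S}$ (function names are ours):
\begin{verbatim}
def back_translate(f_FS, D_F, D_S, train, n_iters=6):
    for i in range(n_iters):
        # artificial (summary', full text) pairs -> expander
        f_SF = train([(f_FS(a), a) for a in D_F])
        # artificial (full text', summary) pairs -> summarizer
        f_FS = train([(f_SF(b), b) for b in D_S])
    return f_FS
\end{verbatim}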
\subsection{Initialization Models for Summarization}
To initiate their process for the case of machine translation, \citet{DBLP:conf/emnlp/LampleOCDR18} use two different initialization models for their neural (NMT) and phrase-based (PBSMT) systems. The former relies on denoising auto-encoders in both languages with a shared latent space, while the latter uses the PBSMT system of \citet{DBLP:conf/naacl/KoehnOM03} with a phrase table obtained through unsupervised vocabulary alignment as in \citep{DBLP:journals/corr/abs-1805-11222}.
While both of these methods work well for machine translation, they rely on the input and output having similar lengths and information content. In particular, the statistical machine translation algorithm tries to align most input tokens to an output word. In the case of text summarization, however, there is an inherent asymmetry between the full text and the summaries, since the latter express only a subset of the former. Next, we propose three initialization systems which implicitly model this information loss. Full implementation details are provided in the Appendix.
\paragraph{Procrustes Thresholded Alignment (Pr-Thr)} The first initialization is similar to the one for PBSMT in that it relies on unsupervised vocabulary alignment. Specifically, we train two skipgram word embedding models using \textsc{fasttext} \citep{DBLP:journals/tacl/BojanowskiGJM17} on $\mathcal{D}^F$ and $\mathcal{D}^S$, then align them in a common space using the Wasserstein Procrustes method of \citet{DBLP:journals/corr/abs-1805-11222}. Then, we map each word of a full text sequence to its nearest neighbor in the aligned space if their distance is smaller than some threshold, or skip it otherwise. We also limit the output length, keeping only the first $N$ tokens.
We refer to this function as $f_{F\rightarrow S}^{(\text{Pr-Thr}), 0}$.
\paragraph{Denoising Bag-of-Word Auto-Encoder (DBAE)} Similarly to both \citep{DBLP:conf/emnlp/LampleOCDR18} and \citep{DBLP:conf/emnlp/WangL18a}, we also devise a starting model based on a DAE. One major difference is that we use a simple Bag-of-Words (BoW) encoder with fixed pre-trained word embeddings, and a 2-layer GRU decoder. Indeed, we find that a BoW auto-encoder trained on the summaries reaches a reconstruction \textsc{rouge-l} f-score of nearly 70\% on the test set, indicating that word presence information is mostly sufficient to model the summaries. As for the noise model, for each token in the input, we remove it with probability $p/2$ and add a word drawn uniformly from the summary vocabulary with probability $p$.
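The noise model itself is a two-coin procedure per token; a sketch (the value of $p$ is left as a free parameter, as the tuned value is not essential to the description):
\begin{verbatim}
import random

def add_noise(tokens, vocab_S, p):
    # vocab_S: list of summary-vocabulary words
    noisy = []
    for tok in tokens:
        if random.random() >= p / 2:   # drop with probability p/2
            noisy.append(tok)
        if random.random() < p:        # insert a uniform summary word
            noisy.append(random.choice(vocab_S))
    return noisy
\end{verbatim}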
The BoW encoder has two advantages. First, it lacks the other models' bias to keep the word order of the full text in the summary. Secondly, when using the DBAE to predict summaries from the full text, we can weight the input word embeddings by their corpus-level probability of appearing in a summary, forcing the model to pay less attention to words that only appear in $\mathcal{D}^F$. The Denoising Bag-of-Words Auto-Encoder with input re-weighting is referred to as $f_{F\rightarrow S}^{(\text{DBAE}), 0}$.
\paragraph{First-Order Word Moments Matching ($\bm{\mu}$:1)} We also propose an extractive initialization model. Given the same BoW representation as for the DBAE, function $f_\theta^\mu(s, v)$ predicts the probability that each word $v$ in a full text sequence $s$ is present in the summary. We learn the parameters of $f_\theta^\mu$ by marginalizing the output probability of each word over all full text sequences, and matching these first-order moments to the marginal probability of each word's presence in a summary. That is, let $\mathcal{V}^S$ denote the vocabulary of $\mathcal{D}^S$, then $\forall v \in \mathcal{V}^S$:
\begin{align*}
\mu_v^F = \frac{\sum_{s \in \mathcal{D}^F} \mathbbm{1}_{v \in s} }{|\mathcal{D}^F|} \;\; \text{and} \;\; \mu_v^S = \frac{\sum_{s \in \mathcal{D}^s} \mathbbm{1}_{v \in s} }{|\mathcal{D}^S|}
\end{align*}
We minimize the binary cross-entropy (BCE) between the output and summary moments:
\begin{align*}
\theta^* = \argmin \sum_{v \in \mathcal{V}^S} \text{BCE} \Big ( \frac{\sum_{s \in \mathcal{D}^F} f_\theta^\mu(s, v) }{|\mathcal{D}^F|}, \mu_v^S \Big )
\end{align*}
We then define an initial extractive summarization model by applying $f_{\theta^*}^\mu(\cdot, \cdot)$ to all words of an input sentence, and keeping the ones whose output probability is greater than some threshold. We refer to this model as $f_{F\rightarrow S}^{(\bm{\mu}:1), 0}$.
\subsection{Artificial Training Data}
\label{sec:art-method}
We apply the back-translation procedure outlined above in parallel for all three initialization models. For example, $f_{F\rightarrow S}^{(\bm{\mu}:1), 0}$ yields the following sequence of models and artificial aligned datasets:
\begin{align*}
& f_{F\rightarrow S}^{(\bm{\mu}:1), 0} \; \rightarrow \; \mathcal{D}_{S' \rightarrow F}^{(\bm{\mu}:1), 0} \; \rightarrow \; f_{S\rightarrow F}^{(\bm{\mu}:1), 1} \; \rightarrow \; \mathcal{D}_{F' \rightarrow S}^{(\bm{\mu}:1), 1} \\
& \; \rightarrow \; f_{F\rightarrow S}^{(\bm{\mu}:1), 2} \; \rightarrow \; \mathcal{D}_{S' \rightarrow F}^{(\bm{\mu}:1), 2} \; \rightarrow \; f_{S\rightarrow F}^{(\bm{\mu}:1), 3} \; \rightarrow \; \ldots
\end{align*}
Finally, in order to take advantage of the various strengths of each of the initialization models, we also concatenate the artificial training dataset at each odd iteration to train a summarizer, e.g.:
\begin{align*}
f_{F\rightarrow S}^{(\text{All}), 2} &= \mathcal{S} \Big( \mathcal{D}_{F' \rightarrow S}^{(\text{Pr-Thr}), 1} \cup \mathcal{D}_{F' \rightarrow S}^{(\text{DBAE}), 1} \cup \mathcal{D}_{F' \rightarrow S}^{(\bm{\mu}:1), 1} \Big)
\end{align*}
\begin{table}[t]
\centering
\begin{tabular}{l|c|c|c}
& \textsc{R-1} & \textsc{R-2} & \textsc{R-L} \\
\toprule
\bottomrule
Lead-8 & 21.86 & 7.66 & 20.45 \\
PBSMT & 24.29 & 8.65 & 21.82 \\
Pre-DAE$^1$ & 21.26 & 5.60 & 18.89 \\
\midrule
(Pr-Thr)-0 & 24.79 & \textbf{8.80} & 22.46 \\
(DBAE)-0 & 28.58 & 6.74 & 22.72 \\
($\bm{\mu}$:1)-0 & \textbf{29.17}& 8.10 & \textbf{24.71} \\
\end{tabular}
\caption{Test \textsc{rouge} for trivial baseline and initialization systems. $^1$\citep{DBLP:conf/emnlp/WangL18a}.}
\label{tab:base-bootstrap}
\end{table}
\section{Experiments}
\label{sec:experiments}
\paragraph{Data and Model Choices} We validate our approach on the Gigaword corpus, which comprises a training set of 3.8M article headlines (considered to be the full text) and titles (summaries), along with 200K validation pairs, and we report test performance on the same 2K set used in \citep{DBLP:conf/emnlp/RushCW15}. Since we want to learn systems from fully unaligned data without giving the model an opportunity to learn an implicit mapping, we also further split the training set into 2M examples for which we only use titles, and 1.8M for headlines.
All models after the initialization step are implemented as convolutional seq2seq architectures using Fairseq \citep{DBLP:journals/corr/abs-1904-01038}. Artificial data generation uses top-15 sampling, with a minimum length of 16 for full text and a maximum length of 12 for summaries. \textsc{rouge} scores are obtained with an output vocabulary of size 15K and a beam search of size 5 to match \citep{DBLP:conf/emnlp/WangL18a}.
\paragraph{Initializers} Table~\ref{tab:base-bootstrap} compares test \textsc{ROUGE} for different initialization models, as well as the trivial Lead-8 baseline which simply copies the first 8 words of the article. We find that simply thresholding on distance during the word alignment step of (Pr-Thr) does slightly better than the full PBSMT system used by \citet{DBLP:conf/emnlp/LampleOCDR18}. Our BoW denoising auto-encoder with word re-weighting also performs significantly better than the full seq2seq DAE initialization used by \citet{DBLP:conf/emnlp/WangL18a} (Pre-DAE). The moments-based initial model ($\bm{\mu}$:1) scores higher than either of these, with scores already close to the full unsupervised system of \citet{DBLP:conf/emnlp/WangL18a}.
In order to investigate the effect of these three different strategies beyond their \textsc{rouge} statistics, we show generations of the three corresponding first iteration expanders for a given summary in Table~\ref{tab:art-examples}. The unsupervised vocabulary alignment in (Pr-Thr) handles vocabulary shift, especially changes in verb tenses (summaries tend to be in the present tense), but maintains the word order and adds very little information. Conversely, the ($\bm{\mu}$:1) expansion function, which is learned from purely extractive summaries, re-uses most words in the summary without any change and adds some new information. Finally, the auto-encoder based (DBAE) significantly increases the sequence length and variety, but also strays from the original meaning (more examples in the Appendix). The decoders also seem to learn facts about the world during their training on article text (EDF/GDF is France's public power company).
\paragraph{Full Models} Finally, Table~\ref{tab:full-res} compares the summarizers learned at various back-translation iterations to other unsupervised and semi-supervised approaches. Overall, our system outperforms the unsupervised Adversarial-\textsc{reinforce} of \citet{DBLP:conf/emnlp/WangL18a} after one back-translation loop, and most semi-supervised systems after the second one, including \citet{DBLP:journals/corr/SongTQLL19}'s MASS pre-trained sentence encoder and \citet{DBLP:conf/emnlp/MiaoB16}'s Forced-attention Sentence Compression (FSC), which use 100K and 500K aligned pairs respectively.
As far as back-translation approaches are concerned, we note that the model performances are correlated with the initializers' scores reported in Table~\ref{tab:base-bootstrap} (iterations 4 and 6 follow the same pattern). In addition, we find that combining data from all three initializers before training a summarizer system at each iteration as described in Section~\ref{sec:art-method} performs best, suggesting that the greater variety of artificial full text does help the model learn.
\begin{table}[t!]
\centering
\begin{tabular}{l|c|c|c|c}
& Sup. & \textsc{R-1} & \textsc{R-2} & \textsc{R-L} \\
\toprule
\bottomrule
(Pr-Thr)-2 & 0 & 26.17 & 9.42 & 23.65 \\
(DBAE)-2 & 0 & 28.55 & 10.24 & 25.46 \\
($\bm{\mu}$:1)-2 & 0 & 29.55 & 9.62 & 26.10 \\
(All)-2 & 0 & 29.80 & 11.52 & 27.01 \\
(All)-4 & 0 & {\bf 30.19} & 12.36 & {\bf 27.75} \\
(All)-6 & 0 & 30.04 & {\bf 12.69} & 27.64 \\
\midrule
Advers. & 0 & 28.11 & 9.97 & 25.41 \\
\textsc{rein-} & 10K & 30.01 & 11.57 & 27.61 \\
\textsc{force}$^1$ & 500K & 33.33 & 14.18 & 30.48 \\
\midrule
MASS$^2$ & 100K & 29.79 & 12.75 & 27.45 \\
\midrule
FSC$^3$ & 500K & 30.14 & 12.05 & 27.99 \\
\bottomrule
\midrule
Seq2seq$^4$ & 3.8M & 35.30 & 16.64 & 32.62 \\
\end{tabular}
\caption{Comparison of full systems. The best scores for unsupervised training are bolded. Results from: $^1$\citep{DBLP:conf/emnlp/WangL18a}, $^2$\citep{DBLP:journals/corr/SongTQLL19}, $^3$\citep{DBLP:conf/emnlp/MiaoB16}, and $^4$\citep{DBLP:conf/conll/NallapatiZSGX16}}
\label{tab:full-res}
\end{table}
\paragraph{Conclusion} In this work, we use the back-translation paradigm for unsupervised training of a summarization system. We find that the model benefits from combining initializers, matching the performance of semi-supervised approaches.
\clearpage
\section{Introduction}
Consider the problem of hiring workers to
complete complex tasks on crowdsourcing platforms such as Mechanical
Turk. A principal must select a set of participants, henceforth
agents, whose contributions will be aggregated to complete the task.
The principal's value for the task is a function of the set of
participants selected and the principal's budget limits the total
payments to participants. We assume that the principal's value is
submodular, i.e., it exhibits diminishing returns to recruiting
additional participants. The participants have a private cost for
participating and will choose to participate strategically to optimize
their payments received relative to this cost. The principal seeks a
budget feasible mechanism for selecting participants so as to maximize
the value of the completed task.
The literature on {\em budget feasible mechanism design} initiated by
\citet{sin-10} studies this problem; however, it primarily considers
sealed-bid mechanisms which do not tend to be seen on crowdsourcing
platforms like Mechanical Turk. Instead, these platforms use posted
pricing mechanisms. We follow a traditional economics approach to
this problem where agents' costs are drawn from a common prior
distribution and a mechanism is sought to optimize the principal's
value function in expectation. Note that this approach is especially
relevant to the principal's problem as the workers on crowdsourcing
platforms are drawn from a large population of available workers.
We show that posted pricing mechanisms give a good
approximation to the optimal sealed-bid mechanism. Additionally, we
give efficient algorithms for calculating the appropriate prices. In comparison to other work in optimization of prices in
crowdsourcing, our work focuses on the use of prices to control
participation and not the level of effort of participants. Controlling
the level of effort of participants was studied in online behavioral
experiments by \citet{HSSV-15}, theoretically for crowdsourcing
contests by \citet{CHS-12}, and for user generated content by
\citet{ISS-15}.
\paragraph{Overview of Approach.}
Our approach follows similarly to that of \citet{ala-14} and
\citet{yan-11}. The starting point for our analysis is an upper bound
on the performance of the optimal sealed bid mechanism that relaxes
the ex post budget constraint on the mechanism to hold ex ante, i.e., in
expectation over the private costs of the agents. Via this {\em ex ante
relaxation} and the \citet{mye-81} theory of virtual values, we
construct a posted price mechanism that is budget feasible in
expectation and a $1 - 1/e$ approximation to the optimal ex ante
mechanism when the principal's value function exhibits decreasing
returns, i.e., is {\em submodular}. For the special case
where the principal's value function is {\em additive}, this posted pricing
is optimal (for the ex ante relaxation).
We then consider posting the prices from the solution to the ex ante
relaxation until the budget runs out. The resulting mechanism is ex
post budget feasible, but suffers a loss in performance because the
budget may run out early. The main technical contribution of this
work is to show that the performance of such a price
posting mechanism compares favorably to the optimal sealed-bid
mechanism. Previous work in mechanism design gives well-understood techniques for satisfying ex post allocation constraints; ex post payment constraints require different techniques, and our analyses follow two basic approaches that combine optimization and mechanism design concepts. To analyze the
performance of the posted pricing under any arrival order of the
agents, we solve the ex ante relaxation with a slightly smaller budget
and then, using results from the \citet{VCZ-11} analysis of {\em
contention resolution schemes}, show that it is unlikely for the
original ex post budget constraint to bind. Alternatively, we obtain better bounds
for additive value functions and when the order of agent arrivals can
be specified by the mechanism via the {\em correlation gap} approach
of \citet{yan-11}. As a corollary, we obtain new correlation gap results for integral and fractional knapsack set functions. Moreover, when the environment is symmetric (both in
distribution of agent costs and the principal's value function), the
submodular case can be reduced to the additive case.
The prices identified above can be computed or approximately computed
in polynomial time. In particular, for submodular value functions, we
reduce the problem of finding the prices to the well-known {\em greedy
algorithm} for submodular optimization. The identified prices
approximate the optimal prices with relative loss in the value
function that is within a factor of $1-1/e$. For additive value
functions, the optimization problem simplifies to a monopoly pricing
problem of classic microeconomics. Similarly to the \citet{MS-83}
treatment of welfare maximization subject to budget balance in a
buyer--seller exchange, optimization in this context is based on {\em
Lagrangian virtual surplus}. These optimal prices can be
approximated arbitrarily precisely by solving this problem on a
discretized instance.
\paragraph{Related work.}
The prior literature on budget feasibility primarily considers a
worst-case design and analysis framework that compares the performance
of the designed mechanism to the {\em first-best} outcome, i.e., the
one that could be obtained if the agents' costs were public. See
\citet{sin-10}, \citet{BCGL-12}, \citet{BKS-12}, and \citet{AGN-14}.
Our analysis compares the designed mechanism, in expectation for the
known prior distribution, to the {\em second-best} outcome, i.e., the
one obtained by the Bayesian optimal mechanism.
The following results are for prior-free mechanisms in comparison to
the first-best outcome. \citet{sin-10} obtained a randomized truthful
budget feasible mechanism with a constant factor approximation for
submodular value functions, \citet{CGL-11} then improved the analysis
of this mechanism to a 0.13 approximation. In the Bayesian setting,
\citet{BCGL-12} obtained a constant approximation for subadditive
functions. More recently, \citet{AGN-14} obtained better bounds by
considering large markets, which we also consider in this
paper. Finally, \citet{BKS-12} also considered posted pricing
mechanisms but when the agents arrive online. They obtained a constant
approximation for the class of symmetric submodular functions. They
also obtained an $O(\log n)$-approximation mechanism for the case of submodular
functions. In comparison to this last paper, we give much better
bounds when the prior distribution on costs is known.
The starting point for our analysis is the solution to the relaxed
problem of budget balance in expectation, i.e., ex ante. In the
additive case, this problem was recently studied by \citet{EG-14}.
They show that posted pricing mechanisms solve the relaxed problem and
remark that the same performance can be obtained with ex post budget
balance, but at the expense of relaxing ex post individual rationality
(for the bidders) and not with a posted pricing. This latter
observation follows, for example, by applying a general construction
of \citet{EF-99}. Our analysis of the relaxed problem gives a much simpler proof of
their main theorem.
Budget feasibility has also been studied in the context of
crowdsourcing. Among that line of work, the model considered in
\citet{AGN-14} is the closest to ours, and will be compared in detail
below. \citet{SK-13} and \citet{SM-13} consider the special case of
our model where the principal's value function is the number of tasks
performed. The former studies posted pricing for agents with i.i.d.\@
costs from an unknown distribution, while the latter studies sealed
bid mechanisms without a prior.
\paragraph{Our results.}
\begin{figure}
\begin{center}
\scalebox{0.75}{
\renewcommand{\arraystretch}{1.4}
\begin{tabular}{ |c|c|c|c|c |}
\hline
\rule{0pt}{3ex} Value Function & \begin{tabular}{c}Mechanism \\ Family\end{tabular} & \begin{tabular}{c}Ex Post Constraint \\ Approach\end{tabular} & General Result & \begin{tabular}{c}Large \\ Markets \end{tabular} \rule[-1.2ex]{0pt}{0pt} \\
\hline \hline
Additive & \rule[1.2ex]{0pt}{0pt} \begin{tabular}{c}Sequential\\ Posted Pricing\end{tabular}
& \begin{tabular}{c}Correlation \\ Gap \end{tabular} & $(1-\frac{1}{\sqrt{2 \pi k}})(1 - \frac{1}{k}) $& $1$\\\hline
\begin{tabular}{c}Symmetric \\ Submodular \end{tabular} & \rule[1.2ex]{0pt}{0pt} \begin{tabular}{c}Oblivious\\ Posted Pricing\end{tabular}
& \begin{tabular}{c}Correlation \\ Gap \end{tabular} & $(1-\frac{1}{\sqrt{2 \pi k}})(1 - \frac{1}{k}) $& $1$\\\hline
\begin{tabular}{c}Submodular \\ (computational) \end{tabular} &\begin{tabular}{c}Oblivious \\ Posted Pricing\end{tabular} & \begin{tabular}{c}Contention \\ Resolution \end{tabular}
& $(1 - \frac{1}{e })^2 ( 1 - \epsilon) (1 - e^{- \epsilon^2 (1 - \epsilon) k / 12})$ & $(1 - \frac{1}{e })^2 \approx 0.40$ \\\hline
\begin{tabular}{c}Submodular \\ (non-computational) \end{tabular} & \begin{tabular}{c}Oblivious \\ Posted Pricing\end{tabular} & \begin{tabular}{c}Contention \\ Resolution \end{tabular}
& $(1 - \frac{1}{e }) ( 1 - \epsilon) (1 - e^{- \epsilon^2 (1 - \epsilon) k / 12})$ & $1 - \frac{1}{e } \approx 0.63$ \\\hline
\end{tabular}}
\caption{Our results are approximations to the
Bayesian optimal mechanism. Bounds are parameterized by the market size $k$, a lower bound on the number of
agents that can be simultaneously selected with the given budget (see
Definition \ref{def:largemarkets}). In large markets, $k$ grows large. The given results with the contention resolution approach require $k \geq 4$ and
$\epsilon \in (2 /k, 1/2)$; a result for $k < 4$ is mentioned in Section~\ref{s:crs}. For the symmetric submodular results, we also assume symmetric distributions on costs. Our
computational results also have an additional
$o(1)$ loss due to discretization.}
\end{center}
\label{f:results-table}
\end{figure}
Our results are summarized in Figure~\ref{f:results-table}. We
consider two main classes of valuation functions, additive and
submodular. We use two different methods to satisfy the ex post payment constraint, one is based on contention resolution schemes and the
other on correlation gap. Contention resolution schemes give an
oblivious posted price mechanism, i.e., one that obtains its proven
bound under any arrival order of the agents. The correlation gap
approach, for the case where the principal has an additive value
function, gives a sequential posted price mechanism. Such a mechanism
is specified by an ordering on agents and take-it-or-leave-it prices
to offer each agent. As a special case, we consider symmetric
environments where both the value function and the distribution is
symmetric.
Our results can most directly be compared to those of
\citet{AGN-14}, but with the following caveats. Their results are for
sealed bid mechanisms while ours are for posted pricings; their
mechanism is prior-free while ours is parameterized by the prior
distribution on agent costs; their results compare performance to the
first-best outcome, i.e., without incentive constraints, while ours
compare to the second-best outcome, i.e., that of the Bayesian optimal
mechanism (with incentive constraints). They obtain approximation ratios of $1 - 1/e$, $1/3$ and $1/2$ in large markets respectively for additive, submodular (computational), and submodular (non-computational) value functions. Moreover, they show that no truthful mechanism can achieve an approximation ratio better than $1 - 1/e$ with respect to the first-best outcome for additive value functions.
\paragraph{Discussion about posted pricing mechanisms and benchmarks.}
Following a line of literature in mechanism design that was initiated
by \citet{CHMS-10}, the goal of this work is to show that there exist simple posted pricing mechanisms that approximate the optimal sealed-bid mechanism. Two quantities of interest therefore need to be separated. The first is the cost of incentive compatibility in budget feasible settings, i.e., the gap between the first-best and second-best benchmarks. The second is the cost of simplicity, i.e., the loss of a posted pricing mechanism compared to the Bayesian optimal mechanism. Prior work with comparisons to a first-best benchmark has approximations that are a combination of both of these quantities. Our comparison to the
second-best outcome isolates the loss from a simple decentralized
pricing over the optimal centralized mechanism as the quantity of
interest.
\paragraph{Paper Organization.}
We start with preliminaries in Section~\ref{s:prelim} to introduce the
model and different concepts used in this paper. We then
describe posted price mechanisms for the ex ante relaxation, where the
budget holds in expectation, in Section~\ref{s:relaxations}. We
explain how to go from an ex ante posted price mechanism to an ex post
posted price mechanism using two different methods, one inspired by
contention resolution schemes in Section~\ref{s:crs} and another based
on a correlation gap analysis in Section~\ref{s:additive}. We
tackle the computation issues of finding a good ex ante mechanism in
Section~\ref{s:optimal}. In Section~\ref{s:sym}, we study symmetric environments. Up to Section~\ref{s:sym}, cost distributions are assumed to be regular and Section~\ref{s:irregular} considers the case where some distributions might be irregular. Throughout the paper, we assume that the principal's valuation
function is monotone and submodular.
\section{Preliminaries}
\label{s:prelim}
There are $n$ agents $N = \{1,\ldots,n\}$. Agent $i$ has a private
cost $\costi$ for providing a service that is drawn from a
distribution $\disti$ (denoting the cumulative distribution function)
with density $\densi$. Indicator variable $\alloci$ denotes whether
or not $i$ provides service and $\paymenti$ denotes the payment $i$
receives. Agent $i$ aims to optimize her utility given by $\paymenti
- \costi \alloci$. The cost profile is denoted $\costs =
(\costi[1],\ldots,\costi[n])$; the joint distribution on costs is the
product distribution $\dists = \disti[1] \times \cdots \times
\disti[n]$; the payment profile is denoted $\payments =
(\paymenti[1],\ldots,\paymenti[n])$; and the allocation profile is
denoted $\allocs = (\alloci[1],\ldots,\alloci[n])$.
The principal has a value function $\val : \{0,1\}^n \to {\mathbb R}_{+}$. For
allocation profile $\allocs \in \{0,1\}^n$ or set of agents $S = \{i :
\alloci = 1\}$ who provide service, the value to the principal is
$\val(\allocs) = \val(S)$. The principal has a budget $B$ and
requires the payments to the agents not to exceed the budget, i.e.,
$\sum_i \paymenti \leq B$. The following mathematical program
captures the principal's objective.
\begin{align}
\label{eq:main}
\max_{\allocs,\payments}\ &\expect[\costs]{\val(\allocs(\costs))}\\
\notag \text{s.t. } &\sum\nolimits_i \paymenti(\costs) \leq B \hspace{0.5 cm} \forall \costs,\\
\notag &\text{$\allocs(\cdot)$ and $\payments(\cdot)$ are incentive compatible.}
\end{align}
We consider only mechanisms that are incentive compatible. A
mechanism is {\em incentive compatible} (IC), if truthful reporting of
the agents is a dominant strategy equilibrium.\footnote{The
restriction to dominant strategy mechanisms over Bayesian incentive
compatible mechanisms is without loss for the budget feasibility
objective.} We will consider the budget constraint both ex ante,
i.e., in expectation over realizations of agents' costs and random
choices of the mechanism, and ex post, i.e., the payments to the
agents never exceed the budget. The main goal of this paper is to
approximate the optimal ex ante budget feasible mechanism with an ex
post budget feasible posted pricing mechanism. Posted pricing
mechanisms are trivially incentive compatible.
\begin{definition}
\label{d:posted-pricing}
The {\em posted pricing} $(\critcosts,\orders)$, for prices $\critcosts$ and ordering on agents $\orders$, is:
\begin{enumerate}
\item The remaining budget is initially $B$.
\item The agents arrive in order $\orders$.
\item If agent $i$ arrives with cost $\costi$ below her offered price
$\critcosti$ which is below the remaining budget, then select this agent for service, pay her $\critcosti$, and deduct $\critcosti$ from the remaining budget. Otherwise, discard this agent.
\end{enumerate}
\end{definition}
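To make the definition concrete, the following Python sketch (illustrative only; the function and variable names are ours, not part of the formal model) simulates one run of a posted pricing on realized costs:
\begin{verbatim}
def run_posted_pricing(prices, order, costs, budget):
    # Simulate the posted pricing (prices, order) on realized costs.
    remaining = budget
    selected, paid = set(), 0.0
    for i in order:
        # Agent i is served iff her cost is at most her offered price
        # and the price still fits within the remaining budget.
        if costs[i] <= prices[i] and prices[i] <= remaining:
            selected.add(i)
            paid += prices[i]
            remaining -= prices[i]
    return selected, paid
\end{verbatim}
Since a selected agent is paid her posted price regardless of her report, accepting exactly when her cost is at most her price is a dominant strategy.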
For (implicit) distribution on costs $\dists$, we can equivalently
specify a posted pricing $(\critcosts,\orders)$ as
$(\exquants,\orders)$ where $\exquanti = \disti(\critcosti)$ is the
{\em marginal probability} that agent $i$ with cost $\costi \sim
\disti$ would accept the price $\critcosti$.\footnote{It is common in
Bayesian mechanism design to consider the agents' private costs in
{\em quantile space} where $i$'s quantile $\quanti = \disti(\costi)$
is the measure of cost lower than $\costi$ according to $\disti$.
Agent quantiles are always uniformly distributed on $[0,1]$. From
this perspective, $\exquanti$ is agent $i$'s price in quantile
space.}
Note that the prices $\critcosts$ are non-adaptive, i.e., fixed before
the agents arrive. We consider posted pricing mechanisms under two
different models for agent arrival. In the {\em sequential posted
pricing} model, the ordering $\orders$ can be fixed in advance by
the mechanism and, without computational considerations, our analysis
is for the best case ordering of the prices. In the {\em oblivious
posted pricing} model, the ordering $\orders$ is unconstrained and
our analysis is worst case with respect to this ordering. An
oblivious posted pricing is denoted $\critcosts$. We
compare our mechanisms to an {\em ex ante posted pricing}
$\critcosts$ where the budget constraint holds in expectation, i.e.,
$\sum_i \critcosti\,\critquanti \leq B$. The value of an ex
ante posted pricing is $\expect[S\sim\critquants]{\val(S)}$ where
$S\sim\critquants$ adds each agent $i$ to $S$ independently with
probability $\critquanti$.
The paper focuses on value functions that are monotone and submodular
(Definition~\ref{d:submodular}). An important special case, which we
will treat separately, is that of {\em additive value functions} where
each agent has a value $\vali$ and the value function is $\val(S) =
\sum_{i\in S} \vali$.
\begin{definition}
\label{d:submodular}
A set function $\val : \{0,1\}^n \to {\mathbb R}_+$ is {\em monotone submodular} if
\begin{itemize}
\item (monotonicity) $\val(T) \leq \val(S)$ for all $T \subset S$, and
\item (submodularity) for all $T \subset S$ the marginal contribution of $i \not \in S$ to $T$ is at least its marginal contribution to $S$. In other words,
$$
\val(T \cup \{i\}) - \val(T) \geq \val(S \cup \{i\}) - \val(S).
$$
\end{itemize}
\end{definition}
Our analysis is based on the relationship between a set function and two
standard extensions of a set function from the domain $\{0,1\}^n$ to
the domain $[0,1]^n$. For submodular set functions, these extensions
were studied by \citet{CCPV-07} and \citet{ADSY-10}.
\begin{definition}
\label{d:submodular-extensions}
Given a set function $\val : \{0,1\}^n \to {\mathbb R}_+$,
\begin{itemize}
\item its {\em concave closure} $V^+(\cdot)$ (a.k.a., {\em
correlated value}) is the smallest concave function that upper
bounds the set function. Alternatively, $V^+(\exquants) =
\max_{{\cal D}} \expect[S \sim {\cal D}]{\val(S)}$ with the
maximization taken over all distributions ${\cal D}$ with marginal probabilities
$\exquants = (\exquanti[1],\ldots,\exquanti[n])$; and
\item its {\em multilinear extension} $V(\cdot)$ (a.k.a., {\em
independent value}) is the expected value of the set function when
each element $i$ is drawn independently with marginal probability
$\exquanti$. In other words, $V(\exquants) = \expect[S \sim
\exquants]{\val(S)}$.
\end{itemize}
\end{definition}
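While the concave closure maximizes over correlated distributions and is in general hard to evaluate, the multilinear extension is easy to estimate by sampling. The sketch below (illustrative; the coverage function is an arbitrary example of ours) estimates $V(\exquants)$ by Monte Carlo:
\begin{verbatim}
import random

universe = {0: {"a", "b"}, 1: {"b", "c"}, 2: {"c", "d"}}

def cover(S):
    # A small coverage function: monotone and submodular.
    covered = set()
    for i in S:
        covered |= universe[i]
    return len(covered)

def multilinear_extension(value_fn, q, samples=10000):
    # Monte Carlo estimate of V(q) = E[value_fn(S)], where S contains
    # each element i independently with probability q[i].
    n = len(q)
    total = 0.0
    for _ in range(samples):
        S = {i for i in range(n) if random.random() < q[i]}
        total += value_fn(S)
    return total / samples

print(multilinear_extension(cover, [0.5, 0.5, 0.5]))
\end{verbatim}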
For any set function, the concave closure is clearly an upper bound on
the multilinear extension. For submodular functions the inequality
approximately holds in the opposite direction as well. By the
interpretation of the multilinear extension as the expected value of
the set function for independent distribution and the concave closure
as the expected value of the set function for correlated
distributions, their worst case ratio over marginal probabilities
$\exquants$ is known as the {\em correlation gap} \citep{ADSY-10}.
\begin{theorem}[\citealp{CCPV-07}, \citealp{ADSY-10}]
\label{t:cg-submodular}
For a monotone submodular set function $\val(\cdot)$, the correlation gap satisfies
$$\min_{\exquants} \frac{V(\exquants)}{V^+(\exquants)} \geq 1 - 1/e.$$
\end{theorem}
\begin{theorem}[\citealp{yan-11}]
\label{t:cg-multi-unit}
For a $k$-highest-value-elements set function $\val(\cdot)$, which is
additive with value $\vali$ for element $i$ up to a capacity of at most $k$
elements, the correlation gap satisfies $$\min_{\exquants} \frac{ V(\exquants) }{
V^+(\exquants)} \geq 1-1/\sqrt{2 \pi k}.$$
\end{theorem}
Our analysis is parameterized by a measure of the size of the
market. This notion of market size is standard in the literature,
e.g., see \citet{BCGL-12} and \citet{AGN-14}. A large market analysis
considers the market size in the limit. Although large markets are described as an assumption by \citet{AGN-14}, the market size $k$ is a parameter in our analysis and we obtain results for any market size.
\begin{definition} \label{def:largemarkets}
A market is {\em $k$-large} for prices $\critcosts$ and budget $B$ if
$B / \critcosti \geq k$ for all agents $i$.
\end{definition}
Note that the market size depends on prices and therefore on the mechanism, which is inherent to our analysis. These prices can trivially be upper bounded by the maximum cost that can be drawn from the distributions.
\section{The Ex Ante Budget Feasible and Concave Closure Relaxations}
\label{s:relaxations}
In this section we relax the objective function and the budget
constraint to make the problem more amenable to optimization. We first
relax the budget constraint so that it only holds in expectation,
making it an ex ante feasibility constraint. We then upper bound the
value function by its concave closure. With an ex ante feasibility
constraint, the objective is to optimize the following ex ante program over
allocation rule $\allocs(\cdot)$ and payment rule $\payments(\cdot)$
with $\costs \sim \dists$.
\begin{align}
\label{eq:bayes-opt-prog}
\max_{\allocs,\payments}\ &\expect[\costs]{\val(\allocs(\costs))}\\
\notag \text{s.t. } &\sum\nolimits_i \expect[\costs]{\paymenti(\costs)} \leq B,\\
\notag &\text{$\allocs(\cdot)$ and $\payments(\cdot)$ are IC.}
\end{align}
When payments are part of the principal's objective or constraints,
the Bayesian mechanism design problem will typically rely on the
\citet{mye-81} theory of virtual values or, in our case where the
agents are sellers, virtual costs. The {\em virtual cost} of agent
$i$ with cost $\costi$ drawn from distribution $\disti$ is
$\virti(\costi) = \costi + \frac{\disti(\costi)}{\densi(\costi)}$.
The {\em virtual surplus} of an agent $i$ with virtual cost
$\virti(\costi)$ and allocation indicator $\alloci$ is
$\virti(\costi)\,\alloci$.
\begin{lemma}[\citealp{MS-83}]
\label{l:payment=virtual-cost}
In any incentive compatible mechanism, any agent $i$'s expected
payment is equal to her expected virtual surplus, i.e., for $\costs
\sim \dists$,
$$\expect[\costs]{\paymenti(\costs)} = \expect[\costs]{\virti(\costi)\,\alloci(\costs)}.$$
\end{lemma}
The definition of virtual costs and Lemma~\ref{l:payment=virtual-cost}
allows the ex ante program~\eqref{eq:bayes-opt-prog} to be rewritten in terms
of the allocation rule only. To do so, we invoke the following
characterization of incentive compatible mechanisms of
\citet{mye-81}.
\begin{lemma}[\citealp{mye-81}]
\label{l:IC=monotone}
There exists an incentive compatible mechanism with allocation rule
$\allocs(\cdot)$ if and only if $\allocs(\cdot)$ is monotone in the
cost of any agent.
\end{lemma}
We now rewrite the optimization program~\eqref{eq:bayes-opt-prog} by
substituting in virtual costs for
payments to obtain the following virtual surplus program,
\begin{align}
\label{eq:relaxed-bayes-opt-prog}
\max_{\allocs}\ &\expect[\costs]{\val(\allocs(\costs))}\\
\notag \text{s.t. } &\sum\nolimits_i \expect[\costs]{\virti(\costi)\,\alloci(\costs)} \leq B, \\
\notag & \allocs(\cdot) \text{ is monotone in the cost of any agent.}
\end{align}
For the general case of submodular value functions, the expected
value of the set function $\val(\cdot)$ is upper bounded by its
concave closure (Definition~\ref{d:submodular-extensions}) as
follows. The allocation rule $\allocs(\cdot)$ that optimizes this
virtual surplus program induces, for $\costs \sim \dists$, a
distribution over sets of winning agents. Denote this distribution
by ${\cal D}$ and denote by $\exquants$ the profile of marginal
probabilities, i.e., with $\exquanti = \prob[S \sim {\cal D}]{ i \in
S}$. By the definition of the concave closure of the set function
$\val(\cdot)$, $\expect[\costs]{\val(\allocs(\costs))} =
\expect[S\sim {\cal D}]{\val(S)} \leq V^+(\exquants)$.
The payment to an agent is lower bounded by the payment from price
posting. As above, the optimal mechanism selects agent $i$ with
probability $\exquanti$. When virtual costs are monotonically
increasing, i.e., in the case of \emph{regular distributions}, the expected
payment to an agent $i$ selected with probability $\exquanti$ is
minimized if agent $i$ is served if and only if $\costi \leq
\disti^{-1}(\exquanti)$ by Lemma~\ref{l:payment=virtual-cost} since
these costs minimize $\virti(\costi)$.\footnote{The case of irregular distributions is considered in Section~\ref{s:irregular}.} Thus, the mechanism that minimizes expected
payments and serves each agent $i$ with probability $\exquanti$ is the
mechanism that posts price $\pricei = \disti^{-1}(\exquanti)$ to each
agent $i$.
\begin{lemma}
For any agent with cost drawn from regular distribution $\disti$ and
any incentive compatible mechanism that selects agent $i$ with
probability $\exquanti$, the expected payment of agent $i$ is at least
$\exquanti \critcosti$ where $\critcosti = \disti^{-1}(\exquanti)$.
\end{lemma}
Combining the relaxation of the value function and the relaxation of
the payments we obtain the following concave closure program,
\begin{align}
\label{eq:relaxed-concave-bayes-opt-prog}
\max_{\quants}\ &V^+(\quants) \\
\notag \text{s.t. } &\sum\nolimits_i \quanti \disti^{-1}(\quanti) \leq B.
\end{align}
\begin{lemma}\label{l:ub} Let $\exquants^+$ be the optimal solution to the concave closure program~\eqref{eq:relaxed-concave-bayes-opt-prog}, then $V^+(\exquants^+)$ upper bounds the performance of the optimal ex ante mechanism in the case of regular cost distributions.
\end{lemma}
Posted price mechanisms are trivially incentive compatible. Since the agents' costs are independent, the set of agents who accept their offers under a posted price mechanism is a random set that contains each agent $i$ independently with some probability $\quanti$. Therefore the performance of a posted price mechanism where agents accept their offers with probabilities $\quants$ is the multilinear extension $V(\quants)$. This motivates us to rewrite the concave closure program~\eqref{eq:relaxed-concave-bayes-opt-prog} as the following multilinear extension program,
\begin{align}
\label{eq:multilinear-prog}
\max_{\quants}\ &V(\quants) \\
\notag \text{s.t. } &\sum\nolimits_i \quanti \disti^{-1}(\quanti) \leq B.
\end{align}
Maximizing the multilinear extension program gives us an ex ante posted price mechanism that is approximately optimal.
\begin{theorem}
\label{thm:mlextccext} In the case of monotone submodular value functions and regular cost distributions, the ex ante mechanism that posts price $\critcost_i = \disti^{-1}(\exquanti)$ to each agent $i$ is a $1 - 1/e$ approximation to the optimal ex ante mechanism, where $\exquants$ is the optimal solution to the multilinear extension program~\eqref{eq:multilinear-prog}.
\end{theorem}
\begin{proof}
Let $\exquants^{+}$ be the optimal solution to the concave closure
program~\eqref{eq:relaxed-concave-bayes-opt-prog}. By
Theorem~\ref{t:cg-submodular}, $V(\exquants^{+}) \geq (1 -
1/e)V^+(\exquants^{+})$. By the optimality of $\exquants$,
$V(\exquants) \geq V(\exquants^{+})$. Since the performance
of posting price $\disti^{-1}(\exquanti)$ to each agent $i$ is
$V(\exquants)$ and since $V^+(\exquants^{+})$ upper bounds
the performance of the optimal ex ante mechanism by Lemma~\ref{l:ub}, posting price
$\disti^{-1}(\exquanti)$ to each agent is a $1 - 1/e$ approximation
to the optimal ex ante mechanism.
\end{proof}
Note that in the additive case where each agent has value $\vali$,
$V(\quants) = V^+(\quants) = \sum_i \vali \quanti$ and we get
the following corollary.
\begin{corollary} \label{c:optexante} In the case of additive value functions and regular cost distributions, the ex ante mechanism that posts price $\critcost_i = \disti^{-1}(\exquanti)$ to each agent $i$ is an optimal mechanism, where $\exquants$ is the optimal solution to the multilinear extension program~\eqref{eq:multilinear-prog}.
\end{corollary}
We discuss the computational issues of finding a good solution
$\quants$ to the multilinear extension
program~\eqref{eq:multilinear-prog} in Section~\ref{s:optimal}. For
the case of submodular functions, we reduce the problem to submodular
function maximization (with a cardinality constraint) for which the greedy
algorithm gives a $1-1/e$ approximation. In the additive case, we
will show that the optimal ex ante budget feasible mechanism can be
found by taking the Lagrangian relaxation of the virtual surplus
program~\eqref{eq:relaxed-bayes-opt-prog}.
\section{Submodular Value and Oblivious Posted Pricing}
\label{s:crs}
In the previous section, we obtained an ex ante mechanism by
optimizing the multilinear extension
program~\eqref{eq:multilinear-prog}. In this section we analyze the
performance of oblivious posted pricing (with an ex post budget
constraint).
The approach of this section is the following: lower the budget by
some small amount and optimize the multilinear extension
program~\eqref{eq:multilinear-prog} so that the lowered budget is
satisfied ex ante. With the budget sufficiently lowered, with high
probability the cost (sum of prices) of the set of agents who would
accept their offer is under the original budget (regardless of their
arrival order and ex post).
This approach is a special case of that taken by the contention
resolution schemes of \citet{VCZ-11} and we first review some known
bounds. The first comes from the submodularity of the value function; the second comes from the Chernoff bound.
\begin{theorem}[\citealp{BKNS-10}]
\label{t:cr-main}
Given a non-negative monotone submodular function $\val(\cdot)$, a random set $R$ which contains each agent $i$ independently with probability $\exquanti$, and a (possibly randomized) procedure $\pi$
that maps (possibly infeasible) sets to feasible sets such that,
\begin{itemize}
\item (marginal property) for all $i$, $\prob[R\sim \exquants;\pi]{i \in \pi(R) \,\mid\, i
\in R} \geq \gamma$, and
\item (monotonicity property) for all $T \subseteq S$ and $i \in T$, $\prob[\pi]{i \in \pi(T)} \geq \prob[\pi]{i \in \pi(S)}$,
\end{itemize}
then $\expect[R\sim \exquants;\pi]{\val(\pi(R))} \geq \gamma \cdot \expect[R\sim \exquants]{\val(R)}$.
\end{theorem}
\begin{theorem}[\citealp{VCZ-11}\footnote{The formulation of this theorem is slightly different from that in \citet{VCZ-11} but follows easily from their analysis.}]
\label{t:CR-knapsack}
Given $\epsilon \in (0,1/2)$, budget $B$, independent variables $p_i$ that are the payments to each agent
such that,
\begin{itemize}
\item (scaled ex ante budget constraint) $\sum_i \expect{p_i}
\leq (1-\epsilon)\, B$,
\item ($k$-large market) $p_i$ is bounded by $[0, B / k]$ for all $i$, and
\item $ k > 2 / \epsilon$,
\end{itemize}
then the probability that the sum of costs of selected agents does not exceed
the budget less the cost of any agent, i.e., $\prob{\sum_{i } p_i\leq (1-1/k) B}$, is at least
$1 - e^{-\epsilon^2(1-\epsilon)k/12}$.
\end{theorem}
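Theorem~\ref{t:CR-knapsack} can also be checked by simulation. The sketch below (ours; the uniform pricing instance and the parameter values are assumptions for illustration) estimates the probability that the agents who would accept their prices together cost at most $(1-1/k)B$ when the scaled ex ante budget constraint holds:
\begin{verbatim}
import random

def prob_budget_ok(prices, accept_probs, budget, k, trials=100000):
    # Estimate Pr[ sum of prices of accepting agents <= (1 - 1/k) B ]
    # when each agent i accepts independently with prob accept_probs[i].
    ok = 0
    for _ in range(trials):
        spend = sum(p for p, q in zip(prices, accept_probs)
                    if random.random() < q)
        if spend <= (1 - 1.0 / k) * budget:
            ok += 1
    return ok / trials

# Illustration: all prices equal B/k, acceptance probabilities scaled
# so that the ex ante spend is (1 - eps) * B.
n, k, eps, B = 100, 20, 0.25, 1.0
prices = [B / k] * n
accept = [(1 - eps) * k / n] * n  # sum_i prices[i]*accept[i] = (1-eps)B
print(prob_budget_ok(prices, accept, B, k))
\end{verbatim}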
We now connect these two results by relating the probability that the
sum of costs does not exceed $(1-1/k) B$ of Theorem~\ref{t:CR-knapsack}
to $\gamma$ of Theorem~\ref{t:cr-main} and then show that posted
pricings satisfy the conditions of Theorem~\ref{t:cr-main}.
\begin{lemma}
\label{l:budget-offer-relation}
For a sequential posted pricing $(\prices,\orders)$ that satisfies the scaled ex ante budget constraint and $k$-large market conditions, the probability
that an agent is offered her price is lower bounded by $\prob[R \sim
\exquants]{\sum_{i \in R} \critcosti \leq (1-1/k) B}$, the probability that the sum of
the prices of agents who would accept their offered price is at most
$(1-1/k)B$.
\end{lemma}
\begin{proof}
If the total cost of all agents who would accept their price is at
most $(1-1/k)B$, then at least $B/k$ of the budget remains at the time
any agent $i$ is considered in the sequence $\orders$. Since the market
is $k$-large, $\pricei \leq B/k$, so it is feasible to serve this agent
and she is offered her price $\pricei$ by the sequential posted pricing
mechanism.
\end{proof}
\begin{lemma}\label{l:CR-main}
For sequential posted pricing $(\exquants,\orders)$, if each agent is
offered her price with probability at least $\gamma$, then the
expected value of the mechanism is at least $\gamma
V(\critquants)$.
\end{lemma}
\begin{proof}
It suffices to show, for sequential posted pricing
$(\exquants,\orders)$ with an ex post budget constraint $B$,
that the marginal and monotonicity properties of
Theorem~\ref{t:cr-main} hold.
In our case, $R\sim \exquants$ is the random set of agents who would accept
their offer if the budget never runs out. Given a set of agents $R$
who accept their offer, define $\pi(R)$ to be the set of agents who
accept their offer and who arrive before the budget runs out. In our
case, $\pi$ is deterministic given the ordering $\orders$. Note that
$\prob[R\sim \exquants;\pi]{i \in \pi(R) \,\mid\, i \in R}$ is equal to
the probability that an agent gets offered her price, meaning that she
arrives before the budget runs out. Thus, by the assumption of the
lemma the marginal property holds.
For the monotonicity property, consider two sets $T \subseteq S$. When
an agent $i$ arrives in the posted price mechanism, the mechanism has
spent less if the set of agents who accept their offer is $T$ than if
this set is $S$. Therefore $i \in \pi(S)$ implies that $i \in \pi(T)$ and the monotonicity property holds.
\end{proof}
By combining the previous results, we obtain the main theorem for this section.
\begin{theorem}
For $\epsilon \in (0,1/2)$, if the oblivious posted pricing $\prices$
corresponding to the optimal solution $\exquants$ to the multilinear
extension program~\eqref{eq:multilinear-prog} with budget $(1 -
\epsilon) B$
(i.e., with $\price_i = \disti^{-1}(\exquanti)$ for each agent $i$)
satisfies $2/ \epsilon \leq k \leq B / \max_i \pricei$, then this
posted pricing mechanism is a $(1 - 1 /e)(1 - \epsilon) (1- e^{-
\epsilon^2 ( 1- \epsilon) k / 12})$ approximation to the optimal
mechanism for submodular value functions and $(1 - \epsilon) (1- e^{-
\epsilon^2 ( 1- \epsilon) k / 12})$ for additive value functions in the case of regular cost distributions.
\end{theorem}
\begin{proof}
The proof starts with the ex ante mechanism from the previous section and then applies results from this section to modify it into an ex post mechanism.
Let $\exquants$ be the optimal solution to the multilinear extension
program~\eqref{eq:multilinear-prog} with budget $(1 - \epsilon)
B$, $\exquants^+_{(1-\epsilon) B}$ be the optimal solution to the concave closure program~\eqref{eq:relaxed-concave-bayes-opt-prog} with budget $(1 - \epsilon)
B$, and $\exquants^+_{B}$ be the optimal solution to the concave closure
program~\eqref{eq:relaxed-concave-bayes-opt-prog} with budget $
B$.
By the optimality of $\exquants$ and Theorem~\ref{t:cg-submodular},
$$
V(\exquants) \geq V(\exquants^+_{(1-\epsilon) B}) \geq (1 - \tfrac{1}{e}) V^+(\exquants^+_{(1-\epsilon) B}).$$
Note that the solution $(1 - \epsilon)\exquants^+_{B}$ has cost at most $(1 - \epsilon) B$ since $\disti^{-1}(\cdot)$ is increasing. So by the optimality of $\exquants^+_{(1-\epsilon) B}$ and by the concavity of the concave closure $V^+(\cdot)$, $$V^+(\exquants^+_{(1-\epsilon) B}) \geq V^+((1 - \epsilon) \exquants^+_{ B}) \geq (1-\epsilon) V^+( \exquants^+_{ B}). $$ Since $V^+( \exquants^+_{ B})$ is an upper bound on the performance of the optimal ex ante mechanism by Lemma~\ref{l:ub}, the ex ante posted pricing mechanism defined for each agent
by $\price_i = \disti^{-1}(\exquanti)$ is a $(1 - 1/e)(1 - \epsilon)$ approximation to the optimal mechanism.
We now consider the posted pricing mechanism defined by $\prices$ that is no longer ex ante. Since the budget has been lowered by a factor $1 - \epsilon$, each agent is offered her price with probability at least $\prob[R \sim \exquants]{\sum_{i \in R} \critcosti \leq (1-1/k) B}$ by Lemma~\ref{l:budget-offer-relation}, regardless of the ordering $\sigma$ of agents. By Theorem~\ref{t:CR-knapsack}, this probability is at least $1- e^{-
\epsilon^2 ( 1- \epsilon) k / 12}$. Therefore, by Lemma~\ref{l:CR-main}, the expected value of this mechanism is at least $(1- e^{-
\epsilon^2 ( 1- \epsilon) k / 12}) V(\critquants)$ and this mechanism is a $(1 - \epsilon) (1 - 1/e) (1- e^{-
\epsilon^2 ( 1- \epsilon) k / 12})$ approximation to the optimal mechanism in the case of submodular value functions. In the case of additive functions, there is no loss from the multilinear extension to the concave closure, so the mechanism is a $(1 - \epsilon) (1- e^{-
\epsilon^2 ( 1- \epsilon) k / 12})$ approximation.
\end{proof}
Note that as the size of the market $k$ grows to infinity, this
approximation ratio approaches $1 - 1/e$. Also note that this
mechanism requires the market to be at least $4$-large. Using another result from \citet{VCZ-11} and a similar analysis to the one from this section, a $(1 - 1
/e)/8$ posted pricing mechanism can easily be obtained for any market size. This posted pricing attains
its performance guarantee when agents with cost at least $B/4$
arrive before all others, but otherwise the order is oblivious.
\section{Additive Value and Sequential Posted Pricing}
\label{s:additive}
In this section we give improved bounds for sequential posted pricing,
i.e., where the mechanism orders the agents, and when the value
function is additive, i.e., $\val(S) = \sum_{i \in S} \vali$. In
particular, we analyze the sequential posted pricing $(\critcosts,
\orders)$ with $\critcosti = \disti^{-1}(\exquanti)$ from the solution
to the multilinear extension program~\eqref{eq:multilinear-prog} with
the full budget $B$ and the ordering $\orders$ by decreasing
bang-per-buck, i.e., $\vali / \critcosti$ for agent $i$.
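For illustration (the names are ours), the ordering can be computed as below and the resulting pair $(\critcosts,\orders)$ fed to the simulator sketched after Definition~\ref{d:posted-pricing}:
\begin{verbatim}
def bang_per_buck_order(values, prices):
    # Order agents by decreasing value-per-dollar v_i / p_i.
    return sorted(range(len(values)),
                  key=lambda i: values[i] / prices[i], reverse=True)
\end{verbatim}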
Our results in this section are based on the analysis of the
correlation gap of fractional and integral-knapsack set functions (to
be defined subsequently). The fractional-knapsack set function is a
submodular function, so a correlation gap of $1 - 1/e$ can be directly
obtained (Theorem~\ref{t:cg-submodular}). In this section, we improve
this bound to $1 - 1 / \sqrt{2 \pi k }$ for $k$-large markets, i.e.,
with $k = B / \max_i \pricei$. From this bound we observe that
the correlation gap for fractional-knapsack in large market is
asymptotically one. We show that the integral-knapsack correlation
gap is nearly the same. Following the approach of \citet{yan-11}, the
factor by which sequential posted pricing approximates the ex ante
relaxation is equal to the integral-knapsack correlation gap.
\begin{definition}
\label{d:fractional-knapsack-value}
The {\em fractional-knapsack} set function corresponding to additive
set function $\val(S) = \sum_{i\in S} \vali$, sizes $\prices$, and
capacity $B$ is denoted $\forbudget{\val}(S)$ and equals the maximum value
solution to the corresponding fractional-knapsack problem on elements
$S$.\footnote{This value is given by sorting the elements of $S$ by
$\vali/\pricei$ and admitting them greedily until the first element
that does not fit with the remaining capacity, that element is
admitted fractionally (providing a fraction of its value).} The {\em integral-knapsack} set function can be defined analogously to
the fractional one, but it cannot add elements fractionally.
\end{definition}
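Concretely, the greedy-by-density rule of the footnote can be sketched as follows (an illustrative implementation of ours, not taken from prior work):
\begin{verbatim}
def fractional_knapsack_value(S, values, sizes, capacity):
    # Value of the maximum-value fractional packing of the elements
    # of S: admit by decreasing density v_i / p_i; the first element
    # that does not fit is admitted fractionally.
    remaining, total = capacity, 0.0
    for i in sorted(S, key=lambda i: values[i] / sizes[i], reverse=True):
        if sizes[i] <= remaining:
            total += values[i]
            remaining -= sizes[i]
        else:
            total += values[i] * remaining / sizes[i]
            break
    return total
\end{verbatim}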
Most of this section analyzes the ratio of the independent value of fractional-knapsack to the correlated value of $\val(\cdot)$ (see Definition~\ref{d:submodular-extensions} for the definition of independent and correlated values) in the case where the budget constraint is met ex ante, i.e., $\expect[S
\sim \exquants]{\forbudget{\val}(S)} / \expect[S
\sim {\cal D}]{\val(S)}$ when $\sum_i \pricei \exquanti \leq B$. We then show that this ratio is equal to the approximation ratio of the sequential posted pricing mechanism. Finally, we use this ratio to bound the integral, and fractional, knapsack correlation gap.
The main idea for bounding this ratio is to show that it is minimized when all agents have equal cost $B / k$; in that case, when the budget constraint is met ex ante, we can apply the result of \citet{yan-11} for the correlation gap of the $k$-highest-value-elements set function.
\begin{lemma}\label{l:maxcBk}
For any additive value function $\val(\cdot)$ and budget $B$, over
marginal probabilities $\exquants$ and prices $\prices$ that (a) satisfy
the ex ante budget constraint, i.e., $ \sum_i \pricei\,\exquanti \leq B$,
and (b) satisfy the $k$-large market condition, i.e., $\pricei \leq
B/k$, the ratio of the independent value of the fractional-knapsack and the correlated value of $\val(\cdot)$ is minimized when
$\pricei = B/k$ for all $i$.
\end{lemma}
\begin{proof}
For the first part of the proof, we assume that $\vals = \prices$,
i.e., that the bang-per-buck is one for all elements. The last step of
the proof is to generalize this special case to any values. Observe
that with this assumption, $\forbudget{\val}(S) = \min(B, \sum_{j \in S}
\pricei[j])$.
Assume that there is some $\pricei$ such that $\pricei < B /
k$. We show that when $\vali = \pricei$, increasing $\pricei$ to any
$\pricei' > \pricei$ and decreasing $ \exquanti$ to $\exquanti' =
\pricei \exquanti / \pricei'$ preserves the correlated value while
only lowering the independent value. Let $\price_j' = \price_j$ and
$\exquant_j' = \exquant_j$ for $j \neq i$. The correlated value of
$\val(\cdot)$ is $\expect[S \sim {\cal D}]{\val(S)} = \sum_j \price_j
\exquant_j = \sum_j \price_j' \exquant_j'$ so it is preserved. Similarly, the ex ante budget constraint is still satisfied.
The argument for the independent value decreasing is the following.
Let $\forbudget{\val}^{\prime}(S)$ be defined similarly as $\forbudget{\val}(S)$, but where
agents have values and costs equal to $\prices'$. Condition on the
subset of other agents $S$ who accept their prices and consider the
marginal contribution to the expected value of $\forbudget{\val}(\cdot)$ and
$\forbudget{\val}^{\prime}(\cdot)$ from agent $i$. In the case that $C = \sum_{j
\in S} \pricei[j] > B$, this contribution is zero for both
$\pricei$ and $\pricei'$. When $C < B$,
these contributions are $\exquanti\min(B - C,\pricei)$ and
$\exquanti'\min(B - C,\pricei')$. By the definition of
$\exquanti' = \pricei \exquanti/\pricei'$ and concavity of
$\min(B-C,\cdot)$, the former is greater than the latter. This
inequality holds for all sets $S$, so removing the conditioning on $S$, it
holds in expectation and the independent value of fractional-knapsack is lowered.
It remains to extend this result to any $\vals$. Fix $\vals$ and assume without loss of generality that $\val_1 / \price_1 \geq \cdots \geq \val_n / \price_n$, with the convention $\val_{n+1}/\price_{n+1} = 0$. Then the fractional-knapsack set function can be rewritten as $$\val_{B}(S) = \sum_{i \in N} (\vali /\pricei - \val_{i + 1}/ \price_{i+1}) \min(B, \sum_{j \in S \cap \{1, \dots, i\}} \price_j)$$ and the additive set function as $$\val(S) = \sum_{i \in N} (\vali /\pricei - \val_{i + 1}/ \price_{i+1}) (\sum_{j \in S \cap \{1, \dots, i\}} \price_j)$$ since these sums telescope.
So the ratio of independent value of $\forbudget{\val}(S)$ to the correlated value of $\val(S)$ is minimized when the ratios of the independent value of $\min(B, \sum_{j \in S \cap \{1, \dots, i\}} \price_j)$ to the correlated value of $\sum_{j \in S \cap \{1, \dots, i\}} \price_j$ are minimized for all $i$. We conclude by observing that $\min(B, \sum_{j \in S \cap \{1, \dots, i\}} \price_j)$ and $ \sum_{j \in S \cap \{1, \dots, i\}} \price_j$ are the fractional-knapsack set function and the additive set function when $\vali = \critcosti$ over ground set $\{1, \dots, i\}$, and that their ratio is minimized when $\pricei = B / k$ for all agents $i$.
\end{proof}
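As a quick numeric sanity check of the telescoping step in the proof above (a sketch of our own, using the convention $\val_{n+1}/\price_{n+1} = 0$):
\begin{verbatim}
import random

n = 5
v = sorted([random.random() for _ in range(n)], reverse=True)
p = [1.0] * n       # equal sizes, so v_1/p_1 >= ... >= v_n/p_n
S = {0, 2, 3}
ratios = [v[i] / p[i] for i in range(n)] + [0.0]  # v_{n+1}/p_{n+1} = 0
lhs = sum(v[i] for i in S)
rhs = sum((ratios[i] - ratios[i + 1]) *
          sum(p[j] for j in S if j <= i) for i in range(n))
print(abs(lhs - rhs) < 1e-9)  # True: the sums telescope
\end{verbatim}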
Next, we use the result from \citet{yan-11} to bound the ratio of the independent value of fractional-knapsack to the correlated value of $\val(\cdot)$.
\begin{lemma}
\label{l:fractional-knapsack-additive-relaxation-cg}
For any distribution over sets ${\cal D}$ with marginal probabilities
$\exquants$ satisfying the ex ante budget constraint, i.e., $\sum_i \pricei\,\exquanti \leq B$, the ratio of the independent value of fractional-knapsack to the correlated value of $\val(\cdot)$ is at least $1-1/\sqrt{2 \pi k}$ when the market is $k$-large.
\end{lemma}
\begin{proof}
Consider the case where each agent $i$ has cost $\pricei = B / k$ and assume that the ex ante budget constraint is satisfied, so $\sum_i \exquanti \leq k$. Since any set of size at most $k$ is feasible and since $\sum_i \exquanti \leq k$, there is a distribution such that the budget constraint is always met ex post. Therefore, the correlated value of $\val(\cdot)$ is equal to the correlated value of fractional-knapsack. The ratio of the independent value of fractional-knapsack to the correlated value of $\val(\cdot)$ is thus equal to the correlation gap of fractional-knapsack. Since all agents have cost $B / k$, the fractional-knapsack set function is equal to the $k$-highest-value-elements set function. By Theorem~\ref{t:cg-multi-unit}, the ratio of the independent value of fractional-knapsack to the correlated value of $\val(\cdot)$ is therefore at least $1-1/\sqrt{2 \pi k}$.
By Lemma~\ref{l:maxcBk}, the ratio of the independent value of fractional-knapsack to the correlated value of $\val(\cdot)$ when the ex ante budget constraint is satisfied is minimized when all agents have cost $B / k$, so this ratio is at least $1-1/\sqrt{2 \pi k}$.
\end{proof}
We now prove the main theorem of this section which relates the
approximation factor of sequential posted pricing (with ex post budget
feasibility) to the optimal mechanism with ex ante budget feasibility.
\begin{theorem}
\label{t:pp-additive}
The sequential posted pricing mechanism $(\exquants,\orders)$, where $\exquants$ is the solution to the multilinear extension
program~\eqref{eq:multilinear-prog} and where the order $\orders$ is decreasing in
$\frac{\vali}{\pricei}$, is a $(1-1/\sqrt{2 \pi
k})(1 - 1/k)$ approximation to the optimal mechanism in the case of regular cost distributions.
\end{theorem}
\begin{proof}
Denote $\exquants$ the optimal solution to the multilinear extension
program~\eqref{eq:multilinear-prog}. For additive value functions,
linearity of expectation implies that the multilinear extension is
equal to the concave closure and the optima of the multilinear
extension program~\eqref{eq:multilinear-prog} and concave closure
program~\eqref{eq:relaxed-concave-bayes-opt-prog} are the same. Their
performance upper bounds that of the optimal mechanism that satisfies
ex post budget feasibility by Lemma~\ref{l:ub}. The objective value of these programs with optimal solution $\exquants$ is $\sum_i \vali \exquanti$, which is equal to the correlated
value of the additive set function $\val(\cdot)$ on distributions with
marginals $\exquants$. So by Lemma~\ref{l:fractional-knapsack-additive-relaxation-cg}, the ratio of the independent value of fractional-knapsack to the upper bound on the optimal mechanism is at least $1-1/\sqrt{2 \pi k}$.
The random set of agents who accept their offer in the sequential posted pricing is equal to the set of agents who are admitted by the fractional-knapsack set function on an independent random set of agents with marginals $\exquants$, without including the fractional agent. The loss from this fractional agent is at most a factor $1 - 1/k$. This posted pricing mechanism therefore has an approximation ratio of $(1-1/\sqrt{2 \pi k})(1-1/k)$.
\end{proof}
As a corollary of Lemma~\ref{l:fractional-knapsack-additive-relaxation-cg}, we get new correlation gap results for the fractional, and integral, knapsack set functions.
\begin{theorem}
\label{t:cg}
The correlation gaps of fractional-knapsack and integral-knapsack are at least $1-1/\sqrt{2 \pi k}$ and $(1-1/\sqrt{2 \pi k})(1-1/k)$ respectively, in a $k$-large market.
\end{theorem}
\begin{proof}
We first show the correlation gap of fractional-knapsack, the correlation gap of integral-knapsack will then follow easily. We start by showing that the correlation gap is minimized when the budget constraint is satisfied. Then, we upper bound the fractional-knapsack correlated value by the correlated value of $\val(\cdot)$. Finally, we apply Lemma~\ref{l:fractional-knapsack-additive-relaxation-cg}.
We claim that the correlation gap of fractional-knapsack is minimized when the budget constraint is satisfied. Observe that if the budget constraint is not satisfied, then it is possible to decrease some $\exquanti$ such that the correlated value of fractional-knapsack remains the same. Since decreasing some $\exquanti$ only decreases the independent value of fractional-knapsack, the ratio of the independent value to the correlated value also decreases.
Clearly, the fractional-knapsack correlated value is upper bounded by the correlated value of $\val(\cdot)$. Therefore, the correlation gap of fractional-knapsack is at least the ratio of the independent value of fractional-knapsack to the correlated value of $\val(\cdot)$ when the budget constraint is satisfied, so at least $1-1/\sqrt{2 \pi k}$ by Lemma~\ref{l:fractional-knapsack-additive-relaxation-cg}.
Finally, observe that the correlated value of fractional-knapsack upper bounds the correlated value of integral-knapsack and that the independent value of integral-knapsack is a $1 - 1/k$ approximation to the independent value of fractional-knapsack. Therefore, the correlation gap of integral-knapsack is at least $(1-1/\sqrt{2 \pi k})(1-1/k)$.
\end{proof}
\paragraph{Comparison of Sequential and Oblivious posted pricing.}
We now compare the approximation ratio for additive value functions achieved using the sequential
posted pricing mechanism with the bang per buck order, $(1-1/\sqrt{2 \pi k})(1 - 1/k)$, and using oblivious posted pricing where the budget is lowered, $(1 -
\epsilon) (1- e^{- \epsilon^2 ( 1- \epsilon) k / 12})$. Figure~\ref{f:comp} shows that the approximation ratio with the sequential ordering approaches $1$ much faster than with the oblivious ordering as the size of the market increases. To obtain these results for oblivious posted
pricing, we numerically solved for the best $\epsilon$. We emphasize that we are comparing the theoretical bounds of these approaches, and not empirical performances.
\begin{figure}
\centering
\includegraphics[scale = .5, bb=0 0 550 150]{comparison.png}
\caption{Comparison of the approximation ratios obtained for additive value functions by the two different approaches. On the horizontal axis is $k$, the size of the market.}
\label{f:comp}
\end{figure}
\section{Computing Prices}
\label{s:optimal}
In the two previous sections, we gave conditions under which optimal
prices from the multilinear extension
program~\eqref{eq:multilinear-prog} perform well when offered
sequentially or obliviously. In this section, we consider the
computational problem of finding these prices. For submodular
value functions, we reduce the problem to the well-known greedy algorithm for submodular optimization. For additive value functions, we use a simple method based
on the Lagrangian relaxation of the budget constraint.
\subsection{The Lagrangian Relaxation for Additive Value Functions}
Consider the case of additive value functions where the principal has
a value $\vali$ for each agent $i$ and the value function is
$\val(S) = \sum_{i \in S} \vali$. Recall the virtual surplus
program~\eqref{eq:relaxed-bayes-opt-prog} from Section~\ref{s:relaxations}:
\begin{align}
\max_{\allocs}\ &\expect[\costs]{\val(\allocs(\costs))} \tag{\ref{eq:relaxed-bayes-opt-prog}} \\
\notag \text{s.t. } &\sum\nolimits_i \expect[\costs]{\virti(\costi)\,\alloci(\costs)} \leq B,
\\
\intertext{which can be rewritten for additive value functions as:}
\label{eq:relaxed-bayes-opt-prog-add}
\max_{\allocs}\ &\sum\nolimits_i \expect[\costs]{ \vali \, \alloci(\costs)}\\
\notag \text{s.t. } &\sum\nolimits_i \expect[\costs]{\virti(\costi)\,\alloci(\costs)} \leq B.
\end{align}
We show that the ex ante optimal mechanism can be found directly by taking the Lagrangian relaxation of the budget constraint (with parameter $\lambda$), which yields the following Lagrangian program:
\begin{align}
\label{eq:relaxed-bayes-opt-prog-virt}
\max_{\allocs}\ & \lambda B + \sum\nolimits_i \expect[\costs]{(\vali - \lambda \virti(\costi))\,\alloci(\costs)}.
\end{align}
For any Lagrangian parameter $\lambda$, this objective can be
optimized by pointwise optimizing $\sum_i (\vali - \lambda \virti(\costi))\,\alloci(\costs)$, a.k.a., the {\em Lagrangian virtual
surplus}. This pointwise optimization picks all the agents such that $\vali \geq \lambda \virti(\costi)$. If the virtual cost functions are monotone, i.e., in the
so-called {\em regular} case, then this optimization gives a
monotone allocation rule where an agent is picked whenever $\costi \leq \virti^{-1}(\vali / \lambda)$.
Notice that as the Lagrangian parameter increases, the payments of
the agents, as represented by virtual costs, become more costly in the
objective of the Lagrangian program~\eqref{eq:relaxed-bayes-opt-prog-virt}. Thus, the
expected payment of the mechanism is monotonically decreasing in the
Lagrangian parameter. With $\lambda = 0$ the Lagrangian virtual
surplus optimizer simply maximizes $\val(\allocs)$ and pays each
agent selected the maximum cost in the support of her distribution.
If this payment is under budget then it is optimal, otherwise, we can
increase $\lambda$ until the budget constraint is satisfied. For
example, with $\lambda = \infty$ the empty set of agents is selected
and no payments are made. The optimal mechanism is the one that meets
the budget constraint with equality. In the case that the expected
payment is discontinuous then mixing between the least over-budget and
least under-budget mechanism is optimal. For
further discussion of Lagrangian virtual surplus optimizers, see \citet{DHH-13}.
\begin{proposition}
\label{prop:bayes-opt}
The Lagrangian virtual surplus optimizer (or appropriate mixture
thereof) that meets the budget constraint with equality is the
Bayesian optimal ex ante budget feasible mechanism.
\end{proposition}
Lagrangian virtual surplus optimization suggests selecting an agent $i$
when her private cost $\costi$ is below $\virti^{-1}(\vali /
\lambda)$. The mechanism that achieves this outcome posts the price
of $\pricei = \virti^{-1}(\vali/\lambda)$ to agent $i$. Denote by
$\exquanti = \disti(\pricei)$ the probability that $i$ accepts the
price $\pricei$. For the prices $\prices$, the total expected
payments are $\sum_i \pricei\,\exquanti$. When the virtual cost
functions are monotone and strictly increasing, there is a Lagrangian
parameter for which the budget constraint is met with equality, i.e.,
with $\sum_i \pricei\,\exquanti = B$. The optimal ex ante mechanism is
therefore the posted price mechanism that posts $\pricei$ to each
agent $i$ for the Lagrangian parameter $\lambda$ that satisfies
$\sum_i \pricei\,\exquanti = B$. Note that such a Lagrangian parameter
$\lambda$ can be arbitrarily well approximated since $\sum_i
\pricei\,\exquanti$ is decreasing as a function of $\lambda$.
\begin{example}
Consider $n$ agents with costs drawn uniformly and i.i.d.\@ from
$[0,1]$ and uniform additive value function $\vali = 1$ for all $i$,
i.e., the cardinality function. The virtual cost function is
$\virt(\cost) = \cost + \frac{\dist(\cost)}{\dens(\cost)} = 2\cost$.
The Lagrangian parameter $\lambda = \frac{1}{2} \sqrt{n/B}$
induces a uniform posted price of $\price = \sqrt{B/n}$ which
is accepted with probability $\exquant = \sqrt{B/n}$ for an
expected payment of $B/n$. Summing over all $n$ agents,
the budget is balanced ex ante.
\end{example}
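A minimal sketch of this computation under the assumptions of the example (uniform $[0,1]$ costs and unit values; the code and its names are ours): binary search on the Lagrangian parameter $\lambda$ until the ex ante spend meets the budget, recovering the closed-form price $\sqrt{B/n}$.
\begin{verbatim}
def uniform_lagrangian_price(n, budget, iters=60):
    # Uniform[0,1] costs: phi(c) = 2c, so the price at parameter lam
    # is p = phi^{-1}(1 / lam) = 1 / (2 lam) for unit values, and the
    # acceptance probability is q = F(p) = p (capped at 1).
    def spend(lam):
        p = min(1.0, 1.0 / (2.0 * lam))
        return n * p * p  # sum over n i.i.d. agents of p * q
    lo, hi = 1e-9, 1e9    # spend(lam) is decreasing in lam
    for _ in range(iters):
        mid = (lo + hi) / 2
        if spend(mid) > budget:
            lo = mid      # lam too small: over budget, increase it
        else:
            hi = mid
    lam = (lo + hi) / 2
    return min(1.0, 1.0 / (2.0 * lam))

print(uniform_lagrangian_price(100, 1.0))  # approx sqrt(1/100) = 0.1
\end{verbatim}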
\subsection{A Reduction to the Greedy Algorithm for Submodular Optimization}
For general submodular value functions we reduce the optimization of
the multilinear extension program~\eqref{eq:multilinear-prog},
restated below, to the problem of optimizing a submodular function
subject to a cardinality constraint. This problem of optimizing a submodular function under cardinality, knapsack, or matroid constraints is well
studied and the {\em greedy algorithm} gives a
$1-1/e$ approximation for knapsack and cardinality constraints; see \citet{NWF-78}, \citet{KMN-99}, and
\citet{SVI-04}.
\begin{align*}
\max_{\quants}\ &V(\quants) \tag{\ref{eq:multilinear-prog}} \\
\notag \text{s.t. } &\sum\nolimits_i \quanti \disti^{-1}(\quanti) \leq B.
\end{align*}
Define the \emph{cost curve} of agent $i$ to be the expected payment to agent $i$, i.e., $\quanti \disti^{-1}(\quanti)$ in our case.
The main difference between the multilinear extension program
\eqref{eq:multilinear-prog} and the knapsack setting considered in the
literature is that the cost curves in the knapsack setting are linear in $\quanti$. Our reduction to the greedy algorithm is the following. We divide each agent $i$, called a big agent, in cost space into $m$ discrete agents $i_j$ of equal cost, called the small agents. An agent $i_j$ corresponds to the $j$th increase of $\quanti$, starting from $\quanti = 0$, that has cost $B / m$.
We set the cost of each small agent to a $1/m$ fraction of the total budget $B$, which fixes the number of greedy steps to $m$; larger $m$ gives a finer discretization.
Before formally describing the reduction, we introduce some notation. For each $i$ and $j$, let $\delta_{ij}$ be the $j$th increase in $\quanti$, starting from $\quanti= 0$,
that has cost $B / m$, i.e., $\delta_{ij}$ satisfying $ B / m = \disti^{-1}(\sum_{k \leq j} \delta_{ik}) \cdot (\sum_{k \leq j} \delta_{ik}) - \disti^{-1}(\sum_{k < j} \delta_{ik}) \cdot (\sum_{k < j} \delta_{ik})$. Given a set $S$ of small agents, the continuous solution corresponding to $S$ is $\quants(S)$ with $\quanti(S) = \sum_{j : i_j \in S} \delta_{ij}$.
\paragraph{The reduction.}
\begin{enumerate}
\item For each agent $i$, create $ m$ small agents $i_j$ where
$1 \leq j \leq m$ so that the reduced instance has $
m n$ agents.
\item For each small agent $i_j$, its cost is $ B / m$.
\item For each small agent $i_j$, its marginal contribution
$V_S(i_j)$ in value to a set $S$ is the marginal contribution
of increasing the fraction of agent $i$ corresponding to $S$ by
$\delta_{ij}$, i.e., $V(\quants') - V(\quants(S))$ where
$\quanti' = \quanti(S) + \delta_{ij}$ and $\quant_j' = \quant_j(S)$
for $j \neq i$.
\end{enumerate}
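The following sketch illustrates the reduction (ours; it assumes i.i.d.\@ Uniform$[0,1]$ costs so that the cost curve $\quant \cdot \dist^{-1}(\quant) = \quant^2$ inverts in closed form, and it stands in for the exact oracle discussed below with Monte Carlo estimates):
\begin{verbatim}
import math, random

def greedy_reduction_prices(n, B, m, value_fn, samples=500):
    # Greedy on the reduced instance with a cardinality constraint of
    # m small agents, each of cost B/m. After j small agents of big
    # agent i, the quantile of i is sqrt(j * B / m).
    def V(q):  # Monte Carlo estimate of the multilinear extension
        total = 0.0
        for _ in range(samples):
            S = {i for i in range(n) if random.random() < q[i]}
            total += value_fn(S)
        return total / samples

    q = [0.0] * n       # current quantile of each big agent
    steps = [0] * n     # number of small agents taken per big agent
    for _ in range(m):
        base = V(q)
        best_i, best_gain = None, 0.0
        for i in range(n):
            nxt = math.sqrt((steps[i] + 1) * B / m)
            if nxt > 1.0:
                continue  # quantiles cannot exceed 1
            trial = q[:]
            trial[i] = nxt
            gain = V(trial) - base
            if gain > best_gain:
                best_i, best_gain = i, gain
        if best_i is None:
            break
        steps[best_i] += 1
        q[best_i] = math.sqrt(steps[best_i] * B / m)
    return q  # post price F^{-1}(q_i) = q_i to each big agent i
\end{verbatim}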
We show that the solution to the reduced problem obtained with the greedy algorithm under a cardinality constraint corresponds to a solution for the multilinear extension program~\eqref{eq:multilinear-prog} that is a
$1 - 1/e - o(1)$ approximation, almost matching the performance of the greedy
algorithm for knapsack constraint with integral agents and linear cost curves. We start by showing that if a solution is feasible in the reduced problem, then the continuous solution corresponding to it is a feasible solution to the multilinear extension program~\eqref{eq:multilinear-prog}. Then, with access to exact values of the increases $\delta_{ij}$ and of the marginal contributions $V_S(i_j)$, the approximation ratio is
$1 - 1/e - o(1)$. Finally, we show that it is possible to approximate $\delta_{ij}$ and $V_S(i_j)$ with estimates that cause an additional loss of $o(1)$ to the approximation ratio.
\paragraph{From a set of small agents to a continuous solution for the big agents.}
Previously, we defined a distribution to be regular if the virtual cost function is monotonically increasing. An alternate definition is that a distribution $\dist$ is regular if the cost curve $\quant \cdot \dist^{-1}(\quant)$ is convex. This definition is the analogue of the revenue curve being concave for regular distributions when the agents are buyers, rather than sellers; see \citet{BR-89}.
Recall that given a set $S$ of small agents, the continuous solution corresponding to $S$ is $\quants(S)$ with $\quanti(S) = \sum_{j : i_j \in S} \delta_{ij}$ and that $\delta_{ij}$ is the $j$th increase in $\quanti$ that has cost $B / m$. Therefore, given a set $S$ of small agents of size at most $m$ such that for any $i_j \in S$ we also have $i_k \in S$ for all $k < j$, the solution $\quants(S)$ has cost at most $B$. The condition that if $i_j \in S$ then $i_k \in S$ for all $k < j$ is equivalent to the condition that greedy always picks small agents corresponding to lower quantiles before small agents corresponding to higher quantiles, which we show formally.
\begin{lemma}
Given two small agents $i_k$ and $i_j$ such that $k < j$, the greedy algorithm with a cardinality constraint picks $i_k$ before $i_j$ for regular distributions $\disti$.
\end{lemma}
\begin{proof}
Since all small agents have equal cost, we need to show that $i_k$ has a larger marginal contribution than $i_j$ to any set $S$ of small agents such that $i_k, i_j \not \in S$. Since $V(\cdot)$ is monotone, it suffices to show that $\delta_{ik} > \delta_{ij}$. In quantile space, the cost of increasing some quantile $\quanti$ by a fixed amount is increasing in $\quanti$ since $\quanti \cdot \dist^{-1}(\quanti)$ is convex by the definition of regular distributions. Therefore, in cost space, the increase in quantile obtained by increasing the expected payment by a fixed amount is decreasing, so $\delta_{ik} > \delta_{ij}$.
\end{proof}
The case of irregular distributions is considered in Section~\ref{s:irregular}.
\paragraph{With exact values of $\delta_{ij}$ and $V_S(i_j)$.} We consider the case where the exact values of the increases in $\quants$ and marginal contributions
are given by an oracle. We show that finding a good solution to this reduced problem with small agents gives us a good solution to the problem with big agents.
\begin{lemma}\label{l:discretization} The optimal solution $\opti{S}$ to the reduced problem satisfies $ V(\quants(\opti{S})) \geq (1 - o(1)) V(\exquants)$ where $\exquants$ is the optimal solution to the multilinear extension program~\eqref{eq:multilinear-prog}.
\end{lemma}
\begin{proof} We pick the number of steps to be $m = n^2$. The proof shows that there exists a set that is close to a feasible solution in the reduced problem and whose continuous solution is at least as good as $\exquants$. Let $S$ be the set of small agents such that $\quants(S)$ is maximized subject to $\quants(S) \leq \exquants$. Define $S^{+1}$ to be the set containing all small agents in $S$ and one additional small agent for each big agent $i$. Observe that $V(\quants(S^{+1})) \geq V( \exquants)$ since $V(\cdot)$ is non-decreasing. So there is a feasible solution to the discretized problem such that if we add one small agent for each big agent $i$, then we obtain a better solution than the optimal solution to the original problem.
Greedily remove agents of minimal marginal contribution from $S^{+1}$ until we obtain a feasible solution $S'$. The number of small agents that need to be removed is at most $n$ since $S$ is feasible. Since $S'$ contains up to $n^2$ small agents, by the greediness of the removals and the fact that $V(\cdot)$ is concave along any line of positive direction, $(1 + 1/n) V(\quants(S')) \geq V(\quants(S^{+1}))$.
Therefore,
\begin{align*}
(1 + o(1)) V(\quants(\opti{S})) \geq (1 + o(1)) V(\quants(S')) \geq V(\quants(S^{+1})) \geq V( \exquants).
\end{align*}
\end{proof}
Next, we show that the reduced problem can be optimized.
\begin{lemma}\label{l:greedyreduced} Let $S$ be the set returned by the greedy algorithm for submodular functions under a cardinality constraint on the reduced problem, then $ V(\quants(S)) \geq (1 - 1/e) V(\quants(S^*))$ where $S^*$ is the optimal solution to the reduced problem.
\end{lemma}
\begin{proof}
Observe that the objective function in the reduced problem is a submodular function. This follows directly from the concavity of $V(\cdot)$ along any line of positive direction. In addition, since all small agents have cost $B / m$, the constraint is a cardinality constraint. Since the greedy algorithm for submodular functions under a cardinality constraint is a $1 -1 /e$ approximation, we get the desired result.
\end{proof}
We now have the tools to show that if we had an oracle for the increases and marginal contributions, the greedy algorithm on the reduced instance would give us a $1 - 1/e - o(1)$ approximation.
\begin{lemma} \label{l:greedyExact} Let $S$ be the output of the greedy algorithm on the reduced instance, where exact values of $\delta_{ij}$ and $V_S(i_j)$ are given by an oracle at each iteration, then $V(\quants(S)) \geq (1 - 1/e - o(1)) V(\exquants)$, where $\exquants$ is the optimal solution to the multilinear extension program~\eqref{eq:multilinear-prog}.
\end{lemma}
\begin{proof}
We combine the $o(1)$ loss from the discretization with the $1 - 1/e$ approximation guarantee of the greedy algorithm to obtain the desired result.
By Lemma~\ref{l:greedyreduced} and Lemma~\ref{l:discretization},
$$ V(\quants(S)) \geq (1 - 1/e) V(\quants(S^*)) \geq (1 - 1/e - o(1)) V(\exquants),$$ where $S^*$ is the optimal solution to the reduced problem.
\end{proof}
\paragraph{With estimates of $\delta_{ij}$ and $V_S(i_j)$.} We now show that we can run the greedy algorithm with estimates of the increases and the marginal contributions, which we can compute. Let $\noisy{\quants}(S)$ be defined similarly to $\quants(S)$ but with estimates $\noisy{\delta}_{i_j}$. The first lemma shows that the optimal solution to the reduced problem has almost the same value when the increases $\delta_{ij}$ are estimated. The second lemma extends Lemma~\ref{l:greedyreduced} to the case where greedy is run with estimated marginal contributions $\noisy{V}_S(i_j)$ and any $\noisy{\delta}_{i_j}$. We defer the proofs of these two lemmas to the appendix.
\begin{lemma} \label{l:noisygreedy} Let $S^*$ be the optimal solution to the reduced problem with exact value of $\delta_{ij}$ and $V_S(i_j)$, then $ V(\noisy{\quants}(S^*)) \geq (1 - o(1)) V(\quants(S^*))$.
\end{lemma}
\begin{lemma} \label{l:ngreedyreduced}
Let $\noisy{S}$ be the set returned by the greedy algorithm on the reduced problem with estimates $\noisy{\delta}_{i_j}$ and $\noisy{V}_S(i_j)$, then $ V(\noisy{\quants}(\noisy{S})) \geq (1 - 1/e - o(1)) V(\noisy{\quants}(S^*))$ w.h.p., where $S^*$ is the optimal solution to the reduced problem with estimates $\noisy{\delta}_{i_j}$ and exact values $V_S(i_j)$.
\end{lemma}
Combining the previous results, we obtain the main result of this section.
\begin{theorem} \label{thm:greedy} Let $\noisy{S}$ be the output by the greedy algorithm on the reduced instance with estimates of $\delta_{ij}$ and $V_S(i_j)$, then $V(\noisy{\quants}(\noisy{S})) \geq (1 - 1/e - o(1)) V(\exquants)$ w.h.p., where $\exquants$ is the optimal solution to the multilinear extension program~\eqref{eq:multilinear-prog}.
\end{theorem}
\begin{proof}
This proof follows the one for Lemma~\ref{l:greedyExact}; the difference is that it also accounts for the loss from the estimates.
By Lemma~\ref{l:noisygreedy} and Lemma~\ref{l:ngreedyreduced},
$$ V(\noisy{\quants}(\noisy{S})) \geq (1 - 1/e - o(1)) V(\noisy{\quants}(S^*)) \geq (1 - 1/e - o(1)) V(\quants(S^*))$$ where $S^*$ is the optimal solution to the reduced problem. Using Lemma~\ref{l:discretization}, which connects the discretized reduced instance to the original continuous problem, we conclude that $$V(\noisy{\quants}(\noisy{S})) \geq (1 - 1/e - o(1)) V(\quants(S^*)) \geq (1 - 1/e - o(1)) V(\exquants).$$
\end{proof}
Note that in the case of additive value functions, the greedy algorithm is optimal for optimization subject to a cardinality constraint when the marginal contributions can be computed exactly. We therefore get the following result.
\begin{lemma} \label{l:greedy} Assume $\val(\cdot)$ is an additive value function. Let $S$ be the set returned by the greedy algorithm on the reduced problem with estimates $\noisy{\delta}_{i_j}$, then $ V(\noisy{\quants}(S)) \geq (1- o(1)) V(\exquants)$ w.h.p., where $\exquants$ is the optimal solution to the multilinear extension program~\eqref{eq:multilinear-prog}.
\end{lemma}
Therefore, all the results in previous sections incur an extra factor of $1 - 1/e - o(1)$ in the general case of submodular value functions and an extra factor of $1 - o(1)$ in the case of additive value functions, due to computational constraints.
\section{Symmetric Costs and Values}
\label{s:sym}
In this section we study symmetric environments where both the distribution of costs and the value function are symmetric. A submodular value function is symmetric if the value of a set only depends on the cardinality of that set, i.e., $\val(S) = g(|S|)$ for some function $g(\cdot)$. In this setting, we obtain an oblivious posted pricing that achieves an approximation ratio of $(1 - 1/\sqrt{2 \pi k })(1 - 1 / k)$ where $k$ is the size of the market, which is identical to the approximation obtained in the additive case with sequential posted pricing. We assume that the distribution of costs is regular.
The following technicalities are used in this section only. We overload the notation and denote by $\val(\cdot) : \mathbb{R}_+ \rightarrow \mathbb{R}_+$ the concave hull of the points $\{(i, \val(S_i))\}_{i=0}^{n}$ where $S_i$ is any set of size $i$. The posted prices in this section are symmetric and are defined by a single price $\price$, i.e., $\prices = (\price, \cdots, \price)$ and $\exquants = (\exquant, \cdots, \exquant)$. Note that the market size $k$ in such a symmetric setting is $k = B / \price$.
We start with two lemmas that highlight symmetric properties of the optimal solution to the concave closure program in this symmetric setting.
\begin{lemma}
\label{l:prices-symmetric}
For symmetric submodular value function $\val(\cdot)$ and symmetric distributions of costs, the optimal solution $\exquants$ to the concave closure program~\eqref{eq:relaxed-concave-bayes-opt-prog} is symmetric, i.e., $\exquanti^+ = \exquant_j^+$ for all $i,j$.
\end{lemma}
\begin{proof} By the concavity of the concave closure and the convexity of the cost curves (since the distribution of costs is regular), the program we wish to optimize is a symmetric convex program. Averaging any optimal solution over all permutations of the agents preserves feasibility and, by concavity, does not decrease the objective, so there is a symmetric optimal solution.
\end{proof}
\begin{lemma}
\label{l:sizeSym}
For symmetric monotone submodular value function $\val(\cdot)$ and symmetric distributions of costs, there exists a distribution ${\cal D}$ over sets of agents with marginals $\exquants^+ = (\exquant^+, \cdots, \exquant^+)$ such that $\expect[S \sim {\cal D}]{\val(S)} = V^+(\exquants^+)$ and such that all sets $S$ and $T$ that can be drawn from ${\cal D}$ have size either $\lfloor k \rfloor$ or $\lceil k \rceil$.
\end{lemma}
\begin{proof}
First, note that $B = \price \cdot n \cdot \exquant^+$ since $\exquants^+$ is the optimal solution to the concave closure program and since $\val(\cdot)$ is monotone, which implies that $k = n \cdot \exquant^+$ since $k = B / \price$.
The expected value of a set of expected size $n \cdot \exquant^+$ drawn from a distribution is at most $\val(n \cdot \exquant^+)$ by the definition of concave hull. By taking a distribution ${\cal D}$ that is a mixture of sets of size $\lfloor n \cdot \exquant^+ \rfloor = \lfloor k \rfloor$ and $\lceil n \cdot \exquant^+ \rceil = \lceil k \rceil$ such that the marginals are $\exquant^+$, the expected value of a set drawn from ${\cal D}$ is $\val(n \cdot \exquant^+)$ since $\val(S)$ is submodular. Combining the two previous observations, $\expect[S \sim {\cal D}]{\val(S)} = V^+(\exquants^+)$ since the concave closure is the maximum expected value over distributions with some marginals $\exquants$.
\end{proof}
Given quantiles $\exquants = (\exquant, \cdots, \exquant) $, the value of the concave closure $V^+(\exquants)$ can be computed easily by Lemma~\ref{l:sizeSym} and symmetry. The concave closure program can therefore be approximated arbitrarily well and efficiently when there is symmetry, by using binary search to get arbitrarily close to the optimal quantile $\exquant$ (a sketch of this search follows below). Our approach for obtaining the desired approximation is to construct an additive function that lower bounds the symmetric submodular function on feasible sets and upper bounds it otherwise.
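Since the value function is monotone, the budget binds at the optimum (cf. the proof of Lemma~\ref{l:sizeSym}), so the binary search reduces to solving $n \cdot \exquant \cdot \dist^{-1}(\exquant) = B$ for $\exquant$. A minimal Python sketch, assuming a hypothetical inverse CDF \texttt{Finv} for the common regular cost distribution:
\begin{verbatim}
# Hedged sketch: binary search for the symmetric optimal quantile at
# which the ex ante budget binds, n * q * Finv(q) = B.
def symmetric_quantile(Finv, n, B, iters=60):
    lo, hi = 0.0, 1.0
    for _ in range(iters):
        q = 0.5 * (lo + hi)
        if n * q * Finv(q) < B:
            lo = q
        else:
            hi = q
    return 0.5 * (lo + hi)

q = symmetric_quantile(lambda x: x, n=100, B=1.0)  # uniform costs: q = 0.1
# induced posted price: p = Finv(q); market size: k = B / p
\end{verbatim}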
\begin{theorem}
\label{t:pp-symmetric}
In the case of symmetric monotone submodular value functions and symmetric regular cost distributions, the oblivious posted pricing $\prices = (\price, \cdots, \price)$ with $\critcost = \dist^{-1}(\exquant^+)$ is an $(1 - 1 / \sqrt{2 \pi k}) (1 - 1 / k)$ approximation to the optimal ex ante mechanism, where $\exquants^+ = (\exquant^+, \cdots, \exquant^+)$ is the optimal solution to the concave closure program~\eqref{eq:relaxed-concave-bayes-opt-prog} and $k$ is the size of the market.
\end{theorem}
\begin{proof}
By Lemma~\ref{l:sizeSym}, there exists a distribution ${\cal D}$ over sets of agents with marginals $\exquant^+$ such that $\expect[S \sim {\cal D}]{\val(S)} = V^+(\exquants^+)$ and such that sets drawn from ${\cal D}$ have size $\lfloor k \rfloor$ or $\lceil k \rceil$. We consider the additive value function $\val^{add}(\cdot)$ defined as follows:
$$\val^{add}(S) = |S| \frac{\val(\lfloor k \rfloor)}{\lfloor k \rfloor}$$ and overload the notation for $\val^{add}(\cdot)$ similarly as for $\val(\cdot)$. We make the following observations about $\val^{add}(\cdot)$:
\begin{itemize}
\item $\val^{add}(i) \leq \val(i)$ for $i \leq \lfloor k \rfloor$ and $\val^{add}(i) \geq \val(i)$ otherwise, by submodularity.
\item $\expect[S \sim {\cal D}]{\val^{add}(S)} \geq \expect[S \sim {\cal D}]{\val(S)}$, since $\val^{add}(\lceil k \rceil) \geq \val(\lceil k \rceil)$ and $\val^{add}(\lfloor k \rfloor) = \val(\lfloor k \rfloor)$.
\item $\val^{add}(\cdot)$ is an additive set function with values $\vali = \frac{1}{\lfloor k \rfloor}\val(\lfloor k \rfloor)$ for each element.
\end{itemize}
Since the feasible sets are sets of size at most $\lfloor k \rfloor$ and by the first observation on $\val^{add}(\cdot)$, the performance of the posted pricing mechanism is at least the independent integral knapsack value of $\val^{add}(\cdot)$. The independent integral knapsack value of $\val^{add}(\cdot)$ is at most a factor $(1 - 1/k)$ away from its independent fractional knapsack value, $\expect[S \sim \exquants^+]{\val^{add}_{B}(S)} $. By Lemma~\ref{l:fractional-knapsack-additive-relaxation-cg} and the third observation on $\val^{add}(\cdot)$, $\expect[S \sim \exquants^+]{\val^{add}_{B}(S)} \geq (1 - \frac{1}{\sqrt{2 \pi k}}) \expect[S \sim {\cal D}]{\val^{add}(S)}$. By the second observation on $\val^{add}(\cdot)$, $ \expect[S \sim {\cal D}]{\val^{add}(S)} \geq V^+(\exquants^+) $. Since $V^+(\exquants^+)$ is an upper bound on the optimal mechanism by Lemma \ref{l:ub}, we get the desired result.
\end{proof}
Note that in previous settings, we used the solution to the multilinear extension program to define the posted pricing mechanisms. In this setting, we used the solution to the concave closure program in order to take advantage of the concavity of the objective function for computational purposes. Finally, note that in the symmetric case, sequential posted pricing offers no advantage compared to oblivious posted pricing.
\section{Irregular Distributions}
\label{s:irregular}
In this section, we consider irregular distributions. Recall that a distribution $\dist$ is regular if the virtual cost function is increasing, or equivalently, if the cost curve $\quant \cdot \dist^{-1}(\quant)$ is convex. The ironing method introduced by \citet{mye-81} gives monotone ironed virtual costs and convex cost curves. With these convex cost curves, we construct \emph{randomized} posted pricing mechanisms that enjoy the same approximation ratios as the deterministic mechanisms, albeit with a generalized definition of the market size $k$ for randomized posted pricings. Additionally, in the case of additive objective functions, the sequential posted pricing is derandomized.
Denote the cost curve of agent $i$ by $\costcurvei(\quanti) = \quanti \disti^{-1}(\quanti)$. \citet{BR-89} observed that the derivative of the cost curve with respect to quantile is equal to the virtual cost function, $\costcurvei'(\quanti) = \virti(\costi)$. The ironing method constructs the convex hull $\costcurvehi(\quanti)$ of the cost curve $\costcurvei(\cdot)$. For $\quanti = \disti(\costi)$, the ironed virtual costs are $\ivirti(\costi) = \costcurvehi'(\quanti)$. By taking the convex hull of the cost curves, we have convex cost curves and monotone ironed virtual costs as desired. The next two lemmas show that expected payments $\costcurvehi(\exquanti)$ are feasible while serving each agent with probability $\exquanti$, and that no incentive compatible mechanism can do better.
\begin{lemma}[\citealp{mye-81}, \citealp{BR-89}]
\label{l:irr-neg}
For any agent with cost drawn from distribution $\disti$ and any incentive compatible mechanism that selects agent $i$ with probability $\exquanti$, the expected payment to agent $i$ is at least $\costcurvehi(\exquanti)$.
\end{lemma}
We give the proof of the following known lemma since it exhibits how to pick the prices and the probabilities of the randomized mechanisms.
\begin{lemma} [\citealp{mye-81}]
\label{l:irr-pos}
Expected payment $\costcurvehi(\exquanti)$ while serving agent $i$ with probability $\exquanti$ is achievable using a randomized posted pricing with at most two prices.
\end{lemma}
\begin{proof}
Fix a seller $i$ and an ex ante sale probability $\exquanti$. If $\costcurvehi(\exquanti) = \costcurvei(\exquanti)$, then it suffices to post price $\disti^{-1}(\exquanti)$. Otherwise, let $a$ be the largest quantile smaller than $\exquanti$ such that $\costcurvehi(a) = \costcurvei(a)$. Similarly, let $b$ be the smallest quantile larger than $\exquanti$ such that $\costcurvehi(b) = \costcurvei(b)$. The interval $[a,b]$ is the ironed interval in which $\exquanti$ falls. By the definition of the convex hull, we get
$$\costcurvehi(\exquanti) = (1 - \frac{\exquanti - a}{b- a}) \costcurvehi(a) + (1 - \frac{b- \exquanti}{b- a}) \costcurvehi(b) = (1 - \frac{\exquanti - a}{b- a}) \costcurvei(a) + (1 - \frac{b- \exquanti}{b- a}) \costcurvei(b).$$
Therefore, posting price $\disti^{-1}(a)$ with probability $1 - \frac{\exquanti - a}{b- a}$ and $\disti^{-1}(b)$ with probability $1 - \frac{b- \exquanti}{b- a}$ has expected payment $\costcurvehi(\exquanti)$ and the ex ante probability that seller $i$ accepts the price is $\exquanti$.
\end{proof}
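A minimal Python sketch of this construction (our own illustration; the ironed interval $[a,b]$ containing $\exquanti$ and the inverse CDF \texttt{Finv} are assumed to be given):
\begin{verbatim}
# Hedged sketch: the two-price lottery from the proof.  Posting price
# Finv(a) with probability (b - q)/(b - a) and Finv(b) otherwise gives
# ex ante sale probability q and expected payment on the convex hull.
def two_price_lottery(Finv, q, a, b):
    if a == b:                  # q falls outside every ironed interval
        return [(Finv(q), 1.0)]
    w_low = (b - q) / (b - a)   # weight on the low price Finv(a)
    return [(Finv(a), w_low), (Finv(b), 1.0 - w_low)]
\end{verbatim}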
By Lemma~\ref{l:irr-neg} and Lemma~\ref{l:irr-pos}, the ex ante results also hold for the irregular case using randomized posted pricing. The following definition generalizes the notion of posted prices to allow for randomization.
\begin{definition} For a randomized posted pricing $\exquants$,
\begin{itemize}
\item Prices $\price_{i1}$ and $\price_{i2}$, together with the probabilities of picking each price, are induced by $\exquanti$ as in the proof of Lemma~\ref{l:irr-pos}.
\item Randomly pick $\price_{i1}$ or $\price_{i2}$.
\item In the case of sequential posted pricing, set the ordering to be in decreasing order of bang-per-buck.
\end{itemize}
\end{definition}
\begin{definition} With randomized posted pricing, a market is $k$-large if $B / \price_{ij} \geq k$ for all agents $i$ and $j \in \{1,2\}$.
\end{definition}
\subsection{From Ex Ante to Ex Post with Additive Value Functions}
For the additive case, we first show that the ex post randomized posted pricing performs well and then derandomize the mechanism.
\begin{theorem}
\label{t:perf-irr}
The randomized sequential posted pricing mechanism $(\exquants, \orders(\cdot))$ that serves agents with probability $\exquants$, where $\exquants$ is the solution to the multilinear extension
program~\eqref{eq:multilinear-prog} and where the order $\orders(\cdot)$ is decreasing in
$\frac{\vali}{\pricei}$, is a $(1-1/\sqrt{2 \pi
k})(1 - 1/k)$ approximation to the optimal mechanism in a $k$-large market.
\end{theorem}
\begin{proof}
We show that the randomized sequential posted pricing performs better than a deterministic sequential posted pricing with the same ex ante performance and a market that is $k$-large. Consider a randomized agent $i$ who is offered $\pricei = \price_{i1}$ with probability $\rho$ and $\pricei =\price_{i2}$ otherwise. Remove agent $i$ and replace it with two deterministic agents $i_1$ and $i_2$ with value $\vali$, who are offered $\price_{i1}$ and $\price_{i2}$ and who accept their price with probability $\rho \disti(\price_{i1})$ and $(1-\rho) \disti(\price_{i2})$ respectively. Call this new posted pricing the deterministic instance and the original posted pricing the randomized instance.
Both instances have the same ex ante performance since the expected total cost remains the same and since agent $i$ accepts his offer with probability equal to the sum of the probabilities that agents $i_1$ and $i_2$ accept their offer. Fix a set $S$ of agents who accept their offer that does not include $i$ and fix these offers. Notice that in both the randomized and deterministic instance, there is an expected increase in the total cost of $\price_{i1} \rho \disti(\price_{i1}) + \price_{i2} (1- \rho) \disti(\price_{i2})$ caused by agent $i$ to $S$. However, in the randomized instance, this increase in cost is either $\price_{i1}$ or $\price_{i2}$ and in the deterministic instance, this increase in cost can also be $\price_{i1} + \price_{i2}$. Since agents are ordered by decreasing bang-per-buck, the loss from agents that do not fit in the ex post budget constraint
is greater in the deterministic case. Therefore, the loss of the fractional knapsack value with respect to the ex ante performance of the mechanism is greater in the deterministic instance.
Now note that this argument can be repeated inductively until all the agents left are deterministic. So the approximation ratio obtained by the randomized mechanism is $(1-1/\sqrt{2 \pi
k})(1 - 1/k)$, by combining Lemma~\ref{l:fractional-knapsack-additive-relaxation-cg} and the $1-1/k$ loss from dropping the fractional agent.
\end{proof}
We now show that the mechanism can be derandomized.
\begin{theorem}
Any sequential randomized posted pricing $(\exquants, \orders(\cdot))$ can be modified into a sequential deterministic posted pricing in the case of additive value functions.
\end{theorem}
\begin{proof}
The proof proceeds in two steps. The first reduces the number of randomized agents until there is one left by using properties of ironed intervals. The second step is to simply pick the best of the two prices that are offered to the last randomized agent.
Consider a randomized posted pricing $(\exquants, \orders(\cdot))$ with at least two agents $i$ and $j$ that are randomized. The marginal costs per unit value of these two agents are $ \costcurvehi'(\exquanti) / \vali = \ivirti(\costi) / \vali$ and $\ivirt_j(\cost_j) / \val_j$. Without loss of generality, assume $\ivirti(\costi) / \vali \leq \ivirt_j(\cost_j) / \val_j$. Since both of these agents are randomized, $\exquanti$ and $\exquant_j$ are within ironed intervals and their ironed virtual costs are constants within these intervals. With no loss in the objective function, we can therefore increase $\exquanti$ and decrease $\exquant_j$ such that the budget still binds and such that either $\exquanti$ or $\exquant_j$ is at the extremity of the ironed interval it is in, and therefore not randomized anymore. This construction can be repeated until at most one randomized agent is left.
Consider a randomized posted pricing with a unique randomized agent $i$ who is offered $\pricei = \price_{i1}$ with probability $\rho$ and $\pricei =\price_{i2}$ otherwise. The proof of Theorem~\ref{t:perf-irr} shows that the ratio between the performance of the optimal mechanism and the expected fractional knapsack value is at least $1-1/\sqrt{2 \pi
k}$. Agent $i$ is either offered $\price_{i1}$ or $\price_{i2}$, so by expectations, with at least one of these two offers, the previous ratio is at least $1-1/\sqrt{2 \pi
k}$. Dropping the fractional agent and keeping the best price to offer to agent $i$, we therefore get a $(1-1/\sqrt{2 \pi
k})(1 - 1/k)$ approximation for a deterministic mechanism. \end{proof}
\begin{corollary}
Any sequential randomized posted pricing $(\exquants, \orders(\cdot))$ can be modified, in polynomial time and with high probability, into a sequential deterministic posted pricing in the case of additive value functions, with an additional $o(1)$ loss.
\end{corollary}
\begin{proof}
We need to compute which offered price between $\price_{i1}$ and $\price_{i2}$ performs better in terms of fractional knapsack value. The fractional knapsack value is a submodular function, and the multilinear extension of a submodular function can be approximated arbitrarily well by sampling, using Chernoff bounds. Therefore, with high probability, it is possible to compare arbitrarily well the fractional knapsack values obtained with the two prices offered to agent $i$.
\end{proof}
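For concreteness, a minimal Python sketch of such a sampling estimator for the multilinear extension of a set function \texttt{v} (our own illustration; the number of samples controls the Chernoff error bound):
\begin{verbatim}
# Hedged sketch: Monte-Carlo estimate of the multilinear extension
# V(q) = E[v(S)] where each agent i enters S independently w.p. q[i].
import random

def multilinear_estimate(v, q, samples=10000):
    total = 0.0
    for _ in range(samples):
        S = {i for i, qi in enumerate(q) if random.random() < qi}
        total += v(S)
    return total / samples
\end{verbatim}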
\subsection{From Ex Ante to Ex Post with Submodular Value Functions}
With submodular value functions, the analysis for the oblivious randomized posted pricing is identical to the analysis for the oblivious deterministic posted pricing. In Section~\ref{s:crs}, Theorem~\ref{t:CR-knapsack} shows that by lowering the budget by some small amount, the sum of the costs does not exceed the budget with high probability. Note that this result holds not only for deterministic agents but also for randomized agents, since the payment $p_i$ to an agent $i$ need only be bounded by $B/k$ and is not restricted to be either $0$ or $\pricei$. Therefore, the sum of the costs does not exceed the budget with high probability in the randomized case as well, and the remainder of the analysis of Section~\ref{s:crs} also holds.
\begin{theorem}
For $\epsilon \in (0,1/2)$, if the randomized oblivious posted pricing $\exquants$, where $\exquants$
is the optimal solution to the multilinear
extension program~\eqref{eq:multilinear-prog} with budget $(1 -
\epsilon) B$,
satisfies $2/ \epsilon \leq k \leq B / \max_i \pricei$, then this
posted pricing mechanism is a $(1 - 1 /e)(1 - \epsilon) (1- e^{-
\epsilon^2 ( 1- \epsilon) k / 12})$ approximation to the optimal
mechanism for submodular value functions and $(1 - \epsilon) (1- e^{-
\epsilon^2 ( 1- \epsilon) k / 12})$ for additive value functions.
\end{theorem}
\section{Conclusion}
We consider questions of budget feasibility in a Bayesian setting. We
show that simple posted pricing mechanisms are ex post budget feasible
and approximate the Bayesian optimal mechanism. Our analysis first
considers the ex ante relaxation where the budget constraint is
allowed to hold in expectation. Good approximations are obtained when
this ex ante relaxation is optimized for a slightly reduced budget or
when the agents are ordered by bang-per-buck (value divided by offered
price). The latter approach, which applies in the case of additive value
functions, gives better bounds.
Another method for designing posted pricing mechanisms from the
literature comes from the generalized magician's problem from
\citet{ala-14}. Unfortunately, this approach does not satisfy the
monotonicity property of Theorem~\ref{t:cr-main} needed to apply known
results that give a good approximation in the case of submodular
functions. Thus, it is unclear whether this approach can be adapted
to budget feasibility questions.
\newpage
\bibliographystyle{apalike}
\section{Introduction}
The anti-de Sitter/conformal field theory (AdS/CFT) correspondence, more generally the gauge/gravity duality \cite{Maldacena,Gubser,Witten}, which relates a weakly coupled gravity theory in a $(d+1)$-dimensional spacetime to a strongly coupled field theory on the $d$-dimensional boundary, has been widely applied to study strongly correlated systems in theoretical condensed matter physics. One of its most remarkable and successful applications is providing a holographically dual description of a high temperature superconducting phase transition. Holographic superconductors can be constructed by coupling an AdS black hole with a charged field and U$(1)$ gauge fields. When the Hawking temperature is decreased to some critical value, the black hole background becomes unstable against perturbations and gets hair by condensing some fields. According to the AdS/CFT duality, this hairy black hole solution can be regarded as a superconducting phase. The first simple model of the s-wave holographic superconductor was built by Hartnoll \emph{et al.} \cite{HartnollPRL,HartnollJHEP}. By considering the Yang-Mills theory/Maxwell complex vector field model or the charged tensor field in the bulk, one can get the p-wave holographic superconductors with a vector order parameter \cite{Gubser-Pufu,CaiPWave-1} and d-wave holographic superconductors with a tensor order parameter \cite{DWaveChen,DWaveBenini}. Until now, a lot of holographic superconductor models have been constructed and have attracted considerable interest for their potential applications to condensed matter physics, see Refs.
\cite{CaiQongGen2015,Hartnoll2009,Herzog2009,Horowitz2011} and references therein.
Most of the works on holographic superconductors focus on the ground state, which is the first state to condense. Since many novel and important properties show up in the excited states of superconducting materials in condensed matter systems \cite{Coffey,Sahoo,Demler,Semenov}, it is interesting and significant to explore holographic superconductors with excited states. As a first step, Wang \emph{et al.} constructed a novel family of solutions of the holographic superconductor with excited states in the probe limit where the backreaction of the matter fields on the spacetime metric is neglected \cite{WangYQ}, and pointed out that the excited states of the holographic superconductors could be related to the metastable states of mesoscopic superconductors \cite{Peeters,Vodolazov}. Subsequently, they built a fully backreacted holographic model of a superconductor with excited states \cite{WangYQBackreaction}. Qiao \emph{et al.} developed a general analytic technique by including more higher order terms in the expansion of the trial function to investigate the excited states of the holographic dual models in the backgrounds of the AdS black hole \cite{QiaoXY2020} and AdS soliton \cite{OuYangLiang2021}. Li \emph{et al.} investigated the non-equilibrium dynamical transition process between the excited states of holographic superconductors \cite{Liran2020}. Along this line, there has been accumulating interest in studying various holographic superconductors with excited states \cite{XiangZW,PanJie,YuBao,ZhangZPJNPB,Xiang2022}.
Within the AdS/CFT duality, the quantities in the boundary field theories are related to certain geometric quantities in the bulk spacetime. Two quantities from quantum information theory, which play important roles in investigating quantum gravity and quantum field theory, are the entanglement entropy and the complexity. The entanglement entropy is a powerful tool to probe the phase transitions and keep track of the degrees of freedom in a strongly coupled system. Holographically it can be computed by the Ryu-Takayanagi formula \cite{Ryu,Ryu2006}, which states that the entanglement entropy of CFTs is associated with a minimal area surface on the gravity side, namely
\begin{equation}\label{areaA}
\mathcal{S}=\frac{Area(\gamma_{A})}{4G_{N}},
\end{equation}
where $G_{N}$ is the Newtonian constant in the dual gravity theory, and $\gamma_{\mathcal{A}}$ is the Ryu-Takayanagi minimal area surface in the bulk, which shares the same boundary $\partial_{\mathcal{A}}$ with the subregion $\mathcal{A}$. Since this dual description of the entanglement entropy has been checked for several cases, it can be applied to study the properties of holographic superconductors. The initial work was done by Albash and Johnson who evaluated the holographic entanglement entropy (HEE) in the s-wave holographic superconductor \cite{Albash}. Subsequently, the HEE in various superconducting phase transition models has also been studied \cite{Ogawa,XiDong,CaiRongGen027,LiLiFang,CaiRongGen088,CaiRongGen107,Kuang,Peng,YaoWeiping}. The entanglement entropy turns out to be a good probe to investigate the critical points and the order of the holographic phase transitions, and provides us new insights into the quantum structure of spacetime.
However, the entanglement entropy is not enough to understand the rich geometric structures that exist behind the horizon because it only grows for a very short time. Then the holographic dual of the complexity, which essentially describes the minimal number of gates of any quantum circuit to obtain a desired target state from a reference state, has recently been presented by Susskind \cite{Susskind2014}. The computation of the complexity in holography is refined into two concrete conjectures. One is known as ``complexity=volume" (CV) conjecture \cite{SusskindCV2014,SusskindCV2016}, which proposes that the holographic complexity is proportional to the volume of the extremal codimension-one bulk hypersurface which meets the asymptotic boundary on the desired time slice. The other one is known as ``complexity=action" (CA) conjecture \cite{BrownCAprl,BrownCAprd}. It states that the complexity corresponds to the on-shell bulk action in the Wheeler-DeWitt (WDW) patch which is the domain of dependence of some Cauchy surface in the bulk ending on the time slice of the boundary. In this work, we will focus on the holographic subregion complexity (HSC) proposed by Alishahiha \cite{Alishahiha}, which is another definition of holographic complexity based on the original CV conjecture. Following Alishahiha's proposal, we can evaluate the HSC by the codimension-one volume of the time-slice of the bulk geometry enclosed by the extremal codimension-two Ryu-Takayanagi (RT) hypersurface used for the computation of holographic entanglement entropy as
\begin{equation}\label{VolumeA}
\mathcal{C}=\frac{Volume(\gamma_{A})}{8\pi LG_{N}},
\end{equation}
where $\gamma_{\mathcal{A}}$ is the RT surface of the corresponding subregion $\mathcal{A}$,
and $L$ is the AdS radius.
Because the complexity measures the difficulty of turning one quantum state into another, it is expected that the holographic complexity should capture the behavior of phase transitions of the boundary field theory, and this has raised intense interest in studying the complexity for different types of holographic superconductors. Many efforts have been made in using the HSC
as a probe of phase transitions in the s-wave superconductor \cite{Momeni,Zangeneh}, the p-wave superconductor \cite{Fujita}, the St$\ddot{u}$ckelberg superconductor \cite{Stuckelbergsuperconductor} and superconductors with nonlinear electrodynamics \cite{Shiyu2020,Lainonlinear2022}. These works show that the holographic complexity is a good parameter to characterize the superconducting phase transitions, and it behaves differently from the entanglement entropy, which means that the two quantities reflect different information about the holographic superconductor systems.
In order to further disclose the properties of the holographic superconductors with excited states, here we aim to study the HEE and HSC for the excited states of holographic superconductors with backreaction in Einstein gravity. Furthermore, we would like to extend our discussion to the case with higher curvature corrections to Einstein gravity. The Einstein-Gauss-Bonnet gravity is one of the natural modifications of Einstein gravity, obtained by including the Gauss-Bonnet term which arises naturally from the low-energy limit of heterotic string theory \cite{GB1,GB2,GB3,CaiRongGen2002}. Importantly, the presence of the Gauss-Bonnet term does not lead to more than second derivatives of the metric in the corresponding field equations and thus the theory is ghost-free. The Gauss-Bonnet theory has earned much attention in holographic studies in the past decades, and previous works on holographic superconductors in Gauss-Bonnet gravity show that the higher curvature terms have nontrivial contributions to some universal properties in Einstein gravity, for example see Refs. \cite{Gregory,PanQiYuan2010,Gregory2011,Kanno,Gangopadhyay,Ghorai,Sheykhi,Salahi,LiZH,Nam,Parai,LiHF2011,LuJW,LiuSC,Mohammadi,LaiCY,NieZY}. Particularly, it was believed that the Gauss-Bonnet term plays a role only in spacetimes of dimension $d\geq5$ until Glavan and Lin presented a novel $4$-dimensional Einstein-Gauss-Bonnet gravity by rescaling the Gauss-Bonnet coupling constant $\alpha\rightarrow\alpha/(d-4)$ and taking the limit $d\rightarrow4$, where the Gauss-Bonnet term makes an important contribution to the gravitational dynamics \cite{Glavan}. Subsequently, the ``regularized" versions of the $4$-dimensional Einstein-Gauss-Bonnet gravity \cite{HluPLB809,Hennigar,Fernandes,Oikonomou} and the consistent theory of $d\rightarrow4$ \cite{Aoki} have also been proposed. In Ref. \cite{QiaoXY}, the authors constructed the $(2+1)$-dimensional Gauss-Bonnet superconductors in the probe limit, showing that the critical temperature first decreases and then increases as the Gauss-Bonnet parameter tends towards the Chern-Simons value in a scalar mass dependent fashion. This subtle effect of the higher curvature correction on the scalar condensates in the s-wave superconductor in $(2+1)$-dimensions is different from the findings in the higher-dimensional superconductors \cite{Gregory,PanQiYuan2010,Gregory2011}. In this work, we will also investigate the HEE and HSC for the excited states of the superconductors in the $4$-dimensional Gauss-Bonnet gravity away from the probe limit, which can present some interesting details of the excited states of superconductors under the impact of the Gauss-Bonnet curvature correction.
This work is organized as follows. In section II, we investigate the entanglement entropy and complexity for the excited states of the holographic superconductors with fully backreaction in the Einstein gravity. In section III, we extend the discussion to the entanglement entropy and complexity of the fully backreacting holographic superconductors in the $4$-dimensional Einstein-Gauss-Bonnet gravity. In section IV, we conclude our results.
\section{Entanglement entropy and complexity for excited states of holographic superconductors in Einstein gravity}
\subsection{Holographic model and condensates of the scalar field}
In a $d$-dimensional Einstein gravity, we consider a Maxwell field and a charged complex scalar field coupled via the action
\begin{eqnarray}
S=\int d^{d}x\sqrt{-g}\bigg[\frac{1}{2\kappa^{2}}(R-2\Lambda)-
\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-|\nabla\psi-iqA\psi|^{2}-m^{2}|\psi|^{2}\bigg],
\end{eqnarray}
where $\kappa^{2}=8\pi G_{d}$ is the gravitational constant and $\Lambda=-(d-1)(d-2)/(2L^{2})$ is the cosmological constant. $A$ and $\psi$ respectively represent the gauge field and a scalar field with mass $m$ and charge $q$. To include the backreaction, we choose the following ansatz of the metric for the black hole with a planar symmetric horizon
\begin{eqnarray}\label{metric}
ds^2&=&-f(r)e^{-\chi(r)}dt^{2}+\frac{dr^2}{f(r)}+r^{2}h_{ij}dx^{i}dx^{j}.
\end{eqnarray}
The Hawking temperature of the black hole, which gives the temperature of the holographic superconductor, is expressed as
\begin{eqnarray}\label{TH}
T_{H}=\frac{f^{\prime}(r_{+})e^{-\chi(r_{+})/2}}{4\pi},
\end{eqnarray}
with the radius of the event horizon $r_{+}$.
Considering the ansatz for the matter fields $\psi=\psi(r)$, $A=\phi(r)dt$, we obtain the equations of motion for the matter and metric
\begin{eqnarray}\label{chieq}
\chi^{\prime}+\frac{4\kappa^{2}r}{d-2}\bigg(\psi'^2+\frac{q^{2}e^{\chi}\phi^{2}\psi^{2}}{f^{2}}\bigg)=0,
\end{eqnarray}
\begin{eqnarray}\label{feq}
f^{\prime}-\bigg[\frac{(d-1)r}{L^{2}}-\frac{(d-3)f}{r}\bigg]+
\frac{2\kappa^{2}r}{d-2}\bigg[m^{2}\psi^{2}+\frac{e^{\chi}\phi'^2}{2}+f\bigg(\psi'^2+\frac{q^{2}e^{\chi}\phi^{2}\psi^{2}}{f^{2}}\bigg)\bigg]=0,
\end{eqnarray}
\begin{eqnarray}\label{phi}
\phi^{\prime\prime}+\bigg(\frac{d-2}{r}+\frac{\chi^{\prime}}{2}\bigg)\phi^{\prime}-\frac{2q^{2}\psi^{2}}{f}\phi=0,
\end{eqnarray}
\begin{eqnarray}\label{psi}
\psi^{\prime\prime}+\bigg(\frac{d-2}{r}+\frac{f^{\prime}}{f}-\frac{\chi^{\prime}}{2}\bigg)\psi^{\prime}+\bigg(\frac{q^{2}e^{\chi}\phi^{2}}{f^{2}}-\frac{m^{2}}{f}\bigg)\psi=0,
\end{eqnarray}
where the prime denotes the derivative with respect to the coordinate $r$. Just as in Ref. \cite{PanQY2012}, we will set $q=1$ and keep $\kappa^{2}$ finite when the backreaction is taken into account.
For the normal phase, there is no condensate, i.e., $\psi(r)=0$. Thus, we find that $\chi$ is a constant and the analytic solutions to Eqs. (\ref{feq}) and (\ref{phi}) lead to the AdS Reissner-Nordstr$\ddot{o}$m black holes
\begin{eqnarray}
f(r)&=&\frac{r^{2}}{L^{2}}-\frac{1}{r^{d-3}}\bigg[\frac{r_{+}^{d-1}}{L^{2}}+\frac{(d-3)\kappa^{2}\rho^{2}}{(d-2)r_{+}^{d-3}}\bigg]+\frac{(d-3)\kappa^{2}\rho^{2}}{(d-2)r^{2(d-3)}},\nonumber \\
\phi(r)&=&\mu-\frac{\rho}{r^{d-3}},
\end{eqnarray}
where $\mu$ and $\rho$ are the chemical potential and the charge density in the dual field theory respectively. If $\kappa=0$, the metric coefficient $f$ goes back to the case of the Schwarzschild AdS black hole.
In order to get the solutions corresponding to the superconducting phase, i.e., $\psi(r)\neq 0$, we have to impose the appropriate boundary conditions. At the horizon $r=r_{+}$, the metric coefficient $\chi$ and scalar field $\psi$ are regular, but the metric coefficient $f$ and gauge field $\phi$ obey $\phi(r_{+})=0$ and $f(r_{+})=0$, respectively. Near the boundary $r\rightarrow\infty$, the asymptotic behaviors of the solutions are
\begin{eqnarray}
\chi\rightarrow 0,f\sim\frac{r^{2}}{L^{2}},\phi\sim\mu-\frac{\rho}{r^{d-3}},
\psi\sim\frac{\psi_{-}}{r^{\lambda_{-}}}+\frac{\psi_{+}}{r^{\lambda_{+}}},
\end{eqnarray}
where the coefficients $\psi_{+}$ and $\psi_{-}$ are related to the vacuum expectation value of the boundary operator $\mathcal{O}$ with the conformal dimension $\lambda_{\pm}=[(d-1)\pm\sqrt{(d-1)^{2}+4m^{2}L^{2}}]/2$, respectively. When $\lambda_{-}$ is larger than the unitarity bound, both modes are normalizable, and we may impose the boundary condition that either $\psi_{-}$ or $\psi_{+}$ vanishes \cite{HartnollPRL,HartnollJHEP}.
From the equations of motion (\ref{chieq})-(\ref{psi}), we can get the useful scaling symmetries and the transformation of the relevant quantities
\begin{eqnarray}\label{scaling}
r\rightarrow \beta r,~~~~(t,x^{i})\rightarrow \frac{1}{\beta}(t,x^{i}),~~~~(\chi,\psi,L)\rightarrow (\chi,\psi,L),\nonumber \\
(\phi,\mu, T)\rightarrow \beta(\phi,\mu, T),~~~~\rho\rightarrow \beta^{d-2}\rho,~~~~\psi_{\pm}\rightarrow \beta^{\lambda_{\pm}}\psi_{\pm},
\end{eqnarray}
with a real positive number $\beta$. Thus, we can choose $r_{+}=1$ and $L=1$. For concreteness, we focus on the 4-dimensional AdS black hole spacetime, and set the backreaction parameter $\kappa=0.05$ and the mass of the scalar field $m^2L^2=-2$ above the Breitenlohner-Freedman (BF) bound ($m^2L^2\geq-9/4$ for $d=4$). In the following, we will transform the coordinate as $r\rightarrow z=r_{+}/r$ for simplicity.
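Although we solve the full backreacted system, the shooting strategy that produces the ground and excited states is easiest to illustrate in the probe limit $\kappa\rightarrow 0$, where $\chi=0$ and the background is the Schwarzschild AdS$_{4}$ black hole with $f(r)=r^{2}-1/r$. The following Python sketch (our own illustration, not the backreacted system itself; the cutoff radius and tolerances are arbitrary numerical choices) integrates Eqs. (\ref{phi}) and (\ref{psi}) from the horizon and reads off the source coefficient $\psi_{-}$; scanning $\phi'(r_{+})$ at fixed $\psi(r_{+})$, the successive zeros of $\psi_{-}$ give the states with $n=0,1,2,\dots$ nodes:
\begin{verbatim}
# Hedged sketch: probe-limit shooting for the d = 4 superconductor with
# q = 1, m^2 L^2 = -2, r_+ = L = 1.  The same strategy applies to the
# backreacted equations solved in the text.
import numpy as np
from scipy.integrate import solve_ivp

m2 = -2.0
f  = lambda r: r**2 - 1.0/r            # Schwarzschild-AdS_4
fp = lambda r: 2.0*r + 1.0/r**2

def rhs(r, y):
    psi, dpsi, phi, dphi = y
    ddphi = -(2.0/r)*dphi + 2.0*psi**2*phi/f(r)
    ddpsi = -(2.0/r + fp(r)/f(r))*dpsi - (phi**2/f(r)**2 - m2/f(r))*psi
    return [dpsi, ddpsi, dphi, ddphi]

def shoot(psi_h, E, eps=1e-5, r_inf=100.0):
    # horizon regularity: phi(r_+) = 0, psi'(r_+) = m^2 psi(r_+)/f'(r_+)
    y0 = [psi_h, m2*psi_h/fp(1.0), E*eps, E]
    sol = solve_ivp(rhs, [1.0 + eps, r_inf], y0, rtol=1e-10, atol=1e-12)
    psi, dpsi = sol.y[0, -1], sol.y[1, -1]
    nodes = int(np.sum(sol.y[0][:-1] * sol.y[0][1:] < 0))  # node count n
    # psi ~ psi_-/r + psi_+/r^2  =>  psi_- = 2 r psi + r^2 psi' at large r
    return 2.0*r_inf*psi + r_inf**2*dpsi, nodes
\end{verbatim}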
\begin{figure}[ht]
\includegraphics[scale=0.62]{ZhengCond}\hspace{0.4cm}
\includegraphics[scale=0.62]{FuCond}\hspace{0.4cm}
\caption{\label{4dCond} The condensates of scalar operators $\mathcal{O}_{+}$ (left) and $\mathcal{O}_{-}$ (right) with excited states versus temperature for the fixed mass $m^{2}L^{2}=-2$. In each panel, the blue, red, green and black lines denote the ground $(n=0)$, first $(n=1)$, second $(n=2)$ and third $(n=3)$ states, respectively.}
\end{figure}
\begin{table}[ht]
\caption{The critical temperature $T_{c}$ of scalar operators $\mathcal{O}_{+}$ and $\mathcal{O}_{-}$ with excited states for the fixed mass $m^{2}L^{2}=-2$.}\label{4detemp}
\begin{center}
\begin{tabular}{c c c c c c}
\hline
$n$ & 0 & 1 & 2 & 3 & 4
\\
\hline
~~~~~~$<\mathcal{O}_{+}>$~~~~~&~~~~~$0.117710\rho^{1/2}$~~~~~&~~~~~$0.076345\rho^{1/2}$~~~~~&~~~~~$0.058377\rho^{1/2}$~~~~~&~~~~~$0.046861\rho^{1/2}$~~~~~&~~~~~$0.038202\rho^{1/2}$
\\
~~~~~~$<\mathcal{O}_{-}>$~~~~~&~~~~~$0.225271\rho^{1/2}$~~~~~&~~~~~$0.092168\rho^{1/2}$~~~~~&~~~~~$0.066290\rho^{1/2}$~~~~~&~~~~~$0.052181\rho^{1/2}$~~~~~&~~~~~$0.042283\rho^{1/2}$
\\
\hline
\end{tabular}
\end{center}
\end{table}
In Fig. \ref{4dCond}, we exhibit the condensates of the scalar operators $\langle\mathcal{O}_{+}\rangle$ (left) and $\langle\mathcal{O}_{-}\rangle$ (right) as a function of temperature from the ground state to the third excited state. When the temperature drops below the critical value $T_c$, the condensates of the scalar field emerge. The results for the critical temperature $T_{c}$ for both operators from the ground state to the fourth excited state are presented in Table \ref{4detemp}. It shows that the critical temperature $T_{c}$ of an excited state is lower than that of the ground state, which means that a higher excited state makes it harder for the scalar hair to form. The results are consistent with the findings in Ref. \cite{WangYQBackreaction}. Fitting the curves in Fig. \ref{4dCond}, we find the condensate behavior $\langle\mathcal{O}_{\pm}\rangle\sim(1-T/T_{c})^{1/2}$ near the critical point, which tells us that, for the ground and excited states, the superconducting phase transition of the $4$-dimensional holographic model with backreaction is second order and the critical exponent of the system takes the mean-field value $1/2$.
\subsection{HEE and HSC of the holographic model}
In this section, we will numerically study the behaviors of the HEE and HSC in the metal/superconductor phase transition with excited states, which will give more physics about the superconducting phase transition of excited states.
Let us consider a subsystem $\mathcal{A}$ with a straight strip geometry which is described by $-l/2\leq x\leq l/2$ and $-R/2\leq y\leq R/2$ $(R\rightarrow\infty)$. Here $l$ is defined as the size of region $\mathcal{A}$, and $R$ is a regulator which can be set to infinity. The radial minimal surface $\gamma_{A}$ starts from $z=\epsilon$ at $x=l/2$, and extends into the bulk until it reaches $z=z_{\ast}$, then returns back to the AdS boundary $z=\epsilon$ at $x=-l/2$, where $\epsilon$ is a UV cutoff. Therefore, the induced metric on the minimal surface takes the form
\begin{eqnarray}\label{inducedmetric}
ds_{induced}^{2}=\frac{r_{+}^{2}}{z^{2}}\bigg\{\bigg[1+\frac{1}{z^{2}f}\bigg(\frac{dz}{dx}\bigg)^{2}\bigg]dx^{2}+dy^{2}\bigg\}.
\end{eqnarray}
By using the Ryu-Takayanagi formula given in Eq. (\ref{areaA}), the entanglement entropy in the strip geometry is
\begin{eqnarray}\label{RTformula}
\mathcal{S}=\frac{R}{4G_{4}}\int_{-\frac{l}{2}}^{\frac{l}{2}}\frac{r_{+}^{2}}{z^{2}}\sqrt{1+\frac{1}{z^{2}f}\bigg(\frac{dz}{dx}\bigg)^{2}}dx.
\end{eqnarray}
The minimality condition implies
\begin{eqnarray}\label{xeq}
\frac{dz}{dx}=\frac{1}{z}\sqrt{(z_{\ast}^{4}-z^{4})f},
\end{eqnarray}
in which the constant $z_{\ast}$ satisfies the condition $\frac{dz}{dx}|_{z=z_{\ast}}=0$. Setting $x(z_*)=0$, we integrate the condition (\ref{xeq}) and obtain
\begin{eqnarray}\label{RTxz}
x(z)=\int_{z}^{z_{\ast}}\frac{z}{\sqrt{(z_{\ast}^{4}-z^{4})f}}dz,
\end{eqnarray}
with $x(\epsilon\rightarrow 0)=l/2$. After minimizing the area by Eq. (\ref{xeq}), the HEE becomes
\begin{eqnarray}\label{HEE}
\mathcal{S}=\frac{R}{2G_{4}}\int_{\epsilon}^{z_{\ast}}\frac{z_{\ast}^{2}}{z^{3}\sqrt{(z_{\ast}^{4}-z^{4})f}}dz= \frac{R}{2G_{4}}\left(s+\frac{1}{\epsilon}\right),
\end{eqnarray}
where $s$ is the finite term and $1/\epsilon$ is the divergent term as $\epsilon\rightarrow0$ caused by the pure AdS geometry $f\rightarrow z^{-2}$ near the UV cutoff. We can subtract this divergent term from $\mathcal{S}$ in Eq. (\ref{HEE}), and analyze the physically important finite part $s$ of the HEE.
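Once the metric function $f$ is known, both the strip width $l$ and the finite part $s$ follow from one-dimensional quadratures of Eqs. (\ref{RTxz}) and (\ref{HEE}); a minimal Python sketch (our own illustration, shown here with the pure AdS$_{4}$ form $f(z)=1/z^{2}$ as a check; for the hairy background one inserts the numerically computed $f$):
\begin{verbatim}
# Hedged sketch: strip width l and finite part s of the HEE for a
# turning point z_star; the 1/eps divergence is subtracted by hand.
import numpy as np
from scipy.integrate import quad

def hee_strip(f, z_star, eps=1e-6):
    g = lambda z: np.sqrt((z_star**4 - z**4) * f(z))
    l = 2.0 * quad(lambda z: z / g(z), 0.0, z_star, limit=200)[0]
    s = quad(lambda z: z_star**2 / (z**3 * g(z)), eps, z_star,
             limit=200)[0] - 1.0/eps
    return l, s

l, s = hee_strip(lambda z: 1.0/z**2, z_star=0.5)
\end{verbatim}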
Following the proposal given by Eq. (\ref{VolumeA}), the HSC in the strip geometry is
\begin{eqnarray}\label{HSC}
\mathcal{C}=\frac{R}{4\pi LG_{4}}\int_{\epsilon}^{z_{\ast}}\frac{x(z)dz}{z^{4}\sqrt{f}}=\frac{R}{4\pi LG_{4}}\left[c+\frac{F(z_{\ast})}{\epsilon^{2}}\right],
\end{eqnarray}
which also includes a universal term $c$ and a divergent term of the form $F(z_{\ast})/\epsilon^{2}$ with a function $F$ of $z_{\ast}$. Note that the function $F(z_{\ast})$ takes different forms in different situations, so we cannot give a general analytical form of the divergent term of the HSC and subtract it off to find the universal part of $\mathcal{C}$. Fortunately, the value of the universal term $c$ is independent of the UV cutoff. So considering two different values of the cutoff, $\epsilon_1$ and $\epsilon_2$, we can numerically compute the value of $F(z_{\ast})$ by
\begin{eqnarray}
F(z_{\ast})=\frac{4\pi LG_{4}[\mathcal{C}(\epsilon_{1})-\mathcal{C}(\epsilon_{2})]}{R(\epsilon_{1}^{-2}-\epsilon_{2}^{-2})},
\end{eqnarray}
which can help us to pick up the universal term $c$ of the HSC in every situation.
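A minimal Python sketch of this two-cutoff subtraction (our own illustration, again with the pure AdS$_{4}$ $f$ as a placeholder; the cutoffs must not be taken so small that floating-point cancellation dominates):
\begin{verbatim}
# Hedged sketch: HSC in units of R/(4 pi L G_4); the universal term c
# is extracted by computing C at two cutoffs and removing F/eps^2.
import numpy as np
from scipy.integrate import quad

def hsc_cutoff(f, z_star, eps):
    g = lambda z: np.sqrt((z_star**4 - z**4) * f(z))
    x = lambda z: quad(lambda u: u / g(u), z, z_star, limit=200)[0]
    return quad(lambda z: x(z) / (z**4 * np.sqrt(f(z))), eps, z_star,
                limit=200)[0]

def hsc_universal(f, z_star, e1=1e-3, e2=2e-3):
    C1, C2 = hsc_cutoff(f, z_star, e1), hsc_cutoff(f, z_star, e2)
    F = (C1 - C2) / (e1**-2 - e2**-2)   # coefficient of the divergence
    return C1 - F / e1**2               # the universal term c

c = hsc_universal(lambda z: 1.0/z**2, z_star=0.5)
\end{verbatim}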
\begin{figure}[ht]
\includegraphics[scale=0.62]{ZhengHEE}\hspace{0.4cm}
\includegraphics[scale=0.62]{FuHEE}\hspace{0.4cm}
\includegraphics[scale=0.62]{ZhengHSC}\hspace{0.4cm}
\includegraphics[scale=0.62]{FuHSC}\hspace{0.4cm}
\caption{ The HEE (top) and HSC (bottom) of scalar operators $\mathcal{O}_{+}$ (left) and $\mathcal{O}_{-}$ (right) with excited states versus temperature for the fixed width $l\sqrt{\rho}=1$, which shows that a higher excited state leads to a larger value of the HEE but a smaller value of the HSC at a given temperature in the superconducting phase. In each panel, the blue, red, green and black dashed lines denote the ground $(n=0)$, first $(n=1)$, second $(n=2)$ and third $(n=3)$ states of the superconducting phase, respectively. The magenta solid line denotes the normal phase.}\label{4dHEE}
\end{figure}
In Fig. \ref{4dHEE}, we plot the HEE (top) and HSC (bottom) as functions of the temperature $T$ for the operators $\mathcal{O}_{+}$ and $\mathcal{O}_{-}$ with excited states, respectively. In each panel, we can see that the HEE and HSC change discontinuously at the critical point where the curves of the normal phase (solid) intersect with those of the superconducting phase (dashed). This characterizes the phase transition from a normal phase to a superconducting phase as the temperature decreases below the critical value. Moreover, both for the operator $\mathcal{O}_{+}$ and the operator $\mathcal{O}_{-}$, the discontinuous points of the curves of the HEE and HSC correspond to the critical temperatures from the ground state to the third excited state given in Table \ref{4detemp}. It is obvious that the critical temperature $T_{c}$ of the phase transition decreases as the number of nodes $n$ increases. On the other hand, there are some differences between the HEE and HSC. Firstly, the HEE increases as the temperature increases, and its value in the normal phase is larger than that in the superconducting phase. On the contrary, the HSC decreases with increasing temperature and always has a smaller value in the normal phase than in the superconducting phase. Secondly, it is interesting to find that, for a fixed temperature $T$, a higher excited state has a larger HEE but a smaller HSC in the superconducting phase.
\section{Entanglement entropy and complexity for excited states of holographic
superconductors in 4-dimensional Einstein-Gauss-Bonnet gravity}
\subsection{Holographic model and condensates of the scalar field}
In this section, we extend the study to the backreacting holographic superconductor in the $4$-dimensional Einstein-Gauss-Bonnet gravity. We consider the Gauss-Bonnet-AdS black hole solution by using the consistent $d\rightarrow 4$ Einstein-Gauss-Bonnet gravity \cite{Aoki}. In the ADM formalism, we adopt the metric ansatz
\begin{eqnarray}
ds^{2}=g_{\mu\nu}dx^{\mu}dx^{\nu}=-N^{2}dt^{2}+\gamma_{ij}(dx^{i}+N^{i}dt)(dx^{j}+N^{j}dt),
\end{eqnarray}
where $N$ is the lapse function, $\gamma_{ij}$ is the spatial metric and $N^{i}$ is the shift vector. We begin with a Maxwell field and a charged complex scalar field coupled via the action
\begin{eqnarray}\label{action4DEGB}
S=\int dtd^{3}xN\sqrt{\gamma}\bigg(\mathcal{L}_{EGB}^{4D}-\frac{1}{4}F_{\mu\nu}F^{\mu\nu}-|\nabla\psi-iqA\psi|^{2}-m^{2}|\psi|^{2}\bigg),
\end{eqnarray}
where the Lagrangian density reads
\begin{eqnarray}
\mathcal{L}_{EGB}^{4D}=\frac{1}{2\kappa^{2}}\bigg\{2R+\frac{6}{L^{2}}-\mathcal{M}+\frac{\alpha}{2}\bigg[8R^{2}-4R\mathcal{M}-\mathcal{M}^{2}-
\frac{8}{3}\bigg(8R_{ij}R^{ij}-4R_{ij}\mathcal{M}^{ij}-\mathcal{M}_{ij}\mathcal{M}^{ij}\bigg)\bigg]\bigg\},
\end{eqnarray}
with the Gauss-Bonnet coupling parameter $\alpha$ and the Ricci tensor of the spatial metric $R_{ij}$. Here, we have
\begin{eqnarray}
\mathcal{M}_{ij}=R_{ij}+\mathcal{K}^{\kappa}_{\kappa}\mathcal{K}_{ij}-\mathcal{K}_{i\kappa}\mathcal{K}^{\kappa}_{j}, ~~~~~~\mathcal{M}\equiv\mathcal{M}^{i}_{i},
\end{eqnarray}
where $\mathcal{K}_{ij} \equiv \left[\dot{\gamma}_{ij}-2D_{(i}N_{j)}-\gamma_{ij}D^{2}\lambda_{\rm GF}\right]/(2N)$ with a dot denoting the derivative with respect to the time $t$, and $D_{i}$ being the covariant derivative compatible with the spatial metric.
We simply take the following ansatz for the metric
\begin{eqnarray}\label{4Dmetric}
N=\sqrt{f(r)}e^{-\chi(r)/2},~~~~~~N^{i}=0,~~~~~~\gamma_{ij}=diag\bigg(\frac{1}{f(r)},r^{2},r^{2}\bigg)
\end{eqnarray}
and consider the matter fields to be real functions of $r$, i.e., $\psi=|\psi(r)|$ and $A_{t}=\phi(r)$. So the equations of motion are
\begin{eqnarray}\label{4DGBchieq}
\chi^{\prime}+\frac{2\kappa^{2}r^{3}}{r^{2}-2\alpha f}\bigg(\psi'^2+\frac{q^{2}e^{\chi}\phi^{2}\psi^{2}}{f^{2}}\bigg)=0,
\end{eqnarray}
\begin{eqnarray}\label{4DGBfeq}
f^{\prime}-\frac{1}{r^{2}-2\alpha f}\bigg(\frac{3r^{3}}{L^{2}}-rf-\frac{\alpha f^{2}}{r}\bigg)+\frac{\kappa^{2}r^{3}}{r^{2}-2\alpha f}\bigg[m^{2}\psi^{2}+\frac{e^{\chi}\phi'^2}{2}+f\bigg(\psi'^2+\frac{q^{2}e^{\chi}\phi^{2}\psi^{2}}{f^{2}}\bigg)\bigg]=0,
\end{eqnarray}
\begin{eqnarray}\label{4Dphi}
\phi^{\prime\prime}+\bigg(\frac{2}{r}+\frac{\chi^{\prime}}{2}\bigg)\phi^{\prime}-\frac{2q^{2}\psi^{2}}{f}\phi=0,
\end{eqnarray}
\begin{eqnarray}\label{4Dpsi}
\psi^{\prime\prime}+\bigg(\frac{2}{r}+\frac{f^{\prime}}{f}-\frac{\chi^{\prime}}{2}\bigg)\psi^{\prime}+
\bigg(\frac{q^{2}e^{\chi}\phi^{2}}{f^{2}}-\frac{m^{2}}{f}\bigg)\psi=0,
\end{eqnarray}
where the prime denotes differentiation in $r$. When the Gauss-Bonnet parameter $\alpha\rightarrow 0$, Eqs. (\ref{4DGBchieq})-(\ref{4Dpsi}) will reduce to Eqs. (\ref{chieq})-(\ref{psi}) with $d=4$ for the backreacting holographic superconductors investigated in Ref. \cite{PanQY2012}. Here, the Hawking temperature has the same form as in (\ref{TH}), which is interpreted as the temperature of the dual field theory.
For the normal phase, i.e., $\psi(r)=0$, we can get the analytic solutions to the field equations (\ref{4DGBfeq}) and (\ref{4Dphi})
\begin{eqnarray}
f(r)&=&\frac{r^{2}}{2\alpha}\bigg[1-\sqrt{1-\frac{4\alpha}{L^{2}}\bigg(1-\frac{r_{+}^{3}}{r^{3}}\bigg)+\frac{2\alpha\kappa^{2}\rho^{2}}{r_{+}r^{3}}\bigg(1-\frac{r_{+}}{r}\bigg)}\bigg],\nonumber \\
\phi(r)&=&\mu-\frac{\rho}{r}.
\end{eqnarray}
In the limit $\alpha\rightarrow 0$, the solutions reduce to the case of the $4$-dimensional AdS Reissner-Nordstr$\ddot{o}$m black hole.
For the superconducting phase, i.e., $\psi(r)\neq 0$, the boundary conditions at the horizon and asymptotic AdS boundary have to be imposed to solve Eqs. (\ref{4DGBchieq})-(\ref{4Dpsi}). At the horizon $r=r_{+}$, we still have the regularity conditions, just as in section II for the Einstein gravity. Near the asymptotic boundary $r\rightarrow \infty$, we have
\begin{eqnarray}
\chi\rightarrow0,~~f\sim\frac{r^{2}}{L_{eff}^{2}},~~\phi\sim\mu-\frac{\rho}{r},
~~\psi\sim\frac{\psi_{-}}{r^{\Delta_{-}}}+\frac{\psi_{+}}{r^{\Delta_{+}}},
\end{eqnarray}
where the effective asymptotic AdS scale is defined by
\begin{eqnarray}
L_{eff}^{2}=\frac{2\alpha}{1-\sqrt{1-\frac{4\alpha}{L^{2}}}},
\end{eqnarray}
with the characteristic exponents $\Delta_{\pm}=(3\pm\sqrt{9+4m^{2}L_{eff}^{2}})/2$. In order to obtain a consistent influence of $\alpha$ on the various condensates in all dimensions, we choose the mass by fixing the value of $m^{2}L_{eff}^{2}$, just as pointed out in Ref. \cite{QiaoXY}. In the following calculation, we fix the mass by $m^{2}L_{eff}^{2}=-2$, so that $\Delta_{-}=1$ and $\Delta_{+}=2$, and the backreaction parameter $\kappa=0.05$, and we take the range $\alpha\leq L^{2}/4$, i.e., up to the so-called Chern-Simons limit, for the Gauss-Bonnet parameter. For simplicity, we concentrate here on the scalar operator $\mathcal{O}_{+}$, since the behaviors of the HEE and the HSC for the scalar operators $\mathcal{O}_{+}$ and $\mathcal{O}_{-}$ are the same, just as shown in Fig. \ref{4dHEE} for the Einstein gravity.
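Since we refer repeatedly to the shooting method below, we sketch the skeleton of the procedure here. This is only an illustration of the scheme, not the production code behind the figures: the horizon expansion is truncated at first order, $q=1$ is a representative choice, and the brackets, tolerances and the simple large-$r$ fit are our own simplified choices.
\begin{verbatim}
# Schematic shooting integration of Eqs. (4DGBchieq)-(4Dpsi): start from a
# regular horizon expansion and tune psi(r_+), phi'(r_+) so that the source
# coefficient psi_- vanishes at the AdS boundary.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

kappa, q, L, rp, alpha = 0.05, 1.0, 1.0, 1.0, 0.10
Leff2 = 2*alpha/(1.0 - np.sqrt(1.0 - 4.0*alpha/L**2))
m2 = -2.0/Leff2                       # enforces m^2 L_eff^2 = -2

def rhs(r, y):
    chi, f, phi, dphi, psi, dpsi = y
    g = 2.0*kappa**2*r**3/(r**2 - 2.0*alpha*f)
    s = dpsi**2 + q**2*np.exp(chi)*(phi*psi)**2/f**2
    dchi = -g*s
    df = ((3.0*r**3/L**2 - r*f - alpha*f**2/r)/(r**2 - 2.0*alpha*f)
          - 0.5*g*(m2*psi**2 + 0.5*np.exp(chi)*dphi**2 + f*s))
    ddphi = -(2.0/r + 0.5*dchi)*dphi + 2.0*q**2*psi**2*phi/f
    ddpsi = (-(2.0/r + df/f - 0.5*dchi)*dpsi
             - (q**2*np.exp(chi)*phi**2/f**2 - m2/f)*psi)
    return [dchi, df, dphi, ddphi, dpsi, ddpsi]

def start(psih, Ephih, eps=1e-4):
    # regular horizon data: f(r_+) = phi(r_+) = 0, chi(r_+) = 0 (chi and
    # phi are rescaled afterwards so that chi -> 0 at infinity)
    fp = 3.0*rp/L**2 - kappa**2*rp*(m2*psih**2 + 0.5*Ephih**2)
    dpsih = m2*psih/fp
    return rp + eps, [0.0, fp*eps, Ephih*eps, Ephih, psih + dpsih*eps, dpsih]

def psi_source(psih, Ephih, rmax=1.0e3):
    r0, y0 = start(psih, Ephih)
    sol = solve_ivp(rhs, (r0, rmax), y0, rtol=1e-9, atol=1e-12,
                    dense_output=True)
    # with m^2 L_eff^2 = -2 : Delta_- = 1, Delta_+ = 2, so fit
    # psi ~ psi_-/r + psi_+/r^2 at large r and return psi_-.
    r1, r2 = 0.3*rmax, rmax
    p1, p2 = sol.sol(r1)[4], sol.sol(r2)[4]
    return (p1*r1**2 - p2*r2**2)/(r1 - r2)

# ground state: bracket a sign change of psi_source in psih and solve, e.g.
# psih_star = brentq(lambda p: psi_source(p, Ephih=1.0), 1e-3, 1.0);
# excited states (n nodes of psi) correspond to further roots.
\end{verbatim}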
\begin{figure}[ht]
\includegraphics[scale=0.41]{ZhengCond4D}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZhengCond4Dn1}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZhengCond4Dn2}\hspace{0.4cm}
\caption{The condensates of the scalar operator $\mathcal{O}_{+}$ versus temperature with the fixed mass $m^{2}L_{eff}^{2}=-2$ for different Gauss-Bonnet parameters $\alpha$, i.e., $\alpha=0.0001$ (orange), $0.10$ (green), $0.24$ (red) and $0.25$ (blue). The three panels from left to right represent the ground ($n=0$), first ($n=1$) and second ($n=2$) states, respectively. }\label{4Dcond}
\end{figure}
In Fig. \ref{4Dcond}, we present the condensates of the scalar operator $\mathcal{O}_{+}$ as a function of the temperature for some chosen values of $\alpha$, i.e., $\alpha=0.0001$, $0.10$, $0.24$ and $0.25$, in the ground $(n=0)$, first $(n=1)$ and second $(n=2)$ states, respectively. It is observed that the condensate of $\mathcal{O}_{+}$ occurs for the different values of $\alpha$ and $n$ when $T<T_{c}$. For small condensate, we see that there is a square root behavior $\langle \mathcal{O}_{+}\rangle\sim(1-T/T_{c})^{1/2}$, which shows that, for the ground and excited states, the phase transition of the 4D Gauss-Bonnet holographic superconductors with backreaction is of second order with the mean-field critical exponent $1/2$ for all values of $\alpha$.
\begin{figure}[ht]
\includegraphics[scale=0.60]{Tcd4temp}\hspace{0.4cm}
\caption{The critical temperature $T_{c}$ of the scalar operator $\mathcal{O}_{+}$ as a function of the Gauss-Bonnet parameter $\alpha$ with the fixed mass $m^{2}L_{eff}^{2}=-2$ for the ground $n=0$ (blue), first $n=1$ (red) and second $n=2$ (green) states, respectively.}\label{4Dtemp}
\end{figure}
In Fig. \ref{4Dtemp}, we give the critical temperature $T_{c}$ for the condensate of the operator $\mathcal{O}_{+}$ as a function of the Gauss-Bonnet parameter $\alpha$ with the fixed mass $m^{2}L_{eff}^{2}=-2$ from the ground state to the second excited state, in order to extract the effect of the curvature correction on $T_{c}$. An interesting feature is that the critical temperature $T_{c}$ decreases as $\alpha$ increases, but slightly increases near the Chern-Simons limit $\alpha=0.25$. Furthermore, this non-monotonic behavior of the critical temperature is more pronounced in the ground state than in the excited states, which is in good agreement with the results obtained in Ref. \cite{PanJie}.
\subsection{HEE and HSC of the holographic model}
Now we are ready to study the properties of the HEE and HSC for the backreacting holographic superconductor in the $4$-dimensional Einstein-Gauss-Bonnet gravity. Due to the presence of a Gauss-Bonnet term, we should use a general formula to calculate the HEE \cite{Dong2014,Wald1993,Wald1995,Jacobson1994}
\begin{eqnarray}
\mathcal{S}=-2\pi\int d^{d}y\sqrt{-g}\bigg\{\frac{\partial \mathcal{L}}{\partial R_{\mu\rho\nu\sigma}}\varepsilon_{\mu\rho}\varepsilon_{\nu\sigma}-\sum_{\alpha}\bigg(\frac{\partial^{2}\mathcal{L}}{\partial R_{\mu_{1}\rho_{1}\nu_{1}\sigma_{1}}\partial R_{\mu_{2}\rho_{2}\nu_{2}\sigma_{2}}}\bigg)_{\alpha}\frac{2K_{\lambda_{1}\rho_{1}\sigma_{1}}K_{\lambda_{2}\rho_{2}\sigma_{2}}}{q_{\alpha}+1}\times \nonumber\\
\bigg[(n_{\mu_{1}\mu_{2}}n_{\nu_{1}\nu_{2}}-\varepsilon_{\mu_{1}\mu_{2}}\varepsilon_{\nu_{1}\nu_{2}})n^{\lambda_{1}\lambda_{2}}+(n_{\mu_{1}\mu_{2}}\varepsilon_{\nu_{1}\nu_{2}}+\varepsilon_{\mu_{1}\mu_{2}}n_{\nu_{1}\nu_{2}})\varepsilon^{\lambda_{1}\lambda_{2}}\bigg]\bigg\},
\end{eqnarray}
where $n_{\mu\nu}$ and $\varepsilon_{\mu\nu}$ reduce to the metric and Levi-Civita tensor in the two orthogonal directions with all other components vanishing, and the $q_{\alpha}$ are treated as ``anomaly coefficients''. This results in corrections to expressions (\ref{RTformula}) and (\ref{HEE}). Still, we employ the shooting method to carry out our numerical calculation, and use $s$ and $c$ to denote the universal terms of the entanglement entropy and of the complexity, respectively.
\begin{figure}[ht]
\includegraphics[scale=0.41]{ZHEE00001}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHEE024}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHEE02499}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHEEn100001}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHEEn10247}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHEEn102499}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHEEn200001}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHEEn202478}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHEEn202499}\hspace{0.4cm}
\caption{ The HEE of the scalar operator $\mathcal{O}_{+}$ versus temperature from the ground state ($n=0$) to the second excited state ($n=2$) with a fixed width $l\sqrt{\rho}=1$ for different Gauss-Bonnet parameters $\alpha$, which shows that the HEE always increases monotonically with the temperature and that its value in the normal phase is always larger than that in the superconducting phase. The solid (blue) lines denote the normal phase and the dashed (red) lines are for the superconducting phase. }\label{4DZHEE}
\end{figure}
Fig. \ref{4DZHEE} shows our results from the ground state $(n=0)$ to the second excited state $(n=2)$ for the HEE of the scalar operator $\mathcal{O}_{+}$ as a function of the temperature $T$. In each panel, the critical temperature $T_{c}$ of the system can be read from the joint point of the solid line for the normal phase and the dashed line for the superconducting phase. For example, in the ground state $(n=0)$, we have $T_{c}/\sqrt{\rho}=0.117705$ for $\alpha=0.0001$ (top-left panel), $T_{c}/\sqrt{\rho}=0.100418$ for $\alpha=0.24$ (top-middle panel) and $T_{c}/\sqrt{\rho}=0.105064$ for $\alpha=0.25$ (top-right panel), which are consistent with those in Fig. \ref{4Dtemp}. It is obvious that, for the ground state, the critical temperature first decreases and then increases as $\alpha$ approaches the Chern-Simons limit. In addition, for the ground state and excited states, we always find that the value of the HEE in the superconducting phase is less than that in the normal phase when $T<T_{c}$, independently of the Gauss-Bonnet parameter $\alpha$. This behavior of the HEE is due to the fact that the condensate turns on at the critical temperature and the formation of Cooper pairs makes the degrees of freedom decrease in the superconducting phase. When we fix the Gauss-Bonnet parameter $\alpha$, we can see that the value of the HEE becomes larger as the number of nodes $n$ increases.
\begin{figure}[ht]
\includegraphics[scale=0.41]{ZHSC00001}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHSC024}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHSC02499}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHSCn100001}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHSCn10247}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHSCn102499}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHSCn200001}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHSCn202478}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHSCn202499}\hspace{0.4cm}
\caption{ The HSC of the scalar operator $\mathcal{O}_{+}$ versus temperature from the ground state ($n=0$) to the second excited state ($n=2$) with a fixed width $l\sqrt{\rho}=1$ for different Gauss-Bonnet parameters $\alpha$, which shows that
the Gauss-Bonnet parameter has a more subtle effect on the HSC when compared to the HEE. The solid (blue) lines denote the normal phase and the dashed (red) lines are for the superconducting phase. }\label{4DZHSC}
\end{figure}
In Fig. \ref{4DZHSC}, we plot the HSC of the scalar operator $\mathcal{O}_{+}$ as a function of the temperature $T$, and find that the curve of the HSC in the normal phase and the one in the superconducting phase intersect at the same critical temperature as that reflected by the HEE in Fig. \ref{4DZHEE}. It means that the HSC is able to capture the emergence of the phase transitions in the ground state and excited states. What is noteworthy is that the Gauss-Bonnet parameter $\alpha$ has an interesting effect on the relation between the HSC and the temperature, which can be seen in the ground state and excited states. Obviously, there is a threshold $\alpha_t$ of the Gauss-Bonnet parameter. When $\alpha<\alpha_t$, the value of the HSC decreases as the temperature increases; moreover, for fixed $\alpha$, the value of the HSC decreases as $n$ increases, which agrees with the finding shown in the bottom-left panel of Fig. \ref{4dHEE}. At the threshold $\alpha=\alpha_t$, with increasing $T/\sqrt{\rho}$, the value of the HSC first decreases and then increases. It should be noted that the threshold becomes larger in the higher excited states, i.e., $\alpha_t=0.2400$ for the ground state $n=0$, $\alpha_t=0.2470$ for the first excited state $n=1$ and $\alpha_t=0.2478$ for the second excited state $n=2$. When $\alpha$ goes up to the Chern-Simons limit $\alpha=0.25$, however, this non-monotonic behavior of the HSC converts to a monotonically increasing function of the temperature, which is contrary to the case of $\alpha<\alpha_t$. Besides, we find that the normal phase always has a smaller HSC than the superconducting phase except at the Chern-Simons limit, namely, the value of the HSC in the normal phase is larger than that in the superconducting phase for $\alpha=0.25$. Under the influence of the Gauss-Bonnet parameter, these special features of the HSC with respect to the temperature imply that the higher curvature correction makes a difference to the properties of the spacetime and changes the geometric structure.
\begin{figure}[ht]
\includegraphics[scale=0.41]{Zalphan0}\hspace{0.4cm}
\includegraphics[scale=0.41]{Zalphan1}\hspace{0.4cm}
\includegraphics[scale=0.41]{Zalphan2}\hspace{0.4cm}
\caption{ The HEE of the scalar operator $\mathcal{O}_{+}$ as a function of the Gauss-Bonnet parameter $\alpha$ at the temperature $T/\sqrt{\rho}=0.02$ with some chosen values of the widths, i.e., $l\sqrt{\rho}=0.80$ (blue), $l\sqrt{\rho}=1.00$ (red) and $l\sqrt{\rho}=1.20$ (green). The three panels from left to right represent the ground ($n=0$), first ($n=1$) and second ($n=2$) states, respectively.}\label{4DZHEEalpha}
\end{figure}
\begin{figure}[ht]
\includegraphics[scale=0.41]{ZHSCalphan0}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHSCalphan1}\hspace{0.4cm}
\includegraphics[scale=0.41]{ZHSCalphan2}\hspace{0.4cm}
\caption{ The HSC of the scalar operator $\mathcal{O}_{+}$ as a function of the Gauss-Bonnet parameter $\alpha$ at the temperature $T/\sqrt{\rho}=0.02$ with some chosen values of the widths, i.e., $l\sqrt{\rho}=0.80$ (blue), $l\sqrt{\rho}=1.00$ (red) and $l\sqrt{\rho}=1.20$ (green). The three panels from left to right represent the ground ($n=0$), first ($n=1$) and second ($n=2$) states, respectively.}\label{4DZHSCalpha}
\end{figure}
On the other hand, for a fixed temperature, from Figs. \ref{4DZHEE} and \ref{4DZHSC} we find that a larger Gauss-Bonnet parameter $\alpha$ leads to larger values of the HEE and HSC. To further illustrate this, in Figs. \ref{4DZHEEalpha} and \ref{4DZHSCalpha}, we plot the HEE and HSC as functions of the Gauss-Bonnet parameter $\alpha$, respectively, from the ground state to the second excited state at a fixed temperature $T/\sqrt{\rho}=0.02$ with different widths, i.e., $l\sqrt{\rho}=0.80$, $l\sqrt{\rho}=1.00$ and $l\sqrt{\rho}=1.20$. We observe clearly that, regardless of the width and the number of nodes, the HEE and HSC increase with the Gauss-Bonnet parameter $\alpha$. Moreover, in each panel, the HEE and HSC become larger as the width increases.
\section{Conclusion}
In this work, we first study the HEE and HSC for the excited states of holographic superconductors with full backreaction in the $4$-dimensional Einstein gravity. We note that the changes of the HEE and HSC for both scalar operators $\mathcal{O}_{-}$ and $\mathcal{O}_{+}$ are discontinuous at the critical temperature $T_{c}$, and that $T_{c}$ in the excited states is lower than in the ground state, which indicates that the higher excited states make the scalar condensate harder to form. The values of $T_{c}$ reflected by the HEE and HSC are consistent with the results obtained from the condensate behavior, which means that both the HEE and HSC can be used as good probes of the superconducting phase transition in the excited states. However, there are some differences between the HEE and HSC. We observe that, for the ground state and excited states, the value of the HEE in the normal phase is larger than that in the superconducting phase and increases as the temperature increases, which is opposite to the behavior of the HSC, namely, the normal phase always has a smaller HSC than the superconducting phase and the HSC decreases as the temperature increases. Meanwhile, for a given temperature $T$ in the superconducting phase, the higher excited state leads to a larger value of the HEE but a smaller value of the HSC.
Next, we extend the investigation to the HEE and HSC for the excited states of backreacting superconductors in the $4$-dimensional Einstein-Gauss-Bonnet gravity. One remarkable feature is that the critical temperature $T_{c}$ for the scalar operator $\langle\mathcal{O}_{+}\rangle$ decreases as the higher curvature correction $\alpha$ increases, but slightly increases as $\alpha$ grows to the Chern-Simons limit, and this non-monotonic behavior of $T_{c}$ is more pronounced in the ground state than in the excited states, which is supported by the findings obtained from both the HEE and HSC. The other noteworthy feature is that the effect of $\alpha$ on the relation between the HSC and the temperature is nontrivial in the ground state and excited states, which is a distinguishing property of the HSC that has not been found in the HEE. Specifically, the HSC behaves as a monotonically decreasing function of the temperature until $\alpha$ reaches some threshold $\alpha_t$. At this critical point $\alpha_t$, the HSC changes non-monotonically with the temperature, i.e., it first decreases and then increases with increasing temperature. The value of $\alpha_t$ moves closer to the Chern-Simons limit in the higher excited states. More interestingly, when $\alpha$ approaches the Chern-Simons limit, the HSC converts to a monotonically increasing function of the temperature. Besides, the value of the HSC in the normal phase is less than that in the superconducting phase for the case of $\alpha\leq\alpha_t$, but it is just the contrary at the Chern-Simons limit. The HEE, in contrast, always increases monotonically with the temperature and its value in the normal phase is always larger than that in the superconducting phase, regardless of $\alpha$. Lastly, we find that, for the ground state and excited states, the increase of $\alpha$ makes both the HEE and HSC increase, independently of the strip width. Thus, we conclude that the HEE and HSC provide richer physics in the phase transition and the condensate of the scalar hair for holographic superconductors with excited states.
\begin{acknowledgments}
We would like to acknowledge helpful discussions with Jian-Pin Wu and Guoyang Fu. This work was supported by the National Key Research and Development Program of China (Grant No. 2020YFC2201400), National Natural Science Foundation of China (Grant Nos. 12275079, 12035005 and 11690034) and Postgraduate Scientific Research Innovation Project of Hunan Province (Grant No. CX20210472).
\end{acknowledgments}
\vspace*{0.5cm}
The compressible Euler-Korteweg equations read
\begin{equation}
\label{EK}
\left\{
\begin{array}{ll}
\partial_{t}\rho+{\rm div}(\rho u)=0,\ (x,t)\in \mathbb{R}^d\times I\\
\partial_{t}u+u\cdot \nabla u+\nabla g(\rho)=\nabla \bigg(K(\rho)\Delta \rho +\frac{1}{2}K'(\rho)|\nabla \rho|^2
\bigg),\ (x,t)\in \mathbb{R}^d\times I\\
(\rho,u)|_{t=0}=(\rho_{0},u_{0}),\ x\in \mathbb{R}^d.
\end{array}\right.
\end{equation}
Here $\rho$ is the density of the fluid, $u$ the velocity, $g$ the bulk chemical potential, related to
the pressure by $p'(\rho)=\rho g'(\rho)$. $K(\rho)>0$ corresponds to the capillary coefficient.
On the left hand side we recover the Euler equations, while the right hand side of the second
equation contains the so called Korteweg tensor, which is intended to take into account capillary
effects and models in particular the behavior at the interfaces of a liquid-vapor mixture. The system arises
in various settings: the case $K(\rho)=\kappa/\rho$ corresponds to the
so-called equations of quantum hydrodynamics (which are formally equivalent to the Gross-Pitaevskii
equation through the Madelung transform, on this topic see the survey of Carles et al \cite{CDS}).\\
As we will see, in the irrotational case the system can be reformulated as a
quasilinear Schr\"odinger equation, this is in sharp contrast with the non homogeneous
incompressible case where the system is hyperbolic (see
\cite{Feireisl}).
For a general $K(\rho)$, local well-posedness was proved in \cite{Benzoni1}. Moreover \eqref{EK}
has a rich structure with special solutions such as planar traveling waves,
namely solutions that only depend on $y=t-x\cdot \xi$, $\xi\in \mathbb{R}^d$, with possibly
$\lim_{\infty}\rho(y)\neq \lim_{-\infty} \rho(y)$.
The orbital stability and instability of such solutions has been largely studied over the last
ten years (see \cite{BDD2} and the review article of Benzoni-Gavage \cite{Benzoni5}).
The existence and non uniqueness of global non dissipative weak solutions\footnote{These global weak solutions do not satisfy the energy inequality.} in the spirit of De Lellis-Sz\'ekelyhidi \cite{DelSz}
was tackled by Donatelli et al \cite{Donatellietcie}, while weak-strong uniqueness has been
very recently studied by Giesselman et al \cite{GLT}.\\
Our article deals with a complementary issue, namely the global well-posedness and asymptotically
linear behaviour of small smooth solutions near the constant state $(\rho,u)=(\overline{\rho},0)$.
To our knowledge we obtain here the
first global well-posedness result for \eqref{EK} in the case of a general pressure and capillary
coefficient. This is in strong contrast with the existence of infinitely many \emph{weak}
solutions from \cite{Donatellietcie}.
\\
A precise statement of our results is provided in theorems \ref{theolarged},\ref{theod4} of
section \ref{results}, but first we will briefly
discuss the state of well-posedness theory, the structure of the equation, and the tools available
to tackle the problem.
Let us start with the local well-posedness result from \cite{Benzoni1}.
\begin{theo}
For $d\geq 1$, let $(\overline{\rho},\overline{u})$ be a smooth solution whose
derivatives decay rapidly at infinity, $s>1+d/2$. Then for $(\rho_0,u_0)\in (\overline{\rho},\overline{u})
+H^{s+1}(\mathbb{R}^d)\times H^s(\mathbb{R}^d)$,
$\rho_0$ bounded away from $0$, there exists $T>0$ and a unique solution $(\rho,u)$ of
$(\ref{EK})$ such that $(\rho-\overline{\rho},u-\overline{u})$ belongs to
$C([0,T],H^{s+1}\times H^s)\cap C^1([0,T],H^{s-1}\times H^{s-2})$ and $\rho$ remains bounded away
from $0$ on $[0,T]\times \mathbb{R}^d$.
\end{theo}
\noindent We point out that \cite{Benzoni1} includes local well-posedness results for
nonlocalized initial data (e.g. theorem $6.1$). The authors also obtained several blow-up
criteria. In the
irrotational case it reads:
\subparagraph{Blow-up criterion:} \label{blowcriter} for $s>1+d/2$, $(\rho,u)$ solution on
$[0,T)\times \mathbb{R}^d$ of $(\ref{EK})$, the solution can be continued beyond $T$ provided
\begin{enumerate}
\item $\rho([0,T)\times \mathbb{R}^d)\subset J\subset \mathbb{R}^{+*}$, $J$ compact and $K$ is smooth
on a neighbourhood of $J$.
\item $\int_0^T(\|\Delta \rho(t)\|_\infty+\|\text{div}u(t)\|_\infty )dt<\infty$.
\end{enumerate}
These results relied on energy estimates for an extended system that we write now.
If $\mathcal{L}$ is a
primitive of $\sqrt{K/\rho}$, setting $L=\mathcal{L}(\rho)$, $w=\sqrt{K/\rho}
\nabla \rho=\nabla L$, $a=\sqrt{\rho K(\rho)}$, from basic computations we verify (see \cite{Benzoni1}) that the equations on
$(L,u,w)$ are
\begin{equation*}
\left\{
\begin{array}{ll}
\partial_tL+u\cdot \nabla L+a\text{div}u=0,\\
\partial_tu+u\cdot \nabla u-w\cdot \nabla w-\nabla (a\text{div}w)=-\nabla g,\\
\partial_tw+\nabla (u\cdot w)+\nabla (a\text{div}w)=0,
\end{array}
\right.
\end{equation*}
or equivalently for $z=u+iw$
\begin{equation}\label{ES}
\left\{
\begin{array}{ll}
\partial_tL+u\cdot \nabla L+a\text{div}u=0,\\
\partial_t z+u\cdot \nabla z+i(\nabla z)\cdot w+i\nabla(a\text{div}z)=\nabla \widetilde{g}(L).
\end{array}
\right.
\end{equation}
Here we set $\widetilde{a}(L)=a\circ \mathcal{L}^{-1}(L),\ \widetilde{g}(L)=g\circ
\mathcal{L}^{-1}(L)$ which are well-defined since $\sqrt{K/\rho}>0$ thus $\mathcal{L}$ is
invertible. \\
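As a quick illustration, in one space dimension the first equation of this system can be checked mechanically from the mass equation; a minimal SymPy sketch (the variable names are ours):
\begin{verbatim}
# 1D check: mass conservation turns into the transport-type equation
# dt L + u dx L + a dx u = 0, with L'(rho) = sqrt(K/rho), a = sqrt(rho K).
import sympy as sp

t, x = sp.symbols('t x')
rho = sp.Function('rho')(t, x)
u = sp.Function('u')(t, x)
K = sp.Function('K')

Lp = sp.sqrt(K(rho))/sp.sqrt(rho)   # L'(rho)
a = sp.sqrt(K(rho))*sp.sqrt(rho)    # a(rho)

rho_t = -sp.diff(rho*u, x)          # mass equation solved for dt rho
L_t = Lp*rho_t                      # chain rule: dt L = L'(rho) dt rho
L_x = Lp*sp.diff(rho, x)
print(sp.simplify(L_t + u*L_x + a*sp.diff(u, x)))   # -> 0
\end{verbatim}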
This change of unknown clarifies the underlying dispersive structure of the model as the second
equation is a quasi-linear degenerate Schr\"odinger equation. It should be pointed out
however that the local existence results of \cite{Benzoni1} relied on $H^s$ energy estimates
rather than dispersive estimates. On the other hand, we constructed recently
in \cite{AudHasp} global
small solutions to $(\ref{EK})$ for $d\geq 3$ when the underlying system is
\emph{semi-linear}, that is $K(\rho)=\kappa/\rho$ with $\kappa$ a positive constant and for
$g(\rho)=\rho-1$. This case corresponds to the equations of quantum hydrodynamics. The
construction relied on the so-called Madelung transform, which establishes a formal
correspondence between these equations and the Gross-Pitaevskii equation, and recent results
on scattering for the Gross-Pitaevskii equation \cite{GNT1}\cite{GNT3}. Let us recall for completeness that
$1+\psi$ is a solution of the Gross-Pitaevskii equation if $\psi$ satisfies
\begin{equation}\label{GP}
i\partial_t\psi+\Delta \psi-2\text{Re}(\psi)=\psi^2+2|\psi|^2+|\psi|^2\psi.
\end{equation}
For the construction of global weak solutions (no uniqueness, but no smallness assumptions)
we refer also to the work of Antonelli-Marcati \cite{AntMarc2,AntMarc}.\\
In this article we consider perturbations of the constant state $\rho=\rho_c,\ u=0$ for a general
capillary coefficient $K(\rho)$ that we only suppose smooth and positive on an interval
containing $\rho_c$. In order to exploit the dispersive nature of
the equation we need to work with irrotational data $u=\nabla \phi$ so that $(\ref{ES})$ reduces
to the following system (where $L_c=\mathcal{L}(\rho_c)$), which obviously has similarities with $(\ref{GP})$ (more details are provided in sections
$3$ and $4$):
\begin{equation}\label{eqcanonique}
\left\{
\begin{array}{ll}
\partial_t\phi-\Delta (L-L_c)+\widetilde{g}'(L_c)(L-L_c)=\mathcal{N}_1(\phi,L),\\
\partial_t(L-L_c)+\Delta \phi=\mathcal{N}_2(\phi,L)
\end{array}\right.
\end{equation}
The system satisfies the dispersion relation
$\tau^2=|\xi|^2(\widetilde{g}'(L_c)+|\xi|^2)$, and the $\mathcal{N}_j$ are at least quadratic
nonlinearities that depend on $L,\phi$ and their derivatives (the system is thus quasi-linear). We
also point out that the stability condition $\widetilde{g}'(L_c)\geq 0$
is necessary in order to ensure that the solutions in $\tau$ of the dispersion relation are real.\\
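The dispersion relation can be read off from the linearization around the constant state; the following minimal SymPy check treats a single Fourier mode (we write $c$ for $\widetilde{g}'(L_c)$):
\begin{verbatim}
# Plane waves e^{i(x.xi - tau t)} in the linearization of the system:
#   dt phi = -(|xi|^2 + c) l,   dt l = |xi|^2 phi,   c = g~'(L_c).
import sympy as sp

tau, xi, c = sp.symbols('tau xi c', real=True)
A = sp.Matrix([[0, -(xi**2 + c)],
               [xi**2, 0]])
charpoly = (sp.I*(-tau)*sp.eye(2) - A).det()
print(sp.expand(charpoly))   # -> -tau**2 + xi**4 + c*xi**2, i.e. the
                             # dispersion relation tau^2 = |xi|^2(c + |xi|^2)
\end{verbatim}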
The existence of global small solutions for nonlinear dispersive equations is a rather classical
topic which we cannot describe exhaustively in this introduction. We shall
nevertheless underline the main ideas that are important for our work here.
\paragraph{Dispersive estimates}
For the Schr\"odinger equation, two key tools are the dispersive estimate
\begin{equation}
\|e^{it\Delta}\psi_0\|_{L^q(\mathbb{R}^d)}\lesssim\frac{\|\psi_0\|_{L^2}}{t^{d(1/2-1/q)}},
\label{adisp1}
\end{equation}
and the Strichartz estimates
\begin{eqnarray}
\|e^{it\Delta}\psi_0\|_{L^p(\mathbb{R},L^q(\mathbb{R}^d))}\lesssim \|\psi_0\|_{L^2},\ \frac{2}{p}+\frac{d}{q}
=\frac{d}{2},\\
\|\int_0^te^{i(t-s)\Delta}f(s)ds\|_{L^p(\mathbb{R},L^q(\mathbb{R}^d))}\lesssim \|f\|_{L^{p'_1}(\mathbb{R},L^{q'_1}(\mathbb{R}^d))},
\
\frac{2}{p_1}+\frac{d}{q_1}=\frac{d}{2}.
\end{eqnarray}
Both indicate decay of the solution for long times in $L^p(L^q)$ spaces; this is of course of interest when we wish to prove the existence of global strong solutions, since that generally requires some damping behavior for long times. Due to the pressure term the linear structure of our system is actually closer to the one of the
Gross-Pitaevskii equation (see (\ref{GP})), but the estimates are essentially the same as
for the Schr\"odinger
equation. Local smoothing is also an interesting feature of Schr\"odinger equations, in particular
for the study of quasilinear systems. A result in this direction was obtained by the first author
in \cite{Audiard4} but we will not need it here. The main task of our proof will consist in proving dispersive estimates
of the type (\ref{adisp1}) for long times; this is related to the notion of scattering for solutions of
dispersive equations. Let us now recall some classical results on scattering theory for the Schr\"odinger
and Gross-Pitaevskii equations.
\paragraph{Scattering} Let us consider the following nonlinear Schr\"odinger
equation
\begin{equation*}
i\partial_t \psi+\Delta \psi=\mathcal{N}(\psi).
\end{equation*}
Due to the dispersion, when the nonlinearity vanishes at a sufficient order
at $0$ and the initial data is sufficiently small and localized, it is possible to prove that the
solution is global and
the integral $\int e^{-is\Delta}\mathcal{N}(\psi(s))ds$ converges in $L^2(\mathbb{R}^d)$, so that there
exists $\psi_+\in L^2(\mathbb{R}^d)$ such that
\begin{equation*}
\|\psi(t)-e^{it\Delta}\psi_+\|_{L^2}\longrightarrow_{t\rightarrow \infty}0.
\end{equation*}
In this case, it is said that the solution is asymptotically linear, or \emph{scatters} to $\psi_+$.\\
In the case where $\mathcal{N}$ is a general power-like non-linearity, we can cite the seminal work
of Strauss \cite{Strauss}. More precisely if $\mathcal{N}(a)=O_0(|a|^p)$,
global well-posedness for small data in $H^1$ is merely a consequence of Strichartz estimates provided $p$ is
larger than the so-called Strauss exponent
\begin{equation}\label{expSL}
p_S(d)=\frac{\sqrt{d^2+12d+4}+d+2}{2d}.
\end{equation}
For example scattering for quadratic nonlinearities (independently of their structure
$\psi^2, \overline{\psi}^2,|\psi|^2$...) can be obtained for $d\geq 4$; indeed $p_S(3)=2$.
The case $p\leq p_S$ is much harder and is discussed later.
\paragraph{Mixing energy estimates and dispersive estimates}
If $\mathcal{N}$ depends on derivatives of $\psi$, due to the loss of derivatives the situation is
quite different and it is important to take the structure of the system into account more precisely. In particular it is possible in some cases to exhibit energy estimates which, after a Gronwall lemma, often lead to the following situation:
\begin{equation*}
\forall\,N\in \mathbb{N},\
\|\phi(t)\|_{H^N}\leq \|\phi_0\|_{H^N}\text{exp}\bigg(C_N\int_0^t\|\phi(s)\|_{W^{k,\infty}}^{p-1}
ds\bigg),\ k\text{ ``small'' and independent of $N$}.
\end{equation*}
A natural idea consists in mixing energy estimates in the $H^N$ norm, $N$ ``large'', with dispersive
estimates: if one obtains
\begin{equation*}
\bigg\|\int_0^t e^{i(t-s)\Delta}\mathcal{N}ds\bigg\|_{W^{k,\infty}}\lesssim
\frac{\|\psi\|_{H^N\cap W^{k,\infty}}^p}{t^\alpha},\ \alpha(p-1)>1,
\end{equation*}
then setting $\|\psi\|_{X_T}=\sup_{[0,T]}\|\psi(t)\|_{H^N}+t^\alpha\|\psi(t)\|_{W^{k,\infty}}$
the energy estimate yields for small data
\begin{equation*}
\|\psi\|_{X_T}\lesssim \|\psi_0\|_{H^N}\text{exp}(C\|\psi\|_{X_T}^{p-1})+
\|\psi\|_{X_T}^p+\varepsilon,
\end{equation*}
so that $\|\psi\|_{X_T}$ must remain small uniformly in $T$.
This strategy seems to have
been initiated independently by Klainerman and Ponce \cite{KlaiPonce} and Shatah \cite{Shatah}.
If the energy estimate is true, this method works ``straightforwardly'' and gives global
well-posedness for small initial data (this is the approach from
section $4$) if
\begin{equation}\label{expQL}
p>\widetilde{p}(d)=\frac{\sqrt{2d+1}+d+1}{d}>p_S(d).
\end{equation}
Again, there is a critical dimension: $\widetilde{p}(4)=2$, thus any quadratic nonlinearity
can be handled with this method if $d\geq 5$.
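The two thresholds $(\ref{expSL})$ and $(\ref{expQL})$ are elementary to evaluate; a short Python sketch:
\begin{verbatim}
# Critical exponents: the Strauss exponent p_S(d) and the quasilinear
# threshold p~(d); d = 3 and d = 4 are the borderline dimensions for
# quadratic nonlinearities in the two settings.
from math import sqrt

def p_strauss(d):
    return (sqrt(d*d + 12*d + 4) + d + 2)/(2*d)

def p_tilde(d):
    return (sqrt(2*d + 1) + d + 1)/d

for d in (2, 3, 4, 5):
    print(d, round(p_strauss(d), 3), round(p_tilde(d), 3))
# p_S(3) = 2 and p~(4) = 2: a quadratic nonlinearity is supercritical for
# the semilinear argument only when d >= 4, and for the quasilinear one
# only when d >= 5.
\end{verbatim}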
\paragraph{Normal forms, space-time resonances} When $p\leq p_S$ (semi-linear case) or
$\widetilde{p}$ (quasi-linear case), the
strategies above cannot be directly applied, and one has to look more closely at the structure
of the nonlinearity. For the Schr\"odinger equation, one of the earliest result in this direction
was due to Cohn \cite{Cohn} who proved (extending Shatah's method of normal forms \cite{Shatah2})
the global well-posedness in dimension $2$ of
\begin{equation}\label{eqcohn}
\begin{array}{ll}
i\partial_t \psi+\Delta \psi=i\nabla \overline{\psi}\cdot \nabla\overline{\psi}.
\end{array}
\end{equation}
The by now standard strategy of proof was to use a normal form that transformed the quadratic
nonlinearity
into a cubic one, and since $3>\widetilde{p}(2)\simeq 2.6$ the new equation could be treated with the arguments from
\cite{KlaiPonce}.
In dimension $3$, similar results (with very different proofs, using the vector fields method
and time non-resonances) were then obtained for the
nonlinearities $\psi^2$ and $\overline{\psi}^2$ by Hayashi and Naumkin \cite{HaNau} (it is important to observe that the quadratic nonlinearity is critical in terms of the Strauss exponent for the semi-linear case when $d=3$). The existence of global solutions for the nonlinearity $|\psi|^2$
is however still open (indeed it corresponds to a nonlinearity for which the set of space-time resonances is not empty; we will give more explanations below on this phenomenon).\\
More recently, Germain-Masmoudi-Shatah \cite{GMS}\cite{GMS2}\cite{GMS2D} and
Gustafson-Nakanishi-Tsai \cite{GNT2}\cite{GNT3} shed a new light on such
issues with the concept of space-time resonances. To describe it, let us rewrite the Duhamel
formula for the profile of the solution $f=e^{-it\Delta}\psi$, in the case \eqref{eqcohn}:
\begin{equation}\label{duhaprofile}
f=\psi_0+\int_0^te^{-is\Delta}\mathcal{N}(e^{is\Delta }\psi)ds
\Leftrightarrow
\widehat{f}=\widehat{\psi_0}+\int_0^t\int_{\mathbb{R}^d}e^{is(|\xi|^2+|\eta|^2+|\xi-\eta|^2)}
\eta\cdot (\xi-\eta)\widehat{\overline{f}}(\eta)\widehat{\overline{f}}(\xi-\eta)d\eta ds
\end{equation}
In order to take advantage of the non cancellation of $\Omega(\xi,\eta)=
|\xi|^2+|\eta|^2+|\xi-\eta|^2$ one might
integrate by parts in time, and from the identity $\partial_tf=-ie^{-it\Delta}\mathcal{N}(\psi)$,
we see that this procedure effectively replaces the quadratic nonlinearity by a cubic one, i.e. acts
as a normal form.\\
On the other hand, if $\mathcal{N}(\psi)=\psi^2$ the phase becomes $\Omega(\xi,\eta)=
|\xi|^2-|\eta|^2-|\xi-\eta|^2$, which cancels on a large set, namely the ``time resonant set''
\begin{equation}\label{timeR}
\mathcal{T}=\{(\xi,\eta):\ \Omega(\xi,\eta)=0\}=\{\eta\perp\ \xi-\eta\}.
\end{equation}
The remedy is to use an integration by parts in the $\eta$ variable using
$e^{is\Omega}=\frac{\nabla_{\eta}\Omega}{is |\nabla_\eta\Omega|^2}\cdot\nabla_\eta(e^{is\Omega})$, it
does not improve the nonlinearity, however we can observe a gain of time decay in $1/s$. This
justifies defining the ``space resonant set'' as
\begin{equation}\label{spaceR}
\mathcal{S}=\{(\xi,\eta):\ \nabla_\eta\Omega(\xi,\eta)=0\}=\{\eta=\xi-\eta\},
\end{equation}
as well as the space-time resonant set
\begin{equation}\label{totalR}
\mathcal{R}=\mathcal{S}\cap \mathcal{T}=\{(\xi,\eta):\ \Omega(\xi,\eta)=0,\ \nabla_\eta\Omega
(\xi,\eta)=0\}.
\end{equation}
For $\mathcal{N}(\psi)=\psi^2$, we simply have $\mathcal{R}=\{\xi=\eta=0\}$; using the previous strategy
Germain et al \cite{GMS} obtained global well-posedness for the quadratic Schr\"odinger equation. \\
Finally, for $\mathcal{N}(\psi)=|\psi|^2$ similar computations lead to $\mathcal{R}=\{\xi=0\}$,
the ``large'' size of this set might explain why this nonlinearity is particularly difficult to
handle.
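These phase computations are easily reproduced symbolically; a minimal SymPy sketch in dimension $1$ (the expressions for the resonant sets are the same in any dimension):
\begin{verbatim}
# Space/time resonances for the two model quadratic interactions.
import sympy as sp

xi, eta = sp.symbols('xi eta', real=True)

for name, Om in [('psi^2       ', xi**2 - eta**2 - (xi - eta)**2),
                 ('conj(psi)^2 ', xi**2 + eta**2 + (xi - eta)**2)]:
    S = sp.solve(sp.diff(Om, eta), eta)          # space resonances
    print(name, '| Omega =', sp.factor(Om),
          '| S: eta =', S, '| Omega on S =', sp.simplify(Om.subs(eta, S[0])))
# psi^2      : Omega = -2 eta (eta - xi), S = {eta = xi/2},
#              Omega|_S = xi^2/2, hence R = S n T = {xi = eta = 0};
# conj(psi)^2: Omega|_S = 3 xi^2/2, which only vanishes at the origin.
\end{verbatim}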
\paragraph{Smooth and non smooth multipliers} The method of space-time resonances in the case
$(\nabla \overline{\psi})^2$ is particularly simple because after the time integration by parts, the
Fourier transform of the nonlinearity simply becomes
$$
\frac{\eta\cdot (\xi-\eta)}{|\xi|^2+|\eta|^2+|\xi-\eta|^2}\partial_s\widehat{\overline{\psi}}(\eta)
\widehat{\overline{\psi}}(\xi-\eta),
$$
where the multiplier $\frac{\eta\cdot (\xi-\eta)}{|\xi|^2+|\eta|^2+|\xi-\eta|^2}$ is of
Coifman-Meyer type, thus in terms of product laws it is just a cubic nonlinearity. We might naively
observe that this is due to the fact that
$\eta\cdot (\xi-\eta)$ cancels on the resonant set $\xi=\eta=0$. Thus one might wonder what
happens in the general case if the nonlinearity writes as a bilinear Fourier multiplier whose symbol
cancels on $\mathcal{R}$. In \cite{GMS2D}, the authors treated the nonlinear Schr\"odinger
equation for $d=2$ by assuming that the nonlinearity
is of type $B[\psi,\psi]$ or $B[\overline{\psi},\overline{\psi}]$, with $B$ a bilinear Fourier
multiplier whose symbol is linear in $(\xi,\eta)$ for $|(\xi,\eta)|\leq 1$ (and thus cancels on $\mathcal{R}$).
Concerning the Gross-Pitaevskii equation (\ref{GP}), the nonlinear terms include the worst one
$|\psi|^2$ but Gustafson et al \cite{GNT3}
managed to prove global existence and scattering in dimension $3$; one of the important ideas of
their proof was a change of unknown $\psi\mapsto Z$ (or normal form) that replaced the nonlinearity
$|\psi|^2$ by $\sqrt{-\Delta/(2-\Delta)}|Z|^2$ which compensates the resonances at $\xi=0$.
To some extent, this is also a strategy that we will follow here.\\
Finally, let us point out that the method of space-time resonances proved remarkably efficient for
the water wave equation \cite{GMS2} partially because the group velocity
$|\xi|^{-1/2}/2$ is large near $\xi=0$, while it might not be the
most suited for the Schr\"odinger equation whose group velocity $2\xi$ cancels at $\xi=0$. The
method of vector fields is an interesting
alternative, and this approach was later chosen by Germain et al in \cite{GMScap} to study the capillary water
waves (in this case the group velocity is $3|\xi|^{1/2}/2$). Nevertheless, in our case the term
$\widetilde{g}'(L_c)(L-L_c)$ in $(\ref{eqcanonique})$ induces a lack of symmetry which seems to limit the
effectiveness of this approach.
\vspace{2mm}
\paragraph{Plan of the article} In section $2$ we introduce the notations and state
our main results. Section $3$ is devoted to the reformulation of $(\ref{EK})$ as a non degenerate
Schr\"odinger equation, and we derive the energy estimates in ``high''Sobolev spaces. We use a
modified energy compared with \cite{Benzoni1} in order to avoid some time growth of the norms.
In section $4$ we prove our main result in dimension at least $5$. Section $5$ begins the analysis of
dimensions $3$ and $4$, which is the heart of the paper. We only detail the case $d=3$ since $d=4$ follows
the same ideas with simpler computations. We first introduce the functional settings, a normal form and check that it
defines an invertible change of variable in these settings, then we bound the high order terms (at least cubic).
In section $6$ we use the method of space-time resonances (similarly to \cite{GNT3}) to bound quadratic terms and close
the proof of global well-posedness in dimension $3$. The appendix provides some technical
multiplier estimates required for section $6$.
\section{Main results, tools and notations}
\label{results}
\paragraph{The results}As pointed out in the introduction, we need a condition on
the pressure.
\vspace{2mm}
\begin{assumption}\label{stabassump}
Throughout all the paper, we work near a constant state
$\rho=\rho_c>0,\ u=0$, with $g'(\rho_c)>0$.
\end{assumption}
\noindent In the case of the Euler equation, this standard condition implies
that the linearized system
\begin{equation*}
\left\{
\begin{array}{ll}
\partial_t\rho+\rho_c\text{div} u=0,\\
\partial_tu+g'(\rho_c)\nabla \rho=0.
\end{array}\right.
\end{equation*}
is hyperbolic, with eigenvalues (sound speed) $\pm\sqrt{\rho_c g'(\rho_c)}$.
\begin{theo}\label{theolarged}
Let $d\geq 5$, $\rho_c\in \mathbb{R}^{+*}$, $u_0=\nabla \phi_0$ be irrotational. For
$(n,k)\in \mathbb{N},\ k>2+d/4,\ 2n+1\geq k+2+d/2$, there exists $\delta>0,$ such that if
\begin{equation*}
\|u_0\|_{H^{2n}\cap W^{k-1,4/3}}+\|\rho_0-\rho_c\|_{H^{2n+1}\cap W^{k,4/3}}\leq \delta
\end{equation*}
then the unique solution of $(\ref{EK})$ is global with
$\|\rho-\rho_c\|_{L^\infty(\mathbb{R}^+\times \mathbb{R}^d)}\leq \frac{\rho_c}{2}$.
\end{theo}
\begin{theo}\label{theod4}
Let $d=3$ or $4$, $u_0=\nabla \phi_0$ irrotational, $k > 2+d/4$. There exists
$\delta>0$, $\varepsilon>0$, small enough, $n\in \mathbb{N}$ large enough, such that for
$\displaystyle \frac{1}{p}=\frac{1}{2}-\frac{1}{d}-\varepsilon$, if
\begin{equation*}
\|u_0\|_{H^{2n}}+\|\rho_0-\rho_c\|_{H^{2n+1}}+\|xu_0\|_{L^2}+\|x(\rho_{0}-\rho_c)\|_{L^2}
+\|u_0\|_{W^{k-1,p'}}+\|\rho_{0}-\rho_c\|_{W^{k,p'}}\leq \delta,
\end{equation*}
then the solution of $(\ref{EK})$ is global with
$\|\rho-\rho_c\|_{L^\infty(\mathbb{R}^+\times \mathbb{R}^d)}\leq\frac{\rho_c}{2}$.
\end{theo}
\begin{rmq}
While the proof requires working with the velocity potential, we only need assumptions on the
physical variables velocity and density.
\end{rmq}
\begin{rmq}
Actually we prove a stronger result: in the appropriate variables the solution scatters.
Let $\mathcal{L}$ be the primitive of $\sqrt{K/\rho}$ such that $\mathcal{L}(\rho_c)=1$,
$L=\mathcal{L}(\rho)$,
$\mathcal{H}=\sqrt{-\Delta(\widetilde{g}'(1)-\Delta)}$, $\mathcal{U}
=\sqrt{-\Delta/(\widetilde{g}'(1)-\Delta)}$ ,
$\displaystyle f=e^{-it\mathcal{H}}(\mathcal{U}\phi+iL)$, then there exists
$f_\infty$ such that
$$\forall\,s<2n+1,\ \|f(t)-f_\infty\|_{H^{s}\cap L^2/\langle x\rangle}
\rightarrow_{t\rightarrow \infty} 0.$$
The analogous result is true in dimension $\geq 5$ with $t^{-d/2+1}$ for the convergence rate
in $L^2$. See section \ref{secscatt} for a discussion in dimension $3$. It is also possible
to quantify how large $n$ should be (at least of order $20$, see remark \ref{precN}).
In both theorems, the size of $k$ and $n$ can be slightly
decreased by working in fractional Sobolev spaces, but since it would remain quite large we
chose to avoid these technicalities.
\end{rmq}
\paragraph{Some tools and notations} Most of our tools are standard analysis, except a singular
multiplier estimate.
\subparagraph{Functional spaces}
The usual Lebesgue spaces are $L^p$ with norm $\|\cdot\|_p$, the Lorentz spaces are $L^{p,q}$.
If $\mathbb{R}^+$ corresponds to the time variable, and for $B$ a Banach space, we write for short
$L^p(\mathbb{R}^+,B)=L^p_tB$, similarly $L^p([0,T],B)=L^p_TB$.\\
The Sobolev spaces
are $W^{k,p}=\{u\in L^p:\ \forall\,|\alpha|\leq k,\ D^\alpha u\in L^p\}$.
We also use homogeneous spaces $\dot{W}^{k,p}=\{u\in L^1_{loc}:\ \forall\,|\alpha|=k,\
D^\alpha u\in L^p\}$. We recall the Sobolev embedding
\begin{equation*}
\forall\,kp<d,\ \dot{W}^{k,p}(\mathbb{R}^d)\hookrightarrow L^{q,p}\hookrightarrow L^q,\ q=\frac{dp}{d-kp},
\ \forall\,kp>d,\ W^{k,p}(\mathbb{R}^d)\hookrightarrow L^\infty.
\end{equation*}
If $p=2$, as usual $W^{k,2}=H^k$, for which we have equivalent norm $\int_{\mathbb{R}^d}(1+|\xi|^2)^k|
\widehat{u}|^2d\xi$, we define in the usual way $H^s$ for $s\in \mathbb{R}$ and $\dot{H}^s$ for which
the embeddings remain true. The following dual estimate will be of particular use
\begin{equation*}
\forall\,d\geq 3,\ \|u\|_{\dot{H}^{-1}}\lesssim \|u\|_{L^{2d/(d+2)}}.
\end{equation*}
We will use the following Gagliardo-Nirenberg type inequality (see for example \cite{TaylorIII})
\begin{equation}\label{GN1}
\forall\, l\leq p\leq k-1\text{ integers},\
\|D^lu\|_{L^{2k/p}}\lesssim \|u\|_{L^{2k/(p-l)}}^{(k-p)/(k+l-p)}
\|D^{k+l-p}u\|_{L^2}^{l/(k+l-p)}.
\end{equation}
and its consequence
\begin{equation}\label{GN2}
\forall\,|\alpha|+|\beta|=k,\ \|D^\alpha fD^\beta g\|_{L^2}\lesssim \|f\|_\infty\|g\|_{\dot{H}^k}
+\|f\|_{\dot{H}^k}\|g\|_\infty.
\end{equation}
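As a sanity check, the exponents in $(\ref{GN1})$ are exactly those dictated by the scaling $u\mapsto u(\lambda\,\cdot)$; a short SymPy verification:
\begin{verbatim}
# Scaling consistency of (GN1): under u -> u(lambda .), ||D^j u||_{L^q}
# picks up lambda^{j - d/q}; both sides of (GN1) must scale identically.
import sympy as sp

d, k, p, l = sp.symbols('d k p ell', positive=True)
theta = l/(k + l - p)                  # exponent of the derivative factor
lhs = l - d*p/(2*k)                    # scaling of ||D^l u||_{L^{2k/p}}
rhs = (1 - theta)*(-d*(p - l)/(2*k)) + theta*(k + l - p - d/2)
print(sp.simplify(lhs - rhs))          # -> 0
\end{verbatim}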
Finally, we have the basic composition estimate (see \cite{BCD}): for $F$ smooth,
$F(0)=0$, $u\in L^\infty\cap W^{k,p}$ then\footnote{$k\in \mathbb{R}^+$ is allowed, but not needed.}
\begin{equation}\label{compo}
\|F(u)\|_{W^{k,p}}\lesssim C(k,\|u\|_\infty)\|u\|_{W^{k,p}}.
\end{equation}
\subparagraph{Non standard notations}
Since we will often estimate indistinctly $z$ or $\overline{z}$, we follow the notations
introduced in \cite{GNT3}: $z^+=z,\ z^-=\overline{z}$, and $z^\pm$ is a placeholder for $z$ or
$\overline{z}$. The Fourier transform of $z$ is as usual $\widehat{z}$, however we also need to
consider the profile $e^{-itH}z$, whose Fourier transform will be denoted
$\widetilde{z^\pm}:=e^{\mp itH}\widehat{z^\pm}$.\\
When there is no ambiguity, we write $W^{k,\frac{1}{p}}$ (or $L^{\frac{1}{p}}$) instead of $W^{k,p}$ (or
$L^{p}$) since it is convenient to use H\"older's inequality.
\subparagraph{Multiplier theorems}
We remind that the Riesz multiplier $\nabla/|\nabla|$ is bounded on $L^p$, $1<p<\infty$.
A bilinear Fourier multiplier is defined by its symbol $B(\eta,\xi)$, it acts on $(f,g)\in
\mathcal{S}(\mathbb{R}^d)$
\begin{equation*}
\widehat{B[f,g]}(\xi)=\int_{\mathbb{R}^d}B(\eta,\xi-\eta)\widehat{f}(\eta)\widehat{g}(\xi-\eta)d\eta.
\end{equation*}
\begin{theo}[Coifman-Meyer]
If $|\partial_\xi^\alpha\partial_\eta^\beta B(\xi,\eta)|\lesssim (|\xi|+|\eta|)^{-|\alpha|-|\beta|}$,
for sufficiently many $\alpha, \beta$ then for any $1<p,q\leq \infty$, $1/r=1/p+1/q$,
\begin{equation*}
\|B(f,g)\|_r\lesssim \|f\|_p\|g\|_q.
\end{equation*}
If moreover $\text{supp}(B(\eta,\xi-\eta))\subset \{|\eta|\gtrsim |\xi-\eta|\}$, $(p,q,r)$
are finite and $k\in \mathbb{N}$ then
\begin{equation*}
\|\nabla^kB(f,g)\|_r\lesssim \|\nabla^kf\|_p\|g\|_q.
\end{equation*}
\end{theo}
\noindent
Mixing this result with the Sobolev embedding, we get for $2<p\leq \infty,\
\frac{1}{p}+\frac{1}{q}=\frac{1}{2}$
\begin{equation}\label{GN3}
\|fg\|_{H^s}\lesssim \|f\|_{L^p}\|g\|_{H^{s,q}}+\|g\|_{L^p}\|f\|_{H^{s,q}}\lesssim
\|f\|_{L^p}\|g\|_{H^{s+d/p}}+\|g\|_{L^p}\|f\|_{H^{s+d/p}}.
\end{equation}
Due to the limited regularity of our multipliers, we will need a multiplier theorem with loss
from \cite{GuoPaus} (and inspired by corollary $10.3$ from \cite{GNT3}). Let us first describe the
norm on symbols: for $\chi_j$ a smooth dyadic partition of the space,
$\text{supp}(\chi_j)\subset \{2^{j-2}\leq |x|\leq 2^{j+2}\}$
\begin{equation*}
\|B(\eta,\xi-\eta)\|_{\tilde{L}^\infty_\xi\dot{B}^s_{2,1,\eta}}
=\|2^{js}\chi_j(\nabla)_\eta B(\eta,\xi-\eta)\|_{l^1(\mathbb{Z},L^\infty_\xi L^2_\eta)}
\end{equation*}
The norm $\|B(\xi-\zeta,\zeta)\|_{\tilde{L}^\infty_\xi\dot{B}^s_{2,1,\zeta}}$ is defined similarly.
In practice, we rather estimate $\|B\|_{L^\infty_\xi\dot{H}^s}$ and use the interpolation estimate
(see \cite{GNT3})
\begin{equation*}
\|B\|_{\tilde{L}^\infty_\xi\dot{B}^s_{2,1,\eta}}\lesssim \|B\|_{L^\infty_\xi\dot{H}^{s_1}}^\theta
\|B\|_{L^\infty_\xi\dot{H}^{s_2}}^{1-\theta},\ s=\theta s_1+(1-\theta)s_2.
\end{equation*}
We set $\|B\|_{[B^s]}=\min\big(\|B(\eta,\xi-\eta)\|_{\tilde{L}^\infty_\xi\dot{B}^s_{2,1,\eta}},
\ \|B(\xi-\zeta,\zeta)\|_{\tilde{L}^\infty_\xi\dot{B}^s_{2,1,\zeta}}\big)$.
The rough multiplier theorem is the following:
\begin{theo}[\cite{GuoPaus}]\label{singmult}
Let $0\leq s\leq d/2$, $q_1,q_2$ such that $\displaystyle \frac{1}{q_2}+\frac{1}{2}=
\frac{1}{q_1}+\bigg(\frac{1}{2}-\frac{s}{d}\bigg)$ \footnote{We write the relation between
$(q_1,q_2)$ in a rather odd way in order to emphasize the similarity with the standard
H\"older's inequality.}, and $\displaystyle 2\leq q_1',q_2\leq \frac{2d}{d-2s}$, then
\begin{equation*}
\|B(f,g)\|_{L^{q_1}}\lesssim \|B\|_{[B^s]}\|f\|_{L^{q_2}}\|g\|_{L^2}.
\end{equation*}
Furthermore for $\displaystyle \frac{1}{q_2}+\frac{1}{q_3}=
\frac{1}{q_1}+\bigg(\frac{1}{2}-\frac{s}{d}\bigg)$, $2\leq q_i\leq \frac{2d}{d-2s}$ with $i=2,3$,
\begin{equation*}
\|B(f,g)\|_{L^{q_1}}\lesssim \|B\|_{[B^s]}\|f\|_{L^{q_2}}\|g\|_{L^{q_3}}.
\end{equation*}
\end{theo}
\subparagraph{Dispersion for the group $e^{-itH}$}
According to $(\ref{eqcanonique})$, the linear part of the equation reads
$\partial_tz-i\mathcal{H}z=0$, with $\mathcal{H}=\sqrt{-\Delta(\widetilde{g}'(L_c)-\Delta)}$
(see also section $4$). We will use a change of variable to reduce it to
$\widetilde{g}'(L_c)=2$, set $H=\sqrt{-\Delta(2-\Delta)}$, and use the dispersive estimate
from \cite{GNT1}, the version in Lorentz spaces follows from real interpolation as pointed out
in \cite{GNT3}.
\begin{theo}[\cite{GNT1}\cite{GNT3}]\label{dispersion}
For $2\leq p\leq \infty$, $s\in \mathbb{R}$, $U=\sqrt{-\Delta/(2-\Delta)}$, we have
\begin{equation*}
\|e^{itH}\varphi\|_{\dot{B}^s_{p,2}}\lesssim \frac{\|U^{(d-2)(1/2-1/p)}\varphi\|
_{\dot{B}^s_{p',2}}}{t^{d(1/2-1/p)}},
\end{equation*}
and for $2\leq p<\infty$
\begin{equation*}
\|e^{itH}\varphi\|_{L^{p,2}}\lesssim \frac{\|U^{(d-2)(1/2-1/p)}\varphi\|
_{L^{p',2}}}{t^{d(1/2-1/p)}}
\end{equation*}
\end{theo}
\begin{rmq}
The slight low frequency gain $U^{(d-2)(1/2-1/p)}$ is due to the fact that
$H(\xi)=|\xi|\sqrt{2+|\xi|^2}$ behaves like $|\xi|$ at low frequencies, which has a strong
angular curvature and no radial curvature.
\end{rmq}
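This radial degeneracy statement can be checked in two lines; a SymPy sketch with $r=|\xi|$:
\begin{verbatim}
# Group velocity of H(xi) = |xi| sqrt(2 + |xi|^2): nonzero limit sqrt(2)
# at xi = 0 (transport-like) but vanishing radial curvature there.
import sympy as sp

r = sp.symbols('r', positive=True)
H = r*sp.sqrt(2 + r**2)
print(sp.limit(sp.diff(H, r), r, 0))      # -> sqrt(2)
print(sp.limit(sp.diff(H, r, 2), r, 0))   # -> 0
\end{verbatim}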
\begin{rmq}
Combining the dispersion estimate and the celebrated $TT^*$ argument, Strichartz estimates follow
$$
\|e^{itH}\varphi\|_{L^pL^q}\lesssim \|U^{\frac{d-2}{2}(1/2-1/p)}\varphi\|_{L^2},\ \frac{2}{p}
+\frac{d}{q}=\frac{d}{2},\ 2\leq p\leq \infty,
$$
however the dispersion estimates are sufficient for our purpose.
\end{rmq}
\section{Reformulation of the equations and energy estimate}\label{secenergie}
As observed in \cite{Benzoni1}, setting $w=\sqrt{K/\rho}\nabla \rho$, $\mathcal{L}$ the primitive of
$\sqrt{K/\rho}$ such that $\mathcal{L}(\rho_c)=1$,
$L=\mathcal{L}(\rho)$, $z=u+iw$ the Euler-Korteweg system rewrites
\begin{eqnarray*}
\partial_tL+u\cdot\nabla L+a(L)\text{div}u&=&0,\\
\partial_tu+u\cdot \nabla u-w\cdot \nabla w -\nabla (a(L)\text{div}w)&=&-\widetilde{g}'(L)w,\\
\partial_tw+\nabla(u\cdot w)+\nabla (a(L)\text{div}u)&=&0,\\
\end{eqnarray*}
where the third equation is just the gradient of the first.
Setting $l=L-1$, in the potential case $u=\nabla\phi$, the system on $\phi,l$ then reads
\begin{equation}\label{EKpot}
\left\{
\begin{array}{lll}
\displaystyle \partial_t\phi+\frac{1}{2}\big(|\nabla \phi|^2-|\nabla l|^2\big)-a(1+l)\Delta l=
-\widetilde{g}(1+l),\\
\displaystyle \partial_tl+\nabla \phi\cdot \nabla l+a(1+l)\Delta \phi=0,
\end{array}\right.
\end{equation}
with $\widetilde{g}(1)=0$ since we look for integrable functions. As a consequence of the stability
condition $(\ref{stabassump})$, up to a change of variables we can and will assume through the
rest of the paper that
\begin{equation}\label{valeurg'}
\widetilde{g}'(1)=2.
\end{equation}
The number $2$ has no significance except that this choice gives the same linear part as for
the Gross-Pitaevskii equation linearized near the constant state $1$.
\begin{prop}\label{energy}
Under the following assumptions
\begin{itemize}
\item $(\nabla \phi_0,l_0)\in H^{2n}\times H^{2n+1}$
\item Normalized $(\ref{stabassump})$: $\widetilde{g}'(1)=2$
\item $L(x,t)=1+l(x,t)\geq m>0$ for $(x,t)\in \mathbb{R}^d\times [0,T]$,
\end{itemize}
then for $n>d/4+1/2$, there exists a continuous function $C$ such that the solution of $(\ref{EKpot})$ satisfies
the following estimate
\begin{equation*}
\begin{aligned}
&\|\nabla \phi\|_{H^{2n}}+\|l\|_{H^{2n+1}}\\
&\leq
\big(\|\nabla \phi_0\|_{H^{2n}}+\|l_0\|_{H^{2n+1}}\big)
\rm{exp}\bigg(\int_0^t C(\|l\|_{L^\infty},\|\frac{1}{l+1}\|_{L^\infty},\|z\|_{L^\infty})\\
&\hspace{75mm}\times(\|\nabla\phi(s)\|_{W^{1,\infty}} +\|l(s)\|_{W^{2,\infty}})ds\bigg),
\end{aligned}
\end{equation*}
where $z(s)=\nabla\phi(s)+i\nabla l(s)$.
\end{prop}
This is almost the same estimate as in \cite{Benzoni1} but for an essential point: in the integrand
of the right hand side there is no constant added to $\|\nabla \phi(s)\|_{W^{1,\infty}}
+\|l(s)\|_{W^{2,\infty}}$. The price to pay is that we cannot control $\phi$ itself but only its gradient (this is natural since the difficulty is related to the low frequencies).
Before going into the details of the computations, let us illustrate the idea behind it on a very
simple example. We consider the linearized system
\begin{eqnarray}
\label{lin1}
\partial_t\phi-\Delta l+2l=0,\\
\label{lin2}
\partial_tl+\Delta \phi=0.
\end{eqnarray}
Multiplying $(\ref{lin1})$ by $\phi$, $(\ref{lin2})$ by $l$, integrating and using Young's
inequality leads to the ``bad'' estimate
\begin{equation*}
\frac{d}{dt}\big(\|\phi\|_{L^2}^2+\|l\|_{L^2}^2\big)\lesssim 2(\|\phi\|_{L^2}^2+\|l\|_{L^2}^2),
\end{equation*}
on the other hand if we multiply $(\ref{lin1})$ by $-\Delta\phi$,
$(\ref{lin2})$ by $(-\Delta+2)l$ we get
\begin{equation*}
\frac{d}{dt}\int_{\mathbb{R}^d}(\frac{|\nabla l|^2+|\nabla\phi|^2}{2}+l^2)dx=0,
\end{equation*}
the proof that follows simply mixes this observation with the gauge method from \cite{Benzoni1}.
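Mode by mode in Fourier the toy computation reduces to an ODE identity, which can be verified symbolically; a minimal SymPy sketch with $k=|\xi|$:
\begin{verbatim}
# Per Fourier mode: phi' = -(k^2 + 2) l, l' = k^2 phi.  The modified
# quadratic form is conserved, while the naive L^2 form is not.
import sympy as sp

t, k = sp.symbols('t k', positive=True)
phi = sp.Function('phi')(t)
l = sp.Function('l')(t)
sub = {sp.Derivative(phi, t): -(k**2 + 2)*l, sp.Derivative(l, t): k**2*phi}

E_good = k**2*(phi**2 + l**2)/2 + l**2
E_bad = (phi**2 + l**2)/2
print(sp.simplify(sp.diff(E_good, t).subs(sub)))   # -> 0
print(sp.simplify(sp.diff(E_bad, t).subs(sub)))    # -> -2*l*phi
\end{verbatim}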
\begin{proof}
Let us start with the equation on $z=\nabla \phi+i\nabla l=u+iw$, we remind that
$\widetilde{g}'(1)=2$, so that we write it
\begin{equation}\label{schroddeg}
\partial_tz+z\cdot \nabla z+i\nabla (a \text{div}z)=-2w+(2-\widetilde{g}'(1+l))w.
\end{equation}
We shortly recall the method from \cite{Benzoni1} that we will slightly simplify since we do not
need to work in fractional Sobolev spaces. Due to the quasi-linear nature of the system
(and in particular the bad ``non transport term'' $iw\cdot \nabla z$), it is not possible to directly estimate $\|z\|_{H^{2n}}$ by energy estimates; instead
one uses a gauge function $\varphi_n(\rho)$ and controls $\|\varphi_n\Delta^nz\|_{L^2}$. When we apply
$\varphi_n\Delta^n$, with $\varphi_n$ real, to $(\ref{schroddeg})$, a number of
commutators appear:
\begin{equation}\label{C1}
\varphi_n\Delta^n\partial_tz=\partial_t(\varphi_n\Delta^nz)-(\partial_t\varphi_n)\Delta^nz
=\partial_t(\varphi_n\Delta^nz)+C_1
\end{equation}
\begin{equation}\label{C2}
\varphi_n\Delta^n(u\cdot \nabla z)=u\cdot \nabla (\varphi_n\Delta^nz)+[\varphi_n\Delta^n,u\cdot\nabla]
z:=u\cdot \nabla(\varphi_n\Delta^n z)+C_2
\end{equation}
\begin{equation}\label{C3}
i\varphi_n\Delta^n(w\cdot \nabla z)=iw\cdot \nabla (\varphi_n\Delta^nz)+
i[\varphi_n\Delta^n,w\cdot\nabla]z:=iw\cdot \nabla (\varphi_n\Delta^nz)+C_3,
\end{equation}
The term $\nabla (a\text{div}z)$ requires a bit more computations:
\begin{eqnarray*}
i\varphi_n\Delta^n\nabla (a\text{div}z)=i\nabla (\varphi_n\Delta^n (a \text{div}z))-i(\nabla \varphi_n)
\Delta^n(a\text{div}z),
\end{eqnarray*}
then using recursively $\Delta (fg)=2\nabla f\cdot\nabla g+f\Delta g+(\Delta f)g$ we get
\begin{equation*}
\Delta^n (a\text{div}z)=a\text{div}\Delta^n z+2n(\nabla a)\cdot \Delta^{n}z+C,
\end{equation*}
where $C$ contains derivatives of $z$ of order at most $2n-1$, so that
\begin{eqnarray}
\nonumber
i\varphi_n\Delta^n\nabla (a\text{div}z)&=&i\nabla \bigg(\varphi_n\big(a\text{div}\Delta^n z
+2n(\nabla a)\cdot \Delta^{n}z\big)\bigg)-i\nabla \varphi_n a\text{div} \Delta^n z+i\nabla (\varphi_n C)\\
\nonumber &=&i\nabla \big(a\text{div}(\varphi_n\Delta^nz)\big)
+2in\nabla a\cdot \varphi_n\nabla \Delta^nz-ia(\nabla+I_d\text{div}) \Delta^nz\cdot \nabla
\varphi_n\\
\label{C4}&&+C_4,
\end{eqnarray}
where $C_4$ contains derivatives of $z$ of order at most $2n$ and by notation $I_d\text{div}\Delta^nz\cdot \nabla
\varphi_n={\rm div}\Delta^n z\,\nabla\varphi _n$. Finally, we define
$C_5=-\varphi_n\Delta^n\big((2-\widetilde{g}'(1+l))w\big)$. The equation on
$\varphi_n\Delta^nz$ thus reads
\begin{eqnarray}
\partial_t(\varphi_n\Delta^nz)+u\cdot \nabla (\varphi_n\Delta^nz)+i\nabla \big(a\text{div}(\varphi_n\Delta^nz)
\big)+iw\cdot \nabla (\varphi_n\Delta^nz)+ 2\varphi_n\Delta^nw=\\
-\sum_1^5C_k-2in\varphi_n\nabla \Delta^nz\cdot \nabla a+ia(\nabla+I_d\text{div}) \Delta^nz\cdot \nabla
\varphi_n
\end{eqnarray}
Taking the scalar product with $\varphi_n\Delta^nz$, integrating and taking the real part gives
for the first three terms
\begin{equation}
\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^d}|\varphi_n\Delta^nz|^2dx-\frac{1}{2}\int_{\mathbb{R}^d} \text{div}u
|\varphi_n\Delta^nz|^2dx.
\end{equation}
And we are left to control the remainder terms from $(\ref{C3},\, \ref{C4})$. Using
$w=\frac{a}{\rho} \nabla \rho$, $\varphi_n=\varphi_n(\rho)$, we rewrite
\begin{eqnarray*}
i\varphi_nw\cdot \nabla (\Delta^nz)+2ni\varphi_n\nabla (\Delta^nz)\cdot \nabla a-ia \nabla
(\Delta^nz)\cdot \nabla \varphi_n-ia\nabla \varphi_n \,\text{div}\Delta^nz
\\
=i\varphi_n\bigg(w\cdot \nabla -\frac{a\nabla \varphi_n}{\varphi_n}\cdot \nabla
-\frac{a\nabla \varphi_n}{\varphi_n}\text{div}+2n\nabla a\cdot\nabla\bigg)\Delta^nz
\end{eqnarray*}
\begin{equation}\label{reste}
=i\varphi_n\bigg[\bigg(\frac{a}{\rho}-a\frac{\varphi_n'}{\varphi_n}\bigg)\nabla \rho\cdot\nabla-
\frac{a\varphi_n'}{\varphi_n}\nabla \rho\,\text{div}+2na'\nabla \rho\cdot \nabla\bigg]
\Delta^nz
\end{equation}
If the $\text{div}$ operator were a gradient, the most natural choice for $\varphi_n$ would be
to take
\begin{equation*}
\frac{a}{\rho}-\frac{2a\varphi_n'}{\varphi_n}+2na'=0\Leftrightarrow \frac{\varphi_n'}{\varphi_n}
=\frac{1}{2\rho}+\frac{na'}{a}\Leftarrow \varphi_n(\rho)=a^n(\rho)\sqrt{\rho}.
\end{equation*}
For this choice the remainder $(\ref{reste})$ rewrites
\begin{equation*}
\bigg[\bigg(\frac{a}{\rho}-a\frac{\varphi_n'}{\varphi_n}\bigg)\nabla \rho\cdot\nabla-
\frac{a\varphi_n'}{\varphi_n}\nabla \rho\text{div}+2na'\nabla \rho\cdot \nabla\bigg]\Delta^nz
=\bigg(\frac{a}{2\rho}+na'\bigg)\nabla \rho\cdot (\nabla-I_d\text{div})\Delta^nz.
\end{equation*}
Using the fact that $\varphi_n(a/(2\rho)+na')(\rho)\nabla \rho$ is a real valued gradient, and setting
$z_n=\Delta^nz$, we see that the contribution of $(\ref{reste})$ in the energy estimate is
actually $0$ from the following identity (with the Hessian $\text{Hess}H$):
\begin{eqnarray*}
\text{Im}\int_{\mathbb{R}^d} \overline{z_n}\cdot (\nabla-I_d\text{div})z_n\cdot \nabla H(\rho)dx
&=&\text{Im}\int_{\mathbb{R}^d}\overline{z_{i,n}}\partial_jz_{i,n}\partial_j H-\overline{z_{i,n}}\partial_jz_{j,n}\partial_iH
\\
&=&
\text{Im}\int_{\mathbb{R}^d} \overline{z_n}\text{Hess}Hz_n-\Delta H|z_n|^2 \\
&&\hspace{2cm}-\partial_jH z_{i,n}(\overline{\partial_jz_{i,n}}
-\overline{\partial_iz_{j,n}})dx\\
&=&0.
\end{eqnarray*}
We have used the fact that $z$ is irrotational.
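The algebra behind the gauge choice $\varphi_n=a^n\sqrt{\rho}$ can also be verified symbolically; a short SymPy sketch:
\begin{verbatim}
# Check of the gauge phi_n = a^n sqrt(rho): phi_n'/phi_n = 1/(2 rho)+n a'/a,
# and the gradient/div coefficients of (reste) collapse to a/(2 rho) + n a'.
import sympy as sp

rho, n = sp.symbols('rho n', positive=True)
a = sp.Function('a')(rho)
phin = a**n*sp.sqrt(rho)
ratio = sp.simplify(sp.diff(phin, rho)/phin)         # phi_n'/phi_n
target = a/(2*rho) + n*sp.diff(a, rho)

print(sp.simplify(a/rho - a*ratio + 2*n*sp.diff(a, rho) - target))  # -> 0
print(sp.simplify(a*ratio - target))                                # -> 0
\end{verbatim}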
Finally, we have obtained
\begin{equation}\label{avantdernier}
\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^d} |\varphi_n\Delta^nz|^2dx-\frac{1}{2}\int_{\mathbb{R}^d}(\text{div}u)
|\varphi_n\Delta^nz|^2
=-\text{Re}\int_{\mathbb{R}^d}\sum_1^5C_k\varphi_n\Delta^n\overline{z}\,dx-2\int_{\mathbb{R}^d}\varphi_n^2 \Delta^nw\cdot\Delta^nu\,dx.
\end{equation}
Note that the terms $C_k\varphi_n\Delta^nz$ are cubic while $\varphi_n\Delta^nw\Delta^nu$ is
only quadratic, thus we will simply bound the former while we will need to cancel the latter.
\paragraph{Control of the $C_k$:}
From their definition, it is easily seen that the $(C_k)_{2\leq k\leq 4}$ only contain terms of
the kind $\partial^\alpha f\partial^\beta g$ with $f,g=u$ or $w$, $|\alpha|+|\beta|\leq 2n$, thus
\begin{equation*}
\forall\, 2\leq k\leq 4,\ \bigg|\int C_k\varphi_n\Delta^nzdx\bigg|\lesssim
\sum_{|\alpha|+|\beta|=2n,\ f,g=u\text{ or }w}
\|\partial^\alpha f\partial^\beta g\|_{L^2}\|z\|_{H^{2n}}
\end{equation*}
When $|\alpha|=0,\ |\beta|=2n$, we have obviously
$\|f\partial^\beta g\|_{L^2}\lesssim \|f\|_\infty\|g\|_{H^{2n}}$, while the general case
$\|\partial^\alpha f\partial^\beta g\|_2\lesssim \|f\|_\infty\|g\|_{H^{2n}}
+\|g\|_\infty\|f\|_{H^{2n}}$
is the Gagliardo-Nirenberg interpolation inequality $(\ref{GN2})$. We deduce
\begin{equation*}
\forall\, 2\leq k\leq 4,\ \bigg|\int C_k\varphi_n\Delta^nzdx\bigg|\lesssim
\|z\|_\infty\|z\|_{H^{2n}}^2.
\end{equation*}
Let us now deal with $C_1=-\partial_t\varphi_n\Delta^n z$: since $\partial_t\varphi_n=-\varphi_n'\,\text{div}(\rho u)$, we have
\begin{equation*}
\bigg|\int_{\mathbb{R}^d}C_1\varphi_n\Delta^n\overline{z}dx\bigg|\lesssim F(\|l\|_{L^\infty},\|\frac{1}{l+1}\|_{L^\infty}) (\|u\|_{W^{1,\infty}}+\|z\|_{L^\infty}^2)
\|z\|_{H^{2n}}^2
\end{equation*}
with $F$ a continuous function.\\
We now estimate the contribution of $C_5=-\varphi_n\Delta^n\big((2-\widetilde{g}'(1+l))w\big)$: since $\widetilde{g}'(1)=2$, from the composition rule
$(\ref{compo})$ we have $\|\widetilde{g}'(1+l)-2\|_{H^{2n}}\lesssim F_1(\|l\|_{L^\infty},\|\frac{1}{l+1}\|_{L^\infty}) \|l\|_{H^{2n}}$ with $F_1$ a continuous function with $F_1(0,\cdot)=0$
so that
\begin{equation*}
\begin{aligned}
&\bigg|\int_{\mathbb{R}^d}C_5\varphi_n\Delta^n \overline{z} dx\bigg|\lesssim \|(2-\widetilde{g}')w\|_{H^{2n}}
\|z\|_{H^{2n}}
\lesssim (\|(2-\widetilde{g}'(1+l))\|_{L^\infty}\|z\|_{H^{2n}}\\
&\hspace{5cm}+ F_1(\|l\|_{L^\infty},\|\frac{1}{l+1}\|_{L^\infty})\|l\|_{H^{2n}}\|z\|_\infty)\|z\|_{H^{2n}}.
\end{aligned}
\end{equation*}
To summarize, for any $1\leq k\leq 5$, we have
\begin{equation}\label{estimCk}
\bigg|\int_{\mathbb{R}^d}C_k\varphi_n\Delta^n z dx\bigg|
\lesssim F_2(\|l\|_{L^\infty},\|\frac{1}{l+1}\|_{L^\infty})(\|l\|_\infty+\|z\|_{W^{1,\infty}}+\|z\|_{L^\infty}^2)(\|l\|_{H^{2n}}^2+\|z\|_{H^{2n}}^2),
\end{equation}
with $F_2$ a continuous function.
\paragraph{Cancellation of the quadratic term}
We start with the equation on $l$, to which we apply $\varphi_n\Delta^n$; we then multiply by
$\varphi_n(\Delta^nl)/a$ and
integrate in space:
\begin{eqnarray*}
\int_{\mathbb{R}^d}\frac{\varphi_n^2}{a}\Delta^nl\partial_t\Delta^nl+\frac{\varphi_n^2}{a}(\Delta^nl)
\Delta^n(\nabla \phi\cdot \nabla l)+\varphi_n^2\Delta^nl\frac{\Delta^n(a\Delta \phi)}{a}=0.
\end{eqnarray*}
Commuting $\Delta^n$ and $a$, and using an integration by parts, this rewrites as
\begin{equation*}
\begin{aligned}
0=&\ \frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^d}\frac{\varphi_n^2}{a}(\Delta^nl)^2dx-\int_{\mathbb{R}^d}\frac{d}{dt}\Big(\frac{\varphi_n^2}{2a}\Big)|\Delta^nl|^2 dx+\int_{\mathbb{R}^d}\frac{\varphi_n^2}{a}(\Delta^nl)
\Delta^n(\nabla \phi\cdot \nabla l)dx\\
&\hspace{2cm}+\int_{\mathbb{R}^d}\varphi_n^2\Delta^n l\,\Delta\Delta^n \phi\, dx+\int_{\mathbb{R}^d}\frac{\varphi_n^2}{a}\Delta^nl\,[\Delta^n,a]
\Delta \phi\, dx\\[2mm]
=&\ \frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^d}\frac{\varphi_n^2}{a}(\Delta^nl)^2dx-\int_{\mathbb{R}^d}\frac{d}{dt}\Big(\frac{\varphi_n^2}{2a}\Big)|\Delta^nl|^2 dx+\int_{\mathbb{R}^d}\frac{\varphi_n^2}{a}(\Delta^nl)
\Delta^n(\nabla \phi\cdot \nabla l)dx\\
&-\int_{\mathbb{R}^d}\varphi_n^2\,\nabla\Delta^n l\cdot\nabla\Delta^n \phi \,dx- \int_{\mathbb{R}^d}\Delta^n l\, \nabla \varphi_n^2\cdot\nabla\Delta^n \phi \,dx+\int_{\mathbb{R}^d}\frac{\varphi_n^2}{a}\Delta^nl\,[\Delta^n,a]
\Delta \phi\, dx.
\end{aligned}
\end{equation*}
We remark that the remaining integrands only depend on $l,\nabla\phi$ and their
derivatives; therefore, using the same commutator arguments as previously, we get the bound
\begin{equation}
\label{suprquad}
\begin{aligned}
&\frac{1}{2}\frac{d}{dt}\int_{\mathbb{R}^d}\frac{\varphi_n^2}{a}(\Delta^nl)^2dx
-\int_{\mathbb{R}^d}\varphi_n^2(\Delta^n\nabla \phi)\cdot\Delta^n\nabla l\,dx\\
&\hspace{2cm}\lesssim F_3(\|l\|_{L^\infty},\|\frac{1}{l+1}\|_{L^\infty})(\|l\|_\infty+\|z\|_{W^{1,\infty}}+\|z\|_{L^\infty}^2)(\|l\|_{H^{2n}}^2+\|z\|_{H^{2n}}^2),
\end{aligned}
\end{equation}
with $F_3$ a continuous function.
Now if we add $(\ref{avantdernier})$ to $2\times(\ref{suprquad})$ and use the estimates on $(C_k)$
we obtain
\begin{equation*}
\begin{aligned}
&\frac{1}{2}\frac{d}{dt}\Big(\|\varphi_n\Delta^nz\|_{L^2}^2+\|\Delta^nl\|_{L^2}^2\Big)\\
&\hspace{2cm}\lesssim F_4(\|l\|_{L^\infty},\|\frac{1}{l+1}\|_{L^\infty})(\|l\|_\infty+\|z\|_{W^{1,\infty}}+\|z\|_{L^\infty}^2)(\|l\|_{H^{2n}}^2+\|z\|_{H^{2n}}^2),
\end{aligned}
\end{equation*}
with $F_4$ a continuous function. The conclusion then follows from Gronwall's lemma.
\end{proof}
\section{Global well-posedness in dimension larger than 4}\label{sectiondlarge}
We first make a further reduction of the equations, which will also be used for the cases $d=3,4$:
namely, we rewrite the system as a linear Schr\"odinger equation with some remainder. In addition to
$\widetilde{g}'(1)=2$, we can also assume $a(1)=1$, so that $(\ref{EKpot})$
rewrites as\footnote{Dropping the assumption $a(1)=1$ would only add some constants in front of the
nonlinear terms; we neglect this, as it will be clear in the proof that multiplicative constants do
not matter.}
\begin{equation}
\left\{
\begin{array}{lll}
\displaystyle \partial_t\phi-\Delta l+2l=(a(1+l)-1)\Delta l
-\frac{1}{2}\big(|\nabla \phi|^2-|\nabla l|^2\big)
+(2l-\widetilde{g}(1+l)),\\
\displaystyle \partial_tl+\Delta \phi=-\nabla \phi\cdot \nabla l+(1-a(1+l))\Delta \phi.
\end{array}\right.
\label{systedepart}
\end{equation}
The linear part precisely corresponds to the linear part of the Gross-Pitaevskii equation.
In order to diagonalize it, following \cite{GNT1} we set
\begin{equation*}
U=\sqrt{\frac{-\Delta}{2-\Delta}},\ H=\sqrt{-\Delta(2-\Delta)},\ \phi_1=U\phi,\ l_1=l.
\end{equation*}
The equation writes in the new variables
\begin{equation}\label{ekdiag}
\left\{
\begin{array}{lll}
\displaystyle \partial_t\phi_1+Hl_1=U\bigg((a(1+l_1)-1)\Delta l_1
-\frac{1}{2}\big(|\nabla U^{-1} \phi_1|^2-|\nabla l_1|^2\big)
+(2l_1-\widetilde{g}(1+l_1))\bigg),\\
\displaystyle \partial_tl_1-H \phi_1=-\nabla U^{-1}\phi_1\cdot \nabla l_1-(1-a(1+l_1))H U^{-1}\phi_1.
\end{array}\right.
\end{equation}
More precisely, if we set $\psi=\phi_1+il_1,\ \psi_0=(U\phi+il)|_{t=0}$, the Duhamel formula
gives
\begin{eqnarray}\label{duhamel}
\psi(t)&=&e^{itH}\psi_0+\int_0^te^{i(t-s)H}\mathcal{N}(\psi(s))ds,\\
\nonumber
\text{with }\mathcal{N}(\psi)&=&U\big((a(1+l_1)-1)\Delta l_1-\frac{1}{2}\big(|\nabla U^{-1}
\phi_1|^2-|\nabla l_1|^2\big)+(2l_1-\widetilde{g}(1+l_1))\big)\\
&&+i\big(-\nabla U^{-1}\phi_1\cdot \nabla l_1-\big(1-a(1+l_1)\big)H\phi
\big).
\end{eqnarray}
We underline that for low frequencies the situation is more favorable than for
the Gross-Pitaevskii equation, as all the terms where $U^{-1}$ appears already contain derivatives
that compensate this singular multiplier. Note however that the Gross-Pitaevskii equations are
formally equivalent to this system via the Madelung transform in the special case $K(\rho)=\kappa
/\rho$, so our computations are a new way of seeing that these singularities can be removed in
appropriate variables.
Let us now state the key estimate:
\begin{prop}\label{decay}
Let $d\geq 5$, $T>0$, $k\geq 2$, $N\geq k+2+d/2$, and set
$\displaystyle \|\psi\|_{X_T}=\|\psi\|_{L^\infty([0,T], H^N)}+
\sup_{t\in [0,T]}(1+t)^{d/4}\|\psi(t)\|_{W^{k,4}}$.
Then the solution of $(\ref{duhamel})$ satisfies
\begin{equation*}
\forall\,t\in [0,T],\ \|\psi(t)\|_{W^{k,4}}\lesssim \frac{\|\psi_0\|_{W^{k,4/3}}+\|\psi_0\|_{H^N}
+G(\|\psi\|_{X_t},\|\frac{1}{1+l_1}\|_{L^\infty_t(L^\infty)})\|\psi\|_{X_T}^2}{(1+t)^{d/4}},
\end{equation*}
with $G$ a continuous function.
\end{prop}
\begin{proof}
We start with $(\ref{duhamel})$. From the dispersion estimate $(\ref{dispersion})$ and the Sobolev
embedding, we have for any $t\geq 0$
\begin{equation*}
(1+t)^{d/4}\|e^{itH}\psi_0\|_{W^{k,4}}\lesssim
(1+t)^{d/4}\min\bigg(\frac{\|U^{(d-2)/4}\psi_0\|_{W^{k,4/3}}}{t^{d/4}},\,
\|\psi_0\|_{H^N}\bigg)
\lesssim \|\psi_0\|_{W^{k,4/3}}+\|\psi_0\|_{H^N}.
\end{equation*}
The only issue is thus to bound the nonlinear part. Let $f,g$ be placeholders for $l_1$ or
$U^{-1}\phi_1$; there are several kinds of terms: $\nabla f\cdot \nabla g$, $(a(1+l_1)-1)\Delta f$, $2l_1-\widetilde{g}(1+l_1)$, $|\nabla f|^2$, $|\nabla g|^2$, $(a(1+l_1)-1)H g$. The estimates for $0\leq t\leq 1$ are easy (this corresponds to the existence of strong solutions in finite time), so we assume $t\geq 1$ and
we split the integral from $(\ref{duhamel})$ between $[0,t-1]$ and $[t-1,t]$. For the first kind
we have from the dispersion estimate and (\ref{GN3}):
\begin{eqnarray*}
\bigg\|\int_0^{t-1}e^{i(t-s)H}\nabla f\cdot \nabla g\,ds\bigg\|_{W^{k,4}}
&\lesssim& \int_0^{t-1}\frac{\|\nabla f\cdot \nabla g\|_{W^{k,4/3}}}{(t-s)^{d/4}}ds
\\
&\lesssim&
\int_0^{t-1} \frac{\|\nabla f\|_{H^{k}}\|\nabla g\|_{W^{k-1,4}}}{(t-s)^{d/4}}ds,\\
&\lesssim&\|\psi\|_{X_t}^2 \displaystyle \int_0^{t-1}
\frac{1}{(t-s)^{d/4}(1+s)^{d/4}}ds\\
&\lesssim& \frac{\|\psi\|_{X_t}^2}{t^{d/4}}.
\end{eqnarray*}
(actually we should also add on the numerator $\|\nabla f\|_{W^{k-1,4}}\|\nabla g\|_{H^{k}}$, but
since $f,g$ are symmetric placeholders we omit this term).
We have used the fact that $\nabla U^{-1}$ is
bounded from $W^{1,p}$ to $L^p$,
$1<p<\infty$, so that $\|\nabla f(s)\|_{H^k}\lesssim \|f\|_{X_t}$ for $s\in [0,t]$, and $(1+s)^{d/4}\|\nabla g\|_{W^{k-1,4}}
\lesssim \|g\|_{X_t}$.\\
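Let us also point out where the restriction $d\geq 5$ enters in the last convolution bound:
splitting at $s=t/2$ (a standard computation),
\begin{equation*}
\int_0^{t-1}\frac{ds}{(t-s)^{d/4}(1+s)^{d/4}}
\lesssim \frac{1}{t^{d/4}}\int_0^{t/2}\frac{ds}{(1+s)^{d/4}}
+\frac{1}{t^{d/4}}\int_{t/2}^{t-1}\frac{ds}{(t-s)^{d/4}}
\lesssim \frac{1}{t^{d/4}},
\end{equation*}
both remaining integrals being uniformly bounded since $d/4>1$.\\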
For the second part on $[t-1,t]$ we use the Sobolev embedding $H^{d/4}\hookrightarrow L^4$ and
(\ref{GN3}):
\begin{eqnarray*}
\bigg\|\int_{t-1}^te^{i(t-s)H}(\nabla f\cdot \nabla g )ds\bigg\|_{W^{k,4}}\lesssim
\int_{t-1}^t\big\|\nabla f\cdot\nabla g\big\|_{H^{k+d/4}} ds&\lesssim&
\int_{t-1}^t\|\nabla f\|_{L^4}\|\nabla g\|_{H^{k+d/2}}ds\\
&\lesssim&
\|\psi\|_{X_t}^2 \int_{t-1}^t \frac{1}{(1+s)^{\frac{d}{4}}}ds\\
&\lesssim& \frac{\|\psi\|_{X_t}^2 }{(1+t)^{d/4}}.
\end{eqnarray*}
The terms of the kind $(a(1+l_1)-1)\Delta f$ are estimated similarly: splitting the
integral over $[0,t-1]$ and $[t-1,t]$,
\begin{eqnarray*}
\bigg\|\int_0^{t-1}e^{i(t-s)H}(a(1+l_1)-1)\Delta fds\bigg\|_{W^{k,4}}
&\lesssim&
\int_0^{t-1} \frac{\|a(1+l_1)-1\|_{W^{k,4}}\|\Delta f\|_{H^k}}{(t-s)^{d/4}}ds
\\
&\lesssim&
\int_0^{t-1} \frac{\|a(1+l_1)-1\|_{W^{k,4}}\|\nabla f\|_{H^{k+1}}}{(t-s)^{d/4}}ds.
\end{eqnarray*}
As for the first kind terms, from the composition estimate we deduce that:
$$\|a(1+l_1)-1\|_{W^{k,4}}\lesssim
F(\|l_1\|_{L^\infty_t(L^\infty)},\|\frac{1}{1+l_1}\|_{L^\infty_t(L^\infty)})\|l_1\|_{W^{k,4}},$$
with $F$ continuous, we can bound the integral above by $F(\|\psi\|_{X_t},\|\frac{1}{1+l_1}\|_{L_t^\infty(L^\infty)})
\|\psi\|_{X_t}^2/t^{d/4}$. For the integral
over $[t-1,t]$ we can again do the same computations using
the composition estimates $
\|a(1+l_1)-1\|_{H^{k+d/2}}\lesssim F_1(\|l_1\|_{L^\infty_t(L^\infty)},\|\frac{1}{1+l_1}\|_{L^\infty_t(L^\infty)}) \|l_1\|_{H^{k+d/2}}$ with $F_1$ continuous. The restriction $N\geq k+2+d/2$ comes
from the fact that we need $\|\Delta f\|_{H^{k+d/2}}\lesssim \|f\|_X$.\\
Writing $2l_1-\widetilde{g}(1+l_1)=l_1\big(2-\widetilde{g}(1+l_1)/l_1\big)$ we see that the estimate for the
last term is the same as for $(a(1+l_1)-1)\Delta f$ but simpler, so we omit it. The other terms can also be handled in a similar way.
\end{proof}
\paragraph{End of the proof of theorem $(\ref{theolarged})$} We fix $k>2+d/4$ and $n$ such that
$2n+1\geq k+2+d/2$, and use these values for $X_T$ (that is, $N=2n+1$ in proposition \ref{decay}), so that
$\|\psi\|_{X_T}=\|\psi\|_{L^\infty([0,T],H^{2n+1})}+\sup_{[0,T]}(1+t)^{d/4}\|\psi\|_{W^{k,4}}$.
First note that since $\mathcal{L}$ is a smooth diffeomorphism near $1$ and $u_0=\nabla \phi_0$,
we have
\begin{eqnarray*}
\|u_0\|_{H^{2n}\cap W^{k-1,4/3}}+\|\rho_0-\rho_c\|_{H^{2n+1}\cap W^{k,4/3}}&\sim&
\|(U\phi_0,\mathcal{L}^{-1}(1+l_0)-1)\|_{(H^{2n+1}\cap W^{k,4/3})^2}
\\
&\sim& \|\psi_0\|_{H^{2n+1}\cap W^{k,4/3}},
\end{eqnarray*}
if $\|l_0\|_\infty$ is small enough. In particular we will simply write the smallness
condition in terms of $\psi_0$. Now using the embedding $W^{k,4}\hookrightarrow W^{2,\infty}$,
the energy estimate of proposition $(\ref{energy})$ implies
\begin{equation*}
\|\psi(t)\|_{H^{2n+1}}\leq \|\psi_0\|_{H^{2n+1}}\text{exp}\bigg(C\int_0^t H(\|\psi\|_{X_s},\|\frac{1}{l+1}\|_{L^\infty})( \|\psi\|_{W^{k,4}}+\|\psi\|_{W^{k-1,4}}^2) ds
\bigg).
\end{equation*}
Combining it with the decay estimate of proposition $(\ref{decay})$ we get with $G$ and $H$ continuous:
\begin{eqnarray*}
\begin{aligned}
&\|\psi\|_{X_T}\leq C_1\bigg(\|\psi_0\|_{W^{k,4/3}}+\|\psi_0\|_{H^{2n+1}}+\|\psi\|_{X_T}^2 G(\|\psi\|_{X_T},\|\frac{1}{1+l_1}\|_{L^\infty_T(L^\infty)})\\
&\hspace{1cm}+\|\psi_0\|_{H^N}\text{exp}\bigg(C\int_0^T H(\|\psi\|_{X_T},\|\frac{1}{l+1}\|_{L^\infty_T(L^\infty)})( \|\psi\|_{W^{k,4}}+\|\psi\|_{W^{k-1,4}}^2) ds\bigg)\bigg)\\
&\leq C_1\bigg(\|\psi_0\|_{W^{k,4/3}}+\|\psi_0\|_{H^N}+\|\psi\|_{X_T}^2 G(\|\psi\|_{X_T},\|\frac{1}{1+l_1}\|_{L^\infty_T(L^\infty)})\\
&\hspace{3cm}+ \|\psi_{0}\|_{H^{2n+1}}\text{exp}\big(C'\|\psi\|_{X_T} H(\|\psi\|_{X_T},\|\frac{1}{l+1}\|_{L^\infty_T(L^\infty)})\big)\bigg).
\end{aligned}
\end{eqnarray*}
From the usual bootstrap argument, we find that if
$\|\psi_0\|_{W^{k,4/3}}+\|\psi_0\|_{H^N}\leq \varepsilon$ is small enough, then for any
$T>0$, $\|\psi\|_{X_T}\leq 3C_1\varepsilon$ (it suffices to note that for $\varepsilon$ small
enough, the map $m\mapsto C_1(\varepsilon+\varepsilon e^{C'm}+m^2)$ is smaller than $m$ on
some interval $[a,b]\subset ]0,\infty[$ with $a\simeq 2C_1\varepsilon$). \\
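Concretely (a sketch of the bootstrap, absorbing the continuous function $H$ into the constant
$C'$): evaluating the above map at $m=3C_1\varepsilon$ gives
\begin{equation*}
C_1\big(\varepsilon+\varepsilon e^{3C_1C'\varepsilon}+9C_1^2\varepsilon^2\big)
=C_1\varepsilon\big(2+O(\varepsilon)\big)<3C_1\varepsilon
\quad\text{for $\varepsilon$ small enough},
\end{equation*}
so by continuity of $T\mapsto\|\psi\|_{X_T}$ the norm can never reach the level $3C_1\varepsilon$.\\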
In particular $\|l\|_\infty\lesssim \varepsilon$ and up to diminishing $\varepsilon$, we have
$$\|\rho-\rho_c\|_{L^\infty([0,T]\times \mathbb{R}^d)}=\|\mathcal{L}^{-1}(1+l)-\rho_c\|_\infty
\leq \rho_c/2.$$
This estimate and the $H^{2n+1}$ bound allow us to apply the blow-up criterion of \cite{Benzoni1}
to get global well-posedness.
\section[The case of dimension 3]{The case of dimension d=3,4: normal form, bounds for cubic
and quartic terms}
In dimension $d=4$ the approach of section \ref{sectiondlarge} fails, and $d=3$ is even worse. Thus we
need to study more carefully the structure of the nonlinearity. We start with $(\ref{ekdiag})$,
that we rewrite in complex form
\begin{eqnarray}
\nonumber
\partial_t\psi-iH\psi&=&U\big[(a(1+l)-1)\Delta l
-\frac{1}{2}\big(|\nabla \phi|^2-|\nabla l|^2\big)
+(2l-\widetilde{g}(1+l))\big]\\
\nonumber &&+i\big[-\nabla \phi\cdot \nabla l+\big(1-a(1+l)\big)\Delta\phi)\big]\\
\label{GPcomp}&=& U\mathcal{N}_1(\phi,l)+i\mathcal{N}_2(\phi,l)=\mathcal{N}(\psi).
\end{eqnarray}
As explained in the introduction (see \eqref{duhaprofile}), we can rewrite the Duhamel
formula in terms of the profile $e^{-itH}\psi$. In particular, (the Fourier transform of)
quadratic terms read
\begin{equation}\label{genericquad}
I_{\text{quad}}=
e^{itH(\xi)}\int_0^te^{-is \big(H(\xi)\mp H(\eta)\mp H(\xi-\eta)\big)}B(\eta,\xi-\eta)
\widetilde{\psi^\pm}(\eta)\widetilde{\psi^\pm}(\xi-\eta)d\eta ds,
\end{equation}
where we recall the notation $\widetilde{\psi^\pm}=e^{\mp itH}\widehat{\psi^\pm}$, and $B$ is
the symbol of a bilinear multiplier.
For some $\varepsilon>0$ to be chosen later, with $1/p=1/6-\varepsilon$, $T>0$ and $N=2n+1$, we set:
\begin{equation}\label{Fspaces}
\left\{
\begin{array}{lll}
\|\psi\|_{Y_T}&=&\|xe^{-itH}\psi\|_{L^\infty_T(L^2)}
+\|\langle t\rangle^{1+3\varepsilon}\psi\|_{L^\infty_T(W^{k,p})},\\
\|\psi\|_{X(t)}&=&\|\psi(t)\|_{H^N}+\|xe^{-itH}\psi(t)\|_{L^2}
+\|\langle t\rangle^{1+3\varepsilon}\psi(t)\|_{W^{k,p}},\\
\|\psi\|_{X_T}&=&\displaystyle \sup_{[0,T)}\|\psi\|_{X(t)}.
\end{array}\right.
\end{equation}
From the embedding $W^{3,p}\subset W^{2,\infty}$, proposition \ref{energy} implies
\begin{equation}
\|\psi\|_{L^\infty_TH^{2n+1}}\lesssim \|\psi_0\|_{H^{2n+1}}\text{exp}\big(C(\|l\|_{L^\infty},\|\frac{1}{l+1}\|_{L^\infty})(\|\psi\|_{X_T}+\|\psi\|_{X_T}^2)\big),
\label{energyd3}
\end{equation}
with $C$ a continuous function. Thus the main difficulty of this section will be to prove
$\displaystyle \|I_{\text{quad}}\|_{Y_T}\lesssim \|\psi\|_{X_T}^2$, uniformly in $T$.
Combined with the energy estimate \eqref{energyd3} and similar (easier) bounds for higher order
terms, this provides global bounds for $\psi$ which imply global well-posedness.\vspace{2mm}\\
In order to perform such estimates we can use integration by part in \eqref{genericquad}
either in $s$ or
$\eta$ (for the relevance of this procedure, see the discussion on space time resonances in the
introduction). It is thus essential to study where and at which order we have a cancellation of
$\Omega_{\pm,\pm}(\xi,\eta)=H(\xi)\pm H(\eta)\pm H(\xi-\eta)$ or $\nabla_\eta \Omega_{\pm\pm}$.
We will abusively denote by $H'(\xi)=\frac{2+2|\xi|^2}{\sqrt{2+|\xi|^2}}$ the radial
derivative of $H$, and note that $\nabla H(\xi)=H'(\xi)\xi/|\xi|$; we also point out that
$H'(r)=\frac{2+2r^2}{\sqrt{2+r^2}}$ is strictly increasing.\\
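Indeed, the strict monotonicity of $H'$ follows from a direct computation:
\begin{equation*}
\frac{d}{dr}\bigg(\frac{2+2r^2}{\sqrt{2+r^2}}\bigg)
=\frac{4r(2+r^2)-r(2+2r^2)}{(2+r^2)^{3/2}}
=\frac{2r(3+r^2)}{(2+r^2)^{3/2}}>0 \quad\text{for }r>0.
\end{equation*}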
There are several cases that
have some similarities with the situation for the Schr\"odinger equation;
see $(\ref{timeR}),(\ref{spaceR}),(\ref{totalR})$ for the definition of the resonant sets $\mathcal{T},
\ \mathcal{S}, \ \mathcal{R}$.
\begin{itemize}
\item $\Omega_{++}=H(\xi)+H(\eta)+H(\xi-\eta)\gtrsim (|\xi|+|\eta|+|\xi-\eta|)(1+
|\xi|+|\eta|+|\xi-\eta|)$, the time resonant set is reduced to $\mathcal{T}=\{\xi=\eta=0\}$,
\item $\Omega_{--}=H(\xi)-H(\eta)-H(\xi-\eta)$, we have $\nabla_\eta \Omega_{--}=H'(\eta)
\frac{\eta}{|\eta|}+H'(\xi-\eta)\frac{\eta-\xi}{|\eta-\xi|}$. From basic computations
\begin{equation*}
\nabla_\eta \Omega_{--}=0\Rightarrow
\left\{
\begin{array}{ll}
H'(\eta)=H'(\xi-\eta)\\
\frac{\xi-\eta}{|\eta-\xi|}= \frac{\eta}{|\eta|}
\end{array}\right.
\Rightarrow
\left\{
\begin{array}{ll}
|\eta|=|\xi-\eta|\\
\xi= 2\eta
\end{array}
\right.
\end{equation*}
On the other hand $\Omega_{--}(2\eta,\eta)=H(2\eta)-2H(\eta)=0\Leftrightarrow \eta=0$ (a direct
check is given right after this list), thus
$\mathcal{R}=\{\xi=\eta=0\}$.
\item $\Omega_{-+}=H(\xi)-H(\eta)+H(\xi-\eta)$, from similar computations we find that the
space-time resonant set is $\mathcal{R}=\mathcal{S}=\{\xi=0\}$. The case $\Omega_{+-}$ is
symmetric.
\end{itemize}
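As announced, the identity used in the second item follows directly from
$H(\xi)=|\xi|\sqrt{2+|\xi|^2}$:
\begin{equation*}
H(2\eta)=2|\eta|\sqrt{2+4|\eta|^2}>2|\eta|\sqrt{2+|\eta|^2}=2H(\eta)\qquad\text{for }\eta\neq 0,
\end{equation*}
so that $\Omega_{--}(2\eta,\eta)$ vanishes only at $\eta=0$.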
The fact that the space-time resonant set for $\Omega_{+-}$ is not trivial explains why it is quite
intricate to bound quadratic terms. Another issue pointed out in \cite{GNT3} for their study of
the Gross-Pitaevskii equation is that the small frequency ``parallel'' resonances are worse than
for the nonlinear Schr\"odinger equation. Namely near $\xi=\varepsilon\eta$, $|\eta|\ll 1$ we have
\begin{equation*}
H(\varepsilon\eta)-H(\eta)+H((\varepsilon-1)\eta)\sim \frac{-3\varepsilon |\eta|^3}{2\sqrt{2}}
=\frac{-3|\xi|\,|\eta|^2}{2\sqrt{2}},
\text{ while }|\varepsilon\eta|^2-|\eta|^2+|(1-\varepsilon)\eta|^2\sim -2|\eta|\, |\xi|,
\end{equation*}
we see that integrating by parts in time causes twice the loss of derivatives prescribed by
Coifman-Meyer's theorem, and there is no hope
even for $\xi/\Omega$ to belong to any standard class of multipliers. Thus it seems unavoidable
to use the rough multiplier theorem \ref{singmult}.
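The low frequency asymptotics displayed above follow from a routine Taylor expansion, which we
sketch for completeness:
\begin{equation*}
H(k)=|k|\sqrt{2+|k|^2}=\sqrt{2}\,|k|+\frac{|k|^3}{2\sqrt{2}}+O(|k|^5),
\qquad \varepsilon^3-1+(1-\varepsilon)^3=3\varepsilon^2-3\varepsilon,
\end{equation*}
so the linear parts cancel in $H(\varepsilon\eta)-H(\eta)+H((\varepsilon-1)\eta)$, and the cubic
terms give $-3\varepsilon|\eta|^3/(2\sqrt{2})$ at leading order in $\varepsilon$.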
\subsection{Normal form}\label{secnorm}
In view of the discussion above, the frequency set $\{(\xi,\eta):\ \xi=0\}$ is expected to
raise some special difficulty. On the other hand the real part of the nonlinearity in
$(\ref{GPcomp})$ is better behaved than the imaginary part, since
it has the operator $U(\xi)$ as a factor, whose vanishing near $\xi=0$ should compensate the
resonances. In the spirit of \cite{GNT3} we will use a
normal form in order to have a similar cancellation on the imaginary part.
In order to write the nonlinearity as essentially quadratic we set $a'(1)=\alpha$, and rewrite
\begin{equation}\label{cub1}
\text{Im}(\mathcal{N})(\psi)=-\alpha l\Delta \phi-\nabla \phi\cdot \nabla l+
\big[\big(1+\alpha l-a(1+l)\big)\Delta \phi\big]=
-\alpha l\Delta \phi-\nabla \phi\cdot \nabla l+R.
\end{equation}
From now on, we will use the notation $R$ as a placeholder for remainder terms that should be at
least cubic. The detailed analysis of $R$ will be provided in section \ref{estimR}.
At the Fourier level, the quadratic terms $-\alpha l\Delta \phi -\nabla \phi\cdot \nabla l$ can be written as follows:
\begin{equation}
\label{acub1}
-\alpha l\Delta \phi -\nabla \phi\cdot \nabla l=-\alpha{\rm div}(l\nabla\phi)+(\alpha-1)\nabla\phi\cdot\nabla l.
\end{equation}
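This identity is checked by simply expanding the divergence:
\begin{equation*}
-\alpha\,{\rm div}(l\nabla\phi)+(\alpha-1)\nabla\phi\cdot\nabla l
=-\alpha l\Delta\phi-\alpha\nabla l\cdot\nabla\phi+(\alpha-1)\nabla\phi\cdot\nabla l
=-\alpha l\Delta \phi -\nabla \phi\cdot \nabla l.
\end{equation*}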
We define the change of variables as $l\rightarrow l-B[\phi,\phi]+B[l,l]$, with $B$ a symmetric
bilinear multiplier to be chosen later. We have
\begin{equation}
\begin{aligned}
& \partial_t\big(-B[\phi,\phi]+ B[l,l]\big)=2B[\phi,(-\Delta+2)l]+2B[-\Delta \phi,l]\\
&\hspace{5cm}+2B\big[\phi,\mathcal{N}_1(\phi,l)\big]+2B\big[\mathcal{N}_2(\phi,l),l\big]\\
&=2B[\phi,(-\Delta+2)l]+2B[-\Delta \phi,l]+R,
\end{aligned}
\label{bcub1}
\end{equation}
where the quadratic terms amount to a bilinear Fourier multiplier $B'[\phi,l]$, with symbol
$B'(\eta,\xi-\eta)=2B(\eta,\xi-\eta)\big(|\eta|^2+2+|\xi-\eta|^2\big)$. The evolution equation on
$l_1= l-B[\phi,\phi]+B[l,l]$ is, using (\ref{acub1}) and (\ref{bcub1}),
\begin{eqnarray*}
\partial_t l_1+\Delta \phi=B''(\phi,l)-\alpha\text{div}(l\nabla \phi )+R,\\
B''(\eta,\xi-\eta)=2B(\eta,\xi-\eta)(2+|\eta|^2+|\xi-\eta|^2)+(1-\alpha)\eta\cdot (\xi-\eta).
\end{eqnarray*}
The natural choice is thus to take (note that if $\alpha=1$ the normal form is just the identity)
\begin{equation*}
B(\eta,\xi-\eta)=\frac{(\alpha-1) \eta\cdot (\xi-\eta)}{2+|\eta|^2+|\xi-\eta|^2}.
\end{equation*}
For this choice, we then have
\begin{equation}
\partial_t l_1+\Delta \phi=-\alpha\text{div}(l\nabla \phi )+R.
\end{equation}
In addition from (\ref{systedepart}) we get:
\begin{equation}
\begin{array}{lll}
\displaystyle \partial_t\phi-\Delta l_1+2l_1&=&-\Delta b(\phi,l)+2 b(\phi,l) +(a(1+l)-1)\Delta l
-\frac{1}{2}\big(|\nabla \phi|^2-|\nabla l|^2\big)\\
&&+(2l-\widetilde{g}(1+l)),
\end{array}
\label{systedepart2}
\end{equation}
with $l_1=l-B[\phi,\phi]+B[l,l]=l+b(\phi,l)$.
Setting $\phi_1=U\phi$ the system becomes:
\begin{eqnarray*}
\partial_t\phi_1+Hl_1&=&U\bigg(\alpha \,l\Delta l-
\frac{1}{2}\big(|\nabla U^{-1}\phi_1|^2-|\nabla l|^2\big)+(-\Delta+2)b(\phi,l)-\widetilde{g}''(1) l^2\bigg)
+R,\\
\partial_tl_1-H \phi_1&=&-\alpha\text{div}(l\nabla \phi)+R.
\end{eqnarray*}
\paragraph{Final form of the equation}
Finally, if we replace in the quadratic terms $l=l_1-b(\phi,l)$ and set $z=\phi_1+il_1$
we obtain
\begin{eqnarray}
\nonumber
\partial_tz-iHz&=&U\big(\alpha \,l_1\Delta l_1- \frac{1}{2}\big(|\nabla U^{-1}\phi_1|^2
-|\nabla l_1|^2-\widetilde{g}''(1) l_1^2\big)+(-\Delta+2)b(\phi,l_1)\big)
-i\alpha\text{div}(l_1\nabla \phi)\\
\nonumber
&&+U\big(\alpha(-b(\phi,l)\Delta l_1-l_1\Delta b(\phi,l)+b(\phi,l)\Delta b(\phi,l)-2\nabla b(\phi,l)\cdot \nabla l+|\nabla b(\phi,l)|^2\\
\nonumber &&+(-\Delta+2)(-2B[l_1,b(\phi,l)]+B[b(\phi,l),b(\phi,l)])-\widetilde{g}''(1) (b(\phi,l))^2+2\widetilde{g}''(1) l_1 b(\phi,l) \big) \big)\\
\nonumber &&\hspace{9cm}+i\alpha\text{div}(b(\phi,l)\nabla \phi)
+R\\
\label{EKnormal}&=&Q(z)+R:=\mathcal{N}_z,
\end{eqnarray}
where $Q(z)$ contains the quadratic terms (the first line), $R$ the cubic and quartic terms.
\begin{rmq}
It is noticeable that this change of unknown is not singular in term of the new variable
$\phi_1=U \phi$, indeed $B(\phi,\phi)=\widetilde{B}(\nabla \phi,\nabla \phi)$ where
$\widetilde{B}(\eta,\xi-\eta)=\frac{\alpha-1}{(2+|\eta|^2+|\xi-\eta|^2)}$ is smooth, so that
$B(\phi,\phi)=\widetilde{B}(\nabla U^{-1}\phi_1,\nabla U^{-1}\phi_1)$ acts on $\phi_1$ as a composition
of smooth bilinear and linear multipliers.
\end{rmq}
It remains to check that the normal form is well defined in our functional framework.
We shall also prove that it cancels asymptotically.
\begin{prop}\label{estimformenormale}
For $N>4$, $k\geq 2$, the map $\phi_1+il\mapsto z:=\phi_1+i(l+b(\phi,l))$ is bi-Lipschitz on a
neighbourhood of $0$ in $X_\infty$.
Moreover, $\psi=\phi_1+i l$ and $z$ have the same asymptotic behaviour as $t\rightarrow \infty$:
\begin{equation*}
\|\psi-z\|_{X(t)}=O(t^{-1/2}).
\end{equation*}
\end{prop}
\begin{proof}
The terms $B[\phi,\phi]$ and $B[l,l]$ are handled in a similar way, we only treat the first case
which is a bit more involved as we have the singular relation $\phi=U^{-1}\phi_1$.
Note that $B[\phi,\phi]=\widetilde{B}(\nabla \phi,\nabla \phi)$, with $\widetilde{B}[\eta,\xi-\eta]=
(\alpha-1)\frac{1}{2+|\eta|^2+|\xi-\eta|^2}$, and $\nabla U^{-1}=\langle \nabla \rangle \circ R_i$ so there is
no real issue as long as we avoid the $L^\infty$ space.
Also, we split $B=B\chi_{|\eta|\gtrsim|\xi-\eta|}+B(1-\chi_{|\eta|\gtrsim|\xi-\eta|})$ where
$\chi$ is smooth outside $\eta=\xi=0$, homogeneous of degree $0$, equal to $1$ near
$\{|\xi-\eta|=0\}\cap \mathbb{S}^{2d-1}$ and $0$ near $\{|\eta|=0\}\cap \mathbb{S}^{2d-1}$. As can
be seen from the change of variables $\zeta=\xi-\eta$, these terms are symmetric so we can simply
consider the first case.\\
By interpolation, we have:
\begin{equation}
\forall\,2\leq q\leq p,\ \|\psi\|_{W^{k,q}}\lesssim \|\psi\|_{X(t)}/\langle t\rangle^{3(1/2-1/q)}.
\label{techj}
\end{equation}
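This is simply the interpolation between the $L^2$-based bound (no decay) and the $W^{k,p}$ bound
contained in $X(t)$: writing $1/q=\theta/p+(1-\theta)/2$ and recalling $1/p=1/6-\varepsilon$, the
decay exponent is
\begin{equation*}
\theta(1+3\varepsilon)
=\frac{\frac{1}{2}-\frac{1}{q}}{\frac{1}{2}-\frac{1}{p}}(1+3\varepsilon)
=\frac{\frac{1}{2}-\frac{1}{q}}{\frac{1}{3}+\varepsilon}(1+3\varepsilon)
=3\Big(\frac{1}{2}-\frac{1}{q}\Big).
\end{equation*}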
For the $H^N$ estimate, we have
from the Coifman-Meyer theorem (since the symbol $\widetilde{B}$ has the form $\frac{1}{2+|\eta|^2+|\xi-\eta|^2}$), the embedding $H^1\hookrightarrow L^3$ and the boundedness of the Riesz
multiplier,
\begin{equation*}
\|B[U^{-1}\phi_1,U^{-1}\phi_1]\|_{H^N}
\lesssim \big\|\nabla U^{-1}
\phi_1\big\|_{W^{N-2,3}} \big\|\nabla U^{-1}\phi_1\big\|_{L^{6}}\lesssim \|\phi_1\|_{X(t)}^2/\langle t\rangle.
\end{equation*}
For the weighted estimate $\|xe^{-itH}B[\phi,\phi]\|_{L^2}$, since $\phi=U^{-1}(\psi
+\overline{\psi})/2$, we have a collection of terms that read in the Fourier variable:
\begin{eqnarray*}
\mathcal{F}\big( xe^{-itH}B[U^{-1}\psi^\pm,U^{-1}\psi^\pm]\big)=
\nabla_\xi \int e^{-it\Omega_{\pm\pm}}B_1(\eta,\xi-\eta)
\tilde{\psi}^\pm(\eta)\tilde{\psi}^{\pm}(\xi-\eta)d\eta,\\
\text{ where }B_1=\frac{\eta U^{-1}(\eta)\cdot (\xi-\eta)
U^{-1}(\xi-\eta)}{2+|\eta|^2+|\xi-\eta|^2}\chi_{|\eta|\gtrsim|\xi-\eta|},\
\Omega_{\pm\pm}=-H(\xi)\mp H(\eta)\mp H(\xi-\eta).
\end{eqnarray*}
If the derivative hits $B_1$, in the worst case it adds a singular term $U^{-1}(\xi-\eta)$, so
that from the embedding $\dot{H}^1\hookrightarrow L^6$
\begin{eqnarray*}
\bigg\|\int e^{-it\Omega_{\pm\pm}}(\nabla_\xi B_1)
\tilde{\psi}^\pm(\eta)\tilde{\psi}^{\pm}(\xi-\eta)d\eta\bigg\|_{L^2}
=\big\|\nabla_\xi B_1[\psi^\pm,\psi^{\pm}]\big\|_{L^2} &\lesssim& \|U^{-1}\psi\|_{W^{1,6}}
\|\psi\|_{W^{1,3}}\\
&\lesssim&\|\psi\|_{X(t)}^2/\langle t\rangle^{1/2}.
\end{eqnarray*}
If the derivative hits $\widetilde{\psi}^\pm(\xi-\eta)$ we use the fact that the symbol
$\frac{\langle \xi-\eta\rangle^2\chi_{|\eta|\gtrsim |\xi-\eta|}}{2+|\eta|^2+|\xi-\eta|^2}$ is of
Coifman-Meyer type
\begin{eqnarray*}
\bigg\|\int e^{it\Omega_{\pm\pm}}B_1(\eta,\xi-\eta)
\tilde{\psi}^\pm(\eta)\nabla_\xi \tilde{\psi}^{\pm}(\xi-\eta)d\eta\bigg\|_{L^2}
&\lesssim&
\|\langle\nabla\rangle\psi \|_{L^{6}}\|\langle \nabla\rangle ^{-2}\langle\nabla\rangle e^{itH}xe^{-itH}\psi\|_{L^3}\\
&\lesssim& \|\psi\|_{X(t)}^2/\langle t\rangle.
\end{eqnarray*}
Finally, if the derivative hits $e^{-it\Omega_{\pm\pm}}$ we note that $\nabla_\xi\Omega_{\pm\pm}
=\nabla_\xi H(\xi)\mp \nabla_\xi H(\xi-\eta)$, where both terms are multipliers of order $1$, so
\begin{eqnarray*}
\bigg\|\int e^{it\Omega_{\pm\pm}}it(\nabla_\xi\Omega_{\pm\pm})B_1
\tilde{\psi}^\pm(\eta)\tilde{\psi}^{\pm}(\xi-\eta)d\eta\bigg\|_{L^2}
&\lesssim &t\|\psi\|_{W^{1,3}}\|\psi\|_{W^{1,6}}\\
&\lesssim& \|\psi\|_{X(t)}^2/\langle t\rangle^{1/2}.
\end{eqnarray*}
The $W^{k,p}$ norm is also estimated using the Coifman-Meyer theorem and the boundedness
of the Riesz multipliers:
\begin{eqnarray*}
\|B_1[\psi^\pm(t),\psi^\pm(t)]\|_{W^{k,p}}\lesssim \|\psi\|_{W^{k-1,q}}^2
\lesssim \|\psi\|_{W^{k,p}}^2\lesssim \frac{\|\psi\|_{X(t)}^2}{\langle t\rangle^{2+6\varepsilon}},
\qquad \frac{1}{q}=\frac{1}{12}-\frac{\varepsilon}{2}.
\end{eqnarray*}
Gluing all the estimates we have proved
\begin{equation*}
\|B[U^{-1}\psi,U^{-1}\psi]\|_{X(t)}\lesssim \|\psi\|_{X(t)}^2/\langle t\rangle^{1/2},\qquad
\|B[U^{-1}\psi,U^{-1}\psi]\|_{X}\lesssim \|\psi\|_{X}^2,
\end{equation*}
thus using the second estimate we obtain from a fixed point argument that the map
$\phi_1+il\mapsto \phi_1+i(l-B[\phi,\phi]+B[l,l])$
defines a diffeomorphism on a neighbourhood of $0$ in $X$. The first estimate proves the second
part of the proposition.
\end{proof}
With similar arguments, we can also obtain the following:
\begin{prop}
Let $z_0=U\phi_0+i(l_0-B[\phi_0,\phi_0]+B[l_0,l_0])$; then the smallness condition of theorem
$(\ref{theod4})$ is equivalent to the smallness of
$\|z_0\|_{H^{2n+1}}+\|xz_0\|_{L^2}+\|z_0\|_{W^{k,p}}$.
\end{prop}
\subsection{Bounds for cubic and quartic nonlinearities}\label{estimR}
Let us first collect the list of terms in $R$ (see $(\ref{cub1}), (\ref{bcub1}), (\ref{EKnormal})$ ) with $b=b(\phi,l)$:
\begin{eqnarray*}
\begin{aligned}
&(1+\alpha l-a(1+l))\Delta \phi,\ B[\phi,\mathcal{N}_1(\phi,l)],\ B[\mathcal{N}_2(\phi,l),l],\
i\alpha\text{div}(b\nabla \phi),\\
&U\big(\alpha(-b\Delta l_1-l_1\Delta b+b\Delta b-2\nabla b\cdot \nabla l+|\nabla b|^2\\
&\hspace{1cm}+(-\Delta+2)(-2B[l_1,b]+B[b,b])-\widetilde{g}''(1) b^2+2\widetilde{g}''(1) l_1 b\big)\big).
\end{aligned}
\end{eqnarray*}
We note that they are all either cubic (for example $B[\phi,|\nabla \phi|^2]$) or quartic (for
example $B[b,b]$). $B$ is a smooth bilinear multiplier and, as we already pointed out,
$\phi$ always appears with a gradient, so we can replace $\phi$ everywhere by $\phi_1=U\phi$ up to
the addition of Riesz multipliers. \\
Since the estimates are relatively straightforward, we only detail the case of the cubic
term $B[\phi,|\nabla \phi|^2]$, which comes from $B[\phi,\mathcal{N}_1(\phi,l)]$
(quartic terms are simpler). Since $\phi=U^{-1}(\psi+\overline{\psi})/2$ we are
reduced to bounding in $Y_T$ (see (\ref{Fspaces})) terms of the form
\begin{equation*}
I(t)=\int_0^te^{i(t-s)H}B[U^{-1}\psi^\pm,|U^{-1}\nabla \psi^\pm|^2]ds.
\end{equation*}
\begin{prop}
For any $T>0$, we have the a priori estimate
\begin{equation*}
\|I\|_{Y_T}\lesssim \|\psi\|_{X_T}^3.
\end{equation*}
\end{prop}
\begin{proof}\textbf{The weighted bound}\\[2mm]
First let us write
\begin{eqnarray*}
\begin{aligned}
&xe^{-itH}I(t)=\int_0^te^{-isH}\bigg((-is\nabla_\xi H)B[U^{-1}\psi^\pm,(U^{-1}\nabla \psi^\pm)^2]
+B[U^{-1}\psi^\pm,x(U^{-1}\nabla \psi^\pm)^2]\\
&\hspace{3cm}+\nabla_\xi B[U^{-1}\psi^\pm,(U^{-1}\nabla \psi^\pm)^2] \bigg)ds,\\
&= I_1(t)+I_2(t)+I_3(t).
\end{aligned}
\end{eqnarray*}
Taking the $L^2$ norm and using the Strichartz estimate with $(p',q')=(2,6/5)$ we get
\begin{eqnarray*}
\|I_1\|_{L^\infty_TL^2}&\lesssim& \|(s\nabla_\xi H)B[U^{-1}\psi^\pm,(U^{-1}\nabla \psi^\pm)^2]\|
_{L^2(L^{6/5})}\\
&\lesssim& \|sB[U^{-1}\psi^\pm,(U^{-1}\nabla \psi^\pm)^2]\|_{L^2(W^{1,6/5})},\\
\|I_2\|_{L^\infty_TL^2}&\lesssim& \|B[U^{-1}\psi^\pm,x(U^{-1}\nabla \psi^\pm)^2]\|
_{L^2(L^{6/5})}.
\end{eqnarray*}
We then have, from Coifman-Meyer's theorem, H\"older's inequality, continuity of the Riesz operator and (\ref{techj}),
\begin{equation}
\label{weightloss}
\begin{aligned}
&\|sB[U^{-1}\psi^\pm,(U^{-1}\nabla \psi^\pm)^2]\|_{L^2_T(W^{1,6/5})}\lesssim
\big\|s\|\psi\|_{W^{2,6}}^2\|\psi\|_{H^2}\big\|_{L^2_T}\lesssim \|\psi\|_{X_T}^3,\\
&\|I_2\|_{L^\infty_T(L^2)}\lesssim \big\|\|\psi\|_{W^{1,6}}\|x(\nabla
U^{-1}\psi^\pm)^2\|_{L^{\frac{3}{2}}}\big\|_{L^2_T}.
\end{aligned}
\end{equation}
The loss of derivatives in $I_2$ can be controlled thanks to a paraproduct: let
$(\chi_j)_{j\geq 0}$ be such that $\sum_j \chi_j(\xi)=1$, $\text{supp}(\chi_0)\subset B(0,2),\
\text{supp}(\chi_j)\subset \{2^{j-1}\leq |\xi| \leq 2^{j+1}\},\ j\geq 1$, and set
$\widehat{\Delta_j\psi}:=\chi_j\widehat{\psi}$, $S_j\psi=\sum_0^j\Delta_k\psi$. Then
\begin{equation*}
(U^{-1}\nabla \psi^\pm)^2=\sum_{j\geq 0}(\nabla U^{-1}S_j\psi^\pm)(\nabla U^{-1}\Delta_j\psi^\pm)
+\sum_{j\geq 1}(\nabla U^{-1} S_{j-1}\psi^\pm) (\nabla U^{-1}\Delta_j\psi^\pm)
\end{equation*}
For any term of the first scalar product we have
\begin{eqnarray*}
x\big((\partial_kU^{-1}S_j\psi^\pm)(\partial_kU^{-1}\Delta_j\psi^\pm)\big)&=&
(\partial_kU^{-1}S_jx\psi^\pm)(\partial_k U^{-1}\Delta_j \psi^\pm)\\
&&+([x,\partial_kU^{-1}S_j]\psi^\pm) (\partial_kU^{-1}\Delta_j\psi^\pm).
\end{eqnarray*}
From H\"older's inequality, standard commutator estimates, the Besov embedding
$W^{3,6}\hookrightarrow B^{2}_{6,1}$ and $(\ref{5.5})$ we get
\begin{eqnarray}
\label{para1}
\sum_j\|(\partial_kU^{-1}S_jx\psi^\pm)(\partial_k U^{-1}\Delta_j \psi^\pm)\|_{L^{3/2}}
\lesssim \sum_j2^j\|x\psi \|_{L^{2}} 2^j\|\Delta_j\psi\|_{L^6}\lesssim \|x\psi\|_{L^2}
\|\psi\|_{W^{3,6}},\\
\label{para2}
\sum_j \|([x,\partial_kU^{-1}S_j]\psi^\pm) (\partial_kU^{-1}\Delta_j\psi^\pm)\|_{L^{3/2}}\lesssim
\|U^{-1}\psi\|_{H^1}\|\psi\|_{W^{1,6}}\lesssim \|\psi\|_{X_T}^2/\langle t\rangle.
\end{eqnarray}
Moreover, $x\psi=x e^{itH}e^{-itH}\psi=e^{itH}xe^{-itH}\psi+it\nabla_\xi H \psi$, so that
$$\|x\psi(t)\|_{L^2}\lesssim \langle t\rangle \|\psi\|_{X_T}.$$
Similar computations can be done for
$\sum_{j\geq 1}(\nabla U^{-1}S_{j-1} \psi^\pm) (\nabla U^{-1}\Delta_j\psi^\pm)$, finally
(\ref{para1}), (\ref{para2}) and (\ref{techj}) imply
\begin{equation*}
\|x(U^{-1}\nabla \psi^\pm)^2\|_{L^{3/2}}\lesssim \|\psi\|_{X_T}^2.
\end{equation*}
Plugging the last inequality in $(\ref{weightloss})$ we can conclude
\begin{equation*}
\|I_2\|_{L^\infty_TL^2}\lesssim \big\|\|\psi\|_{X_T}^3/\langle t\rangle\big\|_{L^2_T}\lesssim \|\psi\|_{X_T}^3.
\end{equation*}
\textbf{The $W^{k,p}$ decay} We can apply the dispersion estimate in the same way as
in section $\ref{sectiondlarge}$:
\begin{eqnarray}
\nonumber
\bigg\|\int_0^{t-1}e^{i(t-s)H}B[U^{-1}\psi^\pm,\,(U^{-1}\nabla \psi^\pm)^2]ds\bigg\|_{W^{k,p}}
&\lesssim &\int_0^{t-1}\frac{\|B[U^{-1}\psi^\pm,\,(U^{-1}\nabla \psi^\pm)^2]\|_{W^{k,p'}}}
{(t-s)^{1+3\varepsilon}}ds\\
\nonumber
&\lesssim& \int_0^{t-1}\frac{\|\nabla U^{-1}\psi\|_{W^{k,3p'}}^3}{(t-s)^{1+3\varepsilon}}ds\\
\label{intermcub} &\lesssim& \int_0^{t-1}\frac{\|\psi\|_{W^{k+1,3p'}}^3}{(t-s)^{1+3\varepsilon}}ds.
\end{eqnarray}
We now use interpolation together with the estimate (\ref{techj}) applied with $q=3p'$:
\begin{equation*}
\|\psi\|_{W^{k+1,3p'}}\lesssim \|\psi\|_{W^{k,3p'}}^{(J-1)/J}\|\psi\|_{W^{k+J,3p'}}^{1/J},
\|\psi(t)\|_{W^{k,3p'}}\lesssim \frac{\|\psi\|_{X_T}}{(1+t)^{2/3-\varepsilon}}.
\end{equation*}
Since $3p'<6$, we have $\|\psi\|_{W^{k+J,3p'}}\lesssim \|\psi\|_{H^{k+J+1}}$ by Sobolev embedding, so that for
$\varepsilon$ small enough and $J$ large enough such that $(2-3\varepsilon)(1-\frac{1}{J})\geq 1+3\varepsilon$ (but $J\leq N-k-1$), we obtain
$$\|\psi\|^3_{W^{k+1,3p'}}\lesssim
\frac{\|\psi\|^3_{X_T}}{\langle t\rangle^{1+3\varepsilon}}.$$
Plugging this inequality in $(\ref{intermcub})$ we conclude that
$$
\int_0^{t-1}\frac{\|\psi\|_{W^{k+1,3p'}}^3}{(t-s)^{1+3\varepsilon}}ds\lesssim \frac{\|\psi\|_{X_T}^3}{
\langle t\rangle^{1+3\varepsilon}}.
$$
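Regarding the admissibility of the parameters: one possible choice (among many) is $J=4$, for which
\begin{equation*}
(2-3\varepsilon)\Big(1-\frac{1}{4}\Big)=\frac{3}{2}-\frac{9\varepsilon}{4}\geq 1+3\varepsilon
\quad\Longleftrightarrow\quad \varepsilon\leq\frac{2}{21},
\end{equation*}
which is compatible with the constraint $J\leq N-k-1$ as soon as $N\geq k+5$.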
For the integral on $[t-1,t]$ it suffices to bound
$$\bigg\|\int_{t-1}^te^{i(t-s)H}B[U^{-1}\psi^\pm,
(U^{-1}\nabla \psi^\pm)^2]ds\bigg\|_{W^{k,p}}\lesssim \int_{t-1}^t\big\|B[U^{-1}\psi^\pm,
(U^{-1}\nabla \psi^\pm)^2]\big\|_{H^{k+2}}ds$$
and follow the argument of the proof of proposition
$\ref{decay}$.
\end{proof}
\section[Quadratic nonlinearities, end of proof]{Bounds for quadratic nonlinearities in
dimension 3, end of proof}
\label{estimquad}
The following proposition will be repeatedly used (see proposition 4.6 of \cite{AudHasp} or
\cite{GNT3}).
\begin{prop}\label{controlX}
We have the following estimates with $0\leq\theta\leq 1$:
\begin{equation}
\|\psi(t)\|_{\dot{H}^{-1}}\lesssim \|\psi(t)\|_{X(t)},
\label{5.5}
\end{equation}
\begin{equation}
\|U^{-2}\psi\|_{L^6}\lesssim \|\psi(t)\|_{X(t)}
\label{5.6}
\end{equation}
\begin{equation}
\begin{aligned}
&\||\nabla|^{-2+\frac{5\theta}{3}}\psi_{<1}(t)\|_{L^6}\lesssim\min(1,t^{-\theta})\|\psi(t)\|_{X(t)},\\
&\||\nabla|^{\theta}\psi_{\geq1}(t)\|_{L^6}\lesssim\min(t^{-\theta},t^{-1})\|\psi(t)\|_{X(t)}.
\end{aligned}
\label{5.8}
\end{equation}
\begin{equation}
\begin{aligned}
&\|U^{-1}\psi(t)\|_{L^6}\lesssim \langle t\rangle^{-\frac{3}{5}}\|\psi(t)\|_{X(t)},\\
\end{aligned}
\label{5.9}
\end{equation}
\label{gain}
\end{prop}
In this section, we will assume $\|\psi\|_{X_T}\ll 1$, for the only reason that
\begin{equation*}
\forall\,m>2,\ \|\psi\|_{X_T}^2+\|\psi\|_{X_T}^m\lesssim \|\psi\|_{X_T}^2.
\end{equation*}
All computations that follow can be done without any smallness assumption, but they would require
adding some $\|\psi\|_{X_T}^m$ terms at the end, which we avoid for conciseness.
\subsection{\texorpdfstring{The $L^p$ decay}{decay}}\label{declp}
We now prove decay for the quadratic terms in $(\ref{EKnormal})$, namely
$$\langle t\rangle^{1+3\varepsilon}\|\int^{t}_{0}e^{i(t-s)H}Q(z)(s)ds\|_{W^{k,p}}\lesssim \|z\|_{X_T}^2.$$
For $t\leq 1$, the estimate is a simple consequence of the product estimate
$\|Q(z)\|_{H^{k+2}}\lesssim \|z\|_{H^N}^2$
and the boundedness of $e^{itH}:H^s\rightarrow H^s$. Thus we focus on the case $t\geq 1$ and note that
it is sufficient to bound $t^{1+3\varepsilon}\|\int^{t}_{0}e^{i(t-s)H}Q(z)(s)ds\|_{W^{k,p}}$.\\
We recall that the quadratic terms have the following structure (see \eqref{EKnormal})
\begin{equation}\label{strucquad}
Q(z)=U\big(\alpha \,l_1\Delta l_1- \frac{1}{2}\big(|\nabla U^{-1}\phi_1|^2
-|\nabla l_1|^2 -\widetilde{g}''(1) l_1^2\big)+(-\Delta+2)b(\phi,l_1)\big)
-i\alpha\text{div}(l_1\nabla U^{-1}\phi_1),
\end{equation}
where $b=-B[\phi,\phi]+B[l_1,l_1],\ B(\eta,\xi-\eta)=\frac{(\alpha-1)\eta\cdot (\xi-\eta)}{2+|\eta|^2
+|\xi-\eta|^2}$ so that any term in $Q$ is of the form
$(U\circ B_j)[z^\pm,z^\pm],\ j=1\cdots 5$ where $B_j$
satisfies $B_j(\eta,\xi-\eta)\lesssim 2+|\eta|^2+|\xi-\eta|^2$.
\subsubsection{Splitting of the phase space}
We split the phase space $(\eta,\xi)$ into non time resonant and non space resonant sets:
let $(\chi^a)_{a\in 2^\mathbb{Z}}$ be a standard dyadic partition of unity: $\chi^a \geq 0$,
$\text{supp}(\chi^a)\subset \{|\xi|\sim a\}$,
$\forall\,\xi\in \mathbb{R}^3\setminus \{0\},\ \sum_{a} \chi^a(\xi)=1$. We define the frequency localized
symbol $B_j^{a,b,c}=\chi^a(\xi)\chi^b(\eta)\chi^c(\zeta)B_j$, where $\zeta=\xi-\eta$.\\
Note that due to the relation
$\xi=\eta+\zeta$, we have only to consider $B_j^{a,b,c} $ when $a\lesssim b\sim c,\ b\lesssim c\sim a$ or
$c\lesssim a\sim b$.
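This is the usual triangle inequality constraint: since $\xi=\eta+\zeta$,
\begin{equation*}
|\xi|\leq |\eta|+|\zeta|,\qquad |\eta|\leq |\xi|+|\zeta|,\qquad |\zeta|\leq |\xi|+|\eta|,
\end{equation*}
so the largest of the three frequencies is always comparable to the second largest, and
$B_j^{a,b,c}$ vanishes for all other triples $(a,b,c)$.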
We will define in the appendix two disjoint sets of indices $\mathcal{NT},\mathcal{NS}$ such that
$\mathcal{NT}\cup \mathcal{NS}=\mathbb{Z}^3$ and which correspond, in a sense made precise by lemmas
\ref{bacrucial1}, \ref{bacrucial}, to non time resonant and non space resonant frequencies.
Provided such sets have been constructed, we write
\begin{eqnarray*}
\sum_{a,b,c}\int^{t}_{0}e^{i(t-s)H}UB_j^{a,b,c}[z^\pm,z^\pm](s)ds&=&
\int^{t}_{0}e^{i(t-s)H}\bigg(\sum_{(a,b,c)\in \mathcal{NT}}UB_j^{a,b,c,T}[z^\pm,z^\pm]+
\sum_{(a,b,c)\in \mathcal{NS}}UB_j^{a,b,c,X}[z^\pm,z^\pm]\bigg)ds\\
&:=&\sum_{(a,b,c)\in \mathcal{NT}}I^{a,b,c,T}+
\sum_{(a,b,c)\in \mathcal{NS}}I^{a,b,c,X}.
\end{eqnarray*}
For $(a,b,c)\in \mathcal{NT}$ (resp. $\mathcal{NS}$) we will use an integration by parts
in time (resp. in the ``space'' variable $\eta$).
\subsubsection{Control of non time resonant terms}\label{secNRT}
The generic frequency localized
quadratic term is
\begin{eqnarray}
&e^{itH(\xi)}
\displaystyle\int_0^t\int_{\mathbb{R}^d} \bigg(e^{-is(H(\xi)\mp H(\eta)\mp
H(\xi-\eta))}U(\xi)B_j^{a,b,c,T}(\eta,\xi-\eta)\widetilde{z^\pm}(s,\eta)
\widetilde{z^\pm}(s,\xi-\eta)\bigg)d\eta \, ds
\end{eqnarray}
Regardless of the $\pm$, we set $\Omega=H(\xi)\mp H(\eta)\mp H(\xi-\eta)$. An
integration by parts in $s$ gives, using the facts that $e^{-i s\Omega}=\frac{-1}{i\Omega} \partial_s(e^{-i s\Omega})$ and $\partial_s
\widetilde{z^\pm}(\eta)=e^{ \mp i s H(\eta)} (\mathcal{N}_z)^\pm(\eta)$, $\partial_s
\widetilde{z^\pm}(\xi-\eta)= e^{ \mp i s H(\xi-\eta)} (\mathcal{N}_z)^\pm(\xi-\eta)$:
\begin{equation}
\begin{aligned}
I^{a,b,c,T}
=&\,{\cal F}^{-1}\bigg(e^{itH(\xi)}
\int_0^{t} \int_{\mathbb{R}^d} \frac{1}{i\Omega}\, e^{-is \Omega}\, U(\xi)B^{a,b,c,T}_j(\eta,\xi-\eta)\,\partial_s
\big(\widetilde{z^\pm}(\eta) \widetilde{z^\pm}(\xi-\eta)\big)\,d\eta\, ds\bigg)\\
&-\bigg[{\cal F}^{-1}\bigg(e^{itH(\xi)}
\int_{\mathbb{R}^d} \frac{1}{i\Omega}\, e^{-is \Omega(\xi,\eta)}\, U(\xi)B^{a,b,c,T}_j(\eta,\xi-\eta)\,
\widetilde{z^\pm}(\eta) \widetilde{z^\pm}(\xi-\eta)\,d\eta\bigg)
\bigg]^{t}_0\\
&=\int_0^{t}e^{i(t-s)H}\bigg(\mathcal{B}^{a,b,c,T}_3[(\mathcal{N}_z)^\pm,z^\pm]
+\mathcal{B}^{a,b,c,T}_3[z^\pm,\,(\mathcal{N}_z)^\pm]\bigg)ds\\
\label{IPPtemps}&\hspace{2cm}-\big[e^{i(t-s) H}\mathcal{B}^{a,b,c,T}_3[z^\pm,z^\pm]\big]_0^{t},
\end{aligned}
\end{equation}
with
$\displaystyle
\mathcal{B}^{a,b,c,T}_3(\eta,\xi-\eta)=\frac{U(\xi)}{i\Omega}\chi^{a}(\xi)\chi^b(\eta)
\chi^c (\xi-\eta)B_{j}(\eta,\xi-\eta)$.
\\
In order to use the rough multiplier estimate from theorem \ref{singmult}, we need to control
$\mathcal{B}_3^{a,b,c,T}$.
The following lemma extends the crucial multiplier estimates from \cite{GNT3} to our setting.
\begin{lemma}Let $m=\min(a,b,c),\ M=\max(a,b,c),\ l=\min(b,c)$.
For $0<s<2$, we have
\begin{equation}
\text{if }M\gtrsim 1,\
\|\mathcal{B}_3^{a,b,c,T}\|_{[B^{s}]}\lesssim \frac{\langle M\rangle l^{\frac{3}{2}-s}}{\langle a\rangle},
\qquad\text{if }M\ll 1,\ \|\mathcal{B}_3^{a,b,c,T}\|_{[B^{s}]}\lesssim l^{1/2-s}M^{-s}.
\label{10.43}
\end{equation}
\label{bacrucial1}
\end{lemma}
\noindent We postpone the proof to the appendix.
\begin{rmq}
We treat the cases $M$ small and $M$ large differently, since we have a loss of derivative on the symbol at low frequencies. Let us mention that the estimate \eqref{10.43} can be written simply as follows:\begin{equation*}
\|\mathcal{B}_3^{a,b,c,T}\|_{[B^{s}]}\lesssim \frac{\langle M\rangle \langle l\rangle l^{\frac{1}{2}-s}U(M)^{-s}}
{\langle a\rangle}
\end{equation*}
\end{rmq}
Let us start by estimating the first term in $(\ref{IPPtemps})$: we split the time integral
between $[0,t-1]$ and $[t-1,t]$.
The sum over $a,b,c$ involves three cases: $b\lesssim a\sim c,\ c\lesssim a\sim b$ and
$a\lesssim b\sim c$.
\subparagraph{The case \mathversion{bold}{$b\lesssim a\sim c$}:} For $k_1\in[0,k]$ we have, from
theorem $\ref{singmult}$ with $\sigma=1+3\varepsilon$:
\begin{equation}\label{ipptemps}
\begin{aligned}
&\|\nabla^{k_1}\int_0^{t-1}e^{i(t-s)H}\sum_{b\lesssim a\sim c} \mathcal{B}^{a,b,c,T}_3
[\mathcal{N}_z^\pm,z^\pm]ds\|_{L^p} \\
&\lesssim \int_0^{t-1}\frac{1}{(t-s)^{1+3\varepsilon}} \sum_{b\lesssim a\sim c}\langle a\rangle^{k_1}
\| \mathcal{B}^{a,b,c,T}_3[\mathcal{N}_z^\pm,z^\pm]\|_{L^{p'}} ds,\\
&\lesssim \int_0^{t-1}\frac{1}{(t-s)^{1+3\varepsilon}}\biggl( \sum_{b\lesssim a\sim c\lesssim 1}
ab\|\mathcal{B}^{a,b,c,T}_3\|_{[B^{\sigma}]} \|U^{-1}
Q(z)\|_{L^{2}}\|U^{-1}z\|_{L^{2}}\\
&\hspace{3cm}+ \sum_{b\lesssim a\sim c, 1\lesssim a} \langle c\rangle^{-N+k} U(b)
\|\mathcal{B}^{a,b,c,T}_3\|_{[B^{\sigma}]} \|U^{-1}Q(z)\|_{L^{2}}\|\langle \nabla\rangle^N z\|_{L^{2}}
\biggl) ds+\mathcal{R}
\end{aligned}
\end{equation}
where $\displaystyle
\mathcal{R}=\int_0^{t-1}\frac{1}{(t-s)^{1+3\varepsilon}} \sum_{b\lesssim a\sim c}\langle a\rangle^{k_1}
\|\mathcal{B}^{a,b,c,T}_3[R^\pm,z^\pm]\|_{L^{p'}} ds$.
Using lemma $\ref{bacrucial1}$ we have, provided $\varepsilon<\frac{1}{12}$ and $N-k-\frac{1}{2}+3\varepsilon>0$:
$$
\begin{aligned}
&\sum_{b\lesssim a\sim c\lesssim 1}ab\|\mathcal{B}^{a,b,c,T}_3\|_{[B^{\sigma}]}\lesssim
\sum_{ a\lesssim 1}\sum_{b\lesssim a} ab b^{1/2-1-3\varepsilon}a^{-1-3\varepsilon}\lesssim
\sum_{a\lesssim 1}a^{1/2-6\varepsilon}\lesssim 1,\\
&\sum_{b\lesssim a\sim c,\ a\gtrsim 1}U(b)\langle c\rangle^{-N+k}\|\mathcal{B}^{a,b,c,T}_3\|_{[B^{\sigma}]}
\lesssim \sum_{ a\gtrsim 1}\sum_{b\lesssim a} U(b) \frac{ b^{\frac{1}{2}-3\epsilon}}{a^{N-k}}
\lesssim \sum_{ a\gtrsim 1} \frac{ 1}{a^{N-k}}+ \sum_{ a\gtrsim 1} \frac{ 1}{a^{N-k-\frac{1}{2}+3\epsilon}}\lesssim 1.
\end{aligned}
$$
Using the gradient structure of $Q(z)$ (see \eqref{EKnormal}):
\begin{equation}
\begin{aligned}
& \|U^{-1}Q(z)\|_{L^{2}}\lesssim \|z\|_{W^{2,4}}^2
\lesssim\|z\|_{W^{2,6}}^{\frac{3}{2}}\|z\|_{H^2}^{\frac{1}{2}},
\end{aligned}
\label{decay1}
\end{equation}
so that if we combine these estimates with $\eqref{5.5}$, we get
$$
\begin{aligned}
\|\nabla^{k_1}\int_0^{t-1}e^{i(t-s)H}\sum_{b\lesssim a\sim c}
\mathcal{B}^{a,b,c,T}_3[Q(z)^\pm,z^\pm]ds\|_{L^p}&\lesssim\|z\|_{X}^3 \int_0^{t-1}
\frac{1}{(t-s)^{1+3\varepsilon}}\frac{1}{\langle s\rangle^{\frac{3}{2}} }ds\\
&\lesssim\frac{\|z\|_{X}^3}{t^{1+3\varepsilon}}.
\end{aligned}
$$
We now bound $\mathcal{R}$ from $(\ref{ipptemps})$: contrary to the quadratic terms, cubic
terms have no gradient structure; however, they decay fast enough that we can simply use $\|1_{|\eta|\lesssim 1}U^{-1}R\|_{L^2}
\lesssim \|R\|_{L^{6/5}}$ (by Hardy--Littlewood--Sobolev, since $U^{-1}(\eta)\sim \sqrt{2}/|\eta|$
for $|\eta|\lesssim 1$ and $|\nabla|^{-1}:L^{6/5}(\mathbb{R}^3)\rightarrow L^2$ is bounded).
Using the same computations as for the quadratic terms we get
$$
\begin{aligned}
&\|\nabla^{k_1}\int_0^{t-1}e^{i(t-s)H}\sum_{b\lesssim a\sim c}
\mathcal{B}^{a,b,c,T}_3[R,z^\pm]ds\|_{L^p}\\
&\hspace{2cm} \lesssim \int_0^{t-1}\frac{1}{(t-s)^{1+3\varepsilon}}\biggl(
\|1_{\{|\eta|\lesssim1\}}U^{-1}R\|_{L^{2}}\|U^{-1}z\|_{L^{2}}
+\|U^{-1}R\|_{L^{2}}\|\langle \nabla\rangle^N z\|_{L^{2}}\biggl) ds.
\end{aligned}
$$
According to $(\ref{EKnormal})$ the cubic terms involve only smooth multipliers
and do not contain derivatives of order larger than $2$, thus we can generically treat them like
$(\langle \nabla \rangle^2z)^3$ using proposition \ref{estimformenormale}; we then have:
\begin{eqnarray*}
\|R\|_{L^{6/5}}\lesssim \|z\|_{H^2}\|z\|_{W^{2,6}}^2\lesssim \frac{\|z\|_X^3}{
\langle t\rangle^2},\
\|R\|_{L^2}\lesssim \|z\|_{W^{2,6}}^3\lesssim \frac{\|z\|_X^3}{\langle t\rangle^2}.
\end{eqnarray*}
This closes the estimate as $\displaystyle \int_0^{t-1}\frac{1}{(t-s)^{1+3\varepsilon}\langle s\rangle^2}ds
\lesssim \frac{1}{t^{1+3\varepsilon}}$. We proceed similarly for the quartic terms.
\\
It remains to deal with the term $\int^t_{t-1}$: using Sobolev embedding we have:
$$
\begin{aligned}
&\|\nabla^{k_1}\int_{t-1}^{t}e^{i(t-s)H}\sum_{b\lesssim a\sim c} \mathcal{B}^{a,b,c,T}_3
[\mathcal{N}_z^\pm,z^\pm]ds\|_{L^p}\lesssim \int_{t-1}^{t}\|(\cdots )\|_{H^{k_2}}ds,
\end{aligned}
$$
with $k_2=k+1+3\varepsilon$. Again, with $\sigma=1+3\varepsilon$ we get using theorem \ref{singmult} and Sobolev embedding:
$$
\begin{aligned}
&\|\nabla^{k_1}\int_{t-1}^{t}e^{i(t-s)H}\sum_{b\lesssim a\sim c} \mathcal{B}^{a,b,c,T}_3
[\mathcal{N}_z^\pm,z^\pm]ds\|_{L^p}\lesssim \int_{t-1}^{t}\|\sum_{b\lesssim a\sim c}
\mathcal{B}^{a,b,c,T}_3[\mathcal{N}_z^\pm,z^\pm]\|_{H^{k_2}} ds\\
&\lesssim \int_{t-1}^{t}\big(\sum_{b\lesssim a\sim c\lesssim 1} ab\|\mathcal{B}^{a,b,c,T}_3
\|_{[B^\sigma]} \|U^{-1}Q\|_{L^{2}}\|U^{-1}z\|_{L^p} \\
&\hspace{3cm}+\sum_{b\lesssim a\sim c, 1\lesssim a} U(b)a^{k_2-(N-1-3\epsilon)} \|\mathcal{B}^{a,b,c,T}_3\|_{[B^\sigma]}
\|U^{-1}Q\|_{L^{2}}\|\langle\nabla\rangle^Nz\|_{L^2}\big) ds+\mathcal{R},\\[2mm]
\end{aligned}
$$
where $\mathcal{R}$ contains higher order terms that are easily controlled.
Using $\|U^{-1}z\|_{L^p}\lesssim \|z\|_{H^2}$ and the same estimates as previously, we
can conclude provided that $N$ is sufficiently large:
$$
\|\nabla^{k_1}\int_{t-1}^{t}e^{i(t-s)H}\sum_{b\lesssim a\sim c} \mathcal{B}^{a,b,c,T}_3
[\mathcal{N}_z^\pm,z^\pm] ds\|_{L^{p}}\lesssim \|z\|_{X}^3 \int_{t-1}^{t}
\frac{1}{\langle s\rangle^{3/2}} ds \lesssim\frac{\|z\|_{X}^3}{t^{1+3\varepsilon}}.
$$
\subparagraph{The case \mathversion{bold}{$c\lesssim a\sim b$}} As for the case
$b\lesssim a\sim c$, we start with
$$
\begin{aligned}
&\|\nabla^{k_1}\int_1^{t-1}e^{i(t-s)H}\sum_{c\lesssim a\sim b} \mathcal{B}^{a,b,c,T}_3
[\mathcal{N}_z^\pm,z^\pm]ds\|_{L^p}\\
&\lesssim \int_1^{t-1}\frac{1}{(t-s)^{1+3\varepsilon}}\biggl( \sum_{c\lesssim a\sim b\lesssim 1}bc
\|\mathcal{B}^{a,b,c,T}_3\|_{[B^{\sigma}]}\|U^{-1}Q(z)\|_{L^{2}}\|U^{-1}z\|_{L^{2}}\\
&\hspace{3cm}+ \sum_{c\lesssim a\sim b, 1\lesssim a}
\langle b\rangle^{-1}\|\mathcal{B}^{a,b,c,T}_3\|_{[B^{\sigma}]}
\|\langle\nabla\rangle^{k+1} Q(z)\|_{L^{2}}\|z\|_{L^{2}}\biggl) ds+
\mathcal{R}.
\end{aligned}
$$
with $\sigma=1+3\varepsilon$, where $\mathcal{R}$ contains the other nonlinear terms (which, again, we will
not detail). This case is symmetric
to $b\lesssim a\sim c$, except for the term $\|\langle \nabla\rangle ^{k+1}Q(z)\|_{L^2}$, which is
estimated as follows.
Let $1/q=1/3+\varepsilon$, $k_3=\frac{1}{2}-3\varepsilon$. If $k+2+k_3\leq N$ then using the structure of $Q$
(see \eqref{strucquad}) and Gagliardo Nirenberg inequalities we get:
$$
\begin{aligned}
\|\langle \nabla\rangle ^{k+1} Q(z)\|_{L^{2}}\lesssim
\|z\|_{W^{2,p}}\|z\|_{W^{k+3,q}}\lesssim \|z\|_{W^{2,p}}\|z\|_{H^{k+3+k_3}}
&\lesssim \|z\|_{X}^2/\langle t\rangle^{1+3\varepsilon}.
\end{aligned}
$$
Using the multiplier bounds as in the case $b\lesssim a\sim c$, we obtain via lemma \ref{bacrucial1}:
$$
\begin{aligned}
\|\nabla^{k_1}\int_0^{t-1}e^{i(t-s)H}\sum_{c\lesssim a\sim b} \mathcal{B}^{a,b,c,T}_3
[\mathcal{N}_z^\pm,z^\pm]ds\|_{L^{p}}
\lesssim &\|z\|_{X}^3 \int_{0}^{t-1}\frac{1}{(t-s)^{1+3\varepsilon}}\frac{1}{\langle s\rangle^{(1+3\varepsilon)}}
ds\\
\lesssim&\frac{\|z\|_{X}^3}{t^{1+3\varepsilon}}.
\end{aligned}
$$
The bound for the integral on $[t-1,t]$ is obtained by similar arguments.
\subparagraph{The case \mathversion{bold}{$a\lesssim b\sim c$}} We have, using theorem \ref{singmult} and the fact that
the support of ${\cal F}\big(\sum_{a\lesssim b}a^{k_1}
\mathcal{B}^{a,b,c,T}_3[\mathcal{N}_z^\pm,z^\pm]\big)$ is localized in a ball $B(0,b)$:
$$
\begin{aligned}
&\|\nabla^{k_1}\int_0^{t-1}e^{i(t-s)H}\sum_{a\lesssim b\sim c} \mathcal{B}^{a,b,c,T}_3
[\mathcal{N}_z^\pm,z^\pm]ds\|_{L^p}\\
&\lesssim \int_0^{t-1}\frac{1}{(t-s)^{1+3\varepsilon}}\|\sum_{a\lesssim b \sim c}a^{k_1}
\mathcal{B}^{a,b,c,T}_3[\mathcal{N}_z^\pm,z^\pm]\|_{L^{p'}}ds\\
&\lesssim \int_0^{t-1}\frac{1}{(t-s)^{1+3\varepsilon}} \sum_{b\sim c}\frac{1}{\langle b\rangle^{N-2}}U(b)U(c)\|
\sum_{a\lesssim b}\langle a\rangle^{k} \mathcal{B}^{a,b,c,T}_3\|_{[B^\sigma]}
\|U^{-1}Q(z)\|_{L^2}\|U^{-1}\langle\nabla\rangle^Nz\|_{L^{2}} ds\\
&\hspace{1cm} +\mathcal{R},
\end{aligned}
$$
where as previously, $\mathcal{R}$ is a remainder of higher order terms that are not difficult to
bound. We observe that for any symbols $(B^a(\xi,\eta))$ such that
$$\forall\,\eta,\ |a_1-a_2|\geq 2\Rightarrow
\text{supp}(B^{a_1}(\cdot,\eta))\cap \text{supp}(B^{a_2}(\cdot,\eta))=\emptyset,$$
then
\begin{equation}\label{tricksuma}
\|\sum_a B^{a}\|_{[B^\sigma]}\lesssim \sup_a \|B^a\|_{[B^\sigma]}.
\end{equation}
This implies using lemma \ref{bacrucial1} and provided that $N$ is large enough:
$$
\begin{aligned}
\sum_{b\sim c }\frac{1}{\langle b\rangle^{N-2}} U(b)U(c)\|\sum_{a\lesssim b}\langle a\rangle^{k}
\mathcal{B}^{a,b,c,T}_3\|_{[B^\sigma]}
&\lesssim \sum_{b }\frac{1}{\langle b\rangle^{N-2}} U(b)^2\sup_{a\lesssim b} \langle a\rangle^{k}
\frac{ b^{\frac{1}{2}-\sigma}U(M)^{-\sigma}\langle b\rangle\langle M\rangle}{\langle a\rangle } \\
&\lesssim \sum_b\frac{U(b)^{5/2-2\sigma} }{\langle b\rangle^{N+\sigma-k-7/2}}\lesssim 1.
\end{aligned}
$$
We have finally using (\ref{decay1}):
$$
\begin{aligned}
\|\nabla^{k_1}\int_0^{t-1}e^{i(t-s)H}\sum_{a\lesssim b\sim c} \mathcal{B}^{a,b,c,T}_3
[\mathcal{N}_z^\pm,z^\pm]ds\|_{L^{p}}&\lesssim \|z\|_{X}^3 \int_{0}^{t-1}\frac{1}{(t-s)^{1+3\varepsilon}}
\frac{1}{\langle s\rangle^{3/2}}ds\\
&\lesssim\frac{ \|z\|_{X}^3}{t^{1+3\varepsilon}}.
\end{aligned}
$$
We proceed in a similar way to deal with the integral on $[t-1,t]$. This ends the estimate for the first term in \eqref{IPPtemps}.\vspace{2mm}\\
The second term is symmetric to the first; it remains to deal with the
boundary term:
$\displaystyle \|\nabla^{k_1}\big[e^{i(t-s)H}
\mathcal{B}^{a,b,c,T}_3[z^\pm,z^\pm]\big]_0^t\|_{L^{p}}.$
We have:
\begin{equation}
\begin{aligned}
\|\big[\nabla^{k_1}e^{i(t-s)H}\mathcal{B}^{a,b,c,T}_3[z^\pm,z^\pm]\big]_0^t\|_{L^{p}}
\leq &\|\nabla^{k_1}e^{-itH}\mathcal{B}^{a,b,c,T}_3[z^\pm_0,z^\pm_0]\|_{L^{p}}\\
&+\|\nabla^{k_1}\mathcal{B}^{a,b,c,T}_3[z^\pm(t),z^\pm(t)]\|_{L^{p}}
\end{aligned}
\label{inttemps}
\end{equation}
The first term on the right-hand side
of (\ref{inttemps}) is easy to deal with using the dispersive estimates of theorem \ref{dispersion}. For the second term we focus on the case
$b\lesssim a\sim c$; the other areas can be treated in a similar way. Using proposition
\ref{gain}, Sobolev embedding and the rough multiplier theorem \ref{singmult} with $s=1+3\epsilon$, $q_1=q_2=q_3=p$ (which verifies $2\leq p=\frac{6}{3-2\epsilon}$) we have:
$$
\begin{aligned}
&\sum_{b\lesssim a\sim c\lesssim 1}\|\nabla^{k_1}\mathcal{B}^{a,b,c,T}_3[z^\pm(t),z^\pm(t)]\|_{L^{p}}
\lesssim \displaystyle
\sum_{b\lesssim a\sim c}b^{-\frac{1}{2}-3\epsilon}a^{-1-3\epsilon}U(b)U ( c)\|U^{-1}z\|_{L^{p}}^2\\
&\hspace{6cm}\lesssim \sum_{b\lesssim a\sim c}b^{-\frac{1}{2}-3\epsilon}a^{-1-3\epsilon}U(b)U( c)\|U^{-1+3\epsilon}z\|_{L^{6}}^2
\lesssim \frac{\|z\|_X^2}{\langle t\rangle^{\frac{6}{5}+6\epsilon}},\\
&\sum_{b\lesssim a\sim c,\ a\gtrsim 1}\|\nabla^{k_1}\mathcal{B}^{a,b,c,T}_3[z^\pm(t),z^\pm(t)]\|_{L^{p}}
\lesssim \displaystyle
\sum_{b\lesssim a\sim c,\ a\gtrsim 1}\frac{\langle a\rangle^{k_1}b^{1/2-3\epsilon}}{\langle a\rangle^{k_1+1}}
\|z\|_{L^{p}}\|z\|_{W^{k_1+1,p}}
\lesssim \frac{\|z\|_X^2}{\langle t\rangle^{\frac{3}{2}(1+\epsilon)}}
\end{aligned}
$$
where in the last inequality we also used $\|z\|_{W^{k+1,6}}^2\lesssim \|z\|_{W^{k,p}}
\|z\|_{W^{k+2,p}}\lesssim \|z\|_{W^{k,p}}\|z\|_{H^N}$.
\subsubsection{Non space resonance}
\label{secNRS}
In this section we treat the term $\sum_{a,b,c}I^{a,b,c,X}$.
Since control for $t$ small just follows from the $H^N$ bounds, we focus on $t\geq 1$, and
first note that the integral over $[0,1]\cup [t-1,t]$ is easy to estimate.\vspace{3mm}\\
\textbf{Bounds for $(\int_0^1+\int^t_{t-1})e^{i(t-s)H}Q(z)ds$}\vspace{2mm}\\
In order to estimate $\displaystyle \|\nabla ^{k_1}\int^t_{t-1}e^{i(t-s)H}Q(z)ds
\|_{L^{p}},$
with $k_1\in[0,k]$ we can simply use Sobolev's embedding ($H^{k+2}\hookrightarrow W^{k,p}$, $H^{N}\hookrightarrow W^{k+4,q}$) and a Gagliardo-Nirenberg type
inequality (\ref{GN3}) with $\frac{1}{2}=\frac{1}{q}+\frac{1}{p}$ :
$$
\begin{aligned}
\|\int^t_{t-1}\nabla^{k_1}e^{i(t-s)H}Q(z)ds\|_{L^{p}}
&\lesssim \int^t_{t-1}\|Q(z)\|_{H^{k+2}}ds\\
&\lesssim \int^t_{t-1}\|z\|_{W^{k+4,q}}\|z\|_{W^{k,p}}ds\\
&\lesssim \|z\|_{X}^2\int^t_{t-1} \frac{1}{\langle s\rangle^{1+3\varepsilon}}ds
\lesssim \frac{\|z\|_{X}^2}{\langle t\rangle^{1+3\varepsilon}}.
\end{aligned}
$$
The estimate on $[0,1]$ follows from similar computations using Minkowski's inequality and the
dispersion estimate from theorem $\ref{dispersion}$.\vspace{2mm}\\
\textbf{Frequency splitting}\vspace{2mm}\\
Since we only control $xe^{-itH}z$ in $L^\infty L^2$, in order to handle the loss of derivatives
we follow the idea from \cite{GMS2}, which consists in distinguishing
low and high frequencies with a threshold frequency depending on $t$. Let $\theta\in C_c^\infty (\mathbb{R}^+)$,
$\theta|_{[0,1]}=1,\
\text{supp}(\theta)\subset[0,2]$, and $\Theta(t)=\theta(\frac{|D|}{t^\delta})$; for any quadratic term
$B_j[z,z]$, we write
\begin{equation*}
B_j[z^\pm,z^\pm]=\overbrace{B_j[(1-\Theta(t))z^\pm,z^\pm]
+B_j[\Theta(t)z^\pm,(1-\Theta)(t)z^\pm]}
^{\text{high frequencies}}
+\overbrace{B_j[\Theta(t)z^\pm,\Theta(t)z^\pm]}^{\text{low frequencies}}.
\end{equation*}
\subsubsection*{High frequencies}
Using the dispersion theorem \ref{dispersion}, the Gagliardo-Nirenberg estimate (\ref{GN3}) and Sobolev embedding, we have for $\frac{1}{p_1}=\frac{1}{3}+\varepsilon$ and for any quadratic term of $Q$ written in the form $U B_j[z^\pm,z^\pm]$:
\begin{equation}
\begin{aligned}
&\bigg\|\int^{t-1}_{1}e^{i(t-s)H}\big(UB_j[(1-\Theta(t))z^\pm,z^\pm]
+U B_j[\Theta(t)z,(1-\Theta)(t)z^\pm]\big)ds\bigg\|_{W^{k,p}}\\
&\leq \int^{t-1}_{1}\frac{1}{(t-s)^{1+3\varepsilon}}\|z\|_{W^{k+2,p_1}} \|(1-\Theta(s))z\|_{H^{k+2}}ds\\
&\leq \int^{t-1}_{1}\frac{1}{(t-s)^{1+3\varepsilon}}\|z\|_{H^N}^2\frac{1}{s^{\delta(N-2-k)}}ds,
\end{aligned}
\end{equation}
choosing $N$ large enough so that $\delta(N-2-k)\geq 1+3\varepsilon$, we obtain the expected decay.
\subsubsection*{Low frequencies}
Following section $\ref{secNRT}$, we have to estimate quadratic terms of the form $UB_j[z^\pm,z^\pm]$, which leads us to consider:
$${\cal F}I^{a,b,c,X}_3=e^{itH(\xi)}
\int_1^{t-1}\int_{\mathbb{R}^d} \bigg(e^{-is\Omega}\, UB^{a,b,c,X}_j(\eta,\xi-\eta)
\widetilde{\Theta z^\pm}(s,\eta)\widetilde{\Theta z^\pm}(s,\xi-\eta)\bigg)d\eta\, ds,$$
with $\Omega=H(\xi)\mp H(\eta)\mp H(\xi-\eta)$. Using $\displaystyle e^{-is\Omega}=\frac{i\nabla_{\eta}\Omega}{s|\nabla_{\eta}\Omega|^2}
\cdot \nabla_{\eta}e^{-is\Omega}$ and denoting $Ri=\frac{\nabla}{|\nabla|}$
the Riesz operator, $\Theta'(t):=\theta'(\frac{|D|}{t^\delta})$, $J=e^{itH}xe^{-itH}$, an integration by
part in $\eta$ gives:
\begin{equation}
\begin{aligned}
I^{a,b,c,X}_3=
&-{\cal F}^{-1}\bigg(e^{itH(\xi)}
\int_1^{t-1}\frac{1}{s} \int_{\mathbb{R}^d} \big(e^{-is \Omega(\xi,\eta)} \mathcal{B}^{a,b,c,X}_{1,j}(\eta,\xi-\eta)
\cdot \nabla_\eta [\Theta\widetilde{z^\pm}(\eta)\Theta\widetilde{z^\pm}(\xi-\eta)]\\
&\hspace{65mm}+\mathcal{B}^{a,b,c,X}_{2,j}(\eta,\xi-\eta)\widetilde{\Theta z^\pm}(\eta)
\widetilde{\Theta z^\pm}(\xi-\eta)\big)d\eta\, ds\bigg)\\
=&
-\int_1^{t-1}\frac{1}{s}e^{i(t-s)H}\bigg(\mathcal{B}^{a,b,c,X}_{1,j}
[\Theta(s)(Jz)^\pm,\Theta(s)z^\pm]
-\mathcal{B}^{a,b,c,X}_{1,j}[\Theta(s)z^\pm,\Theta(s)(Jz)^\pm]\\
&\hspace{8cm}
+\mathcal{B}^{a,b,c,X}_{2,j}[\Theta(s)z^\pm,\Theta(s)
z^\pm]\bigg)ds\\
&-\int_1^{t-1} \frac{1}{s}e^{i(t-s)H}\bigg(\mathcal{B}^{a,b,c,X}_{1,j}
[\frac{1}{s^\delta}Ri\,\Theta'(s)z^\pm,\Theta(s) z^\pm]\\
&\hspace{3.5cm}-\mathcal{B}^{a,b,c,X}_{1,j}[\Theta(s)z^\pm,\frac{1}{s^\delta}Ri
\Theta'(s)z^\pm]\bigg)ds.
\end{aligned}
\label{ippespace}
\end{equation}
with:
\begin{equation*}
\displaystyle\mathcal{B}^{a,b,c,X}_{1,j}=\frac{ U(\xi)\nabla_{\eta}\Omega}{|\nabla_{\eta}
\Omega|^2}B^{a,b,c,X}_j,\
\displaystyle\mathcal{B}^{a,b,c,X}_{2,j}=\nabla_{\eta}\cdot\mathcal{B}_{1,j}^{a,b,c,X}.
\end{equation*}
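For completeness, the identity used to integrate by parts can be checked directly: since $\nabla_{\eta}e^{-is\Omega}=-is(\nabla_{\eta}\Omega)e^{-is\Omega}$,
$$\frac{i\nabla_{\eta}\Omega}{s|\nabla_{\eta}\Omega|^2}\cdot\nabla_{\eta}e^{-is\Omega}
=\frac{i\,(-is)|\nabla_{\eta}\Omega|^2}{s|\nabla_{\eta}\Omega|^2}\,e^{-is\Omega}
=e^{-is\Omega},$$
which is legitimate on the space non-resonant region, where $\nabla_{\eta}\Omega$ does not vanish.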
The following counterpart of lemma \ref{bacrucial1} slightly improves the estimates from
\cite{GNT3}.
\begin{lemma}
\label{bacrucial} Denoting $M=\max (a,b,c)$, $m=\min(a,b,c)$ and $l=\min(b,c)$ we have:
\begin{itemize}
\item If $M\ll 1$ then for $0\leq s\leq 2$:
\begin{equation}
\|\mathcal{B}^{a,b,c,X}_{1,j}\|_{[B^{s}]}\lesssim l^{\frac{3}{2}-s}M^{1-s},\;\;
\|\mathcal{B}^{a,b,c,X}_{2,j}\|_{[B^{s}]}\lesssim l^{\frac{1}{2}-s}M^{-s},
\end{equation}
\item If $M\gtrsim 1$ then for $0\leq s\leq 2$:
\begin{equation}
\|\mathcal{B}^{a,b,c,X}_{1,j}\|_{[B^{s}]}\lesssim \langle M\rangle^2l^{3/2-s}\langle a\rangle^{-1},\;\;
\|\mathcal{B}^{a,b,c,X}_{2,j}\|_{[B^{s}]}\lesssim \langle M\rangle^2l^{1/2-s}\langle a\rangle^{-1}.
\end{equation}
\end{itemize}
\end{lemma}
\noindent We now use these estimates to bound the first term of \eqref{ippespace}. As in paragraph \ref{secNRT},
the estimates are independent of $j$, so we drop this index for conciseness. There are
three areas to consider: $c\lesssim a\sim b,\ b\lesssim c\sim a,\ a\lesssim b\sim c$.
\subparagraph{The case \mathversion{bold}{$c\lesssim a\sim b$}} Let $\varepsilon_1>0$ be a constant, to be
fixed later. Using Minkowski's inequality,
dispersion and the rough multiplier theorem \ref{singmult} with $s=1+\varepsilon_1$, $\frac{1}{q}=1/2+\varepsilon-\frac{\varepsilon_1}{3}$
for $a\lesssim 1$, and $s=4/3$, $\frac{1}{q_1}=7/18+\varepsilon$ for $a\gtrsim 1$, we obtain
$$
\begin{aligned}
&\big\|\nabla^{k_1}\int_1^{t-1}\frac{1}{s}e^{i(t-s)H}\sum_{c\lesssim a\sim b}
\mathcal{B}^{a,b,c,X}_{1}[\Theta(s)(Jz)^\pm,\Theta(s)z^\pm]ds\big\|_{L^{p}}\\
&\lesssim \int_1^{t-1}\frac{1}{s(t-s)^{1+3\varepsilon}}\big(\sum_{c\lesssim a\sim b\lesssim 1}
\|\mathcal{B}^{a,b,c,X}_{1}\|_{[B^{1+\varepsilon_1}]}\|\Theta(s)Jz\|_{L^2}
\|\Theta(s)z\|_{L^{q}}\\
&\hspace{13mm}+\sum_{c\lesssim a\sim b,\ 1\lesssim a\lesssim s^\delta}a^{k}
\|\mathcal{B}^{a,b,c,X}_{1}\|_{[B^{4/3}]}\|\Theta(s)Jz\|_{L^{2}}
\|\Theta(s)z\|_{L^{q_1}}\big)ds\\[2mm]
&\lesssim \int_1^{t-1}\frac{1}{s(t-s)^{1+3\varepsilon}}\big(\sum_{a\lesssim 1}\sum_{c\lesssim a\sim b}
\|\mathcal{B}^{a,b,c,X}_{1}\|_{[B^{1+\varepsilon_1}]}\|\Theta(s)Jz\|_{L^2}
\|\Theta(s)z\|_{L^{q}}\\
&\hspace{2cm}+\sum_{1\lesssim a \lesssim s^{\delta}}a^{k}\sum_{c\lesssim a\sim b}
\|\mathcal{B}^{a,b,c,X}_{1}\|_{[B^{4/3}]}\|\Theta(s)Jz\|_{L^{2}}
\|\Theta(s)z\|_{L^{q_1}}\big)ds.
\end{aligned}
$$
Using lemma \ref{bacrucial} and interpolation we have for $\varepsilon_1<1/4$ and
$\varepsilon_1-3\varepsilon>0$,
$$
\begin{aligned}
&\sum_{a\lesssim 1}\sum_{c\lesssim a\sim b}
\|\mathcal{B}^{a,b,c,X}_{1}\|_{[B^{1+\varepsilon_1}]}\lesssim
\sum_{a\lesssim 1}a^{1-(1+\varepsilon_1)}\sum_{c\lesssim a}c^{\frac{3}{2}-(1+\varepsilon_1)}\lesssim 1,\\
&\|\psi(s)\|_{L^{q}}\lesssim\|\psi(s)\|^{\frac{\varepsilon_1-3\varepsilon}{1+3\varepsilon}}_{L^{p}}\|\psi(s)\|^{1-\frac{\varepsilon_1-3\varepsilon}{1+3\varepsilon}}_{L^{2}} \lesssim \frac{\|\psi\|_{X}}{s^{\varepsilon_1-3\varepsilon}}.
\end{aligned}
$$
In high frequencies we have:
$$
\begin{aligned}
&\sum_{1\lesssim a \lesssim s^{\delta}}a^{k} \sum_{c\lesssim a\sim b}
\frac{\langle M\rangle^2c^{3/2-4/3}}{\langle a\rangle}
\lesssim s^{\delta(k+7/6)},\
\|\psi(s)\|_{L^{q_1}}\lesssim \frac{\|\psi\|_X}{s^{1/3-3\varepsilon}}.
\end{aligned}
$$
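For the reader's convenience, the first bound can be justified by dyadic summation: in this zone $M=a\gtrsim 1$, so $\langle M\rangle^2\langle a\rangle^{-1}\sim a$ and $\sum_{c\lesssim a}c^{3/2-4/3}\sim a^{1/6}$, whence
$$\sum_{1\lesssim a \lesssim s^{\delta}}a^{k}\cdot a\cdot a^{1/6}
=\sum_{1\lesssim a \lesssim s^{\delta}}a^{k+7/6}\lesssim s^{\delta(k+7/6)},$$
the geometric sum being dominated by its largest term.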
Finally we conclude that if $\min\big(\varepsilon_1-3\varepsilon,1/3-3\varepsilon
-\delta(k+7/6)\big)\geq 3\varepsilon$ (this choice is possible provided $\varepsilon$ and $\delta$ are
small enough):
$$
\begin{aligned}
\|\nabla^{k_1}\int_1^{t-1}\frac{1}{s}e^{i(t-s)H}\sum_{a,b,c}\mathcal{B}^{a,b,c,X}_{1}
[\Theta(s)(Jz)^\pm,\Theta(s)z^\pm]ds\|_{L^{p}}
&\lesssim \int_1^{t-1}\frac{\|z\|_X^2}{s^{1+3\varepsilon}(t-s)^{1+3\varepsilon}}
ds
\\
&\lesssim \frac{\|z\|_X^2}{t^{1+3\varepsilon}}.
\end{aligned}
$$
The case $b\lesssim c\sim a$ is very similar, and the case $a\lesssim b\sim c$ involves an infinite sum
over $a$, which can be handled as in the non-time-resonant case with observation \eqref{tricksuma}.
The term $\displaystyle \nabla^{k_1}\int_1^{t-1}\frac{1}{s}e^{i(t-s)H}\mathcal{B}^{a,b,c,X}_{1}
[\Theta(s)z^\pm,\Theta(s)(Jz)^\pm]ds$ is handled symmetrically, while the terms
$$
\begin{aligned}
&\|\nabla^{k_1}\int_1^{t-1} \frac{1}{s}e^{i(t-s)H}\big(\mathcal{B}^{a,b,c,X}_{1}
[\frac{1}{s^\delta}Ri\Theta'(s)z^\pm,\Theta(s)z^\pm]\\
&\hspace{3cm}-\mathcal{B}^{a,b,c,X}_{1}[\Theta(s)z^\pm,
\frac{1}{s^\delta}Ri\Theta'(s)z^\pm]\big)ds\|_{L^p},
\end{aligned}
$$
are simpler since there is no weighted term $Jz$ involved.\vspace{2mm} \\
The last term to consider
is
$$\big\|\nabla^{k_1}\int_1^{t-1}\frac{1}{s}e^{i(t-s)H}\sum_{a,b,c}\mathcal{B}^{a,b,c,X}_{2}
[\Theta(s)z^\pm,\Theta(s)z^\pm]ds\big\|_{L^p}. $$
Let us start with the zone $b\lesssim a\sim c$. We use the same indices as for
$\mathcal{B}_1^{a,b,c}$: $s=1+\varepsilon_1$,
$\frac{1}{q}=1/2+\varepsilon-\varepsilon_1/3$, $s_1=4/3$, $\frac{1}{q_1}=7/18+\varepsilon$,
\begin{equation}\label{estimX}
\begin{aligned}
&\big\|\nabla^{k_1}\int_1^{t-1}\frac{1}{s}e^{i(t-s)H}\sum_{b\lesssim a}\mathcal{B}^{a,b,c,X}_{2}
[\Theta(s)z^\pm,\Theta(s)z^\pm]ds\big\|_{L^p}\\
&\lesssim \int_1^{t-1}\frac{1}{s(t-s)^{1+3\varepsilon}}\big(\sum_{a\lesssim 1}\sum_{b\lesssim a\sim c}
U(b)U(c)\|\mathcal{B}^{a,b,c,X}_{2}\|_{[B^{1+\varepsilon_1}]}\|U^{-1}\Theta(s)z\|_{L^2}\|U^{-1}\Theta(s)
z\|_{L^q}\\
&\hspace{3cm}
+\sum_{1\lesssim a \lesssim s^{\delta}}a^{k}\sum_{b\lesssim a\sim c}\frac{U(b)}{\langle c\rangle^k}
\|\mathcal{B}^{a,b,c,X}_{2}\|_{[B^{4/3}]}\|U^{-1}\Theta(s)z\|_{L^{2}}
\|\langle \nabla\rangle^k\Theta(s)z\|_{L^{q_1}}\big)ds.
\end{aligned}
\end{equation}
For $M\lesssim 1$ we have if $\varepsilon_1<1/4$:
$$
\sum_{a\lesssim 1}\sum_{b\lesssim c\sim a}U(b)U(c)\|\mathcal{B}_2^{a,b,c,X}\|
_{[B^{1+\varepsilon_1}]}\lesssim
\sum_{a\lesssim 1}\sum_{b\lesssim c\sim a}b^{1/2-\varepsilon_1}a^{-\varepsilon_1}\lesssim 1.
$$
Furthermore, we have from proposition \ref{controlX}:
\begin{equation*}
\|U^{-1}\psi(s)\|_{L^2}\lesssim\|\psi\|_{X},
\
\|U^{-1}\psi(s)\|_{L^{q}}\lesssim \|U^{-1}\psi\|_{L^2}^{1-\varepsilon_1+3\varepsilon}
\|U^{-1}\psi\|_{L^6}^{\varepsilon_1-3\varepsilon}\lesssim \frac{\|\psi\|_{X}}{s^{\frac{3(\varepsilon_1-3\varepsilon)}{5}}}.
\end{equation*}
Now, for $M\gtrsim 1$,
$$
\sum_{1\lesssim a \lesssim s^{\delta}}a^{k}\sum_{b\lesssim c\sim a}
\frac{ U(b) \langle M\rangle^2b^{1/2-4/3}}{\langle a\rangle \langle c\rangle^k}\lesssim \sum_{1\lesssim a\lesssim s^\delta}
a \lesssim s^{\delta},\hspace{0.4cm}
\|\langle \nabla\rangle^k\Theta(s)z\|_{L^{q_1}}\lesssim \frac{\|z\|_X}{s^{1/3-3\varepsilon}}.$$
If $\min\big(3(\varepsilon_1-3\varepsilon)/5,1/3-3\varepsilon-\delta\big)\geq 3\varepsilon$,
inserting these estimates into \eqref{estimX} gives
$$
\big\|\nabla^{k_1}\int_1^{t-1}\frac{1}{s}e^{i(t-s)H}\sum_{b\lesssim c\sim a}
\mathcal{B}^{a,b,c,X}_{2}
[\Theta(s)z^\pm,\Theta(s)z^\pm]ds\big\|_{L^{p}}
\lesssim \int_1^{t-1}\frac{\|z\|_X^2}{(t-s)^{1+3\varepsilon}s^{1+3\varepsilon}}ds
\lesssim \frac{\|z\|_X^2}{t^{1+3\varepsilon}}.
$$
The two other cases $c\lesssim a\sim b$ and $a\lesssim b\sim c$ can be treated in a similar way; we refer
again to the observation \eqref{tricksuma} in the case $a\lesssim b\sim c$.\\
This concludes the section: the combination of paragraphs \ref{secNRT} and \ref{secNRS} gives
\begin{equation*}
\bigg\|\int_0^te^{i(t-s)H}Q(z(s))ds\bigg\|_{W^{k,p}}\lesssim
\frac{\|z\|_X^2+\|z\|_X^3}{\langle t\rangle^{1+3\varepsilon}}.
\end{equation*}
\begin{rmq}\label{precN}
From the energy estimate, we recall that we need $k\geq 3$ (see \eqref{Fspaces}). The strongest
condition on $N$ seems to be $(N-2-k)\delta> 1$. In the limit $\varepsilon\rightarrow 0$, we must
have at least $1/3-\delta(k+7/6)>0$, so that $N\geq 18$.
\end{rmq}
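Spelling out the arithmetic behind remark \ref{precN}: with $k=3$, the condition $1/3-\delta(k+7/6)>0$ forces $\delta<\frac{1/3}{25/6}=\frac{2}{25}$, and then $(N-2-k)\delta>1$ requires
$$N>k+2+\frac{1}{\delta}>5+\frac{25}{2}=17.5,$$
whence $N\geq 18$.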
\subsection{Bounds for the weighted norm}
The estimate for $\|x\int_0^te^{-isH}B_j[z,z]ds\|_{L^2}$ can be done with almost the same
computations as in section $10$ of \cite{GNT3}. The only difference is that Gustafson et al.\
deal with nonlinearities without loss of derivatives. As we have seen in paragraph \ref{declp},
the remedy is to use an appropriate frequency truncation, so we will only give a sketch
of the proof for the bound in this paragraph.
\paragraph{First reduction}
Applying $xe^{-itH}$ to the generic bilinear term $U\circ B_j[z^\pm,z^\pm]$, we have for
the Fourier transform:
\begin{equation}
\begin{aligned}
\mathcal{F}\big(xe^{-itH}\int_0^te^{i(t-s)H}UB_j[z^\pm,z^\pm]ds\big)=&
\int_0^t\int_{\mathbb{R}^N} \nabla_{\xi}\bigg(e^{-is\Omega}UB_j(\eta,\xi-\eta)
\widetilde{z^\pm}(s,\eta)\widetilde{z^\pm}(s,\xi-\eta)\bigg)d\eta \, ds.
\end{aligned}
\end{equation}
As the $X_T$ norm only controls $\|Jz\|_{L^2}$, we have to deal with the loss of derivatives in the nonlinearities. It is then convenient to have $|\xi-\eta|\lesssim |\eta|$ in order to absorb the loss of derivatives; to do this we use a cut-off function $\theta(\xi,\eta)$
which is valued in $[0,1]$, homogeneous of degree $0$, smooth outside of $(0,0)$ and such that
$\theta(\xi,\eta)=0$ in a neighborhood of $\{\eta=0\}$ and $\theta(\xi,\eta)=1$ in a neighborhood
of $\{\xi-\eta=0\}$ on the sphere. Using this splitting we get two terms
\begin{equation}
\begin{aligned}
&\int_0^t\int_{\mathbb{R}^N} \nabla_{\xi}\bigg(e^{-is\Omega}UB_j(\eta,\xi-\eta)\theta(\xi,\eta)
\widetilde{z^\pm}(s,\eta)\widetilde{z^\pm}(s,\xi-\eta)\bigg)d\eta \, ds, \\
&\int_0^t\int_{\mathbb{R}^N} \nabla_{\xi}\bigg(e^{-is\Omega}(1-\theta(\xi,\eta))UB_j(\eta,\xi-\eta)
\widetilde{z^\pm}(s,\eta)\widetilde{z^\pm}(s,\xi-\eta)\bigg)d\eta \, ds.
\end{aligned}
\label{intGer}
\end{equation}
By symmetry it suffices to consider the first one which corresponds to a region
where $|\eta|\gtrsim |\xi|,|\xi-\eta|$ so that we avoid loss of derivatives for
$\nabla_\xi \widetilde{z^\pm}(s,\xi-\eta)$.
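One admissible choice of such a cut-off (among many) is, for instance,
$$\theta(\xi,\eta)=\chi\Big(\frac{|\xi-\eta|}{|\eta|+|\xi-\eta|}\Big),\qquad
\chi\in C^\infty(\mathbb{R};[0,1]),\ \chi|_{[0,1/4]}=1,\ \chi|_{[1/2,\infty)}=0,$$
which is homogeneous of degree $0$, vanishes near $\{\eta=0\}$ (where the quotient is close to $1$), and equals $1$ near $\{\xi-\eta=0\}$.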
\paragraph{An estimate in a different space and high frequency losses}
Depending on which factor $\nabla_{\xi}$ lands on, the following integrals arise:
$$
\begin{aligned}
{\cal F}I_1&=
\int_0^t\int_{\mathbb{R}^N} e^{-is\Omega}\nabla_\xi^{(\eta)} (\theta(\xi,\eta)UB_j(\eta,\xi-\eta))
\widetilde{z^\pm}(s,\eta)\widetilde{z^\pm}(s,\xi-\eta)d\eta ds,\\
{\cal F} I_2&=
\int_0^t\int_{\mathbb{R}^N} e^{-is\Omega} \theta(\xi,\eta)UB_j(\eta,\xi-\eta)
\widetilde{z^\pm}(s,\eta)\nabla_{\xi}^{(\eta)} \widetilde{z^\pm}(s,\xi-\eta)d\eta ds,\\
{\cal F} I_3&=
\int_0^t\int_{\mathbb{R}^N} e^{-is\Omega} (is\nabla_\xi\Omega)\theta(\xi,\eta)UB_j(\eta,\xi-\eta)
\widetilde{z^\pm}(s,\eta) \widetilde{z^\pm}(s,\xi-\eta)d\eta ds\\
&:= \mathcal{F}\bigg(\int_0^t e^{-isH} s\mathcal{B}_j[z^\pm,z^\pm]ds\bigg),
\end{aligned}
$$
with:
$$\mathcal{B}_j(\eta,\xi-\eta)=(i\nabla_\xi\Omega)\theta(\xi,\eta)UB_j(\eta,\xi-\eta).$$
The control of the $L^2$ norm of $I_1$ and $I_2$ is not a serious issue: basically we deal here with
smooth multipliers, and from the estimate $\|z\,xe^{-itH}z\|_{L^1_TL^2}\lesssim
\|z\|_{L^1_TL^\infty}\|xe^{-itH}z\|_{L^\infty _TL^2}\lesssim \|z\|_{X_T}^2$ it is apparent that we can conclude. The only point is that the loss of derivatives on $Jz$ is controlled via the truncation function $\theta$, and the remaining loss of derivatives is absorbed by $z$.
Due to the $s$ factor, the case of $I_3$ is
much more intricate and again requires the method of space-time resonances.\\
Let us set
\begin{equation*}
\begin{aligned}
&\|z\|_{S_T}=\|z\|_{L^\infty_TH^1}+\|U^{-1/6}z\|_{L^2_T W^{1,6}},\\
&\|z\|_{W_T}=\|x e^{-itH}z\|_{L^\infty_T H^1}.
\end{aligned}
\end{equation*}
Gustafson et al.\ prove in \cite{GNT3} the key estimate
\begin{equation*}
\big\|\int_0^te^{-isH}sB[z^\pm,z^\pm]ds\big\|_{L^\infty_TL^2}\lesssim \|z\|_{S_T\cap W_T}^2,
\end{equation*}
where $B$ is a class of multipliers very similar to our $\mathcal{B}_j$, the only difference being
that they are associated to semi-linear nonlinearities, and thus cause no loss of derivatives at
high frequencies. We point out that the $S_T$ norm is weaker than the $X_T$ norm; indeed,
$\|U^{-1/6}z\|_{L^2_T W^{1,6}}\lesssim \|z\|_{L^2_TW^{2,9/2}}\lesssim \|z\|_{X_T}
\|1/\langle t\rangle^{5/6}\|_{L^2_T}\lesssim \|z\|_{X_T}$. Moreover, we have already seen how to deal with
high frequency loss of derivatives by writing (see paragraph
\ref{secNRS})
\begin{equation}
\mathcal{B}_j[z^\pm,z^\pm]=\mathcal{B}_j[(1-\Theta(t))z^\pm,z^\pm]+\mathcal{B}_j[\Theta(t)z^\pm,z^\pm].
\label{poidsterm}
\end{equation}
Let $1/q=1/3+\varepsilon$. The first term is estimated using Sobolev embedding and the fact that $N$ is large enough relative to $1/\delta$:
\begin{eqnarray*}
\big\|\int_0^t e^{-isH} s\mathcal{B}_j[(1-\Theta(s))z^\pm,z^\pm]ds\big\|_{L^2}
\lesssim\int_0^t s\|(1-\Theta(s))z\|_{W^{3,q}}\|z\|_{W^{3,p}}ds
&\lesssim& \int_0^t\frac{\|z\|_{H^N}\|z\|_{X_T}}{\langle s\rangle^{(N-4)\delta}}ds\\
&\lesssim& \|z\|_{X_T}^2.
\end{eqnarray*}
The estimate of the second term of (\ref{poidsterm}) follows from the (non trivial) computations
in \cite{GNT3}, section $10$. They are very similar to the analysis of the previous section (based
on the method of space-time resonances); for the sake of completeness, we reproduce hereafter
a small excerpt from their computations.
\\
As in section \ref{declp}, one starts by splitting the phase space
\begin{equation*}
\int_0^t e^{i(t-s)H}s\mathcal{B}_j[\Theta(s) z^\pm,z^\pm]ds=\sum_{a,b,c}
\int_0^t e^{i(t-s)H}s\big(\mathcal{B}_j^{a,b,c,T}+\mathcal{B}_j^{a,b,c,X}\big)[\Theta(s)z^\pm,z^\pm]ds.
\end{equation*}
For the time non-resonant terms, an integration by parts in $s$ implies:
\begin{equation}
\begin{array}{ll}
\displaystyle\int_0^t e^{i(t-s)H}s\mathcal{B}_j^{a,b,c,T}[\Theta(s) z^\pm,z^\pm]ds\\
\displaystyle\hspace{3cm}=-\int_0^te^{i(t-s)H}\bigg((\mathcal{B}'_j)^{a,b,c,T}[\Theta(s) z^\pm,z^\pm]
\displaystyle+(\mathcal{B}'_j)^{a,b,c,T}[s\Theta(s) \mathcal{N}_z^\pm,z^\pm]\\
\displaystyle\hspace{35mm}+(\mathcal{B}'_j)^{a,b,c,T}[\Theta(s) z^\pm,s\mathcal{N}_z^\pm]+(\mathcal{B}'_j)^{a,b,c,T}[-\delta s^{-\delta}\Theta'(s) |\nabla| z^\pm,z^\pm]\bigg)ds\\
\hspace{35mm}+
\displaystyle\big[e^{i(t-s)H}(\mathcal{B}'_j)^{a,b,c,T}[s \Theta(s) z^\pm,z^\pm]\big]_0^t,
\end{array}
\label{10.41}
\end{equation}
with:
$$
(\mathcal{B}'_j)^{a,b,c,T}=\frac{1}{\Omega}\mathcal{B}_j^{a,b,c,T}=
\frac{i\nabla_{\xi}\Omega}{\Omega}UB_j^{a,b,c,T}\theta(\xi,\eta).$$
We only consider the second term on the right-hand side of \eqref{10.41}, in the case
$c\lesssim b\sim a$; all the other terms can be treated in a similar way. The analog of lemma \ref{bacrucial1} in this setting is the following:
\begin{lemma}
Denoting $M=\max (a,b,c)$, $m=\min(a,b,c)$ and $l=\min(b,c)$ we have:
\begin{equation}
\|(\mathcal{B}_j')^{a,b,c,T}\|_{[B^{s}]}\lesssim \langle M\rangle^2\bigg(\frac{\langle M \rangle}{M}\bigg)^s
l^{\frac{3}{2}-s}\langle a\rangle ^{-1}.
\end{equation}
\label{acrucial1}
\end{lemma}
We then have, by applying theorem \ref{singmult}:
\begin{equation}
\begin{aligned}
\|\int^{T}_{0}e^{-isH} \sum_{c\lesssim a\sim b}(\mathcal{B}'_j)^{a,b,c,T}&[s \Theta(s)\mathcal{N}_z^\pm,z^\pm]
ds\|_{L^2}\\
&\lesssim \big\|\sum_{c\lesssim a\sim b} \frac{U(c)}{\langle b\rangle^{2}}\|(\mathcal{B}'_j)^{a,b,c,T}\|_{[B^{1+\varepsilon}]}
\| s \langle \nabla\rangle^2\mathcal{N}_z\|_{L^{2}} \| U^{-1}z\|_{L^{\infty}(L^{6})}\big\|_{L^1_T}.
\end{aligned}
\label{10.44a}
\end{equation}
From lemma \ref{acrucial1} we find
\begin{equation}
\begin{aligned}
\sum_{c\lesssim a\sim b}U(c)\|(\mathcal{B}_j')^{a,b,c,T}\|_{[B^1]}
&\lesssim \sum_{ c\lesssim a}\frac{U(c)}{\langle a\rangle^2}\langle a \rangle^2a^{-1}
c^{\frac{1}{2}},\\
&\lesssim \sum_{ a\leq 1}a^{1/2}+\sum_{a\geq 1}a^{-1/2}\lesssim 1.
\end{aligned}
\label{10.44ab}
\end{equation}
Next we have (as previously, neglecting the cubic and quartic nonlinearities)
$$\|\langle\nabla \rangle^2\mathcal{N}_z\|_{L^2}\lesssim \|z\|_{W^{4,4}}^2\lesssim
\|z\|_{X_T}^2/\langle s\rangle^{3/2},$$ and from \eqref{5.9}
$\|U^{-1}z(s)\|_{L^6}\lesssim \|z\|_{X_T}\langle s\rangle^{-3/5}$, so that
\begin{equation*}
\begin{aligned}
\|\int^{T}_{0}e^{-isH} \sum_{c\lesssim a\sim b}(\mathcal{B}'_j)^{a,b,c,T}&[s\Theta(s)\mathcal{N}_z^\pm,z^\pm]
ds\|_{L^2}\lesssim \|\|z\|_{X_T}^3\langle s\rangle^{-21/10}\|_{L^1_T}\lesssim \|z\|_{X_T}^3.
\end{aligned}
\end{equation*}
\subsection{Existence and uniqueness}
The global existence follows from the same argument as in dimension larger than $4$: for $N=3,4$,
combining the energy estimate (proposition \ref{energy}), the a priori estimates for the cubic, quartic
(section \ref{estimR}) and quadratic nonlinearities (section \ref{estimquad}), and proposition \ref{estimformenormale}, we have, uniformly in
$T$,
\begin{equation*}
\begin{aligned}
&\|\psi\|_{X_T}\leq C_1\bigg(\|\psi_0\|_{W^{k,4/3}}+\|\psi_0\|_{H^N}+\|\psi\|_{X_T}^2 G(\|\psi\|_{X_T},\|\frac{1}{1+l_1}\|_{L^\infty_T(L^\infty)})\\
&\hspace{3cm}+ \|\psi_{0}\|_{H^{2n+1}}\text{exp}\big(C'\|\psi\|_{X_T} H(\|\psi\|_{X_T},\|\frac{1}{1+l_1}\|_{L^\infty_T(L^\infty)})\big)\bigg),
\end{aligned}
\end{equation*}
where $G$ and $H$ are continuous functions, so that, by the standard bootstrap argument and the blow-up criterion (see page
\pageref{blowcriter}), the local solution is global.
\subsection{Scattering}\label{secscatt}
It remains to prove that $e^{-itH}\psi(t)$ converges in $H^s(\mathbb{R}^3)$, $s< 2n+1$. This is a
consequence of the following lemma:
\begin{lemma}
For any $0\leq t_1\leq t_2$, we have
\begin{equation}\label{estimsanspoids}
\|\int_{t_1}^{t_2} e^{isH}\mathcal{N}\psi ds\|_{L^2}\lesssim \frac{\|\psi\|_{X}^2}{(t_1+1)^{1/2}}.
\end{equation}
\end{lemma}
\begin{proof}
We focus on the quadratic terms since the cubic and quartic terms give even stronger decay.
From Minkowski's and H\"older's inequalities and the dispersion estimate
$\|\psi\|_{L^{p}}\leq \frac{\|\psi\|_X}{\langle t\rangle^{3(1/2-1/p)}}$:
\begin{eqnarray*}
\|\int_{t_1}^{t_2}e^{isH}{\cal N}\psi\, ds\|_{L^2}
\lesssim \int_{t_1}^{t_2}\|\langle\nabla\rangle^2\psi\,\langle\nabla\rangle^2\psi \|_{L^2}ds
& \lesssim &\int_{t_1}^{t_2}\|\langle\nabla\rangle^2\psi \|_{L^4}^2ds\\
&\lesssim& \|\psi\|_{X}^2\int_{t_1}^{t_2}\frac{1}{\langle s\rangle^{d/2}}ds .
\end{eqnarray*}
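Since $d=3$, the claimed bound then follows from
$$\int_{t_1}^{t_2}\frac{ds}{\langle s\rangle^{3/2}}\lesssim \int_{t_1}^{\infty}\frac{ds}{(1+s)^{3/2}}=\frac{2}{(1+t_1)^{1/2}},$$
which gives \eqref{estimsanspoids}.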
\end{proof}
\noindent Interpolating between the uniform bound in $H^{2n+1}$ and the decay in $L^2$ we get
$$\|e^{-it_1H}\psi(t_1)-e^{-it_2H}\psi(t_2)\|_{H^s}\lesssim 1/\langle t_1\rangle^{(2n+1-s)/(4n+2)},$$
thus
$e^{-itH}\psi$ converges in $H^s$ for any $s<2n+1$. For $d=3$, the convergence of
$xe^{-itH}\psi$ in $L^2$ follows from an elementary but cumbersome inspection of the proof of
the boundedness of $xe^{-itH}\psi$. If one replaces everywhere
$\displaystyle \int_0^txe^{-isH}\mathcal{N}_zds$ by
$\displaystyle \int_{t_1}^{t_2}xe^{-isH}\mathcal{N}_zds$, every estimate ends up with a bound of the form
$\|\psi\|_X^k\int_{t_1}^{t_2}(1+s)^{-(1+\varepsilon')}ds$, with $k=2,3,4$ according to the degree of the
nonlinear term and some $\varepsilon'>0$,
so that $xe^{-itH}\psi$ is Cauchy in $L^2$. A careful inspection of the proof would
also allow one to quantify the value of $\varepsilon'$.
\section{Introduction}
\label{sect:intro}
The great success of multiple-input multiple-output (MIMO) technology has made it widely accepted for current and future high data-rate communication systems\cite{PimmWaveIntroMCOMM2011}. Since MIMO acts as a pillar to satisfy data-hungry applications, a natural question is how to reduce its cost, especially that of large-scale antenna arrays. The traditional setting of one radio-frequency (RF) chain per antenna element is too expensive for large-scale MIMO systems, especially at high frequencies, such as millimeter-wave or Terahertz bands. The hybrid analog-digital architecture is promising to alleviate these difficulties and strike a balance between the cost and the performance of practical MIMO systems.
A typical hybrid analog-digital MIMO transceiver consists of four components, i.e., digital precoder, analog precoder, analog processor, and digital processor \cite{HybSurvey2017}. In early transceiver designs, hybrid MIMO technology often referred to antenna selection \cite{SwitchTSP2003, SwitchTWC2005} to reap spatial diversity. In these works, analog switches are used in the radio-frequency domain. Phase-shifter based soft antenna selection \cite{VariablePhaseTSP2005,AntennaSeLTWC2006,SwitchAnalyzeTVT2009} has been proposed to improve performance for correlated MIMO channels. Nowadays, the phase-shifter based hybrid structure is widely used.
A phase shifter can only adjust the phase of a signal, not both its magnitude and phase. Thus, the optimization of a MIMO transceiver with phase shifters becomes complicated due to the constant-modulus constraints on the analog precoder and analog processor.
It has been shown in \cite{YPLinTSP2017} that the performance of a full-digital system can be achieved when the number of phase shifters is doubled in a phase-shifter based hybrid structure. However, this can hardly be practical due to the large number of phase shifters required, especially in large-scale MIMO systems. As a matter of fact, the phase shifters in large-scale MIMO systems are sometimes considered a burden themselves. Thus, the sub-connected hybrid structure has emerged as an alternative option \cite{CLinJSAC2017,dynamicSubTWC2017} and has received much attention recently \cite{CLinJSAC2017, CLinWCNC2017,OverlapLSP2017,dynamicSubTWC2017,HybridSubarrayTCOMM2018,ReconfigureTCOMM2018}.
Unit-modulus and discrete-phase constraints make the optimization of analog transceivers nonconvex and thus difficult to address \cite{SwitchTSP2003,SwitchTWC2005}. There have been some works on hybrid transceiver optimization considering different design limitations and requirements; their common motivation is to exploit the underlying structure of the hybrid transceiver to achieve high performance with low complexity.
Early hybrid transceiver designs are based on approximating the all-digital transceiver in terms of the norm difference between the all-digital design and its hybrid counterpart. For millimeter-wave (mmWave) channels, which are usually sparse, an orthogonal matching pursuit (OMP) algorithm has been used for signal recovery in the hybrid transceiver \cite{HeathTWC2014}. In order to overcome the non-convexity in hybrid transceiver optimization, some distinct characteristics of mmWave channels must be exploited \cite{HeathTWC2014}. This methodology is a compromise on the constant-modulus constraint, and it has been validated in different environments, including multiuser and relay scenarios \cite{hybridMUTVT2015,hybridRelay2014}. However, it was later found that the OMP algorithm sometimes fails to achieve the optimal solution. A singular-value-decomposition (SVD) based descent algorithm \cite{DongTSP2017} has been proposed, which is nearly optimal. An alternative fast constant-modulus algorithm \cite{fastAlgTSP2017} has also been developed to reduce the gap between the analog and digital precoders. The above methods are hard to apply in complex scenarios due to their high computational complexity \cite{HanzoHybridTCOMM2016,LLDaiJSAC2016,PhaseDirectTCOMM2017}. Therefore, based on the idea of unitary matrix rotation, several algorithms \cite{MaiRuikaiTWC,Y.C.Eldar2017} have been proposed to improve the approximation performance while maintaining relatively low complexity.
On the other hand, some works on hybrid precoding design are based on codebooks, which relax the problem into a convex optimization problem \cite{JHWangTSP2017}. However, codebook-based algorithms suffer performance loss if the channel state information (CSI) is inaccurate \cite{ImpactTSP2016}. In order to reduce the complexity of codebook design and the impact of partial CSI, special structures of massive MIMO channels \cite{WeiyuHybridJSTSP2016,WeiyuHybridOFDMJSAC2017} can be exploited. Recently, an angle-domain based method has been proposed from the viewpoint of array signal processing \cite{LinHaiJSAC,FFGaoTWC2017angle}, which provides useful insight into hybrid analog and digital signal processing. Based on the concept of the angle-domain design, some mathematical approaches, such as matrix decomposition algorithms, have been developed \cite{HybridKroneckerJSAC2017,LowComplexityTWC2018}. Energy-efficient hybrid transceiver design for Rayleigh fading channels has been investigated in \cite{PayamiHybridTWC2016}. Hybrid transceiver optimization with partial CSI and with discrete phases has been discussed in \cite{ANovelTCOMM2017} and \cite{DiscreteTVT2017}, respectively.
Hybrid MIMO transceivers are not limited to mmWave or Terahertz frequency bands but can potentially work in other frequency bands as well. The transceiver itself can be either linear or nonlinear. Moreover, the performance metrics for a MIMO transceiver can differ, including capacity, mean-squared error (MSE), bit-error rate (BER), etc. A unified framework for hybrid MIMO transceiver optimization is therefore of great interest. In this paper, we develop such a unified framework for hybrid linear and nonlinear MIMO transceiver optimization. Our main contributions are summarized as follows.
\begin{itemize}
\item Both linear and nonlinear transceivers with Tomlinson-Harashima precoding (THP) or decision-feedback detection (DFD) are taken into account in the proposed framework for hybrid MIMO transceiver optimization.
\item Different from existing works, in which a single performance metric is considered for hybrid MIMO transceiver design, more general performance metrics are covered.
\item Based on the matrix-monotonic optimization framework, the optimal structures of both digital and analog transceivers with respect to different performance metrics are analytically derived. These optimal structures show that the optimal analog precoder and processor correspond to selecting eigenchannels, which facilitates the analog transceiver design. Furthermore, several effective analog design algorithms are proposed.
\end{itemize}
The rest of this paper is organized as follows. In Section II, a general hybrid system model and the MSE matrices corresponding to different transceivers are introduced. In Section III, the unified hybrid transceiver is discussed in detail and the related transceiver optimization is presented. In Section IV, the optimal structure of digital transceivers is derived based on matrix-monotonic optimization. In Section V, basic properties of the optimal analog precoder and processor are investigated, based on which effective algorithms to compute the analog transceiver are proposed. Next, in Section VI, simulation results are provided to demonstrate the performance advantages of the proposed algorithms. Finally, conclusions are drawn in Section VII.
\begin{figure*}[!ht]
\centering
\includegraphics[width = 1\textwidth]{HybSysModel-Full.eps}
\caption{General hybrid MIMO transceiver.}
\label{fig01}
\end{figure*}
\noindent {\textbf{Notations}}: In this paper, scalars, vectors, and matrices are denoted by non-bold, bold lower-case, and bold upper-case letters, respectively.
The notations ${\bf X}^{\rm{H}}$ and ${\rm{Tr}}({\bf X})$ denote the Hermitian and the trace of a complex matrix ${\bf X}$, respectively.
Matrix ${\bf X}^{\frac{1}{2}}$ is the Hermitian square root of a positive semi-definite matrix ${\bf X}$.
The expression $ \mathrm{diag} \lbrace \mathbf{X} \rbrace $ denotes a square diagonal matrix with the same diagonal elements as matrix $ \mathbf{X} $.
The $ i $th row and the $ j $th column of a matrix are denoted as $ [\cdot ]_{i,:} $ and $ [\cdot ]_{:,j} $, respectively, and the element in the $ k $th row and the $ \ell $th column of a matrix is denoted as $ [\cdot ]_{k,\ell} $.
In the following derivations, ${\boldsymbol \Lambda}$ always denotes a diagonal matrix (square or rectangular diagonal matrix) with diagonal elements arranged in a nonincreasing order.
Representation $ \mathbf{A} \preceq \mathbf{B} $ means that the matrix $ \mathbf{B} - \mathbf{A} $ is positive semidefinite. The real and imaginary parts of a complex variable are represented by $ \Re \lbrace \cdot \rbrace $ and $ \Im \lbrace \cdot \rbrace $, respectively, and statistical expectation is denoted by $\mathbb{E}\{\cdot\}$.
\section{General Structure of Hybrid MIMO Transceiver}
In this section, we will first introduce the system model of MIMO hybrid transceiver designs. Then a general signal model is introduced, which includes nonlinear transceiver with THP or DFD and linear transceiver as its special cases. Based on the general signal model, the general linear minimum mean-squared error (LMMSE) processor and data estimation mean-squared error (MSE) matrix are derived, which are the basis for the subsequent hybrid MIMO transceiver design.
\subsection{System Model}
As shown in Fig.~\ref{fig01}, we consider a point-to-point hybrid MIMO system where the source and the destination are equipped with $ N $ and $ M $ antennas, respectively. Without loss of generality, it is assumed that both the source and the destination have $L$ RF chains. A transmit data vector ${\bf{a}}\in \mathbb{C}^{D\times 1}$ is first processed by a unit with a feedback operation and then goes through a digital precoder $ {\bf F}_{\rm D} \in \mathbb{C}^{L \times D} $ and an analog precoder $ {\bf F}_{\rm A} \in \mathbb{C}^{N \times L} $.
This is a general model, as it includes both linear and nonlinear precoders as special cases.
For the nonlinear transceiver with THP at the source, the feedback matrix $ {\bf B}^{\rm Tx} $ is strictly lower triangular. The key idea behind THP is to exploit feedback operations to pre-eliminate the mutual interference between different data streams. In order to keep the transmit signals in a predefined region, a modulo operation is introduced in the feedback operation \cite{XingJSAC2012}. Based on lattice theory, it can be proved that the modulo operation is equivalent to adding an auxiliary complex vector ${\bf{d}}$ whose elements have integer real and imaginary parts \cite{XingJSAC2012,majorizationTHP2008}. The vector ${\bf{d}}$ ensures that ${\bf{x}}={\bf a} + {\bf d}$ lies in the predefined region \cite{XingJSAC2012,majorizationTHP2008}.
Based on this fact, the output vector ${\bf{b}}$ of the feedback unit satisfies the following equation
\begin{align}
{\bf b} = ({\bf a} + {\bf d})-{\bf{B}}^{\rm{Tx}}{\bf{b}},
\end{align}that is
\begin{align}
\label{signal_model_b}
{\bf{b}}=({\bf{I}}+{\bf{B}}^{\rm{Tx}})^{-1}\underbrace{({\bf a} + {\bf d})}_{\triangleq {\bf{x}}}.
\end{align} It is worth noting that ${\bf{d}}$ can be perfectly removed by a modulo operation \cite{XingJSAC2012,majorizationTHP2008}, and thus recovering ${\bf{x}}$ is equivalent to recovering ${\bf{a}}$. On the other hand, for a linear precoder, there is no feedback operation, i.e., ${\bf{B}}^{\rm{Tx}}={\bf{0}}$ and ${\bf{d}}={\bf{0}}$ \cite{XingTSP201502}; moreover, based on (\ref{signal_model_b}), we have ${\bf{b}}={\bf{a}}$.
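As a minimal illustration of the feedback unit, consider $D=2$ and a single illustrative feedback coefficient $\beta$ (not a quantity defined elsewhere in this paper):
\begin{align*}
{\bf{B}}^{\rm{Tx}}=\begin{pmatrix} 0 & 0\\ \beta & 0\end{pmatrix},\qquad
{\bf{b}}=({\bf{I}}+{\bf{B}}^{\rm{Tx}})^{-1}{\bf{x}}
=\begin{pmatrix} 1 & 0\\ -\beta & 1\end{pmatrix}\begin{pmatrix} x_1\\ x_2\end{pmatrix}
=\begin{pmatrix} x_1\\ x_2-\beta x_1\end{pmatrix},
\end{align*}
so the second stream is pre-compensated for the interference from the first one, while the modulo operation (i.e., the choice of $d_2$) keeps $x_2-\beta x_1$ within the predefined region.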
Then, the received signal ${\bf{y}}$ at the destination is
\begin{align}
{\bf{y}}={\bf{H}}{\bf{F}}_{\rm{A}}{\bf{F}}_{\rm{D}}({\bf I}+{\bf B}^{\rm Tx})^{-1}{\bf{x}}+{\bf{n}},
\label{eq-recvSig}
\end{align}
where $ {\bf n} $ is an $ M \times 1 $ additive Gaussian noise vector with zero mean and covariance $ {\bf R}_{\rm n} $, $ {\bf H} $ is an $ M \times N $ channel matrix, and $ {\bf B}^{\rm Tx} $ is a general feedback matrix at source, which is determined by the types of precoders.
It is worth noting that $ {\bf B}^{\rm Tx} = {\bf 0} $ corresponds to a linear precoder without a feedback operation. As shown in Fig.~\ref{fig01}, after analog and digital processing at the destination, the recovered signal is given by
\begin{align}
{\bf{\hat x}}^{\rm{General}}={\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{y}} - {\bf B}^{\rm Rx}{\bf x},
\label{eq-dsrSig}
\end{align}
where $ {\bf G}_{\rm A} \in \mathbb{C}^{L \times M} $ is an analog processor, $ {\bf G}_{\rm D} \in \mathbb{C}^{D \times L} $ is a digital processor, and $ {\bf B}^{\rm Rx} $ is a general feedback matrix at the destination.
Note that since the analog precoder $ {\bf F}_{\rm A} $ and analog processor $ {\bf G}_{\rm A} $ are implemented through phase shifters, they are restricted to matrices with constant-modulus elements.
For DFD at the receiver, the decision feedback matrix $ {\bf B}^{\rm Rx} $ in (\ref{eq-dsrSig}) is a strictly lower-triangular matrix.
For linear detection, the feedback matrix in (\ref{eq-dsrSig}) is an all-zero matrix, i.e., $ {\bf B}^{\rm Rx} = {\bf 0} $. Based on (\ref{eq-recvSig}) and (\ref{eq-dsrSig}), the recovered signal vector can be rewritten as
\begin{align}
\label{signal_model_general}
{\bf{\hat x}}= {\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{H}}{\bf{F}}_{\rm{A}} {\bf{F}}_{\rm{D}}( {\bf I} + {\bf B}^{\rm Tx} )^{-1}{\bf{x}} -{\bf B}^{\rm Rx}{\bf{x}} + {\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{n}}.
\end{align} This is a general signal model and includes nonlinear hybrid transceivers with THP or DFD and linear hybrid transceiver as its special cases.
More specifically, for a linear hybrid transceiver, there is no feedback, either at the source or at the destination, i.e., $ {\bf B}^{\rm Tx} = {\bf B}^{\rm Rx} = {\bf 0} $. Therefore, the recovered signal in (\ref{signal_model_general}) becomes
\begin{align}
\label{signal_model_linear}
{\bf{\hat x}}^{\rm Linear} = {\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{H}}{\bf{F}}_{\rm{A}}
{\bf{F}}_{\rm{D}}{\bf{x}}+{\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{n}}.
\end{align}
For the nonlinear transceiver with THP at the source and linear decision at the destination, i.e., $ {\bf B}^{\rm Rx} = {\bf 0} $ \cite{XingJSAC2012,majorizationTHP2008}, the detected signal vector in (\ref{signal_model_general}) becomes
\begin{align}
\label{signal_model_thp}
{\bf{\hat x}}^{\rm THP} = {\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{H}}{\bf{F}}_{\rm{A}}
{\bf{F}}_{\rm{D}}({\bf{I}}+{\bf{B}}^{\rm Tx})^{-1}{\bf{x}}
+ {\bf{G}}_{\rm{D}} {\bf{G}}_{\rm{A}} {\bf{n}}.
\end{align}
For the nonlinear transceiver with DFD at the destination and a linear precoder at the source, i.e., $ {\bf B}^{\rm Tx} = {\bf 0} $, the detected signal vector in (\ref{signal_model_general}) becomes
\begin{align}
\label{signal_model_dfe}
{\bf{\hat x}}^{\rm DFD} = ({\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{H}}{\bf{F}}_{\rm{A}}
{\bf{F}}_{\rm{D}}-{\bf{B}}^{\rm Rx}){\bf{x}}+{\bf{G}}_{\rm{D}}
{\bf{G}}_{\rm{A}}{\bf{n}}.
\end{align}
\subsection{Unified MSE Matrix for Different Precoders and Processors}
Based on the general signal model in (\ref{signal_model_general}), the general MSE matrix of the recovered signal at the destination equals
\begin{align}
\label{def_MSE_Matrix}
& {\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{D}},{\bf{G}}_{\rm{A}},
{\bf{F}}_{\rm{A}},{\bf{F}}_{\rm{D}},{\bf B}^{\rm Tx},{\bf B}^{\rm Rx}) \nonumber \\
= \, & \mathbb{E}\{({\bf{\hat x}}-{\bf{x}})({\bf{\hat x}}-{\bf{x}})^{\rm{H}}\} \nonumber \\
=\, & \mathbb{E}\{\left( {\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{H}}{\bf{F}}_{\rm{A}} {\bf{F}}_{\rm{D}}({\bf{I}}+{\bf B}^{\rm Tx})^{-1}{\bf{x}}-({\bf B}^{\rm Rx}+{\bf{I}}){\bf{x}}+{\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{n}} \right)\nonumber \\
&\times \left( {\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{H}}{\bf{F}}_{\rm{A}} {\bf{F}}_{\rm{D}}({\bf{I}}+{\bf B}^{\rm Tx})^{-1}{\bf{x}}-({\bf B}^{\rm Rx}+{\bf{I}}){\bf{x}}+{\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{n}} \right)^{\rm{H}}\} \nonumber \\
= \, & \mathbb{E} \lbrace \big( {\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{H}}{\bf{F}}_{\rm{A}} {\bf{F}}_{\rm{D}} - ({\bf B}^{\rm Rx}+{\bf{I}})( {\bf I} + {\bf B}^{\rm Tx} ) \big) {\bf b} {\bf b}^{\rm H}\nonumber \\
& \times\big( {\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{H}}{\bf{F}}_{\rm{A}} {\bf{F}}_{\rm{D}} - ({\bf B}^{\rm Rx}+{\bf{I}})( {\bf I} + {\bf B}^{\rm Tx} ) \big)^{\rm H} \rbrace \nonumber \\
& \, + {\bf{G}}_{\rm{D}}{\bf{G}}_{\rm{A}}{\bf{R}}_{\rm{n}}
{\bf{G}}_{\rm{A}}^{\rm{H}}{\bf{G}}_{\rm{D}}^{\rm{H}},
\end{align}where the second equality uses the signal model (\ref{signal_model_general}), and the third equality is based on ${\bf{x}}=({\bf{I}}+{\bf{B}}^{\rm{Tx}}){\bf{b}}$ from (\ref{signal_model_b}) together with the independence of ${\bf b}$ and ${\bf n}$.
Based on lattice theory, the elements of ${\bf b}$ are independent and identically distributed, i.e., $\mathbb{E}\{{\bf b} {\bf b}^{\rm H}\}\propto{\bf{I}}$ \cite{majorizationTHP2008}.
Thus, for notational simplicity, we can assume $\mathbb{E}\{{\bf b} {\bf b}^{\rm H}\}={\bf{I}}$ in the following derivations. Denote ${\bf{B}}={\bf B}^{\rm Rx}+{\bf B}^{\rm Tx}+{\bf B}^{\rm Rx}{\bf B}^{\rm Tx}$, then
\begin{align}
\label{B}
({\bf B}^{\rm Rx}+{\bf{I}})( {\bf I} + {\bf B}^{\rm Tx} )={\bf{I}}+{\bf{B}}.
\end{align}
It is obvious that ${\bf{B}}$ is a strictly lower-triangular matrix based on the definitions of ${\bf B}^{\rm Tx}$ and ${\bf B}^{\rm Rx}$, which implies that using nonlinear precoding at the transmitter and nonlinear detection at the receiver simultaneously is equivalent to using only one of the two; hence one of them is enough.
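As an illustrative sketch in the $2\times 2$ case, parameterize ${\bf B}^{\rm Tx}$ and ${\bf B}^{\rm Rx}$ by scalars $t$ and $r$; then ${\bf B}^{\rm Rx}{\bf B}^{\rm Tx}={\bf 0}$ and
\begin{align*}
({\bf B}^{\rm Rx}+{\bf{I}})( {\bf I} + {\bf B}^{\rm Tx} )
=\begin{pmatrix} 1 & 0\\ r & 1\end{pmatrix}
\begin{pmatrix} 1 & 0\\ t & 1\end{pmatrix}
=\begin{pmatrix} 1 & 0\\ r+t & 1\end{pmatrix}
={\bf{I}}+{\bf{B}},\qquad
{\bf{B}}=\begin{pmatrix} 0 & 0\\ r+t & 0\end{pmatrix},
\end{align*}
which is indeed strictly lower triangular.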
A direct matrix derivation \cite{XingTSP201502} yields that the optimal $ {\bf G}_{\rm D} $ is
\begin{align}
{\bf{G}}_{\rm D}^{\rm opt} & = ({\bf{I}}+{\bf B}) ({\bf{G}}_{\rm{A}}{\bf{H}}
{\bf{F}}_{\rm{A}}{\bf{F}}_{\rm{D}})^{\rm{H}}\nonumber\\
& \ \ \ \times[({\bf{G}}_{\rm{A}}{\bf{H}}
{\bf{F}}_{\rm{A}}{\bf{F}}_{\rm{D}})({\bf{G}}_{\rm{A}}{\bf{H}}
{\bf{F}}_{\rm{A}}{\bf{F}}_{\rm{D}})^{\rm{H}}+{\bf{G}}_{\rm{A}} {\bf{R}}_{\rm{n}}
{\bf{G}}_{\rm{A}}^{\rm{H}}]^{-1}.
\label{eq-MMSE-Equalizer-Nonlinear}
\end{align}With this choice, the general MSE matrix can be further simplified into
\begin{align}
\label{MSE_Matrix_Nonlinear}
& {\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},
{\bf{F}}_{\rm{D}},{\bf B}) \nonumber \\
= \, & {\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{D}}^{\rm{opt}},{\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},
{\bf{F}}_{\rm{D}},{\bf B}^{\rm Tx},{\bf B}^{\rm Rx}) \nonumber \\
= \, & ( {\bf I} + {\bf B}) ({\bf I} + {\bf F}_{\rm D}^{\rm H} {\bf F}_{\rm A}^{\rm H} {\bf H}^{\rm H} {\bf G}_{\rm A}^{\rm H}
({\bf G}_{\rm A} {\bf R}_{\rm n} {\bf G}_{\rm A}^{\rm H})^{-1} {\bf G}_{\rm A} {\bf H} {\bf F}_{\rm A}{\bf F}_{\rm D})^{-1}\nonumber \\
&\times({\bf I} + {\bf B} )^{\rm{H}} \nonumber \\
\preceq \, & {\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{D}},{\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},
{\bf{F}}_{\rm{D}},{\bf B}^{\rm Tx},{\bf B}^{\rm Rx}),
\end{align}for any $ {\bf G}_{\rm D} $.
If $ {\bf B}= {\bf 0} $ in (\ref{eq-MMSE-Equalizer-Nonlinear}) and (\ref{MSE_Matrix_Nonlinear}), the results reduce to those of the linear transceiver. Specifically, the corresponding digital LMMSE processor for the linear transceiver is given as follows
\begin{align}
{\bf{G}}_{\rm D,L}^{\rm{opt}} = & ({\bf{G}}_{\rm{A}}{\bf{H}}
{\bf{F}}_{\rm{A}}{\bf{F}}_{\rm{D}})^{\rm{H}}[({\bf{G}}_{\rm{A}}{\bf{H}}
{\bf{F}}_{\rm{A}}{\bf{F}}_{\rm{D}})({\bf{G}}_{\rm{A}}{\bf{H}}
{\bf{F}}_{\rm{A}}{\bf{F}}_{\rm{D}})^{\rm{H}}\nonumber \\
&+{\bf{G}}_{\rm{A}}{\bf{R}}_{\rm{n}}
{\bf{G}}_{\rm{A}}^{\rm{H}}]^{-1},
\label{eq-MMSE-Equalizer-Linear}
\end{align}
and the MSE matrix for linear transceiver is
\begin{align}
\label{MSE_Matrix_Linear}
& {\boldsymbol{\Phi}}_{\rm{MSE}}^{\rm{L}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},
{\bf{F}}_{\rm{D}})\nonumber \\
=&{\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},
{\bf{F}}_{\rm{D}},{\bf 0})\nonumber \\
= &\left[{\bf{I}} + {\bf{F}}_{\rm{D}}^{\rm{H}} {\bf{F}}_{\rm{A}}^{\rm{H}} {\bf{H}}^{\rm{H}} {\bf{G}}_{\rm{A}}^{\rm{H}}
({\bf{G}}_{\rm{A}}{\bf{R}}_{\rm{n}}{\bf{G}}_{\rm{A}}^{\rm{H}})^{-1} {\bf{G}}_{\rm{A}}{\bf{H}}{\bf{F}}_{\rm{A}}{\bf{F}}_{\rm{D}}\right]^{-1}\nonumber \\
\triangleq&\left[{\bf{I}}+{\boldsymbol \Gamma}({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{D}}, {\bf{F}}_{\rm{A}})\right]^{-1},
\end{align}where
\begin{align}
\label{matrix_SNR}
{\boldsymbol \Gamma}({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{D}}, {\bf{F}}_{\rm{A}})={\bf{F}}_{\rm{D}}^{\rm{H}} {\bf{F}}_{\rm{A}}^{\rm{H}} {\bf{H}}^{\rm{H}} {\bf{G}}_{\rm{A}}^{\rm{H}}
({\bf{G}}_{\rm{A}}{\bf{R}}_{\rm{n}}{\bf{G}}_{\rm{A}}^{\rm{H}})^{-1} {\bf{G}}_{\rm{A}}{\bf{H}}{\bf{F}}_{\rm{A}}{\bf{F}}_{\rm{D}},\end{align}
which reduces to the signal-to-noise ratio (SNR) in the single-antenna case.
For the nonlinear transceivers, ${\bf{B}}={\bf B}^{\rm Tx}$ for THP or ${\bf{B}}={\bf B}^{\rm Rx}$ for DFD in (\ref{B})-(\ref{MSE_Matrix_Nonlinear}).
Based on (\ref{MSE_Matrix_Nonlinear}) and (\ref{MSE_Matrix_Linear}), the general MSE matrix for nonlinear transceivers can also be written in the following unified formula
\begin{align}
\label{MSE_general_simple}
{\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{A}}, {\bf{F}}_{\rm{D}}, {\bf{B}})
&= ({\bf{I}} + {\bf{B}}) {\boldsymbol{\Phi}}_{\rm{MSE}}^{\rm{L}}
({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{A}}, {\bf{F}}_{\rm{D}}) \nonumber \\
&\times({\bf{I}} + {\bf{B}})^{\rm H},
\end{align}which reduces to the MSE matrix in (\ref{MSE_Matrix_Linear}) when ${\bf{B}}={\bf{0}}$.
In the following, we will investigate unified hybrid MIMO transceiver optimization, which is applicable to various objective functions, based on the general MSE matrix (\ref{MSE_general_simple}).
\section{The Unified Hybrid MIMO Transceiver Optimization}
Because of the multi-objective nature of optimization for MIMO systems with multiple data streams, there are different kinds of objectives that reflect different design preferences \cite{Palomar03}. All of them can be regarded as matrix-monotone functions of the data estimation MSE matrix in (\ref{MSE_general_simple}) \cite{XingTSP201501}. A function $f(\cdot)$ is matrix monotone increasing if $ f({\bf X}) \ge f({\bf Y}) $ for $ {\bf X} \succeq {\bf Y} \succeq {\bf 0}$ \cite{XingTSP201501}.
To avoid a case-by-case discussion, in this section we investigate in depth hybrid MIMO transceiver optimization with different performance metrics from a unified viewpoint.
Based on the MSE matrix in (\ref{MSE_general_simple}), the unified hybrid MIMO transceiver design can be formulated in the following form
\begin{align}
\label{General_Transceiver}
\min_{{\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},{\bf{F}}_{\rm{D}},{\bf{B}}} \ & f \left(
{\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},
{\bf{F}}_{\rm{D}},{\bf{B}})
\right) \nonumber \\
{\rm{s.t.}} \qquad & {\rm{Tr}}({\bf{F}}_{\rm{A}}{\bf{F}}_{\rm{D}}{\bf{F}}_{\rm{D}}^{\rm{H}}{\bf{F}}_{\rm{A}}^{\rm{H}})\le P \nonumber \\
& {\bf{F}}_{\rm{A}} \in \mathcal{F}, \ \ {\bf{G}}_{\rm{A}} \in \mathcal{G},
\end{align}
where $f(\cdot)$ is a matrix monotone increasing function \cite{XingTSP201501}, the sets $\mathcal{F}$ and $\mathcal{G}$ are the feasible analog precoder and analog processor sets satisfying the constant-modulus constraints, and $ P $ denotes the maximum transmit power at the source.
\subsection{Specific Objective Functions}
There are many ways to choose the matrix monotone increasing function. In this subsection, we will investigate the properties of different objective functions of the MSE matrix in (\ref{MSE_general_simple}).
One group of matrix monotone increasing functions can be expressed as
\begin{align}
\label{f_1}
&f_1\left(
{\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},
{\bf{F}}_{\rm{D}},{\bf{B}})
\right)\nonumber \\
&=f_{\rm{Schur}} \left({\bf{d}}(
{\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},
{\bf{F}}_{\rm{D}},{\bf{B}}))
\right),
\end{align}where ${\bf{d}}(
{\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},
{\bf{F}}_{\rm{D}},{\bf{B}}))$ is a vector consisting of the diagonal elements of the matrix ${\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},
{\bf{F}}_{\rm{D}},{\bf{B}})$ and $f_{\rm{Schur}}(\cdot)$ is a function of a vector satisfying one of the following four properties discussed in Appendix~\ref{Appendix_A}:
\begin{enumerate}
\item Multiplicatively Schur-convex
\item Multiplicatively Schur-concave
\item Additively Schur-convex
\item Additively Schur-concave.
\end{enumerate}
Many widely used metrics can be regarded as a special case of this group of functions \cite{Palomar03,majorizationTHP2008,XingJSAC2012}.
\noindent \textbf{Conclusion 1:} For the linear transceiver, the feedback matrix ${\bf{B}}$ in (\ref{General_Transceiver}) is an all-zero matrix, i.e., $ {\bf B}^{\rm{opt}} = {\bf 0} $.
For the nonlinear transceiver, from Appendix~\ref{Appendix_Optimal_B}, the optimal feedback matrix ${\bf{B}}$ for $f_1(\cdot)$ is
\begin{align}
\label{optimal_B}
{\bf{B}}^{\rm{opt}}
={\rm{diag}}\{[[{\bf{L}}]_{1,1},\cdots,[{\bf{L}}]_{L,L}]^{\rm{T}}\}
{\bf{L}}^{-1}-{\bf{I}},
\end{align}
where ${\bf{L}}$ is the lower triangular factor of the following Cholesky decomposition
\begin{align}
\label{Definition_L}
{\boldsymbol{\Phi}}_{\rm{MSE}}^{\rm{L}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},{\bf{F}}_{\rm{D}})
={\bf{L}}{\bf{L}}^{\rm{H}}.
\end{align}
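To make the structure of (\ref{optimal_B}) concrete, consider a two-stream sketch with an arbitrary Cholesky factor ${\bf{L}}$ as in (\ref{Definition_L}):
\begin{align*}
{\bf{L}}=\begin{pmatrix} l_{11} & 0\\ l_{21} & l_{22}\end{pmatrix},\qquad
{\rm{diag}}\{[l_{11},l_{22}]^{\rm{T}}\}{\bf{L}}^{-1}
=\begin{pmatrix} 1 & 0\\ -l_{21}/l_{11} & 1\end{pmatrix},\qquad
{\bf{B}}^{\rm{opt}}=\begin{pmatrix} 0 & 0\\ -l_{21}/l_{11} & 0\end{pmatrix},
\end{align*}
which is strictly lower triangular, as required of a feedback matrix.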
It has been proved in \cite{XingTSP201501} and \cite{XingTSP201502} that for nonlinear transceiver design each data stream has the same performance if $f_{\rm{Schur}}(\cdot)$ in (\ref{f_1}) is multiplicatively Schur-convex. On the other hand, if $f_{\rm{Schur}}(\cdot)$ in (\ref{f_1}) is multiplicatively Schur-concave, for nonlinear transceiver design the objective function includes geometrically weighted signal-to-interference-plus-noise-ratio (SINR) maximization as its special case.
If $f_{\rm{Schur}}(\cdot)$ in (\ref{f_1}) is additively Schur-convex, the objective function includes the maximum MSE minimization and the minimum BER with the same constellation on each data stream as special cases. If $f_{\rm{Schur}}(\cdot)$ in (\ref{f_1}) is additively Schur-concave, the objective function includes weighted MSE minimization as its special case. Additively Schur-convex or Schur-concave functions are usually used for linear transceivers (${\bf{B}}={\bf{0}}$ in (\ref{f_1})), since closed-form solutions can be obtained in this case.
Besides the above group of matrix monotone increasing functions, we can choose objectives that reflect capacity and MSE for linear transceivers.
Capacity is one of the most popular performance metrics in MIMO transceiver optimization. It can be expressed in terms of the MSE matrix via the well-known relationship between the MSE matrix and capacity \cite{XingTSP201501}, i.e., $ C = -{\rm log}| {\bf \Phi}_{\rm MSE} | $. Then, the objective can be given as
\begin{align}
& f_{\rm 2} (\cdot)
= {\rm{log}}|{\boldsymbol{\Phi}}_{\rm{MSE}}^{\rm{L}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},{\bf{F}}_{\rm{D}})|.
\label{obj-cap}
\end{align}MSE is another widely used performance metric that demonstrates how accurately a signal can be recovered. The corresponding weighted MSE minimization objective is
\begin{align}
& f_{\rm 3} (\cdot)
= {\rm{Tr}}\left[{\bf{A}}^{\rm{H}}
{\boldsymbol{\Phi}}_{\rm{MSE}}^{\rm{L}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},{\bf{F}}_{\rm{D}}){\bf{A}}\right],
\label{obj-wmse}
\end{align}
where $ {\bf A} $ is a general, not necessarily diagonal, weight matrix, although it is often diagonal in applications.
\subsection{Hybrid MIMO Transceiver Optimization}
Denote
\begin{align}
{\boldsymbol{\Pi}}_{\rm{L}} & =
( {\bf{G}}_{\rm{A}} {\bf{R}}_{\rm{n}} {\bf{G}}_{\rm{A}}^{\rm{H}} )^{-1/2}{\bf{G}}_{\rm{A}}{\bf{R}}_{\rm{n}}^{1/2}, \nonumber \\
{\boldsymbol{\Pi}}_{\rm{R}} & = {\bf{F}}_{\rm{A}} ( {\bf{F}}_{\rm{A}}^{\rm{H}} {\bf{F}}_{\rm{A}} )^{-\frac{1}{2}}, \nonumber
\end{align}
and
\begin{align}
{\bf{\tilde F}}_{\rm{D}} & = ( {\bf{F}}_{\rm{A}}^{\rm{H}} {\bf{F}}_{\rm{A}} )^{\frac{1}{2}}{\bf{ F}}_{\rm{D}}{\bf{Q}}^{\rm{H}},
\label{def-PI}
\end{align}
where ${\bf{Q}}$ is a unitary matrix to be determined by digital transceiver optimization in the next section. Then (\ref{matrix_SNR}) can be rewritten as
\begin{align} {\boldsymbol{\Gamma}}({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{D}},{\bf{F}}_{\rm{A}})
={\bf{Q}}^{\rm{H}}{\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}){\bf{Q}},
\label{eq-SNR}
\end{align}
where
\begin{align}
& {\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}})
= {\bf{\tilde F}}_{\rm{D}}^{\rm{H}}{\boldsymbol{\Pi}}_{\rm{R}}^{\rm{H}}
{\bf{H}}^{\rm{H}}{\bf{R}}_{\rm{n}}^{-1/2}
{\boldsymbol{\Pi}}_{\rm{L}}^{\rm{H}}
{\boldsymbol{\Pi}}_{\rm{L}} {\bf{R}}_{\rm{n}}^{-1/2}{\bf{H}}{\boldsymbol{\Pi}}_{\rm{R}}{\bf{\tilde F}}_{\rm{D}}.
\end{align}
For all the objective functions above, the optimal ${\bf{B}}$ is a function of ${\boldsymbol{\Gamma}}({\bf{G}}_{\rm{A}},
{\bf{F}}_{\rm{D}},{\bf{F}}_{\rm{A}})$, as demonstrated by (\ref{optimal_B}) for $f_{\rm{Schur}}(\cdot)$ in (\ref{f_1}). From (\ref{eq-SNR}), we can conclude that the optimal ${\bf{B}}$ is a function of ${\bf{Q}}^{\rm{H}}{\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}){\bf{Q}}$.
Therefore, using (\ref{MSE_Matrix_Linear}) and (\ref{eq-SNR}), the objective function of (\ref{General_Transceiver}) can be expressed in terms of ${\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}})$ as
\begin{align}
\label{function_function}
& f \left( {\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}},{\bf{F}}_{\rm{A}},{\bf{F}}_{\rm{D}},{\bf{B}}^{\rm{opt}}) \right) \nonumber \\
= & f \big( ({\bf{I}}+{\bf{B}}^{\rm opt}) ({\bf I} + {\bf{Q}}^{\rm{H}}{\boldsymbol{\tilde \Gamma}} ({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}) {\bf{Q}} )^{-1} ({\bf{I}}+{\bf{B}}^{\rm opt})^{\rm{H}} \big) \nonumber \\
\triangleq & f_{{\rm S}} \left({\bf{Q}}^{\rm{H}}{\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}){\bf{Q}}\right).
\end{align}
After introducing ${\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}})$ and a new auxiliary matrix $ {\bf Q} $, the objective function is transferred into $ f_{{\rm S}}(\cdot)$ rather than $ f(\cdot) $. Note that this new function notation, $ f_{{\rm S}}(\cdot)$, is defined only for notational simplicity and it explicitly expresses the objective as a function of matrix variables ${\bf Q}$ and ${\boldsymbol{\tilde \Gamma}}$.
Therefore, the optimization problem in (\ref{General_Transceiver}) is further rewritten into the following one
\begin{align}
\label{unified_transceiver}
\min_{{\bf{Q}},{\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}} \ & f_{{\rm S}} \left({\bf{Q}}^{\rm{H}}{\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}){\bf{Q}}\right) \nonumber \\
{\rm{s.t.}} \ \ \ \ \ \ & {\rm{Tr}}({\bf{\tilde F}}_{\rm{D}}{\bf{\tilde F}}_{\rm{D}}^{\rm{H}})\le P \nonumber \\
& {\bf{F}}_{\rm{A}} \in \mathcal{F}, \ \ {\bf{G}}_{\rm{A}} \in \mathcal{G}.
\end{align} We will subsequently discuss in detail how to solve the optimization problem (\ref{unified_transceiver}) with respect to ${\bf{Q}}$, ${\bf{G}}_{\rm{A}}$, ${\bf{\tilde F}}_{\rm{D}}$, and ${\bf{F}}_{\rm{A}}$. In (\ref{unified_transceiver}), ${\bf{B}}$ has been formulated as a function of ${\bf{Q}}$, ${\bf{G}}_{\rm{A}}$, ${\bf{\tilde F}}_{\rm{D}}$, and ${\bf{F}}_{\rm{A}}$. Once ${\bf{Q}}$, ${\bf{G}}_{\rm{A}}$, ${\bf{\tilde F}}_{\rm{D}}$, and ${\bf{F}}_{\rm{A}}$ are computed, the optimal ${\bf{B}}$ can be directly derived based on (\ref{optimal_B}).
\section{Digital Transceiver Optimization}
In the following, we focus on the digital transceiver optimization for the optimization problem (\ref{unified_transceiver}). More specifically, we first derive the optimal unitary matrix ${\bf{Q}}$ and then find the optimal ${\bf{\tilde F}}_{\rm{D}}$.
\subsection{Optimal $ {\bf{Q}} $}
At the beginning of this section, two fundamental definitions are given based on the following eigenvalue decomposition (EVD) and SVD
\begin{align}
\label{EVD_SVD}
&{\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}})={\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}{\boldsymbol \Lambda}_{{\boldsymbol{\tilde \Gamma}}}{\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}^{\rm{H}} \nonumber \\
&{\bf{A}}={\bf{U}}_{{\bf{A}}}{\boldsymbol \Lambda}_{{\bf{A}}}{\bf{V}}_{{\bf{A}}}^{\rm{H}},
\end{align}
where $ {\boldsymbol{\Lambda}}_{{\boldsymbol{\tilde \Gamma}}} $ and $ {\boldsymbol{\Lambda}}_{{\bf{A}}} $ denote diagonal matrices with the diagonal elements arranged in nonincreasing order.
Denote by ${\bf{U}}_{\rm{GMD}}$ the unitary matrix that makes the lower triangular matrix ${\bf{L}}$ in (\ref{Definition_L}) have identical diagonal elements. It has been shown in \cite{Palomar03,XingTSP201501,XingTSP201502} that the optimal ${\bf{Q}}$ for the first group of matrix-monotonic functions can be expressed as
\begin{align}
\label{optimal_Q_1}
{\bf{Q}}^{\rm{opt}} = \left\{ {\begin{array}{l}
{{\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}{\bf{U}}_{\text{GMD}}^{\rm{H}}}
\ \ \text{if} \ f_{\rm{Schur}}(\cdot) \ \text{is multiplicatively Schur-convex} \\
{{\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}} \ \ \ \ \ \ \ \ \ \text{if} \ f_{\rm{Schur}}(\cdot) \ \text{is multiplicatively Schur-concave} \\
{{\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}{\bf{U}}_{\text{DFT}}^{\rm{H}}}
\ \ \ \text{if} \ f_{\rm{Schur}}(\cdot) \ \text{is additively Schur-convex}
\\
{{\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}} \ \ \ \ \ \ \ \ \ \text{if} \ f_{\rm{Schur}}(\cdot) \ \text{is additively Schur-concave}.
\end{array}} \right.
\end{align}
The above results are obtained by directly manipulating the objective function $f(\cdot)$ in \eqref{function_function}, and thus the optimal ${\bf{Q}}$ varies with the matrix-monotone increasing function in (\ref{General_Transceiver}).
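To see why ${\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}$ appears in every case of (\ref{optimal_Q_1}), note that this choice diagonalizes the inner matrix:
\begin{align*}
{\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}^{\rm{H}}{\boldsymbol{\tilde \Gamma}}{\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}
={\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}^{\rm{H}}\big({\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}{\boldsymbol \Lambda}_{{\boldsymbol{\tilde \Gamma}}}{\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}^{\rm{H}}\big){\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}
={\boldsymbol \Lambda}_{{\boldsymbol{\tilde \Gamma}}},
\end{align*}
which decouples the data streams; the additional rotations ${\bf{U}}_{\text{GMD}}^{\rm{H}}$ and ${\bf{U}}_{\text{DFT}}^{\rm{H}}$ in the Schur-convex cases then balance the performance across streams.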
For the capacity maximization in (\ref{obj-cap}), the objective function of (\ref{unified_transceiver}) can be written as
\begin{align}
\label{Q_X_Obj_1}
& f_{\rm S, 2}(\cdot) = -{\rm{log}} |{\bf{Q}}^{\rm{H}} {\boldsymbol{\tilde \Gamma}} ({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}) {\bf{Q}} + {\bf{I}}|.
\end{align}
Since the function in (\ref{Q_X_Obj_1}) is independent of $ {\bf Q} $ as long as it is a unitary matrix, the optimal $ {\bf Q} $, namely $ {\bf Q}^{\rm opt} $, can be any unitary matrix with proper dimension.
For the weighted MSE minimization given by (\ref{obj-wmse}), the objective function of (\ref{unified_transceiver}) can be rewritten as
\begin{align}
f_{\rm S, 3} (\cdot) & =
{\rm{Tr}} [{\bf{A}}^{\rm{H}} ({\bf{Q}}^{\rm{H}} {\boldsymbol{\tilde \Gamma}} ({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}) {\bf{Q}}
+ {\bf{I}})^{-1} {\bf{A}}].
\end{align}
Based on the EVD and SVD defined in (\ref{EVD_SVD}) and the matrix inequality in Appendix~\ref{Appendix_Inequality}, the optimal ${\bf{Q}}$ is
\begin{align}
\label{optimal_Q_3}
{\bf{Q}}^{\rm opt} = {\bf{U}}_{{\boldsymbol{\tilde \Gamma}}}{\bf{ U}}_{{\bf{A}}}^{\rm{H}}.
\end{align}
We have to stress that it is still hard to find a closed-form expression for the optimal ${\bf{Q}}$ for an arbitrary function $f(\cdot)$. However, most of the meaningful and popular metric functions have been shown to belong to one of the above function families, for which closed-form expressions for the optimal ${\bf{Q}}$ are available.
\subsection{Optimal ${\bf{\tilde F}}_{\rm{D}}$}
After substituting the optimal ${\bf{Q}}$ into the objective function of (\ref{unified_transceiver}),
the objective function becomes a function of the eigenvalues of ${\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}}, {\bf{\tilde F}}_{\rm{D}}, {\bf{F}}_{\rm{A}})$, i.e.,
\begin{align}
& f_{\rm{S}}\left({\bf{Q}}^{\rm{H}}{\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}){\bf{Q}}\right)
\triangleq f_{\rm{E}} \left({\boldsymbol\lambda} \Big( {\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}}, {\bf{\tilde F}}_{\rm{D}}, {\bf{F}}_{\rm{A}}) \Big) \right),
\label{obj-Opt-Fd}
\end{align}
where ${\boldsymbol \lambda}({\boldsymbol{X}})=[\lambda_1({\boldsymbol{X}}), \cdots, \lambda_L({\boldsymbol{X}})]^{\rm{T}}$ and $\lambda_i({\boldsymbol{X}})$ is the $ i $th largest eigenvalue of ${\boldsymbol{X}}$. It is worth highlighting that for $f_{{\rm{S}},1}(\cdot)$ and $f_{{\rm{S}},3}(\cdot)$, (\ref{obj-Opt-Fd}) follows directly from (\ref{optimal_Q_1}) and (\ref{optimal_Q_3}). For $f_{{\rm{S}},2}(\cdot)$,
since the optimal ${\bf{Q}}$ can be an arbitrary unitary matrix, minimizing $f_{{\rm{S}},2}(\cdot)$ is mathematically equivalent to minimizing $-\sum_{l=1}^L{\rm{log}}\left(1+{\lambda}_l ( {\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}}, {\bf{\tilde F}}_{\rm{D}}, {\bf{F}}_{\rm{A}}))\right)$ for any ${\bf{Q}}$. In other words, (\ref{obj-Opt-Fd}) always holds for the functions discussed above.
Note that the definition in (\ref{obj-Opt-Fd}) follows from the fact that the unitary matrix in ${\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}}, {\bf{\tilde F}}_{\rm{D}}, {\bf{F}}_{\rm{A}})$ has been removed by the optimal ${\bf{Q}}$, so that only its eigenvalues remain to be optimized.
Therefore, the unified hybrid MIMO transceiver optimization in (\ref{unified_transceiver}) is simplified to
\begin{align}
\label{unified_transceiver_eigenvalue}
\min_{{\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}} \ \ & f_{\rm{E}}\left({\boldsymbol\lambda}\left({\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}})\right)\right) \nonumber \\
{\rm{s.t.}} \ \ \ & {\rm{Tr}}({\bf{\tilde F}}_{\rm{D}}{\bf{\tilde F}}_{\rm{D}}^{\rm{H}})\le P \nonumber \\
& {\bf{F}}_{\rm{A}} \in \mathcal{F}, \ \ {\bf{G}}_{\rm{A}} \in \mathcal{G}.
\end{align}
By applying the obtained results of ${\bf Q}^{\rm opt}$ and the fact that $ f(\cdot) $ is a matrix-monotone increasing function, it can be concluded from the discussion in \cite{Palomar03,XingTSP201502} that $ f_{{\rm E}}(\cdot) $ is a vector-decreasing function for $f_{{\rm{S}},1}(\cdot)$. Moreover, substituting the optimal ${\bf{Q}}$ into the objective function of (\ref{unified_transceiver}), for
$f_{{\rm{S}},2}(\cdot)$ and $f_{{\rm{S}},3}(\cdot)$ we have
\begin{align}
f_{{\rm{E}}}(\cdot)&=-\sum_{l=1}^L{\rm{log}}\left(1+{\lambda}_l ( {\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}}, {\bf{\tilde F}}_{\rm{D}},
{\bf{F}}_{\rm{A}}))\right), \\
f_{{\rm{E}}}(\cdot)&=\sum_{l=1}^L\frac{\lambda_l({\bf{A}})}{1+{\lambda}_l ( {\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}}, {\bf{\tilde F}}_{\rm{D}}, {\bf{F}}_{\rm{A}}))},
\end{align}respectively, which implies that $ f_{{\rm E}}(\cdot) $ is also vector-decreasing. In a nutshell, based on ${\bf Q}^{\rm opt}$, we conclude that $ f_{{\rm E}}(\cdot) $ in (\ref{unified_transceiver_eigenvalue}) is a vector-decreasing function. Thus, from (\ref{unified_transceiver_eigenvalue}), the optimization amounts to maximizing the eigenvalues of ${\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}})$. Each eigenvalue of ${\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}})$ corresponds to the SNR of an eigenchannel.
In problem (\ref{unified_transceiver_eigenvalue}), the variables are still matrix variables. To simplify the optimization, we first derive the diagonalizable structure of the optimal matrix variables. Based on the derived optimal structure, the dimensionality of the optimization problem is reduced significantly.
In order to derive the optimal structure and to avoid tedious case-by-case discussion, we consider a multi-objective optimization problem in the following. Its Pareto optimal solution set contains all the optimal solutions of different types of transceiver optimizations.
In particular, as discussed in \cite{XingTSP201501}, the optimal solution of problem (\ref{unified_transceiver_eigenvalue}) with a specific objective function, i.e., $f_{{\rm{S}},1}(\cdot)$, $f_{{\rm{S}},2}(\cdot)$, or $f_{{\rm{S}},3}(\cdot)$, must lie in the Pareto optimal solution set of the following vector optimization (multi-objective) problem
\begin{align}
\label{MM_Eigen}
& \ \ \max_{{\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}} \ \ {\boldsymbol\lambda}\left({\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}})\right) \nonumber \\
& \ \ \ \ \ \ \ {\rm{s.t.}} \ \ \ \ \ \ {\rm{Tr}}({\bf{\tilde F}}_{\rm{D}}{\bf{\tilde F}}_{\rm{D}}^{\rm{H}})\le P \nonumber \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\bf{F}}_{\rm{A}} \in \mathcal{F}, \ \ {\bf{G}}_{\rm{A}} \in \mathcal{G}.
\end{align}Equivalently,
the vector optimization problem in (\ref{MM_Eigen}) can be rewritten as the following matrix-monotonic optimization problem
\begin{align}
\label{MM_SNR}
& \ \ \max_{{\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}} \ \ {\boldsymbol{\tilde \Gamma}}({\bf{G}}_{\rm{A}},{\bf{\tilde F}}_{\rm{D}},{\bf{F}}_{\rm{A}}) \nonumber \\
& \ \ \ \ \ \ \ {\rm{s.t.}} \ \ \ \ \ {\rm{Tr}}({\bf{\tilde F}}_{\rm{D}}{\bf{\tilde F}}_{\rm{D}}^{\rm{H}})\le P \nonumber \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\bf{F}}_{\rm{A}} \in \mathcal{F}, \ \ {\bf{G}}_{\rm{A}} \in \mathcal{G}.
\end{align}
It is worth noting that optimization (\ref{MM_SNR}) aims at maximizing a positive semi-definite matrix. Generally speaking, maximizing a positive semi-definite matrix involves two tasks, i.e., maximizing its eigenvalues and choosing a proper EVD unitary matrix. Note that in (\ref{MM_SNR}) there is no need to optimize the EVD unitary matrix, because the constraints remain satisfied if only the EVD unitary matrix changes. Using the definitions in (\ref{def-PI}) and given the analog precoder ${\bf{F}}_{\rm{A}}$ and analog processor ${\bf{G}}_{\rm{A}}$, problem (\ref{MM_SNR}) is a standard matrix-monotonic optimization with respect to ${\bf{\tilde F}}_{\rm{D}}$. It follows that
\begin{align}\label{optProblemF}
& \ \ \max_{{\bf{\tilde F}}_{\rm{D}}} \ \ {\bf{\tilde F}}_{\rm{D}}^{\rm{H}}{\boldsymbol{\Pi}}_{\rm{R}}^{\rm{H}}{\bf{H}}^{\rm{H}}{\bf{R}}_{\rm{n}}^{-1/2}
{\boldsymbol{\Pi}}_{\rm{L}}^{\rm{H}}
{\boldsymbol{\Pi}}_{\rm{L}} {\bf{R}}_{\rm{n}}^{-1/2}{\bf{H}}{\boldsymbol{\Pi}}_{\rm{R}}{\bf{\tilde F}}_{\rm{D}} \nonumber \\
& \ \ \ {\rm{s.t.}} \ \ \ \ {\rm{Tr}}({\bf{\tilde F}}_{\rm{D}}{\bf{\tilde F}}_{\rm{D}}^{\rm{H}})\le P.
\end{align}
Based on the matrix-monotonic optimization theory developed in \cite{XingTSP201501}, the optimal solution of \eqref{optProblemF} satisfies the following diagonalizable structure.
\noindent \textbf{Conclusion 2:} Defining the following SVD,
\begin{align}
&{\boldsymbol{\Pi}}_{\rm{L}}{\bf{R}}_{\rm{n}}^{-1/2}{\bf{H}}{\boldsymbol{\Pi}}_{\rm{R}}
=
{\bf{U}}_{{\boldsymbol{\mathbb{H}}}}
{\boldsymbol \Lambda}_{\boldsymbol{\mathbb{H}}}{\bf{V}}_{\boldsymbol{\mathbb{H}}}^{\rm{H}},
\end{align} with the diagonal elements of ${\boldsymbol \Lambda}_{\boldsymbol{\mathbb{H}}}$ in decreasing order,
the optimal ${\bf{\tilde F}}_{\rm{D}}$ satisfies
\begin{align}
{\bf{\tilde F}}_{\rm{D}}^{\rm opt} = {\bf{V}}_{\boldsymbol{\mathbb{H}}} {\boldsymbol \Lambda}_{\bf{F}} {\bf{U}}_{\rm{Arb}}^{\rm{H}},
\end{align}
where ${\boldsymbol \Lambda}_{\bf{F}}$ is a diagonal matrix determined by the specific objective functions, e.g., sum MSE, capacity maximization, etc., as discussed in the previous section.
The unitary matrix ${\bf{U}}_{\rm{Arb}}$ can be an arbitrary unitary matrix.
Thus, by using Conclusion 2, the optimal ${\bf{\tilde F}}_{\rm{D}}$ can be obtained by performing basic manipulations, as in \cite{XingTSP201501}, to optimize ${\boldsymbol \Lambda}_{\bf{F}}$ for a given objective function.
As a result, the remaining key task is to optimize the analog precoder and processor, which is the focus of the following section.
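As a minimal illustration of Conclusion 2, the following NumPy sketch assembles ${\bf{\tilde F}}_{\rm{D}}^{\rm opt}$ from the SVD of a placeholder effective channel. The uniform power loading used for ${\boldsymbol \Lambda}_{\bf{F}}$ is merely a stand-in for the objective-specific (e.g., water-filling) solution, and the matrix sizes are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
N, L, P = 8, 4, 1.0          # illustrative sizes and power budget

# Placeholder for the effective channel Pi_L R_n^{-1/2} H Pi_R.
H_eff = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
U, s, Vh = np.linalg.svd(H_eff)   # singular values in decreasing order

# Uniform allocation over the L strongest eigenchannels; a stand-in
# for the objective-specific diagonal matrix Lambda_F.
lam_F = np.sqrt(P / L) * np.ones(L)
U_arb = np.eye(L)                 # any unitary matrix is admissible

F_D = Vh.conj().T[:, :L] @ np.diag(lam_F) @ U_arb.conj().T
assert np.trace(F_D @ F_D.conj().T).real <= P + 1e-9  # power constraint
\end{verbatim}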
\section{Analog Transceiver Optimization}
Based on the optimal solution of digital precoder given in the previous section, we optimize the analog precoder and processor under constant-modulus constraints. In the following, the optimal structure of the analog transceiver is first derived. Different from existing works, we show that the analog precoder and processor design can be decoupled by using the optimal transceiver structure. This optimal structure greatly simplifies the involved analog transceiver design.
For the analog transceiver optimization in (\ref{MM_SNR}) and using (\ref{def-PI}), we have the following matrix-monotonic optimization problem
\begin{align}
& \ \ \max_{{\bf{F}}_{\rm{A}},{\bf{G}}_{\rm{A}}} \ \ {\bf{\tilde F}}_{\rm{D}}^{\rm{H}}{\boldsymbol{\Pi}}_{\rm{R}}^{\rm{H}}{\bf{H}}^{\rm{H}}{\bf{R}}_{\rm{n}}^{-1/2}
{\boldsymbol{\Pi}}_{\rm{L}}^{\rm{H}}
{\boldsymbol{\Pi}}_{\rm{L}} {\bf{R}}_{\rm{n}}^{-1/2}{\bf{H}}{\boldsymbol{\Pi}}_{\rm{R}}{\bf{\tilde F}}_{\rm{D}} \nonumber \\
& \ \ \ {\rm{s.t.}} \ \ \ \ \ \ {\bf{F}}_{\rm{A}} \in \mathcal{F}, \ \ {\bf{G}}_{\rm{A}} \in \mathcal{G}.
\label{eq-analog-problem}
\end{align}
Denote the SVDs
\begin{align}
\label{effective_channel} {\bf{R}}_{\rm{n}}^{-1/2}{\bf{H}}& \triangleq {\bf{U}}_{{\boldsymbol{\mathcal{H}}}}
{\boldsymbol \Lambda}_{\boldsymbol{\mathcal{H}}}{\bf{V}}_{\boldsymbol{\mathcal{H}}}^{\rm{H}},\\
{\bf{R}}_{\rm{n}}^{1/2} {\bf{G}}_{\rm{A}}^{\rm{H}}& \triangleq {\bf{U}}_{{\bf{R}}{\bf{G}}} {\boldsymbol{\Lambda}}_{{\bf{R}}{\bf{G}}}{\bf{V}}_{{\bf{R}}{\bf{G}}}^{\rm{H}}.
\end{align} In Appendix~\ref{Appendix_Analog_Transceiver}, we prove the following conclusion on the optimal structure of ${\bf{F}}_{\rm{A}}$ and ${\bf{G}}_{\rm{A}}$.
\noindent \textbf{Conclusion 3:} Let the SVD of ${\bf{F}}_{\rm{A}}$ be
\begin{align}
{\bf{F}}_{\rm{A}} \triangleq {\bf{U}}_{{\bf{F}}_{\rm{A}}}
{\boldsymbol{\Lambda}}_{{\bf{F}}_{\rm{A}}}{\bf{V}}_{{\bf{F}}_{\rm{A}}}^{\rm{H}}.
\end{align}
The singular values in ${\boldsymbol{\Lambda}}_{{\bf{F}}_{\rm{A}}}$ do not affect the objective function in \eqref{eq-analog-problem}, and the unitary matrix ${\bf{U}}_{{\bf{F}}_{\rm{A}}}$ for the optimal ${\bf{F}}_{\rm{A}}$ satisfies
\begin{align}
\label{Analog_Precoder_Optimization} [{\bf{U}}_{{\bf{F}}_{\rm{A}}}]_{:,1:L}^{\rm{opt}}={\rm{arg\,max}}
\{
\| [{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} [{\bf{U}}_{{\bf{F}}_{\rm{A}}}]_{:,1:L}^{\rm{H}}\|_{\rm{F}}^2
\}.
\end{align}
On the other hand, recall the SVD of $ {\bf{R}}_{\rm{n}}^{1/2} {\bf{G}}_{\rm{A}}^{\rm{H}} $ defined above, namely $ {\bf{R}}_{\rm{n}}^{1/2} {\bf{G}}_{\rm{A}}^{\rm{H}} = {\bf{U}}_{{\bf{R}}{\bf{G}}} {\boldsymbol{\Lambda}}_{{\bf{R}}{\bf{G}}}{\bf{V}}_{{\bf{R}}{\bf{G}}}^{\rm{H}} $.
The singular values in ${\boldsymbol{\Lambda}}_{{\bf{R}}{\bf{G}}}$ do not affect the objective in \eqref{eq-analog-problem}, and the unitary matrix ${\bf{U}}_{{\bf{R}}{\bf{G}}}$ for the optimal ${\bf{G}}_{\rm{A}}$ satisfies
\begin{align}
[{\bf{U}}_{{\bf{R}}{\bf{G}}}]_{:,1:L}^{\rm{opt}}=
{\rm{arg\,max}}\{ \|
[{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} [{\bf{U}}_{{\bf{R}}{\bf{G}}}]_{:,1:L}^{\rm{H}}\|_{\rm{F}}^2\}.
\end{align}
Based on the optimal structure given in Conclusion 3, two kinds of algorithms are proposed in the following to compute the analog precoder and processor. The first is based on phase projection and provides better performance, while the second, based on heuristic random selection, has low complexity.
\subsection{Phase Projection Based Algorithm}
\noindent \textbf{Analog Precoder Design}
From Conclusion 3, the optimal analog precoder should select the $L$ best eigenchannels. It is challenging to directly optimize ${\bf{F}}_{\rm{A}}$ based on (\ref{Analog_Precoder_Optimization}) because of the SVD of a constant-modulus matrix.
Alternatively, we resort to finding a matrix in the constant-modulus space with the minimum distance to the space spanned by $[{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} $. Then, the corresponding optimization problem of analog precoder design can be formulated as
\begin{align} &\min_{{\bf{F}}_{\rm{A}},{\mathrm{\mathbf{\Lambda}_A}},{\bf{Q}}_{\rm{A}}} \ \ \|[{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} {\mathrm{\mathbf{\Lambda}_A}}{\bf{Q}}_{\rm{A}}-{\bf{F}}_{\rm{A}}\|_{\rm{F}}^2
\nonumber \\
& \ \ \ \ {\rm{s.t.}} \ \ \ \ \ \ {\bf{Q}}_{\rm{A}}{\bf{Q}}_{\rm{A}}^{\rm{H}}={\bf{I}}\nonumber \\
& \ \ \ \ \ \ \ \ \ \ \ \ \ \ {\bf{F}}_{\rm{A}} \in \mathcal{F}.
\label{eq-pre-org}
\end{align}
Different from the existing work \cite{Y.C.Eldar2017}, the diagonal matrix $ \mathrm{\mathbf{\Lambda}_A} $ and the unitary matrix $ \mathrm{\mathbf{Q}_A} $ in our work are jointly optimized to
make $ {\bf F}_{\rm A} $ as close as possible, in terms of the Frobenius norm, to the space spanned by $ [{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} $.
As there is no constraint on the diagonal matrix $ \mathrm{\mathbf{\Lambda}_A} $, given the matrices $ {\bf Q}_{\rm A} $ and $ {\bf F}_{\rm A} $, the optimal $ \mathbf{\Lambda}_{\rm A} $ is
\begin{align}
\mathbf{\Lambda}_{\rm{A}}^{\rm{opt}}= \mathrm{diag} \Big\lbrace \Re \big( {\mathrm{\mathbf{Q}_A}} {\mathrm{\mathbf{F}_A^H}} [{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} \big) \Big\rbrace.
\label{eq-diagonal-matrix}
\end{align}
Then we rewrite the objective function in (\ref{eq-pre-org}) as
\begin{align}
\label{aa_MSE}
&\|[{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} \mathbf{\Lambda}_{\rm{A}}{\bf{Q}}_{\rm{A}}-{\bf{F}}_{\rm{A}}\|_{\rm{F}}^2 \nonumber \\
=&{\rm{Tr}}
([{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L}
\mathbf{\Lambda}_{\rm{A}}^{\rm{opt}}(\mathbf{\Lambda}_{\rm{A}}^{\rm{opt}})
^{\rm{H}}[{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L}^{\rm{H}})\nonumber \\
&
+{\rm{Tr}}({\bf{F}}_{\rm{A}}{\bf{F}}_{\rm{A}}^{\rm{H}})-2\Re\{{\rm{Tr}}({\bf{F}}_{\rm{A}}^{\rm{H}}[{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} \mathbf{\Lambda}_{\rm{A}}^{\rm{opt}}{\bf{Q}}_{\rm{A}})\}.
\end{align}To minimize (\ref{aa_MSE}) given ${\boldsymbol \Lambda}_{\rm{A}}$ and ${\bf{F}}_{\rm{A}}$, the term $\Re\{{\rm{Tr}}({\bf{F}}_{\rm{A}}^{\rm{H}}[{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} {\mathrm{\mathbf{\Lambda}_A}^{\rm{opt}}}{\bf{Q}}_{\rm{A}})\}$ should be maximized. By applying the matrix inequality \cite{inequalityMajorBook}, the optimal $ {\bf Q}_{\rm A} $ is
\begin{align}
\mathbf{Q}_{\mathrm{A}}^{\mathrm{opt}} = \mathbf{V}_{\mathrm{Q}} \mathbf{U}_{\mathrm{Q}}^\mathrm{H},
\label{eq-uni}
\end{align} where $\mathbf{V}_{\mathrm{Q}}$ and $\mathbf{U}_{\mathrm{Q}}$ are defined based on the following SVD
\begin{align}
{\bf{F}}_{\rm{A}}^\mathrm{H} \, [{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} {\mathbf{\Lambda}_\mathrm{A}} = \mathbf{U}_{\mathrm{Q}} \mathbf{\Sigma}_{\mathrm{Q}} \mathbf{V}_{\mathrm{Q}}^\mathrm{H}.
\end{align}
Finally, for given $ {\bf Q}_{\rm A} $ and $\mathbf{\Lambda}_\mathrm{A} $, the optimal analog precoder $ \mathrm{\mathbf{F}_A} $ is \cite{HanzoHybridTCOMM2016} \begin{align}
\label{phase_projection}
{\bf F}_{\rm A}^{\rm opt} = {\bf P}_{\mathcal F} \left( [ {\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} {\mathrm{\mathbf{\Lambda}_A}} {\bf{Q}}_{\rm{A}} \right),
\end{align}
where the phase projection ${\bf P}_{\mathcal F}( \mathrm{\mathbf{A}} )$ is defined as
\begin{align}
\left[ {\bf P}_{\mathcal F}( \mathrm{\mathbf{A}} ) \right]_{i,j} =
\begin{cases}
\left[ \mathrm{\mathbf{A}} \right]_{i,j} / | \left[ \mathrm{\mathbf{A}} \right]_{i,j} |, & \text{if} \ \left[ \mathrm{\mathbf{A}} \right]_{i,j} \neq 0 \\
1, & \text{otherwise}.
\end{cases}
\end{align}
Using (\ref{eq-diagonal-matrix}), (\ref{eq-uni}) and (\ref{phase_projection}), the phase projection based analog precoder optimization is proposed in Algorithm~\ref{alg-analog-design}.
\begin{algorithm}[t]
\caption{Analog Precoder Design}
\label{alg-analog-design}
\begin{algorithmic}[1]
\REQUIRE{ Right singular matrix of the equivalent channel, $ [{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} $, and the algorithm threshold $ \zeta $. }
\STATE{ Initialize ${\bf{F}}_{\rm{A}}$ with $\mathbf{P}_{\mathcal{F}} \{ [{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} \} $. }
\WHILE{ the decrement of the objective function in (\ref{eq-pre-org}) is larger than $ \zeta $}
\STATE{ Calculate ${\mathbf{\Lambda}_{\rm A}}$ based on (\ref{eq-diagonal-matrix}). }
\STATE{ Calculate $\mathbf{Q}_{\mathrm{A}} $ based on (\ref{eq-uni}). }
\STATE{ Calculate $\mathbf{F}_{\mathrm{A}}$ based on (\ref{phase_projection}). }
\STATE{ Update the decrement value of the objective function in (\ref{eq-pre-org}). }
\ENDWHILE
\RETURN{$ \mathbf{F}_{\mathrm{A}} $.}
\end{algorithmic}
\end{algorithm}
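For reference, a minimal Python/NumPy sketch of Algorithm~\ref{alg-analog-design} is given below. The function and variable names are our own, and the stopping rule mirrors the decrement test in the algorithm; this is an illustration rather than a definitive implementation.
\begin{verbatim}
import numpy as np

def phase_projection(A):
    """Elementwise constant-modulus projection P_F; zeros map to 1."""
    out = np.ones_like(A)
    nz = A != 0
    out[nz] = A[nz] / np.abs(A[nz])
    return out

def analog_precoder(V_L, zeta=1e-6, max_iter=200):
    """Alternating minimization of ||V_L Lam Q - F||_F^2 over
    (Lam, Q, F); V_L holds the first L columns of V_H (N x L)."""
    F = phase_projection(V_L)              # Step 1: initialization
    Q = np.eye(V_L.shape[1])
    prev = np.inf
    for _ in range(max_iter):
        # Optimal diagonal matrix, cf. (eq-diagonal-matrix).
        Lam = np.diag(np.real(np.diag(Q @ F.conj().T @ V_L)))
        # Optimal unitary Q via the SVD, cf. (eq-uni).
        Uq, _, Vqh = np.linalg.svd(F.conj().T @ V_L @ Lam)
        Q = Vqh.conj().T @ Uq.conj().T
        # Optimal constant-modulus F, cf. (phase_projection).
        F = phase_projection(V_L @ Lam @ Q)
        obj = np.linalg.norm(V_L @ Lam @ Q - F, 'fro') ** 2
        if prev - obj < zeta:              # decrement-based stopping
            break
        prev = obj
    return F
\end{verbatim}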
\noindent \textbf{Analog Processor Design}
Based on Conclusion 3, the optimal structure of the analog processor is similar to that of the analog precoder, but slightly more complicated in that the noise covariance is entangled with the analog processor formulation. In this case, the left singular matrix of $ {\bf{R}}_{\rm{n}}^{1/2} {\bf{G}}_{\rm{A}}^{\rm{H}} $ is required to match the first $ L $ columns of the left singular matrix of the effective channel, i.e., $ [{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} $. \begin{algorithm}[ht]
\caption{Iterative Analog Processor Design}
\label{alg-unit-mod}
\begin{algorithmic}[1]
\REQUIRE{ The matrix $ [{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} $, the unitary matrix $ \mathbf{Q}_{\mathrm{G}} $, the diagonal matrix $ \mathbf{\Lambda}_{\mathrm{G}} $, controlling factor $ \eta $, and convergent threshold $ \upsilon $. }
\STATE{ Compute $ \mathbf{W} $ in (\ref{QCQP_3}). }
\STATE{ Initialize constant-modulus processor as ${\mathbf{r}}_{(0)} = \frac{\sqrt{2}}{2}\mathbf{1} $. }
\WHILE{The decrement of the objective function in (\ref{QCQP_3}) is larger than $ \upsilon $}
\STATE{ Calculate $ \mathbf{P} $ using (\ref{para-tangent}) based on $ \mathbf{G}_{\mathrm{A}} $ computed in the previous iteration. }
\STATE{ Find out the optimal solution of (\ref{QCQP_3}) based on (\ref{eq-opt-QP}). }
\STATE{ Update the decrement of the objective function in (\ref{QCQP_3}). }
\ENDWHILE
\STATE{ Construct $ \mathbf{G}_{\mathrm{A}} $ based on the optimal solution in Step 5. }
\RETURN{ $ {\mathbf{G}}_{\mathrm{A}} $. }
\end{algorithmic}
\end{algorithm}Thus, analogous to the analog precoder design in (\ref{eq-pre-org}), we have the following optimization problem
\begin{align}
\min_{{\bf{G}}_{\rm{A}},{\mathrm{\mathbf{\Lambda}_G}},{\bf{Q}}_{\rm{G}}} \;\; & \big\Vert [{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L}
{\mathbf{\Lambda}_\mathrm{G}} {\bf{Q}}_{\rm{G}} - {\bf{R}}_{\rm{n}}^{1/2} {\bf{G}}_{\rm{A}}^\mathrm{H} \big\Vert_{\rm{F}}^2
\nonumber \\
{\rm{s.t.}} \quad\;\; & {\bf{Q}}_{\rm{G}}{\bf{Q}}_{\rm{G}}^{\rm{H}} = {\bf{I}}\nonumber \\
& {\bf{G}}_{\rm{A}} \in \mathcal{G}.
\label{eq-comb-org}
\end{align}
The optimization of the unitary matrix $ {\bf{Q}}_{\rm{G}} $ and the diagonal matrix $ {\mathbf{\Lambda}_\mathrm{G}} $ in (\ref{eq-comb-org}) is exactly the same as that for the analog precoder optimization. However, the optimization of the analog processor, ${\bf{G}}_{\rm{A}}$, in (\ref{eq-comb-org}) is different.
When the noise across different antennas is correlated, the analog processor design is more challenging than the analog precoder design. In order to overcome this challenge, problem (\ref{eq-comb-org}) is relaxed to minimizing an upper bound of the original objective function. Applying
\begin{align}
& \big\Vert [{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L}
{\mathbf{\Lambda}_\mathrm{G}} {\bf{Q}}_{\rm{G}} - {\bf{R}}_{\rm{n}}^{1/2} {\bf{G}}_{\rm{A}}^\mathrm{H} \big\Vert_{\mathrm{F}}^2 \notag \\
\le \;\; & \lambda_{\mathrm{max}}( \mathbf{R}_{\mathrm{n}} )\big\Vert {\bf{R}}_{\rm{n}}^{-1/2} [{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L}
{\mathbf{\Lambda}_\mathrm{G}} {\bf{Q}}_{\rm{G}} - {\bf{G}}_{\rm{A}}^\mathrm{H} \big\Vert_{\mathrm{F}}^2,
\end{align}the objective function of (\ref{eq-comb-org}) is relaxed to $ \big\Vert {\bf{R}}_{\rm{n}}^{-1/2} [{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} {\mathbf{\Lambda}_\mathrm{G}} {\bf{Q}}_{\rm{G}} - {\bf{G}}_{\rm{A}}^\mathrm{H} \big\Vert_{\mathrm{F}}^2$. Note that solving the relaxed problem is the same as the analog precoder design. This relaxation is tight when $ {\bf R}_{\rm n} = \sigma_n^2 {\bf I} $.
Nevertheless, this relaxation may result in some performance loss. Inspired by the work in \cite{tractableTSP2017}, an iterative algorithm is therefore also proposed to compute ${\bf{G}}_{\rm{A}}$, in which the constant-modulus constraints are asymptotically satisfied by iteratively updating an additional constraint. This iterative algorithm is given in Algorithm~\ref{alg-unit-mod}, and its detailed derivation is given in Appendix~\ref{Appendix_C}.
\subsection{Random Algorithm}
The proposed phase projection based analog transceiver design suffers from high computational complexity, which may prohibit its practical implementation. In order to reduce the complexity, we can
randomly generate analog precoder and processor matrices to avoid the heavy computations involved in the phase projection based algorithms.
In this random algorithm, we randomly select multiple matrices in the column or row space of ${\bf H}^{\rm{H}}$ and use their phase projections as the candidates for the analog transceiver design. Then the best candidate matrix is chosen according to some criterion.
Specifically, the random algorithm consists of three steps. First, a series of parameter matrices, denoted by $ \{\mathbf{R}_k\}$ and $\{{\mathbf{T}}_k\}$, are generated, whose elements are randomly drawn from a specific distribution, e.g., the uniform or Gaussian distribution. Second, a series of candidate analog precoder and processor matrices are computed from the parameter matrices; after computing ${\bf{H}}^{\rm{H}}{\bf{R}}_k$ and ${\mathbf{T}}_k{\bf{H}}^{\rm{H}}$, the constant-modulus candidate matrices are obtained via their phase projections. Finally, the analog precoder and processor are chosen from these candidates according to the determinant of the corresponding matrix-valued SNR. The procedure is detailed in Algorithm~\ref{alg-rand}.
\begin{algorithm}[t]
\caption{Random Algorithm for Analog Transceiver Design}
\label{alg-rand}
\begin{algorithmic}[1]
\REQUIRE{The number of transmit antennas $ N $, the number of RF-chains $ L $, the selection number $ K $, the probability density function $ f_\mathrm{trans} (x) $, and ${\bf{H}}$. }
\STATE{ Generate $ K $ parameter matrices, $ {\bf R}_{1},\ldots,{\bf R}_{K} \in \mathbb{C}^{ M \times L} $, whose elements are randomly generated based on $ f_\mathrm{trans} (x) $. }
\STATE{ Rotate the channel as $ \mathrm{\mathbf{H}^H} {\bf R}_{k} $. Calculate ${\bf F}_{k} = \mathrm{\mathbf{P}}_{\mathcal{F}} \left( \mathrm{\mathbf{H}^H} {\bf R}_{k} \right) $. }
\STATE{ ${\bf{F}}_{\rm{max}}={\arg \max}_{{\bf{F}}_i} \left\lvert { \bf F}_{i}^{\mathrm{H}} {\bf H}^{\mathrm{H}} \mathrm{ \mathbf{R}_n^{-1} } {\bf H}{\bf F}_{i} \right\rvert $. }
\STATE{ Generate $ K $ parameter matrices $ {\bf T}_{1},\ldots,{\bf T}_{K} \in \mathbb{C}^{ L \times N } $ randomly based on $ f_\mathrm{trans} (x) $. }
\STATE{ Rotate the channel as $ {\bf T}_{k} \mathrm{\mathbf{H}^H} $. Calculate $ {\bf G}_{k} = \mathrm{\mathbf{P}}_{\mathcal{F}} \left( {\bf T}_{k} \mathrm{\mathbf{H}^H} \right) $. }
\STATE{ $ {\bf{G}}_{\max}={\arg \max}_{{\bf{G}}_i}\bigl\lvert {\bf G}_{i} \mathrm{ \mathbf{R}_n^{-1/2} } {\bf H} {\bf H}^{\mathrm{H}} \mathrm{ \mathbf{R}_n^{-1/2} } {\bf G}_{i}^{\mathrm{H}} \bigr\rvert $.}
\RETURN{ $ {\bf F}_\mathrm{A} = {\bf F}_\mathrm{max} $, $ {\bf G}_\mathrm{A} = {\bf G}_\mathrm{max} $ }
\end{algorithmic}
\end{algorithm}
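The following NumPy sketch outlines Algorithm~\ref{alg-rand} under the assumption of real, uniformly distributed parameter matrices; the helper names are ours, and the determinant-based selection follows Steps 3 and 6.
\begin{verbatim}
import numpy as np

def pf(A):
    """Constant-modulus phase projection; zero entries map to 1."""
    mag = np.abs(A)
    return np.where(mag == 0, 1.0 + 0j,
                    A / np.where(mag == 0, 1.0, mag))

def random_analog_design(H, Rn, L, K=10, rng=None):
    """Random candidate selection for F_A and G_A; H is the M x N
    channel, Rn the M x M noise covariance, K the candidate count."""
    rng = rng or np.random.default_rng()
    M, N = H.shape
    Rn_inv = np.linalg.inv(Rn)
    w, U = np.linalg.eigh(Rn)
    Rn_mhalf = U @ np.diag(w ** -0.5) @ U.conj().T  # Hermitian Rn^{-1/2}

    def score_F(F):   # |F^H H^H Rn^{-1} H F|, Step 3
        return np.abs(np.linalg.det(
            F.conj().T @ H.conj().T @ Rn_inv @ H @ F))

    def score_G(G):   # |G Rn^{-1/2} H H^H Rn^{-1/2} G^H|, Step 6
        S = G @ Rn_mhalf @ H
        return np.abs(np.linalg.det(S @ S.conj().T))

    F_cand = [pf(H.conj().T @ rng.uniform(size=(M, L)))
              for _ in range(K)]
    G_cand = [pf(rng.uniform(size=(L, N)) @ H.conj().T)
              for _ in range(K)]
    return max(F_cand, key=score_F), max(G_cand, key=score_G)
\end{verbatim}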
\section{Simulation Results}
In this section, numerical results are provided to assess the performance of the proposed hybrid transceiver designs. As our algorithms are applicable to any frequency band, both the microwave and mmWave frequency bands are simulated.
In addition, quantization of phase shifters is also taken into account.
\begin{figure}[!t]
\centering
\includegraphics[width = .4\textwidth, trim={0 1 1 1}, clip]{simHybTotal-Mark-mmW-32-16-4-4d-x2000.eps}
\caption{Spectral efficiency comparison of 5 different hybrid transceiver design methods. Here, a $ 32 \times 16 $ mmWave channel model is adopted in the simulation. Both the transmitter and receiver are equipped with $ L = 4 $ RF-chains and the system is conveying $ D = 4 $ data streams.}
\label{fig20}
\end{figure}
More specifically, both the mmWave channel model $ {\mathrm{\mathbf{H}_{m}}} $ and the classic Rayleigh channel model $ {\mathrm{\mathbf{H}_{r}}} $ are tested. For the mmWave channel, $ {\mathrm{\mathbf{H}_{m}}} $, a uniform linear array (ULA) is adopted.
Unless otherwise specified, it is assumed that 1) the mmWave channel has $ N_{\rm cl} = 2 $ clusters, each containing $ N_\mathrm{path} = 5 $ paths; 2) the azimuth angle spread of the transmitter is restricted to $ 7.5^{\circ} $ around the mean azimuth angle $ \hat{\theta} = 45^{\circ} $, and the receiver is omni-directional; 3) the path loss factors obey the standard Gaussian distribution; 4) the inter-antenna spacing $ d $ equals half a wavelength. The channel is normalized such that $ \mathbb{E}{ \left\lbrace \lVert {\bf H}_{\rm m} \rVert_{\rm{F}}^2 \right\rbrace } = NM $.
For the random phase algorithm, we set $ K = 10 $, which means that the best analog precoder and processor are selected from 10 candidates, and the uniform distribution is utilized, i.e., $ f_{\mathrm{ trans }}(x) = 1$ for $0 \le x \le 1 $. We average the results over 2,000 independent trials. The transmit power is denoted as $ P_{\mathrm{Tx}} $. The OMP and MaGiQ algorithms refer to the corresponding algorithms in \cite{HeathTWC2014} and \cite{Y.C.Eldar2017}, respectively. The analog precoder and processor for the direct phase algorithm are obtained by phase projection.
\begin{figure}[!t]
\centering
\includegraphics[width = .4\textwidth, trim={0 1 0 0}, clip]{simHybTotal-Mark-mmW-32-16-6-4d-x2000.eps}
\caption{Spectral efficiency comparison of 5 different hybrid transceiver design methods. The $ 32 \times 16 $ mmWave channel model, which involves $ N_{\rm cl} = 3 $ clusters with $ N_{\rm path} = 5 $ multipath at each cluster, is adopted in the simulation. Both the transmitter and receiver are equipped with $ L = 6 $ RF-chains and the system is conveying $ D = 4 $ data streams.}
\label{fig21}
\end{figure}
\begin{figure}[!ht]
\centering
\includegraphics[width = 0.4\textwidth, trim={0 0 0 0}, clip]{simHybTotal-Mark-Gss-32-16-6-4d-x2000.eps}
\caption{Spectral efficiency comparison of 5 different hybrid transceiver design methods. The $ 32 \times 16 $ Rayleigh channel model is adopted in the simulation. Both the transmitter and receiver are equipped with $ L = 6 $ RF-chains and the system is conveying $ D = 4 $ data streams.}
\label{fig22}
\end{figure}
Fig.~\ref{fig20} demonstrates the spectral efficiency versus the transmit power for different algorithms, where the hybrid transceiver has $ N = 32 $ transmit antennas, $ M = 16 $ receive antennas, and 4 data streams. Both the transmitter and receiver are equipped with $ L = 4 $ RF-chains.
From Fig.~\ref{fig20}, the proposed phase projection based hybrid transceiver design outperforms the other hybrid transceiver design algorithms, and its performance is very close to that of the full-digital one.
Fig.~\ref{fig21} shows the performance of the hybrid transceiver design with 6 RF-chains for a channel with $ N_{\rm cl} = 3 $ clusters, each with $ N_{\rm path} = 5 $ paths. From this figure, the proposed phase projection algorithm works well for different numbers of RF-chains, performs very close to the full-digital design, and is better than the other hybrid transceiver designs. It is worth noting that the direct phase projection method performs\begin{figure}[!ht]
\centering
\includegraphics[width = .4\textwidth, trim={0 0 0 0}, clip]{simHybTotal-Mark-Qnt-mmW-32-16-4-4d-x2000.eps}
\caption{Spectral efficiency comparison of 5 different hybrid transceiver design methods with 2-bit quantization of the phase shifters. The $ 32 \times 16 $ mmWave channel model is adopted in the simulation. Both the transmitter and receiver are equipped with $ L = 4 $ RF-chains and the system is conveying $ D = 4 $ data streams.}
\label{fig23}
\end{figure} even better than OMP and MaGiQ. This is because the error bound of the method decreases when the number of RF-chains increases \cite{HanzoHybridTCOMM2016}. However, as the requirement that the number of data streams be equal to that of the RF-chains \cite{Y.C.Eldar2017} is not satisfied in this case, the MaGiQ algorithm performs worst at high SNR.
The following simulations focus on Rayleigh channels at microwave bands. Under this circumstance, the $ 32 \times 16 $ system is adopted with $ L = 6 $ RF-chains transmitting $ D = 4 $ data streams. Extensive simulations show that, compared with randomly generated codebooks or the DFT codebook, the codebook constructed by phase projection, i.e., $ \mathcal{C} = {\bf P}_{\mathcal{F}}( \mathrm{\mathbf{H}}) $, has much better performance. This codebook is therefore used for performance comparison in the following simulations.
Fig.~\ref{fig22} compares the performance for the different algorithms under Rayleigh channels. \begin{figure}[!ht]
\centering
\includegraphics[width = .4\textwidth, trim={20 0 30 20}, clip]{simHybRand-Mark-mmW-Rnd-64-16-6-4d-x2000.eps}
\caption{Spectral efficiency of the random algorithm under the mmWave channel. $ L = 6 $ RF-chains are assumed at both the transmitter and receiver. Both $ 32 \times 16 $ and $ 64 \times 16 $ systems are considered in the simulation. The number of data streams is set to $ D = 4 $.}
\label{fig24}
\end{figure} From the figure, the proposed algorithm achieves nearly the same performance as the full-digital one and performs better than the MaGiQ algorithm in \cite{Y.C.Eldar2017}. Moreover, it is worth noting that even with the carefully chosen codebook, the OMP algorithm exhibits a large performance gap compared with the full-digital design, which indicates that the OMP algorithm is not suitable for microwave frequency bands.
As practical analog phase shifters are often implemented by a digital controller with finite resolution, Fig.~\ref{fig23} compares the performance of different hybrid transceiver designs for the $ 32 \times 16 $ mmWave channel when phase quantization is taken into account. Each hybrid transceiver design only uses phase shifters with 2-bit resolution and $ L = 4 $. From the figure, the proposed hybrid transceiver design still outperforms the other hybrid transceiver designs with finite-resolution phase shifters.
In Fig.~\ref{fig24}, both $ 32 \times 16 $ and $ 64 \times 16 $ mmWave channels are used to assess the performance. In this case, the number of RF-chains is $ 6 $. From Fig.~\ref{fig24}, with the same number of transmit antennas, the random algorithm performs worse than the phase projection based algorithm. Although the random algorithm suffers nearly $ 5\,{\rm{dB}} $ performance loss compared with the full-digital one, by deploying more antennas at the base station, e.g., $ N = 64 $, its performance becomes comparable to that of the full-digital transmitter with $ 32 $ antennas. This implies that appropriate performance can be obtained with the low-complexity random algorithm by simply increasing the number of transmit antennas. Because of its low complexity, the random algorithm is well suited to hardware realization.
Fig.~\ref{fig25} shows the BER performance of different kinds of hybrid MIMO transceiver designs for the $ 32 \times 16 $ Rayleigh channel with 4 RF-chains. In this case, there are 4 data streams and 16-QAM is used. From this figure, at high SNR, the BER performance of the hybrid nonlinear transceiver design is much better than that of the hybrid linear transceiver design. Furthermore, the hybrid nonlinear transceivers with THP and DFD have almost the same BER performance because of the duality between precoder design and processor design.
\begin{figure}[ht]
\centering
\includegraphics[width = .4\textwidth, trim={20 0 20 10}, clip]{BER_Nonlinear_Linear.eps}
\caption{BERs of the linear hybrid transceiver for capacity maximization, the nonlinear transceiver with DFD, and the nonlinear transceiver with THP. The $ 32 \times 16 $ Rayleigh channel model is adopted in the simulation. Both the transmitter and receiver are equipped with $ L = 4 $ RF-chains transmitting $ D = 4 $ data streams simultaneously. }
\label{fig25}
\end{figure}
\section{Conclusions}
In this paper, we have investigated the hybrid digital and analog transceiver design for MIMO system based on matrix-monotonic optimization theory. We have proposed a unified framework for both linear and nonlinear transceivers.
Based on the matrix-monotonic optimization theory, the optimal transceiver structure for various MIMO transceivers has been derived, from which the function of analog transceiver part can be regarded as eigenchannel selection.
Using the derived optimal structure, effective algorithms have been proposed considering the constant-modulus constraint.
Finally, it is shown that the proposed algorithms outperform existing hybrid transceiver designs.
\appendices
\section{Preliminary Definition of Majorization Theory}
\label{Appendix_A}
In this appendix, some fundamental definitions from majorization theory are given for the convenience of the unified framework analysis. These definitions can also be found in \cite{XingTSP201502}; they are repeated here to make the paper self-contained.
\textit{\textbf{Definition 1 \cite{inequalityMajorBook}: }}
For a $K\times 1$ vector $ {\bf x} \in \mathbb{R}^{K} $, the $ \ell $th largest element of $ {\bf x} $ is denoted as $ {x}_{[\ell]} $, i.e., $ {x}_{[1]} \ge {x}_{[2]} \ge \cdots \ge {x}_{[K]} $. Based on this notation, for two vectors $ {\bf x}, {\bf y} \in \mathbb{R}^{K} $, we say that $ {\bf y} $ majorizes $ {\bf x} $ additively, denoted by $ {\bf x} \prec_{+} {\bf y} $, if and only if the following properties are satisfied
\begin{align}
\sum_{k = 1}^{p} {x}_{[k]} \le \sum_{k = 1}^{p} {y}_{[k]}, \; p = 1,2,\ldots, K-1 \; \text{and} \; \sum_{k = 1}^{K} {x}_{[k]} = \sum_{k = 1}^{K} {y}_{[k]}.
\end{align}
\textit{\textbf{Definition 2 \cite{inequalityMajorBook}: }}
A function $ f(\cdot) $ is additively Schur-convex if and only if it satisfies the following property
\begin{align}
{\bf x} \prec_{+} {\bf y} \, \Longrightarrow \, f( {\bf x} ) \le f( {\bf y} ).
\end{align}
On the other hand, a function $ f(\cdot) $ is additively Schur-concave if $ -f(\cdot) $ is additively Schur-convex.
\textit{\textbf{Definition 3 \cite{XingTSP201502}: }}
For two $K\times 1$ vectors $ {\bf x}, {\bf y} \in \mathbb{R}^{K} $ with nonnegative elements, we say that the vector $ {\bf y} $ majorizes the vector $ {\bf x} $ multiplicatively, i.e., $ {\bf x} \prec_{\times} {\bf y} $, if and only if the following properties are satisfied
\begin{align}
\prod_{k = 1}^{p} {x}_{[k]} \le \prod_{k = 1}^{p} {y}_{[k]}, \; p = 1,2,\ldots, K-1 \; \text{and} \; \prod_{k = 1}^{K} {x}_{[k]} = \prod_{k = 1}^{K} {y}_{[k]}.
\end{align}
\textit{\textbf{Definition 4 \cite{XingTSP201502}: }}
A function $ f(\cdot) $ is multiplicatively Schur-convex if and only if it satisfies the following property
\begin{align}
{\bf x} \prec_{\times} {\bf y} \, \Longrightarrow \, f( {\bf x} ) \le f( {\bf y} ).
\end{align}
On the other hand, a function $ f(\cdot) $ is multiplicatively Schur-concave if $ -f(\cdot) $ is multiplicatively Schur-convex.
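For readers who wish to check the above definitions numerically, the following NumPy sketch tests additive and multiplicative majorization between two vectors; the helper names and the tolerance handling are our own implementation choices.
\begin{verbatim}
import numpy as np

def majorizes_add(y, x, tol=1e-12):
    """True if y majorizes x additively (Definition 1)."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    partial = np.all(np.cumsum(xs)[:-1] <= np.cumsum(ys)[:-1] + tol)
    return partial and abs(xs.sum() - ys.sum()) <= tol

def majorizes_mult(y, x, tol=1e-12):
    """True if y majorizes x multiplicatively (Definition 3);
    elements are assumed nonnegative."""
    xs, ys = np.sort(x)[::-1], np.sort(y)[::-1]
    partial = np.all(np.cumprod(xs)[:-1] <= np.cumprod(ys)[:-1] + tol)
    return partial and abs(np.prod(xs) - np.prod(ys)) <= tol

# Example: [1, 1, 1] is additively majorized by [2, 1, 0].
assert majorizes_add(np.array([2., 1., 0.]), np.array([1., 1., 1.]))
\end{verbatim}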
\section{The optimal ${\bf{B}}$}
\label{Appendix_Optimal_B}
Note that the optimal ${\bf{B}}$ for the nonlinear transceiver was previously obtained in \cite{XingJSAC2012} when the objective function belongs to the family of multiplicatively Schur-concave/convex functions defined in Appendix~\ref{Appendix_A}. The following presents a slightly different proof of the optimal ${\bf{B}}$, which generalizes the result to an arbitrary monotonically increasing function $f(\cdot)$. Here, $f(\cdot)$ operates only on the diagonal elements of ${\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{A}}, {\bf{F}}_{\rm{D}},{\bf{B}})$ and ${\bf{B}}$ is restricted to be a strictly lower triangular matrix, which specifies the use of a nonlinear transceiver.
Based on the Cholesky decomposition
\begin{align}
{\boldsymbol{\Phi}}_{\rm{MSE}}^{\rm{L}}
({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{A}}, {\bf{F}}_{\rm{D}}) ={\bf{L}}{\bf{L}}^{\rm{H}},
\end{align}
we have
\begin{align}
&{\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{A}}, {\bf{F}}_{\rm{D}}, {\bf{B}})\nonumber \\
&= ({\bf{I}} + {\bf{B}}) {\boldsymbol{\Phi}}_{\rm{MSE}}^{\rm{L}}
({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{A}}, {\bf{F}}_{\rm{D}}) ({\bf{I}} + {\bf{B}})^{\rm H}\nonumber \\
&=({\bf{I}} + {\bf{B}}){\bf{L}}{\bf{L}}^{\rm{H}}({\bf{I}} + {\bf{B}})^{\rm H},
\end{align} based on which the $n$th diagonal element of ${\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{A}}, {\bf{F}}_{\rm{D}}, {\bf{B}})$ equals
\begin{align}
\label{app_1}
[{\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{A}}, {\bf{F}}_{\rm{D}}, {\bf{B}})]_{n,n}&=[({\bf{I}} + {\bf{B}}){\bf{L}}]_{n,:}[({\bf{I}} + {\bf{B}}){\bf{L}}]_{n,:}^{\rm{H}}\nonumber \\
&=\|[({\bf{I}} + {\bf{B}}){\bf{L}}]_{n,:}\|^2.
\end{align}In addition, as ${\bf{B}}$ is strictly lower triangular, $({\bf{I}} + {\bf{B}}){\bf{L}}$ is lower triangular, so the $n$th element of the vector $[({\bf{I}} + {\bf{B}}){\bf{L}}]_{n,:}$ equals $[{\bf{L}}]_{n,n}$ and the subsequent elements vanish, i.e.,
\begin{align}
\label{app_2}
[({\bf{I}} + {\bf{B}}){\bf{L}}]_{n,:}=[\cdots,[{\bf{L}}]_{n,n},0,\cdots,0].
\end{align} Therefore, combining (\ref{app_1}) and (\ref{app_2}), the following relationship holds
\begin{align}
[{\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{A}}, {\bf{F}}_{\rm{D}}, {\bf{B}})]_{n,n}
&=\|[({\bf{I}} + {\bf{B}}){\bf{L}}]_{n,:}\|^2\ge [{\bf{L}}]_{n,n}^2.
\end{align}The above inequality is achieved with equality, i.e., $[{\boldsymbol{\Phi}}_{\rm{MSE}}
({\bf{G}}_{\rm{A}}, {\bf{F}}_{\rm{A}}, {\bf{F}}_{\rm{D}}, {\bf{B}})]_{n,n}
= [{\bf{L}}]_{n,n}^2$ for every $n$, when
\begin{align}
({\bf{I}}+{\bf{B}}){\bf{L}} ={\rm{diag}}\{[[{\bf{L}}]_{1,1},\cdots,
[{\bf{L}}]_{L,L}]^{\rm{T}}\},
\end{align}based on which the optimal ${\bf{B}}$ equals
\begin{align}
{\bf{B}}^{\rm{opt}} ={\rm{diag}}\{[[{\bf{L}}]_{1,1},\cdots,
[{\bf{L}}]_{L,L}]^{\rm{T}}\}
{\bf{L}}^{-1}-{\bf{I}}.
\end{align}
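The construction of ${\bf{B}}^{\rm{opt}}$ can be verified numerically. The following NumPy sketch uses a randomly generated positive-definite matrix as a placeholder for ${\boldsymbol{\Phi}}_{\rm{MSE}}^{\rm{L}}(\cdot)$ and checks that the resulting MSE matrix attains the diagonal lower bound; all sizes and names are illustrative.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
L = 4

# Positive-definite placeholder for Phi_MSE^L.
X = rng.standard_normal((L, L)) + 1j * rng.standard_normal((L, L))
Phi_L = X @ X.conj().T + L * np.eye(L)

Lc = np.linalg.cholesky(Phi_L)      # Phi_L = Lc Lc^H, Lc lower-tri
D = np.diag(np.diag(Lc))            # diag{[L]_{1,1}, ..., [L]_{L,L}}
B_opt = D @ np.linalg.inv(Lc) - np.eye(L)

# B_opt is strictly lower triangular, and the diagonal of the
# resulting MSE matrix equals [L]_{n,n}^2, the derived lower bound.
assert np.allclose(np.triu(B_opt), 0)
Phi = (np.eye(L) + B_opt) @ Phi_L @ (np.eye(L) + B_opt).conj().T
assert np.allclose(np.diag(Phi), np.abs(np.diag(Lc)) ** 2)
\end{verbatim}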
\section{Fundamental Matrix Inequalities}
\label{Appendix_Inequality}
In this appendix, two fundamental matrix inequalities are given.
For two positive semi-definite matrices ${\boldsymbol X}$ and ${\boldsymbol Y}$, the following EVDs are defined
\begin{align}
{\boldsymbol X}&={\bf{U}}_{\boldsymbol X} {\boldsymbol\Lambda}_{\boldsymbol X} {\bf{U}}^{\rm{H}}_{\boldsymbol X} \ \ \text{with} \ \ {\boldsymbol\Lambda}_{\boldsymbol X}\searrow \nonumber \\
{\boldsymbol Y}&={\bf{U}}_{\boldsymbol Y} {\boldsymbol\Lambda}_{\boldsymbol Y} {\bf{U}}^{\rm{H}}_{\boldsymbol Y} \ \ \text{with} \ \ {\boldsymbol\Lambda}_{\boldsymbol Y}\searrow \nonumber \\
{\boldsymbol Y}&={\bf{\bar U}}_{\boldsymbol Y}{\boldsymbol{\bar \Lambda}}_{\boldsymbol Y} {\bf{\bar U}}^{\rm{H}}_{\boldsymbol Y} \ \ \text{with} \ \ {\boldsymbol{\bar \Lambda}}_{\boldsymbol Y} \nearrow.
\end{align}
For the trace of the two matrices, we have the following fundamental matrix inequalities \cite{XingTSP201501}
\begin{align}
&\sum_{i=1}^{N}\lambda_{N-i+1}({\boldsymbol X}) \lambda_i({\boldsymbol Y})\le {\rm{Tr}}({\boldsymbol X}{\boldsymbol Y}) \le \sum_{i=1}^{N}\lambda_i({\boldsymbol X}) \lambda_i({\boldsymbol Y}),
\end{align}
where $ \lambda_i( { \mathbf{X} } ) $ is the $ i $th ordered eigenvalue of $ { \mathbf{X} } $, and the left equality holds when ${\bf{U}}_{\boldsymbol X}={\bf{\bar U}}_{\boldsymbol Y}$. On the other hand, the right equality holds when ${\bf{U}}_{\boldsymbol X}={\bf{ U}}_{\boldsymbol Y}$.
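As a sanity check, both bounds can be verified numerically for random positive semi-definite matrices, as in the following NumPy sketch (matrix size and seed are arbitrary choices).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(3)
N = 5

def rand_psd(n):
    Z = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    return Z @ Z.conj().T

X, Y = rand_psd(N), rand_psd(N)
lx = np.sort(np.linalg.eigvalsh(X))[::-1]   # decreasing eigenvalues
ly = np.sort(np.linalg.eigvalsh(Y))[::-1]

t = np.trace(X @ Y).real
lower = np.sum(lx[::-1] * ly)   # sum of lambda_{N-i+1}(X) lambda_i(Y)
upper = np.sum(lx * ly)
assert lower - 1e-9 <= t <= upper + 1e-9
\end{verbatim}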
\section{Optimal Structure of Analog Transceiver}
\label{Appendix_Analog_Transceiver}
It is worth noting that the nonzero singular values of the matrix, $ {\boldsymbol{\Pi}}_{\rm{R}} = {\bf{F}}_{\rm{A}} ({\bf{F}}_{\rm{A}}^{\rm{H}}{\bf{F}}_{\rm{A}})^{-\frac{1}{2}} $, are all ones. Similarly for ${\boldsymbol{\Pi}}_{\rm{L}}=
({\bf{G}}_{\rm{A}}{\bf{R}}_{\rm{n}}{\bf{G}}_{\rm{A}}^{\rm{H}})^{-1/2}
{\bf{G}}_{\rm{A}}{\bf{R}}_{\rm{n}}^{1/2}$, the nonzero singular values of ${\boldsymbol{\Pi}}_{\rm{L}}$ are all ones.
This implies that the singular values of ${\bf{F}}_{\rm{A}}$ and ${\bf{G}}_{\rm{A}}{\bf{R}}_{\rm{n}}^{1/2}$ do not affect the optimization problem. Based on the SVDs ${\bf{R}}_{\rm{n}}^{-1/2}{\bf{H}}={\bf{U}}_{{\boldsymbol{\mathcal{H}}}}
{\boldsymbol \Lambda}_{\boldsymbol{\mathcal{H}}}
{\bf{V}}_{\boldsymbol{\mathcal{H}}}^{\rm{H}}
$, ${\bf{R}}_{\rm{n}}^{1/2} {\bf{G}}_{\rm{A}}^{\rm{H}} = {\bf{U}}_{{\bf{R}}{\bf{G}}} {\boldsymbol{\Lambda}}_{{\bf{R}}{\bf{G}}}
{\bf{V}}_{{\bf{R}}{\bf{G}}}^{\rm{H}}
$, and ${\bf{F}}_{\rm{A}}={\bf{U}}_{{\bf{F}}_{\rm{A}}} {\boldsymbol{\Lambda}}_{{\bf{F}}_{\rm{A}}}{\bf{V}}_{{\bf{F}}_{\rm{A}}}^{\rm{H}}$
with the singular values in decreasing order,
the objective function in (\ref{eq-analog-problem}) becomes
\begin{align}
\label{New_Matrix_Objective}
{\bf{\tilde F}}_{\rm{D}}^{\rm{H}}{\bf{V}}_{{\bf{F}}_{\rm{A}}}
{\boldsymbol \Lambda}_{\rm{R}}^{\rm{T}}
{\bf{U}}_{{\bf{F}}_{\rm{A}}}^{\rm{H}}{\bf{H}}^{\rm{H}}{\bf{R}}_{\rm{n}}^{-1/2}
{\bf{U}}_{\rm{RG}}{\boldsymbol \Lambda}_{\rm{L}}^{\rm{T}}
{\boldsymbol \Lambda}_{\rm{L}}{\bf{U}}_{\rm{RG}}^{\rm{H}}{\bf{R}}_{\rm{n}}^{-1/2}
{\bf{H}}{\bf{U}}_{{\bf{F}}_{\rm{A}}}{\boldsymbol \Lambda}_{\rm{R}}{\bf{V}}_{{\bf{F}}_{\rm{A}}}^{\rm{H}}{\bf{\tilde F}}_{\rm{D}}
\end{align}where the diagonal elements of the diagonal matrices ${\boldsymbol \Lambda}_{\rm{R}}$ and ${\boldsymbol \Lambda}_{\rm{L}}$ satisfy
\begin{align}
&[{\boldsymbol \Lambda}_{\rm{R}}]_{i,i}=1, \ \ i\le L \nonumber \\
& [{\boldsymbol \Lambda}_{\rm{R}}]_{i,i}=0, \ \ i> L \nonumber \\
&[{\boldsymbol \Lambda}_{\rm{L}}]_{i,i}=1, \ \ i\le L \nonumber \\
& [{\boldsymbol \Lambda}_{\rm{L}}]_{i,i}=0, \ \ i> L.
\end{align}Therefore, the singular values of ${\bf{F}}_{\rm{A}}$ and ${\bf{G}}_{\rm{A}}$ do not affect the optimal solution. Moreover, the unitary matrices ${\bf{V}}_{{\bf{F}}_{\rm{A}}}$ and ${\bf{V}}_{\rm{RG}}$ do not affect the optimal solution either, as the constraint on ${\bf{\tilde F}}_{\rm{D}}$ is unitarily invariant.
Based on the above discussion and (\ref{New_Matrix_Objective}), the remaining task is to maximize the singular values of the matrix $[{\bf{U}}_{\rm{RG}}^{\rm{H}}{\bf{R}}_{\rm{n}}^{-1/2}
{\bf{H}}{\bf{U}}_{{\bf{F}}_{\rm{A}}}]_{1:L,1:L}$. Since ${\bf{U}}_{\rm{RG}}$ and ${\bf{U}}_{{\bf{F}}_{\rm{A}}}$ are unitary matrices, for the optimal solution the left singular vectors corresponding to the first $ L $ largest singular values of $ {\bf F}_{\rm A} $ should have the maximum inner product with $[{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L}$, i.e.,
\begin{align} [{\bf{U}}_{{\bf{F}}_{\rm{A}}}]_{:,1:L}^{\rm{opt}}
={\rm{arg\,max}}
\{
\|[{\bf{V}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} [{\bf{U}}_{{\bf{F}}_{\rm{A}}}]_{:,1:L}^{\rm{H}}\|_{\rm{F}}^2
\}.
\end{align}
Similarly, for the optimal solution, the left singular vectors corresponding to the first $ L $ largest singular values of $ {\bf{R}}_{\rm{n}}^{1/2} {\bf{G}}_{\rm{A}}^{\mathrm{H}} $ should have the maximum inner product with $[{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L}$, i.e.,
\begin{align} [{\bf{U}}_{{\bf{R}}{\bf{G}}}]_{:,1:L}^{\rm{opt}}={\rm{arg\,max}}
\{\|[{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} [{\bf{U}}_{{\bf{R}}{\bf{G}}}]_{:,1:L}^{\rm{H}}\|_{\rm{F}}^2\}.
\end{align}
\section{Analog Transceiver Design}
\label{Appendix_C}
For fixed ${\mathbf{\Lambda}_\mathrm{G}}$ and ${\bf{Q}}_{\rm{G}}$, the optimization problem (\ref{eq-comb-org}) can be transformed into the following vector-variable optimization problem
\begin{align}
\label{QCQP_1}
\min_{\mathbf{r}} \;\; & {\mathbf{r}}^\T \mathbf{W} {\mathbf{r}} - \mathbf{p}^\T {\mathbf{r}} - {\mathbf{r}}^\T \mathbf{p} + q \notag \\
\text{s.t.} \;\;\; & \mathbf{r}^\T \mathbf{K}_{i} \mathbf{r} = a^2, \quad i = 1,2,\ldots, NL.
\end{align}
The vector $ \mathbf{r} $ is constructed via vectorizing $ \mathbf{G}_{\mathrm{A}} $, i.e.,
\begin{align*}
\mathbf{r} = \big[ \Re \{ \vecm{( \mathbf{G}_{\mathrm{A}} )} \}^\T, \, \Im \{ \vecm{( \mathbf{G}_{\mathrm{A}} )} \}^\T \big]^\T,
\end{align*}
and the matrices $ \mathbf{W}, \, \mathbf{K}_{i} $ and vector $ \mathbf{p} $ are defined as follows:
\begin{align}
\notag \mathbf{W} & =
\begin{bmatrix}
\; \Re \{ \mathbf{I} \otimes { \mathbf{R}_{\mathrm{n}} } \} & - \Im \{ \mathbf{I} \otimes { \mathbf{R}_{\mathrm{n}} } \} \;\; \\
\; \Im \{ \mathbf{I} \otimes { \mathbf{R}_{\mathrm{n}} } \} & \Re \{ \mathbf{I} \otimes { \mathbf{R}_{\mathrm{n}} } \} \;\;
\end{bmatrix}, \\
\notag \mathbf{K}_{i} & = \mathrm{diag} \Big\lbrace \bigl[ \mathbf{0}_{(i-1) \times 1}^\T, 1 ,\mathbf{0}_{(NL - 1) \times 1}^\T, 1 ,\mathbf{0}_{(NL - i) \times 1}^\T \bigr] \Big\rbrace,
\end{align}
and
\begin{align}
\mathbf{p} =
\begin{bmatrix}
\Re \{ \big( \mathbf{I} \otimes { \mathbf{R}_{\mathrm{n}}^{ {1}/{2}} } \big)^{\mathrm{H}} \vecm{ \big( [{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} {\mathbf{\Lambda}_\mathrm{G}} {\bf{Q}}_{\rm{G}} \big) } \} \\
\Im \{ \big( \mathbf{I} \otimes { \mathbf{R}_{\mathrm{n}}^{ {1}/{2}} } \big)^{\mathrm{H}} \vecm{ \big( [{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} {\mathbf{\Lambda}_\mathrm{G}} {\bf{Q}}_{\rm{G}} \big) } \}
\end{bmatrix}.
\end{align}The constant scalar, $q$, in (\ref{QCQP_1}) equals $q = || \vecm{ \big( [{\bf{U}}_{\boldsymbol{\mathcal{H}}}]_{:,1:L} {\mathbf{\Lambda}_\mathrm{G}} {\bf{Q}}_{\rm{G}} \big) } ||_2^2$.
Note that because of the constant-modulus constraints, the term ${\bf{r}}^{\rm{T}}{\bf{r}}$ is a constant. As a result, for a constant real scalar $\eta$, the objective function in (\ref{QCQP_1}) is equivalent to ${\mathbf{r}}^\T (\mathbf{W}+\eta{\bf{I}}) {\mathbf{r}} - \mathbf{p}^\T {\mathbf{r}} - {\mathbf{r}}^\T \mathbf{p} + q$. As the constant-modulus constraints in (\ref{QCQP_1}) are all quadratic equalities, the optimization problem (\ref{QCQP_1}) is nonconvex. Following the idea of \cite{tractableTSP2017}, an iterative algorithm is proposed that updates the constraints iteratively so as to guarantee the constant-modulus constraints. Specifically, at the $n$th iteration, each constraint $\mathbf{r}^\T \mathbf{K}_{i} \mathbf{r} = a^2$ is replaced by $\mathbf{\tilde r}_{(n-1)}^\T \mathbf{K}_{i} \mathbf{r}_{(n)} = a^2$, where $\mathbf{\tilde r}_{(n-1)}$ is a vector derived from the solution of the $(n-1)$th iteration. After stacking $\mathbf{\tilde r}_{(n-1)}^\T \mathbf{K}_{i}$ for $i = 1,2,\ldots, NL$ in ${\bf{P}}_{(n-1)}$, the optimization problem (\ref{QCQP_1}) is transformed into
\begin{align}
\label{QCQP_3}
\min_{\mathbf{r}_{(n)}} \;\; & {\mathbf{r}}_{(n)}^\T (\mathbf{W}+\eta{\bf{I}}) {\mathbf{r}}_{(n)} - \mathbf{p}^\T {\mathbf{r}}_{(n)} - {\mathbf{r}}_{(n)}^\T \mathbf{p} + q \notag \\
\text{s.t.} \;\;\; & {\bf{P}}_{(n-1)}\mathbf{r}_{(n)} = a^2{\bf{1}},
\end{align}
where the matrix $ \mathbf{P}_{(n-1)} $ is defined as
\begin{align}
&[ \mathbf{P}_{(n-1)} ]_{\ell,j} \nonumber \\
=&
\begin{cases}
\cos \big( \angle [{\rm{vec}}({\bf{G}}_{{\rm{A}},(n-1)})]_{\ell} \big) & \text{if } \ell = j, \; \ell \le NL \\
\sin \big( \angle [ {\rm{vec}}({\bf{G}}_{{\rm{A}},(n-1)}) ]_{\ell} \big) & \text{if } j = \ell+NL, \; \ell \le NL \\
0 & \text{otherwise.}
\end{cases}
\label{para-tangent}
\end{align}The vector $ \mathbf{1} $ is a column vector with all elements equal to 1. As proved in \cite{tractableTSP2017}, when $\eta \ge \sigma_{\max} NL / 8 + ||\mathbf{p}||_2^2$,
where $ \sigma_{\max} $ is the largest eigenvalue of $ { \mathbf{R}_{\mathrm{n}} }$,
the optimal solution of the iterative optimization (\ref{QCQP_3}) minimizes the objective function and satisfies the constant modulus constraints asymptotically. As (\ref{QCQP_3}) is convex at each iteration, based on its KKT conditions, at the $n$th iteration the optimal solution of (\ref{QCQP_3}) is
\begin{align}
{\mathbf{r}}_{(n)} =
( {\mathbf{W}} + \eta \mathbf{I} )^{-1}\left({\bf{p}}+{\bf{P}}_{(n-1)}^{\rm{T}}\frac{{\boldsymbol\lambda}}{2}\right)
\label{eq-opt-QP}
\end{align}with
\begin{align}
\frac{{\boldsymbol\lambda}}{2}&=\left( {\bf{P}}_{(n-1)}( {\mathbf{W}} + \eta \mathbf{I} )^{-1}{\bf{P}}_{(n-1)}^{\rm{T}}\right)^{-1}\nonumber \\
& \ \ \ \ \ \times\left(a^2{\bf{1}}-{\bf{P}}_{(n-1)}( {\mathbf{W}} + \eta \mathbf{I} )^{-1}{\bf{p}}\right).
\end{align}
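A single iteration of (\ref{QCQP_3}) can be implemented directly from the closed-form KKT solution above, as in the following NumPy sketch; the variable names are ours and no particular structure of ${\bf{W}}$ is exploited.
\begin{verbatim}
import numpy as np

def qp_iteration(W, p, P_prev, a, eta):
    """Solve min r^T (W + eta I) r - 2 p^T r  s.t.  P_prev r = a^2 1.
    Returns the minimizer r_(n), cf. (eq-opt-QP)."""
    n = W.shape[0]
    Winv = np.linalg.inv(W + eta * np.eye(n))
    ones = np.ones(P_prev.shape[0])
    lam_half = np.linalg.solve(P_prev @ Winv @ P_prev.T,
                               a ** 2 * ones - P_prev @ Winv @ p)
    return Winv @ (p + P_prev.T @ lam_half)
\end{verbatim}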
In a nutshell, the iterative algorithm is given in Algorithm~\ref{alg-unit-mod}. Using this iterative algorithm, a numerical solution for the analog processor can be obtained.
\bibliographystyle{IEEEtran}
The close interactions between stars through mass transfer, common-envelope evolution (CEE), mergers and collisions play an important part in the evolution of isolated close interacting binary systems.
In triple systems, secular perturbations, such as von Zeipel-Lidov-Kozai oscillations \citep[ZLK;][]{vZ1910, lid62, koz62} and quasi-secular evolution \citep{ap12,luo16, gpf18} induced by the outer third companion, can drive the inner binaries into highly eccentric configurations. The close pericentre approach of such binaries then gives rise to strongly interacting binaries.
Indeed, triple secular and quasi-secular dynamics were proposed to catalyze the formation of a wide variety of exotic stars and binaries, as well as various explosive and transient events, e.g. blue stragglers \citep{per_fab}, X-ray sources \citep{nao+16}, gravitational-wave (GW) mergers \citep{ap12,ant2014, antog2014, ant2017, ll18, hoang2018, martinez2020, tbh, vg2021} and type Ia SNe \citep{katz2011,tho11}; however, the latter channel can only produce a fraction of the inferred type Ia SN rate \citep{toonen2018} (see \citealp{naoz_review} for a review on many of these issues). Such evolution becomes particularly important under appropriate conditions in which the relative inclinations between the inner binary and the outer binary in a triple are high.
Secular evolution of stellar binaries can also occur due to perturbation by external non-stellar potentials, such as binaries perturbed by a massive black hole in galactic nuclei \citep{ap12,pro+15}, binaries in cluster potentials \citep{hamilton1, hamilton2, hamilton3}, or wide binaries perturbed by the Galactic tidal field \citep{ccg17}. In fact, already decades ago the secular evolution of Oort cloud objects due to the Galactic tidal field was shown to be important for the production of highly eccentric comets and the formation of long-period and sun-grazing comets \citep[e.g.][]{ht86}. The timescale for such oscillations, however, is long, of the order of a $\rm Gyr$ for Oort cloud objects in the Solar neighbourhood.
Secular triple evolution becomes even more complex when additional outer perturbations are present. \citet{hamers15} considered the secular evolution of quadruple hierarchical systems. In nuclear star clusters, a binary around a supermassive black hole can undergo chaotic evolution if the cluster has a non-spherical potential \citep{petrovich17, bub2020}, or if an additional stochastic torque is present due to resonant relaxation \citep{hamers_vrr}. Indeed, the sensitivity of triple evolution to the mutual inclination between the inner and outer binary makes triples even more susceptible to significant secular evolution by an external potential, which could also arise from a non-stellar external perturbation. Here we study, for the first time, the coupling of the secular evolution of wide triple systems with the external Galactic tidal field and show that it has major implications for the formation of closely interacting binaries in the field.
The secular evolution due to the galactic tidal field is very similar to ZLK oscillations, with small quantitative variations underlying the same mechanism \citep{hamilton1, hamilton2, hamilton3}.
Hence the combined system of a wide triple and the Galactic tide is qualitatively similar to a hierarchical quadruple system. Such systems are known to experience chaotic dynamics if the secular frequencies are comparable \citep{hamers15, hl17, glp18, hamers19}. Similar ideas of chaotic dynamics have been applied to the chaotic evolution of stellar spins \citep{sl14, sl15} and the spins of binary black holes in the final moments of their merger \citep{ll18}, and they all rely on the idea of overlapping resonances \citep{chirikov79}.
Tidal effects coupling to the secular evolution of triples become important for very wide systems (typically $>10^4$ AU). Such systems are not rare. Half of Solar-type stars and a quarter of lower-mass M-dwarf stars are part of binary systems \citep{raghavan2010, duchene2013}, while multiple stars are even more common at higher masses. The Gaia mission \citep{gaia_mission} has revolutionized modern astrometry and provided unprecedented data on more than a billion stars in several data releases \citep{gaia-dr1, gaia-dr2, gaia-dr3}. Such surveys have enabled the identification of wide binary systems \citep{el-badry2019,el-badry-twin,hartman2020,el-badry2021}. As wide binaries are generally detached, they do not affect each other through the major course of their evolution.
Wide binaries can serve as a tool to study galactic dynamics \citep{weinberg1987, tremaine2010}, the Galactic tidal field and the existence of MACHO dark matter \citep{macho_dm}. They are used to constrain stellar flaring \citep{2morgan2016}, natal kicks of white dwarfs (WDs) \citep{el-bardy2018} and neutron stars (NSs) \citep{nk}, and CEE timescales \citep{ce1, ce2}. Moreover, wide binaries and triples are susceptible to collisional dynamics in the field and the production of closely interacting binaries, merger products, X-ray sources and GW sources \citep{kaib2014,michaely2016,michaely2019,michaely2021,mic_sha21}. Recent observations report that a few per cent of stellar systems are in ultra-wide binaries \citep{hwa+21}, and given the high fraction of close binaries \citep{raghavan2010}, about half of the ultra-wide binaries would be ultra-wide triples, while triple configurations at larger masses are even more common \citep{nk}.
In this paper we study the chaotic dynamics of wide triples induced by the Galactic tide. Without the Galactic tide, the range of initial conditions that lead to highly eccentric encounters and significant close interactions is narrow (mainly due to the stringent constraint on the mutual inclination). We outline the parameter space most amenable to chaotic evolution, and develop the secular code \href{https://github.com/eugeneg88/SecuLab}{\texttt{SecuLab}} to probe its complex evolution, which includes the various relevant physical processes described in detail below. We showcase the importance of this wide chaotic-triple-galactic-tide-evolution (CATGATE) channel and its contribution to the overall rates of the aforementioned transient phenomena and newly formed stars and stellar remnants in the Galaxy, and in other galaxies.
This paper is organized as follows. Sec. \ref{sec:2} describes the various details of the secular evolution itself. We review some aspects of ZLK evolution in \ref{sec:2.1}, galactic tides in \ref{sec:2.2}, conservative forces induced by rotation, tides, and general relativity (GR) in \ref{sec:2.3}, as well as dissipative forces, and outline a transition point from equilibrium to dynamical tides in \ref{sec:2.4}. Sec. \ref{sec:3} describes the evolution of particular chaotic systems and qualitative trends. In \ref{sec:3.1} we showcase the chaotic evolution of individual systems, in \ref{sec:3.2} we outline the choice of our initial and stopping conditions, and in \ref{sec:3.3} our simplified modelling of stellar evolution. Sec. \ref{sec:4} presents the results of a population synthesis study. We present results for low-mass stars in \ref{sec:4.1} and for massive stars up to $8M_\odot$ in \ref{sec:4.2}. In Sec. \ref{sec:5} we interpret and discuss our results in the context of our transient universe and the formation channels of various stars and stellar remnants. We discuss overall rates in \ref{5.1} and astrophysical implications in \ref{implications}. We also revisit our assumptions and caveats in Sec. \ref{sec. 5.4}. Finally, Sec. \ref{sec:6} summarizes the main findings of this paper.
\section{Secular dynamics} \label{sec:2}
In this section we overview the different processes that govern the secular dynamics of triple systems (see \citealp{naoz_review} for a review), as well as additional forces due to external and internal perturbations. In addition, we derive an elegant form of the Galactic tide Hamiltonian and the subsequent secular equations, and stress their similarities to ZLK cycles (cf. sec. \ref{sec:2.2}).
\subsection{von Zeipel-Lidov-Kozai oscillations} \label{sec:2.1}
Consider a hierarchical triple system with an inner binary of masses $m_0$ and $m_1$, separation $a_1$ and eccentricity $e_1$, perturbed by a distant companion of mass $m_2$ at separation $a_2 \gg a_1$ and eccentricity $e_2$. The interaction term is expanded in multipoles of increasing powers of $a_1/a_2$ and double-averaged over the two fast mean-motion angles of both orbits. The resulting secular Hamiltonian is responsible for the secular evolution of the system.
If the relative inclination between the orbital planes, $i$, is sufficiently large, then coherent oscillations can bring the inner binary to large eccentricities \citep{vZ1910, lid62, koz62}. The typical secular time in the quadrupole approximation (i.e. truncation up to order $(a_1/a_2)^2$) is \footnote{The actual timescale depends on the inclination. \cite{ant15} derived the timescale more rigorously and found a correction factor of $16/15$.}
\begin{equation}
\tau_{{\rm sec}}=\frac{1}{2\pi}\frac{m_{\rm tot}}{m_2}\frac{P_{2}^{2}}{P_{1}}(1-e_2^{2})^{3/2}, \label{eq:tsec_in}
\end{equation}
where $P_1 = 2\pi (Gm_{\rm in}/a_1^3)^{-1/2}$ and $P_2 = 2\pi (Gm_{\rm tot}/a_2^3)^{-1/2}$ are the inner and outer binary periods, respectively, where $m_{\rm in} = m_0 + m_1$ is the inner binary mass and $m_{\rm tot} = m_{\rm in} + m_2$ is the total mass.
In the test-particle limit (i.e. where the outer binary angular momentum dominates), the maximal eccentricity depends only on the initial mutual inclination\footnote{This is true for an orbit where the argument of pericentre, $\omega$, circulates. For librating orbits the eccentricity can be more constrained if the initial conditions are close to the fixed point.}, and the existence of a ZLK resonance is restricted to inclinations of $|\cos i| \le\sqrt{3/5}$.
When the orbit is not sufficiently hierarchical, the averaging over the orbit of the tertiary may no longer be accurate. \cite{ap12} pointed out the key importance of the quasi-secular regime for triple evolution, where the long-term orbital evolution can be altered due to short-term corrections. Corrections to the double-averaged secular Hamiltonian occur on the 'single-averaged' (SA)
timescale, and their strength is characterized by the dimensionless parameter \citep{luo16},
\begin{equation}
\epsilon_{\rm SA} = \frac{P_2}{2\pi \tau_{\rm sec}} = \left(\frac{a_1}{a_{2}(1 - e_{2}^2)}\right)^{3/2}\left(\frac{m_2^2}{(m_{\rm in} + m_2)m_{\rm in}}\right)^{1/2}. \label{eq:eps_sa}
\end{equation}
\cite{gpf18} obtained modified analytical expressions for $e_{\rm max}$ and for the critical inclination for ZLK resonance. The corrections to double averaging are also astrophysically relevant for evection resonances and the stability of irregular satellites \citep{grish17}, the formation of contact binaries in the Kuiper belt \citep{gri20, roz2020}, Hot Jupiters and gravitational wave mergers of binary black holes \citep{ap12, ll18, gpf18, fg19} and Type Ia supernovae \citep{katz2011, hk18, toonen2018}.
For the very wide stellar triples we consider, we expect $\epsilon_{\rm SA}$ to be small due to the large $a_2$ values, so the secular evolution itself is not strongly affected. However, short-term fluctuations could render the eccentricity unconstrained provided that $\sqrt{1-e^2} \lesssim \epsilon_{\rm SA}^2$, which potentially occurs in our case. We defer the analysis of these non-secular effects to sec. \ref{sec. 5.4}.
\subsubsection{Quadruple systems and chaos} \label{2.1.1}
Hierarchical quadruple systems can be divided into two types of architectures: either $2+2$ systems, where two inner binaries perturb each other, or $(2+1)+1$ (hereafter $3+1$) systems, where the hierarchy is nested \citep{hamers15, hl17}.
Consider a fourth body of mass $m_3$, separation $a_3 \gg a_2$ and eccentricity $e_3$ around the centre of mass of the hierarchical triple, such that the configuration is a $3+1$ quadruple system. In this case there are three binaries, with the inner, intermediate and outer binary denoted by A, B and C, respectively. The intermediate binary B is then perturbed on a secular timescale $\tau_{\rm sec, out}$, given by Eq. (\ref{eq:tsec_in}) with the inner binary period replaced by $P_2$ and the outer binary period by $P_3 = 2\pi (G m_{\rm tot}/a_3^3)^{-1/2}$, where $m_{\rm tot} = m_0 + m_1 + m_2 + m_3$.
The qualitative behaviour is determined by the 'adiabatic parameter' $\mathcal{R}_0 = \tau_{\rm sec} / \tau_{\rm sec, out}$ \citep{sl14, sl15, hamers15, glp18}. If $\mathcal{R}_0 \ll 1$, the angular momentum of the innermost binary A precesses around the angular momentum of the intermediate binary B, which in turn slowly precesses around the angular momentum of binary C. If $\mathcal{R}_0 \gg 1$, the angular momenta of both binaries A and B precess around the angular momentum of the outer binary C, and binaries A and B are effectively decoupled.
In the case where $\mathcal{R}_0 \sim 1$, the evolution is complex and chaotic due to resonance overlap of the two secular frequencies \citep{chirikov79}. \cite{sl14, sl15} analytically analyzed the related problem of the evolution of the spin of a star orbited by an eccentric Jovian planet in a binary star system, while \cite{glp18} connected and extended the analytical theory to hierarchical $3+1$ quadruple systems, mapping out the phase space of chaotic evolution. Recently, \cite{hamers19} compared the results of a population synthesis of quadruple main-sequence stars to the observed properties and concluded that the results fit reasonably well for $3+1$ systems, but not for $2+2$ systems.
\subsection{Galactic tides} \label{sec:2.2}
When a binary is on a wide orbit ($a > 10^4 \rm \ au$), it can be significantly influenced by the tidal field of the Galactic potential. \cite{ht86} first studied the effect of the Galactic tide on the orbits of Oort-cloud comets and found it to be the main effect that drives comets onto Sun-grazing orbits; the perturbations from the Galactic tide accumulate similarly to ZLK oscillations, enhancing the eccentricity of the comet until it grazes the Sun.
Here we follow the simplified model of \cite{ht86}, taking into account only the leading term of the vertical tide and neglecting perturbations in other directions (which are about $\sim 15$ times smaller, \citealp{ht86}). Various studies examined the additional terms \citep[e.g.][]{veras13, ccg17} and found the behaviour to be qualitatively similar to \cite{ht86}.
The potential due to the Galactic tide in the rotating frame centred on the midplane at distance $R = 8 \ \rm kpc$ from the Galactic centre is
\begin{equation}
U(x,y,z) =-\frac{Gm_{\rm tot}}{\sqrt{x^2+y^2+z^2}}+2\pi G\rho_{0}z^{2}, \label{eq:gt}
\end{equation}
where $m_{\rm tot}$ is the total mass of the binary and $\rho_0 = 0.185\ \rm M_{\odot} \ pc^{-3} $ is the local density in the solar neighbourhood.
After performing secular averaging on the mean anomaly, the resulting Hamiltonian is:
\begin{equation}
H_{{\rm GT}}=-\frac{Gm_{{\rm tot}}}{2a}+\pi G \rho_{0}a^{2}\sin^{2}i(1-e^{2}+5e^{2}\sin^{2}\omega), \label{eq:gtsec}
\end{equation}
where $i$ is the inclination angle between the binary and the Galactic disc, and $\omega$ is the argument of pericentre. Expressing the Hamiltonian in terms of the canonical Delaunay elements and writing down the equations of motion, the only difference is in the evolution for $\omega$:
\begin{equation}
\frac{d\omega}{dt} = \frac{3}{4 \tau_{\rm sec}}\left[ (\gamma+1)(1-e^{2})+5\sin^{2}\omega\left(e^{2}-\sin^{2}i\right)\right],
\label{eq:domega_dt}
\end{equation}
where $\gamma = 1$ for ZLK cycles and $\gamma = 0$ for Galactic tides.
It is then straightforward to show that the maximal eccentricity is
\begin{equation}
e_{\rm max} = \sqrt{1 - \frac{5}{4 - \gamma }\cos^2{i}} \label{eq:emax2}
\end{equation}
for inclinations satisfying $\cos^{2}i<(4-\gamma)/5$, i.e. above the critical inclination given by $\cos^{2}i_c=(4-\gamma)/5$. The secular timescale is
\begin{equation}
\tau_{\rm GT} = \frac{3}{8\pi G\rho_0} \sqrt{ \frac{Gm_{\rm tot}}{a^3} } = 0.89 \left( \frac{m_{\rm tot}}{M_\odot} \right)^{1/2} \left( \frac{a}{10^4 {\rm AU}} \right)^{-3/2} \rm Gyr \label{eq:tsecgt}
\end{equation}
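The $0.89\ \rm Gyr$ normalisation can be checked directly with a few lines of Python (a sketch in the same au, $M_\odot$, yr units as before; the only extra input is the conversion $1\ \rm pc = 206265\ \rm au$ for $\rho_0$):
\begin{verbatim}
import math

G = 4 * math.pi**2           # au^3 / (M_sun yr^2)
RHO0 = 0.185 / 206265.0**3   # M_sun pc^-3  ->  M_sun au^-3

def tau_gt(m_tot, a):
    """Vertical Galactic-tide secular timescale (yr)."""
    return 3.0 / (8 * math.pi * G * RHO0) * math.sqrt(G * m_tot / a**3)

print(tau_gt(1.0, 1e4) / 1e9)  # ~0.9 Gyr, matching the prefactor
\end{verbatim}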
When the secular timescales $\tau_{\rm sec}$ and $\tau_{\rm GT}$ (Eqs. \ref{eq:tsec_in} and \ref{eq:tsecgt}) are comparable, the dynamics is chaotic, similar to the dynamics of hierarchical quadruples discussed in sec. \ref{2.1.1}, as will be demonstrated in sec. \ref{sec:3}.
\subsection{Conservative forces and extra precession} \label{sec:2.3}
The ZLK mechanism requires that $\omega$ evolve slowly, and a resonance occurs if, on average, $\dot{\omega} = 0$. In reality, $\omega$ also precesses due to general-relativistic (GR) corrections, or due to tidal and rotational bulges that break the spherical symmetry. If this precession is too fast compared with the ZLK secular timescale, the ZLK oscillations are quenched.
The maximal eccentricity can be limited even by slow additional precession. \cite{lml15} found an analytical expression for the maximal eccentricity:
\begin{align}
\epsilon_{\rm GR} \left( \frac{1}{j_{\rm min}} - 1 \right) + \frac{\epsilon_{{\rm Tide}}}{15}\left(\frac{f_5(e_{{\rm max}})}{8j_{{\rm min}}^{9}}-1\right) & =\frac{9e_{{\rm max}}^{2}}{8j_{{\rm min}}^{2}}\left(j_{{\rm min}}^{2}-\frac{5}{3}\cos^{2}i\right), \label{etide}
\end{align}
where $f_5(e) = 1 + 3e^2 + 3e^4/8$ (see also Eq. \ref{eq:fi} below; we use $f_5$ to avoid confusion with the tidal polynomial $f_1$ defined there), $j_{\rm min} = \sqrt{1 - e_{\rm max}^2}$ and
\begin{align}
\epsilon_{{\rm GR}}& \equiv \frac{3m_{{\rm bin}}(1-e_{2}^{2})^{3/2}}{m_{2}}\left(\frac{a_{2}}{a_1}\right)^{3}\frac{r_{g}}{a_1}\nonumber \\
\epsilon_{{\rm Tide}} & \equiv\frac{15m_{0}^{2}a_{2}^{3}(1-e_{2}^{2})^{3/2}(2k_1)R_{1}^{5}}{a_1^{8}m_{1}m_{2}}, \label{eq:epstide-1}
\end{align}
are the precession rates relative to the ZLK precession rate. In Eq. (\ref{eq:epstide-1}), $r_g \equiv G m_{\rm bin} /c^2$ is the gravitational radius (with $m_{\rm bin}=m_{\rm in}$), $k_1$ is the apsidal motion constant and $R_1$ is the radius of body 1. The maximal eccentricity is small for $\epsilon_{\rm GR/Tide} \gg 1$ and reduces to the standard ZLK maximal eccentricity for $\epsilon_{\rm GR/Tide} \to 0$ (cf. \citealp{lml15} for details).
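As an illustration, these strengths can be evaluated with a short Python sketch (not the production code), in units of au, $M_\odot$ and yr, with $G=4\pi^2$ and $c\simeq6.32\times10^4\ \rm au\,yr^{-1}$:
\begin{verbatim}
import math

G = 4 * math.pi**2   # au^3 / (M_sun yr^2)
C = 63241.1          # speed of light in au/yr
RSUN = 4.65e-3       # solar radius in au

def eps_gr(m_bin, m2, a1, a2, e2):
    """Relative GR precession strength epsilon_GR."""
    r_g = G * m_bin / C**2
    return 3 * m_bin / m2 * (1 - e2**2)**1.5 * (a2 / a1)**3 * r_g / a1

def eps_tide(m0, m1, m2, a1, a2, e2, k1, R1):
    """Relative tidal precession strength epsilon_Tide (R1 in au)."""
    return (15 * m0**2 * a2**3 * (1 - e2**2)**1.5 * 2 * k1 * R1**5
            / (a1**8 * m1 * m2))

# Solar-type inner binary, solar-mass tertiary at 10^4 au:
print(eps_gr(2.0, 1.0, 50.0, 1e4, 0.0))
print(eps_tide(1.0, 1.0, 1.0, 50.0, 1e4, 0.0, 0.014, RSUN))
\end{verbatim}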
Since the dependence of the tidal term on the initial inner separation is extremely steep, we can approximately solve Eq. (\ref{etide}) by keeping either the GR or the tidal term, under the assumptions $\cos i_0=0$ and $j_{\rm min}=(1-e^2_{\rm max})^{1/2}\ll1$. For GR or tides only, the solutions are
\begin{equation}
j_{\rm GR} = \frac{8\epsilon_{\rm GR}}{9};\quad \quad j_{\rm Tide} = \left(\frac{7\epsilon_{\rm Tide}}{216}\right)^{1/9}.\label{eq:j_approx}
\end{equation}
If $j_{\rm GR}$ or $j_{\rm Tide}$ from Eq. (\ref{eq:j_approx}) exceeds unity, the corresponding precession quenches the eccentricity excitation and $e_{\rm max}=0$.
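For transparency, we sketch the algebra leading to Eq. (\ref{eq:j_approx}). With $\cos i_0=0$ and $e_{\rm max}\to1$, the right-hand side of Eq. (\ref{etide}) tends to $9/8$, while on the left-hand side $1/j_{\rm min}-1\approx 1/j_{\rm min}$, the $-1$ in the tidal bracket is negligible, and $f_5(e_{\rm max})\to f_5(1)=35/8$. Keeping one term at a time,
\begin{equation}
\frac{\epsilon_{\rm GR}}{j_{\rm min}}\approx\frac{9}{8}\;\Rightarrow\; j_{\rm GR}=\frac{8\epsilon_{\rm GR}}{9},\qquad
\frac{\epsilon_{\rm Tide}}{15}\,\frac{35}{64\,j_{\rm min}^{9}}\approx\frac{9}{8}\;\Rightarrow\; j_{\rm Tide}^{9}=\frac{7\epsilon_{\rm Tide}}{216}.
\end{equation}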
Fig. \ref{fig:min_p} shows the minimal pericentre (derived from the maximal eccentricity) obtained from the approximate solution of Eq. (\ref{etide}), using $e_{\rm max}=(1-j^2)^{1/2}$ where $j=\max(j_{\rm GR}, j_{\rm Tide})$. The orbits of both binaries are circular and the inclination is $\cos i_0=0$. Each colour indicates a different stellar type for star $m_1$, while the other stars are treated as point-mass Solar-mass stars. The other parameters are described in the caption.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{fig1.png}
\caption{The minimal pericentre possible for the inner binary due to secular evolution. The other masses are $m_2=m_3=1M_\odot$. The radius of star 1 is $R_\star = R_\odot(m_1/M_\odot)^{0.57}$ for MS stars. The final mass of the WD is given by Eq. \ref{eq:mwd}. For RGs, the radius is 10 times larger, and for WDs it is $0.01R_\odot$. The apsidal motion constant is $k_1=0.014$ for MS stars, and $0.1$ for RGs and WDs. Blue curves indicate MS stars, red curves indicate RG stars and green curves indicate WDs. Solid lines are for the default model of $a_2=10^4\ \rm au$ and $m_1=M_\odot$. Dashed lines are for increasing $a_2$ to $3\cdot 10^4\ \rm au$ and dotted lines are for increasing $m_1$ to $5M_\odot$ (while keeping $a_2$ fixed at $10^4\ \rm au$). The horizontal dashed black line indicates the proxy for the tidal radius at $2.7R_\odot$. The grey areas indicate regions where $\epsilon_{\rm GR}\gg1$ and hence the eccentricity excitation is significantly suppressed. }
\label{fig:min_p}
\end{figure}
We see that there are three typical phases:
i) The eccentricity is essentially zero due to large $\epsilon_{\rm GR}$, and the pericentre is simply $a_1$, which by definition increases with $a_1$. This occurs up to separations of $a_1 \approx 20-40\ \rm au$.
ii) The minimal pericentre decreases rapidly with $a_1$. This occurs since $\epsilon_{\rm Tide} \ll \epsilon_{\rm GR} \ll 1$ in Eq. (\ref{eq:j_approx}) (recall that $\epsilon_{\rm GR} \propto a_1^{-4}$), and stops roughly at $a_1 \sim 50\ \rm au$ for the default MS model (blue solid line), close to the tidal disruption limit. For the other models the range is $30-200\ \rm au$.
iii) The pericentre decreases slowly with $a_1$. This occurs when $\epsilon_{\rm GR}<\epsilon_{\rm Tide} \ll 1$, and $j$ depends on $\epsilon_{\rm Tide}$ only as a power of $1/9$. Most of the models are below the tidal disruption limit at this point, which occurs around $a_1\sim 50-100\ \rm au$.
To summarize, for ultra-wide triples ($a_2 \gtrsim 10^4\ \rm au$) the extra precession completely quenches ZLK oscillations for $a_1\lesssim 20\ \rm au$ and prevents collisions and disruptions for $a_1 \lesssim 50-100\ \rm au$. Triples with $a_1 \gtrsim 100\ \rm au$ could potentially have unconstrained eccentricities that even allow direct collisions of the inner binary.
\subsection{Dissipative forces} \label{sec:2.4}
When the innermost two bodies are close enough, the dissipation timescale can become short enough to affect the dynamics. The dissipative forces usually scale steeply with the instantaneous separation, and are therefore often approximated as impulsive dissipation at the pericentre for highly eccentric orbits. For compact objects, dissipation from gravitational wave emission is the dominant contribution, while for planets, MS and RG stars, tidal dissipation is more important \citep{FT07, p15_1}.
\subsubsection{Equilibrium tides}
\cite{hut81} studied the equilibrium tide model, where a constant tidal bulge lags behind the line connecting the two bodies and creates a torque that changes the angular momentum, which in turn drives the system into a synchronous state.
The synchronisation time is an order of magnitude shorter than the circularisation time. Therefore the system spends most of its time in a pseudo-synchronous state, where the spin of the body is aligned with the orbit and the spin rate is
\begin{equation}
\Omega_{\rm ps}(e) = \frac{2 \pi}{P_1} \frac{f_2(e)}{(1-e^2)^{3/2}f_5(e)}, \label{eq:ps}
\end{equation}
where the eccentricity polynomials $f_i(e)$ are given below in Eq. (\ref{eq:fi}). Under this approximation, the changes in the semi-major axis and eccentricity vector are
\begin{align}
\frac{da_1}{dt} = & - \frac{2}{9t_{\rm TF}}\frac{a_1}{(1-e_1^2)^{15/2}} \left( f_1(e_1) - (1-e_1^2)^{3/2}f_2(e_1)\frac{\Omega_{\rm ps} (e_1)}{(2 \pi / P_1)} \right) \nonumber \\
\frac{d\boldsymbol{e}_1}{dt} = & -\frac{1}{t_{\rm TF}}\frac{\boldsymbol{e}_1}{(1-e_1^2)^{13/2}}\left(f_3(e_1) - (1-e_1^2)^{3/2}f_4(e_1)\frac{11 \Omega_{\rm ps} (e_1)}{18 (2 \pi / P_1)}\right)
\label{eq:dedt_tide}
\end{align}
where the tidal friction time is \citep{FT07, p15_1}
\begin{equation}
t_{\rm TF} = \frac{t_{\nu}}{9(1+2k_1)^2}\left( \frac{a_1}{R_1}\right)^8 \frac{m_1^2}{(m_1+m_0)m_0}, \label{eq:tf}
\end{equation}
where $t_{\nu}$ is the typical viscous time of body 1 and $k_1$ is the apsidal motion constant. Typical viscous times are around $\sim 5\ \rm yr$ for MS stars \citep[e.g.][]{hamers21}. Under the pseudo-synchronisation approximation the angular momentum is conserved, and the energy dissipation is governed by the change of the eccentricity at constant angular momentum.
The polynomial functions $f_i$ are (e.g. Eq. 13 in \citealp{mk18}; see also \citealp{hut81})
\begin{align}
f_1(e)=&1+\frac{31e^2}{2} + \frac{255e^4}{8} + \frac{185e^6}{16} + \frac{25e^8}{64} \nonumber \\
f_2(e)=&1+\frac{15e^2}{2} + \frac{45e^4}{8} + \frac{5e^6}{16} \nonumber \\
f_3(e)=&1+\frac{15e^2}{4} + \frac{15e^4}{8} + \frac{5e^6}{64} \nonumber \\
f_4(e)=&1+\frac{3e^2}{2} + \frac{e^4}{8} \nonumber \\
f_5(e)=&1+3e^2 + \frac{3e^4}{8}. \label{eq:fi}
\end{align}
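As an illustration, the equilibrium-tide rates above can be collected into a self-contained Python sketch (a simplified stand-alone version for a scalar $e_1$, not the \texttt{SecuLab} source):
\begin{verbatim}
def f1(e): return 1 + 31*e**2/2 + 255*e**4/8 + 185*e**6/16 + 25*e**8/64
def f2(e): return 1 + 15*e**2/2 + 45*e**4/8 + 5*e**6/16
def f3(e): return 1 + 15*e**2/4 + 15*e**4/8 + 5*e**6/64
def f4(e): return 1 + 3*e**2/2 + e**4/8
def f5(e): return 1 + 3*e**2 + 3*e**4/8

def t_tf(t_visc, k1, a1, R1, m0, m1):
    """Tidal friction time; a1 and R1 in the same units."""
    return t_visc / (9*(1 + 2*k1)**2) * (a1/R1)**8 * m1**2 / ((m1 + m0)*m0)

def eq_tide_rates(a1, e1, tTF):
    """(da1/dt, de1/dt) for a pseudo-synchronously spinning body 1."""
    j2 = 1 - e1**2
    ps = f2(e1) / (j2**1.5 * f5(e1))   # Omega_ps in units of 2 pi / P1
    dadt = -2*a1/(9*tTF) / j2**7.5 * (f1(e1) - j2**1.5 * f2(e1) * ps)
    dedt = -e1/tTF / j2**6.5 * (f3(e1) - 11/18 * j2**1.5 * f4(e1) * ps)
    return dadt, dedt
\end{verbatim}
Note that both rates vanish identically for $e_1=0$ at pseudo-synchronous rotation, as they should.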
\subsubsection{Dynamical tides}
The equilibrium tide model is valid for quasi-static bulges, typically in orbits of low eccentricity. Conversely, for highly eccentric or unbound binaries, the tidal bulge is raised essentially during the closest approach, and the energy is subsequently dissipated via non-radial dynamical oscillations \citep{pt77, mardling1, mardling2, lai1997}; the general trend is to drive the system into a short-period, circular orbit faster than in the equilibrium tide model.
In a recent paper, \cite{mk18} included a simplified prescription for dynamical tides. Ignoring the chaotic phase, the energy dissipated in the leading dynamical modes is
\begin{equation}
\Delta E = f_{\rm dyn} \frac{m_0+m_1}{m_1} \frac{G m_0^2}{R_1} \left( \frac{R_1}{a_1 (1-e_1)} \right)^9
\end{equation}
which relates to the total change in the semi-major axis:
\begin{equation}
\frac{da_1}{dt}=-\frac{2a_1^2}{P_1} \frac{\Delta E}{G m_0 m_1}.
\end{equation}
Here $f_{\rm dyn}$ parametrizes the efficiency of dynamical tides, and varies in the range $f_{\rm dyn} \sim 0.03 - 1$, depending on the stellar structure \citep{mk18}.
Note that in the discussion above the tide is raised only on body 1, while the companion is effectively treated as a point mass. The roles can be reversed, and in reality both bodies experience tidal dissipation.
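The dynamical-tide prescription amounts to a single orbit-averaged decay rate; a minimal sketch in the same au, $M_\odot$, yr units (illustrative only):
\begin{verbatim}
import math

G = 4 * math.pi**2  # au^3 / (M_sun yr^2)

def dyn_tide_dadt(a1, e1, m0, m1, R1, f_dyn=0.03):
    """da1/dt from impulsive dissipation at pericentre (R1 in au)."""
    dE = f_dyn * (m0 + m1)/m1 * G*m0**2/R1 * (R1/(a1*(1 - e1)))**9
    P1 = 2*math.pi*math.sqrt(a1**3/(G*(m0 + m1)))
    return -2*a1**2/P1 * dE/(G*m0*m1)
\end{verbatim}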
\subsubsection{Transitional eccentricity}
\begin{figure}
\centering
\includegraphics[width=8.2cm]{fig2.png}
\caption{The ratio between the dissipation rates of dynamical and equilibrium tides as a function of the eccentricity. For each curve the parameters are $k_1 = 0.014$ and $f_{\rm dyn} =0.03$. The orbital periods are $P_1/t_{\nu} = 1000, 100, 10, 1$, which correspond to the blue, orange, green and red lines, respectively. The thick dots are the approximate solutions derived in Eq. (\ref{eq:e_trans}).}
\label{fig2}
\end{figure}
Since equilibrium tides operate on almost circular binaries, while dynamical tides operate mostly on highly eccentric orbits, there is a transitional eccentricity $e_{\rm trans}$ where both terms are equal. \cite{mk18} used $e_{\rm trans}=0.8$, similar to the value of \cite{wu18}, and also assumed that only one of the tidal prescriptions is valid at a time. By doing so, however, the tidal dissipation is inherently discontinuous.
Moreover, the transitional eccentricity is extremely sensitive to the assumed viscous time and the initial period of the binary. Comparing the two prescriptions, the ratio between the energy dissipation rates is \citep[e.g.][]{rozner2021, glanz_roz21}
\begin{equation}
\beta(e) \equiv \left| \frac{\dot{a}_{\rm dyn}}{\dot{a}_{\rm eq}} \right| = \frac{2f_{\rm dyn} R_1^3}{21Gm_1k_1\tau_L P_1} \frac{(1-e^2)^{15/2}}{e^2(1-e)^9 g(e)} \label{eq:beta}
\end{equation}
where $\tau_L$ is the lag time and
\begin{align}
g(e) = \frac{1 + \frac{45}{14} e^2+ 8 e^4 + \frac{685}{224} e^6 + \frac{255}{448} e^8 + \frac{25}{1792} e^{10}}{1 + 3 e^2 + \frac{3}{8} e^4}
\end{align}
For highly eccentric orbits, the dependence on the eccentricity reduces to
\begin{equation}
\frac{(1-e^2)^{15/2}}{e^2(1-e)^9 g(e)} \to \frac{50}{(1-e)^{3/2}}.
\end{equation}
The viscous time can be related to the lag time via
\begin{equation}
t_\nu = \frac{3(1+2k_1)^2}{2k_1}\frac{R_1^3}{Gm_1\tau_L}\label{eq:t_visc}
\end{equation}
thus, $\beta(e\to 1)$ is given by
\begin{equation}
\beta = \frac{3.17f_{\rm dyn} t_\nu}{ P_1 (1+2k_1)^2 (1-e)^{3/2}}
\end{equation}
Setting $\beta=1$ and $k_1=0.014$ yields a transition eccentricity
\begin{equation}
e_{\rm trans} = 1 - \left(\frac{3f_{\rm dyn}t_\nu}{P_1} \right)^{2/3}. \label{eq:e_trans}
\end{equation}
Fig. \ref{fig2} shows $\beta(e)$ (Eq. \ref{eq:beta}) for various ratios of $P_1/t_{\nu}$. The filled circles represent the approximate formula (Eq. \ref{eq:e_trans}). The correspondence is good, so the qualitative behaviour can be understood using Eq. (\ref{eq:e_trans}). Generally, the transition eccentricity approaches unity as the inner period increases. For a typical viscous time of $1\ \rm yr$ and wide binaries, $e_{\rm trans}$ is close to unity. For warm Jupiters and binaries around $\sim 5 \ \rm au$, $e_{\rm trans}$ is around $0.8-0.9$\footnote{We note that for Jupiters at $\sim 1\ \rm au$, the viscous and orbital times are comparable, and dynamical tides could be dominant throughout most of the eccentricity range, requiring artificial truncation and better tidal models \citep{glanz_roz21, rozner2021}.}.
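The behaviour in Fig. \ref{fig2} can be reproduced with a few lines of Python (a sketch of Eqs. \ref{eq:beta} and \ref{eq:e_trans}, with $\tau_L$ eliminated via Eq. \ref{eq:t_visc}):
\begin{verbatim}
def g(e):
    num = (1 + 45/14*e**2 + 8*e**4 + 685/224*e**6
           + 255/448*e**8 + 25/1792*e**10)
    return num / (1 + 3*e**2 + 3/8*e**4)

def beta(e, P1_over_tnu, k1=0.014, f_dyn=0.03):
    """Ratio |adot_dyn / adot_eq| as a function of eccentricity."""
    ecc = (1 - e**2)**7.5 / (e**2 * (1 - e)**9 * g(e))
    return 4 * f_dyn * ecc / (63 * P1_over_tnu * (1 + 2*k1)**2)

def e_trans(P1_over_tnu, f_dyn=0.03):
    """Approximate transition eccentricity (beta = 1, e -> 1)."""
    return 1 - (3 * f_dyn / P1_over_tnu)**(2/3)

for r in (1000, 100, 10, 1):
    print(r, e_trans(r), beta(e_trans(r), r))  # beta ~ 1 at e_trans
\end{verbatim}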
To summarize, taking $e_{\rm trans} \approx 0.8$ is consistent with planetary systems of warm Jupiters, where dynamical tides speed up the tidal dissipation beyond $e>e_{\rm trans}$, but for wider orbits the equilibrium tide model is the dominant one for much higher eccentricities compared with closer binaries.
\section{Numerical set-up and qualitative trends} \label{sec:3}
In order to explore the coupling of wide triples to the Galactic tide, we use \href{https://github.com/eugeneg88/SecuLab}{\texttt{SecuLab}}, a secular code which solves the equations of motion up to octupole order. In addition, the code includes the effects of precession from GR and tidal bulges, dissipation from both equilibrium and dynamical tides, and GW emission according to the 2.5 post-Newtonian expansion. We also include additional novel terms that correspond to single-averaging corrections for short-time variations of the outer orbit. The code is available in the public \href{https://github.com/eugeneg88/SecuLab}{GitHub repository}.
We first describe specific examples of CATGATE to provide a detailed view of the secular and chaotic secular evolution through this process. We then describe our methods and parameter space for a population synthesis study of CATGATE for low-mass stars (below eight Solar masses, i.e. not exploding as core-collapse supernovae), including simplified stellar evolution considerations, tides, collisions and GR precession.
\subsection{Example of chaotic evolution} \label{sec:3.1}
The evolution of the inner binary is chaotic when the secular timescales are comparable. The ratio between the secular timescales is
\begin{align} \label{r0}
\mathcal{R}_0 & = \frac{\tau_{\rm sec}}{\tau_{\rm GT}} = \frac{2G\rho_{0}}{3\pi}\frac{m_{{\rm tot}}}{m_{{\rm out}}}\frac{P_{2}^{3}(1-e_{2}^{2})^{3/2}}{P_{1}} \\
& = 1.16 \left( \frac{M_\odot}{m_{\rm out}} \right) \left( \frac{3 m_{\rm in} }{2 m_{\rm tot}} \right)^{1/2} \left( \frac{200\rm au}{a_1} \right)^{3/2} \left( \frac{a_2}{2\cdot 10^4\rm au} \right)^{9/2} \biggr\rvert_{e_2=0}, \nonumber
\end{align}
where in the last row we assumed $e_2=0$ and denoted $m_{\rm out}\equiv m_2$. Chaotic behaviour is expected for $\mathcal{R}_0 \sim 1$ \citep{hamers15,glp18}, i.e. around $a_1 \sim 200\ \rm au$ for the fiducial outer orbit. For these systems, the dominant conservative non-Keplerian effect is GR precession from the 1PN terms.
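As a cross-check of the normalisation, a minimal sketch (same au, $M_\odot$, yr units as before):
\begin{verbatim}
import math

G = 4 * math.pi**2          # au^3 / (M_sun yr^2)
RHO0 = 0.185 / 206265.0**3  # M_sun pc^-3 -> M_sun au^-3

m_in, m_out, m_tot = 2.0, 1.0, 3.0
a1, a2, e2 = 200.0, 2e4, 0.0
P1 = 2 * math.pi * math.sqrt(a1**3 / (G * m_in))
P2 = 2 * math.pi * math.sqrt(a2**3 / (G * m_tot))
R0 = 2*G*RHO0/(3*math.pi) * m_tot/m_out * P2**3 * (1 - e2**2)**1.5 / P1
print(R0)  # ~1.15, consistent with the 1.16 normalisation above
\end{verbatim}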
\begin{figure*}
\centering
\includegraphics[width=8.5cm]{fig3a.png} \includegraphics[width=8.9cm]{fig3b.png}
\caption{Example of a chaotic CATGATE evolution. The initial conditions are $m_0 = m_1 = m_2 = M_\odot, e_1 = e_2 = 0.01, a_1 = 200\ \rm au, a_2 = 2\cdot10^4\ \rm au$. The initial inclination to the Galactic plane is $70^{\circ}$. The apsidal and nodal angles are zero. The red lines represent the inner binary, while the green lines represent the outer binary. Solid lines represent the run with GR precession, while dashed lines represent purely Newtonian dynamics. Top: eccentricity. Middle: inclination between the planes of the inner binary A and the outer binary B. Bottom: argument of pericentre. Left: full evolution for $14$ Gyr. Right: the same evolution, zoomed in on the range of $9-10$ Gyr, where the inner eccentricity is excited to extreme values, the orbit experiences orbital flips and the argument of pericentre is resonant.}
\label{fig:2}
\end{figure*}
Fig. \ref{fig:2} shows the evolution of a chaotic system, which consists of three equal Solar-mass stars on almost circular orbits ($e=0.01$), separated by $200$ and $2\cdot 10^4\ \rm au$, respectively, forming a hierarchical triple. The secular parameter $\mathcal{R}_0$ from Eq. (\ref{r0}) is of order unity, which indicates that strong chaotic excitations may occur. The triple system is initially aligned internally, but is misaligned with the Galactic plane by $70^{\circ}$.
We see that the outer binary oscillates with a period of $\sim2\ \rm Gyr$ and drives the inner binary onto an extremely eccentric orbit. Eventually, at the fourth cycle, the eccentricity of the inner binary becomes extremely large, $1-e_1 \approx 10^{-4.5}$. We note that this large eccentricity is only attained when we include extra precession terms, such as GR precession.
The right panel of Fig. \ref{fig:2} shows the zoomed-in evolution between $9-10\ \rm Gyr$. We see that the argument of pericentre is caught in a transient librating state, which allows the rapid high-eccentricity variations.
We note that Fig. \ref{fig:2} is integrated with point sources without dissipative tides (only GW dissipation) or stopping conditions, and is merely a proof-of-concept illustration. Realistic stars with finite radii would have collided directly if the pericentre were low enough, or interacted via mass transfer or common-envelope evolution (CEE) if the semi-major axis were reduced sufficiently by tidal interactions. We study more realistic scenarios by means of population synthesis below.
\subsection{Population synthesis} \label{sec:3.2}
\subsubsection{Parameter space and initial conditions} \label{3.2.1}
\begin{table}
\begin{center}
\begin{tabular}{|c|c||c||c|}
\hline
\multicolumn{4}{|c|}{{\bf Orbital Parameters}}\tabularnewline
\hline
\hline
$a_{1}$ & \multicolumn{3}{c|}{$\log\ U\sim[50,500]\ {\rm au}$}\tabularnewline
\hline
$a_{2}$ & \multicolumn{3}{c|}{$\log\ U\sim[5\times10^{3},5\times10^{4}]\ {\rm au}$}\tabularnewline
\hline
$e_{1},e_{2}$ & \multicolumn{3}{c|}{thermal: $ f(e)\propto2e$}\tabularnewline
\hline
$i_{1},i_{2}$ & \multicolumn{3}{c|}{uniform in $\cos i$}\tabularnewline
\hline
$\Omega_{i},\omega_{i}$ & \multicolumn{3}{c|}{Uniform in $[0,2\pi)$}\tabularnewline
\hline
\multicolumn{4}{|c|}{{\bf Physical properties}}\tabularnewline
\hline
$m_{1},m_{2}$ & \multicolumn{3}{c|}{$\begin{matrix}{\rm \text{Low Mass: Kroupa (0.5,1)}}\\
{\rm \text{Int. Mass: Kroupa (1,8)}}
\end{matrix}$}\tabularnewline
\hline
$m_{3}$ & \multicolumn{3}{c|}{Kroupa (0.5,1)}\tabularnewline
\hline
$R_{i}$ & \multicolumn{3}{c|}{$\begin{matrix}(m_{i}/M_{\odot})^{\xi}R_{\odot}\ {\rm MS}\\
10\times R_{i}(0)\ {\rm RG}\\
0.01R_{\odot}\ {\rm WD}
\end{matrix}$ }\tabularnewline
\hline
$k_{1}$ & \multicolumn{3}{c|}{$\begin{matrix}0.014\ {\rm MS}\\
0.1\ {\rm Other}
\end{matrix}$}\tabularnewline
\hline
$t_{\nu}$ & \multicolumn{3}{c|}{$\begin{matrix}5\ {\rm yr}\ {\rm MS}; 10^4\ {\rm yr\ if} M>1.25M_\odot \\
0.7\ {\rm yr}\ {\rm RG}\\
10^{7}\ {\rm yr}\ {\rm WD}
\end{matrix}$}\tabularnewline
\hline
$f_{\rm dyn}$ & \multicolumn{3}{c|}{0.03}\tabularnewline
\hline
\end{tabular}
\par\end{center}
\caption{Initial distributions for the orbital and physical parameters.}\label{tab:ic}
\end{table}
Since the binaries are generally wide, their orbital parameters are assumed to be uncorrelated. The masses are drawn from the low-mass Kroupa mass function, $dN/dm\propto m^{-2.3}$. Once the mass is determined, the radius follows from the mass-radius relation $R/R_\odot = (m/M_\odot)^\xi$, with $\xi=0.8$ for $m<M_\odot$ and $\xi=0.57$ for $m>M_\odot$ \citep[e.g.][]{MRR}. The eccentricities are drawn from a thermal distribution, $dN/de\propto 2e$. The separations are drawn from a log-uniform distribution, $dN/da \propto 1/a$. The inclinations are cosine-uniform, $dN/d\cos i \propto 1$. The nodal and apsidal angles are drawn uniformly, $\omega,\Omega \sim U[0,2\pi)$. These choices are generally motivated by observations \citep{moe-distefano2017}.
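For reproducibility, an illustrative sampler of the distributions in Table \ref{tab:ic} (a hypothetical stand-alone helper, not the actual \texttt{SecuLab} input generator):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
n = 2000

def kroupa(n, m_lo, m_hi, alpha=2.3):
    """Inverse-CDF sampling of dN/dm ~ m^-alpha."""
    u = rng.uniform(size=n)
    a, b = m_lo**(1 - alpha), m_hi**(1 - alpha)
    return (a + u*(b - a))**(1/(1 - alpha))

a1 = 10**rng.uniform(np.log10(50), np.log10(500), n)   # au, log-U
a2 = 10**rng.uniform(np.log10(5e3), np.log10(5e4), n)  # au, log-U
e1 = np.sqrt(rng.uniform(size=n))                      # thermal
e2 = np.sqrt(rng.uniform(size=n))                      # thermal
i1 = np.arccos(rng.uniform(-1, 1, n))                  # cos-uniform
i2 = np.arccos(rng.uniform(-1, 1, n))                  # cos-uniform
omega1, omega2 = rng.uniform(0, 2*np.pi, (2, n))       # apsidal
Omega1, Omega2 = rng.uniform(0, 2*np.pi, (2, n))       # nodal
m1, m2 = kroupa(n, 0.5, 1.0), kroupa(n, 0.5, 1.0)      # low-mass run
m3 = kroupa(n, 0.5, 1.0)
\end{verbatim}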
For MS stars, we use the apsidal motion constant $k_1=0.014$ and a viscous time of $t_{\nu} = 5\rm \ yr$. Stars above $m\gtrsim1.25M_\odot$ have radiative envelopes, in which tides are much weaker; we account for this with a much longer viscous time (Table \ref{tab:ic}).
Generally, most population synthesis studies use detailed prescriptions that rely on stellar evolution codes to determine $k_1$ and the ratio $k_1/T_c$, where $T_c$ is related to the tidal dissipation \citep{hurley2002, hamers21}. In our case, tides are important only when the pericentre is small, while we aim to estimate the fraction of systems that can reach a low pericentre to begin with. Hence, the role of tides is secondary and the typical parameters are adopted at the order-of-magnitude level.
\subsubsection{Simplified stellar evolution} \label{sec:3.3}
The main-sequence lifetime of stars is empirically approximated as (e.g. \citealp{hansen2004})
\begin{equation}
t_{\rm ms} = 10 \left(\frac{m}{M_\odot}\right)^{-2.5}\rm Gyr \label{t_ms}
\end{equation}
which is comparable to our secular and chaotic evolution timescales for stars above a Solar mass. The red giant phase has a timescale of $10-15\%$ of the main-sequence lifetime. We set the time of WD formation to $t_{\rm WD} = 1.15t_{\rm ms}$ and the RG radius to be $10$ times the initial radius. This is in no way a detailed evolution model but rather a proxy to probe the importance of this channel for red giant binary interactions.
The apsidal motion constant of red giants is larger, and we set it to $k_1=0.1$ based on the tables of \cite{claret1992}. The viscous time is set for simplicity to $t_{\rm \nu}= 5 (k_1/0.014)^{-1} \ \rm yr$ to maintain a roughly constant lag time, which agrees to within an order of magnitude with more detailed prescriptions \citep{hurley2002, hamers21}.
The WD phase is the final stage of evolution for the low-mass stars we consider. The WD radius is taken to be $10^{-2}R_\odot$. The apsidal constant is $k_1=0.1$, estimated from \cite{vila1977}, and the viscous timescale is large, $t_{\rm \nu} \gtrsim 10^7\ \rm yr$ \citep{campbell84}. We expect equilibrium tides to be negligible in the WD phase due to the small radii and long viscous times, but dynamical tides could still play a role in highly eccentric encounters.
In all cases, the strength of the dynamical tide is $f_{\rm dyn}=0.03$, which is most appropriate for MS stars \citep{mk18}. Table \ref{tab:ic} summarizes the initial conditions and parameter distributions.
\subsection{Stopping conditions}\label{3.3}
The stopping conditions are:
i) The time reaches $t_{\rm end}=10\ \rm Gyr$.
ii) The pericentre falls below a few times the mutual radius, $r_p=\zeta (R_1+R_2)$.
We choose $\zeta=3$, which is close enough to the value of $2.7$ from detailed tidal disruption simulations \citep{g13}.
iii) The semi-major axis shrinks to $10\%$ of its initial value.
iv) The triple becomes dynamically unstable according to the \cite{ma01} (hereafter MA) stability criterion (see the sketch after this list)
\begin{equation}
\frac{a_2(1-e_2)}{a_1} \le 2.8 \left[ \left(1 + \frac{m_{\rm out}}{m_{\rm in}} \right) \frac{1+e_2}{(1-e_2)^{1/2}} \right]^{2/5}.
\end{equation}
These stopping conditions correspond to four possibilities: i) no significant interaction occurs during the evolution; ii) a close encounter, probably a tidal disruption or direct collision; iii) efficient tidal migration; iv) dynamical instability according to the MA criterion.
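For reference, a minimal implementation of the MA check (a sketch in the same notation, with masses and separations in consistent units):
\begin{verbatim}
def ma_unstable(a1, a2, e2, m_in, m_out):
    """True if the MA01 stability criterion is violated."""
    rhs = 2.8 * ((1 + m_out/m_in) * (1 + e2) / (1 - e2)**0.5)**0.4
    return a2 * (1 - e2) / a1 <= rhs

# The fiducial chaotic triple above is comfortably stable:
print(ma_unstable(200.0, 2e4, 0.01, 2.0, 1.0))  # False
\end{verbatim}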
\begin{figure*}
\centering
\includegraphics[width=7cm] {fig4a.png}\includegraphics[width=7cm] {fig4b.png}
\caption{\textbf{Distributions of initial and final eccentricities}. The blue lines indicate the initial eccentricity distribution of all runs ($t=0$ label). The green lines show the initial distributions of the runs that stopped once the critical pericentre separation was reached ($t=0,\rm s$ label). The red lines show the final eccentricity distribution of all runs ($\rm fin$ label). The purple lines show the final eccentricity distribution of the runs that stopped once the critical pericentre separation was reached ($\rm fin, s$). The left panel shows the inner eccentricity $e_1$, while the right panel shows the outer eccentricity. The CDFs are shown on a log scale of $1-e$.}
\label{fig:es}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=7cm] {fig5a.png}\includegraphics[width=7cm] {fig5b.png}
\caption{Same as Fig. \ref{fig:es}, but for $a_1$ and $a_2$.}
\label{fig:as}
\end{figure*}
\section{Population synthesis results} \label{sec:4}
\subsection{Main sequence stars} \label{sec:4.1}
In order to explore the fraction of wide triple systems that undergo chaotic evolution, we initialize 2000 systems as follows. The initial conditions are drawn as described in sec. \ref{3.2.1}. The boundaries are $50\ \rm au < a_1 < 500\ \rm au$ and $5000\ \rm au < a_2 < 50000\ \rm au$. The masses follow the Kroupa mass function between $0.5\ M_\odot <m< 1 M_\odot$. We run each system up to a time of $t_{\rm end} = 10\ \rm Gyr$ or until one of the stopping conditions of sec. \ref{3.3} applies. The output is recorded every $10^4\ \rm yr$, for up to $10^6$ data points per simulation.
\subsubsection{Statistical properties}
Out of the $2000$ simulations, $\sim27\%$ ($538$) are unstable and are expected to lead to the ejection of one of the stars\footnote{A fraction of $\sim30\times R/a_{\rm in}$ could still result in a close interaction, where $R$ is the stellar radius at the time and $a_{\rm in}$ is the inner binary semi-major axis; see e.g. the discussion of collisions in an unstable system in \cite{pk12}. Given the small ratio, we neglect such interactions in the current study; the fraction reaches at most a few per cent for RGs.}. Around $7.2\%$ ($144$) end up with a pericentre smaller than our threshold. Since they are extremely eccentric, they enter the Roche limit on an almost radial orbit and will probably be tidally disrupted or collide. Around $1.3\%$ ($26$) end up with a separation ten times smaller than the initial value, due to efficient tidal dissipation. These systems will probably be tidally circularized and eventually form close binaries. The rest of the sample remained stable and no significant interaction occurred.
\begin{figure*}
\centering
\includegraphics[width=7cm] {fig6a.png}\includegraphics[width=7cm]{fig6b.png}
\includegraphics[width=7cm] {fig6c.png}\includegraphics[width=7cm]{fig6d.png}
\caption{Same as Fig. \ref{fig:es}, but for the parameter $\mathcal{R}_0$ (left) and the pericentre $q_1$ (right). The solid lines indicate the recorded pericentre at the end of the run, while the dashed lines correspond to the minimal recorded pericentre. The bottom panels show the density plots of the data in the top panels. }
\label{fig:R-q}
\end{figure*}
The fraction of unstable systems is derived in Appendix A and is equal to $f_{\rm U}(e_f) = 2\sqrt{(1-e_f^2)/5}$, where $e_f$ is the eccentricity above which the system is typically unstable. Since the mean separation ratio is $\langle a_2/a_1\rangle \approx 100$ and instability occurs roughly when $a_2 \approx 5a_1$, a good choice is $e_f=0.95$, which gives $f_{\rm U}\approx 28\%$, or $\sim 557$ unstable systems, remarkably close to the recorded value of $538$.
In what follows, we show the initial and final cumulative distribution functions (CDFs) of various parameters. We distinguish between the entire sample ('all') and the sample of systems that were stopped due to conditions ii) and iii) (the 'terminated' or 'stopped' sample). Stopping due to dynamical instability is not included in the stopped sample because it will most likely lead to an ejection and not to a close encounter.
Fig. \ref{fig:es} shows the CDFs of the eccentricities. The left panel shows the CDF of the inner eccentricity $e_1$ (or rather the CDF of $\log_{10}(1-e_1)$, so large negative values indicate eccentricities closer to unity, $e_1\to1$). The initial inner eccentricity $e_1$ follows a thermal distribution (blue line). The initial eccentricity sample of the orbits which were terminated (green line) is slightly above thermal, but not significantly so.
The final inner eccentricity CDF (red line) is more eccentric. The vast majority of the extreme eccentricities ($\log_{10}(1-e_1) < -3$) are achieved in terminated orbits, and a large eccentricity ($\log_{10}(1-e_1) < -2$) is a requirement for a close encounter, as expected. The eccentricities are also pushed to the extreme values of $1-e_1 \approx 10^{-4.5}$, which are unachievable for standard ZLK oscillations with our initial conditions.
The right panel shows the CDF of the outer eccentricity. Here we also see that the initial outer eccentricity $e_2$ of orbits that lead to termination is slightly more eccentric than the overall initial distribution. The initial and final 'all' $e_2$ are truncated at around $e_2\lesssim 0.98$ to allow dynamical stability. The final $e_2$ is on average more eccentric, but the maximal value for terminated orbits is somewhat lower, around $e_2\approx 0.95$, again set by dynamical stability. We show later in Fig. \ref{fig:incs} that dynamical instability occurs predominantly when the outer inclination angle $i_2$ is initially close to $\pi/2$.
Fig. \ref{fig:as} shows the CDFs of the separations. The left panel shows the inner separation. The initial separation is distributed log-uniformly (blue), as is the distribution of terminated orbits. The final distribution (red) is also almost log-uniform, up to a tail of small separations for orbits that have undergone efficient dissipation. The final distribution of terminated orbits is more skewed towards lower separations; $\sim 40\%$ of the terminated orbits end up with $a_1<50\ \rm au$, while the orbits that migrated to smaller separations comprise roughly a quarter of the terminated orbits. This means that some orbits tidally evolved before reaching a small pericentre and terminating.
The right panel of Fig. \ref{fig:as} shows the outer separations. The initial and final separations are essentially identical, both for all the initial conditions and for the terminated ones. The orbits that lead to termination are slightly more skewed, partially due to the absence of chaotic evolution at the very small or very large separations.
\begin{figure*}
\centering
\includegraphics[width=5.9cm]{fig7a.png}\includegraphics[width=5.9cm]{fig7b.png}\includegraphics[width=5.9cm]{fig7c.png}
\includegraphics[width=5.9cm]{fig7d.png}\includegraphics[width=5.9cm]{fig7e.png}\includegraphics[width=5.9cm]{fig7f.png}
\caption{Same as Fig. \ref{fig:es}, but for the inner (left), outer (middle) and mutual (right) inclinations. The middle panel also shows the initial outer inclination of systems that eventually became unstable (grey), together with the entire initial sample (blue). }
\label{fig:incs}
\end{figure*}
In order to explore the role of the chaotic dynamics, we plot the CDF of the $\mathcal{R}_0$ parameter (Eq. \ref{r0}) in the top left panel of Fig. \ref{fig:R-q}, while the density plot (a histogram whose area is normalised to 1) is shown in the bottom left panel. The initial distribution is roughly uniform in the range $-3 \le \log_{10}\mathcal{R}_0 \le 3$ (blue lines); however, the initial conditions that lead to termination are predominantly between $-0.3 \le \log_{10}\mathcal{R}_0 \le 0.7$, with $\sim50\%$ of the terminated systems in this range. The final recorded $\mathcal{R}_0$ also follows a roughly uniform distribution, although among the terminated orbits there is a dearth of final values around $\log_{10}\mathcal{R}_0 \ge 0$. The reason is probably the reduction of the outer eccentricity $e_2$ at the close encounter, which reduces $\mathcal{R}_0$. The systems most prone to strong encounters are those in the chaotic regime, where $\mathcal{R}_0 \sim \mathcal{O}(1)$, with large misalignment with respect to the Galactic plane, which can give rise to large variations in $e_2$.
The bottom left panel shows only the initial $\mathcal{R}_0$, demonstrating that the terminated orbits predominantly originate at $0.5\lesssim \mathcal{R}_0 \lesssim5$.
The top right panel of Fig. \ref{fig:R-q} shows the CDF of the pericentre $q_1= a_1(1-e_1)$, while the bottom right panel shows the density plot. While there is little difference in the initial pericentre between the total CDF and the terminated CDF, the vast majority of the terminated systems have a final pericentre smaller than a few Solar radii. The maximal pericentre of a terminated orbit is $\approx 0.04\rm \ au$, which means that a low pericentre is required both for a radial plunge and for efficient tidal dissipation. The bottom right panel confirms that a low $q_1$ is required for any significant interaction.
The top row of Fig. \ref{fig:incs} shows the CDFs of the initial and final inclinations, while the bottom row shows the density plots of the same data. The left panels show the inner inclination $i_1$ with respect to the Galactic plane. We see little variation in $i_1$, which means that the evolution is largely independent of $i_1$. The middle panels show the inclination of the outer binary with respect to the Galactic plane, $i_2$. The total initial and final $i_2$ do not change much, but the initial $i_2$ that leads to instabilities (grey line) is located mostly around $\pi/2$. This is to be expected, since $i_2$ close to $\pi/2$ leads to high excitation of $e_2$, which destabilizes the triple system. The right panels show the mutual inclination $i_{12}$. The initial and final values do not change much, but the initial value of terminated orbits is more clustered around $i_{12}=\pi/2$. These orbits can interact on a shorter timescale without the aid of the Galactic tide, as we demonstrate in the next section.
\subsubsection{Correlations}
In order to explore correlations between various parameters, we show in Fig. \ref{fig:low_mass_scatters} scatter plots of key parameters of the terminated orbits. The left panel shows the scatter of the mutual inclination against the $\mathcal{R}_0$ parameter. Red marks indicate short stopping times, while blue marks indicate longer stopping times. We see that the orbits that were terminated first are close to being initially polar, i.e. $\cos i_{12} \approx 0$. Their low value of $\mathcal{R}_0$ suggests a relatively short secular timescale, on the order of $10-100\ \rm Myr$, comparable to their stopping time. These triples evolved via isolated triple evolution without the aid of the Galactic tide. Only at $\mathcal{R}_0\gtrsim 0.1$ does the scatter in $\cos i_{12}$ start to widen and the stopping times increase to $t>1\ \rm Gyr$, suggesting that the chaotic Galactic tidal evolution took place.
\begin{figure*}
\centering
\includegraphics[width=8.1cm]{fig8a.png}\includegraphics[width=8.5cm]{fig8b.png}
\caption{Scatter plots of various parameters. Left: cosine of the mutual inclination $i_{12}$ versus the (logarithm of the) $\mathcal{R}_0$ parameter, with the stopping time as a color code. Right: log-log plot of the initial outer versus initial inner separations, with the final separation in units of the initial separation as a color code.}
\label{fig:low_mass_scatters}
\end{figure*}
The right panel of Fig. \ref{fig:low_mass_scatters} shows the log-log scatter of the initial inner and outer separations, where the colour code is the relative shrinkage of the inner orbit. There are two distinct populations: the migrating one (which stopped at $a_{1,\rm fin} = 0.1 a_{1,0}$) and the plunging one, which is only partially migrating, or not migrating at all, and stopped due to a small pericentre approach. The two populations are distinguished mainly by their initial separation. Systems with $a_1\lesssim 100\ \rm au$ are affected by additional precession terms from GR and tides that limit the maximal eccentricity, save them from disruption and allow efficient migration, while systems with $a_1 \gtrsim 100\ \rm au$ have larger, unconstrained eccentricities and tend to plunge and probably be disrupted or end in direct collision, depending on the specific stellar types and mass ratios.
\begin{figure*}
\centering
\includegraphics[width=5.8cm]{fig9a.png}\includegraphics[width=5.8cm]{fig9b.png}\includegraphics[width=5.8cm]{fig9c.png}
\caption{Same as Fig. \ref{fig:es}, but for the RG runs. Left: inner separation. Middle: outer separation. Right: inner eccentricity. }
\label{fig:9}
\end{figure*}
{\bf To summarize}, $27\%$ of the triples are dynamically unstable and most likely become unbound. $7.2\%$ become close binaries with a pericentre less than 3 times the sum of their stellar radii, while another $1.3\%$ shrink efficiently due to orbital energy loss.
Most of the close encounters are induced by the Galactic tide lowering the pericentre of the outer binary and triggering chaotic evolution.
\subsection{Red giants and common-envelope evolution} \label{sec:4.2}
In order to add more massive stars that undergo stellar evolution within the duration of the dynamical simulation, we must specify the details of the stellar evolution and mass-loss prescription. We assume adiabatic mass loss, namely that $m/\dot{m} \gg P_2$, which is roughly correct for the widest binaries, provided that the stars are not too massive. In this case the angular momentum is conserved and we have, for a star with initial mass $m_0$:
\begin{equation}
m(t) = m_0 \left( \frac{m_0}{m_{\rm WD}} \right) ^ {-\frac{t-t_{\rm MS}}{t_{\rm WD}-t_{\rm MS} } };\quad t\in (t_{\rm MS}, t_{\rm WD}) \label{eq:m_of_t}
\end{equation}
Note that the vast majority of our inner binaries are initially sufficiently wide such that, in the absence of CATGATE effects, no significant mass transfer or interaction occurs between the inner binary components.
For the final WD mass we use the empirical relationship of \cite{catalan2008}:
\begin{equation}
m_{\rm WD}(m)=\begin{cases}
0.096 m + 0.429 & m<2.7\\
0.127 m + 0.31 & m>2.7
\end{cases}\label{eq:mwd}
\end{equation}
where $m,m_{\rm WD}$ are given in units of $M_\odot$.
Although it is not obvious at first glance, the mass decays exponentially, $m(t) \propto \exp\left( -(t-t_{\rm MS})/\tau_{\rm ML} \right)$, with the mass-loss timescale $\tau_{\rm ML} = (t_{\rm WD} - t_{\rm MS})/\ln (m_0/m_{\rm WD})$. In order to conserve the angular momentum, the orbits expand as
\begin{align}
a_1(t) = & a_{1,\rm in} \frac{m_{1,\rm in} + m_{2, \rm in}} {m_1(t) + m_2(t)} \\
a_2(t) = &a_{2,\rm in} \frac{m_{1,\rm in} + m_{2, \rm in} + m_{3,\rm in}} {m_1(t) + m_2(t) + m_3(t)} \label{eq:expansions}
\end{align}
where it is assumed that at $t>t_{\rm WD}$ the mass of each evolved star is its $m_{\rm WD}$. The MS timescale $t_{\rm MS}$ is given in Eq. (\ref{t_ms}). For the WD formation time $t_{\rm WD}$ we use the rule of thumb that the RG phase lasts $\sim 10-15\%$ of the MS timescale, so we set $t_{\rm WD}=1.15t_{\rm MS}$. On the RG branch, we conservatively keep the radius at $10$ times the initial value. The tidal parameters are described in the initial conditions section. In principle, such mass-loss evolution could lead to instability of the triple system (triple evolution instability; \citealp{pk12,ham+21,too+21}) in a small fraction of the cases, and can affect the type of secular and quasi-secular evolution \citep[e.g.][]{pk12,sha+13,michaely2014, toonen2018}.
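A compact Python sketch of this simplified prescription (illustrative only):
\begin{verbatim}
def m_wd(m):
    """Initial-final mass relation (masses in M_sun)."""
    return 0.096*m + 0.429 if m < 2.7 else 0.127*m + 0.31

def t_ms(m):
    """Main-sequence lifetime in Gyr."""
    return 10.0 * m**-2.5

def mass(m0, t):
    """Mass at time t (Gyr), with t_WD = 1.15 t_MS."""
    tms = t_ms(m0)
    twd = 1.15 * tms
    if t < tms:
        return m0
    if t > twd:
        return m_wd(m0)
    return m0 * (m0 / m_wd(m0))**(-(t - tms)/(twd - tms))

def expand(a_init, msum_init, msum_now):
    """Adiabatic orbital expansion at fixed angular momentum."""
    return a_init * msum_init / msum_now
\end{verbatim}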
\subsubsection{Combined effects of stellar evolution and galactic tides}
\begin{figure*}
\centering
\includegraphics[width=8.1cm] {fig10a.png}
\includegraphics[width=8.0cm]{fig10b.png}
\caption{Left: same as Fig. \ref{fig:R-q}, but for the RG run and only for the pericentre $q_1$ (not the parameter $\mathcal{R}_0$). Right: the histogram of the final pericentre of the stopped orbits. Note that this is a histogram, not a density plot, and that $q_1$ is normalised to Solar radii, not au. Each cluster is labelled with the potential progenitors of the close encounter. }
\label{fig:10}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=9.5cm] {fig11a.png}\includegraphics[width=7.8cm]{fig11b.png}
\caption{Typical timescales for the merger in terms of the stars' MS lifetimes. The X and Y axes are the stopping time in units of $t_{\rm MS,1}$ and $t_{\rm MS,2}$, respectively. Since we set $m_1>m_2$, $t_{\rm MS,1}$ is always shorter than $t_{\rm MS,2}$. The grey areas indicate the times where $m_1$ is a RG (vertical area) and where $m_2$ is a RG (horizontal area). The bottom left square corresponds to the bulk of MS-MS collisions, and the bottom right area corresponds to MS-WD collisions. The top right corner corresponds to WD-WD collisions. The colour code is the pericentre at the stopping time. The blue dots terminate with low pericentre, indicative of WD collisions. Stars indicate systems that stopped with low pericentre (disruptions or mergers, low $q_1$), while triangles indicate migrating systems (low $a_1$). The dark red dots indicate large pericentres, indicative of RG encounters. The right panel shows a zoomed-in region near the RG evolution time. }
\label{fig:11}
\end{figure*}
In order to explore the role of stellar evolution and the contribution of red giants to close encounters, we ran 2000 additional simulations where the two inner stars are drawn from the same mass function, $dN/dm \propto m^{-2.3}$, but in the range $1 \le m/M_\odot \le 8$. We also assign the more massive star to be $m_1$, such that it evolves faster, i.e. $t_{\rm MS}(m_1) < t_{\rm MS}(m_2)$. Out of the 2000 simulations, around $41.5\%$ (831) became dynamically unstable. Another $3.6\%$ (72) ended up with a low pericentre and $4\%$ (80) ended up with a low separation.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& MS & RG & WD \\ [0.5ex]
\hline\hline
MS & 71 (47\%) & 29 (19\%) & 27 (18\%) \\
\hline
RG & & 2 (1.3\%) & 19 (12.5\%) \\
\hline
WD & & & 4 (2.7\%) \\ [1ex]
\hline
\end{tabular}
\end{center}
\caption{Total number of close encounters for different stellar types. Percentages are relative to the total number of encounters. }\label{tab:1}
\end{table}
Figure \ref{fig:9} shows the distributions of the separations and the inner eccentricity. We note that the inner separation (left panel) undergoes both expansion due to mass loss in the RG phase and contraction due to close encounters and tidal evolution. The outer separation (middle panel) undergoes expansion. For the inner eccentricity (right panel), the distribution is similar to the lower-mass case, but now there are two more clusters in the final terminated orbits (purple line): eccentricities between $0.1 \gtrsim 1-e\gtrsim 0.01$ for RG close encounters, and extreme eccentricities below $1-e=10^{-5}$ for direct WD-WD collisions.
Figure \ref{fig:10} shows the distribution of the pericentre. The left panel shows that the final pericentre $q_1$ of the stopped systems has a trimodal distribution, which is evident in the histogram in the right panel. Note that $q_1$ is normalised to units of the Solar radius. The bulk of the systems merge before stellar evolution kicks in, or later when star 1 is a WD and star 2 is still on the MS (98 systems), which gives a pericentre of a few Solar radii. Around a third of the stopped orbits (50) stop when one of the stars is a RG, leading to separations of several dozen Solar radii. A handful of systems (4) were able to collide as WD binaries, when the pericentre is below a tenth of a Solar radius.
Table \ref{tab:1} further lists the number of terminated systems (and percentages) for all of our stellar types. We see that MS-MS encounters are the most common, with nearly half the systems ending in this state. RGs most often encounter MS stars, in about $1/5$ of the systems, and WDs in about $1/8$ of the systems; only a handful are RG-RG encounters. This is mainly due to the nearly equal progenitor masses required for both stars to be on the RG branch at the time of the close encounter. WD-WD collisions are relatively rare compared with WD-RG or WD-MS close encounters.
In order to explore the effects of stellar evolution, we plot in Figure \ref{fig:11} the stopping times normalized to the MS timescales of stars 1 and 2, respectively. Mass $m_1$ is always larger than $m_2$, thus $t_{\rm MS,1}$ is smaller than $t_{\rm MS,2}$ and the upper left triangle is forbidden. We see that in the bottom left corner the MS-MS encounters occur when the pericentre is a few Solar radii, around $10$ or slightly less. Once the first star evolves to become a RG, there is a pile-up of systems that merge in the RG phase within the vertical grey area (best visualized in the zoomed-in right panel). The other star may also evolve into a RG, which would lead to RG-RG encounters (small grey box in the zoomed-in plot), but this is less likely and only two systems experience it. In most cases, the first star evolves into a WD and then encounters the MS companion, which results in a pericentre around a factor of $\sim 2$ lower than for MS-MS collisions, or around $\sim 5$ Solar radii. Some systems have an encounter when the second star evolves into a RG, which leads to RG-WD encounters (horizontal grey area). These could then lead to CEE. Finally, a small fraction of systems stop when both stars are WDs and have very small pericentres, which will most likely lead to direct WD collisions and possibly Type Ia SNe.
The systems that end with low $q_1$ (disruption or collision) are most likely to be MS-MS or WD-WD encounters. This is also evident from sec. \ref{sec:4.1}, where the vast majority of the terminated systems ended with low $q_1$. This occurs since for wide binaries the role of tidal dissipation in MS and WD stars is relatively weak. For the RG close encounters, the bulk of the terminated systems had efficiently migrated and terminated with low $a_1$. This is most likely due to the stronger tidal dissipation owing to the large radii and convective envelopes of the RGs.
{\bf To summarize}, around $3.6\%$ of the systems end up with a low pericentre and another $4\%$ efficiently migrate inwards. The efficient migration occurs due to the large radii of the convective RGs, while MS and WD encounters generally conclude as tidal disruptions or collisions. Around two thirds of the encounters (or $4.9\%$ of the total systems) are MS-MS or MS-WD collisions. Another third (or $2.5\%$ of the total systems) are RG collisions with any stellar type, and a handful (or $0.2\%$) of the systems are WD-WD collisions.
\section{Discussion} \label{sec:5}
\subsection{Rates of close encounters} \label{5.1}
Here we provide an approximate estimate (given the simplifications made) of the rate of strong interactions between the inner binary components catalyzed by the CATGATE process.
Assume the number of stars in the Galaxy is $N_\star$, and that a fraction $f_b$ of them are in binaries with log-uniform separations spanning five decades between $0.5\ \rm au$ and $50,000\ \rm au$. For a wide triple, the inner binary should be within the $(50,500)\ \rm au$ decade, and the outer within the $(5,50)\times 10^3\ \rm au$ decade. The hierarchical structure and the ultra-wide outer companion are unlikely to affect the binarity of the inner orbit. If the distributions are uncorrelated, we have $N_3 = 2 (1/5)^2 f_b^2 N_\star$ triples. A fraction $f_a$ ($f_q$) of the triples ends up with a low separation (pericentre), so the total number of strongly interacting systems is $N_x = f_x N_3$ for $x\in\{a,q\}$.
For a binary fraction $f_b=0.1$ for low-mass stars, and with the fractions $f_a=0.013$ and $f_q=0.072$ found above, we conclude that one in $\sim 1750$ ($\sim 9600$) stars experiences a collision (efficient migration) during the course of $10$ Gyr. In the Solar neighbourhood, the thickness of the disc is $d\sim 0.4\ \rm kpc$. Assuming that stars at galactocentric distances between $6.5-9.5$ kpc are affected by the Galactic tide, and a stellar density of $0.14\ \rm pc^{-3}$ ($1.4\times10^{8}\ \rm kpc^{-3}$), we end up with $2\times0.4\times 3 \times 3 \times 1.4\times10^{8} \approx 10^9$ stars.
Similar estimates can be made for more massive stars. If the number of massive stars is $f_{\rm massive} N_\star$, with $f_{\rm massive}$ a normalization parameter that depends on the mass function, $f_{\rm in}$ is the fraction of massive binaries ($M>M_\odot$) and $f_{\rm out}$ is the fraction of wider low-mass companions to massive binaries, then the total number of eccentric binary interactions is $N_e = 2(f_a + f_q) (f_{\rm in}f_{\rm out}/25) f_{\rm massive} N_\star$. Here, the massive binary fraction is larger, $f_{\rm in} \approx 0.3$, while for the wider binary we conservatively assume $f_{\rm out} \sim 0.1$. From our RG simulations we have in this case $f_a+f_q=0.076$. The fraction of massive stars from a Kroupa IMF is $f_{\rm massive}\approx 0.1$ \citep{kroupa2001, mic_sha21}, although their binary fraction is larger. Finally, for the same $N_\star\approx 10^9$ we have $N_e \approx 1.8\cdot10^4$, or one in every $\sim 55000$ stars.
We can further deduce the branching ratios of the individual encounter types from Table \ref{tab:1}. Due to the uncertainty in the mass of the Galaxy and in the mass function, we normalize the branching ratios to the occurrence rate per $10^6$ stars, which is then easy to convert, depending on the modelling of individual galaxies.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
& MS & RG & WD \\ [0.5ex]
\hline\hline
MS & 8.57 & 3.46 & 3.3 \\
\hline
RG & & 0.24 & 2.28 \\
\hline
WD & & & 0.48 \\ [1ex]
\hline
\end{tabular}
\end{center}
\caption{Occurrence rates of the various stellar-type encounters per $10^6$ stars. }\label{tab:2}
\end{table}
\cite{kaib2014} estimated the collision rate of wide binaries in the field in the Solar neighbourhood to be $\sim 1/500$, or a collision per $10^4$ stars if a $5\%$ wide-binary fraction is assumed. The collisions of wide binaries are dominated by flybys, while in our case the Galactic tide should make the dominant contribution \citep[see e.g. flux estimates for both in][]{ht86}. Our value is a factor of six larger. If we rescale our wide-binary fraction from $0.1 \times 0.2 = 2\%$ to $5\%$, and increase the Solar-neighbourhood density by a factor of $2$, we will have at least an order of magnitude more collisions due to wide triples. The combined effects of flybys and the Galactic tide can further increase the encounter rates.
The latter numbers are derived under the assumption that the Galactic potential is roughly constant. In the inner parts of the Galaxy the density is larger, the Galactic potential is stronger and the Galactic timescale is shorter, so binaries on less wide orbits, perhaps $\sim 10^3 \rm \ au$, are significantly perturbed. Finally, the non-spherical nature of nuclear star clusters could also be relevant for closer triples \citep{petrovich17, hamers_vrr, bub2020}.
The stellar field is also collisional at smaller separations, and stochastic changes in the orbital elements may lead to more close encounters \citep{kaib2014, michaely2016, michaely2019}. On the other hand, binaries at $\sim 10^4\ \rm au$ could be unstable due to the strong tidal force, and non-vertical tides could also be important. \cite{kaib2014} estimated the rates of wide-binary collisions for a wide range of galactocentric distances and found differences of roughly a factor of a few. Although it is tempting to extrapolate the results of \cite{kaib2014} to our rate estimate, we cannot assess the change in our rates since triple evolution is more complex; a detailed study of these issues is beyond the scope of the current paper.
In the next subsection we briefly discuss the astrophysical implications of each type of CATGATE-catalyzed close encounter we identify.
\subsection{Astrophysical implications} \label{implications}
{\bf MS-MS collisions and blue stragglers:} Collisions or mass transfer in close MS-MS binaries may lead to the formation of blue-straggler stars (BSS). BSS are found in all stellar populations, predominantly in open \citep{rain2021} and globular clusters \citep{bssgc}, but also among field stars with a frequency similar to that of blue horizontal branch (BHB) stars \citep{santucci2015}, and in the Galactic halo with a volumetric density of $\sim 3.4\pm0.7 \times 10^{-5} M_\odot \ \rm{pc^{-3}}$ \citep{casagrande2020}.
It was already suggested that secular triple interactions can lead to BSS formation \citep{per_fab}, and collisional dynamics in the field affecting wide binaries was suggested to give rise to MS-MS mergers \citep{kaib2014}.
Here we estimate the importance of the CATGATE channel. In Table~\ref{tab:2} the number of MS-MS mergers is $\approx 8.6$ per $10^6$ stars. For low-mass stars it is roughly two orders of magnitude larger, or $\sim 500$ per $10^6$ stars. In order to compare to the observed frequencies, we take the BSS density of the stellar halo and divide by the local stellar density of $0.1 \ \rm pc^{-3}$, which leads to $3.4 \cdot 10^{-4}$, comparable to our low-mass channel estimate.
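As a rough cross-check of this comparison, using only the numbers quoted above:
\begin{verbatim}
# BSS halo mass density divided by the local stellar number density,
# compared with the CATGATE low-mass rate from Table 2.
rho_bss = 3.4e-5              # Msun per pc^3 (casagrande2020)
n_star = 0.1                  # stars per pc^3
obs_per_star = rho_bss / n_star       # ~3.4e-4
catgate_low_mass = 500 / 1e6          # ~5e-4 per star
print(obs_per_star, catgate_low_mass) # same order of magnitude
\end{verbatim}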
If we assume that every star above $3 M_\odot$ spends $10\%$ of its MS lifetime as a BHB star, and that $\sim 1\%$ of stars are above $3M_\odot$, then $\sim 6 \times 10^{-5}$ of stars per $3M_\odot$ are in the BHB stage. The observed rate of BSS exceeds that of BHB stars by a factor of a few, depending on the galactocentric distance \citep{santucci2015}. This leads to a BSS rate of $\sim 5 \times 10^{-5}$ per $M_\odot$, which could be comparable to our rate in the intermediate-mass channel.
We conclude that the CATGATE channel could explain most of the BSS in the field under favourable conditions.
{\bf MS-MS collisions: optical transients and red novae:} The collision or inspiral of two stars may produce a transient event such as a luminous red nova (LRN) or an intermediate-luminosity optical transient \citep{soker2006, ilot1}. Our channel may produce a few transients per $10^5\ \rm yr$ in the Solar neighbourhood alone, and probably a larger rate closer to the Galactic centre.
{\bf MS-WD encounters, cataclysmic variables and SN Iax progenitors:} The rate of MS-WD collisions from wide binaries in the field for a Milky-Way Galaxy model was estimated in \cite{mic_sha21} to be around $10^{-4}\ \rm{yr}^{-1}$, or around $10^6$ mergers per Galaxy of $\sim 10^{11}$ stars, which places the merger rate at $10^{-5}$ per $M_\odot$. Our rate is roughly comparable. Most of the WD-MS encounters are inspirals, which could later lead to type Ia supernovae \citep{michaely2021}. Collisions could also be possible, especially in the non-secular regime (see Sec.~\ref{sec. 5.4}), where the MS star will essentially be sheared apart, and an energy of $\sim 10^{48}\ \rm erg$ will be released over the course of weeks to a year, with a typical peak luminosity around $L=10^{7} - 10^{9}\ \rm L_\odot$ \citep{shara78}. Finally, low-mass stars might be tidally disrupted, leading to transient events \citep{per+16}.
If tidal circularization is efficient, MS-WD encounters could be sources of cataclysmic-variable (CV) stars and soft X-ray sources. The initial volume-density estimate of $\sim 10^{-4}\ \rm pc^{-3}$ \citep{pretorius2007} was recently pinned down to $4.8_{-0.8}^{+0.6} \times 10^{-6} \ \rm pc^{-3}$ owing to GAIA data \citep{gaia_cv}. For a stellar number density of $0.1\ \rm pc^{-3}$, the rate of CVs is $4.8\times 10^{-5}$ per $M_\odot$. Our estimated rate of MS-WD encounters falls short by only a factor of $\sim 5$, thus the CATGATE channel could account for a significant fraction of CVs. CVs with a low-mass WD could merge faster, produce a transient of $\sim 10^5L_\odot$, reignite the hydrogen shell and form a giant \citep{met2021}.
{\bf RGs, sdB stars and common envelopes:} Mergers of a WD with an RG may lead to sdB stars. The inspiral of a WD and an RG (or an MS star and an RG) may lead to common-envelope evolution (CEE).
In particular, the CATGATE channel may provide the initial conditions for the recently studied eccentric CE channel \citep{glanz2021}.
{\bf WD-WD collisions and type Ia supernovae:} A direct WD-WD collision is thought to be a promising channel for type Ia supernovae \citep{katz2011, kushnir2013, michaely2021}. In terms of stellar mass, the observed rate is around $10^{-3}$ per $M_\odot$ \citep{maoz2014, maoz17}. \cite{toonen2018} found that the rate of direct collisions in triples is a few times $10^{-7}$ per $M_{\odot}$, which accounts for at most $0.1\%$ of the observed rate. In our case, if we assume a typical mass of $1\ M_\odot$, the rate per $M_\odot$ is
$\mathfrak{R}_{\rm Ia} \approx 4.8\cdot 10^{-7}$,
which is comparable to \cite{toonen2018}, and equally accounts for $\lesssim 0.1\%$ of the observed rate. As we show in Sec.~\ref{sec. 5.4}, non-secular fluctuations may lead to direct collisions if $1-e \lesssim 10^{-4}$, which occurs in around $2\%$ of the simulated systems. Even assuming they all occur in the double-WD regime, this leads to an increase of at most one order of magnitude, so our rate could be $\lesssim 1\%$ of the observed rate.
The production of interacting, mass-transferring WD binaries may also contribute to the production of type Ia supernovae through the single-degenerate channel \citep{whe+73} or through other type Ia SN channels \citep{jor+12}, but binary evolution channels would be far more efficient.
{\bf Stability of planetary systems:} \cite{kaib2013} pointed out that a significant fraction of planetary systems in wide binaries may become unstable or gain significant eccentricity over Gyr timescales. Here, even binaries with separations of $\sim 100\ \rm au$ may become eccentric and render some of their planetary systems unstable. The stability of planetary systems in the wide binary/triple context deserves more focused study.
{\bf Massive stars and compact objects:} We analysed only intermediate-mass stars that form WDs. More massive stars form neutron stars (NS) and black holes (BH). The formation of an NS is generally accompanied by a large natal kick, which will most likely destabilise the wide triple system, unless electron-capture SNe produce low-velocity kicks \citep{willcox2021}. A BH may form through direct collapse, or with significant fallback, so kicks can be essentially quenched. In both cases the triple could still become unbound due to significant mass loss, or form dynamically later on after the rapid stellar evolution. WD-NS/BH mergers may produce distinctive transients \citep{zenati2020, bobrick2021}, while BH-BH mergers and gravitational-wave progenitors have been recognised as an important channel in the collisional field environment \citep{tbh}. CATGATE may produce compact-object mergers, but the initial conditions of the progenitors are complex and deserve a more detailed population-synthesis modelling, with much more detailed stellar evolution of binary \citep{compas} and triple \citep{toonen2018, hamers21} massive stars and consideration of the natal kicks.
{\bf Wide triple properties:}
Besides the unique transient features and final merger/inspiral products that may arise from each encounter discussed above, several corollaries on the wide binary/triple population may be drawn:
{\bf Anisotropy in the GAIA sample:} Wide binaries with a large inclination angle with respect to the Galactic plane become eccentric and could be unstable. Thus, constraining the orbits of wide binaries, if possible, and observing a dearth of polar orbits may pave the way to understanding the dynamical evolution of such binaries. Encounters with passing stars may randomize the inclination angles again, thus a comparison between the replenishment time and the loss time is required. The inclination distribution may be separation-dependent.
{\bf Wide triples:} Current detections of wide binaries search for stars in roughly the same portion of the sky with close proper motions (the orbital velocity is negligible) and then run a data-analysis pipeline that estimates the probability of a true wide binary \citep{hartman2020}. If one of the stars is part of a tighter binary of $\sim 100\ \rm au$, the relative velocity significantly contaminates the proper motions and the wider binary is not detected. A possible solution could be calculating the centre-of-mass velocity of these binaries as if they were single stars and looking for wider counterparts in the data.
{\bf Super-thermal eccentricity distribution:} It has recently been deduced that the eccentricity distribution becomes super-thermal for ultra-wide binaries \citep{hwa+21}\footnote{While this paper was under review, \cite{hamilton22} showed that an initially thermal eccentricity distribution of wide binaries cannot evolve into a super-thermal one.}. If the zero-age eccentricity distribution is thermal, then dynamical evolution of isolated triples via CATGATE may shift the eccentricity distribution to super-thermal. Wide binaries perturbed by Galactic tides alone will keep a thermal distribution \citep{hamilton22}. Perhaps feedback from the inner binary, or combinations with flybys, could alter the eccentricity distribution. If uncorrelated flybys were the only cause, the eccentricity distribution would have remained thermal.
\subsection{Caveats} \label{sec. 5.4}
{\bf Neglected effects and tidal dissipation:} We assumed that the widest binary is in the disc midplane and that only the vertical tide is relevant. Recent Gaia data allow the reconstruction of the Galactic potential \citep[and of the potential dark-matter density,][]{buschmann2021} in much more detail. Different galactocentric distances might also change the overall dynamics and encounter rate \citep[see the comparison in][]{kaib2014}. The combined effects of realistic Galactic potentials, different spatial locations and the cumulative effects of flybys could change the results, but are beyond the scope of this paper. Similar effects could dominate the collisional evolution of triples in cluster potentials \citep{hamilton1, hamilton2, hamilton3}.
The exact details of tidal dissipation are also uncertain, especially in the dynamical-tide regime. Nevertheless, tidal interactions are expected to be important only during a close approach. The primary focus of this paper was to understand which fraction of wide triples experience close approaches to begin with. The strength of tidal dissipation will change the branching ratio of merging/inspiraling orbits, but not the overall number. Indeed, \cite{kaib2014} demonstrated that for wide binaries the details of tidal dissipation do not alter the overall rates too much.
{\bf Breakdown of double averaging:} We use a secular code with corrected double averaging \citep{luo16, gpf18, mangipudi2021}. This means that the secular evolution is effectively similar to an N-body integration as long as the total angular momentum does not vary too much. Qualitatively, (corrected) secular evolution is preserved as long as \citep[e.g.][]{ll18}
\begin{equation}
\sqrt{1-e^2} \gtrsim \epsilon_{\rm SA}^2,
\end{equation}
where $\epsilon_{\rm SA}$ is given in Eq. \ref{eq:eps_sa}. \cite{ant2014} gives a similar criterion
\begin{equation}
\sqrt{1-e} \gtrsim 8\cdot 10^{-3} \frac{2m_2}{m_{\rm in}} \left(\frac{10 a_1}{a_2 (1-e_2)} \right)^3. \label{eq:n-body-crit}
\end{equation}
For the extreme case of $a_2 (1-e_2) = 10 a_1$ and equal masses we have $1-e\gtrsim 6.4\cdot10^{-5}$, or $\log_{10}(1-e) \gtrsim -4.2$. This means that most of the systems are well described by secular evolution, while the small tail of highly eccentric ones could be non-secular. This is potentially promising for WD-WD mergers that require extreme eccentricities, so their rates could be enhanced by non-secular evolution.
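The quoted limit follows directly from Eq.~\ref{eq:n-body-crit}; a minimal numerical sketch, using only the extreme-case inputs stated above:
\begin{verbatim}
# Non-secular criterion of Eq. (n-body-crit) for the extreme case
# a2*(1 - e2) = 10*a1 and equal masses (so 2*m2/m_in = 1).
import math
mass_term = 1.0     # 2*m2/m_in for equal masses
sep_term = 1.0      # (10*a1/(a2*(1 - e2)))**3 at the boundary
one_minus_e = (8e-3 * mass_term * sep_term)**2
print(one_minus_e, math.log10(one_minus_e))  # 6.4e-05, ~ -4.2
\end{verbatim}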
Finally, as pointed out by \cite{gpf18}, most systems may become dynamically unstable before achieving the condition in Eq.~\ref{eq:n-body-crit}.
\section{Summary} \label{sec:6}
We study the influence of the Galactic tide on triple stellar systems in which the outer binary is ultra-wide, $\gtrsim 10^4\ \rm au$. When the secular Lidov-Kozai and Galactic tidal timescales are comparable, chaotic evolution drives the inner binary to extremely large eccentricities. Contrary to the standard ZLK case (where the triple is isolated and an almost polar misalignment is required), the chaotic evolution mechanism is robust for a wide range of initial conditions, provided that the secular timescales overlap. It therefore provides a robust novel channel for close interactions of stars in ultra-wide triple systems.
We simulated low-mass ($0.5-1M_\odot$) and high-mass ($1-8M_\odot$) triple systems using the secular code \href{https://github.com/eugeneg88/SecuLab}{\texttt{SecuLab}} with averaging correction, GR, equilibrium and dynamical tides, and simplified stellar evolution. Our ChAotic-Triple-GAlactic-Tide-Evolution (CATGATE) mechanism is the dominant one for close stellar encounters in the field (with typical Solar-neighbourhood conditions), overtaking collisional flybys by a factor of a few. It therefore provides a novel mechanism for producing closely interacting binaries, cataclysmic variables, various stellar merger and collision products, and potential transient phenomena. It would also affect the evolution of planetary systems in such wide triples.
The final outcomes and occurrence rates of close encounters depend on the stellar types involved and on the assumptions made about the triple population in the Galaxy. We discuss the transients and the final objects formed in these encounters, as well as potential observational signatures, in Sec.~\ref{implications}. Future studies may refine the uncertainties in our assumptions, namely detailed stellar evolution, exact N-body integration and tidal models, as well as a more realistic Galactic potential. Future observations of irregular bursts, mass-gyrochronology mismatches (such as for blue stragglers) and enhanced stellar oscillations from asteroseismology could provide evidence of eccentric encounters and/or mergers.
Extending the CATGATE mechanism to higher-mass systems producing neutron stars and black holes may also give rise to the production of GW sources. The potential efficiency of such a GW-source channel, however, would be highly dependent on the natal kicks given to compact objects and on the survival of wide triples over secular timescales. These issues will be explored elsewhere.
\section*{Data availability statement}
The data that support the findings of this study are available from the corresponding author upon reasonable request.
\section*{Acknowledgements}
We thank Hila Glanz, Ilya Mandel and Mor Rozner for useful discussions. We thank Cristobal Petrovich for discussions and comments on the manuscript. EG and HBP acknowledge support for this project from the European Union's Horizon 2020 research and innovation program under grant agreement No 865932-ERC-SNeX.
The Event Horizon Telescope (EHT) \cite{eht} has established, by viewing Sgr A* in our galaxy and M87* in a neighboring galaxy, that for a black hole of mass $M$, the minimal stable orbit for photons, called the ``light sphere'', lies at the radius $3M$. So the compact objects viewed by the EHT have the expected exterior geometry of a mathematical black hole. But because of the Birkhoff uniqueness theorem \cite{birkhoff} for spherically symmetric solutions of the Einstein equations, the object interior to the light sphere, whether a true black hole or an exotic compact object (ECO) \cite{cardoso} mimicking a black hole, will have to high accuracy the same exterior geometry. Thus a key experimental question remains: what type of object lies inside the observed light sphere?
Two approaches have been suggested to attempt to resolve this question. The first, reviewed in \cite{cardoso}, uses as a diagnostic the ringdown gravitational waves emitted in mergers of two holes. The emitted waves will have a different structure if the holes are true mathematical black holes with an event horizon, as opposed to the case in which the holes are ECO's. A second approach \cite{adlerleaky} notes that if ``black''\footnote{Since leaky holes will not be exactly black, in this context we put black in quotes.} holes have no event horizon or apparent horizon, then they will be ``leaky'', and interior particles will be able to exit. This can influence astrophysical processes, such as young star formation near the object Sgr A* at the center of our galaxy \cite{young}, and galaxy formation itself \cite{adler1}, as discussed now in more detail.
\section{Leaky ``black'' holes as catalysts for galaxy formation}
In a recent Gravitation Essay \cite{adler1}, we have proposed that horizonless ``black'' holes can act as catalysts for galaxy formation. This suggestion was originally motivated by a novel model for dark energy (reviewed in \cite{adlerreview}) that suppresses formation of black hole event horizons \cite{ramaz} and apparent horizons \cite{adlerleaky}, suggesting that black holes may be leaky \cite{adlerleaky}. More recently, we have studied \cite{adler2} a modification of
Mazur-Mottola gravastars \cite{mazur} (which are an exemplar of ECO's \cite{cardoso}). Our modified version is based on the Tolman-Oppenheimer-Volkoff equation \cite{tov}, in which the pressure is continuous, and the jump to a Gliner \cite{gliner} ground state, with pressure plus density summing to near zero, occurs through a jump in the energy density. We found that even in the absence of a cosmological constant, this model gives a gravastar ``black'' hole with a metric that joins smoothly onto an exterior Schwarzschild solution. The metric component $g_{00}$ in the gravastar is always positive, and takes small to very small values in the interior. In such a gravastar, there is a very small black hole leakage or ``wind'' driven by the cosmological constant, but a much larger black hole wind can arise from accreting particles which exit the hole, on a time scale that depends on their impact parameter relative to the center of the hole. This gives a concrete model in which the black hole catalysis mechanism of \cite{adler1} can be realized, while simultaneously allowing hole growth along with growth of the galaxy.
A peculiar feature of galaxies is the existence of a diffuse structure that unfolds over many decades of length scales. The model of \cite{adler1} ties this to the recent observation that nearly all galaxies have massive black holes at their center, by suggesting that ``black'' holes seed the formation of stars that constitute the galaxy, through collisions of outstreaming with infalling particles. A simple calculation of radial geodesic motion shows that at any distance from the central hole, the outstreaming and infalling particle velocities are identical, and so the center of mass of the collision has no radial motion, permitting nucleation of star formation over a wide range of length scales from the central hole. If $\ell$ is the collision length of the outgoing particles (taken as neutral hydrogen), then the number of outstreaming particles at any radius from the central hole scales as $\exp(-r/\ell)$, corresponding to an exponential scale length structure as observed for nearly all disk galaxies. A geometric estimate of the collision length is $\ell \simeq (A_H \rho_H)^{-1}$, where $A_H = \pi a_0^2$ is the cross sectional area of a hydrogen atom, giving the formula for the scale length quoted above in the Abstract.\footnote{Other mechanisms have been suggested for producing an exponential disk density profile, although without fixing the magnitude of the scale length $\ell$. A mechanism based on maximizing entropy under angular momentum mixing by radial migration has been suggested by Herpich, Tremaine, and Rix \cite{hpr}, and they give extensive references to earlier proposals. Their mechanism suggests that an initially formed exponential density profile, as in our proposal, would be stable under subsequent galactic dynamics.} Initially this mechanism will give rise to spherical galaxies, which then relax into disks through dissipation with conservation of angular momentum.
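For orientation, the geometric estimate can be evaluated numerically. In the following sketch the hydrogen density is an assumed illustrative value, expressed as a number density (the text fixes $\ell$ via the density quoted in the Abstract):
\begin{verbatim}
# Geometric collision length l ~ 1/(A_H * n_H), with A_H = pi*a0^2.
# n_H is an assumed illustrative density, not a number from the paper.
import math
a0 = 5.29e-9                   # Bohr radius in cm
A_H = math.pi * a0**2          # ~8.8e-17 cm^2
n_H = 1.0e-6                   # assumed hydrogen density in cm^-3
l_cm = 1.0 / (A_H * n_H)
print(l_cm / 3.086e21, "kpc")  # ~3.7 kpc for this assumed n_H
\end{verbatim}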
\section{Implications of this mechanism for JWST observations}
The mechanism of \cite{adler1} has a striking implication for observations of high-redshift galaxies by the James Webb Space Telescope (JWST). Since the density of hydrogen in the early universe scales with redshift $z$ as $\rho_H(z)=\rho_H(z=0)(1+z)^3$, the scale length of galaxies should scale correspondingly with $z$ \cite{adler1}. For example, at a redshift of $z=11$ the scale length will be reduced by a factor $12^3 = 1728$, so the scale length at $z=11$ will be of the order of a few parsecs rather than a few kiloparsecs as observed locally. Two galaxies reported from JWST measurements by Naidu and Oesch et al. \cite{naidu} at $z=11$ and $z=13$ are much larger than this, with scale lengths estimated as 0.7 kpc and 0.5 kpc respectively. If these are generic early universe galaxies they falsify our model, but they may also be outliers in a distribution over several decades of galaxy sizes, chosen for initial analyses precisely because they are more visible. If this is the case, one expects the presence of larger numbers of smaller galaxies. From Fig. 4 of \cite{naidu}, which shows 1 kpc on the residual pixel map, one sees that 1 kpc corresponds to roughly 7 pixels, so the pixel size at $z=11$ is roughly 0.14 kpc. Thus, a galaxy with a 2 pc scale length would fit entirely within one pixel. {\it Hence the prediction of our galaxy formation model \cite{adler1} is that the JWST sky maps should contain large numbers of single-pixel galaxies!}
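A short sketch of this scaling argument, using the numbers quoted above ($\ell_0 = 3$ kpc is our stand-in for the observed ``few kpc''):
\begin{verbatim}
# Scale length vs. redshift, l(z) = l(0)/(1+z)^3, plus the JWST pixel
# comparison; l0 = 3 kpc is an assumed stand-in for "a few kpc".
l0_kpc = 3.0
z = 11
l_z = l0_kpc / (1 + z)**3   # reduction factor 12^3 = 1728
pixel_kpc = 1.0 / 7         # 1 kpc spans ~7 pixels in Fig. 4 of naidu
print(l_z * 1e3, "pc")      # ~1.7 pc, "a few parsecs"
print(pixel_kpc, "kpc")     # ~0.14 kpc per pixel >> l(z=11)
\end{verbatim}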
\section{Possible way to estimate the diameter of sub-pixel bright galaxies}
This prediction raises the question of whether one can estimate the size of galaxies that are contained within a single pixel of an image. This may be possible by slowly shifting the camera charge-coupled device at a rate that corresponds to a motion in the observed image of $v=dL/dt$. If a single-pixel galaxy moves off the edge of the pixel, and the elapsed time from when the observed intensity starts to decrease until it reaches zero is $\Delta t$, then to within factors of order unity the actual dimension of the galaxy can be estimated as $\Delta L \sim v \Delta t$. When the galaxy appears in an adjacent pixel (after a delay if there is a dead zone between pixels), the time for the observed intensity to ramp up will give a similar estimate. We note that if galaxies in the $z=11$ universe are spaced at the current intergalactic spacing of $\sim 300,000$ pc rescaled to $z=11$, that is, spaced by roughly $30,000$ pc or 30 kpc, then a single pixel of extent 0.14 kpc is unlikely to contain more than one galaxy. So the shift mechanism that we are suggesting will not have to deal with the presence of multiple galaxies in a single pixel.
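A minimal sketch of this drift-scan estimate; the drift rate and ramp time below are hypothetical inputs, chosen only to illustrate the $\Delta L \sim v\,\Delta t$ arithmetic:
\begin{verbatim}
# Drift-scan size estimate: Delta_L ~ v * Delta_t, to within O(1)
# factors. Both inputs are hypothetical illustration values.
pixel_kpc = 0.14           # pixel extent at z = 11 (from the text)
v = pixel_kpc / 100.0      # assumed drift rate: one pixel per 100 ticks
dt = 1.5                   # assumed intensity ramp-down time, in ticks
print(v * dt * 1e3, "pc")  # ~2.1 pc for these inputs
\end{verbatim}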
\section{Summary}
We have discussed how the nature of the structure lying interior to the light sphere observed by the EHT relates to the issues of ``black'' hole leakiness, a ``black'' hole catalysis mechanism for galaxy formation, a formula for the disk-galaxy scale length, and early-universe galaxy pixel spans observed by the JWST.
After the discovery of the Higgs boson, precision measurement of the Higgs boson's properties is placed on the agenda, especially the measurement of its rare decay modes, for which the Standard Model (SM) contribution is fairly small. Observing a deviation from the SM prediction would shed light on new physics (NP) beyond the SM. Among the rare decay modes of the Higgs boson, the $\gamma\gamma$ mode is bounded much more tightly than the others; its best-fit signal strength relative to the Standard Model prediction is $1.17\pm 0.27$ as obtained by the ATLAS collaboration~\cite{Aad:2014eha} and $1.14^{+0.26}_{-0.23}$ by the CMS collaboration~\cite{Khachatryan:2014ira}. The $H\to Z\gamma$ decay, however, is loosely constrained. The ATLAS collaboration reported an upper limit of 11 times the SM expectation at the $95\%$ confidence level~\cite{ATLASHzg2014}. A similar result was achieved by the CMS Collaboration~\cite{CMSHzg2013}, which sets an upper limit of 9.5 times the SM expectation at the $95\%$ confidence level. Note that the $H\gamma\gamma$ and $HZ\gamma$ couplings are sensitive to different kinds of NP and are therefore independent in principle. Ref.~\cite{Azatov:2013ura} pointed out that the $HZ\gamma$ coupling could be sizably modified in certain composite Higgs models while keeping the $H\gamma\gamma$ coupling untouched.
On the other hand, the $HZ\gamma$ and $H\gamma\gamma$ couplings are highly correlated in the NMSSM and in MSSM-like models~\cite{Arhrib:2014pva,Belanger:2014roa}.
Thus the NP models can be tested and discriminated by their different expected corrections to the $HZ\gamma$ and $H\gamma\gamma$ couplings. In this work, we explore the potential of probing the anomalous $HZ\gamma$ and $H\gamma\gamma$ couplings through $H\gamma$ production at a future electron-positron collider.
The potential of probing the $HZ\gamma$ and $H\gamma\gamma$ couplings has been studied at $e^+e^-$ and $e^-\gamma$ colliders through the channels $e^+ e^- \to ZH$, $e^+ e^- \to e^+ e^- H$, $e^{\pm} \gamma \to H e^{\pm}$ and $e^+ e^- \to \gamma H$~\cite{Hagiwara:1993sw, Gounaris:1995mx, Hagiwara:2000tk, Cao:2006rn,Hankele:2006ma, Dutta:2008bh,Rindani:2009pb,Rindani:2010pi,Ren:2015uka}. For the process $e^{+}e^{-}\to H\gamma$, analytical expressions for the cross section have been given in Refs.~\cite{Barroso:1985et,Abbasabadi:1995rc,Djouadi:1996ws}. It has also been studied in the
Inert Higgs Doublet Model~\cite{Arhrib:2014pva} and the MSSM~\cite{Hu:2014eia}. Searching for the Higgs boson in the collider signature of $e^+e^-\gamma$ at the Large Hadron Collider (LHC) has also been studied in Refs.~\cite{Gainer:2011aa,Belanger:2014roa}.
In this work we assume the NP resonances are too heavy to be observed directly at the LHC, but they might generate sizable quantum corrections. Such effects are then described by an effective Lagrangian of the form
\begin {equation}
\mathcal{L}_{\rm eff} = \mathcal{L}_{\rm SM} + \frac{1}{\Lambda_{\rm NP}^2} \sum_i
\left(c_{i}\mathcal{O}_{i}+h.c.\right)+O\left(\frac{1}{\Lambda_{\rm NP}^{3}}\right),
\label{eq:eft}
\end {equation}
where the $c_{i}$'s are coefficients that parameterize the non-standard interactions. Note that dimension-5 operators involve fermion-number violation; they are assumed to be associated with a very high energy scale and are not relevant to the processes studied here.
The relevant CP-conserving operators $\mathcal{O}_{i}$ contributing to the anomalous $HZ\gamma$ and $H\gamma\gamma$ couplings are ~\cite{Hagiwara:1993qt}
\begin{eqnarray}
\mathcal{O}_{BW} & = & \left(\phi^{\dagger}\tau^I\phi\right) B_{\mu\nu} W^{I\mu\nu},\\
\mathcal{O}_{WW} & = & \left(\phi^{\dagger}\phi\right) W^I_{\mu\nu} W^{I\mu\nu},\\
\mathcal{O}_{BB} & = & \left(\phi^{\dagger} \phi \right) B_{\mu\nu} B^{\mu\nu},\\
\mathcal{O}_{\phi\phi} & = & \left(D_{\mu}\phi \right)^{\dagger}\phi\,\phi^{\dagger}\left(D^{\mu}\phi \right),
\end{eqnarray}
in which $\phi^T=(0,(v+H)/\sqrt{2})$ is the Higgs doublet in the unitary gauge, with $v=246~{\rm GeV}$ the vacuum expectation value,
$B_{\mu\nu}=\partial_\mu B_\nu- \partial_\nu B_\mu$
and $W^I_{\mu\nu}=\partial_\mu W_\nu^I - \partial_\nu W^I_\mu -g f_{IJK}W_\mu^JW_\nu^K$ are the field-strength tensors of the gauge fields, where the Lie commutators $[T_a, T_b]=i f_{abc}T_c$ define the structure constants $f_{abc}$.
The operators $\mathcal{O}_{\phi\phi}$ and $\mathcal{O}_{BW}$ are strongly constrained by electroweak precision measurements~\cite{Achard:2004kn,Hankele:2006ma} and are neglected in our study.
After spontaneous symmetry breaking, the operators
yield the effective Lagrangian in terms of the mass eigenstates of photon and $Z$-boson as follows:
\begin {equation}
\mathcal{L}=\frac{v}{\Lambda^{2}}\biggl(\mathcal{F}_{Z\gamma} HZ_{\mu\nu}A^{\mu\nu}+\mathcal{F}_{ZZ} H Z_{\mu\nu}Z^{\mu\nu} + \mathcal{F}_{\gamma\gamma}H A_{\mu\nu}A^{\mu\nu}\biggr)
\end {equation}
where
\begin {eqnarray}
\mathcal{F}_{\gamma\gamma} & = & c_{WW}\sin^2\theta_W+c_{BB}\cos^2\theta_W, \nn\\
\mathcal{F}_{Z\gamma} & = & \left( c_{WW}-c_{BB}\right)\sin(2\theta_W).
\end {eqnarray}
Therefore, the $\mathcal{F}_{Z\gamma}$ and $\mathcal{F}_{\gamma\gamma}$ couplings exhibit a non-trivial relation, which could be verified in future experiments.
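These relations follow from rotating $(W^3_\mu, B_\mu)$ into $(Z_\mu, A_\mu)$. A small symbolic sketch confirming them (a cross-check only; the field strengths are treated schematically as the fields $W^3$ and $B$):
\begin{verbatim}
# Verify F_gamma-gamma and F_Z-gamma by rotating W^3 and B into Z, A.
import sympy as sp

cww, cbb, sw, cw, Z, A = sp.symbols('c_WW c_BB s_W c_W Z A')
W3 = cw*Z + sw*A           # W^3 = cos(theta_W) Z + sin(theta_W) A
B = -sw*Z + cw*A           # B = -sin(theta_W) Z + cos(theta_W) A

quad = sp.expand(cww*W3**2 + cbb*B**2)  # schematic c_WW WW + c_BB BB
F_aa = quad.coeff(A, 2)                 # c_WW s_W^2 + c_BB c_W^2
F_Za = quad.coeff(A, 1).coeff(Z, 1)     # 2 s_W c_W (c_WW - c_BB)
print(sp.simplify(F_aa), sp.simplify(F_Za))
\end{verbatim}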
\section{$H\gamma$ production at $e^{-}e^{+}$ collider}
Now we are ready to calculate the $H\gamma$ production with the contributions of the $HZ\gamma$ and $H\gamma\gamma$ anomalous couplings. There is a subtlety in the calculation. The scattering process $e^+e^- \to H\gamma$ is absent at tree level in the SM when the electron mass is ignored, but it is generated through electroweak corrections at the loop level~\cite{Barroso:1985et, Abbasabadi:1995rc, Djouadi:1996ws}. The effects of the $HZ\gamma$ and $H\gamma\gamma$ anomalous couplings, though suppressed by the NP scale $\Lambda$, might be comparable to those SM loop effects. Therefore, one has to consider the SM contributions as well when discussing the NP effects in the $H\gamma$ production. The loop corrections in the SM can be categorized as follows: (1) the bubble diagrams originating from the external $\gamma$ wave-function renormalization; (2) the triangle diagrams with the $HZ\gamma$, $H\gamma\gamma$ or $Hee$ vertices; (3) the box diagrams with $e^+e^-H\gamma$ external lines. Figure~\ref{fig:feyn} displays representative Feynman diagrams, including the $HZ\gamma$ anomalous coupling.
\begin{center}
\includegraphics[scale=0.35,clip]{fig/feyn}
\figcaption{Representative Feynman diagrams of the process $e^{+}e^{-}\rightarrow H \gamma$: the SM (a-e) and the $HZ\gamma$ anomalous coupling (f).}
\label{fig:feyn}
\end{center}
Consider the case of unpolarized incoming beams and ignore the electron mass. Summing over the polarizations of the photon, the differential cross section of the scattering $e^{-}e^{+}\rightarrow H\gamma$ can be written as~\cite{Djouadi:1996ws}
\begin {eqnarray}
\label{eeha}
&& \frac{d\sigma(e^{+}e^{-}\rightarrow H\gamma)}{d\cos\theta} \nn\\
&=& \frac{s-M^{2}_{H}}{64\pi s}\Big[u^2 \left(|a_1^+|^2+|a_1^-|^2\right)+t^2\left(|a_2^+|^2+|a_2^-|^2\right)\Big],~~
\end {eqnarray}
where $\sqrt{s}$ is the center-of-mass (c.m.) energy and the Mandelstam variables are
\begin {eqnarray}
&& t=(p_{e^+}-p_\gamma)^2=-(s-M_H^2)(1-\cos\theta)/2, \nn\\
&& u=(p_{e^-}-p_\gamma)^2=-(s-M_H^2)(1+\cos\theta)/2, \nn
\end {eqnarray}
with $p_i$ the momentum of particle $i$ and $\theta$ the scattering angle of the photon.
The coefficients $a_{i}^{\pm}$, which sum the contributions from all the loop diagrams and from the anomalous $HZ\gamma$ and $H\gamma\gamma$ couplings, are
\begin {equation}
\label{coeff}
a_{i}^{\pm} = a^{\gamma\pm}_{i} +a^{Z\pm}_{i}+a^{e\pm}_{i}+a^{{\rm box}\pm}_{i},
\end {equation}
where $a_i^\gamma$ and $a_i^Z$ denote the contributions of the photon- and $Z$-pole vertex diagrams, $a_i^e$ the $t$-channel $Hee$ vertex corrections, and $a_i^{\rm box}$ the contribution of the box diagrams; see Fig.~\ref{fig:feyn}. Detailed expressions for all the coefficients in the SM can be found in Ref.~\cite{Djouadi:1996ws}. The anomalous $\mathcal{F}_{Z\gamma}$ and $\mathcal{F}_{\gamma\gamma}$ couplings contribute only to $a^{Z\pm}_{i}$ and $a^{\gamma\pm}_{i}$ as follows:
\begin {eqnarray}
&& a_1^{Z\pm} = a_2^{Z\pm} = \frac{e ~x^{\pm}}{4s_{W}c_{W}} \frac{1}{s-M_{Z}^{2}}\left(\frac{1}{16\pi^2} a^{Z\pm}_{\rm SM} + \frac{2v}{\Lambda^2} \mathcal{F}_{Z\gamma}\right), \nn\\
&& a_1^{\gamma\pm} = a_2^{\gamma\pm} = - \frac{e}{2} \frac{1}{s}\left(\frac{1}{16\pi^2} a^{\gamma\pm}_{\rm SM} + \frac{2v}{\Lambda^2} \mathcal{F}_{\gamma\gamma}\right), \nn
\end {eqnarray}
where $e$ is the electric charge, $x^{+} = -1+2s_{W}^{2}$, $x^{-} = 2s_{W}^{2}$ and
\begin {eqnarray}
&& a^{Z\pm}_{\rm SM} = \frac{e^{3}M_{W}}{c_{W}s^{2}_{W}}\left[F_{Z,W}+ \frac{m_{t}^{2}}{M_{W}^{2}}\left(\frac{1}{2}-2s_{W}^{2}\right)F_t\right],\\
&&a^{\gamma\pm}_{\rm SM} = \frac{e^{3}M_{W}}{s_{W}}\left[F_{\gamma,W} - \frac{16m_{t}^{2}}{3M_{W}^{2}}F_t\right].
\end {eqnarray}
The functions $F_{Z,W}$, $F_{\gamma,W}$ and $F_t$ arise from the gauge-boson ($W$ and $Z$) and top-quark loops, respectively. Only the top-quark loop is taken into account in this work, as the contributions from the other fermion loops are highly suppressed. Explicitly,
\begin {eqnarray}
F_{Z,W} &=& 2\left[\frac{M_{H}^{2}}{M_{W}^{2}}\left(1-2c_{W}^{2}\right) + 2\left(1-6c^{2}_{W}\right)\right] \left(C^{W}_{12}+C^{W}_{23}\right)\nn\\
&&+4\left(1-4c_{W}^{2}\right)C^{W}_{0},\nn\\
F_{\gamma,W} &=& 4\left(\frac{M_{H}^{2}}{M_{W}^{2}}+6\right)\left(C^{W}_{12}+C^{W}_{23}\right) + 16C^{W}_{0},\nn\\
F_t &=& C^{t}_{0} + 4C^{t}_{12} + 4C^{t}_{23}
\end {eqnarray}
where the three-point functions $C_{ij}^{t}$ and $C_{ij}^{W}$ are defined as
\begin {eqnarray}
&&C_{ij}^{t} = C_{ij}\left(s,0,M_{H}^{2};M_{t}^{2},M_{t}^{2},M_{t}^{2}\right), \nn\\
&&C_{ij}^{W} = C_{ij}\left(s,0,M_{H}^{2};M_{W}^{2},M_{W}^{2},M_{W}^{2}\right),
\end {eqnarray}
and $C_0$ is the Passarino-Veltman scalar function~\cite{Passarino:1978jh}.
We first calculate the SM loop corrections with FormCalc~\cite{Hahn:1998yk} and LoopTools~\cite{vanOldenborgh:1990yc}. Our analytical and numerical results are consistent with those in Ref.~\cite{Djouadi:1996ws}. We then incorporate the $HZ\gamma$ and $H\gamma\gamma$ anomalous couplings in our calculation to examine their respective impacts on the $H\gamma$ production.
\begin{figure*}
\center
\includegraphics[scale=0.25]{./fig/xsec_a}\includegraphics[scale=0.25]{./fig/xsec_b}\includegraphics[scale=0.25]{./fig/xsec_c}\\
\includegraphics[scale=0.25]{./fig/xsec_d}\includegraphics[scale=0.25]{./fig/xsec_e}\includegraphics[scale=0.25]{./fig/xsec_f}\\
\includegraphics[scale=0.25]{./fig/xsec_g}
\caption{The cross section of $e^+ e^- \to H \gamma$ as a function of $\sqrt{s}$: (a), (d) and (g) each individual contribution $\sigma_{\rm SM}$ (solid), $\sigma_{\rm IN}^{(1,2)}$ (dashed) and $\sigma_{\rm NP}^{(1,2,3)}$ (dotted); (b) and (e) the total cross section for $\Lambda={\rm 2 TeV}$ and $\mathcal{F}_{Z\gamma/\gamma\gamma}=0, \pm 1$; (c) and (f) the total cross section for $\Lambda={\rm 2 TeV}$ and $\mathcal{F}_{Z\gamma/\gamma\gamma}=0, \pm 0.1$.}
\label{fig:xsec}
\end{figure*}
In order to quantify the NP effects, we separate the total cross section of the $H \gamma$ production ($\sigma_{\rm t}$) into the following three pieces:
\begin{eqnarray}\label{smnp}
\sigma_{\rm t}&=&\sigma_{\rm SM}+\left[\sigma_{\rm IN}^{(1)}\mathcal{F}_{Z\gamma}+\sigma_{\rm IN}^{(2)}\mathcal{F}_{\gamma\gamma} \right]\left(\frac{\rm 2TeV}{\Lambda}\right)^2 \nn\\
&&+\left[\sigma_{\rm NP}^{(1)}\mathcal{F}_{Z\gamma}^2+\sigma_{\rm NP}^{(2)}\mathcal{F}_{\gamma\gamma}^2+\sigma_{\rm NP}^{(3)}\mathcal{F}_{Z\gamma}\mathcal{F}_{\gamma\gamma} \right]\left(\frac{\rm 2TeV}{\Lambda}\right)^4,\nn\\
\end{eqnarray}
where $\sigma_{\rm SM}$ is the cross section in the SM, $\sigma_{\rm IN}^{(1,2)}$ parameterizes the interference between the SM and NP contributions, and $\sigma_{\rm NP}^{(1,2,3)}$ the pure NP contribution. Figures~\ref{fig:xsec}(a), (d) and (g) show each individual contribution as a function of $\sqrt{s}$ for $m_H=125~{\rm GeV}$. The SM contribution falls with $\sqrt{s}$ and decreases rapidly around the top-quark pair threshold $\sqrt{s}\sim 350~{\rm GeV}$. The fall-off is owing to the cancellation between the $W$-boson-loop and $t$-quark-loop contributions. When $\sqrt{s}\simeq 2 m_t$, the virtual top-quark loop develops an imaginary part and thus contributes maximally. Above the top-quark pair threshold, the cross section drops smoothly with $\sqrt{s}$, as expected. The interference terms ($\sigma^{(1,2)}_{\rm IN}$) exhibit a behavior similar to the SM contribution and drop with $\sqrt{s}$. On the contrary, the NP contributions ($\sigma^{(1,2,3)}_{\rm NP}$) increase with $\sqrt{s}$, as expected for effects induced by higher-dimensional operators.
The interference effects between the SM and NP contributions depend on the sign of the effective $HZ\gamma$ and $H\gamma\gamma$ couplings. We plot in Fig.~\ref{fig:xsec}(b) the total cross section for $\mathcal{F}_{Z\gamma}=\pm 1$. For reference, $\sigma_{\rm SM}$, i.e. $\mathcal{F}_{Z\gamma}=0$, is also plotted. For a large $\mathcal{F}_{Z\gamma}$, the NP contribution dominates over the interference and SM contributions. We also plot in Fig.~\ref{fig:xsec}(c) the total cross section for $\mathcal{F}_{Z\gamma}=\pm 0.1$ to illustrate the interference effects. For a small $\mathcal{F}_{Z\gamma}$, we can ignore the quadratic NP contribution, as it is proportional to $\mathcal{F}_{Z\gamma}^2$. Therefore, the interference effects yield three similar curves. The discussion above also applies to $\mathcal{F}_{\gamma\gamma}$, displayed in Figs.~\ref{fig:xsec}(d), (e), (f).
For illustration we list the total cross section (in units of femtobarns) for four benchmark c.m. energies ($\sqrt{s}$) as follows:
\end{multicols}
\begin{eqnarray}\label{sigma}
&&250~{\rm GeV}:\sigma_{\rm t} =0.1004 +\left[0.3109\mathcal{F}_{Z\gamma}+0.3465\mathcal{F}_{\gamma\gamma}\right]\left(\frac{\rm 2 TeV}{\Lambda}\right)^2+ \left[0.3828\mathcal{F}_{Z\gamma}^2+0.7872\mathcal{F}_{\gamma\gamma}^2+0.1195\mathcal{F}_{Z\gamma}\mathcal{F}_{\gamma\gamma}\right]\left(\frac{\rm 2 TeV}{\Lambda}\right)^4; \nn\\
&&350~{\rm GeV}:\sigma_{\rm t} =0.0341 +\left[0.2524\mathcal{F}_{Z\gamma}+0.0105\mathcal{F}_{\gamma\gamma}\right]\left(\frac{\rm 2 TeV}{\Lambda}\right)^2+ \left[0.5212\mathcal{F}_{Z\gamma}^2+1.2392\mathcal{F}_{\gamma\gamma}^2+0.1750\mathcal{F}_{Z\gamma}\mathcal{F}_{\gamma\gamma}\right]\left(\frac{\rm 2 TeV}{\Lambda}\right)^4; \nn\\
&&500~{\rm GeV}:\sigma_{\rm t} =0.0524 +\left[0.2865\mathcal{F}_{Z\gamma}+0.3613\mathcal{F}_{\gamma\gamma}\right]\left(\frac{\rm 2 TeV}{\Lambda}\right)^2+ \left[0.6012\mathcal{F}_{Z\gamma}^2+1.5375\mathcal{F}_{\gamma\gamma}^2+0.2093\mathcal{F}_{Z\gamma}\mathcal{F}_{\gamma\gamma}\right]\left(\frac{\rm 2 TeV}{\Lambda}\right)^4; \nn\\
&&1000~{\rm GeV}:\sigma_{\rm t} =0.0214 +\left[0.1703\mathcal{F}_{Z\gamma}+0.2808\mathcal{F}_{\gamma\gamma}\right]\left(\frac{\rm 2 TeV}{\Lambda}\right)^2+ \left[0.6614\mathcal{F}_{Z\gamma}^2+1.7799\mathcal{F}_{\gamma\gamma}^2+0.2362\mathcal{F}_{Z\gamma}\mathcal{F}_{\gamma\gamma}\right]\left(\frac{\rm 2 TeV}{\Lambda}\right)^4;\nn \\
\end{eqnarray}
\begin{multicols}{2}
\section{Collider Simulation and Discussion}
In this section, we discuss how to detect the $HZ\gamma$ and $H\gamma\gamma$ anomalous couplings at an $e^+e^-$ collider with various c.m. energies. First we focus on the contribution of the $HZ\gamma$ coupling with the $b\bar{b}$ decay mode of the Higgs boson, taking $\mathcal{F}_{Z\gamma}=1$ and $\mathcal{F}_{\gamma\gamma}=0$. The collider signature of interest to us is one hard photon and two $b$-jets. We generate the dominant backgrounds with MadGraph~\cite{Alwall:2014hca}
\begin {equation}
e^+ + e^- \to \gamma + \gamma^* / Z^* \to \gamma + b +\bar{b}.
\end {equation}
At the analysis level, all signal and background events are required to pass the following {\it selection cuts}:
\begin{align}
&p_T^\gamma > 25~{\rm GeV}, & &p_T^b \geq 25~{\rm GeV}, & &p_T^{\bar{b}}\geq 25~{\rm GeV}, \nn\\
&|\eta^\gamma| \leq 3.5, & & |\eta^b| \leq 3.5, & &|\eta^{\bar{b}}| \leq 3.5, \nn\\
& \Delta R_{b\bar{b}} \geq 0.7, & &\Delta R_{b\gamma} \geq 0.7, & &\Delta R_{\bar{b}\gamma} \geq 0.7,
\end{align}
where $p_T^i$ and $\eta^i$ denote the transverse momentum and pseudo-rapidity of particle $i$, respectively. The separation $\Delta R$ in the azimuthal angle-pseudo-rapidity ($\phi$-$\eta$) plane between objects $k$ and $l$ is
\begin {equation}
\Delta R_{kl}\equiv \sqrt{(\eta_k-\eta_l)^2 + (\phi_k - \phi_l)^2}.
\end {equation}
For simplicity we ignore the effects due to the finite resolution of the detector and assume a perfect $b$-tagging efficiency.
\begin{center}
\includegraphics[scale=0.235]{fig/dist_a}
\includegraphics[scale=0.235]{fig/dist_b}\\
\includegraphics[scale=0.24]{fig/dist_c}
\includegraphics[scale=0.24]{fig/dist_d}
\figcaption{The normalized distributions of $p_T^\gamma$ and $p_T^b$ of the signal (red and black curves) and background (blue curve) for $\sqrt{s}=250~{\rm GeV}$ and $500~{\rm GeV}$. The black curves denote the combined contribution of the SM and the NP operator, while the red curves show the SM contribution alone. }
\label{fig:basic}
\end{center}
Figure~\ref{fig:basic} plots the $p_T$ distributions of the photon and $b$-jets for $\sqrt{s}=250~{\rm GeV}$ and $500~{\rm GeV}$. The photon in the signal events exhibits a hard transverse momentum to balance the motion of the Higgs boson. On the other hand, the photon in the SM background is mainly radiated from the initial-state electron and peaks at small $p_T$ owing to the collinear enhancement; see Figs.~\ref{fig:basic}(a) and (c). The anomalous $HZ\gamma$ coupling yields a more energetic photon in the final state, and the effect tends to be more evident with increasing $\sqrt{s}$; see Fig.~\ref{fig:basic}(c). Since the $b$-jets in the signal come from the Higgs boson decay while those in the background come mainly from a $Z$-boson decay, the signal exhibits a harder $p_T$ distribution of the $b$-jet; see Figs.~\ref{fig:basic}(b) and (d). Similar conclusions apply to other values of $\mathcal{F}_{Z\gamma/\gamma\gamma}$.
To compare the relevant background event rates ($\mathcal{B}$) with the signal event rates ($\mathcal{S}$), we assume an integrated luminosity of $1~{\rm ab}^{-1}$. The numbers of signal and background events after imposing the above selection cuts are summarized in Table~\ref{tab:cut}. We consider three kinds of signal: one induced solely by the SM loop corrections, and two generated by both the SM loop corrections and the NP effects, with $\mathcal{F}_{Z\gamma}=1,\mathcal{F}_{\gamma\gamma}=0$ for one and $\mathcal{F}_{Z\gamma}=0,\mathcal{F}_{\gamma\gamma}=1$ for the other. The former is denoted $\mathcal{S}_{\mathrm{SM}}$ and the latter $\mathcal{S}_{Z\gamma/\gamma\gamma}$ in Table~\ref{tab:cut}. Obviously, the backgrounds are larger than the signals by three or four orders of magnitude. One has to impose further cuts to extract the small signal out of the huge background.
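As a cross-check, the pre-cut $\mathcal{S}_{Z\gamma/\gamma\gamma}$ production counts in Table~\ref{tab:cut} follow directly from Eq.~(\ref{sigma}); a short consistency sketch using the fitted coefficients above:
\begin{verbatim}
# Reproduce the pre-cut signal counts of Table 2 from Eq. (sigma),
# for L = 1 ab^-1, Lambda = 2 TeV, (F_Zg, F_gg) = (1,0) or (0,1).
lumi = 1000.0  # fb^-1
coeffs = {  # sqrt(s): (sm, in_zg, in_gg, np_zg, np_gg, np_mix) in fb
    250:  (0.1004, 0.3109, 0.3465, 0.3828, 0.7872, 0.1195),
    350:  (0.0341, 0.2524, 0.0105, 0.5212, 1.2392, 0.1750),
    500:  (0.0524, 0.2865, 0.3613, 0.6012, 1.5375, 0.2093),
    1000: (0.0214, 0.1703, 0.2808, 0.6614, 1.7799, 0.2362),
}

def sigma_t(c, fzg, fgg):
    sm, i1, i2, n1, n2, n3 = c
    return sm + i1*fzg + i2*fgg + n1*fzg**2 + n2*fgg**2 + n3*fzg*fgg

for s, c in coeffs.items():
    n_zg = lumi * sigma_t(c, 1.0, 0.0)  # 794, 808, 940, 853
    n_gg = lumi * sigma_t(c, 0.0, 1.0)  # 1234, 1284, 1951, 2082
    print(s, round(n_zg), round(n_gg))
\end{verbatim}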
\begin{table*}[htbp]
\center
\begin{tabular}{c|c|c|c|c|c}
\hline
\multicolumn{2}{c|}{$\sqrt{s}$ (GeV)}& 250 &350 & 500&1000\tabularnewline
\hline
\hline
\multirow{2}{*}{$\mathcal{B}$}&{\it selection cuts} ($\times 10^{5}$) & 7.169 & 4.229 & 2.450 &0.708 \tabularnewline
\cline{2-6}
&$\Delta M$ {\it cut} & 7640 & 3993 & 2104&475\tabularnewline
\hline
\hline
\multirow{2}{*}{\tabincell{c}{${\mathcal S}_{\mathrm{SM}}$\\$ee\rightarrow H\gamma,H\rightarrow b\bar{b}$}}&{\it selection cuts}& 58 & 21 & 33&12\tabularnewline
\cline{2-6}
& $\Delta M$ {\it cut} & 58 & 21 & 33&12\tabularnewline
\hline
\multicolumn{2}{c|}{$\mathcal{S}_{\mathrm{SM}}/\sqrt{\mathcal B}$} & 0.664 & 0.33 & 0.72& 0.55\tabularnewline
\hline
\hline
\multicolumn{2}{c|}{\tabincell{c}{$\mathcal{S}_{Z\gamma}$\\ $(ee\rightarrow H\gamma)$}}& 794 & 808 & 940 & 853\tabularnewline
\hline
\multirow{2}{*}{\tabincell{c}{$\mathcal{S}_{Z\gamma}$\\$ee\rightarrow H\gamma,H\rightarrow b\bar{b}$}}&{\it selection cuts} & 451 & 482 & 569 & 341\tabularnewline
\cline{2-6}
& $\Delta M$ {\it cut} & 451 & 482 & 569 & 341\tabularnewline
\hline
\multicolumn{2}{c|}{$\mathcal{S}_{Z\gamma}/\sqrt{\mathcal{B}}$}& 5.2 & 7.6 & 12.4 & 15.6\tabularnewline
\hline
\hline
\multicolumn{2}{c|}{\tabincell{c}{$\mathcal{S}_{\gamma\gamma}$\\ $(ee\rightarrow H\gamma)$}}& 1234 & 1284 & 1951 & 2082\tabularnewline
\hline
\multirow{2}{*}{\tabincell{c}{$\mathcal{S_{\gamma\gamma}}$\\$ee\rightarrow H\gamma,H\rightarrow b\bar{b}$}}&{\it selection cuts} & 701 & 754 & 1180 & 834\tabularnewline
\cline{2-6}
& $\Delta M$ {\it cut} & 701 & 754 & 1180 & 834\tabularnewline
\hline
\multicolumn{2}{c|}{$\mathcal{S}_{\gamma\gamma}/\sqrt{\mathcal{B}}$}& 8.0 & 11.9 & 26.3 & 38.2\tabularnewline
\hline
\end{tabular}
\caption{The numbers of events of the signal ($\mathcal{S}_{\mathrm{SM}/Z\gamma/\gamma\gamma}$) and the background ($\mathcal{B}$) for various c.m. energies ($\sqrt{s}$). The signal is further divided into the SM contribution alone ($\mathcal{S}_{\mathrm{SM}}$) and the combined contribution of the SM and NP effects ($\mathcal{S}_{Z\gamma/\gamma\gamma}$). For illustration we choose $\Lambda=2~{\rm TeV}$, with $\mathcal{F}_{Z\gamma}=1,\mathcal{F}_{\gamma\gamma}=0$ for $\mathcal{S}_{Z\gamma}$ and $\mathcal{F}_{Z\gamma}=0,\mathcal{F}_{\gamma\gamma}=1$ for $\mathcal{S}_{\gamma\gamma}$. The integrated luminosity is chosen as $1~{\rm ab}^{-1}$. }\label{tab:cut}
\end{table*}
\begin{center}
\includegraphics[scale=0.35]{fig/mbb}
\figcaption{The normalized $m_{bb}$ distributions of the signal and background for $\sqrt{s}=250~{\rm GeV}$.}
\label{fig:mbb}
\end{center}
\begin{figure*}
\center
\includegraphics[scale=0.26]{fig/cmscurrent1.pdf}
\includegraphics[scale=0.26]{fig/cms3001.pdf}
\includegraphics[scale=0.26]{fig/cms30001.pdf}\\
\includegraphics[scale=0.26]{fig/cmscurrent_a1.pdf}
\includegraphics[scale=0.26]{fig/cms300_a1.pdf}
\includegraphics[scale=0.26]{fig/cms3000_a1.pdf}
\caption{Sensitivities to the $HZ\gamma/H\gamma\gamma$ anomalous couplings at the $e^+ e^-$ collider as a function of $\sqrt{s}$ for $\mathcal{L}=1000~{\rm fb}^{-1}$ and $\Lambda=2~{\rm TeV}$. The shaded regions above or below the black-dashed curves are good for discovery. The CMS exclusion limits and allowed regions obtained from the Higgs boson rare decays are also shown for comparison (see the horizontal red-dashed curves and red regions): (a) CMS exclusion limits ($\sqrt{s}=8~{\rm TeV}$ and $\mathcal{L}=19~{\rm fb}^{-1}$); (d) CMS allowed regions ($\sqrt{s}=8~{\rm TeV}$ and $\mathcal{L}=19~{\rm fb}^{-1}$); (b), (e) CMS projected allowed regions ($\sqrt{s}=14~{\rm TeV}$ and $\mathcal{L}=300~{\rm fb}^{-1}$); (c), (f) CMS projected allowed regions ($\sqrt{s}=14~{\rm TeV}$ and $\mathcal{L}=3000~{\rm fb}^{-1}$).}
\label{fig:potential}
\end{figure*}
As they come from the Higgs boson decay, the two $b$-jets in the signal events exhibit a sharp peak around $m_H$ in the distribution of their invariant mass ($m_{bb}$). Figure~\ref{fig:mbb} displays the $m_{bb}$ distribution of the signal events (red peak) and the background events (blue) for $\sqrt{s}=250~{\rm GeV}$. The two $b$-jets in the background events are mainly from an on-shell $Z$-boson, yielding a peak around $m_Z\sim 91~{\rm GeV}$. The background events also exhibit a long tail in the region $m_{bb} \sim m_H$ owing to the $Z$-boson width. The difference in the $m_{bb}$ distribution between the signal and background events persists at the other $\sqrt{s}$ values of the $e^+e^-$ collider. We impose a hard cut on $m_{bb}$ to suppress the background: after the photon and jet reconstruction, we demand that the invariant mass of the two $b$-jets lies within a mass window of 5 GeV around $m_H$, i.e.
\begin {equation}
\Delta M \equiv \left| m_{bb}-m_H \right|\leq 5~{\rm GeV}.
\end {equation}
The $\Delta M$ cut suppresses the background dramatically; for example, for almost all the c.m. energies, less than 1\% of the background survives the $\Delta M$ cut. On the other hand, most of the signal events pass the mass-window cut. Unfortunately, the SM contribution alone still cannot be observed, owing to the tiny production rate; see Table~\ref{tab:cut}. For $\mathcal{F}_{Z\gamma}=1,\mathcal{F}_{\gamma\gamma}=0$ and $\mathcal{F}_{Z\gamma}=0,\mathcal{F}_{\gamma\gamma}=1$, the anomalous $HZ\gamma$ and $H\gamma\gamma$ couplings each lead to a few hundred signal events after the mass-window cut and are thus testable experimentally. The significance ($\mathcal{S}_{Z\gamma/\gamma\gamma}/\sqrt{\mathcal{B}}$) increases with $\sqrt{s}$, owing both to the non-renormalizable nature of the higher-dimensional operators and to the decreasing SM backgrounds.
We now use the results of the last section to discuss the potential of testing the $HZ\gamma$ and $H\gamma\gamma$ couplings at the electron-positron linear collider. Most attention is paid to the scenario in which only one of the $HZ\gamma$ and $H\gamma\gamma$ anomalous couplings is nonzero. We first consider the discovery of the $HZ\gamma$ and $H\gamma\gamma$ anomalous couplings at the electron-positron linear collider.
Demanding a $5\sigma$ significance, $\mathcal{S}_{Z\gamma/\gamma\gamma}=5\sqrt{\mathcal{B}}$, yields the discovery potential of the $HZ\gamma/H\gamma\gamma$ coupling in the scattering $e^+e^- \to H\gamma$. Figures~\ref{fig:potential}(a), (d) display the $5\sigma$ significance curves (dashed lines). The shaded regions are good for the discovery of the anomalous $HZ\gamma/H\gamma\gamma$ coupling. Owing to the SM contribution and the interference effects, the discovery regions are asymmetric around $\mathcal{F}_{Z\gamma/\gamma\gamma}=0$. We also plot the CMS exclusion limits on the $HZ\gamma/H\gamma\gamma$ couplings. We note that the discovery potential of the $HZ\gamma$ coupling at an $e^-e^+$ collider with $\sqrt{s}=250~{\rm GeV}$ is marginally close to the current CMS exclusion limit. With the c.m. energy increased from 250~GeV to 1000~GeV, the $e^+e^-$ collider could cover the regions $0.50<\mathcal{F}_{Z\gamma}<1.03$ and $-2.02<\mathcal{F}_{Z\gamma}<-0.76$, which cannot be probed at the 8~TeV LHC, while the discovery potential of the $H\gamma\gamma$ coupling could cover the non-excluded red region around $\mathcal{F}_{\gamma\gamma}\sim 0.56$ at a high-energy $e^-e^+$ collider.
The CMS limits are derived from the Higgs boson decays as follows. The partial decay widths of $H\to Z\gamma$ and $H\to \gamma\gamma$ are given by
\begin{eqnarray}
\Gamma(H\rightarrow Z\gamma) &=& \frac{m_{H}^{3}}{8\pi v^{2}}\left(1-\frac{m_{Z}^{2}}{m_{H}^{2}}\right)^{3}\biggl|\mathcal{F}_{Z\gamma}^{\rm SM}+\frac{v^{2}}{\Lambda^{2}}\mathcal{F}_{Z\gamma}\biggr|^2,\\
\Gamma(H\rightarrow \gamma\gamma) &=& \frac{m_{H}^{3}}{16\pi v^{2}}\biggl|\mathcal{F}_{\gamma\gamma}^{\rm SM}+\frac{v^{2}}{\Lambda^{2}}\mathcal{F}_{\gamma\gamma}\biggr|^2,
\end{eqnarray}
where $\mathcal{F}_{Z\gamma}^{\rm SM}$, $\mathcal{F}_{\gamma\gamma}^{\rm SM}$, induced by the $W$ boson and fermion loops in the SM, are given by~\cite{Azatov:2013ura, Low:2012rj}
\begin{eqnarray}
\mathcal{F}_{Z\gamma}^{\rm SM} &=& \frac{\alpha}{4\pi s_{W}c_{W}}\biggl(3\frac{Q_{t}(2T_{3}^{t}-4Q_{t}s^{2}_{W})}{c_{W}}A^{H}_{1/2}(\tau_{t},\lambda_{t})\notag\\
&&+ c_{W}A^{H}_{1}(\tau_{W},\lambda_{W})\biggr),\\
\mathcal{F}_{\gamma\gamma}^{\rm SM} &=& \frac{\alpha}{4\pi}\biggl(3Q_{t}^{2}A^{H}_{1/2}(\tau_{t}^{-1})+A^{H}_{1}(\tau_{W}^{-1})\biggr).
\end{eqnarray}
The functions, $A^{H}_{1/2}(\tau_i,\lambda_i)$, $A^{H}_1(\tau_i,\lambda_i)$, $A^{H}_{1/2}(\tau_i)$ and $A^{H}_1(\tau_i)$, are given in Ref.~\cite{Djouadi:2005gi} where $\tau_i=4m_i^2/m_H^2$ and $\lambda_i=4m_i^2/m_Z^2$. $Q_t$ is the top-quark electric charge in units of $|e|$ and $T_3^t=1/2$.
In the SM, $\mathcal{F}_{Z\gamma}^{\mathrm{SM}} \sim 0.007$ and $\mathcal{F}_{\gamma\gamma}^{\mathrm{SM}} \sim -0.004$ for $m_H=125~{\rm GeV}$~\cite{Cao:2015fra}.
The CMS measurement requires
\begin{eqnarray}
&\dfrac{\Gamma(H\rightarrow Z\gamma)}{\Gamma_{\rm SM}(H\rightarrow Z\gamma)}&\leq 9.5~,\nn\\
0.91\leq&\dfrac{\Gamma(H\rightarrow \gamma\gamma)}{\Gamma_{\rm SM}(H\rightarrow \gamma\gamma)}&\leq 1.4~,
\end{eqnarray}
which yields the CMS exclusion bounds shown in Figs.~\ref{fig:potential}(a) and (d): one bound on $\mathcal{F}_{Z\gamma}$, $-2.02\leq \mathcal{F}_{Z\gamma}\leq 1.03$, and two bounds on $\mathcal{F}_{\gamma\gamma}$, $-0.051\leq \mathcal{F}_{\gamma\gamma}\leq 0.013$ and $0.55\leq \mathcal{F}_{\gamma\gamma}\leq 0.62$; see the horizontal red-dashed curves and red regions.
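These intervals follow from inverting the width ratios. A short sketch of the inversion (with the rounded SM values quoted above, so the endpoints are reproduced only approximately):
\begin{verbatim}
# Invert the CMS width-ratio limits into bounds on F_Zg and F_gg,
# using the rounded F_SM values; endpoints match only approximately.
import math
v, Lam = 246.0, 2000.0
r = v**2 / Lam**2               # ~0.0151
F_sm_zg, F_sm_gg = 0.007, -0.004

# R_Zg = |1 + r*F/F_sm|^2 <= 9.5 -> one interval around F = 0
k = math.sqrt(9.5)
print(sorted(((-1 - k)*F_sm_zg/r, (-1 + k)*F_sm_zg/r)))
# ~[-1.89, 0.96] vs the quoted [-2.02, 1.03]

# 0.91 <= R_gg <= 1.4 -> two intervals, since F_sm_gg < 0
for sign in (+1, -1):           # the two square-root branches
    lo = (-1 + sign*math.sqrt(0.91))*F_sm_gg/r
    hi = (-1 + sign*math.sqrt(1.4))*F_sm_gg/r
    print(sorted((lo, hi)))
# ~[-0.048, 0.012] and ~[0.52, 0.58] vs the quoted intervals
\end{verbatim}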
A recent study of the projected performance of an upgraded CMS detector at the LHC and the high-luminosity LHC (HL-LHC)~\cite{CMS:2013xfa} shows that the $H\rightarrow Z\gamma$ process is expected to be measured at the 14~TeV LHC with $\sim 60~\%$ and $\sim 20~\%$ uncertainties at the 95~\% confidence level using integrated datasets of $300~{\rm fb}^{-1}$ and $3000~{\rm fb}^{-1}$, respectively, while for the $H\rightarrow \gamma\gamma$ process the uncertainties are $\sim 6~\%$ and $\sim 4~\%$. We plot the corresponding CMS projection limits in Figs.~\ref{fig:potential}(b), (e) and Figs.~\ref{fig:potential}(c), (f). Future experiments at the LHC and HL-LHC are expected to impose tighter bounds on $\mathcal{F}_{Z\gamma/\gamma\gamma}$. For negative $\mathcal{F}_{Z\gamma}$ and for $\mathcal{F}_{\gamma\gamma}\sim 0.56$, a high-energy $e^+e^-$ collider has a better performance than the LHC and HL-LHC; see the overlap of the red and shaded regions.
When both the $HZ\gamma$ and $H\gamma\gamma$ couplings are considered, Fig.~\ref{fig:sigmat} displays the total cross section of $H\gamma$ production as a function of $\mathcal{F}_{Z\gamma}$ and $\mathcal{F}_{\gamma\gamma}$ for various energies. The discoverable regions of $\mathcal{F}_{Z\gamma}$ and $\mathcal{F}_{\gamma\gamma}$ are the red regions outside the black dashed lines. With the c.m. energy increased from $250$ GeV to $1000$ GeV, more and more of the red regions can be discovered. When $\sqrt{s}\geq 500$ GeV, the non-excluded red region around $\mathcal{F}_{\gamma\gamma}\sim 0.56$ is entirely covered. For more detail, see Eq.~(\ref{sigma}).
\begin{center}
\center
\includegraphics[scale=0.18]{fig/contribution_a_z_250}\includegraphics[scale=0.18]{fig/contribution_a_z_350}\\
\includegraphics[scale=0.18]{fig/contribution_a_z_500}\includegraphics[scale=0.18]{fig/contribution_a_z_1000}
\figcaption{The total cross section of $H\gamma$ production at the $e^{-}e^{+}$ collider as a function of $\mathcal{F}_{Z\gamma}$ and $\mathcal{F}_{\gamma\gamma}$. The red regions are not excluded by the current CMS data, and the regions outside of the black dashed lines show the discovery potential of the $e^{-}e^{+}$ collider.}
\label{fig:sigmat}
\end{center}
\section{Further Analysis}
The $HZ\gamma$ and $H\gamma\gamma$ anomalous couplings affect both the Higgs boson decays and the $H\gamma$ production, but their interference effects with the SM contributions are different for the two processes.
In order to examine the different interference effects, we define a ratio of the cross section of the $H\gamma$ production, $R_{\sigma}$, a ratio of the width of $H\to Z\gamma/\gamma\gamma$ decay, $R_{Z\gamma/\gamma\gamma}$, and the relative sign $\mu_{Z\gamma/\gamma\gamma}$, as follows:
\begin{align}
&R_{\sigma} \equiv \frac{\sigma_{\rm t}(e^+e^- \to H\gamma)}{\sigma_{\rm SM}(e^+e^- \to H\gamma)},&&\nn\\
&R_{Z\gamma} \equiv \frac{\Gamma(H\to Z\gamma)}{\Gamma_{\rm SM}(H\to Z\gamma)},
&&\mu_{Z\gamma}={\rm sign}\left(\frac{\mathcal{F}_{Z\gamma}}{\mathcal{F}^{\rm SM}_{Z\gamma}}\right),\nn\\
&R_{\gamma\gamma} \equiv \frac{\Gamma(H\to \gamma\gamma)}{\Gamma_{\rm SM}(H\to \gamma\gamma)},& &\mu_{\gamma\gamma}={\rm sign}\left(\frac{\mathcal{F}_{\gamma\gamma}}{\mathcal{F}^{\rm SM}_{\gamma\gamma}}\right).
\end{align}
Figure~\ref{fig:corr} displays the strong correlation between $R_\sigma$ and $R_{Z\gamma/\gamma\gamma}$ for several c.m. energies when one anomalous coupling is considered at a time; see the red-dashed curves. There are two values of $R_\sigma$ for each fixed $R_{Z\gamma/\gamma\gamma}$; the larger value of $R_\sigma$ corresponds to $\mu_{Z\gamma/\gamma\gamma}<0$ while the smaller value corresponds to $\mu_{Z\gamma/\gamma\gamma}>0$. The two-fold ambiguity in the $\Gamma(H\to Z\gamma/\gamma\gamma)$ measurement can be resolved by precise knowledge of $R_\sigma$ if $\mathcal{F}_{Z\gamma/\gamma\gamma}$ is large enough for the $H\gamma$ signal to be discovered at the $e^+e^-$ collider. In Fig.~\ref{fig:corr} we also plot the discovery region of $R_{Z\gamma/\gamma\gamma}$ in the scattering $e^+e^- \to H\gamma$ for various c.m. energies; see the shaded bands. One can uniquely determine both the magnitude and the sign of $\mathcal{F}_{Z\gamma/\gamma\gamma}$ in those shaded-band regions. The discrimination power of the two-fold $R_\sigma$ for a fixed $R_{Z\gamma/\gamma\gamma}$ increases dramatically with the c.m. energy of the $e^+e^-$ collider; for example, for $R_{Z\gamma}=9$, $R_\sigma$ is equal to 8 and 10 at a $\sqrt{s}=250~{\rm GeV}$ collider, while it is equal to 40 and 110 at a $\sqrt{s}=1000~{\rm GeV}$ collider. It is worth mentioning that the partial decay width of $H\to Z\gamma$ is exactly the same as the SM prediction when $v^2/\Lambda^2\,\mathcal{F}_{Z\gamma}=-2\mathcal{F}_{Z\gamma}^{\rm SM}$. In that case one can still observe the anomalous $HZ\gamma$ coupling at the $e^+e^-$ collider when $\sqrt{s}\lower.7ex\hbox{$\;\stackrel{\textstyle>}{\sim}\;$} 500~{\rm GeV}$. The ratio $R_{\gamma\gamma}$ is tightly constrained by the current LHC data and yields two solutions for $\mathcal{F}_{\gamma\gamma}$: one is $v^2/\Lambda^2\,\mathcal{F}_{\gamma\gamma}\sim-2\mathcal{F}_{\gamma\gamma}^{\rm SM}$, which could be detected in the $H\gamma$ production when $\sqrt{s}\geq 500$ GeV; the other is $\mathcal{F}_{\gamma\gamma}\sim 0$, which cannot be probed.
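The numbers quoted above can be reproduced from Eq.~(\ref{sigma}); a sketch of the two-fold solution (here $\mathcal{F}^{\rm SM}_{Z\gamma}=0.0075$ is our assumed refinement of the quoted $\sim 0.007$):
\begin{verbatim}
# Two-fold ambiguity: R_Zg = 9 gives two values of F_Zg, which
# Eq. (sigma) maps onto two distinct R_sigma values.
v, Lam, F_sm = 246.0, 2000.0, 0.0075  # F_sm is an assumed refinement
r = v**2 / Lam**2

R_Zg = 9.0
sols = [(s*R_Zg**0.5 - 1)*F_sm/r for s in (+1, -1)]  # ~0.99, ~-1.98

coeffs = {250: (0.1004, 0.3109, 0.3828), 1000: (0.0214, 0.1703, 0.6614)}
for s, (sm, lin, quad) in coeffs.items():
    for F in sols:
        print(s, round(F, 2), round((sm + lin*F + quad*F**2)/sm))
# 250 GeV: ~8 and ~10; 1000 GeV: ~39 and ~107 (cf. the quoted 40, 110)
\end{verbatim}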
\begin{center}
\includegraphics[scale=0.24]{fig/correlation_350}
\includegraphics[scale=0.24]{fig/correlation2_350}\\
\includegraphics[scale=0.24]{fig/correlation_500}
\includegraphics[scale=0.24]{fig/correlation2_500}\\
\includegraphics[scale=0.24]{fig/correlation_1000}
\includegraphics[scale=0.24]{fig/correlation2_1000}\\
\figcaption{ Correlations between $R_\sigma$ and $R_{Z\gamma/\gamma\gamma}$ (red-dashed line) and discovery region at the $e^+e^-$ colliders (bold-gray curve). The yellow shaded regions are excluded by recent CMS data.}
\label{fig:corr}
\end{center}
\begin{center}
\center
\includegraphics[scale=0.24]{fig/cmscurrent_cutz1.pdf}\includegraphics[scale=0.24]{fig/cms300_cutz1.pdf}\\
\includegraphics[scale=0.24]{fig/cms3000_cutz1.pdf}\includegraphics[scale=0.24]{fig/cmscurrent_cuta1.pdf}\\
\includegraphics[scale=0.24]{fig/cms300_cuta1.pdf}\includegraphics[scale=0.24]{fig/cms3000_cuta1.pdf}\\
\figcaption{
Lower bounds and allowed regions of $\mathcal{F}_{Z\gamma/\gamma\gamma}$ as a function of $\sqrt{s}$ obtained in the $H\gamma$ production for $\mathcal{L}=1000~{\rm fb}^{-1}$ and $\Lambda=2~{\rm TeV}$. The shaded regions above or below the black-dashed curves are excluded. The CMS exclusion limits obtained from the Higgs boson rare decay are also shown for comparison (see the horizontal red-dashed curves): (a), (d) CMS exclusion limits ($\sqrt{s}=8~{\rm TeV}$ and $\mathcal{L}=19~{\rm fb}^{-1}$); (b), (e) CMS projection limits ($\sqrt{s}=14~{\rm TeV}$ and $\mathcal{L}=300~{\rm fb}^{-1}$); (c), (f) CMS projection limits ($\sqrt{s}=14~{\rm TeV}$ and $\mathcal{L}=3000~{\rm fb}^{-1}$).}
\label{fig:exclusion}
\end{center}
If no NP effects are observed in the $H\gamma$ production, one can set $2\sigma$ exclusion limits on $\mathcal{F}_{Z\gamma/\gamma\gamma}$, which are displayed in Fig.~\ref{fig:exclusion}.
The CMS current and projection sensitivities are also plotted for comparison; see the red-shaded region.
\section{Summary}
We study the potential of measuring the $HZ\gamma$ and $H\gamma\gamma$ anomalous couplings in the process $e^{-}e^{+}\rightarrow H\gamma$. Such a scattering process occurs only at the loop level in the SM. After considering the interference of the SM loop effects and the anomalous coupling contributions, we perform a collider simulation of the $H\gamma$ production with $H\to b\bar{b}$. Even though the SM contribution alone cannot be detected, the anomalous couplings can enhance the production rate sizeably and lead to a discovery at a high energy electron-positron collider with an integrated luminosity of $1~{\rm ab }^{-1}$.
When considering one anomalous coupling at a time, our study shows that, for negative $\mathcal{F}_{Z\gamma}$ or $\mathcal{F}_{\gamma\gamma} \sim 0.56$, the $e^+e^-$ collider performs better than the current LHC and the HL-LHC. When both couplings contribute simultaneously to the $H\gamma$ production, more parameter regions are allowed and can be fully explored at a high energy $e^+e^-$ collider.
We also derive exclusion bounds on the anomalous couplings in case no NP effects are observed in the $H\gamma$ production. The current CMS data indicate a two-fold solution for the anomalous coupling. Resolving such an ambiguity is beyond the capability of the upgraded LHC or the high-luminosity LHC, but it can be done easily at the $e^+e^-$ collider.
\begin{acknowledgments}
This work is supported in part by the National Science Foundation of China under Grant No. 11275009.
\end{acknowledgments}
\end{multicols}
\vspace{-1mm}
\centerline{\rule{80mm}{0.1pt}}
\vspace{2mm}
\begin{multicols}{2}
|
2,877,628,088,386 | arxiv | \section{Introduction}
The search for and the characterization of the dual (kpc scale
separation) and binary (pc separation) active supermassive black hole (SMBH, {\it M}$_{\mathrm{BH}} >$10$^6$ {\it M}$_{\odot}$)
population is a hot topic of current astrophysics, given its relevance to
understanding galaxy formation and evolution.
Since it is now clear that the most
massive galaxies should harbor a central SMBH \citep{kor95, fer05}, the formation of
dual/binary SMBH systems is the inevitable consequence of the current $\Lambda$CDM
cosmological paradigm, in which galaxies grow hierarchically
through minor and major mergers.
The dynamical evolution of dual/binary SMBH systems within the merged galaxy, and
their interaction with the host (via dynamical encounters and feedback
during baryonic accretion onto one or both SMBHs) encode crucial information
about the assembly of galaxy bulges and SMBHs.
In addition to this, if the binary SMBHs eventually
coalesce, they will emit gravitational waves that could be detected with incoming low frequency
gravitational wave experiments \citep{eno04}.\\
Although dual/binary AGN are a natural outcome of galaxy mergers, the
number of confirmed dual/binary AGN is still too low when compared with
model expectations \citep[e.g.,][]{spr05,hop05}. Indeed, directly observing SMBHs during different merger stages is still a challenging task not only because
of the stringent resolution requirements, but also because of the intrinsic difficulty
in identifying SMBHs. In late merger stages, SMBHs are expected to be
embedded in a large amount of dust and gas and thus strongly obscured and
elusive both in the UV and optical bands. Only a few tens of dual
SMBHs at $<$10 kpc separation have been confirmed \cite[see][and
references therein]{mcg15} and only a
few definitive sub-kpc binary SMBHs have been discovered and studied so far
\citep[e.g.,][] {rod06,val08,bor09}.
The search for double-peaked optical emission lines emerging from two separate
narrow-line regions (NLRs) of two SMBHs has been proposed
as a method to select dual AGN candidates on kpc/sub-kpc scales \citep[e.g.,][]{wan09}.
The catalogs of \cite{wan09}, \cite{liu10}, and \cite{smi10} provide about three hundred unique dual AGN candidates, selected from the Sloan Digital Sky Survey
Data Release 7 \citep[SDSS--DR7,][]{aba09}
as spectroscopic
AGN with a double-peaked [OIII]$\lambda$5008\AA{} line (redshift range
0.008$<${\it z}$<$0.686).
However, the 3" diameter of the SDSS fiber does not discern whether the double-peaked optical emission lines are due
to dual kpc-scale nuclei or to kinematical effects occurring within a single AGN, e.g. jet-cloud interactions
\citep{hec84,gab17}, a rotating, disk-like NLR \citep{xu09}, or the combination of a blobby NLR and extinction effects \citep{cre10}.
In this paper we present and discuss the optical and X-ray properties of MCG+11-11-032, a radio-quiet optical Seyfert 2 galaxy at {\it z}=0.0362.
Besides being a dual AGN candidate on the basis of its SDSS--DR7 optical spectrum
characterized by double-peaked [OIII] emission lines \citep{wan09}, MCG+11-11-032 is also
an X-ray variable source \citep{bal15a}.
It belongs to the all-sky survey {\it Swift}-BAT catalogues \citep{bau13,cus14} and
was observed several times by the {\it Swift} X-ray Telescope \citep[XRT;][]{bur05} on
time scales from years to days.
In Section 2, after summarizing previous results,
we present our new analysis of the SDSS--DR13 spectrum of MCG+11-11-032. The analysis of the XRT light-curve and spectra,
along with the 123-month BAT light curve, are presented in Section 3.
In Section 4, we combine the optical spectroscopic information with the X-ray results to
discuss the most plausible physical scenarios acting in MCG+11-11-032. Section 5 presents our conclusions.
Throughout the paper we assume a flat $\Lambda$CDM cosmology with H$_0$=69.6 km s$^{-1}$ Mpc$^{-1}$, $\Omega_{\Lambda}$=0.7 and
$\Omega_{\rm M}$=0.3. Errors are given at the 68 per cent confidence level unless otherwise specified (as in Sect. 3).
\section{SDSS spectrum}
MCG+11-11-032 (SDSS J085512.54+642345.6)
belongs to a sample of 87 SDSS--DR7 type 2 AGN with double-peaked [OIII] profiles
selected and analyzed by \cite{wan09}.
The obscuration of the active nucleus allowed the authors to determine the redshift
of the host galaxy ({\it z}=0.03625$\pm$1$\times$10$^{-5}$) through the stellar absorption lines and investigate the
properties of the nebular emission lines.
They fitted the [OIII] line with a blue-shifted ($\Delta \lambda_{\rm b}$=-2.52$\pm$0.07 \AA{}) and a red--shifted ($\Delta \lambda_{\rm r}$=2.26$\pm$0.06 \AA{}) component, neither of which is at the systemic velocity of the host galaxy.
The corresponding fluxes and luminosities of the blue-shifted and red-shifted components by \cite{wan09} are:
$F_{\rm[OIII] blue}$=[338$\pm$11]$\times$10$^{-17}$ erg cm$^{-2}$ s$^{-1}$, $L_{\rm[OIII] blue}$=1.04$\times$10$^{40}$ erg s$^{-1}$ and
$F_{\rm[OIII] red}$=[372$\pm$11]$\times$10$^{-17}$ erg cm$^{-2}$ s$^{-1}$, $L_{\rm[OIII] red}$=1.14$\times$10$^{40}$ erg s$^{-1}$.
\cite{com12} performed follow--up long-slit observations with the Blue Channel Spectrograph
on the MMT 6.5 m telescope. They observed
the object at two different position angles (one along the isophotal position angle of the major axis
of the host galaxy and the other one along the orthogonal axis) in order to determine the full spatial separation of the two emission line components.
They fit two Gaussian components to the continuum subtracted [OIII]$\lambda$5007\AA{} emission line and
found a velocity offset between the [OIII]$\lambda$5007\AA{} peaks of $v$=275$\pm$4 km s$^{-1}$.
They measured the angular and physical projected spatial
offset between the two [OIII]$\lambda$5007\AA{} emission features and found 0.77$\pm$0.04 arcsec and
0.55$\pm$0.03 {\it h}$^{-1}_{70}$ kpc, respectively, with a PA=61.0$^\circ$$\pm$2.2.
By checking the long-slit
spectra {\it by eye}, \cite{com12} classified the AGN emission components as
spatially "extended", meaning that not all the emission line components appear spatially compact at the
position angles observed. No additional information was provided by the authors about the emission line profiles at each position angle.
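As a rough consistency check of these numbers (our own estimate, using a simplified low-redshift distance rather than a full cosmological calculation):
\begin{verbatim}
# Convert the 0.77" offset into a projected physical separation at
# z = 0.0362 (low-z approximation, H0 ~ 70 km/s/Mpc).
c_kms, H0, z = 2.998e5, 70.0, 0.0362
D_A = (c_kms * z / H0) / (1.0 + z)   # angular diameter distance [Mpc]
theta = 0.77 / 206265.0              # 0.77 arcsec in radians
print(D_A * 1e3 * theta)             # ~0.56 kpc, close to the 0.55 kpc above
\end{verbatim}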
\begin{figure}
\includegraphics[scale=0.38,clip=true,trim=0cm 5cm 0cm 3cm]{severgnini_f1}
\caption{Optical (SDSS--DR13) spectrum in the rest frame of the MCG+11-11-032 host galaxy
({\it z}=0.036252). The strongest emission lines are labelled.}
\label{sdss_spectrum}
\end{figure}
\begin{figure}
\centering
\vskip -0.2truecm
\includegraphics[scale=0.3,clip=true,trim=0cm 5cm 0cm 3cm]{severgnini_f2_upper}
\includegraphics[scale=0.3,clip=true,trim=0cm 5cm 0cm 3cm]{severgnini_f2_middle}
\includegraphics[scale=0.3,clip=true,trim=0cm 5cm 0cm 3cm]{severgnini_f2_lower}
\caption{MCG+11-11-032 rest-frame spectrum around the [OIII] (top panel), H$\alpha$+[NII] (middle panel) and
[SII] (bottom panel) regions. Wavelengths are in vacuum. Red solid lines represent the total fit, while blue dashed lines are the gaussian components. Black dot--dashed vertical lines mark the position of the
line at the systemic velocity of the host galaxy.}
\label{sdss_lines}
\end{figure}
\begin{table*}
\begin{minipage}[t]{1\textwidth}
\begin{center}
\caption{Properties of the strongest emission line components in the SDSS--DR13 spectrum of MCG+11-11-032.}
\label{sdss_results}
\begin{tabular}{cccccccc}
\hline
Spectral line & $\lambda_{\rm peak}$ & {\it FWHM} & $\Delta \lambda_{\rm peak}$ &$\Delta {v}_{\rm peak}$ & $v_{\rm red} - v_{\rm blue}$& Flux & Luminosity \\
& [\AA{}] & [km/s] & [\AA{}] & [km/s] & [km/s] & [10$^{-17}$ erg/cm$^{2}$s ] & [10$^{39}$ erg/s]\\
(1) & (2) & (3) & (4) & (5) & (6) & (7) & (8) \\
\hline
\hline
\multirow{ 3}{*}{$\rm {[OIII]}{\lambda 4960.295}$\AA{}} & 4957.40$\pm$0.23 & 300$^a$ & -2.93$\pm$0.33 & -177$\pm$20 &\multirow{ 3}{*}{305$\pm$17} & 140$\pm$11 & 4.3$\pm$0.3\\
~~~\\
& 4962.41$\pm$0.17 & 222$^b$ & 2.11$\pm$0.24 & 128$\pm$15 & & 118$\pm$11 & 3.6$\pm$0.3\\
~~~\\
~~~\\
\multirow{ 3}{*}{ $\rm {[OIII]}{\lambda 5008.239}$\AA{}} & 5005.75$\pm$0.15 & 300$\pm$17 & -2.49$\pm$0.21 & -149$\pm$13 & \multirow{ 3}{*}{283$\pm$11} & 402$\pm$24 & 12.3$\pm$0.7\\
~~~\\
& 5010.48$\pm$0.11 & 222$\pm$11 & 2.24$\pm$0.16 & 134$\pm$9 & & 317$\pm$39 &9.7$\pm$0.6\\
\hline
\hline
~~~\\
\multirow{ 3}{*}{ $\rm {[NII]}{\lambda 6549.86}$\AA{}} & 6546.61$\pm$0.19 & 294$^c$ & -3.25$\pm$0.27 & -149$\pm$12 & \multirow{ 3}{*}{271$\pm$13} & 142$\pm$8 & 4.4$\pm$0.3 \\
~~~\\
& 6552.53$\pm$0.20 & 260$^d$ & 2.66$\pm$0.28 & 122$\pm$13 & & 104$\pm$23 & 3.2$\pm$0.7\\
~~~\\
~~~\\
\multirow{ 3}{*}{ $\rm {H\alpha}{\lambda 6564.614}$\AA{}} & 6561.89$\pm$0.17 & 294$^c$ & -2.70$\pm$0.41 & -124$\pm$18 &\multirow{ 3}{*}{248$\pm$11} & 262$\pm$32 & 8.0$\pm$0.8\\
~~~\\
& 6567.32$\pm$0.16 & 260$^d$ & 2.71$\pm$0.26 & 124$\pm$12 & & 219$\pm$68 & 6.7$\pm$2.3\\
~~~\\
~~~\\
\multirow{ 3}{*}{ $\rm {[NII]}{\lambda 6585.27}$\AA{}} &6582.59$\pm$0.15 & 294$\pm$10 & -2.68$\pm$0.21 & -122$\pm$13 & \multirow{ 3}{*}{259$\pm$10} & 449$\pm$38 & 13.8$\pm$1.2\\
~~~\\
&6588.26$\pm$0.15 & 260$\pm$9 &2.99$\pm$0.25 & 136$\pm$11 & & 324$\pm$48 & 9.9$\pm$1.5 \\
\hline
\hline
~~~\\
\multirow{ 3}{*}{ $\rm {[SII]}{\lambda 6718.29}$\AA{}} & 6715.77$\pm$0.33 & 303$\pm$24 & -2.52$\pm$0.47 & -112$\pm$21 & \multirow{ 3}{*}{254$\pm$20}& 147$\pm$15 & 4.5$\pm$0.4\\
~~~\\
& 6721.45$\pm$0.30 & 233$\pm$21 &3.16$\pm$0.42 & 141$\pm$19 & & 77$\pm$12 & 2.4$\pm$0.4\\
~~~\\
~~~\\
\multirow{ 3}{*}{ $\rm {[SII]}{\lambda 6732.68}$\AA{}} & 6730.15$\pm$0.32 & 303$^e$ & -2.53$\pm$0.45 & -113$\pm$20 & \multirow{ 3}{*}{250$\pm$21} & 147$\pm$9 & 4.5$\pm$0.3\\
~~~\\
& 6735.76$\pm$0.32 & 233$^f$ &3.08$\pm$0.45 & 137$\pm$20 & & 85$\pm$12 & 2.6$\pm$0.4 \\
\hline
\end{tabular}
\end{center}
{\bf Notes.} Col. (1): Spectral lines (rest--frame vacuum wavelengths).
Col. (2): Rest-frame wavelengths of the peaks of the blue and red-shifted emission line components.
Col. (3): $FWHM$, not corrected for the instrumental resolution, of the blue and red-shifted emission line components.
Col (4): Doppler shifts of the blue and red emission line components.
Col. (5): Line of sight velocity offsets of the blue and red-shifted components.
Col. (6): Line of sight velocity offset between red and blue peaks.
Col. (7)-(8): Fluxes and luminosities, corrected for Galactic extinction, of the blue and red-shifted components. \\
$^a$ fixed to be equal to $FWHM$ of the $\rm {[OIII]}{\lambda 5008.239}$\AA{} blue component. $^b$ fixed to be equal to $FWHM$ of the $\rm {[OIII]}{\lambda 5008.239}$\AA{} red component. $^c$ fixed to be equal to $FWHM$ of the $\rm {[NII]}{\lambda6585.27}$\AA{} blue component. $^d$ fixed to be equal to $FWHM$ of the $\rm {[NII]}{\lambda6585.27}$\AA{} red component. $^e$ fixed to be equal to $FWHM$ of the $\rm {[SII]}{\lambda 6718.29}$\AA{} blue component. $^f$fixed to be equal to $FWHM$ of the $\rm {[SII]}{\lambda 6718.29}$\AA{} red component.
\end{minipage}
\end{table*}
To further investigate the presence of double peaked emission lines,
we analyzed the SDSS--DR13 spectrum.
We examined the
properties of all the prominent emission lines detected in the spectrum (see Fig.~\ref{sdss_spectrum}), instead of
considering only the [OIII]$\lambda$5007\AA{} line (as was done by previous authors).
To this end, we analyzed the spectral region around the [OIII], H$\alpha$+[NII], and
[SII] lines by fitting the rest-frame 4910-5070 \AA{}, 6500-6660 \AA{} and 6660-6715 \AA{}
ranges, avoiding the regions where absorption lines are present.
We used a combination of a power-law continuum and two Gaussian components for each transition.
In this procedure, for each Gaussian component there are three independent parameters that need to be determined:
the position of the line peak ($\lambda_{\rm peak}$), the line broadening ($\sigma$) and
the normalization of the line ($N$).
The central line positions and their relative intensities were set to be all independent.
We note, however, that the emission line ratios derived by our fit are in good agreement with the expected theoretical values, e.g.
3:1 for the two [OIII] lines.
Since the narrow lines observed in this source should have the same physical origin, i.e. they arise from the NLRs,
we performed a first fit in which the widths of all the Gaussian components were forced to have the
same value in units of km s$^{-1}$.
However, since for forbidden transitions larger
critical electron densities for de-excitation (i.e. {\it N$_{\rm ec}$}[OIII]$>${\it N$_{\rm ec}$}[NII]$>${\it N$_{\rm ec}$}[SII], see \citeauthor{ost91} \citeyear{ost91}) can
broaden the line profile, we allowed the line widths to differ between the three spectral ranges considered here.
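A minimal sketch of such a two-component fit (ours, shown on synthetic data since the SDSS spectrum itself is not reproduced here) could read:
\begin{verbatim}
# Power-law continuum plus two Gaussians, fitted by least squares.
import numpy as np
from scipy.optimize import curve_fit

def model(lam, A, alpha, N1, mu1, sig1, N2, mu2, sig2):
    cont = A * lam**alpha                             # local continuum
    g1 = N1 * np.exp(-0.5 * ((lam - mu1) / sig1)**2)  # blue component
    g2 = N2 * np.exp(-0.5 * ((lam - mu2) / sig2)**2)  # red component
    return cont + g1 + g2

rng = np.random.default_rng(0)                        # synthetic example
lam = np.linspace(4990.0, 5025.0, 350)
flux = model(lam, 1.0, 0.0, 8.0, 5005.8, 2.1, 6.3, 5010.5, 1.6)
flux += rng.normal(0.0, 0.3, lam.size)
p0 = (1.0, 0.0, 5.0, 5005.0, 2.0, 5.0, 5011.0, 2.0)   # initial guesses
popt, pcov = curve_fit(model, lam, flux, p0=p0)
print(popt[3], popt[6])                               # fitted peak positions
\end{verbatim}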
Each emission line turned out to be well fitted by two narrow Gaussian components.
In particular, for the two [OIII]$\lambda$5007\AA{} lines our results
are in good agreement with \cite{wan09} and \cite{com12}
in terms of peak offsets and line fluxes.
As a second and final step, we re-fitted the data by allowing
the widths of the bluer Gaussian components to be
independent with respect to the red ones.
The results of this second fit are shown in Fig.~\ref{sdss_lines} and reported in Table~\ref{sdss_results} together with 1$\sigma$ statistical errors.
The addition of a third Gaussian
to account for a possible component at the systemic velocity is not required by the fit.
We note that outflows/jets in integrated galaxy spectra, where the individual outflowing clouds are not
resolved by the observations, tend to appear in the form of fainter -- and broader -- possibly asymmetric emission line components
superimposed on the narrow emission line centered at the systemic velocity \citep[e.g.][]{har14,har16}.
The lack of such typical outflow components
does not support the presence of strong ionized outflows or jets detectable in the SDSS--DR13 spectrum.
The absence of a strong radio jet is also in agreement with the non detection of MCG+11-11-032
down to about 0.5 mJy (3$\times$rms) at 1.4 GHz in the VLA FIRST survey \citep{bec95}.
However, we note that, although strong radio jets can be ruled out, the presence of faint jets cannot be discarded with the available optical and radio data.
All the results reported here were obtained by combining a power-law continuum plus narrow Gaussian components, and we have verified that they are not dependent (within the 1$\sigma$ uncertainties) on the stellar continuum subtraction.
In summary, the values reported in Table~\ref{sdss_results} are in agreement with
the results obtained by \cite{wan09} and \cite{com12} for the [OIII] line.
In addition, we confirm the presence of
double-peaked profiles in all the optical nebular emission lines detected in the MCG+11-11-032 SDSS--DR13 spectrum. The widths of all the
blue and red components are consistent, within 3$\sigma$ uncertainties, with each other across
the full spectral range, though the width of the red components is systematically smaller.
We found significant (more than 3$\sigma$) offsets of the blue and red peaks with respect to the systemic redshift. This implies that neither of these components is consistent with the nominal systemic velocity of the host galaxy (derived from the stellar absorption
lines). Finally, the offset ratios ($\Delta \lambda_{\rm blue}$/$\Delta \lambda_{\rm red}$) of the two components of H$\alpha$, [NII], and
[SII] are consistent with that of [OIII], and they are of the order of one, i.e. the wavelength shifts and the corresponding velocity offsets
between the blue and red peaks are similar for all the nebular emission lines.
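These offsets follow from the non-relativistic Doppler relation $\Delta v_{\rm peak} = c\,\Delta\lambda_{\rm peak}/\lambda_{\rm rest}$; for instance, for the two [OIII]$\lambda$5008\AA{} components (our check):
\begin{verbatim}
c_kms = 2.998e5
print(c_kms * 2.49 / 5008.239)   # ~149 km/s, the [OIII] blue offset
print(c_kms * 2.24 / 5008.239)   # ~134 km/s, the [OIII] red offset
\end{verbatim}
Both values agree with columns (4)--(5) of Table~\ref{sdss_results} within rounding.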
Following the statistical argument originally proposed by \cite{wan09}, these results can be explained by
the presence of two distinct NLRs that, on the basis of their projected physical separation
estimated by \cite{com12} (0.55$\pm$0.03 {\it h}$^{-1}_{70}$ kpc), may be related to two different AGN. The extended structures detected by the same authors
could be produced by either gas kinematics in the two NLRs or faint outflows/jets associated to one or both SMBHs.
We note, however, that the line properties reported in Table~\ref{sdss_results}, as well as the presence of spatially extended components, can also be explained by alternative physical scenarios. Firstly, as noted before, the presence of a single SMBH associated with faint jet activity cannot be excluded based on the available optical and radio data. Secondly, the velocity offsets reported in Table~\ref{sdss_results} are
fully consistent with rotation velocities measured in
nearby galaxies \citep[see e.g.][]{sof01}.
This implies that the double-peaked emission lines observed for MCG+11-11-032 could be produced by gas kinematics related to a single NLR \citep[see e.g.][]{ble13,fu12}, partially or totally tracing the kinematics of the almost edge-on \citep[{\it b/a}=0.45 by][]{kos11} host galaxy disk at sub-kpc scale. Another intriguing possibility, corroborated by the X-ray data presented in the next Section, is that we are observing
gas kinematics effects produced by a single NLR ionized by two SMBHs close to the coalescence phase, i.e.
a sub-pc scale SMBH pair (see Section 4).
\section{X-ray data}
\begin{table}
\scriptsize
\begin{center}
\caption{MCG+11-11-032: XRT observation log. The observations are ordered
on the basis of the observation starting date.}
\label{xobs_log}
\begin{tabular}{@{}lccc@{}}
\hline
\hline
Obs. ID & Obs. Start date& Net Counts & Net Exp. Time\\
(1) & (2) & (3) & (4) \\
\hline
{\it Archival data} & & & \\
\hline
00038045001 & 2008-11-20 & 54 & 4957\\
00038045002 & 2009-01-28 & 135 & 5022 \\
00090163002 & 2009-09-06 & 42 & 5594\\
00084954001 & 2015-01-23 & 12 & 457\\
00084954002 & 2015-02-02 & 16 & 622 \\
00084954005 & 2015-03-24 & 9 & 1249\\
\hline
{\it Our own monitoring} &&& \\
\hline
00034134001 & 2015-12-14 & 94 & 3236 \\
00034134002 & 2015-12-15 & 106 & 4430\\
00034134003 & 2015-12-15 & 126 & 7317 \\
00034134004 & 2015-12-19 & 34 & 899 \\
00034134005 & 2015-12-19 & 460 & 15890\\
00034134006 & 2015-12-24 & 230 & 8346 \\
00034134007 & 2015-12-25 & 352 & 11050\\
00034134008 & 2015-12-29 & 187 & 8129 \\
00034134009 & 2015-12-30 & 161 & 5866\\
00034134010 & 2016-01-10 & 272 & 7337\\
00034134011 & 2016-01-12 & 182 & 7280\\
00034134012 & 2016-01-13 & 26 & 942\\
00034134013 & 2016-01-14 & 51 & 2527\\
00034134014 & 2016-01-14 & 265 & 16810\\
00034134015 & 2016-01-19 & 65 & 3326\\
00034134016 & 2016-01-20 & 225 & 8259\\
00034134017 & 2016-01-21 & 23 & 1096\\
00034134018 & 2016-01-22 & 21 & 1056\\
00034134019 & 2016-01-24 & 107 & 6171\\
00034134020 & 2016-01-27 & 17 & 942\\
00034134021 & 2016-01-28 & 218 & 10340\\
00034134022 & 2016-01-29 & 263 & 15400\\
00034134024 & 2016-02-03 & 99 & 3906\\
00034134025 & 2016-02-03 & 101 & 3591\\
00034134026 & 2016-02-05 & 338 & 11530\\
\hline
{\it Archival data} && & \\
\hline
00080403001 & 2016-02-18 & 60 & 1826 \\
\hline
\end{tabular}
\end{center}
{\bf Notes.} Col. (1): XRT observational ID. (2) Start date of the observation. (3) [0.3-10 keV] net counts as derived by
X-ray spectral analysis. (4) Nominal exposure time in units of seconds.
\end{table}
We monitored MCG+11-11-032 with the {\it Swift}-XRT telescope as part
of a project aimed at studying the X-ray variability on different time scales \citep{bal15a}.
The observations, performed during the {\it Swift} Cycle-12 (P. I. Severgnini), started on 2015 December 14 and ended on 2016
February 5, covering $\sim$54 days, for a total exposure time of $\sim$166 ks.
The data were taken using XRT in the standard PC-mode (Target ID=00034134).
The observation log is reported in Table~\ref{xobs_log}; we note that the observations scheduled
during segment 00034134023 were not completed, explaining the absence of this segment from the observation log.
A further archival observation (Target ID=00080403) was performed just after our own monitoring and was included in our analysis.
Before our daily monitoring, MCG+11-11-032 was observed several times by {\it Swift} on
month/year time scales.
The relevant information about the previous XRT observations
in which the source was detected with a S/N$>$3 in the 0.3-10 keV range is reported in the
first part of Table~\ref{xobs_log} (Target ID=00038045, 00090163 and 00084954).
For each observation, we extracted the images, light curves, and spectra, including the background and ancillary response files,
using the on-line XRT data product generator\footnote{http://www.swift.ac.uk/user\_objects} \citep{eva07, eva09}.
The effects of the damage to the CCD and automatic
readout-mode switching were handled and the appropriate spectral response files were identified in the calibration database.
The source appears to be point-like in the
XRT image and we do not find any significant evidence of pile-up.
\begin{figure*}
\includegraphics[scale=0.4,clip=true,trim=0cm 5cm 0cm 3cm]{severgnini_f3_left}
\includegraphics[scale=0.4,clip=true,trim=0cm 5cm 0cm 3cm]{severgnini_f3_right}
\caption{0.3-10 keV XRT count rate (upper panels) and hardness ratio (lower panels) light curves of MCG+11-11-032 obtained by binning the data per observation. Data points are corrected for technical issues (i.e. bad pixels/columns, field of view effects and source counts landing outside the extraction region) following the recipes discussed by \citet{eva07, eva09}. Error bars mark 1$\sigma$ uncertainties, while dashed lines represent the weighted averages of the high and low count rate states (upper panels, blue and red lines, respectively) and of the hardness ratios (lower panels, black line). In the upper panels, blue and red points flag higher and lower count rate states, respectively.}
\label{xrt_curve}
\end{figure*}
\begin{figure}
\begin{center}
\includegraphics[scale=0.4,clip=true,trim=0cm 10cm 0cm 3cm]{severgnini_f4}
\caption{15-150 keV light curve of MCG+11-11-032 taken from the {\it Swift}-BAT 123 month survey (2005 January to 2015 March) in time bins of two (grey data points) and five months (red filled circles). Error bars mark 1$\sigma$ uncertainties. The black skeletal symbols represent the binned XRT count rates overlapping in time with the BAT monitoring and rescaled to BAT count rates (see Sect. 3.2). For visual purposes only, we over-plotted a modular function obtained by summing four sinusoidal components with
equal period but different amplitudes (dashed black curve).}
\label{bat_curve}
\end{center}
\end{figure}
\subsection{XRT light curve}
We first investigated the source variability behavior from year/month to day timescales.
We compared the results obtained by our own monitoring program, probing variations on relatively short timescales (day-weeks),
with archival data, which allow us to explore longer timescales (up to several years, see Table~\ref{xobs_log}).
Figure \ref{xrt_curve} shows the 0.3-10 keV XRT light curve (upper panels) and the hardness ratio light curve (bottom panels) of MCG+11-11-032.
The latter is defined as the ratio between the 4-10 keV and the 2-4 keV count rates; as shown in \cite{bal15a}, this ratio
provides a strong indication of the amount of absorption.
Indeed, while below
$\sim$2 keV different soft components (soft excess, reflection, scattering, etc) can be present, in the 2--10 keV range,
the spectrum of an AGN can be approximated, at the first order, by an absorbed power law.
The data are binned per observation, and we considered
all the observations listed in Table \ref{xobs_log}. Our daily monitoring data correspond to the data-points crammed into the rightmost part of the figure. The count rate light curve is inconsistent with a constant at the 99.99 per cent confidence level ($\chi^2$ test).
In particular, on month/year timescales the source was caught in two significantly different count rate
levels: the higher one (blue empty symbols, Fig.~\ref{xrt_curve}, upper panels), characterized by a
weighted average count rate of $\sim$0.03 cts s$^{-1}$
(marked with a blue dashed line), and the lower one
(red filled symbols, Fig.~\ref{xrt_curve}, upper panels), characterized by a weighted average count rate of $\sim$0.01 cts s$^{-1}$ (marked with a red dashed line).
The top-left panel of Fig.~\ref{xrt_curve} clearly shows that the source significantly increases its
count rate level in two months, and after about seven months the source returns to the same lower count rate as in the first observation.
A similar but opposite trend was observed about seven years later (top right panel of Fig.~\ref{xrt_curve}):
the source decreases its
count rate level in two months
and, after about nine months, the source has already increased again its count rate
to the higher state.
Unfortunately, the lack of a uniform sampling across the full period prevents us from further investigating this ``alternating" behavior.
The two states shown in Fig.~\ref{xrt_curve} are, most probably,
the states where the source spends the majority of its time.
The overall pattern of variability observed in the full 0.3--10 keV band, and shown in the upper panels of Fig.~\ref{xrt_curve}, is similar to that
registered in the 2-4 keV and 4-10 keV energy ranges.
Indeed, the hard-to-soft flux ratio, plotted in the bottom panels of Fig.~\ref{xrt_curve}, does not show significant variations ($<$3$\sigma$, $\chi^2$ test),
except for the third observation in the left panel (Sep. 2009).
The lack of significant variation in the count rate ratio suggests that the observed count rate variability is
not caused by variable absorption but is most likely
due to intrinsic flux variations (see also Sect. 3.3).
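For reference, the constancy tests quoted in this section are standard $\chi^2$ tests against the weighted mean; a minimal sketch (ours, with illustrative count rates rather than the actual binned values) is:
\begin{verbatim}
import numpy as np
from scipy.stats import chi2

rate = np.array([0.010, 0.031, 0.012, 0.029, 0.030, 0.011])  # cts/s
err  = np.array([0.003, 0.004, 0.003, 0.004, 0.004, 0.003])
w = 1.0 / err**2
mean = np.sum(w * rate) / np.sum(w)        # weighted average rate
chisq = np.sum(((rate - mean) / err)**2)
p = chi2.sf(chisq, rate.size - 1)          # null: constant count rate
print(chisq, 100.0 * (1.0 - p))            # rejection confidence [%]
\end{verbatim}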
\subsection{BAT light curve}
To further investigate the month/year variability behavior shown by the XRT data,
we considered the (still unreleased) MCG+11-11-032 123-month 15-150 keV BAT light curve
\citep[see Fig.~\ref{bat_curve}; Palermo {\it Swift}-BAT team, private communication; see also][]{seg10},
which provides a tight sampling of the source count rate on a total time scale of more than ten years.
In Fig.~\ref{bat_curve} we show the light curve binned in periods of two (grey data points) and five (red filled circles) months.
The 15-150 keV emission is clearly variable;
a constant flux is rejected at 99.95 per cent confidence level ($\chi^2$ test).
The curve has a modular behavior with different peaks and dips occurring almost every 25 months.
For visual purposes only, we over-plotted on Fig.~\ref{bat_curve} a modular function obtained by summing four sinusoidal components with
equal period but different amplitudes (dashed black curve).
In order to compare the XRT and BAT variability, we binned the XRT light curve in periods of five months
and then converted the XRT to BAT count rates{\footnote{We considered the XRT data obtained during the 123 months of BAT observations and converted them to BAT count rates by using
the PIMMS tool (v. 4.8f) and considering the spectral parameters ($\Gamma$, N$_{\rm H}$) derived by our own spectral analysis reported in Sec. 3.3.}}
(skeletal symbols in Fig.~\ref{bat_curve}). We found a good agreement between the two datasets.
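A more quantitative period search than the purely visual sinusoidal overlay could be performed, for instance, with a Lomb--Scargle periodogram; the sketch below (ours, run on toy data with a built-in 25-month modulation rather than the actual BAT light curve) illustrates the procedure:
\begin{verbatim}
import numpy as np
from astropy.timeseries import LombScargle

rng = np.random.default_rng(1)                   # toy light curve
t = np.arange(0.0, 123.0, 2.0)                   # time [months]
rate = 1e-5 * (1.0 + 0.5 * np.sin(2 * np.pi * t / 25.0))
rate += rng.normal(0.0, 2e-6, t.size)
err = np.full(t.size, 2e-6)
freq, power = LombScargle(t, rate, err).autopower(maximum_frequency=0.5)
print(1.0 / freq[np.argmax(power)])              # ~25 months for this input
\end{verbatim}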
\subsection{XRT spectral analysis}
\begin{figure}
\includegraphics[scale=0.33,clip=true,trim=0cm 0.8cm 0cm 2.5cm]{severgnini_f5}
\caption{XRT spectra corresponding to the high (blue open circles) and low (red solid circles) states shown in upper panels
of Fig.~\ref{xrt_curve}.}
\label{ldata_area}
\end{figure}
\begin{table*}
\begin{minipage}[t]{1\textwidth}
\begin{center}
\caption{Best-fitting values obtained by applying to the high-state spectra (empty blue circles in Fig.~\ref{xrt_curve}) of MCG+11-11-032 the different models discussed in Sect. 3.3 (referenced
as models from 1 to 4). Errors are quoted at the 90 per cent confidence level for one parameter of interest \citep{avn76}.}
\label{xrays}
\begin{tabular}{ccccccccc}
\hline
\hline
{\it Model} & $\Gamma$ & {\it N$_{\rm H}$} & {\it R} & {\it E} & {\it EW} & $\chi^2/d.o.f.$ & {\it F}$_{\rm {2-10 keV}}$ & {\it L}$_{\rm {2-10 keV}}$ \\
& & [$10^{22}$ cm$^{-2}$] & & [keV] & [eV] & & [10$^{-12}$ erg cm$^{-2}$ s$^{-1}$] & [10$^{43}$ erg s$^{-1}$] \\
(1) & (2) &(3) & (4) & (5) & (6) &(7) &(8) & (9) \\
\hline
1 & 1.29$^{+0.19}_{-0.18}$ & 11.0$^{+1.2}_{-1.5}$ & & & & 133 / 75 & 4.39 & 2.15\\
\hline
2 & 1.61$^{+0.17}_{-0.18}$ & 13.5$^{+1.4}_{-1.5}$ & $\sim$0.09 & & & 95 / 74 & 4.32 & 2.09 \\
\hline
3 & 1.64$^{+0.19}_{-0.18}$ & 13.3$^{+1.5}_{-1.4}$ & $\sim$0.09 & 6.18$^{+0.09}_{-0.08}$ & 120$^{+50}_{-60}$& 83 / 72 & 4.27 & 2.05\\
\hline
\multirow{ 3}{*}{4} & \multirow{ 3}{*}{1.68$^{+0.19}_{-0.18}$} &\multirow{ 3}{*}{13.3$^{+1.5}_{-1.4}$} & \multirow{ 3}{*}{ $\sim$0.09} & 6.16$^{+0.08}_{-0.07}$ & 120$^{+80}_{-60}$& \multirow{ 3}{*}{79 / 70} & \multirow{ 3}{*}{4.24} & \multirow{ 3}{*}{2.05} \\
\\
& & & &6.56$^{+0.16}_{-0.15}$ & 85$^{+70}_{-25}$ && & \\
\hline
\hline
\end{tabular}
\end{center}
{\bf Notes:} Col. (1): Model number as referenced in the text (see Sect. 3.3).
Col. (2): Power-law and reflection component photon index.
Col. (3): Intrinsic column density.
Col. (4): Reflection fraction, defined as the ratio between the 2-10 keV flux of the reflected and direct continuum components.
Col. (5): Rest-frame energy centroid of the Gaussian line.
Col. (6): Emission line equivalent width.
Col. (7): $\chi^2$ and number of degrees of freedom.
Col. (8): Observed flux (de-absorbed by Galactic absorption) in the 2-10 keV energy band.
Col. (9): Intrinsic (i.e. absorption-corrected) luminosity in the 2-10 keV energy band.
\end{minipage}
\end{table*}
\begin{figure}
\includegraphics[scale=0.33,clip=true,trim=0cm 0.8cm 0cm 2.5cm]{severgnini_f6}
\caption{Upper panel: The model composed of an intrinsically absorbed power-law ({\it tbabs*ztbabs*zpowerlw}, model 1, Table~\ref{xrays})
is plotted over the XRT spectrum of MCG+11-11-032.
Lower panel: Relevant residuals plotted in terms of sigmas.}
\label{model1}
\end{figure}
\begin{figure}
\includegraphics[scale=0.33,clip=true,trim=0cm 0.8cm 0cm 2.5cm]{severgnini_f7}
\caption{Upper panel: The model composed of an intrinsically absorbed power-law plus a continuum reflection component and one narrow emission
line ({\it tbabs*}({\it ztbabs*zpowerlw+pexrav+zgauss})), model 3 in Table~\ref{xrays}) is plotted over the XRT spectrum of MCG+11-11-032.
Lower panel: Relevant residuals plotted in terms of sigmas.}
\label{model3}
\end{figure}
\begin{figure}
\includegraphics[scale=0.33,clip=true,trim=0cm 0.8cm 0cm 2.5cm]{severgnini_f8}
\caption{Confidence contours plot of the joint errors of the rest-frame emission line energy versus the
intrinsic normalization of the Gaussian component ({\it A$_{\rm line}$}). 68 per cent (black line), 90 per cent (red line) and 99 per cent (green line)
confidence contours are shown.}
\label{cont}
\end{figure}
\begin{figure}
\includegraphics[scale=0.33,clip=true,trim=0cm 0.8cm 0cm 2cm]{severgnini_f9}
\caption{Upper panel: the model, which includes an intrinsically absorbed power-law plus a continuum reflection component and two emission narrow lines
({\it tbabs*}({\it ztbabs*zpowerlw+pexrav+zgauss+zgauss})), model 4 in Table~\ref{xrays}), is plotted over the spectrum of MCG+11-11-032.
Note that this plot was obtained by creating a fluxed spectrum against a simple $\Gamma$ = 2 power-law and then overlaying the best fit model.
Lower panel: relevant residuals, plotted in terms of sigmas.}
\label{model4}
\end{figure}
As already discussed in Sect. 3.1, by considering the XRT count rate ratios (see Fig.~\ref{xrt_curve}, lower panels),
no evidence of spectral variability is observed between the two states.
To further test this result,
we produced two different spectra on the basis
of the 0.3-10 keV light curve: we
co-added all the data relevant to the blue and red points showed in the
upper panels of Fig.~\ref{xrt_curve} to obtain source and background spectra
of the high and low-flux states, respectively.
The resulting spectra are compared in Fig.~\ref{ldata_area}.
Due to the different statistical quality of the two spectra,
the high-state spectrum ($\sim$4170 net counts) has been binned in order to have at least 50 counts per energy channel,
while the low-state one ($\sim$100 net counts) has been binned in order to have at least 10 counts per energy channel.
As evident from Fig.~\ref{ldata_area}, the two states have very similar spectral shapes as already suggested by the X-ray colours.
Therefore, we can rule out variable absorption to be at the origin of the observed flux variability.
In the following, we will focus on the higher state and higher statistics spectrum.
The spectral analysis is performed by using the {\textsc{{\small XSPEC}}} 12.8.2
package \citep{arn96}. We use the $\chi^2$ statistics in the search for the best fit model
and for parameter errors determination \citep{avn76}; quoted statistical
errors are at the 90 per cent confidence level for one parameter of interest.
Each model discussed below includes a Galactic column density {\it N}$_{\rm {H Gal}}$=4.7$\times$10$^{20}$ cm$^{-2}$
\citep{kal05}, modeled with {\it tbabs} in {\textsc{{\small XSPEC}}} \citep{wil00}.
Since the shape of the spectra in Fig.~\ref{ldata_area} suggests the presence of obscuration, as a starting point
we adopted a single intrinsically absorbed power-law model ({\it ztbabs*zpow} model in {\textsc{{\small XSPEC}}}, model 1, Table~\ref{xrays}).
As shown in Fig.~\ref{model1}, such a model cannot be considered a good representation of the global spectral properties of the source, leaving evident
residuals over the full energy range. In particular, both the softer (below $\sim$ 2 keV) and the harder (above $\sim$6 keV) residuals suggest the
presence of a non-negligible additional component tracing reflection, most probably due to the
circum-nuclear material.
We added to the fit first a continuum reflection component ({\it pexrav} model in {\textsc{{\small XSPEC}}}, \cite{mag95},
model 2, Table~\ref{xrays}) and then a narrow (50 eV) Gaussian emission line component around 6.4 keV to account for a Fe K$\alpha$ emission component ({\it zgauss}
model in {\textsc{{\small XSPEC}}},
model 3, Table~\ref{xrays}). Both components are statistically required and their addition significantly improves the fit.
The spectrum and the residuals corresponding to the final best-fitting model (model 3, Table~\ref{xrays})
are shown in Fig.~\ref{model3}, upper and lower panels, respectively.
We note that the best-fit value of the rest-frame energy (6.18 keV) of the narrow emission line does not have a clear association with well-known and expected transitions.
The difference with respect to the neutral rest-frame Fe emission line at 6.4 keV is significant at 97 per cent confidence level
for one parameter of interest.
Fig.~\ref{cont} shows the confidence contour plot of the joint errors of the rest-frame emission line energy versus its
intrinsic normalization. This figure indicates that a rest-frame energy of 6.18 keV is favored by the model, although also a 6.4 keV value, i.e.
the rest-frame Fe K$\alpha$ line, cannot be excluded at 3$\sigma$. The shape of the 99 per cent contour level also hints at another possibility: the presence of a second emission line at an energy higher than 6.4 keV.
For this reason,
even if it is not required by the fit with the present statistics,
we add a second narrow emission line to the model, leaving its energy free to vary in the 6-7 keV energy range (model 4, Table \ref{xrays}).
The best fit values obtained for the rest-frame energies of the two emission lines are: {\it E$_{\rm 1}$}=6.16$\pm$0.08 keV ({\it EW}$\sim$120 eV, {\it F}$_{\rm line1}$$\sim$8$\times$10$^{-14}$ erg s$^{-1}$ cm$^{-2}$) and {\it E$_{\rm 2}$}=6.56$\pm$0.15 keV ({\it EW}$\sim$85 eV, {\it F}$_{\rm line2}$$\sim$6$\times$10$^{-14}$ erg s$^{-1}$ cm$^{-2}$).
As expected, due to the statistics of our data, the energy of the second line is poorly constrained.
All the continuum parameters are unaffected with respect to those obtained by adopting model 3 (see Table~\ref{xrays}).
For completeness, we report in Fig.~\ref{model4} the spectrum and the ratio between data and this last best-fitting model.
As a final step, we check whether the two putative X-ray emission lines could be associated with the double-horned profile of a relativistic
Fe K$\alpha$ emission line produced in the accretion disk, i.e. inside the absorbing medium intercepted along the line of sight
({\it N$_{\rm H}$}=1.33$\times$10$^{23}$ cm$^{-2}$, see Table \ref{xrays}).
To this end, the two {\it zgauss} components are replaced with an absorbed {\it laor} \citep{lao91} disk line plus a {\it pexrav} component in {\textsc{{\small XSPEC}}}.
These represent the iron emission line plus continuum components reflected by the accretion disk and absorbed by the outer medium along the line of sight.
We find that the symmetric double peaked profile observed for MCG+11-11-032 (both in terms of $\Delta E$ and emission line fluxes)
can be reproduced by this model only by assuming that a significant fraction of the line flux comes from
a Keplerian disk at $\sim$200-400 {\it R}$_{\rm G}$ from the central engine.
In the case of a single ionizing source, such a range corresponds to the outer part of an accretion disk or to the region where optical broad emission lines are typically produced, i.e. the so-called broad line region (BLR). As we will discuss in the next Section, the distance quoted above also matches the inner radius expected for a circumbinary accretion disk in the presence of two sub-parsec scale SMBHs.
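The quoted radial range can be checked on the back of an envelope: for a Keplerian ring the two horns fall at $E_0(1\pm v_{\rm los}/c)$ with $v_{\rm los}=c\sqrt{R_{\rm G}/r}\,\sin i$, neglecting relativistic corrections (our estimate):
\begin{verbatim}
import numpy as np
E1, E2, E0 = 6.16, 6.56, 6.4        # line peaks and rest energy [keV]
v_los = 0.5 * (E2 - E1) / E0        # ~0.031 in units of c
for r in (200.0, 400.0):            # radius in gravitational radii
    i = np.degrees(np.arcsin(v_los * np.sqrt(r)))
    print(r, i)                     # i ~ 26-39 deg reproduces the peaks
\end{verbatim}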
All the possible different scenarios will be discussed in the next section.
\section{Summary and Discussion}
Our analysis of the SDSS--DR13 optical spectrum of MCG+11-11-032 (see Sect. 2) confirms the
presence of double-peaked profiles in all the strongest nebular emission lines, particularly for the [OIII] lines.
Emission line components at the systemic velocity of the
host galaxy and/or broad
wings have not been detected. The velocity offsets between
the blue and red narrow peaks are similar for all the emission line components
and the peak offset ratios ($\Delta \lambda_{\rm blue}$/$\Delta \lambda_{\rm red}$) are consistent with one.
Although these properties make MCG+11-11-032
a good dual AGN candidate \citep{wan09}, alternative and equally viable physical scenarios could
account for the optical double-peaked narrow emission line components observed in this source,
such as faint outflows/jets or gas kinematics within a single NLR.
Interestingly, besides being characterized by double-peaked narrow emission lines in the
optical spectrum, MCG+11-11-032 also belongs to the all-sky survey {\it Swift}-BAT catalogues and was
monitored several times with {\it Swift}-XRT (see Section 3).
Our analysis of the XRT light curve shows that MCG+11-11-032 clearly alternates between two main flux states on a time scale of several months.
Unfortunately, the XRT light curve data are not uniform and they are inadequate to unveil a possible periodic behavior;
they only suggest that, if a modular behavior is present, it must have a period equal or larger than about one year.
To further investigate the X-ray variability pattern of MCG+11-11-032,
we considered the still unreleased 123-month 15-150 keV BAT light curve, which provides a tight sampling of the source count rate on a total time scale of more than ten years.
The light curve is clearly variable and shows a modular behavior with different peaks and dips occurring almost every 25 months.
The XRT data analysis (count rate ratio and low and high state spectrum comparison) suggests that
the observed X-ray variability is most likely caused by intrinsic flux variations rather than by a change of obscuration
along the line of sight.
Our spectral analysis of the higher state shows that the XRT spectrum of MCG+11-11-032 is well fitted by an absorbed power-law plus a reflection component.
While the continuum is reminiscent of a typical Seyfert 2 galaxy, we did not detect any neutral Fe K$\alpha$ emission line at the expected 6.4 keV rest-frame energy.
Although this would not make MCG+11-11-032 an outlier, what makes it more interesting is the possible presence of two emission lines at rest-frame energies of: {\it E$_{\rm 1}$}=6.16$\pm$0.08 keV and
{\it E$_{\rm 2}$}=6.56$\pm$0.15 keV.
While the 6.56 keV emission can be considered only a tentative detection (2$\sigma$ significance),
we note that the $\sim$6.2 keV line is detected at high significance (more than 3$\sigma$) and
is not consistent with the rest-frame Fe K$\alpha$ energy of 6.4 keV at the 97 per cent confidence level.
\subsection{A binary SMBH at the center of MCG+11-11-032}
Although the results presented here need to be confirmed by higher quality X-ray data,
the putative modular X-ray variability combined with the possible presence of two Doppler-shifted iron emission lines
opens a further interesting possibility for MCG+11-11-032: the presence of two sub-pc scale SMBHs in the core of the source.
As a matter of fact, modular variations of the intrinsic flux in AGN constitute almost unique observational evidence for the presence of a binary SMBH at sub-parsec scale \citep[][and references therein]{cha18}. Numerous hydrodynamical simulations show that the large amount of dense gas in the central region of galaxies hosting sub-parsec scale binary systems can form a circumbinary accretion disk with an inner radius smaller than two times the binary separation. The mass accretion rate through the circumbinary disk is expected to be modulated by the orbital period of the two SMBHs, hence naturally producing modulated X-ray variations \citep{dor13,gol14,far14}.
Alternatively, modular variability in the presence of a SMBH pair could be caused by relativistic Doppler-boosts of the emission
produced in mini-disks bound to individual SMBHs \citep{dor15}. In this case the emission of the brighter mini-disk will be periodically Doppler-boosted,
and the observed time-scale of the modulated X-ray emission would correspond, also in this case, to the orbital period of the two
SMBHs.
As for MCG+11-11-032, by considering a total SMBH mass of log{\it (M/M$_{\odot}$})=8.7$\pm$0.3 \citep[derived from the CO velocity dispersion, see][]{lam17}\footnote{The SMBH mass for MCG+11-11-032, ID~\#434 in the online-only extended Table 8 of \cite{lam17}, has been derived from the velocity dispersion of the CO line.
See the relevant paper for more details.}, under the hypothesis of a
SMBH pair, the observed modular time-scale (i.e. about 25 months) would imply a sub-pc separation between the two SMBHs,
with an orbital velocity of a few per cent of the speed of light ($\Delta v$$\sim$0.06{\it c}).
In spite of the limited statistical significance of the X-ray results presented here, what makes the sub-pc SMBH pair a viable hypothesis in the case of MCG+11-11-032 is
the complete agreement between the orbital velocity derived from the BAT light curve and the velocity offset derived from the rest--frame $\Delta E$ between the two X-ray line peaks
in the XRT spectral data (i.e. $\Delta v$ of the order of 3-10 per cent of the speed of light, with a best fit value of about 0.06{\it c}).
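This estimate follows from Kepler's third law for a circular orbit; an order-of-magnitude check (ours) with the total mass and period quoted above gives:
\begin{verbatim}
import numpy as np
G, c = 6.674e-11, 2.998e8               # SI units
M = 10**8.7 * 1.989e30                  # total SMBH mass [kg]
P = 25.0 * 30.44 * 86400.0              # 25 months [s]
a = (G * M * P**2 / (4.0 * np.pi**2))**(1.0 / 3.0)
v = 2.0 * np.pi * a / P                 # relative orbital velocity
print(a / 3.086e16, v / c)              # ~6e-3 pc and ~0.06 c
\end{verbatim}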
Interestingly, as discussed in the previous section, the two X-ray emission lines may be either reproduced by rotational effects of a
Keplerian disk at a distance well consistent with the putative circumbinary disk ($\sim$200-400 {\it R}$_{\rm G}$) or by two different gas structures bound to individual SMBHs.
At these small separations, binary SMBHs may stall for a significant fraction of the Hubble time \citep[see e.g.][]{col14}, much longer than the typical galaxy merger time-scale \citep[see e.g.][]{boy08,hop10}. In these systems the optical narrow lines would come from a much larger zone enveloping the binary system \citep{beg80}, tracing the velocity of the post-merger galaxy. Under the binary hypothesis, the double-peaked emission lines observed in MCG+11-11-032 could thus be explained by gas kinematics related to a NLR moving at the velocity of the almost edge-on host galaxy disk at sub-kpc scale.
\subsection{Alternative scenarios}
Alternative physical scenarios proposed on the
basis of the optical data alone (see Section 2) are unlikely to explain both the optical and the X-ray properties of this source. In particular, the presence of two SMBHs at larger scales (e.g. sub-kpc distance) would be at odds with the distance and velocity
derived by the X-ray data. Similarly, although the presence of a precessing outflow/jet
would explain the presence of optical and X-ray double-peaked lines, this hypothesis remains unlikely on the basis of the expected precession period for a jet with a single SMBH
of mass of 10$^8$ M$_{\odot}$, i.e. 10$^{2.2}$-10$^{6.5}$ years \citep[see][]{lu05}. On the other hand, shorter periods are possible
for jet precession related to a SMBH binary system \citep[][and reference therein]{gra15}.
The optical double-peaked profiles could be caused by almost edge-on NLR disk gas kinematics ionized by a single SMBH, where the X-ray emission lines may be produced by the outer parts of a co-planar accretion disk.
In this case, modular X-ray behavior could be justified by the presence of a warped accretion disk which modulates
the intrinsic luminosity as it precesses \citep{gra15}.
However, even under this hypothesis, the precession time-scale of a self-gravitating warped disk around a single SMBH of mass of order of 10$^8$ M$_\odot$ is much larger ($\sim$50 yr) than the putative modular time scale observed for MCG+11-11-032 \citep[see][]{tre14}.
\section{Conclusions}
Although higher quality X--ray data are mandatory to confirm and better characterize the observed X-ray emission lines
and to confirm the X-ray modular behavior, the results presented here make MCG+11-11-032 a promising binary SMBH candidate.
Due to the stringent spatial resolution requirements, confirming the presence of a sub--parsec binary SMBH is extremely challenging both with present,
i.e. the Hubble Space Telescope, and future new generation telescopes, i.e. the European Extremely Large Telescope and
the James Webb Space Telescope. However, these new upcoming facilities will
significantly improve our statistics on the binary/dual SMBH population thus providing the data for a detailed study of this class of objects and of the
multi-wavelength properties of their host galaxies.
In addition, future spatially--resolved spectral observations
could confirm the origin of the double peaked optical emission lines, and the location of their emitting regions with respect to the central nucleus hosting one or possibly two SMBHs.
Although the interpretation proposed here is admittedly in part speculative, this paper clearly shows the high capability of X-ray data
in unveiling SMBH pair candidates also in obscured sources.
In particular, the still on going {\it Swift}-BAT all--sky monitoring will
allow us to investigate the hard X-ray light curve of MCG+11-11-032
on even longer time scales and to definitively confirm its periodic-like behavior.
Higher quality X-ray spectra are also necessary to better characterize the observed X-ray emission lines and thus
confirm on more solid ground the scenario proposed here.
MCG+11-11-032 is an intriguing source as it could be the first case in which X-ray data unveil the presence of a sub-parsec binary SMBH on
the basis of a double-peaked Fe K$\alpha$ emission line. We note that such kinds of profiles will be easily detected with the advent of the X-ray calorimeters such as those developed for XARM and {\it Athena}.
\section*{Acknowledgements}
We thank the anonymous referee for the useful and constructive comments which improved the quality of the paper.
CC acknowledges funding from the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant No 664931.
PS thanks P. Saracco, M. Dotti and A. Wolter for the useful discussions and helpful suggestions.
This work is based on data supplied by the UK {\it Swift} Science Data Centre at the University of Leicester.
We made use of the Palermo BAT Catalogue and database operated at INAF - IASF Palermo.
The SDSS is managed by the Astrophysical Research Consortium for the Participating Institutions of the SDSS Collaboration
(see http://www.sdss.org/collaboration/citing-sdss/).
\bibliographystyle{mnras}
|
2,877,628,088,387 | arxiv | \section{Introduction and summary}
In this paper we investigate the existence and uniqueness of solutions to two related sets of equations. The first set consists of algebraic equations for $N$ analytic functions $Y_n$, and is an example of so-called Y-system functional equations. The second set consists of coupled
nonlinear integral equations for $N$ functions $f_n$, called thermodynamic Bethe ansatz equations, or TBA equations for short.
Y-systems and their relation to
TBA equations
first appeared in \cite{Zamo:ADE}.
We will now describe these two sets of equations in more detail and state our existence and uniqueness results.
Afterwards we outline the standard argument connecting the Y-system to the TBA equations and stress the points where our approach differs from earlier works.
\subsubsection*{Main results}
Let us start by fixing some notation which we need to formulate the results and which we will use throughout this paper.
\begin{notation}\label{intro-notation}~
\begin{itemize}[leftmargin=1em]
\item Let $\mathbb{K}$ stand for $\mathbb{R}$ or $\mathbb{C}$.
We denote by \[BC(\mathbb{R},\mathbb{K})\] the functions from $\mathbb{R}$ to $\mathbb{K}$ which are continuous and bounded, and by
$$
BC_-(\mathbb{R},\mathbb{R})
$$
the continuous real-valued functions on $\mathbb{R}$ which are bounded from below.
\item For $a>0$ we denote by $\mathbb{S}_a := \lbrace z\in\mathbb{C} | -a<\mathrm{Im}(z)<a\rbrace$ the open horizontal strip in $\mathbb{C}$ of height $2a$, and by $\overline{\mathbb{S}}_a$ its closure.
We define the spaces of functions
$$
\mathcal{A}(\mathbb{S}_a) ~\supset~ \mathcal{BA}(\mathbb{S}_a) \ ,
$$
where $\mathcal{A}(\mathbb{S}_a)$ is the space of $\mathbb{C}$-valued functions which are analytic on $\mathbb{S}_a$ and have a continuous extension to
$\overline{\mathbb{S}}_a$,
and $\mathcal{BA}(\mathbb{S}_a)$ consists of
those functions in $\mathcal{A}(\mathbb{S}_a)$
which are in addition bounded on
$\overline{\mathbb{S}}_a$.
\item
We fix the constants
$$
N \in \mathbb{Z}_{>0}
\quad ,
\quad
s \in \mathbb{R}_{>0} \ .
$$
\item We denote by
$$
\mathrm{Mat}_{<2}(N) \subset \mathrm{Mat}(N,\mathbb{R})
$$
the subset of real-valued $N\times N$
matrices which can be diagonalised over the real numbers, and whose eigenvalues lie in the open interval $(-2,2)$.
\end{itemize}
\end{notation}
We can now describe the Y-system. For $\vec{G} \in \mathrm{Mat}(N,\mathbb{R})$ (with entries $G_{nm}$) and
$Y_1,\dots,Y_N \in \mathcal{A}(\mathbb{S}_s)$,
the Y-system is the set of functional equations
\be\label{Y}
Y_n(x+is)Y_n(x-is)=\prod_{m=1}^N \left(1+Y_m(x)\right)^{G_{nm}} \ ,\tag{Y}
\ee
for $n=1,\dots,N$ and all $x \in \mathbb{R}$.
If $\vec{G}$ is not integer valued, one needs to give a prescription for how to deal with the multi-valuedness of the right hand side. We will later do this by demanding that the $Y_n$ be positive and real-valued on the real axis.
If $Y_n\in\mathcal{A}(\mathbb{S}_s)$ has no zeros,
we may pick $h_n\in\mathcal{A}(\mathbb{S}_s)$ such that $Y_n(z)=e^{h_n(z)}$ for all
$z\in\overline{\mathbb{S}}_s$.
We write $h_n(z)=\log Y_n(z)$.
We can think of a function $a_n\in\mathcal{A}(\mathbb{S}_s)$ as capturing the asymptotic behaviour of $Y_n(z)$ if $\log Y_n(z)-a_n(z)$ is bounded on
$\overline{\mathbb{S}}_s$.
This condition is independent of the branch choice
for the logarithm.
To formulate our first main theorem, we need to single out a certain type of asymptotic behaviour.
\begin{definition}
We call $\vec{a}\in\mathcal{A}(\mathbb{S}_s)^N$ (with components $a_n(z)$)
a {\sl valid asymptotics for \eqref{Y}} if
\begin{enumerate}
\item for $n=1,...,N\, $, $a_n$ is real valued on $\mathbb{R}$ and the functions $e^{-a_n(x)}$ and $\frac{d}{dx}e^{-a_n(x)}$ are bounded on $\mathbb{R}$,
\item $\vec a$ satisfies
\be\label{linear-eqn-for-asympt}
\vec{a}(x+is)+\vec{a}(x-is) = \vec{G}\cdot \vec{a}(x)
\qquad \text{ for all } x \in \mathbb{R} \ .
\ee
\end{enumerate}
\end{definition}
Recall from Perron-Frobenius theory that
a real $N\times N$ matrix is called {\sl non-negative} if all its entries are $\ge 0$, and {\sl irreducible} if there is no permutation of the standard basis
which makes it block-upper-triangular.
Our first main result is the following existence and uniqueness statement.
\begin{theorem}\label{main-theorem-Y}
Let $\vec{G}\in\mathrm{Mat}_{<2}(N)$ be non-negative and irreducible, and $\vec{a}\in\mathcal{A}(\mathbb{S}_s)^N$ a valid asymptotics for \eqref{Y}.
Then there exists a solution
$Y_1,\dots,Y_N \in \mathcal{A}(\mathbb{S}_s)$ to \eqref{Y} which satisfies, for $n=1,\dots,N$ ,
\begin{enumerate}
\item \label{Yproperties:real}
$Y_n(\mathbb{R})\subseteq\mathbb{R}_{>0}$~,
\hfill\emph{(real\,\&\,positive)}
\item \label{Yproperties:roots} $Y_n(z)\neq 0$ for all $z\in\overline{\mathbb{S}}_s$~.
\hfill\emph{(no roots)}
\item \label{Yproperties:asymptotics}
$ \log Y_n(z)-a_n(z) \in\mathcal{BA}(\mathbb{S}_s)$.\hfill\emph{(asymptotics)}
\end{enumerate}
Moreover, this solution is the unique one in $\mathcal{A}(\mathbb{S}_s)$ which satisfies properties \ref{Yproperties:real}--\ref{Yproperties:asymptotics}.
\end{theorem}
Recall that the logarithm in property~\ref{Yproperties:asymptotics} exists on all of $\overline{\mathbb{S}}_s$ since, by property~\ref{Yproperties:roots}, $Y_n$ has no zeros, and that property~\ref{Yproperties:asymptotics} is not affected by the choice of branch for the logarithm.
Even though the unique solution $Y_1,\dots,Y_N$ is initially only defined on $\overline{\mathbb{S}}_s$, using \eqref{Y} and property~\ref{Yproperties:roots}, it is easy to see that $Y_n$ can be analytically continued at least to $\overline{\mathbb{S}}_{3s}$.
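Indeed, rearranging \eqref{Y} gives, schematically,
\be
Y_n(z+2is) ~=~ Y_n(z)^{-1}\prod_{m=1}^N \big(1+Y_m(z+is)\big)^{G_{nm}} \ ,
\ee
so the values of $Y_n$ on $\overline{\mathbb{S}}_s$ determine the continuation to $\overline{\mathbb{S}}_{3s}$, with property~\ref{Yproperties:roots} ensuring that the right hand side is well-defined.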
\medskip
One important valid asymptotics for \eqref{Y} is simply $\vec{a}=0$,
in which case the $Y_n$ themselves are bounded. We will see in Corollary~\ref{unique-constant-sol} below that in this case the $Y_n$ are in fact constant.
The Perron-Frobenius eigenvector $\vec{w}$ of $\vec{G}$ provides a whole family of valid asymptotics.
By our assumptions on $\vec G$, $\vec{w}$ can be chosen to have positive entries and its eigenvalue lies strictly between $0$ and $2$
(see Theorem \ref{PF}).
Then for any choice of $\gamma \in \mathbb{R}_{>0}$ such that $\vec{G}\cdot\vec{w} = 2 \cos(\gamma) \vec{w}$, the functions
\be \label{asymptoticexample}
\vec{a}(x) = e^{\gamma x/s} \, \vec{w}
\qquad \text{and} \qquad
\vec{a}(x) = e^{- \gamma x/s} \, \vec{w}
\ee
are valid asymptotics for \eqref{Y}.
We can also take linear combinations with positive coefficients; in particular the symmetric choice
$\vec{a}(x) = r \cosh(\gamma x/s) \, \vec{w}$
is considered frequently in the context of massive relativistic quantum field theory, where $\vec{w}$ takes the role of the mass vector
and $r>0$ represents the volume.
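To verify that the functions in \eqref{asymptoticexample} satisfy \eqref{linear-eqn-for-asympt}, note that $a_n(x\pm is)=e^{\pm i\gamma}e^{\gamma x/s}w_n$, so that
\be
\vec{a}(x+is)+\vec{a}(x-is) ~=~ \left(e^{i\gamma}+e^{-i\gamma}\right)e^{\gamma x/s}\,\vec{w}
~=~ 2\cos(\gamma)\,e^{\gamma x/s}\,\vec{w} ~=~ \vec{G}\cdot\vec{a}(x) \ ;
\ee
the first condition in the definition of a valid asymptotics holds since $\gamma$ is real and all entries $w_n$ are positive.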
\medskip
Next we discuss the TBA equations. Let $\vec{C} \in \mathrm{Mat}_{<2}(N)$ and consider the following Fourier transform of a matrix-valued function,
for $x \in \mathbb{R}$,
\be\label{phiC-def-intro}
\Phi_{\vec{C}}(x)~=~\frac{1}{2\pi}\int_{-\infty}^{\infty}
\hspace{-.4em}
e^{ikx}
\, \big( 2\cosh(sk)\mathbf{1}-\vec{C} \big)^{-1} \,dk \ .
\ee
The integral is well defined since by the condition on the eigenvalues of $\vec{C}$, the components of the integrand are Schwartz-functions.
Then the
components of $\Phi_{\vec{C}}$ are also Schwartz-functions which moreover are real and even.
See Section~\ref{section-NdimGreen} for more details on $\Phi_{\vec{C}}$.
For $\vec{a} \in BC_-(\Rb,\Rb)^N$, $\vec G \in \mathrm{Mat}(N,\mathbb{R})$, and $\vec{C}$ as above, the TBA equation is the following nonlinear integral equation for a vector-valued function $\vec{f} \in BC(\mathbb{R},\mathbb{R})^N$:
\be\label{TBAintro}
\vec{f}(x)
~=\,
\int_{-\infty}^{\infty}
\hspace{-.5em}
\Phi_{\vec C}(x-y)\cdot\Big(\vec{G}\cdot\log\left(e^{-\vec{a}(y)}+e^{\vec{f}(y)}\right)-\vec{C}\cdot\vec{f}(y)\Big) \;dy \ . \tag{TBA}
\ee
Here we used the short-hand notation $\log\left(e^{-\vec{a}(y)}+e^{\vec{f}(y)}\right)$ to denote the function $\mathbb{R} \to \mathbb{R}^N$ with entries
\be\label{functions-of-a-vector-convention}
\big[ \log\left(e^{-\vec{a}(y)}+e^{\vec{f}(y)}\right) \big]_j :=
\log\left(e^{-a_j(y)}+e^{f_j(y)}\right) \ .
\ee
The integral \eqref{TBAintro} is well-defined because the components of $\vec a$ are bounded from below, $\vec f$ is bounded and the components of $\Phi_{\vec{C}}$ are Schwartz-functions.
Recall that a function
$f : \mathbb{R} \to \mathbb{K}$
is called \textsl{H\"older continuous}
if there exist $0<\alpha\leq 1$ and $C>0$, such that
\be
\left|f(x)-f(y) \right|\leq C \left|x-y\right|^\alpha \hspace{1cm} \text{for all}~~ x,y\in\mathbb{R}\ .
\ee
If $\alpha=1$, then $f$ is called
\textsl{Lipschitz continuous}.
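For example, $x\mapsto\sqrt{|x|}$ is H\"older continuous with exponent $\alpha=\tfrac12$ (one may take $C=1$), but not Lipschitz continuous.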
Our second main result is:
\begin{theorem}\label{main-theorem-TBA} Let $\vec{G} \in \mathrm{Mat}_{<2}(N)$ be non-negative and irreducible, $\vec{C} \in \mathrm{Mat}_{<2}(N)$, and $\vec{a}\in BC_-(\mathbb{R},\mathbb{R})^N$ such that the components of $e^{-\vec{a}}$ are H\"older continuous.
Then the following holds:
\begin{enumerate}[label=\roman*)]
\item\label{mainTBAunique} The TBA equation \eqref{TBAintro} has a unique solution $\vec{f}_\star \in BC(\mathbb{R},\mathbb{R})^N$. The function $\vec{f}_\star$ is independent of the choice of $\vec{C}$.
\item\label{mainTBAsolvesY} $\vec{f}_\star$ can be extended to a function in
$\mathcal{BA}(\mathbb{S}_s)^N$,
which we denote also by $\vec{f}_\star$.
If $\vec{a}$ can be extended to a valid asymptotics, then the functions
$Y_n(z) = e^{a_n(z)+f_{\star,n}(z)}$,
for $z \in \overline{\mathbb{S}}_s$ and $n=1,\dots,N$, provide the unique solution to \eqref{Y} with the properties as stated in Theorem~\ref{main-theorem-Y}.
\end{enumerate}
\end{theorem}
In the case $N=1$, $\vec{G}=\vec{C}=1$ and $\vec{a} = r\cosh(x)$,
the existence and uniqueness of $\vec{f}_\star$ was already shown in \cite{FringKorffSchulz} (see discussion in Section~\ref{N=1-example-section}).
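Although not needed for any of the proofs, it may be helpful to see the fixed-point structure behind Theorem~\ref{main-theorem-TBA} in action. The following numerical sketch iterates \eqref{TBAintro} for $N=1$, $G=1$ and $a(x)=r\cosh(x)$, with the choice $C=\tfrac12$ for which the iteration map is a contraction (cf.\ Proposition~\ref{TBAuniquenessDynkin}); by part~\ref{mainTBAunique} of Theorem~\ref{main-theorem-TBA}, the solution does not depend on this choice. The kernel is evaluated with the closed form \eqref{explicit_phi_d} from Remark~\ref{rem:should-have-found-this-earlier} below; the cutoff $L$, the grid size $M$ and the tolerance are ad hoc illustrative choices.
\begin{verbatim}
import numpy as np

s, r, C = 1.0, 1.0, 0.5            # N=1, G=1; C = G/2
L, M = 20.0, 2001                  # truncation of the real line, grid points
x = np.linspace(-L, L, M)
dx = x[1] - x[0]

# kernel phi_C from the closed form (explicit_phi_d), with C = 2*cos(gamma)
gamma = np.arccos(C / 2.0)
with np.errstate(invalid='ignore'):
    phi = (np.sinh((np.pi - gamma) * x / s)
           / (2.0 * s * np.sin(gamma) * np.sinh(np.pi * x / s)))
phi[M // 2] = (np.pi - gamma) / (2.0 * np.pi * s * np.sin(gamma))  # limit at x=0

a = r * np.cosh(x)
f = np.zeros_like(x)
for it in range(100):
    g = np.log(np.exp(-a) + np.exp(f)) - C * f     # right hand side of (TBA)
    f_new = dx * np.convolve(phi, g, mode='same')  # discretised convolution
    if np.max(np.abs(f_new - f)) < 1e-12:
        break
    f = f_new
print(it, f[M // 2])   # iteration count and the central value f_*(0)
\end{verbatim}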
Theorems~\ref{main-theorem-Y} and~\ref{main-theorem-TBA}, as well as Corollary~\ref{unique-constant-sol} below, will be proved in Section~\ref{section:uniquenessY}.
\medskip
Next we specialise Theorems~\ref{main-theorem-Y} and~\ref{main-theorem-TBA} to the case $\vec{a}=0$. From the proofs of these theorems we get the following corollary.
\begin{corollary}\label{unique-constant-sol}
For $\vec{a}=0$, the unique solution $\vec{f}_\star$ from Theorem~\ref{main-theorem-TBA} is constant, and correspondingly, the unique solution $Y_1,\dots,Y_N$ from Theorem~\ref{main-theorem-Y} is constant.
\end{corollary}
\begin{remark}\label{remark-constantY}
~
\begin{enumerate}[label=\roman*)] \setlength{\leftskip}{-1em}
\item
The constant case is interesting in itself. The functional equations \eqref{Y} turn into the {\sl constant Y-system}
\be\label{Yconst}
Y_n^{\,2}
~=~\prod_{m=1}^N \left(1+Y_m\right)^{G_{nm}} \ ,
\ee
for $Y_1,\dots,Y_N \in \mathbb{C}$.
Suppose $\vec{G} \in \mathrm{Mat}_{<2}(N)$ is non-negative and irreducible as in Theorem~\ref{main-theorem-Y}.
Since a real and positive solution to \eqref{Yconst} also solves \eqref{Y}
and satisfies conditions~\ref{Yproperties:real}--\ref{Yproperties:asymptotics}
in Theorem~\ref{main-theorem-Y} (for $\vec a =0$),
by Corollary~\ref{unique-constant-sol} the constant Y-system
\textsl{has a unique positive solution}
(a numerical illustration is sketched below, after this remark).
This extends a result of \cite{Nahm:2009hf,Inoue:2010a},
where symmetric and positive definite $\vec{G}$ were considered, as well as adjacency matrices of finite Dynkin diagrams.
\item
If $\vec{G}$ is the adjacency matrix of a finite Dynkin diagram,
explicit trigonometric expressions for the unique positive solution to
the constant Y-system (and more general versions thereof)
are known or conjectured, see \cite{Kirillov:1989} and \cite[Sec.\,14]{KunibaNakanishiSuzuki}.
\item\label{spectralradius2isbad}
If $\vec{G}$ has spectral radius
$\ge 2$,
it is shown in \cite[Sec.\,4]{Tateo:DynkinTBAs} that the constant Y-system
\eqref{Yconst} does not possess a real positive solution at all.
This shows that for $\vec a=0$, the condition on the spectral radius in Theorems~\ref{main-theorem-Y} and~\ref{main-theorem-TBA} is sharp.
\end{enumerate}
\end{remark}
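To illustrate the statement about the constant Y-system in Remark~\ref{remark-constantY}, consider the adjacency matrix of the $A_3$ Dynkin diagram as a sample choice of $\vec{G}$ (non-negative, irreducible, spectral radius $\sqrt{2}<2$). Taking square roots in \eqref{Yconst} yields the fixed-point iteration used in the following purely illustrative sketch; it converges to $(Y_1,Y_2,Y_3)=(2,3,2)$, which one checks directly to be the positive solution of \eqref{Yconst} in this case.
\begin{verbatim}
import numpy as np

# adjacency matrix of the A_3 Dynkin diagram
G = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
Y = np.ones(3)                                # positive starting vector
for _ in range(200):
    Y = np.exp(0.5 * (G @ np.log(1.0 + Y)))   # Y_n = prod_m (1+Y_m)^(G_nm / 2)
print(Y)   # [2. 3. 2.]
\end{verbatim}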
\subsubsection*{Background on Y-systems and TBA equations}
The Thermodynamic Bethe Ansatz
was developed to study thermodynamic properties of a gas of particles moving on a circle \cite{YangYang69}. The version for relativistic particles whose scattering matrix factorises into a product of two-particle scattering matrices was given in \cite{Zamolodchikov:1989cf}. The reformulation as a Y-system was first described in \cite{Zamo:ADE}. A review of Y-systems and their applications can be found for example in \cite{KunibaNakanishiSuzuki}. Below we sketch the transformations linking the Y-system and the TBA equation, see \cite{Zamo:ADE} and e.g.\ \cite{Tateo:DynkinTBAs,Dorey:2007zx,vanTongeren:TBAreview}.
We note that while our proof follows the standard steps,
we are not aware of a previous proof in the literature of the correspondence between Y-systems and TBA equations
in which all analytic questions are carefully addressed. To provide all these details was one of the motivations to write the present paper.
\medskip
Rewrite $Y_n$ in \eqref{Y} as $Y_n(z)=\exp(f_n(z)+a_n(z))$, where $f_n(z)$ are bounded functions and $a_n(z)$
are valid asymptotics for \eqref{Y}.
Upon taking the logarithm and using \eqref{linear-eqn-for-asympt}, one verifies that the $a_n(z)$ cancel out and one is left with the set of finite difference equations
\be
f_n(x+is)+f_n(x-is)=\sum_{m=1}^N G_{nm}\log\left(e^{-a_m(x)}+e^{f_m(x)}\right)\ .
\ee
Even though it looks like a trivial modification of the above equation, it will be crucial for us to add a linear term in $\vec{f}$ to both sides (we switch to vector notation to avoid too many indices, recall also the convention \eqref{functions-of-a-vector-convention}),
\be\label{intro-nonlinear-rhs}
\vec{f}(x+is)+\vec{f}(x-is) -\vec{C}\cdot\vec{f}(x)
=
\vec{G}\cdot\log\left(e^{-\vec{a}(x)}+e^{\vec{f}(x)}\right)-\vec{C}\cdot\vec{f}(x)
\ .
\ee
To get rid of the nonlinearity for a while, we replace the $\vec f$-dependent function on the right hand side simply by a suitable function $\vec{g}$,
\be\label{intro-f+f-Cf=g}
\vec{f}(x+is)+\vec{f}(x-is) -\vec{C}\cdot\vec{f}(x) = \vec{g}(x) \ .
\ee
The difference equation can now be solved by a Green's-function-like approach. Namely, the function $\Phi_{\vec C}$ from \eqref{phiC-def-intro} gives rise to a representation of the
Dirac $\delta$-distribution (see Lemma~\ref{Phiproperties} for details):
\be
\Phi_{\vec C}(x+is)+\Phi_{\vec C}(x-is)-\vec{C}\cdot \Phi_{\vec C}(x) = \delta(x)\mathbf{1}_N \ .\ee
This allows one to write a solution to the functional equation \eqref{intro-f+f-Cf=g} as an integral,
\be\label{intro_convolution-to-solve-difference}
\vec{f}(x)=\int_{-\infty}^{\infty}\Phi_{\vec C}(x-y)\cdot\vec{g}(y) \;dy \ .
\ee
The only detailed proof of this that we are aware of is \cite[Lem.\,2]{TraceyWidom}, which treats the
case
$N=1$ and $\vec{C}=0$ and imposes
a decay condition on $\vec{f}(x)$ for $x\rightarrow\pm\infty$.
Therefore, in Section~\ref{section-soldefeqn} we give a proof in the generality we require.
Substituting the right hand side of \eqref{intro-nonlinear-rhs} for $\vec g$ in \eqref{intro_convolution-to-solve-difference}
produces \eqref{TBAintro}.
\begin{remark} \label{remark-S-matrix-relation}
In the case $\vec{C}=0$, the matrix $\Phi_{\vec C}(x)$ is
proportional to the identity matrix
and corresponds to the \textsl{standard kernel}
(often denoted by $s$) which produces the
\textsl{universal} or \textsl{simplified TBA equations}
of the physics literature.
The case $\vec{C}=\vec{G}$ yields the canonical TBA equations which emphasise the relation to the relativistic
scattering matrix $\vec{S}(x)$ if such is available (see Remark~\ref{remark-Smatrix-exists}).
Specifically, we have (see e.g.\ \cite{Dorey:2007zx})
\be\label{PhiG-logS}
\left[\Phi_{\vec G}(x)\cdot \vec{G}\right]_{nm} = \frac{i}{2\pi}\frac{d}{dx}\log\left(S_{nm}(x)\right)\ .
\ee
More details and references can be found e.g.\ in \cite{Zamo:ADE,Tateo:DynkinTBAs,vanTongeren:TBAreview}.
Note that our Green's function $\Phi_{\vec G}(x)$ has to be multiplied with $\vec{G}$ to obtain the canonical kernel used in the physics TBA-literature.
In this paper, we consistently treat the factor $\vec{G}$ not as part of the kernel, but absorb it in the function $\vec{g}(x)$. This is a natural convention for $\vec{C}=0$, and we preserve this convention for general $\vec{C}$.
\end{remark}
Allowing for arbitrary $\vec{C} \in \mathrm{Mat}_{<2}(N)$,
not just $\vec C = 0$ or $\vec G$, is one place where we work in greater generality
than the physics literature we are aware of.
For us it will be important to choose $\vec{C} = \tfrac 12 \vec{G}$, as this will allow us to apply the Banach Fixed Point Theorem to find a unique solution to the integral equation (Proposition~\ref{TBAuniquenessDynkin}).
The other place where we allow for greater generality than considered before is in the choice of the matrix $\vec G$, which can have non-negative real entries, rather than
just
integers.
In the case of non-negative integer entries, the $\vec{G}$ which fit our assumptions include in particular the adjacency matrices of finite Dynkin diagrams
or tadpole graphs ($T_N=A_{2N}/\mathbb{Z}_2$) -- these are called \textsl{Dynkin TBAs} in \cite{Tateo:DynkinTBAs}.
\medskip
Existence of a solution to equations similar to \eqref{TBAintro} has in some cases also been established constructively. In \cite{YangYang69} and \cite{Lai}, solutions to some specific TBA equations (albeit with $\Phi_{\vec{C}}(x)$ replaced by functions substantially different from ours) are constructed from a specific starting function by iterating the equations and showing uniform convergence.
A different approach to existence and uniqueness of solutions to the TBA equation \eqref{TBAintro} is suggested in a footnote in \cite[Sec.\,3.2]{KlassenMelzer91}, where the authors propose to use a fixed point theorem due to Leray, Schauder and Tychonoff. A detailed proof is, however, not provided.
Various methods to solve equations of the general form \eqref{TBAintro}, so-called Hammerstein equations, are discussed in \cite{AppellChen}, including several fixed point theorems. In particular, Example~1 therein is similar in spirit to \eqref{TBAintro}.
There are also other types of nonlinear integral equations relevant to the study of integrable models, which often share many features with TBA equations.
Existence and uniqueness of solutions to such an equation of Destri-de-Vega type in the XXZ model has been investigated in \cite{Kozlowski}.
\subsubsection*{Structure of paper}
In Section~\ref{section-soldefeqn} we give a detailed statement and proof of the above claim that \eqref{intro_convolution-to-solve-difference} solves \eqref{intro-f+f-Cf=g}, see Proposition~\ref{FuncRelToNLIE}. In Section~\ref{section-unique-integral-soln} we apply the Banach Fixed Point Theorem to obtain a unique solution to \eqref{TBAintro} in the case $\vec C = \tfrac12 \vec G$, see Proposition~\ref{TBAuniquenessDynkin}. Section~\ref{section:uniquenessY} contains the
proofs of Theorems~\ref{main-theorem-Y}, \ref{main-theorem-TBA} and of Corollary~\ref{unique-constant-sol}.
Finally, in Section~\ref{section-outlook} we discuss some open questions.
\subsubsection*{Acknowledgements}
We would like to thank
Nathan Bowler,
Andrea Cavagli\`a,
Patrick Dorey,
Clare Dunning,
Andreas Fring,
Annina Iseli,
Karol Kozlowski,
Louis-Hadrien Robert,
Roberto Tateo,
J\"org Teschner,
Stijn van Tongeren,
Alessandro Torrielli,
Beno\^\i t Vicedo,
and
G\'erard Watts
for helpful discussions and comments.
LH is supported by the
SFB 676 ``Particles, Strings, and the Early Universe'' of the German Research Foundation (DFG).
\section{Solution to a set of difference equations}
\label{section-soldefeqn}
For two functions $\vec{F} : \mathbb{C} \to \mathrm{Mat}(N,\mathbb{C})$ with components $F_{nm}$ and $\vec{g}:\mathbb{R}\rightarrow\mathbb{C}^N$ with components $g_n$ let us formally denote by
\be\left(\vec{F}\star \vec{g}\right)(z):=\int_{-\infty}^\infty \vec{F}(z-t)\cdot \vec{g}(t)\ dt
\ee
the convolution product, where the components of the integrand are
$[\vec{F}(z-t)\cdot \vec{g}(t)]_n = \sum_{m=1}^N F_{nm}(z-t) g_m(t)$.
In this section we will prove the following important proposition which will allow us to relate finite difference equations and integral equations. Recall from Notations~\ref{intro-notation} the definition of the subset
$\mathrm{Mat}_{<2}(N) \subset \mathrm{Mat}(N,\mathbb{R})$, and that we fixed constants $N \in \mathbb{Z}_{>0}$ and $s \in \mathbb{R}_{>0}$. Recall also the definition of $\Phi_{\vec C}$ from \eqref{phiC-def-intro}.
\begin{proposition}\label{FuncRelToNLIE}
Let $\vec{C}\in\mathrm{Mat}_{<2}(N)$,
$\vec{f} :\mathbb{R}\rightarrow\mathbb{C}^N$ and $\vec{g} \in BC(\mathbb{R},\mathbb{C})^N$.
Consider the following two statements:
\begin{enumerate}
\item \label{ffg-funrel} $\vec{f}$ is real analytic and can be analytically continued to a function $\vec{f}\in \mathcal{BA}(\mathbb{S}_s)^N$ satisfying the functional equation
\be\label{ffuncrel}
\vec{f}(x+is)+\vec{f}(x-is) -\vec{C}\cdot\vec{f}(x) = \vec{g}(x)
\qquad \text{for all}~~ x\in\mathbb{R}\ .
\ee
\item \label{ffg-inteq} $\vec{f}$ and $\vec{g}$ are related via the convolution
\be\label{fconv}
\vec{f}(x) = \left(\Phi_{\vec C}\star \vec{g}\right)(x)\hspace{1cm} \qquad \text{for all}~~ x\in\mathbb{R} \ .
\ee
\end{enumerate}
We have that \ref{ffg-funrel} implies \ref{ffg-inteq}. If the components of $\vec g$ are in addition H\"older continuous, then \ref{ffg-inteq} implies \ref{ffg-funrel}.
\end{proposition}
Such results have been widely used in the physics literature, but the only rigorous proof we are aware of is in \cite[Lem.\,2]{TraceyWidom} for the special case $N=1$ and
$\vec{C}=0$, and under
a decay condition on $\vec{f}(x)$.
Hence, we will give a complete proof here.
The basic reasoning of our proof is the same as in the physics literature, as outlined in the introduction. We take care to prove all the required analytical properties, which to our knowledge has not been done before in this generality. We also note that the observation that Proposition~\ref{FuncRelToNLIE} applies to all $\vec{C} \in \mathrm{Mat}_{<2}(N)$
(rather than just $\vec{C}=0$ and
adjacency matrices of certain graphs) seems to be new.
The proof relies on a number of ingredients developed in Sections~\ref{GreensChapter}--\ref{section:phiconvolution}. The proof
of Proposition~\ref{FuncRelToNLIE} itself is given in Section~\ref{GreenSoln}.
\subsection{The Green's function $\phi_d(z)$}
\label{GreensChapter}
In this subsection we introduce a family of meromorphic functions $\phi_d(z)$, parametrised by a real number
\be\label{d_in_(-2,2)}
d\in(-2,2) \ .
\ee
In this section we adopt the convention that whenever the parameter $d$ appears, it is understood that it is chosen from the above range.
The function $\phi_d$ will be central to our problem since it plays, in analogy with the theory of differential equations, the role of a Green's function for the difference operator
\be\label{diffop}
D[f](x) \,:=\, f(x+is) + f(x-is) - d\cdot f(x) \ .
\ee
We start by defining the function $\phi_d$ on the real axis in terms of a Fourier integral representation.
\begin{definition}
The function $\phi_d:\mathbb{R}\rightarrow\mathbb{R}$ is defined by
\be\label{deltastandardrep}
\phi_d(x):=\frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{e^{ikx}}{2\cosh(sk)-d} \, dk .
\ee
\end{definition}
Note that $\phi_d(x)$ is real and even, since it is the Fourier transform of a real and even function. Moreover, $(2\cosh(sk)-d)^{-1}$ is in the Schwartz space of rapidly decaying functions; thus also $\phi_d(x)$ is a Schwartz function, and Fourier inversion holds.
\begin{example} \label{phi_0} Consider the case $d=0$. The Fourier integral can then be computed explicitly, with the result
\be
\phi_0(z) = \frac{1}{4s\cosh\left(\frac{\pi}{2s}z\right)}\ .
\ee
This function is called the \textsl{universal kernel} or \textsl{standard kernel} in the physics literature.
It has a meromorphic continuation to the whole complex plane, with poles of first order in $z=is(1+2\mathbb{Z})$.
\end{example}
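As a quick numerical sanity check (with ad hoc cutoffs and sample points; since the integrand is even, $e^{ikx}$ may be replaced by $\cos(kx)$), one can compare the Fourier representation \eqref{deltastandardrep} for $d=0$ with this closed form:
\begin{verbatim}
import numpy as np

s = 1.0
k = np.linspace(-60.0, 60.0, 200001)   # integrand decays like exp(-s|k|)
for x in [0.0, 0.7, 2.5]:
    fourier = np.trapz(np.cos(k * x) / (2.0 * np.cosh(s * k)), k) / (2.0 * np.pi)
    closed = 1.0 / (4.0 * s * np.cosh(np.pi * x / (2.0 * s)))
    print(x, fourier, closed)   # the two values agree to high precision
\end{verbatim}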
Explicit expressions for $\phi_d$ in terms of hyperbolic functions for other specific values of $d$ can be found e.g.\ in~\cite[App.\,D]{Dorey:2007zx} and \cite[Eqn.\,4.22]{BLZ4}. A general expression is given in Remark~\ref{rem:should-have-found-this-earlier} below.
Here we will proceed by deducing the analytic structure of $\phi_d(z)$ from its definition in \eqref{deltastandardrep}.
Recall that smoothness of a function is related to the rate of decay of its Fourier transform. If the decay is exponential, analytic continuation is possible; the faster the exponential decay, the further one can analytically continue:
\begin{lemma} \label{fourieranalytic} Let $f(x)$ be a function on $\mathbb{R}$ whose Fourier transform $\hat{f}(k)$ exists. Suppose there exist constants $A,a>0$ such that $|\hat{f}(k)|\leq A\exp(-a|k|)$ for all $k\in\mathbb{R}$. Then $f(x)$ has an analytic continuation to the strip $\mathbb{S}_a$.
\end{lemma}
For a proof see for example \cite[Ch.\,4,\,Thm.\,3.1]{SteinShakarchi}.
In particular, $\phi_d(x)$ can be analytically continued to the strip $\mathbb{S}_s$.
In fact, it can be continued to a meromorphic function with poles of order $\le 1$ in $is\mathbb{Z}$ (Lemma~\ref{phianalyticstructure} below). To get there we need some preparation. We start with:
\begin{lemma} \label{phianalyticcontinuation} $\phi_d(z)$ has an analytic continuation to the complex plane with two cuts,
$\mathbb{C}\setminus (i\mathbb{R}_{\geq s}\cup i\mathbb{R}_{\leq -s})$.
\end{lemma}
\begin{proof}
Take any $\theta\in(-\frac{\pi}{2},\frac{\pi}{2})$ and consider the function
\be\label{phitilde}
\tilde{\phi}_d(z)=e^{-i\theta}\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ike^{-i\theta}z}\left(2\cosh\left(ske^{-i\theta}\right)-d\right)^{-1}dk\ .
\ee
By Lemma~\ref{fourieranalytic}, this integral is analytic in $z\in e^{i\theta}\mathbb{S}_{s\cdot\cos\theta}$, a strip in the $z$-plane tilted by the angle $\theta$. We claim that $\tilde{\phi}_d(z)$ and $\phi_d(z)$ coincide in the intersection $e^{i\theta}\mathbb{S}_{s\cdot\cos\theta} \cap \mathbb{S}_s$ of their respective analytic domains.
This can be checked via contour deformation. We will first show that for $z \in e^{i\theta}\mathbb{S}_{s\cdot\cos\theta} \cap \mathbb{S}_s$ we have
\be\label{phirotatecontour}
\phi_d(z) =\frac{1}{2\pi}\int_{e^{-i\theta}\mathbb{R}} e^{ikz}(2\cosh(sk)-d)^{-1}dk \ .
\ee
Since for $|d|<2$, the integrand $k \mapsto e^{ikz}(2\cosh(sk)-d)^{-1}$ has no poles away from the imaginary axis, rotating the contour by $-\theta$ does not pick up any residues. It remains to verify that there are no contributions from infinity.
We express $z=x+iy$ and $k=u+iv$ in real coordinates, in terms of which the absolute value of the integrand can be written as
\be\label{estimintegrand}
\left|\frac{e^{ikz}}{2\cosh(sk)-d}\right|=\left|\frac{e^{-(uy+vx)}}{e^{su}e^{isv}+e^{-su}e^{-isv}-d}\right| \ .
\ee
On the circular contour components one can parametrise $v=-u\tan(\tau)$, with $\tau$ running from 0 to $\theta$. When $u\rightarrow\pm\infty$, the right hand side of \eqref{estimintegrand} approaches
\be
\left|e^{-u(y\pm s)-vx}\right|=\left|e^{-u(y\pm s-\tan(\tau) x)}\right| \ .
\ee
Thus, if the inequalities
\begin{align}
y&>\tan(\tau) x -s \nonumber \\
y&<\tan(\tau) x +s
\end{align}
are satisfied for all $\tau$ between 0 and $\theta$, then the two circular integrals do indeed vanish when the radius is taken to infinity. But these inequalities just describe the strips $e^{i\tau}\mathbb{S}_{s\cdot\cos\tau}$ in the $z$-plane, and their intersection for all $\tau \in [0,\theta]$ is precisely $\mathbb{S}_s \cap e^{i\theta}\mathbb{S}_{s\cdot\cos\theta}$.
Now, substituting $k'=e^{i\theta}k$ in \eqref{phirotatecontour} shows that $\phi_d(z) = \tilde \phi_d(z)$ on the intersection of their domains, and hence $\tilde{\phi}_d(z)$ is the analytic continuation of $\phi_d(z)$ to the strip $e^{i\theta}\mathbb{S}_{s\cdot\cos\theta}$.
Consequently, $\phi_d(z)$ has an analytic continuation to the union of all of these strips,
\be
\bigcup_{\theta\in(-\frac{\pi}{2},\frac{\pi}{2})}e^{i\theta}\mathbb{S}_{s\cdot\cos\theta} = \mathbb{C}\setminus (i\mathbb{R}_{\geq s}\cup i\mathbb{R}_{\leq -s})\ .
\ee
\end{proof}
In order to understand the behaviour of $\phi_d(z)$ on the whole imaginary axis, it is natural to start with the case $d=0$ whose analytic structure we know explicitly (Example~\ref{phi_0}). When comparing $\phi_d(z)$ to $\phi_0(z)$ we will need to control the derivative $\frac{\partial}{\partial d}\phi_d(z)$.
\begin{lemma} \label{dderivative} For all $z \in \mathbb{C}\setminus (i\mathbb{R}_{\geq s}\cup i\mathbb{R}_{\leq -s})$,
the partial derivative $\frac{\partial}{\partial d}\phi_d(z)$ exists and is
an analytic function on $\mathbb{C}\setminus (i\mathbb{R}_{\geq 2s}\cup i\mathbb{R}_{\leq -2s})$. For $z\in e^{i\theta}\mathbb{S}_{2s\cdot \cos\theta}$ it has the integral representation
\be\label{eqn:dderivative}
\frac{\partial}{\partial d}\phi_d(z) = e^{-i\theta}\frac{1}{2\pi} \int_{-\infty}^\infty e^{ike^{-i\theta}z} \left(2\cosh(ske^{-i\theta}) -d\right)^{-2} dk \ .
\ee
\end{lemma}
\begin{proof}
For any $z\in e^{i\theta}\mathbb{S}_{s\cdot \cos\theta}$, consider $\phi_d(z)$ given by the integral representation \eqref{phitilde}.
Write $ke^{-i\theta}= u+iv$ with $u,v\in\mathbb{R}$. Assuming $u\geq 0$, one then estimates
\begin{align}
&\left|2\cosh\left(ske^{-i\theta}\right)-d\right|
~=~
\left|e^{su}e^{isv}+e^{-su}e^{-isv}-d\right|\nonumber\\
&\geq~
\left|e^{su}e^{isv}\right|-\left|e^{-su}e^{-isv}\right|-|d|
~\geq~ e^{su}-1-\left|d\right|
~\geq~ e^{s\cos\theta \, |k|}-3 \ .
\end{align}
The same overall estimate applies for $u<0$ as well. For $|k|$ large enough, the last expression on the right hand side becomes bigger than $\frac{1}{2}e^{s\cos\theta \,|k|}$ and we can estimate
\be\label{cosh-theta-estimate}
\left|2\cosh\left(ske^{-i\theta}\right)-d\right|^{-1}
~\le~
2 \, e^{-s\cos\theta \,|k|}
\qquad \text{(for $|k|$ large enough)} \ .
\ee
Next, writing $z = e^{i \theta} (x + i y)$ with
$x\in\mathbb{R}$ and $|y| < s\cos\theta$,
we obtain, for all $k \in \mathbb{R}$,
\be
\left|e^{ike^{-i\theta}z}\right|
~=~ e^{-ky}
~\le~ e^{s\cos\theta \,|k|} \ .
\ee
For $|k|$ large enough and $z\in e^{i\theta}\mathbb{S}_{s\cdot \cos\theta}$ we can now estimate
\begin{align}\left|\frac{\partial}{\partial d}e^{ike^{-i\theta}z} \left(2\cosh(ske^{-i\theta}) -d\right)^{-1}\right| &=
\left|e^{ike^{-i\theta}z}\right| \cdot
\left| 2\cosh(ske^{-i\theta}) -d\right|^{-2} \nonumber\\
&\leq 4 \, e^{-s\cos\theta \,|k|} \ .
\end{align}
One can choose
$k_0>0$ large enough, such that this estimate applies for all $d \in (-2,2)$
and $|k| \ge k_0$.
Let $\varepsilon>0$, and define $D:=2-\varepsilon$. Since for any $k_0>0$, the integrand of \eqref{eqn:dderivative} is continuous (and finite) as a function of $(k,d)$ in the compact region $[-k_0,k_0]\times [-D,D]$, it is in particular bounded. One can therefore find a constant $A>0$ such that the integrand of \eqref{eqn:dderivative} is bounded by $A \,e^{-s\cos\theta \,|k|}$ for all $k \in \mathbb{R}$ and $d \in [-D,D]$.
The integrand is thus majorised by the integrable function $A \,e^{-s\cos\theta \,|k|}$ for all $d \in [-D,D]$ and therefore integration and $d$-derivative can be swapped, establishing \eqref{eqn:dderivative} for all $d \in [-D,D]$ and $z\in e^{i\theta}\mathbb{S}_{s\cdot \cos\theta}$.
Since $\varepsilon>0$ was arbitrary, this extends to all $d\in(-2,2)$,
proving the first part of the claim.
Moreover, according to Lemma \ref{fourieranalytic}, the integral on the right-hand-side of \eqref{eqn:dderivative} is actually analytic for $z\in e^{i\theta}\mathbb{S}_{2s\cdot \cos\theta}$. By uniqueness of the analytic continuation it must also coincide with $\frac{\partial}{\partial d}\phi_d(z)$ on this larger domain.
\end{proof}
One consequence of the integral representation of the
$d$-derivative is the following functional relation for $\phi_d$.
\begin{lemma} \label{phifuneq} For all $x\in\mathbb{R}\setminus \lbrace 0 \rbrace$ we have
\be\label{eqn:phifuneq}\phi_d(x+is)+\phi_d(x-is)-d\phi_d(x)=0 \ .
\ee
\end{lemma}
\begin{proof}
Fix $x \in \mathbb{R}$, $x \neq 0$, and
write
\be
\mathcal{L}_x(d):=\phi_d(x+is)+\phi_d(x-is)-d\phi_d(x) \ .
\ee
We will show that $\mathcal{L}_x(d)$ solves the initial value problem \be \label{dgl36}
\mathcal{L}_x(0)=0
\quad , \quad \frac{\partial}{\partial d}\mathcal{L}_x(d) =0
\qquad \text{for all $d\in (-2,2)$} \ .
\ee
The initial condition in \eqref{dgl36} is
straightforward
to check by recalling from Example~\ref{phi_0} that $\phi_0(z)=\left(4s\cosh\left(\frac{\pi}{2s}z\right)\right)^{-1}$. In order to prove the differential equation in \eqref{dgl36}, we use the integral representation \eqref{eqn:dderivative} for
$\theta=0$ and $z\in\mathbb{S}_{2s}$:
\begin{align}
\frac{\partial}{\partial d}\mathcal{L}_x(d) &= \frac{\partial}{\partial d}\phi_d(x+is)+\frac{\partial}{\partial d}\phi_d(x-is)-\phi_d(x)-d\frac{\partial}{\partial d}\phi_d(x) \nonumber\\
&=\frac{1}{2\pi} \int_{-\infty}^\infty e^{ikx} \frac{F(k)}{\left(2\cosh(sk) -d\right)^2}dk,
\end{align}
where
\be
F(k)=e^{-ks}+e^{ks}-(2\cosh(sk)-d)-d =0 \ .
\ee
Hence, \eqref{dgl36} follows.
\end{proof}
By Lemma \ref{phianalyticcontinuation}, the function
$x \mapsto \mathcal{L}_x(d)$
has an analytic continuation
to all $z\in\mathbb{C}\setminus i\mathbb{R}$. The functional relation \eqref{eqn:phifuneq} thus extends to this domain:
\be\label{fun-rel-phid-for-z}
\phi_d(z+is)+\phi_d(z-is)-d\phi_d(z)=0
\qquad
\text{for all} \quad
z\in\mathbb{C}\setminus i\mathbb{R} \ .
\ee
Using this, we now show:
\begin{lemma} \label{phianalyticstructure}
$\phi_d(z)$ has a meromorphic continuation to the whole complex plane which satisfies:
\begin{enumerate}[label=\roman*)]
\item \label{Poles1stOrder} The poles are all of first order and form a subset of $is\mathbb{Z}\setminus\lbrace 0\rbrace$.
\item \label{extended_funrel}
For $z \in \mathbb{C} \setminus is\mathbb{Z}$ we have $\phi_d(z+is)+\phi_d(z-is)-d\phi_d(z)=0$.
\item \label{recurs}
For $z \in \mathbb{C} \setminus is\mathbb{Z}$ and $n \in \Zb$ we have
\be\label{eqn:phirecursive}
\phi_d(z+ ins) = \frac{\sin(n\gamma)}{\sin(\gamma)}\phi_d(z+is)-\frac{\sin((n-1)\gamma)}{\sin(\gamma)}\phi_d(z) \ ,
\ee
where $\gamma \in \mathbb{R}$ satisfies
$d=2\cos(\gamma)$.
\end{enumerate}
\end{lemma}
\begin{proof}~\\
$\bullet$ {\em Relation \eqref{eqn:phirecursive} holds on
$\mathbb{C}\setminus i\mathbb{R}$}: Writing $p_n(z):=\phi_d(z+ins)$, we can rewrite \eqref{fun-rel-phid-for-z} as $p_{n+1}+p_{n-1}-dp_n=0$.
It is straightforward to check that this recursion relation is solved by \be
p_n=\frac{\sin(n\gamma)}{\sin(\gamma)}p_1-\frac{\sin((n-1)\gamma)}{\sin(\gamma)}p_0\ .
\ee
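Indeed, using the identity $\sin((n+1)\gamma)+\sin((n-1)\gamma)=2\cos(\gamma)\sin(n\gamma)=d\,\sin(n\gamma)$, and the same identity with $n$ replaced by $n-1$, one computes
\be
p_{n+1}+p_{n-1} ~=~ d\,\frac{\sin(n\gamma)}{\sin(\gamma)}\,p_1 - d\,\frac{\sin((n-1)\gamma)}{\sin(\gamma)}\,p_0 ~=~ d\,p_n \ ,
\ee
while the cases $n=0,1$ reproduce the initial values $p_0$ and $p_1$.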
$\bullet$ {\em $\phi_d$ has an analytic continuation to $\mathbb{C}$ minus the points $is\mathbb{Z}\setminus\lbrace 0\rbrace$}:
The right hand side of \eqref{eqn:phirecursive} is actually analytic in $\{ z \in \mathbb{C} | -s < \mathrm{Im}(z) < 0 \}$. Hence the same holds for the left hand side, that is, $\phi_d$ is analytic on the shifted strips $\{ z \in \mathbb{C} | sn < \mathrm{Im}(z) < (n+1)s \}$ for all $n \in \mathbb{Z}$. Combining this with Lemma~\ref{phianalyticcontinuation}, which states that $\phi_d(z)$ is analytic on $\mathbb{C}\setminus (i\mathbb{R}_{\geq s}\cup i\mathbb{R}_{\leq -s})$, we obtain the claim.
In particular, \eqref{fun-rel-phid-for-z} and \eqref{eqn:phirecursive} hold on $\mathbb{C} \setminus is\mathbb{Z}$, showing parts \ref{extended_funrel} and \ref{recurs} of the lemma.
\medskip
\noindent
$\bullet$ {\em $\phi_d$ is Lipschitz continuous in $d$ on $[-D,D]$ for $D<2$:}
Consider again the case $d=0$, given by $\phi_0(z)=\left(4s\cosh\left(\frac{\pi}{2s}z\right)\right)^{-1}$: this is a meromorphic function with poles of first order located at $is(1+2\mathbb{Z})$.
We are now going to show that in the strip $\mathbb{S}_{2s}$ the pole structure of $\phi_d(z)$ for general $d$ coincides with the pole structure of $\phi_0(z)$.
Recall the derivative $\frac{\partial}{\partial d}\phi_d(z)$ defined inside $\mathbb{S}_{2s}$ by the integral representation \eqref{eqn:dderivative}. We
write $y=\mathrm{Im}(z)$, where $y\in(-2s,2s)$, and obtain
\be
\left|\frac{\partial}{\partial d}\phi_d(z)\right|\leq \frac{1}{2\pi} \int_{-\infty}^\infty\frac{e^{-ky}}{\left(2\cosh(sk)-d\right)^2}dk \ .
\ee
Let $\varepsilon>0$ be arbitrary so that $D:=2-\varepsilon>0$ and $Y:=2s-\varepsilon>0$. For $d\leq D$ and $|y|\leq Y$, the integrand can then be further estimated as follows:
\be\frac{e^{-ky}}{\left(2\cosh(sk)-d\right)^2}\leq \frac{2\cosh(yk)}{\left(2\cosh(sk)-d\right)^2}\leq \frac{2\cosh(Yk)}{\left(2\cosh(sk)-D\right)^2}\ee But \be
B:=\frac{1}{2\pi}\int_{-\infty}^\infty \frac{2\cosh(Yk)}{\left(2\cosh(sk)-D\right)^2}dk\ee still converges (as $|Y|<2s$), and hence \be\left|\frac{\partial}{\partial d}\phi_d(z)\right|\leq B \ .\ee Put differently, $B$ is a Lipschitz constant for $\phi_d(z)$ understood as a function of $d$ on the interval $[-D,D]$. The Lipschitz condition reads \be\left|\phi_d(z)-\phi_0(z)\right|\leq B\cdot |d| \ .
\ee
\medskip
\noindent
$\bullet$ {\em The pole order of $\phi_d$ at $is\Zb \setminus \{0\}$ is 0 or 1}:
By Lipschitz-continuity in $d$, we know that $|\phi_d(z)-\phi_0(z) |\leq B\cdot |d|$ for all $z \in \mathbb{S}_{2s} \setminus \{ \pm is\}$. Since $\phi_0$ has first order poles at $\pm is$, it follows that so does $\phi_d$.
Since \eqref{eqn:phirecursive} holds on $\mathbb{C} \setminus is\mathbb{Z}$, the pole structure is as claimed.
This finally proves part \ref{Poles1stOrder} of the lemma.
We remark that the Lipschitz-continuity in $d$ does not extend beyond the strip $\mathbb{S}_{2s}$. Indeed,
$\phi_0$ has no pole at $\pm 2is$, whereas for $d\neq 0$, \eqref{eqn:phifuneq} forces $\phi_d(z)$ to have a pole there.
\end{proof}
Next we turn to the growth properties of $\phi_d$. We need the following result from complex analysis (see e.g.\ \cite[Ch.\,4,\,Thm.\,3.4]{SteinShakarchi} for a proof).
\begin{theorem}[Phragm\'en-Lindel\"of]\label{phrag-lind}
Let $f$ be a holomorphic function in the wedge $W = \{ z \in \mathbb{C} \,|\, -\tfrac\pi{2\beta} < \mathrm{arg}(z) < \tfrac\pi{2\beta} \}$, $\beta>\tfrac12$, which is continuous on the closure of $W$. Suppose that $|f(z)| \le 1$ on the boundary of $W$ and that there are $A,B>0$ and $0<\alpha<\beta$ such that $|f(z)| \le A e^{B |z|^\alpha}$ for all $z \in W$. Then $|f(z)| \le 1$ on $W$.
\end{theorem}
With the help of this theorem we can establish the following boundedness properties.
\begin{lemma} \label{phibounded} Let $\Theta\in[0,\frac{\pi}{2})$. Then for all $m,n\in\mathbb{Z}_{\ge 0}$
the function $z^m\frac{d^n}{dz^n}\phi_d(z)$ is bounded in the wedges $|\arg(z)|\leq\Theta$ and $|\arg(z)-\pi|\leq\Theta$.
\end{lemma}
\begin{proof}
Consider the lines $z=e^{\pm i\Theta}\mathbb{R}$, which constitute the boundary of the wedges we are interested in. The integral representation \eqref{phitilde} of $\phi_d(z)$, when restricted to these lines, yields
functions in the Schwartz space, because $\left(2\cosh\left(ske^{\mp i\Theta}\right)-d\right)^{-1}$ are functions in the Schwartz space. By definition of the Schwartz space, the function $z^m\frac{d^n}{dz^n}\phi_d(z)$ is bounded on these two lines as well. Moreover, it is analytic in the interior of the wedges. We will now show that the growth of $z^m\frac{d^n}{dz^n}\phi_d(z)$ is less than exponential in the interior of the wedges. The statement of the lemma then follows from Theorem~\ref{phrag-lind}.
It suffices to show that $\frac{d^n}{dz^n}\phi_d(z)$ is bounded in the wedges $|\arg(z)|<\Theta$ and $|\arg(z)-\pi|<\Theta$. Let $x\in \mathbb{R}$, $\theta\in(-\Theta,\Theta)$, and use the integral representation \eqref{phitilde} to obtain the $x$-independent estimate
\be\label{108u3}
\left|\frac{d^n}{dz^n}\phi_d(xe^{i\theta})\right|
\leq \frac{1}{2\pi} \int_{-\infty}^{\infty} \left|\frac{k^n}{2\cosh\left(ske^{-i\theta}\right)-d}\right|dk \ .
\ee
Next we estimate the integrand by a $\theta$-independent integrable function.
Recall from \eqref{cosh-theta-estimate} that for $|k|$ large enough we can estimate
\be
\left|2\cosh\left(ske^{-i\theta}\right)-d\right|^{-1}
~\le~
2 \, e^{-s\cos\theta \,|k|}
~\le~
2 \, e^{-s\cos\Theta \,|k|} \ .
\ee
But for $|k|$ large enough, we also have $|k^n|\leq e^{\frac{s}{2}\cos\Theta\, |k|}$. In other words, there exists a $k_0>0$, independent of $\theta$, such that for all $|k|\geq k_0$,
\be\label{vs98h}
\left|\frac{k^n}{2\cosh\left(ske^{-i\theta}\right)-d}\right|\leq 2e^{-\frac{s}{2}\cos\Theta\, |k|} \ .
\ee
Meanwhile, the function $w \mapsto \left|2\cosh(sw)-d\right|$ is continuous in
$\lbrace w\in\mathbb{C}|\, |\arg(w)|\leq\Theta, |\mathrm{Re}(w)|\leq k_0 \rbrace $ and in the corresponding wedge with $|\arg(-w)|\leq\Theta$,
and has no zeros in this
bow tie shaped
compact subset of the complex plane. Hence, it is bounded from below by a strictly positive number $M>0$.
Consequently, for all $|k|<k_0$,
\be\label{brt65}
\left|\frac{k^n}{2\cosh\left(ske^{-i\theta}\right)-d}\right|\leq \frac{k_0^n}{M} \ .
\ee
Plugging \eqref{vs98h} and \eqref{brt65} into \eqref{108u3} yields the bound \be\left|\frac{d^n}{dz^n}\phi_d(z)\right|\leq \frac{1}{2\pi}\left(\int_{-k_0}^{k_0}\frac{k_0^n}{M}dk + 2 \int_{k_0}^{\infty} 2e^{-\frac{s}{2}\cos\Theta\, k} dk \right)
\ ,
\ee
valid for all $z$ in the two wedges defined by $\Theta$, and
where the right hand side is finite and independent of $\theta$.
\end{proof}
\begin{corollary} \label{phiL1} For $y\in\mathbb{R}$, let $\partial^n\phi_d^{[y]}(x):=\frac{d^n}{dz^n}\phi_d(x+iy)$ be the restrictions of $\frac{d^n}{dz^n}\phi_d(z)$ to horizontal lines. If $y\notin s\mathbb{Z}\setminus\lbrace 0 \rbrace$, then
$(x+iy)^m\partial^n\phi_d^{[y]}(x)\in L_1(\mathbb{R})$
for all $n,m\in\mathbb{Z}_{\geq 0}$.
\end{corollary}
\begin{corollary} \label{boundaryintegrals} For any $a,b\in\mathbb{R}$ and for all $n,m\in\mathbb{Z}_{\geq 0}$, \be\lim_{x\rightarrow \pm\infty}\int_a^b(x+it)^m\frac{d^n}{dz^n}\phi_d(x+it)\, dt =0 \ .\ee
\end{corollary}
To understand at which points of $is\mathbb{Z} \setminus \{0\}$ the function $\phi_d$ has a first order pole and at which points the singularity can be lifted, we compute the residues.
\begin{lemma} \label{phiresidue}
For $n \in \mathbb{Z}$, the residue of $\phi_d(z)$ in $z=isn$ is given by
\be
\mathrm{Res}_{isn}(\phi_d) = \frac{1}{2\pi i} \frac{\sin(n\gamma)}{\sin(\gamma)}\ ,
\ee
where $\gamma \in \mathbb{R}$ satisfies $d=2\cos(\gamma)$.
\end{lemma}
\begin{proof}
We start by computing the residue at $is$:
\begin{align}
2 \pi i \, \mathrm{Res}_{is}(\phi_d) &
\overset{(a)}=\int_{\mathbb{R}+\frac{1}{2}is}\phi_d(z)\,dz - \int_{\mathbb{R}+\frac{3}{2}is}\phi_d(z)\,dz \nonumber\\
&= \int_{-\infty}^{\infty}\phi_d\left(x+\tfrac{is}{2}\right)dx -\int_{-\infty}^{\infty}\phi_d\left(x+\tfrac{3is}{2}\right)dx \nonumber\\
&\overset{(b)}=\int_{-\infty}^{\infty}\phi_d\left(x+\tfrac{is}{2}\right)dx +\int_{-\infty}^{\infty}\phi_d\left(x-\tfrac{is}{2}\right)dx-d\int_{-\infty}^{\infty}\phi_d\left(x+\tfrac{is}{2}\right)dx \nonumber\\
&\overset{(c)}=(2-d)\int_{-\infty}^{\infty}\phi_d(x)dx = (2-d)\frac{1}{2\cosh(sk)-d}\Big|_{k=0}
= 1 \ .
\end{align}
Here, all integrals exist by Corollary~\ref{phiL1}. In step (a), the circular contour is deformed to two horizontal infinite lines, making use of Corollary~\ref{boundaryintegrals} to ensure that no contribution is picked up when pushing the vertical parts of the contour to infinity. Step (b) is the functional relation in
Lemma~\ref{phianalyticstructure}\,\ref{extended_funrel}. In step (c) all contours are moved to the real axis, using that $\phi_d$ is analytic in $\mathbb{S}_{s}$ (Lemma~\ref{phianalyticstructure}) and that by Corollary~\ref{boundaryintegrals} there are no contributions from infinity.
But from \eqref{eqn:phirecursive} we see that \be\mathrm{Res}_{isn}\phi_d = \frac{\sin(n\gamma)}{\sin(\gamma)}\mathrm{Res}_{is}\phi_d-\frac{\sin((n-1)\gamma)}{\sin(\gamma)}\mathrm{Res}_{0}\phi_d\ .\ee
Since $\mathrm{Res}_{0}\phi_d=0$ we obtain the statement of the lemma.
\end{proof}
We are now in a position to justify the notion that $\phi_d(z)$ is a Green's function for the difference operator \eqref{diffop}:
\begin{lemma} \label{Greensproperty} $\phi_d(x)$ gives rise to a representation of the Dirac $\delta$-distribution on $BC(\mathbb{R},\mathbb{C})$ via
\be
\lim_{y\nearrow s}\left(\phi_d(x+iy) + \phi_d(x-iy) - d\phi_d(x) \right)= \delta(x)\ .
\ee
\end{lemma}
Before we turn to the proof, we note that for $|y|<s$,
\begin{align}
\phi_d(x+iy) + \phi_d(x-iy) - d\phi_d(x) &= \frac{1}{2\pi}\int_{-\infty}^{\infty} \frac{e^{ikx-yk}+e^{ikx+yk}-de^{ikx}}{2\cosh(sk)-d}dk \nonumber\\
&=\frac{1}{2\pi}\int_{-\infty}^{\infty}e^{ikx} \frac{2\cosh(yk)-d}{2\cosh(sk)-d}dk\ .
\end{align}
In the limit $y\nearrow s$, the integrand on the right hand side approaches $e^{ikx}$ pointwise. The usual exchange-of-integration-order argument proves that one obtains a Dirac $\delta$-distribution on $L_1$-functions whose Fourier-transformation is also $L_1$. To show that we obtain a $\delta$-distribution on $BC(\mathbb{R},\mathbb{C})$, we follow a different route.
\begin{proof}[Proof of Lemma~\ref{Greensproperty}]
Since $\phi_d(z)$ has simple poles at $z=\pm is$ of residue $\pm \frac{1}{2\pi i}$ (see Lemma \ref{phiresidue}), we can write
\be\phi_d(z)= \pm\frac{1}{2\pi i}\frac{1}{(z\mp is)}+r_\pm(z)\ ,
\ee
where $r_\pm(z)$ is now analytic at
$z=\pm is$. In particular, by Lemma~\ref{phibounded}
$r_+(z)$ is bounded in the upper half of $\mathbb{S}_s$ and $r_-(z)$ is bounded in the lower half of $\mathbb{S}_s$. Then
\begin{align}
\delta_y(x) &:= \phi_d(x+iy)+\phi_d(x-iy)-d\phi_d(x) \nonumber\\
&= \frac{1}{2\pi i}\left(\frac{1}{(x+iy- is)}-\frac{1}{(x-iy+ is)}\right)+r_+(x+iy)+r_-(x-iy)-d\phi_d(x) \nonumber\\
&= \tilde{\delta}_y(x) +u_y(x)\ ,
\end{align}
where \be\tilde{\delta}_y(x):=\frac{1}{\pi}\frac{s-y}{x^2 + (s-y)^2}\ee and $u_y(x):=r_+(x+iy)+r_-(x-iy)-d\phi_d(x)$.
Now suppose $f\in BC(\mathbb{R},\mathbb{C})$. We have to show that \be
\lim_{y\nearrow s} \int_{-\infty}^\infty \delta_y(x) f(x) dx = f(0).
\ee
In order to do that, let $\varepsilon >0$ be arbitrary and split the integral,
\be\int_{-\infty}^\infty \delta_y(x) f(x) dx = I_1(y)+I_2(y)+I_3(y)\ ,
\ee
where
\be
I_1(y)=\int_{\mathbb{R}\setminus (-\varepsilon,\varepsilon)} \delta_y(x) f(x) dx\ ,
\ee
\be
I_2(y)= \int_{-\varepsilon}^\varepsilon \tilde{\delta}_y(x) f(x) \; dx, \hspace{1cm} I_3(y)= \int_{-\varepsilon}^\varepsilon u_y(x) f(x) dx\ .
\ee The \textsl{Lorentz functions} $\tilde{\delta}_y(x)$ are a well-known representation of the Dirac $\delta$-distribution on $BC(\mathbb{R},\mathbb{C})$ as $y\nearrow s$, so independently of $\varepsilon$ we have \be\lim_{y\nearrow s} I_2(y)=f(0)\ .\ee The functions $u_y(x)$ are uniformly bounded for $y\in[0,s]$, say by $C$. Hence, \be|I_3(y)|\leq 2\varepsilon C \left\|f\right\|_\infty \ .\ee Finally, consider $\delta_y(x)$ on $\mathbb{R}\setminus(-\varepsilon,\varepsilon)$. Due to the functional equation \eqref{eqn:phifuneq}, it converges pointwise to zero on this domain as $y\nearrow s$. But Lemma \ref{UniformConvergenceOfAnalyticFuns} in connection with Lemma \ref{phibounded} even ensures uniform convergence, and this is still true for the function $x^2 \delta_y(x)$. By means of the variable transformation $t=1/x$ we can recast $I_1(y)$ as an integral over a finite interval:
\be
I_1(y)=\int_{-\tfrac{1}{\varepsilon}}^{\tfrac{1}{\varepsilon}} \frac{1}{t^2}\delta_y(\tfrac{1}{t}) f(\tfrac{1}{t})\, dt\ .
\ee
By what we just said, the integrand converges uniformly to zero on $[-\tfrac{1}{\varepsilon},\tfrac{1}{\varepsilon}]$.
Thus, integral and limit can be swapped, which results in
\be\lim_{y\nearrow s} I_1(y)=0\ .\ee
We conclude that \be\left|f(0) -\lim_{y\nearrow s}\int_{-\infty}^\infty \delta_y(x) f(x) dx \right|\leq 2\varepsilon C\left\|f\right\|_\infty\ .\ee As $\varepsilon>0$ was arbitrary, the statement follows.
\end{proof}
\begin{remark} \label{rem:should-have-found-this-earlier}
After completion of this paper we noticed that one can actually give a simple explicit expression for $\phi_d(z)$ for arbitrary $d\in(-2,2)$:
\be\label{explicit_phi_d}
\phi_d(z)~=~\frac{1}{2s\sin(\gamma)}\cdot\frac{\sinh\left(\frac{\pi-\gamma}{s}z\right)}{\sinh\left(\frac{\pi}{s}z\right)} \ ,
\ee
where $\gamma\in(0,\pi)$ is defined via $d=2\cos(\gamma)$. This can be seen via a contour deformation argument using the analytical properties of $\phi_d$ established in this section; we will provide the details elsewhere. The explicit integral can also be found in tables,
see e.g.\ \cite[1.9\,(6)]{Erdelyi}.
We were, however, unable to obtain it in a more straightforward fashion, circumventing the analysis carried out in this section.
\end{remark}
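In the same way as for Example~\ref{phi_0}, the closed form \eqref{explicit_phi_d} can be tested numerically against the defining integral \eqref{deltastandardrep}; the following sketch (sample value $d=1$, ad hoc cutoffs and sample points) does this:
\begin{verbatim}
import numpy as np

s, d = 1.0, 1.0
gamma = np.arccos(d / 2.0)             # here gamma = pi/3
k = np.linspace(-60.0, 60.0, 200001)
for x in [0.3, 1.1, 4.0]:
    fourier = np.trapz(np.cos(k * x) / (2.0 * np.cosh(s * k) - d), k) / (2.0 * np.pi)
    closed = (np.sinh((np.pi - gamma) * x / s)
              / (2.0 * s * np.sin(gamma) * np.sinh(np.pi * x / s)))
    print(x, fourier, closed)   # the two values agree to high precision
\end{verbatim}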
\subsection{The $N$-dimensional Green's function $\Phi_{\vec C}(z)$}
\label{section-NdimGreen}
Now let us investigate an $N$-dimensional version of the Green's function $\phi_d(z)$.
Recall from Notations~\ref{intro-notation} the definition of the subset $\mathrm{Mat}_{<2}(N) \subset \mathrm{Mat}(N,\mathbb{R})$, as well as from \eqref{phiC-def-intro} the definition of
$\Phi_{\vec C}:\mathbb{R}\rightarrow\mathrm{Mat}(N,\mathbb{R})$
for $\vec{C}\in\mathrm{Mat}_{<2}(N)$.
\begin{lemma}\label{Phiproperties} $\Phi_{\vec C}(x)$ has the following properties:
\begin{enumerate}[label=\roman*)]
\item \label{simdiag}$\Phi_{\vec C}(x)$ and $\vec{C}$ are simultaneously diagonalisable for all $x\in\mathbb{R}$.
\item \label{matrixelements} Any matrix element of $\Phi_{\vec C}(x)$ can be written as a linear combination \be [\Phi_{\vec C}]_{nm}(x) = \sum_{j=1}^N \Omega_{nm}^j \phi_{d_j}(x)\ee of one-dimensional Green's functions, with $d_j\in(-2,2)$ given by the eigenvalues of $\vec{C}$, and $\Omega_{nm}^j$ some real coefficients.
\item \label{delta} $\Phi_{\vec C}(x)$ gives rise to a representation of the Dirac $\delta$-distribution on $BC(\mathbb{R},\mathbb{C})^N$:
\be\delta(x)\mathbf{1} = \lim_{y\nearrow s}\left(\Phi_{\vec C}(x+iy) + \Phi_{\vec C}(x-iy) - \vec{C}\cdot \Phi_{\vec C}(x) \right) \ . \ee
\end{enumerate}
\end{lemma}
\begin{proof}
First of all, note that the matrix inverse in the definition is well-defined since $\vec{C}$ has spectral radius smaller than 2.
\medskip
\noindent
\ref{simdiag} Let $\vec{D}\in\mathrm{Mat}(N,\mathbb{R})$ be a
diagonal matrix such that $\vec{D} = \vec{T}^{-1}\vec{C}\vec{T}$ for some invertible $\vec{T}\in \mathrm{Mat}(N,\mathbb {R})$. Then
\begin{align}\vec{T}^{-1}\Phi_{\vec C}(x)\vec{T} &~=~ \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx}\;\vec{T}^{-1}(2\cosh(sk)\mathbf{1}-\vec{C})^{-1}\vec{T} \;dk \nonumber\\
&~=~\frac{1}{2\pi}\int_{-\infty}^{\infty} e^{ikx}\left(2\cosh(sk)\mathbf{1}-\vec{D}\right)^{-1}dk
~=~ \Phi_{\vec D}(x)
\end{align}
is also diagonal.
\medskip
\noindent
\ref{matrixelements} Write $\vec{D}=\mathrm{diag}(d_1,...,d_N)$. Then the matrix elements can be written as \be[\Phi_{\vec C}]_{nm}(x)=\sum_{j=1}^N T_{nj}[T^{-1}]_{jm}[\Phi_{\vec D}]_{jj}(x) \ee where $\Omega_{nm}^j=T_{nj}[T^{-1}]_{jm}$ are real constants, and $[\Phi_{\vec D}]_{jj}(x) = \phi_{d_j}(x)$.
Since $\vec{C} \in \mathrm{Mat}_{<2}(N)$,
$|d_j|<2$ holds for all $j=1,...,N$.
\medskip
\noindent
\ref{delta} Using \ref{simdiag} and \ref{matrixelements} and applying the Green's function property of $\phi_d(z)$ (Lemma \ref{Greensproperty}), one computes
\begin{align}
\lim_{y\nearrow s}&\left[\Phi_{\vec C}(x+iy) + \Phi_{\vec C}(x-iy) - \vec{C}\cdot \Phi_{\vec C}(x)\right]_{nm} \nonumber\\ &= \sum_{j=1}^N T_{nj}[T^{-1}]_{jm}\lim_{y\nearrow s}\left(\phi_{d_j}(x+iy) +\phi_{d_j}(x-iy) -d_j\phi_{d_j}(x)\right) \nonumber\\
&=\sum_{j=1}^N T_{nj}[T^{-1}]_{jm}\delta(x)
=\delta_{nm}\delta(x) \ .
\end{align}
This completes the proof.
\end{proof}
From the definition of $\phi_d$ in \eqref{deltastandardrep} and from
Lemma~\ref{Phiproperties}\,\ref{matrixelements} we know that all components of $\Phi_{\vec C}(x)$ are Schwartz functions on $\mathbb{R}$ (cf.\ Lemma~\ref{phibounded}). Hence the Fourier transformation of $\Phi_{\vec C}(x)$ reproduces the integrand in \eqref{phiC-def-intro}. In particular, for $k=0$ we obtain the following integral, which we will need later:
\begin{equation}\label{phiC-infinf-integral}
\int_{-\infty}^\infty
\hspace{-.5em}
\Phi_{\vec C}(x) \,dx ~=~
\big( 2 \cdot \mathbf{1}-\vec{C} \big)^{-1} \ .
\end{equation}
\begin{lemma}\label{Phinonneg}
Suppose $\vec{C}$ is non-negative. Then the matrix $\Phi_{\vec C}(x)$ is non-negative for all $x\in\mathbb{R}$.
\end{lemma}
\begin{proof}
The integrand can be expanded into a Neumann series,
\begin{align}
\left(2\cosh(sk)\mathbf{1}-\vec{C}\right)^{-1} &= \left(2\cosh(sk)\right)^{-1}\left(\mathbf{1}-\frac{\vec{C}}{2\cosh(sk)}\right)^{-1}\nonumber\\
&= \sum_{j=0}^\infty \frac{\vec{C}^j}{(2\cosh(sk))^{j+1}}\ ,
\end{align}
which converges absolutely since for $k\in\mathbb{R}$ all eigenvalues of $\left(2\cosh(sk)\right)^{-1}\vec{C}$ are strictly smaller than 1 in absolute value. Fubini's theorem (with counting measure on $\mathbb{Z}_{\ge0}$ and Lebesgue measure on $\mathbb{R}$) then justifies pulling the sum out of the Fourier integral, and we find that \be\label{phiseries}\Phi_{\vec C}(x)=\sum_{j=0}^\infty \left(\frac{1}{2\pi}\int_{-\infty}^\infty e^{ikx} \left(2\cosh(sk)\right)^{-j-1}dk\right) \vec{C}^j \ .\ee In Appendix~\ref{Fourier_coshm} it is shown that
\be\label{coshm-integral-strictly-positive}
\int_{-\infty}^\infty e^{ikx}\left(2\cosh(sk)\right)^{-j-1}dk = \frac{\pi}{2^{j+1} j!s}\left(\prod_{\substack{l=j-1\\ \mathrm{step -2}}}^1 \left(\frac{x^2}{s^2} + l^2\right)\right)\cdot \begin{cases} \frac{1}{\cosh\left(\frac{\pi}{2s}x\right)} & \mathrm{if} \; j \;\mathrm{even} \\ \frac{x}{s\sinh\left(\frac{\pi}{2s}x\right)} &\mathrm{if} \; j \;\mathrm{odd} \end{cases} \ ,\ee
which is a
strictly positive
function of $x\in\mathbb{R}$. Furthermore, $\vec{C}^j$ is a non-negative matrix. Hence, $\Phi_{\vec C}(x)$ is non-negative for all $x\in\mathbb{R}$.
\end{proof}
\begin{remark}\label{remark-Smatrix-exists}
Suppose $\vec G \in \mathrm{Mat}_{<2}(N)$ is non-negative and irreducible. One of the equivalent ways to characterise irreducibility is that for each $i,j$ there is an $m>0$ such that $[(\vec G)^m]_{ij} \neq 0$. Together with non-negativity of $\vec G$ and strict positivity of \eqref{coshm-integral-strictly-positive},
this implies that
$\Phi_{\vec G}(x)$ has \textsl{strictly positive} entries for all $x \in \mathbb{R}$.
By Corollary~\ref{phiL1} and Lemma~\ref{Phiproperties}\,\ref{matrixelements}, the components of $\Phi_{\vec G}(x)$ are integrable, and so we can choose $\Psi_{\vec G} \in BC(\mathbb{R},\mathrm{Mat}(N,\mathbb{R}))$ such that $\Psi_{\vec G}(x)$ has positive entries bounded away from zero and satisfies $\tfrac{d}{dx} \Psi_{\vec G}(x) = \Phi_{\vec G}(x)$. Comparing to Remark~\ref{remark-S-matrix-relation}, we see that with the above assumption on $\vec G$, it is always possible to find an
$\vec S \in BC(\mathbb{R},\mathrm{Mat}(N,\mathbb{C}))$
such that \eqref{PhiG-logS} holds.
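Explicitly, one possible choice is
\be
S_{nm}(x) ~=~ \exp\!\Big( {-2\pi i}\,\big[\Psi_{\vec G}(x)\cdot\vec{G}\big]_{nm} \Big) \ ,
\ee
for which \eqref{PhiG-logS} holds by construction.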
\end{remark}
\subsection{Convolution integrals involving $\phi_d(z)$}
\label{section:phiconvolution}
In this section we adopt again the convention \eqref{d_in_(-2,2)} that the parameter $d$ will always take values in the range
\be
d\in(-2,2) \ .
\ee
Just as in the case of differential equations, the Green's function approach to difference equations will eventually express solutions in terms of convolution integrals involving the Green's function. For $g \in BC(\mathbb{R},\mathbb{C})$,
the convolution with $\phi_d(z)$ is defined by
\be\label{Fd-convolution-def}
F_d[g](z):= \int_{-\infty}^{\infty} \phi_d(z-t)g(t)\ dt \ .\ee
Due to Corollary~\ref{phiL1}, this function is well-defined on $\mathbb{S}_s$. As we will see in Section~\ref{GreenSoln}, it is important to understand the properties of such integrals as functions in $z$. That is the subject of this section.
The first question to ask is whether $F_d[g](z)$ is analytic. More generally: does the integration of a parameter-dependent analytic function preserve analyticity? The following lemma gives a criterion:
\begin{lemma}\label{analyticity}
Let $D\subseteq\mathbb{C}$ be a complex domain. Suppose $f:D\times \mathbb{R} \rightarrow \mathbb{C}$ is a function with the following properties:
\begin{enumerate}
\item \label{analyticity:1} for every $t_0\in\mathbb{R}$, the function $f(z,t_0)$ is analytic in $D$.
\item \label{analyticity:2} for every $z_0\in D$, the function $f(z_0,t)$ is continuous on $\mathbb{R}$.
\item \label{analyticity:3} for every $z_0\in D$ there exists a neighbourhood $U$ and an $L_1(\mathbb{R})$-integrable function $M(t)$, such that $\left|f(z,t)\right|\leq M(t)$ for all $z\in U$ and all $t\in \mathbb{R}$.
\end{enumerate}
Then the function
\be
F(z) = \int_{-\infty}^\infty f(z,t)\; dt
\ee
is analytic in $D$.
\end{lemma}
\begin{proof}
Let $z\in D$. Note that $F(z)$ is well-defined since the integrand is continuous (condition \ref{analyticity:2}) and dominated by an integrable function (condition \ref{analyticity:3}). Now take an arbitrary closed triangular contour $\Gamma$ inside $D$.
Define the function
\be
L(z):= \int_{-\infty}^\infty \left|f(z,t)\right| \;dt \ . \ee
Since by condition \ref{analyticity:3}, $f(z,t)$ is locally dominated by an integrable function, $L(z)$ is continuous on $D$. Thus, on the compact contour $\Gamma$ the function $L(z)$ is bounded and the integral \be\oint_\Gamma\left(\int_{-\infty}^\infty \left|f(z,t)\right|dt \right) dz\ee is finite. This warrants the application of Fubini's theorem, followed by analyticity (condition \ref{analyticity:1}): \be\oint_\Gamma F(z) dz = \oint_\Gamma\int_{-\infty}^\infty f(z,t)dt \; dz = \int_{-\infty}^\infty\oint_\Gamma f(z,t) dz\; dt = 0\ .\ee
By Morera's theorem, the claim follows.
\end{proof}
This lemma can be applied to the convolution integral $F_d[g](z)$. Set $D=\mathbb{S}_s$, and for any given $z_0\in\mathbb{S}_s$ set $U=B_\varepsilon(z_0)$
(the open ball with radius $\varepsilon$ and center $z_0$)
with some sufficiently small $\varepsilon$. By Lemma \ref{phibounded}, a dominating integrable function $M(t)$ can be found by taking it to be a constant $B>0$ for $|t-z_0|<T$ and $B|t-z_0|^{-2}$ else. $B$ and $T$ are to be chosen sufficiently large. We have shown:
\begin{corollary}\label{PhiConvgAnalytic}
For every $g\in BC(\mathbb{R},\mathbb{C})$, the function $F_d[g](z)$ is analytic in $\mathbb{S}_s$.
\end{corollary}
Note that in the same fashion as for $\phi_d(z-t)g(t)$, one can also use Lemma \ref{phibounded} to construct integrable dominating functions for $\frac{d^n}{dz^n}\phi_d(z-t)g(t)$. Hence, we are allowed to differentiate inside the integral:
\be\label{Fswapdiffint}
\frac{d^n}{dz^n}F_d[g](z) = \int_{-\infty}^{\infty} \frac{d^n}{dz^n}\phi_d(z-t) \ g(t) \ dt \ .
\ee
More can be said about the nature of $F_d[g](z)$ and its derivatives. The following lemma provides a stepping stone.
\begin{lemma} \label{L1phiBounded}
Let $n\in\mathbb{Z}_{\geq 0}$. The function $y\mapsto\left\|\partial^n\phi_d^{[y]}\right\|_{L_1}$ is bounded in the compact interval $[-Y,Y]$ for every $0<Y<s$.
\end{lemma}
\begin{proof}
The function is well-defined due to Corollary \ref{phiL1}. Now fix a $\Theta\in[0,\frac{\pi}{2})$ and $0<Y<s$. Due to Lemma \ref{phibounded}, there exists a $C>0$ such that
\be
\left|(x+iy)^2\partial^n\phi_d^{[y]}(x)\right|\leq C\ee for all
$x,y\in\mathbb{R}$
with $|\frac{y}{x}|<\tan\Theta$, and consequently \be\left|\partial^n\phi_d^{[y]}(x)\right|\leq \begin{cases}
\frac{C}{|x+iy|^2}\leq \frac{C}{x^2} &
\mathrm{for}\;|x|>\frac{Y}{\tan\Theta}\\ \max_{z\in\mathbb{S}_Y}|\partial^n\phi_d(z)| & \mathrm{else}\end{cases} \ee for all $y\in[-Y,Y]$. The right hand side is in $L_1(\mathbb{R})$ and independent of $y$. Its integral over $\mathbb{R}$ provides a bound for $\left\|\partial^n\phi_d^{[y]}\right\|_{L_1}$ in the interval $[-Y,Y]$.
\end{proof}
\begin{lemma} \label{DerivConvBounded}
Let $n\in\mathbb{Z}_{\geq 0}$.
For every $g\in BC(\mathbb{R},\mathbb{C})$, the function $\frac{d^n}{dz^n}F_d[g](z)$ is bounded in $\mathbb{S}_Y$ for all $0<Y<s$.
\end{lemma}
\begin{proof}
With \ref{Fswapdiffint}, one has \be\left|\frac{d^n}{dz^n}F_d[g](z)\right|\leq \int_{-\infty}^{\infty} \left|\frac{d^n}{dz^n}\phi_d(z-t)\right|\left|g(t)\right| dt \leq \sup_{t\in\mathbb{R}}|g(t)|\left\|\partial^n\phi_d^{[\mathrm{Im}(z)]}\right\|_{L_1} \ .\ee According to Lemma \ref{L1phiBounded}, the right-hand side is bounded for $|\mathrm{Im}(z)|\leq Y$. Hence, $F_d[g](z)$ is bounded in $\mathbb{S}_Y$.
\end{proof}
Since $\phi_d(z)$ has poles in $z=\pm is$, there is no obvious way to extend the domain of $F_d[g](z)$ beyond $\mathbb{S}_s$. Lemma~\ref{analyticity} thus provides no information regarding the behaviour of this convolution integral as $z$ approaches the boundary $\partial\mathbb{S}_s=\mathbb{R}\pm is$. Moreover, Lemma \ref{DerivConvBounded} can only be used to prove boundedness of $F_d[g](z)$ in a strip which is strictly contained in $\mathbb{S}_s$. In the remainder of this section, we will show that for H\"older continuous $g$, $F_d[g](z)$ can be extended to $\overline{\mathbb{S}}_s$ as a bounded and continuous function. To this end, we need another result from complex analysis.
Let us relax the analyticity condition in Lemma \ref{analyticity}: suppose $f(z,t)$ is analytic everywhere except in $z=t$, where it shall have a pole of first order. Consider a contour $\gamma$ in $D$, and integrate over it: \be
F(z) = \int_\gamma f(z,t) \, dt \ .\ee
The pole of $f(z,t)$ at $z=t$ causes $F(z)$ to have a branch cut along $\gamma$. Theorems describing this behaviour often go by the name of Sokhotski-Plemelj \cite{Gakhov}.
The next proposition is an instance of this for $\gamma=\mathbb{R}$, and it follows from a more general statement proven in Appendix~\ref{AppendixSokhotski}.
\begin{proposition}\label{SokhotskiConvolutionMain}
Let $a>0$ and let $h:\mathbb{S}_a \rightarrow \mathbb{C}$ be an analytic function such that both $zh(z)$ and $\frac{d}{dz}h(z)$ are bounded in $\mathbb{S}_a$.
Moreover, let $g:\mathbb{R}\rightarrow \mathbb{C}$ be a bounded
H\"older continuous
function. Then
\be
F(z)=\int_{-\infty}^{\infty}\frac{h(z-t)}{z-t}g(t)\,dt\ee is analytic in $\mathbb{S}_a \setminus \mathbb{R}$. Moreover, the limits
\be
F^\pm(x):=\lim_{y\searrow 0}F(x\pm iy) \hspace{1.5cm}(x\in\mathbb{R})\ee exist and are uniform in $x$.
The functions $F^\pm(x)$ are bounded on $\mathbb{R}$ and provide continuous extensions of $F(z)$ from the upper/lower half-plane to the real axis,
related by
\be
F^+(x)-F^-(x)~=~2i\pi \, h(0) g(x)\ .\ee
\end{proposition}
Now let us apply this result to convolution integrals involving $\phi_d(z)$: let $a<s$ and define the functions $h_\pm(z)=z\phi_d(z\pm is)$. By Lemma \ref{phianalyticstructure}, $h_\pm(z)$ are analytic in $\mathbb{S}_a$. Due to Lemma \ref{phibounded}, both $zh_\pm(z)$ and $\frac{d}{dz}h_\pm(z)$ are bounded in $\mathbb{S}_a$. Hence, $h_\pm(z)$ satisfy the conditions of Proposition \ref{SokhotskiConvolutionMain} for any $a<s$. Specifically, this clarifies the behaviour of our convolution integral as we approach the boundary of the strip:
\begin{corollary}\label{PhiConvgExtension}
Let $g:\mathbb{R}\rightarrow\mathbb{C}$ be bounded and H\"older continuous. Then the function $F_d[g](z)$ has a continuous extension to $\overline{\mathbb{S}}_s$, and this extension is bounded on $\partial\mathbb{S}_s=\mathbb{R}\pm is$.
\end{corollary}
Lastly, combining this result with the case $n=0$ of Lemma \ref{DerivConvBounded}, we obtain:
\begin{lemma} \label{PhiConvgBounded}
Let $g:\mathbb{R}\rightarrow\mathbb{C}$ be bounded and H\"older continuous. Then the function $F_d[g](z)$ is bounded in $\mathbb{S}_s$.
\end{lemma}
\begin{proof}
By Corollary \ref{PhiConvgExtension}, there is some constant $B>0$ such that $|F_d[g](z)|\leq B$ for all $z\in\partial\mathbb{S}_s$. According to Proposition \ref{SokhotskiConvolutionMain}, $F_d[g](x\pm iy)\rightarrow F_d[g](x\pm is)$ uniformly as $y\nearrow s$. Thus, for any given $\varepsilon >0$ there exists a $\delta>0$ such that $|F_d[g](z)|\leq B+\varepsilon$ for all $z\in\mathbb{S}_s\setminus\mathbb{S}_{s-\delta}$. In other words, $F_d[g](z)$ is bounded in $\mathbb{S}_s\setminus\mathbb{S}_{s-\delta}$. On the other hand, by Lemma \ref{DerivConvBounded}, $F_d[g](z)$ is also bounded in $\mathbb{S}_{s-\delta}$. Together, the two bounds show that $F_d[g](z)$ is bounded on all of $\mathbb{S}_s$.
\end{proof}
The results of this section can now be summed up as follows: if $g$ is bounded and H\"older continuous, then $F_d[g]\in\mathcal{BA}(\mathbb{S}_s)$.
\subsection{Proof of Proposition \ref{FuncRelToNLIE}}
\label{GreenSoln}
$\ref{ffg-funrel} \Rightarrow \ref{ffg-inteq}$: For $y\in\mathbb{R}$, define the family of continuous functions $\vec{f}^{[y]}(x):=\vec{f}(x+iy)$. Continuity of $\vec{f}$ on the closure of the strip $\mathbb{S}_s$ guarantees pointwise convergence $\vec{f}^{[y]}\rightarrow \vec{f}^{[s]}$ as $y\nearrow s$.
By boundedness of $\vec{f}$, the components $f^{[y]}_m$ of $\vec{f}^{[y]}$ are uniformly bounded by some constant $M$. It follows that, for any fixed value of $b\in\mathbb{R}$ and any $d \in (-2,2)$,
\be\left|\phi_d(b-x)
f_m^{[y]}(x)\right|\leq M|\phi_d(b-x)|
\qquad \text{for all}~~ x\in\mathbb{R}
\ .\ee The function on the right-hand-side is in $L_1(\mathbb{R})$ according to Corollary \ref{phiL1}. Thus, by Lebesgue's dominated convergence theorem we can write
\be\label{lebesguelimes}\int_{-\infty}^{\infty}\phi_d(x-t)f_m^{[s]}(t)dt = \lim_{y\nearrow s}\int_{-\infty}^{\infty}\phi_d(x-t)f_{m}^{[y]}(t)dt \ .\ee
By a simple change of variables followed by contour deformation (which is now allowed because for $y<s$ the contour lies inside the analytic domain) one can transfer the appearance of $y$ from $f_m$ to $\phi_d$: \be\label{contdef}\int_{-\infty}^{\infty}\phi_d(x-t)f_m(t+iy)dt = \int_{-\infty}^{\infty}\phi_d(x+iy-t)f_m(t) dt \ee
Note that the integrals over the vertical parts of the contour vanish when pushed to infinity (see Corollary \ref{boundaryintegrals}). Plugging \eqref{contdef} into \eqref{lebesguelimes}, and making use of Lemma \ref{Phiproperties}\,\ref{matrixelements} to write $[\Phi_{\vec C}]_{nm}$
in terms of the one-dimensional Green's functions $\phi_d$, gives rise to the identity \be\label{shiftidentity}\Phi_{\vec C}\star\vec{f}^{[\pm s]}(x) = \lim_{y\nearrow s} \Phi_{\vec C}^{[\pm y]}\star\vec{f}(x)
\qquad \text{for all}~~
x\in\mathbb{R}
\ .\ee Taking into account $[\vec{C},\Phi_{\vec C}(x)]=0$ due to Lemma \ref{Phiproperties}\,\ref{simdiag} and distributivity of the convolution, \eqref{shiftidentity} directly implies \be\Phi_{\vec C}\star\left(\vec{f}^{[+s]}+\vec{f}^{[-s]}-(\vec{C}\cdot\vec{f})\right)(x) =\lim_{y\nearrow s}\left(\Phi_{\vec C}^{[+y]}+\Phi_{\vec C}^{[-y]}-\vec{C}\cdot\Phi_{\vec C}\right)\star\vec{f}(x)\ee for all $x\in\mathbb{R}$. On the left-hand side we can substitute the functional relation \eqref{ffuncrel}, and on the right-hand side apply Lemma~\ref{Phiproperties}\,\ref{delta}. This results in \eqref{fconv}.
\medskip
\noindent
$\ref{ffg-inteq} \Rightarrow \ref{ffg-funrel}$: According to Lemma \ref{Phiproperties}\,\ref{matrixelements}, the components of $\vec{f}(x)=\left(\Phi_{\vec C}\star \vec{g}\right)(x)$ are given by real linear combinations of the form
\be\label{phi*g-via-Fd}
\sum_{d,m} c_{d,m}F_d[g_m](x) \ ,
\ee
where the $g_m(x)$ are bounded and H\"older continuous by assumption and
$F_d[g_m](x)$ are the convolution integrals discussed in Section~\ref{section:phiconvolution}. According to
Corollary~\ref{PhiConvgAnalytic},
Corollary~\ref{PhiConvgExtension} and Lemma~\ref{PhiConvgBounded}, these integrals have
analytic continuations $F_d[g_m]\in\mathcal{BA}(\mathbb{S}_s)$.
This shows that $\vec{f}\in \mathcal{BA}(\mathbb{S}_s)^N$
and
\be\label{fy-as-convolution-with-g}
\vec{f}^{[y]}(x)=\left(\Phi_{\vec C}^{[y]}\star \vec{g}\right)(x) \ .
\ee
To obtain the functional equation \eqref{ffuncrel} we essentially reverse the above reasoning:
\begin{align}
\left(\vec{f}^{[+s]}+\vec{f}^{[-s]}-(\vec{C}\cdot\vec{f})\right)(x)
&\overset{(a)}=
\lim_{y\nearrow s}
\left(\vec{f}^{[+y]}+\vec{f}^{[-y]}-(\vec{C}\cdot\vec{f})\right)(x)
\nonumber\\
&\overset{\eqref{fy-as-convolution-with-g}}=
\lim_{y\nearrow s}
\left(\left(\Phi_{\vec C}^{[+y]}+\Phi_{\vec C}^{[-y]}-(\vec{C}\cdot\Phi_{\vec C})\right)\star \vec{g}\right)(x)
\nonumber\\
&\overset{(b)}=\vec{g}(x) \ .
\end{align}
Here (a) follows from pointwise convergence $\vec{f}^{[\pm y]}(x) \to \vec{f}^{[\pm s]}(x)$ due to continuity of $\vec{f}$ on $\overline{\mathbb{S}}_s$, and step (b) is the $\delta$-function property from Lemma~\ref{Phiproperties}\,\ref{delta}.
This completes the proof of Proposition~\ref{FuncRelToNLIE}.
\section{Unique solution to a family of integral equations}
\label{section-unique-integral-soln}
In this section we give a criterion for functional equations of the form \eqref{TBAintro} in the introduction to have a unique solution.
Specifically, we will prove a special case of Theorem~\ref{main-theorem-TBA} where we choose $\vec{C} = \tfrac12 \vec{G}$.
\begin{proposition}\label{TBAuniquenessDynkin} Let $\vec G \in \mathrm{Mat}_{<2}(N)$ be non-negative and irreducible,
and let $\vec{a} \in BC_-(\mathbb{R},\mathbb{R})^N$.
Then the system of nonlinear integral equations
\be\label{Csystem}
\vec{f}(x)=\int_{-\infty}^{\infty} \Phi_{\frac{1}{2}\vec G}(x-y)\cdot\vec{G}\cdot\left(\log\left(e^{-\vec{a}(y)}+e^{\vec{f}(y)}\right)-
\tfrac{1}{2}\vec{f}(y)
\right) dy
\ee
has exactly one bounded continuous solution,
$\vec{f}_\star\in BC(\mathbb{R},\mathbb{R})^N$.
\end{proposition}
The proof will consist of verifying that the Banach Fixed Point Theorem can be applied and will be given in Section~\ref{uniqunessproofTBA}. In Sections~\ref{hammerstein-section} and~\ref{section:TBAuniqueness} we lay the groundwork by discussing a class of integral equations known as Hammerstein equations, and by applying the general results obtained there to TBA-type equations.
After the proof, in Section~\ref{G-is-graph-section} we comment on the special case that $\vec{G}$ is the adjacency matrix of a graph -- the case most commonly considered in applications -- and in Section~\ref{N=1-example-section} we look at the case $N=1$ in more detail.
The only previous proof of Theorem~\ref{main-theorem-TBA} we know of
concerns the case
$N=1$, $\vec{G}=\vec{C}=1$ and $\vec{a} \sim \cosh(x)$, and
can be found in \cite{FringKorffSchulz}.
Their argument
also uses the Banach Fixed Point Theorem but is different from ours (we rely on being able to choose $\vec{C}$ different from $\vec{G}$) and we review it in Section~\ref{N=1-example-section}.
\subsection{Hammerstein integral equations as contractions}
\label{hammerstein-section}
In this section we take $\mathbb{K}$ to stand for $\mathbb{R}$ or $\mathbb{C}$. We use the abbreviation
$BC(\mathbb{R}) := BC(\mathbb{R},\mathbb{K})$. Similarly, we write $BC(\mathbb{R}^m)^N$ for $BC(\mathbb{R}^m,\mathbb{K}^N)$, which we think of either as $\mathbb{K}^N$-valued functions, or as $N$-tuples of $\mathbb{K}$-valued functions.
\medskip
Consider the nonlinear integral equation
\be
f(x) = \int_{-\infty}^{\infty} K(x,y)L(y,f(y)) dy \ ,\ee where
$K:\mathbb{R}\times\mathbb{R}\rightarrow\mathbb{K}$ and
$L:\mathbb{R}\times\mathbb{K}\rightarrow\mathbb{K}$
are some functions continuous in both arguments, and where it is understood that the integral is well-defined for $f(x)$ in some suitable class of functions on $\mathbb{R}$. Integral equations of this form are commonly referred to as \textsl{Hammerstein equations}, see e.g.\ \cite[I.3]{Krasnoselskii}
and \cite[Ch.\,16]{Polyanin}.
A function $f(x)$ solves this equation if and only if it is a fixpoint of the corresponding integral operator
\be
A[f](x) := \int_{-\infty}^{\infty} K(x,y)L(y,f(y)) dy \ .\ee When does such a map have a unique fixpoint? We will try to bring Banach's Fixed Point Theorem, which we now briefly recall, to bear on this question.
\begin{definition}
Let $X$ be a metric space. A map $A:X\rightarrow X$ is called a {\sl contraction} if there exists a positive real constant $\kappa<1$ such that \be
d_X(A(x),A(y)) \leq \kappa \, d_X(x,y)\ee
for all $x,y\in X$. If the condition is satisfied for $\kappa=1$, then $A$ is called {\sl non-expansive}.
\end{definition}
\begin{theorem}[Banach]\label{Banach}
Let $X$ be a complete metric space and $A:X\rightarrow X$ a contraction. Then $A$ has a unique fixpoint $x_\star\in X$. Furthermore, for every $x_0\in X$, the recursive sequence $x_n:=A(x_{n-1})$ converges to $x_\star$.
\end{theorem}
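As a toy illustration of the theorem (our own aside, not needed for the arguments below), consider the map $x\mapsto\cos x$ on the complete metric space $[0,1]$: it maps $[0,1]$ into $[\cos 1,1]\subset[0,1]$ and satisfies $|\frac{d}{dx}\cos x|\leq\sin 1<1$ there, so it is a contraction, and the recursive sequence converges to the unique fixpoint from any starting point:
\begin{verbatim}
import math

# x -> cos(x) is a contraction on [0,1]: |d/dx cos(x)| <= sin(1) < 1
x = 0.0
for n in range(100):
    x_new = math.cos(x)
    if abs(x_new - x) < 1e-15:
        break
    x = x_new
print(n, x)   # converges to the unique fixpoint x ~ 0.7390851
\end{verbatim}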
We now describe a general principle
which facilitates the application of
Banach's Theorem
to integral operators of Hammerstein type
(see e.g.\ \cite[16.6-1,\,Thm.\,3]{Polyanin}).
Suppose there exists a constant $\rho>0$ such that
\be\int_{-\infty}^{\infty} \left|K(x,y)\right| dy \leq \rho
\qquad \text{for all} \quad
x\in\mathbb{R} \ .
\ee
Suppose also that $L$ is Lipschitz continuous in the second variable,
i.e.\ there exists a constant $\sigma>0$ such that
\be\left|L(x,t)-L(x,s)\right| \leq \sigma \left|t-s\right|
\qquad \text{for all} \quad
x\in\mathbb{R}, \ s,t\in\mathbb{K}\ .\ee
Provided $A[f]$ defines a map $BC(\mathbb{R})\rightarrow BC(\mathbb{R})$,
we can use this to compute for any $f,g\in BC(\mathbb{R})$:
\begin{align}
\left\| A[f]-A[g] \right\|_\infty &= \sup_{x\in\mathbb{R}} \left| \int_{-\infty}^{\infty} K(x,t) \left( L(t,f(t))-L(t,g(t)) \right) dt \right| \nonumber\\
&\leq \sup_{x\in\mathbb{R}} \int_{-\infty}^{\infty}\left| K(x,t)\right| \left| L(t,f(t))-L(t,g(t)) \right| dt \nonumber\\
&\leq \sigma \, \sup_{x\in\mathbb{R}} \int_{-\infty}^{\infty}\left| K(x,t)\right| \left| f(t)-g(t) \right| dt
~\leq~ \sigma\rho \,\left\| f-g \right\|_\infty
\end{align}
If $\kappa:=\sigma\rho<1$, then $A[f]$ is a contraction with respect to the metric induced by the supremum norm $\left\|\cdot\right\|_\infty$. Recall that $BC(\mathbb{R})$ together with the norm $\left\|\cdot\right\|_\infty$ is a Banach space, and so the Banach Theorem~\ref{Banach} applies.
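To see this principle at work, here is a minimal numerical sketch (our own illustration, with an arbitrarily chosen kernel and nonlinearity): we discretise a Hammerstein operator with $K(x,y)=\frac14 e^{-(x-y)^2}$, so that $\rho=\frac{\sqrt{\pi}}{4}\approx 0.44$, and $L(y,t)=\cos t$, so that $\sigma=1$ and hence $\kappa\approx 0.44<1$:
\begin{verbatim}
import numpy as np

# discretised Hammerstein operator A[f](x) = int K(x,y) L(y,f(y)) dy
# with K(x,y) = exp(-(x-y)^2)/4  (rho = sqrt(pi)/4 ~ 0.443)
# and  L(y,t) = cos(t)           (sigma = 1), so kappa ~ 0.443 < 1
x = np.linspace(-10.0, 10.0, 2001)
dx = x[1] - x[0]
K = np.exp(-(x[:, None] - x[None, :])**2) / 4.0

def A(f):
    # Riemann-sum approximation of the integral operator
    return (K @ np.cos(f)) * dx

f = np.zeros_like(x)                 # arbitrary starting point
for n in range(100):
    f_new = A(f)
    err = np.max(np.abs(f_new - f))  # sup-norm distance of the iterates
    f = f_new
    if err < 1e-13:
        break
print(n, err)
\end{verbatim}
The sup-norm distance between consecutive iterates indeed shrinks at least by the factor $\kappa$ per step, in line with Theorem~\ref{Banach}.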
Consider now $N$ coupled nonlinear integral equations of Hammerstein type:
\be\vec{f}(x) = \int_{-\infty}^{\infty} \vec{K}(x,y)\cdot\vec{L}(y,\vec{f}(y)) \, dy \ ,\ee
where
$\vec{K}:\mathbb{R}\times\mathbb{R}\rightarrow \mathrm{Mat}(N,\mathbb{K})$ and
$\vec{L}:\mathbb{R}\times\mathbb{K}^N\rightarrow\mathbb{K}^N$
are continuous in both arguments.
Our arguments depend crucially on the right choice of norm for the functions $\vec{f} :\mathbb{R} \to \mathbb{K}^N$.
For $1\leq p \leq\infty$, we equip the space $BC(\mathbb{R})^N$ with the norm $\left\|\cdot\right\|_{\infty_p}$ given by \be\left\|\vec{f}\right\|_{\infty_p} := \sup_{x\in\mathbb{R}}\left\|\vec{f}(x)\right\|_p = \sup_{x\in\mathbb{R}}\left(\sum_{i=1}^N |f_i(x)|^p\right)^\frac{1}{p} \ ,\ee with the usual convention $\left\|\vec{v}\right\|_\infty=\max_{i}|v_i|$ in the case $p=\infty$.
The normed space $\left(BC(\mathbb{R})^N, \left\|\cdot\right\|_{\infty_p}\right)$ is a Banach space by the following standard lemma, which we state without proof.
\begin{lemma} \label{B(X,Y)Banach} Let $X$ be a metric space and $(Y,\left\|\cdot\right\|_Y)$ a Banach space.
Then also $\left(BC(X,Y),\left\|\cdot\right\|_\infty\right)$ is a Banach space.
\end{lemma}
We now give the $N$-component version of the general principle outlined above. The proof is the same, just with heavier notation.
\begin{lemma} \label{HammersteinContractionNdim} Let $\vec{K}\in\mathrm{Mat}(N,BC(\mathbb{R}^2))$ and let $\vec{L}:\mathbb{R}\times\mathbb{K}^N \rightarrow\mathbb{K}^N$ be a function. Suppose that
\begin{itemize}
\item
$\vec{K}$ has bounded integrals in the second variable, in the sense that
\be
\rho_{ij}:= \sup_{x\in\mathbb{R}}\int_{-\infty}^{\infty} \left|K_{ij}(x,y)\right| dy \,<\, \infty \qquad , \quad i,j=1,...,N \ .
\ee
\item
all the components of $\vec{L}$ are Lipschitz continuous in the second variable in the sense that there are constants $\sigma_j\geq 0$,
$j=1,\dots,N$, such that
\be
\left|L_j(y,\vec{v})-L_j(y,\vec{w})\right| \leq \sigma_j \left\|\vec{v}-\vec{w}\right\|_p \quad \text{for all} ~~
\vec{v},\vec{w}\in
\mathbb{K}^N,
\; y\in\mathbb{R} \ .\ee
\item
the matrix $\boldsymbol\rho=(\rho_{ij})_{i,j=1,...,N}$ and the vector $\boldsymbol\sigma=(\sigma_i)_{i=1,...,N}$ are
such that $\kappa := \left\|\boldsymbol\rho\cdot\boldsymbol\sigma\right\|_p < 1$.
\end{itemize}
Suppose that the following
integral operator defines a map
$\vec{A}:BC(\mathbb{R})^N \rightarrow BC(\mathbb{R})^N$,
\be\vec{A}[\vec{f}](x) := \int_{-\infty}^{\infty} \vec{K}(x,y)\cdot\vec{L}(y,\vec{f}(y)) \, dy \ .
\ee
Then $\vec{A}$ is a contraction
on $\left(BC(\mathbb{R})^N, \left\|\cdot\right\|_{\infty_p}\right)$.
\end{lemma}
\begin{proof} Let $\vec{f},\vec{g}\in BC(\mathbb{R})^N$. Then
\allowdisplaybreaks
\begin{align}
\left(\left\| \vec{A}[\vec{f}]-\vec{A}[\vec{g}] \right\|_{\infty_p}\right)^p &= \sup_{x\in\mathbb{R}} \sum_{i=1}^N \left|A_i[\vec{f}](x)-A_i[\vec{g}](x)\right|^p \nonumber\\
&= \sup_{x\in\mathbb{R}} \sum_{i=1}^N \left|\sum_{j=1}^N\int_{-\infty}^{\infty} K_{ij}(x,y)\left(L_j(y,\vec{f}(y))-L_j(y,\vec{g}(y))\right) dy\right|^p \nonumber\\
&\leq \sup_{x\in\mathbb{R}} \sum_{i=1}^N \left|\sum_{j=1}^N\int_{-\infty}^{\infty} \left|K_{ij}(x,y)\right|\big|L_j(y,\vec{f}(y))-L_j(y,\vec{g}(y))\big| dy\right|^p \nonumber\\
&\leq \left(\left\|\vec{f}-\vec{g}\right\|_{\infty_p}\right)^p \,\sum_{i=1}^N \left|\sum_{j=1}^N \rho_{ij}\sigma_j\right|^p
\nonumber \\
&= \left(\left\|\vec{f}-\vec{g}\right\|_{\infty_p}\right)^p \left(\left\|\boldsymbol\rho\cdot\boldsymbol\sigma\right\|_p\right)^p
=\left(\kappa \left\|\vec{f}-\vec{g}\right\|_{\infty_p}\right)^p \ .
\end{align}
Since $\kappa<1$, $\vec{A}$ is a contraction.
\end{proof}
\subsection{Unique solution to TBA-type equations}
\label{section:TBAuniqueness}
In this section we specialise the results of the previous section to integral equations of the form \eqref{TBAintro}. We will restrict ourselves to the case $\mathbb{K} = \mathbb{R}$.
Let us call a function $f:\mathbb{R}^N\rightarrow\mathbb{R}$ \textsl{$p$-Lipschitz-continuous} if it satisfies the Lipschitz condition with respect to $\left\|\cdot\right\|_p$. This terminology is slightly redundant: the equivalence of all $p$-norms ensures that $f$ is Lipschitz continuous either with respect to all of the $p$-norms or with respect to none. However, the optimal Lipschitz constants differ, which is important in view of the third condition in
Lemma~\ref{HammersteinContractionNdim}.
For differentiable functions that satisfy a Lipschitz condition, we can characterise the $p$-Lipschitz constant in terms of the gradient as follows:
\begin{lemma}\label{pLipschitz}
Let $f:\mathbb{R}^N\rightarrow\mathbb{R}$ be a continuous function whose gradient $\nabla f:\mathbb{R}^N\rightarrow\mathbb{R}^N$ is also a continuous function, and $1\leq p \leq\infty$. Then the following inequality holds:
\be\left|f(\vec{v})-f(\vec{w})\right| \leq \left(\sup_{\vec{u}\in\mathbb{R}^N}\left\|\nabla f(\vec{u})\right\|_q\right) \left\|\vec{v}-\vec{w}\right\|_p \hspace{1cm} \ \text{for all}~~ \vec{v},\vec{w}\in\mathbb{R}^N\ ,\ee where $q$ is defined via the relation $\frac{1}{p}+\frac{1}{q}=1$. In particular, $f$ is Lipschitz continuous if the gradient is bounded.
\end{lemma}
\begin{proof}
Let $\vec{v},\vec{w}\in\mathbb{R}^N$. By the mean value theorem there exists a $t\in[0,1]$ such that \be\left|f(\vec{v})-f(\vec{w})\right| = \left|\nabla f(\vec{x})\cdot(\vec{v}-\vec{w})\right|\ee for $\vec{x}=\vec{v}+t(\vec{w}-\vec{v})$. Applying first the triangle and then the H\"older inequality to the right hand side, one obtains
\begin{align}
\left|f(\vec{v})-f(\vec{w})\right| &\leq \sum_{i=1}^N\left|\nabla_i f(\vec{x})(v_i-w_i)\right| \nonumber\\
&\leq \left\|\nabla f(\vec{x})\right\|_q \left\|\vec{v}-\vec{w}\right\|_p \nonumber\\
&\leq \left(\sup_{\vec{u}\in\mathbb{R}^N}\left\|\nabla f(\vec{u})\right\|_q\right) \left\|\vec{v}-\vec{w}\right\|_p \ ,
\end{align}
which means that $\sup_{\vec{u}\in\mathbb{R}^N}\left\|\nabla f(\vec{u})\right\|_q$, if finite, is a $p$-Lipschitz constant for $f$. It is easy to see that it is the optimal $p$-Lipschitz constant.
\end{proof}
\begin{lemma} \label{StandardLfunctionIsLipschitz}
Let $A\ge 0$ and $a,b,c\in\mathbb{R}$. The function
$L:\mathbb{R}\rightarrow\mathbb{R}$ defined by
\be
L(x)=a\cdot\log\left(A+e^{bx}\right)-cx
\ee is Lipschitz continuous with Lipschitz constant $\sigma_L=\max(|c|,|ab-c|)$.
\end{lemma}
\begin{proof}
If any one of $a,b,A$ is zero, the statement is clear. Suppose $a,b,A\neq0$.
The derivative \be\frac{d}{dx}L(x)=\frac{ab}{Ae^{-bx}+1}-c\ee interpolates between $-c$ (for $bx\rightarrow -\infty$) and $ab-c$ (for $bx\rightarrow \infty$). It is also a monotonic function, since
\be\frac{d^2}{dx^2}L(x)= a \cdot \frac{Ab^2 e^{bx}}{(A+e^{bx})^2}\ ,
\ee
which is $\ge 0$
for $a \ge 0$ and $\le 0$ for $a \le 0$.
It thus follows that $\left|\frac{d}{dx}L(x)\right|\leq\max(|c|,|ab-c|)$.
Lemma~\ref{pLipschitz} completes the proof.
\end{proof}
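As a quick numerical spot check of this constant (our own aside, with arbitrarily chosen parameter values), one can sample the derivative and compare its supremum to $\sigma_L$:
\begin{verbatim}
import numpy as np

# spot check of sigma_L = max(|c|, |ab - c|) for arbitrary parameters
a, b, c, A = 1.3, -0.7, 0.4, 2.0
x = np.linspace(-60.0, 60.0, 200001)
dL = a*b / (A*np.exp(-b*x) + 1.0) - c       # derivative of L(x)
print(np.max(np.abs(dL)), max(abs(c), abs(a*b - c)))
\end{verbatim}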
\begin{proposition} \label{UniqueGroundstateNdim} Let $1\leq p,q \leq\infty$ such that $\frac{1}{p}+\frac{1}{q}=1$ holds. For $i,j=1,...,N$, let $\phi_{ij}\in
BC(\mathbb{R},\mathbb{R})
\cap L_1(\mathbb{R})$, $a_j\in BC_-(\mathbb{R},\mathbb{R})$
and $G_{ij},C_{ij}\in\mathbb{R}$, $w_i\in\mathbb{R}_{>0}$. Furthermore, set $M_{ij}:=\max\left(|C_{ij}|,|G_{ij}-C_{ij}|\right)$ and define \be \sigma_i:=\left( \sum_{j=1}^N (M_{ij}w_j)^q \right)^\frac{1}{q}
\quad , \qquad
\rho_{ij}:=\frac{1}{w_i}\int_{-\infty}^\infty\left|\phi_{ij}(y)\right|dy \ .\ee If
$\kappa:=\left\|\boldsymbol\rho\cdot\boldsymbol\sigma\right\|_p<1$, then the system of nonlinear integral equations given by
\be\label{TBA-type-contracting-integral}
f_i(x)=\sum_{j=1}^N\int_{-\infty}^{\infty} \phi_{ij}(x-y)\sum_{k=1}^N\left[G_{jk}\log\left(e^{-a_k(y)}+e^{f_k(y)}\right)-C_{jk}f_k(y)\right] dy
\ee
has exactly one bounded continuous solution,
$(f_{\star,1},\dots,f_{\star,N})\in BC(\mathbb{R},\mathbb{R})^N$.
\end{proposition}
\begin{proof}
We start by rewriting \eqref{TBA-type-contracting-integral} in terms of the rescaled functions $g_i(x)= f_i(x)/w_i$:
\be\label{contractingintegralforg}
g_i(x)=\sum_{j=1}^N\int_{-\infty}^{\infty} \frac{1}{w_i}\phi_{ij}(x-y)\sum_{k=1}^N\left[G_{jk}\log\left(e^{-a_k(y)}+e^{w_k g_k(y)}\right)-C_{jk}w_k g_k(y)\right] dy
\ee
The lower bound on the $a_k(y)$ ensures that the corresponding integral operator is a well-defined map $BC(\mathbb{R},\mathbb{R})^N\rightarrow BC(\mathbb{R},\mathbb{R})^N$. Consider the functions
\be
L_i(y,\vec{v})=\sum_{k=1}^N\left[G_{ik}\log\left(e^{-a_k(y)}+e^{w_k v_k}\right)-C_{ik}w_k v_k\right] \ .\ee The individual summands, as functions in $v_k$ for fixed $y$, are of the form considered in Lemma \ref{StandardLfunctionIsLipschitz}, from which one concludes \be\left|\frac{\partial}{\partial v_j} L_i(y,\vec{v})\right|\leq w_j\, \max\left(|C_{ij}|,|G_{ij}-C_{ij}|\right) \ .\ee According to Lemma \ref{pLipschitz}, the functions $L_i(y,\vec{v})$ are thus $p$-Lipschitz continuous in the second variable, with Lipschitz constants
\begin{align}
\sigma_i &= \sup_{\substack{\vec{v}\in\mathbb{R}^N \\ y\in\mathbb{R}}}\left\|\nabla_\vec{v} L_i(y,\vec{v})\right\|_q = \sup_{\substack{\vec{v}\in\mathbb{R}^N \\ y\in\mathbb{R}}}\left(\sum_{j=1}^N\left|\frac{\partial}{\partial v_j} L_i(y,\vec{v})\right|^q\right)^\frac{1}{q} \nonumber \\ &=\left(\sum_{j=1}^N\sup_{\substack{\vec{v}\in\mathbb{R}^N \\ y\in\mathbb{R}}}\left|\frac{\partial}{\partial v_j} L_i(y,\vec{v})\right|^q\right)^\frac{1}{q} =\left(\sum_{j=1}^N \left(w_j\, \max\left(|C_{ij}|,|G_{ij}-C_{ij}|\right)\right)^q\right)^\frac{1}{q} \ .
\end{align}
An application of
Lemma~\ref{HammersteinContractionNdim}
completes the proof.
\end{proof}
We note that the bound on $\kappa$ used in Proposition~\ref{UniqueGroundstateNdim} is actually independent of the functions $a_j$.
\subsection{Proof of Proposition \ref{TBAuniquenessDynkin}}\label{uniqunessproofTBA}
The proof makes use of the Perron-Frobenius Theorem in the following form
(see e.g.~\cite[Theorem 2.2.1]{BrouwerHaemers}):
\begin{theorem}[Perron-Frobenius]\label{PF}
Let $\vec{A}$ be a non-negative real-valued irreducible $N\times N$ matrix. Then the largest eigenvalue $\lambda_{\mathrm{PF}}$ of $\vec{A}$ is real and has geometric and algebraic multiplicity 1. Its associated eigenvector can be chosen to have strictly positive components
and is the only eigenvector with that property.
\end{theorem}
\medskip
We now turn to the proof of Proposition~\ref{TBAuniquenessDynkin}.
\medskip
Set $p=\infty$ and $q=1$. In terms of Proposition \ref{UniqueGroundstateNdim} we have $\vec{C}=\frac{1}{2}\vec{G}$, so that (using $G_{ij}\geq 0$)
$\vec{M}=\frac{1}{2}\vec{G}$.
By the Perron-Frobenius theorem, $\vec{G}$ has an eigenvector $\vec{w}$ with strictly positive components $w_i>0$ associated to its largest eigenvalue $\lambda_{\mathrm{PF}}$. With this choice of $w_i$ the constant
vector $\boldsymbol{\sigma}$ in
Proposition~\ref{UniqueGroundstateNdim}
is given by
\be\label{sig}
\boldsymbol\sigma
= \tfrac12 \vec{G}\vec{w}
= \tfrac12 \lambda_{\mathrm{PF}} \vec{w} \ .
\ee
Let us abbreviate $\Phi(x) := \Phi_{\frac{1}{2}\vec G}(x)$ with components $\phi_{ij}(x)$.
Due to Lemma \ref{Phinonneg} we know that
$|\phi_{ij}(x)|=\phi_{ij}(x)$
for all $x\in\mathbb{R}$ and $i,j=1,\dots,N$. Combining this with \eqref{phiC-infinf-integral} one computes the matrix $\boldsymbol{\rho}$ in
Proposition~\ref{UniqueGroundstateNdim} to be
\be
\rho_{ij} = \frac{1}{w_i} \int_{-\infty}^\infty\phi_{ij}(y)dy
=
\frac{1}{w_i}\big[ (2\,\mathbf{1} - \tfrac{1}{2}\vec{G})^{-1}\big]_{ij} \ .
\ee
Since $\vec{w}$ is an eigenvector of $(2\,\mathbf{1} - \tfrac{1}{2}\vec{G})^{-1}$ we find
\be\label{rhow}
\boldsymbol\rho \cdot \vec{w} = \frac{1}{2-\frac{\lambda_{\mathrm{PF}}}{2}} \cdot (1,1,\dots,1) \ .
\ee
Hence the contraction constant $\kappa$ in
Proposition~\ref{UniqueGroundstateNdim} is given by
\be\kappa = \left\|\boldsymbol\rho\cdot\boldsymbol\sigma\right\|_\infty = \frac{\lambda_{\mathrm{PF}}}{2}\max_{i=1,...,N}\left|\sum_{j=1}^N \rho_{ij}w_j\right| = \frac{\lambda_{\mathrm{PF}}}{2}\left|\frac{1}{2-\frac{\lambda_{\mathrm{PF}}}{2}}\right|=\left|\frac{\lambda_{\mathrm{PF}}}{4-\lambda_{\mathrm{PF}}}\right| \ .
\ee
It follows that $\kappa<1$ if and only if $\lambda_{\mathrm{PF}}<2$.
By Proposition~\ref{UniqueGroundstateNdim} there is a unique solution to \eqref{Csystem}, completing the proof of Proposition~\ref{TBAuniquenessDynkin}.
\subsection{Adjacency matrices of graphs}\label{G-is-graph-section}
In Proposition~\ref{TBAuniquenessDynkin}, the matrix $\vec{G}$ may have arbitrary non-negative real entries. This is in itself interesting because it makes the solution
$\vec{f}_\star$
depend on an additional set of continuous parameters.
However, in the application to integrable quantum field theory, $\vec{G}$ is usually the \textsl{adjacency matrix} of some (suitably generalised) graph.
Irreducibility is then equivalent to the corresponding generalised graph being strongly connected.
If $\vec{G}$ is symmetric and has entries $\lbrace 0,1\rbrace$ with zero on the diagonal, then by definition it is the adjacency matrix of a simple (undirected, unweighted) graph whose nodes $i$ and $j$ are connected if and only if $G_{ij}=1$.
In this case, strongly connected and connected are equivalent.
The only connected simple graphs with $\lambda_{\mathrm{PF}}<2$ are the graphs associated to the $ADE$ Dynkin diagrams
(while their affine versions are the sole examples satisfying $\lambda_{\mathrm{PF}}=2$), and their adjacency matrix is diagonalisable over $\mathbb{R}$ \cite[Thm.\,3.1.3]{BrouwerHaemers}.
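This is easy to verify numerically. The following sketch (our own illustration) computes $\lambda_{\mathrm{PF}}$ of the $A_N$ diagrams, compares it with the standard closed form $2\cos\frac{\pi}{N+1}$, and evaluates the resulting contraction constant $\kappa=\frac{\lambda_{\mathrm{PF}}}{4-\lambda_{\mathrm{PF}}}$ from Section~\ref{uniqunessproofTBA}:
\begin{verbatim}
import numpy as np

for N in (2, 3, 5, 8):
    # adjacency matrix of the A_N Dynkin diagram (a path graph)
    G = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
    lam = np.max(np.linalg.eigvalsh(G))      # Perron-Frobenius eigenvalue
    print(N, lam, 2*np.cos(np.pi/(N + 1)),   # closed form for A_N
          lam/(4.0 - lam))                   # contraction constant kappa
\end{verbatim}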
Consider now generalised graphs.
If we allow for loops ($G_{ii}\neq 0$, edges connecting a node to itself) and
multiple edges between the same nodes (where the entry $G_{ij}$ is the number of edges connecting nodes $i$ and $j$)
we additionally get the tadpole
$T_N=A_{2N}/\mathbb{Z}_2$ (defined as the adjacency matrix of $A_N$ with additional entry $G_{11}=1$).
If, moreover, the symmetry requirement is dropped ($G_{ij}\neq G_{ji}$), we may still associate to $\vec{G}$ a mixed multigraph (with some edges now being replaced by arrows). The $BCFG$ Dynkin diagrams provide examples of this type for which $\lambda_{\mathrm{PF}}<2$ holds. However, in contrast with the undirected (symmetric) case, they are far from exhaustive. For instance, a directed graph with $\lambda_{\mathrm{PF}}>2$ can be turned into a new directed graph with $\lambda_{\mathrm{PF}}<2$ by
subdividing all its edges often enough.\footnote{We are grateful to Nathan Bowler
for pointing this out to us.} If $\vec{G}$ is not even assumed to be integer-valued, then every non-example becomes an example after appropriate rescaling.
To summarise, we have the following special cases in which Proposition \ref{TBAuniquenessDynkin} applies:
\begin{corollary}
If $\vec{G}$ is the adjacency matrix of a finite
Dynkin diagram
($A_N$, $B_N$, $C_N$, $D_N$, $E_6$, $E_7$, $E_8$, $F_4$ or $G_2$)
or of the tadpole $T_N$,
then the system \eqref{Csystem} has exactly one bounded continuous solution.
\end{corollary}
\subsection{The case of a single TBA equation}\label{N=1-example-section}
Consider a single integral equation of TBA-type ($N=1$ with
$G_{11} = g\in(0,2)$,
$C_{11} = c\in(-2,2)$):
\be\label{TBAforN1}
f(x)=\int_{-\infty}^{\infty} \phi_c(x-y)\left[g\log\left(e^{-a(y)}+e^{f(y)}\right)-c f(y)\right] dy
\ee
This case is instructive: we do not have to worry about the choice of $p$ and $q$, nor does a rescaling $f(x)\rightarrow f(x)/w$ influence the contraction constant.
We compute the quantities in Proposition~\ref{UniqueGroundstateNdim} to be
\begin{align}
&\rho = \int_{-\infty}^\infty \phi_c(y)dy = \frac{1}{2-c}
\quad , \qquad
\sigma = M = \max(|c|, |g-c|) \ ,\nonumber\\
&\kappa(c) = \rho\sigma =\frac{\max(|c|, |g-c|)}{2-c} \ .
\end{align}
The most important case is $g=1$ (this case arises for example in the Yang-Lee model). The contraction constant for this case is shown in Figure~\ref{kappaofc}.
\begin{figure}
\begin{center}
\includegraphics[width=10cm]{KappaForN1withTitle.png}
\caption{Contraction constant for a single TBA equation at different values of $c$.}
\label{kappaofc}
\end{center}
\end{figure}
For $c<1$, our estimate guarantees a contraction.
The canonical TBA equation with $c=g=1$ corresponds to the marginal case $\kappa=1$, whereas the universal TBA with $c=0$ yields $\kappa=\tfrac{1}{2}$. The ``sweet spot'' is $c=\tfrac{1}{2}=\tfrac{1}{2}g$ where our estimate for the contraction constant
attains its minimum $\kappa=\tfrac{1}{3}$.
If we choose a different value $g\in(0,2)$, the sweet spot shifts, but the overall picture remains the same (the region of assured contraction is $c<1$). However, for $g\geq 2$, $\kappa$ is greater than or equal to one everywhere, and no region of assured contraction exists.
\begin{remark}\label{kappa-is-not-optimal}
It is noteworthy that $\kappa$ has virtually no practical bearing on the speed of convergence of the iterative numerical solution of equation \eqref{TBAforN1}. We solved it numerically for $g=1$, $a(x)=r\cosh(x)$ (the massive Yang-Lee model in volume $r$) for different values of $c\in[0,1]$ and $r\in(0,1]$, and the speed of convergence
(as measured by the number of iterations required to obtain a certain accuracy) increases almost linearly in $c$, instead of being governed by $\kappa$.
If $\kappa$ were the optimal contraction constant, we would instead expect to see the fastest convergence for $c=\tfrac12$.
\end{remark}
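For the reader who wants to experiment, here is a minimal sketch of such an iteration (our own simplified setup). We take $g=1$, $c=0$ and, purely as an assumption for this illustration, $s=\frac{\pi}{2}$, in which case the $c=0$ kernel is $\phi_0(x)=\frac{1}{2\pi\cosh x}$ (consistent with the normalisation $\int_{-\infty}^{\infty}\phi_0(y)\,dy=\frac{1}{2-c}=\frac12$ used above); for other values of $s$ or $c$ the kernel has to be replaced accordingly:
\begin{verbatim}
import numpy as np

# N = 1 TBA iteration for g = 1, c = 0 and a(x) = r*cosh(x);
# we assume s = pi/2, so phi_0(x) = 1/(2*pi*cosh(x)) with integral 1/2
r = 0.5
x = np.linspace(-15.0, 15.0, 2001)
dx = x[1] - x[0]
phi0 = 1.0/(2.0*np.pi*np.cosh(x[:, None] - x[None, :]))
a = r*np.cosh(x)

f = np.zeros_like(x)
for n in range(200):
    f_new = (phi0 @ np.log(np.exp(-a) + np.exp(f)))*dx
    if np.max(np.abs(f_new - f)) < 1e-10:
        break
    f = f_new
print(n, f[len(x)//2])    # iterations used and the central value f(0)
\end{verbatim}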
We now describe in more detail the proof given in
\cite[Sec.\,5]{FringKorffSchulz}, which concerns the case $g=c=1$ and $a(x)=r\cosh(x)$ with $r>0$,
in the above example.
As described above, our bound on $\kappa$ in this case is $1$, so that Proposition~\ref{UniqueGroundstateNdim} does not apply. We circumvent this by using $c=\tfrac12$ and proving (in Section~\ref{section:uniquenessY}, see Theorem~\ref{main-theorem-TBA} from the introduction) that the fixed point is independent of $c$ and unique in the range
$c \in (-2,2)$.
The argument of \cite{FringKorffSchulz} also uses the Banach Theorem but proceeds differently.
Namely, they exhaust the space of bounded continuous functions by subspaces with specific bounds,
$BC(\mathbb{R},\mathbb{R})=\bigcup_{q \in [e^{-r},1)} D_{q,r}$,
where
\be
D_{q,r}:=\Big\lbrace f\in BC(\mathbb{R},\mathbb{R})\,\Big| \left\|f\right\|_\infty \leq \log(\tfrac{q}{1-q})+r\Big\rbrace\ .
\ee
Using convexity of $D_{q,r}$ it is shown that
for $q \ge e^{-r}$
the corresponding integral operator maps $D_{q,r}$ to itself, and that the associated contraction constant is $\kappa=q$.
Hence, as expected one obtains $\kappa\rightarrow 1$ when $q\rightarrow 1$ (so that $\log(\tfrac{q}{1-q}) \to \infty$), but on each $D_{q,r}$ we have $\kappa<1$ and thus a proper contraction. This implies a unique solution on the whole space $BC(\mathbb{R},\mathbb{R})$.
It is also stated in \cite{FringKorffSchulz} that the generalisation to higher $N$ should be straightforward. This is less clear to us. In the examples from finite Dynkin diagrams we need all the extra freedom we introduce in Proposition~\ref{UniqueGroundstateNdim} in an optimal way in order to press our bound $\kappa$ under 1. If we take, for example, $\vec G = \vec C$ to be the adjacency matrix of the $A_N$ Dynkin diagram,
the quantities in Proposition~\ref{UniqueGroundstateNdim} take the values
\be
M_{ij} = G_{ij}
~~,\quad
\rho_{ij} = \left[(A_N)^{-1}\right]_{ij}
~~,\quad
\sigma_i =
\begin{cases} 1 &; i = 1,N \\ 2^{1/q} &; i = 2, \dots, N-1 \end{cases}
\quad ,
\ee
where $A_N$ denotes the Cartan matrix $2 \mathbf{1} - \vec G$. With some more work, from this one can estimate that for $N$ large enough one has
$\left\|\boldsymbol\rho\cdot\boldsymbol\sigma\right\|_p > \tfrac18 N^2$ (independent of $p$ and $q$).
Hence, as opposed to what happened at $N=1$, for larger $N$ the bound $\kappa$ is not 1 but grows
at least
quadratically with $N$.
It is not obvious to us how to obtain drastically better estimates by choosing subsets analogous to $D_{q,r}$ of $BC(\mathbb{R},\mathbb{R}^N)$.
However, it might be possible -- at least in the massive case, cf.\ Remark~\ref{remark-constantY}\,\ref{spectralradius2isbad} -- that one can combine the freedom to choose $\vec C$ and $w_i$ that we introduce with the method of \cite{FringKorffSchulz} to extend our results to the case of
spectral radius 2 or to
infinitely many coupled TBA equations (such as the $N\to\infty$ limit of classical finite Dynkin diagrams, where the spectral radius approaches 2). The idea of choosing subsets $D_{q,r}$ as above might push $\kappa$ strictly below 1.
We hope to return to these points in the future.
\section{Uniqueness of solution to the Y-system}
\label{section:uniquenessY}
For $\vec{G}\in\mathrm{Mat}(N,\mathbb{R})$, $\vec{C}\in\mathrm{Mat}_{<2}(N)$
and $\vec{a}\in BC_-(\mathbb{R},\mathbb{R})^N$ we define the map $\vec{L}_{\vec{C}} : BC(\mathbb{R},\mathbb{R})^N \to BC(\mathbb{R},\mathbb{R})^N$ as (recall the convention in \eqref{functions-of-a-vector-convention})
\be\label{LC[f]-def}
\vec{L}_{\vec{C}}[\vec{f}](x):=\vec{G}\cdot\log\left(e^{-\vec{a}(x)}+e^{\vec{f}(x)}\right)-\vec{C}\cdot\vec{f}(x) \ .
\ee
With this notation, the TBA equation \eqref{TBAintro} reads
\be\label{TBAshort}
\vec{f}(x) = \left(\Phi_{\vec{C}}\star \vec{L}_{\vec{C}}[\vec{f}]\right)(x)\ .
\ee
From Proposition~\ref{TBAuniquenessDynkin} we know that
\eqref{TBAshort} has a unique bounded continuous solution for one special choice of $\vec{C}$, namely $\vec{C}=\frac{1}{2}\vec{G}$. In this section we will apply the results of Section~\ref{section-soldefeqn} in order to translate that statement to other choices of $\vec{C}$, as well as to the associated Y-system.
In particular, we will prove Theorems~\ref{main-theorem-Y}, \ref{main-theorem-TBA} and Corollary~\ref{unique-constant-sol} from the introduction.
\subsection{Independence of the choice of $\vec C$}
In this subsection we fix
\be
\vec{G}\in\mathrm{Mat}(N,\mathbb{R}) \quad , \qquad
\vec{C}\in\mathrm{Mat}_{<2}(N) \quad , \qquad
\vec{a}\in BC_-(\mathbb{R},\mathbb{R})^N \ .
\ee
We stress that for the moment, we make no further assumptions on $\vec G$ (as opposed to Theorems~\ref{main-theorem-Y} and \ref{main-theorem-TBA}).
We will later need to apply Proposition~\ref{FuncRelToNLIE} to \eqref{TBAshort}. To this end we now provide a criterion for the components of $\vec{L}_{\vec{C}}[\vec{f}]$ to be H\"older continuous.
\begin{lemma}
Let $\vec{f}\in BC(\mathbb{R},\mathbb{R})^N$.
If the components of $\vec{f}$ and of $e^{-\vec{a}}$ are H\"older continuous, then the components of $\vec{L}_{\vec{C}}[\vec{f}]$ are H\"older continuous.
\end{lemma}
\begin{proof}
It is easy to see that the composition of H\"older continuous functions is again H\"older continuous, as is the sum of bounded H\"older continuous functions.
Therefore, and since $x\mapsto e^x$ is H\"older continuous on any compact subset of $\mathbb{R}$ -- in particular on the images of the bounded functions $f_m$ -- the functions $x\mapsto u_m(x):=e^{-a_m(x)}+e^{f_m(x)}$ are H\"older continuous. The bounds on $a_m$ and $f_m$ ensure that the image of $u_m$ is contained in some interval $[x_0,x_1]$ with $x_0,x_1>0$.
But $x\mapsto \log(x)$ is H\"older continuous on $[x_0,x_1]$, and so the functions $x\mapsto l_m(x):=\log(u_m(x))$ are
bounded and
H\"older continuous. From this it follows that the components of $\vec{L}_{\vec{C}}[\vec{f}](x)$,
\be
x \,\longmapsto\, \left[\,\vec{L}_{\vec{C}}[\vec{f}]\,\right]_n (x)
= \sum_{m=1}^N \left(G_{nm}l_m(x)-C_{nm}f_m(x)\right) \ ,
\ee
are H\"older continuous.
\end{proof}
We are careful not to impose overly strong assumptions on $\vec{a}$ here; in particular, we do not require the components of $\vec{a}$ to be H\"older continuous. For instance, the relevant example $a_m(x) \sim e^{\gamma x/s}$ from \eqref{asymptoticexample} is only locally H\"older continuous.
Meanwhile, the H\"older condition on $\vec{f}$ is, in fact, obtained from the TBA equation for free:
\begin{lemma} \label{alwayshoelder} Suppose $\vec{f}\in BC(\mathbb{R},\mathbb{R})^N$ is a solution of the TBA equation \eqref{TBAshort}. Then the components of $\vec{f}$ are
Lipschitz continuous.
\end{lemma}
\begin{proof}
Using Lemma~\ref{Phiproperties}\,\ref{matrixelements} one quickly verifies that one can
write the components of $\vec{f}$ as real linear combinations
of the form
\be
f_n(x)=\sum_{d,m} c_{d,m}^{(n)}F_d\big[\left[\,\vec{L}_{\vec{C}}[\vec{f}]\,\right]_m \big](x) \ ,
\ee
where $F_d[-]$ is the convolution functional defined in \eqref{Fd-convolution-def}, see also \eqref{phi*g-via-Fd}.
Lemma~\ref{DerivConvBounded} then implies that the derivatives $\frac{d}{dx}f_n(x)$ are bounded.
This shows the claim (cf.\ Lemma~\ref{pLipschitz}).
\end{proof}
Now we are in the position to apply Proposition~\ref{FuncRelToNLIE} to \eqref{TBAshort}.
The $\vec{C}$-independence will boil down to the following simple observation on the functional equation \eqref{ffuncrel}: the $\vec{C}$-dependence on the left and right hand side of
\be\label{ffC=LC-v1}
\vec{f}(x+is)+\vec{f}(x-is) -\vec{C}\cdot\vec{f}(x) = \vec{L}_{\vec{C}}[\vec{f}](x)
\ee
simply cancels, see \eqref{LC[f]-def}. Thus \eqref{ffC=LC-v1} is in particular equivalent to
\be\label{ffC=LC-v2}
\vec{f}(x+is)+\vec{f}(x-is) = \vec{L}_{\vec{0}}[\vec{f}](x)
\ee
\begin{proposition}\label{uniquenessFsystem}
Suppose that the components of $e^{-\vec{a}}$ are H\"older
continuous
and that
there exists $\vec{C}\in\mathrm{Mat}_{<2}(N)$, such that the TBA equation
\eqref{TBAshort}
has a unique solution $\vec{f}_\star$ in $BC(\mathbb{R},\mathbb{R})^N$.
Then:
\begin{enumerate}[label=\roman*)]
\item \label{TBAtoF}
$\vec{f}_\star$ is real analytic and can be continued to a function in $\mathcal{BA}(\mathbb{S}_s)^N$, which we also denote by $\vec{f}_\star$. It is the unique solution to the functional equation
\be\label{funeq}
\vec{f}(x+is)+\vec{f}(x-is) = \vec{L}_{\vec 0}[\vec{f}](x) \quad , \qquad x\in\mathbb{R} \ ,
\ee
in the space $\mathcal{BA}(\mathbb{S}_s)^N$
which also satisfies
$\vec{f}(\mathbb{R})\subset\mathbb{R}^N$.
\item \label{indepC} For any $\vec{C}'\in\mathrm{Mat}_{<2}(N)$,
$\vec{f}_\star$
is the unique solution to the TBA equation
\be\label{Cprimetba}
\vec{f}(x) = \left(\Phi_{\vec{C}'}\star \vec{L}_{\vec{C}'}[\vec{f}]\right)(x)
\ee
in the space $BC(\mathbb{R},\mathbb{R})^N$.
\end{enumerate}
\end{proposition}
\begin{proof}
\ref{TBAtoF} By definition, the components of $\vec{g}_\star(x):=\vec{L}_{\vec{C}}[\vec{f}_\star](x)$ are bounded real functions. Moreover, Lemma~\ref{alwayshoelder} ensures that they are Lipschitz continuous, so in particular H\"older continuous. It follows from
direction $\ref{ffg-inteq} \Rightarrow \ref{ffg-funrel}$ in
Proposition~\ref{FuncRelToNLIE} that $\vec{f}_\star\in\mathcal{BA}(\mathbb{S}_s)^N$, and that $\vec{f}_\star$ satisfies the functional relation
\be\label{auxfuneq}
\vec{f}_\star(x+is)+\vec{f}_\star(x-is) - \vec{C}\cdot \vec{f}_\star(x) = \vec{L}_{\vec{C}}[\vec{f}_\star](x)
\ee
for all $x\in\mathbb{R}$,
which, as we just said, is equivalent to \eqref{funeq}.
Now suppose there is another solution $\vec{f}'_\star\in\mathcal{BA}(\mathbb{S}_s)^N$ to
\eqref{funeq}, or, equivalently, \eqref{auxfuneq}, which satisfies $\vec{f}'_\star(\mathbb{R})\subset\mathbb{R}^N$.
By direction $\ref{ffg-funrel} \Rightarrow \ref{ffg-inteq}$
of Proposition~\ref{FuncRelToNLIE}, the restriction $\vec{f}'_\star\big|_\mathbb{R}$ is also a solution to the TBA equation \eqref{TBAshort}.
By our uniqueness assumption, we must have $\vec{f}'_\star\big|_\mathbb{R} = \vec{f}_{\star}\big|_\mathbb{R}$, and by uniqueness of the analytic continuation also $\vec{f}'_\star = \vec{f}_{\star}$ on $\overline{\mathbb{S}}_s$.
\medskip
\noindent
\ref{indepC} For any choice of $\vec{C}'\in\mathrm{Mat}_{<2}(N)$,
\eqref{auxfuneq} can be rewritten as
\be \label{ffC=LC-v3}
\vec{f}_\star(x+is)+\vec{f}_\star(x-is) - \vec{C}'\cdot \vec{f}_\star(x) = \vec{L}_{\vec C'}[\vec{f}_\star](x) \ .
\ee
Direction $\ref{ffg-funrel} \Rightarrow \ref{ffg-inteq}$ of Proposition~\ref{FuncRelToNLIE} shows that $\vec{f}_\star$ satisfies \eqref{Cprimetba}.
Suppose $\vec{f}'_\star \in BC(\mathbb{R},\mathbb{R})^N$ is another solution to \eqref{Cprimetba}. Then by Lemma~\ref{alwayshoelder}, $\vec{f}'_\star$ is H\"older continuous, and by direction $\ref{ffg-inteq} \Rightarrow \ref{ffg-funrel}$ of Proposition~\ref{FuncRelToNLIE} it satisfies \eqref{ffC=LC-v3}. But \eqref{ffC=LC-v3} is equivalent to \eqref{funeq}, whose solution is unique and equal to $\vec f_\star$ by part \ref{TBAtoF}.
\end{proof}
\subsection{Proofs of Theorems \ref{main-theorem-Y}, \ref{main-theorem-TBA} and of Corollary~\ref{unique-constant-sol}}
We now turn to the proof of the main results of this paper. We start with
part of
Theorem~\ref{main-theorem-TBA}, as this is used in the proof of Theorem \ref{main-theorem-Y}.
Then we show Theorem~\ref{main-theorem-Y} and subsequently the missing part of Theorem~\ref{main-theorem-TBA}.
Finally, we give the proof of Corollary~\ref{unique-constant-sol}.
\subsubsection*{Proof of Theorem \ref{main-theorem-TBA}, less uniqueness in part \ref{mainTBAsolvesY}}
\noindent
{\em Part \ref{mainTBAunique}:} First note that the assumptions in Proposition~\ref{TBAuniquenessDynkin} are satisfied. Thus there exists a unique solution $\vec f_\star$ to \eqref{TBAintro} in $BC(\mathbb{R},\mathbb{R})^N$ for the specific choice $\vec C = \tfrac12 \vec G$.
Therefore, the conditions of Proposition~\ref{uniquenessFsystem} are satisfied and part \ref{indepC} of that proposition establishes
existence and uniqueness of a solution $\vec f_\star \in BC(\mathbb{R},\mathbb{R})^N$ to \eqref{TBAintro} for any choice of $\vec C \in \mathrm{Mat}_{<2}(N)$, as well as
$\vec C$-independence
of that solution.
\medskip
\noindent
{\em Part \ref{mainTBAsolvesY} (without uniqueness):} By Proposition~\ref{uniquenessFsystem}\,\ref{TBAtoF}, $\vec{f}_\star$ can be analytically continued to a function in $\mathcal{BA}(\mathbb{S}_s)^N$, which we will also denote by $\vec{f}_\star$.
We define
\be\label{Y-via-f*-proof}
\vec{Y}(z):=\exp\!\big(\,\vec{a}(z)+\vec{f}_\star(z)\,\big) \ .
\ee
Let us denote the components of $\vec{f}_\star$ and $\vec{Y}$ by
$f_{\star,n}$ and $Y_{n}$, respectively. Note that $Y_n \in \mathcal{A}(\mathbb{S}_s)$.
It is immediate that the $Y_n$ satisfy properties \ref{Yproperties:real}--\ref{Yproperties:asymptotics} of Theorem~\ref{main-theorem-Y} (for property \ref{Yproperties:real} note that $f_{\star,n}$ and $a_n$ are real-valued on the real axis).
Furthermore,
as a consequence of the functional relations \eqref{linear-eqn-for-asympt} and \eqref{funeq} for $\vec{a}$ and $\vec{f}_\star$ respectively, the $Y_n$ solve \eqref{Y}:
\begin{align}
Y_{n}(x+is)Y_{n}(x-is) &= e^{a_n(x+is)+a_n(x-is)}e^{f_{\star,n}(x+is)+f_{\star,n}(x-is)}\nonumber\\
&= e^{\sum_m G_{nm}a_m(x)}e^{\sum_m G_{nm}\log\left(e^{-a_m(x)}+e^{f_{\star,m}(x)}\right)} \nonumber\\
&= \prod_m e^{G_{nm}a_m(x)}\left(e^{-a_m(x)}+e^{f_{\star,m}(x)}\right)^{G_{nm}} \nonumber\\
&= \prod_m \left(1 + Y_{m}(x) \right)^{G_{nm}} \ .
\end{align}
This proves that $\vec Y$ is a solution to \eqref{Y} which lies in $\mathcal{A}(\mathbb{S}_s)^N$ and satisfies properties \ref{Yproperties:real}--\ref{Yproperties:asymptotics} in Theorem~\ref{main-theorem-Y}. It remains to show that $\vec Y$ is the unique such solution.
This will be done below as an immediate consequence of Theorem~\ref{main-theorem-Y}, whose proof we turn to now.
\subsubsection*{Proof of Theorem \ref{main-theorem-Y}}
Existence of a $\vec Y \in \mathcal{A}(\mathbb{S}_s)^N$ which solves \eqref{Y} and satisfies properties \ref{Yproperties:real}--\ref{Yproperties:asymptotics} has just been proven above. The solution $\vec Y$ is given via \eqref{Y-via-f*-proof} in terms of the valid asymptotics $\vec a$ and the unique solution $\vec f_\star$ to \eqref{TBAintro} obtained in Theorem~\ref{main-theorem-TBA}\,\ref{mainTBAunique}.
It remains to show uniqueness of $\vec Y$.
Suppose there is another function $\vec{Y}'\in\mathcal{A}(\mathbb{S}_s)^N$ with the properties \ref{Yproperties:real}--\ref{Yproperties:asymptotics}. Since $\mathbb{S}_s$ is a simply connected domain and the components $Y_{n}'(z)$
have no roots in $\overline{\mathbb{S}}_s$ (property \ref{Yproperties:roots}), there exists a function $\vec{h}\in\mathcal{A}(\mathbb{S}_s)^N$, such that $\vec{Y}'(z)=\exp(\vec{h}(z))$ for all $z\in\overline{\mathbb{S}}_s$.
In fact, property \ref{Yproperties:real} (real\ \& positive) allows one to choose this function in such a way that $\vec{h}(\mathbb{R})\subseteq\mathbb{R}^N$.
Consequently, the function $\vec{f}'_\star(z):=\vec{h}(z)-\vec{a}(z)$ is in $\mathcal{A}(\mathbb{S}_s)^N$ and satisfies $\vec{f}'_\star(\mathbb{R})\subseteq\mathbb{R}^N$.
Due to property \ref{Yproperties:asymptotics} (asymptotics),
$\vec{f}'_\star\in\mathcal{BA}(\mathbb{S}_s)^N$.
As a consequence of the Y-system \eqref{Y} and the functional relation \eqref{linear-eqn-for-asympt} for the asymptotics, we have
\begin{align}
e^{f'_{\star,n}(x+is)+f'_{\star,n}(x-is)} &= e^{-a_n(x+is)-a_n(x-is)}\,Y'_n(x+is)Y'_n(x-is) \nonumber \\
&= \prod_m e^{-G_{nm}a_m(x)} \left(1 + Y'_m(x) \right)^{G_{nm}} \nonumber \\
&=\prod_m \left(e^{-a_m(x)} + e^{f'_{\star,m}(x)} \right)^{G_{nm}} \ .
\end{align}
Hence, due to continuity of $\vec{f}'_\star(x+is)+\vec{f}'_\star(x-is)- \vec{L}_{\vec 0}[\vec{f}'_\star](x)$
there exists $\vec{v}\in\mathbb{Z}^N$, such that
\be\label{expfsystem-aux}
\vec{f}'_\star(x+is)+\vec{f}'_\star(x-is)+2\pi i \vec{v} = \vec{L}_{\vec 0}[\vec{f}'_\star](x) \ .
\ee
But since $\vec{f}'_\star(\mathbb{R})\subseteq\mathbb{R}^N$, the Schwarz reflection principle $\vec{f}'_\star(x-is) = \overline{\vec{f}'_\star(x+is) }$ allows \eqref{expfsystem-aux} to be rewritten as
\be
2\,\mathrm{Re}\,\vec{f}'_\star(x+is)+2\pi i \vec{v} = \vec{L}_{\vec 0}[\vec{f}'_\star](x) \ .
\ee
Since the right hand side is real, it follows that $\vec{v}=0$. Thus $\vec{f}'_\star$ solves \eqref{funeq}.
As in the previous proof, by Proposition~\ref{TBAuniquenessDynkin} we can apply Proposition~\ref{uniquenessFsystem}. Part \ref{TBAtoF} of the latter proposition states that
$\vec{f}'_\star$ is the unique solution to \eqref{funeq} in $\mathcal{BA}(\mathbb{S}_s)^N$ which maps $\mathbb{R}$ to $\mathbb{R}^N$.
Part \ref{indepC} states that $\vec{f}'_\star$ solves the TBA equation \eqref{Cprimetba} for any choice of $\vec C'$, so in particular it solves \eqref{TBAintro}.
But from Theorem~\ref{main-theorem-TBA}\,\ref{mainTBAunique} we know that $\vec f_\star$ is the unique solution to \eqref{TBAintro} in $BC(\mathbb{R},\mathbb{R})^N$, and hence $\vec{f}'_\star=\vec{f}_\star$ on $\mathbb{R}$ (and therefore, by uniqueness of the analytic continuation,
also on $\overline{\mathbb{S}}_s$).
This completes the proof of Theorem~\ref{main-theorem-Y}.
\subsubsection*{Proof of Theorem \ref{main-theorem-TBA}, uniqueness in part \ref{mainTBAsolvesY}}
This is now immediate from Theorem~\ref{main-theorem-Y}, as the solution to the Y-system in $\mathcal{A}(\mathbb{S}_s)^N$ satisfying \ref{Yproperties:real}--\ref{Yproperties:asymptotics} is unique.
This completes the proof of Theorem~\ref{main-theorem-TBA}.
\subsubsection*{Proof of Corollary~\ref{unique-constant-sol}}
We only need to show that for $\vec a=0$, the unique solution $\vec f_\star$
from Theorem~\ref{main-theorem-TBA} is constant. By part \ref{mainTBAsolvesY} of that theorem, $\vec Y$ is then constant, too.
By Theorem~\ref{main-theorem-TBA}\,\ref{mainTBAunique}, the solution $\vec f_\star$ is independent of $\vec C$, and in particular is equal to the unique solution found in Proposition~\ref{TBAuniquenessDynkin} for $\vec C = \tfrac12 \vec G$.
In the proof of Proposition~\ref{TBAuniquenessDynkin} in Section~\ref{uniqunessproofTBA} it was verified that the integral operator
$I : BC(\mathbb{R},\mathbb{R})^N \to BC(\mathbb{R},\mathbb{R})^N$ from \eqref{contractingintegralforg} is a contraction. Explicitly,
\be \label{contTBAoperator}
I_i[\vec g](x) :=
\sum_{j=1}^N
\int_{-\infty}^{\infty} \frac{1}{w_i}\phi_{ij}(x-y)\sum_{k=1}^NG_{jk}\left[\log\left(e^{-a_k(y)}+e^{w_k g_k(y)}\right)-\tfrac12 w_k g_k(y)\right] dy
\ee
where $\phi_{ij}$ are the entries of $\Phi_{\frac12 \vec G}$ and $\vec w$ is the Perron-Frobenius eigenvector of $\vec{G}$.
The relation between $\vec g$ and the functions $\vec f$ in the TBA equation is $f_i(x) = w_i g_i(x)$.
By assumption we have $\vec a = 0$. Clearly, if also $\vec g$ is constant, so is $I[\vec g]$.
This shows that the operator $I$ preserves the space of constant functions (for $\vec a=0$). Hence the unique fixed point $\vec g_\star$ of $I$ (and hence the unique solution $\vec f_\star$ to \eqref{TBAintro}) must be a constant function.
This completes the proof of Corollary~\ref{unique-constant-sol}.
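As a concrete numerical aside (ours), the constant fixed point is easily computed: for $\vec a=0$ a constant solution satisfies $2\vec{f}=\vec{G}\cdot\log(1+e^{\vec f})$, and the corresponding iteration converges since its linearisation $\frac12\vec{G}\cdot\mathrm{diag}\big(\frac{e^{f_n}}{1+e^{f_n}}\big)$ has spectral radius smaller than $\frac{\lambda_{\mathrm{PF}}}{2}<1$:
\begin{verbatim}
import numpy as np

# constant solution for a = 0:  2 f = G log(1 + e^f), here for A_4
N = 4
G = np.diag(np.ones(N - 1), 1) + np.diag(np.ones(N - 1), -1)
f = np.zeros(N)
for _ in range(1000):
    f = 0.5*(G @ np.log1p(np.exp(f)))
Y = np.exp(f)
# verify the constant Y-system  Y_n^2 = prod_m (1 + Y_m)^(G_nm)
print(Y, Y**2 - np.exp(G @ np.log1p(Y)))
\end{verbatim}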
\section{Discussion and outlook}\label{section-outlook}
In this section, we will make additional
comments on our results and discuss possible further investigations, in particular with reference to the physical background.
\medskip
First of all, let us mention that the existence of a unique solution to Y-systems or TBA equations, even if the latter arise from a physical context, is by no means clear:
\begin{itemize}
\item \emph{Existence}: as already mentioned earlier, the constant Y-system \eqref{Yconst} has no non-negative solution if the spectral radius of $\vec G$ is greater than or equal to 2 \cite{Tateo:DynkinTBAs}.
It seems likely that this generalises to solutions with asymptotics $\vec{a} = \vec{w}e^{\pm\gamma x/s}$, cf.\ \eqref{asymptoticexample}.
\item \emph{Uniqueness}: when relaxing the reality condition $\vec{Y}(\mathbb{R})\subset\mathbb{R}$, uniqueness generally fails to hold (see \cite[Fig.\,1]{DoreyTateo:YL} for the case $N=1$).
Moreover, there exist Y-systems of some more general form for which stability investigations show that the associated TBA integral operator, restricted to constant functions, is not a contraction,
and in fact may display chaotic behaviour upon iteration \cite{CastroAlvaredoFring}.\footnote{Note that our results do not imply that the TBA integral operators are contracting, except in the specific case in Section~\ref{uniqunessproofTBA} to which the Banach Theorem is applied. However, our bound on the contraction constant $\kappa$ is certainly not optimal (see Remark~\ref{kappa-is-not-optimal}) and the TBA integral operator may well be contracting even if our bound yields $\kappa>1$. The results of \cite{CastroAlvaredoFring} show that sometimes it is not contracting.}
\end{itemize}
\medskip
We will conclude this work with a brief outline of possible future investigations. Our results already cover a significant class of integrable models \cite{Zamo:ADE,KlassenMelzer91}. In physical terms, the state corresponding to the unique solution with no zeros in $\overline{\mathbb{S}}_s$ is the
ground state.
Two main directions in which to extend our results are to a) consider excited states, and b) treat situations associated with more general models. We will briefly comment on both of these.
\subsubsection*{a) Including excited states}
To include excited states one has to allow for $Y$-functions which
have roots in $\overline{\mathbb{S}}_s$.
In this case,
the TBA equation involves an additional term which depends on the positions of these roots \cite{KluemperPearce,DoreyTateo:YL,BLZ4}.
It seems possible that existence and
uniqueness of a solution to the TBA equation for a generic set of root positions can be established with similar techniques as in the present paper.
However, to produce a solution to the Y-system with a sufficiently far analytic continuation, one needs to impose additional constraints.
It would be very interesting to understand if some general statements about solutions to these constraints can be made.
In examples, these solutions are parametrised by a discrete set of ``quantum numbers''.
In the asymptotic limit $r\to \infty$ of Y-systems with $\vec{a}(x) = r \cosh(\gamma x/s) \, \vec{w}$ (relativistic scattering theories in volume $r$) these quantum numbers are expected to coincide with the Bethe-Yang quantum numbers \cite{YangYang69} which parametrise solutions of the Bethe ansatz equations. In the limit $r\to 0$, quantisation conditions in terms of Virasoro states have been conjectured for the Yang-Lee model ($N=1,\vec{G}=1$), see \cite{BajnokDeebPearce}. A related $N=1$ model, albeit with a slightly deformed Y-system (see below), is the Sinh-Gordon model for which a conjecture on the classification of states (for all $r$) in terms of solutions to the Y-system
has been given in
\cite{Teschner}.
\subsubsection*{b) More general Y-systems}
It is fair to ask if the approach presented in this paper is flexible enough to make contact with a larger number of physical models. This would require us to consider different conditions on $\vec{G}$
as well as
Y-systems or TBA equations of a more general form.
\medskip
There are several generalisations which still fit the form \eqref{Y}:
\begin{itemize}
\item
As mentioned at the end of Section~\ref{N=1-example-section},
with the method used in \cite{FringKorffSchulz}
it may be possible to treat cases of slightly more general $\vec{G}$ giving rise to $\kappa=1$, where our results
no longer apply.
\item Giving up the reality requirement in a controlled way would
for example allow for a treatment of Y-systems with chemical potentials, where a constant imaginary vector is added to $\vec{a}$
(see e.g.\ \cite{KlassenMelzer91,Fendley92}).
\end{itemize}
There are also more general forms of \eqref{Y}, which would be of interest. For example:
\begin{itemize}
\item Y-systems with a second shift parameter $t$ such as
\be \label{affineToda}
Y_n(x+is)Y_n(x-is)=\frac{\left(1+Y_n(x+it)\right)\left(1+Y_n(x-it)\right)}{\prod_{m=1}^N \left(1+\frac{1}{Y_m(x)}\right)^{H_{nm}}} \,
\ee
where $\vec{H}$ is the adjacency matrix of a finite Dynkin diagram. This specific type of Y-system appears in the context of Affine Toda Field Theories \cite{Martins,FringKorffSchulz} whose simplest representative, the Sinh-Gordon model, has $\vec{H}=0$
and bears some resemblance to \eqref{Y}.
\item The case of two simple Lie algebras giving rise to a Y-system of the form
\be
Y_{n,m}(x+is)Y_{n,m}(x-is)=\frac{\prod_{k} \left(1+Y_{k,m}(x)\right)^{G_{nk}}}{\prod_{l} \left(1+\frac{1}{Y_{n,l}(x)}\right)^{H_{ml}}}
\ee
via their Dynkin diagrams $\vec{G}$ and $\vec{H}$.
In applications, often many of the $Y_{n,m}$ are required to be trivial. One example, albeit with an infinite number of Y-functions, is the famous ``T-hook'' of the AdS/CFT Y-system
(see e.g.\ \cite{Bajnok:TBAreview} for more details and references).
The physically relevant solutions in this case have, however, rather complicated analytical properties involving also branch cuts.
\end{itemize}
\section{Introduction}
\setcounter{equation}{0}
Among the many formulations of Quantum Mechanics the path-integral method
(see e.g. \cite{Feyn}, \cite{Klein}, \cite{engpath}) is a very attractive one as
it gives rise to new insights, approximation schemes and
can be readily generalized to Many-Body Physics and Quantum Field Theory. However,
for numerical evaluation
a path (i.e. functional, i.e. infinite-dimensional) integral poses an extraordinary challenge requiring at present (unphysical) imaginary time and Monte-Carlo methods. Still, due to the heavy oscillations of the integrand in real time, scattering cannot be treated in this way (apart from low-energy quantities) and one has to resort to
semi-classical approximation methods (see, e.g. Ref. \cite{Makri}).
In a broader context "numerical integration" is essential in all quantitative sciences,
not only in theoretical and computational physics. It is "a wide field" \cite{Hahn} with a vast literature which cannot be cited adequately here (a standard textbook is Ref. \cite{Rabin})
and many different methods.
It is somewhat surprising that after centuries of work on numerical integration rules
(associated with the names of Kepler, Newton, Simpson, Lagrange, Gauss and others)
a new contender for the "best" all-purpose method appeared:
the double exponential (DE) integration (or tanh-sinh quadrature) method of Takahasi and Mori \cite{TaMori} (see also Ref. \cite{Mori-discov}). This method has become
a standard tool to obtain high-precision results with only $ \> {\cal O}(N \ln N) \> $ function calls \cite{BBBZ}. This is due to
a transformation $ x = \phi(t) $ which maps a finite interval into an infinite one
and leads to a new integrand which decays asymptotically with double exponential rate so that
the integral can be best approximated by the extended trapezoidal rule.
Although other integration schemes (like Gaussian quadrature) are superior for smooth integrands, it can be argued that the DE-method comes close to an all-purpose quadrature scheme \footnote{A caveat: in certain applications the method fails to give
accurate results \cite{num osc}. This is not surprising as realistically one may doubt whether
a "best" scheme for {\it all} functions exists.}.
Ooura and Mori \cite{OoMori} have extended this scheme to oscillating integrands and found spectacular results.
For example, Euler's constant is obtained from integrating
\begin{equation}
\int_0^{\infty} dx \> \lrp - \ln x \rrp \, \sin x \> = \> \gamma_E
\label{gamma_E}
\end{equation}
to an absolute accuracy of $ 10^{-12} $ with only $80$ function calls
(see Table 2 in Ref. \cite{OoMori})
despite the fact that the integral only exists as a limiting case
\begin{equation}
\lim_{\eta \to 0} \int_0^{\infty} dx \> \lrp - \ln x \rrp \, \sin x \, e^{-\eta x} \> .
\end{equation}
Thus the double exponential method implicitly introduces a suitable regularization scheme.
This makes it a very
appealing quadrature rule for quantum-mechanical path integrals in real time
\begin{equation}
\int {\cal D} x(t) \> \exp \lcp i \int_{t_a}^{t_b} dt \lsp \frac{m}{2} \dot x^2(t) - V(x(t)) \rsp \rcp
\end{equation}
where the convergence of these
(Fresnel-type) integrals is ensured by Feynman's ``$ i 0^+ $-rule"
which either modifies the mass
$ m \to m + i 0^+ $ or the squared frequency in a harmonic oscillator potential
$ \> V = m \Omega^2 x^2/2 \> $
as $ \Omega^2 \to \Omega^2 - i 0^+ $. As a free field theory can be seen as a system of coupled oscillators
it is not surprising that the latter prescription (here for the squared mass) is needed to specify
the singularities of Green functions.
\vspace{0.1cm}
It is the purpose of this work to show that a direct evaluation of
multi-dimensional oscillating integrals is possible by applying the double exponential
methods alluded to before. This will be demonstrated by calculating high-dimensional Gauss-Fresnel integrals and
by evaluating the so-called Maslov phase for a non-relativistic particle in a harmonic oscillator
well where the exact solutions are readily available.
\section{Double exponential integration}
\setcounter{equation}{0}
Ooura and Mori use a transformation $ \phi(t) $ such that
the derivative $ \phi'(t)$ goes to zero double exponentially for $ t \to -\infty $ while the function
$ \phi(t) $ approaches $ t $ double exponentially as $ t \to +\infty $. The latter property ensures that
for large $ t $ the zeroes of the oscillating integrand are nearly hit leading to the fast convergence of the scheme.
Ooura has later given an improved version of this method \footnote{See Fig. 3 in Ref. \cite{OMrobust} for a comparison
with the old method for the integral in Eq. \eqref{gamma_E}.}
which I will use in the following, viz.
"Approximation formula 2" (Eq. (3.5) in Ref.\cite{Ooura})
with the simplifications $ \omega_0 = \omega $ and $ N_+ = N_- \> \equiv \> k_{\rm max} $. Although originally only given for
$ \> \omega > 0 \> $ one can extend it also to the case $ \omega < 0 $ by complex conjugation provided the function
$ \> f(x) \> $ is real. Thus
\begin{equation}
\boxed{
\int_0^{\infty} dx \> f(x) \, e^{i \omega x} \> \simeq \>
\frac{\pi}{|\omega|} \> \> \sum_{k = - k_{\rm max}}^{k = + k_{\rm max}} \! f \lrp \frac{\pi}{|\omega| h} \phi(k h )
\rrp \> \phi'( k h ) \, \Big \{ \exp \lsp i \, {\rm sgn}(\omega)\, \frac{\pi}{h} \,
\phi(k h)\rsp - (-1)^k \Big \}
\label{Ooura form2}
}
\end{equation}
where $ \> {\rm sgn}(\omega) \> := \> \omega/|\omega| \> $ and
\begin{equation}
\phi(t) \> = \> \frac{t}{1 - \exp\lsp -2 t - \alpha \lrp 1 - e^{-t} \rrp - \beta \lrp e^t - 1 \rrp \rsp } \> .
\end{equation}
Ooura's parameters $ \alpha, \beta $ are $ \omega$-dependent:
\begin{equation}
\beta \> = \> \frac{1}{4} \> , \qquad 0 \> < \> \alpha \> = \> \beta \> \sqrt{\frac{4 \,|\omega| h}{4 \, |\omega| h +
\ln \lrp 1 + \frac{\pi}{|\omega| h} \rrp}} \quad < \quad \beta \> .
\label{alpha beta}
\end{equation}
Note that the case $ \> \omega = 0 \> $ is undefined in Ooura's integration rule although
one would then expect that it reduces to the standard double exponential integration rule
for a half-infinite interval. However,
questions about the allowed frequency range or the class of admissible functions are outside the scope of the present work.
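For orientation, Eq. \eqref{Ooura form2} can be transcribed almost literally into a few lines of Python. The following sketch is a minimal transcription of my own (not Ooura's reference implementation); the $ k = 0 $ term uses the limits $ \phi(0) = 1/(2+\alpha+\beta) $ and $ \phi'(0) = 1/2 + (\alpha - \beta)/[2 (2+\alpha+\beta)^2] $ which follow from a short Taylor expansion of $ \phi(t) $ around $ t = 0 $:
\begin{verbatim}
import numpy as np

def ooura_quad(f, omega, h, kmax):
    # I = int_0^infty f(x) exp(i*omega*x) dx ;  f real, omega != 0
    w = abs(omega)
    beta = 0.25
    alpha = beta*np.sqrt(4*w*h/(4*w*h + np.log(1 + np.pi/(w*h))))
    k = np.arange(-kmax, kmax + 1)
    t = k*h
    a  = 2*t - alpha*np.expm1(-t) + beta*np.expm1(t)   # a(t) in phi = t/(1-e^{-a})
    ap = 2 + alpha*np.exp(-t) + beta*np.exp(t)         # a'(t)
    D  = -np.expm1(-a)                                 # 1 - exp(-a), stable
    Ds = np.where(D != 0.0, D, 1.0)                    # dummy entry at t = 0
    phi  = np.where(t != 0.0, t/Ds, 1.0/ap)            # t -> 0 limit: 1/(2+alpha+beta)
    dphi = np.where(t != 0.0, (1.0 - t*ap*np.exp(-a)/Ds)/Ds,
                    0.5 + (alpha - beta)/(2.0*ap**2))  # t -> 0 limit of phi'(t)
    brace = np.exp(1j*np.sign(omega)*np.pi/h*phi) - (-1.0)**k
    return np.pi/w*np.sum(f(np.pi/(w*h)*phi)*dphi*brace)
\end{verbatim}
Here the step size $ h $ is taken as input; in practice it is fixed from $ k_{\rm max} $ and the accuracy parameter $ \epsilon_{\rm Ooura} $ as described in Appendix B.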
\vspace{0.2cm}
\section{Application I: Multi-dimensional Gauss-Fresnel integral}
\setcounter{equation}{0}
I define the simplest $N$-dimensional Gauss-Fresnel integral as
\begin{equation}
G\!F_N(\omega) \> := \> \int d^N y \> \exp \lrp i \omega \sum_{k=1}^N y_k^2 \rrp \> , \quad \omega \quad {\rm real} \> .
\end{equation}
Its analytical value is obtained by a regularization
\begin{equation}
G\!F_N(\omega) \> = \> \lim_{\eta \to 0} \, \prod_{k=1}^N \lrp \int dy \> \exp \lsp i \lrp \omega + i \eta \rrp y^2 \rsp \rrp\> = \> \lim_{\eta \to 0} \lrp \frac{\pi i}{\omega + i \eta} \rrp^{N/2} \> = \> \lrp \frac{\pi}{|\omega|} \rrp^{N/2} \, \exp \lrp i \, {\rm sgn}(\omega) \,\pi \frac{N}{4}\rrp
\label{GF}
\end{equation}
where the positive sign of the square root has to be taken (for a thorough mathematical treatment see, e.g. Refs. \cite{NagaMiya1}, \cite{NagaMiya2}).
\vspace{0.2cm}
I will calculate the Gauss-Fresnel integral in {\it hyperspherical co-ordinates}. As the integrand only depends on
the hyperradius $ \> R = (\sum_{k=1}^N y_k^2 )^{1/2} \> $ one obtains
\begin{equation}
G\!F_N(\omega) \> = \> S_{N-1} \cdot \int_0^{\infty} dR \, R^{N-1} \, \exp \lrp i \omega R^2 \rrp
\end{equation}
where
\begin{equation}
S_{N-1} \> = \> \frac{2 \pi^{N/2}}{\Gamma (N/2)}
\label{S_(N-1)}
\end{equation}
is the $(N-1)$-dimensional surface of the $N$-sphere. The variable change $ R = \sqrt{y} $ brings the integral into the form
\begin{equation}
G\!F_N(\omega) \> = \> \frac{1}{2} S_{N-1} \int_0^{\infty} dy \> y^{N/2-1} \> e^{i \omega y}
\end{equation}
to which Ooura's numerical integration formula \eqref{Ooura form2} will be applied. Note that
this application is much more demanding than the numerical evaluation of Euler's constant via Eq. \eqref{gamma_E} as now the integrand grows power-like instead of logarithmically.
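Building on the sketch of Section 2, a hypothetical driver for this check (function names are mine) could read:
\begin{verbatim}
from scipy.special import gamma

def gauss_fresnel_num(N, omega, h, kmax):
    # GF_N = (S_{N-1}/2) * int_0^infty y^{N/2-1} exp(i*omega*y) dy
    S = 2.0*np.pi**(N/2)/gamma(N/2)
    return 0.5*S*ooura_quad(lambda y: y**(N/2 - 1), omega, h, kmax)

def gauss_fresnel_exact(N, omega):
    return (np.pi/abs(omega))**(N/2)*np.exp(1j*np.sign(omega)*np.pi*N/4)

# relative complex deviation:
# abs((gauss_fresnel_num(N, 1.0, h, kmax) - gauss_fresnel_exact(N, 1.0))
#     / gauss_fresnel_exact(N, 1.0))
\end{verbatim}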
\vspace{0.2cm}
Fig. \ref{abb:1} shows how much the numerical result (for $ \> \omega = \pm 1 \> $ ) deviates from the exact one. Here and in the following I use as metric the "relative complex deviation"
\begin{equation}
\delta_{\rm rel}[G\!F] \> := \> \lvl \frac{G\!F_{\rm num} - G\!F_{\rm exact}}{G\!F_{\rm exact}} \rvl
\label{rel complex dev}
\end{equation}
\vspace{0.2cm}
\refstepcounter{abb}
\begin{figure}[htbp]
\begin{center}
\includegraphics[angle=0,scale=0.5]{figure_1}
\label{abb:1}
\end{center}
\vspace*{-0.5cm}
{\bf Fig. \arabic{abb}} : {\small Relative complex deviation (as defined in Eq. \eqref{rel complex dev}) of the Gauss-Fresnel integral evaluated numerically with Ooura's method
from the exact value for $ \omega = \pm 1 $ and increasing dimension $N$.
Results for different accuracy parameters (see Table 1) are depicted.}
\end{figure}
\vspace{1cm}
\noindent
which quantifies the agreement of the numerical calculation compared with the exact result both
for the absolute magnitude as for the phase of a complex quantity. Obviously a very good agreement
with the exact values is obtained in a wide range of dimensions -- without any regularization!
\section{Application II: Maslov correction for the harmonic oscillator}
\setcounter{equation}{0}
It has long been known \cite{Mas} and is covered in many textbooks (e.g. in Ref. \cite{Schul}, ch. 17 or Ref. \cite{Klein}, p. 100) but frequently overlooked \footnote{In particular in some quantum field theory texts where the harmonic oscillator serves as an example of exact path integration, e.g. in Refs. \cite{Das}, \cite{Wen}, \cite{BTov} or in the prepublication version of Ref. \cite{Woit}. A detailed exposition of the Maslov correction is given in Ref. \cite{Horv}.
An early proof for this phenomenon was provided by Pechukas \cite{Pech} who evaluated the time-evolution operator at time $ \pi < \Omega T < 2 \pi $ with the help of the composition law
$ \> \hat U(T,0) = \hat U(T,T') \, \hat U(T',0) \> $ with $ 0 < \Omega (T-T'), \Omega T' < \pi \> $
(see also Problem 9 in Ref. \cite{engpath}).
} that the matrix element of the time-evolution operator for a non-relativistic particle in a harmonic potential
\begin{equation}
V^{\rm h.o.} (x) \> = \> \frac{m}{2} \Omega^2 x^2 \>
\end{equation}
as given by
\begin{equation}
\left\langle \, x_b \lvl \hat U^{\rm h.o.}(t_b,t_a) \rvl x_a \, \right\rangle \> = \> F^{\rm h.o.}(T = t_b - t_a) \, \cdot \, \exp \lsp i S_{\rm class}^{\rm h.o.}(x_b,x_a;T) \rsp
\label{U h.o.}
\end{equation}
acquires an
additional phase each time the particle passes through a {\it focal point} at $ \> \Omega T = n \pi \> $ with $ \> n = 1,2 \ldots $.
In Eq. \eqref{U h.o.} $ S_{\rm class}^{\rm h.o.}(x_b,x_a;T) $ is the classical action of the harmonic oscillator (for an explicit expression see, e.g. eq. (1.64) in Ref. \cite{engpath}) and
\begin{equation}
F^{\rm h.o.}(T) \> = \> \lrp \frac{m \, \Omega}{2 \pi i \lvl \sin(\Omega T) \rvl} \rrp^{d/2} \> e^{- \frac{1}{2} i \pi \, d \, n_{\rm M}} \> , \quad
n_{\rm M} \> = \> \sum_{n=1}^{\infty} \Theta ( \Omega T - n \pi )
\label{F exact}
\end{equation}
the prefactor in $ d $ space dimensions which is due to the quantum fluctuations ( $ \Theta(x) $ is the step or Heaviside function and a system of units is used in which $ \> \hbar = 1 \> $).
Obviously, the prefactor diverges at
these focal points (creating the so-called caustics)
and the particle starts its quantum-mechanical propagation anew (like "a phoenix rising from the ashes")
but with an additional phase $ - d \pi/2 $ as the sole remnant of its previous history.
\vspace{0.1cm}
It should be emphasized that the Maslov correction is a genuine quantum-mechanical phenomenon occurring only in real time: The euclidean
version (for example the partition function) does not display it \footnote{It is unclear to me
how an analytic continuation from euclidean to real time will generate the Maslov phase. According to the
Osterwalder-Schrader reconstruction theorem \cite{OSchr}, extensively used in Quantum Field Theory for justifying Wick
rotations, this should be possible.}.
The occurrence of the Maslov correction is easily seen in the Fourier path integral for the harmonic oscillator propagator, see e.g. ch. 1.2 in Ref. \cite{engpath}.
In the mathematical literature this is a well-known result: Nagano and Miyazaki \cite{NagaMiya1}
refer to several textbooks and
cite an article by H\"ormander from 1971 as earliest reference \footnote{Thus calling the
correction the "Maslov phase" seems to be not fully correct but only underlines the saying:
{\it Most named effects in physics are not named after the first discoverer...}}. In particular, their Proposition 2.5 (for the particular case $\xi = 0 $)
\begin{equation}
\frac{1}{(2 \pi)^{N/2}} \int d^Nx \, \exp \lsp \frac{i}{2} \sum_{j,k=1}^N x_j \, {\cal A}_{j k} \, x_k
\rsp
\> = \> \frac{\exp \lrp i \frac{\pi}{4} \, {\rm sgn} {\cal A} \rrp}{|\det {\cal A}|^{1/2}}
\label{Hoermander}
\end{equation}
is the generalization of Eq. \eqref{GF}. Here
\begin{equation}
{\rm sgn}\, {\cal A} \> = \> n_+ - n_-
\end{equation}
denotes the {\it signature} (Ref. \cite{HoJo}, p. 221) of the real, symmetric, non-singular matrix $ {\cal A} $, i.e.
the difference of positive and negative eigenvalues. Using $ n_+ + n_- = N $ the phase factor in Eq. \eqref{Hoermander} thus is
\begin{equation}
\exp \lrp i \frac{\pi}{4} N \rrp \, \cdot \, \exp \lrp -i \frac{\pi}{2} n_- \rrp \> .
\end{equation}
While the first factor also shows up in the free propagator,
the second factor obviously describes the effect of negative eigenvalues, i.e. the Maslov phase.
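This counting is easily automated. A small sketch (merely an illustration of the signature rule; the tridiagonal matrix in the comment anticipates, up to an irrelevant positive factor, the time-sliced oscillator considered below):
\begin{verbatim}
def phase_from_signature(A):
    # exp(i*pi/4*sgn A) with sgn A = n_+ - n_- for a real,
    # symmetric, non-singular matrix A
    ev = np.linalg.eigvalsh(A)
    sgn = np.count_nonzero(ev > 0) - np.count_nonzero(ev < 0)
    return np.exp(0.25j*np.pi*sgn)

# time-sliced harmonic oscillator (see below):
# A ~ 2*xi*np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
\end{verbatim}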
\subsection{The Maslov correction in the time-sliced path integral}
I will be using the usual time-slicing method for evaluating the path integral
in $ d = 1 $ dimensions (see, e.g. Ref.\cite{engpath}, section I.2) but with $ N + 1 $ intervals
to simplify the notation. For accelerating the convergence of the discretized version to the continuum one, I will utilize the symmetric form of the Trotter product formula eq. (1.33)
\begin{equation}
\exp \left [ -i \Delta T ( \hat T + \hat V) \right ]
\> = \exp \left [ -i \Delta T \hat V / 2 \right ] \> \cdot \>
\exp \left [-i \Delta T \hat T \right ] \> \cdot \>
\exp \left [ -i \Delta T \hat V / 2 \right ] \> + {\cal O}((\Delta T)^3)
\> ,
\label{kurz Zeit sym}
\end{equation}
applied to kinetic ($\hat T$) and potential ($\hat V$) energy operator.
This is correct up to order
$ (\Delta T)^2 $ where $\Delta T = (t_b - t_a)/(N+1) $ is the time-step of the discretized version \footnote{Expressions which are correct to arbitrarily high orders have been investigated in
Refs. \cite{Serbia 1}, \cite{Serbia 2} in terms of increasingly higher derivatives of the potential. In lattice field theories this amounts to constructing "improved actions", an example of these is given in Ref. \cite{BorRo}.}.
This slightly modifies eq. (1.28) in that reference
to
\begin{eqnarray}
\hspace{-1cm} U(x_b, t_b; x_a, t_a) &=& \lim_{N \to \infty} \>
\left (\frac{m}{2 \pi i \Delta T} \right )^{(N+1)/2}
\int_{-\infty}^{+\infty} dx_1 \> dx_2 \> ... \> dx_N \nonumber \\
&& \cdot \exp \left \{ i \Delta T \sum_{j=1}^{N+1} \left [
\frac{m}{2} \left ( \frac{x_{j} - x_{j-1}}{\Delta T} \right )^2 - \frac{1}{2}
V(x_j) - \frac{1}{2} V(x_{j-1}) \right ] \> \right \}
\label{Lagrange exterior}
\end{eqnarray}
where $ x_0 = x_a $ and $ x_{N+1} = x_b $ are fixed. Eq. \eqref{Lagrange exterior} thus is based on an "exterior" average $ \> \frac{ V(x_j) + V(x_{j-1})}{2} \> $ for the potential whereas the usual "midpoint rule" with
$ \> V \lrp \frac{x_j + x_{j-1}}{2} \rrp \> $ is an "interior" average. In the continuum limit
$ \> \Delta T \to 0 , N \to \infty \> $ there is, of course, no difference between these distinct discretizations, but for finite $ \Delta T $ there is. For simplicity, in the present note, I will use Eq. \eqref{Lagrange exterior}.
\vspace{0.2cm}
The prefactor $ F(T) $ for the harmonic oscillator potential is thus obtained by putting
$ x_a = x_b = 0 $ so that the classical action vanishes as well as terms with
$ V^{\rm h. o.}(0) = 0 $.
For $ N \ge 1 $ this gives
\begin{equation}
F^{\rm h. o.} (T) \> = \> U(0, T; 0, 0) \> = \> \lim_{N \to \infty} \, F_N^{\rm h.o.}(T)
\end{equation}
with
\begin{eqnarray}
F_N^{\rm h.o.}(T) &=& \left (\frac{m}{2 \pi i \Delta T } \right )^{\frac{N+1}{2}}
\int_{-\infty}^{+\infty} \! \! dx_1 \> dx_2 ... dx_N \,
\exp \Bigg \{ \frac{i m}{2 \Delta T} \Bigg [ x_N^2 + x_1^2
+ \sum_{j=2}^N \big (x_j - x_{j-1} \big)^2 - (\Omega \Delta T)^2 \sum_{j=1}^N x_j^2 \Bigg ]
\Bigg \} \nonumber \\
&=& \left (\frac{m}{2 \pi i \Delta T } \right )^{(N+1)/2}
\int_{-\infty}^{+\infty} dx_1 \> dx_2 \> ... \> dx_N
\> \exp \left \{ \frac{i m}{\Delta T} \left [ \xi \sum_{j=1}^N x_j^2 -
\sum_{j=2}^N x_j \cdot x_{j-1} \right ] \> \right \}
\label{pref F_N}
\end{eqnarray}
where I have defined
\begin{equation}
\xi \> := \> 1 - \frac{1}{2} (\Delta T)^2 \Omega^2 \> = \> 1 - \frac{1}{2} \, \frac{\Omega^2 T^2}{(N+1)^2}
\label{def xi}
\end{equation}
and an empty sum is to be taken as zero.
Since these are all Gauss-Fresnel integrals they can be evaluated analytically also for finite
$ N $ as demonstrated in Appendix A.
Let us write the prefactor as in eq. (1.89) of Ref. \cite{engpath}
\begin{equation}
F_N^{\rm h.o.} \> =: \> \lrp \frac{m}{2 \pi i} \frac{1}{|f_N (\Omega T)|} \rrp^{1/2} \,
e^{i \Phi_N^{\rm Maslov}(\Omega T)} \> .
\label{prefactor square root}
\end{equation}
Then one finds for finite $ N $
\begin{equation}
\Omega \, f_N(\tau) \> = \> \frac{\tau}{N+1} \, U_N \lrp 1 - \frac{\tau^2}{2 (N+1)^2} \rrp
\> , \quad \Phi_N^{\rm Maslov}(\tau) \> = \> - \frac{\pi}{2} \sum_{n=1}^N \Theta \lsp \tau - 2 (N+1) \sin \lrp
\frac{n \pi}{2 (N+1)} \rrp \rsp
\label{f_N}
\end{equation}
where
\begin{equation}
\tau \> := \> \Omega T
\label{def tau}
\end{equation}
is the dimensionless time and $ U_N(z) $ the Chebyshev polynomial of the second kind. In Appendix A the continuum limit ( $ N \to \infty $, $ T $ fixed )
\begin{equation}
\Omega \, f_{\infty}(\tau) \> = \> \sin \tau \> , \quad \Phi_{\infty}^{\rm Maslov}(\tau) \> = \> - \frac{\pi}{2} \,
\sum_{n=1}^{\infty} \Theta \lrp \tau - n \pi \rrp
\label{exact no damp}
\end{equation}
is also studied in detail.
\vspace{0.5cm}
\subsection{Numerical implementation and results}
The challenge is now to evaluate Eq. \eqref{pref F_N} with a finite number of time slices, i.e. a finite
number $ N $ of intermediate integrals. To cope with the oscillating integrand I use
$N$-dimensional spherical coordinates
\begin{eqnarray}
x_1 &=& R \, \cos ( \phi_1 ) \nonumber \\
x_2 &=& R \, \sin ( \phi_1 ) \, \cos ( \phi_2 ) \nonumber \\
x_3 &=& R \, \sin ( \phi_1 ) \, \sin ( \phi_2 ) \,
\cos ( \phi_3 ) \nonumber \\
\vdots \nonumber \\
x_{N-1} &=& R \, \sin ( \phi_1) \ldots \sin ( \phi_{N-2} ) \,
\cos ( \phi_{N-1} ) \nonumber \\
x_N &=& R \, \sin ( \phi_1 ) \ldots \sin ( \phi_{N-2} ) \,
\sin ( \phi_{N-1} )
\end{eqnarray}
so that the most violent oscillations come from the infinite integral over the hyperradius as in
the Gauss-Fresnel integral. Whereas $ \> R \in [0,\infty) \> $,
the integration over the angles $ \phi_j $ is restricted:
\begin{equation}
\phi_1, \phi_2 \ldots \phi_{N-2} \in [0,\pi] \quad \mbox{but} \quad
\phi_{N-1} \in [0, 2 \pi ] \> .
\end{equation}
The volume element is
\begin{equation}
d^Nx \> = \> R^{N-1} \, \sin^{N-2} ( \phi_1 ) \, \sin^{N-3} ( \phi_2 )
\ldots
\sin ( \phi_{N-2} ) \> dR \, d\phi_1 \, d\phi_2 \ldots d\phi_{N-1} \> = \> R^{N-1} dR \> d\Omega_{N-1}
\> .
\end{equation}
Substituting $ \> R = \sqrt{y \Delta T/m} \> \> $
Eq. \eqref{pref F_N} then reads
\begin{equation}
F_N^{\rm h.o.}(\tau) \> = \> \sqrt{\frac{m}{8 \pi i \Delta T} } \, (2 \pi i)^{-N/2}
\, \int d\Omega_{N-1} \, \int_0^{\infty} dy \, y^{N/2 - 1} \>
\exp \lsp i \, \omega_N \lrp \phi_1 \ldots \phi_{N-1};\tau \rrp \, y \rsp
\label{FN hyper}
\end{equation}
where
\begin{equation}
\omega_N \lrp \phi_1 \ldots \phi_{N-1};\tau \rrp \> = \> \lrp \xi \sum_{j=1}^N x_j^2 - \sum_{j=2}^N
x_j \cdot x_{j-1} \rrp \Bigg / R^2 \> = \> \xi - \Big( \sum_{j=2}^N
x_j \cdot x_{j-1} \Big) \Big / R^2 \>.
\label{omega_N}
\end{equation}
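In a numerical implementation the Cartesian components and thereby $ \omega_N $ are conveniently generated from the angles by accumulating the sine products; a sketch (evaluated at $ R = 1 $, which suffices since $ \omega_N $ is homogeneous of degree zero in the $ x_j $; names are mine):
\begin{verbatim}
def omega_N(phis, xi):
    # x_1 = cos(phi_1), x_2 = sin(phi_1) cos(phi_2), ... on the unit sphere
    x = np.ones(len(phis) + 1)
    x[:-1] *= np.cos(phis)
    x[1:]  *= np.cumprod(np.sin(phis))
    return xi - np.dot(x[1:], x[:-1])   # xi - sum_j x_j x_{j-1}
\end{verbatim}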
By normalizing the prefactor to the free case, trivial complex factors
are eliminated. The free case is easily obtained by letting $ \> \Omega \to 0 \> $ in Eq. \eqref{f_N}
so that
$ f_N \> \to T \> , \Phi_N^{\rm Maslov} \> \to 0 \> $, i.e. $ F_N^{\rm free} = \lrp m/(2 \pi i T) \rrp^{1/2} $. Thus
\begin{eqnarray}
F_N^{\rm h.o.}(\tau) \big / F_N^{\rm free} &=& \frac{\sqrt{N+1}}{2} \, (2 \pi i)^{-N/2}
\, \int d\Omega_{N-1} \, \int_0^{\infty} dy \, y^{N/2 - 1} \>
\exp \lsp i \, \omega_N \lrp \phi_1 \ldots \phi_{N-1};\tau \rrp \, y \rsp \nonumber \\
&=& \sqrt{\frac{T}{|f_N(\tau)|}} \, e^{i \Phi_N^{\rm Maslov}(\tau)}
\label{FN hyper 2}
\end{eqnarray}
directly gives magnitude and phase of the prefactor. Note that the phase is determined only
modulo $ 2 \pi $, i.e.
\begin{equation}
\Phi_N^{\rm Maslov} \> = \> {\rm atan}2\lrp {\rm Im}\, F_N,{\rm Re}\, F_N \rrp + 2 \pi \, n \quad, \qquad
n \> = \> 0, \pm 1, \pm 2 \ldots
\label{Phi ambiguity}
\end{equation}
(see Eq. \eqref{atan2}). We are free to "align", i.e. choose $ n $, in order to directly compare with the analytic result \eqref{exact no damp} without changing the physics.
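In code the principal value of the phase is extracted with the two-argument arctangent; schematically (a sketch, not the production routine):
\begin{verbatim}
def maslov_from_ratio(F, n=0):
    # phase of the complex prefactor ratio, defined modulo 2*pi
    return np.arctan2(F.imag, F.real) + 2.0*np.pi*n
\end{verbatim}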
\vspace{0.2cm}
The strategy to evaluate Eq. \eqref{FN hyper 2} numerically is the following:
Perform the integration over the
hyperradius $ \> R \> $ by means of Ooura's integration formula for oscillatory integrals while
the integration over the angles is done by standard integration routines. This is summarized
in Table 1 which also lists the relevant accuracy parameters in these routines and their typical values.
\vspace{1cm}
\renewcommand{\baselinestretch}{0.9}
\small\normalsize
\begin{center}
\begin{table}[htb]
\small
\begin{tabular}{l|l|c|rl|c} \hline
& & & &
& \\
routine/method & application/feature& Ref. & accuracy & \qquad explanation
& typical \\
& & & parameters &
& value \\
& & & &
& \\ \hline
& & & &
& \\
{\small\bf Ooura} & double exponential & \cite{Ooura} & $k_{\max} \> $: & summation cut-off
& 50 -- 80 \\
& for oscill. integrand & & & in Eq. (2.1)
& \\
& deterministic & & $\epsilon_{\rm Ooura} \> $: & smallest weight (Eq.(B.6)) & $ 10^{-8}$ \\
& & & &
& \\\hline
& & & & & \\
{\small\bf Double Exponential} & general integrand & \cite{TaMori} & $k_{\rm max} \> $: & as in {\bf Ooura}
& $ 50 $ \\
& deterministic & & $\epsilon_{\rm DE} \> $: & smallest weight
& $ 10^{-8} $ \\
& & & &
& \\
{\small\bf Gauss-Chebyshev}& adaptive& \cite{PSM} & $m_{Cheby} \> $: & max. \# of function calls
& $ 511 $ \\
(modified for complex & deterministic & & $\epsilon_{\rm Cheby} \> $: & required
rel. accuracy & $10^{-3}$ \\
integrand) & & & & & \\
& & & &
& \\
{\small\bf VEGAS} & importance sampling & \cite{Vegas} & $n_{\rm call} \> $: & max. \# of function calls
& $10^5 $ -- $10^6$ \\
& Monte-Carlo & & $itmx \> $: & max. \# of iterations
& $10$ \\
& & & &
& \\ \hline
\end{tabular}
\vspace*{1cm}
\small\normalsize
\normalsize
{\bf Table 1}: Integration routines used (in order of appearance) with typical values of the corresponding accuracy \hspace*{1.5cm} parameters.
\label{table:1}
\end{table}
\end{center}
\renewcommand{\baselinestretch}{1.2}
\refstepcounter{abb}
\begin{figure}[hbt]
\vspace*{-1cm}
\begin{center}
\includegraphics[angle=0,scale=0.5]{figure_2}
\label{abb:2}
\end{center}
{\bf Fig. \arabic{abb}} : {\small Upper panel: Relative complex deviation (as defined in Eq.
\eqref{rel complex dev})
of the numerical result for the prefactor of the harmonic oscillator propagator with $ \> N = 1 \> $
(i.e. 2 time slices in the path integral) from the exact $(N = 1)$-result. In the lower panel the resulting additional {\it Maslov} phase $ \> \Phi_1 \> $
is plotted as a function of the dimensionless time
$ \tau = \Omega T $ where $\> \Omega \> $ denotes the oscillator frequency.
The integral
\eqref{FN hyper 2} was evaluated numerically with Ooura's formula \eqref{Ooura form2} and is compared with the exact $(N=1)$-result (black line) which has
one focal point at $ \> \tau = \sqrt{8} = 2.828... \> $.}
\end{figure}
\vspace{0.2cm}
\noindent
\normalsize
Let us start discussing the numerical results with the simplest (but already non-trivial) case
$ \> N = 1 \>$, i.e. 2 time slices for the discretized path integral and thus no integral over
a hyperspherical angle.
Fig. \ref{abb:2} shows the numerical result when evaluating Eq. \eqref{FN hyper} for
$ \> N = 1 \> $, i.e.
\begin{equation}
F_1^{\rm h.o.}(\tau) \> = \> \sqrt{\frac{m}{8 \pi i \Delta T}} \, \frac{S_0}{\sqrt{2 \pi i} } \, \int_0^{\infty} dy \,
y^{-1/2} e^{i \omega_1 y } \> = \> \sqrt{\frac{m}{2 \pi i \Delta T}} \, \frac{1}{\sqrt{2 \lrp \xi + i 0^+ \rrp}}
\end{equation}
since $ \> S_0 = 2 \> $ (see Eq. \eqref{S_(N-1)}) and $ \> \omega_1 \equiv \xi = 1 - \tau^2/8 \> \> $ (see Eqs. \eqref{omega_N}, \eqref{def xi}).
The extra Maslov phase of $ \> - \pi/2 \> $ acquired when passing the focal point at
$ \> \xi = 0 \> $, i.e. $ \> \tau \equiv \Omega T = \sqrt{8} \> $ is clearly seen
\footnote{For better readability phases are plotted in degrees here and in the following figures.} and very accurately
reproduced by Ooura's integration routine.
\newpage
\vspace{1cm}
Unfortunately, this does not hold for $ \> N = 2 \> $ as Fig. \ref{abb:3} shows where
the subsequent integration over the angle $ \phi_1 $ is performed by the double exponential method:
even by vastly increasing the number of function calls the relative complex deviation from the
exact result
\begin{equation}
F_2^{\rm h.o.}(\tau) \> = \> \sqrt{\frac{m}{8 \pi i \Delta T}} \frac{1}{2 \pi i} \, \int_0^{2 \pi} d\phi_1 \, \int_0^{\infty} dy \, e^{i \omega_2(\phi_1)y}
\> = \> \sqrt{\frac{m}{8 \pi i \Delta T}} \frac{1}{2 \pi} \, \int_0^{2 \pi} d\phi_1 \, \frac{1}{\omega_2(\phi_1) + i 0^+}
\end{equation}
remains large ($ {\cal O}(1) $) near the two focal points. This may be due to the distribution
\begin{equation}
\frac{1}{\omega_2(\phi_1) + i 0^+} \> = \> \frac{1}{\xi - \sin \phi_1 \cos \phi_1 + i 0^+} \> = \>
{\cal P} \frac{1}{\xi - \sin \phi_1 \cos \phi_1} - i \pi \, \delta \lrp \xi - \sin \phi_1 \cos \phi_1 \rrp
\label{distri}
\end{equation}
which Ooura's routine tries to mimic with a finite number $ 2 k_{\rm max} + 1 $ of terms but which the subsequent double exponential integration rule cannot handle properly.
\refstepcounter{abb}
\begin{figure}[htbp]
\vspace*{-3.8cm}
\begin{center}
\includegraphics[angle=0,scale=0.5]{figure_3}
\label{abb:3}
\end{center}
{\bf Fig. \arabic{abb}} : {\small Prefactor of the harmonic oscillator propagator as function of the dimensionless time for the
discretized version of the path integral with $ N = 2 $ (i.e. 3 time slices).
The lower panel depicts the Maslov phase obtained when passing the two focal points
(solid line: exact, points: numerical result). The upper panel shows the relative complex deviation of the numerical result from the exact one.
The integration over the hyperradius has been performed with
Ooura's rule (with standard accuracy parameters) whereas the integration over the (only) one angle was done
with the original double exponential (DE) rule of Ref. \cite{TaMori} choosing the same $\epsilon$-parameter as
for the oscillatory integral. Blue or green points have been obtained with
$ \> k_{\rm max}^{\rm DE} = 50 \> {\rm or} \> 500 $. }
\end{figure}
\vspace*{0.2cm}
As an ad-hoc way out of this dilemma one may introduce an overall damping factor
\begin{equation}
\boxed{
D_{\eta}(y) \> := \> e^{- \eta \, y } \> , \quad \eta > 0 \quad {\rm "small"}
\label{damping}
}
\end{equation}
into the $y$-integral. As shown in Fig. \ref{abb:4} this works for
a damping parameter $ \> \eta = 0.001 \> $ both for the standard double exponential integration routine
and for a modified adaptive Gauss-Chebyshev integration method.
\vspace{0.5cm}
\noindent
Indeed,
from Eq. \eqref{FN hyper} it is seen that introducing the {\it ad hoc}-damping factor \eqref{damping} amounts to replacing
\begin{equation}
\xi \quad \longrightarrow \quad \xi + i \eta
\end{equation}
or equivalently to shifting the focal points slightly into the complex plane. Then Eq. \eqref{distri} changes into
\begin{equation}
\frac{1}{\omega_2(\phi_1) + i 0^+} \> \longrightarrow \> \frac{1}{\omega_2(\phi_1) + i \eta} \> = \>
\frac{\omega_2(\phi_1) }{\omega_2^2(\phi_1) + \eta^2} - i \frac{\eta}{\omega_2^2(\phi_1) + \eta^2}
\end{equation}
which is finite and a well-known representation of these distributions for $ \> \eta \to 0 \> $.
The exact value of the harmonic oscillator prefactor with damping is worked out in Appendix A, Eqs. \eqref{phase with eta}, \eqref{modulus with eta}.
\refstepcounter{abb}
\begin{figure}[htbp]
\vspace*{-3cm}
\begin{center}
\includegraphics[angle=0,scale=0.4]{figure_4}
\label{abb:4}
\end{center}
\vspace*{0.5cm}
{\bf Fig. \arabic{abb}} : {\small Same as in Fig. 3 but now with an additional damping $ \exp(-\eta y ) $ and $ \eta = 0.001 $. An adaptive
Gauss-Chebyshev rule \cite{PSM} was used for the integration over the single angle with the maximal number of function calls
$ m_{\rm Cheby} = 1023 $ (blue points), $ 4095 $ (green points), $ 8191 $ (red points).}
\end{figure}
\refstepcounter{abb}
\begin{figure}[htbp]
\vspace*{-1.5cm}
\begin{center}
\includegraphics[angle=0,scale=0.5]{figure_5c}
\label{abb:5}
\end{center}
\vspace*{0.5cm}
{\bf Fig. \arabic{abb}} : {\small Same as in Fig. 4 but for $ \> N = 3 $ and damping $\eta = 0.01 $. Results with the adaptive
Gauss-Chebyshev rule for the integration over the two angles (maximal number of function calls
$ m_{\rm Cheby} = 511 $, required relative accuracy $10^{-5}$ ) are compared with the ones
from the Monte-Carlo routine VEGAS with $n_{\rm call} = 2.5 \cdot 10^4 $ function calls.
Open circles indicate values where the $ n = - 1 $ alignment \eqref{Phi ambiguity} has been applied.}
\end{figure}
\vspace{0.1cm}
As shown in Fig. 5 the
adaptive integration routine \cite{PSM} (slightly modified to allow complex integrands and relative complex error to be achieved) and the classic VEGAS program \cite{Vegas} (applied to
real and imaginary parts separately)
give nearly identical results when the same number of function calls is used. The adaptive routine does a little bit better away from the focal points but not in their vicinity. The advantage of the Monte-Carlo
evaluation is that it also provides error estimates for the real and imaginary part of the integral which -- by error propagation -- allow an estimate of the error in phase and magnitude of the path integral prefactor. However, due to the delicate integrand these estimates typically are too small by a factor of two or more.
\vspace{0.1cm}
Table 2 lists the ($N=3$) Gauss-Chebyshev results depicted in Fig. 5. The
complex harmonic oscillator prefactor starts in the 4th quadrant and moves clockwise with increasing
time. After passing through 3 focal points the accumulated Maslov phase is $\> < - \pi \> $. The numerical result then enters the second quadrant which may be interpreted as a positive phase. Choosing $ n = - 1 $ in Eq. \eqref{Phi ambiguity} "aligns" it with the analytic result and allows a meaningful comparison. As a well-defined Fresnel integral the numerical result for the prefactor -- as given in the second column of Table 2 --
is, of course, unambiguous and does not depend on the chosen branch of a multi-valued function as discussed in Ref. \cite{Vivo}.
\clearpage
\begin{center}
\begin{table}[hbt]
\begin{tabular}{r|c|c|r} \hline
& & & \\
$\tau $ \hspace*{1cm} & \hspace*{2cm}$F_3^{\rm h.o.}/F_3^{\rm free}$ \hspace*{2cm}& \qquad $\delta_{\rm rel}[F_3^{\rm h.o.}]$ \hspace*{1.5cm} & \hspace*{0.5cm}$\Phi_3^{\rm Maslov}$
[degrees] \hspace*{0.5cm} \\
& & & \\ \hline
& & & \\
0.0 \hspace*{1cm} & $ +0.999 - 0.025 \> i $ & $ 1.05 \cdot 10^{-6} $ &
{\footnotesize $(n=0)$} \qquad - 1.4 \quad \\
0.5 \hspace*{1cm} & $ +1.019 - 0.026 \> i $ & $ 1.22 \cdot 10^{-6} $ & - 1.5 \quad \\
1.0 \hspace*{1cm} & $ +1.084 - 0.029 \> i $ & $ 2.01 \cdot 10^{-6} $ & - 1.6 \quad \\
1.5 \hspace*{1cm} & $ +1.214 - 0.037 \> i $ & $ 4.90 \cdot 10^{-6} $ & - 1.8 \quad \\
2.0 \hspace*{1cm} & $ +1.464 - 0.057 \> i $ & $ 2.03 \cdot 10^{-5} $ & - 2.2 \quad \\
2.5 \hspace*{1cm} & $ +2.043 - 0.124 \> i $ & $ 5.98 \cdot 10^{-14}$ & - 3.5 \quad \\
3.0 \hspace*{1cm} & $ +5.266 - 2.014 \> i $ & $ 2.74 \cdot 10^{-6} $ & -20.9 \quad \\
3.5 \hspace*{1cm} & $ +0.079 - 2.563 \> i $ & $ 1.87 \cdot 10^{-2} $ & -88.2 \quad \\
4.0 \hspace*{1cm} & $ +0.094 - 2.128 \> i $ & $ 7.47 \cdot 10^{-2} $ & -87.5 \quad \\
4.5 \hspace*{1cm} & $ +0.040 - 1.917 \> i $ & $ 2.51 \cdot 10^{-2} $ & -88.8 \quad \\
5.0 \hspace*{1cm} & $ +0.047 - 2.029 \> i $ & $ 1.04 \cdot 10^{-1} $ & -88.7 \quad \\
5.5 \hspace*{1cm} & $ -0.475 - 4.235 \> i $ & $ 2.25 \cdot 10^{-2} $ & -96.4 \quad \\
6.0 \hspace*{1cm} & $ -2.797 - 0.085 \> i $ & $ 2.54 \cdot 10^{-2} $ & -178.2 \quad \\
6.5 \hspace*{1cm} & $ -2.016 - 0.113 \> i $ & $ 5.18 \cdot 10^{-2} $ & -176.8 \quad \\
7.0 \hspace*{1cm} & $ -2.038 + 0.020 \> i $ & $ 1.94 \cdot 10^{-2} $ & +179.4 $ \> $
$\stackrel{n=-1}{\longrightarrow}$ \quad -180.6 \quad \\
7.5 \hspace*{1cm} & $-0.316 + 2.935 \> i $ & $ 1.59 \cdot 10^{-11} $ & +96.2 \quad $\longrightarrow$ \quad -263.8 \quad \\
8.0 \hspace*{1cm} & $ -0.025 + 0.999 \> i $ & $ 1.05 \cdot 10^{-6} $ & +91.4 \quad $\longrightarrow$ \quad -268.6 \quad \\
8.5 \hspace*{1cm} & $ -0.009 + 0.606 \> i$ & $ 1.62 \cdot 10^{-8} $ & +90.9 \quad $\longrightarrow$ \quad-269.1 \quad\\
9.0 \hspace*{1cm} & $ -0.005 + 0.421 \> i$ & $ 7.55 \cdot 10^{-10} $ & +90.7 \quad $\longrightarrow$ \quad -269.3 \quad \\
9.5 \hspace*{1cm} & $ -0.003 + 0.312 \> i$ & $ 6.53 \cdot 10^{-11} $ &+90.5 \quad $\longrightarrow$ \quad -269.5 \quad \\
10.0 \hspace*{1cm} & $ -0.002 + 0.242 \> i$ & $ 8.54 \cdot 10^{-12} $ & +90.4 \quad $\longrightarrow$ \quad -269.6 \quad \\
& & & \\ \hline
\end{tabular}
\vspace{0.8cm}
{\bf Table 2}:
{\small The (complex) harmonic oscillator prefactor (normalized to the free case) for $ N = 3 $ obtained numerically with Ooura's and Gauss-Chebyshev integration routines (damping $ \eta = 0.01 $ ) as function of the dimensionless time $ \tau = \Omega T $. With no damping the focal points occur at $ \tau = 3.061, 5.656 , 7.391 $ (see Eq. \eqref{focalN3})
where nearly step-like changes of the phase can be seen in Fig. 5.
The third column gives the relative complex deviation of the numerical result from the exact value.
In the last column the calculated Maslov phase is listed together with the "alignment"
(addition of $ 2 \pi \, n , \> \> n = - 1 $ , see Eq. \eqref{Phi ambiguity} )
when the prefactor enters the second quadrant in the complex plane at $ \tau \simeq 7 $.
}
\end{table}
\end{center}
For larger values of $ N $ deterministic integration routines become inefficient (the infamous "curse of dimensions") so that only stochastic methods remain.
Figs. 6 and 7 demonstrate that the VEGAS Monte-Carlo method still works for $ N = 5 $ and
$ N = 8 $
although larger values of the damping parameter $ \eta $ are required to get stable results.
Longer Monte-Carlo runs (one data point with $10^6$ function calls in Fig. 7 took about 25 minutes on a standard 2.5 GHz PC) are also feasible to improve the statistics.
\refstepcounter{abb}
\begin{figure}[htbp]
\vspace*{-4cm}
\begin{center}
\includegraphics[angle=0,scale=0.5]{figure_6C}
\label{abb:6}
\end{center}
{\bf Fig. \arabic{abb}} : {\small Same as in Fig. \ref{abb:5} but for $ \> N = 5 \> $ and
$ \> \eta = 0.02 \> $.}
\vspace*{1.5cm}
\end{figure}
\refstepcounter{abb}
\begin{figure}[htbp]
\vspace*{-1cm}
\begin{center}
\includegraphics[angle=0,scale=0.5]{figure_8D}
\label{abb:7}
\end{center}
\vspace*{0.3cm}
{\bf Fig. \arabic{abb}} : {\small Same as in Fig. \ref{abb:5} but for $ \> N = 8 \> $ and
$ \> \eta = 0.08 \> $. For the last 3 data points (indicated by crosses) an alignment
\eqref{Phi ambiguity} with $ n = - 2 $ was applied.}
\end{figure}
\clearpage
\section{Summary and Outlook}
Real-time path integrals for dynamic quantum processes are a numerical challenge as
one has to steer between conflicting requirements: on the one hand
the time-step $ \> \Delta T \> $ has to be small to reach the continuum limit while, on the other hand, the dimension $ \> N \> $ of the integrals has to be large enough to capture the relevant time scale $ \> T = (N + 1) \Delta T \> $.
While this can be handled in imaginary time and is widely used to obtain information on static properties
time-dependent processes like scattering require (functional) integration
over rapidly oscillating functions.
\vspace{0.1cm}
As prototype for these challenges
I have evaluated numerically Gauss-Fresnel oscillatory integrals of dimension up
to $ \> N = 20 \> $ , i.e. basically the free particle propagator.
The key to success was Ooura's double exponential integration method \cite{Ooura}
combined with the use of hyperspherical co-ordinates to isolate the most rapidly
oscillating degree of freedom.
\vspace{0.05cm}
In a second application these tools made it possible to calculate numerically
the prefactor in the harmonic oscillator propagator and thereby the Maslov phase which
emerges each time when the quantum particle passes through a (singular) focal point.
In the discretized version
of the path integral the number of focal points equals the dimension of the oscillatory integral
over hyperradius and angles which in the present work went up to $ \> N = 8 \> $.
Unfortunately these singularities
required an additional small damping factor in order to obtain stable results.
Nevertheless this may be considered as an encouraging
step to evaluate real-time path integrals directly, e.g.\ for scattering in a finite-range
potential \cite{Ros1}, \cite{Ros2}. This is due to several reasons: First, the unwanted damping may be dealt with by an extrapolation of the results to zero damping
similar to the extrapolation to small quark (or pion) mass in lattice gauge
theories \cite{Rothe}. In addition, one may expect that "caustic" singularities
will be avoided or alleviated in exact path integral formulations for the $T$-matrix
which go beyond the semiclassical approximation. Also the peculiar properties of the harmonic
potential will be absent in short-range interactions.
While it seems that $ \> N = 8 \> $ is a far cry from the true continuum limit ( $ N \to \infty $ ) of functional integrals, improved effective actions \cite{Serbia 1}
may allow larger time steps and thus fewer time slices. Whether this leads to a reliable
numerical evaluation of functional integrals for scattering requires further investigation.
\vspace{7cm}
\noindent
{\bf Acknowledgement:} I would like to thank Matthias who enabled me to perform the numerical calculations on my home computer and Michael Spira for supplying me with his version of the VEGAS program and for the hospitality in the PSI Particle Theory Group.
\newpage
\begin{center}
{\Large\bf Appendix}
\end{center}
\renewcommand{\thesection}{\Alph{section}.}
\renewcommand{\theequation}{\thesection\arabic{equation}}
\setcounter{equation}{0}
\setcounter{section}{0}
\vspace{0.2cm}
\section{Maslov phase in the discretized path integral}
Here I give the results for the prefactor in the discretized path integral for the harmonic oscillator, first for a few low-dimensional cases and then for an arbitrary number of time slices.
This then allows to study the continuum limit.
\vspace{0.2cm}
\noindent
\underline{\bf $ N = 1 $}:
\begin{equation}
F_1^{\rm h.o.}(T) \> = \> \frac{m}{2 \pi i \Delta T } \, \int_{-\infty}^{+\infty} dx_1 \>
\exp \lcp \frac{i m}{2 \Delta T} \lrp 2 - (\Delta T)^2 \Omega^2 \rrp x_1^2 \rcp
\> = \> \lrp \frac{m}{2 \pi i \Delta T} \rrp^{1/2} \, \frac{1}{\sqrt{2 \xi}}
\end{equation}
where $ \> \xi \> $ has been defined in Eq. \eqref{def xi}. It is seen that in this rough approximation for the path integral (just 2 time slices!) a focal point occurs at $ \xi_1 = 0 $ where the argument of the inverse square root turns negative.
Using $ \Delta T = T/2 $ this translates into
\begin{equation}
\Omega T \> = \> \sqrt 8 = 2.828... \> =: \> X_1^{(1)}
\end{equation}
compared to the exact continuum value of $ \pi = 3.141...$. Thus
\begin{equation}
\Omega \, f_1(\tau) \> = \> \tau \lrp 1 - \frac{\tau^2}{8} \rrp \> , \quad \Phi_1^{\rm Maslov}(\tau) \> = \>
- \frac{\pi}{2} \Theta \lrp \tau - X_1^{(1)} \rrp \> .
\end{equation}
\vspace{0.2cm}
\noindent
\underline{\bf $ N = 2 $}:
\begin{eqnarray}
F_2^{\rm h.o.}(T) &=& \lrp \frac{m}{2 \pi i \Delta T } \rrp^{3/2} \, \int_{-\infty}^{+\infty} dx_1 \, dx_2 \>
\exp \lcp \frac{i m}{2 \Delta T} \lsp x_1^2 + x_2^2 + \lrp x_2 - x_1 \rrp^2 - \Omega^2
(\Delta T)^2 \lrp x_1^2 + x_2^2 \rrp \rsp \rcp \nonumber \\
&=& \lrp \frac{m}{2 \pi i \Delta T} \rrp^{3/2} \, \int_{-\infty}^{+\infty} dx_1 \, dx_2
\exp \lcp \frac{i m}{\Delta T} \lsp \xi \lrp \underbrace{x_1 - x_2/(2 \xi)}_{\> =: \> x_1'} \rrp^2 +
\lrp \xi - \frac{1}{4 \xi} \rrp x_2^2 \rsp \rcp \> .
\end{eqnarray}
After shifting the integration variable $ \> x_1 \to x_1' \> $ one obtains
\begin{equation}
F_2^{\rm h.o.}(T) \> = \> \lrp \frac{m}{2 \pi i \Delta T} \rrp^{1/2} \, \frac{1}{\sqrt{\xi} }
\frac{1}{\sqrt{4 \xi - 1/\xi}} \> .
\label{F2}
\end{equation}
With $ \Delta T = T/3 $ one sees that in this approximation there are 2 focal points during the evolution of the harmonically bound particle: with increasing time
one at
\begin{equation}
\xi_1 \> = \> \frac{1}{2} \> \Longrightarrow \> \Omega T = 3 \> =: \> X_1^{(2)}
\end{equation}
(instead of the exact value $\pi $) and another one at
\begin{equation}
\xi_2 \> = \> -\frac{1}{2} \> \Longrightarrow \> \Omega T = 3 \sqrt 3 = 5.196... \> =: \> X_3^{(2)}
\end{equation}
(instead of the exact value $ 2 \pi = 6.283...$). Note that
\begin{equation}
\xi \> = \> 0 \> \Longrightarrow \> \Omega T = 3 \sqrt 2 = 4.242... \> =: \> X_2^{(2)}
\end{equation}
is {\it not} a focal point as the prefactor does not diverge here.
\noindent
Thus from Eq. \eqref{F2} one has
\begin{equation}
\Omega f_2(\tau) \> = \> \tau \lrp 1 - \frac{\tau^2}{9} \rrp \, \lrp 1 - \frac{\tau^2}{27} \rrp \> .
\end{equation}
Obviously the function $ 4 \xi - 1/\xi $ is negative for $ 0 < \xi < 1/2 $ and for $ \xi < -1/2 $.
With $ \xi = 1 - \tau^2/18 \> $ the $(N = 2)$- approximation to the path integral thus gives the following Maslov phase
\begin{eqnarray}
\Phi_2^{\rm Maslov}(\tau) &=& - \frac{\pi}{2} \, \lsp \Theta \lrp \tau - X_1^{(2)} \rrp \Theta \lrp
X_2^{(2)} - \tau \rrp + \Theta \lrp \tau - X_2^{(2)} \rrp
+ \Theta \lrp \tau - X_3^{(2)} \rrp \rsp \nonumber \\
&=& - \frac{\pi}{2} \, \sum_{n=1, n \neq 2}^3 \Theta \lrp \tau - X_n^{(2)} \rrp
\label{Maslov 2}
\end{eqnarray}
as function of $ \> \tau = \Omega T $. Note that naively combining the square roots in Eq. \eqref{F2}
into $ (4 \xi^2 - 1 )^{-1/2} $ (without considering their phases in the complex plane) would give the wrong result that the Maslov phase $ \Phi_2 $ would vanish for $ \xi < - 1/2 $, i.e.
$ \tau > 3 \sqrt 3 $. Note also that in the final result \eqref{Maslov 2} the point
$ \tau = X_2^{(2)} $
does not appear: only the true focal points {\it add} a phase $ - \pi/2 $ when the particle passes through it during its time evolution.
\vspace{0.2cm}
\noindent
\underline{\bf $ N = 3 $}:
\begin{eqnarray}
F_3^{\rm h.o.}(T) \!&=& \! \lrp \frac{m}{2 \pi i \Delta T } \rrp^2 \! \int_{-\infty}^{+\infty} \! \! dx_1 dx_2 dx_3
\exp \Bigg \{ \frac{i m}{2 \Delta T} \Big [ x_1^2+x_3^2+\big ( x_2 - x_1 \big)^2 +
\big( x_3 - x_2 \big)^2
- ( \Omega \Delta T)^2 \big ( x_1^2+x_2^2+x_3^2 \big ) \Big ] \Bigg \} \nonumber \\
\! &=& \! \lrp \frac{m}{2 \pi i \Delta T } \rrp^2 \! \int_{-\infty}^{+\infty} \! \! dx_1 dx_2 dx_3
\exp \Bigg \{ \frac{i m}{2 \Delta T} \Big [ 2 \xi \lrp \underbrace{x_1 - x_2/(2 \xi)}_{\> =: \> x_1'} \rrp^2 + \lrp 2 \xi - \frac{1}{\xi} \rrp x_2^2 + 2 \xi \lrp \underbrace{x_3 - x_2/(2 \xi)}_{\> =: \> x_3'} \rrp^2 \Big ] \Bigg \} \nonumber \\
&=& \lrp \frac{m}{2 \pi i \Delta T} \rrp^{1/2} \> \frac{1}{\sqrt{2 \xi}} \, \frac{1}{\sqrt{2 \xi - 1/\xi}} \, \frac{1}{\sqrt{2 \xi}} \> .
\end{eqnarray}
With $ \Delta T = T/4 $ one finds that there are 3 focal points at
\begin{eqnarray}
\xi_1 \> = \> \frac{1}{\sqrt{2}} && \Longrightarrow \> \Omega T = 4 \sqrt{2 -\sqrt 2} = 3.0614... \> =: \> X_1^{(3)} \nonumber \\
\xi_2 \> = \> 0 && \Longrightarrow \> \Omega T = 4 \sqrt 2 = 5.6568... \> =: \> X_2^{(3)} \quad \mbox{(here
a genuine focal point with multiplicity 2)} \nonumber \\
\xi_3 \> = \> - \frac{1}{\sqrt{2}} && \Longrightarrow \> \Omega T = 4 \sqrt{2 + \sqrt 2} = 7.3910... \> =: \> X_3^{(3)}
\label{focalN3}
\end{eqnarray}
instead of $ \Omega T = n \pi , n = 1, 2, 3 \ldots $
and therefore
\begin{eqnarray}
\Omega f_3(\tau) &=& \tau \, \lrp 1 - \frac{\tau^2}{32} \rrp \, \lrp 1 - \frac{\tau^2}{8} + \frac{\tau^4}{512} \rrp
\\
\Phi_3^{\rm Maslov}(\tau) &=& - \frac{\pi}{2} \, \lsp \Theta \lrp \tau - X_1^{(3)} \rrp \, \Theta \lrp X_2^{(3)} - \tau \rrp
+ 2 \, \Theta \lrp \tau - X_2^{(3)} \rrp + \Theta \lrp \tau - X_3^{(3)} \rrp \rsp \nonumber \\
&=& - \frac{\pi}{2} \sum_{n=1}^3 \Theta \lrp \tau - X_n^{(3)} \rrp
\end{eqnarray}
\vspace{0.2cm}
\noindent
\underline{\bf $ N $ arbitrary}:
\vspace{0.1cm}
\noindent
The standard method to calculate the prefactor for quadratic Lagrangians in the continuum limit
is the Gel'fand-Yaglom method (see, e.g. Ref. \cite{Schul}, ch. 6 or Ref. \cite{engpath}
ch. 1.3) which leads to a differential equation for $ f(T) $. The same method can also be used to evaluate $ f_N(\tau) $ for finite $ \Delta T $ by a recurrence relation:
Define $ p_0(\xi) = 1 \> , p_1(\xi) = 2 \, \xi \> $ and calculate
\begin{equation}
p_{j+1}(\xi) \> = \> 2 \xi \> p_j(\xi) - p_{j-1}(\xi) \> , \quad j = 1, 2 \ldots N-1 \> .
\label{recurrence}
\end{equation}
Then
\begin{equation}
\Omega f_N(\tau) \> = \> \frac{\tau}{N+1} \, p_N \lrp \xi = 1 - \frac{\tau^2}{2 (N+1)^2} \rrp \> .
\label{f_Ntau}
\end{equation}
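Numerically, Eqs. \eqref{recurrence} and \eqref{f_Ntau} amount to a two-term loop; a minimal sketch (names are mine):
\begin{verbatim}
def p_N(N, xi):
    # p_{j+1} = 2*xi*p_j - p_{j-1} with p_0 = 1, p_1 = 2*xi
    p_prev, p_cur = 1.0, 2.0*xi
    for _ in range(N - 1):
        p_prev, p_cur = p_cur, 2.0*xi*p_cur - p_prev
    return 1.0 if N == 0 else p_cur

def omega_f_N(N, tau):
    # Omega*f_N(tau) with xi = 1 - tau^2/(2 (N+1)^2)
    return tau/(N + 1)*p_N(N, 1.0 - tau**2/(2.0*(N + 1)**2))
\end{verbatim}
As a cross-check, $ p_N $ computed this way agrees with \texttt{scipy.special.eval\_chebyu(N, xi)}, in line with Eq. \eqref{pN} below.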
This recurrence relation may be solved either by standard methods (see, e.g. Ref. \cite{WikiRecurrence}) or by examination of the recurrence relations for the classical orthogonal
polynomials \footnote{See, e.g. Ref. \cite{Handbook}, eq. 22.7.4 for the Chebyshev polynomials of the first kind $ T_N(\xi) $ and eq. 22.7.5 for the ones of the second kind
$ U_N(\xi) $.}. Although both $ T_N(\xi) $ and $ U_N(\xi) $ obey the same recurrence relation,
only the Chebyshev polynomials of the second kind fulfill the initial condition $ p_1(\xi) = 2 \xi $ and therefore the final result is
\begin{equation}
p_N(\xi) \> = \> U_N(\xi) \quad \Longrightarrow \Omega \, f_N(\tau) \> = \> \frac{\tau}{N+1} \, U_N \lrp 1 - \frac{\tau^2}{2 (N+1)^2} \rrp \> .
\label{pN}
\end{equation}
With $ \> \> U_0(\xi) = 1, \>
U_1(\xi) = 2 \xi, \> U_2(\xi) = 4 \xi^2 - 1, \> U_3(\xi) = 8 \xi^3 - 4 \xi \> \> $
( Ref. \cite{GradRyz} eq. 8.943) the explicit results obtained above are reproduced.
Since (Ref. \cite{Handbook}, eq. 22.3.16)
\begin{equation}
U_N(\xi) \> = \> \frac{\sin \lrp (N+1) \arccos \xi \rrp}{\sin \lrp \arccos \xi \rrp}
\end{equation}
the zeroes of this function are real and given by the simple expression (Ref. \cite{Handbook}, eq. 22.16.5)
\begin{equation}
\xi_n^{(N)} \> = \> \cos \lrp \frac{n}{N+1} \pi \rrp \quad n = 1, 2 \ldots N \> .
\label{zeroes}
\end{equation}
Using the definition \eqref{def xi} this translates into the following formula for the (positive)
time to reach the $n^{\rm th}$ focal point
\begin{equation}
\Omega T_n \equiv X_n^{(N)} \> = \> 2 ( N + 1 ) \, \sin \lrp \frac{n}{2 (N+1)} \pi \rrp
\end{equation}
in which the correct continuum limit $ N \to \infty $ is evident.
Since the Chebyshev polynomial of the second kind $ U_N(\xi) $ is a polynomial of order $ N $
with real zeros $ \xi_n^{(N)} $ it may be written as
\begin{equation}
U_N(\xi) \> = \> C_N \, \prod_{n=1}^N \lrp \xi - \xi_n^{(N)} \rrp
\end{equation}
where the normalization factor $ C_N = 2^N $ is a consequence of the recurrence relation
\eqref{recurrence} which just gives this factor for the leading power of $ \xi \> $
(alternatively one can employ the explicit expression Eq. 22.3.7 in Ref. \cite{Handbook}).
Upon taking the inverse square root of $ p_N(\xi) $ for the prefactor (see Eqs. (\ref{prefactor square root}, \ref{f_Ntau}))
each factor contributes a phase of $ - \pi/2 $ whenever it becomes negative, i.e.
$ \xi \, < \xi_n^{(N)} $.
Thus
\begin{equation}
\Phi_N^{\rm Maslov}(\xi) \> = \> - \frac{\pi}{2} \, \sum_{n=1}^N \Theta \lrp \xi_n^{(N)} - \xi \rrp \> ,
\end{equation}
a discontinuous function of the time.
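Both the focal times and the resulting staircase are one-liners; a sketch consistent with the two preceding equations:
\begin{verbatim}
def focal_times(N):
    # X_n = 2 (N+1) sin(n*pi/(2 (N+1))),  n = 1 ... N
    n = np.arange(1, N + 1)
    return 2.0*(N + 1)*np.sin(n*np.pi/(2.0*(N + 1)))

def maslov_phase(N, tau):
    # -pi/2 times the number of focal points passed up to tau = Omega*T
    return -0.5*np.pi*np.count_nonzero(tau > focal_times(N))
\end{verbatim}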
The continuum limit for the function $ f_N(\tau) $ in Eq. \eqref{pN} is more involved but may be
derived by expressing it as a hypergeometric function
\begin{equation}
\Omega f_N(\tau) \> = \> \tau \> \cdot \> _2F_1(a,b;c;z) \> = \> \tau \, \cdot \sum_{n=0}^{\infty} \frac{(a)_n (b)_n}{(c)_n} \, \frac{z^n}{n!}
\end{equation}
with
\begin{equation}
a \> = \> - N \> , \quad b \> = \> N + 2 \>, \quad
c \> = \>\frac{3}{2} \>, \quad z \> = \> \frac{\tau^2}{4 (N+1)^2}
\end{equation}
by means of eq. 22.5.48 in Ref. \cite{Handbook}.
Thus
\begin{equation}
\Omega f_N(\tau) \> =: \> \sum_{n=0}^N d_n \, \tau^{2n+1} \> = \> \tau -
\frac{N^2+2N}{(N + 1)^2} \, \frac{\tau^3}{6} + \frac{(N-1)N(N+2)(N+3)}{(N+1)^4} \, \frac{\tau^5}{120} + \ldots \> .
\end{equation}
The first three expansion coefficients agree with the explicit calculations for $ N = 1, 2, 3 $ and tend to the first three coefficients in the series expansion of $ \> \sin \tau \> $.
From the series expansion of the hypergeometric function one deduces that the $n^{\rm th} $ term reads
\begin{equation}
d_n \> = \> \frac{ (-N)_n (N+2)_n}{ (3/2)_n} \, \frac{1}{n!} \lsp \frac{1}{4 (N+1)^2} \rsp^n \> .
\end{equation}
This can be evaluated by expressing the Pochhammer symbols as factorials
\begin{equation}
(-N)_n \> = \> (-)^n \, \frac{N!}{(N-n)!} \> , \quad (N+2)_n \> = \> \frac{(N+n+1)!}{(N+1)!} \> , \quad
\lrp \frac{3}{2} \rrp_n \> = \> \frac{(2 n + 1)!}{2^{2n} n!}
\end{equation}
and gives
\begin{equation}
d_n \> = \> \frac{(-)^n}{(2n + 1)!} \, \prod_{k=1}^{2n + 1} \lrp \frac{N-n+k}{N+1} \rrp \>
\stackrel{N \to \infty, \> n \> {\rm fixed}}{\longrightarrow} \> \frac{(-)^n}{(2n + 1)!} \quad \Longrightarrow \quad
\Omega f_N(\tau) \> \stackrel{N \to \infty}{\longrightarrow} \> \sin \tau \> .
\end{equation}
\vspace{0.4cm}
\noindent
\underline{\bf with damping}:
\vspace{0.1cm}
\noindent
If the damping factor \eqref{damping} is used to regulate the discrete path integral the variable $ \xi $ is replaced by
\begin{equation}
\xi \longrightarrow \bar \xi \> := \> \xi + i \eta
\end{equation}
which leads to the following modifications: Knowing that $ p_N(\xi) $ is a polynomial of degree $ N $ with
zeroes at $ \xi_n^{(N)} $ (see Eq. \eqref{zeroes}) one can immediately write
\begin{equation}
p_N(\bar \xi) \> = \> 2^N \, \prod_{n=1}^N \lrp \bar \xi - \xi_n^{(N)} \rrp \> = \>
2^N \, \prod_{n=1}^N \lrp \xi -\xi_n^{(N)} + i \eta \rrp
\end{equation}
As before this product formula can now be used to calculate the prefactor of the
damped harmonic oscillator path integral with each factor contributing separately to
the inverse (principal value) square root
\begin{equation}
F_N^{({\rm damped} \> {\rm h.o)}}(T;\eta) \> = \> \lrp \frac{m}{2 \pi i \Delta T} \rrp^{1/2} \, 2^{-N/2} \, \prod_{n=1}^N \Bigl [ \xi - \xi_n^{(N)} + i \eta \Bigr ]^{-1/2} \> , \quad \xi \> = \> 1 -
\frac{1}{2} \lrp \frac{\Omega T}{N+1} \rrp^2 \> .
\end{equation}
From this one can read off the exact Maslov phase as
\begin{equation}
\Phi_N^{\rm Maslov, damped}(\tau;\eta) \> = \> - \frac{1}{2} \sum_{n=1}^N \, {\rm arg} \lrp \xi - \xi_n^{(N)} + i \eta \rrp
\label{phase with eta}
\end{equation}
with the standard definitions for the argument of a complex quantity
\begin{subequations}
\begin{eqnarray}
{\rm arg}\, (z = x + i y) &=& \arctan \frac{y}{x} + \pi \, \Theta(-x) \, {\rm sgn} \, y \> ,
\quad - \pi < {\rm arg} \, z \le \pi \> , \> \> -\frac{\pi}{2} \le \arctan \frac{y}{x} \le
\frac{\pi}{2} \\
&\> \equiv \> & {\rm atan}2(y,x)
\label{atan2}
\end{eqnarray}
\end{subequations}
Note that the phase is now a continuous function of $ \xi $, i.e. of the time $\tau = \Omega T $.
The modulus of the prefactor is
\begin{equation}
\Bigl | F_N^{({\rm damped} \> {\rm h.o)}}(\xi;\eta) \Bigr | \> = \> \lrp \frac{m}{2 \pi \Delta T} \rrp^{1/2} \,
2^{-N/2} \prod_{n=1}^N \lsp \lrp \xi - \xi_n^{(N)}\rrp^2 + \eta^2 \rsp^{-1/4}\> .
\label{modulus with eta}
\end{equation}
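The two expressions \eqref{phase with eta} and \eqref{modulus with eta} can be evaluated in a few lines; a sketch (the common factor $ \lrp m/(2 \pi \Delta T) \rrp^{1/2} $ is stripped, names are mine):
\begin{verbatim}
def damped_prefactor(N, tau, eta):
    # exact Maslov phase and modulus of the damped prefactor
    xi = 1.0 - tau**2/(2.0*(N + 1)**2)
    xi_n = np.cos(np.arange(1, N + 1)*np.pi/(N + 1))   # zeroes xi_n^(N)
    z = xi - xi_n + 1j*eta
    phase = -0.5*np.sum(np.angle(z))           # np.angle = arg in (-pi, pi]
    modulus = 2.0**(-N/2)/np.sqrt(np.prod(np.abs(z)))
    return phase, modulus
\end{verbatim}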
\vspace{1cm}
\section{Some numerical details: Step size and accuracy}
\setcounter{equation}{0}
Inevitably the sum over $k$ in Eq. \eqref{Ooura form2} has to be restricted to a finite
number of terms: $ \> |k| \le k_{\rm max} \> $. Then we need to know which step size $ h $
one has to take to get results accurate up to a given precision. This can be determined by examining
the asymptotic behaviour of $ \phi(t= k h) \> $. Let us write
\begin{equation}
\phi(t) \> = \> t + t \frac{e^{-a(t)}}{1 - e^{-a(t)}} \> = \> t + t \frac{1}{e^{a(t)} - 1} \quad {\rm with} \quad a(t) \> := \> 2 t + \alpha \lrp 1 - e^{-t} \rrp + \beta \lrp e^t - 1 \rrp
\end{equation}
and consider first the limit $ \> t \to + \infty $.
Obviously
\begin{equation}
a(t) \> \stackrel{t \to +\infty}{\longrightarrow} \> \beta e^t + 2 t + \alpha - \beta + {\cal O}
\lrp e^{-t} \rrp
\end{equation}
grows exponentially
and the factor in the curly brackets in Eq. \eqref{Ooura form2} becomes
\begin{eqnarray}
\Big \{ \exp \lsp i \, {\rm sgn}(\omega)\, \frac{\pi}{h} \,
\phi(k h)\rsp - (-1)^k \Big \} &=& (-1)^k \, \lcp \exp \lsp i \, {\rm sgn}(\omega) \,
\frac{k \pi}{e^{a(k h)} - 1} \rsp - 1 \rcp \nonumber \\
& \stackrel{k h \to +\infty}{\longrightarrow} & (-1)^k \> \lcp i \, {\rm sgn}(\omega) \,
k \pi \, e^{-a(k h)}+ {\cal O} \lrp e^{-2a(k h)} \rrp \rcp
\label{small terms}
\end{eqnarray}
leading to a double exponential decay of terms with $ k \simeq k_{\rm max} $, as desired.
\vspace{0.3cm}
Requiring a suppression to size $ \epsilon_{\rm Ooura} $ we demand (cf. eq. (5.19) in Ref. \cite{num osc})
\begin{equation}
\exp \lsp - \beta e^{k_{\rm max} h} \rsp \le \epsilon_{\rm Ooura} \quad \Longrightarrow
h \> \ge \> \frac{1}{k_{\rm max}} \, \ln \lrp - \frac{1}{\beta} \ln \epsilon_{\rm Ooura} \rrp \>.
\label{eps beta}
\end{equation}
Since the step size $ h $ should be as small as possible this translates into an equality, i.e.
a determination of the
step size once the maximal number $ \> 2 k_{\rm max} + 1 \> $ of function calls and the accuracy parameter $ \epsilon_{\rm Ooura} $ are chosen.
For large negative values of $ t = k h $ one finds
\begin{equation}
\phi(t) \stackrel{t \to -\infty}{\longrightarrow} \exp \lrp - \alpha e^{|t|} + 2 t
+ {\cal O}(\ln |t|)\rrp \> \Longrightarrow \> \phi'(t) \> \longrightarrow \> \alpha \, \exp \lrp - \alpha e^{|t|} + {\cal O}(t, \ln |t|) \rrp \> .
\end{equation}
Since $ \phi'(k h ) $ is the weight in Ooura's integration rule the terms
with $ \> k \simeq -k_{\rm max} \> $ will be suppressed double exponentially if
\begin{equation}
\exp \lsp - \alpha e^{k_{\rm max} h} \rsp \le \epsilon_{\rm Ooura} \quad \Longrightarrow
h \ge \frac{1}{k_{\rm max}} \, \ln \lrp - \frac{1}{\alpha} \ln \epsilon_{\rm Ooura} \rrp \>.
\label{eps alpha}
\end{equation}
As $ \alpha < \beta $ this is a stronger requirement than the one in Eq. \eqref{eps beta}. Note
that Eq. \eqref{eps alpha} is an implicit equation for the step size as $ \alpha = \alpha(h) \> $
(see Eq. \eqref{alpha beta}). We solve it by a few iterations starting with
$ \alpha_{(0)} = \beta $ .
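A possible form of this iteration (a sketch; a handful of iterations suffices since the map converges quickly):
\begin{verbatim}
def ooura_stepsize(omega, kmax, eps=1.0e-8, iters=6):
    # solve h = ln(-ln(eps)/alpha(h))/kmax, starting from alpha_(0) = beta
    w, beta = abs(omega), 0.25
    alpha = beta
    for _ in range(iters):
        h = np.log(-np.log(eps)/alpha)/kmax
        alpha = beta*np.sqrt(4*w*h/(4*w*h + np.log(1 + np.pi/(w*h))))
    return h
\end{verbatim}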
\vspace{0.2cm}
Calculations in this work have been performed in double-precision arithmetic.
Nevertheless the numerical implementation requires some care when evaluating
small terms as
"smallness by subtraction" leads to a loss of accuracy. Here this is exacerbated by the fact that these terms are weighted by a (large) power of the hyperradius
which peaks at $ k \simeq k_{\rm max} $ although finally made small by the double exponential
decay. One could use a power series expansion as indicated in the last line of Eq. \eqref{small terms} but a simpler remedy is not to evaluate directly $ \> \exp(ix) - 1 \> $
for small $ x $ but instead $ \> \exp(ix) - 1 \> = \> 2 i \, \sin(x/2) \, \exp(ix/2) \> $,
i.e. making them "small by multiplication".
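In code this reads, schematically:
\begin{verbatim}
def cexpm1(x):
    # exp(i*x) - 1 = 2i sin(x/2) exp(i*x/2): the smallness of the result
    # comes from multiplication by sin(x/2), not from cancellation
    return 2j*np.sin(0.5*x)*np.exp(0.5j*x)
\end{verbatim}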
\newpage
|
2,877,628,088,389 | arxiv | \section{Introduction}
\label{introd}
The Casimir effect was predicted by H. B. Casimir in 1948 \cite{casimir:1948} and, although not with great precision, experimentally verified ten years later by M. J. Sparnaay \cite{Sparnaay:1958}. Since then it has been confirmed by several high precision experiments \cite{Bressi:2002fr, Lamoreaux:1996wh, PhysRevLett.81.5475, mohideen1998precision, MOSTEPANENKO2000, PhysRevA.78.020101, PhysRevA.81.052115}, making it currently one of the most interesting topics of research. The Casimir effect is a direct manifestation of the existence of quantum fluctuations of the vacuum and was noted to arise for the first time when considering two parallel conducting plates placed in the vacuum very close to each other, separated by a distance very small compared with the plates' dimensions. In this case, the theoretical prediction and experimental observation that the two plates attract each other \cite{casimir:1948} was credited neither to gravitational nor to electromagnetic forces, but to the modification of the quantum vacuum fluctuations of the electromagnetic field by the presence of the plates. The gravitational interaction between the plates is far too weak to be observed, while the electromagnetic interaction is absent since the plates are neutral.
Other quantum-relativistic fields, such as scalar and fermion fields, can also have the quantum fluctuations of their vacua modified by some sort of boundary condition, leading to a Casimir-like effect. The formal and standard approach to investigate the Casimir interactions is in the realm of quantum field theory, which is based on the assumption that the Lorentz symmetry is preserved. However, one may well assume other scenarios where the Lorentz symmetry is violated, which is normally the case in models that probe high-energy phenomena. This, in fact, has been done from both theoretical and experimental points of view. In the context of several of these Lorentz symmetry violation scenarios the spacetime becomes anisotropic in one (or more) direction, including time, and inevitably the quantum field whose modes propagate in it has its energy spectrum modified. Lorentz symmetry violation in string theory can be found in Ref. \cite{kostelecky:1989} and in low-energy scale scenarios in Refs. \cite{alexey:2002,carlson:2001,hewett:2001,bertolami:2003, kostelecky:2003,anchordoqui:2003,bertolami:1997, alfaro:2000,alfaro:2002}. In these contexts, the Casimir energy has been considered in Refs. \cite{Obousy:2008xi, Obousy:2008ca} and \cite{ Martin-Ruiz:2016ijc, Martin-Ruiz:2016lyy, Escobar:2020pes}, respectively. Therefore, given the great number of theoretical works, it is natural that the search for Lorentz symmetry violations also acquires experimental interest and, in this sense, the Casimir effect becomes an even more interesting topic to study since it can be related with Lorentz symmetry violation models.
Although the Casimir effect is more often calculated in terms of the zero-point energy of a quantized field, it can also be investigated by adopting the path integral formalism for quantum field theory developed by Jackiw \cite{jackiw:1686}, in which an effective potential, presented in terms of loop expansions, allows us to obtain the energy density as well as the generation of topological mass.\footnote{In fact both calculations are divergent. The standard procedure to renormalize them and obtain finite results is through the use of the Riemann zeta function.} Studies of radiative corrections to the Casimir energy were reported in Refs. \cite{wieczorek:1986,barone:2004,robaschik:1987} and in \cite{toms:1980:928,toms:1980:2805,toms:1980:334357}. In the latter, the generation of topological mass for a self-interacting massless scalar field obeying different boundary conditions on two large and parallel plates was also considered. Following the same line of investigation as in Ref. \cite{toms:1980:2805}, in the present work we study the loop expansions of the Casimir energy and the generation of topological mass for self-interacting massive and massless scalar fields subject to Dirichlet, Neumann and mixed boundary conditions, in the context of a CPT-even aether-type Lorentz symmetry violation model \cite{carroll:2008,chatrabhuti:2009,gomes:2010}.
This paper is organized as follows: In section \ref{casim_effect} we briefly describe the theoretical model that we want to investigate, which consists of a self-interacting massive scalar field in a CPT-even aether-type Lorentz symmetry violation approach. We then calculate the one- and two-loop radiative corrections to the Casimir energy and the generated topological mass, admitting that the scalar field obeys Dirichlet, Neumann and mixed boundary conditions on two large and parallel plates. Because these calculations are divergent, we adopt the Riemann zeta-function renormalization procedure to provide finite and well-defined results. Finally, in section \ref{concl} we present our conclusions. Throughout the paper we use natural units $\hbar = c = 1$ and metric signature $(-,+,+,+)$.
\section{Loops corrections and generation of topological mass}
\label{casim_effect}
\subsection{Theoretical Model}
We first introduce the aether-type Lorentz symmetry violation model that we want to consider to investigate the vacuum energy and the generation of topological mass. The model is composed of a self-interacting scalar field that presents a CPT-even, aether-like Lorentz violation term implemented by a direct coupling between the derivative of the field and an external constant $4$-vector (for a more detailed review see \cite{carroll:2008,gomes:2010}). The model is described by the action below,
\begin{equation}
\label{model_action}
\mathcal{S}(\phi) = \int_{\mathcal{M}} d^4x \mathcal{L}(x) \ ,
\end{equation}
where $\mathcal{M}$ is a flat manifold and $\mathcal{L}$ is a Lagrangian density, given by
\begin{equation}
\label{lagrang_density}
\mathcal{L}(x) = - \frac{1}{2} \big(\partial_{\mu}\phi \big)\big(\partial^{\mu}\phi \big) + \frac{1}{2} \chi \big(u \cdot \partial \phi \big)^2 - U(\phi) \ .
\end{equation}
In the above expression, the scalar field of mass $m$ is represented by $\phi(x)$. The $4$-vector $u^{\mu}$ is responsible for a privileged direction in spacetime, and the dimensionless parameter $\chi$, which codifies the Lorentz violation, is much smaller than unity. The last term on the r.h.s of \eqref{lagrang_density} is the classical potential $U(\phi)$, which for a massive, self-interacting $\lambda\phi^4$ theory is given by
\begin{equation}
\label{potencial_u}
U(\phi) = \frac{m^2\phi^2}{2} + \frac{\lambda \phi^4}{4!} + \frac{\phi^4}{4!}\delta_1 + \frac{\phi^2}{2}\delta_2 + \delta_3 \ ,
\end{equation}
where the parameters $\delta_1$, $\delta_2$ and $\delta_3$ correspond to the renormalization constants of the theory and will be determined later. Before we proceed, we want to make clear that the analysis developed in this paper will take into consideration two types of $4$-vector $u^{\mu}$: timelike and spacelike. The timelike case is represented by $u^t=(1,0,0,0)$, while the spacelike one is represented by $u^x=(0,1,0,0)$ if the privileged direction is along the $x$-axis, $u^y=(0,0,1,0)$ if it is along the $y$-axis and $u^z=(0,0,0,1)$ if it is along the $z$-axis.
In order to adopt the path integral approach described in detail in Ref. \cite{toms:1980:2805}, we need to allow the field $\phi(x)$ to fluctuate about a fixed background field, $\Phi$, with its quantum fluctuations represented by $\varphi$. Thus, after performing a Wick rotation $(t\rightarrow -it)$ in the Lorentzian action \eqref{model_action} and defining a Euclidean one, we can make use of the generating functional of the one-particle-irreducible Green functions \cite{toms:1980:2805}. This provides a description in terms of a $\Phi$-dependent effective potential which, up to two-loop corrections, is written as
\begin{eqnarray}
V_{\text{eff}}(\Phi) = V_{\text{cl}}(\Phi) + V^{(1)}(\Phi) + V^{(2)}(\Phi),
\label{effP}
\end{eqnarray}
where $V_{\text{cl}}(\Phi)= U(\Phi)$ is the tree-level (classical) contribution to the effective potential in a flat manifold, and $V^{(1)}(\Phi)$ and $V^{(2)}(\Phi)$ are the one- and two-loop correction contributions, respectively. Note that we have performed a linear expansion of $\phi$ ($\phi\rightarrow\Phi + \varphi$) about the classical field, $\Phi$. The two-loop contribution, the last term on the r.h.s of \eqref{effP}, comes from two graphs contributing to the effective Euclidean action \cite{toms:1980:2805}. We will postpone its calculation, for each case we consider, to the next sections.
As to the one-loop contribution to the effective potential, we will follow the same method of \cite{toms:1980:2805}, which is to define this contribution in terms of the Riemann zeta function $\zeta(s)$, i.e.,
\begin{equation} \label{one_loop_potential_DN_t}
V^{(1)}(\Phi) = - \frac{1}{2 \text{vol} (\text{E})} \bigg{[}\zeta^{\prime}(0) + \zeta(0) \text{ln} \left(\mu^2\right) \bigg{]} \ ,
\end{equation}
where $\text{vol(E)}$ is the Euclidean volume, $\zeta^{\prime}(s)$ is the derivative of the zeta function with respect to the parameter $s$, and the term $\zeta(0)\text{ln}(\mu^2)$ is to be removed by renormalization.\footnote{The parameter $\mu$ is associated with a measure on the space of functions \cite{toms:1980:2805}.} As is known, the (generalized) Riemann zeta function, $\zeta(s)$, is defined as
\begin{equation} \label{zet_funct_DN_t_0}
\zeta(s) = \sum_{\beta} \Lambda_{\beta}^{-s} \ ,
\end{equation}
where $\Lambda_{\beta}$ is the spectrum of eigenvalues associated with a self-adjoint elliptic operator, which in our case is given by
\begin{eqnarray}
\Delta = -\partial_{\mu}\partial^{\mu} + \chi u^{\mu}u^{\nu}\partial_{\mu}\partial_{\nu} + m^2 + \frac{\lambda\Phi^2}{2}.
\label{operator}
\end{eqnarray}
Note that $\beta$ stands for the set of quantum numbers associated with the quantum field eigenfunction, $\varphi$, of the operator $\Delta$. Although the zeta function \eqref{zet_funct_DN_t_0} is defined in terms of the complex parameter $s$ only for $\text{Re}(s) > 1$, an analytic continuation to the whole complex plane can be obtained for it, including at $s = 0$.
The renormalization condition that enables us to eliminate the term $\zeta(0)\text{ln}(\mu^2)$ in \eqref{one_loop_potential_DN_t} is chosen in analogy with the Coleman-Weinberg one and fixes the coupling constant \cite{coleman,toms:1980:2805}. This condition is written as
\begin{eqnarray}
\frac{d^4V_{\text{eff}}}{d\Phi^4}\bigg|_{\Phi=0}=\lambda.
\label{ren1}
\end{eqnarray}
As we will see, this condition will fix the renormalization constant $\delta_1$.
\begin{figure}[h!]
\includegraphics[scale=0.7]{f1.pdf}
\label{par_plat}
\caption{Schematic configuration for the two parallel plates with area $L^2$ separated by a distance $a$ ($a \ll L$).}
\label{figure1}
\end{figure}
On the other hand, the condition that makes possible to obtain a topological mass is given by
\begin{equation}
\frac{d^{2}V_{\text{eff}}}{d\Phi^2}\bigg|_{\Phi=0}=m^2.
\label{ren2}
\end{equation}
This condition fixes the renormalization constant $\delta_2$ and also provides the topological mass when applied to the renormalized effective potential, as we will see. Note that $\Phi=0$ represents the minimum of the effective potential only if it obeys the extremum condition
\begin{equation}
\frac{dV_{\text{eff}}}{d\Phi}\bigg|_{\Phi=0}=0,
\end{equation}
with the second derivative in Eq.(\ref{ren2}) being positive.
Moreover, in order to find the constant $\delta_3$ we also need to use an additional renormalization condition which is given by
\begin{eqnarray}
V_{\text{eff}}\big|_{\Phi=0}=0.
\label{ren3}
\end{eqnarray}
From now on we will assume that the quantum field $\varphi$ is confined between two large parallel plates, as shown in Fig.\ref{figure1}. The quantum field $\varphi$ is an eigenfunction of the self-adjoint elliptic operator \eqref{operator}, with eigenvalues $\Lambda_{\beta}$. The eigenvalues we are interested in are the ones obtained by requiring that $\varphi$ satisfy specific boundary conditions on the plates placed at $z=0$ and $z=a$, for both types of $4$-vector $u^{\mu}$: timelike and spacelike.
\subsection{Dirichlet and Neumann boundary conditions} \label{DN_bound_cond}
We will start by considering the cases in which the scalar field, $\varphi$, satisfies Dirichlet or Neumann boundary conditions on the plates, respectively given by
\begin{equation}
\label{dir_bound}
\varphi(x)\big{|}_{z=0} = \varphi(x)\big{|}_{z=a} = 0, \qquad\text{and}\qquad \frac{\partial \varphi(x)}{\partial z}\bigg{|}_{z=0} = \frac{\partial \varphi(x)}{\partial z}\bigg{|}_{z=a} = 0 \ .
\end{equation}
The complete set of normalized solutions for the scalar field, $\varphi$, under these conditions has been reported, for instance, in Ref. \cite{cruz:2017}. These solutions provide the following eigenvalues of the operator \eqref{operator}:
\begin{eqnarray}
\Lambda_{\beta} = k_t^2 + k^2 + \frac{n^2\pi^2}{a^2} -\chi u^{\mu}u^{\nu}k_{\mu}k_{\nu} + m^2 + \frac{\lambda\Phi^2}{2},
\label{ev1}
\end{eqnarray}
where $k_{\mu}=(k_t ,k_x, k_y, k_z)$ are the four-momentum components, $k^2=k_x^2+k_y^2$ and $k_z=n\pi/a$, for $n=1,2,3,...$\ . Hence, the set of quantum numbers is $\beta = (k_t,k_x,k_y,n)$. Note that $k_t$, $k_x$ and $k_y$ are continuous quantum numbers. In addition, we point out that for the constant timelike vector, $u^t$, in \eqref{ev1}, we have to consider the Euclidean version of the zero component of the four-momentum, i.e., we have to take $k_t\to- ik_t$ in the term associated with the Lorentz-violating parameter, $\chi$.
\subsubsection{Timelike vector}
We begin by considering the timelike type of the $4$-vector $u^{\mu}$, in which case $u^{t}=(1,0,0,0)$, meaning that the privileged direction chosen to have the Lorentz symmetry violated is the time direction. The eigenvalues \eqref{ev1} are then written as
\begin{equation}
\label{eigenv_timelike_DN}
\Lambda_{\beta} = (1+\chi)k_t^2 + k^2 + \frac{\pi^2 n^2}{a^2} + m^2 + \frac{\lambda \Phi^2}{2} \ .
\end{equation}
The set of eigenvalues in Eq. \eqref{eigenv_timelike_DN} allows us to build the zeta function by using the definition \eqref{zet_funct_DN_t_0}. This provides
\begin{equation} \label{zeta_DN_t_0}
\zeta(s) = \frac{V_3}{(2\pi)^3} \sum_{n=1}^{\infty} \int d^3k \left[(1+\chi)k_t^2 + k^2 + \frac{\pi^2 n^2}{a^2} + m^2 + \frac{\lambda}{2}\Phi^2\right]^{-s} \ ,
\end{equation}
where $V_3$ is a continuum volume associated with the coordinates $t,x,y$ and $d^3k=dk_tdk_xdk_y$. After defining a new variable $\kappa_t=\sqrt{1+\chi}k_t$, the integrals in $\kappa_t \ , \ k_x$ and $k_y$ can be performed by using the identity
\begin{equation}
\label{identity}
\frac{1}{\omega^{2s}} = \frac{2}{\Gamma(s)}\int_0^{\infty}d\tau\,\tau^{2s-1}e^{-\omega^2\tau^2}.
\end{equation}
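This identity can be quickly checked numerically; the following Python sketch (illustrative only, with arbitrary values of $s$ and $\omega$) compares both sides using SciPy's quadrature and Gamma-function routines:
\begin{verbatim}
# Sketch: check 1/omega^(2s) = (2/Gamma(s)) * Int_0^inf tau^(2s-1)
#         exp(-omega^2 tau^2) dtau  for sample values of s and omega.
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

s, omega = 1.7, 2.3
lhs = omega ** (-2 * s)
rhs = (2.0 / gamma(s)) * quad(
    lambda t: t ** (2 * s - 1) * np.exp(-(omega * t) ** 2), 0, np.inf)[0]
print(lhs, rhs)  # the two values agree to machine precision
\end{verbatim}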
Consequently, \eqref{zeta_DN_t_0} becomes
\begin{equation}
\label{zet_funct_DN_t_1}
\begin{aligned}
\zeta(s) = \frac{V_3}{(2\pi)^3}\frac{\pi^{3/2}}{\sqrt{1+\chi}}\frac{\Gamma(s-3/2)}{\Gamma(s)}w^{3-2s}\sum_{n=1}^{\infty}\left(n^2+\nu^2\right)^{3/2-s} \ ,
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
\nu^2 \equiv \frac{\lambda \Phi^2}{2w^2} + \frac{m^2}{w^2} \ \ \ \ \ \ \ \ \text{and} \ \ \ \ \ \ \ \ w \equiv \frac{\pi}{a} \ .
\end{aligned}
\end{equation}
The sum in $n$ present in Eq. \eqref{zet_funct_DN_t_1} can be worked out by making use of the Epstein-Hurwitz zeta function \cite{grad}:
\begin{equation}
\label{EH_funct}
\begin{aligned}
\zeta_{EH}(s,\nu) & \equiv \sum_{n=1}^{\infty} \left(n^2+\nu^2\right)^{-s} \\ &= -\frac{\nu^{-2s}}{2} + \frac{\pi^{1/2}}{2}\frac{\Gamma(s-1/2)}{\Gamma(s)}\nu^{1-2s} +\frac{2^{1-s}(2\pi)^{2s-1/2}}{\Gamma(s)} \sum_{n=1}^{\infty} n^{2s-1} f_{(s-1/2)}(2\pi n \nu),
\end{aligned}
\end{equation}
where the function $f_{\gamma}(x)$ is defined in terms of the modified Bessel functions \cite{grad}, $K_{\gamma}(x)$, by the following relation:
\begin{equation}
f_{\gamma}(x) \equiv \frac{K_{\gamma}(x)}{x^{\gamma}}.
\end{equation}
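The resummation \eqref{EH_funct} can also be verified numerically. The sketch below (illustrative only; the values of $s$ and $\nu$ are arbitrary but lie inside the region of absolute convergence, and the truncation limits are ours) compares the direct sum with the Bessel-function representation:
\begin{verbatim}
# Sketch: direct sum over n versus the Epstein-Hurwitz resummed form.
import numpy as np
from scipy.special import gamma, kv

def f(g, x):  # f_gamma(x) = K_gamma(x) / x**gamma
    return kv(g, x) / x ** g

def zeta_EH(s, nu, nmax=200):
    bessel = sum(n ** (2 * s - 1) * f(s - 0.5, 2 * np.pi * n * nu)
                 for n in range(1, nmax + 1))
    return (-nu ** (-2 * s) / 2
            + np.sqrt(np.pi) / 2 * gamma(s - 0.5) / gamma(s) * nu ** (1 - 2 * s)
            + 2 ** (1 - s) * (2 * np.pi) ** (2 * s - 0.5) / gamma(s) * bessel)

s, nu = 2.0, 0.7
n = np.arange(1, 200_001)
direct = np.sum((n ** 2 + nu ** 2) ** (-s))
print(direct, zeta_EH(s, nu))  # agree up to the tail-truncation error
\end{verbatim}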
Thus, using \eqref{EH_funct} in Eq. \eqref{zet_funct_DN_t_1} we get
\begin{equation}
\label{zet_funct_DN_t_2}
\begin{aligned}
\zeta(s) = \frac{V_3}{(2\pi)^3} \frac{1}{\sqrt{1+\chi}} \bigg{[} & -\frac{\pi^{3/2} w^{-2\bar{s}} \nu^{-2\bar{s}}}{2} \frac{\Gamma(\bar{s})}{\Gamma(\bar{s}+3/2)} + \frac{\pi^{1/2} \pi^{3/2} w^{-2\bar{s}}}{2}\frac{\Gamma(\bar{s}-1/2)}{\Gamma(\bar{s}+3/2)}\nu^{1-2\bar{s}} \\ & + \frac{2^{1-\bar{s}}(2\pi)^{2\bar{s}-1/2} \pi^{3/2} w^{-2\bar{s}}}{\Gamma(\bar{s}+3/2)} \sum_{n=1}^{\infty} n^{2\bar{s}-1} f_{(\bar{s}-1/2)}(2\pi n \nu) \bigg{]} \ ,
\end{aligned}
\end{equation}
where $\bar{s}=s-3/2$. We should note that in order to obtain the one-loop correction to the effective potential we have to take the limit, $s\rightarrow 0$, which means $\bar{s}\rightarrow -3/2$. Consequently, Eq. \eqref{zet_funct_DN_t_2} provides
\begin{equation} \label{zet_0_D_t}
\zeta(0) = \frac{V_3 a}{2(2\pi)^4} \frac{\pi^2 b^4}{\sqrt{1+\chi}},
\end{equation}
and
\begin{equation} \label{zet_prim_D_t}
\begin{aligned}
\zeta^{\prime}(0) = \frac{V_3 a}{(2\pi)^3} \left[-\frac{2\pi^2}{3}\frac{b^3}{\sqrt{1+\chi}a} + \frac{3}{8}\frac{\pi b^4}{\sqrt{1+\chi}} - \frac{\pi}{2} \frac{b^4 \text{ln}(b)}{\sqrt{1+\chi}} + \frac{2\pi b^2}{\sqrt{1+\chi}a^2}\sum_{n=1}^{\infty} \frac{K_2\left(2ban\right)}{n^2} \right],
\end{aligned}
\end{equation}
where the parameter $b$ is defined as
\begin{equation}
b \equiv \sqrt{\frac{\lambda \Phi^2}{2} + m^2} \ .
\end{equation}
Hence, substituting the results in Eqs. \eqref{zet_0_D_t} and \eqref{zet_prim_D_t} into \eqref{one_loop_potential_DN_t}, we find the one-loop correction to the effective potential, that is,
\begin{equation} \label{1_loop_potent_DN_t}
\begin{aligned}
V^{(1)}(\Phi) &= \frac{b^3}{24 \pi \sqrt{1+\chi} a} - \frac{3 b^4}{128 \pi^2 \sqrt{1+\chi}} + \frac{b^4 \text{ln}\left(\frac{b^2}{\mu^2}\right)} {64 \pi^2 \sqrt{1+\chi}} \\ & \ \ \ - \frac{b^2}{8 \pi^2 \sqrt{1+\chi} a^2} \sum_{n=1}^{\infty}\frac{K_2\left(2ban\right)}{n^2} .
\end{aligned}
\end{equation}
This allows us to write the effective potential \eqref{effP} up to one-loop correction as
\begin{equation} \label{effec_potent_DN_t}
\begin{aligned}
V_{\text{eff}}(\Phi) &= \frac{m^2\Phi^2}{2} + \frac{\lambda \Phi^4}{4!} + \frac{ \Phi^2}{2}\delta_2 + \frac{\Phi^4}{4!}\delta_1 + \delta_3 + \frac{b^3}{24 \pi \sqrt{1+\chi} a} - \frac{3 b^4}{128 \pi^2 \sqrt{1+\chi}} \\ & \ \ + \frac{b^4 \text{ln}\left(\frac{b^2}{\mu^2}\right)}{ 64\pi^2 \sqrt{1+\chi}} - \frac{b^2}{8 \pi^2 \sqrt{1+\chi} a^2} \sum_{n=1}^{\infty}\frac{K_2\left(2ban\right)}{n^2} .
\end{aligned}
\end{equation}
The effective potential above still needs to be renormalized, which requires finding the renormalization constants $\delta_1$, $\delta_2$ and $\delta_3$ in the limit $a\rightarrow +\infty$ \cite{toms:1980:2805, chung:1998}. This is done by making use of the conditions \eqref{ren1}, \eqref{ren2} and \eqref{ren3} taken at $\Phi=0$. The condition \eqref{ren1} fixes the renormalization constant $\delta_1$, i.e.,
\begin{equation}
\label{lambda_const_DN_t}
\begin{aligned}
\frac{\delta_1}{4!} &= \frac{\lambda^2 \text{ln}\left(\frac{\mu^2}{m^2}\right)}{256\pi^2\sqrt{1+\chi}} \ .
\end{aligned}
\end{equation}
Furthermore, the renormalization conditions \eqref{ren2} and \eqref{ren3} fix, respectively, the constants $\delta_2$ and $\delta_3$, providing
\begin{equation}
\label{m2_const_DN_t}
\begin{aligned}
\frac{\delta_2}{2} = \frac{\lambda m^2}{64\pi^2\sqrt{1+\chi}} + \frac{\lambda m^2 \text{ln}\left(\frac{\mu^2}{m^2}\right)} {64\pi^2\sqrt{1+\chi}} \ ,
\end{aligned}
\end{equation}
and
\begin{equation}
\label{m4_const_DN_t}
\begin{aligned}
\delta_3= \frac{3m^4}{128\pi^2\sqrt{1+\chi}} + \frac{m^4\text{ln}\left(\frac{\mu^2}{m^2}\right)}{64\pi^2\sqrt{1+\chi}} \ .
\end{aligned}
\end{equation}
The renormalization constants above, when taken into account in Eq. \eqref{effec_potent_DN_t}, allow us to obtain the renormalized effective potential at one-loop level
\begin{equation}
\label{ren_effec_potent_DN_t}
\begin{aligned}
V_{\text{eff}}^{\text{R}}(\Phi) &= \frac{m^2\Phi^2}{2} + \frac{\lambda \Phi^4}{4!} - \frac{3b^4}{128\pi^2\sqrt{1+\chi}} + \frac{3m^4}{128\pi^2\sqrt{1+\chi}} + \frac{b^3}{24\pi \sqrt{1+\chi} a} \\ & \ \ + \frac{m^2\lambda \Phi^2}{64\pi^2\sqrt{1+\chi}} + \frac{b^4\text{ln}\left(\frac{b^2}{m^2}\right) }{64\pi^2 \sqrt{1+\chi}}- \frac{b^2}{8\pi^2\sqrt{1+\chi}a^2}\sum_{n=1}^{\infty}\frac{K_2\left(2ban\right)}{n^2} \ .
\end{aligned}
\end{equation}
This expression for the renormalized effective potential is clearly affected by the Lorentz symmetry violation parameter $\chi$, as it should be.
The renormalized effective potential, \eqref{ren_effec_potent_DN_t}, when taken at the vacuum state $\Phi=0$, provides a non-vanishing vacuum Casimir-like potential energy per unit area of the plates, given by
\begin{equation}
\label{dens_ener_cas_exa_DN_t}
\frac{{E}_{C}}{L^2} = a V_{\text{eff}}^{\text{R}}(0) = -\frac{m^2}{8\pi^2\sqrt{1+\chi}a}\sum_{n=1}^{\infty}\frac{K_2\left(2amn\right)}{n^2} \ .
\end{equation}
As we can see, the Casimir potential energy density above is affected by the Lorentz symmetry violation parameter $\chi$ through a multiplicative factor. Although this potential energy is given in terms of a sum of modified Bessel functions, $K_{2}(2amn)$, it is a convergent expression, since for large values of $n$ this function is exponentially suppressed.
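For reference, the Bessel sum in \eqref{dens_ener_cas_exa_DN_t} is straightforward to evaluate; a short Python sketch (illustrative only, with arbitrary parameter values and our own function name) is:
\begin{verbatim}
# Sketch: Casimir energy per unit area, Eq. (dens_ener_cas_exa_DN_t),
# in the timelike case; a, m, chi are arbitrary illustrative values.
import numpy as np
from scipy.special import kn

def E_over_L2(a, m, chi, nmax=2000):
    n = np.arange(1, nmax + 1)
    return (-m**2 / (8 * np.pi**2 * np.sqrt(1 + chi) * a)
            * np.sum(kn(2, 2 * a * m * n) / n**2))

print(E_over_L2(a=1.0, m=0.5, chi=0.1))  # negative (attractive) and finite
\end{verbatim}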
It is possible to provide closed expressions for the asymptotic behaviors of the vacuum potential energy density as $am\gg 1$ and $am\ll 1$. In the case $am\gg 1$, by using the asymptotic expression for the modified Bessel function for large argument \cite{grad}, we get
\begin{equation} \label{cas_energ_DN_t_large}
\frac{E_{C}}{L^2} \approx - \frac{1}{16\sqrt{1+\chi}} \left(\frac{m}{\pi a}\right)^{3/2} e^{-2am} \ .
\end{equation}
In this limit, the dominant contribution in \eqref{dens_ener_cas_exa_DN_t} comes from the $n=1$ term, and we can clearly see that for large values of $am$ the vacuum potential energy density is exponentially suppressed.
As for the case $am \ll 1$, it is convenient to first use the integral representation for the modified Bessel function \cite{abramowitz:1948}:
\begin{equation}
\label{int_bessel_form}
K_{\nu}(z)=\frac{\sqrt{\pi}\left(\frac{1}{2}z\right)^{\nu}}{\Gamma(\nu+1/2)}\int_1^{\infty}e^{-zt}\left(t^2-1\right)^{\nu-1/2} dt \ .
\end{equation}
%
By substituting the above representation into \eqref{dens_ener_cas_exa_DN_t}, it is possible to perform the sum over $n$. Doing this we get:
\begin{eqnarray}
\label{potential}
\frac{E_C}{L^2}=-\frac{am^4}{6\pi^2\sqrt{1+\chi}}\int_1^\infty dv\frac{(v^2-1)^{3/2}}{e^{2amv}-1} \ .
\end{eqnarray}
In the regime $am\ll 1$, the integral in \eqref{potential} is dominated by large values of $v$, so we may approximate \eqref{potential} by
\begin{eqnarray}
\label{potential1}
\frac{E_C}{L^2}\approx-\frac{am^4}{6\pi^2\sqrt{1+\chi}}\int_1^\infty dv\frac{v^3}{e^{2amv}-1} \ .
\end{eqnarray}
The integral in \eqref{potential1} can now be evaluated and, after a series expansion in powers of $am\ll 1$, provides
\begin{equation}
\frac{E_{C}}{L^2} \approx -\frac{\pi^2}{1440 \sqrt{1+\chi} a^3}+\frac{m^3}{36 \pi^2 \sqrt{1+\chi}} - \frac{a m^4}{48 \pi^2 \sqrt{1+\chi}} \ .
\label{asy2}
\end{equation}
Note that the first term on the r.h.s of \eqref{asy2}, the leading one, is the contribution of the massless scalar field, which becomes an exact expression in the limit $m\rightarrow 0$.
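A quick numerical comparison (illustrative only; $am=0.05$ is an arbitrary choice with $am\ll 1$) between the exact sum \eqref{dens_ener_cas_exa_DN_t} and the expansion \eqref{asy2}:
\begin{verbatim}
# Sketch: exact Bessel sum versus the small-am expansion (asy2).
import numpy as np
from scipy.special import kn

a, m, chi = 1.0, 0.05, 0.1  # am = 0.05
n = np.arange(1, 5001)
exact = (-m**2 / (8 * np.pi**2 * np.sqrt(1 + chi) * a)
         * np.sum(kn(2, 2 * a * m * n) / n**2))
approx = (-np.pi**2 / (1440 * np.sqrt(1 + chi) * a**3)
          + m**3 / (36 * np.pi**2 * np.sqrt(1 + chi))
          - a * m**4 / (48 * np.pi**2 * np.sqrt(1 + chi)))
print(exact, approx)  # both close to the massless value for am << 1
\end{verbatim}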
Moreover, from Eq. \eqref{dens_ener_cas_exa_DN_t} we also recover the results for the Casimir effect of a real scalar field satisfying Dirichlet or Neumann boundary conditions on two parallel plates without Lorentz violation \cite{bordag:2001,milton:2003, cruz:2017}.
As said before, the two-loop contribution to the effective potential comes from two graphs contributing to the effective Euclidean action \cite{toms:1980:2805, toms:1980:334357}. As we are only interested in the two-loop contribution to the vacuum energy, its only nonzero term at $\Phi=0$ is given by
\begin{equation}
\label{vac_two_loop_contr_potent}
V^{(2)}(\Phi=0) = \frac{\lambda}{8} S_1(\Phi=0) \ .
\end{equation}
The function $S_1(\Phi)$ is obtained by means of the expression
\begin{equation}
\label{funct_S1_DN_t}
S_1(\Phi) = \left \{ \sum_{n=1}^{\infty} \frac{1}{a} \int \frac{d^3k}{(2\pi)^3}\left[(1+\chi)k_t^2+k^2+\frac{\pi^2 n^2}{a^2}+m^2+\frac{\lambda \Phi^2}{2}\right]^{-s}\right \}^{2} \ ,
\end{equation}
where $s$ has been introduced in order to regularize the expression above. After subtracting the divergent part of \eqref{funct_S1_DN_t}, one should take $s=1$. This allows us to write the finite contribution of the function $S_1(\Phi)$ in terms of the zeta function \eqref{zet_funct_DN_t_2} as
\begin{equation}
\label{funct_S1_DN_t_f2}
S_1(\Phi) = \left[\frac{\zeta_R(1)}{V_3 a}\right]^2 \ .
\end{equation}
Note that $\zeta_R(1)$ is the zeta function \eqref{zet_funct_DN_t_2} taken at $s=1$ after subtracting its divergent part, given by the second term on the r.h.s. This term, when divided by $V_3 a$, is independent of $a$ and presents a divergent contribution at $s=1$ proportional to $\frac{\Gamma(s-2)}{\Gamma(s)}\approx \frac{s}{1-s}$. As usually done, this term is subtracted, since it does not depend on the boundary condition parameter $a$. Hence, Eqs. \eqref{vac_two_loop_contr_potent} and \eqref{funct_S1_DN_t_f2} provide the two-loop correction to the vacuum energy per unit area as
\begin{equation}
\label{2_loop_cas_energ_DN_t}
\frac{E^{(\lambda)}_C}{L^2} = aV^{(2)}(0) = \frac{m^2 \lambda}{128 \pi^4 (1+\chi)a} \left[\sum_{n=1}^{\infty}\frac{K_1\left(2amn\right)}{n}\right]^2 \ .
\end{equation}
We can also see that $\frac{E^{(\lambda)}_C}{L^2}$ depends on the Lorentz violation parameter through a multiplicative factor; moreover, it is an exact and convergent expression, as can be seen by noting that the modified Bessel function $K_{1}(2amn)$ is exponentially suppressed for large values of $n$.
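A corresponding sketch for the two-loop correction (again with arbitrary illustrative values of $a$, $m$, $\chi$ and $\lambda$):
\begin{verbatim}
# Sketch: two-loop correction, Eq. (2_loop_cas_energ_DN_t), as the
# square of a rapidly convergent Bessel sum.
import numpy as np
from scipy.special import kn

a, m, chi, lam = 1.0, 0.5, 0.1, 1e-3
n = np.arange(1, 2001)
s1 = np.sum(kn(1, 2 * a * m * n) / n)
print(m**2 * lam / (128 * np.pi**4 * (1 + chi) * a) * s1**2)  # positive
\end{verbatim}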
\begin{figure}[h!]
\centering
\subfloat{{\includegraphics[width=17cm]{f2} }}%
\qquad
\caption{The behavior of the Casimir energy per unit area $E(am)=\frac{a^3}{L^2}E_C$, given by \eqref{dens_ener_cas_exa_DN_t}, as a function of $am$ is exhibited in the left panel, and the two-loop contribution to the Casimir energy per unit area $E_{\lambda}(am)=\frac{a^3}{L^2}E^{(\lambda)}_C$ in Eq. \eqref{2_loop_cas_energ_DN_t}, as a function of $am$, is presented in the right panel considering $\lambda=10^{-3}$. In both plots different values for the parameter $\chi$ are considered.}
\label{figure2}%
\end{figure}
We can also obtain closed expressions for \eqref{2_loop_cas_energ_DN_t} in the regimes $am \gg 1$ and $am \ll 1$. Considering first the case $am\gg 1$, we have
\begin{equation}
\label{app1}
\frac{E^{(\lambda)}_C}{L^2} \approx \frac{\lambda me^{-4 a m}}{512 \pi^3 (1+\chi) a^2} \ ,
\end{equation}
which is dominated by the $n=1$ term in the sum and is exponentially suppressed. This feature is shown in Fig.\ref{figure2}.
In the opposite regime, that is, when $am \ll 1$, we should again use the integral representation \eqref{int_bessel_form} for the modified Bessel function $K_{\mu}(x)$. Developing a procedure similar to the previous one, we obtain
\begin{equation}
\begin{aligned}
\frac{E^{(\lambda)}_C}{L^2} & \approx \frac{\lambda }{18432 (1+\chi)a^3} - \frac{\lambda m}{768 \pi^2(1+\chi)a^2} \ ,
\label{expansion}
\end{aligned}
\end{equation}
which gives as the leading contribution the massless scalar field expression, the first term on its r.h.s. This becomes an exact expression in the limit $m\rightarrow 0$.
In the left panel of Fig.\ref{figure2}, we exhibit the behavior of the Casimir energy per unit area given by Eq. \eqref{dens_ener_cas_exa_DN_t}, as a function of the dimensionless parameter $am$, considering different values for the parameter $\chi$. We can see that it increases as the Lorentz symmetry violation parameter increases. In the right plot, on the other hand, we exhibit the two-loop correction to the Casimir energy per unit area, Eq. \eqref{2_loop_cas_energ_DN_t}, as a function of $am$. It is clear that both plots are in agreement with the asymptotic behaviors \eqref{cas_energ_DN_t_large} and \eqref{asy2} for the Casimir energy, and with \eqref{app1} and \eqref{expansion} for the two-loop correction.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{f3}
\caption{The behavior of the ratio of the topological mass to the scalar field mass as a function of $am$, plotted assuming $\lambda=10^{-3}$ and different values for the Lorentz-violating parameter $\chi$.}
\label{figure3}
\end{figure}
At one-loop level the massive scalar field we are considering will get quantum corrections to its mass. This correction can be obtained by using condition \eqref{ren2}, at $\Phi=0$, for the renormalized effective potential \eqref{ren_effec_potent_DN_t} at one-loop level, i.e.,
\begin{eqnarray}
\label{topological_mass_exata}
m_{\text{T}}^2 &=& \frac{d^2V_{\text{eff}}^{\text{R}}(\Phi)}{d\Phi^2}\bigg{|}_{\Phi=0}\nonumber\\
%
&=& m^2 \left[1 + \frac{\lambda}{16 \pi \sqrt{1+\chi} a m} + \frac{\lambda}{8 \pi^2 \sqrt{1+\chi} am} \sum_{n=1}^{\infty} \frac{K_1\left(2amn\right)}{n} \right] \ .
\end{eqnarray}
This expression presents a topological contribution to the mass $m$ of the scalar field, which depends on $a$ and is given by the terms proportional to the self-coupling constant $\lambda$. Even for the massless scalar field, which remains massless at tree level, a topological mass is generated at one-loop order, as we can see from Eq. \eqref{topological_mass_exata}. Moreover, the third term inside the brackets is convergent, since it is exponentially suppressed for large values of $n$. In fact, for $am\gg 1$ the leading contribution is given by the first term on the r.h.s of \eqref{topological_mass_exata}. This asymptotic behavior can be obtained by using the expression for the modified Bessel function for large argument \cite{grad}. The asymptotic expression for \eqref{topological_mass_exata} in the regime $am\gg 1$ is:
\begin{equation}
m_{\text{T}}^2 \approx m^2 + \frac{\lambda m}{16 \pi \sqrt{1+\chi} a} + \frac{\lambda}{16 \pi ^{3/2}} \sqrt{\frac{m}{a^3}} \frac{e^{-2 a m}}{ \sqrt{1+\chi}} \ .
\label{asm1}
\end{equation}
On the other hand, for $am\ll 1$, the leading term is mass independent, followed by terms that depend on the mass of the scalar field. Once more, this behavior can be obtained by using the integral representation \eqref{int_bessel_form} for the function $K_\mu(z)$. After some intermediate steps we obtain:
%
\begin{equation}
\begin{aligned}
m_{\text{T}}^2 & \approx \frac{\lambda}{96 \sqrt{1+\chi} a^2} + \frac{\lambda m}{16 \pi \sqrt{1+\chi} a} - \frac{\lambda m}{8 \pi^2 \sqrt{1+\chi} a} +m^2 \ .
\label{asm2}
\end{aligned}
\end{equation}
The asymptotic results, Eqs. \eqref{asm1} and \eqref{asm2}, are in agreement with the plot of Fig.\ref{figure3}, which exhibits the behavior of the ratio of the topological mass to the field's mass, $m_T/m$, as a function of $am$ for different values of the parameter $\chi$. Note that the topological mass decreases as $\chi$ increases.
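The curves of Fig.\ref{figure3} can be reproduced directly from Eq. \eqref{topological_mass_exata}; a minimal sketch (illustrative only; the function name and sample values are ours), using the fact that $m_T/m$ depends only on the combinations $am$, $\lambda$ and $\chi$:
\begin{verbatim}
# Sketch: m_T/m from Eq. (topological_mass_exata).
import numpy as np
from scipy.special import kn

def mass_ratio(am, chi, lam=1e-3, nmax=2000):
    n = np.arange(1, nmax + 1)
    s = np.sum(kn(1, 2 * am * n) / n)
    return np.sqrt(1 + lam / (16 * np.pi * np.sqrt(1 + chi) * am)
                     + lam * s / (8 * np.pi**2 * np.sqrt(1 + chi) * am))

print(mass_ratio(0.5, 0.0), mass_ratio(0.5, 0.4))  # decreases as chi grows
\end{verbatim}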
\subsubsection{Spacelike vector}
Now we consider the case in which the constant $4$-vector is spacelike, that is, of the types $u^x=(0,1,0,0)$, $u^y=(0,0,1,0)$ and $u^z=(0,0,0,1)$. The first two types are parallel to the plates and provide essentially the same results, while the third type is perpendicular to the plates. Let us then begin with the parallel cases by assuming, for instance, that the breaking of Lorentz symmetry happens in the $x$-direction, i.e., $u^x = (0,1,0,0)$. This gives the set of eigenvalues:
\begin{equation}
\Lambda_{\beta} = k^2 + (1-\chi)k_x^2 + \frac{\pi^2n^2}{a^2} + m^2 + \frac{\lambda}{2}\Phi^2 \ ,
\end{equation}
where $k^2=k_t^2+k_y^2$. Thereby, from Eq. \eqref{operator}, the set of eigenvalues above is associated with the elliptic operator given by
\begin{equation}
\Delta = -\partial_{\mu}\partial^{\mu} + \chi \partial_x^2 + m^2 + \frac{\lambda}{2}\Phi^2 \ .
\end{equation}
So, in this case, the zeta function \eqref{zet_funct_DN_t_0} is written as
\begin{equation}
\zeta(s) = \frac{V_3}{(2\pi)^3} \int d^3k \sum_{n=1}^{\infty}\left[k^2 + (1-\chi)k_x^2 + \frac{\pi^2n^2}{a^2} + m^2 + \frac{\lambda}{2}\Phi^2\right]^{-s} \ .
\end{equation}
The integrals in $k_t$, $k_x$ and $k_y$ can be performed using the identity \eqref{identity}, providing
\begin{equation}
\zeta(s) = \frac{V_3}{(2\pi)^3}\frac{\pi^{3/2}}{\sqrt{1-\chi}}\frac{\Gamma(s-3/2)}{\Gamma(s)}w^{3-2s}\sum_{n=1}^{\infty}\left(n^2+\nu^2\right)^{3/2-s} \ ,
\label{zetatl1}
\end{equation}
where
\begin{equation}
\nu^2 = \frac{\lambda \Phi^2}{2w^2}+\frac{m^2}{w^2} \ \ \ \ \text{and} \ \ \ \ w=\frac{\pi}{a} \ .
\end{equation}
Clearly, this result is very similar to the one obtained in the timelike case, with a different dependence on the Lorentz violation parameter, $\chi$. In fact, the timelike and spacelike expressions for the zeta function, \eqref{zet_funct_DN_t_1} and \eqref{zetatl1} respectively, are related by the change $\chi \rightarrow -\chi$. Hence, all the results for the vacuum energy, its loop correction and the topological mass are related by this same change.
Let us now turn to the most important type of the spacelike Lorentz symmetry violation, namely, the one in the perpendicular direction to the plates given by
\begin{equation}
u^{z} = (0,0,0,1) \ .
\end{equation}
This provides, from Eq. \eqref{operator}, the following differential elliptic operator:
\begin{equation}
\label{diff_operador_DN_z}
\Delta = - \partial_{\mu} \partial^{\mu} + \chi \partial_z^2 + m^2 + \frac{\lambda \Phi^2}{2} \ .
\end{equation}
Consequently, the set of eigenvalues associated with this operator is found to be
\begin{equation}
\label{eigenvalue_DN_z}
\Lambda_{\beta} = k^2 + \left(1-\chi \right)\frac{\pi^2 n^2}{a^2} + m^2 + \frac{\lambda \Phi^2}{2} \ .
\end{equation}
Note that $k^2=k_t^2+k_x^2+k_y^2$. Thereby, the zeta function \eqref{zet_funct_DN_t_0}, taking into consideration \eqref{eigenvalue_DN_z}, is written as
\begin{equation}
\label{zeta_funct_s_DN_z}
\zeta(s) = \frac{V_3}{(2 \pi)^3} \sum_{n=1}^{\infty} \int d^3k \left[ k^2 + \left(1-\chi \right) \frac{\pi^2 n^2}{a^2} + m^2 + \frac{\lambda \Phi^2}{2}\right]^{-s} \ ,
\end{equation}
where, again, $V_3$ is the continuum volume associated with the dimensions $t,x,y$ and $d^3k=dk_tdk_xdk_y$. Thus, the identity \eqref{identity} allows us to perform the integrals in $k_t$, $k_x$ and $k_y$ in Eq. \eqref{zeta_funct_s_DN_z}. The resulting expression, in analogy with Eq. \eqref{zetatl1}, can be written in terms of the Epstein-Hurwitz function \eqref{EH_funct}, providing
\begin{equation}
\begin{aligned}
\zeta(s) = \frac{V_3}{(2\pi)^3} \bigg{[} & -\frac{\pi^{3/2} w^{-2\bar{s}} \nu^{-2\bar{s}}}{2} \frac{\Gamma(\bar{s})}{\Gamma(\bar{s}+3/2)} + \frac{\pi^{1/2} \pi^{3/2} w^{-2\bar{s}}}{2}\frac{\Gamma(\bar{s}-1/2)}{\Gamma(\bar{s}+3/2)}\nu^{1-2\bar{s}} \\ & + \frac{2^{1-\bar{s}}(2\pi)^{2\bar{s}-1/2} \pi^{3/2} w^{-2\bar{s}}}{\Gamma(\bar{s}+3/2)} \sum_{n=1}^{\infty} n^{2\bar{s}-1} f_{(\bar{s}-1/2)}(2\pi n \nu) \bigg{]} \ ,
\end{aligned}
\label{zetaSL}
\end{equation}
where $\bar{s}=s-3/2$ and
\begin{equation}
\begin{aligned}
\nu^2 = \frac{\lambda \Phi^2}{2w^2} + \frac{m^2}{w^2} \ \ \ \ \ \text{and} \ \ \ \ \ w \equiv \sqrt{1-\chi} \frac{\pi}{a} \ .
\end{aligned}
\end{equation}
Note that one should consider the limit $s\rightarrow 0$, or analogously $\bar{s}\rightarrow -3/2$. In this limit, we can get the expressions for $\zeta(0)$ and $\zeta^{\prime}(0)$, respectively, given by
\begin{equation}
\label{zeta_s0_DN_z}
\zeta(0) = \frac{V_3 a}{2(2\pi)^4} \frac{\pi^2 b^4}{\sqrt{1-\chi}} \ ,
\end{equation}
\begin{equation}
\label{zeta_prim_s0_DN_z}
\begin{aligned}
\zeta^{\prime}(0) = \frac{V_3 a}{(2\pi)^3} \left[ - \frac{2\pi^{2}}{3} \frac{b^3}{a} +\frac{3}{8}\frac{\pi b^4}{\sqrt{1-\chi}} - \frac{\pi}{2}\frac{b^4 \text{ln}(b)}{\sqrt{1-\chi}} + \frac{2\pi \sqrt{1-\chi} b^2}{a^2} \sum_{n=1}^{\infty} \frac{K_2\left(\frac{2abn}{\sqrt{1-\chi}}\right)}{n^2} \right] \ .
\end{aligned}
\end{equation}
Consequently, from Eq. \eqref{one_loop_potential_DN_t}, the one-loop correction to the effective potential is
\begin{equation}
\label{cont_potent_1loop_DN_z}
\begin{aligned}
V^{(1)}(\Phi) & = \frac{b^3}{24 \pi a} - \frac{3 b^4}{128 \pi^2 \sqrt{1-\chi}} + \frac{b^4 \text{ln}\left(\frac{b^2}{\mu^2}\right)}{64 \pi^2 \sqrt{1-\chi}} \\ & \ \ \ - \frac{b^2 \sqrt{1-\chi}}{8 \pi^2 a^2} \sum_{n=1}^{\infty}\frac{K_2\left(\frac{2abn}{\sqrt{1-\chi}}\right)}{n^2} \ ,
\end{aligned}
\end{equation}
where the parameter $b$ is defined as
\begin{equation}
b = \sqrt{\frac{\lambda \Phi^2}{2} + m^2} \ .
\end{equation}
Hence, the effective potential up to one-loop correction, given by the expression \eqref{effP}, is obtained as
\begin{equation} \label{eff_potential_DN_z}
\begin{aligned}
V_{\text{eff}}(\Phi) & = \frac{m^2\Phi^2}{2} + \frac{\lambda \Phi^4}{4!} + \frac{\Phi^2}{2}\delta_2 + \frac{\Phi^4}{4!}\delta_1 + \delta_3 + \frac{b^3}{24 \pi a} - \frac{3 b^4}{128 \pi^2 \sqrt{1-\chi}} \\ & \ \ \ + \frac{b^4 \text{ln}\left(\frac{b^2}{\mu^2}\right)}{ 64\pi^2 \sqrt{1-\chi}} - \frac{b^2 \sqrt{1-\chi}}{8 \pi^2 a^2} \sum_{n=1}^{\infty}\frac{K_2\left(\frac{2abn}{\sqrt{1-\chi}}\right)}{n^2} \ .
\end{aligned}
\end{equation}
We should now obtain the renormalization constants $\delta_1$, $\delta_2$ and $\delta_3$, which can be done by using the conditions \eqref{ren1}, \eqref{ren2} and \eqref{ren3}, respectively. Thus, the renormalization constant $\delta_1$ is found to be
\begin{equation}
\label{lambda_const_DN_z}
\frac{\delta_1}{4!} = \frac{\lambda^2 \text{ln}\left(\frac{\mu^2}{m^2}\right)}{256\pi^2\sqrt{1-\chi}} \ ,
\end{equation}
\begin{equation}
\label{m2_const_DN_z}
\frac{\delta_2}{2!} = \frac{m^2\lambda}{64\pi^2\sqrt{1-\chi}} + \frac{m^2\lambda \text{ln}\left(\frac{\mu^2}{m^2}\right)}{64\pi^2\sqrt{1-\chi}},
\end{equation}
and
\begin{equation}
\label{m4_const_DN_z}
\delta_3 = \frac{3m^4}{128\pi^2\sqrt{1-\chi}} + \frac{m^4 \text{ln}\big(\frac{\mu^2}{m^2}\big)}{64\pi^2\sqrt{1-\chi}}.
\end{equation}
Hence, substituting the renormalization constants above in Eq. \eqref{eff_potential_DN_z}, the renormalized effective potential, up to one-loop correction, is found to be
\begin{equation}
\label{ren_eff_potential_DN_z}
\begin{aligned}
V_{\text{eff}}^{\text{R}}(\Phi) &= \frac{m^2\Phi^2}{2} + \frac{\lambda \Phi^4}{4!} + \frac{b^3}{24\pi a} - \frac{3b^4}{128\pi^2\sqrt{1-\chi}} + \frac{m^2 \lambda \Phi^2}{64 \pi^2 \sqrt{1-\chi}} + \frac{3m^4}{128 \pi^2 \sqrt{1-\chi}} \\ & + \frac{b^4}{64 \pi^2 \sqrt{1-\chi}}\text{ln}\left(\frac{b^2}{m^2}\right) - \frac{b^2 \sqrt{1-\chi}}{8 \pi^2 a^2} \sum_{n=1}^{\infty}\frac{K_2\left(\frac{2abn}{\sqrt{1-\chi}}\right)}{n^2} \ .
\end{aligned}
\end{equation}
The vacuum energy per unit area of the plates is obtained when we take the vacuum state ($\Phi=0$). Thus, from Eq. \eqref{ren_eff_potential_DN_z} we get
\begin{equation}
\label{1loop_cas_energ_DN_z}
\frac{E_{\text{C}}}{L^2} = a V_{\text{eff}}^{R}(0) = - \frac{m^2 \sqrt{1-\chi}}{8 \pi^2 a} \sum_{n=1}^{\infty}\frac{K_2\left(\frac{2 a m n}{\sqrt{1-\chi}}\right)}{n^2} \ ,
\end{equation}
which is convergent and, therefore, finite. This expression for the vacuum energy per unit area is exponentially suppressed for $am\gg 1$ and reduces to the massless scalar field expression for $am\ll1$.
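The different way in which $\chi$ enters here, compared with the timelike result \eqref{dens_ener_cas_exa_DN_t}, can be made explicit numerically; a short sketch (illustrative parameter values only):
\begin{verbatim}
# Sketch: chi enters the timelike energy only as an overall factor,
# but the spacelike z-case also carries chi in the Bessel argument.
import numpy as np
from scipy.special import kn

a, m = 1.0, 0.5
n = np.arange(1, 2001)

def E_timelike(chi):
    return (-m**2 / (8 * np.pi**2 * np.sqrt(1 + chi) * a)
            * np.sum(kn(2, 2 * a * m * n) / n**2))

def E_spacelike_z(chi):
    r = np.sqrt(1 - chi)
    return (-m**2 * r / (8 * np.pi**2 * a)
            * np.sum(kn(2, 2 * a * m * n / r) / n**2))

for chi in (0.0, 0.2, 0.4):      # the two cases coincide at chi = 0
    print(chi, E_timelike(chi), E_spacelike_z(chi))
\end{verbatim}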
We can obtain asymptotic expressions for \eqref{1loop_cas_energ_DN_z} in the regimes $am\ll 1$ and $am\gg1$. Considering the latter, we have
\begin{equation}
\label{app2}
\frac{E_C}{L^2} \approx -\frac{\sqrt{1-\chi}}{16}\left(\frac{m}{\pi a}\right)^{3/2} e^{-\frac{2am}{\sqrt{1-\chi}}} \ ,
\end{equation}
which, as mentioned before, is exponentially suppressed; it comes from the dominant $n=1$ term in the sum.
As for the limit $am \ll 1$, we should first use the integral representation \eqref{int_bessel_form} for the modified Bessel function, $K_{\mu}(x)$. This provides
\begin{equation}
\frac{E_C}{L^2} \approx -\frac{\pi^2 (1-\chi)^{3/2}}{1440 a^3} + \frac{m^3}{36 \pi^2} - \frac{a m^4}{48 \pi^2
\sqrt{1-\chi }} \ .
\label{enerper}
\end{equation}
%
Note that the dominant term is the first one on the r.h.s and represents the vacuum energy per unit area in the massless scalar field case.
Let us turn to the two-loop contribution to the effective potential calculated at $\Phi=0$ in the case in which the $4$-vector is orthogonal to the parallel plates. The $S_1(\Phi)$ function in Eq. \eqref{vac_two_loop_contr_potent} is now written as
\begin{equation}
\label{funct_S1_DN_z}
S_1(\Phi) = \left \{ \sum_{n=1}^{\infty} \frac{1}{a} \int \frac{d^3k}{(2\pi)^3} \left [ k^2 + (1-\chi)\frac{\pi^2 n^2}{a^2} + m^2 + \frac{\lambda \Phi^2}{2} \right ]^{-s} \right \}^2 \ ,
\end{equation}
which should be taken at $s=1$ after subtracting the divergent contribution. This can be done by using the zeta function \eqref{zetaSL} in a similar way as in the previous case, shown in Eq. \eqref{funct_S1_DN_t_f2}. In the present case, the divergent contribution comes from the second term on the r.h.s of Eq. \eqref{zetaSL} and, once it is subtracted, the two-loop contribution $V^{(2)}(0)$ at $s=1$ can be found, leading to the two-loop correction to the vacuum energy
\begin{equation}
\label{2_loop_cas_energ_DN_z}
\frac{E^{(\lambda)}_C}{L^2} = a V^{(2)}(0) = \frac{m^2 \lambda}{128 \pi^4 a} \left[\sum_{n=1}^{\infty} \frac{K_1\left(\frac{2amn}{\sqrt{1-\chi}}\right)}{n}\right]^2 \ .
\end{equation}
This is a completely convergent expression, since the modified Bessel function, $K_{\mu}(x)$, is exponentially suppressed. This is also clear when one considers the asymptotic limit $am\gg 1$. In the opposite limit, $am\ll 1$, the expression for the two-loop contribution to the vacuum energy of a massless scalar field is obtained as the leading contribution.
\begin{figure}[h!]
\centering
\subfloat{{\includegraphics[width=17cm]{f4} }}
\qquad
\caption{The behavior of the Casimir energy per unit area $E(am)=\frac{a^3}{L^2}E_C$, given by Eq. \eqref{1loop_cas_energ_DN_z}, as a function of $am$ is presented in the left panel, and the two-loop correction $E_{\lambda}(am)=\frac{a^3}{L^2}E^{(\lambda)}_C$, given by \eqref{2_loop_cas_energ_DN_z}, as a function of $am$ is in the right one. In the latter we have taken $\lambda=10^{-3}$ and considered different values for $\chi$.}
\label{figure4}
\end{figure}
We can also obtain the asymptotic expressions for \eqref{2_loop_cas_energ_DN_z} in the regimes $am\ll1$ and $am\gg 1$. In the latter, we have
\begin{equation}
\frac{E^{(\lambda)}_C}{L^2} \approx \frac{\lambda m \sqrt{1-\chi} e^{-\frac{4 a m}{\sqrt{1-\chi}}}}{512 \pi^3 a^2} \ ,
\end{equation}
which is exponentially suppressed and dominated by the $n=1$ term in the sum.
In the opposite limit, $ma\ll 1$, we need to use the integral representation \eqref{int_bessel_form} for the modified Bessel function, $K_{\mu}(x)$. This provides
\begin{equation}
\begin{aligned}
\frac{E^{(\lambda)}_C}{L^2} & \approx \frac{\lambda (1-\chi)}{18432 a^3} - \frac{\lambda m \sqrt{1-\chi}}{768 \pi^2 a^2} \ .
\label{SLVE}
\end{aligned}
\end{equation}
Note that the first term on the r.h.s is the massless scalar field contribution which becomes exact when $ma\rightarrow 0$.
In the left panel of Fig.\ref{figure4} we plot the behavior of the Casimir energy per unit area, \eqref{1loop_cas_energ_DN_z}, as a function of $am$ considering different values for the parameter $\chi$. This plot shows that as $\chi$ increases, the vacuum energy also increases. On the other hand, in the right figure we plot the two-loop correction to the vacuum energy per unit area, \eqref{2_loop_cas_energ_DN_z}, as a function of $am$, assuming $\lambda=10^{-3}$. This plot shows that as $\chi$ increases, the two-loop correction decreases. Moreover, one should note an important feature here, namely, that the vacuum energy \eqref{1loop_cas_energ_DN_z} and its radiative correction \eqref{2_loop_cas_energ_DN_z} depend on $\chi$ not only through a multiplicative factor but also through the argument of the modified Bessel function, $K_{\mu}(x)$. This stronger dependence is shown in the plots of Fig.\ref{figure4}: the shift in the curves is stronger than in the previous timelike case.
A topological mass will also be generated in this case and can be obtained by using the condition \eqref{ren2}. Thus, applying the latter to the renormalized effective potential \eqref{ren_eff_potential_DN_z}, we find
\begin{equation}
\label{topol_mass_DN_z}
m_{\text{T}}^2 = m^2 \left[1 + \frac{\lambda}{16 \pi a m} + \frac{\lambda}{8 \pi^2 a m} \sum_{n=1}^{\infty} \frac{K_1\left(\frac{2 a m n}{\sqrt{1-\chi}}\right)}{n} \right] \ .
\end{equation}
This is an exact, convergent expression; in the limit $am\gg 1$ it is dominated by the first term inside the brackets, while in the limit $am\ll 1$ it is dominated by the third term, the massless scalar field contribution.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{f5}
\caption{Plot exhibiting the behavior of the ratio of the topological mass to the mass of the field as a function of $am$, assuming $\lambda=10^{-3}$ and different values of $\chi$.}
\label{figure5}
\end{figure}
Mathematically, the asymptotic behaviour of \eqref{topol_mass_DN_z} for $am\gg 1$ is given by
\begin{equation}
\label{top_mas1}
m^2_{\text{T}} \approx m^2 + \frac{\lambda m}{16 \pi a} + \frac{\lambda \sqrt[4]{1-\chi}}{16 \pi^{3/2}} \sqrt{\frac{m}{a^3}}
e^{-\frac{2 a m}{\sqrt{1-\chi}}} \ ,
\end{equation}
while the asymptotic behaviour for $am\ll 1$ is
\begin{equation}
\label{top_mas2}
m^2_{\text{T}} \approx \frac{\lambda \sqrt{1-\chi}}{96 a^2} + \frac{\lambda m}{16 \pi a} +m^2 \ .
\end{equation}
%
The asymptotic results, Eqs. \eqref{top_mas1} and \eqref{top_mas2}, are in agreement with the plot of Fig.\ref{figure5}, which exhibits the behavior of the ratio of the topological mass to the field's mass as a function of $am$ for a fixed value of $\lambda$ and different values of $\chi$. Note that, due to the dependence of the topological mass \eqref{topol_mass_DN_z} on $\chi$ through the argument of the modified Bessel function, $K_{\mu}(x)$, the curves in Fig.\ref{figure5} are shifted down, as $\chi$ increases, more strongly than in the timelike case.
It is important to point out that all the results obtained up to here for the timelike and spacelike types of Lorentz symmetry violation, adopting Dirichlet boundary conditions, are the same as those obtained when we consider Neumann boundary conditions, as it should be. In the next subsection, we will consider a mix of these two boundary conditions, which we will refer to as mixed boundary conditions.
\subsection{Mixed boundary condition}
After the analysis of the effective potential, Casimir-like effect and topological mass assuming Dirichlet and Neumann boundary conditions obeyed by a massive scalar field on two parallel plates, we now want to consider the case in which the field satisfies mixed boundary conditions. In other words, we assume that the field obeys Dirichlet and Neumann boundary conditions on each of the plates separately. The conditions are then written as
\begin{equation}
\label{mixed_cond}
\begin{aligned}
\varphi(x)\big{|}_{z=0}=\frac{\partial \varphi(x)}{\partial z}\bigg{|}_{z=a}=0 \ \ \ \ \ \text{and} \ \ \ \ \ \frac{\partial \varphi(x)}{\partial z}\bigg{|}_{z=0}=\varphi(x)\big{|}_{z=a}=0 \ .
\end{aligned}
\end{equation}
The complete set of normalized solutions for the scalar field, $\varphi$, under these conditions has also been reported in Ref. \cite{cruz:2017}. These solutions provide the following eigenvalues of the operator \eqref{operator}:
\begin{eqnarray}
\Lambda_{\beta} = k_t^2 + k^2 + \left[\left(n+\frac{1}{2}\right)\frac{\pi}{a}\right]^2 -\chi u^{\mu}u^{\nu}k_{\mu}k_{\nu} + m^2 + \frac{\lambda\Phi^2}{2},
\label{evmb}
\end{eqnarray}
where $k^2=k_x^2+k_y^2$ and $n=0,1,2,3,...$\ . Hence, we will use the set of eigenvalues \eqref{evmb}, representing the mixed boundary condition case, to calculate the renormalized effective potential, the vacuum energy and the topological mass, taking into consideration the two types of Lorentz symmetry violation, namely, the timelike and spacelike types.
\subsubsection{Timelike vector}
In the case in which the $4$-vector $u^{\mu}$ is of the timelike type, i.e., $u^{t}=(1,0,0,0)$, the eigenvalues \eqref{evmb} of the elliptic operator \eqref{operator} become
\begin{equation}
\Lambda_{\beta} = \left(1+\chi \right)k_t^2 + k^2 + \left[\left(n+\frac{1}{2}\right)\frac{\pi}{a}\right]^2 + m^2 + \frac{\lambda \Phi^2}{2} \ .
\end{equation}
Consequently, the zeta function \eqref{zet_funct_DN_t_0} is now written as
\begin{equation}
\zeta(s) = \frac{1}{\sqrt{1+\chi}} \frac{V_3}{(2\pi)^3} \sum_{n=0}^{\infty} \int d^3k \left \{ \kappa_t^2 + k^2 + \left[\left(n+\frac{1}{2}\right)\frac{\pi}{a}\right]^2 + m^2 + \frac{\lambda \Phi^2}{2} \right \}^{-s} \ .
\end{equation}
Furthermore, by using the identity \eqref{identity} we are able to perform the integrals in $\kappa_t$, $k_x$ and $k_y$, providing
\begin{equation}
\label{zet_funct_M_t_0}
\zeta(s) = \frac{1}{\sqrt{1+\chi}} \frac{V_3}{(2\pi)^3}\frac{\pi^{3/2}\Gamma(s-3/2)}{\Gamma(s)}w^{3-2s}\sum_{n=0}^{\infty}\left[\left(n+\frac{1}{2}\right)^2+\nu^2\right]^{3/2-s} \ ,
\end{equation}
where
\begin{equation}
\begin{aligned}
\nu^2 = \frac{\lambda \Phi^2}{2w^2} + \frac{m^2}{w^2} \ \ \ \ \ \ \text{and} \ \ \ \ \ \ \ w = \frac{\pi}{a} \ .
\end{aligned}
\end{equation}
There is still a sum in $n$ to be performed in the zeta function expression \eqref{zet_funct_M_t_0}. In order to apply the Epstein-Hurwitz zeta function \eqref{EH_funct}, we rewrite this sum as
\begin{equation}
\label{sum_alt_M_t}
\sum_{n=0}^{\infty}\left[\left(n+\frac{1}{2}\right)^2+\nu^2\right]^{3/2-s} = \frac{1}{2^{3-2s}} \left[\sum_{n=1}^{\infty}\left(n^2+4\nu^2\right)^{3/2-s} - 2^{3-2s}\sum_{n=1}^{\infty}\left(n^2+\nu^2\right)^{3/2-s}\right] .
\end{equation}
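This splitting simply writes the sum over half-integers (odd integers over two) as the sum over all integers minus the sum over even ones; a quick numerical check (ours, with $s=3$ chosen so that all three sums converge absolutely) is:
\begin{verbatim}
# Sketch: numerical check of the splitting of the half-integer sum.
import numpy as np

s, nu = 3.0, 0.7
n = np.arange(0, 200_000)
lhs = np.sum(((n + 0.5) ** 2 + nu ** 2) ** (1.5 - s))
k = np.arange(1, 200_000)
rhs = (np.sum((k ** 2 + 4 * nu ** 2) ** (1.5 - s))
       - 2 ** (3 - 2 * s) * np.sum((k ** 2 + nu ** 2) ** (1.5 - s))
      ) / 2 ** (3 - 2 * s)
print(lhs, rhs)  # agree up to the tail-truncation error
\end{verbatim}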
By using Eqs. \eqref{sum_alt_M_t} and \eqref{EH_funct} in \eqref{zet_funct_M_t_0}, we obtain the final form of the generalized zeta function:
\begin{equation}
\label{zet_funct_M_t_f}
\begin{aligned}
\zeta(s) &= \frac{V_3}{(2\pi)^3} \Bigg{\{}\frac{\pi^2w^{-2\bar{s}}\nu^{1-2\bar{s}}\Gamma(\bar{s}-1/2)}{2\Gamma(\bar{s}+3/2) \sqrt{1+\chi}} + \frac{2^{\bar{s}+1/2}\pi^{2\bar{s}+1}w^{-2\bar{s}}}{\Gamma(\bar{s}+3/2) \sqrt{1+\chi}} \times \\ & \ \ \ \ \ \ \ \ \ \ \ \ \ \times \Bigg{[}4^{\bar{s}} \sum_{n=1}^{\infty} n^{2\bar{s}-1}f_{\bar{s}-1/2}\big(4\pi n \nu \big) - \sum_{n=1}^{\infty} n^{2\bar{s}-1}f_{\bar{s}-1/2}\big(2\pi n \nu \big) \Bigg{]} \Bigg{\}} \ ,
\end{aligned}
\end{equation}
where $\bar{s}=s-3/2$. Thus, by using \eqref{zet_funct_M_t_f} we can obtain $\zeta(0)$ and $\zeta^{\prime}(0)$ leading to the one-loop correction to the effective potential \eqref{one_loop_potential_DN_t} written as
\begin{equation}
\label{one_loop_poten_M_t}
\begin{aligned}
V^{(1)}(\Phi) &= - \frac{3 b^4}{128 \pi^2 \sqrt{1+\chi}} + \frac{b^4 \text{ln}\left(\frac{b^2}{\mu^2}\right)}{64 \pi^2 \sqrt{1+\chi}}- \frac{b^2}{16 \pi^2 \sqrt{1+\chi} a^2} \times \\ & \ \ \ \times \sum_{n=1}^{\infty} \frac{\left[K_2\left(4ban\right) - 2K_2\left(2ban\right)\right]}{n^2} \ ,
\end{aligned}
\end{equation}
where
\begin{equation}
b = \sqrt{\frac{\lambda \Phi^2}{2} + m^2} \ .
\end{equation}
Hence, from Eqs. \eqref{potencial_u} and \eqref{one_loop_poten_M_t} the effective potential up to one-loop correction is found to be
\begin{equation}
\label{effec_potent_M_t}
\begin{aligned}
V_{\text{eff}}(\Phi) &= \frac{m^2\Phi^2}{2} + \frac{\lambda \Phi^4}{4!} + \frac{\Phi^2}{2}\delta_2 + \frac{\Phi^4}{4!}\delta_1 + \delta_3 - \frac{3 b^4}{128 \pi^2 \sqrt{1+\chi}} + \frac{b^4 \text{ln}\left(\frac{b^2}{\mu^2}\right)}{64 \pi^2 \sqrt{1+\chi}} \\ & \ \ \ - \frac{b^2}{16 \pi^2 \sqrt{1+\chi} a^2} \sum_{n=1}^{\infty} \frac{\left[K_2\left(4ban\right) - 2K_2\left(2ban\right)\right]}{n^2} \ ,
\end{aligned}
\end{equation}
where the renormalization constants $\delta_1$, $\delta_2$ and $\delta_3$ need to be determined in order to find the renormalized form for the effective potential \eqref{effec_potent_M_t}. For this purpose, the conditions \eqref{ren1}, \eqref{ren2} and \eqref{ren3} provide
\begin{equation}
\frac{\delta_1}{4!} = \frac{\lambda^2 \text{ln} \left(\frac{\mu^2}{m^2}\right)}{256 \pi^2 \sqrt{1+\chi}}\ ,
\end{equation}
\begin{equation}
\frac{\delta_2}{2} = \frac{\lambda m^2 \text{ln} \left(\frac{\mu^2}{m^2}\right)}{64 \pi^2
\sqrt{1+\chi}} + \frac{\lambda
m^2}{64 \pi^2 \sqrt{1+\chi}}\ ,
\end{equation}
and
\begin{equation}
\delta_3 = \frac{m^4 \text{ln} \left(\frac{\mu^2}{m^2}\right)}{64 \pi^2 \sqrt{1+\chi}} + \frac{3 m^4}{128 \pi^2 \sqrt{1+\chi}}\ .
\end{equation}
The renormalization constants found above, when used in \eqref{effec_potent_M_t}, allow us to obtain the renormalized effective potential up to one-loop correction
\begin{equation}
\label{ren_effec_potent_M_t}
\begin{aligned}
V_{\text{eff}}^{\text{R}}(\Phi) &= \frac{m^2 \Phi^2}{2} + \frac{\lambda \Phi^4}{24} -\frac{3 \lambda^2 \Phi^4}{512 \pi ^2 \sqrt{1+\chi}} - \frac{\lambda m^2 \Phi^2}{128 \pi^2 \sqrt{1+\chi}} \\ & \ \ + \frac{\lambda^2 \Phi^4 \text{ln} \left(\frac{b^2}{m^2}\right)}{256 \pi^2 \sqrt{1+\chi}} + \frac{\lambda m^2 \Phi^2 \text{ln} \left(\frac{b^2}{m^2}\right)}{64 \pi^2 \sqrt{1+\chi}} + \frac{m^4 \text{ln} \left(\frac{b^2}{m^2}\right)}{64 \pi^2 \sqrt{1+\chi}} \\ & \ \ - \frac{b^2}{16 \pi^2 \sqrt{1+\chi } a^2} \sum_{n=1}^{\infty} \frac{\left[K_2\left(4abn\right)-2K_2\left(2abn\right)\right]}{n^2} \ .
\end{aligned}
\end{equation}
The renormalized effective potential above, at $\Phi=0$, provides the vacuum energy per unit area of the plates as
\begin{equation}
\label{cas_ener_dens_M_t_ex}
\begin{aligned}
\frac{E_C}{L^2} = a V_{\text{eff}}^{\text{R}}(0) = - \frac{m^2}{16 \pi^2 \sqrt{1+\chi } a} \sum_{n=1}^{\infty} \frac{\left[K_2\left(4amn\right)-2K_2\left(2amn\right)\right]}{n^2} \ .
\end{aligned}
\end{equation}
This is a convergent and exact expression for the vacuum energy. From it we can obtain asymptotic expressions for small and large arguments of the modified Bessel function $K_{\mu}(x)$.
Let us now show the asymptotic expressions in the regimes $am\ll 1$ and $am\gg 1$. In the latter, the vacuum energy \eqref{cas_ener_dens_M_t_ex} is exponentially suppressed and dominated by the $n=1$ term of the sum, i.e.,
\begin{equation}
\frac{E_C}{L^2} \approx \frac{1}{32 \sqrt{1+\chi}}\left(\frac{m}{\pi a}\right)^{3/2} e^{-2am} \ .
\end{equation}
On the other hand, in the regime $ma\ll 1$, the vacuum energy is given by
\begin{equation}
\begin{aligned}
\frac{E_C}{L^2} \approx \frac{7 \pi^2}{11520 \sqrt{1+\chi}a^3} -\frac{a m^4}{48 \pi^2 \sqrt{1+\chi}} + \frac{a^2 m^5}{60 \pi^2 \sqrt{1+\chi}} \ .
\label{Eap}
\end{aligned}
\end{equation}
This approximate expression is dominated by the first term on the r.h.s, associated with the massless scalar field.
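Numerically (illustrative values only, with $am\ll 1$), the exact sum \eqref{cas_ener_dens_M_t_ex} indeed approaches the positive massless value given by the first term of \eqref{Eap}:
\begin{verbatim}
# Sketch: mixed-condition energy versus its massless small-am limit.
import numpy as np
from scipy.special import kn

a, m, chi = 1.0, 0.05, 0.1  # am = 0.05
n = np.arange(1, 5001)
exact = (-m**2 / (16 * np.pi**2 * np.sqrt(1 + chi) * a)
         * np.sum((kn(2, 4 * a * m * n) - 2 * kn(2, 2 * a * m * n)) / n**2))
massless = 7 * np.pi**2 / (11520 * np.sqrt(1 + chi) * a**3)
print(exact, massless)  # both positive and close for am << 1
\end{verbatim}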
Now we turn to the calculation of the two-loop correction to the effective potential. As in the previous sections, we can also make use of the zeta function which in the present case is given by \eqref{zet_funct_M_t_f}. Thus, the function $S_1(\Phi)$ is written in the form
\begin{equation}
S_1(\Phi) = \left \{ \sum_{n=0}^{\infty} \frac{1}{a} \int \frac{d^3k}{(2\pi)^3}\left[\left(1+\chi \right)k_t^2 + k^2 + \left(\left(n+\frac{1}{2}\right)\frac{\pi}{a}\right)^2 + m^2 + \frac{\lambda \Phi^2}{2}\right]^{-s}\right \}^2 \ ,
\end{equation}
and can be expressed in terms of the zeta function \eqref{zet_funct_M_t_f} as
\begin{equation}
S_1(\Phi) =\left[\frac{\zeta_R(1)}{V_3a}\right]^2,
\label{s11}
\end{equation}
where $\zeta_R(1)$ is the zeta function \eqref{zet_funct_M_t_f} taken at $s = 1$ after subtracting the divergent part of it given by the first term on the r.h.s. As explained before, this divergent part, when divided by $V_3a$, does not depend on $a$ and as customary must be subtracted. Thus, from \eqref{s11} and \eqref{vac_two_loop_contr_potent}, we obtain the two-loop correction to the vacuum energy as
\begin{equation}
\label{2_loop_cas_energ_M_t}
\frac{E^{(\lambda)}_C}{L^2} = a V^{(2)}(0) = \frac{\lambda m^2}{128 \pi^4 (1+\chi) a} \left \{ \sum_{n=1}^{\infty}\frac{\left[K_1(4amn) - K_1(2amn)\right]}{n} \right \}^2 \ ,
\end{equation}
which is also a convergent and exact expression. It is exponentially suppressed for $am\gg 1$ and provides the massless contribution for $am\ll 1$.
The exponentially suppressed mathematical expression for \eqref{2_loop_cas_energ_M_t} in the regime $ma\gg 1$ is given by
\begin{equation}
\frac{E^{(\lambda)}_C}{L^2} \approx \frac{\lambda m e^{-4 a m}}{512 \pi^3 \left(1+\chi \right) a^2} \ ,
\end{equation}
while in the regime $ma\ll 1$, the vacuum energy is written as
\begin{equation}
\frac{E^{(\lambda)}_C}{L^2} \approx \frac{\lambda}{73728 (1+\chi) a^3} - \frac{\lambda m^2}{3072 \pi^2 (1+\chi) a} + \frac{\lambda m^3}{4608 \pi^2 (1+\chi)} \ ,
\label{asysmall}
\end{equation}
where we can clearly see that the first term on the r.h.s is the dominant one and is associated with the massless scalar field.
In the left panel of Fig.\ref{figure6} we exhibit the Casimir energy, given by \eqref{cas_ener_dens_M_t_ex}, as a function of $am$, whereas in the right panel we exhibit the behavior of the two-loop correction to the Casimir energy per unit area, given by \eqref{2_loop_cas_energ_M_t}, as a function of $am$, considering different values for $\chi$ and fixing $\lambda=10^{-5}$. From these plots we can infer that the vacuum energy and its two-loop correction decrease as $\chi$ increases. Note that this is different from the timelike case with Dirichlet/Neumann boundary conditions only, in which the vacuum energy increases whereas its radiative correction decreases, as $\chi$ increases.
\begin{figure}[h!]
\centering
\subfloat{{\includegraphics[width=17cm]{f6} }}
\qquad
\caption{The behavior of the Casimir energy per unit area $E(am)=\frac{a^3}{L^2}E_C$, given by \eqref{cas_ener_dens_M_t_ex}, as a function of $am$ is exhibited in the left plot. The behaviour of the two-loop contribution $E_{\lambda}(am)=\frac{a^3}{L^2}E^{(\lambda)}_C$, given by \eqref{2_loop_cas_energ_M_t}, is exhibited in the right plot. For the latter we assume $\lambda=10^{-3}$ and consider different values for the parameter $\chi$.}
\label{figure6}
\end{figure}
The mixed boundary conditions we are considering here will also generate, at one-loop level, a topological mass, which can be obtained by using \eqref{ren_effec_potent_M_t} and \eqref{ren2}. It is given by
\begin{equation}
\label{topol_mass_M_t}
m_{\text{T}}^2 = m^2 \left \{ 1 + \frac{\lambda}{8 \pi^2 \sqrt{1+\chi} a m} \sum_{n=1}^{\infty} \frac{\left[K_1(4amn) - K_1(2amn) \right]}{n} \right \} \ .
\end{equation}
We have plotted in Fig.\ref{figure7} the behaviour of $\frac{m_T}{m}$, obtained from \eqref{topol_mass_M_t}, in terms of $am$. The plot shows that the topological mass increases as the Lorentz symmetry violation parameter, $\chi$, increases. This is different from the timelike case with Dirichlet/Neumann boundary conditions only, in which the topological mass decreases as $\chi$ increases. Fig.\ref{figure7} also shows that in the regime $am\gg 1$ the topological mass is dominated by the first term on the r.h.s of \eqref{topol_mass_M_t}, while in the opposite limit $am\ll 1$ the topological mass is the one associated with a massless scalar field.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{f7}
\caption{The ratio of the topological mass to the field mass as a function of $am$. In the plot we consider $\lambda=10^{-3}$ and different values of $\chi$.}
\label{figure7}
\end{figure}
Mathematically, for $ma\gg 1$, we have
\begin{equation}
m_{\text{T}}^2 \approx m^2 - \frac{\lambda}{16\sqrt{1+\chi}}\frac{\sqrt{ m} e^{-2 a m}}{\left(\pi a\right)^{3/2} } \ ,
\end{equation}
which shows that the topological mass is dominated by the first term, $m^2$.
In the opposite regime, $ma\ll 1$, we obtain
\begin{equation}
m_{\text{T}}^2 \approx m^2 - \frac{\lambda}{192 \sqrt{1+\chi} a^2} + \frac{\lambda m^2}{16 \pi^2 \sqrt{1+\chi}} - \frac{\lambda a m^3}{24 \pi^2 \sqrt{1+\chi}} + \frac{\lambda a^3 m^5}{120 \pi^2 \sqrt{1+\chi}} \ .
\end{equation}
In this limit, the dominant term is the second one on the r.h.s of the above approximation, associated with a massless scalar field.
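The interpolation between these two regimes can also be visualized numerically; a minimal Python sketch (SciPy assumed; $a=1$, $m=am$, with illustrative $\lambda$ and $\chi$) evaluating the exact ratio $m_T/m$ from \eqref{topol_mass_M_t} is:
\begin{verbatim}
import numpy as np
from scipy.special import kv

def mT_over_m(am, lam=1e-3, chi=0.1, nmax=200):
    # exact ratio m_T/m from the Bessel sum, with a = 1 and m = am
    n = np.arange(1, nmax + 1)
    s = np.sum((kv(1, 4 * am * n) - kv(1, 2 * am * n)) / n)
    return np.sqrt(1.0 + lam * s / (8 * np.pi**2 * np.sqrt(1 + chi) * am))

for am in (0.05, 0.5, 5.0):
    print(am, mT_over_m(am))  # the ratio approaches 1 for am >> 1
\end{verbatim}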
\subsubsection{Spacelike vector}
We now consider the case in which the constant 4-vector, $u^{\mu}$, is of the spacelike type. As before, there are three spacelike components specifying the broken symmetry direction: $u^x=(0,1,0,0)$ and $u^y=(0,0,1,0)$, both parallel to the plates, and $u^z=(0,0,0,1)$, orthogonal to the plates. In the first two cases, specifying the $x$ and $y$ directions parallel to the plates, the results for the effective potential, Casimir energy and topological mass are the same. Let us then consider the $x$-direction:
\begin{equation}
u^x = (0,1,0,0) \ .
\end{equation}
In this case, from Eq. \eqref{evmb}, the set of eigenvalues is given by
\begin{equation}
\label{eigev_M_x}
\Lambda_{\beta} = k^2 + \left(1-\chi \right)k_x^2 + \left[\left(n+\frac{1}{2}\right)\frac{\pi}{a}\right]^2 + m^2 + \frac{\lambda \Phi^2}{2} \ ,
\end{equation}
where $k^2=k_t^2+k_y^2$. So, using Eq. \eqref{zet_funct_DN_t_0}, the zeta function is written as
\begin{equation}
\zeta(s) = \frac{V_3}{(2\pi)^3} \int d^3k \sum_{n=0}^{\infty}\left \{ k^2 + \left(1-\chi \right)k_x^2 + \left[\left(n+\frac{1}{2}\right)\frac{\pi}{a}\right]^2 + m^2 + \frac{\lambda \Phi^2}{2}\right \}^{-s} \ .
\end{equation}
Again, by using the identity \eqref{identity}, we are able to solve the integrals in $k_t$, $k_x$ and $k_y$ to obtain the zeta function in the form
\begin{equation}
\label{zet_funct_M_x_0}
\zeta(s) = \frac{1}{\sqrt{1-\chi}} \frac{V_3}{(2\pi)^3}\frac{\pi^{3/2}\Gamma(s-3/2)}{\Gamma(s)}w^{3-2s}\sum_{n=0}^{\infty}\bigg[\bigg(n+\frac{1}{2}\bigg)^2+\nu^2\bigg]^{3/2-s} \ ,
\end{equation}
where
\begin{equation}
\nu^2 = \frac{\lambda \Phi^2}{2w^2}+\frac{m^2}{w^2} \ \ \ \ \ \ \ \text{and} \ \ \ \ \ \ \ w=\frac{\pi}{a} \ .
\end{equation}
We can notice that the expression \eqref{zet_funct_M_x_0} can be obtained from \eqref{zet_funct_M_t_0} by making $\chi \rightarrow -\chi$. Consequently, all the results for the renormalized effective potential, Casimir energy and topological mass can be obtained from the timelike case considered previously. This is also valid if we consider the vector $u^{y}$.
The most important type of Lorentz symmetry violation in this section is the one occurring in the orthogonal direction to the plates, that is,
\begin{equation}
u^z = (0,0,0,1) \ .
\end{equation}
In this case, the set of eigenvalues \eqref{evmb} becomes
\begin{equation}
\label{eigev_M_z}
\Lambda_{\beta} = k^2 + \left(1-\chi \right) \left[\left(n+\frac{1}{2}\right)\frac{\pi}{a}\right]^2 + m^2 + \frac{\lambda \Phi^2}{2} \ ,
\end{equation}
where $k^2=k_t^2+k_x^2+k_y^2$. Thus, substituting Eq. \eqref{eigev_M_z} in Eq. \eqref{zet_funct_DN_t_0}, we have the zeta function expression written as
\begin{equation}
\label{zet_0_M_z}
\zeta(s) = \frac{V_3}{(2\pi)^3} \sum_{n=0}^{\infty} \int d^3k \left \{ k^2 + \left(1-\chi \right)\left[\left(n+\frac{1}{2}\right)\frac{\pi}{a}\right]^2 + m^2 + \frac{\lambda \Phi^2}{2}\right \}^{-s} \ .
\end{equation}
Once again, by using \eqref{identity}, we obtain
\begin{equation} \label{zeta_func_mixed_z}
\begin{aligned}
\zeta(s)=\frac{V_3}{(2\pi)^3}\frac{\pi^{3/2}\Gamma(s-3/2)}{\Gamma(s)}w^{3-2s}\sum_{n=0}^{\infty}\bigg[\bigg(n+\frac{1}{2}\bigg)^2+\nu^2\bigg]^{3/2-s} \ ,
\end{aligned}
\end{equation}
where
\begin{equation}
\begin{aligned}
\nu^2 \equiv \frac{\lambda \Phi^2}{2w^2} + \frac{m^2}{w^2} \ \ \ \ \ \ \text{and} \ \ \ \ \ \ \ w \equiv \sqrt{1-\chi}\frac{\pi}{a} \ .
\end{aligned}
\end{equation}
We now need to find an expression for the sum over $n$ present in the zeta function above. To do so, we make use of Eq. \eqref{sum_alt_M_t}, which provides
\begin{equation}
\label{zeta_func_mixed_z_f}
\zeta(s) = \frac{V_3}{(2\pi)^3}\frac{\pi^{3/2} \Gamma(\bar{s})}{\Gamma(\bar{s}+3/2)} \left(\frac{w}{2}\right)^{-2\bar{s}} \left[ \sum_{n=1}^{\infty}\left(n^2+4\nu^2\right)^{-\bar{s}} - 2^{-2\bar{s}}\sum_{n=1}^{\infty} \left(n^2+\nu^2\right)^{-\bar{s}}\right] \ ,
\end{equation}
where $\bar{s}=s-3/2$. Hence, the Epstein-Hurwitz zeta function \eqref{EH_funct} allows us to rewrite \eqref{zeta_func_mixed_z_f} as
\begin{equation} \label{zeta_func_fin_mix_z}
\begin{aligned}
\zeta(s) &= \frac{V_3}{(2\pi)^3} \Bigg{\{}\frac{\pi^2w^{-2\bar{s}}\nu^{1-2\bar{s}}\Gamma(\bar{s}-1/2)}{2\Gamma(\bar{s}+3/2)} + \frac{2^{\bar{s}+1/2}\pi^{2\bar{s}+1}w^{-2\bar{s}}}{\Gamma(\bar{s}+3/2)} \\ & \ \ \ \ \ \ \ \ \ \ \ \ \times \left[4^{\bar{s}} \sum_{n=1}^{\infty} n^{2\bar{s}-1}f_{\bar{s}-1/2}\left(4\pi \nu n \right) - \sum_{n=1}^{\infty} n^{2\bar{s}-1}f_{\bar{s}-1/2}\left(2\pi \nu n \right) \right] \Bigg{\}} \ .
\end{aligned}
\end{equation}
Consequently, in the limit $s\rightarrow 0$, we have
\begin{equation} \label{zeta_mix_0_z}
\zeta(0) = \frac{V_3 a}{2(2\pi)^4} \frac{\pi^2 b^4}{\sqrt{1-\chi}} \ ,
\end{equation}
and
\begin{equation} \label{zeta_prim_mix_0_z}
\begin{aligned}
\zeta^{\prime}(0) = \frac{V_3 a}{(2\pi)^3} \Bigg{\{} & \frac{3 \pi b^4}{8 \sqrt{1-\chi}} - \frac{\pi b^4 \text{ln}(b)}{2 \sqrt{1-\chi}} \\ & + \frac{\pi b^2 \sqrt{1-\chi}}{a^2} \sum_{n=1}^{\infty}\frac{\left[ K_2 \left(\frac{4abn}{\sqrt{1-\chi}}\right) - 2K_2\left(\frac{2abn}{\sqrt{1-\chi}}\right)\right]}{n^2} \Bigg{\}} \ .
\end{aligned}
\end{equation}
The one-loop correction \eqref{one_loop_potential_DN_t} to the effective potential can now be obtained by using the results above for $\zeta(0)$ and $\zeta'(0)$. This gives
\begin{equation}
\label{1_loop_potent_M_z}
\begin{aligned}
V^{(1)}(\Phi) = & \frac{b^4 \text{ln}\left(\frac{b^2}{\mu^2}\right)}{64 \pi^2 \sqrt{1-\chi}} - \frac{3 b^4}{128 \pi^2 \sqrt{1-\chi}} \\ & - \frac{b^2 \sqrt{1-\chi}}{16 \pi^2 a^2} \sum_{n=1}^{\infty} \frac{\left[K_2\left(\frac{4abn}{\sqrt{1-\chi}}\right) - 2K_2\left(\frac{2abn}{\sqrt{1-\chi}}\right)\right]}{n^2} \ ,
\end{aligned}
\end{equation}
where
\begin{equation}
b = \sqrt{\frac{\lambda \Phi^2}{2} + m^2} \ .
\end{equation}
Furthermore, the effective potential up to the one-loop correction, from Eqs. \eqref{potencial_u} and \eqref{1_loop_potent_M_z}, is written as
\begin{equation}
\label{effec_potent_M_z}
\begin{aligned}
V_{\text{eff}}(\Phi) = & \frac{m^2\Phi^2}{2} + \frac{\lambda \Phi^4}{4!} + \frac{ \Phi^2}{2}\delta_2 + \frac{\Phi^4}{4!}\delta_1 + \delta_3+ \frac{b^4 \text{ln}\left(\frac{b^2}{\mu^2}\right)}{64 \pi^2\sqrt{1-\chi }} - \frac{3 b^4}{128 \pi^2\sqrt{1-\chi }} \\ & - \frac{b^2 \sqrt{1-\chi }}{16 \pi^2 a^2} \sum_{n=1}^{\infty} \frac{\left[K_2\left(\frac{4abn}{\sqrt{1-\chi}}\right) - 2K_2\left(\frac{2abn}{\sqrt{1-\chi}}\right)\right]}{n^2} \ ,
\end{aligned}
\end{equation}
where the renormalization constants $\delta_1$, $\delta_2$ and $\delta_3$ are found by using \eqref{effec_potent_M_z} and the conditions given by \eqref{ren1}, \eqref{ren2} and \eqref{ren3}. This provides
\begin{equation}
\label{lambda_const_M_t}
\frac{\delta_1}{4!} = \frac{\lambda^2 \text{ln} \left(\frac{\mu^2}{m^2}\right)}{256 \pi^2 \sqrt{1-\chi}} \ ,
\end{equation}
\begin{equation}
\label{m2_const_M_t}
\frac{\delta_2}{2} =\frac{\lambda m^2 \text{ln} \left(\frac{\mu^2}{m^2}\right)}{64 \pi^2 \sqrt{1-\chi}} +\frac{\lambda m^2}{64 \pi^2 \sqrt{1-\chi}} \ ,
\end{equation}
and
\begin{equation}
\label{m4_const_M_t}
\delta_3 = \frac{m^4 \text{ln} \left(\frac{\mu^2}{m^2}\right)}{64 \pi^2
\sqrt{1-\chi}} + \frac{3
m^4}{128 \pi^2 \sqrt{1-\chi}}\ .
\end{equation}
Finally, by using the renormalization constants found above in Eq. \eqref{effec_potent_M_z} we obtain the renormalized effective potential up to one-loop correction, i.e.,
\begin{equation}
\label{ren_effec_potent_M_z}
\begin{aligned}
V^{\text{R}}_{\text{eff}}(\Phi) = & \frac{m^2 \Phi^2}{2} + \frac{\lambda \Phi^4}{24} -\frac{\lambda m^2 \Phi^2}{128 \pi^2 \sqrt{1-\chi}} - \frac{3 \lambda^2 \Phi^4}{512 \pi^2 \sqrt{1-\chi}}\\ &+\frac{\lambda m^2 \Phi^2 \text{ln} \left(\frac{b^2}{m^2}\right)}{64 \pi^2 \sqrt{1-\chi}} + \frac{m^4 \text{ln} \left(\frac{b^2}{m^2}\right)}{64 \pi^2 \sqrt{1-\chi}} + \frac{\lambda^2 \Phi^4 \text{ln} \left(\frac{b^2}{m^2}\right)}{256 \pi^2 \sqrt{1-\chi}} \\ & - \frac{b^2 \sqrt{1-\chi}}{16 \pi^2 a^2} \sum_{n=1}^{\infty} \frac{\left[K_2\left(\frac{4abn}{\sqrt{1-\chi}}\right) - 2K_2\left(\frac{2abn}{\sqrt{1-\chi}}\right)\right]}{n^2} \ .
\end{aligned}
\end{equation}
At this point we can, by using the renormalized effective potential \eqref{ren_effec_potent_M_z}, obtain the vacuum energy per unit area of the plate. This is done by taking \eqref{ren_effec_potent_M_z} at $\Phi=0$, which gives
\begin{equation}
\label{casim_energ_dens_M_z}
\frac{E_C}{L^2} = aV_{\text{eff}}^{\text{R}}(0) = - \frac{m^2 \sqrt{1-\chi }}{16 \pi^2 a} \sum_{n=1}^{\infty} \frac{\left[K_2\left(\frac{4amn}{\sqrt{1-\chi}}\right) - 2K_2\left(\frac{2amn}{\sqrt{1-\chi}}\right)\right]}{n^2} \ .
\end{equation}
This exact and closed expression for the vacuum energy is exponentially suppressed for $ma\gg 1$, while for $ma\ll 1$ it reduces to the expression for the vacuum energy in the massless scalar field case.
The exponentially suppressed expression for the vacuum energy in the regime $ma\gg 1$ is dominated by the $n=1$ term of the sum, providing
\begin{equation}
\frac{E_C}{L^2} \approx \frac{\left(1-\chi \right)^{3/4}}{16} \left( \frac{m}{\pi a}\right)^{3/2} e^{-\frac{2 a m}{\sqrt{1-\chi}}} \ .
\end{equation}
The opposite regime, $ma\ll 1$, provides the approximate expression for the vacuum energy
\begin{equation}
\frac{E_C}{L^2} \approx \frac{7 \pi^2 (1-\chi)^{3/2}}{11520 a^3} - \frac{a m^4}{48 \pi^2 \sqrt{1-\chi}} + \frac{a^2 m^5}{60 \pi^2 (1-\chi)} \ ,
\label{masmall}
\end{equation}
%
which is dominated by the first term on the r.h.s. This is the term associated with the vacuum energy of the massless scalar field.
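As a cross-check, the expansion \eqref{masmall} can be compared with the exact sum \eqref{casim_energ_dens_M_z}; a minimal Python sketch (SciPy assumed; $a=1$, $m=am$, illustrative $\chi$) is:
\begin{verbatim}
import numpy as np
from scipy.special import kv

def E_exact(am, chi=0.1, nmax=400):
    # exact Bessel sum of the vacuum energy, in units a = 1, m = am
    n = np.arange(1, nmax + 1)
    r = np.sqrt(1 - chi)
    s = np.sum((kv(2, 4 * am * n / r) - 2 * kv(2, 2 * am * n / r)) / n**2)
    return -am**2 * r / (16 * np.pi**2) * s

def E_small_ma(am, chi=0.1):
    # truncated small-ma expansion; the first (massless) term dominates
    return (7 * np.pi**2 * (1 - chi)**1.5 / 11520
            - am**4 / (48 * np.pi**2 * np.sqrt(1 - chi))
            + am**5 / (60 * np.pi**2 * (1 - chi)))

print(E_exact(0.05), E_small_ma(0.05))  # compare for am << 1
\end{verbatim}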
%
\begin{figure}[h!]
\centering
\subfloat{{\includegraphics[width=17cm]{f8} }}
\qquad
\caption{The left plot presents the behavior of the Casimir energy per unit area $E(am)=\frac{a^3}{L^2}E_C$, given by \eqref{casim_energ_dens_M_z}, as a function of $am$. The two-loop contribution $E_{\lambda}(am)=\frac{a^3}{L^2}E^{(\lambda)}_C$, given by \eqref{2_loop_cas_energ_M_z}, is exhibited in the right panel as a function of $am$, fixing $\lambda=10^{-3}$. In both plots we have considered different values of $\chi$.}
\label{figure8}
\end{figure}
We can now turn to the calculation of the two-loop correction to the effective potential at $\Phi=0$. The $S_1(\Phi)$ function, as before, is given by
%
\begin{equation} \label{S1_funct_M_z}
S_1(\Phi) = \left \{ \sum_{n=0}^{\infty}\frac{1}{a} \int \frac{d^3k}{(2\pi)^3} \left[k^2 + \left(1-\chi \right)\left[\left(n+\frac{1}{2}\right)\frac{\pi}{a}\right]^2 + m^2 + \frac{\lambda \Phi^2}{2}\right]^{-s}\right \}^2 \ ,
\end{equation}
which can be expressed in terms of the zeta function \eqref{zeta_func_fin_mix_z} as
\begin{equation}
S_1(\Phi) =\left[\frac{\zeta_R(1)}{V_3a}\right]^2,
\label{s1mb}
\end{equation}
where $\zeta_R(1)$ is the zeta function \eqref{zeta_func_fin_mix_z} taken at $s = 1$ after subtracting its divergent part, given by the first term on the r.h.s. Again, this divergent part, when divided by $V_3a$, does not depend on $a$ and, as is customary, must be subtracted. Thus, from \eqref{s1mb} and \eqref{vac_two_loop_contr_potent}, we obtain the two-loop correction to the vacuum energy as
\begin{equation}
\label{2_loop_cas_energ_M_z}
\frac{E^{(\lambda)}_C}{L^2} = a V^{(2)}(0) = \frac{\lambda m^2}{128 \pi^4 a} \left \{ \sum_{n=1}^{\infty} \frac{\left[ K_1\left(\frac{4amn}{\sqrt{1-\chi}} \right) - K_1\left(\frac{2amn}{\sqrt{1-\chi}} \right)\right]}{n} \right \}^2 \ .
\end{equation}
This is an exact and convergent expression for the correction to the vacuum energy per unit area. The asymptotic behaviors of the above expression are given explicitly below.
The exponentially suppressed expression for the vacuum energy correction in the regime $ma\gg 1$ is dominated by the $n=1$ term:
\begin{equation}
\frac{E^{(\lambda)}_C}{L^2} \approx \frac{\lambda m \sqrt{1-\chi} e^{-\frac{4 a
m}{\sqrt{1-\chi}}}}{512 \pi^3 a^2} \ .
\end{equation}
%
On the other hand, the expression for the vacuum energy correction in the opposite regime $ma\ll 1$ is given by
%
\begin{equation}
\frac{E^{(\lambda)}_C}{L^2} \approx \frac{\lambda (1-\chi)}{73728 a^3} - \frac{\lambda m^2}{3072 \pi^2 a} + \frac{\lambda m^3}{4608 \pi^2 \sqrt{1-\chi}} + \frac{\lambda a m^4}{512 \pi^4 (1-\chi)} \ .
\label{ecmb}
\end{equation}
%
This expression is dominated by the first term on the r.h.s and is associated with the vacuum energy correction in the massless scalar field case.
In the left panel of Fig.\ref{figure8} we exhibit the behavior of the Casimir energy, \eqref{casim_energ_dens_M_z}, as a function of $am$. The right panel exhibits the vacuum energy radiative correction, \eqref{2_loop_cas_energ_M_z}, also as a function of $am$. In both cases, the energy values decrease as $\chi$ increases. The curves are shifted down more than in the timelike case. This is due to the dependence of the vacuum energy \eqref{casim_energ_dens_M_z} and its radiative correction \eqref{2_loop_cas_energ_M_z} on $\chi$ in the argument of the modified Bessel function, $K_{\mu}(x)$. The vacuum energy \eqref{casim_energ_dens_M_z} also depends on $\chi$ as a multiplicative factor.
\begin{figure}[h!]
\centering
\includegraphics[scale=0.5]{f9}
\caption{The ratio of the topological mass to the field mass as a function of $am$. In the plot we consider $\lambda=10^{-3}$ and different values of the Lorentz-violating parameter $\chi$.}
\label{figure9}
\end{figure}
We now analyse the generation of the topological mass, which can be obtained by using the renormalized effective potential \eqref{ren_effec_potent_M_z} in the condition \eqref{ren2}. This provides the exact expression for the topological mass, that is,
%
\begin{equation}
\label{topol_mass_M_z}
m_{\text{T}}^2 = m^2 \left \{ 1 + \frac{\lambda}{8 \pi^2 a m} \sum_{n=1}^{\infty} \frac{\left[K_1\left(\frac{4amn}{\sqrt{1-\chi}} \right) - K_1\left(\frac{2amn}{\sqrt{1-\chi}} \right)\right]}{n} \right \} \ .
\end{equation}
This expression is plotted in Fig.\ref{figure9}. We can see that in the regime $ma\gg 1$ the topological mass is dominated by the first term on the r.h.s, while in the opposite regime $ma\ll 1$ it tends to the expression associated with the massless scalar field. We can also see in Fig.\ref{figure9} that the topological mass increases as $\chi$ increases. The curves are shifted up more than in the previous timelike case as a consequence of the dependence of the topological mass \eqref{topol_mass_M_z} on $\chi$ in the argument of the modified Bessel function.
The topological mass \eqref{topol_mass_M_z} in the regime $ma\gg 1$ is given by
\begin{equation}
m_{\text{T}}^2 \approx m^2 - \frac{\lambda \sqrt[4]{1-\chi}}{16 \pi^{3/2}} \sqrt{\frac{m}{a^3}} e^{-\frac{2am}{\sqrt{1-\chi}}} \ ,
\end{equation}
while in the opposite regime $ma\ll 1$ it is
\begin{equation}
m_{\text{T}}^2 \approx m^2 - \frac{\lambda \sqrt{1-\chi}}{192 a^2} - \frac{\lambda m^2}{16 \pi^2 \sqrt{1-\chi}} - \frac{\lambda a m^3}{24 \pi^2 (1-\chi)} + \frac{\lambda a^3 m^5}{120 \pi^2 (1-\chi)^2} \ .
\label{tpsmall}
\end{equation}
Note that the second term on the r.h.s of \eqref{tpsmall} is associated with a massless scalar field.
\section{Concluding remarks}
\label{concl}
In this work we have investigated the Casimir effect and the generation of topological mass associated with a self-interacting scalar, $\lambda \phi^4$, field theory in the context of an aether-type Lorentz symmetry violation model, implemented by a direct coupling between the derivative of the field and an external constant $4$-vector. Specifically, we have considered the situation in which the field is confined between two parallel plates, assuming that it obeys, on each of the plates, Dirichlet, Neumann and mixed boundary conditions, separately. The area of the plates has been taken to be $L^2$, whereas the distance between them has been taken as $a$ ($a \ll L$).
Furthermore, we have found exact $\Phi$-dependent renormalized effective potentials, up to the one-loop correction, considering both timelike and spacelike cases of the 4-vector $u^{\mu}$, where $\Phi$ is the classical and fixed background field. These renormalized effective potentials, at $\Phi=0$, provided a Casimir-like energy and a topological mass for all cases. We have also obtained an exact two-loop correction to the effective potential when $\Phi=0$, which allowed us to find a radiative correction to the Casimir-like energy obtained from the renormalized effective potential up to the one-loop correction. The Casimir-like energies from Dirichlet and Neumann boundary conditions are equal and differ from the Casimir-like energy arising from the mixed boundary condition by a numerical factor and also by a change of sign.
The Casimir-like effect, its radiative correction, as well as the topological mass depend upon the specific boundary conditions imposed on the fields and on the Lorentz symmetry breaking parameter, $\chi$. It is worth pointing out that in all boundary condition cases considered, our results are more affected by $\chi$ in the spacelike type of broken symmetry, specifically, in the $z$-direction, orthogonal to the plates. Note that the results obtained here considering Dirichlet, Neumann and mixed boundary conditions, at one-loop level, agree with well known results in the case the Lorentz symmetry is preserved, that is, $\chi =0$ \cite{milton:2003, Bordag:2009zz}. This is also true at the two-loop level, i.e., we also recover, in the massless scalar field case, the expressions obtained for the Casimir energy density and topological mass in Ref. \cite{toms:1980:2805}, and the Casimir energy, considering the three boundary conditions, in the massless field limit, given in \cite{barone,barone1}. In these three last papers, the Riemann zeta-function renormalization was used to obtain the Casimir energy. The two-loop correction to the Casimir energy associated with the scalar field under Dirichlet boundary condition, in the absence of Lorentz symmetry violation, was also calculated in \cite{reza}, using the Box Renormalization Scheme (BRS). However, in the latter, the Casimir energy correction in the massless limit disagrees with the results found in Refs. \cite{toms:1980:2805,barone,barone1} by a negative sign. The reason for this lies in the convention adopted in the definition of the two-loop correction in \cite{reza}, which presents a minus sign in the expression analogous to \eqref{vac_two_loop_contr_potent} of our present paper. Consequently, the total Casimir energy is decreased. From the physical point of view, it is expected that the scalar self-interaction would increase the Casimir energy, and not the opposite. Our results are in accordance with this expectation.
The analysis of the Casimir energy density considering an aether-type Lorentz symmetry violation term in the case of a Dirichlet self-interacting scalar field has also been considered in Ref. \cite{Mojavezi:2019ess}. There the authors have obtained the first order radiative correction in $\lambda$ to the Casimir energy density by using the BRS. Moreover, the definition adopted to evaluate this correction is the same as the one given in \cite{reza}, and consequently a negative contribution to the Casimir energy is obtained. Finally, we want to emphasize that in our present paper we have considered, besides the Dirichlet boundary condition, also the Neumann and mixed ones to obtain the first order correction in $\lambda$ to the Casimir energy. Furthermore, we have used a different renormalization method, based on the zeta function. We have also shown that in each one of the boundary conditions considered a topological mass is generated.
Let us now, before ending the conclusions, discuss the implications and the possibility of observational detection of a violation of the Lorentz symmetry in light of our results. As is known, the energy scale where the Lorentz symmetry is expected to be broken is of the order of the Planck scale, around $10^{19}$\;GeV. This makes it difficult, in principle, to envisage an experiment capable of detecting signals of Lorentz symmetry violation. Nevertheless, a Casimir energy density analysis considering models of Lorentz symmetry violation, such as the one considered here, can offer a possible way of detecting such signals at low energy scales. In particular, one can consider, for instance, extensions of the Standard Model of Particle Physics where violations of the Lorentz symmetry are taken into consideration. In these scenarios, looking at the Higgs sector, where beta decay is observed, a bound of $\chi <10^{-6}$ is obtained. Also, a bound of $\chi <10^{-19}$ is obtained from laser-based interferometry \cite{kostelecky:2003}. If these bounds are used in our results, finite values could be obtained, and experiments for the detection of the Casimir energy could confirm the theoretical results. Likewise, if detailed observations and measurements of the Casimir-like effect were possible, one could use its modifications by the Lorentz symmetry violation model considered here to estimate the value of the parameter $\chi$ describing the spacetime anisotropy. This would certainly contribute to the experimental measurement attempts to get an upper bound on $\chi$.
\\
\\
{\bf Acknowledgements.} M.B.C is supported by Conselho
Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'{o}gico - Brasil (CNPq) through the project No. 150479/2019-0. E.R.B.M is partially supported by CNPq under grant No 301.783/2019-3. H.F.S.M is partially supported by CNPq under grants 305379/2017-8 and 430002/2018-1.
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
The spin-$\frac{1}{2}$ XXZ model with general non-diagonal boundaries has been the subject of recent interest. Although this is well known to be integrable \cite{deVega:1992zd} many problems have been encountered in the formulation of Bethe Ansatz equations.
For very special diagonal boundary terms the XXZ model has an $SU_q(2)$ quantum group symmetry \cite{Pasquier:1989kd}. From an algebraic point of view this is the simplest type of boundary term and the model can be written in terms of the Temperley-Lieb algebra \cite{Temperley:1971iq}.
The addition of a single boundary term to the $SU_q(2)$ chain can be described
using the one-boundary Temperley-Lieb (1BTL)
algebra \cite{Martin:1992td,Martin:1993jk,MartinWoodcockI,MartinWoodcockII}. Although the integrable Hamiltonian involves three parameters, only two of these appear in the 1BTL algebra. It is the algebraic parameters which control the structure of the lattice theory.
As shown in \cite{Nichols:2004fb} this general one-boundary
Hamiltonian has exactly the same spectrum as the XXZ Hamiltonian with purely diagonal
boundary terms. The Bethe Ansatz for the XXZ chain with purely diagonal boundary terms is especially simple due to the presence of an obvious conserved charge and Bethe reference states \cite{Alcaraz:1987uk}.
The situation of non-diagonal boundary terms at both ends of the chain
is considerably more complicated. One now has five boundary
parameters. As noticed in \cite{deGier:2003iu} the Hamiltonian
can be written in terms of the generators of the two-boundary Temperley-Lieb (2BTL)
algebra \cite{JanReview}. This algebra depends on three
boundary parameters only with the other two parameters of the problem entering as coefficients in the integrable Hamiltonian.
The formulation of Bethe Ansatz equations for the general two-boundary system has attracted recent attention. Unlike the diagonal case there is no obvious Bethe reference state. For the free fermion point the spectrum and wavefunctions can be found \cite{BilsteinI}. However away from this point, apart from some special cases \cite{Murgan:2005rz,Murgan:2005bp,Yang:2005ce}, the Bethe
ansatz equations have only been obtained in the case in which the parameters satisfy an additional constraint \cite{ChineseGuys,Nepomechie:2002xy,Nepomechie:2003vv,Nepomechie:2003ez,deGier:2003iu}. The surprising fact is that this constraint involves \emph{only} the parameters which
enter the 2BTL algebra and not the coefficients in the Hamiltonian. In \cite{deGier:2005fx} it was found that these were exactly the points at which the 2BTL algebra possesses indecomposable representations.
Here we shall discuss the two-boundary problem using a special basis that we discovered for the one-boundary problem \cite{Nichols:2004fb}. We shall demonstrate the existence of special points at which there are non-trivial subspaces invariant under the action of all the 2BTL generators. They are therefore invariant under the action of the integrable Hamiltonian. We find that these are exactly the points at which Bethe Ansatz equations were written \cite{Nepomechie:2002xy,ChineseGuys,Nepomechie:2003vv,Nepomechie:2003ez,deGier:2003iu,deGier:2005fx}. Furthermore we shall show that the dimension of these invariant subspaces reproduces the splitting of eigenvalues previously obtained numerically between the two Bethe Ansatz equations. A full discussion of the Bethe Ansatz in this basis will be given elsewhere.
\section{A good basis for the one-boundary problem}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
We shall first review the bulk and one-boundary problems as these will be crucial in order to discuss the two-boundary one. For further details we refer the reader to \cite{Nichols:2004fb}.
The bulk XXZ Hamiltonian with $SU_q(2)$ quantum group symmetry is given by:
\begin{eqnarray}
H^{TL}&=&-\sum_{i=1}^{L-1} e_i
\end{eqnarray}
where the $e_i$ are the Temperley-Lieb generators given by:
\begin{eqnarray}
e_i= -\frac{1}{2} \left\{ \sigma^x_i \sigma^x_{i+1} + \sigma^y_i \sigma^y_{i+1} + \cos \gamma \sigma^z_i \sigma^z_{i+1} - \cos \gamma + i \sin \gamma \left(\sigma^z_i - \sigma^z_{i+1} \right) \right\}
\end{eqnarray}
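As a consistency check, these generators satisfy the Temperley-Lieb relations $e_i^2 = 2\cos\gamma\, e_i$ and $e_i e_{i\pm 1} e_i = e_i$, which can be verified directly in the spin representation. A minimal NumPy sketch (the value of $\gamma$ is illustrative):
\begin{verbatim}
import numpy as np

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
I2 = np.eye(2)

def e_block(g):
    # two-site matrix of e_i built from the Pauli-matrix expression above
    return -0.5 * (np.kron(sx, sx) + np.kron(sy, sy)
                   + np.cos(g) * np.kron(sz, sz)
                   - np.cos(g) * np.eye(4)
                   + 1j * np.sin(g) * (np.kron(sz, I2) - np.kron(I2, sz)))

g = 0.4
e1 = np.kron(e_block(g), I2)   # e_1 on a three-site chain
e2 = np.kron(I2, e_block(g))   # e_2 on a three-site chain
assert np.allclose(e1 @ e1, 2 * np.cos(g) * e1)
assert np.allclose(e1 @ e2 @ e1, e1)
assert np.allclose(e2 @ e1 @ e2, e2)
\end{verbatim}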
The addition of an arbitrary boundary term added to the left end is described by:
\begin{eqnarray}
H^{1BTL}&=&-a f_0 - \sum_{i=1}^{L-1} e_i
\end{eqnarray}
where the parameter $a$ is arbitrary and the one-boundary Temperley-Lieb (1BTL) generator $f_0$ is given by \cite{Martin:1992td}:
\begin{eqnarray}
f_0&=&-\frac{1}{2} \frac{1}{\sin(\omega+\gamma)}\left( -i \cos \omega \sigma_1^z - \sigma_1^x - \sin \omega \right)
\end{eqnarray}
In \cite{Nichols:2004fb} we studied a conserved charge in this 1BTL spin chain system \cite{Delius:2001qh,Doikou:2004km}. We constructed a basis of eigenvectors of this charge which we shall refer to as the ${\bf Q}$ basis. It is defined as:
\begin{eqnarray} \label{eqn:eigenvectors}
\left| Q_1 ; Q_2 \cdots ; Q_L \right> &=&\left[ i e^{ - 2i \omega Q_{1}} \uparrow + \downarrow \right] \otimes \left[ i e^{ -4i \gamma Q_1 Q_{2} - 2i \omega Q_{2}} \uparrow + \downarrow \right] \nonumber \\
&& \otimes \left[ i e^{ -4i \gamma (Q_1+Q_2) Q_{3} - 2i \omega Q_{3}} \uparrow + \downarrow \right] \\
&& \otimes \left[ i e^{ -4i \gamma (Q_1+Q_2+Q_3) Q_{4} - 2i \omega Q_{4}} \uparrow + \downarrow \right] \cdots \nonumber\ \\
&& \otimes \left[ i e^{ -4i \gamma (Q_1+Q_2+\cdots+Q_{L-1}) Q_{L} - 2i \omega Q_{L}} \uparrow + \downarrow \right] \nonumber
\end{eqnarray}
where ${\bf Q}=(Q_1,Q_2,\cdots,Q_L)$ and $Q_i= \pm \frac{1}{2}$.
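For small systems the vectors (\ref{eqn:eigenvectors}) are straightforward to construct explicitly. A minimal NumPy sketch (parameter values illustrative, chosen away from the exceptional points discussed below) builds them by Kronecker products and checks their linear independence:
\begin{verbatim}
import numpy as np
from itertools import product

def q_vector(Q, gamma, omega):
    # tensor-product construction of |Q_1;...;Q_L>, spin up = (1,0)
    up, down = np.array([1, 0], complex), np.array([0, 1], complex)
    vec, h = np.array([1], complex), 0.0
    for Qi in Q:
        coeff = 1j * np.exp(-4j * gamma * h * Qi - 2j * omega * Qi)
        vec = np.kron(vec, coeff * up + down)
        h += Qi  # running height h_i = Q_1 + ... + Q_i
    return vec

L, gamma, omega = 3, 0.3, 0.7
B = np.array([q_vector(Q, gamma, omega)
              for Q in product([0.5, -0.5], repeat=L)])
print(np.linalg.matrix_rank(B))  # 2**L for generic parameters
\end{verbatim}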
The space of ${\bf Q}$-vectors can be encoded in a
Bratteli diagram, see figure \ref{fig:fullbratelli}.
\begin{figure}
\centering
\includegraphics[width=6 cm]{fullbratelli2.eps}
\caption{\label{fig:fullbratelli} Full Bratteli diagram. The system size, $L$,
is given on the horizontal axis. The path
corresponding to the eigenvector $\left|\frac{1}{2},\frac{1}{2},-\frac{1}{2},\frac{1}{2} \right>$ is
shown in bold.}
\end{figure}
From each different path on the diagram one reads off the values of $Q_i$ and
obtains a vector from (\ref{eqn:eigenvectors}). As there are two choices at each point ($Q_i=\pm \frac{1}{2}$), this gives $2^L$ possible vectors of the form (\ref{eqn:eigenvectors}). We shall see shortly that these vectors are not always distinct.
An important quantity is the height of a given path at point $i$. It is defined to be:
\begin{eqnarray} \label{eqn:height}
h_i= Q_1 + \cdots +Q_i
\end{eqnarray}
For a system of size $L$ the degeneracy corresponding to a given value
of $h_L$ is given by:
\begin{eqnarray} \label{eqn:Binomial}
\left(\begin{array}{c} L\\ \frac{L}{2}-h_L \end{array} \right)
\end{eqnarray}
In terms of the Bratteli diagram it is the
number of paths that start from the far left side at $0$ and reach that point.
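This counting is easily confirmed by brute force; a short Python sketch enumerating all paths for, e.g., $L=6$:
\begin{verbatim}
from itertools import product
from math import comb

L = 6
counts = {}
for Q in product([0.5, -0.5], repeat=L):
    hL = sum(Q)
    counts[hL] = counts.get(hL, 0) + 1
# each multiplicity matches the binomial coefficient above
for hL, c in counts.items():
    assert c == comb(L, round(L / 2 - hL))
\end{verbatim}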
One can show inductively \cite{Nichols:2004fb} that the ${\bf Q}$-basis is complete if:
\begin{eqnarray} \label{eqn:exceptionalQ}
2 \gamma h_L +\omega \notin \pi \mathbb{Z}
\end{eqnarray}
This restriction is exactly the condition that the 1BTL is non-critical. We shall assume throughout this paper that the ${\bf Q}$-basis is indeed complete. The most general case where the 1BTL algebra is also exceptional will be discussed elsewhere.
As the action for $f_0$ and $e_i$ is known in the
spin basis we can work out their action on the ${\bf Q}$-basis. For $f_0$ we have:
\begin{eqnarray} \label{eqn:e0inQbasis}
f_0 \left| \frac{1}{2} ; Q_2 \cdots ; Q_L \right> &=& \frac{\sin
\omega}{\sin(\omega+\gamma)} \left| \frac{1}{2} ; Q_2 \cdots ; Q_L \right> \\
f_0 \left| -\frac{1}{2} ; Q_2 \cdots ; Q_L \right> &=& 0 \nonumber
\end{eqnarray}
For $e_i$ we have:
\begin{eqnarray} \label{eqn:eiinQbasis}
e_i \left| \cdots ; Q_{i-1}; \frac{1}{2} ; \frac{1}{2} ; Q_{i+2} ; \cdots \right> &=&0
\nonumber \\
e_i \left| \cdots ; Q_{i-1}; \frac{1}{2} ; -\frac{1}{2} ; Q_{i+2} ; \cdots \right> &=&
\alpha_i \left| \cdots ; Q_{i-1}; \frac{1}{2} ; - \frac{1}{2} ; Q_{i+2} ; \cdots ; \right> \nonumber
\\
&&-\alpha_i \left| \cdots ; Q_{i-1}; -\frac{1}{2} ; \frac{1}{2} ; Q_{i+2} ; \cdots \right>\\
e_i \left| \cdots ; Q_{i-1}; -\frac{1}{2} ; \frac{1}{2} ; Q_{i+2} ; \cdots \right> &=&
- \beta_i \left| \cdots ; Q_{i-1}; \frac{1}{2} ; - \frac{1}{2} ; Q_{i+2} ; \cdots ; \right>\nonumber
\\
&& + \beta_i \left| \cdots ; Q_{i-1}; -\frac{1}{2} ; \frac{1}{2} ; Q_{i+2} ; \cdots
\right>\nonumber \\
e_i \left| \cdots ; Q_{i-1}; -\frac{1}{2} ; -\frac{1}{2} ; Q_{i+2} ; \cdots \right> &=&0 \nonumber
\end{eqnarray}
where:
\begin{eqnarray} \label{eqn:AlphaandBeta}
\alpha_i&=& \frac{\sin(2 \gamma h_{i-1} + \omega + \gamma)}{\sin(2 \gamma h_{i-1} + \omega)} \nonumber\\
\beta_i&=& \frac{\sin(2 \gamma h_{i-1} + \omega - \gamma)}{\sin(2 \gamma h_{i-1} + \omega)}
\end{eqnarray}
The variables $\alpha_i$ and $\beta_i$ depend on the previous ${\bf Q}$ spins only through the height variable (\ref{eqn:height}).
In the ${\bf Q}$-basis one can see from (\ref{eqn:e0inQbasis}) and
(\ref{eqn:eiinQbasis}) that both $f_0$ and the $e_i$'s act within sectors of
a given value of $h_L=Q_1+\cdots+Q_L$. These are precisely the irreducible representations of the 1BTL algebra of size (\ref{eqn:Binomial}). The boundary generator $f_0$ is diagonalized and the bulk generators only affect nearest neighbour sites.
\section{Addition of a second boundary generator}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
We now consider the most general two-boundary XXZ model. We shall write this as:
\begin{eqnarray} \label{eqn:2BTL}
H^{2BTL}&=& -a f_0 - a' f_L - \sum_{i=1}^{L-1} e_i
\end{eqnarray}
where $a$ and $a'$ are arbitrary numerical constants and the right boundary generator is given by:
\begin{eqnarray}
f_L&=& -\frac{1}{2 \sin(\omega_2+\gamma)} \left( i \cos \omega_2 \sigma^z_L + \cos \phi \sigma^x_L - \sin \phi \sigma^y_L - \sin \omega_2 \right)
\end{eqnarray}
This generator together with the 1BTL algebra generates the two-boundary
Temperley-Lieb (2BTL) algebra \cite{deGier:2003iu,JanReview}.
We note that the Hamiltonian (\ref{eqn:2BTL}) contains five independent boundary parameters: $\omega_1$, $\omega_2$, $\phi$, $a$, and $a'$, where $\omega_1$ denotes the left-boundary parameter written as $\omega$ in the previous section. However, only $\omega_1$, $\omega_2$, and $\phi$ are present in the boundary generators $f_0$ and $f_L$. It is these three parameters which will control the structure of the lattice theory.
It is simple to calculate the action of $f_L$ in the ${\bf Q}$-basis:
\begin{eqnarray} \label{eqn:RightBoundaryAction}
f_L \left|\cdots ; Q_{L-1} ; \frac{1}{2} \right>&=& F_{\frac{1}{2},\frac{1}{2}} \left|\cdots ; Q_{L-1} ; \frac{1}{2} \right>+F_{\frac{1}{2},-\frac{1}{2}} \left|\cdots ; Q_{L-1} ; -\frac{1}{2} \right> \nonumber\\
f_L \left|\cdots ; Q_{L-1} ; -\frac{1}{2} \right>&=& F_{-\frac{1}{2},\frac{1}{2}} \left|\cdots ; Q_{L-1} ; \frac{1}{2} \right> + F_{-\frac{1}{2},-\frac{1}{2}} \left|\cdots ; Q_{L-1} ; -\frac{1}{2} \right>
\end{eqnarray}
where:
\begin{eqnarray} \label{eqn:Fdefined}
F_{\frac{1}{2},\frac{1}{2}}&=& -\frac{\sin \left(\frac{2 \gamma h_{L-1} +\omega_1-\omega_2+\phi}{2}\right)\sin \left(\frac{2 \gamma h_{L-1} +\omega_1-\omega_2-\phi}{2}\right)}{\sin(\gamma+\omega_2) \sin(2 \gamma h_{L-1}+\omega_1)}\nonumber \\
F_{\frac{1}{2},-\frac{1}{2}}&=&- e^{-i(2 \gamma h_{L-1}+ \omega_1)} \frac{\sin \left(\frac{2 \gamma h_{L-1} + \omega_1 + \omega_2 + \phi}{2} \right)\sin \left(\frac{2 \gamma h_{L-1} + \omega_1 - \omega_2 + \phi}{2} \right)}{\sin(\gamma +\omega_2) \sin(2 \gamma h_{L-1} +\omega_1)} \\
F_{-\frac{1}{2},\frac{1}{2}}&=& e^{i(2 \gamma h_{L-1}+ \omega_1)} \frac{\sin \left(\frac{2 \gamma h_{L-1}+\omega_1-\omega_2-\phi}{2}\right)\sin \left(\frac{2 \gamma h_{L-1}+\omega_1+\omega_2-\phi}{2}\right)}{\sin(\gamma +\omega_2) \sin(2 \gamma h_{L-1} +\omega_1)} \nonumber \\
F_{-\frac{1}{2},-\frac{1}{2}}&=& \frac{\sin \left(\frac{2 \gamma h_{L-1} + \omega_1 + \omega_2 +\phi}{2} \right)\sin \left(\frac{2 \gamma h_{L-1} + \omega_1 + \omega_2 -\phi}{2} \right) }{\sin(\gamma+\omega_2) \sin(2 \gamma h_{L-1}+\omega_1)} \nonumber
\end{eqnarray}
Note that $f_L$ still just affects the final site in the ${\bf Q}$-basis and only depends on the previous spins through the height (\ref{eqn:height}) at the $L-1$ site.
These expressions are well defined as we are, by assumption, away from the 1BTL exceptional points (\ref{eqn:exceptionalQ}). In the next section we shall discuss the case in which some of the $F_{\pm \frac{1}{2} ,\pm \frac{1}{2}}$ terms vanish.
\section{Critical points and invariant subspaces}
\renewcommand{\theequation}{\arabic{section}.\arabic{equation}}
\setcounter{equation}{0}
If $F_{\frac{1}{2},-\frac{1}{2}}$ and $F_{-\frac{1}{2},\frac{1}{2}}$ are always non-vanishing (i.e. $\phi$ is generic) then there is no non-trivial invariant subspace. To prove this assume that there is an invariant subspace and take any vector within it. This will have a particular value of $h_L$. By the action of the 1BTL generators we will produce all possible vectors with the same fixed value of $h_L$. Now by the action of $f_L$ on these we will produce some vectors with $h'_L=h_L \pm 1$. Now act with the 1BTL generators on these to get all the vectors with $h_L \pm 1$. By repeating this procedure we get all $-\frac{L}{2} \le h_L \le \frac{L}{2}$ sectors and therefore all $2^L$ states. Therefore we conclude that there is no non-trivial subspace.
We shall now discuss the cases in which a single $F_{\pm \frac{1}{2},\mp \frac{1}{2}}$ vanishes.
Let us first consider a value of $\phi$ for which there is a particular value of $h_{L-1}$, say $h_{L-1}=x$, for which $F_{\frac{1}{2},-\frac{1}{2}}=0$. Now let us consider the action of the 2BTL generators on the vectors:
\begin{eqnarray}
\left|Q_1 \cdots ; Q_{L-1} ; \frac{1}{2} \right>
\end{eqnarray}
with $h_{L-1}=x$. The final boundary generator $f_L$, by the vanishing of $F_{\frac{1}{2},-\frac{1}{2}}$, acts as a constant on these states. The bulk generators, except $e_{L-1}$, conserve the value of $h_{L-1}$ and therefore act completely within this space of vectors. Now the action of the generator $e_{L-1}$ on these vectors will be non-trivial only if $Q_{L-1}=-\frac{1}{2}$. On such vectors it will give rise to vectors with $Q_{L-1}=\frac{1}{2}, Q_{L}=-\frac{1}{2}$ which have $h_{L-1}=x+1$. By acting with $f_L$ on these vectors we get $Q_L=\frac{1}{2}$ states as well. Now by action of all 1BTL generators we get all vectors with $h_{L-1}=x+1$. Now we can repeat this procedure to conclude that the set of all vectors with $h_{L} \ge x + \frac{1}{2}$ is closed under the action of all generators.
In a similar way one can also consider a value of $\phi$ for which there is a particular value of $h_{L-1}$, say $x'$, for which $F_{-\frac{1}{2},\frac{1}{2}}=0$. By a similar set of arguments we can conclude that the set of vectors with $h_{L} \le x' - \frac{1}{2}$ is closed under the action of all generators.
Therefore for every possible value of $h_{L-1}$ i.e. $x=-\frac{L-1}{2},-\frac{L-1}{2}+1,\cdots,\frac{L-1}{2}$ we have a value of $\phi$ for which we get an invariant subspace. The fact that invariant subspaces only appear when parameters are tuned to particular values implies that the action of the 2BTL generators, in this representation, is becoming indecomposable at these points. The location of these critical points is exactly as previously conjectured in \cite{deGier:2005fx} found using completely different methods.
For $\phi=-2 \gamma x - \omega_1 \pm \omega_2$ (the case $F_{\frac{1}{2},-\frac{1}{2}}=0$) the invariant subspace has dimension:
\begin{eqnarray} \label{eqn:firstset}
\sum_{q \ge x + \frac{1}{2}} \bin{L}{\frac{L}{2}- q}
\end{eqnarray}
whereas for $\phi=2 \gamma x + \omega_1 \pm \omega_2$ (the case $F_{-\frac{1}{2},\frac{1}{2}}=0$) the invariant subspace has dimension:
\begin{eqnarray}
\sum_{q \le x - \frac{1}{2}} \bin{L}{\frac{L}{2}- q} = 2^L - \sum_{q \ge x + \frac{1}{2}} \bin{L}{\frac{L}{2}- q}
\end{eqnarray}
\begin{table}[ht]
\begin{tabular}{c|cccccccccc}
Size of system & \multicolumn{10}{c}{Value of $x$} \\
& $0$ & $\frac{1}{2}$ & $1$ & $\frac{3}{2}$ & $2$ & $\frac{5}{2}$ & $3$ & $\frac{7}{2}$ & $4$ & $\frac{9}{2}$ \\
\hline
1 & 1 \\
2 & - & 1 \\
3 & 4 & - & 1 & \\
4 & - & 5 & - & 1 \\
5 & 16 & - & 6 & - & 1 \\
6 & - & 22 & - & 7 & - & 1 \\
7 & 64 & - & 29 & - & 8 & - & 1 \\
8 & - & 93 & - & 37 & - & 9 & - & 1 \\
9 & 256 & - & 130 & - & 46 & - & 10 & - & 1\\
10 & - & 386 & - & 176 & - & 56 & - & 11 & - & 1
\end{tabular}
\caption{\label{tab:2BTLcriticalpts} Dimension of the invariant subspaces at the 2BTL critical points.}
\end{table}
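The entries of table \ref{tab:2BTLcriticalpts} follow directly from (\ref{eqn:firstset}); a short Python sketch (the loop runs over the allowed heights) reproduces them:
\begin{verbatim}
from math import comb

def dim_invariant(L, x):
    # sum of binom(L, L/2 - q) over heights q = h_L >= x + 1/2
    q, total = L / 2, 0
    while q >= x + 0.5:
        total += comb(L, round(L / 2 - q))
        q -= 1
    return total

# x = 1/2 column of the table: 1, 5, 22, 93, 386
print([dim_invariant(L, 0.5) for L in (2, 4, 6, 8, 10)])
\end{verbatim}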
In table \ref{tab:2BTLcriticalpts} we give the values of (\ref{eqn:firstset}) for $x \ge 0$. The points at which the non-trivial subspaces exist are exactly the points at which the Bethe Ansatz can be performed \cite{ChineseGuys,Nepomechie:2002xy,Nepomechie:2003vv,Nepomechie:2003ez}. At these points the eigenvalues of the two-boundary Hamiltonian (\ref{eqn:2BTL}) split into two sets. These sets are each described by Bethe Ansatz equations. This is similar to the diagonal chain where two sets of Bethe Ansatz equations come from the two distinct Bethe reference states. Here we only wish to draw attention to the fact that in several cases the dimensions of the invariant subspaces give exactly the numerically observed splitting.
In \cite{Nepomechie:2003ez} it was found in the $x=\frac{1}{2}$ case that the number of solutions of one of the Bethe Ansatz equations followed the formula:%
\begin{eqnarray}
2^{L-1} + \frac{1}{2} \left( \begin{array}{c} L \\ \frac{L}{2} \end{array} \right)
\end{eqnarray}
The other eigenvalues were given as solutions to the other Bethe Ansatz equation. As there are in total $2^L$ eigenvalues these are therefore of number:
\begin{eqnarray}
2^{L-1} - \frac{1}{2} \left( \begin{array}{c} L \\ \frac{L}{2} \end{array} \right)
\end{eqnarray}
Evaluating this for $L=2,4,6,8,10$ we find $1,5,22,93,386$ which are exactly the numbers in the $x=\frac{1}{2}$ column of table \ref{tab:2BTLcriticalpts}.
In the case of $L$ odd and $x=0$ it was found numerically in \cite{Nepomechie:2003ez} that each Bethe Ansatz contained $2^{L-1}$ solutions. Again this is in agreement with table \ref{tab:2BTLcriticalpts}.
We therefore conjecture that the numbers given in the table above correctly account for the splitting of the $2^L$ solutions between the Bethe Ansatz equations (at least when parameters other than $\phi$ are generic). The numbers in the table correspond to the size of the smaller set.
A proper explanation of this fact requires the Bethe Ansatz equations to be formulated directly in the ${\bf Q}$-basis. This will be discussed in detail elsewhere.
We would finally like to point out that although we have used the one-boundary ${\bf Q}$-basis for the left-hand boundary it is equally possible to consider the two-boundary problem by beginning at the right-hand side.
\section{Conclusion}
We have discussed the XXZ model with general non-diagonal boundary terms. We first isolated the algebraic aspects of the problem by rewriting the model in terms of the 2BTL algebra. By calculating the action of the 2BTL generators on a good basis for the one-boundary problem we were able to show that at particular points there are non-trivial invariant subspaces. The dimension of these subspaces was also calculated. The fact that the Bethe Ansatz was only able to be written at these critical points, and moreover that the number of solutions to each Bethe Ansatz equation is given precisely by the dimension of the invariant subspace, still needs to be properly explained.
We believe that it is possible to generalize the ${\bf Q}$-basis to study more general spin chains with boundaries. We shall return to this point at a later date.
\section*{Acknowledgements}
This research is partially supported by the EU network
\emph{Integrable models and applications: from strings to condensed matter} HPRN-CT-2002-00325 (EUCLID).
This present paper arose, partially, from a more detailed mathematical investigation of the 2BTL algebra in collaboration with B. Westbury. I would also like to thank J. de Gier, P. Pyatov, and V. Rittenberg for useful discussions on the 2BTL algebra.
\section{Introduction}
A search for new phenomena in final states with at least three charged leptons (electrons or muons) is presented, using 137\fbinv of proton-proton ($\Pp\Pp$) collision data at $\sqrt{s} = 13\TeV$ collected by the CMS experiment at the CERN LHC from 2016 to 2018.
The results are interpreted in the context of two beyond the standard model (SM) theories, namely the type-III seesaw and light scalar or pseudoscalar sector extensions to the SM. The event selection and signal region definitions are chosen in a way that allows other models to be tested.
Phenomenologically, these models show complementary signatures of resonant and nonresonant multilepton final states, as described below.
The seesaw mechanism introduces new heavy particles coupled to leptons and to the Higgs boson, in order to explain the light masses of the neutrinos~\cite{Minkowski:1977sc,Mohapatra:1979ia,Magg:1980ut,Mohapatra:1980yp,Schechter:1980gr,Schechter:1981cv,Foot:1988aq,Mohapatra:1986aw,Mohapatra:1986bd}.
Within the type-III seesaw model, the neutrino is assumed to be a Majorana particle whose mass arises via the mediation of new massive fermions.
These massive fermions are an SU(2) triplet of heavy Dirac charged leptons ($\Sigma^\pm$) and a heavy Majorana neutral lepton ($\Sigma^0$).
In $\Pp\Pp$ collisions, these massive fermions may be pair-produced through electroweak interactions in both charged-charged and charged-neutral pairs.
Multilepton final states arise from the decays of each of the $\Sigma^+\Sigma^-$, $\Sigma^+\Sigma^0$, and $\Sigma^-\Sigma^0$ pairs to the nine different pairs of $\PW$, $\PZ$, and Higgs bosons with SM leptons and the subsequent leptonic decays of the SM bosons.
A complete decay chain example would be $\Sigma^\pm \Sigma^0\to (W^\pm \nu) (W^\pm \ell^\mp) \to (\ell^{\pm} \nu \nu) (\ell^{\pm} \nu \ell^\mp)$, where $\ell$ and $\nu$ are the three flavors of charged and neutral SM leptons, respectively.
All 27 distinct signal production and decay combinations of the seesaw signal are simulated~\cite{Biggio:2011ja}.
The $\Sigma^{\pm,0}$ are degenerate in mass, their decays are prompt, and the $\Sigma$ decay branching fractions are identical across all lepton flavors (flavor-democratic scenario).
This is achieved by taking the mixing angles to be $V_{\Pe}=V_{\mu}=V_{\tau}=10^{-4}$, values that are compatible with the existing constraints~\cite{Abada:2008ea,Abada:2007ux,delAguila:2008pw,Biggio:2011ja,Goswami:2018}.
New light scalars or pseudoscalars are a ubiquitous feature of many theories of physics beyond the SM, including, but not limited to, extended Higgs sectors, supersymmetric theories, and dark sector extensions~\cite{Cacciapaglia:2019bqz,Ellwanger:2009dp,Maniatis:2009re,Buckley:2014fba}.
We consider a generalization of a simple model \cite{Casolino:2015cza,Chang:2017ynj}, where a new light CP-even scalar or CP-odd pseudoscalar boson ($\phi$) is produced in $\Pp\Pp$ collisions via a Yukawa coupling of the $\phi$ to top quarks, $g_{\PQt}$, either in three-body associated production with top quark pairs, or in top quark pair production with three-body top quark decays, ${\PQt}\to\cPqb\PW\phi$.
The signal is collectively labeled as $\ensuremath{\ttbar\phi}$.
In this paper, we search for decays of the $\phi$ boson via a Yukawa coupling to the charged leptons, $g_{\ell}$, into dielectron or dimuon pairs within multilepton events.
The decays of the $\phi$ boson into tau-tau lepton pairs are not considered.
It is assumed that $g_{\ell}\ll g_{\PQt}$ and that all other couplings of the $\phi$ boson are negligible.
Furthermore, the $\phi$ boson decays are taken to be prompt, and the $\phi$ branching fractions into different flavors of charged lepton pairs, $\mathcal{B}(\phi\to\ell\ell)$, as well as $g_{\PQt}$, are left as free parameters.
Figure~\ref{fig:FeynmanDiagrams} illustrates example diagrams for the production and decay of heavy fermions in the type-III seesaw model (left) and a light scalar or pseudoscalar boson in the $\ensuremath{\ttbar\phi}$ model (right).
Prior searches for the manifestation of the type-III seesaw model have been conducted by the ATLAS and CMS Collaborations using data recorded at $\sqrt{s} = 7$, 8, and 13\TeV~\cite{CMS:2012ra,Aad:2015cxa,Aad:2015dha,Sirunyan:2017qkz}.
The most stringent constraints in the flavor-democratic scenario are from a CMS search using 13\TeV data collected in 2016, which excluded $\Sigma$ masses below 850\GeV~\cite{Sirunyan:2017qkz}.
The present study of the $\ensuremath{\ttbar\phi}$ model is the first direct search for a light scalar or pseudoscalar boson in leptonic decays produced in association with a top quark pair.
\begin{figure}[!htp]
\centering
\includegraphics[width=.50\textwidth]{Figure_001-a.pdf} \hspace{.025\textwidth}
\includegraphics[width=.43\textwidth]{Figure_001-b.pdf}
\caption{
Leading order Feynman diagrams for the type-III seesaw (left) and $\ensuremath{\ttbar\phi}$ (right) signal models, depicting example production and decay modes in $\Pp\Pp$ collisions.
\label{fig:FeynmanDiagrams}}
\end{figure}
\section{The CMS detector}
The central feature of the CMS apparatus is a superconducting solenoid of 6\unit{m} internal diameter, providing a magnetic field of 3.8\unit{T}.
Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter, each composed of a barrel and two endcap sections.
Forward calorimeters extend the pseudorapidity ($\eta$) coverage provided by the barrel and endcap detectors.
Muons are detected in gas-ionization chambers embedded in the steel flux-return yoke outside the solenoid.
A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref.~\cite{Chatrchyan:2008zzk}.
The CMS detector uses a two-tiered trigger system \cite{Khachatryan:2016bia}.
The first level, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most relevant pp collision events at rates up to 100\unit{kHz}.
These are further processed by a second level consisting of a farm of processors, known as the high level trigger, that combines information from all CMS subdetectors to yield a final event rate of less than 1\unit{kHz} for data storage.
\section{Data samples and event simulation}
The data samples analyzed in this search correspond to a total integrated luminosity of 137\fbinv (35.9, 41.5, and 59.7\fbinv in years 2016, 2017 and 2018, respectively), recorded in $\Pp\Pp$ collisions at $\sqrt{s} = 13\TeV$.
A combination of isolated single-electron and single-muon triggers was used with corresponding transverse momentum (\pt) thresholds of 24 and 27\GeV in 2016,
27 and 32\GeV in 2017, and 24 and 32\GeV in 2018.
Event samples from Monte Carlo (MC) simulations are used to estimate the rates of signal and relevant SM background processes.
The $\PW\PZ$, $\PZ\gamma$, $\ttbar\PZ$, $\ttbar\PW$, and triboson backgrounds are generated using \MGvATNLO (2.2.2 in 2016, 2.4.2 in 2017 and 2018 data analyses)~\cite{Alwall:2014hca} at next-to-leading order (NLO) precision. The top quark mass used in all simulations is 172.5\GeV.
The $\PZ\PZ$ background contribution from quark-antiquark annihilation is generated using \POWHEG 2.0~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd} at NLO, whereas the contribution from gluon-gluon fusion is generated at leading order (LO) using \MCFM 7.0.1~\cite{Campbell:2010ff}.
Backgrounds from Higgs boson production for a Higgs boson mass of 125\GeV are generated at NLO using \POWHEG and {\textsc{JHUGen}}\xspace 7.0.11~\cite{Gao:2010qx,Bolognesi:2012mm,Anderson:2013afp,Gritsan:2016hjl}.
Simulated event samples for Drell--Yan (DY) and $\ttbar$ processes, generated at NLO with \MGvATNLO and \POWHEG, respectively, are used for systematic uncertainty studies.
All signal samples are simulated using \MGvATNLO 2.6.1 at LO precision.
The production cross section for the type-III seesaw signal model $\sigma(\Sigma\Sigma)$ is calculated at NLO plus next-to-leading logarithmic precision, assuming that the heavy leptons are SU(2) triplet fermions \cite{Fuks:2012qx,Fuks:2013vua}, while the $\ensuremath{\ttbar\phi}$ production cross section $\sigma(\ensuremath{\ttbar\phi})$ comes directly from the \MGvATNLO 2.6.1 generator at LO precision.
All background and signal samples in 2016 are generated with the NNPDF3.0 NLO or LO parton distribution functions (PDFs), with the order matching that in the matrix element calculations. In 2017 and 2018, the NNPDF3.1 next-to-next-to-leading order PDFs~\cite{Ball:2014uwa,Ball:2017nwa} are used.
Parton showering, fragmentation, and hadronization for all samples are performed using \PYTHIA 8.230~\cite{Sjostrand:2014zea} with the underlying event tune CUETP8M1~\cite{Khachatryan:2015pea} for the 2016 analysis, and CP5~\cite{Sirunyan:2019dfx} for the 2017 and 2018 analyses.
Double counted partons generated with \PYTHIA and \MGvATNLO are removed using the FxFx~\cite{Frederix:2012ps} matching schemes.
The response of the CMS detector is simulated using dedicated software based on the \GEANTfour toolkit~\cite{Agostinelli:2002hh},
and the presence of multiple $\Pp\Pp$ interactions in the same or adjacent bunch crossing (pileup) is incorporated by simulating additional interactions, that are both in-time and out-of-time with the hard collision according to the pileup in the data samples.
\section{Event reconstruction}
A particle-flow (PF) algorithm~\cite{Sirunyan:2017ulk} aims to reconstruct and identify each individual particle in an event, with an optimized combination of information from the various elements of the CMS detector.
In each event, the candidate vertex with the largest value of summed physics-object $\pt^2$ is taken to be the primary $\Pp\Pp$ interaction vertex (PV). Here the physics objects are the jets, clustered using the jet finding algorithm~\cite{Cacciari:2008gp,Cacciari:2011ma} with the tracks assigned to candidate vertices as inputs, and the associated missing transverse momentum, taken as the negative vector sum of the \pt of those jets.
The energy of photons is obtained from the ECAL measurement.
The energy of electrons is determined from a combination of the electron momentum at the PV as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track.
The energy of muons is obtained from the curvature of the corresponding track.
The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers.
Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.
Jets used in this analysis are reconstructed using the anti-\kt algorithm~\cite{Cacciari:2008gp} with a distance parameter of 0.4, as implemented in the \textsc{FastJet} package~\cite{Cacciari:2011ma}. Jets are required to have $\pt>30\GeV$ and, to be fully in the tracking system volume, $\abs{\eta}<2.1$.
Jet momentum is determined as the vectorial sum of all particle momenta in the jet, and is found from simulation to be, on average, within 5--10\% of the true momentum over the whole \pt spectrum and detector acceptance.
The effect of the pileup on reconstructed jets is mitigated through a charged hadron subtraction technique, which removes the energy of charged hadrons not originating from the PV~\cite{Sirunyan:2017ulk}.
The impact of neutral pileup particles in jets is mitigated by an event-by-event jet-area-based correction of the jet four-momenta~\cite{Cacciari:2008ca,Cacciari:2008ps,Sirunyan:2017jes}.
Jet energy corrections are derived from simulation studies so that the average measured response of jets becomes identical to that of particle level jets.
In situ measurements of the momentum balance in dijet, photon+jet, leptonically decaying $\PZ$+jet, and multijet events are used to determine any residual differences between the jet energy scale in data and in simulation, and appropriate corrections are made to the jet \pt~\cite{Sirunyan:2017jes}.
Additional quality criteria are applied to each jet to remove those potentially dominated by instrumental effects or reconstruction failures~\cite{CMS:2017jme}.
Finally, all selected jets are required to be outside a cone of $\Delta R \equiv \sqrt{\smash[b]{(\Delta\eta)^2+(\Delta\phi)^2}}=0.4$ around a selected electron or muon as defined below, where $\Delta \phi$ is the azimuthal distance.
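As an illustration of this cross-cleaning step, the following is a minimal Python sketch (the \texttt{eta}, \texttt{phi} dictionary fields are hypothetical and this is not the CMS software):
\begin{verbatim}
import math

def delta_r(eta1, phi1, eta2, phi2):
    # Delta R with the azimuthal difference wrapped into (-pi, pi]
    dphi = math.remainder(phi1 - phi2, 2 * math.pi)
    return math.hypot(eta1 - eta2, dphi)

def clean_jets(jets, leptons, cone=0.4):
    # discard any jet within Delta R < 0.4 of a selected lepton
    return [j for j in jets
            if all(delta_r(j["eta"], j["phi"], l["eta"], l["phi"]) >= cone
                   for l in leptons)]
\end{verbatim}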
A subset of these reconstructed jets originating from $\Pb$ hadrons is identified using the DeepCSV {\cPqb} tagging algorithm~\cite{Sirunyan:2017ezt}.
This algorithm has an efficiency of 60--75\% to identify {\cPqb} quark jets, depending on jet \pt and $\eta$, and a misidentification rate of about 10\% for {\cPqc} quark jets as well as 1\% for light quark and gluon jets.
The missing transverse momentum vector \ptvecmiss is computed as the negative vector sum of the transverse momenta of all the PF candidates in an event, and its magnitude is denoted as \ptmiss~\cite{Sirunyan:2019kia}.
The \ptvecmiss is modified to account for corrections to the energy scale of the reconstructed jets in the event.
Electrons and muons are reconstructed by geometrically matching tracks reconstructed in the tracking system with energy clusters in the ECAL~\cite{Khachatryan:2015hwa} and with the tracks in the muon detectors~\cite{Sirunyan:2018fpa}, respectively.
Electrons are required to be within the tracking system acceptance, $\abs{\eta}<2.5$, and muons are required to be within the muon system acceptance, $\abs{\eta}<2.4$. Both electrons and muons must have $\pt>10\GeV$.
Furthermore, electrons must satisfy shower shape and track quality requirements to suppress those originating from photon conversions in detector material as well as hadronic activity misidentified as electrons.
Similarly, muons must satisfy track fit and matching quality requirements to suppress muon misidentification due to hadron shower remnants that reach the muon system.
Prompt isolated leptons produced by SM boson decays (either directly or via an intermediate tau lepton) are indistinguishable from those produced in signal events.
Thus, SM processes that can produce three or more isolated leptons, such as $\PW\PZ$, $\PZ\PZ$, $\ttbar\PZ$, $\ttbar\PW$, triboson, and Higgs boson production, constitute the irreducible backgrounds.
Reducible backgrounds arise from SM processes, such as $\PZ$+jets or $\ttbar$+jets production, accompanied by additional leptons originating from heavy quark decays or from misidentification of jets. Such leptons, which arise not from boson decays but from leptons inside or near jets, from hadrons that reach the muon detectors, or from hadronic showers with large electromagnetic energy fractions, are referred to as misidentified leptons.
The reducible backgrounds are significantly suppressed by applying a set of lepton isolation and displacement requirements in addition to the quality criteria in the lepton identification~\cite{Khachatryan:2015hwa,Sirunyan:2018fpa}.
The relative isolation is defined as the scalar \pt sum, normalized to the lepton \pt, of photon and hadron PF objects within a cone of $\Delta R $ around the lepton.
This relative isolation is required to be in the range of 5--15\% for $\Delta R=0.3$ for electrons, scaling inversely with the electron \pt, and to be less than 15\% for $\Delta R= 0.4$ for muons.
The isolation quantities are corrected for contributions from particles originating from pileup vertices.
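Schematically, the relative isolation of a lepton $\ell$ described above may be written as
\begin{equation*}
I^{\ell}_{\mathrm{rel}} = \frac{1}{\pt^{\ell}} \left( \sum_{\gamma} \pt^{\gamma} + \sum_{\mathrm{h}} \pt^{\mathrm{h}} \right),
\end{equation*}
where the sums run over the photon and hadron PF objects within the $\Delta R$ cone around the lepton, after the pileup correction mentioned above.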
In addition to the isolation requirement, electrons must satisfy $\abs{d_{z}}<0.1$\cm and $\abs{d_{xy}}<0.05$\cm in the ECAL barrel ($\abs{\eta}<1.479$), and $\abs{d_{z}}<0.2$\cm and $\abs{d_{xy}}<0.1$\cm in the ECAL endcap ($\abs{\eta}>1.479$), where $d_{z}$ and $d_{xy}$ are the longitudinal and transverse impact parameters of electrons with respect to the primary vertex, respectively.
Similarly, muons must satisfy $\abs{d_{z}}<0.1$\cm and $\abs{d_{xy}}<0.05$\cm.
All selected electrons within a cone of $\Delta R<0.05$ of a selected muon are discarded, since these are possibly due to bremsstrahlung from the muons.
In trilepton events, where misidentified-background contributions are dominant, additional three-dimensional impact parameter significance and {\cPqb} tag veto requirements are imposed on the leptons, removing those with significant displacement with respect to the PV or whose matching jet is {\cPqb} tagged.
A PF jet with $\pt>10\GeV$ and $\abs{\eta}<2.5$ is considered to be matched if it is located within a cone of $\Delta R<0.4$ around the lepton without any further quality criteria on the jet.
These electron and muon reconstruction and selection requirements result in typical efficiencies of 40--90 and 75--95\%, respectively, depending on the lepton \pt and $\eta$~\cite{Khachatryan:2015hwa,Sirunyan:2018fpa}.
\section{Event selection}
In both data and simulated event samples, events satisfying the trigger criteria are required to pass additional offline selections.
Each event is required to have at least one electron with $\pt>35\GeV$ (30\GeV in 2016) or at least one muon with $\pt>26\GeV$ (29\GeV in 2017) to be consistent with the trigger thresholds, depending on the trigger used to collect the event.
Throughout this analysis, we consider events with exactly three leptons (3L) in one category and four or more leptons (4L) in another category.
In the 4L event category, only the four leading-\pt leptons are considered.
All events containing a lepton pair with $\Delta R<0.4$ or a same-flavor lepton pair with dilepton invariant mass below 12\GeV are removed to reduce background contributions from low-mass resonances as well as final-state radiation.
The 3L events containing an opposite-sign same-flavor (OSSF) lepton pair with the dilepton invariant mass below 76\GeV, when the trilepton invariant mass is within a $\PZ$ boson mass window ($91\pm15\GeV$), are also rejected.
This suppresses events from the $\PZ\to\ell\ell^{*}\to\ell\ell\gamma$ background process, where the photon converts into two additional leptons, one of which is lost.
The event selection criteria for both the type-III seesaw and $\ensuremath{\ttbar\phi}$ signal models are orthogonal to those used in the estimation of SM backgrounds.
In the context of the type-III seesaw extension of the SM, pair production of heavy fermions gives rise to events with multiple energetic charged leptons or neutrinos in the final state.
Given the relatively high momenta of bosons and leptons originating from the decays of these heavy particles, kinematic quantities, such as the scalar \pt sum of all leptons, are instrumental in suppressing SM contributions.
This is especially valid for decay modes such as $\Sigma^{\pm}\to\ell^{\pm}{\PZ}\to\ell^{\pm}\ell^{\pm}\ell^{\mp}$, where all of the daughter particles of the heavy fermion can be reconstructed in the detector.
However, \ptmiss can be used as a complementary kinematic quantity in other decay modes, such as $\Sigma^0\to\nu \PH\to\nu{\PW}^{\pm}{\PW}^{\mp}$
or $\Sigma^\pm\to\nu \PW^{\pm} \to \nu \ell^{\pm} \nu$, where neutrinos can carry a significant fraction of the outgoing momentum.
We define $\ensuremath{L_{\mathrm{T}}}$ as the scalar \pt sum of all charged leptons, and the quantity $\ensuremath{L_{\mathrm{T}}}$+\ptmiss is chosen as the primary kinematic discriminant to select this variety of decay modes.
We classify the selected multilepton events into statistically independent search channels using the multiplicity of leptons, \ensuremath{N_{\mathrm{leptons}}}, as well as the multiplicity and mass of distinct OSSF pairs, $\ensuremath{N_{\mathrm{OSSF}}}$ and $\ensuremath{M_{\mathrm{OSSF}}}$, respectively.
In cases of ambiguity, $\ensuremath{M_{\mathrm{OSSF}}}$ is calculated using the OSSF pair with the mass closest to that of the $\PZ$ boson, considering both electrons and muons.
The 3L events with an OSSF lepton pair are labeled as OSSF1, whereas those without are labeled as OSSF0.
The OSSF1 events are further classified as on-$\PZ$, below-$\PZ$, and above-$\PZ$, based on the $\ensuremath{M_{\mathrm{OSSF}}}$ relative to the $\pm15\GeV$ window around the $\PZ$ boson mass,
where the latter two categories are also collectively labeled as off-$\PZ$.
Similarly, the 4L events are classified as those with zero, one, and two distinct OSSF lepton pairs, OSSF0, OSSF1, and OSSF2, respectively.
In the 3L on-$\PZ$ search region, the sensitivity is increased by considering the transverse mass discriminant $\ensuremath{M_{\mathrm{T}}}=\sqrt{\smash[b]{2\ptmiss\pt^{\ell}[1-\cos(\Delta\phi_{\ptvecmiss,\ptvec^{\ell}})]}}$ instead of $\ensuremath{L_{\mathrm{T}}}$+\ptmiss, where $\ell$ refers to the lepton that is not part of the on-$\PZ$ pair.
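As a purely illustrative numerical example, a lepton with $\pt^{\ell}=50\GeV$ that is back-to-back in azimuth with $\ptvecmiss$ of magnitude $\ptmiss=100\GeV$ ($\Delta\phi=\pi$) gives
\begin{equation*}
\ensuremath{M_{\mathrm{T}}}=\sqrt{2\,(100\GeV)(50\GeV)\left[1-\cos\pi\right]}\approx 141\GeV.
\end{equation*}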
We reject 3L on-$\PZ$ events with $\ptmiss<100\GeV$, and 4L OSSF2 events with $\ptmiss<100\GeV$ and two distinct OSSF lepton pairs on-$\PZ$, as these are used in the estimation of SM backgrounds.
This event selection and binning scheme yields a total of 40 statistically independent search bins for the type-III seesaw model, as summarized in Table~\ref{tab:seesawSRs}.
\begin{table}
\centering
\topcaption{Multilepton signal region definitions for the type-III seesaw signal model.
All events containing a same-flavor lepton pair with invariant mass below 12\GeV are removed in the 3L and 4L event categories.
Furthermore, 3L events containing an OSSF lepton pair with mass below 76\GeV when the trilepton mass is within a $\PZ$ boson mass window ($91\pm15\GeV$) are also rejected.
The last $\ensuremath{L_{\mathrm{T}}}$+\ptmiss or $\ensuremath{M_{\mathrm{T}}}$ bin in each signal region contains the overflow events.}
\label{tab:seesawSRs}
\resizebox{\textwidth}{!}{
\begin{tabular}{l c r c c r l c }
\hline
Label & \ensuremath{N_{\mathrm{leptons}}} & $\ensuremath{N_{\mathrm{OSSF}}}$ & $\ensuremath{M_{\mathrm{OSSF}}}$ ({\GeVns}) & \ptmiss ({\GeVns}) & \multicolumn{2}{c}{Variable and range ({\GeVns})} & Number of bins \\[0.5ex] \hline
3L below-$\PZ$ & ~~3 & 1~~ & $<$76 & \NA & ~~~~$\ensuremath{L_{\mathrm{T}}}$+\ptmiss & $[0,1200]$ & \multicolumn{1}{c}{6} \\[0.5ex]
3L on-$\PZ$ & ~~3 & 1~~ & 76--106 & $>$100 & ~~~~$\ensuremath{M_{\mathrm{T}}}$ & $[0,700]$ & \multicolumn{1}{c}{7} \\[0.5ex]
3L above-$\PZ$ & ~~3 & 1~~ & $>$106 & \NA & ~~~~$\ensuremath{L_{\mathrm{T}}}$+\ptmiss & $[0,1600]$ & \multicolumn{1}{c}{8} \\[0.5ex]
3L OSSF0 & ~~3 & 0~~ & \NA & \NA & ~~~~$\ensuremath{L_{\mathrm{T}}}$+\ptmiss & $[0,1200]$ & \multicolumn{1}{c}{6} \\[0.5ex]
4L OSSF0 & $\geq$4 & 0~~ & \NA & \NA & ~~~~$\ensuremath{L_{\mathrm{T}}}$+\ptmiss & $[0,600]$ & \multicolumn{1}{c}{2} \\[0.5ex]
4L OSSF1 & $\geq$4 & 1~~ & \NA & \NA & ~~~~$\ensuremath{L_{\mathrm{T}}}$+\ptmiss & $[0,1000]$ & \multicolumn{1}{c}{5} \\[0.5ex]
\multirow{2}{*}{4L OSSF2} & \multirow{2}{*}{$\geq$4} & \multirow{2}{*}{2~~} & \multirow{2}{*}{\NA} & $>$100 if both & \multirow{2}{*}{~~~~$\ensuremath{L_{\mathrm{T}}}$+\ptmiss} & \multirow{2}{*}{$[0,1200]$} & \multicolumn{1}{c}{\multirow{2}{*}{6}} \\
& & & & pairs are on-$\PZ$ & & & \\ \hline
\end{tabular}}
\end{table}
In contrast, the $\ensuremath{\ttbar\phi}$ model yields events with a resonant OSSF lepton pair originating from the $\phi$ decays produced in association with a $\ttbar$ pair.
We consider only 3L or 4L events with at least one OSSF lepton pair and exclude those with $\ensuremath{M_{\mathrm{OSSF}}}$ on-$\PZ$.
This event selection requires semileptonic or dileptonic $\ttbar$ decays in the $\ensuremath{\ttbar\phi}$ signal.
Unlike the type-III seesaw heavy fermions, relatively light scalar or pseudoscalar decays do not necessarily produce energetic charged leptons,
but can yield striking resonant dilepton signatures in events with high hadronic activity and {\cPqb} tagged jets.
Therefore, we seek events with resonances in the OSSF dilepton mass spectra in various $\ensuremath{S_{\mathrm{T}}}$ bins, where $\ensuremath{S_{\mathrm{T}}}$ is defined as the scalar \pt sum of all jets, all charged leptons ($\ensuremath{L_{\mathrm{T}}}$), and \ptmiss.
We probe the $\ensuremath{\ttbar\phi}$ signal in light and heavy $\phi$ mass ranges, namely 15--75 and 108--340\GeV.
Signal masses below 15\GeV and in the range of 75--108\GeV are not considered because of background from low-mass quarkonia and $\PZ$ boson resonances, respectively. Masses above 340\GeV are not considered because the $\phi\to\ttbar$ decay channel becomes kinematically accessible in that regime.
To account for the effects of radiation and resolution on the invariant mass reconstruction, we consider the 12--77\GeV (low) and 106--356\GeV (high) reconstructed dilepton mass ranges for the light and heavy signal mass scenarios, respectively, in both 3L and 4L channels.
Because there can be an ambiguity caused by additional leptons originating from the $\ttbar$ system, the reconstruction of the correct $\phi$ mass is not always possible.
Therefore, we define the $\ensuremath{M_{\mathrm{OSSF}}}^{20}$ and the $\ensuremath{M_{\mathrm{OSSF}}}^{300}$ variables as the OSSF lepton pair masses of a given lepton flavor closest to the targeted mass of 20 and 300\GeV, respectively. The $\ensuremath{M_{\mathrm{OSSF}}}^{20}$ variable is used for the low dilepton mass range, while the $\ensuremath{M_{\mathrm{OSSF}}}^{300}$ variable is used for the high dilepton mass range.
Events with a value of $\ensuremath{M_{\mathrm{OSSF}}}^{20}$ ($\ensuremath{M_{\mathrm{OSSF}}}^{300}$) outside the low (high) dilepton mass ranges are not considered.
The analysis is insensitive to the choice of the targeted mass value, and this simplified scheme allows multiple $\ensuremath{\ttbar\phi}$ signal scenarios to be probed with a single mass spectrum.
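For concreteness, the pair-selection rule behind $\ensuremath{M_{\mathrm{OSSF}}}^{20}$ and $\ensuremath{M_{\mathrm{OSSF}}}^{300}$ can be sketched as follows; this is a minimal illustration rather than the actual CMS software, and the lepton representation used below is a hypothetical one chosen only for this example.
\begin{verbatim}
import math
from itertools import combinations

def inv_mass(p1, p2):
    # Invariant mass of two four-vectors given as (E, px, py, pz).
    E  = p1[0] + p2[0]
    px = p1[1] + p2[1]
    py = p1[2] + p2[2]
    pz = p1[3] + p2[3]
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def ossf_mass_closest(leptons, target):
    # leptons: list of (flavor, charge, (E, px, py, pz)) tuples for a
    # single flavor category; target: 20 or 300 (GeV).
    # Returns the opposite-sign same-flavor pair mass closest to the
    # target, or None if the event contains no such pair.
    masses = [inv_mass(l1[2], l2[2])
              for l1, l2 in combinations(leptons, 2)
              if l1[0] == l2[0] and l1[1] != l2[1]]
    return min(masses, key=lambda m: abs(m - target)) if masses else None
\end{verbatim}
Events whose returned mass falls outside the corresponding low or high dilepton mass range are then discarded, as described above.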
The $\ensuremath{M_{\mathrm{OSSF}}}^{20}$ and $\ensuremath{M_{\mathrm{OSSF}}}^{300}$ masses are calculated separately for each lepton flavor scenario, yielding two nonorthogonal categories labeled as 3/4L($\Pe\Pe$) and 3/4L($\mu\mu$).
Hence, a given event can qualify for both the low and high dilepton mass regions, as well as for both lepton flavor channels.
For example, a $\mu^\pm\mu^\pm\mu^\mp$ event could be present in both low and high dilepton mass regions in the 3L($\mu\mu$) category, and similarly, an $\Pe^\pm\Pe^\mp$$\mu^\pm\mu^\mp$ event could qualify for both the 4L($\Pe\Pe$) and 4L($\mu\mu$) categories.
However, for any one given $\ensuremath{\ttbar\phi}$ signal mass and flavor scenario, only one of the dilepton mass ranges of a single flavor category is considered.
Events that satisfy the low or high dilepton mass ranges are considered in orthogonal $\ensuremath{N_{\cPqb}}=0$ (0B) and $\ensuremath{N_{\cPqb}}\geq1$ (1B) selections, where $\ensuremath{N_{\cPqb}}$ is the multiplicity of {\cPqb} tagged jets in an event.
Events in the 3L signal channels are further split into three $\ensuremath{S_{\mathrm{T}}}$ bins (0--400\GeV, 400--800\GeV, and $\geq$800\GeV) for both $\ensuremath{N_{\cPqb}}$ selections, those in the 4L signal channels are split into two $\ensuremath{S_{\mathrm{T}}}$ bins (0--400\GeV and $\geq$400\GeV) for the 0B selection, and only one inclusive bin in $\ensuremath{S_{\mathrm{T}}}$ is used for the 1B selection.
This event selection and binning scheme results in a total of 70 (68) statistically independent low (high) dilepton mass search bins in each of the 3/4L($\Pe\Pe$) and 3/4L($\mu\mu$) channels for the $\ensuremath{\ttbar\phi}$ signal model, as summarized in Table~\ref{tab:ttphiSRs}.
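For reference, the low-mass count decomposes, following Table~\ref{tab:ttphiSRs}, as
\begin{equation*}
70 = 2\times(13+13+5) + (3+2) + 3,
\end{equation*}
corresponding to the 3L 0B and 1B channels, the 4L 0B channel, and the 4L 1B channel, respectively; the high-mass count is analogously $68=2\times(10+10+10)+(3+2)+3$.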
The signal mass hypotheses that are closer to the mass bin boundaries than to the bin centers are probed with a modified binning scheme, where the mass bin boundaries are shifted by half the value of the bin widths.
\begin{table}
\centering
\topcaption{Multilepton signal region definitions for the $\ensuremath{\ttbar\phi}$ signal model.
All events containing a same-flavor lepton pair with invariant mass below 12\GeV are removed in the 3L and 4L event categories.
Furthermore, 3L events containing an OSSF lepton pair with mass below 76\GeV when the trilepton mass is within a $\PZ$ boson mass window ($91\pm15\GeV$) are also rejected.}
\label{tab:ttphiSRs}
\resizebox{\textwidth}{!}{
\begin{tabular}{l r r r r r l l c c c}
\hline
Label & \ensuremath{N_{\mathrm{leptons}}} & $\ensuremath{N_{\mathrm{OSSF}}}$ & $\ensuremath{M_{\mathrm{OSSF}}}$ & $\ensuremath{N_{\cPqb}}$ & \multicolumn{2}{c}{Variable and range ({\GeVns})} & \multicolumn{4}{c}{Number of bins} \\[0.5ex] \hline
& & & & & & & $\ensuremath{S_{\mathrm{T}}}$ ({\GeVns}) & 0--400 & 400--800 & $>$800~ \\ [\cmsTabSkip]
\multirow{2}{*}{3L(${\Pe\Pe}/\mu\mu$) 0B} & \multirow{2}{*}{3} & \multirow{2}{*}{1} & \multirow{2}{*}{off-$\PZ$} & \multirow{2}{*}{0} & ~~~~~~~$\ensuremath{M_{\mathrm{OSSF}}}^{20}$ & $[12,77]$ & & 13 & 13 & 5 \\[0.5ex]
& & & & & ~~~~~~~$\ensuremath{M_{\mathrm{OSSF}}}^{300}$ & $[106,356]$ & & 10 & 10 & 10 \\[0.5ex]
\multirow{2}{*}{3L(${\Pe\Pe}/\mu\mu$) 1B} & \multirow{2}{*}{3} & \multirow{2}{*}{1} & \multirow{2}{*}{off-$\PZ$} & \multirow{2}{*}{$\geq$1} & ~~~~~~~$\ensuremath{M_{\mathrm{OSSF}}}^{20}$ & $[12,77]$ & & 13 & 13 & 5 \\[0.5ex]
& & & & & ~~~~~~~$\ensuremath{M_{\mathrm{OSSF}}}^{300}$ & $[106,356]$ & & 10 & 10 &10 \\ [\cmsTabSkip]
& & & & & & & $\ensuremath{S_{\mathrm{T}}}$ ({\GeVns}) & 0--400 & $>$400 \\ [\cmsTabSkip]
\multirow{2}{*}{4L(${\Pe\Pe}/\mu\mu$) 0B} & \multirow{2}{*}{$\geq$4} & \multirow{2}{*}{$\geq$1} & \multirow{2}{*}{off-$\PZ$} & \multirow{2}{*}{0} & ~~~~~~~$\ensuremath{M_{\mathrm{OSSF}}}^{20}$ & $[12,77]$ & & 3 & 2 & \\[0.5ex]
& & & & & ~~~~~~~$\ensuremath{M_{\mathrm{OSSF}}}^{300}$ & $[106,356]$ & & 3 & 2 & \\ [\cmsTabSkip]
& & & & & & & \multicolumn{2}{l}{$\ensuremath{S_{\mathrm{T}}}$ inclusive} & \\ [\cmsTabSkip]
\multirow{2}{*}{4L(${\Pe\Pe}/\mu\mu$) 1B} & \multirow{2}{*}{$\geq$4} & \multirow{2}{*}{$\geq$1} & \multirow{2}{*}{off-$\PZ$} & \multirow{2}{*}{$\geq$1} & ~~~~~~~$\ensuremath{M_{\mathrm{OSSF}}}^{20}$ & $[12,77]$ & & 3 & & \\[0.5ex]
& & & & & ~~~~~~~$\ensuremath{M_{\mathrm{OSSF}}}^{300}$ & $[106,356]$ & & 3 & & \\[0.5ex] \hline
\end{tabular}}
\end{table}
\section{Background estimation and systematic uncertainties}\label{sec:backgroundEstimation}
The irreducible backgrounds are estimated using simulated event samples and are dominated by the $\PW\PZ$, $\PZ\PZ$, $\ttbar\PZ$, and $\PZ\gamma$ processes.
The event yields of these processes are obtained from theoretical predictions, with normalization corrections derived in dedicated control regions as described below. These estimates for the $\PW\PZ$, $\PZ\PZ$, and $\PZ\gamma$ processes are largely independent of each other. Since these backgrounds make significant contributions to the $\ttbar\PZ$-enriched control region, the normalization correction for this process is measured after the corresponding corrections have been obtained for the other backgrounds.
The normalization correction factors and their associated uncertainties, which include both statistical and systematic contributions, take the contamination of events from other processes into account and are applied to the corresponding background estimates in the signal regions.
\begin{figure}[!htp]
\centering
\includegraphics[width=.4\textwidth]{Figure_002-a.pdf}\hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_002-b.pdf}
\includegraphics[width=.4\textwidth]{Figure_002-c.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_002-d.pdf}
\caption{The $\ensuremath{M_{\mathrm{T}}}$ distribution in the $\PW\PZ$-enriched control region (upper left),
the $\ensuremath{L_{\mathrm{T}}}$ distribution in the $\ttbar\PZ$-enriched control region (upper right),
the $\ensuremath{S_{\mathrm{T}}}$ distribution in the $\PZ\PZ$-enriched control region (lower left),
and the $\ensuremath{L_{\mathrm{T}}}$ distribution in the misidentified-lepton ($\PZ$+jets) enriched control region (lower right).
The lower panels show the ratio of observed to expected events.
The hatched gray bands in the upper panels and the light gray bands in the lower panels represent the total (systematic and statistical) uncertainty of the backgrounds in each bin, whereas the dark gray bands in the lower panels represent only the statistical uncertainty of the backgrounds.
The rightmost bins contain the overflow events in each distribution.
\label{fig:controlRegions}}
\end{figure}
For the $\PW\PZ$ and $\ttbar\PZ$ processes, we select events with exactly three leptons with an on-$\PZ$ OSSF pair, and the minimum lepton \pt is required to be above 20\GeV to increase the purity of these selections in the targeted process.
For the $\PW\PZ$-enriched selection, we require $50<\ptmiss<100\GeV$ and zero {\cPqb} tagged jets, whereas for the $\ttbar\PZ$-enriched selection we require $\ptmiss<100\GeV$, $\ensuremath{S_{\mathrm{T}}}>350\GeV$, and at least one {\cPqb} tagged jet.
Similarly, for $\PZ\PZ$, we select events with exactly four leptons, $\ptmiss<100\GeV$, and two distinct on-$\PZ$ OSSF lepton pairs.
In the $\PW\PZ$- and $\PZ\PZ$-enriched selections, the simulated event yields are normalized to match those in the data in the 0--3 and 0--2 jet multiplicity bins including overflows, respectively, yielding normalization factor uncertainties in the range of 5--25\%, whereas an inclusive normalization is performed in the $\ttbar\PZ$-enriched selection, resulting in a $20\%$ uncertainty.
The various kinematic distributions in the $\PW\PZ$-, $\PZ\PZ$-, and $\ttbar\PZ$-enriched control regions, where the normalizations of these major irreducible backgrounds are performed, are illustrated in Fig.~\ref{fig:controlRegions} (top-left, top-right, bottom-left).
Similarly, a $\PZ\gamma$-enriched selection is created in three-lepton events with an OSSF lepton pair with mass below 76\GeV and trilepton mass within the $\PZ$ boson mass window, $91\pm15\GeV$.
This selection is dominated by $\PZ$+jets events with internal and external photon conversions originating from final-state radiation, and the normalization yields a relative uncertainty of $20\%$.
Conversion contributions from non-$\PZ\gamma$ processes play a subdominant role, and are estimated using simulated event samples.
Other irreducible backgrounds, such as $\ttbar\PW$, triboson, and Higgs boson processes, are estimated via simulation as well, using the cross sections obtained from the MC generation at NLO or higher accuracy, and are collectively referred to as `rare' backgrounds.
All rare and non-$\PZ\gamma$ conversion backgrounds, which are not normalized to data in dedicated control regions, are assigned a relative normalization uncertainty of $50\%$.
A small fraction of the irreducible backgrounds are due to misidentification of the charge of one or more prompt electrons. These backgrounds are also estimated using simulated event samples.
Following a study of same-sign dielectron events, in which the dielectron invariant mass is within a $\PZ$ boson mass window ($91\pm15\GeV$),
a relative uncertainty of $50\%$ is assigned to such contributions.
These constitute less than 35\% of the irreducible $\PW\PZ$, $\PZ\PZ$, and $\ttbar\PZ$ background contributions in the 3L~OSSF0, 4L~OSSF1, and 4L~OSSF0 signal regions, and are negligible in all other signal regions.
A category of systematic uncertainties in the simulated events is due to the corrections applied to background and signal simulation samples to account for differences with respect to data events.
These corrections concern the lepton reconstruction, identification, isolation, and trigger efficiencies, the {\cPqb} tagging efficiencies, the pileup modeling, the electron and muon resolutions, and the electron, muon, jet, and unclustered energy scale measurements.
The uncertainties due to such corrections typically correspond to a 1--10\% variation of the simulation-based irreducible background and signal yields across all signal regions. Therefore, they form a subdominant category of systematic uncertainties in the simulation-based background estimation.
Similarly, uncertainties due to choices of factorization and renormalization scales~\cite{Cacciari:2003fi} and PDFs~\cite{Ball:2017nwa} are also evaluated for signal and dominant irreducible background processes, yielding $<$10\% variation in signal regions.
The uncertainties in the integrated luminosity are in the range of 2.3--2.5\% in each year of data collection~\cite{CMS:2017sdi,CMS:2018elu,CMS:2019jhq}.
The reducible backgrounds are due to misidentified leptons (MisID) arising from events such as $\PZ$+jets and $\ttbar$+jets.
These are estimated using a three-dimensional implementation of a matrix method~\cite{Khachatryan:2015bsa},
in which the rates at which prompt and misidentified leptons satisfying a loose lepton selection also pass a tight lepton selection are measured in dedicated signal-depleted selections of events in data.
The misidentification rates are measured in $\PZ$+jets and $\ttbar$+jets enriched trilepton (on-$\PZ$, $\ptmiss<50\GeV$) and same-sign dilepton (off-$\PZ$, $\ptmiss>50\GeV$, and with at least 3 jets) selections, respectively, whereas an on-$\PZ$ dilepton selection is used for the prompt rates.
The rates are parametrized as a function of lepton kinematic distributions and the multiplicity of tracks in the event.
A weighted average of these misidentification rates is used in the analysis, reflecting the approximate expected composition of the SM backgrounds in a given search region as obtained from simulated event samples.
The final uncertainty in the estimated background from misidentified leptons is obtained by varying the rates within their uncertainties, as well as by taking into account the differences in rates between $\PZ$+jets and $\ttbar$+jets events, and amounts to a relative uncertainty of 30--40\%.
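As a single-lepton illustration of the underlying idea (the analysis itself uses a three-dimensional generalization), let $\epsilon$ and $f$ denote the rates at which prompt and misidentified leptons in the loose selection also pass the tight selection. The loose and tight yields then satisfy
\begin{align*}
N_{\mathrm{loose}} &= N_{\mathrm{prompt}} + N_{\mathrm{misID}},\\
N_{\mathrm{tight}} &= \epsilon\,N_{\mathrm{prompt}} + f\,N_{\mathrm{misID}},
\end{align*}
which can be inverted to give the misidentified-lepton contribution to the tight selection, $f\,(\epsilon\,N_{\mathrm{loose}} - N_{\mathrm{tight}})/(\epsilon-f)$.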
Figure~\ref{fig:controlRegions} (lower right) illustrates the misidentified-lepton background estimate as a function of $\ensuremath{L_{\mathrm{T}}}$ in the trilepton selection used to measure the rates, where a misidentified lepton is produced in association with a $\PZ$ boson.
A summary of the uncertainty sources in this analysis, including the typical resultant variations on relevant background and signal processes, as well as the correlation model across the three different data taking periods, is given in Table~\ref{tab:systematics}.
The quoted variations on affected processes, except those in the integrated luminosity and the inclusive normalizations of the $\ttbar\PZ$, conversion, and rare simulations, are calculated taking into account variations of the uncertainty sources as a function of object- and event-dependent parameters, such as lepton momenta or jet multiplicity, as appropriate. Thus, these uncertainties also include bin-to-bin correlations across the search regions.
The overall uncertainties in the total expected backgrounds are largely dominated by those in the irreducible $\PW\PZ$, $\PZ\PZ$, and $\ttbar\PZ$ processes, as well as the misidentified-lepton contributions, whereas the relatively large uncertainties in rare and conversion contributions and those due to electron charge misidentification are subdominant and have a negligible effect on the results across different signal regions.
\begin{table}
\centering
\topcaption{Sources of systematic uncertainties, affected background and signal processes, relative variations of the affected processes, and presence or otherwise of correlation between years in signal regions.}
\label{tab:systematics}
\resizebox{\textwidth}{!}{
\begin{tabular}{l l c c}
\hline
Uncertainty source & Signal/Background process & Variation (\%) & Correlation \\ \hline
Integrated luminosity & Signal/Rare/Non-$\PZ\gamma$ conversion & 2.3--2.5 & No \\
Lepton reconstruction, identification,& \multirow{2}{*}{Signal/Background$^{\star}$} & \multirow{2}{*}{4--5} & \multirow{2}{*}{No} \\
and isolation efficiency & & & \\
Lepton displacement efficiency (only in 3L) & Signal/Background$^{\star}$ & 3--5 & Yes \\
Trigger efficiency & Signal/Background$^{\star}$ & $<$3 & No \\
{\cPqb} tagging efficiency & Signal/Background$^{\star}$ & $<$5 & No \\
Pileup modeling & Signal/Background$^{\star}$ & $<$3 & Yes \\
Factorization/renormalization scales $\&$ PDF & Signal/Background$^{\star}$ & $<$10 & Yes \\
Jet energy scale & Signal/Background$^{\star}$ & $<$5 & Yes \\
Unclustered energy scale & Signal/Background$^{\star}$ & $<$5 & Yes \\
Muon energy scale and resolution & Signal/Background$^{\star}$ & $<$5 & Yes \\
Electron energy scale and resolution & Signal/Background$^{\star}$ & $<$2 & Yes \\
[\cmsTabSkip]
$\PW\PZ$ normalization (0/1/2/$\geq$3 jets) & $\PW\PZ$ & 5--10 & Yes \\
$\PZ\PZ$ normalization (0/1/$\geq$2 jets) & $\PZ\PZ$ & 5--10 & Yes \\
$\ttbar\PZ$ normalization & $\ttbar\PZ$ & 15--20 & Yes \\
Conversion normalization & Conversion & 20--50 & Yes \\
Rare normalization & Rare & 50 & Yes \\
Lepton misidentification rates & Misidentified lepton & 30--40 & Yes \\
Electron charge misidentification & $\PW\PZ$/$\PZ\PZ$$^{\dagger}$ & $<$20 & No \\[\cmsTabSkip]
&\multicolumn{3}{l}{$^{\star} \PW\PZ,~\PZ\PZ,~\ttbar\PZ$, rare, and conversion background processes.} \\
&\multicolumn{3}{l}{$^{\dagger}$Only in 3L OSSF0, 4L OSSF0, and 4L OSSF1 signal regions.} \\ \hline
\end{tabular}}
\end{table}
\section{Results}
The distributions of expected SM backgrounds and observed event yields in the signal regions as defined in Tables~\ref{tab:seesawSRs} and~\ref{tab:ttphiSRs} are given in Figs.~\ref{fig:Seesaw3LSR}--\ref{fig:Seesaw4LSR} and~\ref{fig:ttPhiEle0B3LSR}--\ref{fig:ttPhiMu4LSR} for the type-III seesaw model and the $\ensuremath{\ttbar\phi}$ model, respectively.
The figures also show the predicted yields for type-III seesaw models with $\Sigma$ masses of 300 and 700\GeV in the flavor-democratic scenario as well as for $\ensuremath{\ttbar\phi}$ models with a pseudoscalar (scalar) $\phi$ mass of 20 and 125 (70 and 300)\GeV assuming $g_{\PQt}^2\mathcal{B}(\phi\to{\Pe\Pe}/\mu\mu)=0.05$.
We perform a goodness-of-fit test based on the saturated model method~\cite{Baker:1984} to quantify the local deviations between the background-only hypothesis and the observed data, without considering the look-elsewhere effect~\cite{Gross:2010qma}. The most significant local deviation from the SM expectation in the signal regions is found in the 3L($\mu\mu$) 1B $\ensuremath{S_{\mathrm{T}}}<400\GeV$ high-mass $\ensuremath{\ttbar\phi}$ channel (Fig.~\ref{fig:ttPhiMu1B3LSR}) by selecting the bins with $\ensuremath{M_{\mathrm{OSSF}}}^{300}>206\GeV$, resulting in a data excess of approximately 3.2 standard deviations.
Similarly, by examining other deviations from the SM, we observe a local data deficit of 2.5 standard deviations in the $10<\ensuremath{M_{\mathrm{OSSF}}}^{20}<15\GeV$ bin of the 3L($\Pe\Pe$) 0B $400<\ensuremath{S_{\mathrm{T}}}<800\GeV$ channel (Fig.~\ref{fig:ttPhiEle0B3LSR}), and a local data excess of 2.5 standard deviations in the $60<\ensuremath{M_{\mathrm{OSSF}}}^{20}<65\GeV$ bin of the 3L($\mu\mu$) 1B $400<\ensuremath{S_{\mathrm{T}}}<800\GeV$ channel (Fig.~\ref{fig:ttPhiMu1B3LSR}).
Other deviations are less significant.
Overall, the observations are found to be globally consistent with the SM predictions within 2.7 standard deviations,
and no statistically significant excess compatible with the signal models probed is observed.
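For reference, a quoted local significance of $Z$ standard deviations corresponds to a one-sided tail probability $p=1-\Phi(Z)$, where $\Phi$ is the cumulative distribution function of the standard normal distribution; for example, $Z=3.2$ corresponds to $p\approx7\times10^{-4}$.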
\begin{figure}[!htp]
\centering
\includegraphics[width=.4\textwidth]{Figure_003-a.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_003-b.pdf}
\includegraphics[width=.4\textwidth]{Figure_003-c.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_003-d.pdf}
\caption{
Type-III seesaw signal regions in 3L below-$\PZ$ (upper left), on-$\PZ$ (upper right), above-$\PZ$ (lower left), and OSSF0 (lower right) events.
The total SM background is shown as a stacked histogram of all contributing processes.
The predictions for type-III seesaw models with $\Sigma$ masses of 300 and 700\GeV in the flavor-democratic scenario are also shown.
The lower panels show the ratio of observed to expected events.
The hatched gray bands in the upper panels and the light gray bands in the lower panels represent the total (systematic and statistical) uncertainty of the backgrounds in each bin, whereas the dark gray bands in the lower panels represent only the statistical uncertainty of the backgrounds.
The rightmost bins contain the overflow events in each distribution.
\label{fig:Seesaw3LSR}}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=.4\textwidth]{Figure_004-a.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_004-b.pdf}
\includegraphics[width=.4\textwidth]{Figure_004-c.pdf}
\caption{
Type-III seesaw signal regions in 4L OSSF0 (upper left), OSSF1 (upper right), and OSSF2 (lower) events.
The total SM background is shown as a stacked histogram of all contributing processes.
The predictions for type-III seesaw models with $\Sigma$ masses of 300 and 700\GeV in the flavor-democratic scenario are also shown.
The lower panels show the ratio of observed to expected events.
The hatched gray bands in the upper panels and the light gray bands in the lower panels represent the total (systematic and statistical) uncertainty of the backgrounds in each bin, whereas the dark gray bands in the lower panels represent only the statistical uncertainty of the backgrounds.
The rightmost bins contain the overflow events in each distribution.
\label{fig:Seesaw4LSR}}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=.4\textwidth]{Figure_005-a.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_005-b.pdf}
\includegraphics[width=.4\textwidth]{Figure_005-c.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_005-d.pdf}
\includegraphics[width=.4\textwidth]{Figure_005-e.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_005-f.pdf}
\caption{
Dielectron $\ensuremath{M_{\mathrm{OSSF}}}^{20}$ (left column) and $\ensuremath{M_{\mathrm{OSSF}}}^{300}$ (right column) distributions in the 3L($\Pe\Pe$) 0B $\ensuremath{\ttbar\phi}$ signal regions.
Upper, center, and lower plots are for $\ensuremath{S_{\mathrm{T}}}<400\GeV$, $400<\ensuremath{S_{\mathrm{T}}}<800\GeV$, and $\ensuremath{S_{\mathrm{T}}}>800\GeV$, respectively.
The total SM background is shown as a stacked histogram of all contributing processes.
The predictions for $\ensuremath{\ttbar\phi}(\to{\Pe\Pe}$) models with a pseudoscalar (scalar) $\phi$ of 20 and 125 (70 and 300)\GeV mass assuming $g_{\PQt}^2\mathcal{B}(\phi\to\Pe\Pe)=0.05$ are also shown.
The lower panels show the ratio of observed to expected events.
The hatched gray bands in the upper panels and the light gray bands in the lower panels represent the total (systematic and statistical) uncertainty of the backgrounds in each bin, whereas the dark gray bands in the lower panels represent only the statistical uncertainty of the backgrounds.
The rightmost bins do not contain the overflow events as these are outside the probed mass ranges.
\label{fig:ttPhiEle0B3LSR}}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=.4\textwidth]{Figure_006-a.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_006-b.pdf}
\includegraphics[width=.4\textwidth]{Figure_006-c.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_006-d.pdf}
\includegraphics[width=.4\textwidth]{Figure_006-e.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_006-f.pdf}
\caption{
Dielectron $\ensuremath{M_{\mathrm{OSSF}}}^{20}$ (left column) and $\ensuremath{M_{\mathrm{OSSF}}}^{300}$ (right column) distributions in the 3L($\Pe\Pe$) 1B $\ensuremath{\ttbar\phi}$ signal regions.
Upper, center, and lower plots are for $\ensuremath{S_{\mathrm{T}}}<400\GeV$, $400<\ensuremath{S_{\mathrm{T}}}<800\GeV$, and $\ensuremath{S_{\mathrm{T}}}>800\GeV$, respectively.
The total SM background is shown as a stacked histogram of all contributing processes.
The predictions for $\ensuremath{\ttbar\phi}(\to{\Pe\Pe}$) models with a pseudoscalar (scalar) $\phi$ of 20 and 125 (70 and 300)\GeV mass assuming $g_{\PQt}^2\mathcal{B}(\phi\to\Pe\Pe)=0.05$ are also shown.
The lower panels show the ratio of observed to expected events.
The hatched gray bands in the upper panels and the light gray bands in the lower panels represent the total (systematic and statistical) uncertainty of the backgrounds in each bin, whereas the dark gray bands in the lower panels represent only the statistical uncertainty of the backgrounds.
The rightmost bins do not contain the overflow events as these are outside the probed mass ranges.
\label{fig:ttPhiEle1B3LSR}}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=.4\textwidth]{Figure_007-a.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_007-b.pdf}
\includegraphics[width=.4\textwidth]{Figure_007-c.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_007-d.pdf}
\includegraphics[width=.4\textwidth]{Figure_007-e.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_007-f.pdf}
\caption{
Dielectron $\ensuremath{M_{\mathrm{OSSF}}}^{20}$ (left column) and $\ensuremath{M_{\mathrm{OSSF}}}^{300}$ (right column) distributions in the 4L($\Pe\Pe$) $\ensuremath{\ttbar\phi}$ signal regions.
Upper, center, and lower plots are for 0B $\ensuremath{S_{\mathrm{T}}}<400\GeV$, 0B $\ensuremath{S_{\mathrm{T}}}>400\GeV$, and 1B $\ensuremath{S_{\mathrm{T}}}$-inclusive, respectively.
The total SM background is shown as a stacked histogram of all contributing processes.
The predictions for $\ensuremath{\ttbar\phi}(\to{\Pe\Pe}$) models with a pseudoscalar (scalar) $\phi$ of 20 and 125 (70 and 300)\GeV mass assuming $g_{\PQt}^2\mathcal{B}(\phi\to\Pe\Pe)=0.05$ are also shown.
The lower panels show the ratio of observed to expected events.
The hatched gray bands in the upper panels and the light gray bands in the lower panels represent the total (systematic and statistical) uncertainty of the backgrounds in each bin, whereas the dark gray bands in the lower panels represent only the statistical uncertainty of the backgrounds.
The rightmost bins do not contain the overflow events as these are outside the probed mass range.
\label{fig:ttPhiEle4LSR}}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=.4\textwidth]{Figure_008-a.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_008-b.pdf}
\includegraphics[width=.4\textwidth]{Figure_008-c.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_008-d.pdf}
\includegraphics[width=.4\textwidth]{Figure_008-e.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_008-f.pdf}
\caption{
Dimuon $\ensuremath{M_{\mathrm{OSSF}}}^{20}$ (left column) and $\ensuremath{M_{\mathrm{OSSF}}}^{300}$ (right column) distributions in the 3L($\mu\mu$) 0B $\ensuremath{\ttbar\phi}$ signal regions.
Upper, center, and lower plots are for $\ensuremath{S_{\mathrm{T}}}<400\GeV$, $400<\ensuremath{S_{\mathrm{T}}}<800\GeV$, and $\ensuremath{S_{\mathrm{T}}}>800\GeV$, respectively.
The total SM background is shown as a stacked histogram of all contributing processes.
The predictions for $\ensuremath{\ttbar\phi}(\to\mu\mu$) models with a pseudoscalar (scalar) $\phi$ of 20 and 125 (70 and 300)\GeV mass assuming $g_{\PQt}^2\mathcal{B}(\phi\to\mu\mu)=0.05$ are also shown.
The lower panels show the ratio of observed to expected events.
The hatched gray bands in the upper panels and the light gray bands in the lower panels represent the total (systematic and statistical) uncertainty of the backgrounds in each bin, whereas the dark gray bands in the lower panels represent only the statistical uncertainty of the backgrounds.
The rightmost bins do not contain the overflow events as these are outside the probed mass range.
\label{fig:ttPhiMu0B3LSR}}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=.4\textwidth]{Figure_009-a.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_009-b.pdf}
\includegraphics[width=.4\textwidth]{Figure_009-c.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_009-d.pdf}
\includegraphics[width=.4\textwidth]{Figure_009-e.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_009-f.pdf}
\caption{
Dimuon $\ensuremath{M_{\mathrm{OSSF}}}^{20}$ (left column) and $\ensuremath{M_{\mathrm{OSSF}}}^{300}$ (right column) distributions in the 3L($\mu\mu$) 1B $\ensuremath{\ttbar\phi}$ signal regions.
Upper, center, and lower plots are for $\ensuremath{S_{\mathrm{T}}}<400\GeV$, $400<\ensuremath{S_{\mathrm{T}}}<800\GeV$, and $\ensuremath{S_{\mathrm{T}}}>800\GeV$, respectively.
The total SM background is shown as a stacked histogram of all contributing processes.
The predictions for $\ensuremath{\ttbar\phi}(\to\mu\mu$) models with a pseudoscalar (scalar) $\phi$ of 20 and 125 (70 and 300)\GeV mass assuming $g_{\PQt}^2\mathcal{B}(\phi\to\mu\mu)=0.05$ are also shown.
The lower panels show the ratio of observed to expected events.
The hatched gray bands in the upper panels and the light gray bands in the lower panels represent the total (systematic and statistical) uncertainty of the backgrounds in each bin, whereas the dark gray bands in the lower panels represent only the statistical uncertainty of the backgrounds.
The rightmost bins do not contain the overflow events as these are outside the probed mass range.
\label{fig:ttPhiMu1B3LSR}}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=.4\textwidth]{Figure_010-a.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_010-b.pdf}
\includegraphics[width=.4\textwidth]{Figure_010-c.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_010-d.pdf}
\includegraphics[width=.4\textwidth]{Figure_010-e.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_010-f.pdf}
\caption{
Dimuon $\ensuremath{M_{\mathrm{OSSF}}}^{20}$ (left column) and $\ensuremath{M_{\mathrm{OSSF}}}^{300}$ (right column) distributions in the 4L($\mu\mu$) $\ensuremath{\ttbar\phi}$ signal regions.
Upper, center, and lower plots are for 0B $\ensuremath{S_{\mathrm{T}}}<400\GeV$, 0B $\ensuremath{S_{\mathrm{T}}}>400\GeV$, and 1B $\ensuremath{S_{\mathrm{T}}}$-inclusive, respectively.
The total SM background is shown as a stacked histogram of all contributing processes.
The predictions for $\ensuremath{\ttbar\phi}(\to\mu\mu$) models with a pseudoscalar (scalar) $\phi$ of 20 and 125 (70 and 300)\GeV mass assuming $g_{\PQt}^2\mathcal{B}(\phi\to\mu\mu)=0.05$ are also shown.
The lower panels show the ratio of observed to expected events.
The hatched gray bands in the upper panels and the light gray bands in the lower panels represent the total (systematic and statistical) uncertainty of the backgrounds in each bin, whereas the dark gray bands in the lower panels represent only the statistical uncertainty of the backgrounds.
The rightmost bins do not contain the overflow events as these are outside the probed mass range.
\label{fig:ttPhiMu4LSR}}
\end{figure}
Upper limits at 95\% confidence level (\CL) are set on the product of the signal production cross sections and branching fractions using a modified frequentist approach with the \CLs criterion~\cite{Junk:1999kv,Read:2002hq} and the asymptotic approximation for the test statistic~\cite{Cowan:2010js,ATLAS:2011tau}.
Upper limits at 95\% \CL are also set on the product of the branching fractions and the square of the scalar or pseudoscalar Yukawa coupling in the $\ensuremath{\ttbar\phi}$ model.
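For reference, the \CLs criterion compares the $p$-values of the signal-plus-background and background-only hypotheses,
\begin{equation*}
\CLs = \frac{\mathrm{CL}_{s+b}}{\mathrm{CL}_{b}},
\end{equation*}
and a signal hypothesis is excluded at 95\% \CL when $\CLs<0.05$.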
A binned maximum-likelihood fit is performed to discriminate between the potential signal and the SM background processes for both signal models separately.
All of the $\ensuremath{L_{\mathrm{T}}}$+\ptmiss and $\ensuremath{M_{\mathrm{T}}}$ bins are used for the seesaw signal masses under consideration,
whereas the appropriate subset of the lepton flavor and dilepton mass bins is used for a given $\phi$ mass and branching fraction scenario in the $\ensuremath{\ttbar\phi}$ signal model,
such that the low (high) dielectron and dimuon mass spectra are considered for a light (heavy) $\ensuremath{\ttbar\phi}$ signal with the $\phi\to{\Pe\Pe}$ and $\phi\to\mu\mu$ decays, respectively.
The uncertainties in the mean values of both the expected signal and background yields are treated as nuisance parameters modeled by log-normal and gamma distributions for systematic and statistical uncertainties, respectively.
Statistical uncertainties in the signal and background yields in each bin and year are assumed to be fully uncorrelated,
whereas all systematic uncertainties are assumed to be fully correlated among the signal bins in a given year.
The correlation model of all nuisance parameters across the datasets collected in different years is summarized in Table~\ref{tab:systematics}.
\begin{figure}[!htp]
\centering
\includegraphics[width=.6\textwidth]{Figure_011.pdf}
\caption{
The 95\% confidence level expected and observed upper limits on the total production cross section of heavy fermion pairs.
The inner (green) and the outer (yellow) bands indicate the regions containing 68 and 95\%, respectively, of the distribution of limits expected under the background-only hypothesis.
Also shown are the theoretical prediction for the cross section and the associated uncertainty of the $\Sigma$ pair production via the type-III seesaw mechanism.
Type-III seesaw heavy fermions are excluded for masses below 880\GeV (expected limit 930\GeV) in the flavor-democratic scenario.
\label{fig:seesawLimitComb}}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=.4\textwidth]{Figure_012-a.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_012-b.pdf}
\includegraphics[width=.4\textwidth]{Figure_012-c.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_012-d.pdf}
\caption{
The 95\% confidence level expected and observed upper limits on the product of the signal production cross section and branching fraction of a scalar $\phi$ boson in the dielectron (upper left) and dimuon (lower left) channels, and of a pseudoscalar $\phi$ boson in the dielectron (upper right) and dimuon (lower right) channels, where $\phi$ is produced in association with a top quark pair.
The inner (green) and the outer (yellow) bands indicate the regions containing 68 and 95\%, respectively, of the distribution of limits expected under the background-only hypothesis. The vertical hatched gray band indicates the mass region corresponding to the $\PZ$ boson veto.
Also shown are the theoretical predictions for the product of the production cross section and branching fraction of the $\ensuremath{\ttbar\phi}$ model, with their uncertainties, assuming $g_{\PQt}^2\mathcal{B}(\phi\to{\Pe\Pe}/\mu\mu)=0.05$.
All $\ensuremath{\ttbar\phi}$ signal scenarios are excluded for the product of the production cross section and branching fraction above 1--20\unit{fb} for $\phi$ masses in the range of 15--75\GeV, and above 0.3--5\unit{fb} for $\phi$ masses in the range of 108--340\GeV.
\label{fig:ttPhiLimitsV1}}
\end{figure}
\begin{figure}[!htp]
\centering
\includegraphics[width=.4\textwidth]{Figure_013-a.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_013-b.pdf}
\includegraphics[width=.4\textwidth]{Figure_013-c.pdf} \hspace{.05\textwidth}
\includegraphics[width=.4\textwidth]{Figure_013-d.pdf}
\caption{
The 95\% confidence level expected and observed upper limits on the product of the square of the Yukawa coupling to top quarks and branching fraction of a scalar $\phi$ boson in the dielectron (upper left) and dimuon (lower left) channels, and of a pseudoscalar $\phi$ boson in the dielectron (upper right) and dimuon (lower right) channels, where $\phi$ is produced in association with a top quark pair.
The inner (green) and the outer (yellow) bands indicate the regions containing 68 and 95\%, respectively, of the distribution of limits expected under the background-only hypothesis. The vertical hatched gray band indicates the mass region corresponding to the $\PZ$ boson veto.
The dashed horizontal line marks the unity value of the product of the square of the Yukawa coupling to top quarks and the branching fraction.
Assuming a Yukawa coupling of unit strength to top quarks, the branching fraction of new scalar (pseudoscalar) bosons to dielectrons or dimuons above 0.0004--0.004 (0.004--0.03) are excluded for masses in the range of 15--75\GeV, and above 0.004--0.04 (0.006--0.03) for masses in the range of 108--340\GeV.
\label{fig:ttPhiLimitsV2}}
\end{figure}
The observed and expected upper limits on the production cross section $\sigma(\Sigma\Sigma)$ in the type-III seesaw signal model are given in Fig.~\ref{fig:seesawLimitComb}.
Type-III seesaw heavy fermions are excluded at 95\% \CL with masses below 880\GeV assuming the flavor-democratic scenario.
Similarly, the upper limits on $\sigma(\ensuremath{\ttbar\phi})\mathcal{B}(\phi\to{\Pe\Pe}/\mu\mu)$ and $g_{\PQt}^2\mathcal{B}(\phi\to{\Pe\Pe}/\mu\mu)$ in the $\ensuremath{\ttbar\phi}$ signal model are shown in Figs.~\ref{fig:ttPhiLimitsV1} and \ref{fig:ttPhiLimitsV2}, respectively.
In the $\ensuremath{\ttbar\phi}$ signal model, we exclude cross sections above 1--20\unit{fb} for $\phi$ masses in the range of 15--75\GeV, and above 0.3--5\unit{fb} for $\phi$ masses in the range of 108--340\GeV.
Furthermore, $g_{\PQt}^2\mathcal{B}(\phi\to{\Pe\Pe}/\mu\mu)$ above (0.4--4)$\ten{-3}$ for the scalar and above (0.4--3)$\ten{-2}$ for the pseudoscalar scenarios are excluded for $\phi$ masses in the 15--75\GeV range, whereas the two models perform similarly for masses 108--340\GeV and are excluded above (0.4--4)$\ten{-2}$ for the scalar and above (0.6--3)$\ten{-2}$ for the pseudoscalar scenarios.
Uncertainties in the production cross sections due to scale and PDF choices are considered for both signal models~\cite{Fuks:2012qx,Fuks:2013vua,deFlorian:2016spz}, and are also shown in Figs.~\ref{fig:seesawLimitComb} and \ref{fig:ttPhiLimitsV1}.
The differences in the low-mass exclusion limits of scalar and pseudoscalar $\ensuremath{\ttbar\phi}$ models result from the kinematic structure of the couplings, which affect both the production cross section and the signal efficiency of the $\phi$ bosons.
The coupling of a scalar boson to a fermion is momentum independent, whereas that of a pseudoscalar boson is proportional to the momentum in the low momentum limit.
Therefore, the low $\phi$ momentum part of the production cross section is suppressed in the pseudoscalar model in comparison to the scalar model for $\phi$ masses below the top quark mass scale, while both production cross sections are similar for $\phi$ masses at and above the top quark mass scale.
Furthermore, this coupling structure results in more pseudoscalar $\phi$ bosons in the Lorentz-boosted region compared to the scalar $\phi$ bosons,
yielding more energetic leptons with higher selection efficiencies.
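Schematically, and with a generic normalization chosen here purely for illustration, the two coupling structures are
\begin{equation*}
\mathcal{L}_{\mathrm{scalar}} \propto g_{\PQt}\,\phi\,\bar{\PQt}\PQt,
\qquad
\mathcal{L}_{\mathrm{pseudoscalar}} \propto g_{\PQt}\,\phi\,\bar{\PQt}\,i\gamma_{5}\,\PQt.
\end{equation*}
For nonrelativistic top quarks the matrix element of $\bar{\PQt}\PQt$ approaches a constant, whereas that of $\bar{\PQt}\,i\gamma_{5}\,\PQt$ is proportional to the quark momenta; this is the origin of the low-momentum suppression described above.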
The product of the fiducial acceptance and the event selection efficiency for the type-III seesaw and the $\ensuremath{\ttbar\phi}$ models for various signal mass hypotheses,
calculated after all analysis selection requirements, is given in Table~\ref{tab:signalEffs}.
\begin{table}[!htp]
\centering
\topcaption{Product of the fiducial acceptance and the event selection efficiency for the signal models at various signal mass hypotheses calculated after all analysis selection requirements.
} \label{tab:signalEffs}
\resizebox{\textwidth}{!}{
\begin{tabular}{ l l l l l l l l l l l l l l l l }
\hline
\multicolumn{1}{l}{Signal model} & \multicolumn{15}{c}{Product of acceptance and efficiency (\%)} \\[0.5ex] \hline
\multicolumn{13}{l}{Type-III seesaw (flavor-democratic scenario)} & & & \\[0.5ex]
$\Sigma$ mass ({\GeVns}) & 100 & 200 & 300 & 400 & 550 & 700 & 850 & 1000 & 1250 & 1500 & & & & & \\[\cmsTabSkip]
& 0.32 & 1.82 & 2.63 & 3.02 & 3.29 & 3.34 & 3.29 & 3.21 & 2.99 & 2.82 & & & & & \\[\cmsTabSkip]
\multicolumn{1}{l}{$\ensuremath{\ttbar\phi}$}\\[0.5ex]
$\phi$ mass ({\GeVns}) & 15 & 20 & 25 & 30 & 40 & 50 & 60 & 70 & 75 & 108 & 125 & 150 & 200 & 250 & 300 \\[\cmsTabSkip]
Scalar $\phi(\to{\Pe\Pe}$) & 0.85 & 1.29 & 1.67 & 2.02 & 2.74 & 3.44 & 4.25 & 5.16 & 4.95 & 5.53 & 8.32 & 9.00 & 10.3 & 11.1 & 11.5 \\
Scalar $\phi(\to\mu\mu$) & 1.54 & 2.16 & 2.81 & 3.35 & 4.38 & 5.29 & 6.40 & 7.69 & 7.56 & 8.74 & 11.6 & 12.3 & 14.0 & 14.8 & 15.3 \\
Pseudoscalar $\phi(\to{\Pe\Pe}$) & 0.96 & 1.81 & 2.69 & 3.45 & 4.88 & 5.82 & 6.62 & 7.35 & 6.83 & 6.8 & 9.77 & 10.4 & 11.0 & 11.4 & 11.9 \\
Pseudoscalar $\phi(\to\mu\mu$) & 1.69 & 2.95 & 4.24 & 5.38 & 7.14 & 8.46 & 9.73 & 10.4 & 9.93 & 10.3 & 13.4 & 14.0 & 14.9 & 15.2 & 15.9 \\ \hline
\end{tabular}}
\end{table}
\section{Summary}
A search has been performed for physics beyond the standard model, using multilepton events in 137\fbinv of $\Pp\Pp$ collision data at $\sqrt{s} = 13\TeV$, collected with the CMS detector in 2016--2018.
The observations are found to be consistent with the expectations from standard model processes, with no statistically significant signal-like excess in any of the probed channels.
The results are used to constrain the allowed parameter space of the targeted signal models.
At 95\% confidence level, heavy fermions of the type-III seesaw model with masses below 880\GeV are excluded assuming identical $\Sigma$ decay branching fractions across all lepton flavors. This is the most restrictive limit on the flavor-democratic scenario of the type-III seesaw model to date.
Assuming a Yukawa coupling of unit strength to top quarks, branching fractions of new scalar (pseudoscalar) bosons to dielectrons or dimuons above
0.004 (0.03) are excluded at 95\% confidence level for masses in the range 15--75\GeV, and above 0.04 (0.03) for masses in the range 108--340\GeV. These are the first limits in these channels on an extension of the standard model with scalar or pseudoscalar particles.
\begin{acknowledgments}
We thank M. J. Strassler for drawing our attention to the need to include top quark pair production with three-body top quark decays, ${\PQt}\to\cPqb\PW\phi$, in the $\ensuremath{\ttbar\phi}$ signal model.
We congratulate our colleagues in the CERN accelerator departments for the excellent performance of the LHC and thank the technical and administrative staffs at CERN and at other CMS institutes for their contributions to the success of the CMS effort. In addition, we gratefully acknowledge the computing centers and personnel of the Worldwide LHC Computing Grid for delivering so effectively the computing infrastructure essential to our analyses. Finally, we acknowledge the enduring support for the construction and operation of the LHC and the CMS detector provided by the following funding agencies: BMBWF and FWF (Austria); FNRS and FWO (Belgium); CNPq, CAPES, FAPERJ, FAPERGS, and FAPESP (Brazil); MES (Bulgaria); CERN; CAS, MoST, and NSFC (China); COLCIENCIAS (Colombia); MSES and CSF (Croatia); RPF (Cyprus); SENESCYT (Ecuador); MoER, ERC IUT, PUT and ERDF (Estonia); Academy of Finland, MEC, and HIP (Finland); CEA and CNRS/IN2P3 (France); BMBF, DFG, and HGF (Germany); GSRT (Greece); NKFIA (Hungary); DAE and DST (India); IPM (Iran); SFI (Ireland); INFN (Italy); MSIP and NRF (Republic of Korea); MES (Latvia); LAS (Lithuania); MOE and UM (Malaysia); BUAP, CINVESTAV, CONACYT, LNS, SEP, and UASLP-FAI (Mexico); MOS (Montenegro); MBIE (New Zealand); PAEC (Pakistan); MSHE and NSC (Poland); FCT (Portugal); JINR (Dubna); MON, RosAtom, RAS, RFBR, and NRC KI (Russia); MESTD (Serbia); SEIDI, CPAN, PCTI, and FEDER (Spain); MOSTR (Sri Lanka); Swiss Funding Agencies (Switzerland); MST (Taipei); ThEPCenter, IPST, STAR, and NSTDA (Thailand); TUBITAK and TAEK (Turkey); NASU (Ukraine); STFC (United Kingdom); DOE and NSF (USA).
\hyphenation{Rachada-pisek} Individuals have received support from the Marie-Curie program and the European Research Council and Horizon 2020 Grant, contract Nos.\ 675440, 752730, and 765710 (European Union); the Leventis Foundation; the A.P.\ Sloan Foundation; the Alexander von Humboldt Foundation; the Belgian Federal Science Policy Office; the Fonds pour la Formation \`a la Recherche dans l'Industrie et dans l'Agriculture (FRIA-Belgium); the Agentschap voor Innovatie door Wetenschap en Technologie (IWT-Belgium); the F.R.S.-FNRS and FWO (Belgium) under the ``Excellence of Science -- EOS" -- be.h project n.\ 30820817; the Beijing Municipal Science \& Technology Commission, No. Z181100004218003; the Ministry of Education, Youth and Sports (MEYS) of the Czech Republic; the Lend\"ulet (``Momentum") Program and the J\'anos Bolyai Research Scholarship of the Hungarian Academy of Sciences, the New National Excellence Program \'UNKP, the NKFIA research grants 123842, 123959, 124845, 124850, 125105, 128713, 128786, and 129058 (Hungary); the Council of Science and Industrial Research, India; the HOMING PLUS program of the Foundation for Polish Science, cofinanced from European Union, Regional Development Fund, the Mobility Plus program of the Ministry of Science and Higher Education, the National Science Center (Poland), contracts Harmonia 2014/14/M/ST2/00428, Opus 2014/13/B/ST2/02543, 2014/15/B/ST2/03998, and 2015/19/B/ST2/02861, Sonata-bis 2012/07/E/ST2/01406; the National Priorities Research Program by Qatar National Research Fund; the Ministry of Science and Education, grant no. 3.2989.2017 (Russia); the Programa Estatal de Fomento de la Investigaci{\'o}n Cient{\'i}fica y T{\'e}cnica de Excelencia Mar\'{\i}a de Maeztu, grant MDM-2015-0509 and the Programa Severo Ochoa del Principado de Asturias; the Thalis and Aristeia programs cofinanced by EU-ESF and the Greek NSRF; the Rachadapisek Sompot Fund for Postdoctoral Fellowship, Chulalongkorn University and the Chulalongkorn Academic into Its 2nd Century Project Advancement Project (Thailand); the Nvidia Corporation; the Welch Foundation, contract C-1845; and the Weston Havens Foundation (USA).
\end{acknowledgments}
\label{sec1}
Network coding was introduced in \cite{ACLY} as a means to improve the rate of transmission in networks, and often achieve capacity in the case of single-source networks. Linear network coding was introduced in \cite{CLY}. Network-error correction, which involved a trade-off between the rate of transmission and the number of correctable network-edge errors, was introduced in \cite{YeC} as an extension of classical error correction to a general network setting. Along with subsequent works \cite{Zha} and \cite{YaY}, this generalized the classical notions of the Hamming weight, Hamming distance, minimum distance, and various classical error control coding bounds to their network counterparts. An algebraic formulation of network coding was discussed in \cite{KoM} for both instantaneous networks and networks with delays. In all of these works, it is assumed that the sinks and the source know the network topology and the network code, which is referred to as \textit{coherent network coding}.
Random network coding, introduced in \cite{HMKKESL} presented a distributed network coding scheme where nodes independently chose random coefficients (from a finite field) for the linear mixing of their inputs. Subspace codes and rank metric codes were constructed for the setting of random network coding in \cite{KoK} and \cite{SKK}.
Convolutional network codes were discussed in \cite{ErF,CLYZ,LiY} and a connection between network coding and convolutional coding was analyzed in \cite{FrS}. In this work, convolutional coding is introduced to achieve network-error correction. We assume an acyclic, single source, instantaneous (delay-free) network with coherent linear network coding to multicast information to several sinks.
We define \textit{a network use} as a single usage of all the edges of the network to multicast at most the min-cut number of symbols to each of the sinks. An \textit{error pattern} is a subset of the set of edges of the network which are in error. It is seen that when the source implements a convolutional code to send information into the network, every sink sees a different convolutional code. We address the following problem.
\textit{Given an acyclic, delay-free, single-source network with a linear multicast network code, and a set of error patterns $\Phi$, how to design a convolutional code at the source which shall correct the errors corresponding to the error patterns in $\Phi$, as long as consecutive errors are separated by a certain number of network uses?}
The main contributions of this paper are as follows.
\begin{itemize}
\item For networks with a specified network code, convolutional codes have been employed to achieve network-error correction for the first time.
\item An explicit convolutional code construction (for the network with a given network code) that corrects a given pattern of network-errors (provided that the occurrence of consecutive errors are separated by certain number of network uses) is given.
\item The convolutional codes constructed in this paper are found to offer certain advantages in field size and decoding over the previous approaches of block network-error correction codes (BNECCs) of \cite{YaY} for network error correction.
\item Some bounds are derived on the minimum field size required, and on the minimum number of network uses that two error events must be separated by in order that they get corrected.
\end{itemize}
The rest of the paper is organized as follows. Section \ref{sec2} gives a primer on convolutional codes and MDS convolutional codes. In Section \ref{sec3}, we discuss the general network coding set-up and network-errors. In Section \ref{sec4}, we give a construction for an input convolutional code which shall correct errors corresponding to a given set of error patterns. In Section \ref{sec5}, we give some examples for this construction. In Section \ref{sec5.5}, we compare the advantages and disadvantages of our network error correcting codes with those of \cite{YaY}. Finally, a short discussion on the construction of Section \ref{sec4} constitutes Section \ref{sec6} along with several directions for further research.
\section{Convolutional codes-Basic Results}
\label{sec2}
In this section, we review the basic concepts related to convolutional codes, used extensively throughout the rest of the paper. For $q$ a power of a prime, let $\mathbb{F}_q$ denote the finite field with $q$ elements.
For a convolutional code, the \textit{information sequence} $\boldsymbol{u} = \left[\boldsymbol{u}_0,\boldsymbol{u}_1,...,\boldsymbol{u}_t\right](\boldsymbol{u}_i\in\mathbb{F}_q^b)$ and the \textit{codeword sequence} (output sequence) $\boldsymbol{v} = \left[\boldsymbol{v}_0,\boldsymbol{v}_1,...,\boldsymbol{v}_t\right]\left(\boldsymbol{v}_i\in\mathbb{F}_q^c\right)$ can be represented in terms of the delay parameter $z$ as
\begin{eqnarray*}
\boldsymbol{u}(z)=\sum_{i=0}^t \boldsymbol{u}_i z^i ~~~ \mbox{ and }~~~
\boldsymbol{v}(z)=\sum_{i=0}^t \boldsymbol{v}_i z^i
\end{eqnarray*}
\begin{definition}[\cite{JoZ}]
A \textit{convolutional code}, ${\cal C}$ of rate $~b/c~(b~<~c)$ is defined as
\[
{\cal C} = \left\{ \boldsymbol{v}(z)\in\mathbb{F}_q^{c}[[z]]\text{ }|\text{ } \boldsymbol{v}(z)=\boldsymbol{u}(z)G(z) \right\}
\]
where $G(z)$ is a $b \times c$ \textit{generator matrix} with entries from $\mathbb{F}_q(z)$ (the field of rational functions over $\mathbb{F}_q$) and rank $b$ over $\mathbb{F}_q(z)$, and $\boldsymbol{v}(z)$ being the code sequence arising from the information sequence, $\boldsymbol{u}(z)\in\mathbb{F}_q^{b}[[z]]$, the set of all $b$-tuples with elements from the formal power series ring $\mathbb{F}_q[[z]]$ over $\mathbb{F}_q.$
\end{definition}
Two generator matrices are said to be \textit{equivalent} if they encode the same convolutional code. A \textit{polynomial generator matrix} \cite{JoZ} for a convolutional code $\cal C$ is a generator matrix for $\cal C$ with all its entries from $\mathbb{F}_q[z]$, the ring of polynomials over $\mathbb{F}_q.$ It is known that every convolutional code has a polynomial generator matrix \cite{JoZ}. Also, a generator matrix for a convolutional code is \textit{catastrophic} \cite{JoZ} if there exists an information sequence with infinitely many non-zero components, that results in a codeword with only finitely many non-zero components. For a polynomial generator matrix $G(z)$, let $g_{ij}(z)$ be the element of $G(z)$ in the $i^{th}$ row and the $j^{th}$ column, and $\nu_i:=\max_{j} deg(g_{ij}(z))$ be the $i^{th}$ \textit{row degree} of $G(z)$. Let $\delta := \sum_{i=1}^{b}\nu_i$ be the \textit{degree} of $G(z).$
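The row degrees and the degree of a polynomial generator matrix are easily computed mechanically; the following Python sketch (ours, purely illustrative, with each polynomial stored as a coefficient list, lowest degree first) does so:
\begin{verbatim}
def poly_deg(p):
    """Degree of a polynomial given as a coefficient list; deg(0) := -1."""
    for i in range(len(p) - 1, -1, -1):
        if p[i] != 0:
            return i
    return -1

def row_degrees_and_degree(G):
    """Row degrees nu_i and degree delta of a b x c polynomial matrix G."""
    nus = [max(poly_deg(g) for g in row) for row in G]
    return nus, sum(nus)

# Example: G(z) = [1 + z^2, 1 + z + z^2] has nu_1 = 2 and delta = 2.
print(row_degrees_and_degree([[[1, 0, 1], [1, 1, 1]]]))  # ([2], 2)
\end{verbatim}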
\begin{definition}[\cite{JoZ} ]
A polynomial generator matrix is called \textit{basic} if it has a polynomial right inverse. It is called \textit{minimal} if its degree $\delta$ is minimum among all generator matrices of $\cal C$.
\end{definition}
Forney in \cite{For} showed that the ordered set $\left\{\nu_{1},\nu_{2},...,\nu_{b}\right\}$ of row degrees (indices) is the same for all minimal basic generator matrices of $\cal C$ (which are all equivalent to one another). Therefore the ordered row degrees and the degree $\delta$ can be defined for a convolutional code $\cal C.$ A rate $b/c$ convolutional code with degree $\delta$ will henceforth be referred to as a $(c,b,\delta)$ code. Also, any minimal basic generator matrix for a convolutional code is non-catastrophic.
\begin{definition}[\cite{JoZ} ]
A \textit{convolutional encoder} is a physical realization of a generator matrix by a linear sequential circuit. Two encoders are said to be \textit{equivalent encoders} if they encode the same code. A \textit{minimal encoder} is an encoder with minimal delay elements among all equivalent encoders.
\end{definition}
\begin{definition}[\cite{JoZ}]
The \textit{free distance} of the convolutional code $\cal C$ is given as
\[
d_{free}({\cal C})=min\left\{w_H(\boldsymbol{v}(z))|\boldsymbol{v}(z)\in{\cal C},\boldsymbol{v}(z)\neq 0\right\}
\]
where $w_H$ indicates the Hamming weight over $\mathbb{F}_q.$
\end{definition}
\subsection{MDS convolutional codes}
In this subsection, we discuss some results on the existence and construction of Maximum Distance Separable (MDS) convolutional codes. In Subsection \ref{sec4e}, we use these results to obtain some bounds on the field size and the error correcting capabilities of such MDS convolutional codes when they are used for network-error correction. The following bound on the free distance, and the existence of codes meeting the bound, called MDS convolutional codes, was proved in \cite{RoS}.
\begin{theorem}[\cite{RoS}]
\label{GenSingBound}
For every base field $\mathbb{F}$ and every rate $k/n$ convolutional code $\cal C$ of degree $\delta$, the free distance is bounded as
\[
d_{free}({\cal C})\leq(n-k)(\left\lfloor \delta / k \right\rfloor + 1) + \delta + 1.
\]
\end{theorem}
Theorem \ref{GenSingBound} is known as the \textit{generalized Singleton bound}.
\begin{theorem}[\cite{RoS}]For any positive integers $k<n$, $\delta$ and for any prime $p$ there exists a field $\mathbb{F}_q$ of characteristic $p$, and a rate $k/n$ convolutional code $\cal C$ of degree $\delta$ over $\mathbb{F}_q$, whose free distance meets the generalized Singleton bound.
\end{theorem}
A method of constructing MDS convolutional codes based on the connection between quasi-cyclic codes and convolutional codes was given in \cite{RLS}. It is known \cite{RLS} that the field size $q$ required for an $(n,k,\delta)$ MDS convolutional code ${\cal C}$ in the construction of \cite{RLS} should be a prime power such that
\begin{equation}
\label{fieldsizeconv}
n|(q-1)\text{ and } q\geq\delta\frac{n^2}{k(n-k)}+2.
\end{equation}
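For concreteness, condition (\ref{fieldsizeconv}) can be checked mechanically; the following Python snippet (ours, purely illustrative, and not code from \cite{RLS}) restates it:
\begin{verbatim}
def rls_condition_holds(q, n, k, delta):
    """Sufficient field-size condition for the (n, k, delta) MDS
    convolutional code construction of [RLS]."""
    return (q - 1) % n == 0 and q >= delta * n**2 / (k * (n - k)) + 2

# For (n, k, delta) = (2, 1, 2): q >= 10 and 2 | q - 1, so q = 11 works.
print(rls_condition_holds(11, 2, 1, 2))  # True
\end{verbatim}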
\section{Convolutional Codes for network-error Correction - Problem Formulation}
\label{sec3}
\subsection{Network model}
We consider only acyclic networks in this paper, the model for which is as in \cite{CLYZ}. An acyclic network can be represented as an acyclic directed multi-graph ${\cal G}$ = ($\cal V,\cal E$) where $\cal V$ is the set of all vertices and $\cal E$ is the set of all edges in the network.
We assume that every edge in the directed multi-graph representing the network has unit \emph{capacity} (can carry at most one symbol from $\mathbb{F}_q$). Network links with capacities greater than one are modeled as parallel edges. The network is assumed to be instantaneous, i.e., all nodes process the same \emph{generation} (the set of symbols generated at the source at a particular time instant) of input symbols to the network in a given coding order (ancestral order \cite{CLYZ}).
Let $s\in\cal V$ be the source node and $\cal T$ be the set of all receivers. Let $n_{_T}$ be the unicast capacity for a sink node $T\in{\cal T}$, i.e., the maximum number of edge-disjoint paths from $s$ to $T$. Then $n = \min_{T\in{\cal T}}n_{_T}$ is the max-flow min-cut capacity of the multicast connection.
\subsection{Network code}
We follow \cite{KoM} in describing the network code. For each node $v\in{\cal V}$, let the set of all incoming edges be denoted by $\Gamma_I(v)$. Then $|\Gamma_I(v)|=\delta_I(v)$ is the in-degree of $v$. Similarly the set of all outgoing edges is defined by $\Gamma_O(v)$, and the out-degree of the node $v$ is given by $|\Gamma_O(v)|=\delta_O(v)$. For any $e \in {\cal E}$ and $v \in {\cal V}$, let $head(e)=v$, if $v$ is such that $e \in \Gamma_I(v)$. Similarly, let $tail(e)=v$, if $v$ is such that $e \in \Gamma_O(v)$. We will assume an ancestral ordering on ${\cal E}$ of the acyclic graph ${\cal G}$.
The network code can be defined by the local kernel matrices of size $\delta_I(v)\times\delta_O(v)$ for each node $v\in{\cal V}$ with entries from $\mathbb{F}_q$. The global encoding kernels for each edge can be recursively calculated from these local kernels.
The network transfer matrix, which governs the input-output relationship in the network, is defined as given in \cite{KoM}. Towards this end, the matrices $A$, $K$, and $B^T$ (for every sink $T\in {\cal T}$) are defined as follows:\newline
The entries of the $n \times |{\cal E}|$ matrix $A$ are defined as
\[
A_{i,j}=\left\{
\begin{array}{cc}
\alpha_{i,e_j} & \text{ if } e_j \in \Gamma_{O}(s)\\
0 & \text{ otherwise}
\end{array}
\right.
\]
where $\alpha_{i,e_j} \in \mathbb{F}_q$ is the local encoding kernel coefficient at the source coupling input $i$ with edge $e_j \in \Gamma_O(s)$.\newline
The entries of the $|{\cal E}| \times |{\cal E}|$ matrix $K$ are defined as
\[
K_{i,j}=\left\{
\begin{array}{cc}
\beta_{i,j} & \text{ if } head(e_i) = tail(e_j) \\
0 & \text{ otherwise}
\end{array}
\right.
\]
where $\beta_{i,j} \in \mathbb{F}_q$ is the local encoding kernel coefficient between $e_i$ and $e_j$, at the node $v=head(e_i) = tail(e_j)$.\newline
For every sink $T \in {\cal T}$, the entries of the $|{\cal E}| \times n$ matrix $B^T$ are defined as
\[
B^T_{i,j}=\left\{
\begin{array}{cc}
\epsilon_{e_j,i} & \text{ if } e_j \in \Gamma_{I}(T)\\
0 & \text{ otherwise} \end{array}
\right.
\]
where all $\epsilon_{e_j,i} \in \mathbb{F}_q$.
For instantaneous networks, we have
\begin{eqnarray*}
F : = (I-K)^{-1}
\end{eqnarray*}
where $I$ is the $|{\cal E}| \times |{\cal E}|$ identity matrix. Now we have the following:
\begin{definition}[\cite{KoM}]
\label{nettransfermatrix}
\textit{The network transfer matrix}, $M_{T}$, corresponding to a sink node ${T} \in \cal T$ is a full rank $n \times n$ matrix defined as $~~~M_{T}:=AFB^{T}=AF_{T},$ where $F_{T}:=FB^{T}.$
\end{definition}
Definition \ref{nettransfermatrix} implies that if $\boldsymbol{x} \in \mathbb{F}_q^n$ is the input to the instantaneous network at any particular instant, then at any particular sink $T \in \cal T$, we have the output, $\boldsymbol{y} \in \mathbb{F}_q^n$, at the same instant, to be $\boldsymbol{y} = \boldsymbol{x}M_T$.
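As an illustration (our own sketch, not code from \cite{KoM}), $M_T$ can be computed over a prime field $\mathbb{F}_p$ as follows; under an ancestral edge ordering $K$ is strictly upper triangular, so $(I-K)^{-1}=I+K+K^2+\cdots$ is a finite sum and all arithmetic stays in $\mathbb{F}_p$:
\begin{verbatim}
import numpy as np

def transfer_matrix(A, K, B, p):
    """M_T = A (I - K)^{-1} B over F_p, with B playing the role of B^T.
    K is strictly upper triangular (hence nilpotent), so the inverse
    is the finite sum I + K + K^2 + ... + K^{|E|-1}."""
    E = K.shape[0]
    F = np.eye(E, dtype=int)
    power = np.eye(E, dtype=int)
    for _ in range(E - 1):
        power = power @ K % p
        F = (F + power) % p
    return (A @ F % p) @ B % p
\end{verbatim}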
\subsection{Convolutional codes for networks}
Assuming that an $n$-dimensional linear multicast network code has been implemented in the network, we define the following terms.
\begin{definition}
An \textit{input convolutional code}, ${\cal C}_s$, is a convolutional code of rate $k/n~(k < n)$ with an \textit{input generator matrix} $G_{I}(z)$ implemented at the source of the network.
\end{definition}
\begin{definition}
The \textit{output convolutional code} ${\cal C}_T$, corresponding to a sink node ${T} \in \cal T$, is the rate $k/n~(k < n)$ convolutional code generated by the \textit{output generator matrix} $G_{O,{T}}(z)$ which is given as $~~~G_{O,{T}}(z) = G_I(z)M_{T}$, with $M_T$ being the full rank network transfer matrix corresponding to an $n$-dimensional network code.
\end{definition}
\begin{example}
\label{exm1}
\begin{figure}[htbp]
\centering
\includegraphics[totalheight=2.2in,width=3in]{combi.eps}
\caption{$_4C_2$ combination network over a ternary field. The global kernels of the edges coming from the source are indicated. All the intermediate nodes have local kernels unity.}
\label{fig:4C2Network}
\end{figure}
Consider the $_4C_2$ combination $\mathbb{F}_3$ network as shown in Fig. \ref{fig:4C2Network}. For this network, let the input convolutional code over $\mathbb{F}_3[z]$ be generated by $G_I(z) = \left[1 + z^2\text{ }\text{ }1+z+z^2\right].$
The network transfer matrices at each sink and the generator matrices of the corresponding output convolutional codes are calculated and tabulated in Table \ref{tab1}.
\begin{table}[htbp]
\centering
\caption{$_4C_2-$ $\mathbb{F}_3$ network for the input convolutional code $G_I(z) = \left[1 + z^2\text{ }\text{ }1+z+z^2\right].$}
\begin{tabular}{|c|c|l|}\hline
\textbf{Sink} & \textbf{Network transfer} & \textbf{Output convolutional code}\\
& \textbf{matrix} & \\ \hline
$T_1$ & $ M_{T_1}= \left( \begin{array}{cc}1 & 0 \\0 & 1 \end{array} \right)$ & $G_{O,T_1}(z) =[1+z^2\text{ }\text{ }1+z+z^2]$\\
\hline
$T_2$ & $M_{T_2}=\left( \begin{array}{cc}1 & 1 \\0 & 1 \end{array} \right)$ & $G_{O,T_2}(z) =[1+z^2\text{ }\text{ }2+z+2z^2]$\\
\hline
$T_3$ & $M_{T_3}=\left( \begin{array}{cc}1 & 1 \\0 & 2 \end{array} \right)$ & $G_{O,T_3}(z) =[1+z^2\text{ }\text{ }2z]$\\
\hline
$T_4$ & $M_{T_4}=\left( \begin{array}{cc}0 & 1 \\1 & 1 \end{array} \right)$ & $G_{O,T_4}(z) =[1+z+z^2\text{ }\text{ }2+z+2z^2]$\\
\hline
$T_5$ & $M_{T_5}=\left( \begin{array}{cc}0 & 1 \\1 & 2 \end{array} \right)$ & $G_{O,T_5}(z) =[1+z+z^2\text{ }\text{ }2z]$\\
\hline
$T_6$ & $M_{T_6}=\left( \begin{array}{cc}1 & 1 \\1 & 2 \end{array} \right)$ & $G_{O,T_6}(z) =[2+z+2z^2\text{ }\text{ }2z]$\\
\hline
\end{tabular}
\label{tab1}
\end{table}
\end{example}
\begin{figure*}
[htbp]
\centering
\includegraphics[totalheight=3in,width=5.6in]{convnetcode2.eps}
\caption{A network with an input convolutional code and a network code}
\label{fig:netcodewithcc}
~\\ \hrule
\end{figure*}
Thus, as can be seen from Example \ref{exm1}, the source node implements a convolutional code, and maps the encoded symbols into its outgoing symbols. The network maps these symbols from the source to symbols at the receivers. Each of the receivers hence sees a different convolutional code, which might have different distance properties and a different degree ($\delta$). Fig. \ref{fig:netcodewithcc} illustrates the entire system for a particular sink.
\subsection{Network-errors}
An \textit{error pattern} $\rho,$ as stated previously, is a subset of ${\cal E}$ which indicates the edges of the network in error. An \textit{error vector} $\boldsymbol{w}$ is a $1\times |{\cal E}|$ vector which indicates the error occurred at each edge. An error vector is said to match an error pattern $(\text{i.e., }\boldsymbol{w} \in \rho)$ if all non-zero components of $\boldsymbol{w}$ occur only on the edges in $\rho$. An \textit{error pattern set} $\Phi$ is a collection of subsets of ${\cal E}$, each of which is an error pattern. Therefore we have the formulation as follows.
Let $\boldsymbol{x} \in \mathbb{F}_q^n$ be the input to the network at any particular time instant, and let $\boldsymbol{w} \in \mathbb{F}_q^{|{\cal E}|}$ be the error vector corresponding to the network-errors that occurred at the same instant. Then, the output vector, $\boldsymbol{y} \in \mathbb{F}_q^n$, at that instant at any particular sink $T \in \cal T$ can be expressed as
\[
\boldsymbol{y} = \boldsymbol{x}M_T + \boldsymbol{w}F_T
\]
\section{Convolutional Codes for network-error Correction - Code Construction and Capability}
\label{sec4}
\subsection{Bounded distance decoding of convolutional codes}
\label{sec4a}
In this section, we briefly discuss and give some results regarding the bounded distance decoding of convolutional codes, reinterpreting results from \cite{JeR} for our context.
For the convolutional encoder with $c$ encoded output symbols and $b$ input (information) symbols, starting at some state in the trellis, we shall denote every such duration of $c$ output symbols as \textit{a segment} of the trellis of the convolutional code. Each segment can be identified by an integer, which is zero at the start of transmission and incremented by $1$ for every $c$ output symbols henceforth.
Let $\cal C$ be a rate $b/c$ convolutional code with a generator matrix $G(z).$ Then corresponding to the information sequences $\boldsymbol{u}_0,\boldsymbol{u}_1,\ldots~(\boldsymbol{u}_i \in \mathbb{F}_q^b)$ and the code sequence $\boldsymbol{v}_0,\boldsymbol{v}_1,\ldots~(\boldsymbol{v}_i \in \mathbb{F}_q^c)$, we can associate an encoder state sequence $\boldsymbol{\sigma}_0,\boldsymbol{\sigma}_1,\ldots$, where $\boldsymbol{\sigma}_t$ indicates the content of the delay elements in the encoder at time $t$. We define the set of $j$ output symbols as $\boldsymbol{v}_{[0,j)}:=\left[\boldsymbol{v}_0,\boldsymbol{v}_1,\ldots,\boldsymbol{v}_{j-1}\right].$ We define the set $S_{d_{free}}$ consisting of all possible truncated code sequences $\boldsymbol{v}_{[0,j)}$, for all $j$, of weight less than $d_{free}({\cal C})$ that start in the zero state as follows:
\begin{eqnarray*}
S_{d_{free}}:=\left\{\boldsymbol{v}_{[0,j)} \mid w_H\left(\boldsymbol{v}_{[0,j)}\right) < d_{free}({\cal C}), \boldsymbol{\sigma}_0=\boldsymbol{0}, ~ \forall~ j>0 \right\}
\end{eqnarray*}
where $w_H$ indicates the Hamming weight over $\mathbb{F}_q.$ Clearly the definition of $S_{d_{free}}$ excludes the possibility of a zero state in between (in the event of which $w_H\left(\boldsymbol{v}_{[0,j)}\right) \geq d_{free}({\cal C})$), i.e., $\boldsymbol{\sigma}_t \neq \boldsymbol{0}$ for any $t$ such that $0 < t \leq j$. We have that the set $S_{d_{free}}$ is invariant among the set of minimal convolutional encoders. We now define
\[
T_{d_{free}}({\cal C}):=\max_{\boldsymbol{v}_{[0,j)} \in S_{d_{free}}}j+1
\]
which thereby can be considered as a code property because of the fact that $S_{d_{free}}$ is invariant among minimal encoders. Then, we have the following proposition:
\begin{proposition}
\label{minweighttime}
The minimum Hamming weight trellis decoding algorithm can correct all error sequences which have the property that the Hamming weight of the error sequence in any consecutive $T_{d_{free}}({\cal C})$ segments is at most $\left\lfloor \frac{d_{free}({\cal C})-1}{2} \right\rfloor$.
\end{proposition}
\begin{proof}
Without loss of generality, let $\boldsymbol{\sigma}_t$ be a correct state (according to the transmitted sequence) at some depth $t$ in the path traced by the received sequence on the trellis of the code and let us assume that all the errors before $t$ have been corrected.
Now consider the window from $t$ to $t+T_{d_{free}}({\cal C})$, consisting of $T_{d_{free}}({\cal C})$ segments. In this window, the Hamming weight of the error sequence is at most $\left\lfloor \frac{d_{free}({\cal C})-1}{2} \right\rfloor$. However, by the definition of $T_{d_{free}}({\cal C})$, the distance between the correct path and every other path of length $T_{d_{free}}({\cal C})$ starting from the state $\boldsymbol{\sigma}_t$ is at least $d_{free}({\cal C})$. Therefore, in this window, the error sequence can be corrected.
Now using $\boldsymbol{\sigma}_{t+T_{d_{free}}({\cal C})}$ at depth $t+T_{d_{free}}({\cal C})$ as our new correct starting state, we can repeat the same argument thus proving that the entire error sequence is correctable.
\end{proof}
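For concreteness, the following schematic Python sketch (illustrative only; the abstract next-state/output description of the trellis is an assumption of this sketch, not a structure defined in the paper) performs the minimum Hamming weight trellis decoding used in Proposition \ref{minweighttime}:
\begin{verbatim}
def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def viterbi_min_weight(trellis, received, start_state=0):
    """trellis[s]: list of (input, next_state, output) triples, with
    output a c-tuple; received: list of c-tuples, one per segment.
    Returns (weight, decoded inputs) of a minimum-weight code path."""
    paths = {start_state: (0, [])}
    for r in received:
        new_paths = {}
        for s, (w, ins) in paths.items():
            for u, s_next, out in trellis[s]:
                cand = (w + hamming(out, r), ins + [u])
                if s_next not in new_paths or cand[0] < new_paths[s_next][0]:
                    new_paths[s_next] = cand
        paths = new_paths
    return min(paths.values(), key=lambda t: t[0])
\end{verbatim}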
\subsection{Construction}
\label{construction}
For the given network with a single source that has to multicast information to a set of sinks, with $n$ being the min-cut of the multicast connections and an $n$-dimensional network code in place over a sufficiently large field $\mathbb{F}_{q}$ (for which we provide a bound in Subsection \ref{sec4e}) of characteristic $p$, we provide a construction for a convolutional code that corrects errors with patterns in a given error pattern set. This is the main contribution of this work. The construction is as follows; an illustrative computational sketch of Steps 2--5 is given after the enumeration.
\begin{enumerate}
\item Let $M_T=AF_{T}$ be the $n\times n$ network transfer matrix from the source to any particular sink $T\in {\cal T}$. Let $\Phi$ be the error pattern set given. Then we compute the following sets.
\item Let the set of all error vectors having their error pattern in $\Phi$ be
\[
{\cal W}_{\Phi}=\bigcup_{\rho \in \Phi}\left\{\boldsymbol{w}=(w_1,w_2,...,w_{|{\cal E}|}) \in \mathbb{F}_{q}^{|{\cal E}|}\text{ }|\text{ }\boldsymbol{w} \in \rho\right\}.
\]
\item Let
\[
{\cal W}_{T}:= \left\{\boldsymbol{w}F_T\text{ }|\text{ }\boldsymbol{w}\in{\cal W}_{\Phi}\right\}
\]
be computed for each sink $T$. This is nothing but the set of $n$-length resultant vectors at the sink $T$ due to errors in the given error patterns $\rho \in \Phi$.
\item Let
\[
{\cal W}_s:=\bigcup_{T\in{\cal T}} \left\{\boldsymbol{w}_{_T}M_T^{-1}\text{ }|\text{ }\boldsymbol{w}_{_T}\in{\cal W}_{T}\right\}
\]
be computed. This is the set of all $n$ length input vectors to the network that would result in the set of output vectors given by ${\cal W}_{T}$ at sink $T$, for each sink $T$.
\item Given a vector $\boldsymbol{y} \in \mathbb{F}_q^m$ (for some positive integer $m$), let $w_H(\boldsymbol{y})$ denote the Hamming weight of $\boldsymbol{y}$, i.e., the number of non-zero elements of $\boldsymbol{y}$. Let
\begin{equation}
\label{ts}
t_s = \max_{\boldsymbol{w}_s \in {\cal W}_s}w_H(\boldsymbol{w}_s).
\end{equation}
\item Choose an input convolutional code ${\cal C}_s$ with free distance at least $2t_s+1$.
\end{enumerate}
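The following illustrative run (our own sketch) of Steps 2--5 uses the binary butterfly network of Section \ref{sec5} with single-edge error patterns; the matrices $F_T$ and $M_T^{-1}$ are the ones given there:
\begin{verbatim}
import numpy as np

q, num_edges = 2, 9
F_T = {"T1": np.array([[1,0,1,0,0,0,0,0,0],
                       [1,1,0,1,1,1,1,0,0]]).T,
       "T2": np.array([[1,1,0,1,1,1,0,1,0],
                       [0,1,0,0,0,0,0,0,1]]).T}
M_inv = {"T1": np.array([[1,1],[0,1]]),   # over F_2, M_T = M_T^{-1}
         "T2": np.array([[1,0],[1,1]])}
# Step 2: nonzero error vectors matching the single-edge patterns.
W_phi = [np.eye(num_edges, dtype=int)[i] for i in range(num_edges)]
# Steps 3-4: vectors seen at each sink, mapped back to network inputs.
W_s = {tuple(w @ F_T[T] @ M_inv[T] % q) for T in F_T for w in W_phi}
# Step 5: t_s (entries are 0/1 here since q = 2, so sum = Hamming weight).
t_s = max(sum(v) for v in W_s)
print(sorted(W_s), t_s)   # {(0,0),(0,1),(1,0),(1,1)} and t_s = 2
\end{verbatim}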
\subsection{Decoding}
\label{decoding}
Let $G_I(z)$ be the $k \times n$ generator matrix of the input convolutional code, ${\cal C}_s$, obtained from the given construction. Let $G_{O,{T}}(z) = G_I(z)M_{T}$ be the generator matrix of the output convolutional code, ${\cal C}_{T}$, at sink $T \in {\cal T}$, with $M_T$ being its network transfer matrix. Each sink can choose between two decoding methods based on the free distance ($d_{free}({\cal C}_{T})$) and $T_{d_{free}}({\cal C}_T)$ of its output convolutional code as follows:
\textit{Case-A:} This case is applicable when both of the following two conditions are satisfied.
\begin{equation}\label{decodAcond1}
d_{free}({{\cal C}_T}) \geq 2 \left( \max_{\boldsymbol{w}_{_T} \in {\cal W}_T} w_H(\boldsymbol{w}_{_T}) \right)+1
\end{equation}
and
\begin{equation}\label{decodAcond2}
T_{d_{free}}({\cal C}_s) \geq T_{d_{free}}({\cal C}_T).
\end{equation}
In this case, the sink $T$ performs minimum distance decoding directly on the trellis of the output convolutional code, ${\cal C}_{T}$.
\textit{Case-B:} This case is applicable if either of the following two conditions is satisfied.
\[
d_{free}({{\cal C}_T}) < 2 \left(\max_{\boldsymbol{w}_{_T} \in {\cal W}_T}w_H(\boldsymbol{w}_{_T}) \right)+1
\]
or
\[
T_{d_{free}}({\cal C}_s) < T_{d_{free}}({\cal C}_T).
\]
This method involves additional processing at the sink, i.e., matrix multiplication. We have the following formulation at the sink $T$. Let
\begin{eqnarray*} {\left[v_{1}'(z) \text{ }\text{ } v_{2}'(z) \text{ }\text{ } ... \text{ }\text{ } v_{n}'(z)\right] = \left[v_{1}(z)\text{ }\text{ }v_{2}(z)\text{ }\text{ } ... \text{ }\text{ }v_{n}(z)\right] + }\\ \left[w_{1}(z)\text{ }\text{ }w_{2}(z)\text{ }\text{ }...\text{ }\text{ }w_{n}(z)\right] \end{eqnarray*} represent the output sequences at sink $T$, where
\[
\left[v_{1}(z) \text{ }\text{ } v_{2}(z) \text{ }\text{ } ... \text{ }\text{ } v_{n}(z)\right] = \boldsymbol{u}(z)G_{O,T}(z) = \boldsymbol{u}(z)G_{I}(z)M_T
\]
$\boldsymbol{u}(z)$ being the $k$ length vector of input sequences, and
\[
\left[w_{1}(z) \text{ }\text{ } w_{2}(z) \text{ }\text{ } ... \text{ }\text{ }w_{n}(z)\right]
\]
represent the corresponding error sequences. Now, the output sequences are multiplied with the inverse of the network transfer matrix $M_T$, so that decoding can be done on the trellis of the input convolutional code. Hence, we have
\begin{eqnarray*} \left[v_{1}''(z) \text{ }\text{ } v_{2}''(z) \text{ }\text{ } ... \text{ }\text{ } v_{n}''(z) \right] = \left[v_{1}'(z) \text{ }\text{ } v_{2}'(z) \text{ }\text{ } ... \text{ }\text{ } v_{n}'(z) \right]M_T^{-1} \\
= \boldsymbol{u}(z)G_{I}(z) + \left[w_{1}(z) \text{ }\text{ } w_{2}(z) \text{ }\text{ } ... \text{ }\text{ }w_{n}(z) \right]M_T^{-1}
\\=\boldsymbol{u}(z)G_{I}(z) + \left[w_{1}'(z) \text{ }\text{ } w_{2}'(z) \text{ }\text{ } ... \text{ }\text{ }w_{n}'(z) \right] \end{eqnarray*}
where $\boldsymbol{w'}(z) = \left[w_{1}'(z) \text{ }\text{ } w_{2}'(z) \text{ }\text{ } ... \text{ }\text{ }w_{n}'(z) \right]$ now indicate the set of modified error sequences that are to be corrected. Then the sink $T$ decodes to the minimum distance path on the trellis of the input convolutional code.
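Schematically (as a sketch only, with the received stream assumed to be a list of length-$n$ numpy integer arrays), the Case-B processing amounts to:
\begin{verbatim}
def caseB_process(received, M_inv, q):
    """Multiply each received n-vector v'_t by M_T^{-1} over F_q,
    yielding the stream v''_t that is decoded on the input trellis."""
    return [v @ M_inv % q for v in received]
\end{verbatim}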
\subsection{Error correcting capability}
\label{capability}
In this subsection we prove a main result of the paper, given by Theorem \ref{maintheorem}, which characterizes the error correcting capability of the code obtained via the construction of Subsection \ref{construction}. Before proving the theorem, we recall that in every network use, $n$ encoded symbols, which is equal to the number of symbols corresponding to one segment of the trellis, are multicast to the sinks.
\begin{theorem}
\label{maintheorem}
The code ${\cal C}_s$ resulting from the construction of Subsection \ref{construction} can correct all network-errors that have their pattern as some $\rho \in \Phi$ as long as any two consecutive network-errors are separated by at least $T_{d_{free}}({\cal C}_s)$ network uses.
\end{theorem}
\begin{proof}
In the event of Case-A of the decoding, the given conditions ((\ref{decodAcond1}) and (\ref{decodAcond2})) together with Proposition \ref{minweighttime} prove the given claim that errors with their error pattern in $\Phi$ will be corrected as long as no two consecutive error events occur within $T_{d_{free}}({\cal C}_s)$ network uses.
In fact, condition (\ref{decodAcond1}) implies that network-errors with pattern in $\Phi$ will be corrected at sink $T$, as long as consecutive error events are separated by at least $T_{d_{free}}({\cal C}_{T})$ network uses.
Now we consider Case B of the decoding. Suppose that the set of error sequences in the formulation given, $\boldsymbol{w'}(z)$, is due to network-errors that have their pattern as some $\rho \in \Phi$, such that any two consecutive such network-errors are separated by at least $T_{d_{free}}({\cal C}_s)$ network uses.
Then, from (\ref{ts}), we have that the maximum Hamming weight of any error event embedded in $\boldsymbol{w'}(z)$ would be at most $t_s$, and any two consecutive error events would be separated by at least $T_{d_{free}}({\cal C}_s)$ segments of the trellis of the code ${\cal C}_s$. Because of the free distance of the code chosen, and along with Proposition \ref{minweighttime}, we have that such errors will get corrected when decoding on the trellis of the input convolutional code.
\end{proof}
\subsection{Bounds on the field size and $T_{d_{free}}({\cal C}_s)$}
\label{sec4e}
\subsubsection{Bound on field size}
The following theorem gives a sufficient field size for an $(n,k)$ convolutional code satisfying the free distance condition ($\geq 2t_s+1$) to be constructed.
\begin{theorem}
\label{fieldsizebound}
The code ${\cal C}_s$ can be constructed and used to multicast $k$ symbols to the set of sinks ${\cal T}$ along with the required error correction in the given instantaneous network with min-cut $n$ ($n>k$), if the field size $q$ is such that
\[
n|q-1 ~~~~\text{ and } ~~~~ q > \max\left\{|{\cal T}|,\frac{2n^2}{n-k}+2 \right\}.
\]
\end{theorem}
\begin{proof}
The condition that
\[
q>|{\cal T}|
\]
is from the known sufficient condition \cite{HMKKESL} for the existence of a linear multicast network code. \newline
For the other conditions, we first note that in the construction of Subsection \ref{construction}, $t_s\leq n$. In the worst case that $t_s=n$, we need $d_{free}({\cal C}_s) \geq 2n+1$. We have from the generalized Singleton bound:
\[
d_{free}({\cal C}_s)\leq(n-k)(\left\lfloor \delta / k \right\rfloor + 1) + \delta + 1.
\]
In order that $d_{free}({\cal C}_s)$ be at least $2n+1$, we let $\delta = 2k$, in which case the R.H.S.\ of the inequality becomes
\begin{eqnarray*}
(n-k)(\left\lfloor 2k / k \right\rfloor + 1) + 2k + 1 \\
= 2n+(n-k)+1 > 2n+1
\end{eqnarray*}
Thus, with $\delta = 2k$, from (\ref{fieldsizeconv}) we have that an $(n,k,\delta = 2k)$ MDS convolutional code can be constructed based on \cite{RLS} if
\[
n|q-1 \text{ and } q > \frac{2n^2}{n-k}+2.
\]
Such an MDS convolutional code satisfies the requirements in the construction ($d_{free}({\cal C}_s) \geq 2n+1$), and hence the theorem is proved.
\end{proof}
\subsubsection{Bound on $T_{d_{free}}({\cal C}_s)$}
Towards obtaining a bound on $T_{d_{free}}({\cal C}_s)$, we first prove the following lemma.
\begin{lemma}
\label{deltazeroes}
Let $\cal C$ be a rate $b/c$ convolutional code with degree $\delta$ and $S_{d_{free}}$ be defined as in Subsection \ref{sec4a} for a minimal encoder (a controller canonical form realization \cite{JoZ} of a minimal basic generator matrix, $G_{mb}(z)$, of $\left.{\cal C}\right)$. Then any
\[
\boldsymbol{v}_{[0,j)} = \left[\boldsymbol{v}_0,\boldsymbol{v}_1,. . . ,\boldsymbol{v}_{j-1}\right] \in S_{d_{free}}
\]
cannot have $\delta$ zeros in $\delta$ consecutive segments, i.e., at least one of $\boldsymbol{v}_i,\boldsymbol{v}_{i+1},\ldots,\boldsymbol{v}_{i+\delta-1}$ is non-zero for all $0 \leq i \leq j- \delta.$
\end{lemma}
\begin{IEEEproof}
Let the ordered Forney indices (row degrees of $G_{mb}(z)$) be $\nu_1,\nu_2,\ldots,\nu_b=\nu_{max}$, with $\delta$ being the sum of these indices. Then a systematic generator matrix $G_{sys}(z)$ for $\cal C$ that is equivalent to $G_{mb}(z)$ is of the form
\begin{equation*}
G_{sys}(z)=T^{-1}(z)G_{mb}(z)
\end{equation*}
where $T(z)$ is a full rank $b \times b$ submatrix of $G_{mb}(z)$ with a delay-free determinant. We have the following observation.
\begin{observation}
\label{degree}
The degree of $det\left(T(z)\right)$ is clearly at most $\delta.$ Also, we have the $(i,j)^{th}$ element $t_{i,j}(z)$ of $T^{-1}(z)$ as
\[
t_{i,j}(z)= \frac{Cofactor\left(T(z)_{j,i}\right)}{det\left(T(z)\right)}
\]
where $Cofactor(T(z)_{j,i}) \in \mathbb{F}_q[z]$ is the cofactor of the $(j,i)^{th}$ element of $T(z).$ The degree of $Cofactor(T(z)_{j,i})$ is at most $\delta - \nu_j \leq \delta - \nu_1.$
Let $a_{i,j}(z) \in \mathbb{F}_q(z)$ represent the $(i,j)^{th}$ element of $G_{sys}(z),$ where
\begin{eqnarray*}
a_{i,j}(z)=\sum_{k=1}^{b}t_{i,k}(z)g_{k,j}(z)~~~~~~~~ \\
~~~~~~~= \frac{\sum_{k=1}^{b}Cofactor(T(z)_{k,i})g_{k,j}(z)}{det\left(T(z)\right)}
\end{eqnarray*}
$g_{k,j}(z)$ being the $(k,j)^{th}$ element of $G_{mb}(z).$ Therefore, the element $a_{i,j}(z)$ can be expressed as
\[
a_{i,j}(z) = \frac{p_{i,j}(z)}{det\left(T(z)\right)}
\]
where the degree of $p_{i,j}(z) \in \mathbb{F}_q[z]$ is at most $\delta + \nu_{max} - \nu_1.$ Now if we divide $p_{i,j}(z)$ by $det\left(T(z)\right)$, we have
\begin{equation}
\label{eqn4}
a_{i,j}(z)= q_{i,j}(z) + \frac{r_{i,j}(z)}{det\left(T(z)\right)}
\end{equation}
where the degree of $q_{i,j}(z) \in \mathbb{F}_q[z]$ is at most $\nu_{max}-\nu_1$, and the degree of $r_{i,j}(z)$ is at most $\delta - 1.$ Because every element of $G_{sys}(z)$ can be reduced to the form in (\ref{eqn4}), we can have a realization of $G_{sys}(z)$ with at most $\delta$ memory elements for each of the $b$ inputs. Let this encoder realization be known as $E.$
\end{observation}
Now we shall prove the lemma by contradiction. Suppose there exists a codeword $\boldsymbol{v}(z)=\left[\boldsymbol{v}_0,\boldsymbol{v}_1,...,\boldsymbol{v}_{j-2},\boldsymbol{v}_{j-1},\boldsymbol{v}_j,...\right]$ such that $\boldsymbol{v}_{[0,j)} = \left[\boldsymbol{v}_0,\boldsymbol{v}_1,\ldots,\boldsymbol{v}_{j-1}\right] \in S_{d_{free}}$ and $\boldsymbol{v}_i,\boldsymbol{v}_{i+1},\ldots,\boldsymbol{v}_{i+\delta-1}$ are all zero for some $i$ such that $0 \leq i \leq j- \delta.$
Let $\boldsymbol{u}_s(z)$ be the information sequence which is encoded into $\boldsymbol{v}(z)$ by the systematic encoder $E.$ Because of the systematic property of $E$, we must have that $\boldsymbol{u}_i,\boldsymbol{u}_{i+1},\ldots,\boldsymbol{u}_{i+\delta-1}$ are also all zero. By Observation \ref{degree}, $E$ is an encoder which has at most $\delta$ delay elements (for each input), and hence the state vector $\boldsymbol{\sigma}_{i+\delta}$ at time instant $i+\delta$ becomes zero as a result of these $\delta$ zero input vectors. Fig. \ref{fig:tdfreebound} shows the scenario we consider.
\begin{figure}[htbp]
\centering
\includegraphics[totalheight=2.66in,width=3.2in]{tdfreebound2.eps}
\caption{The trellis corresponding to a systematic encoder of $\cal C$}
\label{fig:tdfreebound}
\end{figure}
Therefore the codeword $\boldsymbol{v}(z)$ can be written as a unique sum of two code words $\boldsymbol{v}(z)=\boldsymbol{v}'(z)+\boldsymbol{v}''(z)$, where
\[
\boldsymbol{v}'(z)=\sum_{k=0}^{i-1}\boldsymbol{v}_kz^k=\left[\boldsymbol{v}_0,...,\boldsymbol{v}_{i}=\boldsymbol{0},...,\boldsymbol{v}_{i+\delta-1}=\boldsymbol{0},\boldsymbol{0},...\right]
\]
and
\[
\boldsymbol{v}''(z)=\sum_{k=i+\delta}^{\infty}\boldsymbol{v}_kz^k=\left[\boldsymbol{0},\boldsymbol{0},...,\boldsymbol{0},\boldsymbol{v}_{i+\delta},\boldsymbol{v}_{i+\delta+1},...,\boldsymbol{v}_{j},...\right]
\]
where $\boldsymbol{0}\in \mathbb{F}_q^c$ and the uniqueness of the decomposition holds with respect to the positions of the zeros indicated in the two code words $\boldsymbol{v}'(z)$ and $\boldsymbol{v}''(z).$
Let $\boldsymbol{u}_{mb}(z)$ be the information sequence which is encoded into $\boldsymbol{v}(z)$ by a minimal realization $E_{mb}$ of a minimal basic generator matrix $G_{mb}(z)$ (a minimal encoder). Then we have
\[
\boldsymbol{u}_{mb}(z)=\boldsymbol{u}'_{mb}(z)+\boldsymbol{u}''_{mb}(z)
\]
where $\boldsymbol{u}'_{mb}(z)$ and $\boldsymbol{u}''_{mb}(z)$ are encoded by $E_{mb}$ into $\boldsymbol{v}'(z)$ and $\boldsymbol{v}''(z)$ respectively.
By the \textit{predictable degree property} (PDP) \cite{JoZ} of minimal basic generator matrices, we have that for any polynomial code sequence $\boldsymbol{v}(z)$,
\[
deg\left(\boldsymbol{v}(z)\right)=\max_{1\leq l \leq b}\left\{deg\left(\boldsymbol{u}_{mb,l}(z)\right)+\nu_l\right\},
\]
where $\boldsymbol{u}_{mb,l}(z) \in \mathbb{F}_q[z]$ represents the information sequence corresponding to the $l^{th}$ input, and $deg$ indicates the degree of the polynomial. Therefore, by the PDP, we have that $deg\left(\boldsymbol{u}'_{mb}(z)\right) < i$, since $deg\left(\boldsymbol{v}'(z)\right)<i$.
Also, it is known that in the trellis corresponding to a minimal realization of a minimal basic generator matrix, there exists no non-trivial transition from the all-zero state to a non-zero state that produces a zero output. Therefore we have $deg\left(\boldsymbol{u}''_{mb}(z)\right) \geq i+\delta$, with equality being satisfied if $\boldsymbol{v}_{i+\delta}\neq \boldsymbol{0}.$ Therefore, $\boldsymbol{u}_{mb}(z)$ is of the form
\begin{eqnarray*}
\boldsymbol{u}_{mb}(z)=\boldsymbol{u}'_{mb}(z)+\boldsymbol{u}''_{mb}(z) ~~~~~~~~~~~~~~~~~~~~~~~~~~\\
=\sum_{k=0}^{i-1}\boldsymbol{u}'_{mb,k}z^k + \sum_{k=i+\delta}^{\infty}\boldsymbol{u}''_{mb,k}z^k ~~~~~~~~~~~~\\
\boldsymbol{u}_{mb}(z)=\left[\boldsymbol{u}'_{mb,0},..,\boldsymbol{u}'_{mb,i-1},\boldsymbol{0},\boldsymbol{0},..\right]~~~~~~~~~~~~~~~~~~~~~~~~~\\
~~~~~~~~~~~~~~~~~+\left[\boldsymbol{0},..,\boldsymbol{0},\boldsymbol{u}''_{mb,i+\delta},\boldsymbol{u}''_{mb,i+\delta+1},..\right]~~~~~~~~~~~~~~
\end{eqnarray*}
i.e,
\[
\boldsymbol{u}_{mb}(z)=\left[\boldsymbol{u}_{mb,0},\boldsymbol{u}_{mb,1},...,\boldsymbol{u}_{mb,i},...,\boldsymbol{u}_{mb,i+\delta-1},\boldsymbol{u}_{mb,i+\delta},..\right]
\]
where $\boldsymbol{u}_{mb,i}=\boldsymbol{u}_{mb,i+1}=...=\boldsymbol{u}_{mb,i+\delta-1}=\boldsymbol{0} \in \mathbb{F}_q^b.$
With the minimal encoder $E_{mb}$, which has at most $\nu_{b}$ memory elements for each input, these $\delta$ consecutive zeros would result in the state vector $\boldsymbol{\sigma}_{mb,t}$ becoming zero at time instant $i+\nu_{b}{\leq}i+\delta {\leq}j$, i.e.,
$\boldsymbol{\sigma}_{mb,i+\nu_{b}}=\boldsymbol{0}.$ But the definition of $S_{d_{free}}$ excludes such a condition, which means that $\boldsymbol{v}_{[0,j)} \notin S_{d_{free}},$ contradicting our original assumption. Thus we have proved our claim.
\end{IEEEproof}
We shall now prove the following bound on $T_{d_{free}}({\cal C})$.
\begin{proposition}
\label{prop}
Let $\cal C$ be a $(c,b,\delta)$ convolutional code. Then
\begin{equation}
\label{eqn19}
T_{d_{free}}({\cal C}) \leq \left(d_{free}\left({\cal C}\right)-1\right)\delta+1.
\end{equation}
\end{proposition}
\begin{IEEEproof}
Let $\boldsymbol{v}_{[0,j)} \in S_{d_{free}}$ be some truncated codeword. Then we have $w_H\left(\boldsymbol{v}_{[0,j)}\right) \leq d_{free}\left({\cal C}\right)-1.$ By Lemma \ref{deltazeroes}, we have that in any consecutive $\delta$ segments, the Hamming weight of $\boldsymbol{v}_{[0,j)}$ is at least $1.$ With this observation, and by the definition of $T_{d_{free}}({\cal C})$, we have (\ref{eqn19}), thus proving the proposition.
\end{IEEEproof}
Thus, for a network error correcting MDS convolutional code ${\cal C}_s$, we have the following bound on $T_{d_{free}}({\cal C}_s).$
\begin{corollary}
If the code ${\cal C}_s$ chosen in the construction of Subsection \ref{construction} is an $(n,k)$ MDS convolutional code, then we have the following bound on $T_{d_{free}}({\cal C}_s).$
\begin{equation}
\label{boundtdf}
T_{d_{free}}({\cal C}_s) \leq 6nk-2k^2+1.
\end{equation}
\end{corollary}
\begin{IEEEproof}
In the construction of Subsection \ref{construction}, if the code ${\cal C}_s$ selected is an MDS convolutional code, then we know from the proof of Theorem \ref{fieldsizebound} that choosing the degree $\delta = 2k$ ensures the required error correcting capability. Moreover, for an $(n,k,\delta)$ MDS convolutional code, we have
\[
d_{free}({\cal C}) = (n-k)(\lfloor\delta/k\rfloor + 1) + \delta + 1
\]
Therefore, substituting this value for $d_{free}({\cal C}_s)$ with $\delta = 2k$ in (\ref{eqn19}) of Proposition \ref{prop}, we have (\ref{boundtdf}).
\end{IEEEproof}
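For instance, an $(n,k,\delta)=(2,1,2)$ MDS convolutional code has $d_{free}=(2-1)\left(\left\lfloor 2/1 \right\rfloor+1\right)+2+1=6$, and (\ref{eqn19}) then gives $T_{d_{free}}({\cal C}_s) \leq (6-1)\cdot 2+1=11$, in agreement with $6nk-2k^2+1=11$ from (\ref{boundtdf}).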
\section{Illustrative Examples}
\label{sec5}
\subsection{Code construction for the butterfly network}
The most common example quoted in network coding literature, the butterfly network, is shown in Fig. \ref{fig:ButterflyNetwork}. Let us assume the ancestral ordering as given in the figure. Every edge is assumed to have unit capacity. It is noted that the network code in place for the butterfly network as shown is a generic network code for all field sizes. We seek to design a convolutional code for this network which will correct all single edge errors.
\begin{figure}[htbp]
\centering
\includegraphics[totalheight=2.2in,width=3.6in]{butterfly1.eps}
\caption{Butterfly network}
\label{fig:ButterflyNetwork}
\end{figure}
\begin{example}[Butterfly network under a binary field]
The network transfer matrix for sink $T_1$ is the full rank $2 \times 2$ matrix
\[
M_{T_1} = \left[ \begin{array}{cc}
1 & 1 \\
0 & 1 \end{array} \right]=AF_{T_1}
\]
where
\[
A =\left[ \begin{array}{ccccccccc}
1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \end{array} \right]
\]
\[
F_{T_1} = \left[ \begin{array}{ccccccccc}
1 & 0 & 1 & 0 & 0 & 0 & 0 & 0 & 0\\
1 & 1 & 0 & 1 & 1 & 1 & 1 & 0 & 0
\end{array} \right]^T
\]
Similarly, for sink $T_2$, $ M_{T_2} = \left[ \begin{array}{cc}
1 & 0 \\
1 & 1 \end{array} \right]=AF_{T_2}$
where
\[
F_{T_2} = \left[ \begin{array}{ccccccccc}
1 & 1 & 0 & 1 & 1 & 1 & 0 & 1 & 0\\
0 & 1 & 0 & 0 & 0 & 0 & 0 & 0 & 1
\end{array} \right]^T
\]
For single edge errors, the error pattern set is
\[
\Phi=\left\{\left\{e_i\right\}:i=1,2,...,9\right\}.
\]
Then the set of $9$-length error vectors over $\mathbb{F}_2$, ${\cal W}_{\Phi} = $
\[
\left\{(1,0,0,...,0),(0,1,0,...,0),...,(0,0,0,...,0,1),(0,0,0,...,0)\right\}
\]
For both sinks $T_1$ and $T_2$, we have
\[
{\cal W}_{T}={\cal W}_{T_1} = {\cal W}_{T_2} = \left\{(0,0),(0,1),(1,0),(1,1)\right\}
\]
Since $M_{T_1}^{-1}=M_{T_1}$ and $M_{T_2}^{-1}=M_{T_2}$, we have
\[
{\cal W}_s =\bigcup_{T\in{\cal T}} \left\{\boldsymbol{w}_{_T}M_T\text{ }|\text{ }\boldsymbol{w}_{_T}\in{\cal W}_{T}\right\}
\]
\[
{\cal W}_s = \left\{(0,0),(0,1),(1,0),(1,1)\right\}
\]
Now we have $t_s = \max_{\boldsymbol{w}_s \in {\cal W}_s}w_H(\boldsymbol{w}_s) = 2.$ Hence a convolutional code with free distance at least $2t_s+1 = 5$ is required to correct these errors. With the min-cut $n$ being 2, let $k = 1$. Let this input convolutional code ${\cal C}_s$ be generated by the generator matrix $G_I(z) = \left[1 + z^2\text{ }\text{ }1+z+z^2\right].$
This code is a degree $2$ convolutional code with free distance $5$, and $T_{d_{free}}({\cal C}_s)=6$. Hence, by Theorem \ref{maintheorem}, this code will correct all single edge errors under the condition that consecutive single edge errors are separated by at least $6$ network uses. Now the sinks must select between Case A and Case B for decoding, based upon their output convolutional codes.
The output convolutional code that is `seen' by the sink~$T_1$ has a generator matrix
\[
G_{O,T_1}(z) = G_I(z)M_{T_1} = [1+z^2\text{ }\text{ }z].
\]
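This product is easily verified mechanically; a small sketch (ours, illustrative only) over $\mathbb{F}_2$, with the two polynomial entries of $G_I(z)$ stored as coefficient rows, lowest degree first:
\begin{verbatim}
import numpy as np

G_I = np.array([[1, 0, 1],     # 1 + z^2
                [1, 1, 1]])    # 1 + z + z^2
M_T1 = np.array([[1, 1],
                 [0, 1]])
# Entry j of G_O collects sum_i (M_T1)_{ij} (G_I)_i, i.e. G_O = M_T1^T G_I.
G_O = M_T1.T @ G_I % 2
print(G_O)   # [[1 0 1], [0 1 0]], i.e. [1 + z^2, z]
\end{verbatim}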
This code seen by sink $T_1$ has a free distance of only $3$, which is less than $2\left(\max_{\boldsymbol{w}_{T_1} \in {\cal W}_{T_1}} w_H(\boldsymbol{w}_{T_1})\right)+1 = 5$. Hence Case B applies and decoding is done on the trellis of the input convolutional code after processing. \\
Similarly, the convolutional code thus seen by the sink node $T_2$ has the generator matrix
\[
G_{O,T_2}(z) = G_I(z)M_{T_2}(z) = [z\text{ }\text{ }1+z+z^2]
\]
This is a free distance $4$ code, which is again less than $2\left(\max_{\boldsymbol{w}_{T_2} \in {\cal W}_{T_2}} w_H(\boldsymbol{w}_{T_2})\right)+1 = 5$. Hence, for this sink too, Case B applies and decoding is done on the input trellis. \newline
\end{example}
\begin{example}[Butterfly network under a ternary field]
We now present another example to illustrate the case when the field size and the choice of the input convolutional code affect the error correction capabilities of the output convolutional codes at the sinks.
Let us assume the butterfly network with the network code being the same as the previous case, but over $\mathbb{F}_3$. The network transfer matrices in this case are the same as before, but the symbols are from $\mathbb{F}_3$.
We seek to correct single edge errors in this case too. Thus the error pattern set is the same as the previous case. Now we have the set of $9$-length error vectors over $\mathbb{F}_3$
\begin{eqnarray*}
{\cal W}_{\Phi} = \left\{(1,0,0,...,0),(0,1,0,...,0),...,(0,0,0,...,0,1),\right. \\ \left. (2,0,0,...,0),(0,2,0,...,0),...,(0,0,0,...,0,2),(0,0,0,...,0)\right\}
\end{eqnarray*}
The other sets are as follows. ${\cal W}_{T}={\cal W}_{T_1} = {\cal W}_{T_2} =$
\[
\left\{(0,0),(0,1),(1,0),(1,1),(0,2),(2,0),(2,2)\right\}
\]
With $ M_{T_1}^{-1} = \left[ \begin{array}{cc} 1 & 2 \\ 0 & 1 \end{array} \right]
\text{ and }
M_{T_2}^{-1} = \left[ \begin{array}{cc}
1 & 0 \\
2 & 1 \end{array} \right], $
we have
\[
{\cal W}_s = \bigcup_{T\in{\cal T}} \left\{\boldsymbol{w}_{_T}M_T^{-1}\text{ }|\text{ }\boldsymbol{w}_{_T}\in{\cal W}_{T}\right\}
\]
\[
=\left\{(0,0),(0,1),(1,0),(1,2),(0,2),(2,1),(2,0)\right\}
\]
Thus, again we have,
\[
t_s = \max_{\boldsymbol{w}_s \in {\cal W}_s}w_H(\boldsymbol{w}_s) = \max_{\boldsymbol{w}_{_T} \in {\cal W}_T}w_H(\boldsymbol{w}_{_T}) = 2.
\]
Hence a convolutional code with free distance at least $2t_s+1 = 5$ is required to correct all single errors.
We compare the error correction capability of the output convolutional code at each sink for two input convolutional codes, ${\cal C}_s$ and ${\cal C}_s'$ generated by the matrices
\[
G_I(z) = \left[1 +z^2\text{ }\text{ }1+z+z^2\right]
\]
and
\[
G_I'(z) = \left[1 + z^2\text{ }\text{ }1+z+2z^2\right]
\]
respectively, each over $\mathbb{F}_3[z]$.
Both of these codes are degree $2$ convolutional codes and have free distances $d_{free}({\cal C}_s)=d_{free}({\cal C}_s')=5$, with $T_{d_{free}}({\cal C}_s)=T_{d_{free}}({\cal C}_s')=6$.
First, we discuss the case where the input convolutional code is ${\cal C}_s$. The sink $T_1$ thus sees the code generated by
\[
G_{O,T_1}(z) = G_I(z)M_{T_1} = [1 + z^2\text{ }\text{ }2+z+2z^2]
\]
which has a free distance of $5$, with $T_{d_{free}}({\cal C}_{T_1}) = 6 = T_{d_{free}}({\cal C}_s)$. Thus decoding is done on the output trellis at sink $T_1$ to correct all single edge errors as long as they are separated by at least $6$ network uses. Sink $T_2$ sees the code generated by
\[
G_{O,T_2}(z)=[2+z+2z^2\text{ }\text{ }1+z+z^2]
\]
which has $d_{free}=6$, and $T_{d_{free}}({\cal C}_{T_2}) = 6 = T_{d_{free}}({\cal C}_s).$
Therefore, sink $T_2$ can also decode directly on the output trellis and obtain the required error correction as long as consecutive errors are separated by at least $6$ network uses. Upon carrying out a similar analysis with the input convolutional code being ${\cal C}_s'$, we give the following tables for comparison.
\begin{table}[htbp]
\centering
\caption{Butterfly network with ${\cal C}_s[d_{free}({\cal C}_s)=5,T_{d_{free}}({\cal C}_s)=6]$} \begin{tabular}{|c|c|c|c|}\hline
\textbf{Sink} & \textbf{Output convolutional } & \textbf{$d_{free}({\cal C}_{T_i})$}, & \textbf{Decoding on}\\
& \textbf{code} $[G_{O,T_i}(z)]$ & $T_{d_{free}}({\cal C}_{T_i})$ & \\
\hline
$T_1$ & $[1+z^2\text{ }\text{ }2+z+2z^2]$ & 5,6 & Output trellis \\
\hline
$T_2$ & $[2+z+2z^2\text{ }\text{ }1+z+z^2]$ & 6,6 & Output trellis\\
\hline
\end{tabular}
\label{tab3}
\end{table}
\begin{table}[htbp]
\centering
\caption{Butterfly network with ${\cal C}_s' [d_{free}({\cal C}_s')=5,T_{d_{free}}({\cal C}_s')=6 ]$} \begin{tabular}{|c|c|c|c|}\hline
\textbf{Sink} & \textbf{Output convolutional} & \textbf{$d_{free}({\cal C}_{T_i})$}, & \textbf{Decoding on}\\
& \textbf{code} $[G_{O,T_i}(z)]$ & $T_{d_{free}}({\cal C}_{T_i})$ & \\
\hline
$T_1$ & $[1+z^2\text{ }\text{ }2+z]$ & 4,3 & Input trellis \\
\hline
$T_2$ & $[2+z\text{ }\text{ }1+z+2z^2]$ & 5,5 & Output trellis\\
\hline
\end{tabular}
\label{tab4}
\end{table}
With the input convolutional code being ${\cal C}_s$, conditions (\ref{decodAcond1}) and (\ref{decodAcond2}) are satisfied at both sinks. Hence additional processing can be avoided at both sinks and they can decode on the output trellis directly, and get single edge error correction under the constraint that consecutive single edge errors are separated by at least $6$ network uses.
However with ${\cal C}_s'$, one of the sinks, $T_1$, does not have sufficient free distance at its output convolutional code, and hence has to process the incoming symbols using $M_{T_1}^{-1}$ and thereby decode on the trellis of the input convolutional code.
Thus it can be seen that using a larger field size and choosing the input convolutional code appropriately can give more desirable properties to the output convolutional codes at the sinks.
\end{example}
\subsection{Code construction for the $_4C_2$ network}
\begin{example}[$_4C_2$ combination network under $\mathbb{F}_3$]
\label{combiexample1}
Let us consider the combination network in Fig. \ref{fig:4C2Network}. The network transfer matrices for the $6$ sinks are as in Table \ref{tab1}. We seek to design a convolutional code that will correct all network-errors whose error vectors have Hamming weight at most 2 (i.e., single and double edge errors).
The error pattern set is thus
\[
\Phi~=~\left\{\left\{e_i,e_j\right\}:i,j=1,2,\ldots,16 \text{ and }i \neq j\right\}
\]
The set ${\cal W}_{\Phi}$ is the set of all $16$-length vectors with Hamming weight at most $2.$ We have
\[
{\cal W}_{T_1} = {\cal W}_{T_2} = ... = {\cal W}_{T_6} = \mathbb{F}^2_3
\]
and
\[
{\cal W}_s = \bigcup_{T\in{\cal T}} \left\{\boldsymbol{w}_{_T}M_T^{-1}\text{ }|\text{ }\boldsymbol{w}_{_T}\in{\cal W}_{T}\right\} = \mathbb{F}^2_3
\]
For every sink $T_i$, we have
\[
\max_{\boldsymbol{w}_{T_i} \in {\cal W}_{T_i}}w_H(\boldsymbol{w}_{T_i})= \max_{\boldsymbol{w}_s \in {\cal W}_s}w_H(\boldsymbol{w}_s)= t_s = 2
\]
Therefore the input convolutional code needs to have free distance at least $5$.
As in Example \ref{exm1}, let the input convolutional code, ${\cal C}_{s}$, over $\mathbb{F}_3[z]$ be generated by the matrix
\[
G_I(z) = \left[1 + z^2\text{ }\text{ }1+z+z^2\right].
\]
This code has free distance $5$, and $T_{d_{free}}({\cal C}_{s})=6.$
Each sink decodes on either the input or the output trellis depending upon whether $d_{free}({\cal C}_{T_i}) \geq 2t_s+1$ and $T_{d_{free}}({\cal C}_{s}) \geq T_{d_{free}}({\cal C}_{T_i})$ hold, and hence can correct all network-errors with their pattern in $\Phi$ as long as consecutive errors are separated by at least $6$ network uses. The output convolutional codes at the sinks, their free distances and their $T_{d_{free}}({\cal C}_{T_i})$ are shown in Table \ref{tab2}.
\begin{table}[htbp]
\centering
\caption{$_4C_2$ network with $G_I(z) = \left[1 + z^2\text{ }\text{ }1+z+z^2\right]$}
\begin{tabular}{|c|c|c|c|}\hline
\textbf{Sink} & \textbf{Output} & \textbf{$d_{free}({\cal C}_{T_i})$}, & \textbf{Decoding on}\\
& \textbf{convolutional code} & $T_{d_{free}}({\cal C}_{T_i})$ & \\
\hline
$T_1$ & $[1+z^2\text{ }\text{ }1+z+z^2]$ & 5,6 & Output trellis\\
\hline
$T_2$ & $[1+z^2\text{ }\text{ }2+z+2z^2]$ & 5,6 & Output trellis \\
\hline
$T_3$ & $[1+z^2\text{ }\text{ }2z]$ & 3,4 & Input trellis \\
\hline
$T_4$ & $[1+z+z^2\text{ }\text{ }2+z+2z^2]$ & 6,6 & Output trellis\\
\hline
$T_5$ & $[1+z+z^2\text{ }\text{ }2z]$& 4,5 & Input trellis\\
\hline
$T_6$ & $[2+z+2z^2\text{ }\text{ }2z]$& 4,5 & Input trellis\\
\hline
\end{tabular}
\label{tab2}
\end{table}
\end{example}
\section{Comparison with block network error correction codes}
\label{sec5.5}
The approach of \cite{YaY} can also be used to obtain network error correcting codes that correct $t$ edge errors once in every $J$ network uses (for some positive integer $J$). A time-expanded graph would then be used, i.e., with the network nodes (those except the source and sinks) and edges replicated for each additional time instant.
Suppose the network has been replicated $J$ times. Then the algorithm in \cite{YaY} can be employed to obtain a $t$-error correcting BNECC for the time-expanded network, which equivalently for the original network gives a network error correcting code that corrects $t$ errors once in every $J$ network uses. It is noted that the sufficient field size $q$ required by the technique of \cite{YaY} to construct a $t$-error correcting BNECC for the time-expanded graph ($\cal T$ being the set of all sinks) is such that
\[
q > \sum_{T\in\cal{T}} \binom{J|{\cal E}|}{2t}.
\]
Our approach demands a field size according to Theorem \ref{fieldsizebound}, which is independent of the number of edges in the network. Although the error correcting capability might not be comparable to that offered by the BNECC, the reduction in field size is a considerable advantage in terms of the computation to be performed at each coding node of the network. Also, the use of convolutional codes permits decoding using the Viterbi decoder, which is readily available.
For example, one could design network error correcting codes according to \cite{YaY} for the butterfly network by using the twice replicated butterfly network as shown in Fig. \ref{fig:butterflytwice}. The time-expanded network has min-cut 4, and thus the technique in \cite{YaY} can be used to obtain BNECCs, which correct single or double edge errors in the butterfly network once in $2$ network uses. In either case, the sufficient field size $q$ is such that $q > 306$, although by trial and error a code could be found over a smaller field size. On the other hand, the convolutional code that we used here in our paper for the butterfly network is over the binary and ternary fields.
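The figure $q > 306$ follows directly from the bound above, as the following sanity check (ours) shows:
\begin{verbatim}
from math import comb

J, E, t, num_sinks = 2, 9, 1, 2         # twice-replicated butterfly
print(num_sinks * comb(J * E, 2 * t))   # 306, hence q > 306
\end{verbatim}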
\begin{figure}[htbp]
\centering
\includegraphics[totalheight=2.8in,width=3.6in]{butterfly_multiple.eps}
\caption{A twice replicated butterfly network. The edges are marked with a time index as to denote the time-expanded nature of the network.}
\label{fig:butterflytwice}
\end{figure}
\section{Discussion}
\label{sec6}
In the construction of Subsection \ref{construction}, the maximum Hamming weight $t_s$ of the vectors in the set ${\cal W}_s$ is such that $t_s\leq n.$ Clearly the actual value of $t_s$ is governed by the network code, and hence the network code influences the choice of the network-error correcting convolutional code. Therefore the network code designed should be such that $t_s$ is minimal, so that the free distance demanded of the network-error correcting convolutional code in the construction of Subsection \ref{construction} is minimal.
Also, for a particular error pattern set, the decoding procedure at the sinks (Case-A or Case-B of decoding as in Subsection \ref{decoding}) is influenced by the field size, the network code and the network-error correcting convolutional code chosen. The examples given in Section \ref{sec5} illustrate the construction of Subsection \ref{construction} and also compare the effects of change in field size and the convolutional code chosen to correct errors corresponding to a given fixed error pattern set.
\section*{Acknowledgment} This work was supported partly by the DRDO-IISc program on Advanced Research in Mathematical Engineering through a research grant to B.~S.~Rajan.
|
2,877,628,088,393 | arxiv | \section{Introduction}
Deep neural networks have achieved state-of-the-art results in various natural language processing (NLP) tasks~\citep{sutskever2014sequence,vaswani2017attention,devlin2019bert} and computer vision (CV) tasks~\citep{he2016deep,goodfellow2016deep}. One approach to improve the generalization performance of deep neural networks is data augmentation~\citep{xie2019unsupervised,jiao2019tinybert,cheng2019robust,cheng2020advaug}.
However, there are some problems if we directly incorporate these augmented samples into the training set. Minimizing the average loss on all these samples means treating them equally, without considering their different implicit impacts on the loss.
To address this, we propose to minimize a reweighted loss on these augmented samples so that the model utilizes them in a more effective way.
Example reweighting has previously been explored extensively in curriculum learning \citep{bengio2009curriculum,jiang2014self}, boosting algorithms~\citep{freund1999short}, focal loss~\citep{lin2017focal} and importance sampling~\citep{csiba2018importance}.
However, none of them focuses on reweighting augmented samples rather than the original training samples.
A recent work \citep{jiang2020beyond} also assigns different weights on augmented samples. But weights in their model are predicted by a mentor network, while we obtain the weights from the closed-form solution by minimizing the maximal expected loss (MMEL). In addition, they focus on image samples with noisy labels, while our method can generally be applied to textual data as well as image data.
\citet{tran2017bayesian} propose to minimize the loss on the augmented samples under the framework of the Expectation-Maximization algorithm. But they mainly focus on the generation of augmented samples.
Unfortunately, in practise there is no way to directly access the optimal reweighting strategy. Thus,
inspired by adversarial training \citep{madry2018towards}, we propose to minimize the maximal expected loss (MMEL) on augmented samples from the same training example. Since the maximal expected loss is the supremum over any possible reweighting strategy on augmented samples' losses, minimizing this supremum makes the model perform well under any reweighting strategy. More importantly, we derive a closed-form solution of the weights,
where augmented samples with larger training losses have larger weights.
Intuitively, MMEL allows the model to keep focusing on augmented samples that are harder to train.
\par
The procedure of our method is summarized as follows. We first generate the augmented samples with a commonly-used data augmentation technique, e.g., lexical substitution for textual input \citep{jiao2019tinybert}, or random crop and horizontal flip for image data \citep{krizhevsky2012imagenet}. Then we explicitly derive the closed-form solution of the weights on each of the augmented samples.
After that, we update the model parameters with respect to the reweighted loss.
The proposed method can generally be applied on top of any data augmentation method in various domains like natural language processing and computer vision.
Empirical results on both natural language understanding tasks and image classification tasks show that the proposed reweighting strategy consistently outperforms the counterpart without it, as well as other reweighting strategies like uniform reweighting.
\section{Related Work}\label{sec:related work}
\par \textbf{Data augmentation.}
Data augmentation is proven to be an effective technique to improve the generalization ability of various tasks, e.g., natural language processing \citep{xie2019unsupervised, zhu2020freelb,jiao2019tinybert}, computer vision \citep{krizhevsky2014cifar}, and speech recognition \citep{park2019specaugment}. For image data, baseline augmentation methods like random crop, flip, scaling, and
color augmentation \citep{krizhevsky2012imagenet} have been widely used. Other heuristic data augmentation techniques like Cutout~\citep{devries2017cutout} which masks image patches and Mixup~\citep{zhang2018mixup} which combines pairs of examples and their labels, are later proposed.
Automatic search for augmentation policies~\citep{cubuk2018autoaugment,lim2019fast} has recently been proposed to further improve performance.
For textual data, \citet{zhang2015character,wei2019eda} and \citet{wang2015s} respectively use lexical substitution based on
the embedding space. \citet{jiao2019tinybert,cheng2019robust,kumar2020data} generate augmented samples with a pre-trained language model. Some other techniques like back translation \citep{xie2019unsupervised}, random noise injection \citep{xie2017data} and data mixup \citep{guo2019augmenting,cheng2020advaug} are also proven to be useful.
\paragraph{Adversarial training.}
Adversarial learning is used to enhance the robustness of model \citep{madry2018towards}, which dynamically constructs the augmented adversarial samples by projected gradient descent across training.
Although adversarial training hurts the generalization of model on the task of image classification \citep{raghunathan2019adversarial}, it is shown that adversarial training can be used as data augmentation to help generalization in neural machine translation \citep{cheng2019robust,cheng2020advaug} and natural language understanding \citep{zhu2020freelb,jiang2020smart}. Our proposed method differs from adversarial training in that we adversarially decide the weight on each augmented sample, while traditional adversarial training adversarially generates augmented input samples.
\par
In \citep{behpour2019ada}, adversarial learning is used as data augmentation in object detection. The adversarial samples (i.e., bounding boxes that are maximally different from the ground truth) are reweighted to form the underlying annotation distribution.
However, besides the difference in the model and task, their training objective and the resultant solution are also different from ours.
\paragraph{Sample reweighting.}
Minimizing a reweighted loss on training samples has been widely explored in literature. Curriculum learning ~\citep{bengio2009curriculum,jiang2014self} feeds first easier and then harder data into the model to accelerate training. \cite{zhao2014accelerating,needell2014stochastic,csiba2018importance,katharopoulos2018not} use importance sampling to reduce the variance of stochastic gradients to achieve faster convergence rate. Boosting algorithms~\citep{freund1999short} choose harder examples to train subsequent classifiers. Similarly, hard example mining \citep{malisiewicz2011ensemble} downsamples the majority class and exploits the most difficult examples.
Focal loss~\citep{lin2017focal,goyal2018focal} focuses on harder examples by reshaping the standard cross-entropy loss in object detection. \citet{ren2018learning,jiang2018mentornet,shu2019meta} use meta-learning method to reweight examples
to handle the noisy label problem. Unlike all these existing methods, in this work, we reweight the augmented samples' losses instead of training samples.
\section{Minimize the Maximal Expected Loss}
\label{sec: minimize mel}
In this section, we derive our reweighting strategy on augmented samples from the perspective of maximal expected loss. We first give a derivation of the closed-form solution of the weights on augmented samples. Then we describe two kinds of loss under this formulation. Finally, we give the implementation details using the natural language understanding task as an example.
\subsection{Why Maximal Expected Loss}
Consider a
classification task with $N$ training samples.
For the $i$-th training sample $\boldsymbol{x}_i$, its label is denoted as $y_{\boldsymbol{x}_{i}}$.
Let $f_{\theta}(\cdot)$
be the model with parameter $\theta$, which outputs the classification probabilities, and let $\ell(\cdot, \cdot)$ denote the loss function, e.g., the cross-entropy loss between the output $f_{\theta}(\boldsymbol{x}_{i})$ and the ground-truth label $y_{\boldsymbol{x}_i}$. Given an original training sample $\boldsymbol{x}_{i}$, the set of augmented samples generated by some method
is $B(\boldsymbol{x}_i)$. Without loss of generality, we assume $\boldsymbol{x}_i\in B(\boldsymbol{x}_i)$. The conventional training objective is to minimize the loss on every augmented sample $\boldsymbol{z}$ in $B(\boldsymbol{x}_i)$
as
\begin{equation}
\small
\label{eq:expected loss}
\min_{\theta}\frac{1}{N}\sum\limits_{i=1}^{N}\left[\frac{1}{|B(\boldsymbol{x}_{i})|}\sum_{(\boldsymbol{z}, y_{\boldsymbol{z}})\in B(\boldsymbol{x}_{i})}\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})\right],
\end{equation}
where $y_{\boldsymbol{z}}$ is the label of $\boldsymbol{z}\in B(\boldsymbol{x}_{i})$, and can be different from $y_{\boldsymbol{x}_{i}}$.
$|B(\boldsymbol{x}_{i})|$ is the number of augmented samples in $B(\boldsymbol{x}_{i})$, which is assumed to be finite.
\par
In equation \eqref{eq:expected loss}, for each given $\boldsymbol{x}_i$, the weights on its augmented samples are the same (i.e., $1/|B(\boldsymbol{x}_i)|$). However,
different samples have different implicit impacts on the loss, and we can assign different weights to them to facilitate training.
Note that computing the weighted sum of the losses of the augmented samples in $B(\boldsymbol{x}_{i})$ can be viewed as taking the expectation of the loss over $\boldsymbol{z}\in B(\boldsymbol{x}_{i})$ under a certain distribution.
When the augmented samples generated from the same training sample are drawn from a uniform distribution,
the loss in equation \eqref{eq:expected loss} can be rewritten as
\begin{equation}\small\label{eq:em loss uniform}
\begin{aligned}
\min_{\theta} R_{\theta}(\mathbb{P}_{U}) = \min_{\theta}\frac{1}{N}\sum\limits_{i=1}^{N}\left[\mathbb{E}_{\boldsymbol{z}\sim \mathbb{P}_{U}(\cdot\mid \boldsymbol{x}_{i})}\left[\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})\right] - \lambda_{P}\textbf{KL}(\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})\parallel \mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i}))\right],
\end{aligned}
\end{equation}
where the Kullback–Leibler (KL) divergence $\textbf{KL}(\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})\parallel \mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i}))$ equals zero.
Here $\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})$ denotes the uniform distribution on $B(\boldsymbol{x}_{i})$.
When the augmented samples are drawn from a more general distribution $\mathbb{P}_{B}(\cdot\mid\cdot)$\footnote{In the following, we abbreviate $\mathbb{P}_{B}(\cdot\mid\cdot)$ as $\mathbb{P}_{B}$ if there is no ambiguity.} instead of the uniform distribution, we generalize $\mathbb{P}_{U}(\cdot\mid \cdot)$ to such a conditional distribution $\mathbb{P}_{B}$:
\begin{equation}\small\label{eq:objective}
\min_{\theta} R_{\theta}(\mathbb{P}_{B}) = \min_{\theta}\frac{1}{N}\sum\limits_{i=1}^{N}\left[\mathbb{E}_{\boldsymbol{z}\sim \mathbb{P}_{B}(\cdot\mid \boldsymbol{x}_{i})}\left[\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})\right] - \lambda_{P}\textbf{KL}(\mathbb{P}_{B}(\cdot\mid\boldsymbol{x}_{i})\parallel \mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i}))\right].
\end{equation}
\begin{remark}
When $\mathbb{P}_{B}(\cdot\mid \boldsymbol{x}_{i})$ reduces to the uniform distribution $\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})$ for any $\boldsymbol{x}_{i}$,
since $\emph{\textbf{KL}}(\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})\parallel \mathbb{P}_{U}(\cdot\mid \boldsymbol{x}_{i}))=0$, the objective in equation \eqref{eq:objective} reduces to the one in equation \eqref{eq:expected loss}.
\end{remark}
The KL divergence term in equation \eqref{eq:objective} is used as a regularizer to encourage $\mathbb{P}_{B}$ to stay close to $\mathbb{P}_{U}$ (see Remark \ref{rmk:kl}). From equation \eqref{eq:objective}, the conditional distribution $\mathbb{P}_{B}$ determines the weight of each augmented sample in $B(\boldsymbol{x}_{i})$. There may exist an optimal formulation of $\mathbb{P}_{B}$ in some regime, e.g., one corresponding to the optimal generalization ability of the model. Unfortunately, we cannot explicitly characterize such an unknown optimal $\mathbb{P}_{B}$. To address this, we borrow the idea from adversarial training \citep{madry2018towards} and minimize the maximal reweighted loss on augmented samples. Then, the model is guaranteed to perform well
under any reweighting strategy, including the underlying optimal one.
Specifically, let the conditional distribution $\mathbb{P}_{B}$ be $\mathbb{P}_{\theta}^{*} = \arg \sup_{\mathbb{P}_{B}}R_{\theta}(\mathbb{P}_{B}).$
Our objective is to minimize the following reweighted loss
\begin{equation}\small\label{eq:mmel loss}
\min_{\theta} R_{\theta}(\mathbb{P}^{*}_{\theta}) = \min_{\theta}\sup_{\mathbb{P}_{B}}R_{\theta}(\mathbb{P}_{B}).
\end{equation}
The following Remark~\ref{rmk:kl} discusses the KL divergence term in equation \eqref{eq:objective}.
\begin{remark}
\label{rmk:kl}
Since we take a supremum over $\mathbb{P}_{B}$ in equation \eqref{eq:mmel loss}, the regularizer $\emph{\textbf{KL}}(\mathbb{P}_{B}\parallel \mathbb{P}_{U})$ encourages $\mathbb{P}_{B}$ to be close to $\mathbb{P}_{U}$ because it reaches the minimal value zero when $\mathbb{P}_{B} = \mathbb{P}_{U}$. Thus the regularizer controls the diversity among the augmented samples by constraining the discrepancy between $\mathbb{P}_{B}$ and uniform distribution $\mathbb{P}_{U}$, e.g., a larger $\lambda_{P}$ promotes a larger diversity among the augmented samples.
\end{remark}
The following Theorem~\ref{thm:explicit loss} gives the explicit formulation of $R_{\theta}(\mathbb{P}_{\theta}^{*})$.
\begin{theorem}\label{thm:explicit loss}
Let $R_{\theta}(\mathbb{P}_{B})$ and $R_{\theta}(\mathbb{P}^{*}_{\theta})$ be defined as in equations \eqref{eq:objective} and \eqref{eq:mmel loss}; then we have
\begin{equation}\small
\label{eq:upper_bound_full}
R_{\theta}(\mathbb{P}_{\theta}^{*}) = \frac{1}{N}\sum\limits_{i=1}^{N}\sum_{\boldsymbol{z}\in B(\boldsymbol{x}_{i})}\left[\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}}) - \lambda_{P}\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\log{\left(|B(\boldsymbol{x}_{i})|\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\right)}\right],
\end{equation}
where
\begin{equation}\small\label{eq:probability measure}
\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i}) = \frac{\exp\left(\frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})\right)}{\sum_{\boldsymbol{z}'\in B(\boldsymbol{x}_{i})}\exp\left(\frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}'), y_{\boldsymbol{z}'})\right)} = {\rm Softmax}_{\boldsymbol{z}}\left(\frac{1}{\lambda_{P}}\ell(f_{\theta}(B(\boldsymbol{x}_i)), y_{B(\boldsymbol{x}_i)})\right),
\end{equation}
where ${\rm Softmax}_{\boldsymbol{z}}(\frac{1}{\lambda_{P}}\ell(f_{\theta}(B(\boldsymbol{x}_i)), y_{B(\boldsymbol{x}_i)}))$ denotes the entry corresponding to $\boldsymbol{z}$ of the Softmax of the vector $(\frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}_{1}), y_{\boldsymbol{z}_{1}}) ,\cdots, \frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}_{|B(\boldsymbol{x}_{i})|}), y_{\boldsymbol{z}_{|B(\boldsymbol{x}_{i})|}}) )$.
\end{theorem}
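For intuition, the closed form in \eqref{eq:probability measure} follows from the standard Gibbs variational principle (a finite-support case of the Donsker--Varadhan representation); we sketch the key step here as a reading aid, with the full proof deferred to the appendix. Fixing $\boldsymbol{x}_{i}$ and writing $g(\boldsymbol{z}) = \ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})$,
\begin{equation*}\small
\sup_{\mathbb{P}}\left\{\mathbb{E}_{\boldsymbol{z}\sim\mathbb{P}}\left[g(\boldsymbol{z})\right] - \lambda_{P}\textbf{KL}(\mathbb{P}\parallel \mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i}))\right\} = \lambda_{P}\log \mathbb{E}_{\boldsymbol{z}\sim\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})}\left[\exp\left(g(\boldsymbol{z})/\lambda_{P}\right)\right],
\end{equation*}
with the supremum attained at $\mathbb{P}^{*}(\boldsymbol{z})\propto \mathbb{P}_{U}(\boldsymbol{z}\mid\boldsymbol{x}_{i})\exp(g(\boldsymbol{z})/\lambda_{P})$, which reduces to the Softmax in \eqref{eq:probability measure} since $\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})$ is uniform on $B(\boldsymbol{x}_{i})$.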
\begin{remark}
If we ignore the KL divergence term in equation \eqref{eq:objective},
due to the equivalence of minimizing cross-entropy loss and MLE loss \citep{martens2014new}, the proposed MMEL also falls into the generalized Expectation-Maximization (GEM) framework \citep{dempster1977maximum}.
Specifically, given a training example, its augmented samples can be viewed as a latent variable, and any reweighting of these augmented samples corresponds to a specific conditional distribution of the augmented samples given the training sample. In the expectation step (E-step),
we explicitly derive the closed-form solution of the weights on each of these augmented samples according to \eqref{eq:probability measure}. In the maximization step, since there is no analytical solution for deep neural networks, following \citep{tran2017bayesian}, we update the model parameters with respect to the reweighted loss by one step of gradient descent.
\end{remark}
The proof of this theorem can be found in Appendix \ref{app:proof of themrem 1}. From
Theorem~\ref{thm:explicit loss},
the loss of each augmented sample $\boldsymbol{z} \in B(\boldsymbol{x}_i)$ decides its weight, which is
normalized by Softmax over all augmented samples in $B(\boldsymbol{x}_i)$. The reweighting strategy thus pays more attention to augmented samples with higher loss values. The strategy is similar to those in \citep{lin2017focal,zhao2014accelerating}, but they apply it to training samples.
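As a concrete illustration, the weights in \eqref{eq:probability measure} amount to a few lines of code. The following PyTorch sketch (the function name is ours) assumes the per-augmented-sample losses have already been stacked into a vector, and detaches them so that the weights act as constants in the subsequent gradient step, in line with the E-step/M-step view above:
\begin{verbatim}
import torch

def mmel_weights(losses: torch.Tensor, lambda_p: float = 1.0) -> torch.Tensor:
    # losses: shape [|B(x_i)|], one loss value per augmented sample.
    # Softmax of losses / lambda_p gives the closed-form weights; detach()
    # keeps the weights fixed when differentiating the reweighted loss.
    return torch.softmax(losses.detach() / lambda_p, dim=-1)
\end{verbatim}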
\subsection{Two Types of Loss}
For an augmented sample $\boldsymbol{z}\in B(\boldsymbol{x}_{i})$, instead of computing the discrepancy between the output probability $f_{\theta}(\boldsymbol{z})$ and the hard label $y_{\boldsymbol{z}}$ as in equation \eqref{eq:upper_bound_full}, one can also compute the discrepancy between $f_{\theta}(\boldsymbol{z})$ and the ``soft'' probability $f_{\theta}(\boldsymbol{x}_{i})$ in the absence of ground-truth labels on augmented samples, as in \citep{xie2019unsupervised}. In the following, we use the superscript ``hard'' for the loss in equation \eqref{eq:upper_bound_full} as
\begin{equation}\small
\label{eq:hard loss}
R_{\theta}^{{\rm hard}}(\mathbb{P}_{\theta}^{*}, \boldsymbol{x}_i) =\sum_{\boldsymbol{z}\in B(\boldsymbol{x}_{i})}\left[\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}}) - \lambda_{P}\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\log{\left(|B(\boldsymbol{x}_{i})|\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\right)}\right],
\end{equation}
to distinguish with the following objective which uses the ``soft probability'':
\begin{equation}\small\label{eq:soft loss}
\begin{aligned}
R_{\theta}^{{\rm soft}}(\mathbb{P}_{\theta}^{*}, \boldsymbol{x}_i) =
\ell(f_{\theta}(\boldsymbol{x}_i), y_{\boldsymbol{x}_i})
& + \lambda_{T}\sum\limits_{\boldsymbol{z}\in B(\boldsymbol{x}_{i}); \boldsymbol{z}\neq \boldsymbol{x}_{i}}\big(\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid\boldsymbol{x}_{i})\ell(f_{\theta}(\boldsymbol{z}), f_{\theta}(\boldsymbol{x}_{i})) \\
& - \lambda_{P}\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\log{\left((|B(\boldsymbol{x}_{i})| - 1)\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\right)}\big).
\end{aligned}
\end{equation}
The two terms in $R_{\theta}^{{\rm soft}}(\mathbb{P}_{\theta}^{*}, \boldsymbol{x}_{i})$ respectively correspond to the loss on original training samples $\boldsymbol{x}_{i}$ and the reweighted loss on the augmented samples. The reweighted loss promotes a small discrepancy between the augmented samples and the original training sample. $\lambda_{T} > 0$ is the coefficient used to balance the two loss terms, and
$\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})$ is defined similar to \eqref{eq:probability measure} as
\begin{equation}\small
\label{eq:probability measure_soft}
\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i}) = \frac{\exp\left(\frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}), f_{\theta}(\boldsymbol{x}_{i}))\right)}{\sum_{\boldsymbol{z}'\in B(\boldsymbol{x}_{i}); \boldsymbol{z}'\neq \boldsymbol{x}_{i}}\exp\left(\frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}'), f_{\theta}(\boldsymbol{x}_{i}))\right)}.
\end{equation}
\begin{figure*}[t!]\centering
\vspace{-0.3in}
\subfloat[MMEL-H.\label{fig:hard}]{
\includegraphics[width=0.47\textwidth]{./figures/hard_loss.pdf}}
\hspace{0.1in}
\subfloat[MMEL-S.\label{fig:soft}]{
\includegraphics[width=0.47\textwidth]{./figures/soft_loss.pdf}}
\vspace{-0.1in}
\caption{
MMEL with two types of losses. Figure~(\ref{fig:hard}) is the hard loss \eqref{eq:hard loss} with probability computed using \eqref{eq:probability measure} while Figure~(\ref{fig:soft}) is the soft loss \eqref{eq:soft loss} with the probabilities computed using \eqref{eq:probability measure_soft}.}
\label{fig:loss}
\end{figure*}
The two losses are shown in Figure~\ref{fig:loss}. Summing over all the training samples, we get the two kinds of reweighted training objectives.
\begin{remark}
The proposed MMEL-S tries to reduce the discrepancy between $f_{\theta}(\boldsymbol{z})$ and $f_{\theta}(\boldsymbol{x}_{i})$ for $\boldsymbol{z}\in B(\boldsymbol{x}_{i})$. However, if the prediction $f_{\theta}(\boldsymbol{x}_{i})$ is inaccurate, such misleading supervision for $\boldsymbol{z}$ may lead to degraded performance of MMEL-S. More details are in Appendix \ref{app:imagenet}.
\end{remark}
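To make the two objectives concrete, the sketch below gives a minimal PyTorch version of the per-example losses. The helper names are ours; we drop the regularizer terms, which are constant in $\theta$ once the weights are detached, and we stop gradients through the soft target $f_{\theta}(\boldsymbol{x}_{i})$. Both are implementation assumptions rather than choices stated in the text.
\begin{verbatim}
import torch
import torch.nn.functional as F

def mmel_hard(model, z, y, lambda_p=1.0):
    # z: stacked augmented samples of one training example; y: their labels.
    losses = F.cross_entropy(model(z), y, reduction="none")
    w = torch.softmax(losses.detach() / lambda_p, dim=0)  # closed-form weights
    return (w * losses).sum()

def mmel_soft(model, x, y_x, z, lambda_p=1.0, lambda_t=1.0):
    # x: the original sample (batch of 1); z: its augmented samples (x excluded).
    logits_x = model(x)
    loss_x = F.cross_entropy(logits_x, y_x)
    target = logits_x.softmax(-1).detach()     # "soft" probability f(x_i)
    log_pz = model(z).log_softmax(-1)
    losses_z = -(target * log_pz).sum(-1)      # CE between f(z) and f(x_i)
    w = torch.softmax(losses_z.detach() / lambda_p, dim=0)
    return loss_x + lambda_t * (w * losses_z).sum()
\end{verbatim}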
\subsection{Example: MMEL Implementation on Natural Language Understanding Tasks}
\label{sec:implementation of algorithm}
\begin{algorithm}[t!]
\caption{Minimize the Maximal Expected Loss (MMEL)}
\label{alg:training}
\textbf{Input:} Training set $\{(\boldsymbol{x}_{1}, y_{\boldsymbol{x}_{1}}), \cdots, (\boldsymbol{x}_{N}, y_{\boldsymbol{x}_{N}})\}$, batch size $S$, learning rate $\eta$, number of training iterations $T$, $R_{\theta}$ equals $R^{{\rm hard}}_{\theta}$ or $R^{{\rm soft}}_{\theta}$.
\begin{algorithmic}[1]
\For {$i$ in $\{1,2,\cdots,N\}$ } \Comment{\emph{\texttt{generate augmented samples}}}
\State{Generate $B(\boldsymbol{x}_i)$ using some data augmentation method.}
\EndFor
\For {$t=1, \cdots ,T$} \Comment{\emph{\texttt{minimize the maximal expected loss}}}
\State {Randomly sample a mini-batch $\mathcal{S} = \{(\boldsymbol{x}_{i_{1}}, y_{\boldsymbol{x}_{i_{1}}}), \cdots, (\boldsymbol{x}_{i_{S}}, y_{\boldsymbol{x}_{i_{S}}})\}$ from the training set.}
\State {Fetch the augmented samples $B(\boldsymbol{x}_{i_1}), B(\boldsymbol{x}_{i_2}), \cdots, B(\boldsymbol{x}_{i_S})$.}
\State {Compute $\mathbb{P}_{\theta}^{*}$ according to \eqref{eq:probability measure} or \eqref{eq:probability measure_soft}. }
\State {Update model parameters $\theta_{t + 1} = \theta_{t} - \frac{\eta}{S}\sum\nolimits_{\boldsymbol{x}\in\mathcal{S}}\nabla_{\theta} R_{\theta}(\mathbb{P}_{\theta}^{*}, \boldsymbol{x})$. }
\EndFor
\end{algorithmic}
\end{algorithm}
In this section, we elaborate on implementing the proposed method, using textual data in natural language understanding tasks as an example. Our method proceeds in two phases. In the first phase, we generate augmented samples. In the second phase, with these augmented samples, we update the model parameters with respect to the hard reweighted loss \eqref{eq:hard loss} or its soft counterpart \eqref{eq:soft loss}. The generation and training procedures are decoupled: the augmented samples are generated offline in the first phase only once. Moreover, in the second phase, since we have the explicit solution of the weights on augmented samples and the multiple forward and backward passes on these augmented samples can be computed in parallel, the whole training time is similar to that of regular training for an appropriate number of augmented samples. The whole training process is shown in Algorithm \ref{alg:training}.
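For reference, the following self-contained toy sketch instantiates Algorithm \ref{alg:training} with the hard loss; Gaussian jitter stands in for a real augmentation method and a linear model stands in for the network, so it only illustrates the decoupled two-phase structure and the single parallel forward pass per mini-batch:
\begin{verbatim}
import torch
import torch.nn.functional as F

torch.manual_seed(0)
N, K, D, M = 64, 3, 8, 5            # examples, classes, features, |B(x_i)|
X, Y = torch.randn(N, D), torch.randint(0, K, (N,))

# Phase 1 (offline, once): jitter stands in for real augmentation;
# the original sample is kept inside B(x_i).
augmented = torch.stack(
    [torch.cat([x[None], x + 0.1 * torch.randn(M - 1, D)]) for x in X])

model = torch.nn.Linear(D, K)
opt = torch.optim.SGD(model.parameters(), lr=0.1)

for step in range(100):             # Phase 2: reweighted training
    idx = torch.randint(0, N, (16,))
    z = augmented[idx].reshape(-1, D)                 # one parallel forward
    y = Y[idx].repeat_interleave(M)
    losses = F.cross_entropy(model(z), y, reduction="none").view(-1, M)
    w = torch.softmax(losses.detach() / 1.0, dim=1)   # lambda_P = 1
    loss = (w * losses).sum(dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
\end{verbatim}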
\paragraph{Generation of Textual Augmented Data.} Various methods have been proposed to generate augmented samples for textual data.
Recently, large-scale pre-trained language models like BERT~\citep{devlin2019bert} and GPT-2~\citep{radford2019language} learn contextualized representations and have been widely used to generate high-quality augmented sentences~\citep{jiao2019tinybert,kumar2020data}. In this paper, we use a BERT pre-trained with masked language modeling to generate augmented samples. For each original input sentence, we randomly mask $k$ tokens. Then we do one forward propagation of the BERT to predict the tokens in those masked positions by greedy search. Details can be found in Algorithm \ref{alg:greedy} in Appendix \ref{app:text generation}.
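As an illustration of this step, the sketch below uses the HuggingFace \texttt{transformers} library; this is an assumption on our part, since the text does not specify the implementation, and the checkpoint name and the number of masked tokens here are illustrative:
\begin{verbatim}
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
mlm = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

def augment(sentence: str, k: int = 2) -> str:
    ids = tok(sentence, return_tensors="pt")["input_ids"]
    # choose k positions to mask, excluding [CLS] and [SEP]
    pos = torch.randperm(ids.size(1) - 2)[:k] + 1
    ids[0, pos] = tok.mask_token_id
    with torch.no_grad():
        logits = mlm(input_ids=ids).logits   # one forward pass
    ids[0, pos] = logits[0, pos].argmax(-1)  # greedy prediction per position
    return tok.decode(ids[0, 1:-1])
\end{verbatim}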
\paragraph{Mismatching Label.} For $R_{\theta}^{{\rm hard}}$ in equation \eqref{eq:hard loss}, the loss term $\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})$ on augmented sample $\boldsymbol{z}\in B(\boldsymbol{x}_{i})$ for some $\boldsymbol{x}_{i}$ relies on its label $y_{\boldsymbol{z}}$.
Unlike image data, where conventional augmentation methods like random crop and horizontal flip of an image do not change its label, substituting even one word in a sentence can drastically change its meaning.
For instance, suppose the original sentence is \emph{``She is my daughter''}, and the word ``She'' is masked. The top $5$ words predicted by the pre-trained BERT are ``This, She, That, It, He''. Apparently, for the linguistic acceptability task, replacing ``She'' with ``He'' can change the label from linguistically ``acceptable'' to ``non-acceptable''. Thus for textual input, for the term $\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})$ in the hard loss \eqref{eq:hard loss}, instead of directly setting $y_{\boldsymbol{z}}$ as $y_{\boldsymbol{x}_{i}}$ \citep{zhu2020freelb},
we replace $y_{\boldsymbol{z}}$ with the output probability of a trained teacher model.
On the other hand, for the soft loss in equation \eqref{eq:soft loss}, if an augmented sample $\boldsymbol{z} \in B(\boldsymbol{x}_{i}) $ is predicted to a different class from $\boldsymbol{x}_{i}$ by the teacher model,
it is unreasonable to still minimize the discrepancy between $f_{\theta}(\boldsymbol{z})$ and $f_{\theta}(\boldsymbol{x}_{i})$.
In this case, we replace $f_{\theta}(\boldsymbol{x}_{i})$ in the loss term
$\lambda_{T}\sum\nolimits_{\boldsymbol{z}\in B(\boldsymbol{x}_{i}); \boldsymbol{z}\neq \boldsymbol{x}_{i}}\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid\boldsymbol{x}_{i})\ell(f_{\theta}(\boldsymbol{z}), f_{\theta}(\boldsymbol{x}_{i}))$ with the output probability from the teacher model.
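The sketch below shows one way to plug the teacher's predictions in; the helper name and batching are ours, and taking the teacher's probability on $\boldsymbol{z}$ as the replacement target for the soft loss is our reading of the description above:
\begin{verbatim}
import torch

@torch.no_grad()
def teacher_targets(teacher, x, z):
    # x: original sample (batch of 1); z: its augmented samples.
    p_x = teacher(x).softmax(-1)                  # [1, K]
    p_z = teacher(z).softmax(-1)                  # [m, K]
    # Hard loss: use p_z as the (soft) label y_z of each augmented sample.
    # Soft loss: where the teacher assigns z a different class than x,
    # replace the student's f(x_i) target by the teacher's prediction on z.
    mismatch = p_z.argmax(-1) != p_x.argmax(-1)   # [m] boolean mask
    return p_z, mismatch
\end{verbatim}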
\section{Experiments}
In this section, we evaluate the efficacy of the proposed MMEL algorithm with both hard loss (MMEL-H) and soft loss (MMEL-S). Experiments are conducted on both the image classification tasks \texttt{CIFAR-10} and \texttt{CIFAR-100}~\citep{krizhevsky2014cifar} with the ResNet Model \citep{he2016deep}, and the General Language Understanding Evaluation~(GLUE) tasks \citep{wang2019glue} with the BERT model~\citep{devlin2019bert}.
\subsection{Experiments on Image Classification Tasks.}\label{sec:exp for image}
\label{expt:cifar}
\paragraph{Data.}
\texttt{CIFAR} \citep{krizhevsky2014cifar} is a benchmark dataset for image classification. We use both \texttt{CIFAR-10} and \texttt{CIFAR-100} in our experiments; both consist of color images with 50000 training samples and 10000 validation samples, from 10 and 100 object classes, respectively.
\paragraph{Setup.}
The model we use is ResNet \citep{he2016deep} with different depths. We use random crop and horizontal flip \citep{krizhevsky2012imagenet} to augment the original training images. Since these operations do not change the label, we directly adopt the original training sample's label for all its augmented samples. Following \citep{he2016deep}, we use the SGD with momentum optimizer to train each model for 200 epochs. The learning rate starts from 0.1 and decays by a factor of 0.2 at epochs 60, 120 and 160. The batch size is 128, and the weight decay is 5e-4. For each $\boldsymbol{x}_i$, $|B(\boldsymbol{x}_{i})|=10$. The KL regularization coefficient $\lambda_{P}$ is 1.0 for both MMEL-H and MMEL-S. The $\lambda_{T}$ in equation \eqref{eq:soft loss} for MMEL-S is selected from \{0.5, 1.0, 2.0\}.
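For completeness, here is a sketch of drawing the $|B(\boldsymbol{x}_{i})|=10$ views per image with \texttt{torchvision}; the 4-pixel padding is the usual choice in CIFAR pipelines but is our assumption, as the text does not state it:
\begin{verbatim}
import torch
from torchvision import transforms

aug = transforms.Compose([
    transforms.RandomCrop(32, padding=4),   # random crop (padding assumed)
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])

def make_views(pil_image, num_views: int = 10) -> torch.Tensor:
    # draw |B(x_i)| = num_views independent augmentations of one image
    return torch.stack([aug(pil_image) for _ in range(num_views)])
\end{verbatim}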
\par
We compare our proposed MMEL with conventional training with data augmentation (abbreviated as ``Baseline(DA)'') under the same number of epochs.
Though MMEL can be computed efficiently in parallel, the proposed MMEL encounters $|B(\boldsymbol{x}_{i})|=10$ times more training data. For a fair comparison, we also compare with two other baselines that also use 10 times more data: (i) naive training with data augmentation but with 10 times more training epochs compared with MMEL (abbreviated as ``Baseline(DA+Long)''); in this case, the learning rate accordingly decays at epochs 600, 1200 and 1600; (ii) training with data augmentation under the framework of MMEL but with uniform weights on the augmented samples
(abbreviated as ``Baseline(DA+UNI)'').
\paragraph{Main Results.}
The results are shown in Table \ref{tbl:cifar}. As can be seen, for both \texttt{CIFAR-10} and \texttt{CIFAR-100}, MMEL-H and MMEL-S significantly outperform the Baseline(DA), with over 0.5 points higher accuracy on all four architectures. Compared to Baseline(DA+Long), the proposed MMEL-H and MMEL-S also have comparable or better performance,
while being much more efficient in training. This is because our backward pass only computes the gradient of the weighted loss instead of the separate loss of each example.
Compared to Baseline(DA+UNI) which has the same computational cost as
MMEL-H and MMEL-S, the proposed methods also have better performance. This indicates the efficacy of the proposed maximal expected loss based reweighting strategy.
\par
We further evaluate the proposed method on the large-scale dataset \texttt{ImageNet}~\citep{deng2009imagenet}. The detailed results are in Appendix \ref{app:imagenet}.
\begin{table*}[htbp]
\caption{Performance of ResNet on \texttt{CIFAR-10} and \texttt{CIFAR-100}.
The time is the training time
measured on a single NVIDIA V100 GPU. The results of five independent runs with ``mean($\pm$std)'' are reported, except for ``Baseline(DA+Long)'', which is slow in training.}
\label{tbl:cifar}
\centering
\scalebox{0.65}{
{
\begin{tabular}{l|c|cc|cc|cc|cc|cc}
\hline
\multirow{2}{*}{dataset } & \multirow{2}{*}{Model } & \multicolumn{2}{c|}{Baseline(DA)} & \multicolumn{2}{c|}{Baseline(DA+Long) } & \multicolumn{2}{c|}{Baseline(DA+UNI) } & \multicolumn{2}{c|}{MMEL-H} & \multicolumn{2}{c}{MMEL-S} \\ \cline{3-12}
& & acc & time & acc & time & acc & time & acc & time & acc & time \\ \hline
\multirow{4}{*}{\texttt{CIFAR-10}} & ResNet20 & 92.53($\pm$0.10) & 0.7h & \textbf{93.27} & 6.7h & 93.00($\pm$0.16) & 2.9h & 93.16($\pm$0.03) & 2.9h & 93.10($\pm$0.18) & 2.9h \\
& ResNet32 & 93.46($\pm$0.21) & 0.7h & \textbf{94.43} & 7.2h & 94.11($\pm$0.33) & 4.3h & 94.31($\pm$0.07) & 4.3h & 93.93($\pm$0.05) & 4.3h \\
& ResNet44 & 93.92($\pm$0.10) & 0.8h & 94.11 & 8.3h & 94.30($\pm$0.18) & 5.7h & \textbf{94.70}($\pm$0.14) & 5.7h & 94.48($\pm$0.08) & 5.7h \\
& ResNet56 & 93.96($\pm$0.20) & 1.1h & 94.12 & 10.6h & 94.62($\pm$0.18) & 7.0h & \textbf{94.85}($\pm$0.15) & 7.0h & 94.64($\pm$0.03) & 7.0h \\ \hline
\multirow{4}{*}{\texttt{CIFAR-100}} & ResNet20 & 68.95($\pm$0.56) & 0.7h & 69.45 & 6.7h & 68.89($\pm$0.06) & 2.9h & \textbf{70.01}($\pm$0.07) & 2.9h & 70.00($\pm$0.07) & 2.9h \\
& ResNet32 & 70.66($\pm$0.16) & 0.7h & 71.98 & 7.2h & 71.59($\pm$0.10) & 4.3h & \textbf{72.51}($\pm$0.07) & 4.3h & 72.57($\pm$0.20) & 4.3h \\
& ResNet44 & 71.43($\pm$0.30) & 0.8h & 72.83 & 8.3h & 72.30($\pm$0.38) & 5.7h & \textbf{73.18}($\pm$0.31) & 5.7h & 72.89($\pm$0.16) & 5.7h \\
& ResNet56 & 72.22($\pm$0.26) & 1.1h & 73.09 & 10.6h & 73.44($\pm$0.13) & 7.0h & \textbf{74.20}($\pm$0.24) & 7.0h & 73.89($\pm$0.15) & 7.0h \\ \hline
\end{tabular}}
\vspace{-0.1in}}
\end{table*}
\paragraph{Varying the Number of Augmented Samples.}
One hyperparameter of the proposed method is the number of augmented samples $|B(\boldsymbol{x}_{i})|$.
In Table~\ref{tbl:mmel-h-k}, we evaluate the effect of $|B(\boldsymbol{x}_{i})|$ on the CIFAR dataset. We vary $|B(\boldsymbol{x}_{i})|$ in $\{2, 5, 10, 20\}$ for both MMEL-H and MMEL-S with other settings unchanged.
As can be seen,
the performance of MMEL improves with more augmented samples for small $|B(\boldsymbol{x}_{i})|$.
However, the performance gain begins to saturate when $|B(\boldsymbol{x}_{i})|$ reaches 5 or 10 for some cases.
Since a larger $|B(\boldsymbol{x}_{i})|$ also brings more training cost, we should choose a proper number of augmented samples rather than continually increasing it.
\begin{table*}[htbp]
\caption{Performance of MMEL on \texttt{CIFAR-10} and \texttt{CIFAR-100} with ResNet with varying $|B(\boldsymbol{x}_i)|$. Here ``MMEL-*-$k$'' means training with the MMEL-* loss with $|B(\boldsymbol{x}_{i})|=k$. The results are averaged over five independent runs with ``mean($\pm$std)'' reported.}
\label{tbl:mmel-h-k}
\centering
\scalebox{0.55}{
{
\begin{tabular}{l|c|c|cc|cc|cc|cc}
\hline
\multirow{2}{*}{dataset } & \multirow{2}{*}{Model } & \multirow{2}{*}{Baseline(DA)} & \multicolumn{2}{c|}{MMEL-*-2} & \multicolumn{2}{c|}{MMEL-*-5} & \multicolumn{2}{c|}{MMEL-*-10} & \multicolumn{2}{c}{MMEL-*-20} \\ \cline{4-11}
& & & H & S & H & S & H & S & H & S \\ \hline
\multirow{4}{*}{\texttt{CIFAR-10}} & ResNet20 & 92.53($\pm$0.10) & 92.77($\pm$0.01) & 92.91($\pm$0.21) & 93.11($\pm$0.13) & 92.89($\pm$0.05) & 93.16($\pm$0.03) & 93.10($\pm$0.18) & \textbf{93.57}($\pm$0.04) & 93.18($\pm$0.08) \\
& ResNet32 & 93.46($\pm$0.21) & 93.85($\pm$0.16) & 93.88($\pm$0.18) & 94.20($\pm$0.18) & 93.88($\pm$0.14) & 94.31($\pm$0.07) & 93.93($\pm$0.05) & \textbf{94.39}($\pm$0.09) & 93.89($\pm$0.17) \\
& ResNet44 & 93.92($\pm$0.10) & 94.18($\pm$0.12) & 93.87($\pm$0.13) & 94.51($\pm$0.13) & 94.35($\pm$0.07) & \textbf{94.70}($\pm$0.14) & 94.48($\pm$0.08) & \textbf{94.70}($\pm$0.20) & 94.39($\pm$0.11) \\
& ResNet56 & 93.96($\pm$0.20) & 94.29($\pm$0.08) & 94.43($\pm$0.05) & 94.78($\pm$0.09) & 94.56($\pm$0.15) & 94.85($\pm$0.15) & 94.64($\pm$0.03) & \textbf{95.01}($\pm$0.12) & 94.62($\pm$0.12) \\ \hline
\multirow{4}{*}{\texttt{CIFAR-100}} & ResNet20 & 68.95($\pm$0.56) & 69.46($\pm$0.24) & 70.00($\pm$0.36) & 69.73($\pm$0.21) & 69.88($\pm$0.18) & 70.01($\pm$0.07) & 70.00($\pm$0.07) & 69.89($\pm$0.09) & \textbf{70.05}($\pm$0.23) \\
& ResNet32 & 70.66($\pm$0.16) & 71.50($\pm$0.09) & 71.37($\pm$0.30) & 72.41($\pm$0.18) & 71.73($\pm$0.16) & 72.51($\pm$0.07) & \textbf{72.57}($\pm$0.20) & 72.25($\pm$0.12) & 72.00($\pm$0.12) \\
& ResNet44 & 71.43($\pm$0.30) & 72.58($\pm$0.08) & 72.42($\pm$0.24) & \textbf{73.38}($\pm$0.07) & 72.92($\pm$0.15) & 73.18($\pm$0.31) & 72.89($\pm$0.16) & 73.23($\pm$0.18) & 72.77($\pm$0.15) \\
& ResNet56 & 72.22($\pm$0.26) & 73.11($\pm$0.36) & 73.33($\pm$0.30) & 73.95($\pm$0.04) & 73.47($\pm$0.14) & \textbf{74.20}($\pm$0.24) & 73.89($\pm$0.15) & 74.10($\pm$0.05) & 73.53($\pm$0.12) \\ \hline
\end{tabular}}
}
\end{table*}
\subsection{Results on Natural Language Understanding Tasks}
\label{expt:glue}
\paragraph{Data.} GLUE is a benchmark containing various natural language understanding tasks, including textual entailment (\texttt{RTE} and \texttt{MNLI}), question answering (\texttt{QNLI}), similarity and paraphrase (\texttt{MRPC}, \texttt{QQP}, \texttt{STS-B}), sentiment analysis (\texttt{SST-2}) and linguistic acceptability (\texttt{CoLA}). Among them, \texttt{STS-B} is a regression task,
\texttt{CoLA} and \texttt{SST-2} are single sentence classification tasks, while the rest are sentence-pair classification tasks.
Following \citep{devlin2019bert}, for the development set, we report Spearman correlation for \texttt{STS-B}, Matthews correlation for \texttt{CoLA} and accuracy for the other tasks. For the test set for \texttt{QQP} and \texttt{MRPC}, we report ``F1''.
\paragraph{Setup.}
The backbone model is $\text{BERT}_\text{BASE}$ \citep{devlin2019bert}.
We use the method in Section~\ref{sec:implementation of algorithm} to generate augmented samples. For the mismatching-label problem described in Section~\ref{sec:implementation of algorithm}, we use a $\text{BERT}_\text{BASE}$ model fine-tuned on the downstream task as the teacher model to predict the label of each generated sample $\boldsymbol{z}$ in $B(\boldsymbol{x}_{i})$. For each $\boldsymbol{x}_{i}$, $|B(\boldsymbol{x}_{i})|=5$. The fraction of masked tokens for each sentence is 0.4. The KL regularization coefficient $\lambda_{P}$ is 1.0 for both MMEL-H and MMEL-S. The $\lambda_{T}$ in equation \eqref{eq:soft loss} for MMEL-S is 1.0. The other detailed training hyperparameters can be found in Appendix \ref{app:hyperparameters}.
\par
The derivation of MMEL in Section~\ref{sec: minimize mel} is based on the classification task, while \texttt{STS-B} is a regression task. Hence, we generalize our loss function accordingly for regression tasks as follows. For the hard loss in equation \eqref{eq:hard loss},
we directly replace $y_{\boldsymbol{z}}\in\mathbb{R}$ with the prediction of teacher model on $\boldsymbol{z}$.
For the soft loss \eqref{eq:soft loss}, for each entry of $f_{\theta}(\boldsymbol{x}_{i})$ in the loss term $\lambda_{T}\sum\nolimits_{\boldsymbol{z}\in B(\boldsymbol{x}_{i}); \boldsymbol{z}\neq \boldsymbol{x}_{i}}\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid\boldsymbol{x}_{i})\text{MSE}(f_{\theta}(\boldsymbol{z}), f_{\theta}(\boldsymbol{x}_{i}))$,
we replace it with the prediction of teacher model if the difference between them is larger than 0.5.
\par
Similar to Section~\ref{expt:cifar}, we compare with three baselines.
However, we change the first baseline to naive training without data augmentation (abbreviated as ``Baseline''), since data augmentation is not used by default in NLP tasks.
The other two baselines are similar to those in Section~\ref{expt:cifar}: (i) ``Baseline(DA+Long)'', which fine-tunes BERT with data augmentation with the same batch size; and (ii) ``Baseline(DA+UNI)'', which fine-tunes BERT with augmented samples using the average loss. We also compare with another recent data augmentation technique, SMART \citep{jiang2020smart}.
\begin{table*}[htbp]
\caption{Development and test sets results on the $\text{BERT}_\text{BASE}$ model. The training time is measured on a single NVIDIA V100 GPU. The results of Baseline, Baseline(DA+UNI), MMEL-H and MMEL-S are obtained by five independent runs with ``mean($\pm$std)'' reported.}
\label{tbl:glue}
\centering
\scalebox{0.52}{
{
\begin{tabular}{ll|cccccccccc}
\hline & Method & \tabincell{c}{\texttt{CoLA} \\ 8.5k} & \tabincell{c}{\texttt{SST-2} \\ 67k} & \tabincell{c}{\texttt{MRPC} \\ 3.7k} & \tabincell{c}{\texttt{STS-B} \\7k} & \tabincell{c}{\texttt{QQP} \\ 364k} & \tabincell{c}{\texttt{MNLI-m/mm} \\393k} & \tabincell{c}{\texttt{QNLI} \\ 108k} & \tabincell{c}{\texttt{RTE} \\ 2.5k} & Avg & Time \\ \hline
\multirow{6}{*}{Dev} & Baseline & 59.7($\pm$0.61) & 93.1($\pm$0.38) & 87.0($\pm$0.56) & 89.7($\pm$0.34) & 91.1($\pm$0.12) & 84.6($\pm$0.28)/85.0($\pm$0.37) & 91.7($\pm$0.17) & 69.7($\pm$2.3) & 83.5($\pm$0.27) & 21.5h \\
& Baseline(DA+Long) & 61.5 & 93.3 & 88.0 & 89.8 & 91.1 & 84.8/85.3 & 92.0 & \textbf{73.3} & 84.3 & 107.5h \\
& Baseline(DA+UNI) & 61.1($\pm$0.75) & 93.1($\pm$0.17) & 87.9($\pm$0.63) & 90.0($\pm$0.14) & 91.1($\pm$0.03) & 84.8($\pm$0.36)/85.1($\pm$0.26) & 91.9($\pm$0.16) & 71.8($\pm$1.02) & 84.1($\pm$0.14) & 31.6h \\
& SMART & 59.1 & 93.0 & 87.7 & 90.0 & 91.5 & \textbf{85.6/86.0} & 91.7 & 71.2 & 84.0 & - \\
& MMEL-H & \textbf{62.1}($\pm$0.55) & 93.1($\pm$0.14) & 87.7($\pm$0.20) & \textbf{90.4}($\pm$0.14) & \textbf{91.5}($\pm$0.07) & 85.3($\pm$0.06)/85.5($\pm$0.06) & 92.2($\pm$0.10) & 72.3($\pm$0.85) & 84.5($\pm$0.11) & 31.8h \\
& MMEL-S & \textbf{62.1}($\pm$0.55) & \textbf{93.5}($\pm$0.23) & \textbf{88.4}($\pm$0.73) & \textbf{90.4}($\pm$0.14) & \textbf{91.5}($\pm$0.04) & 85.2($\pm$0.05)/85.6($\pm$0.02) & \textbf{92.4}($\pm$0.12) & 71.9($\pm$0.24) & \textbf{84.6}($\pm$0.11) & 32.4h \\ \hline
\multirow{5}{*}{Test} & Baseline & 51.6($\pm$0.73) & 93.3($\pm$0.21) & 88.0($\pm$0.59) & 85.8($\pm$0.88) & 71.3($\pm$0.32) & 84.6($\pm$0.19)/83.8($\pm$0.30) & 91.1($\pm$0.35) & 67.4($\pm$1.30) & 79.6($\pm$0.09) & 21.5h \\
& Baseline(DA+Long) & 52.0 & 93.3 & \textbf{88.8} & \textbf{86.7} & 71.3 & 84.4/83.9 & 90.9 & 69.6 & 80.1 & 107.5h \\
& Baseline(DA+UNI) & 52.4($\pm$1.50) & 92.3($\pm$0.52) & 87.7($\pm$0.72) & 85.8($\pm$0.70) & 71.5($\pm$0.44) & 84.6($\pm$0.31)/83.6($\pm$0.46) & 90.6($\pm$0.43) & 68.8($\pm$1.76) & 79.7($\pm$0.50) & 31.6h \\
& MMEL-H & \textbf{53.6}($\pm$0.90) & 93.4($\pm$0.05) & 88.3($\pm$0.21) & 86.6($\pm$0.45) & \textbf{72.4}($\pm$0.05) & 84.9($\pm$0.19)/\textbf{84.5}($\pm$0.15) & \textbf{91.5}($\pm$0.11) & 69.8($\pm$0.52) & \textbf{80.5}($\pm$0.17) & 31.8h \\
& MMEL-S & 52.5($\pm$0.43) & \textbf{93.5}($\pm$0.16) & 88.3($\pm$0.15) & 86.1($\pm$0.07) & 72.1($\pm$0.10) & \textbf{85.0}($\pm$0.23)/84.2($\pm$0.15) & 91.4($\pm$0.31) & \textbf{69.9}($\pm$0.54) & 80.3($\pm$0.10) & 32.4h \\ \hline
\end{tabular}}}
\end{table*}
\paragraph{Main Results.}
The development and test set results on the GLUE benchmark are shown in Table \ref{tbl:glue}. The development set results for the BERT baseline are from our re-implementation, which is comparable to or better than the reported results in the original paper \citep{devlin2019bert}. The results for SMART are taken from \citep{jiang2020smart}, which reports no test set results.
As can be seen, data augmentation significantly improves the generalization of GLUE tasks. Compared to the baseline without data augmentation (Baseline), MMEL-H or MMEL-S consistently achieves better performance, especially on small datasets like \texttt{CoLA} and \texttt{RTE}.
Similar to the observation in the image classification task in Section~\ref{expt:cifar}, the proposed MMEL-H and MMEL-S are more efficient and have better performance than Baseline(DA+Long). MMEL-H and MMEL-S also outperform Baseline(DA+UNI), indicating the superiority of using the proposed reweighting strategy. In addition, our proposed method also beats SMART in both accuracy and efficiency because they use PGD-k \citep{madry2018towards} to construct adversarial augmented samples which requires nearly $k$ times more training cost.
Figure \ref{fig:test acc} shows the development set accuracy over the course of training. As can be seen, training with MMEL-H or MMEL-S converges faster and achieves better accuracy, except on \texttt{SST-2} and \texttt{RTE}, where the performance is similar.
\begin{figure*}[t!]\centering
\subfloat[\texttt{CoLA}.]{
\includegraphics[width=0.24\textwidth]{./figures/CoLA.png}}
\subfloat[\texttt{SST-2}.]{
\includegraphics[width=0.24\textwidth]{./figures/SST-2.png}}
\subfloat[\texttt{MRPC}.]{
\includegraphics[width=0.24\textwidth]{./figures/MRPC.png}}
\subfloat[\texttt{STS-B}.]{
\includegraphics[width=0.24\textwidth]{./figures/STS-B.png}}
\\
\vspace{-0.1in}
\subfloat[\texttt{QQP}.]{
\includegraphics[width=0.24\textwidth]{./figures/QQP.png}}
\subfloat[\texttt{MNLI}.]{
\includegraphics[width=0.24\textwidth]{./figures/MNLI.png}}
\subfloat[\texttt{QNLI}.]{
\includegraphics[width=0.24\textwidth]{./figures/QNLI.png}}
\subfloat[\texttt{RTE}.]{
\includegraphics[width=0.24\textwidth]{./figures/RTE.png}}
\vspace{-0.1in}
\caption{Development set results on $\text{BERT}_\text{BASE}$ model with different loss functions. }
\label{fig:test acc}
\end{figure*}
\paragraph{Effect of Predicted Labels.} For the augmented samples from the same origin, we use a fine-tuned task-specific $\text{BERT}_\text{BASE}$ teacher model to predict their labels, as mentioned in Section~\ref{sec:implementation of algorithm}, to handle the problem of mismatching labels. In Table~\ref{tbl:predict_label}, we compare using the label of the original sample against using the predicted labels. As can be seen, using the predicted label significantly improves the performance; comparing with the results in Table~\ref{tbl:glue}, using the label of the original sample even hurts the performance.
\begin{table*}[htbp]
\caption{Effect of using the predicted label. Development set results are reported.}
\label{tbl:predict_label}
\centering
\scalebox{0.88}{
{
\begin{tabular}{ll|ccccccccc}
\hline
Method & Label & \tabincell{c}{\texttt{CoLA} } & \tabincell{c}{\texttt{SST-2}} & \tabincell{c}{\texttt{MRPC}} & \tabincell{c}{\texttt{STS-B} } & \tabincell{c}{\texttt{QQP}} & \tabincell{c}{\texttt{MNLI-m/mm}} & \tabincell{c}{\texttt{QNLI} } & \tabincell{c}{\texttt{RTE}} & Avg \\ \hline
MMEL-H & Original & 48.8 & 91.5 & 79.2 & 80.3 & 88.6 & 80.4/79.8 & 88.8 & 65.3 & 78.1 \\
MMEL-H & Predicted & 62.8 & 93.3 & 87.5 & 90.4 & 91.6 & 85.4/85.6 & 92.1 & 72.2 & 84.5 \\
MMEL-S & Original & 56.6 & 91.7 & 85.8 & 81.6 & 90.0 & 81.9/81.3 & 89.9 & 61.0 & 80.0 \\
MMEL-S & Predicted & 61.7 & 93.8 & 89.2 & 90.2 & 91.6 & 85.0/85.5 & 92.5 & 73.3 & 84.7 \\ \hline
\end{tabular}}}
\vspace{-0.1in}
\end{table*}
\section{Conclusion}
In this work, we propose to minimize a reweighted loss over the augmented samples which directly considers their implicit impacts on the loss.
Since we cannot access the optimal reweighting strategy, we propose to minimize the supremum of the loss under all reweighting strategies, and we give a closed-form solution of the optimal weights. Our method can be applied on top of any data augmentation method. Experiments on both image classification tasks and natural language understanding tasks show that the proposed method improves the generalization performance of the model while being efficient in training.
\section{Introduction}
Deep neural networks have achieved state-of-the-art results in various tasks in natural language processing (NLP) tasks~\citep{sutskever2014sequence,vaswani2017attention,devlin2019bert} and computer vision (CV) tasks~\citep{he2016deep,goodfellow2016deep}. One approach to improve the generalization performance of deep neural networks is data augmentation~\citep{xie2019unsupervised,jiao2019tinybert,cheng2019robust,cheng2020advaug}.
However, there are some problems if we directly incorporate these augmented samples into the training set. Minimizing the average loss on all these samples means treating them equally, without considering their different implicit impacts on the loss.
To address this, we propose to minimize a reweighted loss on these augmented samples to make the model utilize them in a cleverer way.
Example reweighting has previously been explored extensively in curriculum learning \citep{bengio2009curriculum,jiang2014self}, boosting algorithms~\citep{freund1999short}, focal loss~\citep{lin2017focal} and importance sampling~\citep{csiba2018importance}.
However, none of them
focus on the reweighting of augmented samples instead of the original training samples.
A recent work \citep{jiang2020beyond} also assigns different weights on augmented samples. But weights in their model are predicted by a mentor network while we obtain the weights from the closed-form solution by minimizing the maximal expected loss (MMEL). In addition, they focus on image samples with noisy labels, while our method can generally be applied to also textual data as well as image data.
\citet{tran2017bayesian} propose to minimize the loss on the augmented samples under the framework of Expectation-Maximization algorithm. But they mainly focus on the generation of augmented samples
Unfortunately, in practise there is no way to directly access the optimal reweighting strategy. Thus,
inspired by adversarial training \citep{madry2018towards}, we propose to minimize the maximal expected loss (MMEL) on augmented samples from the same training example. Since the maximal expected loss is the supremum over any possible reweighting strategy on augmented samples' losses, minimizing this supremum makes the model perform well under any reweighting strategy. More importantly, we derive a closed-form solution of the weights,
where augmented samples with larger training losses have larger weights.
Intuitively, MMEL allows the model to keep focusing on augmented samples that are harder to train.
\par
The procedure of our method is summarized as follows. We first generate the augmented samples with commonly-used data augmentation technique, e.g., lexical substitution for textual input \citep{jiao2019tinybert}, random crop and horizontal flip for image data \citep{krizhevsky2012imagenet}. Then we explicitly derive the closed-form solution of the weights on each of the augmented samples.
After that, we update the model parameters with respect to the reweighted loss.
The proposed method can generally be applied above any data augmentation methods in various domains like natural language processing and computer vision.
Empirical results on both natural language understanding tasks and image classification tasks show that the proposed reweighting strategy consistently outperforms the counterpart of without using it, as well as other reweighting strategies like uniform reweighting.
\section{Related Work}\label{sec:related work}
\par \textbf{Data augmentation.}
Data augmentation is proven to be an effective technique to improve the generalization ability of various tasks, e.g., natural language processing \citep{xie2019unsupervised, zhu2020freelb,jiao2019tinybert}, computer vision \citep{krizhevsky2014cifar}, and speech recognition \citep{park2019specaugment}. For image data, baseline augmentation methods like random crop, flip, scaling, and
color augmentation \citep{krizhevsky2012imagenet} have been widely used. Other heuristic data augmentation techniques like Cutout~\citep{devries2017cutout} which masks image patches and Mixup~\citep{zhang2018mixup} which combines pairs of examples and their labels, are later proposed.
Automatically searching for augmentation policies~\citep{cubuk2018autoaugment,lim2019fast} have recently proposed to improve the performance further.
For textual data, \citet{zhang2015character,wei2019eda} and \citet{wang2015s} respectively use lexical substitution based on
the embedding space. \citet{jiao2019tinybert,cheng2019robust,kumar2020data} generate augmented samples with a pre-trained language model. Some other techniques like back translation \citep{xie2019unsupervised}, random noise injection \citep{xie2017data} and data mixup \citep{guo2019augmenting,cheng2020advaug} are also proven to be useful.
\paragraph{Adversarial training.}
Adversarial learning is used to enhance the robustness of model \citep{madry2018towards}, which dynamically constructs the augmented adversarial samples by projected gradient descent across training.
Although adversarial training hurts the generalization of model on the task of image classification \citep{raghunathan2019adversarial}, it is shown that adversarial training can be used as data augmentation to help generalization in neural machine translation \citep{cheng2019robust,cheng2020advaug} and natural language understanding \citep{zhu2020freelb,jiang2020smart}. Our proposed method differs from adversarial training in that we adversarially decide the weight on each augmented sample, while traditional adversarial training adversarially generates augmented input samples.
\par
In \citep{behpour2019ada}, adversarial learning is used as data augmentation in object detection. The adversarial samples (i.e., bounding boxes that are maximally different from the ground truth) are reweighted to form the underlying annotation distribution.
However, besides the difference in the model and task, their training objective and the resultant solution are also different from ours.
\paragraph{Sample reweighting.}
Minimizing a reweighted loss on training samples has been widely explored in literature. Curriculum learning ~\citep{bengio2009curriculum,jiang2014self} feeds first easier and then harder data into the model to accelerate training. \cite{zhao2014accelerating,needell2014stochastic,csiba2018importance,katharopoulos2018not} use importance sampling to reduce the variance of stochastic gradients to achieve faster convergence rate. Boosting algorithms~\citep{freund1999short} choose harder examples to train subsequent classifiers. Similarly, hard example mining \citep{malisiewicz2011ensemble} downsamples the majority class and exploits the most difficult examples.
Focal loss~\citep{lin2017focal,goyal2018focal} focuses on harder examples by reshaping the standard cross-entropy loss in object detection. \citet{ren2018learning,jiang2018mentornet,shu2019meta} use meta-learning method to reweight examples
to handle the noisy label problem. Unlike all these existing methods, in this work, we reweight the augmented samples' losses instead of training samples.
\section{minimize the maximal Expected Loss}
\label{sec: minimize mel}
In this section, we derive our reweighting strategy on augmented samples from the perspective of maximal expected loss. We first give a derivation of the closed-form solution of the weights on augmented samples. Then we describe two kinds of loss under this formulation. Finally, we give the implementation details using the natural language understanding task as an example.
\subsection{Why Maximal Expected Loss}
Consider a
classification task with $N$ training samples.
For the $i$-th training sample $\boldsymbol{x}_i$, its label is denoted as $y_{\boldsymbol{x}_{i}}$.
Let $f_{\theta}(\cdot)$
be the model with parameter $\theta$ which outputs the classification probabilities. $\ell(\cdot, \cdot)
$ denotes the loss function, e.g. the cross-entropy loss between outputs $f_{\theta}(\boldsymbol{x}_{i})$ and the ground-truth label $y_{\boldsymbol{x}_i}$. Given an original training sample $\boldsymbol{x}_{i}$, the set of augmented samples generated by some method
is $B(\boldsymbol{x}_i)$. Without loss of generality, we assume $\boldsymbol{x}_i\in B(\boldsymbol{x}_i)$. The conventional training objective is to minimize the loss on every augmented sample $\boldsymbol{z}$ in $B(\boldsymbol{x}_i)$
as
\begin{equation}
\small
\label{eq:expected loss}
\min_{\theta}\frac{1}{N}\sum\limits_{i=1}^{N}\left[\frac{1}{|B(\boldsymbol{x}_{i})|}\sum_{(\boldsymbol{z}, y_{\boldsymbol{z}})\in B(\boldsymbol{x}_{i})}\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})\right],
\end{equation}
where $y_{\boldsymbol{z}}$ is the label of $\boldsymbol{z}\in B(\boldsymbol{x}_{i})$, and can be different with $y_{\boldsymbol{x}_{i}}$.
$|B(\boldsymbol{x}_{i})|$ is the number of augmented samples in $B(\boldsymbol{x}_{i})$, which is assumed to be finite.
\par
In equation \eqref{eq:expected loss}, for each given $\boldsymbol{x}_i$, the weights on its augmented samples are the same (i.e., $1/|B(\boldsymbol{x}_i)|$). However,
different samples have different implicit impacts on the loss, and we can assign different weights on them to facilitate training.
Note that computing the weighted sum of losses of each augmented sample in $B(\boldsymbol{x}_{i})$ can be viewed as taking expectation of loss on augmented samples $\boldsymbol{z}\in B(\boldsymbol{x}_{i})$ under a certain distribution.
When the augmented samples generated from the same training sample are drawn from a uniform distribution,
the loss in equation \eqref{eq:expected loss} can be rewritten as
\begin{equation}\small\label{eq:em loss uniform}
\begin{aligned}
\min_{\theta} R_{\theta}(\mathbb{P}_{U}) = \min_{\theta}\frac{1}{N}\sum\limits_{i=1}^{N}\left[\mathbb{E}_{\boldsymbol{z}\sim \mathbb{P}_{U}(\cdot\mid \boldsymbol{x}_{i})}\left[\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})\right] - \lambda_{P}\textbf{KL}(\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})\parallel \mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i}))\right],
\end{aligned}
\end{equation}
where the Kullback–Leibler (KL) divergence $\textbf{KL}(\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})\parallel \mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i}))$ equals zero.
Here $\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})$ denotes the uniform distribution on $B(\boldsymbol{x}_{i})$.
When the augmented samples are drawn from a more general distribution $\mathbb{P}_{B}(\cdot\mid\cdot)$\footnote{In the following, we simplify $\mathbb{P}_{B}(\cdot\mid\cdot)$ as $\mathbb{P}_{B}$ if there is no obfuscation.} instead of the uniform distribution, we can generalize $\mathbb{P}_{U}(\cdot\mid \cdot)$ here to some other conditional distribution $\mathbb{P}_{B}$.
\begin{equation}\small\label{eq:objective}
\min_{\theta} R_{\theta}(\mathbb{P}_{B}) = \min_{\theta}\frac{1}{N}\sum\limits_{i=1}^{N}\left[\mathbb{E}_{\boldsymbol{z}\sim \mathbb{P}_{B}(\cdot\mid \boldsymbol{x}_{i})}\left[\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})\right] - \lambda_{P}\textbf{KL}(\mathbb{P}_{B}(\cdot\mid\boldsymbol{x}_{i})\parallel \mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i}))\right].
\end{equation}
\begin{remark}
When $\mathbb{P}_{B}(\cdot\mid \boldsymbol{x}_{i})$ reduces to the uniform distribution $\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})$ for any $\boldsymbol{x}_{i}$,
since $\emph{\textbf{KL}}(\mathbb{P}_{U}(\cdot\mid\boldsymbol{x}_{i})\parallel \mathbb{P}_{U}(\cdot\mid \boldsymbol{x}_{i}))=0$, the objective in equation \eqref{eq:objective} reduces to the one in equation \eqref{eq:expected loss}.
\end{remark}
The KL divergence term in equation \eqref{eq:objective} is used as a regularizer to encourage $\mathbb{P}_{B}$ close to $\mathbb{P}_{U}$ (see Remark \ref{rmk:kl}). From equation \eqref{eq:objective}, the conditional distribution $\mathbb{P}_{B}$ determines the weights of each augmented sample in $B(\boldsymbol{x}_{i})$. There may exist an optimal formulation of $\mathbb{P}_{B}$ in some regime, e.g. corresponding to the optimal generalization ability of model. Unfortunately, we can not explicitly characterize such an unknown optimal $\mathbb{P}_{B}$. To address this, we borrow the idea from adversarial training \citep{madry2018towards} and minimize the maximal reweighted loss on augmented samples. Then, the model is guaranteed to perform well
under any reweighting strategy, including the underlying optimal one.
Specifically, let the conditional distribution $\mathbb{P}_{B}$ be $\mathbb{P}_{\theta}^{*} = \arg \sup_{\mathbb{P}_{B}}R_{\theta}(\mathbb{P}_{B}).$
Our objective is to minimize the following reweighted loss
\begin{equation}\small\label{eq:mmel loss}
\min_{\theta} R_{\theta}(\mathbb{P}^{*}_{\theta}) = \min_{\theta}\sup_{\mathbb{P}_{B}}R_{\theta}(\mathbb{P}_{B}).
\end{equation}
The following Remark~\ref{rmk:kl} discusses about the KL divergence term in equation \eqref{eq:objective}.
\begin{remark}
\label{rmk:kl}
Since we take a supremum over $\mathbb{P}_{B}$ in equation \eqref{eq:mmel loss}, the regularizer $\emph{\textbf{KL}}(\mathbb{P}_{B}\parallel \mathbb{P}_{U})$ encourages $\mathbb{P}_{B}$ to be close to $\mathbb{P}_{U}$ because it reaches the minimal value zero when $\mathbb{P}_{B} = \mathbb{P}_{U}$. Thus the regularizer controls the diversity among the augmented samples by constraining the discrepancy between $\mathbb{P}_{B}$ and uniform distribution $\mathbb{P}_{U}$, e.g., a larger $\lambda_{P}$ promotes a larger diversity among the augmented samples.
\end{remark}
The following Theorem~\ref{thm:explicit loss} gives the explicit formulation of $R_{\theta}(\mathbb{P}_{\theta}^{*})$.
\begin{theorem}\label{thm:explicit loss}
Let $R_{\theta}(\mathbb{P}_{B})$ and $R_{\theta}(\mathbb{P}^{*}_{\theta})$ be defined in equation \eqref{eq:expected loss} and \eqref{eq:mmel loss}, then we have
\begin{equation}\small
\label{eq:upper_bound_full}
R_{\theta}(\mathbb{P}_{\theta}^{*}) = \frac{1}{N}\sum\limits_{i=1}^{N}\sum_{\boldsymbol{z}\in B(\boldsymbol{x}_{i})}\left[\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}}) - \lambda_{P}\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\log{(|B(\boldsymbol{x}_{i})|\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i}))}\right],
\end{equation}
where
\begin{equation}\small\label{eq:probability measure}
\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i}) = \frac{\exp\left(\frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})\right)}{\sum_{\boldsymbol{z}'\in B(\boldsymbol{x}_{i})}\exp\left(\frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}'), y_{\boldsymbol{z}'})\right)} = {\rm Softmax}_{\boldsymbol{z}}\left(\frac{1}{\lambda_{P}}\ell(f_{\theta}(B(\boldsymbol{x}_i)), y_{B(\boldsymbol{x}_i)})\right),
\end{equation}
where ${\rm Softmax}_{\boldsymbol{z}}(\frac{1}{\lambda_{P}}\ell(f_{\theta}(B(\boldsymbol{x}_i)), y_{B(\boldsymbol{x}_i)}))$ denotes the entry corresponding to $\boldsymbol{z}$ in the softmax of the vector $(\frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}_{1}), y_{\boldsymbol{z}_{1}}) ,\cdots, \frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}_{|B(\boldsymbol{x}_{i})|}), y_{\boldsymbol{z}_{|B(\boldsymbol{x}_{i})|}}) )$.
\end{theorem}
\begin{remark}
If we ignore the KL divergence term in equation \eqref{eq:objective},
due to the equivalence of minimizing cross-entropy loss and MLE loss \citep{martens2014new}, the proposed MMEL also falls into the generalized Expectation-Maximization (GEM) framework \citep{dempster1977maximum}.
Specifically, given a training sample, its augmented samples can be viewed as latent variables, and any reweighting of these augmented samples corresponds to a specific conditional distribution of the augmented samples given the training sample. In the expectation step (E-step),
we explicitly derive the closed-form solution for the weights on each of these augmented samples according to equation \eqref{eq:probability measure}. In the maximization step (M-step), since there is no analytical solution for deep neural networks, following \citep{tran2017bayesian}, we update the model parameters with respect to the reweighted loss by one step of gradient descent.
\end{remark}
The proof of this theorem can be found in Appendix \ref{app:proof of themrem 1}. From
Theorem~\ref{thm:explicit loss},
the loss of each augmented sample $\boldsymbol{z} \in B(\boldsymbol{x}_i)$ determines its weight, and the weight is
normalized by a softmax over all augmented samples in $B(\boldsymbol{x}_i)$. This reweighting strategy pays more attention to augmented samples with higher loss values. The strategy is similar to those in \citep{lin2017focal,zhao2014accelerating}, but those works apply it to training samples.
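To make the closed-form reweighting concrete, the following is a minimal PyTorch sketch of the hard reweighted loss for a single training sample. The function name \texttt{mmel\_hard\_loss} is illustrative, using cross-entropy for $\ell$ is an assumption on our part, and detaching the weights from the computation graph reflects the E-step treatment described in the remark above.

\begin{verbatim}
import torch
import torch.nn.functional as F

def mmel_hard_loss(logits, labels, lambda_p):
    # logits: (K, C) model outputs on the K augmented samples in B(x_i)
    # labels: (K,)   hard labels y_z; lambda_p: KL regularization coefficient
    losses = F.cross_entropy(logits, labels, reduction="none")  # l(f(z), y_z)
    # E-step: closed-form optimal weights, a softmax of the scaled losses
    weights = F.softmax(losses.detach() / lambda_p, dim=0)
    K = losses.shape[0]
    # reweighted loss minus the lambda_P * KL(P* || uniform) regularizer
    kl = (weights * torch.log(K * weights + 1e-12)).sum()
    return (weights * losses).sum() - lambda_p * kl
\end{verbatim}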
\subsection{Two Types of Loss}
For augmented sample $\boldsymbol{z}\in B(\boldsymbol{x}_{i})$, instead of computing the discrepancy between the output probability $f_{\theta}(\boldsymbol{z})$ and the hard label $y_{\boldsymbol{z}}$ as in equation \eqref{eq:upper_bound_full}, one can also compute the discrepancy between $f_{\theta}(\boldsymbol{z})$ and the ``soft'' probability $f_{\theta}(\boldsymbol{x}_{i})$ in the absence of ground-truth labels on augmented samples, as in \citep{xie2019unsupervised}. In the following, we use the superscript ``hard'' for the loss in equation \eqref{eq:upper_bound_full}, written as
\begin{equation}\small
\label{eq:hard loss}
R_{\theta}^{{\rm hard}}(\mathbb{P}_{\theta}^{*}, \boldsymbol{x}_i) =\sum_{\boldsymbol{z}\in B(\boldsymbol{x}_{i})}\left[\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}}) - \lambda_{P}\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\log{(|B(\boldsymbol{x}_{i})|\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i}))}\right],
\end{equation}
to distinguish it from the following objective, which uses the ``soft'' probability:
\begin{equation}\small\label{eq:soft loss}
\begin{aligned}
R_{\theta}^{{\rm soft}}(\mathbb{P}_{\theta}^{*}, \boldsymbol{x}_i) =
\ell(f_{\theta}(\boldsymbol{x}_i), y_{\boldsymbol{x}_i})
& + \lambda_{T}\sum\limits_{\boldsymbol{z}\in B(\boldsymbol{x}_{i}); \boldsymbol{z}\neq \boldsymbol{x}_{i}}\big(\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid\boldsymbol{x}_{i})\ell(f_{\theta}(\boldsymbol{z}), f_{\theta}(\boldsymbol{x}_{i})) \\
& - \lambda_{P}\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\log{\left((|B(\boldsymbol{x}_{i})| - 1)\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})\right)}\big).
\end{aligned}
\end{equation}
The two terms in $R_{\theta}^{{\rm soft}}(\mathbb{P}_{\theta}^{*}, \boldsymbol{x}_{i})$ respectively correspond to the loss on the original training sample $\boldsymbol{x}_{i}$ and the reweighted loss on the augmented samples. The reweighted loss promotes a small discrepancy between the augmented samples and the original training sample. $\lambda_{T} > 0$ is a coefficient used to balance the two loss terms, and
$\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i})$ is defined similarly to \eqref{eq:probability measure} as
\begin{equation}\small
\label{eq:probability measure_soft}
\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid \boldsymbol{x}_{i}) = \frac{\exp\left(\frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}), f_{\theta}(\boldsymbol{x}_{i}))\right)}{\sum_{\boldsymbol{z}'\in B(\boldsymbol{x}_{i}); \boldsymbol{z}'\neq \boldsymbol{x}_{i}}\exp\left(\frac{1}{\lambda_{P}}\ell(f_{\theta}(\boldsymbol{z}'), f_{\theta}(\boldsymbol{x}_{i}))\right)}.
\end{equation}
\begin{figure*}[t!]\centering
\vspace{-0.3in}
\subfloat[MMEL-H.\label{fig:hard}]{
\includegraphics[width=0.47\textwidth]{./figures/hard_loss.pdf}}
\hspace{0.1in}
\subfloat[MMEL-S.\label{fig:soft}]{
\includegraphics[width=0.47\textwidth]{./figures/soft_loss.pdf}}
\vspace{-0.1in}
\caption{
MMEL with two types of losses. Figure~(\ref{fig:hard}) shows the hard loss \eqref{eq:hard loss} with probabilities computed using \eqref{eq:probability measure}, while Figure~(\ref{fig:soft}) shows the soft loss \eqref{eq:soft loss} with probabilities computed using \eqref{eq:probability measure_soft}.}
\label{fig:loss}
\end{figure*}
The two losses are shown in Figure~\ref{fig:loss}. Summing over all the training samples, we get the two kinds of reweighted training objectives.
\begin{remark}
The proposed MMEL-S tries to reduce the discrepancy between $f_{\theta}(\boldsymbol{z})$ and $f_{\theta}(\boldsymbol{x}_{i})$ for $\boldsymbol{z}\in B(\boldsymbol{x}_{i})$. However, if the prediction $f_{\theta}(\boldsymbol{x}_{i})$ is inaccurate, such misleading supervision for $\boldsymbol{z}$ may degrade the performance of MMEL-S. More details are in Appendix \ref{app:imagenet}.
\end{remark}
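For completeness, the soft counterpart can be sketched analogously. Here we assume cross-entropy against the detached ``soft'' probability $f_{\theta}(\boldsymbol{x}_{i})$ as the discrepancy $\ell$, which is one plausible instantiation rather than the only choice; all names are illustrative.

\begin{verbatim}
import torch
import torch.nn.functional as F

def mmel_soft_loss(logits_x, label_x, logits_aug, lambda_p, lambda_t):
    # logits_x: (C,) output on x_i; logits_aug: (K, C) outputs on z != x_i
    loss_x = F.cross_entropy(logits_x.unsqueeze(0), label_x.view(1))
    target = F.softmax(logits_x, dim=0).detach()   # "soft" probability f(x_i)
    losses = -(target * F.log_softmax(logits_aug, dim=1)).sum(dim=1)
    weights = F.softmax(losses.detach() / lambda_p, dim=0)
    K = losses.shape[0]                            # K = |B(x_i)| - 1
    kl = (weights * torch.log(K * weights + 1e-12)).sum()
    return loss_x + lambda_t * ((weights * losses).sum() - lambda_p * kl)
\end{verbatim}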
\subsection{Example: MMEL Implementation on Natural Language Understanding Tasks}
\label{sec:implementation of algorithm}
\begin{algorithm}[t!]
\caption{Minimize the Maximal Expected Loss (MMEL)}
\label{alg:training}
\textbf{Input:} Training set $\{(\boldsymbol{x}_{1}, y_{\boldsymbol{x}_{1}}), \cdots, (\boldsymbol{x}_{N}, y_{\boldsymbol{x}_{N}})\}$, batch size $S$, learning rate $\eta$, number of training iterations $T$, $R_{\theta}$ equals $R^{{\rm hard}}_{\theta}$ or $R^{{\rm soft}}_{\theta}$.
\begin{algorithmic}[1]
\For {$i$ in $\{1,2,\cdots,N\}$ } \Comment{\emph{\texttt{generate augmented samples}}}
\State{Generate $B(\boldsymbol{x}_i)$ using some data augmentation method.}
\EndFor
\For {$t=1, \cdots ,T$} \Comment{\emph{\texttt{minimize the maximal expected loss}}}
\State {Randomly sample a mini-batch $\mathcal{S} = \{(\boldsymbol{x}_{i_{1}}, y_{\boldsymbol{x}_{i_{1}}}), \cdots, (\boldsymbol{x}_{i_{S}}, y_{\boldsymbol{x}_{i_{S}}})\}$ from the training set.
\State Fetch the augmented samples $B(\boldsymbol{x}_{i_1}), B(\boldsymbol{x}_{i_2}), \cdots, B(\boldsymbol{x}_{i_S})$.}
\State {Compute $\mathbb{P}_{\theta}^{*}$ according to \eqref{eq:probability measure} or \eqref{eq:probability measure_soft}. }
\State {Update model parameters $\theta_{t + 1} = \theta_{t} - \frac{\eta}{S}\sum\nolimits_{\boldsymbol{x}\in\mathcal{S}}\nabla_{\theta} R_{\theta}(\mathbb{P}_{\theta}^{*}, \boldsymbol{x})$. }
\EndFor
\end{algorithmic}
\end{algorithm}
In this section, we elaborate on the implementation of the proposed method, using textual data in natural language understanding tasks as an example. Our method consists of two phases. In the first phase, we generate augmented samples. In the second phase, with these augmented samples, we update the model parameters with respect to the hard reweighted loss \eqref{eq:hard loss} or its soft counterpart \eqref{eq:soft loss}. The generation and training procedures can be decoupled: the augmented samples are generated offline in the first phase only once. In the second phase, since we have the explicit solution for the weights on the augmented samples, and the multiple forward and backward passes on these augmented samples can be computed in parallel, the whole training time is similar to that of regular training for an appropriate number of augmented samples. The whole training process is shown in Algorithm \ref{alg:training}.
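As a rough illustration of the second phase, the sketch below runs one pass of Algorithm \ref{alg:training} with the hard loss, reusing \texttt{mmel\_hard\_loss} from the earlier sketch; the loader layout and all names are illustrative assumptions.

\begin{verbatim}
import torch

def train_mmel(model, loader, optimizer, lambda_p):
    # loader yields offline-generated augmented batches:
    #   aug_x of shape (S, K, ...) and labels aug_y of shape (S, K)
    for aug_x, aug_y in loader:
        S, K = aug_y.shape
        logits = model(aug_x.flatten(0, 1)).view(S, K, -1)  # parallel forward
        loss = torch.stack([mmel_hard_loss(logits[i], aug_y[i], lambda_p)
                            for i in range(S)]).mean()
        optimizer.zero_grad()
        loss.backward()   # M-step: one gradient step on the reweighted loss
        optimizer.step()
\end{verbatim}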
\paragraph{Generation of Textual Augmented Data.} Various methods have been proposed to generate augmented samples for textual data.
Recently, large-scale pre-trained language models like BERT~\citep{devlin2019bert} and GPT-2~\citep{radford2019language} learn contextualized representations and have been widely used to generate high-quality augmented sentences~\citep{jiao2019tinybert,kumar2020data}. In this paper, we use a BERT model pre-trained with masked language modeling to generate augmented samples. For each original input sentence, we randomly mask $k$ tokens. Then we perform a forward pass of BERT to predict the tokens in those masked positions by greedy search. Details can be found in Algorithm \ref{alg:greedy} in Appendix \ref{app:text generation}.
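As an illustration of this generation step, the sketch below masks $k$ random tokens and fills all masked positions with the top-1 (greedy) predictions in a single forward pass. It is a simplified stand-in for Algorithm \ref{alg:greedy}, assuming the HuggingFace \texttt{transformers} API.

\begin{verbatim}
import random
import torch
from transformers import BertForMaskedLM, BertTokenizer

tok = BertTokenizer.from_pretrained("bert-base-uncased")
mlm = BertForMaskedLM.from_pretrained("bert-base-uncased").eval()

def augment(sentence, k=2):
    ids = tok(sentence, return_tensors="pt")["input_ids"][0]
    pos = random.sample(range(1, len(ids) - 1), k)  # skip [CLS] / [SEP]
    ids[pos] = tok.mask_token_id
    with torch.no_grad():
        logits = mlm(ids.unsqueeze(0)).logits[0]    # (L, vocab)
    ids[pos] = logits[pos].argmax(dim=-1)           # greedy fill of masked slots
    return tok.decode(ids[1:-1].tolist())
\end{verbatim}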
\paragraph{Mismatching Label.} For $R_{\theta}^{{\rm hard}}$ in equation \eqref{eq:hard loss}, the loss term $\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})$ on augmented sample $\boldsymbol{z}\in B(\boldsymbol{x}_{i})$ for some $\boldsymbol{x}_{i}$ relies on its label $y_{\boldsymbol{z}}$.
Unlike image data, where conventional augmentation methods like random crop and horizontal flip of an image do not change its label, substituting even one word in a sentence can drastically change its meaning.
For instance, suppose the original sentence is \emph{``She is my daughter''}, and the word ``She'' is masked. The top $5$ words predicted by the pre-trained BERT are ``This, She, That, It, He''. Apparently, for the linguistic acceptability task, replacing ``She'' with ``He'' can change the label from linguistically ``acceptable'' to ``non-acceptable''. Thus for textual input, for the term $\ell(f_{\theta}(\boldsymbol{z}), y_{\boldsymbol{z}})$ in the hard loss \eqref{eq:hard loss}, instead of directly setting $y_{\boldsymbol{z}}$ to $y_{\boldsymbol{x}_{i}}$ \citep{zhu2020freelb},
we replace $y_{\boldsymbol{z}}$ with the output probability of a trained teacher model.
On the other hand, for the soft loss in equation \eqref{eq:soft loss}, if an augmented sample $\boldsymbol{z} \in B(\boldsymbol{x}_{i})$ is predicted to be in a different class from $\boldsymbol{x}_{i}$ by the teacher model,
it is unreasonable to still minimize the discrepancy between $f_{\theta}(\boldsymbol{z})$ and $f_{\theta}(\boldsymbol{x}_{i})$.
In this case, we replace $f_{\theta}(\boldsymbol{x}_{i})$ in the loss term
$\lambda_{T}\sum\nolimits_{\boldsymbol{z}\in B(\boldsymbol{x}_{i}); \boldsymbol{z}\neq \boldsymbol{x}_{i}}\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid\boldsymbol{x}_{i})\ell(f_{\theta}(\boldsymbol{z}), f_{\theta}(\boldsymbol{x}_{i}))$ with the output probability from the teacher model.
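The label-replacement rule can be summarized by the following sketch, which operates directly on teacher logits; the helper name and the shapes are illustrative.

\begin{verbatim}
import torch

def teacher_targets(teacher_logits_x, teacher_logits_aug):
    # teacher_logits_x: (C,) on x_i; teacher_logits_aug: (K, C) on z in B(x_i)
    p_x = torch.softmax(teacher_logits_x, dim=-1)
    p_aug = torch.softmax(teacher_logits_aug, dim=-1)
    hard_targets = p_aug  # hard loss: y_z <- teacher's output probability on z
    # soft loss: fall back to the teacher's output whenever the teacher
    # predicts z to be in a different class than x_i
    mismatch = (p_aug.argmax(-1) != p_x.argmax(-1)).unsqueeze(-1)
    soft_targets = torch.where(mismatch, p_aug, p_x.expand_as(p_aug))
    return hard_targets, soft_targets
\end{verbatim}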
\section{Experiments}
In this section, we evaluate the efficacy of the proposed MMEL algorithm with both the hard loss (MMEL-H) and the soft loss (MMEL-S). Experiments are conducted on both the image classification tasks \texttt{CIFAR-10} and \texttt{CIFAR-100}~\citep{krizhevsky2014cifar} with the ResNet model \citep{he2016deep}, and the General Language Understanding Evaluation~(GLUE) tasks \citep{wang2019glue} with the BERT model~\citep{devlin2019bert}.
\subsection{Experiments on Image Classification Tasks}\label{sec:exp for image}
\label{expt:cifar}
\paragraph{Data.}
\texttt{CIFAR} \citep{krizhevsky2014cifar} is a benchmark dataset for image classification. We use both \texttt{CIFAR-10} and \texttt{CIFAR-100} in our experiments; both consist of color images, with 50000 training samples and 10000 validation samples, drawn from 10 and 100 object classes, respectively.
\paragraph{Setup.}
The model we used is ResNet \citep{he2016deep} with different depths. We use random crop and horizontal flip \citep{krizhevsky2012imagenet} to augment the original training images. Since these operations do not change the label, we directly adopt the original training sample's label for all its augmented samples. Following \citep{he2016deep}, we use the SGD optimizer with momentum to train each model for 200 epochs. The learning rate starts from 0.1 and decays by a factor of 0.2 at epochs 60, 120 and 160. The batch size is 128, and the weight decay is 5e-4. For each $\boldsymbol{x}_i$, $|B(\boldsymbol{x}_{i})|=10$. The KL regularization coefficient $\lambda_{P}$ is 1.0 for both MMEL-H and MMEL-S. The $\lambda_{T}$ in equation \eqref{eq:soft loss} for MMEL-S is selected from \{0.5, 1.0, 2.0\}.
\par
We compare our proposed MMEL with conventional training with data augmentation (abbreviated as ``Baseline(DA)'') under the same number of epochs.
Though MMEL can be computed efficiently in parallel, it encounters $|B(\boldsymbol{x}_{i})|=10$ times more training data. For a fair comparison, we also compare with two other baselines that also use 10 times more data: (i) naive training with data augmentation but with 10 times more training epochs than MMEL (abbreviated as ``Baseline(DA+Long)''). In this case, the learning rate accordingly decays at epochs 600, 1200 and 1600; (ii) training with data augmentation under the framework of MMEL but with uniform weights on the augmented samples
(abbreviated as ``Baseline(DA+UNI)'').
\paragraph{Main Results.}
The results are shown in Table \ref{tbl:cifar}. As can be seen, for both \texttt{CIFAR-10} and \texttt{CIFAR-100}, MMEL-H and MMEL-S significantly outperform the Baseline(DA), with over 0.5 points higher accuracy on all four architectures. Compared to Baseline(DA+Long), the proposed MMEL-H and MMEL-S also have comparable or better performance,
while being much more efficient in training. This is because our backward pass only computes the gradient of the weighted loss instead of the separate loss of each example.
Compared to Baseline(DA+UNI) which has the same computational cost as
MMEL-H and MMEL-S, the proposed methods also have better performance. This indicates the efficacy of the proposed maximal expected loss based reweighting strategy.
\par
We further evaluate the proposed method on the large-scale dataset \texttt{ImageNet}~\citep{deng2009imagenet}. The detailed results are in Appendix \ref{app:imagenet}.
\begin{table*}[htbp]
\caption{Performance of ResNet on \texttt{CIFAR-10} and \texttt{CIFAR-100}.
The time is the training time
measured on a single NVIDIA V100 GPU. The results of five independent runs with ``mean ($\pm$std)'' are reported, except for ``Baseline(DA + Long)'' which is slow in training.}
\label{tbl:cifar}
\centering
\scalebox{0.65}{
{
\begin{tabular}{l|c|cc|cc|cc|cc|cc}
\hline
\multirow{2}{*}{dataset } & \multirow{2}{*}{Model } & \multicolumn{2}{c|}{Baseline(DA)} & \multicolumn{2}{c|}{Baseline(DA+Long) } & \multicolumn{2}{c|}{Baseline(DA+UNI) } & \multicolumn{2}{c|}{MMEL-H} & \multicolumn{2}{c}{MMEL-S} \\ \cline{3-12}
& & acc & time & acc & time & acc & time & acc & time & acc & time \\ \hline
\multirow{4}{*}{\texttt{CIFAR-10}} & ResNet20 & 92.53($\pm$0.10) & 0.7h & \textbf{93.27} & 6.7h & 93.00($\pm$0.16) & 2.9h & 93.16($\pm$0.03) & 2.9h & 93.10($\pm$0.18) & 2.9h \\
& ResNet32 & 93.46($\pm$0.21) & 0.7h & \textbf{94.43} & 7.2h & 94.11($\pm$0.33) & 4.3h & 94.31($\pm$0.07) & 4.3h & 93.93($\pm$0.05) & 4.3h \\
& ResNet44 & 93.92($\pm$0.10) & 0.8h & 94.11 & 8.3h & 94.30($\pm$0.18) & 5.7h & \textbf{94.70}($\pm$0.14) & 5.7h & 94.48($\pm$0.08) & 5.7h \\
& ResNet56 & 93.96($\pm$0.20) & 1.1h & 94.12 & 10.6h & 94.62($\pm$0.18) & 7.0h & \textbf{94.85}($\pm$0.15) & 7.0h & 94.64($\pm$0.03) & 7.0h \\ \hline
\multirow{4}{*}{\texttt{CIFAR-100}} & ResNet20 & 68.95($\pm$0.56) & 0.7h & 69.45 & 6.7h & 68.89($\pm$0.06) & 2.9h & \textbf{70.01}($\pm$0.07) & 2.9h & 70.00($\pm$0.07) & 2.9h \\
& ResNet32 & 70.66($\pm$0.16) & 0.7h & 71.98 & 7.2h & 71.59($\pm$0.10) & 4.3h & \textbf{72.51}($\pm$0.07) & 4.3h & 72.57($\pm$0.20) & 4.3h \\
& ResNet44 & 71.43($\pm$0.30) & 0.8h & 72.83 & 8.3h & 72.30($\pm$0.38) & 5.7h & \textbf{73.18}($\pm$0.31) & 5.7h & 72.89($\pm$0.16) & 5.7h \\
& ResNet56 & 72.22($\pm$0.26) & 1.1h & 73.09 & 10.6h & 73.44($\pm$0.13) & 7.0h & \textbf{74.20}($\pm$0.24) & 7.0h & 73.89($\pm$0.15) & 7.0h \\ \hline
\end{tabular}}
\vspace{-0.1in}}
\end{table*}
\paragraph{Varying the Number of Augmented Samples.}
One hyperparameter of the proposed method is the number of augmented samples $|B(\boldsymbol{x}_{i})|$.
In Table~\ref{tbl:mmel-h-k}, we evaluate the effect of $|B(\boldsymbol{x}_{i})|$ on the CIFAR dataset. We vary $|B(\boldsymbol{x}_{i})|$ in $\{2, 5, 10, 20\}$ for both MMEL-H and MMEL-S with other settings unchanged.
As can be seen,
the performance of MMEL improves with more augmented samples for small $|B(\boldsymbol{x}_{i})|$.
However, the performance gain begins to saturate when $|B(\boldsymbol{x}_{i})|$ reaches 5 or 10 for some cases.
Since a larger $|B(\boldsymbol{x}_{i})|$ also brings more training cost, we should choose a proper number of augmented samples rather than continually increasing it.
\begin{table*}[htbp]
\caption{Performance of MMEL on \texttt{CIFAR-10} and \texttt{CIFAR-100} with ResNet with varying $|B(\boldsymbol{x}_i)|$. Here ``MMEL-*-$k$'' means training with the MMEL-* loss with $|B(\boldsymbol{x}_{i})|=k$. The results are averaged over five independent runs with ``mean($\pm$std)'' reported.}
\label{tbl:mmel-h-k}
\centering
\scalebox{0.55}{
{
\begin{tabular}{l|c|c|cc|cc|cc|cc}
\hline
\multirow{2}{*}{dataset } & \multirow{2}{*}{Model } & \multirow{2}{*}{Baseline(DA)} & \multicolumn{2}{c|}{MMEL-*-2} & \multicolumn{2}{c|}{MMEL-*-5} & \multicolumn{2}{c|}{MMEL-*-10} & \multicolumn{2}{c}{MMEL-*-20} \\ \cline{4-11}
& & & H & S & H & S & H & S & H & S \\ \hline
\multirow{4}{*}{\texttt{CIFAR-10}} & ResNet20 & 92.53($\pm$0.10) & 92.77($\pm$0.01) & 92.91($\pm$0.21) & 93.11($\pm$0.13) & 92.89($\pm$0.05) & 93.16($\pm$0.03) & 93.10($\pm$0.18) & \textbf{93.57}($\pm$0.04) & 93.18($\pm$0.08) \\
& ResNet32 & 93.46($\pm$0.21) & 93.85($\pm$0.16) & 93.88($\pm$0.18) & 94.20($\pm$0.18) & 93.88($\pm$0.14) & 94.31($\pm$0.07) & 93.93($\pm$0.05) & \textbf{94.39}($\pm$0.09) & 93.89($\pm$0.17) \\
& ResNet44 & 93.92($\pm$0.10) & 94.18($\pm$0.12) & 93.87($\pm$0.13) & 94.51($\pm$0.13) & 94.35($\pm$0.07) & \textbf{94.70}($\pm$0.14) & 94.48($\pm$0.08) & \textbf{94.70}($\pm$0.20) & 94.39($\pm$0.11) \\
& ResNet56 & 93.96($\pm$0.20) & 94.29($\pm$0.08) & 94.43($\pm$0.05) & 94.78($\pm$0.09) & 94.56($\pm$0.15) & 94.85($\pm$0.15) & 94.64($\pm$0.03) & \textbf{95.01}($\pm$0.12) & 94.62($\pm$0.12) \\ \hline
\multirow{4}{*}{\texttt{CIFAR-100}} & ResNet20 & 68.95($\pm$0.56) & 69.46($\pm$0.24) & 70.00($\pm$0.36) & 69.73($\pm$0.21) & 69.88($\pm$0.18) & 70.01($\pm$0.07) & 70.00($\pm$0.07) & 69.89($\pm$0.09) & \textbf{70.05}($\pm$0.23) \\
& ResNet32 & 70.66($\pm$0.16) & 71.50($\pm$0.09) & 71.37($\pm$0.30) & 72.41($\pm$0.18) & 71.73($\pm$0.16) & 72.51($\pm$0.07) & \textbf{72.57}($\pm$0.20) & 72.25($\pm$0.12) & 72.00($\pm$0.12) \\
& ResNet44 & 71.43($\pm$0.30) & 72.58($\pm$0.08) & 72.42($\pm$0.24) & \textbf{73.38}($\pm$0.07) & 72.92($\pm$0.15) & 73.18($\pm$0.31) & 72.89($\pm$0.16) & 73.23($\pm$0.18) & 72.77($\pm$0.15) \\
& ResNet56 & 72.22($\pm$0.26) & 73.11($\pm$0.36) & 73.33($\pm$0.30) & 73.95($\pm$0.04) & 73.47($\pm$0.14) & \textbf{74.20}($\pm$0.24) & 73.89($\pm$0.15) & 74.10($\pm$0.05) & 73.53($\pm$0.12) \\ \hline
\end{tabular}}
}
\end{table*}
\subsection{Results on Natural Language Understanding Tasks}
\label{expt:glue}
\paragraph{Data.} GLUE is a benchmark containing various natural language understanding tasks, including textual entailment (\texttt{RTE} and \texttt{MNLI}), question answering (\texttt{QNLI}), similarity and paraphrase (\texttt{MRPC}, \texttt{QQP}, \texttt{STS-B}), sentiment analysis (\texttt{SST-2}) and linguistic acceptability (\texttt{CoLA}). Among them, \texttt{STS-B} is a regression task,
\texttt{CoLA} and \texttt{SST-2} are single sentence classification tasks, while the rest are sentence-pair classification tasks.
Following \citep{devlin2019bert}, for the development set, we report Spearman correlation for \texttt{STS-B}, Matthews correlation for \texttt{CoLA} and accuracy for the other tasks. For the test sets of \texttt{QQP} and \texttt{MRPC}, we report F1 scores.
\paragraph{Setup.}
The backbone model is $\text{BERT}_\text{BASE}$ \citep{devlin2019bert}.
We use the method in Section~\ref{sec:implementation of algorithm} to generate augmented samples. For the mismatching-label problem described in Section~\ref{sec:implementation of algorithm}, we use a $\text{BERT}_\text{BASE}$ model fine-tuned on the downstream task as the teacher model to predict the label of each generated sample $\boldsymbol{z}$ in $B(\boldsymbol{x}_{i})$. For each $\boldsymbol{x}_{i}$, $|B(\boldsymbol{x}_{i})|=5$. The fraction of masked tokens for each sentence is 0.4. The KL regularization coefficient $\lambda_{P}$ is 1.0 for both MMEL-H and MMEL-S. The $\lambda_{T}$ in equation \eqref{eq:soft loss} for MMEL-S is 1.0. The other detailed training hyperparameters can be found in Appendix \ref{app:hyperparameters}.
\par
The derivation of MMEL in Section~\ref{sec: minimize mel} is based on the classification task, while \texttt{STS-B} is a regression task. Hence, we generalize our loss function accordingly for regression tasks as follows. For the hard loss in equation \eqref{eq:hard loss},
we directly replace $y_{\boldsymbol{z}}\in\mathbb{R}$ with the prediction of teacher model on $\boldsymbol{z}$.
For the soft loss \eqref{eq:soft loss}, for each entry of $f_{\theta}(\boldsymbol{x}_{i})$ in the loss term $\lambda_{T}\sum\nolimits_{\boldsymbol{z}\in B(\boldsymbol{x}_{i}); \boldsymbol{z}\neq \boldsymbol{x}_{i}}\mathbb{P}_{\theta}^{*}(\boldsymbol{z}\mid\boldsymbol{x}_{i})\text{MSE}(f_{\theta}(\boldsymbol{z}), f_{\theta}(\boldsymbol{x}_{i}))$,
we replace it with the prediction of the teacher model if the difference between them is larger than 0.5.
\par
Similar to Section~\ref{expt:cifar}, we compare with three baselines.
However, we change the first baseline to naive training without data augmentation (abbreviated as ``Baseline'') since data augmentation is not used by default in NLP tasks.
The other two baselines are similar to those in Section~\ref{expt:cifar}: (i) ``Baseline(DA+Long)'', which fine-tunes BERT with data augmentation with the same batch size; and (ii) ``Baseline(DA+UNI)'', which fine-tunes BERT with augmented samples using the average loss. We also compare with another recent data augmentation technique, SMART \citep{jiang2020smart}.
\begin{table*}[htbp]
\caption{Development and test sets results on the $\text{BERT}_\text{BASE}$ model. The training time is measured on a single NVIDIA V100 GPU. The results of Baseline, Baseline(DA+UNI), MMEL-H and MMEL-S are obtained by five independent runs with ``mean($\pm$std)'' reported.}
\label{tbl:glue}
\centering
\scalebox{0.52}{
{
\begin{tabular}{ll|cccccccccc}
\hline & Method & \tabincell{c}{\texttt{CoLA} \\ 8.5k} & \tabincell{c}{\texttt{SST-2} \\ 67k} & \tabincell{c}{\texttt{MRPC} \\ 3.7k} & \tabincell{c}{\texttt{STS-B} \\7k} & \tabincell{c}{\texttt{QQP} \\ 364k} & \tabincell{c}{\texttt{MNLI-m/mm} \\393k} & \tabincell{c}{\texttt{QNLI} \\ 108k} & \tabincell{c}{\texttt{RTE} \\ 2.5k} & Avg & Time \\ \hline
\multirow{6}{*}{Dev} & Baseline & 59.7($\pm$0.61) & 93.1($\pm$0.38) & 87.0($\pm$0.56) & 89.7($\pm$0.34) & 91.1($\pm$0.12) & 84.6($\pm$0.28)/85.0($\pm$0.37) & 91.7($\pm$0.17) & 69.7($\pm$2.3) & 83.5($\pm$0.27) & 21.5h \\
& Baseline(DA+Long) & 61.5 & 93.3 & 88.0 & 89.8 & 91.1 & 84.8/85.3 & 92.0 & \textbf{73.3} & 84.3 & 107.5h \\
& Baseline(DA+UNI) & 61.1($\pm$0.75) & 93.1($\pm$0.17) & 87.9($\pm$0.63) & 90.0($\pm$0.14) & 91.1($\pm$0.03) & 84.8($\pm$0.36)/85.1($\pm$0.26) & 91.9($\pm$0.16) & 71.8($\pm$1.02) & 84.1($\pm$0.14) & 31.6h \\
& SMART & 59.1 & 93.0 & 87.7 & 90.0 & 91.5 & \textbf{85.6/86.0} & 91.7 & 71.2 & 84.0 & - \\
& MMEL-H & \textbf{62.1}($\pm$0.55) & 93.1($\pm$0.14) & 87.7($\pm$0.20) & \textbf{90.4}($\pm$0.14) & \textbf{91.5}($\pm$0.07) & 85.3($\pm$0.06)/85.5($\pm$0.06) & 92.2($\pm$0.10) & 72.3($\pm$0.85) & 84.5($\pm$0.11) & 31.8h \\
& MMEL-S & \textbf{62.1}($\pm$0.55) & \textbf{93.5}($\pm$0.23) & \textbf{88.4}($\pm$0.73) & \textbf{90.4}($\pm$0.14) & \textbf{91.5}($\pm$0.04) & 85.2($\pm$0.05)/85.6($\pm$0.02) & \textbf{92.4}($\pm$0.12) & 71.9($\pm$0.24) & \textbf{84.6}($\pm$0.11) & 32.4h \\ \hline
\multirow{5}{*}{Test} & Baseline & 51.6($\pm$0.73) & 93.3($\pm$0.21) & 88.0($\pm$0.59) & 85.8($\pm$0.88) & 71.3($\pm$0.32) & 84.6($\pm$0.19)/83.8($\pm$0.30) & 91.1($\pm$0.35) & 67.4($\pm$1.30) & 79.6($\pm$0.09) & 21.5h \\
& Baseline(DA+Long) & 52.0 & 93.3 & \textbf{88.8} & \textbf{86.7} & 71.3 & 84.4/83.9 & 90.9 & 69.6 & 80.1 & 107.5h \\
& Baseline(DA+UNI) & 52.4($\pm$1.50) & 92.3($\pm$0.52) & 87.7($\pm$0.72) & 85.8($\pm$0.70) & 71.5($\pm$0.44) & 84.6($\pm$0.31)/83.6($\pm$0.46) & 90.6($\pm$0.43) & 68.8($\pm$1.76) & 79.7($\pm$0.50) & 31.6h \\
& MMEL-H & \textbf{53.6}($\pm$0.90) & 93.4($\pm$0.05) & 88.3($\pm$0.21) & 86.6($\pm$0.45) & \textbf{72.4}($\pm$0.05) & 84.9($\pm$0.19)/\textbf{84.5}($\pm$0.15) & \textbf{91.5}($\pm$0.11) & 69.8($\pm$0.52) & \textbf{80.5}($\pm$0.17) & 31.8h \\
& MMEL-S & 52.5($\pm$0.43) & \textbf{93.5}($\pm$0.16) & 88.3($\pm$0.15) & 86.1($\pm$0.07) & 72.1($\pm$0.10) & \textbf{85.0}($\pm$0.23)/84.2($\pm$0.15) & 91.4($\pm$0.31) & \textbf{69.9}($\pm$0.54) & 80.3($\pm$0.10) & 32.4h \\ \hline
\end{tabular}}}
\end{table*}
\paragraph{Main Results.}
The development and test set results on the GLUE benchmark are shown in Table \ref{tbl:glue}. The development set results for the BERT baseline are from our re-implementation, which is comparable or better than the reported results in the original paper \citep{devlin2019bert}. The results for SMART are taken from \citep{jiang2020smart},
and there are no test set results in \citep{jiang2020smart}.
As can be seen, data augmentation significantly improves generalization on the GLUE tasks. Compared to the baseline without data augmentation (Baseline), MMEL-H or MMEL-S consistently achieves better performance, especially on small datasets like \texttt{CoLA} and \texttt{RTE}.
Similar to the observation on the image classification tasks in Section~\ref{expt:cifar}, the proposed MMEL-H and MMEL-S are more efficient and perform better than Baseline(DA+Long). MMEL-H and MMEL-S also outperform Baseline(DA+UNI), indicating the superiority of the proposed reweighting strategy. In addition, our proposed method also beats SMART in both accuracy and efficiency, because SMART uses PGD-$k$ \citep{madry2018towards} to construct adversarial augmented samples, which requires nearly $k$ times more training cost.
Figure \ref{fig:test acc} shows the development set accuracy over the course of training. As can be seen, training with MMEL-H or MMEL-S converges faster and attains better accuracy, except on \texttt{SST-2} and \texttt{RTE} where the performance is similar.
\begin{figure*}[t!]\centering
\subfloat[\texttt{CoLA}.]{
\includegraphics[width=0.24\textwidth]{./figures/CoLA.png}}
\subfloat[\texttt{SST-2}.]{
\includegraphics[width=0.24\textwidth]{./figures/SST-2.png}}
\subfloat[\texttt{MRPC}.]{
\includegraphics[width=0.24\textwidth]{./figures/MRPC.png}}
\subfloat[\texttt{STS-B}.]{
\includegraphics[width=0.24\textwidth]{./figures/STS-B.png}}
\\
\vspace{-0.1in}
\subfloat[\texttt{QQP}.]{
\includegraphics[width=0.24\textwidth]{./figures/QQP.png}}
\subfloat[\texttt{MNLI}.]{
\includegraphics[width=0.24\textwidth]{./figures/MNLI.png}}
\subfloat[\texttt{QNLI}.]{
\includegraphics[width=0.24\textwidth]{./figures/QNLI.png}}
\subfloat[\texttt{RTE}.]{
\includegraphics[width=0.24\textwidth]{./figures/RTE.png}}
\vspace{-0.1in}
\caption{Development set results on $\text{BERT}_\text{BASE}$ model with different loss functions. }
\label{fig:test acc}
\end{figure*}
\paragraph{Effect of Predicted Labels.} For augmented samples from the same original sample, we use a fine-tuned task-specific $\text{BERT}_\text{BASE}$ teacher model to predict their labels, as mentioned in Section~\ref{sec:implementation of algorithm}, to handle the mismatching-label problem. In Table~\ref{tbl:predict_label}, we compare using the label of the original sample against using predicted labels. As can be seen, using the predicted label significantly improves the performance. Comparing with the results in Table~\ref{tbl:glue}, using the label of the original sample even hurts the performance.
\begin{table*}[htbp]
\caption{Effect of using the predicted label. Development set results are reported.}
\label{tbl:predict_label}
\centering
\scalebox{0.88}{
{
\begin{tabular}{ll|ccccccccc}
\hline
Method & Label & \tabincell{c}{\texttt{CoLA} } & \tabincell{c}{\texttt{SST-2}} & \tabincell{c}{\texttt{MRPC}} & \tabincell{c}{\texttt{STS-B} } & \tabincell{c}{\texttt{QQP}} & \tabincell{c}{\texttt{MNLI-m/mm}} & \tabincell{c}{\texttt{QNLI} } & \tabincell{c}{\texttt{RTE}} & Avg \\ \hline
MMEL-H & Original & 48.8 & 91.5 & 79.2 & 80.3 & 88.6 & 80.4/79.8 & 88.8 & 65.3 & 78.1 \\
MMEL-H & Predicted & 62.8 & 93.3 & 87.5 & 90.4 & 91.6 & 85.4/85.6 & 92.1 & 72.2 & 84.5 \\
MMEL-S & Original & 56.6 & 91.7 & 85.8 & 81.6 & 90.0 & 81.9/81.3 & 89.9 & 61.0 & 80.0 \\
MMEL-S & Predicted & 61.7 & 93.8 & 89.2 & 90.2 & 91.6 & 85.0/85.5 & 92.5 & 73.3 & 84.7 \\ \hline
\end{tabular}}}
\vspace{-0.1in}
\end{table*}
\section{Conclusion}
In this work, we propose to minimize a reweighted loss over the augmented samples, which directly accounts for their implicit impacts on the loss.
Since we cannot access the optimal reweighting strategy, we propose to minimize the supremum of the loss under all reweighting strategies, and we give a closed-form solution for the optimal weights. Our method can be applied on top of any data augmentation method. Experiments on both image classification tasks and natural language understanding tasks show that the proposed method improves the generalization performance of the model while being efficient in training.
\section{Introduction}\label{sec:intro}
Observations over the last few decades have revealed that stars predominantly form within magnetized and turbulent molecular clouds \citep{cr12}. How magnetic fields regulate star formation, however, is still poorly understood. Theoretical works have suggested that magnetic fields could be important in supporting molecular clouds, suppressing the star formation rate \citep[e.g.,][]{na08,pr08}, and removing angular momentum \citep[e.g.,][]{me85,mo86}. Nonetheless, measurements of magnetic field morphologies and strengths are still too rare to test these theories \citep[and references therein]{ta87,ta15,kw15}.
Recently, much attention has been drawn to filamentary molecular clouds, which are suggested to be the key progenitors of star formation \citep{an10}. \citet{li13} found that the intercloud-medium magnetic fields, traced by optical polarimetry, were often oriented either parallel or perpendicular to the filamentary clouds. Based on the recent \textit{Planck} data, \citet{pl16} showed that the relative orientations between magnetic fields and filamentary density structures change systematically from parallel to the filaments in low column density areas to perpendicular in high column density areas, with a switch point at $N_{H}\sim 10^{21.7} cm^{-2}$. The observed alignment between magnetic fields and filaments is consistent with theoretical works suggesting that magnetic fields play an important role in guiding gravitational or turbulence-driven contraction and also in supporting filaments against contraction along the filament major axis \citep[e.g.,][]{na08,bu13,in15}.
Within dense filamentary clouds, morphological configurations named ``Hub-Filament structures'' (HFSs) are commonly seen. Such structures consist of a central dense hub ($N_{H_2}>10^{22}~cm^{-2}$) with several converging filaments surrounding the hub \citep{my09,li14}. The central hubs of HFSs often host most of the star formation in a filamentary cloud, and hence are the potential sites for cluster formation. Observations found that converging filaments connecting HFSs often have similar orientations and spacings \citep{my09}. Polarization observations toward the HFS G14.225-0.506 found that its converging filaments are perpendicular to the large-scale magnetic fields \citep{bu13,sa16}. To explain these features, theoretical works have suggested that HFSs are formed via layer fragmentation of clouds threaded by magnetic fields. In this model, the local densest regions collapse quickly and form dense hubs, then the surrounding material tends to fragment along magnetic field lines and become parallel layers, since the gravitational instability grows faster along the magnetic fields \citep{na98,my09,va14}.
Kinematic analyses have shown that the surrounding filaments within HFSs might indeed be infalling material \citep[e.g.,][]{liu15,ju17,yu17}, attributed either to accretion flows attracted by the dense hubs \citep[e.g.,][]{fr13,ki13} or to gravitationally collapsing filaments \citep[e.g.,][]{po11,po12}. As an example, spiral arm-like converging filaments with significant velocity gradients were revealed in G33.92+0.11 using ALMA, supporting the idea that these filaments were eccentric accretion flows preserving high angular momentum and were previously fragmented from a rotating clump \citep{liu12,liu15}. In this point of view, the formation of HFSs is dominated by gravity, and magnetic fields are merely dragged by the accretion flows. Therefore, the magnetic field morphologies are parallel to the converging filaments and different from the larger scale magnetic fields, which have been seen in NGC 6334 V via SMA polarimetry \citep{ju17}. Since most of the current HFS formation models were based on the large-scale magnetic field morphologies, it is still challenging to explain how the observed infalling features evolve at smaller scale. Hence, more observations on the scale of HFS are essential to complete evolutionary models.
Submillimeter dust polarization is commonly used to measure the magnetic field morphology. Whether or not submillimeter polarization can really trace the dust grains in dense clouds, however, is still in debate. Current radiative torque dust alignment (RATs) theory \citep{la97,la07,ho09} suggests that dust grains in high-extinction regions cannot be efficiently aligned with magnetic fields due to the lack of radiation fields. Observationally, polarization efficiency ($PE$), defined as a ratio of absorption polarization percentage to visual extinction $A_V$, is commonly used to evaluate whether or not dust grains within clouds could be aligned. Past observations found that the polarization efficiency in high-extinction regions decreases with $A_V$ by a power-law index of -1, indicating that the polarization only comes from the surface of the clouds \citep[e.g.,][]{jo15,an15}, and also support the prediction of the RATs theory. Nevertheless, some observations do show flatter $PE$--$A_V$ relations \citep[e.g.,][]{jo16,wa17}, suggesting that the dust grains in dense regions do contribute polarization. It is still unclear what mechanism can efficiently align dust grains in high-extinction regions and what environment the mechanism requires. More measurements of polarization efficiency in different environments are needed to settle this debate.
The IC5146 system is a nearby star-forming region in Cygnus, consisting of an H\Rom{2} region, known as the Cocoon Nebula, and a long dark cloud extending from the H\Rom{2} region. The distance of the IC5146 cloud is ambiguous. \citet{ha08} estimated a distance of 950 pc based on the comparison of zero-age main sequence among the Orion Nebula Cluster and the B-type members of IC5146. In contrast, \citet{la99} derived a distance of $460\substack{+40 \\ -60}$\, pc by comparing the number of foreground stars, identified by near-infrared extinction measurements, to those expected from galactic models. \citet{dz18} estimated a distance of $813\pm106$ pc based on the Gaia second data release \citep{ga18} parallax measurements toward the embedded young stellar objects (YSOs) within the Cocoon Nebula. In this paper, we assume a default distance of $813\pm106$ pc for consistency.
The \textit{Herschel} Gould Belt Survey \citep{an10} revealed a complex network of filaments within the IC5146 dark clouds \citep{ar11}: several diffuse sub-filaments extend from its main filamentary structures, and two HFSs are located at the ends of the main filaments. The main filament is a known active star-forming region, where more than 200 YSOs have been identified with $Spitzer$ \citep{ha08,du15}. The variety of filamentary features in the IC5146 system suggests it as an ideal target for investigating the formation and evolution of these filaments \citep{jo17}. \citet{wa17} (hereafter WLE17) measured the optical and near-infrared starlight polarization across the whole IC5146 cloud, and showed that the large-scale magnetic fields are uniform and perpendicular to the main filaments, suggesting that the large-scale filaments were formed under strong magnetic field condition. Since the large-scale magnetic fields have been well probed, the IC5146 system is an excellent target to perform further submillimeter polarimetry to reveal the role of the magnetic field to smaller scales.
In this paper, we report the 850 $\mu$m polarization observations toward the brightest HFS in the IC5146 system taken with Submillimetre Common-User Bolometer Array 2 (SCUBA-2, \citealt{ho13}) and its associated polarimeter (POL-2, \citealt{fr16,ba19}), mounted on the James Clerk Maxwell Telescope (JCMT), as part of the B-fields In STar forming Regions Observations (BISTRO) \citep{war17,kw18,so18,pa18}.
The target HFS has a physical size of less than $\sim$ 1.0 pc and a total mass of $\sim$ 100 $M_{\sun}$ \citep{ha08}, smaller than commonly seen parsec-scale HFSs such as NGC1333 or IC348; hence we hereafter refer to our target as a ``core-scale HFS'' to distinguish it from HFSs of much larger physical scale. In \autoref{sec:obs}, we address the details of our observations and data reduction. In \autoref{sec:results}, we discuss the magnetic field morphology revealed by the polarization map, and we present analyses of the dependence of polarization efficiency on $A_V$ and of the magnetic field strength. Our interpretations of the observed polarization data are discussed in \autoref{sec:discussion}. A summary of our conclusions is given in \autoref{sec:summary}.
\section{Observations}\label{sec:obs}
\subsection{Data Acquisition and Reduction Techniques}\label{sec:data}
Our polarimetric continuum observations toward the IC5146 dark cloud system were carried out between 2016 May and 2017 April. The observed field targeted the brightest HFS located at the eastern end of the IC5146 main filament, as shown in \autoref{fig:field}. We performed 20 sets of 40-minute observations toward the IC5146 region with $\tau_{225 GHz}$ ranging from 0.04 to 0.07.
\begin{figure*}
\includegraphics[width=\textwidth]{IC5146_obsregion.pdf}
\caption{The IC5146 field observed in BISTRO overlaid on the \textit{Herschel} 250 $\mu$m image. The blue solid circle represents the field of view of the POL-2 850 $\mu$m polarimetry observations, and the dashed circle indicates the inner 3\arcmin\ region with the best sensitivity. The green vectors show the optical and infrared polarization measurements \citep{wa17}. The white circle at the bottom right corner shows the $Herschel$ 250 $\mu$m FWHM beam size. We note that the Cocoon Nebula is about 1\degr\ to the east of this field.}\label{fig:field}
\end{figure*}
The POL-2 observations were made using POL-2 DAISY scan mode \citep{fr16,ba19}, producing a fully sampled circular region of 11 arcmin diameter. Within the DAISY map, the noise is uniform and lowest in the central 3 arcmin diameter region, and increases to the edge of the map. The POL-2 data were simultaneously taken at 450 $\mu$m with a resolution of 9.6 arcsec and at 850 $\mu$m with a resolution of 14.1 arcsec. The 450 $\mu$m data are not reported in this paper since the 450 $\mu$m instrumental polarization model has been only recently commissioned.
The IC5146 data were reduced in a three-stage process using $pol2map$\footnote{http://starlink.eao.hawaii.edu/docs/sc22.pdf}, a script recently added to the SCUBA-2 mapmaking routine \textsc{smurf} \citep{be05, ch13}.
In the first stage, the raw bolometer timestreams for each observation are converted into separate Stokes $Q$, $U$, and $I$ timestreams using the process $calcqu$.
In the second stage, an initial Stokes $I$ map is created from the $I$ timestream from each observation using the iterative map-making routine $makemap$. For each reduction, areas of astrophysical emission are defined using a signal-to-noise-ratio (SNR) based mask determined iteratively by $makemap$. Areas outside this masked region are set to zero after each iteration until the final iteration of $makemap$ (see \citealt{ma15} for a detailed description of the role of masking in SCUBA-2 data reduction). Convergence is reached when successive iterations of the mapmaker produce pixel values which differ by $< 5$\% on average. Each map is compared to the first map in the sequence to determine a set of relative pointing corrections. The individual $I$ maps are then coadded to produce an initial $I$ map of the region.
In the third stage, the final Stokes $I,$ $Q$, and $U$ maps are created. The initial $I$ map described above is used to generate a fixed SNR-based mask for all further iterations of \textit{makemap}. The pointing corrections determined in the second stage are applied during the map-making process. In this stage, $skyloop$, a variant mode of \textit{makemap}, is invoked. In this mode, rather than each observation being reduced consecutively as is the standard method, one iteration of the mapmaker is performed on each of the observations. At the end of each iteration, all the maps created are coadded. The coadded maps created after successive iterations are compared, and when these coadded maps on average vary by $<5$\% between successive iterations, convergence is reached. Using $skyloop$ typically improves the mapmaker's ability to recover faint extended structure, at the expense of additional memory usage and processing time. The mapmaker was run three times successively to produce the output $I$, $Q$, and $U$ maps from their respective timestreams. The $Q$ and $U$ data were corrected for instrumental polarization (IP) using the final output $I$ map and the latest IP model (January 2018) \citep{fr16, fr18}.
In all $pol2map$ instances of $makemap$, the polarized sky background is estimated by doing a principal component analysis (PCA) of the $I$, $Q$, and $U$ timestreams to identify components that are common to multiple bolometers. In the first run of \textit{makemap} (stage 2), the 50 most correlated components are removed at each iteration. In the second run (stage 3), 150 components are removed at each iteration, resulting in smaller changes in the map between iterations and lower noise in the final map.
The output $I$, $Q$, and $U$ maps were calibrated in units of Jy/beam, using a flux conversion factor (FCF) of 725 Jy/pW -- the standard 850$\mu$m SCUBA-2 FCF multiplied by 1.35 to account for additional losses due to POL-2 (\citealt{de13}, \citealt{fr16}).
Finally, a polarization vector catalogue was created from the coadded Stokes $I$, $Q$, and $U$ maps. To improve the sensitivity, we binned the coadded Stokes $I$, $Q$, and $U$ maps into 12\arcsec\ pixels, and the binned data reached rms noise levels of 1.1 mJy~beam$^{-1}$\ for Stokes $Q$ and $U$.
We calculated the polarization fractions and orientations in the 12\arcsec\ pixel map. We debiased the former with the asymptotic estimator \citep{wa74} as
\begin{equation}\label{eq:debias}
P=\frac{1}{I}\sqrt{(U^2+Q^2)-\frac{1}{2}(\delta Q^2+\delta U^2)},
\end{equation}
where $P$ is the debiased polarization percentage, and $I$, $Q$, $U$, $\delta I$, $\delta Q$, and $\delta U$ are the Stokes $I$, $Q$, $U$, and their uncertainties. The uncertainty of polarization fraction was estimated using
\begin{equation}\label{eq:eP}
\delta P=\sqrt{\frac{(Q^2\delta Q^2+U^2\delta U^2)}{I^2(Q^2+U^2)} + \frac{\delta I^2(Q^2+U^2)}{I^4}}.
\end{equation}
The polarization position angle ($PA$) was calculated as:
\begin{equation}\label{eq:PA}
PA=\frac{1}{2}\tan^{-1}(\frac{U}{Q}),
\end{equation}
and its corresponding uncertainties were estimated using:
\begin{equation}\label{eq:ePA}
\delta PA=\frac{1}{2}\sqrt{\frac{(Q^2\delta U^2+U^2\delta Q^2)}{(Q^2+U^2)^2}} .
\end{equation}
The magnetic field orientations used in this paper were assumed to be $PA +90\degr$.
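For reference, equations \eqref{eq:debias}--\eqref{eq:ePA} can be evaluated on the binned Stokes maps as in the minimal NumPy sketch below; the function name is illustrative, and clipping negative debiased values to zero is an implementation choice on our part.

\begin{verbatim}
import numpy as np

def polarization_catalogue(I, Q, U, dI, dQ, dU):
    Ip2 = Q**2 + U**2
    # debiased polarization fraction (Eq. 1); negatives clipped to zero
    P = np.sqrt(np.clip(Ip2 - 0.5 * (dQ**2 + dU**2), 0.0, None)) / I
    dP = np.sqrt((Q**2 * dQ**2 + U**2 * dU**2) / (I**2 * Ip2)
                 + dI**2 * Ip2 / I**4)                          # Eq. 2
    PA = 0.5 * np.degrees(np.arctan2(U, Q))                     # Eq. 3
    dPA = 0.5 * np.degrees(np.sqrt((Q**2 * dU**2
                                    + U**2 * dQ**2) / Ip2**2))  # Eq. 4
    return P, dP, PA, dPA   # B-field orientation: PA + 90 deg
\end{verbatim}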
\subsection{CO Contamination}\label{sec:CO}
The SCUBA-2 850 $\mu$m waveband covers the wavelength of the CO (J=3-2) rotational line, and thus our measured continuum flux could be affected by CO line emission \citep[e.g.,][]{dr12,co16}. Furthermore, the CO (J=3-2) rotational line is known to be polarized via the Goldreich--Kylafis effect \citep{go81,go82}. For example, the typical polarization fraction of CO (J=3-2) could be $\lesssim$ 3\% in dense clouds and outflows \citep{ch16}, calculated using the formulation in \citet{de84} and \citet{co05}. The polarization angle of CO line is either parallel or perpendicular to the magnetic fields depending on optical depth and the relative angle between magnetic fields and gas velocity fields \citep{co05}. If a typical polarization fraction of 2\% for CO (J=3-2) is assumed, the polarized intensity from the line would be only 0.02--0.14\% of the total 850 $\mu$m flux, which is insignificant compared to the uncertainties of polarization, $\gtrsim 0.2$--$0.5\%$, in the central hub.
The CO contamination in total intensity might also decrease the observed polarization fraction. \citet{jo17} calculated the fraction of CO (J=3-2) line emission in the total flux within the JCMT 850 $\mu$m waveband toward several clumps in the IC5146 system. The fraction of CO (J=3-2) to total flux in our target region is mostly $\sim$ 1--3\%, but a higher fraction of $\sim$ 7\% was found in the central hub. Hence, the CO contamination would reduce the measured polarization fraction by only 1--7\%. Nevertheless, this effect is insignificant for our analysis, since the SNRs of our polarization detections are typically only $\sim$2--4.
\section{Results and Analysis}\label{sec:results}
\begin{figure*}
\includegraphics[width=\textwidth]{IC5146_pol_all.pdf}
\caption{B-field orientation map sampled on a 12\arcsec\ grid, shown on the 850 $\mu$m dust continuum map, sampled on a 4\arcsec\ grid, of the IC5146 region. The vectors are selected by $I/\delta I>10$ and $P/\delta P>2$, and rotated by 90\degr\ to represent magnetic field orientations. The yellow and black vectors show the greater-than-3$\sigma$ and 2--3$\sigma$ polarization detections, respectively. The green vectors represent the H-band starlight polarization. The white circle at the top right corner shows the POL-2 850 $\mu$m beam size of 14 arcsec. The zoom-in to the red box is shown in \autoref{fig:pmap}.}\label{fig:pmap_all}
\end{figure*}
\begin{figure*}
\includegraphics[width=\textwidth]{IC5146_pol.pdf}
\caption{Same as \autoref{fig:pmap_all}, but zoom into the HFS region. The vectors are plotted with constant lengths to emphasize their orientations.}\label{fig:pmap}
\end{figure*}
\subsection{Magnetic Field Morphology}\label{sec:pmap}
We show the observed magnetic field orientations traced by POL-2 850 $\mu$m polarization, with a pixel size of 12\arcsec, overlaid on the Stokes $I$ map, with a pixel size of 4\arcsec, in \autoref{fig:pmap_all}. We selected the 139 vectors with $I/\delta I>10$ to ensure that the selected data are associated with the target core-scale HFS. \citet{mo15a} suggest that the uncertainty in Stokes $I$ may enhance the bias in polarization fraction for data with $I/\delta I<10$, and thus our $I/\delta I>10$ selection criterion excludes these biased data. Among these $I/\delta I>10$ data, 30 have $2<P/\delta P<3$ and 42 have $P/\delta P> 3$. In order to better probe the magnetic fields, we further added the $P/\delta P>2$ criterion to exclude the samples with higher uncertainties in $PA$; the final selected samples have a maximum $\delta PA$ of 12.7\degr\ and a mean $\delta PA$ of 8.5\degr. \autoref{fig:pmap} shows the zoomed-in polarization map toward the HFS and our final selected samples. We note that the CO contamination in Stokes $I$ has only an insignificant effect on our sample selection: if the CO contamination in total intensity were 7\% everywhere, as the worst case, the number of $P/\delta P>2$ vectors would only decrease from 72 to 68.
\begin{figure}
\includegraphics[width=\columnwidth]{Hist_2comp.pdf}
\caption{Histogram of the magnetic field orientations. The colored histogram shows the 850 $\mu$m polarimetry data, and the blue histogram represents the optical/infrared data. The bin size of the histograms is set to 10\degr, similar to our typical uncertainties in $PA$. The $PA$=0\degr\ corresponds to north and $PA= +90$\degr\ is east. Our data show two major components with mean $PA$ of -27\degr\ (red) and 37\degr\ (green), separated by a dip at $\approx$ 10\degr, and a minor component peaked at -75\degr\ (gray). The black solid and dashed vertical lines label the orientation parallel and perpendicular to the large-scale filament, respectively, and the perpendicular orientation is consistent with the dip between the observed two components. }\label{fig:hist_2comp}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{pmap_2comp.pdf}
\caption{The polarization vectors of the three components shown in \autoref{fig:hist_2comp}. The green, red, and gray vectors are associated with the components with $10\degr<PA<90\degr$, $-50\degr<PA<10\degr$, and $-90\degr<PA<-60\degr$, respectively. The white contours show the H$_2$ column densities of (0.5, 1, 1.5, and 3)$\times10^{21}$ cm$^{-2}$ \citep{ar11}, indicating the morphology of the large-scale filament. The yellow dashed line shows the direction across the intensity peaks of the two clumps along the large-scale filament (PA=-73\degr), which we use to represent the major axis of the large-scale filament. The vectors from the two major components tend to be distributed in the upper and lower halves of the system, separated by the dashed line, which favors the possibility that the magnetic field is dragged by the large-scale main filament. The vectors from the minor component seem to be randomly distributed over the area and are probably small-scale structures that we cannot resolve.}\label{fig:pmap_2comp}
\end{figure}
The Stokes $I$ map shows a central massive clump in which three filaments intersect. The observed morphology is consistent with a typical hub-filament structure. The central massive clump hosts $\sim 80\%$ of the total intensity within the system, and thus we identify it as the hub of the HFS. Three filaments are identified extending from the central hub to the north, east, and south. The magnetic field revealed by our polarization map appears to have a small angular dispersion, but it also shows a change of orientation from north to south.
To compare the magnetic fields in the observed HFS with the large-scale magnetic fields shown in \autoref{fig:field}, we plot a histogram of the position angles ($PA$s) of the local magnetic fields from POL-2 and WLE17 data within our field of view (diameter of 11\arcmin) in \autoref{fig:hist_2comp} with a bin size of 10\degr\ that is close to our mean $\delta PA$ of 8.5\degr.
The $PA$ histogram of our data shows two major components separated by a dip at 10\degr. The $PA > 10\degr$ component has a mean $PA$ of 37\degr\ and a $PA$ dispersion of 15\degr, which is similar to the large-scale magnetic fields (28$\pm$21\degr). In contrast, the $PA < 10\degr$ component has a mean $PA$ of -27\degr\ and a $PA$ dispersion of 27\degr. The $PA$ difference of 64\degr\ between the two components is much greater than the $PA$ dispersion of the large-scale magnetic fields and also our mean observational uncertainty (8.5\degr), suggesting that the observed magnetic field morphology is significantly different from the large-scale magnetic fields.
\autoref{fig:pmap_2comp} shows the locations of these two components. To represent the major axis of the main filament, we plot the yellow dashed line along the direction across the intensity peaks of the two clumps in the parsec-scale filament. This major axis has an orientation of -73\degr\ and roughly separates the spatial distributions of the two magnetic field orientation components. Within the HFS, the red and green components tend to be distributed in the northern and southern halves of the HFS; this tendency, however, is reversed in the western clump. In addition, the orientation perpendicular to the main filament (17\degr) is also close to the dip between the two components. These features favor the possibility that the magnetic fields in the HFS are curved along the main filament. In contrast, the WLE17 data only show a major peak (28\degr) in the $PA$ histogram, which is roughly perpendicular to the large-scale filament but with a $\approx 10\degr$ offset in $PA$. A minor component peaking at $\approx$ -75\degr\ is also shown by our data; however, this component is diffusely distributed over the area, and more vectors are needed to reveal these structures.
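The component decomposition described above can be reproduced schematically as follows; the boundaries follow \autoref{fig:pmap_2comp}, and the naive means ignore the axial 180\degr\ wrap of position angles, which is adequate away from the $\pm$90\degr\ boundary.

\begin{verbatim}
import numpy as np

def pa_components(pa_deg):
    # pa_deg: B-field position angles in degrees, wrapped to (-90, 90]
    hist, edges = np.histogram(pa_deg, bins=np.arange(-90, 100, 10))
    comps = {"red":   pa_deg[(pa_deg > -50) & (pa_deg <= 10)],    # ~ -27 deg
             "green": pa_deg[(pa_deg > 10) & (pa_deg <= 90)],     # ~ +37 deg
             "gray":  pa_deg[(pa_deg >= -90) & (pa_deg <= -60)]}  # minor
    stats = {name: (c.mean(), c.std()) for name, c in comps.items()}
    return hist, edges, stats
\end{verbatim}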
\subsection{Polarization Efficiency}\label{sec:peff}
To investigate whether or not our polarization data trace the dust grains in high-extinction regions, we plot the 850 $\mu$m emission polarization fraction $P_{emit}$ vs. $A_V$ in \autoref{fig:peff}. To reveal the complete $P_{emit}$--$A_V$ distribution, this figure includes all the data with $I/\delta I > 10$, and the data points are color-coded based on their SNR of $P_{emit}$.
To estimate the $A_V$, we calculated $\tau_{850\mu m}$ from the observed $850 \mu m$ intensity using $I_{850\mu m}= \tau_{850\mu m} B(T_{dust})$, assuming that the dust emission is optically thin at $850 \mu m$. We used the dust temperature $T_{dust}$ derived in \citet{ar11} via fitting the \textit{Herschel} data at five wavelengths with a modified blackbody function, assuming a dust emissivity index of 2. The dust temperature map and the Stokes $I$ map were both resampled on a 12\arcsec\ grid to match our polarization catalogue. The $\tau_{850\mu m}$ was converted to $A_V$ using the $R_V=3.1$ extinction curve of \citet{we01}. We note that the extinction curve may vary in dense regions due to grain growth; if $R_V$ changes from 3.1 to 5.5 within the observed regions, we would underestimate the $P_{emit}$\ vs. $A_V$ slope by 10\%.
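The intensity-to-$A_V$ conversion can be sketched as follows; the code assumes \textsc{astropy}, the input intensity is taken in MJy sr$^{-1}$ for illustration, and the $\tau_{850}$-to-$A_V$ factor is a placeholder standing in for the \citet{we01} $R_V=3.1$ extinction curve rather than the value actually adopted.

\begin{verbatim}
import astropy.units as u
from astropy.modeling.models import BlackBody

def visual_extinction(I850_MJysr, T_dust_K, tau850_to_Av=2.4e4):
    # optically thin dust emission: I_850 = tau_850 * B_850(T_dust)
    bb = BlackBody(temperature=T_dust_K * u.K)
    B850 = bb(850 * u.micron).to(u.MJy / u.sr)
    tau850 = (I850_MJysr * u.MJy / u.sr / B850).to_value(
        u.dimensionless_unscaled)
    # placeholder conversion factor (assumed here for illustration only)
    return tau850_to_Av * tau850
\end{verbatim}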
\begin{figure}
\includegraphics[width=\columnwidth]{Peff.pdf}
\caption{Polarization efficiency versus $A_V$. The green, blue, and black points represent the POL-2 data with SNR greater than 3, between 2 and 3, and less than 2, respectively. The optical extinction of the POL-2 data is derived from total intensities $I_{\mu}=\tau B_{\mu}(T)$ with temperatures derived in \citet{ar11} using the $Herschel$ data. The green and blue dashed lines show the best least-squares fits to the SNR $>3$ and $2<$ SNR $<3$ data, with indices of -1.08 and -1.03, respectively.
The magenta points are the mean H-band polarization efficiency, observed across the whole IC5146 cloud system \citep{wa17}. The $P_{emit}$\ (for POL-2) and $PE$ (for H-band) values are shown in right and left y-axis, respectively. The magenta line represents the best fit for the H-band data \citep{wa17}, and the red line shows the prediction from the mean posterior for our data (\autoref{sec:pext}). These two fitting results are offset by a factor of 48.3 at $A_V$ = 20 mag, due to the wavelength-dependent optical depth of the aligned dust grains, which we use to scale and match the two data sets.
}\label{fig:peff}
\end{figure}
The $P_{emit}$\ values are equivalent to the extinction polarization percentages divided by the optical depth ($P_{ext}$/$\tau_{\lambda}$) in the optically thin case \citep{an15}, and so are proportional to the polarization efficiency ($PE$, defined as $P_{ext}$/$A_V$). Thus, the observed $P_{emit}$\ vs. $A_V$ slope is equivalent to the $PE$ vs. $A_V$ slope.
We further plotted in \autoref{fig:peff} the $PE$ vs. $A_V$ relation revealed by the WLE17 optical and infrared polarization data, to show how $PE$ varies in low $A_V$ regions. The $PE$ at 850 $\mu$m is in the form of $P_{ext,850}/\tau_{850}$, and the $PE$ obtained in the H-band is represented by $P_{ext, H}/A_V$. Thus, a scaling factor $\frac{P_{ext,850}}{P_{ext, H}}\cdot\frac{A_V}{\tau_{850}}$ is required to convert between the $PE$ at the two wavelengths, which is determined by the unknown dust properties \citep{an15}. By matching the fits to the $PE$ vs. $A_V$ relation in the H-band (WLE17) and in the 850 $\mu m$ band (described in \autoref{sec:pext}) at $A_V$ = 20 mag, we found a scaling factor of 48.3, which we used to match the two data sets. This scaling factor is not a universal constant, as discussed by \citet{jo15}, and varies with the physical conditions in different clouds.
\subsection{Polarization Efficiency--$A_V$ Dependence}\label{sec:pext}
To determine the $P_{emit}$\ vs. $A_V$ slope, the conventional approach is to apply a least-squares power-law fit to data selected by an SNR cut in $P_{emit}$. Following this approach, we fitted the $P/\delta P > 2$ and $> 3$ data with a power-law function. The best fit functions are shown in \autoref{fig:peff} by blue and green dashed lines, and the best fit power-law indices are $-1.02\pm0.02$ and $-1.08\pm0.02$, respectively. Nevertheless, since \autoref{fig:peff} shows that the $P_{emit}$--$A_V$ distribution is significantly truncated by the SNR cut, and the best fit trends are very similar to the truncated boundary, this raises doubts about whether or not the fitting is biased by the sample selection.
We investigated how the sample selection based on $P/\delta P$ affects the fitting of the $P_{emit}$\ vs. $A_V$ distribution by performing Monte Carlo simulations of data sets with an underlying $P_{emit}$\ vs. $A_V$ function and randomly generated measurement errors in \autoref{sec:appendix}. We found that the fitted power-law index is dominated by the SNR cut and approaches $-1$, rather than the true underlying value, if the $P_{emit}$\ vs. $A_V$ distribution is significantly truncated by the applied $P/\delta P$ selection criteria. Hence, to obtain an unbiased power-law index, it is recommended to include the low $P/\delta P$ data, so that a complete probability density function (PDF) of $P_{emit}$\ can be recovered. Nevertheless, the use of low $P/\delta P$ data would break the Gaussian PDF assumption \citep{wa74,va06} required for a least-squares fit, and therefore we instead use a Bayesian approach to fit the observed $P_{emit}$\ vs. $A_V$ distribution.
We used a Bayesian approach to apply a non-Gaussian PDF and fit the observed $P_{emit}$\ vs. $A_V$ trend with a power-law model. The Bayesian statistical framework provides a model fitting tool based on probability theory (see detailed introduction in \citealt{ja03}).
The general form of the Bayesian inference is
\begin{equation}
P(\theta|D)=\frac{P(\theta)P(D|\theta)}{P(D)}
\end{equation}
or
\begin{equation}
Posterior=\frac{Prior \times Likelihood}{Evidence},
\end{equation}
where D is the observed data and $\theta$ represents the model parameters. The posterior $P(\theta|D)$ describes the probability of the model parameters matching the given data, which is what we are interested in. The evidence $P(D)$ is the probability of obtaining the data, which mainly serves as a normalization factor for the posterior. The prior $P(\theta)$ represents the initial guessed probability of the model parameters based on our prior knowledge. The likelihood $P(D|\theta)$ describes how likely it is for a given model parameter set to match the observed data.
\begin{figure*}
\includegraphics[width=\textwidth]{Bayesian_12arcdata_pdf.pdf}
\caption{PDF of the model parameters derived using Bayesian model fitting to our 12 arcsec data. The 95\% highest posterior density (HPD) intervals are plotted to represent the uncertainties. The derived $\alpha$ has a mean of 0.56 and a 95\% confidence interval of 0.22--0.83. An $\alpha$ value much lower than 1 suggests that the dust grains at $A_V\sim20$--300 mag can still be aligned with magnetic fields.
}\label{fig:baye_12asc}
\end{figure*}
\begin{figure}
\includegraphics[width=\linewidth]{Bayesian_Fit.pdf}
\caption{The comparison between the Bayesian posterior prediction and the observations. The green, blue, and black points represent the POL-2 data with SNR greater than 3, between 2 and 3, and less than 2, respectively. The black line and colored regions show the mean, 95\%, 68\%, and 50\% confidence regions, predicted by the posterior shown in \autoref{fig:baye_12asc}, assuming a dust temperature of 13 K. Since the polarization error distribution is non-Gaussian and changes with $A_V$, the expected mean polarization is not simply a straight line on the logarithmic scale.}\label{fig:baye_fit}
\end{figure}
Assuming the measurements in Stokes $Q$ and $U$ have similar, Gaussian-distributed noise, the probability density function of the observed polarization fraction is known to follow the Rice distribution \citep{ri45,wa74,si85,qu12}
\begin{equation}\label{eq:rice}
F(P|P_0)=\frac{P}{\sigma_P^{2}}\exp\left[-\frac{P^2+P_0^2}{2{\sigma_P}^2}\right]I_0\left(\frac{PP_0}{\sigma_P^2}\right),
\end{equation}
where $P$ is the observed polarization fraction, $P_0$ is the real polarization fraction, $\sigma_P$ is the uncertainty in polarization fraction, and $I_0$ is the zeroth-order modified Bessel function. The likelihood function of polarization measurements is defined as:
\begin{equation}
L(P_0)=\displaystyle\prod_{i=1}^{n}F(P_i|P_0),
\end{equation}
where $P_i$ represents the $i$th measurement.
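As a sanity check, \autoref{eq:rice} is the Rice distribution available in SciPy with shape parameter $b=P_0/\sigma_P$ and scale $\sigma_P$; a short sketch with arbitrary illustrative values is:
\begin{verbatim}
import numpy as np
from scipy import stats, special

# Verify that the Rice PDF above matches scipy.stats.rice with
# b = P0/sigma_P and scale = sigma_P; values here are illustrative.
P0, sigma_P = 5.0, 2.0
P = np.linspace(0.01, 15.0, 500)

pdf_eq = (P/sigma_P**2*np.exp(-(P**2 + P0**2)/(2.0*sigma_P**2))
          *special.i0(P*P0/sigma_P**2))
pdf_scipy = stats.rice.pdf(P, b=P0/sigma_P, scale=sigma_P)
assert np.allclose(pdf_eq, pdf_scipy)
\end{verbatim}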
To perform the fit to the $P_{emit}$\ vs. $A_V$ trend using a Bayesian approach, we assumed the following power-law model such that
\begin{equation}\label{eq:prior}
P_0 =\beta A_V^{-\alpha},
\end{equation}
where $\alpha$ and $\beta$ are the free model parameters, and $A_V$ is the observed visual extinction. The uncertainty in the polarization fraction is
\begin{equation}
\sigma_P=\sigma_{QU}/I.
\end{equation}
Here, $I$ is the observed total intensity, and $\sigma_{QU}$ is a free model parameter describing the dispersion in Stokes $Q$ and $U$, which has contributions both from the instrumental uncertainty and from the intrinsic dispersion due to source properties.
We adopted uniform priors within reasonable limits:
\begin{equation}
\begin{aligned}
P(\alpha)& =
\begin{cases}
Uniform & 0< \alpha < 2 \\
0 & otherwise \\
\end{cases}\\
P(\beta)& =
\begin{cases}
Uniform & 0 < \beta < 400 \\
0 & otherwise \\
\end{cases}\\
P(\sigma_{QU})& =
\begin{cases}
Uniform & 0< \sigma_{QU} < 10 \\
0 & otherwise. \\
\end{cases}\\
\end{aligned}
\end{equation}
The Bayesian model fitting was performed with the Python Package PyMC3 \citep{sal16} via a Markov Chain Monte Carlo method using the Metropolis-Hastings sampling algorithm. The 12 arcsec pixel data were used for the fitting to ensure that each measurement is independent.
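For concreteness, a minimal PyMC3 sketch of this model is given below; the array names (\texttt{A\_V\_obs}, \texttt{I\_obs}, \texttt{P\_obs}) and the sampler settings are illustrative and not necessarily those used for the published run.
\begin{verbatim}
import pymc3 as pm
import theano.tensor as tt

# Sketch of the Bayesian model described above; A_V_obs, I_obs, and
# P_obs are assumed per-pixel input arrays (names are placeholders).
with pm.Model() as model:
    alpha = pm.Uniform('alpha', 0.0, 2.0)
    beta = pm.Uniform('beta', 0.0, 400.0)
    sigma_QU = pm.Uniform('sigma_QU', 0.0, 10.0)

    P0 = beta * A_V_obs**(-alpha)       # power-law model for P_0
    sigma_P = sigma_QU / I_obs          # per-pixel uncertainty

    def rice_logp(P):                   # log of the Rice PDF above
        # tt.i0 can overflow at high SNR; a log-I0 expansion may be
        # needed for a production fit.
        return (tt.log(P) - 2.0*tt.log(sigma_P)
                - (P**2 + P0**2)/(2.0*sigma_P**2)
                + tt.log(tt.i0(P*P0/sigma_P**2)))

    pm.DensityDist('P_like', rice_logp, observed=P_obs)
    trace = pm.sample(20000, step=pm.Metropolis(), tune=5000)
\end{verbatim}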
The derived posterior of each model parameter is shown in \autoref{fig:baye_12asc}, and the 95\% highest posterior density (HPD) interval of each parameter is plotted to represent the uncertainty. The 95\%, 68\%, and 50\% confidence regions (CR) predicted by the posterior distribution in $P_{emit}$\ vs. $A_V$ space are shown in \autoref{fig:baye_fit}, assuming a dust temperature of 13 K. Since the error distribution of $P$ is asymmetric and also varies with $P/\sigma_P$ and $A_V$, the predicted $P$ vs. $A_V$ relation is not simply a straight line on a logarithmic scale, even though the input model is a power-law. Almost all the data points are well within the 95\% confidence regions predicted by the posterior.
The derived $\alpha$ has a mean value of $0.56$ with a 95\% confidence interval from 0.22 to 0.83. The $\alpha$ derived by the Bayesian method is shallower than the $\alpha\approx 1.0$ derived from the conventional approach, confirming that the conventional method is biased (see \autoref{fig:peff}). The $\alpha$ range of 0.22--0.83 includes the index of $0.25\pm0.06$ obtained from near-infrared polarization data in $A_V$ = 3--20 mag regions \citep[see][]{wa17}, and thus no significant difference in polarization efficiency was found between the $A_V <$ 20 mag and $A_V =$ 20--300 mag regions (\autoref{fig:peff}). In addition, the fitted $\sigma_{QU}$ of 1.78 mJy~beam$^{-1}$ is greater than our estimated instrumental noise of 1.1 mJy~beam$^{-1}$, which indicates a significant intrinsic dispersion in polarization efficiency.
A value of $\alpha$ smaller than unity indicates that the extinction polarization fraction ($P_{ext}$=$\tau$$P_{emit}$) increases with column density. Since the extinction polarization fraction is defined as tanh($\Delta\tau$) \citep{jo89}, where $\Delta\tau$ is the differential optical depth between two polarization directions, the increase of the extinction polarization fraction indicates a higher amount of aligned dust grains. Hence, our results suggest that the dust grains in the IC5146 dense regions can still be aligned with magnetic fields. An $\alpha$ of $\sim 0.5$ is also predicted by simulations based on radiative torque alignment theory \citep[e.g.,][]{wh08} in low density regions, where the radiation field is sufficiently strong to align dust grains.
\begin{figure*}
\includegraphics[width=\textwidth]{GBS_clump.pdf}
\caption{The clumps identified by the JCMT Gould Belt survey \citep{jo17} overlaid on our polarization map. The image shows the 850 $\mu$m continuum emission. The black contours and circles represent the boundaries and the emission peaks of the identified clumps, respectively. The yellow vectors show the orientation of the major axis for each clump. The green diamonds and yellow boxes label the Class 0/I and II/III YSOs identified in \citet{du15}, respectively.}\label{fig:gbs}
\end{figure*}
Three possibilities could explain why the dust grains within these dense regions can still be aligned. First, since our target is an active star-forming region, the embedded young stars could be the sources of radiation needed to align the dust grains in dense regions. Second, WLE17 found that the dust grains in IC5146 have significantly grown from the diffuse ISM. These large dust grains could be aligned more efficiently by the radiation with longer wavelengths, which can penetrate the dense regions \citep{la07,ho09}. Third, the mechanical torques due to infalling gas and outflows in the star-forming regions could possibly align the dust grains in the absence of a radiation field \citep{la07b,ho18}. These possibilities will be further investigated in upcoming BISTRO papers probing the polarization efficiency in different environments.
\subsection{Orientation of Clumps and Magnetic Fields}\label{sec:clump}
To investigate whether or not magnetic fields influence the clump fragmentation within the IC5146 HFS, we examined the correlation between the magnetic field orientations and the clump morphologies. Based on the JCMT 850 $\mu$m Gould Belt Survey data, \citet{jo17} identified eight submillimeter clumps within the regions where we had polarization detections (\autoref{fig:gbs}). To represent the orientation of each clump, we used our total intensity map and performed a 2D-Gaussian fit to each clump to find the position angle of its FWHM major axis. The 2D-Gaussian fits had typical orientation uncertainties of $\sim$ 10\degr. The obtained clump orientations are listed in \autoref{tab:clump} and plotted over the polarization map in \autoref{fig:gbs}.
To estimate the local magnetic field orientation within each clump, we calculated the mean magnetic field orientations by averaging the data within 3$\times$3 pixels at the clump intensity peaks. The size of 3$\times$3 pixels is comparable to the typical radius of these clumps ($\approx $20--40\arcsec, see \autoref{tab:clump}), and so the average represents the mean magnetic field orientation over the dense center of these clumps. In addition, the 3$\times$3 pixels also provide an estimate of the orientation dispersion, which was used as the uncertainty of the averaged orientation; if only one vector was obtained for a given clump, the instrumental uncertainty was used instead. To handle the $\pm$180\degr\ ambiguity, the mean and dispersion of the magnetic field $PA$s were calculated in a new coordinate system in which the $PA$ dispersion was minimized, and the results were converted back to the standard coordinate system.
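An equivalent implementation, sketched below, uses the standard angle-doubling trick for axial ($\pm$180\degr-ambiguous) data instead of an explicit rotation of the coordinate system; the function name is ours.
\begin{verbatim}
import numpy as np

# Axial (mod-180 deg) mean and dispersion: doubling the angles maps
# position angles onto a full circle, where circular statistics apply.
def axial_mean_and_dispersion(pa_deg):
    """Mean and dispersion of position angles defined modulo 180 deg."""
    theta = 2.0*np.radians(pa_deg)                # double the angles
    C, S = np.mean(np.cos(theta)), np.mean(np.sin(theta))
    mean_pa = 0.5*np.degrees(np.arctan2(S, C))    # back to PA space
    # Smallest signed offsets from the mean, each within +/-90 deg
    d = (pa_deg - mean_pa + 90.0) % 180.0 - 90.0
    return mean_pa % 180.0, np.std(d)
\end{verbatim}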
The derived local magnetic field orientations versus the clump orientations are plotted in \autoref{fig:papa}. The comparison between the clump axes and the magnetic field orientations in the clumps is limited by small-number statistics. Nevertheless, there appears to be a tendency for the observed clumps to be either parallel or perpendicular to the mean magnetic field orientation within $\sim$20\degr, as shown in \autoref{fig:papa}. The upcoming BISTRO data will provide a much larger sample from various systems to statistically confirm this tendency. If the tendency is real, it would suggest that the magnetic fields are a key factor in the fragmentation of these clumps. We note that clumps 43 and 52 only contain two polarization vectors each, which are almost perpendicular to each other, and thus the mean magnetic field orientations are not meaningful for these two cases. Since the orientation of each of these two clumps is still parallel to one of its vectors and perpendicular to the other, these two clumps are nevertheless consistent with the tendency.
\begin{deluxetable*}{cccccccccc}
\tablecaption{Geometric and Polarization Properties of the Clumps}
\tablehead{\colhead{ID\tablenotemark{a}} & \colhead{Total Mass\tablenotemark{a}} & \colhead{Major Axis} & \colhead{Minor Axis} & \colhead{R$_{eff}$} & \colhead{Clump Orientation} & \colhead{B-field PA$_{peak}$\tablenotemark{b}} & \colhead{$\sigma_{NT}$} & \colhead{$\alpha_{vir}=\frac{\textrm{M$_{vir}$}}{M_{clump}}$\tablenotemark{c}} & \colhead{$\alpha_{vir,B}$} \\
\colhead{} & \colhead{(M$_{\odot}$)} & \colhead{(arcsec)} & \colhead{(arcsec)} & \colhead{(arcsec)} & \colhead{(deg)} & \colhead{(deg)} & \colhead{(km~s$^{-1}$)} & \colhead{} & \colhead{}}
\startdata
42 & 11$\pm3$ & 51$\pm2$ & 33$\pm2$ & 43$\pm2$ & $14\pm10$ & 134.8$\pm$28.5 & $0.25\pm0.01$ & 0.9$\pm0.2$ & ...\\
43 & 2.0$\pm0.5$ & 29$\pm1$ & 16$\pm1$ & 22$\pm1$ & 4$\pm10$ & 100.1$\pm$6.6 & $0.12\pm0.01$ & 1.4$\pm0.3$ & ...\\
45 & 0.77$\pm0.18$ & 37$\pm3$ & 16$\pm1$ & 24$\pm2$ & 174$\pm10$ & 0.7$\pm$9.5 & $0.18\pm0.02$ & 5.1$\pm1.0$ & ...\\
46 & 6.4$\pm1.6$ & 26$\pm2$ & 23$\pm1$ & 24$\pm2$ & 25$\pm10$ & 51.5$\pm$19.7 & $0.21\pm0.03$ & 0.7$\pm0.1$ & ...\\
47 & 85$\pm20$ & 40$\pm2$ & 36$\pm2$ & 38$\pm2$ & 86$\pm10$ & 21.0$\pm$2.7 & $0.36\pm0.12$ & 0.2$\pm0.1$ & 0.2--1.0\tablenotemark{d}\\
48 & 6.0$\pm1.5$ & 35$\pm3$ & 25$\pm1$ & 30$\pm3$ & 164$\pm10$ & -4.1$\pm$1.6 & $0.14\pm0.01$ & 0.7$\pm0.1$ & ...\\
50 & 0.97$\pm0.24$ & 29$\pm1$ & 18$\pm1$ & 23$\pm1$ & 9$\pm10$ & 13.6$\pm$11.4 & ... & ... & ...\\
52 & 7.6$\pm1.9$ & 31$\pm2$ & 17$\pm1$ & 23$\pm2$ & 135$\pm10$ & 64.7$\pm$40.7 & $0.29\pm0.01$ & 0.9$\pm0.2$ & ...\\
53 & 1.7$\pm0.4$ & 34$\pm3$ & 18$\pm1$ & 24$\pm3$ & 140$\pm10$ & -19.7$\pm$10.1 & ... & ... & ...\\
\enddata
\tablenotetext{a}{The clump IDs and masses were obtained from \citet{jo17}, but the masses were scaled to a distance of $813\pm106$ pc.}
\tablenotetext{b}{Mean magnetic field orientation averaged using the $3\times3$ pixels at the intensity peaks.}
\tablenotetext{c}{The virial masses of clumps were calculated considering the support from thermal pressure and turbulence.}
\tablenotetext{d}{If the inclination angle of the magnetic field with respect to the line of sight is greater than 15\degr.}
\label{tab:clump}
\end{deluxetable*}
We further plot in \autoref{fig:papa} the mean large-scale magnetic field orientation, 28\degr, obtained from WLE17. Only Clump 47 has a magnetic field orientation similar to the large-scale magnetic field within 20\degr. All other clumps are aligned either parallel or perpendicular to the local magnetic field and show no significant correlation with the large-scale magnetic field. Hence, these clumps more likely formed after the local magnetic fields were distorted by the process of clump formation.
\begin{figure}
\includegraphics[width=\columnwidth]{PAvsPA.pdf}
\caption{The comparison of orientations between clumps and magnetic fields. The clump orientations are obtained from 2D-Gaussian fits to the CO-subtracted intensity, and the magnetic field orientations are averaged from the polarization detections within the $3\times3$ pixels at the clump intensity peaks. The gray dashed line labels the mean $PA$ of the large-scale magnetic field from the WLE17 data. The magenta region represents where the orientations of clumps and magnetic fields are equal, and the green regions show where the orientations of clumps and magnetic fields are perpendicular. All our clumps are close to either the magenta or green regions. }\label{fig:papa}
\end{figure}
\subsection{Magnetic Field Strength in IC5146}\label{sec:str}
The Davis-Chandrasekhar-Fermi (DCF) method \citep{da51,ch53} is commonly used to evaluate the magnetic field strength from dust polarization. The DCF method assumes that the turbulent kinetic energy and the turbulent magnetic energy are in equipartition, and hence the magnetic field strength can be estimated using
\begin{equation}\label{eq:CF}
B_{pos}=Q~\sqrt[]{4\pi \rho}\frac{\sigma_v}{\delta \phi_{intrinsic}},
\end{equation}
where $\delta \phi_{intrinsic}$ is the intrinsic angular dispersion of the magnetic fields, $\sigma_v$ is the velocity dispersion, $\rho$ is the gas density, and $Q$ is a factor accounting for the complicated magnetic field and inhomogeneous density structure. \citet{os01} suggested that $Q=0.5$ yields a good estimate of the magnetic field strength on the plane of the sky if the magnetic field angular dispersion is less than 25\degr.
\subsubsection{Magnetic Field Angular Dispersion}\label{sec:ad}
We used the 12\arcsec\ pixel polarization data to calculate the magnetic field angular dispersion, to ensure that all vectors we used are independent measurements. To avoid small-number statistics (fewer than 10 vectors), we only performed the angular dispersion estimation using the polarization vectors (45 vectors) within the central hub (Clump 47) (\autoref{fig:gbs}).
The DCF method requires an estimate of the magnetic field distortion caused by turbulence, and the underlying magnetic field geometry might bias this estimate. Thus, we calculated the magnetic field angular dispersion in local areas to avoid the angular dispersion contribution from the large-scale nonuniform magnetic field geometry. Specifically, we selected 24\arcsec$\times$24\arcsec\ boxes (i.e., the width of 2 independent beams), and calculated the mean and the corresponding sum of squared differences (SSD=$\sum_{i=1}^{n}(\bar{PA}-PA_i)^2$) of the $PA$ using the at most 9 vectors within each box. The SSD values from all boxes were averaged with inverse-variance weighting, and the square root of the mean SSD was taken as the observed angular dispersion. Next, the mean instrumental uncertainty ($\delta \phi_{ins}$) of $8.0\degr$ was removed from the observed angular dispersion ($\delta \phi_{obs}$) to obtain the intrinsic dispersion ($\delta \phi_{intrinsic}$) via
\begin{equation}
\delta \phi^2_{intrinsic}=\delta \phi^2_{obs}-\delta \phi^2_{ins}.
\end{equation}
With these corrections, the calculated $\delta \phi_{intrinsic}$ for clump 47 is $17.4\degr\pm0.6\degr$.
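A sketch of this box-wise estimate is given below; the input arrays \texttt{pa}, \texttt{dpa} (deg) and \texttt{x}, \texttt{y} (arcsec) for the 45 Clump 47 vectors are assumed precomputed, and the literal use of the unnormalized SSD follows the description above, although a per-box normalization (e.g., by $n-1$) could also be argued for.
\begin{verbatim}
import numpy as np

# Box-wise angular-dispersion estimate described above.
box = 24.0                                   # arcsec, 2 beam widths
ssd, weight = [], []
for ix in np.unique(np.floor(x/box)):
    for iy in np.unique(np.floor(y/box)):
        sel = (np.floor(x/box) == ix) & (np.floor(y/box) == iy)
        if sel.sum() < 2:
            continue
        d = pa[sel] - pa[sel].mean()         # PA wrapping as in text
        ssd.append(np.sum(d**2))
        weight.append(1.0/np.sum(dpa[sel]**2))
w = np.asarray(weight)
dphi_obs = np.sqrt(np.sum(w*np.asarray(ssd))/np.sum(w))
dphi_int = np.sqrt(dphi_obs**2 - 8.0**2)     # remove 8 deg instrumental
\end{verbatim}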
\subsubsection{Velocity Dispersion}
To estimate the velocity dispersion, we used the C$^{18}$O (J = 3-2) spectral data taken by \citet{gr08} with the JCMT HARP receiver \citep{bu09}. CO and its isotopologues are well mixed with H$_2$ and are commonly used to trace the gas kinematics. The C$^{18}$O (J = 3-2) line, in particular, is expected to trace gas with volume densities up to $\sim 10^5$ cm$^{-3}$ \citep[e.g.,][]{di07}, which is comparable to the densities in our target field. In addition, the C$^{18}$O (J = 3-2) line is likely optically thin in this field \citep{gr08}, and so it traces the kinematics of all the gas in the clump. Therefore, we assumed that the C$^{18}$O (J = 3-2) line width can well represent the gas velocity dispersion in our observed regions.
The C$^{18}$O data reveal at least three velocity components within the central hub, peaked at 3.7, 4.1, and 4.5 km~s$^{-1}$. Because the three components have very similar velocities, and multiple YSOs have been identified in the central hub by \citet{ha08}, the multiple components are possibly structures within the hub, rather than foreground or background components. We performed a multi-component Gaussian fit to estimate the C$^{18}$O (J = 3-2) line width, using the python package PySpecKit \citep{gi11}. We only accepted fitted Gaussian components with amplitudes larger than 5 $\sigma$. To estimate the thermal velocity dispersion, we adopted a gas kinetic temperature ($T_\mathrm{kin}$) of 10$\pm 1$ K, the same as the excitation temperature estimated from the $^{13}$CO (J = 3-2) line in \citet{gr08}, leading to $\sqrt{\frac{k_B T_\mathrm{kin}}{m_{\mathrm{C}^{18}\mathrm{O}}}}=0.05\pm0.01$ km~s$^{-1}$. The thermal velocity dispersion was then removed from the fitted line widths to obtain the non-thermal velocity dispersions via
\begin{equation}\label{eq:vdisp}
\sigma^2_{NT}=\sigma^2_{obs} - \frac{k_B T_\mathrm{kin}}{m_{\mathrm{C}^{18}\mathrm{O}}}
\end{equation}
where $\sigma_{NT}$ is the non-thermal velocity dispersion, $\sigma_{obs}$ is the observed C$^{18}$O Gaussian line width, and $m_{\mathrm{C}^{18}\mathrm{O}}$ is the molecular mass of C$^{18}$O. The inverse-variance weighted means of the $\sigma_{obs}$ and $\sigma_{NT}$ of all velocity components over the central hub were 0.37 and 0.36 km~s$^{-1}$, respectively, and the dispersion of $\sigma_{obs}$ of 0.12 km~s$^{-1}$ across the central hub was used as the uncertainty.
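The thermal correction itself is small; a quick numerical check (CGS constants, molecular mass 30 a.m.u. for C$^{18}$O) is:
\begin{verbatim}
import numpy as np

# Thermal dispersion at T_kin = 10 K for C18O (30 a.m.u.), and the
# quadrature subtraction from the mean observed line width.
k_B, amu = 1.381e-16, 1.661e-24
sigma_th = np.sqrt(k_B*10.0/(30.0*amu))/1e5     # ~0.053 km/s
sigma_NT = np.sqrt(0.37**2 - sigma_th**2)       # ~0.366 -> 0.36 km/s
\end{verbatim}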
\subsubsection{Volume Density}
\citet{jo17} estimated the total masses of the clumps within the IC5146 cloud using JCMT 850 $\mu$m data, assuming a distance of 950 pc. The derived total mass of the central hub was scaled to a distance of $813\pm106$ pc and is listed in \autoref{tab:clump}. We assume that the thickness of the hub is equal to the geometric mean of the observed major and minor axes, obtained from the 2D-Gaussian fit listed in \autoref{tab:clump}, and the uncertainty of the thickness is assumed to be the difference between the observed major and minor axes. The mean volume density of the hub is then estimated using its total mass and ellipsoid volume. The calculated H$_2$ volume density ($n_{H_2}$) for Clump 47 is $(9.8\pm2.4)\times10^5$ cm$^{-3}$. We note that our estimated radius is a lower limit due to the unknown inclination angle ($i$) of the clump, and thus the volume density we estimate here is only an upper limit.
\begin{deluxetable}{cccccc}
\tablecaption{Derived magnetic field Strength of the Clump 47\label{tab:CF}}
\renewcommand{\thetable}{\arabic{table}}
\tablenum{2}
\tablehead{\colhead{ID} & \colhead{$\sigma_{NT}$} & \colhead{$\delta \phi$} & \colhead{$n_{H_2}$} & \colhead{$B_{pos}$} & \colhead{$\lambda$} \\
\colhead{} & \colhead{(km~s$^{-1}$)} & \colhead{(deg)} & \colhead{(cm$^{-3}$)} & \colhead{(mG)} & \colhead{}}
\startdata
47 & $0.36\pm0.12$ & $17.4\pm0.6$ & $(9.8\pm2.4) \times 10^5$ & $0.5\pm0.2$ & $1.3\pm0.4$ \\
\enddata
\tablecomments{The magnetic field strength estimated using DCF method. $\sigma_{NT}$, $\delta \phi$, $n_{H_2}$, $B_{pos}$, and $\lambda$ represent the velocity dispersion, magnetic field angular dispersion, H$_2$ volume density, plane of sky magnetic field strength, and mass-to-flux ratio, respectively. }
{\addtocounter{table}{-1}}
\end{deluxetable}
\subsubsection{Magnetic Field Strength and Mass-to-Flux Ratio}\label{sec:str_m2f}
Using \autoref{eq:CF} and the quantities estimated above (\autoref{tab:CF}), the plane-of-sky magnetic field strength ($B_{pos}$) is estimated to be $0.5\pm0.2$ mG.
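As a cross-check, plugging the tabulated quantities into \autoref{eq:CF} in CGS units reproduces this value; a minimal sketch is:
\begin{verbatim}
import numpy as np

# Numerical check of the DCF estimate for Clump 47 (CGS units),
# using the values listed in Table 2.
m_H, mu = 1.674e-24, 2.8          # hydrogen mass (g), weight per H2
rho = mu * m_H * 9.8e5            # g cm^-3, from n_H2 = 9.8e5 cm^-3
sigma_v = 0.36e5                  # cm s^-1 (0.36 km/s)
dphi = np.radians(17.4)           # intrinsic angular dispersion
Q = 0.5

B_pos = Q * np.sqrt(4.0*np.pi*rho) * sigma_v / dphi
print(B_pos*1e3)                  # ~0.45, i.e. 0.5 mG after rounding
# The same dphi also gives M_A = dphi/Q ~ 0.6 (times sin(theta));
# see the Alfvenic Mach number subsection below.
\end{verbatim}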
To evaluate the relative importance of magnetic fields and gravity in the central hub, we calculate the mass-to-flux critical ratio via
\begin{equation}
\lambda_{obs}=\frac{(M/\Phi)_{obs}}{(M/\Phi)_{cri}},
\end{equation}
where the observed mass-to-flux ratio is
\begin{equation}
(M/\Phi)_{obs}=\frac{\mu m_{H}N_{H_2}}{B_{pos}},
\end{equation}
where $\mu$=2.8 is the mean molecular weight per H$_2$ molecule, and $(M/\Phi)_{cri}$ is the critical mass-to-flux ratio defined as
\begin{equation}
(M/\Phi)_{cri}=\frac{1}{2\pi \sqrt{G}}
\end{equation}
\citep{na78}. Due to the unknown inclination of the clumps, the observed mass-to-flux ratio $\lambda_{obs}$ is also only an upper limit. \citet{cr04} suggested that a statistical average factor of $\frac{1}{3}$ can be used to estimate the real mass-to-flux ratio, accounting for random inclinations of an oblate spheroid core flattened perpendicular to the orientation of the magnetic field. Since we have shown that the central clump is elongated with its major axis perpendicular to the local magnetic field, we adopt a factor of $\frac{1}{3}$ to estimate the mass-to-flux ratio via
\begin{equation}
\lambda=\frac{1}{3} \lambda_{obs}.
\end{equation}
The estimated mass-to-flux ratio for the central clump is $1.3\pm0.4$. The DCF method often tends to overestimate the magnetic field strength, since the effect of integration over the telescope beam and along the line of sight might smooth out part of the magnetic field structure, resulting in an underestimated angular dispersion \citep{he01,os01,cr12}. In addition, our target region has a complicated velocity structure, and therefore the measured velocity dispersion might have contributions from gas accretion or contraction motions rather than only isotropic turbulence, also leading to an overestimated magnetic field strength. Hence, our estimate of the mass-to-flux ratio only represents a lower limit. A mass-to-flux ratio $\ga 1.0$ suggests that the central hub is super-critical, and that magnetic fields and gravity are comparably important on sub-parsec scales.
\subsubsection{Angular Dispersion Function}\label{sec:SF}
\citet{hi09} developed an alternative method to improve the DCF method, using a polarization angular dispersion function to accurately extract the turbulent component from the polarization data. \citet{hou09} further generalized the angular dispersion function method (hereafter ADF method) by including the effect of signal integration along the thickness of the cloud and over the telescope beam. In this section, we test whether or not the magnetic field strength estimated using the ADF method is significantly different from our estimate in \autoref{sec:str_m2f}.
The ADF method assumes that the magnetic fields in clouds are combinations of an ordered large-scale component $B_0$ and a turbulent component $B_t$, and the ratio of these two components determines the intrinsic polarization angular dispersion such that
\begin{equation}
\delta \phi_{intrinsic} = \left[\frac{\langle B_t^2\rangle}{\langle B_0^2\rangle}\right]^{\frac{1}{2}},
\end{equation}
where $\langle...\rangle$ denotes an average. Hence, the DCF equation (\autoref{eq:CF}) can be written as
\begin{equation}
B_{pos}=\sqrt[]{4\pi \rho}~\sigma_v \left[\frac{\langle B_t^2\rangle}{\langle B_0^2\rangle}\right]^{-\frac{1}{2}}.
\end{equation}
The detailed derivation given by \citet{hi09} and \citet{hou09} shows that the ratio of turbulent to magnetic energy can be estimated from the angular dispersion function using the following equation:
\begin{equation}\label{eq:SF}
1-\langle\cos[\Delta\Phi(\ell)]\rangle \simeq \frac{1}{N}\frac{\langle B_t^2\rangle}{\langle B_0^2\rangle}(1-e^{-\ell^2/2(\delta^2+2W^2)})+a\ell^2,
\end{equation}
where $\Delta \Phi (\ell)$ is the difference in the polarization angle measured at two positions separated by a distance $\ell$. The quantities $\delta$ and $a$ are unknown parameters, representing the turbulent correlation length and the first-order Taylor expansion of the large-scale magnetic field structure, respectively. The quantity $W$ is the telescope beam radius, which is 7.3 arcsec at 850 $\mu$m. The quantity $N$ is the number of turbulent cells observed along the line of sight and within the telescope beam, which can be estimated from:
\begin{equation}\label{eq:ncell}
N=\Delta'\frac{\delta^2+2W^2}{\sqrt{2\pi}\delta^3},
\end{equation}
where $\Delta'$ is the effective cloud thickness, which is assumed to be the clump effective radius. Via fitting the above equations to the observed $1-\langle\cos[\Delta\Phi(\ell)]\rangle$ vs. $\ell$ distribution, the three unknown parameters $\delta$, $\frac{\langle B_t^2\rangle}{\langle B_0^2\rangle}$, and $a$ can be derived.
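A least-squares sketch of this fit (in arcsec units) is given below; \texttt{ell} and \texttt{adf} are the binned $\ell$ and $1-\langle\cos[\Delta\Phi(\ell)]\rangle$ values, assumed precomputed, and the initial guesses are ours.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the ADF fit; all lengths in arcsec. W is the beam radius
# and Delta_p the effective cloud thickness (clump effective radius).
W, Delta_p = 7.3, 38.0

def adf_model(ell, delta, ratio, a):
    N = Delta_p*(delta**2 + 2.0*W**2)/(np.sqrt(2.0*np.pi)*delta**3)
    return (ratio/N)*(1.0 - np.exp(-ell**2/(2.0*(delta**2 + 2.0*W**2)))) \
           + a*ell**2

popt, pcov = curve_fit(adf_model, ell, adf, p0=[10.0, 0.3, 1e-5])
delta, ratio, a = popt    # cf. Table 3: 14.1 arcsec, 0.33, 1.5e-5
\end{verbatim}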
We applied the ADF method to estimate the magnetic field strength in Clump 47 using the same selected polarization vectors as in \autoref{sec:ad}. We calculated $\cos[\Delta\Phi(\ell)]$ and $\ell$ for each pair of polarization vectors within Clump 47, and the results were averaged in bins of width $\ell$ = 12 arcsec to estimate the angular dispersion function $1-\langle\cos[\Delta\Phi(\ell)]\rangle$ vs. $\ell$. The calculated angular dispersion function is plotted in \autoref{fig:SF}. We fitted the observed angular dispersion function using \autoref{eq:SF} and \autoref{eq:ncell}, and the best fit parameters are shown in \autoref{tab:SF}. The obtained turbulent-to-magnetic energy ratio $\frac{\langle B_t^2\rangle}{\langle B_0^2\rangle}$ is $0.33\pm0.04$, suggesting that the turbulent magnetic field component is weaker than the ordered large-scale magnetic field. With the previously derived gas velocity dispersion and volume density (\autoref{tab:CF}), the magnetic field strength is estimated to be $0.5\pm0.2$ mG, which is consistent with our estimate using the DCF method ($0.5\pm0.2$ mG) within the uncertainties.
\begin{figure}
\includegraphics[width=\columnwidth]{SF.pdf}
\caption{The angular dispersion function $1-\langle\cos[\Delta\Phi(\ell)]\rangle$ as a function of the distance $\ell$. The mean $\Delta\Phi(\ell)$ was calculated in bins of 12 arcsec. The green dashed line shows the best fit of \autoref{eq:SF} to the data.}\label{fig:SF}
\end{figure}
\begin{table}[]
\caption{Magnetic field Strength of Clump 47 using the ADF method\label{tab:SF}}
\renewcommand{\thetable}{\arabic{table}}
\tablenum{3}
\begin{tabular}{ccccc}
\hline
\hline
\multicolumn{3}{c}{Fit Result} & \multicolumn{2}{c}{Derived Quantities} \\
\cmidrule(lr){1-3} \cmidrule(lr){4-5}
$\delta$ & $\frac{\langle B_t^2\rangle}{\langle B_0^2\rangle}$ & $a$ & $N$ & $B_{pos}$ \\
(arcsec) & & (arcsec$^{-2}$) & & (mG) \\
\hline
$14.1\pm4.2$ & $0.33\pm0.04$ & $(1.5\pm0.3)\times10^{-5}$ & 1.7$\pm$0.5 & $0.5\pm0.2$ \\
\hline
\hline
\end{tabular}
\end{table}
\subsubsection{Alfv\'{e}nic Mach Number}\label{sec:Ma}
The turbulent Alfv\'{e}nic Mach number ($M_A$) describes the relative importance of magnetic fields and turbulence, and hence it is a key factor in most cloud evolution models \citep[e.g.,][]{pa01,na08}. In the sub-Alfv\'{e}nic case ($M_A \le$ 1), magnetic fields are strong enough to regulate turbulence, causing organized magnetic field and cloud structures. In the super-Alfv\'{e}nic case ($M_A >$ 1), the turbulence can efficiently change the magnetic field morphology, and the magnetic field morphology is expected to be random.
The Alfv\'{e}nic Mach number can be estimated from the angular dispersion of the magnetic field if the same assumptions as for the DCF method are made. In doing so, the definition of the Alfv\'{e}nic Mach number
\begin{equation}
M_A= \frac{\sigma_{NT}}{V_A}= \frac{\sigma_{NT}\sqrt{4\pi \rho}}{B}
\end{equation}
can be combined with the equation of the DCF method (\autoref{eq:CF}), yielding
\begin{equation}
M_A= \frac{\delta \phi \cdot \sin\theta}{Q},
\end{equation}
where $\theta$ is the inclination of the magnetic field with respect to the line of sight, so that $B_{pos}=B \sin\theta$. For $Q=0.5$, the obtained magnetic field angular dispersion of 17\degr\ corresponds to an $M_A$ of $0.6\sin\theta$, and hence the central hub is likely sub-Alfv\'{e}nic.
\subsection{Gravitational Stability of Clumps}
In this section, we use virial analysis to investigate whether or not thermal pressure, turbulence, and magnetic fields are sufficient to support the clumps against gravity. If a clump with uniform density is supported only by thermal pressure and turbulence, the virial mass (M$_{vir}$) is
\begin{equation}
M_{vir}=\frac{5R_{eff}}{G}(\sigma^2_{NT}+c_s^2)
\end{equation}
\citep{be92,pi11,liu18}, where $R_{eff}$ is the geometric mean of the major and minor radii, and $c_s=0.19$ km~s$^{-1}$ is the sound speed for a kinetic temperature of 10 K and the adopted mean molecular weight. The virial mass is the maximum mass of a stable clump supported by kinetic and thermal energy. Hence, a clump mass greater than the virial mass, or a virial parameter ($\alpha_{vir}=M_{vir}/M_{clump}$) less than unity, indicates that the clump is gravitationally unstable. We calculated the virial parameter of each clump and list the results in \autoref{tab:clump}. Except for Clumps 43 and 45, most of the clumps have $\alpha_{vir}$ less than unity, suggesting that thermal pressure and turbulence are insufficient to support them against gravity. Hence, these clumps require additional support from magnetic fields to halt gravitational collapse.
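As a rough order-of-magnitude sketch (CGS), the calculation for Clump 47 proceeds as below; with these rounded inputs it returns $\alpha_{vir}\approx0.3$, which agrees with the tabulated $0.2\pm0.1$ only within the sizable $\sigma_{NT}$ uncertainty.
\begin{verbatim}
import numpy as np

# Rough virial-parameter sketch for Clump 47 (CGS); rounded inputs.
G, Msun, AU = 6.674e-8, 1.989e33, 1.496e13
R_eff = 38.0 * 813.0 * AU             # 38 arcsec at 813 pc, in cm
sigma_NT, c_s = 0.36e5, 0.19e5        # cm/s
M_vir = 5.0 * R_eff * (sigma_NT**2 + c_s**2) / G
alpha_vir = (M_vir/Msun) / 85.0       # M_clump = 85 Msun -> ~0.3
\end{verbatim}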
If support from magnetic fields is taken into account, the virial mass of a clump becomes
\begin{equation}
M_{vir}^B=\frac{5R_{eff}}{G}(\sigma^2_{NT}+c_s^2+\frac{V_A^2}{6})
\end{equation}
\citep{be92,pi11,liu18}, where the additional term $\frac{V_A^2}{6}$ stands for the support from magnetic field pressure. We estimated an Alfv\'{e}nic Mach number of $0.6\sin \theta$ for Clump 47 in \autoref{sec:str}, which corresponds to an Alfv\'{e}nic velocity of $0.64/\sin \theta$ km~s$^{-1}$. With support from magnetic fields, the $\alpha_{vir}$ of Clump 47 becomes 0.2--1.0 for $\theta$ of 15\degr--90\degr, and greater than unity if $\theta<15\degr$. Hence, if the direction of the magnetic field is not very close to the line of sight, Clump 47 is likely gravitationally unstable, which is consistent with the existence of YSOs in the central hub \citep{ha08}. In addition, the presence of YSOs in Clump 47 indicates a density structure more complicated than our simple assumption, which could further reduce the virial mass \citep{sa17}, and thus this clump could be even more unstable than we estimate.
\section{Discussion}\label{sec:discussion}
\subsection{The Origin of the Core-Scale HFS}\label{sec:HFS}
In \autoref{sec:results}, we showed that the magnetic field orientations around the HFS have two major components, which tend to be distributed in the northern and southern parts of the system. The two components can be explained by either a curved magnetic field or a foreground/background component overlaid on a uniform magnetic field. Nevertheless, since the C$^{18}$O (J = 3-2) spectral data taken by \citet{gr08} show that all components in the HFS are within a narrow velocity range ($\sim$ 3--5 km~s$^{-1}$), the first possibility is favored, unless the foreground/background component coincidentally has a velocity very similar to that of the HFS.
The curved magnetic field could originate from a uniform large-scale magnetic field dragged by the contraction of the large-scale main filament. The dragging along the major axis of the large-scale filament would cause the single peak in the large-scale magnetic field $PA$ distribution to broaden and split into two peaks, and thus the center of the splitting shown in the histogram ($\sim 15\degr$) is consistent with the orientation perpendicular to the large-scale filament. In addition, the spatial distribution tendency of the two components can also be explained, since the contraction along the major axis would lead to an axisymmetric pattern. The feature that parsec-scale magnetic fields are perpendicular to filaments but modified by core collapse on smaller scales has been found in other filamentary clouds, such as Serpens South \citep{su11}, Orion A \citep{pa17}, and W51 \citep{ko18}.
Supercritical main filaments are expected to fragment along their major axis and trigger star formation \citep[e.g.,][]{an10,po11,mi12,an14,cl16}, which could be a possible origin of the observed HFS. The main filament connected to our observed HFS has a supercritical mass per unit length (152 M$_{\sun}$~pc$^{-1}$, \citealt{ar11}), and the submillimeter clumps identified in \citet{jo17} also indicate that some filament fragmentation has already taken place.
Some theoretical work suggests that the fragmentation of filaments with aspect ratios greater than 5 tends to first begin at their ends, where the edge-driven collapse mode is more efficient than the homologous collapse mode over the whole filament \citep{po11,po12}. In contrast, the centralized collapse mode is more important in shorter filaments with high initial density perturbations or no magnetic support \citep{se15}. The edge-driven collapse mode is consistent with the HFSs found at the ends of the main filament in the IC5146 cloud. In addition, \citet{gr08} found that the gas velocity within the main filament increases from the center to both ends, based on $^{13}$CO and C$^{18}$O line observations. This velocity gradient suggests that the gas within the main filament is likely flowing towards the two massive HFSs at the ends.
The center-to-ends filament fragmentation picture might seem inconsistent with the observed magnetic field morphology, which shows a pattern of end-to-center contraction. The curved magnetic field morphology, however, might be shaped at an early evolutionary stage, when the filament was still contracting and accumulating mass until its density was sufficiently high to trigger fragmentation. In addition, the global end-to-center contraction and the local center-to-end fragmentation could be occurring simultaneously but at different scales, as suggested by hierarchical gravitational fragmentation models \citep{go14,go18}.
\begin{figure}
\includegraphics[width=\columnwidth]{Gomez2018F5.pdf}
\caption{Schematic of the magnetic field within an accretion flow, modeled in \citet{go18}. The magnetic field is bent by the ram pressure of the flow, and eventually reaches a stationary stage in which the ram pressure balances the magnetic tension. The relative strength of the two forces determines the curvature radius and angle, $R_c$ and $\alpha$, via \autoref{eq:ushape}. This figure is adapted from Figure 5 of \citet{go18} with permission.}\label{fig:go18}
\end{figure}
\begin{figure*}
\includegraphics[width=\textwidth]{cartoon.pdf}
\caption{A cartoon illustrating the possible formation scenario of the core-scale HFSs. (a) The parsec-scale filaments first form via contraction and fragmentation along magnetic fields. (b) As the parsec-scale filaments become magnetically and thermally supercritical, the filaments fragment along their major axes, and the most massive components form at the ends of the filaments. At the same time, the magnetic fields are dragged by the filament contraction. (c) The massive fragment at the end of a parsec-scale filament further fragments along the curved magnetic fields, and forms a core-scale HFS with an orientation parallel or perpendicular to the local magnetic field instead of the primordial field.}\label{fig:cartoon}
\end{figure*}
To explore the magnetic field morphology within collapsing clouds, \citet{go18} simulated molecular clouds undergoing global, multi-scale gravitational collapse. In this simulation, the magnetic fields would be dragged by the gravitational contraction, but eventually reach a stationary state in which the ram pressure of the flow balances the magnetic tension. Hence, the model predicts a random magnetic field morphology on parsec scales and a ``U-shape'' magnetic field within the filaments following the equation
\begin{equation}\label{eq:ushape}
\left(\frac{v_l}{v_A}\right)^2=2\sin(2\alpha),
\end{equation}
where $v_l$ is the gas velocity along the filaments, $v_A$ is the Alfv\'{e}nic velocity, and $\alpha$ is the angle between the magnetic field line and the direction perpendicular to the filament, as illustrated in \autoref{fig:go18}. Although the predicted large-scale random magnetic field morphology is inconsistent with the uniform magnetic fields shown by the WLE17 data, a ``U-shape'' magnetic field within the filaments is observed in our POL-2 data, suggesting that this model might become important when the filaments are dense enough.
The observed $\alpha$ is $\sim$30\degr, estimated from the two components shown in the $PA$ histogram (\autoref{fig:hist_2comp}), so $v_l/v_A$ is expected to be 1.3 from \autoref{eq:ushape}. We assume that $v_l$ is approximately the velocity difference along the filament around the central hub. The line-of-sight C$^{18}$O centroid velocity of the central hub (Clump 47) is $\sim 4.1$ km~s$^{-1}$, and the western clump 42 has a centroid velocity of $\sim 3.8$ km~s$^{-1}$. Hence, the velocity difference along the filament between clumps 42 and 47 is $0.3/\cos \phi$ km~s$^{-1}$, where $\phi$ is the inclination angle of the filament with respect to the line of sight. With the Alfv\'{e}nic velocity of $0.64/\sin \theta$ km~s$^{-1}$ estimated in \autoref{sec:str}, the observed $v_l/v_A$ is $\sim 0.5 \sin \theta/\cos \phi$. Due to the unknown inclination angles, we can only speculate that the model expectation holds if the filament is nearly perpendicular to the line of sight ($\phi$ > 67\degr).
Based on our observed magnetic field morphologies, we propose a three-stage scenario to explain the origin of the observed HFS, illustrated in \autoref{fig:cartoon}. In the first stage, large-scale magnetically subcritical filaments form with dynamically important magnetic fields, as described in strong magnetic field filament formation models \citep[e.g.,][]{na08,va14}, and these filaments appear perpendicular to a uniform large-scale magnetic field, as revealed by the WLE17 data. In the second stage, the large-scale filaments accumulate mass via accretion along magnetic field lines or filament mergers \citep[e.g.,][]{li10, an14}, and eventually become magnetically and thermally supercritical. The contraction of the supercritical filaments would bend the uniform primordial magnetic fields, similar to the case in Orion A \citep{pa17}. In the third stage, the dense clumps within filaments, often at the ends of the filaments, tend to fragment along the magnetic fields and form second-generation filaments with hub-filament morphologies, because density perturbations parallel to the magnetic fields grow faster than those perpendicular to the fields \citep[e.g.,][]{na98,va14}. The collapse of the cores within the second-generation filaments is also regulated by the bent magnetic fields, and so the cores are oriented either parallel or perpendicular to the local magnetic fields, as shown in \autoref{fig:papa}, and are less correlated with the primordial magnetic field.
\subsection{The Alignment between Local Magnetic Fields and Clumps}\label{sec:alignclump}
Stars form predominantly in high column density filaments \citep{an10}. Although most filaments are oriented either parallel or perpendicular to the large-scale magnetic fields \citep{li14,pl16}, only a few young stars have been observed with hourglass magnetic field morphologies, which favor a star formation scenario in which the core collapse is regulated by strong magnetic fields \citep[e.g.,][]{gi06,ra09,ta09}. As a counterexample, ALMA polarization observations toward the embedded source Ser-emb 8 show chaotic magnetic fields \citep{hu17}, indicating that this star formed under weak magnetic field conditions. This difference poses the question of how physical scales and environments generally determine the role of magnetic fields in star formation.
To address the role of magnetic fields in star formation, the SMA polarization survey toward massive cores \citep{zh14} revealed that magnetic fields on the core scale (0.1--0.01 pc) are mostly either parallel or perpendicular to the magnetic fields on parsec scales. \citet{li15} further analyzed the magnetic field morphologies in NGC 6334 on scales from 100 pc to 0.01 pc, and found that the local magnetic fields on all these scales are either parallel or perpendicular to the local cloud elongation. Both these results suggest that magnetic fields are dynamically important during the collapse and fragmentation of clouds, possibly guiding the contraction of filaments and cores. \citet{ko14} further used a large sample (50 sources) to examine the bimodal distribution of the relative orientation between the magnetic fields and the density structures, and found that the distribution is more scattered than those in previous surveys, although a bimodal distribution cannot be ruled out.
In \autoref{sec:clump}, we find a tendency for the clumps within the observed HFS to have orientations parallel or perpendicular to the local magnetic fields (on 0.1--0.01 pc scales). The local magnetic fields in many of these clumps, however, have orientations 30--60\degr\ different from the parsec-scale magnetic field, which is inconsistent with the findings of \citet{zh14} and \citet{li15}. The inconsistent cases are mainly clumps within the extended filaments, which follow the orientation of the curved magnetic fields (see \autoref{sec:HFS}). These clumps are much fainter than those in the central hub, which possibly explains why they were missed in previous surveys. Nevertheless, since we still find the orientations of these clumps to be well coupled with the host filaments and the local magnetic fields, our results support the idea that magnetic fields are important in regulating core and filament collapse on spatial scales of 0.01--0.1 pc.
\section{Conclusions}\label{sec:summary}
This paper presents the first-look results of SCUBA-2/POL-2 observations at 850 $\mu$m towards the IC5146 filamentary cloud as part of the BISTRO project. Our observations reveal the magnetic field morphology within a core-scale Hub-Filament Structure (HFS) located at the end of a parsec-scale filament. From the analysis of these data, we find:
\begin{enumerate}
\item The observed polarization fraction decreases with $A_V$ (derived from Stokes $I$) following a power-law with an index of $\approx-0.56$, which suggests that dust grains in this $A_V$ $\sim$20--300 mag range can still be aligned with magnetic fields.
\item The observed polarization map shows that the magnetic field of the HFS on core scales ($\sim 0.05$--$0.5$ pc) is ordered rather than random. The core-scale magnetic field is likely inherited from the larger-scale magnetic field, which has been dragged by contraction along the parsec-scale filament.
\item The submillimeter clumps within the observed core-scale HFS tend to be aligned with local magnetic fields, i.e., they are oriented within 20\degr\ of being either parallel or perpendicular to the local magnetic field direction. This trend may suggest that the core-scale HFS formed after the parsec-scale filament became magnetically supercritical, and that the magnetic fields have been dynamically important during the formation and subsequent evolution of the core-scale HFS.
\item We propose a scenario to explain the formation of the core-scale HFS: the parsec-scale filaments first form under a strong and uniform magnetic field, and start to fragment and locally bend the magnetic field as they become magnetically supercritical. The massive clump, formed at the end of the parsec-scale filament, further fragments under the strong magnetic fields and becomes the core-scale HFS.
\item Using the Davis-Chandrasekhar-Fermi method, the magnetic field strength within the central hub is estimated to be $0.5\pm0.2$ mG, and the mass-to-flux ratio is $1.3\pm0.4$ for D = 813 pc. The Alfv\'{e}nic Mach number estimated using the magnetic field angular dispersion is $<$0.6. These estimates suggest that gravity and magnetic fields are comparably important in the current core-scale HFS, and that turbulence is less important.
\end{enumerate}
\acknowledgments
The James Clerk Maxwell Telescope is operated by the East Asian Observatory on behalf of The National Astronomical Observatory of Japan; Academia Sinica Institute of Astronomy and Astrophysics; the Korea Astronomy and Space Science Institute; the Operation, Maintenance and Upgrading Fund for Astronomical Telescopes and Facility Instruments, budgeted from the Ministry of Finance (MOF) of China and administrated by the Chinese Academy of Sciences (CAS), as well as the National Key R\&D Program of China (No. 2017YFA0402700). Additional funding support is provided by the Science and Technology Facilities Council of the United Kingdom and participating universities in the United Kingdom and Canada. Additional funds for the construction of SCUBA-2 and POL-2 were provided by the Canada Foundation for Innovation. The Starlink software \citep{cu14} is currently supported by the East Asian Observatory. JCMT Project code M16AL004. The authors wish to recognize and acknowledge the very significant cultural role and reverence that the summit of Maunakea has always had within the indigenous Hawaiian community. We are most fortunate to have the opportunity to conduct observations from this mountain. J.W.W., S.P.L., K. P., and C.E. are thankful to the support from the Ministry of Science and Technology (MOST) of Taiwan through the grants MOST 105-2119-M-007-021-MY3, 105-2119-M-007-024, and 106-2119-M-007-021-MY3. J.W.W. is a University Consortium of ALMA--Taiwan (UCAT) graduate fellow supported by the Ministry of Science and Technology (MOST) of Taiwan through the grants MOST 105-2119-M-007-022-MY3. M.K. was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT \& Future Planning (No. NRF-2015R1C1A1A01052160). C.W.L. is supported by the Basic Science Research Program through the National Research Foundation of Korea (NRF), funded by the Ministry of Education, Science and Technology (NRF-2016R1A2B4012593). Woojin Kwon was supported by Basic Science Research Program through the National Research Foundation of Korea (NRF-2016R1C1B2013642). D. J. is supported by the National Research Council of Canada and by an NSERC Discovery Grant.
\label{sec1}
The development of approximate density functional theory (DFT) \cite{Kohn1964,Kohn1965} methods that are able to account for van der Waals (vdW) interactions has attracted a great deal of interest due to its importance in the theoretical description of organic or layered materials, as well as of physical, chemical, and biological processes~\cite{intro1,intro2,intro3,intro4,intro5,RPA1,Haas}. Many encouraging schemes and algorithms have been proposed to include vdW interactions in theoretical simulations based on standard DFT. One of the promising vdW approaches is the vdW density functional (vdW-DF) method, which does not depend on external input parameters and is based directly on the electron density~\cite{llvdw1}. In the vdW-DF method, the exchange-correlation (XC) energy is given as
\begin{equation}
E_{\rm XC} = E_{\rm X}^{\rm GGA} + E_{\rm C}^{\rm LDA} + E_{\rm C}^{\rm nl},
\end{equation}
where $E_{\rm X}^{\rm GGA}$ is the generalized gradient approximation (GGA) to the exchange energy, $E_{\rm C}^{\rm LDA}$ is the local density approximation (LDA) to the correlation energy, and $E_{\rm C}^{\rm nl}$ is the nonlocal electron correlation energy. In the case of the vdW-DF approach, the computational time increases by 50\%{} compared to standard DFT calculations such as LDA and GGA calculations~\cite{timespeed}. Depending on the choice of the exchange functional, there are many vdW-DF methods. Here we consider five vdW-DF functionals: revPBE-vdW~\cite{llvdw1}, rPW86-vdW2~\cite{llvdw2,llvdw3}, optPBE-vdW \cite{optPBE1}, optB88-vdW~\cite{optPBE1}, and optB86b-vdW~\cite{optPBE2}. In addition, there is another widely used vdW approach, the so-called dispersion-corrected DFT-D, in which an atom-pairwise potential is added to a standard DFT result. In the original DFT-D scheme \cite{Grimme2004}, predetermined constant dispersion coefficients are assigned to an element irrespective of its environment. To improve on this, the dispersion coefficients were later modified to vary with the environment of an element. In contrast to the vdW-DF schemes, the DFT-D schemes do not add a significant computational cost compared to the standard DFT calculations. Among the DFT-D schemes, we consider five functionals: DFT-D2~\cite{DFTD2}, DFT-D3~\cite{DFTD3}, DFT-D3(BJ)~\cite{DFTD3BJ}, DFT-TS~\cite{DFTTS}, and DFT-TS-SCS~\cite{DFTTSSCS1,DFTTSSCS2}. The environment-dependent DFT-D3 scheme has zero damping at small interatomic distances, whereas the DFT-D3(BJ) scheme has rational damping to finite values (BJ-damping), as Becke and Johnson proposed. Grimme {\em et al.} suggested that DFT-D3(BJ) performs slightly better than DFT-D3 for noncovalently-bonded material systems~\cite{DFTD3BJ}. In the DFT-TS scheme, the dispersion coefficients are determined by employing a partitioning of the electron density~\cite{DFTTS}. The DFT-TS scheme can be further modified by incorporating self-consistent long-range screening effects \cite{DFTTSSCS1,DFTTSSCS2}; this modified scheme is herein called the DFT-TS-SCS functional. Tkatchenko {\em et al.} reported that the DFT-TS-SCS functional performs better than the DFT-TS functional~\cite{DFTTSSCS1,DFTTSSCS2}. However, despite many vdW studies, an assessment of the performance of the vdW functionals over a broad range of material systems is lacking.
In the present work, we have investigated the equilibrium structural properties (lattice constants, bulk moduli, and cohesive energies) of bulk solids with body-centered cubic (BCC), face-centered cubic (FCC), and diamond (DIA) structures, to assess the performance of various DFT-based vdW functionals. We herein consider the ten vdW functionals implemented in the Vienna {\em Ab-initio} Simulation Package (VASP) code~\cite{VASP1,VASP2,VASP3,VASP4}: revPBE-vdW, rPW86-vdW2, optPBE-vdW, optB88-vdW, optB86b-vdW, DFT-D2, DFT-D3, DFT-D3(BJ), DFT-TS, and DFT-TS-SCS. For comparison, LDA and GGA calculations were also performed. Our calculations show that the five vdW functionals optB86b-vdW, optB88-vdW, optPBE-vdW, DFT-D3, and DFT-D3(BJ) give better performance than the other vdW functionals. Differences among the results from these five vdW functionals are also discussed. These results provide important information for the further development of vdW methods to improve the description of a wide range of material systems.
The paper is organized as follows. In Sec. \ref{Calculation}, the computational method and settings used in this study are briefly described. The results and discussion are presented in Sec. \ref{Results}. Finally, the conclusions are stated in Sec. \ref{Summary}.
\section{Computation method}
\label{Calculation}
We employed the VASP code to perform the DFT calculations, including spin effects for magnetic elements, with various vdW functionals \cite{VASP1,VASP2,VASP3,VASP4}. In this work, we considered ten vdW functionals implemented in VASP: revPBE-vdW \cite{llvdw1}, rPW86-vdW2 \cite{llvdw2,llvdw3}, optPBE-vdW~\cite{optPBE1}, optB88-vdW \cite{optPBE1}, optB86b-vdW~\cite{optPBE2}, DFT-D2 \cite{DFTD2}, DFT-D3 \cite{DFTD3}, DFT-D3(BJ) of Becke-Johnson \cite{DFTD3BJ}, DFT-TS \cite{DFTTS}, and DFT-TS-SCS \cite{DFTTSSCS1,DFTTSSCS2}. For comparison, LDA and GGA calculations were also performed using the Ceperley-Alder~\cite{LDA} and the Perdew-Burke-Ernzerhof (PBE)~\cite{GGA} expressions, respectively, for the exchange-correlation functional. In the case of the DFT-D schemes [DFT-D2, DFT-D3, DFT-D3(BJ), DFT-TS, and DFT-TS-SCS], we used the PBE parameterization of the GGA for the exchange-correlation functional. For the electron-ion interactions, the projector augmented-wave (PAW) method \cite{PAW1,PAW2} was used. We considered 29 elements with BCC, FCC, and diamond structures as the ground state in the bulk phase. In the calculations, the electronic wave functions were expanded in plane waves with an energy cutoff of 700 eV. The {\bf k}-space integration was performed using a $\Gamma$-centered 12$\times$12$\times$12 mesh in the Brillouin zone (BZ) of the primitive cell. The tetrahedron method with Bl\"ochl corrections~\cite{tetra1,tetra2} was used to improve the computational convergence. We performed total energy calculations to obtain the ground-state properties, i.e., the equilibrium lattice constant, the bulk modulus, and the cohesive (atomization) energy. The ground-state properties were determined by fitting the calculated total energy as a function of volume to the Birch-Murnaghan equation of state \cite{Birch1947,Jang2011,Jang2012}. In the fitting, a set of eleven different volumes around the experimental equilibrium volume corresponding to the equilibrium lattice constant was used.
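A minimal SciPy sketch of this fitting step is given below; the arrays \texttt{V} and \texttt{E} (the eleven volume--energy pairs, in \AA$^3$ and eV) are assumed precomputed, and the initial guesses and the FCC-cell example in the last lines are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

# Third-order Birch-Murnaghan equation of state, E(V).
def birch_murnaghan(V, E0, V0, B0, B0p):
    eta = (V0/V)**(2.0/3.0)
    return E0 + 9.0*V0*B0/16.0*((eta - 1.0)**3*B0p
                                + (eta - 1.0)**2*(6.0 - 4.0*eta))

# V, E: eleven volume-energy pairs from the total-energy calculations.
p0 = [E.min(), V[np.argmin(E)], 0.6, 4.0]   # 0.6 eV/A^3 ~ 96 GPa
popt, pcov = curve_fit(birch_murnaghan, V, E, p0=p0)
E0, V0, B0, B0p = popt

B0_GPa = B0 * 160.2177        # eV/A^3 -> GPa
a0 = (4.0*V0)**(1.0/3.0)      # e.g., FCC: primitive cell = a^3/4
\end{verbatim}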
\section{Results and discussion}
\label{Results}
\subsection{Lattice constant}
\label{LatticeConstant}
\begin{table*}[!ht]
\caption{ Equilibrium lattice constants (in \AA) for bulk solids with BCC, FCC, and diamond (DIA) structures using the ten vdW functionals. In addition, standard DFT calculation results using LDA and GGA are also shown. The experimental lattice constants \cite{kittel} are measured at finite temperatures, in contrast to the theoretical lattice constants obtained from ground-state electronic structure calculations at zero temperature. For comparison, the experimental values were corrected to the $T=0$ limit using thermal expansion corrections \cite{csonka} for the solids (denoted with an asterisk) whose zero-point anharmonic expansion values are available. }
\label{table:ALAT}
\noindent \adjustbox{max width=\textwidth}{
\begin{tabular}{ccccccccccccccc}
\hline \hline
Solid & Crystal & revPBE & rPW86 & optB86b & optB88 & optPBE & DFT & DFT & DFT & DFT & DFT & PAW & PAW & Expt. \\
& structure & vdW & vdW2 & vdW & vdW & vdW & D2 & D3 & D3(BJ) & TS & TS-SCS & LDA & GGA & \\
\hline
Li & BCC & 3.453 & 3.393 & 3.457 & 3.435 & 3.442 & 3.270 & 3.374 & 3.329 & 2.607 & 3.122 & 3.367 & 3.439 & 3.449$^{\ast}$ \\
C & DIA & 3.599 & 3.605 & 3.570 & 3.575 & 3.584 & 3.564 & 3.565 & 3.558 & 3.553 & 3.564 & 3.535 & 3.573 & 3.543$^{\ast}$ \\
Na & BCC & 4.214 & 4.135 & 4.176 & 4.152 & 4.178 & 3.980 & 4.161 & 4.078 & 4.131 & 3.852 & 4.055 & 4.193 & 4.210$^{\ast}$ \\
Al & FCC & 4.084 & 4.087 & 4.034 & 4.054 & 4.058 & 4.010 & 4.007 & 3.982 & 3.921 & 4.002 & 3.984 & 4.040 & 4.020$^{\ast}$ \\
Si & DIA & 5.513 & 5.540 & 5.456 & 5.469 & 5.484 & 5.412 & 5.453 & 5.420 & 5.446 & 5.438 & 5.403 & 5.469 & 5.416$^{\ast}$ \\
K & BCC & 5.289 & 5.174 & 5.198 & 5.162 & 5.222 & 5.156 & 5.226 & 5.163 & 4.345 & 4.943 & 5.043 & 5.284 & 5.212$^{\ast}$ \\
Ca & FCC & 5.553 & 5.493 & 5.464 & 5.450 & 5.501 & 5.381 & 5.498 & 5.441 & 5.169 & 5.301 & 5.338 & 5.532 & 5.553$^{\ast}$ \\
V & BCC & 3.007 & 3.026 & 2.958 & 2.969 & 2.982 & 2.970 & 2.938 & 2.932 & 2.917 & 2.954 & 2.912 & 2.978 & 3.030 \\
Cr & BCC & 2.866 & 2.887 & 2.821 & 2.833 & 2.843 & 2.815 & 2.809 & 2.809 & 2.770 & 2.808 & 2.779 & 2.836 & 2.880 \\
Fe & BCC & 2.873 & 2.889 & 2.806 & 2.821 & 2.838 & 2.802 & 2.805 & 2.806 & 2.779 & 2.812 & 2.749 & 2.832 & 2.870 \\
Ni & FCC & 3.572 & 3.609 & 3.488 & 3.511 & 3.529 & 3.459 & 3.476 & 3.477 & 3.424 & 3.489 & 3.422 & 3.518 & 3.520 \\
Cu & FCC & 3.705 & 3.750 & 3.601 & 3.629 & 3.651 & 3.571 & 3.568 & 3.568 & 3.547 & 3.607 & 3.524 & 3.635 & 3.595$^{\ast}$ \\
Ge & DIA & 5.895 & 5.973 & 5.764 & 5.798 & 5.828 & 5.666 & 5.760 & 5.723 & 5.749 & 5.730 & 5.647 & 5.783 & 5.640$^{\ast}$ \\
Rb & BCC & 5.666 & 5.545 & 5.535 & 5.501 & 5.578 & 5.443 & 5.615 & 5.551 & 5.666 & 5.666 & 5.374 & 5.668 & 5.576$^{\ast}$ \\
Sr & FCC & 6.057 & 6.004 & 5.921 & 5.911 & 5.980 & 5.760 & 5.985 & 5.924 & 6.306 & 5.731 & 5.788 & 6.026 & 6.045$^{\ast}$ \\
Nb & BCC & 3.340 & 3.377 & 3.288 & 3.303 & 3.314 & 3.308 & 3.265 & 3.264 & 3.214 & 3.269 & 3.247 & 3.308 & 3.300 \\
Mo & BCC & 3.183 & 3.219 & 3.139 & 3.154 & 3.161 & 3.145 & 3.124 & 3.122 & 3.084 & 3.123 & 3.106 & 3.151 & 3.150 \\
Rh & FCC & 3.879 & 3.942 & 3.803 & 3.829 & 3.841 & 3.773 & 3.786 & 3.788 & 3.767 & 3.806 & 3.752 & 3.824 & 3.793$^{\ast}$ \\
Pd & FCC & 4.012 & 4.083 & 3.905 & 3.938 & 3.957 & 3.888 & 3.886 & 3.889 & 3.912 & 3.921 & 3.841 & 3.942 & 3.875$^{\ast}$ \\
Ag & FCC & 4.242 & 4.313 & 4.091 & 4.130 & 4.163 & 4.131 & 4.073 & 4.072 & 4.068 & 4.117 & 4.003 & 4.147 & 4.056$^{\ast}$ \\
Sn & DIA & 6.764 & 6.876 & 6.593 & 6.637 & 6.677 & 6.505 & 6.612 & 6.571 & 6.538 & 6.577 & 6.478 & 6.652 & 6.490 \\
Cs & BCC & 6.134 & 5.989 & 5.942 & 5.899 & 6.014 & 4.235 & 6.111 & 6.039 & 4.382 & 5.809 & 5.760 & 6.161 & 6.039$^{\ast}$ \\
Ba & BCC & 5.073 & 5.057 & 4.904 & 4.915 & 4.986 & 4.203 & 4.974 & 4.935 & 5.021 & 4.724 & 4.768 & 5.030 & 4.995$^{\ast}$ \\
Ta & BCC & 3.340 & 3.375 & 3.292 & 3.306 & 3.317 & 3.260 & 3.273 & 3.276 & 3.235 & 3.281 & 3.248 & 3.309 & 3.300 \\
W & BCC & 3.203 & 3.239 & 3.163 & 3.178 & 3.184 & 3.106 & 3.147 & 3.148 & 3.123 & 3.154 & 3.129 & 3.172 & 3.160 \\
Ir & FCC & 3.923 & 3.986 & 3.861 & 3.886 & 3.892 & 3.767 & 3.838 & 3.843 & 3.844 & 3.862 & 3.819 & 3.873 & 3.840 \\
Pt & FCC & 4.032 & 4.110 & 3.949 & 3.980 & 3.990 & 3.839 & 3.918 & 3.926 & 3.934 & 3.952 & 3.897 & 3.968 & 3.920 \\
Au & FCC & 4.245 & 4.338 & 4.122 & 4.161 & 4.181 & 3.996 & 4.099 & 4.101 & 4.113 & 4.134 & 4.052 & 4.157 & 4.080 \\
Pb & FCC & 5.134 & 5.233 & 4.972 & 5.018 & 5.053 & 5.072 & 4.971 & 4.948 & 4.949 & 4.997 & 4.875 & 5.031 & 4.902$^{\ast}$ \\
\hline
\end{tabular}
}
\end{table*}
\begin{figure*}[ht!]
\begin{center}
\subfigure{
\label{fig1a}
\includegraphics[width=0.47\textwidth]{ALAT_1}
}
\subfigure{
\label{fig1b}
\includegraphics[width=0.47\textwidth]{ALAT_2}
}
\end{center}
\caption{ Relative errors in the calculated equilibrium lattice constants with respect to the experimental values. Results are shown for the ten vdW functionals and the standard DFT functionals of LDA and GGA. Positive (negative) relative errors indicate lattice constants larger (smaller) than the experimental values. }
\label{fig1}
\end{figure*}
The equilibrium lattice constants calculated with the ten vdW functionals are summarized in Table~\ref{table:ALAT}, and the relative errors in the equilibrium lattice constants with respect to the experimental values are shown in Fig.~\ref{fig1}. For comparison, we also present the results of the standard DFT functionals of LDA and GGA. The standard GGA functional gives relative errors in the range of $\pm 2\%$, while the LDA functional gives shorter equilibrium lattice constants than the other functionals, reflecting the well-known overbinding of atoms in the LDA approach \cite{LDAWEAKPOINT,LDAoverbinding}. Among the vdW functionals, optB86b-vdW, optB88-vdW, optPBE-vdW, and DFT-D3 show relative errors in the range of $\pm 3\%$. The DFT-D3(BJ) functional shows relative errors in the range from $-3\%$ to $+1\%$ for all elements except Li. The DFT-TS and DFT-TS-SCS functionals give results comparable to the other vdW functionals, except for the alkali (Li, Na, K, Cs) and alkali-earth (Ca, Sr, Ba) metals. Between these two functionals, the DFT-TS-SCS scheme with self-consistent screening (SCS) effects performs better than the DFT-TS scheme without them \cite{DFTTSSCS1,DFTTSSCS2}. In the case of revPBE-vdW and rPW86-vdW2, the relative errors range from $-3\%$ to $+6\%$ and are more scattered than those from the vdW functionals of optPBE-vdW, optB88-vdW, optB86b-vdW, DFT-D3, and DFT-D3(BJ). This behavior becomes more pronounced as the atomic number increases.
To further aid our understanding, we discuss the differences among the five vdW results of optPBE-vdW, optB88-vdW, optB86b-vdW, DFT-D3, and DFT-D3(BJ). In the case of DFT-D3 and DFT-D3(BJ), Grimme {\em et\ al.} \cite{DFTD3BJ} reported that the DFT-D3(BJ) functional, with rational damping to finite values at small interatomic distances, performed slightly better than the DFT-D3 functional with zero damping when used for noncovalently-bonded systems. Our calculations show that both schemes give very similar results (see Fig. \ref{fig2}). For alkali and alkali-earth metals, however, the DFT-D3 functional performs much better than the DFT-D3(BJ) functional (see Fig. \ref{fig2}). Among the vdW-DF functionals of optPBE-vdW, optB88-vdW, and optB86b-vdW, the optB86b-vdW functional gives equilibrium lattice constants that are better than or comparable to those from the optPBE-vdW and optB88-vdW functionals (see Fig. \ref{fig2}).
\begin{figure*}[ht!]
\begin{center}
\subfigure{%
\label{fig2a}
\includegraphics[width=0.47\textwidth]{ALAT_LLG}
}%
\subfigure{%
\label{fig2b}
\includegraphics[width=0.47\textwidth]{ALAT_D3G}
}
\end{center}
\caption{ Comparison of the relative errors in the calculated equilibrium lattice constants with respect to the experimental values for the five vdW functionals of optPBE-vdW, optB88-vdW, optB86b-vdW, DFT-D3, and DFT-D3(BJ).}
\label{fig2}
\end{figure*}
\subsection{Bulk modulus}
\label{BulkModulus}
\begin{table*}[!ht]
\caption{Bulk moduli (in GPa) for bulk solids with BCC, FCC, and diamond (DIA) structures using the ten vdW functionals. In addition, standard DFT calculation results using LDA and GGA are also shown. The theoretical values are compared to the experimental data. The experimental bulk moduli \cite{kittel} were corrected to the $T=0$ limit using thermal effects and zero-point phonon effects (ZPPE) \cite{csonka} for the solids (denoted with an asterisk) whose ZPPE values are available. }
\label{table:BM}
\noindent \adjustbox{max width=\textwidth}{
\begin{tabular}{ccccccccccccccc}
\hline \hline
Solid & Crystal & revPBE & rPW86 & optB86b & optB88 & optPBE & DFT & DFT & DFT & DFT & DFT & PAW & PAW & Expt. \\
& structure & vdW & vdW2 & vdW & vdW & vdW & D2 & D3 & D3(BJ) & TS & TS-SCS & LDA & GGA & \\
\hline
Li & BCC & 13.694 & 14.724 & 13.365 & 13.748 & 13.845 & 14.315 & 13.513 & 15.596 & 187.631 & 24.117 & 15.108 & 13.885 & 13.3$^{\ast}$ \\
C & DIA & 404.522 & 396.193 & 432.393 & 426.121 & 418.944 & 437.501 & 433.770 & 439.529 & 439.442 & 434.991 & 463.489 & 430.649 & 443.0 \\
Na & BCC & 7.558 & 8.230 & 7.845 & 8.039 & 7.917 & 8.621 & 8.244 & 8.975 & 14.229 & 14.219 & 9.240 & 7.826 & 7.5$^{\ast}$ \\
Al & FCC & 63.500 & 57.305 & 75.135 & 67.233 & 69.153 & 68.368 & 81.233 & 85.247 & 100.028 & 83.527 & 82.568 & 76.049 & 79.4$^{\ast}$ \\
Si & DIA & 81.566 & 76.461 & 89.842 & 87.250 & 85.647 & 96.468 & 88.212 & 91.276 & 90.158 & 93.356 & 95.410 & 87.711 & 98.8 \\
K & BCC & 3.558 & 3.947 & 3.784 & 3.936 & 3.787 & 3.761 & 4.320 & 4.132 & 70.613 & 5.463 & 4.470 & 3.567 & 3.7$^{\ast}$ \\
Ca & FCC & 16.444 & 16.833 & 17.489 & 17.138 & 17.098 & 17.105 & 15.826 & 17.963 & 15.586 & 20.594 & 18.463 & 17.086 & 18.4$^{\ast}$ \\
V & BCC & 172.466 & 169.806 & 194.718 & 188.971 & 183.512 & 173.953 & 203.049 & 202.604 & 235.407 & 198.016 & 213.713 & 185.675 & 161.9 \\
Cr & BCC & 237.579 & 229.519 & 272.448 & 262.464 & 254.717 & 249.959 & 279.851 & 277.020 & 315.618 & 283.531 & 303.323 & 259.661 & 190.1 \\
Fe & BCC & 147.968 & 154.123 & 202.889 & 192.267 & 176.236 & 177.166 & 213.305 & 197.352 & 242.121 & 194.995 & 247.046 & 174.999 & 168.3 \\
Ni & FCC & 165.299 & 152.039 & 208.651 & 196.380 & 186.375 & 216.252 & 207.194 & 208.723 & 271.665 & 211.216 & 249.627 & 193.570 & 186.0 \\
Cu & FCC & 110.545 & 99.879 & 147.659 & 137.055 & 128.539 & 146.506 & 159.282 & 161.663 & 168.634 & 148.152 & 182.498 & 136.340 & 137.0 \\
Ge & DIA & 48.428 & 41.953 & 60.226 & 56.816 & 54.096 & 68.543 & 59.596 & 62.472 & 64.857 & 65.049 & 71.363 & 58.379 & 77.2 \\
Rb & BCC & 2.812 & 3.119 & 3.060 & 3.192 & 3.028 & 3.004 & 3.308 & 3.143 & 2.772 & 2.772 & 3.570 & 2.770 & 2.9$^{\ast}$ \\
Sr & FCC & 11.014 & 12.123 & 12.461 & 12.744 & 11.840 & 12.945 & 10.281 & 12.195 & 13.501 & 16.295 & 14.315 & 11.312 & 12.4$^{\ast}$ \\
Nb & BCC & 159.232 & 153.666 & 176.423 & 171.920 & 167.693 & 166.781 & 180.698 & 180.745 & 210.983 & 183.804 & 189.481 & 169.206 & 170.2 \\
Mo & BCC & 244.495 & 227.582 & 276.592 & 265.651 & 259.989 & 251.293 & 283.879 & 283.494 & 329.543 & 288.041 & 299.116 & 265.888 & 272.5 \\
Rh & FCC & 219.171 & 191.033 & 273.980 & 255.757 & 245.384 & 267.614 & 279.428 & 279.135 & 289.154 & 268.524 & 314.511 & 254.550 & 270.4 \\
Pd & FCC & 133.434 & 116.972 & 183.377 & 168.781 & 157.180 & 162.033 & 187.936 & 187.733 & 177.399 & 173.192 & 222.962 & 162.581 & 180.8 \\
Ag & FCC & 68.730 & 61.864 & 105.174 & 95.643 & 85.884 & 64.765 & 103.017 & 108.873 & 111.938 & 96.729 & 135.435 & 88.903 & 100.7 \\
Sn & DIA & 30.578 & 26.675 & 38.707 & 36.573 & 34.487 & 39.692 & 38.626 & 39.488 & 44.543 & 39.704 & 44.872 & 35.827 & 53.0 \\
Cs & BCC & 2.019 & 2.268 & 2.173 & 2.294 & 2.167 & 79.424 & 2.254 & 2.128 & 82.969 & 2.921 & 2.458 & 1.937 & 2.1$^{\ast}$ \\
Ba & BCC & 8.906 & 9.597 & 9.595 & 9.933 & 9.349 & 84.987 & 8.264 & 9.097 & 9.425 & 11.487 & 10.361 & 8.768 & 9.3$^{\ast}$ \\
Ta & BCC & 187.810 & 181.116 & 205.534 & 199.998 & 196.462 & 163.345 & 212.634 & 209.379 & 232.166 & 210.435 & 218.981 & 198.963 & 200.0 \\
W & BCC & 286.450 & 267.864 & 318.164 & 306.480 & 301.692 & 296.696 & 329.641 & 326.147 & 357.702 & 325.181 & 340.697 & 309.646 & 323.2 \\
Ir & FCC & 301.726 & 259.177 & 359.776 & 336.899 & 329.294 & 455.665 & 373.098 & 370.375 & 365.836 & 355.695 & 398.543 & 344.228 & 355.0 \\
Pt & FCC & 205.013 & 171.114 & 263.095 & 242.600 & 232.858 & 341.923 & 276.529 & 273.512 & 267.401 & 257.178 & 302.235 & 244.982 & 278.3 \\
Au & FCC & 106.016 & 89.683 & 155.099 & 139.849 & 129.370 & 191.343 & 148.255 & 159.795 & 156.150 & 146.611 & 189.275 & 136.817 & 173.2 \\
Pb & FCC & 34.303 & 31.164 & 44.442 & 41.991 & 39.015 & 25.185 & 42.843 & 44.497 & 41.383 & 40.978 & 52.294 & 39.735 & 46.8$^{\ast}$ \\
\hline
\end{tabular}
}
\end{table*}
\begin{figure*}[ht!]
\begin{center}
\subfigure{%
\label{fig3a}
\includegraphics[width=0.47\textwidth]{BM1}
}%
\subfigure{%
\label{fig3b}
\includegraphics[width=0.47\textwidth]{BM2}
}
\end{center}
\caption{ Relative errors in the calculated bulk moduli with respect to the experimental values. Results are shown for the ten vdW functionals and the standard DFT functionals of LDA and GGA. Positive (negative) relative errors indicate bulk moduli larger (smaller) than the experimental values.}
\label{fig3}
\end{figure*}
The bulk moduli calculated with the ten vdW functionals are presented in Table \ref{table:BM}, and the relative errors in the calculated bulk moduli with respect to the experimental values are shown in Fig.~\ref{fig3}. In general, the trend in the bulk moduli mirrors the behavior of the equilibrium lattice constants: Fig. \ref{fig3} clearly shows that smaller equilibrium lattice constants lead to larger bulk moduli. As expected from the calculated equilibrium lattice constants, the DFT-TS and DFT-TS-SCS vdW functionals show worse results for the alkali metals (Li, Na, K, Cs), and the DFT-D2 functional gives relative errors exceeding 100\%{} for Cs and Ba. In the case of the vdW-DF methods, for most of the elements considered herein, revPBE-vdW and rPW86-vdW2 give worse results than the other vdW-DF functionals (optB86b-vdW, optB88-vdW, and optPBE-vdW).
Next we discuss the differences among the five vdW results of optB86b-vdW, optB88-vdW, optPBE-vdW, DFT-D3, and DFT-D3(BJ). In the case of the DFT-D3 and DFT-D3(BJ) functionals, as expected from the calculated equilibrium lattice constants, the DFT-D3 functional shows results comparable to or better than those of the DFT-D3(BJ) functional (see Fig. \ref{fig4}). The better performance of DFT-D3 is observed for alkali and alkali-earth metals. In the case of optB86b-vdW, optB88-vdW, and optPBE-vdW, the calculated results show very similar behavior, and the optB86b-vdW functional performs better for elements with large atomic numbers (see Fig. \ref{fig4}).
\begin{figure*}[ht!]
\begin{center}
\subfigure{%
\label{fig4a}
\includegraphics[width=0.47\textwidth]{BM_LLG}
}%
\subfigure{%
\label{fig4b}
\includegraphics[width=0.47\textwidth]{BM_D3G}
}
\end{center}
\caption{ Comparison of the relative errors in the calculated bulk moduli with respect to the experimental values for the five vdW functionals of optB86b-vdW, optB88-vdW, optPBE-vdW, DFT-D3, and DFT-D3(BJ). }
\label{fig4}
\end{figure*}
\subsection{Cohesive energy}
\label{CohesiveEnergy}
\begin{table*}
\caption{Cohesive energies (in eV/atom) for bulk solids with BCC, FCC, and DIA crystal structures using the ten vdW functionals. In addition, standard DFT calculation results using the LDA and the GGA are also shown. The theoretical values are compared to the experimental values. All the experimental cohesive energies \cite{kittel} were corrected by the zero-point vibration energy $E_{\rm ZPV}$ calculated using the Debye temperature \cite{csonka}. }
\label{ECOH}
\noindent \adjustbox{max width=\textwidth}{
\begin{tabular}{ccccccccccccccc}
\hline \hline
Solid & Crystal & revPBE & rPW86 & optB86b & optB88 & optPBE & DFT & DFT & DFT & DFT & DFT & PAW & PAW & Expt. \\
& structure & vdW & vdW2 & vdW & vdW & vdW & D2 & D3 & D3(BJ) & TS & TS-SCS & LDA & GGA & \\
\hline
Li & BCC & 1.540 & 1.490 & 1.640 & 1.586 & 1.622 & 1.732 & 1.707 & 1.782 & 3.873 & 2.625 & 1.808 & 1.605 & 1.663 \\
C & DIA & 7.323 & 7.208 & 8.034 & 7.891 & 7.732 & 8.065 & 7.959 & 8.043 & 8.053 & 8.012 & 8.998 & 7.851 & 7.586 \\
Na & BCC & 1.035 & 0.929 & 1.130 & 1.070 & 1.115 & 1.225 & 1.246 & 1.249 & 2.576 & 1.902 & 1.258 & 1.088 & 1.128 \\
Al & FCC & 3.004 & 2.596 & 3.643 & 3.379 & 3.339 & 3.800 & 3.709 & 3.888 & 4.137 & 3.879 & 4.052 & 3.544 & 3.431 \\
Si & DIA & 4.369 & 4.167 & 4.878 & 4.744 & 4.656 & 4.839 & 4.761 & 4.926 & 4.899 & 4.880 & 5.344 & 4.614 & 4.693 \\
K & BCC & 0.851 & 0.758 & 0.935 & 0.890 & 0.926 & 0.929 & 0.997 & 0.986 & 4.196 & 1.438 & 1.022 & 0.869 & 0.943 \\
Ca & FCC & 1.627 & 1.402 & 2.001 & 1.875 & 1.832 & 2.086 & 2.054 & 2.137 & 2.514 & 2.798 & 2.204 & 1.903 & 1.862 \\
V & BCC & 5.075 & 5.094 & 5.969 & 5.736 & 5.407 & 5.795 & 5.954 & 6.123 & 6.613 & 6.113 & 6.809 & 5.413 & 5.347 \\
Cr & BCC & 3.845 & 3.906 & 4.717 & 4.483 & 4.355 & 4.471 & 4.482 & 4.559 & 5.436 & 5.054 & 5.728 & 4.050 & 4.161 \\
Fe & BCC & 4.521 & 4.074 & 5.560 & 5.111 & 5.065 & 5.585 & 5.474 & 5.524 & 6.075 & 5.628 & 6.511 & 5.161 & 4.326 \\
Ni & FCC & 4.154 & 4.050 & 5.324 & 4.978 & 4.691 & 5.275 & 5.199 & 5.268 & 5.788 & 5.332 & 6.063 & 4.797 & 4.484 \\
Cu & FCC & 2.975 & 2.868 & 3.755 & 3.572 & 3.399 & 3.901 & 3.994 & 4.075 & 4.092 & 3.840 & 4.521 & 3.485 & 3.523 \\
Ge & DIA & 3.483 & 3.444 & 4.018 & 3.921 & 3.787 & 4.061 & 3.906 & 4.025 & 4.047 & 4.006 & 4.611 & 3.742 & 3.881 \\
Rb & BCC & 0.778 & 0.698 & 0.862 & 0.826 & 0.853 & 0.868 & 0.902 & 0.879 & 0.775 & 0.775 & 0.931 & 0.774 & 0.857 \\
Sr & FCC & 1.364 & 1.130 & 1.736 & 1.607 & 1.566 & 1.873 & 1.759 & 1.810 & 2.495 & 2.515 & 1.894 & 1.609 & 1.734 \\
Nb & BCC & 6.797 & 6.613 & 7.767 & 7.494 & 7.345 & 7.697 & 7.617 & 7.804 & 8.458 & 8.322 & 8.636 & 7.056 & 7.597 \\
Mo & BCC & 6.148 & 5.966 & 7.257 & 6.954 & 6.770 & 8.411 & 6.900 & 8.487 & 7.845 & 7.630 & 8.310 & 7.763 & 6.864 \\
Rh & FCC & 5.400 & 5.093 & 6.657 & 6.341 & 6.057 & 6.720 & 6.598 & 6.667 & 6.818 & 6.528 & 7.622 & 6.021 & 5.797 \\
Pd & FCC & 3.254 & 3.209 & 4.248 & 4.038 & 3.785 & 4.371 & 4.319 & 4.399 & 4.009 & 3.967 & 5.058 & 3.743 & 3.917 \\
Ag & FCC & 2.216 & 2.206 & 2.960 & 2.821 & 2.625 & 3.084 & 3.008 & 3.098 & 2.966 & 2.876 & 3.631 & 2.519 & 2.972 \\
Sn & DIA & 3.025 & 3.007 & 3.512 & 3.437 & 3.307 & 3.529 & 3.378 & 3.488 & 3.515 & 3.428 & 4.001 & 3.199 & 3.159 \\
Cs & BCC & 0.743 & 0.679 & 0.832 & 0.656 & 0.820 & 4.009 & 0.824 & 0.805 & 5.375 & 1.148 & 0.883 & 0.715 & 0.808 \\
Ba & BCC & 1.689 & 1.520 & 2.087 & 1.995 & 1.904 & 4.881 & 1.994 & 2.079 & 3.130 & 2.858 & 2.248 & 1.876 & 1.911 \\
Ta & BCC & 7.645 & 7.220 & 8.878 & 8.499 & 8.285 & 9.809 & 9.008 & 9.080 & 9.562 & 9.424 & 9.807 & 8.411 & 8.123 \\
W & BCC & 8.152 & 8.002 & 9.322 & 9.013 & 8.822 & 10.067 & 9.107 & 9.182 & 9.662 & 9.412 & 10.381 & 8.483 & 8.939 \\
Ir & FCC & 6.489 & 5.767 & 8.091 & 7.602 & 7.309 & 9.257 & 7.990 & 8.006 & 7.838 & 7.639 & 9.132 & 7.282 & 6.981 \\
Pt & FCC & 5.006 & 4.729 & 6.185 & 5.900 & 5.597 & 7.373 & 6.342 & 6.342 & 6.078 & 5.929 & 7.112 & 5.578 & 5.863 \\
Au & FCC & 2.673 & 2.619 & 3.601 & 3.403 & 3.175 & 4.523 & 3.691 & 3.679 & 3.408 & 3.319 & 4.309 & 3.035 & 3.826 \\
Pb & FCC & 2.866 & 2.868 & 3.410 & 3.319 & 3.180 & 3.507 & 3.219 & 3.384 & 3.288 & 3.259 & 3.831 & 2.984 & 2.040 \\
\hline
\end{tabular}
}
\end{table*}
\begin{figure*}[ht!]
\begin{center}
\subfigure{%
\label{fig5a}
\includegraphics[width=0.47\textwidth]{ECOH_1}
}%
\subfigure{%
\label{fig5b}
\includegraphics[width=0.47\textwidth]{ECOH_2}
}
\end{center}
\caption{ Relative errors in the calculated cohesive energies with respect to the experimental values. Results are shown for the ten vdW functionals and the standard DFT functionals of LDA and GGA. Positive (negative) relative errors indicate cohesive energies larger (smaller) than the experimental values.}
\label{fig5}
\end{figure*}
\begin{figure*}[ht!]
\begin{center}
\subfigure{%
\label{fig6a}
\includegraphics[width=0.47\textwidth]{ECOH_LLG}
}%
\subfigure{%
\label{fig6b}
\includegraphics[width=0.47\textwidth]{ECOH_D3G}
}
\end{center}
\caption{ Comparison of the relative errors in the calculated cohesive energies with respect to the experimental values for the five vdW functionals of optB86b-vdW, optB88-vdW, optPBE-vdW, DFT-D3, and DFT-D3(BJ). }
\label{fig6}
\end{figure*}
The cohesive energies calculated with the ten vdW functionals are summarized in Table \ref{ECOH}, and the relative errors in the cohesive energies with respect to the experimental values are presented in Fig. \ref{fig5}. The cohesive energies $E_{\rm coh}$ are calculated using the following equation:
\begin{equation}
E_{\rm coh}=(n E_{\rm atom}-E_{\rm tot})/n,
\end{equation}
where $E_{\rm tot}$ is the total energy of the atoms in the primitive unit cell at equilibrium, $E_{\rm atom}$ is the total energy of an isolated (free) spin-polarized atom, and $n$ is the number of atoms in the primitive unit cell. The experimental cohesive energies were corrected by the zero-point vibration energy $E_{\rm ZPV}$ calculated using the Debye temperature $\Theta_{\rm D}$, $E_{\rm ZPV} = (9/8) k_{\rm B} \Theta_{\rm D}$ \cite{csonka}.
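As a small worked example of this correction (with an illustrative Debye temperature of roughly diamond's magnitude rather than a tabulated input of this work), the following Python sketch applies $E_{\rm ZPV}$ to a hypothetical experimental cohesive energy:
\begin{verbatim}
# A small worked example of the zero-point-vibration correction
# E_ZPV = (9/8) k_B * Theta_D applied to an experimental cohesive energy.
K_B = 8.617333262e-5  # Boltzmann constant in eV/K

def zpv_correction(theta_debye_K):
    """Zero-point vibration energy per atom, E_ZPV = (9/8) k_B Theta_D."""
    return 9.0 / 8.0 * K_B * theta_debye_K

# Illustrative numbers: Theta_D = 2230 K (roughly diamond's value) gives
# E_ZPV ~ 0.216 eV/atom, which is added to the measured cohesive energy.
e_coh_raw = 7.37                  # hypothetical experimental value, eV/atom
print(f"corrected E_coh = {e_coh_raw + zpv_correction(2230.0):.3f} eV/atom")
\end{verbatim}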
For Pb, the relative errors in the calculated cohesive energies exceed 40\%{} for all the vdW functionals. In the case of the DFT-TS and DFT-TS-SCS functionals, poor performance is observed for the alkali metals (Li, Na, K, Cs). The DFT-D2 functional shows poor performance for Cs and Ba. In the case of the vdW-DF functionals, the revPBE-vdW and rPW86-vdW2 functionals give lower cohesive energies than the other vdW-DF functionals.
Next we discuss the differences among the five vdW results of optB86b-vdW, optB88-vdW, optPBE-vdW, DFT-D3, and DFT-D3(BJ). The DFT-D3 functional shows performance comparable to or better than that of DFT-D3(BJ) (see Fig. \ref{fig6}). In the case of the vdW-DF functionals, the optB86b-vdW, optB88-vdW, and optPBE-vdW functionals show very similar results, although the optB86b-vdW functional gives higher cohesive energies than the optB88-vdW and optPBE-vdW functionals (see Fig. \ref{fig6}).
\section{Summary}
\label{Summary}
In summary, we have investigated the lattice constants, bulk moduli, and cohesive energies of the bulk solids of 29 elements at equilibrium, using various vdW functionals based on the DFT as implemented in the VASP code. The assessed vdW functionals fall into two groups. One group comprises the vdW-DF functionals constructed by a proper choice of exchange functional, and the other comprises the functionals of the dispersion-corrected DFT-D approach, in which an atom-pairwise potential is added to a standard DFT result. The DFT-TS and DFT-TS-SCS functionals showed relatively poor performance for alkali and alkali-earth metals. Note that in the case of the DFT-TS and DFT-TS-SCS functionals, effective atomic volumes are used to calculate the dispersion coefficients: the electron density is partitioned among the atoms in a molecule or solid, and the result is then used to scale the dispersion coefficient with reference to the corresponding value for a free atom. Our calculations suggest that this partitioning of the electron density into effective atomic volumes may not be sufficiently accurate for delocalized alkali and alkali-earth metals. We obtained the general trend that the vdW functionals with optimized exchange functionals (optB86b-vdW, optB88-vdW, and optPBE-vdW) and the DFT-D vdW functionals [DFT-D3 and DFT-D3(BJ)] give better results than the original revPBE-vdW and rPW86-vdW2 functionals. To further aid our understanding, we also discussed the differences among the vdW results of optB86b-vdW, optB88-vdW, optPBE-vdW, DFT-D3, and DFT-D3(BJ). These five vdW functionals showed very similar results. The DFT-D3 functional with zero damping gave performance comparable to or better than that of DFT-D3(BJ) with BJ-damping. Among the vdW functionals with optimized exchange functionals, optB86b-vdW showed slightly better performance in the equilibrium lattice constants and bulk moduli than optB88-vdW and optPBE-vdW. For the cohesive energies, the optB86b-vdW, optB88-vdW, and optPBE-vdW functionals showed very similar results, with smaller variation than the original vdW-DF methods of revPBE-vdW and rPW86-vdW2 and the standard LDA method. The results presented in this study provide fundamental information on how the various vdW functionals perform for the selected solid elements, including alkali, alkali-earth, and transition metals, with BCC, FCC, and diamond ground-state structures.
\section{Acknowledgments}
This research was supported by Nano Material Technology Development Program (2012M3A7B4049888) through the National Research Foundation of Korea (NRF) funded by the Ministry of Science, ICT and Future Planning (MSIP), and Priority Research Center Program (2010-0020207) through NRF funded by the Ministry of Education (MOE). Calculations were performed by using the supercomputing resources (KSC-2014-C1-002) of the Korea Institute of Science and Technology Information (KISTI) and Korea Research Environment Open NETwork (KREONET), and the Partnership \& Leadership for the nationwide Supercomputing Infrastructure (PLSI).
\bibliographystyle{elsarticle-num}
\nocite{*}
In this work we give the first treatment of two basic problems in the SQ model: high-dimensional mean estimation and stochastic convex optimization. In the process, we demonstrate new connections of these problems to concepts and tools from convex geometry, optimization with approximate oracles, and compressed sensing.
Our results provide detailed (but by no means exhaustive) answers to some of the most basic questions about these problems. At a high level our findings can be summarized as ``estimation complexity of polynomial-time SQ algorithms behaves like sample complexity" for many natural settings of those problems. This correspondence should not, however, be taken for granted. In many cases the SQ version requires a completely different algorithm and for some problems we have not been able to provide upper bounds that match the sample complexity (see below).
Given the fundamental role that SQ model plays in a variety of settings, our primary motivation and focus is understanding of the SQ complexity of these basic tasks for its own sake. At the same time our results lead to numerous applications among which are new strong lower bounds for convex relaxations and results that subsume and improve on recent work that required substantial technical effort.
As usual when exploring uncharted territory, some of the most useful results can be proved relatively easily given the wealth of existing literature on related topics. Still for many questions, new insights and analyses were necessary (such as the characterization of the complexity of mean estimation for all $q\in [1,\infty)$) and we believe that those will prove useful in further research on the SQ model and its applications. There were also many interesting questions that we encountered but were not able to answer. We list some of those below:
\begin{enumerate}
\item How many samples are necessary and sufficient for answering the queries of our adaptive algorithms, such as those based on inexact mirror descent? The answer to this question should shed new light on the power of adaptivity in statistical data analysis \cite{DworkFHPRR14:arxiv}.
\item Is there an SQ equivalent of the upper bounds on the sample complexity of mean estimation for uniformly smooth norms (see App.~\ref{sec:Samples} for details)? Such a result would give a purely geometric characterization of the estimation complexity of mean estimation.
\item In the absence of a general technique like the one above, there are still many important norms we have not addressed. Most notably, we do not know the estimation complexity of mean estimation in the spectral norm of a matrix (or in other Schatten norms).
\item Is there an efficient algorithm for mean estimation (or at least linear optimization) with estimation complexity of $O(d/\eps^2)$ for which a membership oracle for $\K$ suffices (our current algorithm is efficient only for a fixed $\K$ as it assumes knowledge of John's ellipsoid for $\K$).
\end{enumerate}
\section*{Acknowledgements}
We thank Arkadi Nemirovski, Sasha Rakhlin, Ohad Shamir and Karthik Sridharan for discussions and valuable suggestions about this work.
\subsection{Local Differential Privacy}
\label{sec:app-dp-local}
We now exploit the simulation of SQ algorithms by locally differentially private (LDP) algorithms \cite{KasiviswanathanLNRS11} to obtain new LDP mean estimation and optimization algorithms.
We first recall the definition of local differential privacy. In this model it is assumed that each data sample obtained by an analyst is randomized in a differentially private way.
\begin{definition}
An $\alpha$-local randomizer $R:\W \rightarrow \Z$ is a randomized algorithm that satisfies $\forall w\in \W$ and $z_1,z_2\in \Z$,
$\pr[R(w) = z_1] \leq e^\alpha \pr[R(w) = z_2]$. An ${\mathrm{LR}}_D$ oracle for distribution $D$ over $\W$ takes as an input a local randomizer $R$ and outputs a random value $z$ obtained by first choosing a random sample $w$ from $D$ and then outputting $R(w)$. An algorithm is $\alpha$-local if it uses access only to ${\mathrm{LR}}_D$ oracle. Further, if the algorithm uses $n$ samples such that sample $i$ is obtained from $\alpha_i$-randomizer $R_i$ then $\sum_{i\in [n]} \alpha_i \leq \alpha$.
\end{definition}
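For concreteness, the following Python sketch (standard binary randomized response, included purely as an illustration and not code from the works cited here) implements an $\alpha$-local randomizer for a $\{0,1\}$-valued query $\phi$, together with the debiased mean estimate that underlies the simulation of statistical queries discussed below:
\begin{verbatim}
# Standard binary randomized response: an alpha-local randomizer for a
# {0,1}-valued query phi, and the debiased estimate of E[phi(w)] from the
# randomized reports (this is the mechanism behind simulating STAT(tau)).
import math
import random

def randomize(bit, alpha):
    """Report the true bit with probability e^a/(e^a + 1), else flip it;
    the output probability ratio is p/(1-p) = e^alpha, so this is an
    alpha-local randomizer in the sense of the definition above."""
    p_keep = math.exp(alpha) / (math.exp(alpha) + 1.0)
    return bit if random.random() < p_keep else 1 - bit

def estimate_mean(reports, alpha):
    """Debias: E[report] = (1-p) + mu*(2p-1), so solve for mu."""
    p = math.exp(alpha) / (math.exp(alpha) + 1.0)
    avg = sum(reports) / len(reports)
    return (avg - (1.0 - p)) / (2.0 * p - 1.0)

# Usage: n users, each holding phi(w_i) in {0,1} with true mean 0.3.
alpha, n = 0.5, 100000
bits = [1 if random.random() < 0.3 else 0 for _ in range(n)]
reports = [randomize(b, alpha) for b in bits]
print(estimate_mean(reports, alpha))  # ~0.3 up to O(1/(alpha*sqrt(n)))
\end{verbatim}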
The composition properties of differential privacy imply that an $\alpha$-local algorithm is $\alpha$-differentially private \cite{DworkMNS:06}.
\citenames{Kasiviswanathan \etal}{KasiviswanathanLNRS11} show that one can simulate the ${\mbox{STAT}}_D(\tau)$ oracle with success probability $1-\delta$ by an $\alpha$-local algorithm using $n=O(\log(1/\delta)/(\alpha\tau)^2)$ samples from the ${\mathrm{LR}}_D$ oracle. This has the following implication for simulating SQ algorithms.
\begin{theorem}[\cite{KasiviswanathanLNRS11}]
\label{thm:sq-2-LDP}
Let $\A_{SQ}$ be an algorithm that makes at most $t$ queries to ${\mbox{STAT}}_D(\tau)$. Then for every $\alpha >0$ and $\delta >0$ there is an $\alpha$-local algorithm $\A$ that uses $n = O(t\log(t/\delta)/(\alpha\tau^2))$ samples from ${\mathrm{LR}}_D$ oracle and produces the same output as $\A_{SQ}$ (for some answers of ${\mbox{STAT}}_D(\tau)$) with probability at least $1-\delta$.
\end{theorem}
\citenames{Kasiviswanathan \etal}{KasiviswanathanLNRS11} also prove a converse of this theorem that uses $n$ queries to ${\mbox{STAT}}(\Theta(e^{2\alpha}\delta/n))$ to simulate $n$ samples of an $\alpha$-local algorithm with probability $1-\delta$. The high accuracy requirement of this simulation implies that it is unlikely to give a useful SQ algorithm from an LDP algorithm.
\paragraph{Mean estimation:}
\citenames{Duchi \etal}{DuchiJW:13focs} give $\alpha$-local algorithms for $\ell_2$ mean estimation using $O(d/(\varepsilon\alpha)^2)$ samples and for $\ell_{\infty}$ mean estimation using $O(d\log d/(\varepsilon\alpha)^2)$ samples (their bounds are for the expected error $\eps$, but we can equivalently treat them as ensuring error $\eps$ with probability at least $2/3$). They also prove that these bounds are tight.
We observe that a direct combination of Thm.~\ref{thm:sq-2-LDP} with our mean estimation algorithms implies algorithms with nearly the same sample complexity (up to constants for $q=\infty$ and up to an $O(\log d)$ factor for $q=2$). In addition, we can just as easily obtain mean estimation results for other norms; for example, we can fill in the $q \in (2,\infty)$ regime.
\begin{corollary}
For every $\alpha$ and $q \in [2,\infty]$ there is an $\alpha$-local algorithm for $\ell_q$ mean estimation with error $\eps$ and success probability of at least $2/3$ that uses $n$ samples from ${\mathrm{LR}}_D$ where:
\begin{itemize}
\item For $q=2$ and $q = \infty$, $n = O(d\log d/(\alpha\eps)^2)$.
\item For $q\in (2,\infty)$, $n = O(d\log^2 d/(\alpha\eps)^2)$.
\end{itemize}
\end{corollary}
\paragraph{Convex optimization:}
\citenames{Duchi \etal}{DuchiJW14} give locally private versions of the mirror-descent algorithm for the $\ell_1$ setup and of gradient descent for the $\ell_2$ setup. Their algorithms achieve the guarantees of the (non-private) stochastic versions of these algorithms at the expense of using $O(d/\alpha^2)$ times more samples. For example, for mirror-descent over $\B_1^d$ the bound is $O(d\log d(RW/\varepsilon\alpha)^2)$ samples. An $\alpha$-local simulation of our algorithms from Sec.~\ref{sec:gradient} can be used to obtain $\alpha$-local algorithms for these problems. However, such a simulation incurs an additional factor corresponding to the number of iterations of the algorithm. For example, for mirror-descent in the $\ell_1$ setup we would obtain an $O(d\log d /\alpha^2 \cdot (RW/\varepsilon)^4)$ bound. At the same time, our results in Sec.~\ref{sec:gradient} and Sec.~\ref{sec:range} are substantially more general. In particular, our center-of-gravity-based algorithm (Thm.~\ref{thm:cog-sq-efficient}) gives the first $\alpha$-local algorithm for the non-Lipschitz setting.
\begin{corollary}
\label{cor:opt-ldp}
Let $\alpha >0,\eps >0$. There is an $\alpha$-local algorithm that for any convex body $\K$ given by a membership oracle with the guarantee that $\B_2^d(R_0) \subseteq \K \subseteq \B_2^d(R_1)$ and any convex program $\min_{x \in \K} \E_{{\mathbf w} \sim D}[f(x,{\mathbf w})]$ in $\R^d$, where $\forall w$, $f(\cdot,w) \in \F(\K,B)$, with probability at least $2/3$, outputs an $\eps$-optimal solution to the program in time $\poly(d, \frac{B}{\alpha \eps}, \log{(R_1/R_0)})$ and using $n = \tilde{O}(d^4 B^2/(\eps^2 \alpha^2))$ samples from ${\mathrm{LR}}_D$.
\end{corollary}
We note that a closely related application is also discussed in \cite{BelloniLNR15}. It relies on the random walk-based approximate value oracle optimization algorithm similar to the one we outlined in Sec.~\ref{sec:random-walk}. Known optimization algorithms that use only the approximate value oracle require a substantially larger number of queries than our algorithm in Thm.~\ref{thm:cog-sq-efficient} and hence need a substantially larger number of samples to implement (specifically, for the setting in Cor.~\ref{cor:opt-ldp}, $n = \tilde{O}(d^{6.5} B^2/(\eps^2 \alpha^2))$ is implied by the algorithm given in \cite{BelloniLNR15}).
\subsection{Differentially Private Answering of Convex Minimization Queries}
\label{sec:app-dp-queries}
An additional implication in the context of differentially private data analysis concerns the problem of releasing answers to convex minimization queries over a single dataset, recently studied by \citenames{Ullman}{Ullman15}. For a dataset $S = (w^i)_{i=1}^n \in \W^n$, a convex set $\K \subseteq \R^d$ and a family of convex functions $\F = \{f(\cdot,w)\}_{w\in \W}$ over $\K$, let $q_f(S) \doteq \min_{x\in \K} \frac{1}{n} \sum_{i\in [n]} f(x,w^i)$. \citenames{Ullman}{Ullman15} considers the question of how to answer sequences of such queries $\eps$-approximately (that is, by a point $\tilde{x}$ such that $\frac{1}{n} \sum_{i\in [n]} f(\tilde{x},w^i) \leq q_f(S) + \eps$).
We make a simple observation that our algorithms can be used to reduce answering of such queries to answering of counting queries. A
{\em counting} query for a data set $S$, query function $\phi: \W \rar [0,1]$ and accuracy $\tau$ returns a value $v$ such that $|v-\frac{1}{n}\sum_{i\in [n]} \phi(w^i)| \leq \tau$. A long line of research in differential privacy has considered the question of answering counting queries (see \cite{DworkRoth:14} for an overview). In particular, \citenames{Hardt and Rothblum}{HardtR10} prove that given a dataset of size $n \geq n_0 = O(\sqrt{\log(|\W|)\log(1/\beta)}\cdot \log t/(\alpha\tau^2))$ it is possible to $(\alpha,\beta)$-differentially privately answer any sequence of $t$ counting queries with accuracy $\tau$ (and success probability $\geq 2/3$).
Note that a convex minimization query is equivalent to a stochastic optimization problem when $D$ is the uniform distribution over the elements of $S$ (denote it by $U_S$). Further, a $\tau$-accurate counting query is exactly a statistical query for $D=U_S$. Therefore our SQ algorithms can be seen as reductions from convex minimization queries to counting queries. Thus to answer $t$ convex minimization queries with accuracy $\eps$ we can use the algorithm for answering $t' = t m(\eps)$ counting queries with accuracy $\tau(\eps)$, where $m(\eps)$ is the number of queries to ${\mbox{STAT}}(\tau(\eps))$ needed to solve the corresponding stochastic convex minimization problems with accuracy $\eps$. The sample complexity of the algorithm for answering counting queries in \cite{HardtR10} depends only logarithmically on $t$. As a result, the additional price for such implementation is relatively small since such algorithms are usually considered in the setting where $t$ is large and $\log|\W| = \Theta(d)$.
Hence the counting query algorithm in \cite{HardtR10}, together with the results in Corollary \ref{cor:solve_cvx_ellp}, immediately implies an algorithm for answering such queries that quantitatively strengthens and generalizes the results in \cite{Ullman15}.
\begin{corollary}
\label{cor:answer-queries}
Let $p \in [1,2]$, $L_0,R>0$, $\K\subseteq\B_p^d(R)$ be a convex body and let $\F=\{f(\cdot,w)\}_{w\in \W} \subset \F_{\|\cdot\|_p}^0(\K,L_0)$ be a finite family of convex functions. Let $\cal Q_\F$ be the set of convex minimization queries corresponding to $\F$. For any $\alpha,\beta,\eps,\delta>0$, there exists an $(\alpha,\beta)$-differentially private algorithm that, with probability at least $1-\delta$ answers any sequence of $t$ queries from $\cal Q_\F$ with accuracy $\eps$ on datasets of size $n$ for
$$n \geq n_0= \tilde O\left(\frac{(L_0R)^2 \sqrt{\log(|\W|)}\cdot \log t}{\eps^2 \alpha} \cdot \mathrm{polylog}\left(\frac{d}{\beta \delta} \right) \right).$$
\end{corollary}
For comparison, the results in \cite{Ullman15} only consider the $p=2$ case and the stated upper bound is
$$n \geq n_0 = \tilde{O}\left(\frac{(L_0R)^2 \sqrt{\log(|\W|)}\cdot \max\{\log t, \sqrt{d}\}}{\eps^2 \alpha} \cdot \mathrm{polylog}\left(\frac{1}{\beta \delta} \right) \right).$$
Our bound is a significant generalization and an improvement by a factor of at least $\tilde{O}(\sqrt{d}/\log t)$. \citenames{Ullman}{Ullman15} also shows that for generalized linear regression one can replace the $\sqrt{d}$ in the maximum by $L_0R/\eps$. The bound in Corollary \ref{cor:answer-queries} also subsumes this improved bound (in most parameter regimes of interest).
Finally, in the $\kappa$-strongly convex case (with $p=2$),
plugging our bounds from Corollary \ref{cor:solve_str_cvx} into the algorithm in \cite{HardtR10} we obtain that it suffices to use a dataset of size
$$ n \geq n_0 =\tilde O\left( \dfrac{L_0^2\sqrt{\log(|\W|)} \cdot \log(t\cdot d \cdot \log R)}{\varepsilon\alpha \kappa}\cdot
\mathrm{polylog}\left(\frac{1}{\beta\delta} \right)\right).$$
The bound obtained by \citenames{Ullman}{Ullman15} for the same function class is
$$ n_0 =\tilde O\left( \dfrac{L_0^2R \sqrt{\log(|\W|)}}{\varepsilon\alpha}
\cdot \max\left\{ \dfrac{\sqrt d}{\sqrt{\kappa\varepsilon}},\dfrac{R\log t}{\varepsilon} \right\} \mathrm{polylog}\left(\frac{1}{\beta\delta} \right)\right).$$
Here our improvement over \cite{Ullman15} is two-fold: We eliminate the $\sqrt{d}$ factor and we essentially eliminate the dependence on $R$ (as in the non-private setting). We remark that our bound might appear incomparable to that in \cite{Ullman15} but is, in fact, stronger since it can be assumed that $\kappa \geq \eps/R^2$ (otherwise, bounds that do not rely on strong convexity are better).
\subsection{Applications to Generalized Linear Regression}
\label{subsec:regression}
We provide a comparison of the bounds obtained by statistical query inexact
first-order methods with some state-of-the-art error bounds for linear regression problems. To be precise, we
compare sample complexity of obtaining excess error $\varepsilon$ (with
constant success probability or in expectation) with the estimation complexity of the
SQ oracle for achieving $\varepsilon$ accuracy. It is worth
noticing though that these two quantities are not directly comparable,
as an SQ algorithm performs a (polynomial) number
of queries to the oracle. However, this comparison shows that our results roughly match what can
be achieved via samples.
We consider the {\em generalized linear regression} problem:
Given a normed space $(\R^d,\|\cdot\|)$, let $\W\subseteq \R^d$ be
the input space, and $\R$ be the output space.
Let $({\mathbf w},{\mathbf z})\sim D$, where $D$ is an unknown target distribution
supported on $\W\times\R$. The objective is to obtain a linear predictor
$x\in\K$ that predicts the outputs as a function of the inputs coming from $D$. Typically, $\K$ is prescribed by
desirable structural properties of the predictor, {\em e.g.}~sparsity or low norm.
The parameters determining complexity are given by bounds on the predictor and
input space: $\K\subseteq \B_{\|\cdot\|}(R)$
and $\W\subseteq \B_{\|\cdot\|_{\ast}}(W)$. Under these assumptions we
may restrict the output space to $[-M,M]$, where $M=RW$.
The prediction error is measured using a {\em loss function}.
For a function $\ell:\R \times\R\to\R_+$, letting $f(x,(w,z))=\ell(\la w,x\ra,z)$,
we seek to solve the stochastic convex program
$\min_{x\in\K}\{F(x)=\E_{({\mathbf w},{\mathbf z})\sim D}[f(x,({\mathbf w},{\mathbf z}))]\}$.
We assume that $\ell(\cdot,z)$ is convex for every $z$ in the support of $D$. A common example of this problem is
the (random design) least squares linear regression, where $\ell(z',z)=(z'-z)^2$.
\paragraph{Non-smooth case:}
We assume that for every $z$ in the support of $D$, $\ell(\cdot, z)\in{\cal F}_{|\cdot|}^0([-M,M],L_{\ell,0})$.
To make the discussion concrete, let us consider the $\ell_p$-setup, \ie
$\|\cdot\|=\|\cdot\|_p$. Hence the Lipschitz constant of our stochastic objective
$f(\cdot,(w,z))=\ell(\la w,\cdot\ra,z)$ can be upper bounded as $L_0\leq L_{\ell,0}\cdot W$.
For this setting \citenames{Kakade \etal}{Kakade:2008} show that the sample complexity of achieving excess error $\varepsilon>0$
with constant success probability is $n=O\left(\left(\frac{L_{\ell,0}WR}{\varepsilon}\right)^2 \ln d\right)$ when $p=1$;
and $n = O\left(\left(\frac{L_{\ell,0}WR}{\varepsilon}\right)^2 (q-1)\right)$ for $1< p\leq 2$.
Using Corollary \ref{cor:solve_cvx_ellp} we obtain that the estimation complexity of solving this problem
via our SQ implementation of the mirror-descent method is the same up to (at most) a factor logarithmic in $d$.
\citenames{Kakade \etal}{Kakade:2008} do not provide sample complexity bounds for $p>2$, however
since their approach is based on Rademacher complexity (see Appendix
\ref{sec:Samples} for the precise bounds), the bounds in this
case should be similar to ours as well.
\paragraph{Strongly convex case:}
Let us now consider a generalized linear regression with regularization. Here
$$f(x,(w,z)) = \ell(\la w,x\ra,z) +\lambda\cdot \Phi(x),$$
where $\Phi:\K\to \R$ is a 1-strongly convex function and $\lambda>0$.
This model has a variety of applications in machine learning, such as
ridge regression and soft-margin SVM.
For the non-smooth linear regression in $\ell_2$ setup (as described above), \citenames{Shalev-Shwartz \etal}{SSSSS:2009} provide a sample complexity bound of $O\left(\frac{(L_{\ell,0}W)^2}{\lambda\varepsilon}\right)$ (with constant success
probability). Note that the expected objective is $2\lambda$-strongly convex and therefore, applying Corollary \ref{cor:solve_str_cvx}, we get the same (up to constant factors) bounds on estimation complexity of solving this problem by SQ algorithms.
\section{Applications}
\label{sec:apps}
In this section we describe several applications of our results. We start by showing that our algorithms together with lower bounds for SQ algorithms give lower bounds against convex programs. We then give several easy examples of using upper bounds in other contexts. (1) New SQ implementation of algorithms for learning halfspaces that eliminate the linear dependence on the dimension in previous work. (2) Algorithms for high-dimensional mean estimation with local differential privacy that re-derive and generalize existing bounds. We also give the first algorithm for solving general stochastic convex programs with local differential privacy. (3) Strengthening and generalization of algorithms for answering sequences of convex minimization queries differentially privately given in \cite{Ullman15}.
Additional applications in settings where SQ algorithms are used can be derived easily. For example, our results immediately imply that an algorithm for answering a sequence of adaptively chosen SQs (such as those given in \cite{DworkFHPRR14:arxiv,DworkFHPRR15:arxiv,BassilyNSSSU15}) can be used to solve a sequence of adaptively chosen stochastic convex minimization problems. This question has recently been studied by \citenames{Bassily \etal}{BassilyNSSSU15}, and our bounds can easily be seen to strengthen and generalize some of their results (see Sec.~\ref{sec:app-dp-queries} for an analogous comparison).
\subsection{Lower Bounds}
\label{sec:app-csp}
We describe a generic approach to combining SQ algorithms for stochastic convex optimization with lower bounds against SQ algorithms to obtain lower bounds against certain type of convex programs. These lower bounds are for problems in which we are given a set of cost functions $(v_i)_{i=1}^m$ from some collection of functions $V$ over a set of ``solutions" $Z$ and the goal is to (approximately) minimize or maximize $\frac{1}{m} \sum_{i\in [m]} v_i(z)$ for $z\in Z$. Here either $Z$ is non-convex or functions in $V$ are non-convex (or both). Naturally, this captures loss (or error) of a model in machine learning and also the number of (un)satisfied constraints in constraint satisfaction problems (CSPs). For example, in the MAX-CUT problem $z \in \zo^n$ represents a subset of vertices and $V$ consists of $n \choose 2$, ``$z_i \neq z_j$" predicates.
A standard approach to such non-convex problems is to map $Z$ to a convex body $\K \subseteq \R^d$ and map $V$ to convex functions over $\K$ in such a way that the resulting convex optimization problem can be solved efficiently and the solution allows one to recover a ``good" solution to the original problem. For example, by ensuring that the mappings, $M:Z \rightarrow\K$ and $T:V \rightarrow \F$ satisfy: for all $z$ and $v$, $v(z)= (T(v))(M(z))$ and for all instances of the problem $(v_i)_{i=1}^m$,
\equ{\min_{z \in Z}\frac{1}{m} \sum_{i\in [m]} v_i(z) - \min_{x \in \K}\frac{1}{m} \sum_{i\in [m]} (T(v_i))(x) < \eps .\label{eq:value-preserve}}
(Approximation is also often stated in terms of the ratio between the original and relaxed values and referred to as the integrality gap. This distinction will not be essential for our discussion.) The goal of lower bounds against such approaches is to show that specific mappings (or classes of mappings) will not allow solving the original problem via this approach, \eg have a large integrality gap.
The class of convex relaxations for which our approach gives lower bounds are those that are ``easy" for SQ algorithms. Accordingly, we define the following measure of complexity of convex optimization problems.
\begin{definition}
For an SQ oracle $\cal O$, $t>0$ and a problem $P$ over distributions we say that $P\in {\mbox{Stat}}({\mathcal O}, t)$ if $P$ can be solved using at most $t$ queries to ${\mathcal O}$ for the input distribution. For a convex set $\K$, a set $\F$ of convex functions over $\K$ and $\eps>0$ we denote by ${\mbox{Opt}}(\K,\F,\eps)$ the problem of finding, for every distribution $D$ over $\F$, $x^*$ such that $F(x^*) \leq \min_{x\in \K} F(x) + \eps$, where $F(x) \doteq \E_{f \sim D}[f(x)]$.
\end{definition}
For simplicity, let us focus on the decision problem\footnote{Indeed, hardness results for optimization are commonly obtained via hardness results for appropriately chosen decision problems.} in which the input distribution $D$ belongs to $\D = \D_+ \cup \D_-$. Let $P(\D_+,\D_-)$ denote the problem of deciding whether the input distribution is in $\D_+$ or $\D_-$. This is a distributional version of a {\em promise} problem in which an instance can be of two types (for example, completely satisfiable instances and those in which at most half of the constraints can be simultaneously satisfied). Statistical query complexity upper bounds are preserved under pointwise mappings of the domain elements, and therefore an upper bound on the SQ complexity of a stochastic optimization problem implies an upper bound for any problem that can be reduced pointwise to the stochastic optimization problem.
\begin{theorem}
\label{thm:lower-reduction}
Let $\D_+$ and $\D_-$ be two sets of distributions over a collection of functions $V$ on the domain $Z$. Assume that for some $\K$ and $\F$ there exists a mapping $T:V \rightarrow \F$ such that for all $D\in \D^+$, $\min_{x\in \K}\E_{v \sim D}[(T(v))(x)] > \alpha$ and for all $D\in \D^-$, $\min_{x\in \K}\E_{v \sim D}[(T(v))(x)] \leq 0$. Then if for an SQ oracle ${\mathcal O}$ and $t$ we have a lower bound $P(\D_+,\D_-) \not\in {\mbox{Stat}}({\mathcal O},t)$ then we obtain that ${\mbox{Opt}}(\K,\F,\alpha/2) \not\in {\mbox{Stat}}({\mathcal O},t)$.
\end{theorem}
The conclusion of this theorem, namely ${\mbox{Opt}}(\K,\F,\alpha/2) \not\in {\mbox{Stat}}({\mathcal O},t)$, together with upper bounds from previous sections can be translated into a variety of concrete lower bounds on the dimension, radius, smoothness and other properties of convex relaxations to which one can map (pointwise) instances of $P(\D_+,\D_-)$. We also emphasize that the resulting lower bounds are structural and do not assume that the convex program is solved using an SQ oracle or efficiently.
Note that the assumptions on the mapping in Thm.~\ref{thm:lower-reduction} are stated for the expected value $\min_{x\in \K}\E_{v \sim D}[(T(v))(x)]$ rather than for averages over given relaxed cost functions as in eq.~\eqref{eq:value-preserve}. However, for a sufficiently large number of samples $m$, for every $x$ the average over random samples $\frac{1}{m} \sum_{i\in [m]} (T(v_i))(x)$ is close to the expectation $\E_{v \sim D}[(T(v))(x)]$. Therefore, the condition can be equivalently reformulated in terms of the average over a sufficiently large number of samples drawn i.i.d.~from $D$.
\paragraph{Lower bounds for planted CSPs:}
We now describe an instantiation of this approach using lower bounds for constraint satisfaction problems established in \cite{FeldmanPV:13}. \citenames{Feldman \etal}{FeldmanPV:13} describe implications of their lower bounds for convex relaxations using results for more general (non-Lipschitz) stochastic convex optimization
and discuss their relationship to those for lift-and-project hierarchies (Sherali-Adams, Lov\'asz-Schrijver, Lasserre) of canonical LP/SDP formulations. Here we give examples of implications of our results for the Lipschitz case.
Let $Z = \on^n$ be the set of assignments to $n$ Boolean variables. A distributional $k$-CSP problem is defined by a set $\D$ of distributions over Boolean $k$-ary predicates.
One way to obtain a distribution over constraints is to first pick some assignment $z$ and then generate random constraints that are consistent with $z$ (or depend on $z$ in some other predetermined way). In this way we can obtain a family of distributions $\D$ parameterized by a ``planted" assignment $z$. Two standard examples of such instances are planted $k$-SAT (\eg \cite{coja2010efficient}) and the pseudorandom generator based on Goldreich's proposal for one-way functions \cite{goldreich2000candidate}.
Associated with every family created in this way is a complexity parameter $r$ which, as shown in \cite{FeldmanPV:13}, characterizes the SQ complexity of finding the planted assignment $z$, or even of distinguishing between a distribution in $\D$ and the uniform distribution over the same type of $k$-ary constraints. This is not crucial for the discussion here but, roughly, $r$ is the largest value for which the generated distribution over the variables in the constraint is $(r-1)$-wise independent. In particular, random and uniform $k$-XOR constraints (consistent with an assignment) have complexity $k$. The lower bound in \cite{FeldmanPV:13} can be (somewhat informally) restated as follows.
\begin{theorem}[\cite{FeldmanPV:13}]\label{thm:lower-bound-csp}
Let $\D=\{D_z\}_{z\in \on^n}$ be a set of ``planted" distributions over $k$-ary constraints of complexity $r$ and let $U_k$ be the uniform distribution on (the same) $k$-ary constraints. Then any SQ algorithm that, given access to a distribution $D \in \D \cup \{U_k\}$ decides correctly whether $D = D_z$ or $D=U_k$ needs $\Omega(t)$ calls to ${\mbox{VSTAT}}(\frac{n^{r}}{(\log t)^{r}})$ for any $t \geq 1$.
\end{theorem}
Combining this with Theorem \ref{thm:lower-reduction} we get the following general statement:
\begin{theorem}\label{thm:lower-bound-convex}
Let $\D=\{D_z\}_{z\in \on^n}$ be a set of ``planted" distributions over $k$-ary constraints of complexity $r$ and let $U_k$ be the uniform distribution on (the same) $k$-ary constraints. Assume that there exists a mapping $T$ that maps each constraint $C$ to a convex function $f_C \in \F$ over some convex $d$-dimensional set $\K$ such that for all $z \in \on^n$, $\min_{x\in \K}\E_{C \sim D_z}[f_C(x)] \leq 0$ and $\min_{x\in \K}\E_{C \sim U_k}[f_C(x)] > \alpha$. Then for every $t\geq 1$, ${\mbox{Opt}}(\K,\F,\alpha/2) \not\in {\mbox{Stat}}({\mbox{VSTAT}}(\frac{n^{r}}{(\log t)^{r}}),\Omega(t))$.
\end{theorem}
Note that in the context of convex minimization that we consider here, it is more natural to think of the relaxation as minimizing the number of unsatisfied constraints (although if the objective function is linear then the claim also applies to maximization over $\K$). We now instantiate this statement for solving the $k$-SAT problem via a convex program in the class $\F_{\|\cdot\|_p}^0(\B_p^d,1)$ (see Sec.~\ref{sec:gradient}). Let $\C_k$ denote the set of all $k$-clauses (OR of $k$ distinct variables or their negations). Let $U_k$ be the uniform distribution over $\C_k$.
\begin{corollary}
\label{cor:k-sat}
There exists a family of distributions $\D=\{D_z\}_{z\in \on^n}$ over $\C_k$ such that the support of $D_z$ is satisfied by $z$ with the following property: For every $p\in[1,2]$, if there exists a mapping $T:\C_k \rightarrow \F_{\|\cdot\|_p}^0(\B_p^d,1)$ such that for all $z$, $\min_{x\in \B_p^d}\E_{C \sim D_z}[(T(C))(x)] \leq 0$ and $\min_{x\in \B_p^d}\E_{C \sim U_k}[(T(C))(x)] > \eps$ then $\eps = \tilde{O}\left((n/\log(d))^{-k/2}\right)$ or, equivalently, $d = 2^{\tilde{\Omega}(n \cdot \eps^{2/k})}$.
\end{corollary}
This lower bound excludes embeddings in exponentially high dimension (\eg $2^{n^{1/4}}$) for which the lowest value of the program for unsatisfiable instances differs from that for satisfiable instances by more than $n^{-k/4}$ (note that the range of functions in $\F_{\|\cdot\|_p}^0(\B_p^d,1)$ can be as large as $[-1,1]$ so this is a normalized additive gap). For comparison, in the original problem the values of these two types of instances are $1$ and $\approx 1- 2^{-k}$. In particular, this implies that the integrality gap is $1/(1-2^{-k}) - o(1)$ (which is optimal).
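As a quick sanity check of these numbers (an illustrative computation, not part of the formal statement): for $k=3$ the two values are $1$ and $1-2^{-3}=7/8$, so the corresponding integrality gap is
\[
\frac{1}{1-2^{-3}}=\frac{8}{7}\approx 1.14 .
\]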
We note that the problem described in Cor.~\ref{cor:k-sat} is easier than the distributional $k$-SAT refutation problem, where $\D$ contains all distributions with satisfiable support. Therefore the assumptions of Cor.~\ref{cor:lower-convex-program-norm} that we stated in the introduction imply the assumptions of Cor.~\ref{cor:k-sat}.
Similarly, we can use the results of Sec.~\ref{sec:range} to obtain the following lower bound on the dimension of any convex relaxation:
\begin{corollary}
\label{cor:k-sat-range}
There exists a family of distributions $\D=\{D_z\}_{z\in \on^n}$ over $\C_k$ such that the support of $D_z$ is satisfied by $z$ with the following property: For every convex body $\K \subseteq \R^d$, if there exists a mapping $T:\C_k \rightarrow \F(\K,1)$ such that for all $z$, $\min_{x\in \K}\E_{C \sim D_z}[(T(C))(x)] \leq 0$ and $\min_{x\in \K}\E_{C \sim U_k}[(T(C))(x)] > \eps$ then $d = \tilde{\Omega}\left(n^{k/2} \cdot \eps\right)$.
\end{corollary}
\input{sqc-perceptron}
\input{sqc-app-dp}
\subsubsection{Computational Efficiency}
\else \section{Computational Efficiency of the Center-of-Gravity Algorithm}
\label{sec:sqc-cog-efficient}
\fi
The algorithm described in Proposition \ref{thm:cog-sq} relies on the computation of the exact center of gravity and inertial ellipsoid for each localizer. Such computation is $\#$P-hard in general. We now describe a computationally efficient version of the center-of-gravity method that is based on computation of approximate center of gravity and inertial ellipsoid via random walks, an approach that was first proposed by \citenames{Bertsimas and Vempala}{Bertsimas:2004}.
We first observe that the volumetric guarantee is satisfied by any cut through an approximate center of gravity.
\begin{lemma}[\cite{Bertsimas:2004}]\label{lem:VolGuarantee}
For a convex body $G \subseteq \R^d$, let $z$ be any point s.t. $\|z-z(G)\|_{{\cal E}_G} = t$. Then, for any halfspace $H$ containing $z$,
\[
{\mbox{Vol}}(G \cap H) \ge \left(\frac{1}{e} - t \right){\mbox{Vol}}(G).
\]
\end{lemma}
From this result, we know that it suffices to approximate the center of gravity in
the inertial ellipsoid norm in order to obtain the volumetric guarantee.
\citenames{Lovasz and Vempala}{LovaszV06b} show that for any convex body $G$ given by a membership oracle, a point $x \in G$ and $R_0, R_1$ s.t. $R_0\cdot\B_2^d \subseteq (G-x) \subseteq R_1\cdot\B_2^d$, there is a sampling algorithm based on a random walk that outputs points that are within statistical distance $\alpha$ of the uniform distribution in time polynomial in $d, \log(1/\alpha), \log(R_1/R_0)$. The current best dependence on $d$ is $d^4$ for the first random point and $d^3$ for all subsequent points \cite{LV:2006}. Samples from such a random walk can be directly used to estimate the center of gravity and the inertial ellipsoid of $G$.
\begin{theorem}[\cite{LovaszV06b}] \label{thm:LovaszVempala}
There is a randomized algorithm that for any $\eps > 0, 1 > \delta > 0$, for a convex body $G$ given by a membership oracle and a point $x$ s.t. $R_0\cdot \B_2^d \subseteq (G-x) \subseteq R_1\cdot \B_2^d$, finds a point $z$ and an origin-centered ellipsoid ${\cal E}$ s.t. with probability at least $1-\delta$, $\|z-z(G)\|_{{\cal E}_{G}} \le \eps$ and ${\cal E} \subset {\cal E}_G \subset (1+\eps){\cal E}$. The algorithm uses $\tilde{O}(d^4\log(R_1/R_0)\log(1/\delta)/\eps^2)$ calls to the membership oracle.
\end{theorem}
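To make the estimation step concrete, the following Python sketch (our illustration, with an axis-aligned box standing in for a general convex body, since uniform sampling is then trivial) computes the empirical center of gravity and covariance matrix from samples; in the actual algorithm the samples would come from the random walk of Theorem \ref{thm:LovaszVempala}.
\begin{verbatim}
# Sketch: estimate z(G) and the covariance A(G) (hence the inertial
# ellipsoid E_G) from approximately uniform samples of a convex body G.
# Illustrative assumption: G is a box, so uniform sampling is trivial;
# in the real algorithm the samples come from a hit-and-run walk.
import numpy as np

rng = np.random.default_rng(0)
d, n = 5, 20000
lo, hi = np.zeros(d), 1.0 + np.arange(d)       # the box prod_i [0, 1+i]

samples = rng.uniform(lo, hi, size=(n, d))     # stand-in for walk samples
z_hat = samples.mean(axis=0)                   # estimate of z(G)
A_hat = np.cov(samples, rowvar=False)          # estimate of A(G)

# For a box both quantities are known in closed form, so we can verify.
z_true = (lo + hi) / 2
A_true = np.diag((hi - lo) ** 2 / 12)
print(np.max(np.abs(z_hat - z_true)), np.max(np.abs(A_hat - A_true)))
\end{verbatim}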
We now show that an algorithm having the guarantees given in Proposition \ref{thm:cog-sq} can be implemented in time $\poly(d, B/\eps, \log(R_1/R_0))$. More formally,
\begin{theorem}
\label{thm:cog-sq-efficient}
Let $\K\subseteq \R^d$ be a convex body given by a membership oracle and a point $x$ s.t. $R_0\cdot \B_2^d \subseteq (\K-x) \subseteq R_1\cdot \B_2^d$, and assume that for all $w\in\W$, $f(\cdot,w) \in \F(\K,B)$.
Then there is an algorithm that for every distribution $D$ over $\W$ finds an $\eps$-optimal solution for the stochastic convex optimization problem $\min_{x\in\K}\{\E_{{\mathbf w}\sim D}[f(x,{\mathbf w})] \}$ using $O(d^2\log(B/\varepsilon))$ queries to ${\mbox{STAT}}(\Omega(\varepsilon/[Bd]))$. The algorithm succeeds with probability $\geq 2/3$ and runs in $\poly(d, B/\eps, \log(R_1/R_0))$ time.
\end{theorem}
\begin{proof}
Let the initial localizer be $G=\K$.
We will prove the following by induction: For every step of the method, if $G$ is
the current localizer then a membership oracle for $G$ can be implemented efficiently given a membership oracle for $\K$ and we can efficiently compute $x\in G$ such that, with probability at least $1-\delta$,
\begin{equation} \label{sandwich}
R_0^{\prime}\cdot \B_2^d \subseteq G-x \subseteq R_1^{\prime}\cdot \B_2^d,
\end{equation}
where $R_1^{\prime}/R_0^{\prime} \leq \max\{R_1/R_0,4d\}$.
We first note that the basis of the induction holds by the assumptions of the theorem. We next show that the assumption of the induction allows us to compute the desired approximations to the center of gravity and the inertial ellipsoid which in turn will allow us to prove the inductive step.
Since $G$ satisfies the assumptions of Theorem \ref{thm:LovaszVempala}, we can obtain in polynomial time
(with probability $1-\delta$) an approximate center $z$ and ellipsoid ${\cal E}$ satisfying
$\|z-z(G)\|_{{\cal E}_G}\leq \chi$ and ${\cal E}\subseteq {\cal E}_{G}\subseteq (1+\chi) {\cal E}$, where
$\chi\doteq 1/e-1/3$. By Lemma \ref{lem:VolGuarantee} and $\|z-z(G)\|_{{\cal E}_G}\leq \chi$, we get that the volumetric guarantee holds for the next localizer $G^{\prime}$ with parameter $\gamma=2/3$.
Let us now observe that
$$(\sqrt{(d+2)/d}-\chi)\cdot {\cal E} + z \subseteq
\sqrt{(d+2)/d} \cdot {\cal E}_{G}+z(G)\subseteq G.$$
We only prove the first inclusion, as the second one holds by Theorem
\ref{thm:KLSLemma}. Let $y\in\alpha {\cal E}+z$ (where $\alpha=\sqrt{(d+2)/d}-\chi$). Now we have
$\|y-z(G)\|_{{\cal E}_G}\leq \|y-z\|_{{\cal E}_G}+\|z-z(G)\|_{{\cal E}_G}\leq \|y-z\|_{\cal E}+\chi
\leq \alpha+\chi=\sqrt{(d+2)/d}$. Similarly, we can prove that
$$G-z\subseteq \sqrt{d(d+2)}\cdot {\cal E}_{G} +(z(G)-z)
\subseteq (\sqrt{d(d+2)}+\chi) \cdot {\cal E}_{G}\subseteq
(1+\chi)(\sqrt{d(d+2)}+\chi) \cdot {\cal E}.$$
Denoting $r_0 \doteq \sqrt{(d+2)/d}-\chi$ and $r_1 \doteq (1+\chi)(\sqrt{d(d+2)}+\chi)$ we obtain that
$r_0\cdot {\cal E} \subseteq G-z \subseteq r_1\cdot {\cal E}$, where
$\frac{r_1}{r_0} =\frac{(1+\chi)(\sqrt{d(d+2)}+\chi)}{\sqrt{(d+2)/d}-\chi} \leq \frac{3}{2} d$. By Lemma \ref{lem:approx_grad_isotrop} this implies that using $2d$ queries to ${\mbox{STAT}}(\Omega(\varepsilon/[Bd]))$ we can obtain an estimate $\tilde g$ of $\nabla F(z)$ that suffices for executing the approximate center-of-gravity method.
We finish the proof by establishing the inductive step. Let the new localizer $G^{\prime}$
be defined as $G$ after removing the cut through $z$ given by $\tilde g$ and transformed
by the affine transformation induced by $z$ and ${\cal E}$ (that is mapping $z$ to the origin and $\cal E$ to $\B_2^d$).
Notice that after the transformation
$r_0 \cdot\B_2^d \subseteq \tilde G \subseteq r_1 \cdot \B_2^d$, where $\tilde G$ denotes $G$ after the affine transformation. $G^{\prime}$ is obtained from $\tilde G$ by a cut through the origin. This implies that $G^{\prime}$ contains a ball of radius $r_0/2$ which is inscribed in the half of $r_0 \cdot\B_2^d$ that is contained in $G^{\prime}$. Let $x^{\prime}$ denote the center of this contained ball (which can be easily computed from $\tilde g$, $z$ and $\cal E$). It is also easy to see that a ball of radius $r_0/2+r_1$ centered at $x^{\prime}$ contains $G^{\prime}$. Hence $G^{\prime} -x^{\prime}$ is sandwiched by two Euclidean balls with the ratio of radii being
$(r_1+ r_0/2)/(r_0/2) \leq 4d$. Also notice that, since a membership oracle for $\K$ is given and the number of iterations of this method is $O(d \log(4B/\eps))$, a membership oracle for $G^{\prime}$ can be implemented efficiently.
Finally, choosing the confidence parameter $\delta$ inversely proportional to the
number of iterations of the method guarantees a constant success probability.
\end{proof}
\section{Optimization without Lipschitzness}
\label{sec:range}
The estimation complexity bounds obtained for gradient descent-based methods depend polynomially on the Lipschitz constant $L_0$ and the radius $R$ of $\K$ (unless $F$ is strongly convex).
In some cases such bounds are too large and instead we know that the range of functions in the support of the distribution is bounded, that is, $\max_{x,y \in \K,\ v,w\in \W} (f(x,v) - f(y,w)) \le 2B$ for some $B$. Without loss of generality we may assume that for all $w\in \W, f(\cdot,w) \in \F(\K,B)$.
\input{sqc-walk}
\subsection{Center-of-Gravity}
An alternative and simpler technique to establish the $O(d^2B^2/\eps^2)$ upper bound on the estimation complexity for $B$-bounded-range functions is to use cutting-plane methods, more specifically, the classic center-of-gravity method, originally proposed by \citenames{Levin}{Levin:1965}.
We introduce some notation. Given a convex body $\K$, let $\bf x$ be a uniformly and randomly chosen point from $\K$. Let $z(\K) \doteq \E[{\mathbf x}]$ and $A(\K) \doteq \E[({\mathbf x}-z(\K))({\mathbf x}-z(\K))^T]$ be the center of gravity and covariance matrix of $\K$ respectively. We define the (origin-centered) inertial ellipsoid of ${\cal K}$ as
${\cal E}_{\cal K} \doteq \{y\, :\, y^TA(\K)^{-1}y \le 1\}$.
The classic center-of-gravity method starts with $G^0\doteq \K$ and iteratively computes a progressively smaller body containing the optimum of the convex program. We call such a body a {\em localizer}. Given a localizer $G^{t-1}$, for $t \geq 1$, the algorithm
computes $x^t= z(G^{t-1})$ and defines the new localizer to be $$G^t \doteq G^{t-1} \cap\{y\in\R^d \cond \la \nabla F(x^t), y-x^t \ra \leq 0\}.$$
It is known that any halfspace containing the center of gravity of a convex body
contains at least $1/e$ of its volume \cite{Grunbaum:1960}; that is, $\mbox{vol}(G^t) \leq \gamma \cdot\mbox{vol}(G^{t-1})$, where $\gamma = 1-1/e$. We call this property the {\em volumetric guarantee} with parameter $\gamma$.
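The following Python sketch (ours; the objective, domain and all constants are illustrative choices) runs this cutting-plane loop on the square $[-1,1]^2$, approximating each centroid by rejection sampling since, as discussed next, exact centroid computation is hard.
\begin{verbatim}
# Toy center-of-gravity method: minimize F(x) = ||x - x*||_2^2 over
# [-1,1]^2. The localizer is kept implicitly as a list of cuts; its
# centroid is approximated by rejection sampling (exact computation of
# the center of gravity is #P-hard in general).
import numpy as np

rng = np.random.default_rng(1)
x_star = np.array([0.3, -0.4])
grad_F = lambda x: 2 * (x - x_star)

cuts = []                                   # keep {y : <g, y - x> <= 0}
def in_localizer(y):
    return all(np.dot(g, y - x) <= 0 for g, x in cuts)

x_t = None
for t in range(15):
    pts = rng.uniform(-1, 1, size=(20000, 2))
    mask = np.array([in_localizer(p) for p in pts])
    if mask.sum() < 50:                     # localizer volume too small
        break
    x_t = pts[mask].mean(axis=0)            # approximate centroid of G^{t-1}
    cuts.append((grad_F(x_t), x_t))         # G^t = G^{t-1} cut at x_t

print(x_t)                                  # approaches x* = (0.3, -0.4)
\end{verbatim}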
The first and well-known issue we will deal with is that the exact center of gravity of $G^{t-1}$ is hard to compute. Instead, following the approach in \cite{Bertsimas:2004}, we will let $x^t$ be an approximate center-of-gravity. For such an approximate center we will have a volumetric guarantee with somewhat larger parameter $\gamma$.
The more significant issue is that we do not have access to the exact value of $\nabla F(x^t)$. Instead, we will show how to compute an approximate gradient $\tilde g(x^t)$ satisfying, for all $y\in G^t$,
\begin{equation} \label{CoG_approx_grad}
|\langle \tilde g(x^t)-\nabla F(x^t), y-x^t\rangle|\leq \eta.
\end{equation}
Notice that this is a weaker condition than the one required by \eqref{ApproxSubgrad}: first, we only impose the
approximation on the localizer; second, the gradient approximation
is centered at $x^t$. These two features are crucial for our results.
Condition \eqref{CoG_approx_grad} implies that for all
$y\in G^{t-1}\setminus G^t$,
$$ F(y) \geq F(x^t) +\la \nabla F(x^t),y-x^t\ra
\geq F(x^t) + \la \tilde g(x^t),y-x^t\ra -\eta > F(x^t)-\eta.$$
Therefore we will lose at most $\eta$ by discarding points in $G^{t-1}\setminus G^t$.
Plugging this observation into the standard analysis of the center-of-gravity method (see, \eg \cite[Chapter 2]{Nemirovski:1994}) yields the following result.
\begin{proposition}\label{prop:CoG}
For $B>0$, let $\K\subseteq \R^d$ be a convex body, and
$F \in \F(\K,B)$. Let $x^1,x^2,\ldots$ and $\tilde g(x^1), \tilde g(x^2), \ldots$ be a sequence of points and gradient estimates such that for
$G_0 \doteq \K$ and $G^t \doteq G^{t-1} \cap\{y\in\R^d \cond \la \tilde g(x^t), y-x^t \ra \leq 0\}$ for all $t \geq 1$, we
have a volumetric guarantee with parameter $\gamma<1$ and condition \eqref{CoG_approx_grad} is satisfied for some fixed $\eta>0$.
Let $\hat x^T\doteq \argmin_{t\in[T]} F(x^t)$; then
$$F( \hat x^T) -\min_{x\in\K} F(x) \leq \gamma^{T/d} \cdot 2B +\eta\ .$$ In particular, choosing $\eta=\varepsilon/2$, and
$T=\lceil d\log(\frac{1}{\gamma})\log(\frac{4B}{\varepsilon})\rceil$ gives $F(\hat x^T)- \min_{x\in\K} F(x) \leq \varepsilon$.
\end{proposition}
We now describe how to compute an approximate gradient satisfying condition \eqref{CoG_approx_grad}. We show that it suffices to find an origin-centered ellipsoid $\cal E$ such that $x^t + \cal E$ is included in $G^t$ and $G^t$ is included in $x^t + R \cdot \cal E$. The first condition, together with the bound on the range of functions in the support of the distribution, implies a bound on the ellipsoidal norm of the gradients. This allows us to use Theorem \ref{thm-l2-kashin} to estimate $\nabla F(x^t)$ in the ellipsoidal norm. The second condition can be used to translate the error in the ellipsoidal norm to the error $\eta$ over $G^t$ as required by condition \eqref{CoG_approx_grad}. Formally, we prove the following lemma:
\begin{lemma} \label{lem:approx_grad_isotrop}
Let $G\subseteq\R^d$ be a convex body, $x\in G$, and
${\cal E}\subseteq \R^d$ be an origin-centered ellipsoid that satisfies
\[
R_0\cdot {\cal E} \subseteq (G-x) \subseteq R_1\cdot {\cal E}.
\]
Given
$F(x)=\E_{{\mathbf w}}[f(x,{\mathbf w})]$ a convex function on $G$
such that for all $w\in\W$, $f(\cdot,w)\in \F(\K,B)$,
we can compute a vector $\tilde g(x)$ satisfying
\eqref{CoG_approx_grad} in polynomial time using $2d$
queries to ${\mbox{STAT}}\left(\Omega\left(\frac{\eta}{[R_1/R_0]B}\right)\right)$.
\end{lemma}
\begin{proof}
Let us first bound the norm of the gradients, using the norm dual to
the one induced by the ellipsoid ${\cal E}$.
\begin{eqnarray*}
\|\nabla f(x,w)\|_{\cal E,\ast} &=& \sup_{y\in {\cal E}}\la \nabla f(x,w),y \ra
\,\,\leq\,\, \frac{1}{R_0}\sup_{y\in G}\la \nabla f(x,w),y-x \ra \\
&\leq& \frac{1}{R_0}\sup_{y\in G} [f(y,w)-f(x,w)]
\,\,\leq\,\, \frac{2B}{R_0}.
\end{eqnarray*}
Next we observe that for any vector $\tilde g$,
\begin{eqnarray*}
\sup_{y\in G}\la \nabla F(x)-\tilde g, y -x \ra
&=& R_1 \sup_{y\in G}\left\la \nabla F(x)-\tilde g, \frac{y-x}{R_1} \right\ra
\,\,\leq\,\, R_1 \sup_{y\in {\cal E}} \la \nabla F(x)-\tilde g, y \ra \\
&=& R_1\, \|\nabla F(x)-\tilde g\|_{\cal E,\ast}.
\end{eqnarray*}
From this we reduce obtaining $\tilde g(x)$ satisfying
\eqref{CoG_approx_grad} to a mean estimation problem in
an ellipsoidal norm with error $R_0\eta/[2R_1B]$, which by Theorem \ref{thm-l2-kashin} (with Lemma \ref{lem:norm-embed}) can be done using $2d$ queries
to ${\mbox{STAT}}\left(\Omega\left(\frac{\eta}{[R_1/R_0]B}\right)\right)$.
\end{proof}
It is known that if $x^t= z(G^{t-1})$ then the inertial ellipsoid of $G^{t-1}$ has the desired property with the ratio of the radii being $d$.
\begin{theorem}[\cite{Kannan:1995}] \label{thm:KLSLemma}
For any convex body $G\subseteq\R^d$, ${\cal E}_G$ (the inertial ellipsoid of $G$) satisfies
$$ \sqrt{ \dfrac{d+2}{d} }\cdot {\cal E}_G \subseteq (G-z(G)) \subseteq \sqrt{d(d+2)}\cdot {\cal E}_G. $$
\end{theorem}
This means that estimates of the gradients sufficient for executing the exact center-of-gravity method can be obtained using SQs with estimation complexity of $O(d^2B^2/\varepsilon^2)$.
Finally, before we can apply Proposition \ref{prop:CoG}, we note that instead of $\hat x^T\doteq \argmin_{t\in[T]} F(x^t)$ we can compute $\tilde x^T = \argmin_{t\in[T]} \tilde F(x^t)$ such that $F(\tilde x^T) \leq F( \hat x^T)+\eps/2$. This can be done by using $T$ queries to ${\mbox{STAT}}(\eps/[4B])$ to obtain $\tilde F(x^t)$ such that $|\tilde F(x^t) - F(x^t)|\leq \eps/4$ for all $t\in[T]$. Plugging this into Proposition \ref{prop:CoG} we get the following (inefficient) SQ version of the center-of-gravity method.
\begin{proposition}
\label{thm:cog-sq}
Let $\K\subseteq \R^d$ be a convex body, and assume that for all $w\in\W$, $f(\cdot,w) \in \F(\K,B)$.
Then there is an algorithm that for every distribution $D$ over $\W$ finds
an $\eps$-optimal solution for the stochastic convex optimization problem $\min_{x\in\K}\{\E_{{\mathbf w}\sim D}[f(x,{\mathbf w})] \}$ using $O(d^2\log(B/\varepsilon))$ queries to ${\mbox{STAT}}(\Omega(\varepsilon/[Bd]))$.
\end{proposition}
\iffull \input{sqc-cog-efficient}
\else The algorithm we have proposed is not efficient: It is well known that
exact computation of the center of gravity of a convex body is a hard problem.
In Appendix~\ref{sec:sqc-cog-efficient} we develop a computationally efficient
version of the center-of-gravity algorithm based on random walks, an approach
that was first proposed by \citenames{Bertsimas and Vempala}{Bertsimas:2004}.
\fi
\section{The Gradient Descent Family}
\label{sec:gradient}
We now describe approaches for solving convex programs by SQ algorithms that are based on the broad
literature of inexact gradient methods. We will show that some of the standard oracles
proposed in these works can be implemented by SQs; more precisely, by estimation
of the mean gradient. This reduces the task of solving a stochastic convex program to a
polynomial number of calls to the algorithms for
mean estimation from Section \ref{sec:linear}.
For the rest of the section we use the following notation.
Let ${\cal K}$ be a convex body in a normed space $(\R^d,\|\cdot\|)$,
and let $\W$ be a parameter space (notice we make no assumptions on this set).
Unless we explicitly state it, $\K$ is not assumed to be origin-symmetric.
Let $R\doteq\max_{x,y\in\K}\|x-y\|/2$, which is the $\|\cdot\|$-radius of $\K$.
For a random variable ${\mathbf w}$ supported on $\W$ we consider the stochastic convex optimization problem $ \min_{x\in\K}\left\{ F(x)\doteq\E_{{\mathbf w}}[f(x,{\mathbf w})]\right\},$
where for all $w\in\W$, $f(\cdot,w)$ is convex and subdifferentiable on $\K$.
Given $x\in\K$, we denote by $\nabla f(x,w)\in \partial f(x,w)$ an arbitrary selection of a
subgradient;\footnote{We omit some necessary technical conditions,
\eg measurability, for the gradient
selection in the stochastic setting. We refer the reader to \cite{Rockafellar}
for a detailed discussion.}
similarly for $F$, $\nabla F(x)\in \partial F(x)$ is arbitrary.
Let us briefly recall some important classes of convex functions; a small numeric check of the defining inequalities follows the list below.
We say that a subdifferentiable convex function $f:\K\to\R$ is in the class
\begin{itemize}
\item $\F(\K,B)$ of $B$-bounded-range functions if for all $x\in \K$, $|f(x)| \leq B$.
\item $\F_{\|\cdot\|}^0(\K,L_0)$ of $L_0$-Lipschitz continuous functions
w.r.t.~$\|\cdot\|$, if for all $x,y\in\K$, $|f(x)-f(y)|\leq L_0\|x-y\|$; this implies
\begin{equation} \label{nonsmooth_dif_ineq}
f(y) \leq f(x) +\la\nabla f(x),y-x \ra +L_0\|y-x\|.
\end{equation}
\item $\F_{\|\cdot\|}^1(\K,L_1)$ of functions with $L_1$-Lipschitz continuous
gradient w.r.t.~$\|\cdot\|$, if for all $x,y\in\K$, $\|\nabla f(x)-\nabla f(y)\|_{\ast}\leq L_1\|x-y\|$;
this implies
\begin{equation}\label{smooth_dif_ineq}
f(y) \leq f(x) +\la\nabla f(x),y-x \ra +\frac{L_1}{2}\|y-x\|^2.
\end{equation}
\item ${\cal S}_{\|\cdot\|}(\K,\kappa)$ of $\kappa$-strongly convex functions
w.r.t.~$\|\cdot\|$, if for all $x,y\in\K$
\begin{equation} \label{str_cvx_dif_ineq}
f(y) \geq f(x) +\la\nabla f(x),y-x \ra +\frac{\kappa}{2}\|y-x\|^2.
\end{equation}
\end{itemize}
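As mentioned above, the following snippet (a minimal numeric illustration, with $f(x)=\|x\|_2^2/2$ as our example) checks that \eqref{smooth_dif_ineq} and \eqref{str_cvx_dif_ineq} hold with $L_1=\kappa=1$; for this particular $f$ both hold with equality.
\begin{verbatim}
# Numeric check of the smoothness and strong-convexity inequalities for
# f(x) = ||x||_2^2 / 2, which lies in F^1(K, 1) and S(K, 1) w.r.t. ||.||_2.
import numpy as np

rng = np.random.default_rng(2)
f = lambda x: 0.5 * np.dot(x, x)
grad = lambda x: x

for _ in range(1000):
    x, y = rng.normal(size=5), rng.normal(size=5)
    lin = f(x) + np.dot(grad(x), y - x)
    quad = 0.5 * np.dot(y - x, y - x)
    assert f(y) <= lin + quad + 1e-9        # (smooth_dif_ineq), L1 = 1
    assert f(y) >= lin + quad - 1e-9        # (str_cvx_dif_ineq), kappa = 1
\end{verbatim}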
\subsection{SQ Implementation of Approximate Gradient Oracles}
Here we present two classes of oracles previously studied in the literature, together
with SQ algorithms for implementing them.
\begin{definition}[Global approximate gradient \cite{dAspremont:2008}]
\label{def:approx-grad}
Let \(F:{\cal K}\to \mathbb{R}\) be a convex subdifferentiable function.
We say that \(\tilde g:{\cal K}\to \mathbb{R}^d\) is an
{\em \(\eta\)-approximate gradient} of $F$ over $\K$ if for all \(u,x,y \in {\cal K}\)
\begin{equation} \label{ApproxSubgrad}
|\langle \tilde g(x)-\nabla F(x), y-u\rangle| \leq \eta.
\end{equation}
\end{definition}
\begin{observation} \label{obs:approx_grad_oracle}
Let $\Ksym\doteq\{x-y \cond x,y\in\K\}$ (which is origin-symmetric
by construction), and let $\|\cdot\|_{\Ksym}$ be the norm
induced by $\Ksym$ and $\|\cdot\|_{\Ksym_{\ast}}$ its dual norm.
Notice that under this notation, \eqref{ApproxSubgrad} is equivalent to
\(\|\tilde g(x)-\nabla F(x)\|_{\Ksym_\ast} \leq \eta\). Therefore, if
$F(x)=\E_{{\mathbf w}}[f(x,{\mathbf w})]$ satisfies for all $w\in\W$,
$f(\cdot,w)\in \F_{\|\cdot\|_\Ksym}^0(\K,L_0)$
then implementing an $\eta$-approximate gradient reduces to
mean estimation in $\|\cdot\|_{\Ksym_{\ast}}$ with error $\eta/L_0$.
\end{observation}
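The following Python sketch (our illustration for the Euclidean case $\K=R\cdot\B_2^d$, with a simulated ${\mbox{STAT}}$ oracle and the concrete choice $f(x,w)=\|x-w\|_2^2/2$) implements an $\eta$-approximate gradient via naive coordinate-wise mean estimation; the $\sqrt{d}$ loss of this approach, discussed in the introduction, is visible in the choice of tolerance.
\begin{verbatim}
# Eta-approximate gradient of F(x) = E_w[ 0.5*||x - w||_2^2 ] over the
# ball K = R*B_2^d via coordinate-wise mean estimation. A STAT(tau)
# oracle is simulated by the empirical mean plus adversarial noise of
# magnitude tau; coordinate-wise l_2 estimation loses a sqrt(d) factor,
# hence tau = eta / (2 R sqrt(d)).
import numpy as np

rng = np.random.default_rng(3)
d, R, eta = 10, 1.0, 0.1
W = rng.uniform(-1, 1, size=(100000, d))     # samples of w

def stat_query(vals, tau):                   # simulated STAT(tau) answer
    return vals.mean() + tau * rng.choice([-1.0, 1.0])

x = rng.normal(size=d)
tau = eta / (2 * R * np.sqrt(d))
grad_true = x - W.mean(axis=0)               # grad F(x) = x - E[w]
grad_est = np.array([x[i] - stat_query(W[:, i], tau) for i in range(d)])

# sup_{y,u in K} |<g~ - grad F, y - u>| = 2R ||g~ - grad F||_2 <= eta
print(2 * R * np.linalg.norm(grad_est - grad_true))
\end{verbatim}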
\begin{definition}[Inexact Oracle \cite{Devolder:2014,Devolder2:2013}]
Let $F:\K\to\R$ be a convex subdifferentiable function. We say that
$(\tilde F(\cdot),\tilde g(\cdot)):\K\to\mathbb{R}\times \mathbb{R}^d$ is
a {\em first-order \((\eta,M,\mu)\)-oracle} of $F$ over $\K$ if for all \(x, y \in {\cal K}\)
\begin{equation} \label{str_cvx_oracle}
\dfrac{\mu}{2}\|y-x\|^2 \leq F(y) - [\tilde F(x) - \langle \tilde g(x),y-x\rangle]
\leq \dfrac{M}{2}\|y-x\|^2+\eta.
\end{equation}
\end{definition}
An important feature of this oracle is that the error for approximating
the gradient is {\em independent of the radius}. This observation
was established by \citenames{Devolder \etal}{Devolder2:2013}, and the consequences for statistical
algorithms are made precise in the following lemma\iffull\else, whose
proof is deferred to Appendix~\ref{sec:proof_lem:str_cvx_oracle}\fi.
\begin{lemma} \label{lem:str_cvx_oracle}
Let $\eta>0$, $0<\kappa\leq L_1$ and assume that for all $w\in\W$, $f(\cdot,w)\in\F(\K,B) \cap \F_{\|\cdot\|}^0(\K,L_0)$ and $F(\cdot) = \E_{{\mathbf w}}[f(\cdot,{\mathbf w})]\in {\cal S}_{\|\cdot\|}(\K,\kappa) \cap \F_{\|\cdot\|}^1(\K,L_1)$.
Then implementing a first-order $(\eta, M,\mu)$-oracle (where $\mu=\kappa/2$
and $M=2L_1$) for $F$ reduces to mean estimation in $\|\cdot\|_{\ast}$
with error $\sqrt{\eta\kappa}/[2L_0]$, plus
a single query to ${\mbox{STAT}}(\Omega(\eta/B))$. Furthermore,
for a first-order method that does not require values of $F$, the latter
query can be omitted.
If we remove the assumption $F\in \F_{\|\cdot\|}^1(\K,L_1)$ we can
instead use the upper bound $M=2L_0^2/\eta$.
\end{lemma}
\iffull
\begin{proof}
\input{sqc-strongly-cvx}
\end{proof}
\fi
\subsection{Classes of Convex Minimization Problems}
We now use known inexact convex minimization algorithms together with our SQ implementation of approximate gradient oracles
to solve several classes of stochastic optimization problems. We will see that in terms
of estimation complexity there is no significant gain from the non-smooth to the smooth
case; however, we can significantly reduce the number of queries by acceleration
techniques.
On the other hand, strong convexity leads to improved estimation complexity
bounds: the key insight here is that only a local approximation of the gradient around the current query point suffices for such methods, as a first-order $(\eta,M,\mu)$-oracle is robust to crude approximations of the gradient at points far from the query point (see Lemma \ref{lem:str_cvx_oracle}).
We note that both smoothness and strong convexity are required only for the objective function and not for each function in the support of the distribution. This opens up the possibility of applying this algorithm without
the need of adding a strongly convex term pointwise --\eg in regularized linear regression--
as long as the expectation is strongly convex.
\subsubsection{Non-smooth Case: The Mirror-Descent Method}
Before presenting the mirror-descent method we give some necessary background
on prox-functions. We assume the existence of a subdifferentiable $r$-uniformly
convex function (where $2\leq r<\infty$) $\Psi:\K\to\R_+$ w.r.t. the norm $\|\cdot\|$, i.e.,
that satisfies\footnote{We have normalized the function so that the constant of
$r$-uniform convexity is 1.} for all $x,y\in\K$
\begin{equation} \label{unif_conv_grad}
\Psi(y)\geq \Psi(x) + \la \nabla \Psi(x), y-x \ra +\frac1r\|y-x\|^r.
\end{equation}
We will assume w.l.o.g. that $\inf_{x\in \K}\Psi(x) = 0$.
The existence of $r$-uniformly convex functions holds in rather general situations
\cite{Pisier:2011}, and, in particular, for finite-dimensional $\ell_p^d$ spaces we have
explicit constructions for $r=\min\{2,p\}$ (see Appendix \ref{sec:unif_cvx} for details).
Let $D_{\Psi}(\K)\doteq\sup_{x\in\K}\Psi(x)$ be the
{\em prox-diameter of} $\K$ w.r.t.~$\Psi$.
We define the prox-function (a.k.a.~Bregman distance)
at $x\in \mbox{int}(\K)$ as $V_x(y) = \Psi(y) -\Psi(x) -\la\nabla\Psi(x), y-x \ra$.
In this case we say that the prox-function is based on the $\Psi$ proximal setup.
Finally, notice that by \eqref{unif_conv_grad} we have $V_x(y) \geq \frac1r\|y-x\|^r$.\\
For the first-order methods in this section we will assume $\K$ is such that for any
vector $x\in\K$ and $g\in\R^d$ the {\em proximal problem}
$\min\{\la g,y-x\ra+V_{x}(y) :\,y\in\K\}$ can be solved efficiently. For the case
$\Psi(\cdot)=\|\cdot\|_2^2$ this corresponds to Euclidean projection, but
this type of problems can be efficiently solved in more general situations
\cite{nemirovsky1983problem}.
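Two concrete instances of the proximal problem (our illustration in Python; the step size $\alpha$ is folded into $g$): for $\Psi=\frac12\|\cdot\|_2^2$ on the Euclidean ball the step is a Euclidean projection, and for the negative entropy on the simplex it is a multiplicative update.
\begin{verbatim}
# Proximal step min_{y in K} { <g, y - x> + V_x(y) } for two setups.
import numpy as np

def prox_euclidean_ball(x, g):               # Psi = 0.5*||.||_2^2, K = B_2^d
    y = x - g                                # unconstrained minimizer
    n = np.linalg.norm(y)
    return y if n <= 1 else y / n            # Euclidean projection

def prox_entropy_simplex(x, g):              # Psi = sum_i y_i log y_i
    y = x * np.exp(-g)                       # here V_x(y) = KL(y || x)
    return y / y.sum()                       # renormalize onto the simplex

x = np.ones(4) / 4
g = np.array([0.5, -0.2, 0.0, 0.1])
print(prox_euclidean_ball(x, g), prox_entropy_simplex(x, g))
\end{verbatim}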
The first class of functions we study is $\F_{\|\cdot\|}^0({\cal K},L_0)$.
We propose to solve problems in this class by the mirror-descent method
\cite{nemirovsky1983problem}. This is a classic method for minimization of non-smooth functions, with various applications to stochastic and online learning. Although simple
and folklore, we are not aware of a reference on the analysis of the inexact version
with proximal setup based on an $r$-uniformly convex function. Therefore we include
its analysis \iffull here\else in Appendix \ref{sec:proof_inexact_MD}\fi.
Mirror-descent uses a prox-function $V_x(\cdot)$ based on the $\Psi$ proximal setup.
The method starts by querying a gradient at the point
$x^0=\arg\min_{x\in\K} \Psi(x)$, and given a response $\tilde g^t\doteq \tilde g(x^t)$ to the gradient query at point $x^t$
it will compute its next query point as \begin{equation} \label{Prox_step}
x^{t+1} = \arg\min_{y\in\K} \{ \alpha\la \tilde g^t,y-x^t \ra + V_{x^t}(y) \},
\end{equation}
which corresponds to a proximal problem.
The output of the method is the average of iterates $\bar x^T\doteq \frac1T \sum_{t=1}^T x^t$.
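A compact Python sketch of the method (ours; exact subgradients, so $\eta=0$, with the entropy proximal setup on the simplex, for which $r=2$ and $D_\Psi(\K)=\log d$, and the illustrative objective $F(x)=\|x-e_1\|_1$, which has $L_0=1$ w.r.t.~$\|\cdot\|_1$):
\begin{verbatim}
# Mirror descent (exponentiated gradient) on the simplex, entropy setup.
import numpy as np

d, T = 50, 2000
L0, D = 1.0, np.log(d)                      # D = D_Psi(simplex) for entropy
alpha = np.sqrt(2 * D / T) / L0             # step size with r = 2

e1 = np.zeros(d); e1[0] = 1.0
x = np.ones(d) / d                          # x^0 = argmin Psi = uniform
avg = np.zeros(d)
for t in range(T):
    g = np.sign(x - e1)                     # subgradient of ||x - e1||_1
    x = x * np.exp(-alpha * g)              # proximal (mirror) step
    x /= x.sum()
    avg += x
avg /= T
print(np.abs(avg - e1).sum())               # ~ L0 * sqrt(2 D / T); min F = 0
\end{verbatim}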
\begin{proposition} \label{Prop:Inexact_MD}
Let \(F\in \F_{\|\cdot\|}^0(\K,L_0)\) and $\Psi:\K\to\R$ be an $r$-uniformly
convex function. Then the inexact mirror-descent method with
$\Psi$ proximal setup, step size $\alpha=\frac{1}{L_0}[rD_{\Psi}(\K)/T]^{1-1/r}$, and an $\eta$-approximate gradient for $F$ over $\K$, guarantees after $T$ steps an accuracy
\[F(\bar x^T)-F^{\ast} \leq L_0\left( \frac{rD_{\Psi}(\K)}{T}\right)^{1/r}+\eta.\]
\end{proposition}
\iffull \input{sqc-proof-MD}
\fi
\begin{remark}
As in our mean estimation problems, we assume a uniform bound on the norm of the gradient over the whole support of the input distribution $D$. It is known that techniques based on stochastic gradient descent achieve similar guarantees when one uses a bound on the second moment of the norm of gradients instead of the uniform bound (\eg \cite{Nemirovski:1978,Nemirovski:2009}). For SQs the same setting and (almost) the same estimation complexity can be obtained using recent results from \cite{Feldman:16sqvar}. The results show that ${\mbox{VSTAT}}$ allows estimation of expectation of any unbounded function $\phi$ of ${\mathbf w}$ within $\eps \sigma$ using $1/\eps^2$ queries of estimation complexity $\tilde{O}(1/\eps^2)$, where $\sigma$ is the standard deviation of $\phi({\mathbf w})$.
\end{remark}
We can readily apply the result above to stochastic convex programs in
non-smooth $\ell_p$ settings.
\begin{definition}[$\ell_p$-setup]
Let $1\leq p\leq \infty$, $L_0,R>0$, and $\K\subseteq\B_p^d(R)$ be a convex body. We define as
the (non-smooth) $\ell_p$-setup the family of problems
$\min_{x\in\K}\{ F(x) \doteq \E_{{\mathbf w}}[f(x,{\mathbf w})]\}$, where for all $w\in\W$, $
f(\cdot,w)\in\F_{\|\cdot\|_p}^0(\K,L_0)$.
In the smooth $\ell_p$-setup we additionally assume that $F\in\F_{\|\cdot\|_p}^1(\K,L_1)$.
\end{definition}
From constructions of $r$-uniformly
convex functions for $\ell_p$ spaces, with $r=\min\{2,p\}$ (see Appendix
\ref{sec:unif_cvx}), we know that there exists an efficiently computable prox-function $\Psi$
(\ie whose value and gradient can be computed exactly, and thus
problem \eqref{Prox_step} is solvable for simple enough $\K$). The
consequences in terms of estimation complexity are summarized in the
following corollary, and proved in Appendix \ref{proof_solve_cvx_ellp}.
\begin{corollary} \label{cor:solve_cvx_ellp}
The stochastic optimization problem in the non-smooth $\ell_p$-setup can be solved with
accuracy $\varepsilon$ by:
\begin{itemize}
\item If $p=1$, using
$O\left(d\log d\cdot \left(\dfrac{L_0R}{\varepsilon}\right)^2 \right)$ queries to
${\mbox{STAT}}\left(\dfrac{\varepsilon}{4L_0R}\right)$;
\item If $1<p< 2$, using
$O\left(d\log d\cdot \dfrac{1}{(p-1)}\left(\dfrac{L_0R}{\varepsilon}\right)^2\right)$
queries to ${\mbox{STAT}}\left( \Omega\left(\dfrac{\varepsilon}{[\log d]L_0R}\right)\right)$;
\item If $p=2$, using $O\left(d \cdot \left(\dfrac{L_0R}{\varepsilon}\right)^2\right)$
queries to ${\mbox{STAT}}\left( \Omega\left(\dfrac{\varepsilon}{L_0R}\right)\right)$;
\item If $2<p<\infty$, using
$O\left(d\log d\cdot 4^{p}\left(\dfrac{L_0R}{\varepsilon}\right)^p\right)$
queries to ${\mbox{VSTAT}}\left(\left(\dfrac{64 L_0R \log d}{\varepsilon}\right)^p\right)$.
\end{itemize}
\end{corollary}
\subsubsection{Smooth Case: Nesterov Accelerated Method}
Now we focus on the class of functions whose expectation has
Lipschitz continuous gradient. For simplicity, we will restrict the
analysis to the case where the prox-function is obtained from a strongly
convex function, i.e., $r$-uniform convexity with $r=2$. We utilize a known
inexact variant of Nesterov's accelerated method \cite{nesterov1983method}.
\begin{proposition}[\cite{dAspremont:2008}] \label{prop:dAspremont}
Let $F\in \F_{\|\cdot\|}^1(\K,L_1)$, and let $\Psi:\K\to\R_+$ be a $1$-strongly
convex function w.r.t. $\|\cdot\|$.
Let \((x^t,y^t,z^t)\) be the iterates of the accelerated method with $\Psi$
proximal setup, and where the algorithm has access to an \(\eta\)-approximate
gradient oracle for $F$ over $\K$. Then,
\[ F(y^T)-F^{\ast} \leq \dfrac{L_1D_{\Psi}(\K)}{T^2}+3\eta.\]
\end{proposition}
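For intuition, here is a Python sketch of the Euclidean special case (ours, not the general proximal scheme of \cite{dAspremont:2008}): standard Nesterov momentum with a projected step, where bounded noise added to the gradient plays the role of the $\eta$-approximate oracle, which per Proposition \ref{prop:dAspremont} enters the final accuracy only additively.
\begin{verbatim}
# Inexact accelerated gradient, Euclidean case: minimize the smooth
# least-squares objective F(x) = 0.5*||Ax - b||^2 over the unit ball,
# with gradients perturbed by noise of l_2 norm eta.
import numpy as np

rng = np.random.default_rng(5)
d = 20
A = rng.normal(size=(d, d)) / np.sqrt(d)
b = rng.normal(size=d)
L1 = np.linalg.norm(A, 2) ** 2              # smoothness constant
grad = lambda x: A.T @ (A @ x - b)
proj = lambda x: x / max(1.0, np.linalg.norm(x))

eta = 1e-3
x_prev = y = np.zeros(d)
for t in range(1, 300):
    g = grad(y) + eta * rng.choice([-1.0, 1.0], size=d) / np.sqrt(d)
    x = proj(y - g / L1)                    # projected gradient step
    y = x + (t - 1.0) / (t + 2.0) * (x - x_prev)   # momentum
    x_prev = x

print(0.5 * np.linalg.norm(A @ x - b) ** 2) # ~ min over ball + O(eta)
\end{verbatim}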
The consequences for the smooth $\ell_p$-setup, which are straightforward
from the proposition above and Observation \ref{obs:approx_grad_oracle},
are summarized below, and proved in Appendix \ref{proof_solve_smooth_cvx_ellp}.
\begin{corollary} \label{cor:solve_smooth_cvx_ellp}
Any stochastic convex optimization problem in the smooth $\ell_p$-setup
can be solved with accuracy $\varepsilon$ by:
\begin{itemize}
\item If $p=1$, using
$O\left(d \sqrt{\log d}\cdot \sqrt{\dfrac{L_1R^2}{\varepsilon}} \right)$ queries to
${\mbox{STAT}}\left(\dfrac{\varepsilon}{12L_0R} \right)$;
\item If $1<p< 2$, using
$O\left(d\log d\cdot \dfrac{1}{\sqrt{p-1}}\sqrt{\dfrac{L_1R^2}{\varepsilon}}\right)$
queries to
${\mbox{STAT}}\left( \Omega\left(\dfrac{\varepsilon}{[\log d]L_0R}\right)\right)$;
\item If $p=2$, using $O\left(d\cdot \sqrt{\dfrac{L_1R^2}{\varepsilon}}\right)$
queries to
${\mbox{STAT}}\left( \Omega\left(\dfrac{\varepsilon}{L_0R}\right)\right)$.
\end{itemize}
\end{corollary}
\subsubsection{Strongly Convex Case}
Finally, we consider the class ${\cal S}_{\|\cdot\|}(\K,\kappa)$ of strongly
convex functions.
We further restrict our attention to the Euclidean case, i.e., $\|\cdot\|=\|\cdot\|_2$.
There are two main advantages of having a strongly convex objective: On
the one hand, gradient methods in this case achieve linear convergence
rate, on the other hand we will see that estimation complexity is
independent of the radius. Let us first make precise the first statement:
It turns out that with an \((\eta,M,\mu)\)-oracle we can implement
the inexact dual gradient method \cite{Devolder2:2013} achieving
a linear convergence rate. The result is as follows:
\begin{theorem}[\cite{Devolder2:2013}] \label{thm:linear_conv}
Let $F:{\cal K}\to\R$ be a subdifferentiable convex function endowed
with a \((\eta,M,\mu)\)-oracle over $\K$.
Let $y^t$ be the sequence of averages of the inexact dual gradient method, then
\[ F(y^T)-F^{\ast} \leq \dfrac{MR^2}{2} \exp\left(-\frac{\mu}{M}(T+1) \right)+\eta.\]
\end{theorem}
The results in \cite{Devolder2:2013} indicate that the accelerated method can also
be applied in this situation, and it does not suffer from noise accumulation. However,
the accuracy requirement is more restrictive than for the primal and dual
gradient methods. In fact, the required accuracy for the approximate gradient is
$\eta=O(\varepsilon\sqrt{\mu/M})$; although this is still independent of the radius,
it makes estimation complexity much more sensitive to condition number, which is
undesirable.
An important observation of the dual gradient algorithm is that it does not
require function values (as opposed to its primal version). This together
with Lemma \ref{lem:str_cvx_oracle} leads to the following result.
\begin{corollary} \label{cor:solve_str_cvx}
The stochastic convex optimization problem $\min_{x\in \K} \{F(x)\doteq \E_{{\mathbf w}}[f(x,{\mathbf w})]\}$,
where $F\in{\cal S}_{\|\cdot\|_2}(\K,\kappa)\cap\F_{\|\cdot\|_2}^1(\K,L_1)$, and for all $w\in\W$,
$f(\cdot,w)\in \F_{\|\cdot\|_2}^0(\K,L_0)$,
can be solved to accuracy $\varepsilon>0$ using
$O\left( d\cdot \dfrac{L_1}{\kappa}\log\left(\dfrac{L_1R}{\varepsilon} \right) \right)$
queries to ${\mbox{STAT}}(\Omega(\sqrt{\varepsilon \kappa}/L_0))$.
Without the assumption $F\in {\cal F}_{\|\cdot\|_2}^1({\cal K},L_1)$ the
problem can be solved to accuracy $\varepsilon>0$ by
using $O\left(d\cdot \dfrac{L_0^2}{\varepsilon\kappa}
\log\left(\dfrac{L_0R}{\varepsilon}\right) \right)$ queries
to ${\mbox{STAT}}(\Omega(\sqrt{\varepsilon\kappa}/L_0))$.
\end{corollary}
\iffull
\input{sqc-apps-learning}
\fi
\section{Introduction}
Statistical query (SQ) algorithms, defined by \citenames{Kearns}{Kearns:98} in the context of PAC learning and by
\citenames{Feldman \etal}{FeldmanGRVX:12} for general problems on inputs sampled i.i.d.~from distributions, are algorithms that can be implemented using estimates of the expectation of any given function on a sample drawn randomly from the input distribution $D$ instead of direct access to random samples. Such access is abstracted using a {\em statistical query oracle} that given a query function $\phi:\W \rar [-1,1]$ returns an estimate of $\E_{{\mathbf w}}[\phi({\mathbf w})]$ within some tolerance $\tau$ (possibly dependent on $\phi$). We will refer to the number of samples sufficient to estimate the expectation of each query of a SQ algorithm with some fixed constant confidence as its {\em estimation complexity} (often $1/\tau^2$) and the number of queries as its {\em query complexity}.
Statistical query access to data was introduced as means to derive noise-tolerant algorithms in the PAC model of learning \cite{Kearns:98}. Subsequently, it was realized that reducing data access to estimation of simple expectations has a wide variety of additional useful properties. It played a key role in the development of the notion of differential privacy \cite{DinurN03,BlumDMN:05,DworkMNS:06} and has been subject of intense subsequent research in differential privacy\footnote{In this context an ``empirical" version of SQs is used which is referred to as {\em counting} or {\em linear} queries. It is now known that empirical values are close to expectations when differential privacy is preserved \cite{DworkFHPRR14:arxiv}.} (see \cite{DworkRoth:14} for a literature review). It has important applications in a large number of other theoretical and practical contexts such as distributed data access \cite{ChuKLYBNO:06,RoySKSW10,Sujeeth:11}, evolvability \cite{Valiant:09,Feldman:08ev,Feldman:09sqd} and memory/communication limited machine learning \cite{BalcanBFM12,SteinhardtVW:2016}. Most recently, in a line of work initiated by \citenames{Dwork \etal}{DworkFHPRR14:arxiv}, SQs have been used as a basis for understanding generalization in adaptive data analysis \cite{DworkFHPRR14:arxiv,HardtU14,DworkFHPRR15:arxiv,SteinkeU15,BassilyNSSSU15}.
Here we consider the complexity of solving stochastic convex minimization problems by SQ algorithms. In stochastic convex optimization the goal is to minimize a convex function $F(x) = \E_{{\mathbf w}}[f(x,{\mathbf w})]$ over a convex set $\K \subset \R^d$, where ${\mathbf w}$ is a random variable distributed according to some distribution $D$ over domain $\W$ and each $f(x,w)$ is convex in $x$. The optimization is based on i.i.d.~samples $w^1,w^2,\ldots,w^n$ of ${\mathbf w}$. Numerous central problems in machine learning and statistics are special cases of this general setting with a vast literature devoted to techniques for solving variants of this problem (\eg \cite{SrebroTewari:2010Tutorial,Shalev-ShwartzBen-David:2014}). It is usually assumed that $\K$ is ``known" to the algorithm (or in some cases given via a sufficiently strong oracle) and the key challenge is understanding how to cope with estimation errors arising from the stochastic nature of information about $F(x)$.
Surprisingly, prior to this work, the complexity of this fundamental class of problems has not been studied in the SQ model. This is in contrast to the rich and nuanced understanding of the sample and computational complexity of solving such problems given unrestricted access to samples as well as in a wide variety of other oracle models.
The second important property of statistical algorithms is that it is possible to prove information-theoretic lower bounds on the complexity of any statistical algorithm that solves a given problem. The first one was shown by \citenames{Kearns}{Kearns:98} who proved that parity functions cannot be learned efficiently using SQs. Subsequent work has developed several techniques for proving such lower bounds (\eg \cite{BlumFJ+:94,Simon:07,FeldmanGRVX:12,FeldmanPV:13}), established relationships to other complexity measures (\eg \cite{Sherstov:08,KallweitSimon:11}) and provided lower bounds for many important problems in learning theory (\eg \cite{BlumFJ+:94,KlivansSherstov:07a,FeldmanLS:11colt}) and beyond \cite{FeldmanGRVX:12,FeldmanPV:13,BreslerGS14a,WangGL:15}.
From this perspective, statistical algorithms for stochastic convex optimization have another important role. For many problems in machine learning and computer science, convex optimization gives state-of-the-art results and therefore lower bounds against such techniques are a subject of significant research interest. Indeed, in recent years this area has been particularly active with major progress made on several long-standing problems (\eg \cite{Fiorini:2012,Rothvoss14,MekaPW15,LeeRS15}). As was shown in \cite{FeldmanPV:13}, it is possible to convert SQ lower bounds into purely structural lower bounds on convex relaxations, in other words lower bounds that hold without assumptions on the algorithm that is used to solve the problem (in particular, not just SQ algorithms). From this point of view, each SQ implementation of a convex optimization algorithm is a new lower bound against the corresponding convex relaxation of the problem.
\subsection{Overview of Results}
We focus on iterative first-order methods namely techniques that rely on updating the current point $x^t$ using only the (sub-)gradient of $F$ at $x^t$. These are among the most widely-used approaches for solving convex programs in theory and practice. It can be immediately observed that for every $x$, $\nabla F(x) = \E_{{\mathbf w}}[\nabla f(x,{{\mathbf w}})]$ and hence it is sufficient to estimate expected gradients to some sufficiently high accuracy in order to implement such algorithms (we are only seeking an approximate optimum anyway). The accuracy corresponds to the number of samples (or estimation complexity) and is the key measure of complexity for SQ algorithms. However, to the best of our knowledge, the estimation complexity for specific SQ implementations of first-order methods has never been formally addressed.
We start with the case of linear optimization, namely $\nabla F(x)$ is the same over the whole body $\K$. It turns out that in this case global approximation of the gradient (that is one for which the linear approximation of $F$ given by the estimated gradient is $\eps$ close to the true linear approximation of $F$) is sufficient. This means that the question becomes that of estimating the mean vector of a distribution over vectors in $\R^d$ in some norm that depends on the geometry of $\K$. This is a basic question (indeed, central to many high-dimensional problems) but it has not been carefully addressed even for the simplest norms like $\ell_2$. We examine it in detail and provide an essentially complete picture for all $\ell_q$ norms with $q\in [1,\infty]$. We also briefly examine the case of general convex bodies (and corresponding norms) and provide some universal bounds.
The analysis of the linear case above gives us the basis for tackling first-order optimization methods for Lipschitz convex functions. That is, we can now obtain an estimate of the expected gradient at each iteration. However we still need to determine whether the global approximation is needed or a local one would suffice and also need to ensure that estimation errors from different iterations do not accumulate. Luckily, for this we can build on the study of the performance of first-order methods with inexact first-order oracles. Methods of this type
have a long history (\eg \cite{Polyak:1987,Shor:2011}), however some of our methods of choice have
only been studied recently. We give SQ algorithms for implementing the global and local oracles and then systematically study several traditional setups of convex optimization: non-smooth, smooth and strongly convex. While that is not the most exciting task in itself, it serves to show the generality of our approach. Remarkably, in all of these common setups we achieve the same estimation complexity as what is known to be achievable with samples.
All of the previous results require that the optimized functions are Lipschitz, that is, the gradients are bounded in the appropriate norm (and the complexity depends polynomially on the bound). Addressing non-Lipschitz optimization seems particularly challenging in the stochastic case and in the SQ model in particular. Indeed, direct SQ implementation of some techniques would require queries of exponentially high accuracy. We give two approaches for dealing with this problem that require only that the convex functions in the support of the distribution have bounded range. The first one avoids gradients altogether by only using estimates of function values. It is based on random walk techniques of \citenames{Kalai and Vempala}{KalaiV06} and \citenames{Lovasz and Vempala}{LovaszV06}. The second one is based on a new analysis of the classic center-of-gravity method. There we show that there exists a local norm, specifically that given by the inertial ellipsoid, that allows us to obtain a global approximation relatively cheaply.
Interestingly, these very different methods have the same estimation complexity which is also within factor of $d$ of our lower bound.
Finally, we highlight some theoretical applications of our results. First, we describe a high-level methodology of obtaining lower bound for convex relaxations from our results and give an example for constraint satisfaction problems. We then show that our mean estimation algorithms can greatly improve estimation complexity of the SQ version of the classic Perceptron algorithm and several related algorithms.
Finally, we give corollaries for two problems in differential privacy: (a) new algorithms for solving convex programs with the stringent local differential privacy; (b) strengthening and generalization of algorithms for answering sequences of convex minimization queries differentially privately given by \citenames{Ullman}{Ullman15}.
\subsection{Linear optimization and mean estimation}
We start with the linear optimization case which is a natural special case and also the basis of our implementations of first-order methods. In this setting $\W \subseteq \R^d$ and $f(x,w) = \la x, w \ra$. Hence $F(x) = \la x, \bar{w} \ra$, where $\bar{w} = \E_{{\mathbf w}}[{\mathbf w}]$. This reduces the problem to finding a sufficiently accurate estimate of $\bar{w}$. Specifically, for a given error parameter $\varepsilon$, it is sufficient to find a vector $\tilde{w}$, such that for every $x \in \K$, $|\la x, \bar{w} \ra - \la x, \tilde{w} \ra | \leq \varepsilon$. Given such an estimate $\tilde{w}$, we can solve the original problem with error of at most $2\varepsilon$ by solving $\min_{x\in \K} \la x, \tilde{w} \ra$.
An obvious way to estimate the high-dimensional mean using SQs is to simply estimate each of the coordinates of the mean vector using a separate SQ: that is $\E[{\mathbf w}_i/B_i]$, where $[-B_i,B_i]$ is the range of ${\mathbf w}_i$. Unfortunately, even in the most standard setting, where both $\K$ and $\W$ are $\ell_2$ unit balls, this method requires accuracy that scales with $1/\sqrt{d}$ (or estimation complexity that scales linearly with $d$). In contrast, bounds obtained using samples are dimension-independent making this SQ implementation unsuitable for high-dimensional applications. Estimation of high-dimensional means for various distributions is an even more basic question than stochastic optimization; yet we are not aware of any prior analysis of its statistical query complexity. In particular, SQ implementation of all algorithms for learning halfspaces (including the most basic Perceptron) require estimation of high-dimensional means but known analyses rely on inefficient coordinate-wise estimation (\eg \cite{Bylander:94,BlumFKV:97,BalcanF13}).
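Concretely, if each coordinate of $\bar{w}$ is estimated within tolerance $\tau$, the resulting $\tilde{w}$ only satisfies
\[
\|\tilde{w}-\bar{w}\|_2 \le \Big(\sum_{i=1}^d \tau^2\Big)^{1/2} = \tau\sqrt{d},
\]
so achieving $\ell_2$ error $\varepsilon$ forces $\tau = \varepsilon/\sqrt{d}$, that is, estimation complexity $1/\tau^2 = d/\varepsilon^2$.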
The seemingly simple question we would like to answer is whether the SQ estimation complexity is different from the sample complexity of the problem. The first challenge here is that even the sample complexity of mean estimation depends in an involved way on the geometry of $\K$ and $\W$
(\cf \cite{Pisier:2011}). Also some of the general techniques for proving upper bounds on sample complexity (see App.~\ref{sec:Samples}) appeal directly to high-dimensional concentration and do not seem to extend to the intrinsically one-dimensional SQ model.
We therefore focus our attention on the much more benign and well-studied $\ell_{p}/\ell_q$ setting. That is $\K$ is a unit ball in $\ell_p$ norm and $\W$ is the unit ball in $\ell_q$ norm for $p\in [1,\infty]$ and $1/p+1/q=1$ (general radii can be reduced to this setting by scaling). This is equivalent to requiring that $\|\tilde{w} -\bar{w}\|_q \leq \varepsilon$ for a random variable ${\mathbf w}$ supported on the unit $\ell_q$ ball and we refer to it as $\ell_q$ mean estimation. Even in this standard setting the picture is not so clean in the regime when $q\in[1,2)$, where the sample complexity of $\ell_q$ mean estimation depends both on $q$ and the relationship between $d$ and $\varepsilon$.
In a nutshell, we give tight (up to a polylogarithmic in $d$ factor) bounds on the SQ complexity of $\ell_q$ mean estimation for all $q\in [1,\infty]$. These bounds match (up to a polylogarithmic in $d$ factor) the sample complexity of the problem. The upper bounds are based on several different algorithms.
\begin{itemize}
\item For $q=\infty$ straightforward coordinate-wise estimation gives the desired guarantees.
\item For $q=2$ we demonstrate that Kashin's representation of vectors introduced by \citenames{Lyubarskii and Vershynin}{Lyubarskii:2010} gives a set of $2d$ measurements which allow recovering the mean with estimation complexity of $O(1/\varepsilon^2)$. We also give a randomized algorithm based on estimating the truncated coefficients of the mean in a randomly rotated basis (a simplified sketch of this algorithm is given after this list). The algorithm has slightly worse $O(\log(1/\varepsilon)/\varepsilon^2)$ estimation complexity but its analysis is simpler and self-contained.
\item For $q \in (2,\infty)$ we use decomposition of the samples into $\log d$ ``rings" in which non-zero coefficients have low dynamic range. For each ring we combine $\ell_2$ and $\ell_\infty$ estimation to ensure low error in $\ell_q$ and nearly optimal estimation complexity.
\item For $q \in [1,2)$ a substantially more delicate analysis is necessary. For large $\eps$ we first again use a decomposition into ``rings" of low dynamic range. For each ``ring" we use coordinate-wise estimation and then sparsify the estimate by removing small coefficients. The analysis requires using statistical queries in which accuracy takes into account the variance of the random variable (modeled by the ${\mbox{VSTAT}}$ oracle from \cite{FeldmanGRVX:12}). For small $\eps$ a better upper bound can be obtained via a reduction to the $\ell_2$ case.
\end{itemize}
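The following Python sketch (a much-simplified illustration of the second algorithm for $q=2$; the truncation level, sample counts, and the bias analysis are omitted or chosen ad hoc) conveys the rotate-truncate-estimate idea:
\begin{verbatim}
# Simplified l_2 mean estimation: rotate by a random orthogonal Q (so a
# unit vector's coordinates are typically O(sqrt(log d / d))), truncate,
# estimate each rotated coordinate with a (simulated) STAT query, and
# rotate back. The bias introduced by truncation is not analyzed here.
import numpy as np

rng = np.random.default_rng(7)
d, n, tau = 50, 50000, 0.01
W = rng.normal(size=(n, d))
W /= np.linalg.norm(W, axis=1, keepdims=True)   # samples on the sphere

Q, _ = np.linalg.qr(rng.normal(size=(d, d)))    # random rotation
c = 4 * np.sqrt(np.log(d) / d)                  # truncation level
U = np.clip(W @ Q.T, -c, c)                     # rotated, truncated coords

# STAT query per coordinate on phi(w) = clip((Qw)_i, -c, c) / c.
est = np.array([U[:, i].mean() + c * tau * rng.choice([-1.0, 1.0])
                for i in range(d)])
mean_est = Q.T @ est                            # rotate back

# l_2 error <= c * tau * sqrt(d) = O(tau * sqrt(log d)).
print(np.linalg.norm(mean_est - W.mean(axis=0)))
\end{verbatim}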
The nearly tight lower bounds are proved using the technique recently introduced in \cite{FeldmanPV:13}. The lower bound also holds for the (potentially simpler) linear optimization problem. We remark that lower bounds on sample complexity do not imply lower bounds on estimation complexity since a SQ algorithm can use many queries.
We summarize the bounds in Table \ref{table:ellq_mean_est} and compare them with those achievable using samples (we provide the proof for those in Appendix \ref{sec:Samples} since we are not aware of a good reference for $q \in [1,2)$).
\begin{table}[h!]
\centering
\begin{tabular}{|M{1cm}|M{4.5cm}|M{4.5cm}|M{4cm}|N}
\hline
\(q\) & \multicolumn{2}{c|}{SQ estimation complexity} & Sample &\\ [8pt] \cline{2-3}
& Upper bound & Lower bound & complexity & \\ [8pt] \hline
$[1,2)$ & $O\left(\min\left\{\frac{d^{\frac2q-1}}{\varepsilon^2}, \left(\frac{\log d}{\varepsilon}\right)^p \right\}\right)$ &
$\tilde{\Omega}\left(\min\left\{\frac{d^{\frac2q-1}}{\varepsilon^2}, \frac{1}{\varepsilon^p\log d} \right\}\right)$
& $\Theta\left(\min\left\{\frac{d^{\frac2q-1}}{\varepsilon^2}, \frac{1}{\varepsilon^p} \right\}\right)$ & \\
[12pt] \hline
$2$ & $O(1/\varepsilon^2)$ &
$\Omega(1/\varepsilon^2)$ &
$\Theta(1/\varepsilon^2)$ & \\ [12pt] \hline
$(2,\infty)$& $O((\log d/\varepsilon)^2)$ &
$\Omega(1/\varepsilon^2)$ &
$\Theta(1/\varepsilon^2)$ & \\ [12pt] \hline
$\infty$ & $O(1/\varepsilon^2)$ &
$\Omega(1/\varepsilon^2)$ &
$\Theta(\log d/\varepsilon^2)$ & \\ [12pt] \hline
\end{tabular}
\caption{Bounds on $\ell_q$ mean estimation and linear optimization over $\ell_p$ ball. Upper bounds use at most $3d\log d$ (non-adaptive) queries. Lower bounds apply to all algorithms using $\poly(d/\varepsilon)$ queries. Sample complexity is for algorithms with access to samples.} \label{table:ellq_mean_est}
\end{table}
We then briefly consider the case of general $\K$ with $\W = {\mbox{conv}}(\K^*,-\K^*)$ (which corresponds to normalizing the range of linear functions in the support of the distribution). Here we show that for any polytope $\W$ the estimation complexity is still $O(1/\eps^2)$ but the number of queries grows linearly with the number of faces. More generally,
the estimation complexity of $O(d/\varepsilon^2)$ can be achieved for any $\K$. The algorithm relies on knowing John's ellipsoid \cite{John:1948} for $\W$ and therefore depends on $\K$. Designing a single algorithm that given a sufficiently strong oracle for $\K$ (such as a separation oracle) can achieve the same estimation complexity for all $\K$ is an interesting open problem (see Conclusions for a list of additional open problems). This upper bound is nearly tight since even for $\W$ being the $\ell_1$ ball we give a lower bound of $\tilde{\Omega}(d/\varepsilon^2)$.
\subsection{The Gradient Descent family}
The linear case gives us the basis for the study of the traditional setups of convex optimization for Lipschitz functions: non-smooth, smooth and strongly convex. In this setting we assume that for each $w$ in the support of the distribution $D$ and $x\in \K$, $\|\partial f(x,w)\|_q \leq L_0$ and the radius of $\K$ is bounded by $R$ in $\ell_p$ norm. The smooth and strongly convex settings correspond to second order assumptions on $F$ itself. For the two first classes of problems, our algorithms use
global approximation of the gradient on $\K$ which as we know is necessary already in the linear case.
However, for the strongly convex case we can show that an oracle introduced by
\citenames{Devolder \etal}{Devolder:2014} only requires {\em local} approximation of the gradient, which
leads to improved estimation complexity bounds.
For the non-smooth case we analyze and apply the classic mirror-descent method \cite{nemirovsky1983problem}, for the smooth
case we rely on the analysis by \citenames{d'Aspremont}{dAspremont:2008} of an inexact variant of Nesterov's accelerated method \cite{nesterov1983method}, and for the strongly convex case we use the recent
results by \citenames{Devolder \etal}{Devolder2:2013} on the inexact dual gradient method.
We summarize our results for the $\ell_2$ norm in Table \ref{table:grad_meth_ell2}.
Our results for the mirror-descent and Nesterov's algorithm
apply in more general settings (e.g., $\ell_p$ norms): we refer the reader to Section \ref{sec:gradient} for the detailed statement of results. In \iffull Section \else Appendix \fi\ref{subsec:regression} we also demonstrate and discuss the implications of our results for the well-studied generalized linear regression problems.
\begin{table}[h!]
\centering
\begin{tabular}{|M{2.5cm}|M{3cm}|M{4.5cm}|M{2.5cm}|N}
\hline
Objective & Inexact gradient method & Query complexity & Estimation complexity & \\ \hline
Non-smooth& Mirror-descent & $O\left(d\cdot\left(\frac{L_0R}{\varepsilon}\right)^2\right)$ &
$O\left(\left(\frac{L_0R}{\varepsilon}\right)^2\right)$ & \\ [12pt]\hline
Smooth & Nesterov &$O\left(d\cdot \sqrt{\frac{L_1R^2}{\varepsilon}}\right)$&
$O\left(\left(\frac{L_0R}{\varepsilon}\right)^2\right)$ & \\ [12pt]\hline
Strongly convex non-smooth & Dual gradient &
$O\left(d\cdot \frac{L_0^2}{\varepsilon\kappa} \log\left(\frac{L_0R}{\varepsilon}\right)\right)$ &
$O\left(\frac{L_0^2}{\varepsilon\kappa}\right)$ & \\ [12pt]\hline
Strongly convex smooth & Dual gradient &
$O\left(d \cdot \frac{L_1}{\kappa} \log\left(\frac{L_1R}{\varepsilon}\right)\right)$ &
$O\left(\frac{L_0^2}{\varepsilon\kappa}\right)$ & \\ [12pt]\hline
\end{tabular}
\caption{Upper bounds for inexact gradient methods in
the stochastic $\ell_2$-setup. Here $R$ is the Euclidean radius of the
domain, $L_0$ is the Lipschitz constant of all functions in the support of the distribution.
$L_1$ is the Lipschitz constant of the gradient and $\kappa$ is the strong
convexity parameter for the expected objective.} \label{table:grad_meth_ell2}
\end{table}
It is important to note that, unlike in the linear case, the SQ algorithms for optimization of general convex functions are adaptive.
In other words, the SQs being asked at step $t$ of the iterative algorithm depend on the answers to queries in previous steps. This means that the number of samples that would be necessary to implement such SQ algorithms is no longer easy to determine. In particular, as demonstrated by \citenames{Dwork \etal}{DworkFHPRR14:arxiv}, the number of samples needed for estimation of adaptive SQs using empirical means might scale linearly with the query complexity. While better bounds can be easily achieved in our case (logarithmic --as opposed to linear-- in dimension), they are still worse than the sample complexity. We are not aware of a way to bridge this intriguing gap or prove that it is not possible to answer the SQ queries of these algorithms with the same sample complexity.
Nevertheless, estimation complexity is a key parameter even in the adaptive case. There are many other settings in which one might be interested in implementing answers to SQs and in some of those the complexity of the implementation depends on the estimation complexity and query complexity in other ways (for example, differential privacy). In a number of lower bounds for SQ algorithm (including those in Sec.~\ref{sec:lower-linear}) there is a threshold phenomenon in which as one goes below certain estimation complexity, the query complexity lower bound grows from polynomial to exponential very quickly (\eg \cite{FeldmanGRVX:12,FeldmanPV:13}). For such lower bounds only the estimation complexity matters as long as the query complexity of the algorithm is polynomial.
\subsection{Non-Lipschitz Optimization}
The estimation complexity bounds obtained for gradient descent-based methods depend polynomially on the Lipschitz constant $L_0$ and the radius $R$ (unless $F$ is strongly convex). In some cases such bounds are too large and we only have a bound on the range of $f(x,w)$ for all $w \in \W$ and $x \in \K$ (note that a bound of $L_0R$ on range is also implicit in the Lipschitz setting). This is a natural setting for stochastic optimization (and statistical algorithms, in particular) since even estimating the value of a given solution $x$ with high probability and any desired accuracy from samples requires some assumptions about the range of most functions.
For simplicity we will assume $|f(x,w)| \leq B=1$, although our results can be extended to the setting where only the variance of $f(x,{\mathbf w})$ is bounded by $B^2$ using the technique from \cite{Feldman:16sqvar}. Now, for every $x\in \K$, a single SQ for function $f(x,w)$ with tolerance $\tau$ gives a value $\tilde{F}(x)$ such that $|F(x) - \tilde{F}(x)| \leq \tau$. This, as first observed by \citenames{Valiant}{Valiant14}, gives a $\tau$-approximate value (or zero-order) oracle for $F(x)$.
It was proved by \citenames{Nemirovsky and Yudin}{nemirovsky1983problem} and also by \citenames{Gr\"{o}tschel \etal}{GroetschelLS88} (who refer to such an oracle as a {\em weak evaluation oracle}) that a $\tau$-approximate value oracle suffices to $\varepsilon$-minimize $F(x)$ over $\K$ with running time and $1/\tau$ being polynomial in $d, 1/\varepsilon, \log (R_1/R_0)$, where $\B_2^d(R_0) \subseteq \K \subseteq \B_2^d(R_1)$.
The analysis in \cite{nemirovsky1983problem,GroetschelLS88} is relatively involved and does not provide explicit bounds on $\tau$.
Here we substantially sharpen the understanding of optimization with an approximate value oracle. Specifically, we show that an $(\varepsilon/d)$-approximate value oracle for $F(x)$ suffices to $\varepsilon$-optimize in polynomial time.
\begin{thm}
\label{thm:random-walk-zero-intro}
There is an algorithm that with probability at least $2/3$, given any convex program $\min_{x \in \K} F(x)$ in $\R^d$ where $\forall x\in \K,\ |F(x)| \leq 1$ and $\K$ is given by a membership oracle with the guarantee that $ \B_2^d(R_0) \subseteq \K \subseteq \B_2^d(R_1)$, outputs an $\eps$-optimal solution in time $\poly(d, \frac{1}{\eps}, \log{(R_1/R_0)})$ using $\poly(d, \frac{1}{\eps})$ queries to $\Omega(\eps/d)$-approximate value oracle.
\end{thm}
We outline a proof of this theorem which is based on an extension of the random walk approach of \citenames{Kalai and Vempala}{KalaiV06} and \citenames{Lovasz and Vempala}{LovaszV06}. This result was also independently obtained in a recent work of \citenames{Belloni \etal}{BelloniLNR15} who provide a detailed analysis of the running time and query complexity.
It turns out that the dependence on $d$ in the tolerance parameter of this result cannot be removed altogether: \citenames{Nemirovsky and Yudin}{nemirovsky1983problem} prove that even linear optimization over $\ell_2$ ball of radius 1 with a $\tau$-approximate value oracle requires $\tau = \tilde \Omega(\eps/\sqrt{d})$ for any polynomial-time algorithm. This
result also highlights the difference between SQs and the approximate value oracle since the problem can be solved using SQs of tolerance $\tau=O(\eps)$. Optimization with a value oracle is also substantially more challenging algorithmically.
Luckily, SQs are not constrained to the value information and we give a substantially simpler and more efficient algorithm for this setting. Our algorithm is based on the classic center-of-gravity method with a crucial new observation: in every iteration the inertial ellipsoid, whose center is the center of gravity of the current body, can be used to define a (local) norm in which the gradients can be efficiently approximated globally. The exact center of gravity and inertial ellipsoid cannot be found efficiently and the efficiently implementable Ellipsoid method does not have the desired local norm. However\iffull, \else, in Appendix \ref{sec:sqc-cog-efficient} \fi we show that the approximate center-of-gravity method introduced by \citenames{Bertsimas and Vempala}{Bertsimas:2004} and approximate computation of the inertial ellipsoid \cite{LovaszV06b} suffice for our purposes.
\begin{theorem}[Informal]
\label{thm:cog-sq-efficient-intro}
Let $\K\subseteq \R^d$ be a convex body given by a membership oracle with the guarantee that $\B_2^d(R_0) \subseteq \K \subseteq \B_2^d(R_1)$, and assume that for all $w\in\W, x\in \K$, $|f(x,w)|\leq 1$. Then there is a randomized algorithm that for every distribution $D$ over $\W$ outputs an $\eps$-optimal solution using $O(d^2\log(1/\varepsilon))$ statistical queries with tolerance $\Omega(\varepsilon/d)$ and runs in $\poly(d, 1/\eps, \log(R_1/R_0))$ time.
\end{theorem}
Closing the gap between the tolerance of $\eps/\sqrt{d}$ in the lower bound (already for the linear case) and the tolerance of $\eps/d$ in the upper bound is an interesting open problem. Remarkably, as Thm.~\ref{thm:random-walk-zero-intro} and the lower bound in \cite{nemirovsky1983problem} show, the same intriguing gap is also present for approximate value oracle.
\subsection{Applications}
We now highlight several applications of our results. Additional results can be easily derived in a variety of other contexts that rely on statistical queries (such as evolvability \cite{Valiant:09}, adaptive data analysis \cite{DworkFHPRR14:arxiv} and distributed data analysis \cite{ChuKLYBNO:06}).
\subsubsection{Lower Bounds}
The statistical query framework provides a natural way to convert algorithms into lower bounds. For many problems over distributions it is possible to prove information-theoretic lower bounds against statistical algorithms that are much stronger than known computational lower bounds for the problem. A classical example of such a problem is learning of parity functions with noise (or, equivalently, finding an assignment that maximizes the fraction of satisfied XOR constraints). This implies that any algorithm that can be implemented using statistical queries with complexity below the lower bound cannot solve the problem. If the algorithm relies solely on some structural property of the problem, such as approximation of functions by polynomials or computation by a certain type of circuit, then we can immediately conclude a lower bound for that structural property. This indirect argument exploits the power of the algorithm and hence can lead to results which are harder to derive directly.
One inspiring example of this approach comes from using the statistical query algorithm for learning halfspaces \cite{BlumFKV:97}. The structural property it relies on is linear separability. Combined with the exponential lower bound for learning parities \cite{Kearns:98}, it immediately implies that there is no mapping from $\on^d$ to $\R^N$ which makes parity functions linearly separable for any $N\leq N_0=2^{\Omega(d)}$. Subsequently, and apparently unaware of this technique, \citenames{Forster}{Forster:02} proved a $2^{\Omega(d)}$ lower bound on the sign-rank (also known as the dimension complexity) of the Hadamard matrix which is exactly the same result (in \cite{Sherstov:08} the connection between these two results is stated explicitly). His proof relies on a sophisticated and non-algorithmic technique and is considered a major breakthrough in proving lower bounds on the sign-rank of explicit matrices.
Convex optimization algorithms rely on existence of convex relaxations for problem instances that (approximately) preserve the value of the solution. Therefore given a SQ lower bound for a problem, our algorithmic results can be directly translated into lower bounds for convex relaxations of the problem.
We now focus on a concrete example that is easily implied by our algorithm and a lower bound for planted constraint satisfaction problems from \cite{FeldmanPV:13}. Consider the task of distinguishing a random satisfiable $k$-SAT formula over $n$ variables of length $m$ from a randomly and uniformly drawn $k$-SAT formula of length $m$. This is the refutation problem studied extensively over the past few decades (\eg \cite{feige2002relations}). Now, consider the following common approach to the problem: define a convex domain $\K$ and map every $k$-clause $C$ (an OR of $k$ distinct variables or their negations) to a convex function $f_C$ over $\K$ scaled to the range $[-1,1]$. Then, given a formula $\phi$ consisting of clauses $C_1,\ldots,C_m$, find $x$ that minimizes $F_\phi(x) = \frac{1}{m}\sum_i f_{C_i}(x)$, which roughly measures the fraction of unsatisfied clauses (if the $f_C$'s are linear then one can instead maximize $F_\phi(x)$, in which case one can think of the problem as satisfying the largest fraction of clauses). The goal of such a relaxation is to ensure that for every satisfiable $\phi$ we have that $\min_{x\in\K} F_\phi(x) \leq \alpha$ for some fixed $\alpha$. At the same time for a randomly chosen $\phi$, we want to have with high probability $\min_{x\in\K} F_\phi(x) \geq \alpha+ \eps$. Ideally one would hope to get $\eps \approx 2^{-k}$ since for sufficiently large $m$, every Boolean assignment leaves at least $\approx 2^{-k}$ fraction of the constraints unsatisfied. But the relaxation can reduce the difference to a smaller value.
We now plug in our algorithm for $\ell_p/\ell_q$ setting to get the following broad class of corollaries.
\begin{cor}
\label{cor:lower-convex-program-norm}
For $p\in \{1,2\}$, let ${\cal K}\subseteq \B_p^d$ be a convex body and $\F_p = \left\{f(\cdot) \cond \forall x \in \K, \|\nabla f(x)\|_q \leq 1\right\}$. Assume that there exists a mapping that maps each $k$-clause $C$ to a convex function $f_C \in \F_p$. Further assume that for some $\eps > 0$:
If $\phi= C_1,\ldots,C_m$ is satisfiable then $$\min_{x\in \K}\left\{\fr{m} \sum_i f_{C_i}(x) \right\} \leq 0.$$
Yet for the uniform distribution $U_k$ over all the $k$-clauses:
$$\min_{x\in {\cal K}} \left\{\E_{C\sim U_k} \lb f_{C}(x) \rb \right\} > \eps.$$
Then $d = 2^{\tilde{\Omega}(n \cdot \eps^{2/k})}$.
\end{cor}
Note that the second condition is equivalent to applying the relaxation to the formula that includes all the $k$-clauses. Also for every $m$, it is implied by the condition
$$\E_{C_1,\ldots,C_m\sim U_k}\left[\min_{x\in {\cal K}} \left\{\fr{m} \sum_i f_{C_i}(x) \right\}\right] > \eps .$$
As long as $k$ is a constant and $\eps =\Omega_k(1)$ we get a lower bound of $2^{\Omega(n)}$ on the dimension of any convex relaxation (where the radius and the Lipschitz constant are at most 1). We are not aware of any existing techniques that imply comparable lower bounds. More importantly, our results imply that Corollary \ref{cor:lower-convex-program-norm} extends to a very broad class of general state-of-the-art approaches to stochastic convex optimization.
Current research focuses on the linear case and restricted $\K$'s which are obtained through various hierarchies of LP/SDP relaxations or extended formulations (\eg \cite{Schoenebeck:08}). The primary difference between the relaxations used in this line of work and our approach is that our approach only rules out relaxations for which the resulting stochastic convex program can be solved by a statistical algorithm. On the other hand, stochastic convex programs that arise from LP/SDP hierarchies and extended formulations cannot, in general, be solved given the available number of samples (each constraint is a sample). As a result, the use of such relaxations can lead to overfitting and this is the reason why these relaxations fail.
This difference makes our lower bounds incomparable and, in a way, complementary to existing work on lower bounds for specific hierarchies of convex relaxations. For a more detailed discussion of SQ lower bounds, we refer the reader to \cite{FeldmanPV:13}.
\subsubsection{Online Learning of Halfspaces using SQs}
Our high-dimensional mean estimation algorithms allow us to revisit SQ implementations of online algorithms for learning halfspaces, such as the classic Perceptron and Winnow algorithms. These algorithms are based on updating the weight vector iteratively using incorrectly classified examples. The convergence analysis of such algorithms relies on some notion of margin by which positive examples can be separated from the negative ones.
A natural way to implement such an algorithm using SQs is to use the mean vector of all positive (or negative) counterexamples to update the weight vector. By linearity of expectation, the true mean vector is still a positive (or correspondingly, negative) counterexample and it still satisfies the same margin condition. This approach was used by \citenames{Bylander}{Bylander:94} and \citenames{Blum \etal}{BlumFKV:97} to obtain algorithms tolerant to random classification noise for learning halfspaces and by \citenames{Blum \etal}{BlumDMN:05} to obtain a private version of Perceptron. The analyses in these results use the simple coordinate-wise estimation of the mean and incur an additional factor $d$ in their sample complexity. It is easy to see that to approximately preserve the margin $\gamma$ it suffices to estimate the mean of some distribution over an $\ell_q$ ball with $\ell_q$ error of $\gamma/2$. We can therefore plug our mean estimation algorithms to eliminate the dependence on the dimension from these implementations (or in some cases have only logarithmic dependence). In particular, the estimation complexity of our algorithms is essentially the same as the sample complexity of PAC versions of these online algorithms. Note that such improvement is particularly important since Perceptron is usually used with a kernel (or in other high-dimensional space) and Winnow's main property is the logarithmic dependence of its sample complexity on the dimension.
We note that a variant of the Perceptron algorithm referred to as Margin Perceptron outputs a halfspace that approximately maximizes the margin \cite{BalcanB06}. This allows it to be used in place of the SVM algorithm. Our SQ implementation of this algorithm gives an SVM-like algorithm with estimation complexity of $O(1/\gamma^2)$, where $\gamma$ is the (normalized) margin. This is the same as the sample complexity of SVM (\cf \cite{Shalev-ShwartzBen-David:2014}). Further details of this application are given in Sec.~\ref{sec:halfspaces}.
\subsubsection{Differential Privacy}
In local or {\em randomized-response} differential privacy the users provide the analyst with differentially private versions of their data points. Any analysis performed on such data is differentially private so, in effect, the data analyst need not be trusted. Such algorithms have been studied and applied for privacy preservation since at least the work of \citenames{Warner}{Warner65} and have more recently been adopted in products by Google and Apple. While there exists a large and growing literature on mean estimation and convex optimization with (global) differential privacy (\eg \cite{ChaudhuriMS11,DworkRoth:14,BassilyST14}), these questions have been only recently and partially addressed for the more stringent local privacy.
Using simple estimation of statistical queries with local differential privacy by \citenames{Kasiviswanathan \etal}{KasiviswanathanLNRS11} we directly obtain a variety of corollaries for locally differentially private mean estimation and optimization. Some of them, including mean estimation for $\ell_2$ and $\ell_{\infty}$ norms and their implications for gradient and mirror descent algorithms are known via specialized arguments \cite{DuchiJW:13focs,DuchiJW14}. Our corollaries for mean estimation achieve the same bounds up to logarithmic in $d$ factors. We also obtain corollaries for more general mean estimation problems and results for optimization that, to the best of our knowledge, were not previously known.
An additional implication in the context of differentially private data analysis is to the problem of releasing answers to multiple queries over a single dataset. A long line of research has considered this question for {\em linear} or {\em counting} queries which for a dataset $S \subseteq \W^n$ and function $\phi:\W\rar [0,1]$ output an estimate of $\frac{1}{n}\sum_{w \in S} \phi(w)$ (see \cite{DworkRoth:14} for an overview). In particular, it is known that an exponential in $n$ number of such queries can be answered differentially privately even when the queries are chosen adaptively \cite{RothR10,HardtR10} (albeit the running time is linear in $|\W|$). Recently, \citenames{Ullman}{Ullman15} has considered the question of answering {\em convex minimization} queries which ask for an approximate minimum of a convex program taking a data point as an input averaged over the dataset. For several convex minimization problems he gives algorithms that can answer an exponential number of convex minimization queries. It is easy to see that the problem considered by \citenames{Ullman}{Ullman15} is a special case of our problem by taking the input distribution to be uniform over the points in $S$. A statistical query for this distribution is equivalent to a counting query and hence our algorithms effectively reduce answering of convex minimization queries to answering of counting queries. As a corollary we strengthen and substantially generalize the results in \cite{Ullman15}.
Details of these applications appear in Sections \ref{sec:app-dp-local} and \ref{sec:app-dp-queries}.
\subsection{Related work}
There is a long history of research on the complexity of convex optimization with access to some type of oracle (\eg \cite{nemirovsky1983problem,Braun:2014,Guzman:2015}) with a lot of renewed interest due to applications in machine learning (\eg \cite{Raginsky:2011,Agarwal:2012}). In particular, a number of works study robustness of optimization methods to errors by considering oracles that provide approximate information about $F$ and its (sub-)gradients \cite{dAspremont:2008,Devolder:2014}. Our approach to getting statistical query algorithms for stochastic convex optimization is based both on establishing bridges to that literature and on improving the state of the art for such oracles in the non-Lipschitz case.
A common way to model stochastic optimization is via a stochastic oracle for the objective function \cite{nemirovsky1983problem}. Such an oracle is assumed to return a random variable whose expectation is equal to the exact value of the function and/or its gradient (most commonly the random variable is Gaussian or has bounded variance). Analyses of such algorithms (most notably Stochastic Gradient Descent (SGD)) are rather different from ours although in both cases linearity and robustness properties of first-order methods are exploited. In most settings we consider, the estimation complexity of our SQ algorithms is comparable to the sample complexity of solving the same problem using an appropriate version of SGD (which is, in turn, often known to be optimal). On the other hand, lower bounds for stochastic oracles (\eg \cite{Agarwal:2012}) have a very different nature and it is impossible to obtain superpolynomial lower bounds on the number of oracle calls (such as those we prove in Section \ref{sec:lower-linear}).
SQ access is known to be equivalent (up to polynomial factors) to the setting in which the amount of information extracted from (or communicated about) each sample is limited \cite{Ben-DavidD98,FeldmanGRVX:12,FeldmanPV:13}.
In a recent (and independent) work \citenames{Steinhardt \etal}{SteinhardtVW:2016} have established a number of additional relationships between learning with SQs and learning with several types of restrictions on memory and communication. Among other results, they proved an unexpected upper bound on memory-bounded sparse least-squares regression by giving an SQ algorithm for the problem. Their analysis\footnote{The analysis and bounds they give are inaccurate but a similar conclusion follows from the bounds we give in Cor.~\ref{cor:solve_cvx_ellp}.} is related to the one we give for inexact mirror-descent over the $\ell_1$-ball. Note that in optimization over the $\ell_1$ ball, the straightforward coordinate-wise $\ell_\infty$ estimation of gradients suffices. Together with their framework our results can be easily used to derive low-memory algorithms for other learning problems.
\section{Stochastic Linear Optimization and Vector Mean Estimation}
\label{sec:linear}
We start by considering stochastic linear optimization, that is instances of the problem
$$\min_{x\in \K}\{ \E_{{\mathbf w}}[f(x,{\mathbf w})]\} $$
in which $f(x,w) = \la x,w\ra$. From now on we will use the notation
$\bar w \doteq \E_{{\mathbf w}}[{\mathbf w}]$.
For normalization purposes we will assume that the random variable ${\mathbf w}$ is supported on $\W = \{ w \cond \forall x \in \K,\ |\la x,w\ra| \leq 1\}$. Note that $\W = {\mbox{conv}}(\K_*,-\K_*)$ and if $\K$ is origin-symmetric then $\W = \K_*$. More generally, if ${\mathbf w}$ is supported on $\W$ and $B \doteq \sup_{x \in \K,\ w \in \W}\{ |\la x,w\ra|\}$ then optimization with error $\varepsilon$ can be reduced to optimization with error $\varepsilon/B$ over the normalized setting by scaling.
We first observe that for an origin-symmetric $\K$, stochastic linear optimization with error $\varepsilon$ can be solved by estimating the mean vector $\E[{\mathbf w}]$ with error $\varepsilon/2$ measured in $\K_*$-norm and then optimizing a deterministic objective.
\begin{observation} \label{obs:lin_opt_mean_est}
Let $\W$ be an origin-symmetric convex body and $\K \subseteq \W_*$. Let $\min_{x\in \K}\{F(x) \doteq \E[ \la x,{\mathbf w}\ra]\}$ be an instance of stochastic linear optimization for ${\mathbf w}$ supported on $ \W$. Let $\tilde{w}$ be a vector such that $\|\tilde{w} - \bar{w}\|_{\W} \leq \varepsilon/2$. Let $\tilde{x}\in \K$ be such that $\langle \tilde x, \tilde w\rangle \leq \min _{x\in \K} \la x,\tilde{w} \ra +\xi$. Then for all $x \in \K$, $F(\tilde{x}) \leq F(x) +\varepsilon + \xi$.
\end{observation}
\begin{proof}
Note that $F(x) = \la x, \bar{w} \ra$ and let $\bar{x} = \argmin _{x\in \K} \la x, \bar{w} \ra$. The condition $\|\tilde{w} - \bar{w}\|_{\W} \leq \varepsilon/2$ implies that for every $x \in \W_*$, $|\la x,\tilde{w}- \bar{w} \ra | \leq \varepsilon/2$.
Therefore, for every $x \in \K$,
$$F(\tilde{x}) = \la \tilde{x},\bar{w} \ra \leq \la \tilde{x},\tilde{w} \ra +\varepsilon/2 \leq \la \bar{x},\tilde{w} \ra +\varepsilon/2 +\xi \leq \la \bar{x},\bar{w} \ra +\varepsilon +\xi \leq \la x,\bar{w} \ra +\varepsilon +\xi = F(x) +\varepsilon+\xi .$$
\end{proof}
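To make the reduction concrete, the following minimal \texttt{numpy} sketch (our own illustration, not code from the literature; the SQ oracle is simulated by empirical means over a synthetic distribution) instantiates Observation \ref{obs:lin_opt_mean_est} for $\K = \B_2^d$, where the deterministic step $\min_{x \in \K}\la x, \tilde w\ra$ has the closed-form solution $\tilde x = -\tilde w/\|\tilde w\|_2$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
d = 50
w_bar = rng.normal(size=d)
w_bar /= 2 * np.linalg.norm(w_bar)       # true mean, inside B_2^d

# Synthetic distribution concentrated near B_2^d with mean w_bar.
samples = w_bar + 0.05 * rng.normal(size=(100000, d))

# Simulated SQ answers: empirical means stand in for STAT queries.
w_tilde = samples.mean(axis=0)

# Deterministic step: minimize <x, w_tilde> over the unit l2 ball.
x_tilde = -w_tilde / np.linalg.norm(w_tilde)

print(np.dot(x_tilde, w_bar), -np.linalg.norm(w_bar))  # near-optimal
\end{verbatim}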
The mean estimation problem over $\W$ in norm $\| \cdot \|$ is the problem in which, given an error parameter $\varepsilon$ and access to a distribution $D$ supported over $\W$, the goal is to find a vector $\tilde{w}$ such that $\|\E_{{\mathbf w} \sim D}[{\mathbf w}] - \tilde{w} \| \leq \varepsilon$. We will be concerned primarily with the case when $\W$ is the unit ball of $\| \cdot \|$ in which case we refer to it as $\| \cdot \|$ mean estimation or mean estimation over $\W$.
We also make a simple observation that if a norm $\| \cdot \|_A$ can be embedded via a linear map into a norm $\| \cdot \|_B$ (possibly with some distortion) then we can reduce mean estimation in $\| \cdot \|_A$ to mean estimation in $\| \cdot \|_B$.
\begin{lemma}
\label{lem:norm-embed}
Let $\| \cdot \|_{A}$ be a norm over $\R^{d_1}$ and $\| \cdot \|_{B}$ be a norm over $\R^{d_2}$ that for some linear map $T:\R^{d_1} \rightarrow \R^{d_2}$ satisfy: $\forall w \in \R^{d_1}$, $a \cdot \|Tw\|_{B} \leq \|w\|_{A} \leq b \cdot \|Tw\|_{B}$. Then mean estimation in $\| \cdot \|_{A}$ with error $\varepsilon$ reduces to mean estimation in $\| \cdot \|_{B}$ with error $\frac{a}{2b}\varepsilon$ (or error $\frac{a}{b}\varepsilon$ when $d_1 =d_2$).
\begin{proof}
Suppose there exists a statistical algorithm $\A$ that for any input distribution supported on $\B_{\|\cdot\|_B}$ computes $\tilde z\in\R^{d_2}$ satisfying $\|\tilde z-\E_{{\mathbf z}}[{\mathbf z}]\|_B \leq \frac{a}{2b}\varepsilon$.
Let $D$ be the target distribution on $\R^{d_1}$, which is supported on $\B_{\|\cdot\|_A}$.
We use $\A$ on the image of $D$ by $T$, multiplied by $a$. That is, we replace each query $\phi:\R^{d_2} \rightarrow \R$ of $\A$ with query $\phi'(w) = \phi(a\cdot Tw)$.
Notice that by our assumption, $\|a\cdot Tw\|_B \leq \|w\|_{A} \leq 1$.
Let $\tilde y$ be the output of $\A$ divided by $a$. By linearity, we have that $\|\tilde y-T\bar w\|_B\leq \frac{1}{2b}\varepsilon$.
Let $\tilde w$ be any vector such that $\|\tilde y - T \tilde w\|_{B}\leq \frac{1}{2b}\varepsilon$.
Then,
$$\|\tilde w-\bar w\|_A\leq b\|T\tilde w-T\bar w\|_B\leq b\|\tilde y-T\tilde w\|_B+
b \|\tilde y-T\bar w\|_B \leq \varepsilon. $$
Note that if $d_1 = d_2$ then $T$ is invertible and we can use $\tilde w = T^{-1}\tilde y$.
\end{proof}
\end{lemma}
\begin{remark}
The reduction of Lemma \ref{lem:norm-embed} is computationally efficient when the following two tasks can
be performed efficiently: computing $Tw$ for any input $w$, and given $z\in\R^{d_2}$ such that there exists
$ w'\in\R^{d_1}$ with $\|z-Tw'\|_B\leq \delta$, computing $w$ such that
$\|z-Tw\|_B\leq \delta+\xi$, for some precision $\xi = O(\delta)$.
\end{remark}
An immediate implication of this is that if the Banach-Mazur distance between unit balls of two norms $\W_1$ and $\W_2$ is $r$ then mean estimation over $\W_1$ with error $\varepsilon$ can be reduced to mean estimation over $\W_2$ with error $\varepsilon/r$.
\subsection{$\ell_q$ Mean Estimation} \label{Subsec:L_q}
We now consider stochastic linear optimization over $\B_p^d$ and the corresponding $\ell_q$ mean estimation problem.
We first observe that for $q = \infty$ the problem can be solved by directly using coordinate-wise statistical queries with tolerance $\varepsilon$. This is true since each coordinate has range $[-1,1]$ and for an estimate $\tilde{w}$ obtained in this way we have $\|\tilde w -\bar w\|_\infty = \max_{i \in [d]} |\tilde{w}_i - \E[{\mathbf w}_i]| \leq \varepsilon$.
\begin{theorem}
\label{thm:L-infty}
$\ell_{\infty}$ mean estimation problem with error $\varepsilon$ can be efficiently solved using $d$ queries to ${\mbox{STAT}}(\varepsilon)$.
\end{theorem}
A simple application of Theorem \ref{thm:L-infty} is to obtain an algorithm for $\ell_1$ mean estimation. Assume that $d$ is a power of two and let $H$ be the orthonormal Hadamard transform matrix (if $d$ is not a power of two we can first pad the input distribution to $\R^{d'}$, where $d' = 2^{\lceil\log d \rceil} \leq 2d$). Then it is easy to verify that for every $w \in \R^d$, $\|Hw\|_\infty \leq \|w\|_1 \leq \sqrt{d} \|Hw\|_\infty$. By Lemma \ref{lem:norm-embed} this directly implies the following algorithm:
\begin{theorem}
\label{thm:L1}
$\ell_1$ mean estimation problem with error $\varepsilon$ can be efficiently solved using $2d$ queries to ${\mbox{STAT}}(\varepsilon/\sqrt{2d})$.
\end{theorem}
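As a concrete instance of the reduction in Lemma \ref{lem:norm-embed}, the following \texttt{numpy} sketch (our own illustration; STAT answers are simulated by empirical means) estimates the mean of a distribution on $\B_1^d$ by averaging the rotated coordinates $(Hw)_i$, each of which has range $[-1,1]$, and then inverting the orthonormal map.
\begin{verbatim}
import numpy as np
from scipy.linalg import hadamard

rng = np.random.default_rng(1)
d = 64                              # a power of two, for simplicity
H = hadamard(d) / np.sqrt(d)        # orthonormal: H @ H.T = I

# A distribution on B_1^d: each sample is a signed basis vector b*e_i
# with i uniform and E[b | i] = d * w_bar_i, so the mean equals w_bar.
w_bar = rng.uniform(-1, 1, size=d) / d
n = 200000
idx = rng.integers(d, size=n)
b = np.where(rng.random(n) < (1 + d * w_bar[idx]) / 2, 1.0, -1.0)
samples = np.zeros((n, d))
samples[np.arange(n), idx] = b

# Coordinate-wise STAT queries in the rotated basis; each query
# (Hw)_i has range [-1,1] since ||Hw||_inf <= ||w||_1 <= 1.
z_tilde = (samples @ H.T).mean(axis=0)   # estimates H @ w_bar
w_tilde = H.T @ z_tilde                  # invert: H^{-1} = H.T

print(np.linalg.norm(w_tilde - w_bar, 1))   # small l1 error
\end{verbatim}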
We next deal with an important case of $\ell_2$ mean estimation. It is not hard to see that using statistical queries for direct coordinate-wise estimation will require estimation complexity of $\Omega(d/\varepsilon^2)$.
We describe two algorithms for this problem with (nearly) optimal estimation complexity. The first one relies on so-called Kashin's representations introduced by \citenames{Lyubarskii and Vershynin}{Lyubarskii:2010}.
The second is a simpler but slightly less efficient method based on truncated coordinate-wise estimation in a randomly rotated basis.
\subsubsection{$\ell_2$ Mean Estimation via Kashin's representation}
A Kashin's representation is a representation of a vector in an overcomplete linear system such that the magnitude of each coefficient is small (more precisely, within a constant of the optimum) \cite{Lyubarskii:2010}. Such representations, also referred to as ``democratic'', have a variety of applications including vector quantization and peak-to-average power ratio reduction in communication systems (\cf \cite{StuderGYB14}). We show that the existence of such representations leads directly to SQ algorithms for $\ell_2$ mean estimation.
We start with some requisite definitions.
\begin{definition}
A sequence $(u_j)_{j=1}^N\subseteq \R^d$ is a {\em tight
frame}\footnote{In \cite{Lyubarskii:2010} complex vector spaces are considered but
the results also hold in the real case.}
if for all $w\in\R^d$,
$$\|w\|_2^2=\sum_{j=1}^{N}|\langle w,u_j\rangle|^2.$$
The redundancy of a frame is defined as $\lambda\doteq N/d\geq 1$.
\end{definition}
An easy to prove property of a tight frame (see Obs.~2.1 in \cite{Lyubarskii:2010}) is that for every coefficient vector $(a_j)_{j=1}^N$ it holds that $\left\| \sum_{j=1}^{N} a_j u_j \right\|_2^2 \leq \sum_{j=1}^{N} a_j^2$.
\begin{definition}
Consider a sequence $(u_j)_{j=1}^N\subseteq \R^d$ and $w \in \R^d$. An expansion
$w = \sum_{i=1}^N a_i u_i$ such that $\|a\|_\infty \leq \frac{K}{\sqrt{N}}\|w\|_2$ is referred to as a Kashin's representation of $w$ with level $K$.
\end{definition}
\begin{theorem}[\cite{Lyubarskii:2010}]
\label{thm:kashin-frame}
For all $\lambda=N/d>1$ there exists a tight frame $(u_j)_{j=1}^N\subseteq \R^d$ in which every $w \in \R^d$ has a Kashin's representation with level $K$ for some constant $K$ depending only on $\lambda$.
Moreover, such a frame can be computed
in (randomized) polynomial time.
\end{theorem}
The existence of such frames follows from Kashin's theorem \cite{Kashin:1977}. \citenames{Lyubarskii and Vershynin}{Lyubarskii:2010} show that any frame that satisfies a certain uncertainty principle (which itself is implied by the well-studied Restricted Isometry Property) yields a Kashin's representation for all $w \in \R^d$. In particular, various random choices of $u_j$'s have this property with high probability. Given a vector $w$, a Kashin's representation of $w$ for level $K$ can be computed efficiently (whenever it exists) by solving a convex program. For frames that satisfy the above mentioned uncertainty principle a Kashin's representation can also be found using a simple algorithm that involves $\log(N)$ multiplications of a vector by each of $u_j$'s. Other algorithms for the task are discussed in \cite{StuderGYB14}.
\begin{theorem}
\label{thm-l2-kashin}
For every $d$ there is an efficient algorithm that solves the $\ell_2$ mean estimation problem (over $\B_2^d$) with error $\varepsilon$ using $2d$ queries to ${\mbox{STAT}}(\Omega(\varepsilon))$.
\end{theorem}
\begin{proof}
For $N=2d$ let $(u_j)_{j=1}^N\subseteq \R^d$ be a frame in which every $w \in \R^d$ has a Kashin's representation with level $K=O(1)$ (as implied by Theorem \ref{thm:kashin-frame}). For a vector $w\in\R^d$ let $a(w) \in \R^N$ denote the coefficient vector of some specific Kashin's representation of $w$ (\eg that computed by the algorithm in \cite{Lyubarskii:2010}). Let ${\mathbf w}$ be a random variable supported on $\B_2^d$ and let $\bar{a}_j \doteq \E[a({\mathbf w})_j]$. By linearity of expectation, $\bar{w} = \E[{\mathbf w}] = \sum _{j=1}^N \bar{a}_j u_j$.
For each $j\in[N]$, let $\phi_j(w) \doteq \frac{\sqrt{N}}{K} \cdot a(w)_j$. Let $\tilde a_j$ denote the answer of ${\mbox{STAT}}(\varepsilon/K)$ to query $\phi_j$ multiplied by $\frac{K}{\sqrt{N}}$. By the definition of Kashin's representation with level $K$, the range of $\phi_j$ is $[-1,1]$ and, by the definition of ${\mbox{STAT}}(\varepsilon/K)$, we have that $|\bar{a}_j - \tilde{a}_j| \leq \frac{\varepsilon}{\sqrt{N}}$ for every $j \in [N]$. Let $\tilde{w} \doteq \sum_{j=1}^N \tilde{a}_j u_j$.
Then by the property of tight frames mentioned above, $$\| \bar{w} - \tilde{w}\|_2 = \left\| \sum_{j=1}^N (\bar{a}_j - \tilde{a}_j) u_j \right \|_2 \leq \sqrt{\sum_{j=1}^N ( \bar{a}_j - \tilde{a}_j)^2} \leq \varepsilon .$$
\end{proof}
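The following \texttt{numpy} sketch (our own illustration) checks the two facts about tight frames used in this proof, for the concrete redundancy-$2$ tight frame formed by two random orthonormal bases scaled by $1/\sqrt{2}$. For simplicity it uses the analysis coefficients $a = (\la w, u_j\ra)_j$ rather than a true Kashin's representation, which would additionally guarantee $\|a\|_\infty \le K/\sqrt{N}$ and hence bound the range of the SQ queries.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
d, N = 32, 64
# Redundancy-2 tight frame: two orthonormal bases scaled by 1/sqrt(2),
# so that sum_j <w,u_j>^2 = ||w||_2^2 for all w.
Q1 = np.linalg.qr(rng.normal(size=(d, d)))[0]
Q2 = np.linalg.qr(rng.normal(size=(d, d)))[0]
U = np.vstack([Q1, Q2]) / np.sqrt(2)     # rows are u_1, ..., u_N

w = rng.normal(size=d)
w /= np.linalg.norm(w)
assert np.isclose(np.sum((U @ w) ** 2), 1.0)   # tight-frame identity

a = U @ w                       # coefficients with w = U.T @ a
eps = 0.1
# Perturbing each coefficient by at most eps/sqrt(N), as the STAT
# answers do in the proof, moves the synthesized vector by <= eps:
a_tilde = a + rng.uniform(-eps, eps, size=N) / np.sqrt(N)
w_tilde = U.T @ a_tilde
print(np.linalg.norm(w_tilde - w))   # at most eps
\end{verbatim}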
\subsubsection{$\ell_2$ Mean Estimation using a Random Basis}
We now show a simple-to-analyze randomized algorithm that achieves dimension-independent estimation complexity for $\ell_2$ mean estimation. The algorithm uses coordinate-wise estimation in a randomly and uniformly chosen basis. We show that for such a basis simply truncating coefficients that are too large will, with high probability, have only a small effect on the estimation error.
More formally, we define the truncation operation as follows. For a real value $z$ and $a \in \R^+$, let
\[
m_a(z):=\left\{
\begin{array}{rl}
z & \mbox{if } |z| \leq a\\
a & \mbox{if } z> a\\
-a & \mbox{if } z<-a.
\end{array}
\right.
\]
For a vector $w\in \R^d$ we define $m_a(w)$ as the coordinate-wise application of $m_a$ to $w$.
For a $d\times d$ matrix $U$ we define $m_{U,a}(w) \doteq U^{-1}m_a(Uw)$ and define $r_{U,a}(w) \doteq w - m_{U,a}(w)$. The key step of the analysis is the following lemma:
\begin{lemma}
\label{lem:error-distr}
Let ${\mathbf U}$ be an orthogonal matrix chosen uniformly at random and $a > 0$.
For every $w$, with $\|w\|_2=1$, $\E[\|r_{{\mathbf U},a}(w)\|_2^2] \leq 4 e^{-da^2/2}$.
\end{lemma}
\begin{proof}
Notice that $\|r_{{\mathbf U},a}(w) \|_2 = \| {\mathbf U} w - m_a({\mathbf U} w)\|_2$.
It is therefore sufficient to analyze $\| {\mathbf u} - m_a({\mathbf u})\|_2$ for ${\mathbf u}$ a random uniform vector of length 1. Let ${\mathbf r} \doteq {\mathbf u} - m_a({\mathbf u})$. For each $i$,
\begin{eqnarray*}
\E[{\mathbf r}_i^2] &=& \int_0^{\infty} 2t\, \pr[|{\mathbf r}_i|> t] \, dt
= \int_0^{\infty} 2t\,\{\pr[{\mathbf r}_i>t]+\pr[{\mathbf r}_i<-t] \}\,dt \\
&=& \int_0^{\infty} 4t\, \pr[{\mathbf r}_i>t] \, dt
= \int_0^{\infty} 4t\,\pr[{\mathbf u}_i-a>t]\, dt\\
&=& 4\left\{\int_0^{\infty}(t+a)\pr[{\mathbf u}_i>t+a]\,dt-a\int_0^{\infty}\pr[{\mathbf u}_i>t+a]\,dt \right\}\\
&\leq& 4\dfrac{e^{-da^2/2}}{d},
\end{eqnarray*}
where we have used the symmetry of ${\mathbf r}_i$ and concentration on the unit sphere. From this we obtain $\E[\|{\mathbf r}\|_2^2]\leq 4e^{-da^2/2}$, as claimed.
\end{proof}
From this lemma it is easy to obtain the following algorithm.
\begin{theorem} \label{thm:ell_2_estimation}
There is an efficient randomized algorithm that solves the $\ell_2$ mean estimation problem with error $\varepsilon$ and success probability $1-\delta$ using $O(d \log(1/\delta))$ queries to ${\mbox{STAT}}(\Omega(\varepsilon/\log(1/\varepsilon)))$.
\end{theorem}
\begin{proof}
Let ${\mathbf w}$ be a random variable supported on $\B_2^d$. For an orthonormal $d\times d$ matrix $U$, and for $i\in[d]$, let $\phi_{U,i}(w)=(m_a(U w))_i/a$ (for some $a$ to be fixed later). Let $v_i$ be the output of ${\mbox{STAT}}(\varepsilon/[2\sqrt{d}\,a])$ for query $\phi_{U,i}:\W\to[-1,1]$, multiplied by $a$. Now, let $\tilde w_{U,a} \doteq U^{-1} v$, and let $\bar w_{U,a}\doteq \E[m_{U,a}({\mathbf w})]$. This way,
\begin{eqnarray*}
\|\bar w-\tilde w_{U,a}\|_2 &\leq& \|\bar w-\bar w_{U,a}\|_2 + \|\bar w_{U,a}-\tilde w_{U,a}\|_2 \\
&\leq& \|\bar w-\bar w_{U,a}\|_2 + \|\E[m_a(U {\mathbf w})]-v\|_2 \\
&\leq& \|\bar w-\bar w_{U,a}\|_2 +\varepsilon/2.
\end{eqnarray*}
Let us now bound the norm of ${\mathbf v} \doteq \bar{w} -\bar{w}_{{\mathbf U},a}$ where ${\mathbf U}$ is a randomly and uniformly chosen orthonormal $d\times d$ matrix. By Chebyshev's inequality:
$$ \pr[\|{\mathbf v}\|_2 \geq \varepsilon/2] \leq 4\dfrac{\E[\|{\mathbf v}\|_2^2]}{\varepsilon^2} \leq \dfrac{16\exp(-da^2/2)}{\varepsilon^2}. $$
Notice that to bound the probability above by $\delta$ we may choose
$a=\sqrt{2\ln(16/(\delta\varepsilon^2))/d}$. Therefore, the queries above require querying
${\mbox{STAT}}(\varepsilon/[2\sqrt{2\ln(16/(\delta\varepsilon^2))}])$, and they guarantee to
solve the $\ell_2$ mean estimation problem with probability at least $1-\delta$.
Finally, we can remove the dependence on $\delta$ in ${\mbox{STAT}}$ queries by confidence
boosting. Let $\varepsilon^{\prime}=\varepsilon/3$ and
$\delta^{\prime}=1/8$, and
run the algorithm above with error $\varepsilon^{\prime}$ and success probability
$1-\delta^{\prime}$ for ${\mathbf U}_1,\ldots, {\mathbf U}_k$ i.i.d. random orthogonal matrices.
Letting $\tilde w^1,\ldots,\tilde w^k$ denote the outputs of the algorithm, we can compute
the (high-dimensional) median $\tilde w$, namely the point $\tilde w^j$ whose median $\ell_2$ distance to all the other points is the smallest.
It is easy to see that (\eg \cite{nemirovsky1983problem,HsuSabato:2013arxiv})
$$ \pr[\|\tilde w-\bar w\|_2 > \varepsilon] \leq e^{-Ck},$$
where $C>0$ is an absolute constant.
Hence, as claimed, it suffices to choose $k=O(\log(1/\delta))$, which means using
$O(d\log(1/\delta))$ queries to ${\mbox{STAT}}(\Omega(\varepsilon/\log(1/\varepsilon)))$,
to obtain success probability $1-\delta$.
\end{proof}
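A minimal \texttt{numpy} sketch of a single run of this procedure is given below (our own illustration: it omits the median boosting step, and STAT answers are simulated by empirical means over a synthetic distribution on $\B_2^d$); \texttt{np.clip} implements the truncation $m_a$ and \texttt{scipy}'s \texttt{ortho\_group} supplies the random rotation ${\mathbf U}$.
\begin{verbatim}
import numpy as np
from scipy.stats import ortho_group

rng = np.random.default_rng(3)
d, n, eps, delta = 100, 100000, 0.1, 0.1

# Synthetic distribution on B_2^d with mean (approximately) w_bar.
w_bar = rng.normal(size=d)
w_bar /= 2 * np.linalg.norm(w_bar)
samples = w_bar + 0.3 * rng.normal(size=(n, d)) / np.sqrt(d)
norms = np.linalg.norm(samples, axis=1, keepdims=True)
samples /= np.maximum(1.0, norms)    # rarely-triggered rescaling

a = np.sqrt(2 * np.log(16 / (delta * eps ** 2)) / d)  # truncation level
U = ortho_group.rvs(d, random_state=3)                # random rotation

# Coordinate-wise STAT queries on the truncated, rotated coordinates
# m_a(Uw); empirical means stand in for the oracle answers.
v = np.clip(samples @ U.T, -a, a).mean(axis=0)
w_tilde = U.T @ v                                     # = U^{-1} v

print(np.linalg.norm(w_tilde - w_bar))                # well below eps
\end{verbatim}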
\subsubsection{$\ell_q$ Mean Estimation for $q > 2$}
We now demonstrate that by using the results for $\ell_{\infty}$ and $\ell_2$ mean estimation we can get algorithms for $\ell_q$ mean estimation with nearly optimal estimation complexity.
The idea of our approach is to decompose each point into a sum of at most $\log d$ points each of which has a small ``dynamic range'' of non-zero coordinates. This property ensures a very tight relationship between the $\ell_{\infty}$, $\ell_2$ and $\ell_q$ norms of these points, allowing us to estimate their mean with nearly optimal estimation complexity. More formally, we will rely on the following simple lemma.
\begin{lemma}
\label{lem:norm-invert}
For any $x \in \R^d$ and any $0<p<r$:
\begin{enumerate}
\item $\|x\|_r \leq \|x\|_\infty^{1-p/r} \cdot \|x\|_p^{p/r}$;
\item Let $a = \min_{i \in [d]}\{|x_i| \cond x_i \neq 0\}$. Then $\|x\|_p \leq a^{1-r/p} \cdot \|x\|_r^{r/p}$.
\end{enumerate}
\end{lemma}
\begin{proof}
\begin{enumerate}
\item $$\|x\|_r^r = \sum_{i=1}^d |x_i|^r \leq \sum_{i=1}^d \|x\|_\infty^{r-p} \cdot |x_i|^p = \|x\|_\infty^{r-p} \cdot \|x\|_p^p$$
\item $$\|x\|_r^r = \sum_{i=1}^d |x_i|^r \geq \sum_{i=1}^d a^{r-p} \cdot |x_i|^p = a^{r-p} \cdot \|x\|_p^p .$$
\end{enumerate}
\end{proof}
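Both inequalities are easy to sanity-check numerically; a small \texttt{numpy} check (our own illustration) follows.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(8)
x = rng.normal(size=50)
p, r = 1.5, 4.0
a = np.min(np.abs(x)[np.abs(x) > 0])   # smallest nonzero |x_i|

# Part 1: ||x||_r <= ||x||_inf^(1-p/r) * ||x||_p^(p/r)
assert np.linalg.norm(x, r) <= (np.linalg.norm(x, np.inf) ** (1 - p / r)
                                * np.linalg.norm(x, p) ** (p / r)) + 1e-9
# Part 2: ||x||_p <= a^(1-r/p) * ||x||_r^(r/p)
assert np.linalg.norm(x, p) <= (a ** (1 - r / p)
                                * np.linalg.norm(x, r) ** (r / p)) + 1e-9
\end{verbatim}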
\begin{theorem}
For any $q \in (2,\infty)$ and $\varepsilon > 0$, $\ell_q$ mean estimation with error $\varepsilon$ can be solved using $3d \log d$ queries to
${\mbox{STAT}}(\Omega(\varepsilon/\log d))$.
\end{theorem}
\begin{proof}
Let $k\doteq \lfloor \log(d)/q \rfloor-2$. For $w\in\R^d$, and $j=0,\ldots,k$ we define
$$R_j(w)\doteq \sum_{i=1}^d e_i w_i \ind{2^{-(j+1)}<|w_i|\leq 2^{-j}}, $$
and $R_{\infty}(w)\doteq \sum_{i=1}^d e_i w_i \ind{|w_i|\leq 2^{-(k+1)}}.$
It is easy to see that if $w\in\B_q^d$ then $w= \sum_{j=0}^k R_j(w) + R_{\infty}(w)$. Furthermore,
observe that $\|R_j(w)\|_{\infty}\leq 2^{-j}$, and by Lemma \ref{lem:norm-invert},
$\|R_j(w)\|_2 \leq 2^{-(j+1)(1-q/2)}.$
Finally, let $\bar w^j=\E[R_j({\mathbf w})]$, and $\bar w^{\infty}=\E[R_{\infty}({\mathbf w})]$.
Let $\varepsilon^{\prime}\doteq 2^{2/q-3} \varepsilon/(k+1)$. For each level $j=0,\ldots, k$, we
perform the following queries:
\begin{itemize}
\item By using $2d$ queries to ${\mbox{STAT}}(\Omega(\varepsilon^{\prime}))$ we obtain a vector
$\tilde w^{2,j}$
such that $\|\tilde w^{2,j} - \bar w^j\|_{2}\leq 2^{(\frac{q}{2}-1)(j+1)}\varepsilon^{\prime}$.
For this, simply observe that $R_j({\mathbf w})/[2^{(\frac{q}{2}-1)(j+1)}]$ is supported on $\B_2^d$,
so our claim follows from Theorem \ref{thm-l2-kashin}.
\item By using $d$ queries to ${\mbox{STAT}}(\varepsilon^{\prime})$ we obtain a vector
$\tilde w^{\infty,j}$ such
that $\|\tilde w^{\infty,j} - \bar w^j\|_{\infty}\leq 2^{-j}\varepsilon^{\prime}$. For this, notice
that $R_j({\mathbf w})/[2^{-j}]$ is supported on $\B_{\infty}^d$ and appeal to Theorem
\ref{thm:L-infty}.
\end{itemize}
We consider the following feasibility problem, which is always solvable
(e.g., by $\bar w^j$)
$$ \|\tilde w^{\infty,j} - w\|_{\infty}\leq 2^{-j}\varepsilon^{\prime}, \quad
\|\tilde w^{2,j} - w\|_{2}\leq 2^{(\frac{q}{2}-1)(j+1)}\varepsilon^{\prime}.$$
Notice that this problem can be solved easily
(we can minimize $\ell_2$ distance to $\tilde w^{2,j}$ with the $\ell_{\infty}$ constraint
above, and this minimization problem can be solved
coordinate-wise), so let $\tilde w^j$ be a solution. By the
triangle inequality, $\tilde w^j$ satisfies
$ \|\tilde w^j - \bar w^j\|_{\infty}\leq 2^{-j}(2\varepsilon^{\prime})$, and
$\|\tilde w^j - \bar w^j\|_{2}\leq 2^{(\frac{q}{2}-1)(j+1)}(2\varepsilon^{\prime}).$
By Lemma \ref{lem:norm-invert},
$$\|\tilde w^j-\bar w^j\|_q \leq \|\tilde w^j-\bar w^j\|_2^{2/q} \cdot
\|\tilde w^j-\bar w^j\|_{\infty}^{1-2/q} \leq 2^{(1-2/q)(j+1)} \, 2^{-j(1-2/q)} (2\varepsilon^{\prime})
= \varepsilon/[2(k+1)].
$$
Next we estimate $\bar w^{\infty}$. Since
$2^{-(k+1)}=2^{-\lfloor\log(d)/q\rfloor+1}\leq 4d^{-1/q}$, by using $d$ queries to
${\mbox{STAT}}(\varepsilon/8)$ we can estimate each coordinate of $\bar w^{\infty}$ with
accuracy $\varepsilon/[2d^{1/q}]$ and
obtain $\tilde w^{\infty}$ satisfying
$\|\tilde w^{\infty}-\bar w^{\infty}\|_{q}
\leq d^{1/q}\|\tilde w^{\infty}-\bar w^{\infty}\|_{\infty}\leq \varepsilon/2$.
Let now $\tilde w=[\sum_{j=0}^k \tilde w^j]+\tilde w^{\infty}$. We have,
\begin{equation*}
\|\tilde w-\bar w\|_q \,\leq\, \sum_{j=0}^k \|\tilde w^j-\bar w^j\|_q +
\|\tilde w^{\infty}-\bar w^{\infty}\|_q
\,\leq\, (k+1)\frac{\varepsilon}{2(k+1)} + \frac{\varepsilon}{2}
\,=\, \varepsilon.
\end{equation*}
\end{proof}
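The ``ring'' decomposition at the heart of this proof is summarized by the following \texttt{numpy} sketch (our own illustration), which also checks the bound $\|R_j(w)\|_{\infty}\leq 2^{-j}$ used above; in the actual algorithm each ring is rescaled and passed to the estimators of Theorems \ref{thm-l2-kashin} and \ref{thm:L-infty}.
\begin{verbatim}
import numpy as np

def ring_decomposition(w, k):
    # R_j keeps the coordinates with 2^-(j+1) < |w_i| <= 2^-j;
    # the tail keeps those with |w_i| <= 2^-(k+1).
    rings = [w * ((np.abs(w) > 2.0 ** -(j + 1)) &
                  (np.abs(w) <= 2.0 ** -j)) for j in range(k + 1)]
    tail = w * (np.abs(w) <= 2.0 ** -(k + 1))
    return rings, tail

rng = np.random.default_rng(4)
d, q = 4096, 3
k = int(np.floor(np.log2(d) / q)) - 2
w = rng.normal(size=d)
w /= np.linalg.norm(w, q)                   # w on the sphere of B_q^d
rings, tail = ring_decomposition(w, k)
assert np.allclose(sum(rings) + tail, w)    # exact decomposition
for j, Rj in enumerate(rings):
    assert np.linalg.norm(Rj, np.inf) <= 2.0 ** -j
\end{verbatim}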
\subsubsection{$\ell_q$ Mean Estimation for $q \in (1,2)$}
Finally, we consider the case when $q \in (1,2)$.
Here we get the nearly optimal estimation complexity via two bounds.
The first bound follows from the simple fact that for all $w \in \R^d$, $\|w\|_2 \leq \|w\|_q \leq d^{1/q - 1/2} \|w\|_2$. Therefore we can reduce $\ell_q$ mean estimation with error $\varepsilon$ to $\ell_2$ mean estimation with error $\varepsilon/d^{1/q - 1/2}$ (this is a special case of Lemma \ref{lem:norm-embed} with the identity embedding). Using Theorem \ref{thm-l2-kashin} we then get the following theorem.
\begin{theorem}
\label{thm-lq-small-eps}
For $q \in (1,2)$ and every $d$ there is an efficient algorithm that solves $\ell_q$ mean estimation problem with error $\varepsilon$ using $2d$ queries to ${\mbox{STAT}}(\Omega(d^{1/2 - 1/q}\varepsilon))$.
\end{theorem}
It turns out that for large $\varepsilon$ a better estimation complexity can be achieved using a different algorithm. Achieving (nearly) optimal estimation complexity in this case requires the use of the ${\mbox{VSTAT}}$ oracle. (The estimation complexity for ${\mbox{STAT}}$ is quadratically worse. That still gives an improvement over Theorem \ref{thm-lq-small-eps} for some range of values of $\varepsilon$.) As in the case of $q > 2$, our algorithm decomposes each point into a sum of at most $\log d$ points each of which has a small ``dynamic range'' of non-zero coordinates. For each component we can then use coordinate-wise estimation with an additional zeroing of coordinates that are too small. Such zeroing ensures that the estimate does not accumulate large error from the coordinates where the mean of the component itself is close to 0.
\begin{theorem}
\label{thm:lq-large-eps}
For any $q \in (1,2)$ and $\varepsilon > 0$, the $\ell_q$ mean estimation problem
can be solved with error $\varepsilon$ using $2d \log d$ queries to
${\mbox{VSTAT}}((16\log(d)/\varepsilon)^p)$.
\end{theorem}
\begin{proof}
Given $w\in\B_q^d$ we consider its positive and negative parts: $w = w^+ -w^-$, where $w^+ \doteq \sum_{i=1}^d e_i w_i \ind{w_i \geq 0}$ and $w^- \doteq -\sum_{i=1}^d e_i w_i \ind{w_i < 0}$. We again rely on the decomposition of $w$ into ``rings'' of dynamic range 2, but now for its positive and negative parts. Namely, $w= \sum_{j=0}^{k} [R_j(w^+)-R_j(w^-)] + [R_\infty(w^+)-R_{\infty}(w^-)]$, where $k\doteq \lfloor \log(d)/q \rfloor-2$, $R_j(w) \doteq \sum_{i=1}^d e_i w_i \ind{2^{-(j+1)}<|w_i|\leq 2^{-j}}$ and $R_\infty(w) \doteq \sum_{i=1}^d e_i w_i \ind{|w_i|\leq 2^{-k-1}}$.
Let ${\mathbf w}$ be a random variable supported on $\B_q^d$.
Let $\varepsilon' \doteq \varepsilon/(2k+3)$. For each level $j=0,\ldots, k$, we now describe how to estimate $\overline{w^{+,j}} = \E[R_j({\mathbf w}^+)]$ with accuracy $\varepsilon'$. The estimation is essentially just coordinate-wise use of ${\mbox{VSTAT}}$ with zeroing of coordinates that are too small. Let $v'_i$ be the value returned by ${\mbox{VSTAT}}(n)$ for query $\phi_i(w)= 2^j \cdot (R_j(w^+))_i$, where $n = (\varepsilon'/8)^{-p}\leq (16\log(d)/\varepsilon)^p$. Note that $2^j \cdot (R_j(w^+))_i \in [0,1]$ for all $w$ and $j$. Further, let $v_i = v'_i \cdot \ind{|v'_i| \geq 2/n}$. We start by proving the following decomposition of the error of $v$.
\begin{lemma}
\label{lem:error-two-bounds}
Let $u \doteq 2^j \cdot \overline{w^{+,j}}$, and $z \doteq u - v$. Then $\|z\|_q^q \leq \|u^{<} \|_q^q + n^{-q/2}\cdot \|u^{>}\|_{q/2}^{q/2}$, where for all $i$, $u^{<}_i = u_i \cdot \ind{u_i < 4/n}$ and $u^{>}_i = u_i \cdot \ind{u_i \geq 1/n}$.
\end{lemma}
\begin{proof}
For every index $i \in [d]$ we consider two cases. The first case is when $v_i = 0$. By the definition of $v_i$, we know that $v'_i< 2/n$. This implies that $u_i = 2^j \E[(R_j({\mathbf w}^+))_i] < 4/n$. This is true since, otherwise (when $u_i \geq 4/n$), by the guarantees of ${\mbox{VSTAT}}(n)$, we would have $|v'_i - u_i| \leq \sqrt{\frac{u_i}{n}}$ and $v'_i \geq u_i - \sqrt{\frac{u_i}{n}} \geq 2/n$. Therefore in this case, $u_i = u^<_i$ and $z_i = u_i-v_i = u^<_i$.
In the second case $v_i\neq 0$. In this case we have that $v'_i \geq 2/n$. This implies that $u_i \geq 1/n$. This is true since, otherwise (when $u_i < 1/n$), by the guarantees of ${\mbox{VSTAT}}(n)$, we would have $|v'_i - u_i| \leq \sqrt{\frac{u_i}{n}}$ and $v'_i \leq u_i+ \frac{1}{n} < 2/n$. Therefore in this case, $u_i = u^>_i$ and $z_i = u_i-v'_i$. By the guarantees of ${\mbox{VSTAT}}(n)$, $|z_i| =|u^>_i-v'_i| \leq \max\left\{\frac{1}{n},\sqrt{\frac{u^>_i}{n}}\right\}=\sqrt{\frac{u^>_i}{n}}$.
The claim now follows since by combining these two cases we get $|z_i|^q \leq (u^<_i)^q + \left(\frac{u^>_i}{n}\right)^{q/2}$.
\end{proof}
We next observe that by Lemma \ref{lem:norm-invert}, for every $w \in \B_q^d$,
$$\|R_j(w^+)\|_1 \leq (2^{-j-1})^{1-q} \|R_j(w^+)\|_q^q \leq (2^{-j-1})^{1-q}.$$
This implies that \equ{\|u\|_1 = 2^j \cdot \left\|\overline{w^{+,j}}\right\|_1 = 2^j \cdot \left\|\E[R_j({\mathbf w}^+)]\right\|_1 \leq 2^j \cdot (2^{-j-1})^{1-q} = 2^{(j+1)q-1}. \label{eq:bound-u}}
Now by Lemma \ref{lem:norm-invert} and eq.\eqref{eq:bound-u}, we have
\equ{\|u^<\|_q^q \leq \left(\frac{4}{n}\right)^{q-1} \cdot \|u^<\|_1 = n^{1-q} \cdot 2^{(j+3)q-3}. \label{eq:bound-small-u}}
Also by Lemma \ref{lem:norm-invert} and eq.\eqref{eq:bound-u}, we have
\equ{\|u^{>}\|_{q/2}^{q/2} \leq \left(\frac{1}{n}\right)^{q/2-1} \cdot \|u^>\|_1 \leq n^{1-q/2} \cdot 2^{(j+1)q-1} \label{eq:bound-large-u}.}
Substituting eq.~\eqref{eq:bound-small-u} and eq.~\eqref{eq:bound-large-u} into Lemma \ref{lem:error-two-bounds} we get
$$\|z\|_q^q \leq \|u^{<} \|_q^q + n^{-q/2}\cdot \|u^{>}\|_{q/2}^{q/2} \leq n^{1-q} \cdot \left(2^{(j+3)q-3} + 2^{(j+1)q-1}\right) \leq n^{1-q} \cdot 2^{(j+3)q}.$$
Let $\tilde{w}^{+,j} \doteq 2^{-j} v$. We have $$\left\|\overline{w^{+,j}} - 2^{-j} v\right\|_q = 2^{-j} \cdot \|z\|_q \leq 2^3 \cdot n^{1/q-1}=\varepsilon'.$$
We obtain an estimate of $\overline{w^{-,j}}$ in an analogous way. Finally, to estimate $\bar{w}^\infty \doteq \E[R_\infty({\mathbf w})]$ we observe that $2^{-k-1} \leq 2^{1-\lfloor \log(d)/q \rfloor} \leq 4 d^{-1/q}$. Now using ${\mbox{VSTAT}}(1/(4\varepsilon')^2)$ we can obtain an estimate of each coordinate of $\bar{w}^\infty$ with accuracy $\varepsilon' \cdot d^{-1/q}$. In particular, the estimate $\tilde{w}^\infty$ obtained in this way satisfies $\|\bar{w}^\infty - \tilde{w}^\infty\|_q \leq \varepsilon'$.
Now let $\tilde{w} = \sum_{j=0}^{k} (\tilde{w}^{+,j} - \tilde{w}^{-,j}) + \tilde{w}^\infty$. Each of the estimates has $\ell_q$ error of at most $\varepsilon' =\varepsilon/(2k+3)$ and therefore the total error is at most $\varepsilon$.
\end{proof}
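The effect of the zeroing step can be seen in the following small simulation (our own illustration; the ${\mbox{VSTAT}}(n)$ answers are simulated by adding worst-case noise of magnitude $\max\{1/n,\sqrt{u_i/n}\}$ to each coordinate): on a vector with few significant coordinates, zeroing the estimates below $2/n$ sharply reduces the accumulated $\ell_q$ error.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(5)
d, q, n = 10000, 1.5, 10000
u = np.zeros(d)
u[:100] = 0.005                  # a few significant coordinates

# Simulated VSTAT(n) answers: worst-case noise max(1/n, sqrt(u_i/n)).
noise = rng.uniform(-1, 1, size=d) * np.maximum(1.0 / n, np.sqrt(u / n))
v_prime = u + noise
v = np.where(np.abs(v_prime) >= 2.0 / n, v_prime, 0.0)  # zeroing step

print(np.linalg.norm(v - u, q))         # with zeroing
print(np.linalg.norm(v_prime - u, q))   # without zeroing: larger
\end{verbatim}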
\subsubsection{General Convex Bodies}
Next we consider mean estimation and stochastic linear
optimization for convex bodies beyond $\ell_p$-balls. A first observation is that
Theorem \ref{thm:L-infty} can be easily generalized to origin-symmetric polytopes. The
easiest way to see the result is to use the standard embedding of the origin-symmetric
polytope norm into $\ell_{\infty}$ and appeal to Lemma \ref{lem:norm-embed}.
\begin{corollary}
Let $\W$ be an origin-symmetric polytope with $2m$ facets. Then mean estimation over $\W$ with error $\varepsilon$ can be efficiently solved using $m$ queries to ${\mbox{STAT}}(\varepsilon/2)$.
\end{corollary}
In the case of an arbitrary origin-symmetric convex body $\W\subseteq \R^d$, we can reduce mean estimation over $\W$ to $\ell_2$ mean estimation using the John ellipsoid. Such an ellipsoid
${\cal E}$ satisfies the inclusions $\frac{1}{\sqrt d}{\cal E}\subseteq \W\subseteq {\cal E}$ and any ellipsoid is linearly isomorphic to a unit $\ell_2$ ball.
Therefore appealing to Lemma \ref{lem:norm-embed} and Theorem \ref{thm-l2-kashin} we have
the following.
\begin{proposition}
Let $\W\subseteq\R^d$ be an origin-symmetric convex body. Then the mean estimation
problem over $\W$ with error $\varepsilon$ can be solved using $2d$ queries to
${\mbox{STAT}}(\Omega(\varepsilon/\sqrt d))$.
\end{proposition}
By Observation \ref{obs:lin_opt_mean_est}, for an arbitrary convex body $\K$, the
stochastic linear optimization problem over $\K$ reduces to mean estimation over
$\W\doteq{\mbox{conv}}(\K_{\ast},-\K_{\ast})$. This leads to a nearly-optimal (in terms of worst-case dimension dependence) estimation complexity. A matching lower bound for this task will be proved in Corollary \ref{cor:lower-bound-small-eps}.
A drawback of this approach is that it depends on knowledge of the John ellipsoid for $\W$,
which, in general, cannot be computed efficiently (\eg \cite{Nemirovski:2013lectures}).
However, if $\K$ is a polytope with a polynomial number of facets, then $\W$ is an origin-symmetric polytope
with a polynomial number of vertices, and the John ellipsoid can be computed in
polynomial time \cite{Khachiyan:1996}. From this we conclude the following.
\begin{corollary}
There exists an efficient algorithm that given as input the vertices of an origin-symmetric polytope $\W\subseteq \R^d$
solves the mean estimation problem over $\W$ using $2d$ queries to
${\mbox{STAT}}(\Omega(\varepsilon/\sqrt{d}))$. The algorithm runs in time polynomial in the number of vertices.
\end{corollary}
\subsection{Lower Bounds}
\label{sec:lower-linear}
We now prove lower bounds for stochastic linear optimization over the $\ell_p$ unit ball and consequently also for $\ell_q$ mean estimation.
We do this using the technique from \cite{FeldmanPV:13} that is based on bounding the statistical dimension with discrimination norm.
The {\em discrimination norm} of a set of distributions $\D'$ relative to a distribution $D$ is denoted by $\kappa_2(\D',D)$ and defined as follows:
\begin{align*}
\kappa_2(\D',D) \doteq \max_{h:X \rightarrow \R, \|h\|_D=1} \left\{ \E_{D' \sim \D'}\left[\left| \E_{D'}[h]-\E_{D}[h]\right| \right] \right\},
\end{align*}
where the norm of $h$ over $D$ is $\|h\|_D = \sqrt{\E_D[h^2(x)]}$ and $D' \sim \D'$ refers to choosing $D'$ randomly and uniformly from the set $\D'$.
Let $\B(\D,D)$ denote the decision problem in which given samples from an unknown input distribution $D' \in \D \cup \{D\}$ the goal is to output $1$ if $D'\in \D$ and 0 if $D'=D$.
\begin{definition}[\cite{FeldmanGRVX:12}]\label{def:sdima}
For $\kappa>0$, domain $X$ and a decision problem $\B(\D,D)$, let $t$ be the largest integer
such that there exists a finite set of distributions $\D_D \subseteq \D$ with the following property:
for any subset $\D' \subseteq \D_D$, where $|\D'| \ge |\D_D|/t$, $\kappa_2(\D',D) \leq \kappa$.
The \textbf{statistical dimension} with discrimination norm $\kappa$ of $\B(\D,D)$
is $t$ and denoted by ${\mathrm{SDN}}(\B(\D,D),\kappa)$.
\end{definition}
The statistical dimension with discrimination norm $\kappa$ of a problem over distributions gives
a lower bound on the complexity of any statistical algorithm.
\begin{thm}[\cite{FeldmanGRVX:12}]
\label{thm:avgvstat-random}
Let $X$ be a domain and $\B(\D,D)$ be a decision problem over a class of distributions $\D$ on $X$ and reference distribution $D$. For $\kappa > 0$, let $t = {\mathrm{SDN}}(\B(\D,D),\kappa)$. Any randomized statistical algorithm that solves $\B(\D,D)$ with probability $\geq 2/3$ requires $t/3$ calls to ${\mbox{VSTAT}}(1/(3 \cdot \kappa^2))$.
\end{thm}
We now reduce a simple decision problem to stochastic linear optimization over the $\ell_p$ unit ball. Let $E = \{e_i \cond i\in [d]\} \cup \{-e_i \cond i\in [d]\}$. Let the reference distribution $D$ be the uniform distribution over $E$.
For a vector $v \in [-1,1]^d$, let $D_v$ denote the following distribution: pick $i\in [d]$ randomly and uniformly, then pick $b \in \on$ randomly subject to the expectation being equal to $v_i$ and output $b \cdot e_i$. By definition, $\E_{{\mathbf w} \sim D_v}[{\mathbf w}] = \frac{1}{d} v$. Further $D_v$ is supported on $E \subset \B_q^d$.
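For concreteness, the hard instances $D_v$ can be sampled as in the following \texttt{numpy} sketch (our own illustration); the empirical mean of the samples converges to $v/d$, matching the calculation above.
\begin{verbatim}
import numpy as np

def sample_D_v(v, n, rng):
    # Pick i uniformly, then b in {-1,+1} with E[b] = v_i; output b*e_i.
    d = len(v)
    idx = rng.integers(d, size=n)
    b = np.where(rng.random(n) < (1 + v[idx]) / 2, 1.0, -1.0)
    X = np.zeros((n, d))
    X[np.arange(n), idx] = b
    return X

rng = np.random.default_rng(6)
d = 20
v = 0.5 * rng.choice([-1.0, 1.0], size=d)       # alpha = 1/2
X = sample_D_v(v, 200000, rng)
print(np.linalg.norm(X.mean(axis=0) - v / d))   # small
\end{verbatim}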
For $q \in [1,2]$, $\alpha \in [0,1]$ and every $v \in \on^{d}$, $d^{1/q-1} \cdot v \in \B_p^d$ and $\la d^{1/q-1} v, \E_{{\mathbf w} \sim D_{\alpha v}}[{\mathbf w}] \ra = \alpha \cdot d^{1/q-1}$. At the same time for the reference distribution $D$ and every $x \in \B_p^d$, we have that $\la x, \E_{{\mathbf w} \sim D}[{\mathbf w}] \ra = 0$. Therefore to optimize with accuracy $\varepsilon = \alpha d^{1/q-1}/2$ it is necessary to distinguish every distribution in $\D_\alpha$ from $D$, in other words to solve the decision problem $\B(\D_\alpha,D)$.
\begin{lemma}
\label{lem:lower-bound-decision}
For any $r > 0$, $2^{\Omega(r)}$ queries to ${\mbox{VSTAT}}(d/(r \alpha^2))$ are necessary to solve the decision problem $\B(\D_\alpha,D)$ with success probability at least $2/3$.
\end{lemma}
\begin{proof}
We first observe that for any function $h: \B_1^d \rightarrow \R$,
\equ{\E_{D_{\alpha v}}[h] - \E_{D}[h] = \frac{ \alpha}{2d} \sum_{i \in [d]} v_i \cdot (h( e_i) - h(-e_i)) . \label{eq:discr2h}}
Let $\beta = \sqrt{\sum_{i \in [d]} (h( e_i) - h(-e_i))^2}$.
By Hoeffding's inequality we have that for every $r > 0$,
$$\pr_{v \sim \on^d } \left[ \left| \sum_{i \in [d]} v_i \cdot (h( e_i) - h(-e_i)) \right| \geq r \cdot \beta \right] \leq 2e^{-r^2/2} .$$
This implies that for every set $\V \subseteq \on^d$ such that $|\V| \geq 2^d/t$ we have that
$$\pr_{v \sim \V } \left[ \left| \sum_{i \in [d]} v_i \cdot (h( e_i) - h(-e_i)) \right| \geq r \cdot \beta \right] \leq t \cdot 2 e^{-r^2/2} .$$
From here a simple manipulation (see Lemma A.4 in \cite{Shalev-ShwartzBen-David:2014}) implies that
$$ \E_{v \sim \V } \left[ \left| \sum_{i \in [d]} v_i \cdot (h( e_i) - h(-e_i)) \right| \right] \leq \sqrt{2}(2+ \sqrt{\ln t}) \cdot \beta \leq \sqrt{2\log t} \cdot \beta ,$$
where the last inequality holds for all sufficiently large $t$.
Note that $$\beta \leq \sqrt{\sum_{i \in [d]}2 h( e_i)^2 +2 h(-e_i)^2 } = 2\sqrt{d} \cdot \|h\|_D.$$
For a set of distributions $\D' \subseteq \D_\alpha$ of size at least $2^d/t$, let $\V \subseteq \on^d$ be the set of vectors in $\on^d$ associated with $\D'$. By eq.\eqref{eq:discr2h} we have that
\alequn{\E_{D' \sim \D'}\left[\left| \E_{D'}[h]-\E_{D}[h]\right| \right] &= \frac{ \alpha}{2d} \E_{v \sim \V } \left[ \left| \sum_{i \in [d]} v_i \cdot (h( e_i) - h(-e_i)) \right| \right] \\& \leq \frac{ \alpha}{2d} \cdot \sqrt{2\log t} \cdot 2\sqrt{d} \cdot \|h\|_D = \alpha \sqrt{2\log t/d} \cdot \|h\|_D .}
By Definition \ref{def:sdima}, this implies that for every $t>0$, ${\mathrm{SDN}}(\B(\D_\alpha,D), \alpha \sqrt{2\log t/d} ) \geq t$. By Theorem \ref{thm:avgvstat-random}, for any $r > 0$, $2^{\Omega(r)}$ queries to ${\mbox{VSTAT}}(d/(r \alpha^2))$ are necessary to solve the decision problem $\B(\D_\alpha,D)$ with success probability at least $2/3$.
\end{proof}
To apply this lemma with our reduction we set $\alpha = 2\varepsilon d^{1-1/q}$. Note that $\alpha$ must be in the range $[0,1]$ so this is possible only if $\varepsilon < d^{1/q-1}/2$. Hence the lemma gives the following corollary:
\begin{corollary}
\label{cor:lower-bound-small-eps}
For any $\varepsilon \leq d^{1/q-1}/2$ and $r > 0$, $2^{\Omega(r)}$ queries to ${\mbox{VSTAT}}(d^{2/q-1} /(r \varepsilon^2))$ are necessary to find an
$\varepsilon$-optimal solution to the stochastic linear optimization problem over $\B_p^d$ with success probability at least $2/3$. The same lower bound holds for $\ell_q$ mean estimation with error $\varepsilon$.
\end{corollary}
Observe that this corollary does not cover the regime when $q > 1$ and $\varepsilon \geq d^{1/q-1}/2 = d^{-1/p}/2$. We analyze this case via a simple observation: for every $d' \in [d]$, $\B_p^{d'}$ and $\B_q^{d'}$ can be embedded into $\B_p^{d}$ and $\B_q^{d}$, respectively, in a trivial way: by adding $d-d'$ zero coordinates. Also the mean of a distribution supported on such an embedding of $\B_q^{d'}$ certainly lies inside the embedding. In particular, a $d$-dimensional solution $x$ can be converted back to a $d'$-dimensional solution $x'$ without increasing the value achieved by the solution. Hence lower bounds for optimization over $\B_p^{d'}$ imply lower bounds for optimization over $\B_p^{d}$. Therefore, for any $\varepsilon \geq d^{-1/p}/2$, let $d' = (2\varepsilon)^{-p}$ (ignoring for simplicity the minor issues with rounding). Now Corollary \ref{cor:lower-bound-small-eps} applied to $d'$ implies that $2^{\Omega(r)}$ queries to ${\mbox{VSTAT}}((d')^{2/q-1} /(r \varepsilon^2))$ are necessary for stochastic linear optimization. Substituting the value of $d' =(2\varepsilon)^{-p}$ we get $(d')^{2/q-1} /(r \varepsilon^2) = 2^{2-p}/(r\varepsilon^p)$ and hence we get the following corollary.
\begin{corollary}
\label{cor:lower-bound-large-eps}
For any $q > 1$, $\varepsilon \geq d^{1/q-1}/2$ and $r > 0$, $2^{\Omega(r)}$ queries to ${\mbox{VSTAT}}(1/(r \varepsilon^p))$ are necessary to find an $\varepsilon$-optimal solution to the stochastic linear optimization problem over $\B_p^d$ with success probability at least $2/3$.
The same lower bound holds for $\ell_q$ mean estimation with error $\varepsilon$.
\end{corollary}
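The substitution behind this corollary is the following short computation (using $1/p+1/q=1$, so that $2/q-1 = 1-2/p$):
\alequn{(d')^{2/q-1} = \left((2\varepsilon)^{-p}\right)^{2/q-1} &= (2\varepsilon)^{-p(1-2/p)} = (2\varepsilon)^{2-p},\\
\text{hence}\quad \frac{(d')^{2/q-1}}{r\varepsilon^2} &= \frac{2^{2-p}\,\varepsilon^{2-p}}{r\varepsilon^2} = \frac{2^{2-p}}{r\varepsilon^p}.}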
These lower bounds are not tight when $q > 2$. In this case a lower bound of $\Omega(1/\varepsilon^2)$ on the estimation complexity (irrespective of the number of queries) follows from a basic property of ${\mbox{VSTAT}}$: no query to ${\mbox{VSTAT}}(n)$ can distinguish between two input distributions $D_1$ and $D_2$ if the total variation distance between $D_1^n$ and $D_2^n$ is smaller than some (universal) positive constant \cite{FeldmanGRVX:12}.
\subsection{Learning Halfspaces}
\label{sec:halfspaces}
We now use our high-dimensional mean estimation algorithms to address the efficiency of SQ versions of online algorithms for learning halfspaces (also known as linear threshold functions). A linear threshold function is a Boolean function over $\R^d$ described by a weight vector $w \in \R^d$ together with a threshold $\theta \in \R$ and defined as $f_{w,\theta}(x) \doteq \sgn(\la w,x\ra -\theta)$.
\paragraph{Margin Perceptron:} We start with the classic Perceptron algorithm \cite{Rosenblatt:58,Novikoff:62}. For simplicity, and without loss of generality we only consider the case of $\theta =0$. We describe a slightly more general version of the Perceptron algorithm that approximately maximizes the margin and is referred to as Margin Perceptron \cite{BalcanB06}. The Margin Perceptron with parameter $\eta$ works as follows. Initialize the weights $w^0= 0^d$. At round $t\geq 1$, given a vector $x^t$ and correct prediction $y^t\in \on$, if $y^t \cdot \la w^{t-1}, x^t \ra \geq \eta$, then we let $w^{t} = w^{t-1}$. Otherwise, we update $w^{t} = w^{t-1} + y^t x^t$.
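For concreteness, the update rule can be sketched as follows (a minimal NumPy rendering of the rule just described; this is the sample-based version, not the SQ version discussed below, and the sequential interface is ours):
\begin{verbatim}
import numpy as np

def margin_perceptron(examples, eta):
    # examples: sequence of (x, y) with x in R^d and y in {-1, +1}
    d = len(examples[0][0])
    w = np.zeros(d)                  # w^0 = 0^d
    updates = 0                      # the quantity M bounded in the theorem
    for x, y in examples:
        if y * np.dot(w, x) < eta:   # margin violated at this round
            w = w + y * x            # w^t = w^{t-1} + y^t x^t
            updates += 1
    return w, updates
\end{verbatim}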
The Perceptron algorithm corresponds to using this algorithm with $\eta =0$. This update rule has the following guarantee:
\begin{theorem}[\cite{BalcanB06}]
\label{thm:margin-per-updates}
Let $(x^1,y^1),\ldots,(x^t,y^t)$ be any sequence of examples in $\B_2^d(R) \times \on$ and assume that there exists a vector $w^* \in \B_2^d(W)$ such that for all $t$, $y^t \la w^*,x^t\ra \geq \gamma > 0$. Let $M$ be the number of rounds in which the Margin Perceptron with parameter $\eta$ updates the weights on this sequence of examples. Then $M \leq R^2 W^2/(\gamma-\eta)^2$.
\end{theorem}
The advantage of this version over the standard Perceptron is that it can be used to ensure that the final vector $w^t$ separates the positive examples from the negative ones with margin $\eta$ (as opposed to the plain Perceptron which does not guarantee any margin). For example, by choosing $\eta = \gamma/2$ one can approximately maximize the margin while only paying a factor $4$ in the upper bound on the number of updates. This means that the halfspace produced by Margin-Perceptron has essentially the same properties as that produced by the SVM algorithm.
In PAC learning of halfspaces with margin assumption we are given random examples from a distribution $D$ over $\B_2^d(R) \times \on$. The distribution is assumed to be supported only on examples $(x,y)$ that for some vector $w^*$ satisfy $y \la w^*,x\ra \geq \gamma$. It has long been observed that a natural way to convert the Perceptron algorithm to the SQ setting is to use the mean vector of all counterexamples with Perceptron updates \cite{Bylander:94,BlumFKV:97}. Namely, update using the example $(\bar{x}^t,1)$, where $\bar{x}^t = \E_{({\mathbf x},{\mathbf y}) \sim D}[ {\mathbf y} \cdot {\mathbf x} \cond {\mathbf y} \la w^{t-1},{\mathbf x} \ra < \eta]$. Naturally, by linearity of the expectation, we have that $\la w^{t-1}, \bar{x}^t \ra < \eta$ and $\la w^*,\bar{x}^t \ra \geq \gamma$, and also, by convexity, that $\bar{x}^t \in \B_2^d(R)$. This implies that exactly the same analysis can be used for updates based on the mean counterexample vector. Naturally, we can only estimate $\bar{x}^t$ and hence our goal is to find an estimate that still allows the analysis to go through. In other words, we need to use statistical queries to find a vector $\tilde{x}$ which satisfies the conditions above (at least approximately). The main difficulty here is preserving the condition $\la w^*,\tilde{x} \ra \geq \gamma$, since we do not know $w^*$. However, by finding a vector $\tilde{x}$ such that $\|\tilde{x} - \bar{x}^t\|_2 \leq \gamma/(3 W)$ we can ensure that $$\la w^*,\tilde{x} \ra= \la w^*,\bar{x}^t \ra - \la w^*,\bar{x}^t -\tilde{x} \ra \geq \gamma - \|\tilde{x} - \bar{x}^t\|_2 \cdot \|w^*\|_2 \geq 2\gamma/3 .$$
We next note that conditions $\la w^{t-1}, \tilde{x} \ra < \eta$ and $\tilde{x} \in \B_2^d(R)$ are easy to preserve. These are known and convex constraints so we can always project $\tilde{x}$ to the (convex) intersection of these two closed convex sets. This can only decrease the distance to $\bar{x}^t$. This implies that, given an estimate $\tilde{x}$, such that $\|\tilde{x} - \bar{x}^t\|_2 \leq \gamma/(3 W)$ we can use Thm.~\ref{thm:margin-per-updates} with $\gamma' = 2\gamma/3$ to obtain an upper bound of $M \leq R^2 W^2/(2\gamma/3-\eta)^2$ on the number of updates.
Now, by definition, $$\E_{({\mathbf x},{\mathbf y}) \sim D}[ {\mathbf y} \cdot {\mathbf x} \cond {\mathbf y} \la w^{t-1},{\mathbf x} \ra < \eta] = \frac{\E_{({\mathbf x},{\mathbf y}) \sim D}[ {\mathbf y} \cdot {\mathbf x} \cdot \ind{{\mathbf y} \la w^{t-1},{\mathbf x} \ra < \eta}] }{\pr_{({\mathbf x},{\mathbf y}) \sim D}[ {\mathbf y} \la w^{t-1},{\mathbf x} \ra < \eta]} .$$
In PAC learning with error $\eps$ we can assume that $\alpha \doteq \pr_{({\mathbf x},{\mathbf y}) \sim D}[ {\mathbf y} \la w^{t-1},{\mathbf x} \ra < \eta] \geq \eps$ since otherwise the halfspace $f_{w^{t-1}}$ is a sufficiently accurate hypothesis (that is, it classifies at least a $1-\eps$ fraction of examples with margin at least $\eta$). This implies that it is sufficient to find a vector $\tilde{z}$ such that $\|\tilde{z} - \bar{z}\|_2 \leq \alpha \gamma/(3 W)$, where $\bar{z} = \E_{({\mathbf x},{\mathbf y}) \sim D}[ {\mathbf y} \cdot {\mathbf x} \cdot \ind{{\mathbf y} \la w^{t-1},{\mathbf x} \ra < \eta}]$.
Now the distribution on ${\mathbf y}\cdot {\mathbf x} \cdot \ind{{\mathbf y} \la w^{t-1},{\mathbf x} \ra < \eta}$ is supported on $\B_2^d(R)$ and therefore using Theorem \ref{thm-l2-kashin} we can get the desired estimate using $2d$ queries to ${\mbox{STAT}}(\Omega(\eps \gamma/(RW)))$. In other words, the estimation complexity of this implementation of Margin Perceptron is $O((RW/(\eps\gamma))^2)$. We make a further observation that the dependence of estimation complexity on $\eps$ can be reduced from $1/\eps^2$ to $1/\eps$ by using ${\mbox{VSTAT}}$ in place of ${\mbox{STAT}}$. This follows from Lemma \ref{lem:vstat-condition} which implies that we need to pay only linearly for conditioning on $\ind{{\mathbf y} \la w^{t-1},{\mathbf x} \ra < \eta}$. Altogether we get the following result which we for simplicity state for $\eta =\gamma/2$:
\begin{theorem}
\label{thm:opt-perceptron}
There exists an efficient algorithm {\sf Margin-Perceptron-SQ} that for every $\eps > 0$ and distribution $D$ over $\B_2^d(R) \times \on$ that is supported on examples $(x,y)$ that, for some vector $w^* \in \B_2^d(W)$, satisfy $y \la w^*,x\ra \geq \gamma$, outputs a halfspace $w$ such that $\pr_{({\mathbf x},{\mathbf y})\sim D}[{\mathbf y}\la w,{\mathbf x}\ra < \gamma/2] \leq \eps$. {\sf Margin-Perceptron-SQ} uses $O(d(WR/\gamma)^2)$ queries to ${\mbox{VSTAT}}(O((WR/\gamma)^2/\eps))$.
\end{theorem}
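One way such an algorithm can be organized is sketched below. Here \texttt{mean\_est\_l2} and \texttt{prob\_est} are \emph{hypothetical} subroutine names standing in for the $\ell_2$ mean-estimation procedure of Theorem~\ref{thm-l2-kashin} and for a probability estimate obtained via ${\mbox{VSTAT}}$; the projection step is a heuristic stand-in for the exact projection onto the intersection of the two known convex constraints.
\begin{verbatim}
import numpy as np

def margin_perceptron_sq(mean_est_l2, prob_est, d, R, W, gamma, eps):
    eta = gamma / 2.0              # eta = gamma/2 as in the theorem
    w = np.zeros(d)
    while True:
        # alpha = Pr[y <w, x> < eta]; stop when at most eps
        alpha = prob_est(lambda x, y: y * np.dot(w, x) < eta)
        if alpha <= eps:
            return w
        # z_bar = E[y x 1{y <w, x> < eta}], estimated in l2 norm
        z = mean_est_l2(
            lambda x, y: y * x * (y * np.dot(w, x) < eta),
            tol=alpha * gamma / (3.0 * W))
        x_tilde = clip(z / alpha, w, eta, R)   # conditional mean estimate
        w = w + x_tilde

def clip(x, w, eta, R):
    # project onto {z : <w, z> <= eta}, then scale into B_2(R);
    # for eta > 0 the scaling preserves the halfspace constraint
    if np.dot(w, w) > 0 and np.dot(w, x) > eta:
        x = x - (np.dot(w, x) - eta) / np.dot(w, w) * w
    n = np.linalg.norm(x)
    return x if n <= R else x * (R / n)
\end{verbatim}
By the argument preceding the theorem, this loop terminates after at most $R^2W^2/(2\gamma/3-\eta)^2$ updates.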
The estimation complexity of our algorithm is the same as the sample complexity of the PAC learning algorithm for learning large-margin halfspaces obtained via a standard online-to-batch conversion (\eg \cite{Cesa-BianchiCG04}).
SQ implementations of Perceptron were used to establish learnability of large-margin halfspaces with random classification noise \cite{Bylander:94} and to give a private version of Perceptron \cite{BlumDMN:05}. Perceptron is also the basis of SQ algorithms for learning halfspaces that do not require a margin assumption \cite{BlumFKV:97,DunaganV08}. All previous analyses that we are aware of used coordinate-wise estimation of $\bar{x}$ and resulted in an estimation complexity bound of $O(d(WR/(\gamma\eps))^2)$. Perceptron and SVM algorithms are most commonly applied over a very large number of variables (such as when using a kernel) and the dependence of estimation complexity on $d$ would be prohibitive in such settings.
\paragraph{Online $p$-norm algorithms:}
The Perceptron algorithm can be seen as a member in the family of online $p$-norm algorithms \cite{GroveLS97} with $p=2$. The other famous member of this family is the Winnow algorithm \cite{Littlestone:87} which corresponds to $p=\infty$.
For $p \in [2,\infty]$, a $p$-norm algorithm is based on the $p$-margin assumption: there exists $w^* \in \B_q^d(W)$ such that for each example $(x,y) \in \B_p^d(R) \times \on$ we have $y \la w^*,x\ra \geq \gamma$. Under this assumption the upper bound on the number of updates is $O((WR/\gamma)^2)$ for $p \in [2,\infty)$ and $O(\log d \cdot (WR/\gamma)^2)$ for $p=\infty$.
Our $\ell_p$ mean estimation algorithms can be used in exactly the same way to (approximately) preserve the margin in this case giving us the following extension of Theorem \ref{thm:opt-perceptron}.
\begin{theorem}
\label{thm:norm-perceptron}
For every $p\in[2,\infty]$, there exists an efficient algorithm {\sf $p$-norm-SQ} that for every $\eps > 0$ and distribution $D$ over $\B_p^d(R) \times \on$ that is supported on examples $(x,y)$ that, for some vector $w^* \in \B_q^d(W)$, satisfy $y \la w^*,x\ra \geq \gamma$, outputs a halfspace $w$ such that $\pr_{({\mathbf x},{\mathbf y})\sim D}[{\mathbf y}\la w,{\mathbf x}\ra < 0] \leq \eps$. For $p\in[2,\infty)$ {\sf $p$-norm-SQ} uses $O(d \log d (WR/\gamma)^2)$ queries to ${\mbox{VSTAT}}(O(\log d (WR/\gamma)^2/\eps))$ and for $p =\infty$ {\sf $p$-norm-SQ} uses $O(d \log d (WR/\gamma)^2)$ queries to ${\mbox{VSTAT}}(O((WR/\gamma)^2/\eps))$.
\end{theorem}
It is not hard to prove that margin can also be approximately maximized for these more general algorithms but we are not aware of an explicit statement of this in the literature. We remark that to implement the Winnow algorithm, the update vector can be estimated via straightforward coordinate-wise statistical queries.
Many variants of the Perceptron and Winnow algorithms have been studied in the literature and applied in a variety of settings (\eg \cite{FreundSchapire:98,Servedio:99colt,DasguptaKM:09}). The analysis inevitably relies on a margin assumption (and its relaxations) and hence, we believe, can be implemented using SQs in a similar manner.
\section{Preliminaries}
For integer $n\geq 1$ let $[n]\doteq \{1,\ldots, n\}$. Typically, $d$ will denote the ambient space dimension,
and $n$ will denote the number of samples. Random variables are denoted by bold
letters, e.g., ${\mathbf w}$, ${\mathbf U}$. We denote the indicator function of an event $A$
(i.e., the function taking value zero outside of $A$, and one on $A$) by
${\mathbf 1}_{A}$.
For $i\in[d]$ we denote by $e_i$ the $i$-th basis
vector in $\R^d$. Given a norm $\|\cdot\|$ on $\R^d$ we denote the ball of radius $R>0$ by
$\B_{\|\cdot\|}^d(R)$, and the unit ball by $\B_{\|\cdot\|}^d$. We also recall the definition of the norm dual to $\|\cdot\|$, $\|w\|_{\ast}\doteq \sup_{\|x\|\leq1}\langle w,x\rangle$, where
$\langle \cdot,\cdot\rangle $ is the standard inner product of $\R^d$.
For a convex body (i.e., compact convex set with nonempty interior) $\K\subseteq \R^d$
we define its polar as $\K_{\ast}=\{w\in\R^d:\, \langle w,x\rangle\leq 1 \,\,\forall x\in\K \}$,
and we have that $(\K_{\ast})_{\ast}=\K$.
Any origin-symmetric convex body $\K \subset \R^d$ (i.e., $\K=-\K$) defines a norm
$\|\cdot \|_\K$ as
follows: $\| x \|_\K = \inf_{\alpha > 0}\{\alpha \cond x/\alpha \in \K\}$, and $\K$ is the
unit ball of $\|\cdot\|_\K$. It is easy to see that
the norm dual to $\|\cdot\|_{\cal K}$ is $\|\cdot\|_{\cal K_{\ast}}$.
Our primary case of interest corresponds to $\ell_p$-setups. Given $1\leq p\leq \infty$,
we consider the normed space $\ell_p^d\doteq(\R^d,\|\cdot\|_p)$, where for a vector
$x\in\R^d$, $\|x\|_p\doteq \left(\sum_{i\in[d]}|x_i|^p \right)^{1/p}$. For
$R \geq 0$, we denote by $\B_p^d(R) = \B_{\|\cdot\|_p}^d(R)$ and similarly
for the unit ball, \(\B_p^d=\B_p^d(1)\). We denote the conjugate exponent of $p$
as $q$, meaning that $1/p+1/q=1$; with this, the norm dual to $\|\cdot\|_p$ is the norm
$\|\cdot\|_q$. In all definitions above, when clear from context, we will omit the
dependence on $d$.\\
We consider problems of the form
\begin{equation} \label{StochOpt}
F^{\ast} \doteq \min_{x\in \K}\left\{ F(x)\doteq\E_{{\mathbf w}}[f(x,{\mathbf w})]\right\},
\end{equation}
where $\K$ is a convex body in $\R^d$, ${\mathbf w}$ is a random variable defined over some domain $\W$, and for each $w\in\W$, $f(\cdot,w)$ is convex and subdifferentiable
on $\K$. For an approximation parameter $\eps>0$ the goal is to find $x\in{\cal K}$ such that $F(x) \leq F^* +\eps$, and we call any such $x$ an {\em$\varepsilon$-optimal solution}. We denote the probability distribution of ${\mathbf w}$ by $D$ and refer to it as the input distribution. For convenience we will also assume that $\K$ contains the origin.
\paragraph{Statistical Queries:}
The algorithms we consider here have access to a statistical query oracle for the input distribution. For most of our results a basic oracle introduced by \citenames{Kearns}{Kearns:98} that gives an estimate of the mean with fixed tolerance will suffice. We will also rely on a stronger oracle from \cite{FeldmanGRVX:12} that takes into account the variance of the query function and faithfully captures estimation of the mean of a random variable from samples.
\begin{definition}
Let $D$ be a distribution over a domain $\W$, $\tau >0$ and $n$ be an integer. A statistical query oracle ${\mbox{STAT}}_D(\tau)$ is an oracle that given as input any function $\phi : \W \rightarrow [-1,1]$, returns some value $v$ such that
$|v - \E_{{\mathbf w}\sim D}[\phi({\mathbf w})]| \leq \tau$. A statistical query oracle ${\mbox{VSTAT}}_D(n)$ is an oracle that given as input any function $\phi : \W \rightarrow [0,1]$ returns some value $v$ such that
$|v - p| \leq \max\left\{\frac{1}{n}, \sqrt{\frac{p(1-p)}{n}}\right\}$, where $p \doteq \E_{{\mathbf w}\sim D}[\phi({\mathbf w})]$.
We say that an algorithm is {\em statistical query} (or, for brevity, just SQ) if it does not have direct access
to $n$ samples from the input distribution $D$, but instead makes calls to a statistical query oracle for the input distribution.
\end{definition}
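To make the definition concrete, one valid oracle answer can be modeled as the true expectation plus an adversarial perturbation within the allowed tolerance (a sketch in our notation; any value in the permitted interval is an equally valid answer):
\begin{verbatim}
import numpy as np

def stat_answer(true_mean, tau, rng):
    # any value within tau of E[phi] is a valid STAT(tau) answer
    return true_mean + rng.uniform(-tau, tau)

def vstat_answer(p, n, rng):
    # p = E[phi] for phi : W -> [0, 1]; tolerance of VSTAT(n)
    tol = max(1.0 / n, np.sqrt(p * (1.0 - p) / n))
    return p + rng.uniform(-tol, tol)
\end{verbatim}
In particular, the tolerance of ${\mbox{VSTAT}}(n)$ never exceeds $1/\sqrt{n}$ and shrinks toward $1/n$ as $p$ approaches $0$ or $1$; for example, a valid answer of ${\mbox{VSTAT}}(1000)$ on a query with $p=0.01$ deviates from $p$ by at most $\max\{10^{-3}, \sqrt{0.0099/1000}\} \approx 3.1\times 10^{-3}$.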
Clearly ${\mbox{VSTAT}}_D(n)$ is at least as strong as ${\mbox{STAT}}_D(1/\sqrt{n})$ (but no stronger than ${\mbox{STAT}}_D(1/n)$).
The query complexity of a statistical query algorithm is the number of queries it uses. The {\em estimation complexity} of a statistical query algorithm using ${\mbox{VSTAT}}_D(n)$ is the value $n$; for an algorithm using ${\mbox{STAT}}(\tau)$ it is $n=1/\tau^2$.
Note that the estimation complexity corresponds to the number of i.i.d.~samples sufficient to simulate the oracle for a single query with at least some positive constant probability of success. However it is not necessarily true that the whole algorithm can be simulated using $O(n)$ samples since answers to many queries need to be estimated. Answering $m$ fixed (or non-adaptive) statistical queries can be done using $O(\log m \cdot n)$ samples but when queries depend on previous answers the best known bounds require $O(\sqrt{m} \cdot n)$ samples (see \cite{DworkFHPRR14:arxiv} for a detailed discussion). This also implies that a lower bound on sample complexity of solving a problem does not directly imply lower bounds on estimation complexity of a SQ algorithm for the problem.
Whenever that does not make a difference for our upper bounds on estimation complexity, we state results for ${\mbox{STAT}}$ to ensure consistency with prior work in the SQ model. All our lower bounds are stated for the stronger ${\mbox{VSTAT}}$ oracle. One useful property of ${\mbox{VSTAT}}$ is that it only pays linearly when estimating expectations of functions conditioned on a rare event:
\begin{lemma}
\label{lem:vstat-condition}
For any function $\phi : X \rightarrow [0,1]$, input distribution $D$ and condition $A:X \rightarrow \zo$ such that $p_A \doteq \pr_{x\sim D}[A(x)=1] \geq \alpha$, let $p \doteq \E_{x\sim D}[\phi(x) \cdot A(x)]$. Then query $\phi(x)\cdot A(x)$ to ${\mbox{VSTAT}}(n/\alpha)$ returns a value $v$ such that $|v - p| \leq \frac{p_A}{\sqrt{n}}$.
\end{lemma}
\begin{proof}
The value $v$ returned by ${\mbox{VSTAT}}(n/\alpha)$ on query $\phi(x)\cdot A(x)$ satisfies:
$|v - p| \leq \max\left\{\frac{\alpha}{n}, \sqrt{\frac{p(1-p)\alpha}{n}}\right\}$. Note that $p = \E[\phi(x) A(x)] \leq \pr[A(x)=1] = p_A$ and $\alpha \leq p_A$. Hence $\frac{\alpha}{n} \leq \frac{p_A}{n} \leq \frac{p_A}{\sqrt{n}}$ and $\sqrt{\frac{p(1-p)\alpha}{n}} \leq \sqrt{\frac{p_A \cdot p_A}{n}} = \frac{p_A}{\sqrt{n}}$, so $|v-p| \leq \frac{p_A}{\sqrt{n}}$.
\end{proof}
Note that one would need to use ${\mbox{STAT}}(\alpha/\sqrt{n})$ to obtain a value $v$ with the same accuracy of $\frac{p_A}{\sqrt{n}}$ (since $p_A$ can be as low as $\alpha$). This corresponds to estimation complexity of $n/\alpha^2$ vs.~$n/\alpha$ for ${\mbox{VSTAT}}$.
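As a concrete illustration (numbers ours): for $\alpha = 0.01$ and $n=100$, the lemma uses ${\mbox{VSTAT}}(10^4)$ to guarantee $|v-p| \leq p_A/10$, whereas achieving the same accuracy via ${\mbox{STAT}}$ requires tolerance $\alpha/\sqrt{n} = 10^{-3}$, i.e.~estimation complexity $10^6$.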
\section{Proof of Proposition \ref{Prop:Inexact_MD}}
\label{sec:proof_inexact_MD}
\fi
We first state without proof the following identity for prox-functions (for example, see
(5.3.20) in \cite{Nemirovski:2013lectures}): for all \(x\), \(x^{\prime}\)
and \(u\) in \(\K\)
\[ V_{x}(u)-V_{x^{\prime}}(u)-V_{x}(x^{\prime}) =
\langle \nabla V_{x}(x^{\prime}),u-x^{\prime}\rangle. \]
On the other hand, the optimality conditions of problem \eqref{Prox_step} are
\begin{equation*}
\langle \alpha \tilde g^t+\nabla V_{x^t}(x^{t+1}),u-x^{t+1} \rangle \geq 0,
\quad \forall u\in \K.
\end{equation*}
Let \(u\in \K\) be an arbitrary vector, and let $s$ be such that $1/r+1/s=1$.
Since $\tilde g^t$ is a $\eta$-approximate gradient,
\begin{eqnarray*}
\alpha [F(x^t)-F(u)] &\leq& \alpha \langle \nabla F(x^t),x^t-u\rangle \\
&\leq& \alpha \langle \tilde g^t,x^t-u\rangle +\alpha\eta \\
& = & \alpha \langle \tilde g^t,x^t-x^{t+1}\rangle
+ \alpha \langle \tilde g^t,x^{t+1}-u\rangle +\alpha\eta \\
&\leq& \alpha \langle \tilde g^t,x^t-x^{t+1}\rangle
- \langle \nabla V_{x^t}(x^{t+1}),x^{t+1}-u\rangle +\alpha\eta \\
& = & \alpha \langle \tilde g^t,x^t-x^{t+1}\rangle
+V_{x^t}(u)-V_{x^{t+1}}(u)-V_{x^t}(x^{t+1}) +\alpha\eta\\
&\leq& [\alpha \langle \tilde g^t,x^t-x^{t+1}\rangle
- \frac{1}{r}\|x^t-x^{t+1}\|^r]
+V_{x^t}(u)-V_{x^{t+1}}(u) +\alpha \eta \\
&\leq& \frac{1}{s}\|\alpha \tilde g^t\|_{\ast}^{s}
+ V_{x^t}(u)-V_{x^{t+1}}(u) +\alpha\eta,
\end{eqnarray*}
where we have used all the observations above, and the last step holds by
Fenchel's inequality.
Let us choose \(u\) such that \(F(u)=F^{\ast}\); thus by definition of \(\bar x^T\)
and by convexity of \(F\)
$$ \alpha T[F(\bar x^T)-F^{\ast}] \,\,\leq\,\, \sum_{t=1}^T \alpha[F(x^t)-F^{\ast}]
\,\,\leq \frac{(\alpha L_0)^{s}}{s}T+D_{\Psi}(\K) +\alpha T\eta,$$
and since
\(\alpha=\frac{1}{L_0}\left( \frac{rD_{\Psi}(\K)}{T}\right)^{1/s}\) we obtain
$F(\bar x^T)-F^{\ast} \leq L_0\left( \frac{rD_{\Psi}(\K)}{T}\right)^{1/r} +\eta. $
\iffull\end{proof}
\else\fi
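For intuition, the iteration analyzed above can be sketched as follows in the Euclidean case $r=s=2$, where $V_x(u)=\frac12\|u-x\|_2^2$ and the prox step \eqref{Prox_step} reduces to a projected gradient step (the gradient oracle and the projection onto $\K$ are passed in as placeholders):
\begin{verbatim}
import numpy as np

def inexact_mirror_descent(approx_grad, project_K, x0, alpha, T):
    # approx_grad(x): an eta-approximate (sub)gradient of F at x
    # project_K(x): Euclidean projection onto the convex body K
    x = np.array(x0, dtype=float)
    running_sum = np.zeros_like(x)
    for t in range(T):
        running_sum += x              # x_bar^T averages x^1, ..., x^T
        g = approx_grad(x)
        x = project_K(x - alpha * g)  # prox step for V_x(u) = ||u-x||^2/2
    return running_sum / T            # the averaged iterate x_bar^T
\end{verbatim}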
\section{Proof of Lemma \ref{lem:str_cvx_oracle}}
\label{sec:proof_lem:str_cvx_oracle}
We first observe that we can obtain an approximate
zero-order oracle for $F$ with error $\eta$ by a single query to
${\mbox{STAT}}(\Omega(\eta/B)).$
In particular, we can obtain a value \(\hat F(x)\) such that
\(|\hat F(x)-F(x)|\leq \eta/4\), and then use as approximation
\[ \tilde F(x) = \hat F(x)-\eta/2.\]
This way \(|F(x)-\tilde F(x)| \leq |F(x)-\hat F(x)|+|\hat F(x)-\tilde F(x)|\leq 3\eta/4\),
and also \(F(x)-\tilde F(x) = F(x)-\hat F(x)+\eta/2 \geq \eta/4 \).
Finally, observe that for any gradient method that does not require access
to the function value we can skip the estimation of $\tilde F(x)$, and simply replace
it by $F(x)-\eta/2$ in what comes next.\\
Next, we prove that an approximate gradient \(\tilde g(x)\) satisfying
\begin{equation} \label{str_cvx_grad_approx}
\|\nabla F(x) - \tilde g(x)\|_{\ast} \leq \sqrt{\eta\kappa}/2 \leq \sqrt{\eta L_1}/2 ,
\end{equation}
suffices for an \((\eta,\mu,M)\)-oracle, where
\(\mu=\kappa/2\) and \(M=2L_1\). For convenience, we refer to the first inequality in \eqref{str_cvx_oracle} as the
{\em lower bound} and the second as the {\em upper bound}.\\
\noindent{\bf Lower bound.}
Since \(F\) is \(\kappa\)-strongly convex, and by the lower bound on
$F(x)-\tilde F(x)$
\begin{eqnarray*}
F(y) &\geq& F(x)+\langle \nabla F(x),y-x\rangle +\frac{\kappa}{2}\|x-y\|^2 \\
&\geq& \tilde F(x) + \eta/4 +\langle \tilde g(x),y-x\rangle
+\langle \nabla F(x)-\tilde g(x),y-x\rangle + \frac{\kappa}{2}\|x-y\|^2.
\end{eqnarray*}
Thus to obtain the lower bound it suffices to prove that for all \(y\in\R^d\),
\begin{equation} \label{to_prove}
\frac{\eta}{4} + \langle \nabla F(x)-\tilde g(x),y-x\rangle + \frac{\mu}{2}\|x-y\|^2\geq 0.
\end{equation}
In order to prove this inequality, notice that among all \(y\)'s such that
\(\|y-x\|=t\), the minimum of the expression above is attained when
\(\langle \nabla F(x)-\tilde g(x),y-x\rangle = -t\|\nabla F(x)-\tilde g(x)\|_{\ast}\). This
leads to the one dimensional inequality
\[\frac{\eta}{4} - t\|\nabla F(x)-\tilde g(x)\|_{\ast} + \frac{\mu}{2}t^2 \geq 0,\]
whose minimum is attained at \(t=\frac{\|\nabla F(x)-\tilde g(x)\|_{\ast}}{\mu}\),
and thus has minimum value \(\eta/4-\|\nabla F(x)-\tilde g(x)\|_{\ast}^2/(2\mu)\).
Finally, this value is nonnegative by assumption, proving
the lower bound.\\
\noindent{\bf Upper bound.} Since
\(F\) has \(L_1\)-Lipschitz continuous gradient, and by the bound on $|F(x)-\tilde F(x)|$
\begin{eqnarray*}
F(y) &\leq& F(x) +\langle \nabla F(x),y-x\rangle +\dfrac{L_1}{2}\|y-x\|^2 \\
&\leq& \tilde F(x) +\dfrac{3\eta}{4} +\langle \tilde g(x),y-x\rangle +
\langle \nabla F(x)-\tilde g(x),y-x\rangle+\dfrac{L_1}{2}\|x-y\|^2.
\end{eqnarray*}
Now we show that for all \(y\in \R^d\)
\begin{eqnarray*}
\dfrac{L_1}{2}\|y-x\|^2-\langle \nabla F(x)-\tilde g(x),y-x\rangle +\frac{\eta}{4}\geq 0.
\end{eqnarray*}
Indeed, minimizing the expression above in \(y\)
shows that it suffices to have
\(\|\nabla F(x)-\tilde g(x)\|_{\ast}^2\leq \eta L_1/2\),
which is true by assumption.
Finally, combining the two bounds above we get that for all $y\in\K$
\[ F(y)\leq [\tilde F(x)+ \langle \tilde g(x),y-x\rangle]+\frac{M}{2}\|y-x\|^2+\eta,\]
which is precisely the upper bound.\\
As a conclusion, we proved that in order to obtain $\tilde g$ for
an $(\eta,\mu,M)$-oracle it suffices to obtain an approximate gradient
satisfying \eqref{str_cvx_grad_approx}, which can be obtained by solving
a mean estimation problem in $\|\cdot\|_{\ast}$ with error $\sqrt{\eta\kappa}/[2L_0]$.
This together with our analysis of the zero-order oracle proves the result.\\
Finally, if we remove the assumption $F\in{\cal F}_{\|\cdot\|}^1(\K,L_1)$ then from
\eqref{nonsmooth_dif_ineq} we can prove that for all $x,y\in\K$
\[ F(y) - [F(x)+\langle \nabla F(x),y-x\rangle] \leq \frac{L_0^2}{\eta}\|x-y\|^2+\frac{\eta}{4},\]
where $M=2L_0^2/\eta$. This is sufficient for carrying out the proof above, and
the result follows.
\section{Sample complexity of mean estimation}
\label{sec:Samples}
The following is a standard analysis based on Rademacher complexity and
uniform convexity (see, e.g., \cite{Pisier:2011}).
Let $(E,\|\cdot\|)$ be an $r$-uniformly convex space. We are interested in
the convergence of the empirical mean to the true mean in the dual norm (to the one we optimize in).
By Observation \ref{obs:lin_opt_mean_est} this is sufficient to bound the
error of optimization using the empirical estimate of the gradient on
$\K \doteq \B_{\|\cdot\|}$.
Let $({\mathbf w}^j)_{j=1}^n$ be i.i.d.~samples of a random variable ${\mathbf w}$ with
mean $\bar w$, and let $\bar {\mathbf w}^n\doteq \frac{1}{n}\sum_{j=1}^n {\mathbf w}^j$ be the
empirical mean estimator.
Notice that $$ \left\|\bar {\mathbf w}^n -\bar w \right\|_{\ast}
= \sup_{x\in\K} \left|\left\la \bar {\mathbf w}^n -\bar w, x \right\ra \right|.$$
Let $(\sigma_j)_{j=1}^n$ be i.i.d. Rademacher random variables (independent
of $({\mathbf w}^j)_j$). By a standard symmetrization argument, we have
\begin{eqnarray*}
\E_{{\mathbf w}^1,\ldots,{\mathbf w}^n} \sup_{x\in \K}\left| \left\langle \frac{1}{n}\sum_{j=1}^n {\mathbf w}^j, x\right\rangle - \left\langle \bar w,x\right\rangle \right|
&\leq & 2 \E_{\sigma_1,\ldots,\sigma_n}\E_{{\mathbf w}^1,\ldots,{\mathbf w}^n} \sup_{x\in{\cal K}} \left| \sum_{j=1}^n \sigma_j \langle {\mathbf w}^j,x\rangle \right|.
\end{eqnarray*}
For simplicity, we will denote $\|\K\|\doteq \sup_{x\in \K}\|x\|$ the $\|\cdot\|$ radius of $\K$.
Now by the Fenchel inequality
\begin{eqnarray*}
\E_{\sigma_1,\ldots,\sigma_n} \sup_{x\in \K} \left| \sum_{j=1}^n \sigma_j \langle {\mathbf w}^j,x\rangle \right|
&\leq & \inf_{\lambda>0} \E_{\sigma_1,\ldots,\sigma_n} \left\{
\frac{1}{r\lambda}\sup_{x\in \K}\|x\|^r+\frac{1}{s\lambda}\left\|\frac{\lambda}{n} \sum_{j=1}^n \sigma_j {\mathbf w}^j\right\|_{\ast}^s \right\} \\
&\leq & \inf_{\lambda>0} \E_{\sigma_1,\ldots,\sigma_{n-1}} \left\{
\frac{1}{r\lambda}\|\K\|^r \right.\\
& & \left. +\frac{\lambda^{s-1}}{sn^s} \frac12\left[
\left\|\sum_{j=1}^{n-1} \sigma_j {\mathbf w}^j + \sigma_n {\mathbf w}^n\right\|_{\ast}^s
+\left\|\sum_{j=1}^{n-1} \sigma_j {\mathbf w}^j - \sigma_n {\mathbf w}^n\right\|_{\ast}^s
\right]
\right\}\\
&\leq & \inf_{\lambda>0} \E_{\sigma_1,\ldots,\sigma_{n-1}} \left\{
\frac{1}{r\lambda}\|\K\|^r+\frac{\lambda^{s-1}}{sn^s}
\left[ \left\|\sum_{j=1}^{n-1} \sigma_j {\mathbf w}^j \right\|_{\ast}^s + C\|{\mathbf w}^n\|_{\ast}^s \right] \right\},
\end{eqnarray*}
where the last inequality holds from the $s$-uniform smoothness of
$(E^{\ast},\|\cdot\|_{\ast})$. Proceeding inductively we obtain
\begin{eqnarray*}
\E_{\sigma_1,\ldots,\sigma_n} \sup_{x\in{\cal K}} \left| \sum_{j=1}^n \sigma_j \langle {\mathbf w}^j,x\rangle \right|
&\leq & \inf_{\lambda>0} \left\{ \frac{1}{r\lambda}\|\K\|^r+
\frac{C\lambda^{s-1}}{sn^s}\sum_{j=1}^n \|{\mathbf w}^j\|_{\ast}^s
\right\}.
\end{eqnarray*}
It is a straightforward computation to obtain the optimal
$\bar\lambda=\frac{\|\K\|^{r-1}n}{C^{1/s}\left(\sum_j\|{\mathbf w}^j\|_{\ast}^s\right)^{1/s}}$, which gives an upper bound
$$\E_{\sigma_1,\ldots,\sigma_n} \sup_{x\in{\cal K}} \left| \sum_{j=1}^n \sigma_j \langle {\mathbf w}^j,x\rangle \right|\leq
\dfrac{1}{n^{1/r}}C^{1/s} \sup_{x\in \K}\|x\|\left(\frac{1}{n}\sum_{j=1}^n\|{\mathbf w}^j\|_{\ast}^s\right)^{1/s}.$$
By simply upper bounding the quantity above by $\varepsilon>0$,
we get a sample complexity bound for achieving
$\varepsilon$ accuracy in expectation, $n=\lceil C^{r/s}/\varepsilon^r \rceil$, where
$C\geq 1$ is any constant satisfying \eqref{unif_smooth}.
For the standard $\ell_p^d$-setup, i.e., where $(E,\|\cdot\|)=(\R^d,\|\cdot\|_p)$, by
the parameters of uniform convexity and uniform smoothness provided in Appendix
\ref{sec:unif_cvx}, we obtain the following bounds on sample complexity:
\begin{enumerate}
\item[(i)] For $p=1$, we have $r=s=2$ and $C=\ln d$, by using the
equivalent norm $\|\cdot\|_{p(d)}$. This implies that
$n=O\left(\dfrac{\ln d}{\varepsilon^2}\right)$ samples suffice.
\item[(ii)] For $1<p\leq 2$, we have $r=s=2$ and $C=q-1$. This implies
that $n=\left\lceil\dfrac{q-1}{\varepsilon^2}\right\rceil$ samples suffice.
\item[(iii)] For $2<p<\infty$, we have $r=p$, $s=q$ and $C=1$. This implies
that $n=\left\lceil\dfrac{1}{\varepsilon^p}\right\rceil$ samples suffice.
\end{enumerate}
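These bounds are easy to probe numerically; the following sketch (distribution and parameters ours, for illustration only) estimates $\E\big\|\bar {\mathbf w}^n -\bar w \big\|_q$ by Monte Carlo for ${\mathbf w}$ uniform on $\{\pm e_1,\ldots,\pm e_d\}$, whose true mean is $0$:
\begin{verbatim}
import numpy as np

def mean_estimation_error(d, n, q, trials=200, seed=0):
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(trials):
        idx = rng.integers(0, d, size=n)       # coordinate of each sample
        sgn = rng.choice([-1.0, 1.0], size=n)  # sign of each sample
        s = np.zeros(d)
        np.add.at(s, idx, sgn)                 # sum of the n samples
        errs.append(np.linalg.norm(s / n, ord=q))
    return float(np.mean(errs))
\end{verbatim}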
\section{Proof of Corollary \ref{cor:solve_cvx_ellp}}
\label{proof_solve_cvx_ellp}
Note that by Proposition \ref{Prop:Inexact_MD} in order to obtain an
$\varepsilon$-optimal solution to a non-smooth convex optimization problem it suffices
to choose $\eta=\varepsilon/2$, and
$T=\left\lceil r2^r L_0^r D_{\Psi}(\K)/\varepsilon^r\right\rceil$.
Since $\K\subseteq\B_p(R)$, to satisfy \eqref{ApproxSubgrad} it is sufficient
to have for all $y\in \B_p(R)$,
$$ \la \nabla F(x)-\tilde g(x), y \ra \leq \eta/2. $$
Maximizing the left hand side on $y$, we get a sufficient condition:
$\|\nabla F(x)-\tilde g(x)\|_q R \leq \eta/2$. We can satisfy this condition by solving
the mean estimation problem in $\ell_q$-norm with error
$\eta/[2L_0R]=\varepsilon/[4L_0 R]$
(recall that $f(\cdot,w)$ is $L_0$ Lipschitz w.r.t. $\|\cdot\|_p$).
Next, using the uniformly convex functions for $\ell_p$ from Appendix
\ref{sec:unif_cvx}, together with the bound on the number of queries and
error for the mean estimation problems in $\ell_q$-norm
from Section \ref{Subsec:L_q}, we obtain that the total number of queries and the type of queries we need for stochastic optimization in the non-smooth $\ell_p$-setup are:
\begin{itemize}
\item $p=1$: We have $r=2$ and $D_{\Psi}(\K)=\dfrac{e^2\ln d}{2}R^2$.
As a consequence, solving the convex program amounts to using
$O\left(d\cdot \left(\dfrac{L_0R}{\varepsilon}\right)^2 \ln d\right)$ queries to
${\mbox{STAT}}\left(\dfrac{\varepsilon}{4L_0R}\right)$.
\item $1< p< 2$: We have $r=2$ and $D_{\Psi}(\K)=\dfrac{1}{2(p-1)}R^2$.
As a consequence, solving the convex program amounts to using
$O\left(d\log d\cdot \dfrac{1}{(p-1)}\left(\dfrac{L_0R}{\varepsilon}\right)^2\right)$
queries to ${\mbox{STAT}}\left( \Omega\left(\dfrac{\varepsilon}{[\log d]L_0R}\right)\right)$.
\item $p=2$: We have $r=2$ and $D_{\Psi}(\K)=R^2$.
As a consequence, solving the convex program amounts to using
$O\left(d\cdot \left(\dfrac{L_0R}{\varepsilon}\right)^2\right)$
queries to ${\mbox{STAT}}\left( \Omega\left(\dfrac{\varepsilon}{L_0R}\right)\right)$.
\item $2<p<\infty$: We may choose $r=p$, $D_{\Psi}(\K)=\dfrac{2^{p-2}}{p}R^p$.
As a consequence, solving the convex program amounts to using
$O\left(d\log d\cdot 2^{2p-2}\left(\dfrac{L_0R}{\varepsilon}\right)^p\right)$
queries to ${\mbox{VSTAT}}\left(\left(\dfrac{64 L_0R \log d}{\varepsilon}\right)^p\right)$.
\end{itemize}
\hfill $\qed$
\section{Proof of Corollary \ref{cor:solve_smooth_cvx_ellp}}
\label{proof_solve_smooth_cvx_ellp}
Similarly to Appendix \ref{proof_solve_cvx_ellp}, given $x\in\K$,
we can obtain $\tilde g(x)$ by solving a mean estimation problem in the $\ell_q$-norm
with error $\varepsilon/[12L_0 R]$ (notice we have chosen $\eta=\varepsilon/6$).
Now, by Proposition \ref{prop:dAspremont}, in order to obtain an
$\varepsilon$-optimal solution it suffices to run the accelerated method for
$T=\left\lceil\sqrt{2L_1D_{\Psi}(\K)/\varepsilon}\right\rceil$ iterations,
each of them requiring $\tilde g$ as defined above.
By using the $2$-uniformly convex functions for $\ell_p$, with $1\leq p\leq 2$,
from Appendix \ref{sec:unif_cvx}, together with the bound on the number of
queries and error for the mean estimation problems in $\ell_q$-norm
from Section \ref{Subsec:L_q}, we obtain that the total number of queries and the type of queries we need for stochastic optimization in the smooth $\ell_p$-setup are:
\begin{itemize}
\item $p=1$: We have $r=2$ and $D_{\Psi}(\K)=\dfrac{e^2\ln d}{2}R^2$.
As a consequence, solving the convex program amounts to using
$O\left(d\cdot \sqrt{ \ln d\cdot\dfrac{L_1R^2}{\varepsilon} } \right)$ queries to
${\mbox{STAT}}\left(\dfrac{\varepsilon}{12L_0R}\right)$.
\item $1< p< 2$: We have $r=2$ and $D_{\Psi}(\K)=\dfrac{1}{2(p-1)}R^2$.
As a consequence, solving the convex program amounts to using
$O\left(d\log d\cdot \sqrt{\dfrac{1}{(p-1)}\cdot\dfrac{L_1R^2}{\varepsilon} }\right)$
queries to ${\mbox{STAT}}\left( \Omega\left(\dfrac{\varepsilon}{[\log d]L_0R}\right)\right)$;
\item $p=2$: We have $r=2$ and $D_{\Psi}(\K)=R^2$.
As a consequence, solving the convex program amounts to using
$O\left(d\cdot \sqrt{ \dfrac{L_1R^2}{\varepsilon} }\right)$
queries to ${\mbox{STAT}}\left( \Omega\left(\dfrac{\varepsilon}{L_0R}\right)\right)$.
\end{itemize}
\hfill $\qed$
\section{Uniform convexity, uniform smoothness and consequences}
\label{sec:unif_cvx}
A space $(E,\|\cdot\|)$ is $r$-uniformly convex if there exists a constant $0<\delta\leq1$
such that for all $x,y\in E$
\begin{equation} \label{unif_conv}
\|x\|^r+ \delta\|y\|^r \leq \dfrac{\|x+y\|^r + \|x-y\|^r }{2}.
\end{equation}
From classical inequalities (see, e.g., \cite{Ball:1994}) it is known that $\ell_p^d$ for $1<p<\infty$ is
$r$-uniformly convex for $r=\max\{2,p\}$. Furthermore,
\begin{itemize}
\item When $p=1$, the function $\Psi(x)=\frac{1}{2(p(d)-1)}\|x\|_{p(d)}^2$ (with
$p(d)=1+1/\ln d$) is $2$-uniformly convex w.r.t. $\|\cdot\|_1$;
\item When $1<p\leq2$, the function $\Psi(x)=\frac{1}{2(p-1)}\|x\|_p^2$ is
$2$-uniformly convex w.r.t. $\|\cdot\|_p$;
\item When $2<p<\infty$, the function $\Psi(x)=\frac{1}{p}\|x\|_p^p$ is $p$-uniformly
convex w.r.t. $\|\cdot\|_p$.
\end{itemize}
By duality, a Banach space $(E,\|\cdot\|)$ being $r$-uniformly convex is equivalent
to the dual space $(E^{\ast},\|\cdot\|_{\ast})$ being $s$-uniformly smooth,
where $1/r+1/s=1$. This means there exists a constant
$C\geq 1$ such that for all $w,z \in E^{\ast}$
\begin{equation} \label{unif_smooth}
\dfrac{\|w+z\|_{\ast}^s + \|w-z\|_{\ast}^s}{2} \leq \|w\|_{\ast}^s+C\|z\|_{\ast}^s.
\end{equation}
In the case of $\ell_p^d$ space we obtain that its dual $\ell_q^d$ is
$s$-uniformly smooth for $s=\min\{2,q\}$. Furthermore,
when $1<q\leq 2$ the norm $\|\cdot\|_q$ satisfies \eqref{unif_smooth}
with $s=q$ and $C=1$; when $2\leq q<\infty$, the norm $\|\cdot\|_q$
satisfies \eqref{unif_smooth} with $s=2$ and $C=q-1$.
Finally, observe that for $\ell_{\infty}^d$ we can use the equivalent
norm $\|\cdot\|_{q(d)}$, with $q(d)=\ln d +1$:
$$\textstyle
\|x\|_{\infty}\leq \|x\|_{q(d)} \leq
e \,\|x\|_{\infty},$$
and this equivalent norm satisfies \eqref{unif_smooth} with $s=2$ and $C=q(d)-1=\ln d$,
which grows only moderately with dimension.
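The constant $e$ in this equivalence comes from the standard comparison of $\ell_q$-norms on $\R^d$: for $q(d)=\ln d+1$,
$$\|x\|_{q(d)} \leq d^{1/q(d)}\,\|x\|_{\infty} = e^{\frac{\ln d}{\ln d+1}}\,\|x\|_{\infty} \leq e\,\|x\|_{\infty}.$$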
\subsection{Random walks}
\label{sec:random-walk}
We first show that a simple extension of the random walk approach of \citenames{Kalai and Vempala}{KalaiV06} and \citenames{Lovasz and Vempala}{LovaszV06} can be used to address this setting. One advantage of this approach is that to optimize $F$ it requires only access to approximate values of $F$ (such an oracle is also referred to as an approximate zero-order oracle). Namely, a $\tau$-approximate value oracle for a function $F$ is an oracle that for every $x$ in the domain of $F$, returns a value $v$ such that $|v - F(x)| \leq \tau$.
We note that the random walk based approach was also (independently\footnote{The statement of our result and proof sketch were included by the authors for completeness in the appendix of \cite[v2]{FeldmanPV:13}.}) used in a recent work of \citenames{Belloni \etal}{BelloniLNR15}. Their work includes an optimized and detailed analysis of this approach and hence we only give a brief outline of the proof here.
\begin{thm}
\label{thm:random-walk-zero}
There is an algorithm that with probability at least $2/3$, given any convex program $\min_{x \in \K} F(x)$ in $\R^d$ where $\forall x\in \K,\ |F(x)| \leq 1$ and $\K$ is given by a membership oracle with the guarantee that $ \B_2^d(R_0) \subseteq \K \subseteq \B_2^d(R_1)$, outputs an $\eps$-optimal solution in time $\poly(d, \frac{1}{\eps}, \log{(R_1/R_0)})$ using $\poly(d, \frac{1}{\eps})$ queries to $(\eps/d)$-approximate value oracle.
\end{thm}
\begin{proof}
Let $x^{\ast} = \argmin_{x \in \K} F(x)$ and $F^{\ast} = F(x^\ast)$. The basic idea is to sample from a distribution that has most of its measure on points with $F(x) \le F^{\ast} + \eps$. To do this, we use the random walk approach as in \cite{KalaiV06, LovaszV06} with a minor extension. The algorithm performs a random walk whose stationary distribution is proportional to $g_\alpha(x)=e^{-\alpha F(x)}$, with $g(x)=e^{-F(x)}$. Each step of the walk is a function evaluation. Noting that $e^{-\alpha F(x)}$ is a logconcave function, the number of steps is
$\poly(d, \log \alpha, \log(1/\beta))$ to get a point from a distribution within total variation distance $\beta$ of the target distribution.
\remove{
Further, as shown in \cite{KalaiV06}, a random point ${\mathbf x}$ from the target distribution satisfies:
\[
\E[F({\mathbf x})] \le F^* + d/\alpha.
\]
Thus, setting $\alpha = d/\eps$ suffices.}
Applying Lemma 5.1 from \cite{LovaszV06} (which is based on Lemma 5.16 from \cite{LV07}) with $B=2$ to $g_\alpha$ with $\alpha = 4(d+\ln(1/\delta))/\eps$, we have (note that $\alpha$ corresponds to $a_m= \frac{1}{B}(1+1/\sqrt{d})^m$ in that statement).
\equ{
\label{eq:walk-hit-prob}
\pr[g({\mathbf x}) < e^{-\eps}\cdot g(x^\ast)] \le \delta \left(\frac{2}{e}\right)^{d-1}.
}
Therefore, the probability that a random point ${\mathbf x}$ sampled proportionately to $g_\alpha(x)$ does not satisfy $F({\mathbf x}) < F^{\ast} + \eps$ is at most $\delta(2/e)^{d-1}$.
Now we turn to the extension, which arises because we can only evaluate $F(x)$ approximately through the oracle. We assume w.l.o.g.~that the value oracle is consistent in its answers (i.e., returns the same value on the same point). The value returned by the oracle
$\tilde{F}(x)$ satisfies $|F(x) - \tilde{F}(x)| \le \eps/d$. The stationary distribution is now proportional to $\tilde{g}_\alpha(x) = e^{-\alpha \tilde{F}(x)}$ and satisfies
\begin{equation}\label{density-ratio}
\frac{\tilde{g}_\alpha(x)}{g_\alpha(x)} = e^{-\alpha(\tilde{F}(x)-F(x))} \le e^{\alpha \frac{\eps}{d}} \le e^5.
\end{equation}
We now argue that with large probability, the random walk with the approximate evaluation oracle will visit a point $x$ where $F$ has value at most $F^\ast + \eps$. Assuming that a random walk gives samples from a distribution (sufficiently close to being) proportional to $\tilde g_\alpha$, from property (\ref{density-ratio}), the probability of the set $\{x \, : \, g(x) > e^{-\eps}\cdot g(x^\ast)\}$ is at most a factor of $e^{10}$ higher than for the distribution proportional to $g_\alpha$ (given in eq.~\eqref{eq:walk-hit-prob}). Therefore with a small increase in the number of steps a random point from the walk will visit the set where $F$ has value of at most $F^\ast + \eps$ with high probability. Thus the minimum function value that can be achieved is at most $F^{\ast} + \eps+2\eps/d$.
Finally, we need the random walk to mix rapidly for the extension.
Note that $\tilde{F}(x)$ is approximately convex, \ie for any $x,y \in \K$ and any $\lambda \in [0,1]$, we have
\begin{equation}\label{approx-convex}
\tilde{F}(\lambda x + (1-\lambda) y) \le \lambda \tilde{F}(x) + (1-\lambda)\tilde{F}(y) + 2\eps/d,
\end{equation}
and therefore $\tilde{g}_\alpha$ is a near-logconcave function that satisfies, for any $x,y \in \K$ and $\lambda \in [0,1]$,
\[
\tilde{g}_\alpha(\lambda x + (1-\lambda) y) \ge e^{-2\alpha\eps/d}\cdot \tilde{g}_\alpha(x)^\lambda \tilde{g}_\alpha(y)^{1-\lambda} \ge e^{-10}\cdot \tilde{g}_\alpha(x)^\lambda \tilde{g}_\alpha(y)^{1-\lambda}.
\]
As a result, as shown by
\citenames{Applegate and Kannan}{ApplegateK91}, it admits an isoperimetric inequality that is weaker than that for logconcave functions by a factor of $e^{10}$. For the grid walk, as analyzed by them, this increases the convergence time by a factor of at most $e^{20}$. The grid walk's convergence also depends (logarithmically) on the Lipschitz constant of $\tilde{g}_\alpha$. This dependence is avoided by the ball walk, whose convergence is again based on the isoperimetric inequality, as well as on local properties, namely on the $1$-step distribution of the walk. It can be verified that the analysis of the ball walk (e.g., as in \cite{LV07}) can be adapted to near-logconcave functions with an additional factor of $O(1)$ in the mixing time.
\end{proof}
Going back to the stochastic setting, let $F(x) = \E_D[f(x,{\mathbf w})]$. If $\forall w$, $f(\cdot,w) \in \F(\K,B)$ then a single query $f(x,w)$ to ${\mbox{STAT}}(\tau/B)$ is equivalent to a query to a $\tau$-approximate value oracle for $F(x)$.
\begin{corollary}
\label{cor:random-walk}
There is an algorithm that for any distribution $D$ over $\W$ and convex program $\min_{x \in \K}\{F(x) \doteq \E_{{\mathbf w} \sim D}[f(x,{\mathbf w})] \}$ in $\R^d$ where $\forall w$, $f(\cdot,w) \in \F(\K,B)$ and $\K$ is given by a membership oracle with the guarantee that $ \B_2^d(R_0) \subseteq \K \subseteq \B_2^d(R_1)$, with probability at least $2/3$, outputs an $\eps$-optimal solution in time $\poly(d, \frac{B}{\eps}, \log{(R_1/R_0)})$ using $\poly(d, \frac{B}{\eps})$ queries to ${\mbox{STAT}}(\eps/(dB))$.
\end{corollary}
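A single step of such a walk can be sketched as a Metropolis filter with stationary density proportional to $e^{-\alpha \tilde F(x)}$ (a simplified sketch; we use a Gaussian proposal for brevity, whereas the analyses cited above are for grid/ball walks with uniform proposals, and the step size $\delta$ and membership test are placeholders):
\begin{verbatim}
import numpy as np

def metropolis_step(x, F_tilde, alpha, delta, in_K, rng):
    # one step of a walk with stationary density ~ exp(-alpha * F_tilde)
    y = x + delta * rng.standard_normal(len(x))  # proposal
    if not in_K(y):                              # membership oracle for K
        return x
    accept = min(1.0, np.exp(-alpha * (F_tilde(y) - F_tilde(x))))
    return y if rng.uniform() < accept else x
\end{verbatim}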
\paragraph{SQ vs approximate value oracle:}
We point out that the $\tau$-approximate value oracle is strictly weaker than ${\mbox{STAT}}(\tau)$. This follows from a simple result of
Nemirovsky and Yudin \cite[p.360]{nemirovsky1983problem} who show that linear optimization over $\B_2^d$ with a $\tau$-approximate value oracle requires $\tau = O(\sqrt{\log q} \cdot \eps/\sqrt{d})$ for any algorithm using $q$ queries. Together with our upper bounds in Section \ref{sec:linear} this implies that the approximate value oracle is weaker than ${\mbox{STAT}}$.
\textit{Control} is a fundamental but difficult issue in multi-agent systems. A multi-agent society may be difficult to control due to the concurrence of several factors that may interact and drive the dynamics in complex, unpredictable ways. Some of these factors could include uncertainty about agent involvement \cite{reliability-games}, coalition formation \cite{chalkiadakis2007coalition}, the rules \cite{elkind2012manipulation}, the environment \cite{yokoo2005coalitional}, rewards \cite{ketchpel1994forming}, the presence (or lack) of synergies between players \cite{procaccia2014structure}, etc.
A common type of control is \textit{manipulation}\footnote{We use the word with its wider, commonsense meaning, rather than the specialized one from voting theory \cite{barriers-manipulation-voting}. Our usage encompasses both strategic behavior by an agent or coalition (voting theory ``manipulation'') and
interventions by a chair or outside agent (such as control and bribery in voting \cite{control-bribery-voting}). We assume, however, that all such interventions are costly.}, which often aims to change the power (index) of a given player by means of interventions in the settings or the dynamics of the agent society. Many types of manipulation have been considered in the literature, often in a computational social choice context. They include \textit{identity} \cite{aziz2011false}, \textit{cloning} \cite{elkind2011cloning} and \textit{quota} \cite{zuckerman2012manipulating} manipulation in voting games, \textit{collusion and mergers} \cite{lev2013mergers}, \textit{sybil attacks} \cite{vallee2014study} and, finally, \textit{multi-mode attacks} \cite{faliszewski2011multimode}, just to name a few.
We contribute to this direction by studying yet another natural mechanism for manipulation: \textbf{changing the propensity of players to participate in the game.} This type of manipulation is quite frequent in real-life situations, a central example being voting: while parties cannot control with absolute certainty voter turnout on election day, they may employ tactics that aim to mobilize their supporters and deter participation of their opponents' voters\footnote{Such scenarios are best modeled as \textit{multichoice voting games} \cite{felsenthal1997ternary}. However, since such games are \textit{multi-cooperative} (rather than cooperative) games \cite{bilbao2000bicooperative}, they fall outside of the scope of the present work, and will be dealt with in a subsequent paper.}. Manipulation could be performed by a centralized actor (like in the voting example), or
by a coalition of players \cite{conitzer2007elections}, strategically modifying their behavior (in our case their reliabilities) in response to a perceived dominance of a player whose power index they wish to decrease.
The main impetus for our work was \cite{bachrach2014cooperative}, where a model of strategic manipulation of player reliabilities was first investigated. Bachrach et al. \cite{bachrach2014cooperative} considered \textit{max games}. In these games each player possesses a weight; the value of a coalition is the maximum weight of a component of the coalition. They proved a ``no sabotage theorem'' for (the reliability extension of) max-games with a common failure probability. They remarked that manipulating player reliabilities can be studied in principle for all coalitional games, and asked for further investigations of this problem, in settings similar to the one we consider, i.e. under costly player manipulation. Given the negative results for max-games \cite{bachrach2014cooperative} and the fact that computing power indices is often intractable \cite{deng1994complexity}, we concentrate mostly on proving \textit{positive results}, showing that there exist natural scenarios where optimal attacks on power indices by manipulating players' reliabilities are easy to compute (and interesting). We hope that these positive results will encourage renewed interest (and research) on the scope and limits of reliability manipulation.
\textbf{Contributions and outline} In Section~\ref{stwo} we begin by informally stating the problem and justifying our choice of the two classes of coalitional games studied in this paper: \textit{network centrality games} \cite{suri2008determining,aadithya2010game,michalak2013efficient,tarkowski2017game} and \textit{credit attribution games} \cite{papapetrou2011shapley,karpov2014equal}. Even though credit attribution games may seem to be somewhat exotic or of limited use, their importance extends well beyond scientometry: they were, in fact, anticipated as \textit{hypergraph games} (see \cite{deng1994complexity}, Section 3). The two games we consider from this class, \textit{full credit} and \textit{full obligation} games, are natural examples of read-once marginal contribution (MC) nets \cite{elkind2009tractable}. Full credit games are equivalent to the subclass of basic MC-nets \cite{ieong2005marginal} whose rules are \textit{conjunctions of positive variables}; full obligation games correspond to generalized MC-nets whose rules consist of \textit{disjunctions of positive variables}. Full obligation games can simulate \textit{induced subgraph games} \cite{deng1994complexity}. Full credit games capture an important subclass of coalitional skill games (CSG) \cite{bachrach2008coalitional,bachrach2013computing}, that of CSG games with tasks consisting of a single skill.
Section~\ref{sthree} contains technical details and precise specifications of the models we investigate. We deal with two types of attacks: (node) \textit{removal}, where we are allowed to remove (decrease to zero the reliability of) certain nodes, and \textit{fractional attacks}, where reliability probabilities can be altered continuously.
In Section~\ref{sfive} we give closed-form formulas for the Shapley values of the reliability extensions of network centrality and credit allocation games. Next we particularize these results to centrality games on specific network: first we show that no removal attack is beneficial; as for fractional attacks, we show that in the complete graph $K_{n}$ or when attacking the center of the star graph $S_{n}$, a greedy approach works: one should increase the reliabilities of neighbors
of the attacked node, in descending order of baseline reliabilities. When attacking a non-center player in $S_n$ the result is similar, with the important exception that increasing the reliability of the center should precede all other moves.
In contrast, the situation for the cycle graph $C_{n}$ is more complex, involving all distance-two neighbors of the attacked node. A simple characterization is provided for the optimum as \textit{the best of four fixed ``greedy'' solutions}. This characterization allows the determination of the optimum for all combinations of reliability values and budget.\footnote{The precise formula for the optimum is cumbersome, hence deferred to the full version.}
An interesting, and unintuitive, qualitative feature of the result is that in the optimal attack \textit{a non-neighbor of the attacked node could be targeted \textbf{before} some of the direct neighbors of the attacked node}.
In Section~\ref{ssix} we analyze full credit and full obligation games. Although these two games have the same Shapley value \cite{karpov2014equal}, we show that \textit{they behave very differently with respect to attacks}: removal attacks are not beneficial for full credit games, NP-hard for full obligation games. Fractional attacks also behave differently, modifying probabilities in opposite directions. In a particular setting which includes the case of induced subgraph games we obtain greedy algorithms for both games, derived from expressing the problems as fractional knapsack problems. The determining quantities for the attack orders are (two different) cost-benefit measures.
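The greedy scheme underlying both algorithms is the textbook fractional-knapsack rule, sketched below; the game-specific content lies entirely in how the benefit and cost of targeting each player are computed, which is done in Section~\ref{ssix}.
\begin{verbatim}
def fractional_knapsack(items, budget):
    # items: (benefit, cost) pairs with cost > 0; take items greedily
    # by benefit/cost ratio, taking the last affordable item fractionally
    total = 0.0
    for benefit, cost in sorted(items,
                                key=lambda it: it[0] / it[1],
                                reverse=True):
        frac = min(1.0, budget / cost)
        total += frac * benefit
        budget -= frac * cost
        if budget <= 0:
            break
    return total
\end{verbatim}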
\section{Problem Statement and Choice of Games}
\label{stwo}
The \textit{power index attack problem}, the main problem of interest in this paper, has the following simple informal statement: we consider the reliability extension of a cooperative game. We are given a positive budget $B>0$ and are allowed to modify reliabilities of all nodes, other than the targeted player $x$, as long as the total cost incurred is at most $B$. Which nodes should we target, and how should we change their reliabilities, in order to decrease as much as possible the Shapley value of node $x$?
A variant of the previous problem, called the \textit{pairwise power index attack problem} and motivated by Example~\ref{ex:hirsch} below, is the following: we are given not one but \textit{two} players $x,y$. The goal is to decrease as much (within the budget) the Shapley value of $x$, while not affecting at all the Shapley value of $y$. This restriction makes some nodes exempt from attacks: we are not allowed to change the reliabilities of players who contribute to the Shapley value of $y$.
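In symbols (the notation here is ours): writing $\varphi_{x}(\mathbf{r})$ for the Shapley value of $x$ in the reliability extension with reliability vector $\mathbf{r}$, $\mathbf{r}^{0}$ for the baseline reliabilities and $c(\mathbf{r}^{0},\mathbf{r})$ for the cost of the modification, the power index attack problem is
$$\min_{\mathbf{r}}\ \varphi_{x}(\mathbf{r})\quad\text{subject to}\quad c(\mathbf{r}^{0},\mathbf{r})\leq B,\quad r_{x}=r^{0}_{x},$$
and the pairwise variant adds the constraint $\varphi_{y}(\mathbf{r})=\varphi_{y}(\mathbf{r}^{0})$.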
\textbf{Choice of games} The problems described above could be investigated in all classes of TU-cooperative games, or compact representation frameworks. However, we feel that the most compelling cases are those where the computation of power indices, e.g. the Shapley value, of (the reliability extensions) of our games is tractable\footnote{This requirement disqualifies many natural candidate games such as \textit{weighted voting games} \cite{deng1994complexity,matsui2001np}, as well as most subclasses of \textit{coalitional skill games} \cite{aziz2009power}}. In other words \textit{the intractability of manipulating a power index should \textit{not} be a consequence of our inability to compute these indices}. In particular, we are interested in scenarios where computing power indices is easy, but computing an optimal attack on them is hard. Theorem~\ref{full-obligation} below provides such an example.
The appeal of studying attacks on node centrality in social networks is quite self-evident: game-theoretic
concepts such as those considered in \cite{suri2008determining,aadithya2010game,michalak2013efficient,tarkowski2017game} formalize appealing notions of leadership in social situations. They have been proposed as tools for identifying key actors, with applications e.g. to terrorist networks \cite{michalak2015defeating,lindelauf2013cooperative}. In such a setting, a direct (physical) attack on a leading node may be infeasible. Instead, one could attempt to indirectly affect its status (centrality), by incentivizing some of its peers.
Relevant examples of targeting nodes in order to affect power indices arose (implicitly) in even earlier work \cite{papapetrou2011shapley}, that attempted to develop coalitional models of credit allocation in scientific work. The following is a version of the example in \cite{papapetrou2011shapley}:
\begin{example} \textit{Two scientists $A,B$ are compared with respect to their publication record\footnote{We \textbf{do not} condone and caution against the real-life use of such crude quantitative metrics for tasks like the one described in this example or our models.}. All their papers have exactly one co-author. Figure~\ref{hirsch} displays this information as a graph, listing, for each author pair, the number of publications they have coauthored and the number of citations. If using the Hirsch index, it would seem that candidate $A$ has a better track record than candidate $B$. But if we discard the publications both of them have co-written with ``famous scientist $Y$'' (that is, \textbf{remove $Y$ and its publications from consideration}), then their relative ranking would be reversed.}
The authors of \cite{papapetrou2011shapley} attempted to use the Shapley value of a game based on the Hirsch index for credit allocation. A later, more general and cleaner game-theoretic approach is that of \cite{karpov2014equal}. The author defines several \textit{credit allocation games}, and uses their (identical) Shapley values as a measure of individual publication record. Slightly modified versions of this measure have (regrettably) actually been used in some countries to set minimum publication thresholds for access and promotion to academic positions, e.g. the minimal standards in Romania.
In such a context one could naturally ask the following question: \textit{what are the top $k$ coauthors that account for most of a scientist's publication record?} When using the game-theoretic framework for scientific credit from \cite{karpov2014equal}, this is equivalent to finding the $k$ coauthors whose removal (together with the joint papers) causes the scientist's Shapley value to decrease the most.
Collaborations may, however, be genuinely productive or just bring to one of the scientists the benefits of association with leading scientists\footnote{One could argue, of course, that such an association itself reflects positively on the scientist. But the opposite argument, that prestige drives scientific inequality, has recently been substantiated by real data \cite{Morgan2018} and is, at the very least, hard to ignore.}. The Shapley value approach of \cite{karpov2014equal} does not distinguish between these two scenarios, as it gives equal credit to all authors of a joint paper, irrespective of ``leadership status''. Recent work, e.g. Hirsch's \textit{alpha index} \cite{hirsch2018mathbf}, has attempted to quantify ``scientific leadership''. It is possible to define a measure based on the reliability extension of credit allocation games that factors out the ``well connectedness'' of an individual from its score\footnote{The measure computes appropriate values of reliability probabilities, the lower the probability the more of a ``scientific leader'' a coauthor is; we are currently investigating the practicality of such an approach.}. Given such a measure, the previous question, that of finding the top-$k$ co-authors, is still interesting, as it \textit{identifies the most (genuinely) fruitful collaborations of a given author, irrespective of status.} This is modeled by the power index attack problem in credit allocation games.
\label{ex:hirsch}
\end{example}
\begin{figure}[h]
\scalebox{0.7}{
\begin{tikzpicture}[node distance=4.5cm, square/.style={regular polygon,regular polygon sides=4},line width=0.75mm, scale=0.3]
\node[minimum size=1.5cm, draw,circle, fill=gray!5] (A) {$A$};
\node[minimum size=1.5cm, draw,square,fill=gray!5,right of=A] (Y) {$Y$};
\node[minimum size=1.5cm, draw,circle,fill=gray!5,right of=Y] (B) {$B$};
\node[minimum size=1.5cm, draw,square,fill=gray!5,above left=1.5cm and 1cm of A] (X1) {$X1$};
\node[minimum size=1.5cm, draw,square,fill=gray!5,above right=1.5cm and 1cm of A] (X2) {$X2$};
\node[minimum size=1.5cm, draw,square,fill=gray!5,below left=1.5cm and 1cm of A] (X3) {$X3$};
\node[minimum size=1.5cm, draw,square,fill=gray!5,above right=1.5cm and 1cm of B] (X4) {$X4$};
\node[minimum size=1.5cm, draw,square,fill=gray!5,below left=1.5cm and 1cm of B] (X5) {$X5$};
\node[minimum size=1.5cm, draw,square,fill=gray!5,below right=1.5cm and 1cm of B] (X6) {$X6$};
\path[-,draw,thick]
(A) edge node [above, align=center] {$P:5$\\$C:4$} (Y)
(Y) edge node [above, align=center] {$P:8$\\$C:8$} (B)
(A) edge node [above right, align=center] {$P:5$\\$C:4$} (X1)
(A) edge node [above left, align=center] {$P:5$\\$C:3$} (X2)
(A) edge node [above left, align=center] {$P:5$\\$C:3$} (X3)
(B) edge node [below right, align=center] {$P:5$\\$C:4$} (X4)
(B) edge node [above left, align=center] {$P:5$\\$C:3$} (X5)
(B) edge node [above right, align=center] {$P:5$\\$C:3$} (X6);
\end{tikzpicture}}
\caption{The scenario (from \cite{papapetrou2011shapley}) in Example~\ref{ex:hirsch}}
\label{hirsch}
\end{figure}
\section{Technical Details}
\label{sthree}
We will be working in the framework of Algorithmic Cooperative Game Theory, see
\cite{chalkiadakis2011computational} for a readable introduction.
We will make use of notation $f\rvert^{b}_{a}$ as a shorthand for $f(b)-f(a)$.
Given graph $G=(V,E)$ and vertex $v\in V$, we will denote by $N(v)$ the set of neighbors of $v$, and write $\widehat{N(v)}=\{v\}\cup N(v)$. Given $S\subseteq V$, we denote by $\delta(S)$ the set of nodes $y\in V\setminus S$ such that there exists $x\in S$ with $(x,y)\in E$. We generalize the setting above to the case when $G$ is a \textit{weighted graph}, i.e. there exists a weight function $w:E\rightarrow \mathbb{R}_{+}$. Given a set $S\subseteq V$ and an integer $r\geq 1$ we define $B_{w}(S,r)$, \textit{the ball of radius $r$ around $S$}, to be the set $B_{w}(S,r)=\{x\in V: (\exists y\in S)\mbox{ s.t. } d_{w}(x,y)\leq r\}$. We may omit $w$ from this notation when it is simply the graph distance in $G$.
Also, given "cutoff" distance $d_{cut}$ we define $N_{cut}(x)=B(\{x\},d_{cut})$.
We will deal with cooperative games with transferable utility, that is, pairs $\Gamma=(N,v)$ where $N$ is a set of \textit{players} and $v:\mathcal{P}(N)\rightarrow \mathbb{R}_{+}$ is a \textit{value function} defined on subsets of $N$. If $S\subseteq N$ is a set of players, $v(S)$ is the value that players in coalition $S$ can guarantee for themselves irrespective of the other players' participation.
Although we could prove similar results for other power indices, e.g. the Banzhaf value, in this paper we restrict ourselves to
the \textit{Shapley value}. This index measures the portion of the grand coalition value $v(N)$ that a given player $x\in N$ could fairly request for itself. It is given by the formula \cite{chalkiadakis2011computational}
\[
Sh[v](x) = \frac{1}{n!} \cdot \sum_{\pi\in S_n} [v(S^{x}_{\pi} \cup \{x\}) - v(S^{x}_{\pi})],
\]
where $S^{x}_{\pi} = \{\pi[i] \mid \pi[i] \mbox{ precedes } x \mbox{ in } \pi\}$ and $S_{n}$ is the set of permutations of $N$.
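For concreteness, the following Python sketch (ours, not part of the formal development; the function name \texttt{shapley} and the representation of coalitions as frozensets are our own conventions) computes the Shapley value of a small TU-game by direct permutation enumeration. It is exponential-time and intended only to sanity-check the closed-form formulas below on toy instances.
\begin{verbatim}
# Brute-force Shapley value of a TU-game; v maps frozensets of
# players to reals. Exponential in |players|: toy instances only.
from itertools import permutations
from math import factorial

def shapley(players, v):
    sh = {x: 0.0 for x in players}
    for pi in permutations(players):
        S = set()
        for x in pi:                  # players join in the order pi
            sh[x] += v(frozenset(S | {x})) - v(frozenset(S))
            S.add(x)
    n_fact = factorial(len(players))
    return {x: sh[x] / n_fact for x in players}
\end{verbatim}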
We are concerned with two classes of cooperative games. The first one arose from efforts to define game-theoretic notions of \textit{network centrality} \cite{suri2008determining,aadithya2010game,michalak2013efficient,tarkowski2017game}.
We define these games as follows:
\begin{itemize}
\item[-] Game $\Gamma_{NC_1}$ is specified by its value function $v_{NC_1}(S)=|S\cup \delta(S)|$.
\item[-] Given integer $k\geq 1$, game $\Gamma_{NC_2}$ is specified by its value function $v_{NC_2}(S)=|S\cup \{x\not \in S\mbox{ s.t. } | N(x)\cap S|\geq k \}|$.
\item[-] In game $\Gamma_{NC3}$ graph $G$ is \textit{weighted}. We are also given a positive ``cutoff distance'' $d_{cut}$. We give the characteristic function $v_{NC_3}$ by $v_{NC_3}(S)=|B(S,d_{cut})|.$
\end{itemize}
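These three value functions are direct to transcribe into code. The sketch below is ours and assumes the \texttt{networkx} library; for $\Gamma_{NC_3}$ it assumes edge weights stored under the key \texttt{'weight'} (missing weights default to $1$, recovering the unweighted graph distance).
\begin{verbatim}
import networkx as nx

def v_nc1(G, S):               # |S u delta(S)|
    S = set(S)
    return len(S | {y for x in S for y in G.neighbors(x)})

def v_nc2(G, S, k):            # S plus nodes with >= k neighbors in S
    S = set(S)
    dominated = {y for y in G if y not in S
                 and len(S & set(G.neighbors(y))) >= k}
    return len(S | dominated)

def v_nc3(G, S, d_cut):        # |B_w(S, d_cut)|, weighted distances
    ball = set()
    for x in S:
        ball |= set(nx.single_source_dijkstra_path_length(
            G, x, cutoff=d_cut, weight='weight'))
    return len(ball)
\end{verbatim}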
A second class of games, related to the example in \cite{papapetrou2011shapley}, is that of \textit{credit-attribution games}, formally defined by Karpov \cite{karpov2014equal}. A \textit{credit-attribution game} is formalized by a set of authors $N = \{1, . . ., n\}$ and a set of publications $P = \{P_1, . . ., P_m\}$. Each paper $P_{j}$ is naturally endowed with a set of \textit{authors} $Auth_j \subseteq N$ and a \textit{quality score} $w_{j}\in \mathbb{R}_{+}$. In real-life scenarios the quality measure could be 1 (i.e. we simply count papers), a score based on the ranking of the venue the paper was published in, the number of its citations, or even some iterative, PageRank-like variant of the above measures.
\begin{itemize}
\item[-] \textit{The full credit game} $\Gamma_{FC}$ is specified by its value function $v_{FC}(S)$ which is simply the sum of weights of papers whose authors' list contains \textbf{at least one member from $S$.}
\item[-] \textit{The full obligation game} $\Gamma_{FO}$ is specified by its value function $v_{FO}(S)$ which is the sum of weights of papers whose authors \textbf{are all members of $S$.}
\end{itemize}
Denote by $Pap_{x}$ the set of papers of $x$, and by $CA(x)$ the set of co-authors of $x$, i.e. the set of players $l$ for which there exists a paper $k\in Pap_{x}\cap Pap_{l}$. If $l\in CA(x)$, denote by
$C(x,l)= \sum\limits_{k \in Pap_x\cap Pap_{l}} w_{k}$
\textit{the joint contribution} of $x$ and $l$.
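These definitions are also easy to transcribe. The sketch below (ours) represents an instance under a hypothetical data layout, a list of (author set, weight) pairs.
\begin{verbatim}
# papers: list of (set_of_authors, weight) pairs, e.g.
# papers = [({'A', 'Y'}, 5.0), ({'Y', 'B'}, 8.0)]
def v_fc(papers, S):      # full credit: at least one author in S
    S = set(S)
    return sum(w for authors, w in papers if authors & S)

def v_fo(papers, S):      # full obligation: all authors in S
    S = set(S)
    return sum(w for authors, w in papers if authors <= S)

def joint_contribution(papers, x, l):      # C(x, l)
    return sum(w for authors, w in papers
               if x in authors and l in authors)
\end{verbatim}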
\iffalse
A \textit{coalitional skill game} (CSG) is specified by a set of {\em agents}, $I =\{ 1 , \ldots , n \}$, a set of \textit{skills} $S = \{ s_1 , \ldots , s_k \}$, a set of \textit{tasks} $T = \{ t_1 , \ldots , t_m \}$ and a {\em task value function} $u:\mathcal{P}(T)\rightarrow {\bf R}_{+}$. Each agent $i$ has a set of \textit{skills} $S_{i} \subseteq S$. Each task $t_j$ requires a set of skills $T_{j}\subseteq S$. For $C\subseteq I$, $S(C) = \cup_{j\in C} S_{j}$ is the {\em set of skills of coalition C}. Coalition $C$ {\em can perform task $t_j$} if $T_j \subseteq S (C)$ . The set of tasks coalition $C$ can perform will be denoted by $T(C)$. The value function $v(C)$ is defined as $v(C)=u(T(C))$.
A credit attribution game is simply a special case of a CSG where each task has only one skill. Skills/tasks in CSG correspond to papers in credit attribution games, authors of a paper corresponding to the agents that possess that skill. On the other hand edges in induced subgraph games correspond to papers with just two authors in credit allocation games.
Full-obligation games are a subclass of basic MC-nets \cite{ieong2005marginal}. More precisely, they are equivalent to MC-nets in which all formulas are conjunctions of positive variables only. Full-credit games are a subclass of read-once extended MC-nets, being equivalent to MC-nets in which all formulas are disjunctions of positive variables only.
\fi
\textbf {Reliability extension and attack models} We will be working within the framework of \textit{reliability extension} of games, first defined in \cite{reliability-games} and further investigated in \cite{bachrach2014cooperative}. The {\em reliability extension} of cooperative game $G=(N,v)$ with parameters $(p_{1},p_{2},\ldots, p_{n})$ is the cooperative game $\Gamma=(N,\overline{v})$ with $\overline{v}(S)=\sum\limits_{T\subseteq S} v(T)\cdot \Pi_{T,S}$, $\mbox{ where } \Pi_{T,S}=(\prod\limits_{i\in T} p_{i}) \cdot (\prod\limits_{i\in S\setminus T} (1-p_{i})).$
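Computationally, the reliability extension can be evaluated by enumerating the $2^{|S|}$ live subsets of $S$. The following sketch (ours, exponential-time, for cross-checking only) turns a value function $v$ and a reliability dictionary $p$ into $\overline{v}$.
\begin{verbatim}
from itertools import chain, combinations

def powerset(S):
    S = list(S)
    return chain.from_iterable(combinations(S, r)
                               for r in range(len(S) + 1))

def reliability_extension(v, p):
    def v_bar(S):
        total = 0.0
        for T in powerset(S):
            T = frozenset(T)
            pi_TS = 1.0            # Pi_{T,S}
            for i in S:
                pi_TS *= p[i] if i in T else (1.0 - p[i])
            total += v(T) * pi_TS
        return total
    return v_bar
\end{verbatim}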
A useful result about these quantities is:
\begin{claim} Let $S\subseteq W$. We have
\begin{displaymath}
\frac{\partial \Pi_{S,W}}{\partial p_{j}}= \left\{\begin{array}{l}
\Pi_{S\setminus j,W\setminus j}\mbox{ if }j\in S,\\
-\Pi_{S,W\setminus j}\mbox{ if }j\in W\setminus S, \mbox{ and } \\
0, \mbox{ if }j\not \in W
\end{array}
\right.
\end{displaymath}
\label{aux2}
\end{claim}
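The claim is easy to verify numerically. Below, a finite-difference check of the case $j\in S$ on a toy instance (our own sketch; \texttt{Pi} implements $\Pi_{T,S}$, and the specific numbers are arbitrary).
\begin{verbatim}
from math import prod

def Pi(T, S, p):          # Pi_{T,S} for frozensets T <= S
    return prod(p[i] for i in T) * prod(1.0 - p[i] for i in S - T)

p = {1: 0.3, 2: 0.7, 3: 0.5}
T, S, j, h = frozenset({1, 2}), frozenset({1, 2, 3}), 1, 1e-6
ph = dict(p); ph[j] += h
numeric = (Pi(T, S, ph) - Pi(T, S, p)) / h
exact = Pi(T - {j}, S - {j}, p)    # claim, case j in S
assert abs(numeric - exact) < 1e-4
\end{verbatim}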
\vspace{-0.4cm}
We will consider in the sequel the following two attack models:
\begin{itemize}
\item[(1). ] \textit{fractional attack:} In this type of attack every node $j$ different from the attacked node $x$ has a \textit{baseline reliability} $p^{*}_{j}\in (0,1]$. We are allowed to manipulate the reliability of each such node $j\neq x$ by changing it from $p^{*}_{j}$ to an arbitrary value $p$. To do so we will incur, however, a cost $u_{j}(p)$. We assume that the cost function $u_{j}(\cdot)$ is defined and has a unique zero\footnote{There is no cost for keeping the baseline reliability.} at $p=p^{*}_{j}$, is decreasing and linear on $[0,p^{*}_{j}]$, and increasing and linear on $[p^{*}_{j},1]$ (Figure~\ref{util}). That is: for every player $j\neq x$ there exist values $L_{j},R_{j}>0$ such that
\[
u_{j}(p)=\left\{\begin{array}{ll}
L_{j}(p^{*}_{j}-p), & \mbox{ if } p<p^{*}_{j}, \\
0, & \mbox{ if } p=p^{*}_{j}, \\
R_{j}(p-p^{*}_{j}), & \mbox{ if } p>p^{*}_{j}.
\end{array}
\right.
\]
\item[(2). ] \textit{removal attack:} In this type of attack we are only allowed to change the reliability of any node $j$ (different from the targeted node $x$) from $p^{*}_{j}$ to $0$. Doing so incurs a cost $c_{j}$.
\end{itemize}
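In code, the fractional cost model is a one-line case distinction (a sketch; the function name is ours):
\begin{verbatim}
def fractional_cost(p, p_star_j, L_j, R_j):
    # piecewise-linear cost u_j(p) of moving reliability j from
    # its baseline p_star_j to p; zero exactly at the baseline
    if p < p_star_j:
        return L_j * (p_star_j - p)
    return R_j * (p - p_star_j)
\end{verbatim}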
\textbf{A basis for fractional attacks} The following simple result will be used to analyze fractional attacks in network centrality games:
\begin{lemma}["IMPROVING SWAPS"]
Let $D$ be an open set in $\mathbb{R}^{n}$, let $x=(x_{1},\ldots, x_{n})\in D$ and $f:D\rightarrow \mathbb{R}$ be an analytic function. Assume $1\leq i,j\leq n$ are indices such that $\frac{\partial f(x_{1},\ldots, x_{n})}{\partial x_{i}} > \frac{\partial f(x_{1},\ldots, x_{n})}{\partial x_{j}}. $ Define $x_{i,j}(\epsilon)=(x_{k}(\epsilon))$, with
\begin{equation}
x_{k}(\epsilon)=\left\{\begin{array}{ll}
x_{k}+\epsilon, & \mbox{ if } k=j, \\
x_{k}-\epsilon, & \mbox{ if } k=i, \\
x_{k}, & \mbox{otherwise.}
\end{array}
\right.
\label{change}
\end{equation}
Then there exists $\epsilon_{0}>0$ such that function
$g:[0,\epsilon_{0}]\rightarrow \mathbb{R}$, $g(\epsilon)=f(x_{i,j}(\epsilon))$ is monotonically decreasing.
\label{aux}
\end{lemma}
In other words, to decrease the value of $f$ one can lower a variable with a larger partial derivative while symmetrically raising one with a smaller partial derivative.
\vspace{-3mm}
\begin{proof}
By the chain rule $g^{\prime}(0)=\sum\limits_{k=1}^n \frac{\partial f(x_{1},\ldots, x_{n})}{\partial x_{k}} \frac{\partial x_{k}(\epsilon)}{\partial \epsilon}|_{\epsilon = 0}$
\begin{align*}
& = \frac{\partial f(x_{1},\ldots, x_{n})}{\partial x_{j}} - \frac{\partial f(x_{1},\ldots, x_{n})}{\partial x_{i}}<0.
\end{align*}
Since $g^{\prime}$ is continuous, $g^{\prime}$ is strictly negative on some interval $[0,\epsilon_{0}]$. The result follows.
\end{proof}
\vspace{-0.7cm}
\begin{figure}[ht]
\begin{tikzpicture}[scale = 1]
\begin{axis}[ymin=0, ymax=1, xmin=0, xmax=1, enlargelimits=false, xlabel=$p$, ylabel={$u_{i}(p)$}, grid=both, grid style={line width=.1pt, draw=gray!10}, major grid style={line width=.2pt,draw=gray!50}]
\addplot[mark=*,black] plot coordinates {(0,0.3) (0.4,0) (1,0.7)};
\addplot[mark=] coordinates {(0.4,0)} node[pin=90:{$p^{*}_{i}=0.4$}]{} ;
\end{axis}
\end{tikzpicture}
\vspace{-0.4cm}
\caption{Shape of utility functions in fractional attacks.}
\label{util}
\end{figure}
\vspace{-0.5cm}
\section{Closed-form formulas}
\label{sfive}
The basis for our manipulation of network centralities is the following characterization of the Shapley value of the reliability extension:
\begin{theorem} The Shapley values of the reliability extensions of network centrality games $\Gamma_{NC_1}, \Gamma_{NC_2},\Gamma_{NC_3}$ have the formulas:
\begin{displaymath}
Sh[\overline{v_{NC_1}}](x)= p_{x} \mathlarger{\sum}\limits_{\stackrel{y\in \widehat{N(x)}}{S\subseteq \widehat{N(y)}\setminus x}} \frac{1}{|S| + 1} \Pi_{S, \widehat{N(y)}\setminus x}
\end{displaymath}
\begin{align*}
& Sh[\overline{v_{NC_2}}](x)=p_{x}[\mathlarger{\sum}\limits_{y\in N(x)} \mathlarger{\sum}\limits_{\stackrel{S\subseteq \widehat{N(y)}\setminus x}{|S|\geq k-1}} \frac{(|S|+1-k)}{|S|(|S| + 1)} \Pi_{S, \widehat{N(y)}\setminus x}+\\
& + \mathlarger{\sum}\limits_{S\subseteq N(x)}\frac{k}{|S| + 1} \Pi_{S, N(x)} ]\\
& Sh[\overline{v_{NC_3}}](x)= p_{x} \mathlarger{\sum}\limits_{\stackrel{y\in \widehat{N(x)}}{S\subseteq \widehat{N_{cut}(y)}\setminus x}} \frac{1}{|S| + 1} \Pi_{S, \widehat{N_{cut}(y)}\setminus x}
\end{align*}
\label{sh-gamma1}
\end{theorem}
\vspace{-5mm}
As for credit attribution games, the corresponding result is:
\begin{theorem}
The Shapley values of the reliability extensions of $\Gamma_{FC},\Gamma_{FO}$ with
probabilities $(p_{1},p_{2},\ldots, p_{n})$ have the formulas
\begin{equation} \label{sh:rel:1}
Sh[\overline{v_{FC}}](x)=p_{x}\cdot \sum\limits_{k\in Pap_{x}} w_{k}\cdot \big[\sum_{S\subseteq Auth_{k}\setminus \{x\}}\frac{ \Pi_{\emptyset,S} }{(n_{k}-|S|){{n_{k}}\choose {|S|}}} \big]
\end{equation}
where $Auth_{k}$ is the set of authors of paper $k$ and $n_{k}=|Auth_{k}|$, and
\begin{equation} \label{sh:rel:2}
Sh[\overline{v_{FO}}](x)= \sum\limits_{k\in Pap_{x}} \frac{w_{k}}{n_{k}}\cdot \Pi_{Auth_{k},Auth_{k}}
\end{equation}
\label{k=1}
\end{theorem}
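Equation~(\ref{sh:rel:2}) is straightforward to implement, and can be cross-checked against brute force using the \texttt{shapley}, \texttt{v\_fo} and \texttt{reliability\_extension} sketches above (the toy instance below is ours; we verified by hand that both sides equal $2.3$ on it).
\begin{verbatim}
from math import prod

def sh_fo_closed_form(papers, p, x):    # equation (sh:rel:2)
    return sum(w / len(authors) * prod(p[i] for i in authors)
               for authors, w in papers if x in authors)

papers = [({'A', 'Y'}, 5.0), ({'Y', 'B'}, 8.0), ({'A', 'B'}, 2.0)]
p = {'A': 1.0, 'Y': 0.6, 'B': 0.8}
players = sorted({a for authors, _ in papers for a in authors})
v_bar = reliability_extension(lambda T: v_fo(papers, T), p)
assert abs(shapley(players, v_bar)['A']
           - sh_fo_closed_form(papers, p, 'A')) < 1e-9
\end{verbatim}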
\section{Attacking network centralities}
The next result follows from Theorem~\ref{sh-gamma1} and Claim~\ref{aux2}:
\begin{corollary} In the reliability extensions of the centrality games $\Gamma_{NC_1}, \Gamma_{NC_2}, \Gamma_{NC_3}$, the Shapley values of player $1$ are monotonically decreasing functions of the distance-two neighbors' reliabilities (and do not depend on the reliabilities of other players).
\label{centrality}
\end{corollary}
\begin{proof}
Deferred to the full version.
\end{proof}
The previous corollary shows that for network centrality games no removal attack is
beneficial:
\begin{theorem}
No removal attack on the centrality of a player in games $\Gamma_{NC_1}, \Gamma_{NC_2}, \Gamma_{NC_3}$ can decrease its Shapley value.
\end{theorem}
\textbf{Fractional attacks on specific networks} Given that removal attacks are not beneficial, we now turn to fractional attacks. The objective of this section is to show that
the analysis of optimal fractional attacks is often feasible. Since the graphs in this section are fairly symmetric, we will assume
(for these examples) that the slopes of all utility curves are identical. That is, there exist positive constants $L,R$ such that if $i\neq j$ are different agents then $L_{i}=L_{j}=L$ and $R_{i}=R_{j}=R$ (though, of course, baseline probabilities $p^{*}_{i}$ and $p^{*}_{j}$ may differ).
The graphs we are going to be concerned with are the complete graph $K_{n}$, the star graph $S_{n}$ (where node 1 is either the center or an outer node) and the $n$-cycle $C_{n}$ (Figure~\ref{fig:completeGraph}).
\begin{figure}[ht]
\hspace{-3mm}
\begin{minipage}{.22\textwidth}
\begin{center}
\begin{tikzpicture}[scale=0.090, ->,>=stealth',level/.style={sibling distance = 3cm/#1,
level distance = 1cm}]
\def 8 {6}
\pgfmathparse{1 * (360 / 8) - (360 / 8)};
\node[circle,draw,inner sep=0pt,minimum size=4pt,label=\pgfmathresult:1] (N-1) at (\pgfmathresult:5.4cm){};
\foreach \x in {2,...,8}{
\pgfmathparse{\x * (360 / 8) - (360 / 8)};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=\pgfmathresult:{$p^{*}_{\x}$}] (N-\x) at (\pgfmathresult:5.4cm){};
};
\foreach \x [count=\xi from 1] in {2,...,8}{%
\foreach \y in {\x,...,8}{%
\path (N-\xi) edge [-] (N-\y);
}
}
\end{tikzpicture}
\end{center}
\end{minipage}
\hspace{-2mm}
\begin{minipage}{.22\textwidth}
\begin{center}
\begin{tikzpicture}[scale=0.090, ->,>=stealth',level/.style={sibling distance = 3cm/#1,
level distance = 1cm}]
\def 8 {6}
\foreach \x [count=\xi from 2] in {1,...,8}{
\pgfmathparse{\x * (360 / 8) - (360 / 8)};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=\pgfmathresult:{$p^{*}_{\xi}$}] (N-\x) at (\pgfmathresult:5.4cm){};
};
\node[circle,draw,inner sep=0pt,minimum size=4pt,] (N-0) at (0:0cm){};
\foreach \x in {1,...,8}{%
\path (N-0) edge [-] (N-\x);
}
\end{tikzpicture}
\end{center}
\end{minipage}
\hspace{-1mm}
\begin{minipage}{.22\textwidth}
\begin{center}
\begin{tikzpicture}[scale=0.090, ->,>=stealth',level/.style={sibling distance = 3cm/#1,
level distance = 1cm}]
\def 8 {6}
\pgfmathparse{1 * (360 / 8) - (360 / 8)};
\node[circle,draw,inner sep=0pt,minimum size=4pt,label=\pgfmathresult:1] (N-1) at (\pgfmathresult:5.4cm){};
\foreach \x [count=\xi from 3] in {2,...,8}{
\pgfmathparse{\x * (360 / 8) - (360 / 8)};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=\pgfmathresult:{$p^{*}_{\xi}$}] (N-\x) at (\pgfmathresult:5.4cm){};
};
\node[circle,fill=black,inner sep=0pt,minimum size=3pt,] (N-0) at (0:0cm){};
\foreach \x in {1,...,8}{%
\path (N-0) edge [-] (N-\x);
}
\end{tikzpicture}
\end{center}
\end{minipage}
\hspace{-2mm}
\begin{minipage}{.22\textwidth}
\begin{center}
\begin{tikzpicture}[scale=0.090, ->,>=stealth',level/.style={sibling distance = 3cm/#1,
level distance = 1cm}]
\def 8 {8}
\foreach \x/\l in {1/$p^{*}_3$, 2/$p^{*}_2$, 3/1, 4/$p^{*}_{n}$, 5/$p^{*}_{n-1}$}{
\pgfmathparse{\x * (360 / 8) - (360 / 8)};
\ifthenelse{\equal{\l}{1}}
\node[circle,draw,inner sep=0pt,minimum size=4pt,label=\pgfmathresult:\l] (N-\x) at (\pgfmathresult:5.4cm){};
}
{
\node[circle,fill=black,inner sep=0pt,minimum size=3pt,label=\pgfmathresult:\l] (N-\x) at (\pgfmathresult:5.4cm){};
}
};
\foreach \x [count=\xi from 1] in {2,...,5}{%
\path (N-\x) edge [-] (N-\xi);
}
\path (N-1.south) + (0, -1.95) coordinate(x1) edge[line width=0.85pt,-,densely dotted] (N-1);
\path (N-5.south) + (0, -1.95) coordinate(x2) edge[line width=0.85pt,-,densely dotted] (N-5);
\path (-1, 0) coordinate(x3) edge[line width=0.85pt,-,densely dotted] (1,0);
\end{tikzpicture}
\end{center}
\end{minipage}
\caption{Target topologies for fractional attacks.}
\label{fig:completeGraph}
\end{figure}
\vspace{-0.2cm}
\vspace{2mm}
Note that, when $G=K_{n}$ or $G=S_{n}$, pairwise Shapley value attacks are trivially impossible: indeed, these graphs have diameter at most two. Since all distance-two neighbors influence the Shapley value of a given player, all nodes are exempt from attacks.
On the other hand, for these topologies it turns out that the best attack on the Shapley value of player $x$ is to increase the reliabilities of its neighbors in decreasing order of their baseline reliabilities:
\begin{theorem} Let $G$ be either the complete graph $K_{n}$ with $n$ vertices, or the star graph $S_{n}$ with $n$ vertices centered at node $x=1$. To optimally attack the centrality of $x$ in the reliability extension of $\Gamma_{NC_1}$, use the following algorithm:
\begin{framed}
\noindent- Consider nodes $2,\ldots, n$ in decreasing order of their baseline reliabilities, breaking ties arbitrarily: $p^{*}_{sorted(2)}\geq p^{*}_{sorted(3)}\geq \ldots \geq p^{*}_{sorted(n)}.$
\noindent- While the budget allows it, increase to one (if not already equal to 1) the probabilities $p_{sorted(i)}$, starting with $i=2$ and successively increasing $i$.
\noindent- If the budget no longer allows increasing $p_{sorted(i)}$ to one, increase it as much as possible.
\noindent- Leave all other probabilities at their baseline values.
\end{framed}
\noindent If, on the other hand, $G=S_{n}$ centered, say, at node 2, to optimally attack the centrality of node $x=1$, the algorithm changes as follows:
\begin{framed}
\noindent - Consider nodes $2,\ldots, n$ in the following order: node 2, followed by nodes $3,\ldots, n$ sorted in decreasing order of their baseline reliabilities $p^{*}_{sorted(3)}\geq \ldots \geq p^{*}_{sorted(n)}$, breaking ties arbitrarily. Denote the new order by
$Q$.
\noindent - Follow the previous greedy protocol, increasing baseline probabilities up to one (if allowed by the budget) according to the new ordering $Q$.
\end{framed}
Similar statements hold for game $\Gamma_{NC_2}$, and for $\Gamma_{NC_3}$ for large enough values of parameter $d_{cut}$.
\label{kn}
\end{theorem}
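A compact implementation of this greedy protocol for the $K_{n}$ case (a sketch; we assume the common right slope $R$, and represent the baselines of nodes $2,\ldots,n$ as a dictionary):
\begin{verbatim}
def greedy_attack(p_star, B, R):
    # raise reliabilities toward 1, largest baseline first
    p = dict(p_star)
    for j in sorted(p, key=p.get, reverse=True):
        delta = min(1.0 - p[j], B / R)   # what the budget buys
        p[j] += delta
        B -= R * delta
        if B <= 0:
            break
    return p
\end{verbatim}
For the star centered at a node other than the attacked one, the same loop applies after moving the center to the front of the order.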
\noindent
In the previous examples the optimal attack involved a fixed node-targeting order, which privileged direct neighbors and could depend on baseline reliabilities, but was independent of the value of the budget. None of this holds in general: as the next result shows, on the graph $C_{n}$ the optimum can be computed by taking the best of \textit{four} node-targeting orders. The optimum may lack the two previously discussed properties of optimal orders:
\begin{itemize}
\item[-] in optimal attacks one should sometimes target a distance-two neighbor ($3$ or $n-1$) \textit{before} targeting both of $x=1$'s neighbors ($2$ and $n$, see Figure~\ref{fig:completeGraph}).
\item[-] the order (among the four) that characterizes the optimum may depend on the budget value $B$ as well. Formally:
\end{itemize}
\begin{theorem} Let $P,Q,R,S$ be the vectors $[2,n,n-1,3]$, $[2,n-1,n,3]$, $[n,3,2,n-1]$, $[n,2,3,n-1]$, respectively. Let $Sol_{P},Sol_{Q},Sol_{R}$, $Sol_{S}$ be the configurations obtained by increasing in turn (as much as possible, subject to the budget $B$) the reliabilities of nodes $2,3,n-1,n$ in the order(s) specified by $P,Q,R,S$, respectively. Then
\begin{itemize}
\item[a.] The best of $Sol_{P},Sol_{Q},Sol_{R},Sol_{S}$ is an
optimal attack on the centrality of $x=1$ in game $\Gamma_{NC_{1}}$ on the cycle graph $C_{n}$.
\item[b.] There exist values of $p_{2}^{*},p_{3}^{*},p_{n-1}^{*},p_{n}^{*}$ s.t. $Sol_{P}$ is optimal for all values of $B$ (by symmetry a similar statement holds for $Sol_S$).
\item[c.] There exist values of $p_{2}^{*},p_{3}^{*},p_{n-1}^{*},p_{n}^{*}$ and a nonempty open interval $I$ for the budget $B$ such that $Sol_{Q}$ is an optimum for all $B\in I$ (by symmetry a similar statement holds for $Sol_R$).
\end{itemize}
\label{cn}
\end{theorem}
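The ``best of four'' characterization is easy to explore numerically, using the closed-form $C_{n}$ expression derived in the proof of this theorem below (a sketch with common slope $R=1$; the baseline values are hypothetical):
\begin{verbatim}
def sh_cycle(p2, p3, pn1, pn):     # closed form for C_n, p_1 = 1
    return ((p2*pn + p2*p3 + pn1*pn) / 3.0
            - (p3 + pn1) / 2.0 - p2 - pn + 3.0)

def apply_order(order, p_star, B, R=1.0):
    p = dict(p_star)
    for j in order:                # raise reliabilities toward 1
        delta = min(1.0 - p[j], B / R)
        p[j] += delta
        B -= R * delta
    return sh_cycle(p['2'], p['3'], p['n-1'], p['n'])

orders = {'P': ['2', 'n', 'n-1', '3'], 'Q': ['2', 'n-1', 'n', '3'],
          'R': ['n', '3', '2', 'n-1'], 'S': ['n', '2', '3', 'n-1']}
p_star = {'2': 0.2, '3': 0.9, 'n-1': 0.9, 'n': 0.1}
best = min(orders, key=lambda o: apply_order(orders[o], p_star, B=0.8))
\end{verbatim}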
\iffalse
\begin{note}
The optimal solution may be non-unique:
If $Sol_{P}$ is optimal and $B>R(1-p^{*}_{2}+1-p^{*}_{n})$ (that is if we have the budget to bring both $p_{2},p_{n}$ to 1) then the Shapley value of node 1 simplifies to a function which only depends on the sum $p_3+p_{n-1}$. Hence we could, in fact, choose (subject to exhausting the remaining budget) any combination of these two variables, in any order, instead of $Sol_{P}$.
\end{note}
\begin{algorithm
\KwIn{Probability values $(p_{2},p_{3},p_{n-1},p_{n})$}
\KwOut{Attacking order.}
if $p_{2}+p_{n-1}>p_{3}+p_{n}$ or $p_{2}+p_{n-1}=p_{3}+p_{n}$ and $p_{2}>p_{n}$, \textbf{return} (2,n,n-1,3)\\
else \textbf{return} (n,2,3,n-1)\\
\vspace{5mm}
\caption{Determining attack order on a circle graph.}
\label{algo1}
\end{algorithm}
\fi
\section{Attacks in credit attribution}
\label{ssix}
In this section we study removal attacks in credit attribution games. Interestingly, while the Shapley values have identical formulas in $\Gamma_{FC},\Gamma_{FO}$ \cite{karpov2014equal}, the two games are \textbf{not} similar with respect to attacks. Indeed, similarly to the case of network centrality, we have:
\begin{theorem}
No removal attack can decrease the Shapley value of a given player in a full credit attribution game.
\label{full-credit}
\end{theorem}
\begin{proof}
At first sight this seems counterintuitive, as it would appear to contradict Example~\ref{ex:hirsch}. The explanation is that \textit{this example does not correspond to the full credit game, but to the full obligation one}: in game $\Gamma_{FC}$ a player does \textbf{not} lose credit for a paper due to the removal of a coauthor; in fact, its Shapley value will increase, since the credit for the paper is divided among fewer coauthors. It is in $\Gamma_{FO}$ that players may lose credit as a result of coauthor removal.
\end{proof}
This difference between $\Gamma_{FC}$ and $\Gamma_{FO}$ is evident with respect to attacks: as the next result shows, in full-obligation games finding optimal removal attacks can simulate a well-known hard combinatorial problem:
\begin{theorem} The {\em budgeted maximum coverage problem} (which is NP-complete) reduces to minimizing the Shapley value of a given player in the full-obligation game (under removal attacks).
\label{full-obligation}
\end{theorem}
\begin{proof}
Deferred to the full version.
\end{proof}
\textbf{Fractional attacks} The following simple consequence of the formulas in Theorem~\ref{k=1} and Claim~\ref{aux2} shows that \textit{optimal attacks are different in games $\Gamma_{FC}$ and $\Gamma_{FO}$, irrespective of the topology of the coauthorship hypergraph}: in the first case we need to increase the reliability of $x$'s coauthors, in the other case we aim to decrease it:
\begin{theorem}
\iffalse
we have:
\begin{align*}
\frac{\partial Sh[\overline{v_{FC}}](x)}{\partial p_{j}} & = \left\{\begin{array}{l}
-p_{x}\sum\limits_{k\in S_{x}\cap S_{j}} w_{k} [\sum\limits_{\stackrel{B\subseteq Auth_{k}\setminus \{x\}}{j\in B}}\frac{ \Pi_{\emptyset,B\setminus j} }{(n_{k}-|B|){{n_{k}}\choose {|B|}}} ]<0, \mbox{ if } j\in CA(x) \\
0, \mbox{otherwise.}
\end{array}
\right. \\
\frac{\partial Sh[\overline{v_{FO}}](x)}{\partial p_{j}} & = \left\{\begin{array}{l}
p_{x}\sum\limits_{k\in S_{x}\cap S_{j}}\frac{w_{k}}{n_{k}}\cdot \Pi_{Auth_{k}\setminus \{x,j\},Auth_{k}\setminus \{x,j\}}>0, \mbox{ if } j\in CA(x) \\
0, \mbox{otherwise.}
\end{array}
\right.
\end{align*}
Hence,
\fi
In the reliability extensions of the credit allocation games $\Gamma_{FC}$ and $\Gamma_{FO}$, the Shapley value of player $x$ is a decreasing (respectively, increasing) function of the coauthors' reliabilities (and does not depend on the reliabilities of other players).
\label{credit}
\end{theorem}
Optimal attacks can be explicitly described in the particular scenario when, just as in Example~\ref{ex:hirsch}, each paper has exactly two authors (a situation that corresponds, under the full obligation model, to induced subgraph games). It turns out that \textit{the relevant quantity is the ratio between a coauthor's joint contribution with the attacked node and its marginal cost:}
\iffalse
In this case one can assume without loss of generality that there exists a single paper with authors $x,j$. Denote by $w_{x,j}$ its combined weight.
\begin{lemma} In the setting above
\begin{align*}
\frac{\partial Sh[\overline{v_{FC}}](x)}{\partial p_{j}} & = \left\{\begin{array}{l}
-p_{x} \frac{w_{x,j}}{2},\mbox{ if } j\in CA(x) \\
0, \mbox{ otherwise.}
\end{array}
\right. \\
\frac{\partial Sh[\overline{v_{FO}}](x)}{\partial p_{j}} & = \left\{\begin{array}{l}
p_{x} \frac{w_{x,j}}{2},\mbox{ if } j\in CA(x) \\
0, \mbox{otherwise.}
\end{array}
\right.
\end{align*}
\label{deriv-author}
\end{lemma}
We now apply these formulas to identifying optimal fractional attacks in games
$\Gamma_{*}$ and $\Gamma_{**}$:
\fi
\begin{theorem}
To optimally decrease the Shapley value of node $x$ in game $\Gamma_{FC}$ in the two-author special case:
\begin{framed}
\noindent (a). Sort the coauthors $l$ of $x$ in the decreasing order of
the fractions $\frac{C(x,l)}{R_{l}}$, breaking ties arbitrarily.
\noindent (b). While the budget allows it, for $i=1,\ldots, |CA(x)|$, \textbf{increase} to 1 the probability of the $i$'th most valuable coauthor.
\noindent (c). If the budget does not allow increasing the probability of the $i$'th coauthor up to 1, increase it as much as possible.
\noindent (d). Leave all other probabilities at their baseline values.
\end{framed}
\label{kn2}
\end{theorem}
\vspace{-0.3cm}
\begin{corollary} In the setting of Theorem~\ref{kn2},
to optimally solve the pairwise Shapley value attack problem for $x,y$, run the algorithm in the Theorem only on those $z$ that are coauthors of $x$ but not of $y$.
\end{corollary}
As for game $\Gamma_{FO}$, the optimal attack is symmetric. Since we are decreasing probabilities, we will be using the fractions $\frac{C(x,l)}{L_{l}}$ instead:
\begin{theorem}
To optimally decrease the Shapley value of node $x$ in the full obligation game $\Gamma_{FO}$ in the two-author special case:
\begin{framed}
\noindent (a). Sort the coauthors of $x$ in the decreasing order of the fractions $\frac{C(x,l)}{L_{l}}$, breaking ties arbitrarily.
\noindent (b). While the budget allows it, for $i=1,\ldots, |CA(x)|$, \textbf{decrease} to 0 the probability of the $i$'th most valuable coauthor.
\noindent (c). If the budget does not allow decreasing the probability of the $i$'th coauthor up to 0, decrease it as much as possible.
\noindent (d). Leave all other probabilities at their baseline values.
\end{framed}
\label{cn2}
\end{theorem}
\vspace{-0.5cm}
\begin{corollary} In the setting of Theorem~\ref{cn2},
to solve the pairwise Shapley value attack problem for players $x,y$, run the algorithm in the Theorem only on those $z$ that are coauthors of $x$ but not of $y$.
\end{corollary}
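Both greedy protocols are instances of the same cost-benefit loop. The following sketch (names ours) covers the two cases via a direction flag: $+1$ raises reliabilities toward $1$ with slopes $R_{l}$ (game $\Gamma_{FC}$, Theorem~\ref{kn2}), while $-1$ lowers them toward $0$ with slopes $L_{l}$ (game $\Gamma_{FO}$, Theorem~\ref{cn2}).
\begin{verbatim}
def credit_attack(coauthors, C, slope, p_star, B, direction):
    # C[l]: joint contribution with x; slope[l]: R_l or L_l
    p = dict(p_star)
    order = sorted(coauthors,
                   key=lambda l: C[l] / slope[l], reverse=True)
    for l in order:
        room = (1.0 - p[l]) if direction > 0 else p[l]
        delta = min(room, B / slope[l])
        p[l] += direction * delta
        B -= slope[l] * delta
        if B <= 0:
            break
    return p
\end{verbatim}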
\section{Proof Highlights}
In this section we present some of the proofs of our results. Some other proofs are included in the Appendix; the rest are deferred to the full version of the paper, to be posted on arxiv.org.
\subsection{Proof of Theorem~\ref{sh-gamma1}}
We prove the formula for the first game only; the proofs for the other two games, similar to \cite{michalak2013efficient}, are completely analogous and deferred to the full version. Define, for $y\in V$, $W\subseteq V$,
\[
f_{y}(W)=\left\{\begin{array}{ll}
1, & \mbox{ if } y\not \in W\cup \delta(W), \\
0, & \mbox{ otherwise.} \\
\end{array}
\right.
\]
A simple case analysis proves that, for every $W\subseteq V$, $
v_{NC_1}(W\cup \{x\})-v_{NC_1}(W)= \sum_{y\in \widehat{N(x)}} f_{y}(W).$
We therefore have
\begin{align*}
& Sh[\overline{v_{NC_1}}](x) = E_{\pi \in S_{n}}[\overline{v_{NC_1}}(S_{\pi}^{x} \cup \{x\}) - \overline{v_{NC_1}}(S_{\pi}^{x})] = E_{\pi\in S_{n}}[p_{x} \cdot \\
&\sum\limits_{W\subseteq S_{\pi}^{x}} [v_{NC_1}(W \cup \{x\}) - v_{NC_1}(W)] \cdot \Pi_{W, S_{\pi}^{x}}]= p_{x} E_{\pi \in S_{n}} \sum\limits_{W\subseteq S_{\pi}^{x}}\\
& \sum\limits_{y \in \widehat{N(x)}} f_{y}(W) \cdot \Pi_{W, S_{\pi}^{x}} = p_{x} E_{\pi\in S_{n}} \sum\limits_{y \in \widehat{N(x)}} \sum\limits_{W\subseteq S_{\pi}^{x}} f_{y}(W) \cdot \Pi_{W, S_{\pi}^{x}}
\end{align*}
We now introduce two notations that will help us reinterpret the previous sum: given $W\subseteq V$, denote by $Alive(W)$ the set of nodes in $W$ that are \textit{alive} under the reliability extension model. Also, given permutation $\pi\in S_{n}$ and $W\subseteq V$, denote by
$First_{\pi}(W)$ the element of $W$ that appears first in enumeration $\pi$. With these notations
\begin{align*}
&Sh[\overline{v_{NC_1}}](x) = p_{x} \sum\limits_{y \in \widehat{N(x)}} Pr_{\pi\in S_{n}}[y \notin Alive(S_{\pi}^{x}) \cup \delta(Alive(S_{\pi}^{x}))] \\
&= p_{x} \sum\limits_{y \in \widehat{N(x)}} Pr_{\pi\in S_{n}}[x = First_{\pi}(\widehat{N(y)} \cap Alive(V))| x \in Alive(V) ]
\end{align*}
If $S=(\widehat{N(y)}\setminus\{x\}) \cap Alive(V)$ then the conditional probability that $x$ is $First_{\pi}(S\cup \{x\})$, given that $x$ is alive, is $\frac{1}{|S|+1}$. We thus get the desired formula.
\subsection{Proof of Theorem~\ref{k=1}}
Denote, for a set of authors $C$, by $Pap_{C}=\cup_{l\in C} Pap_{l}$ the set of papers with at least one author in $C$.
We decompose function $v_{FC}$ as $v_{FC}(\cdot)=\sum_{k} w_{k}v_{k}(\cdot)$ where
\begin{equation}
v_{k}(C)=\left\{\begin{array}{l}
1,\mbox{ if } k\in Pap_{C} \\
0, \mbox{otherwise.}
\end{array}
\right.
\end{equation}
\begin{align*}
\mbox{ Thus }\overline{v_{FC}}(C) & = \sum\limits_{R\subseteq C} v_{FC}(R)\Pi_{R,C} = \sum\limits_{R\subseteq C} \Pi_{R,C} \sum_{k} w_{k}v_{k}(R) = \\
& = \sum_{k} \sum\limits_{R\subseteq C}\Pi_{R,C} w_{k}v_{k}(R)= \sum_{k} w_{k}\overline{v_{k}}(C)
\end{align*}
which means that we can decompose $\overline{v_{FC}}=\sum_{k} w_{k}\overline{v_{k}}$, and the Shapley value of $\overline{v_{FC}}$ decomposes as well $
Sh(\overline{v_{FC}})=\sum_{k} w_{k}\cdot Sh(\overline{v_{k}}), $
and similarly for $v_{FO}$. On the other hand
\[
Sh[\overline{v_{k}}](x)=\frac{1}{n!}\sum\limits_{\pi \in S_{n}} [\overline{v_{k}}(S^{x}_{\pi}\cup \{x\})- \overline{v_{k}}(S^{x}_{\pi})]
\]
Given set $A$ of authors,
\begin{align*}
& \overline{v_{k}}(A\cup \{x\})-\overline{v_{k}}(A) = \sum\limits_{R\subseteq A\cup\{x\}} v_{k}(R) \Pi_{R,A\cup \{x\}}- \sum\limits_{R\subseteq A} v_{k}(R)\Pi_{R,A} \\
& = (1-p_{x})\sum\limits_{R\subseteq A\setminus x} v_{k}(R)\Pi_{R,A\setminus \{x\}}+
p_{x}\sum\limits_{R\subseteq A\setminus x} \Pi_{R,A} v_{k}(R\cup \{x\}) \\
&- \sum\limits_{R\subseteq A} \Pi_{R,A} v_{k}(R) = p_{x}\cdot \sum\limits_{R\subseteq A} \Pi_{R,A} [v_{k}(R\cup\{x\})-v_{k}(R)]
\end{align*}
Now $v_{k}(R\cup\{x\})-v_{k}(R)$ is 1 if $k\in Pap_{x}\setminus Pap_{R}$, 0 otherwise. For
$k\not \in Pap_{x}$, $\overline{v_{k}}(A\cup \{x\})-\overline{v_{k}}(A)=0$. Otherwise
$\overline{v_{k}}(A\cup \{x\})-\overline{v_{k}}(A) = p_{x}\cdot \sum\limits_{\stackrel{R\subseteq A}{k\not \in Pap_{R}}} \Pi_{R,A}.$
We can interpret this quantity as the probability that the live subset of $A$ does not cover $k$, but $x$ is live and does. Applying this to the Shapley value we infer that $Sh[\overline{v_{k}}](x)$ is the probability that in a random permutation $\pi$ the live subset of $S^{x}_{\pi}$ does not cover $k$, but $x$ is live and does.
\textbf{Full credit model:} There are $n_{k}!$ permutations $\Xi$ of indices in $Auth_{k}$, each of them equally likely when $\pi$ is a random permutation in $S_{n}$. Given subset $S\subset Auth_{k}\setminus \{x\}$, the probability that $\Xi$ starts with $S$ followed by $x$ is
$\frac{|S|!(n_{k}-|S|-1)!}{n_{k}!}$. To make $x$ pivotal for paper $k$, none of the agents in $S$ can be live. This happens with probability $\Pi_{\emptyset,S}$. Given the above argument, we have
\begin{align*}
Sh[\overline{v_{k}}](x)& = p_{x}\cdot \sum_{S\subseteq Auth_{k}\setminus \{x\}} \frac{(|S|)!(n_{k}-|S|-1)!}{n_{k}!}\cdot [\prod_{l\in S} (1-p_{l})] \\
& =p_{x}\cdot \sum_{S\subseteq Auth_{k}\setminus \{x\}}\frac{ \Pi_{\emptyset,S}}{(n_{k}-|S|){{n_{k}}\choose {|S|}}}, \mbox{ hence }
\end{align*}
\begin{equation}
Sh[\overline{v_{FC}}](x)=p_{x}\cdot \sum\limits_{k\in Pap_{x}} w_{k}\cdot [\sum_{S\subseteq Auth_{k}\setminus \{x\}}\frac{ \Pi_{\emptyset,S} }{(n_{k}-|S|){{n_{k}}\choose {|S|}}}]
\end{equation}
which is what we had to prove.
\textbf{Full obligation model:} For $x$ to be pivotal for paper $k$, $x$ and all its coauthors in $Auth_{k}$ must all be live, and all elements of $Auth_{k}\setminus x$ must appear before $x$ in ordering $\pi$. This happens with probability
$\frac{1}{n_{k}}\cdot \Pi_{Auth_{k},Auth_{k}}.$
\subsection{Proof of Theorem~\ref{kn}}
First of all, the following claim holds for all graphs $G$:
\begin{claim}
The minimum of function $z\rightarrow Sh[\overline{v_{NC_1}}](1)|_{z}$ exists and is reached on some profile $(p_{i})$ with $p^{*}_{i}\leq p_{i}\leq 1$.
\end{claim}
\begin{proof}
Function $z\rightarrow Sh[\overline{v_{NC_1}}](1)|_{z}$ is continuous and the set $[0,1]^{n}$ is compact, so the minimum is reached. Assuming some $p_{j}<p^{*}_{j}$, we could increase $p_{j}$ up to $p^{*}_{j}$, reducing total cost. This does not increase (and perhaps further decreases) the Shapley value.
\end{proof}
Next, we (jointly) prove cases $G=K_{n}$ and $G=S_{n}$ with $x=1$ being a center, since the proofs are practically identical. The remaining case ($G=S_{n}$, $x=1$ not a center) is deferred to the Appendix. We start with the following
\begin{lemma}
For $G=K_n$ or $G=S_{n}$, $j\neq l\in V(G)\setminus 1$ and any probability profile $p=(p_{1},\ldots, p_{n})\in (0,1]^{n}$,
\[
sign\big( \frac{\partial Sh[\overline{v_{NC_1}}](1)}{\partial p_{j}}|_{p} - \frac{\partial Sh[\overline{v_{NC_1}}](1)}{\partial p_{l}}|_{p} \big)=sign(p_{j}-p_{l})
\]
\label{sign}
\end{lemma}
\vspace{-0.8cm}
\begin{proof}
Deferred to the full version.
\end{proof}
\vspace{-0.3cm}
We first prove that in the optimal solution on these graphs no two variables can assume equal values, unless both are equal to endpoints of their restricting intervals:
\begin{lemma}
In the setting of Theorem~\ref{kn}, suppose $p=(p_{1},\ldots, p_{n})$ is such that there are indices $2\leq i\neq j\leq n$ with $0<p_{i}=p_{j}<1$. Then there exists $\epsilon_{0}>0$ such that for every $\epsilon \in [-\epsilon_{0},\epsilon_{0}]$, $\epsilon \neq 0$, $
Sh[\overline{v_{NC_1}}](1)|_{p_{i,j}(\epsilon)}< Sh[\overline{v_{NC_1}}](1)|_{p}$,
(where $p_{i,j}(\epsilon)$ is defined as in equation~(\ref{change})).
\label{equal}
\end{lemma}
\begin{proof}
Deferred to the full version.
\end{proof}
Now we prove:
\begin{claim}
In the optimal solution there is at most one index $i_{1}$ with $p_{i_{1}}\in (p_{i_1}^{*},1)$. In other words, in the optimal solution some probabilities are increased up to 1, some are left unchanged at their baseline values, and at most one variable is increased to a value less than 1.
\label{atmostone}
\end{claim}
\begin{proof}
Suppose there were two different indices $i_{1}\neq i_{2}$. We must have $p_{i_{1}}=p_{i_{2}}$, or, by Lemma~\ref{aux}, one could decrease the Shapley value by increasing the larger one and symmetrically decreasing the smaller one. But this is impossible, due to Lemma~\ref{equal}.
\end{proof}
Note that the greedy solution $\Gamma$ has the structure from Claim~\ref{atmostone}, and that any permutation of an optimal solution $OPT$ on variables $p_{2},\ldots, p_{n}$ has the same Shapley value as $OPT$ (since $K_n,S_n$ have this symmetry).
We compare the vectors $\Gamma,OPT$, both sorted in decreasing order. Our goal is to show that these sorted
versions are equal. Without loss of generality, we may assume that $OPT$ creates the same ordering on variables as the $p_{i}^{*}$'s (and $\Gamma$), when considered in decreasing sorted order (we break ties, if any, in the same way). Indeed, if there were indices $i,j$ such that $p_{sorted(i)}^{*}\geq p_{sorted(j)}^{*}$ but $p_{sorted(i)}<p_{sorted(j)}$ then, since $p_{sorted(j)}>p_{sorted(i)}\geq p_{sorted(i)}^{*}\geq p_{sorted(j)}^{*}$, we could simply swap values $p_{sorted(i)}$ and $p_{sorted(j)}$ and obtain another legal, optimal solution.
If $\Gamma$ were different from $OPT$, since Greedy increases the largest variables first, there must be variables $x,y$ such that $\Gamma_{x}\geq \Gamma_{y}$, $\Gamma_{x}>p_{x}$ and $\Gamma_{y}<p_{y}$. Since $\Gamma$ and $OPT$ have the same ordering of variables, we also must have in fact
$p_{x}\geq p_{y}$, i.e. $1\geq \Gamma_{x}>p_{x}\geq p_{y}>\Gamma_{y}\geq p^{*}_{y}$. But then, using either Lemma~\ref{aux} (if $p_{x}\neq p_{y}$) or Lemma~\ref{equal} (otherwise) we could further improve $OPT$ by increasing $p_{x}$ and symmetrically decreasing $p_{y}$, a contradiction.
\subsection{Proof of Theorem~\ref{cn}}
A simple computation shows that for $G=C_{n}$
\[
Sh[\overline{v_{NC_1}}](1)=p_1(\frac{p_{2}p_{n}+p_{2}p_{3}+p_{n-1}p_{n}}{3}-\frac{p_{3}+p_{n-1}}{2}- p_{2}-p_{n}+3).
\]
As the multiplicative factor $p_1$ does not affect the choice of an optimal attack, w.l.o.g. we will assume $p_1 = 1$.
We need to minimize the above quantity, subject to
\begin{align*}
& p_{2}+p_{3}+p_{n-1}+p_{n}=B+p^{*}_{2}+p^{*}_{3}+p^{*}_{n-1}+p^{*}_{n}, p^{*}_{i}\leq p_{i}\leq 1.
\end{align*}
We now prove a result somewhat similar to Claim~\ref{atmostone}; however, now we will only rule out certain patterns.
\begin{claim}
In an optimal solution it is not possible that $p^{*}_{k}< p_{k}< 1$, $p^{*}_{l}< p_{l}< 1$ when:
\begin{itemize}
\item[a.] $k=2$, $l=n-1$ (and, symmetrically, $k=3$, $l=n$). In fact, in this case we have the stronger implication $p_{n-1}>p_{n-1}^{*}\Rightarrow p_{2}=1$. Symmetrically, $p_{3}>p_{3}^{*}\Rightarrow p_{n}=1$.
\item[b.] $k=2$, $l=n$.
\item[c.] $k=2$, $l=3$ (and, symmetrically, $k=n$, $l=n-1$). In the case when $\frac{p_{3}+p_{n}}{3}\leq \frac{p_{2}}{3}+\frac{1}{2}$ we have the stronger implication $p_{3}> p_{3}^{*}\Rightarrow p_{2}=1$. Symmetrically, in the case when $\frac{p_{2}+p_{n-1}}{3}\leq \frac{p_{n}}{3}+\frac{1}{2}$, $p_{n-1}>p_{n-1}^{*}\Rightarrow p_{n}=1$.
\end{itemize}
\label{foo}
\end{claim}
\begin{proof}
Suppose there were two such indices $k,l$. We must also have $\frac{\partial Sh[\overline{v_{NC_1}}](1)}{\partial x_{k}} = \frac{\partial Sh[\overline{v_{NC_1}}](1)}{\partial x_{l}}$, otherwise we could decrease the Shapley value using Lemma~\ref{aux}. We reason in all cases by contradiction:
a. We prove directly the stronger result. Suppose $p_{2}<1$. We have $\frac{\partial Sh[\overline{v_{NC_1}}](1)}{\partial x_{2}}= \frac{p_{3}+p_{n}}{3}-1\leq \frac{p_{n}}{3}-\frac{2}{3}< \frac{p_{n}}{3}-\frac{1}{2}=\frac{\partial Sh[\overline{v_{NC_1}}](1)}{\partial x_{n-1}}$.
So we can apply Lemma~\ref{aux} to $p_{2}$ and $p_{n-1}$, further decreasing the Shapley value as we increase $p_{2}$ and decrease $p_{n-1}$.
b. Equality of partial derivatives can be rewritten as $p_{3}+p_{n}=p_{2}+p_{n-1}$. An easy computation (which uses this equality) shows that $Sh[\overline{v_{NC_1}}](1)|^{p_{n,2}(\epsilon)}_{p}=-\frac{\epsilon^2}{3}$. But then it means that one could further decrease
the Shapley value of player 1, hence we are not at an optimum, a contradiction.
c.
As in the proof of a. $\frac{p_{3}+p_{n}-p_{2}}{3}-\frac{1}{2}=
\frac{\partial Sh[\overline{v_{NC_1}}](1)}{\partial x_{2}}- \frac{\partial Sh[\overline{v_{NC_1}}](1)}{\partial x_{3}}$ $=0$, otherwise we could use Lemma~\ref{aux} with $p_{2},p_{3}$ to decrease the Shapley value. An easy computation (which uses this equality) shows that
$Sh[\overline{v_{NC_1}}](1)|^{p_{3,2}(\epsilon)}_{p}=\frac{\epsilon(p_{n}-p_{2}+p_{3})}{3}-\frac{\epsilon}{2}-
\frac{\epsilon^2}{3}= -\frac{\epsilon^2}{3}<0$. But then one could further decrease
the Shapley value of 1, a contradiction.
\end{proof}
\vspace{-0.5cm}
We use Claim \ref{foo} to prove Theorem~\ref{cn}:
\textbf{a.} The conclusion of this claim is that the only case when there could exist two values $p_{k},p_{l}$ strictly between their baseline values and 1 is $k=3,l=n-1$ (or vice-versa), a case when we must further have $p_{2}=p_{n}=1$. Thus the optimal solution is the best of the configurations obtained by greedily increasing probabilities (up to 1, if the budget will allow it) in one of the orders
$[2,n,3,n-1],[2,n,n-1,3],[2,n-1,n,3],[n,3,2,n-1], [n,2,3,n-1],[n,2,n-1,3]$. An easy computation shows that the first two orders are equally good for all possible budget values $B$, and so are the last two. So, in the end we only have to compare the four orders $P,Q,R,S$ to find an optimum, proving the first part of the theorem.
\textbf{b,c:} Symmetry between 2,3 and n,n-1 reduces the proof of these two points to analyzing the ``winners'' among $Sol_{P},Sol_{Q},Sol_{R}$, $Sol_{S}$, and proving that, under suitable conditions, it belongs either to $\{Sol_{P},Sol_{S}\}$ (point b.) or to
$\{Sol_{Q},Sol_{R}\}$ (point c.).
If we start by increasing $p_{2}$ by $\epsilon$, the Shapley value decreases by $\epsilon(1-\frac{p_{3}^{*}+p_{n}^{*}}{3})$.
We will call the number $1-\frac{p_{3}^{*}+p_{n}^{*}}{3}$
the \textit{speed of the decrease}. It is maintained while $p_{2}$ increases from $p_{2}^{*}$ to 1, i.e. over a \textit{segment} (interval) of \textit{size} $1-p_{2}^{*}$. There are four segments, corresponding to the four variables being increased. The table in Figure~\ref{dec} summarizes the effect of variable increases on the decrease of the Shapley value of node 1. Using this table it is easy to compare the four permutations with respect to this decrease:
\begin{figure}
\scalebox{0.7}{
\begin{tabular}{|c|c|c|c|c|c|c|c|c|}
\hline
Perm. & Sp$_{1}$ & sz$_{1}$ & Sp$_{2}$ & sz$_{2}$ & Sp$_{3}$ & sz$_{3}$ & Sp$_{4}$ & sz$_{4}$\\
\hline
$P$ & $1-\frac{p_{3}^{*}+p_{n}^{*}}{3}$ & $1-p_{2}^{*}$ & $\frac{2-p_{n-1}^{*}}{3}$& $1-p_{n}^{*}$ & 1/6 & $1-p_{n-1}^{*}$ &1/6 & $1-p_{3}^{*}$ \\
\hline
$Q$ & $1-\frac{p_{3}^{*}+p_{n}^{*}}{3}$ & $1-p_{2}^{*}$ & $\frac{1}{2}-\frac{p_{n}^{*}}{3}$ & $1-p_{n-1}^{*}$ & 1/3 & $1-p_{n}^{*}$ &1/6 & $1-p_{3}^{*}$ \\
\hline
$R$ & $1-\frac{p_{2}^{*}+p_{n-1}^{*}}{3}$ & $1-p_{n}^{*}$ & $\frac{1}{2}-\frac{p_{2}^{*}}{3}$ & $1-p_{3}^{*}$ & 1/3 & $1-p_{2}^{*}$ &1/6 & $1-p_{n-1}^{*}$ \\
\hline
$S$ & $1-\frac{p_{2}^{*}+p_{n-1}^{*}}{3}$ & $1-p_{n}^{*}$ &$\frac{2-p_{3}^{*}}{3}$ & $1-p_{2}^{*}$ & 1/6 & $1-p_{3}^{*}$ &1/6 & $1-p_{n-1}^{*}$ \\
\hline
\end{tabular}}
\caption{Dynamics of the decrease of the Shapley value.}
\label{dec}
\vspace{-5mm}
\end{figure}
\textbf{P versus Q:} Since they use the same variable, $\Delta_P= \Delta_Q$ throughout the first segment.
At the (common) end of the third segment, a simple computation yields $
\Delta_P - \Delta_Q = 0,$ and since $P,Q$ use identical fourth segments, $\Delta_P = \Delta_Q$ throughout their fourth segment.
As for the second/third segments, if $p_{n}^{*}<1$ and $p_{n-1}^{*}-p_{n}^{*}>\frac{1}{2}$ then throughout the common portion of the second segment $\Delta_P<\Delta_Q$. Afterwards the difference will start shrinking, and will become positive after a certain value $\lambda_{P,Q}$
where $\Delta_{P}=\Delta_{Q}$. Note that at the end of the second segment of $Q$,
$\Delta_{P}-\Delta_{Q}=\frac{1-p_{n-1}^{*}}{6}\geq 0$, so
$\lambda_{P,Q}$ is in the second segment of $P$ and the third of $Q$.
\noindent To determine $\lambda_{P,Q}$ write $\lambda_{P,Q}=1 -p_{2}^{*}+1-p_{n-1}^{*}+\mu_{P,Q}$. We have:
$\frac{2-p_{n-1}^{*}}{3}(1-p_{n-1}^{*} + \mu_{P,Q}) = (\frac{1}{2}-\frac{p_{n}^{*}}{3})(1-p_{n-1}^{*})+\frac{\mu_{P,Q}}{3}$, $
\mbox{or }\mu_{P,Q}= p_{n-1}^{*} - p_{n}^{*} - \frac{1}{2}
\mbox{ hence } \lambda_{P,Q}=\frac{3}{2} -p_{2}^{*} - p_{n}^{*}.$
The conclusion is that $\Delta_P\geq \Delta_Q$ for all budgets if $p_{n-1}^{*}-p_{n}^{*}\leq \frac{1}{2}$.
Otherwise $\Delta_P \geq \Delta_Q$, except for $B\in I_{P,Q}:= (1-p_{2}^{*},\frac{3}{2} -p_{2}^{*} - p_{n}^{*})$. Similar conclusions hold for comparing S versus R.
\textbf{P versus S:} At the (common) end of their second segment
$\Delta_P - \Delta_S = (1-p_{n}^{*})(\frac{p_{2}^{*}-1}{3}) + (1-p_{2}^{*})(\frac{1-p_{n}^{*}}{3})=0$.
So $\Delta_P = \Delta_S$, and this prevails throughout the third and fourth segments.
As for the first and second segment, $\Delta_P - \Delta_S \leq 0$ if $p_{3}^{*}+p_{n}^{*}\geq p_{2}^{*}+p_{n-1}^{*}$, $\Delta_P - \Delta_S \geq 0$ if $p_{3}^{*}+p_{n}^{*}\leq p_{2}^{*}+p_{n-1}^{*}$. Hence $\Delta_P\leq \Delta_S$ for all budgets if $p_{3}^{*}+p_{n}^{*}\geq p_{2}^{*}+p_{n-1}^{*}$. Otherwise $\Delta_P \geq \Delta_S$.
Summing up:
- If $p_{n-1}^{*}-p_{n}^{*}<1/2$, $p_{3}^{*}-p_{2}^{*}<1/2$, $p_{3}^{*}+p_{n}^{*}\leq p_{2}^{*}+p_{n-1}^{*}$, then $\Delta_{P}\geq \Delta_{Q}$, $\Delta_{S}\geq \Delta_{R},\Delta_{P}\geq \Delta_{S}$ for all budgets, so $P$ is optimal. If the last condition is reversed then $S$ is optimal.
- If $p_{n-1}^{*}-p_{n}^{*}>1/2$, $p_{3}^{*}-p_{2}^{*}>1/2$ then $\Delta_{P}\leq \Delta_{Q}$ on $I_{P,Q}$, $\Delta_{S}\leq \Delta_{R}$ on $I_{S,R}$. So the best of $Q,R$ is an optimum on $I_{P,Q}\cap I_{S,R}.$ Since $Q,R$ are piecewise linear functions, one of them is better than the other one on an open interval.
\iffalse
\underline{B versus C:}
A simple computation reveals that for $B=4-K$, $\Delta_{B}-\Delta_{C}$. By the dynamics of $\Delta_{B},\Delta_{C}$, this equality actually holds on the interval
$B\in [3-K+min(p_{3}^{*},p_{n-1}^{*},4-K]$.
On the other hand, if $p_{3}+p_{n}<p_{2}+p_{n-1}$ then $\Delta_{B}>\Delta_{C}$ on the segment $B\in (0,1-max(p_{2}^{*},p_{n}^{*})$. By our assumption, at the end of the second segment in $B$, the second segment in $C$ has not ended.
\fi
\subsection{Proof sketch of Theorems~\ref{kn2} and~\ref{cn2}}
The two proofs are very similar, so we only present that of Theorem~\ref{kn2}. Particularizing formula~(\ref{sh:rel:1}) to the case of induced subgraph games, we infer that the Shapley value of player $x$ has the formula $Sh[\overline{v_{FC}}](x)=p_{x}\cdot \sum\limits_{l \in CA(x)} C(x,l) \cdot \frac{2-p_{l}}{2}$~(*).
We claim that minimizing $Sh[\overline{v_{FC}}](x)$ is equivalent to solving the following fractional knapsack problem:
\begin{equation}
\left\{\begin{array}{l}
max[\sum\limits_{l\in CA(x)} C(x,l)(1-p^{*}_{l})\cdot y_{l}]\\
\sum\limits_{l\in CA(x)} R_{l}(1-p^{*}_{l})\cdot y_{l} = \sum\limits_{l\in CA(x)} R_{l}\cdot (1-p^{*}_{l}) - B. \\
0\leq y_{l}\leq 1, \forall l\in CA(x)\\
\end{array}
\right.
\label{fr-knap}
\end{equation}
Indeed, by formula~(*) it is only efficient to increase the reliability probabilities of $x$'s coauthors from $p^{*}_{l}$ to some $p_{l}\in [p^{*}_{l},1]$. If we introduce variables $y_{l}\in [0,1]$ by the equation $1-y_{l}=\frac{p_{l}-p^{*}_{l}}{1-p^{*}_{l}}$ (or, equivalently, $y_{l}=\frac{1-p_{l}}{1-p^{*}_{l}}$), the cost of such a move is $R_{l}\cdot (p_{l}-p^{*}_{l})=R_{l}\cdot {(1-y_{l})(1-p^{*}_{l})}$. The total costs must add up to $B$, so $\sum\limits_{l\in CA(x)} R_{l}\cdot {(1-y_{l})(1-p^{*}_{l})} = B$, which is equivalent to system~(\ref{fr-knap}). The minimization of the Shapley value is easily seen to correspond to the maximization of the objective function of~(\ref{fr-knap}).
Now it is well-known that the greedy algorithm that considers variables $y_{l}$ in decreasing order of their cost/benefit ratio finds an optimal solution to problem~(\ref{fr-knap}). Reinterpreting this result in our language we get the algorithm described in Theorem~\ref{kn2}.
\section{Related work\protect\footnote{F\lowercase{or reasons of space this section is only sketched.}}}
First of all, \textit{network interdiction} (see e.g. \cite{doi:10.1002/9780470400531.eorms0089,smith2013modern}) is a well-established theme in combinatorial optimization. Our removal model can be seen as a special case of node interdiction.
Results on the {\it reliability extension} of a cooperative game \cite{meir2012congestion, reliability-games,bachrach2012agent,bachrach2013reliability,bachrach2014cooperative} are naturally related. So is the rich literature on \textit{manipulation}, both in non-cooperative and coalitional settings \cite{aziz2011false,faliszewski2011multimode,zuckerman2012manipulating,lev2013mergers,Waniek2018, Waniek2017,vallee2014study} and \textit{bribery} \cite{faliszewski2006complexity} in voting. Our framework covers both scenarios, that in which an external perpetrator bribes agents to change their reliabilities, and that in which this is done by a coalition of agents.
A lot of work has been devoted recently to measuring and characterizing \textit{synergies between players} in multi-agent settings \cite{procaccia2014structure,liemhetcharat2012modeling,liemhetcharat2014weighted}. Synergies between players in cooperative games are obviously relevant to the theme of this paper: the participation of synergic agents in coalitions increases the Shapley value of the given agent. Some of our results (Theorems~\ref{kn},~\ref{kn2} and~\ref{cn2}), which target nodes in a fixed order, provide a concrete way of ranking synergies between these nodes and the attacked one.
\section{Conclusions and open issues}
Our results have uncovered a rich typology of optimal attacks on players' power indices: sometimes no attack is beneficial; sometimes the optimal attack is intractable, even when computing the power indices is feasible. For fractional attacks, in many cases (but not always) greedy-type approaches provide an optimal strategy.
\noindent An open question raised by our work is the complexity of fractional attacks in general full-obligation credit attribution games. Motivated by Theorem~\ref{full-obligation}, we believe that even this version is intractable. On the other hand, we would like to see our framework applied to more settings. They include bicooperative games \cite{bilbao2000bicooperative}, generalized MC-nets \cite{elkind2009tractable}, etc. Of special interest are cases when computing the Shapley value is easy, e.g. voting games with super-increasing weights \cite{bachrach2016analyzing}, flow games on series-parallel networks \cite{elkind2009tractable}, or games with bounded dependency degree \cite{DBLP:conf/aaai/IgarashiIE18}.
As for relative attacks, we propose studying a more realistic \textit{bicriteria optimization} version of the problem \cite{ravi1993many}: decrease as much as possible the Shapley value of node $x$ while not affecting the Shapley value of node $y$ by more than a certain amount $D$.
Finally, the related problem of \textit{increasing} the power index of a given node subject to budget constraints is also worth investigating.
\section*{Acknowledgements}
This work was supported by a grant of the Ministry of Research and Innovation, CNCS - UEFISCDI, project number
PN-III-P4-ID-PCE-2016-0842, within PNCDI III.
\bibliographystyle{unsrt}
\section{Introduction}
Caching at mobile devices significantly improves system performance by facilitating \ac{D2D} communications, which enhances the spectrum efficiency and alleviates the heavy burden on backhaul links \cite{Femtocaching}. Modeling cache-enabled heterogeneous networks, including \ac{SBS} and mobile devices, follows two main directions in the literature. The first line of work focuses on fundamental throughput scaling results by assuming a simple channel model \cite{Femtocaching,golrezaei2014base,8412262,amer2017delay}, known as the protocol model, where two devices can communicate if they are within a certain distance. The second line of work, defined as the physical interference model, considers a more realistic model of the underlying physical layer \cite{andreev2015analyzing,cache_schedule}. In the following, we review some of the works relevant to the second line, focusing mainly on the \ac{EE} and delay analysis of wireless caching networks.
The physical interference model is based on the fundamental \ac{SIR} metric and is therefore applicable to any wireless communication system. Modeling devices' locations as a \ac{PPP} is widely employed in the literature, especially in the wireless caching area \cite{andreev2015analyzing,cache_schedule,energy_efficiency,ee_BS,hajri2018energy}. However, a realistic model for \ac{D2D} caching networks requires that a given device typically has multiple proximate devices, any of which can potentially act as a serving device. This deployment is known as clustered devices' deployment, which can be characterized by cluster processes \cite{haenggi2012stochastic}. Unlike the popular \ac{PPP} approach, the authors in \cite{clustered_twc,clustered_tcom,8070464} developed a stochastic geometry-based model to characterize the performance of content placement in the clustered \ac{D2D} network. In \cite{clustered_twc}, the authors discuss two strategies of content placement in a \ac{PCP} deployment: first, when each device randomly chooses its serving device from its local cluster, and second, when each device connects to its $k$-th closest transmitting device from its local cluster. The authors characterize the optimal number of \ac{D2D} transmitters that must be simultaneously activated in each cluster to maximize the area spectral efficiency. The performance of cluster-centric content placement is characterized in \cite{clustered_tcom}, where the content of interest in each cluster is cached closer to the cluster center, such that the collective performance of all the devices in each cluster is optimized. Inspired by the Matern hard-core point process, which captures pairwise interactions between nodes, the authors in \cite{8070464} devised a novel spatially correlated caching strategy called \ac{HCP}, such that the \ac{D2D} devices caching the same content are never closer to each other than the exclusion radius.
Energy efficiency in wireless caching networks is widely studied in the literature \cite{energy_efficiency,ee_BS,hajri2018energy}.
For example, an optimal caching problem is formulated in \cite{energy_efficiency} to minimize the energy consumption of a wireless network. The authors consider a cooperative wireless caching network where relay nodes cooperate with the devices to cache the most popular files in order to minimize energy consumption. In \cite{ee_BS}, the authors investigate how caching at \ac{BS}s can improve the \ac{EE} of wireless access networks. The condition under which the \ac{EE} can benefit from caching is characterized, and the optimal cache capacity that maximizes the network \ac{EE} is found. It is shown that the \ac{EE} benefit from caching depends on content popularity, backhaul capacity, and the interference level.
The authors in \cite{hajri2018energy} exploit the spatial repartitions of devices and the correlation in their content popularity profiles to improve the achievable \ac{EE}. The \ac{EE} optimization problem is decoupled into two related subproblems: the first addresses the issue of content popularity modeling, and the second investigates the impact of exploiting the spatial repartitions of devices. It is shown that the small base station allocation algorithm improves the energy efficiency and hit probability. However, the problem of \ac{EE} for \ac{D2D}-based caching has not yet been addressed in the literature.
The joint optimization of delay and energy in wireless caching has recently been conducted; see, for instance, \cite{wu2018energy,huang2018energy,jiang2018energy}. The authors in \cite{wu2018energy} jointly optimize the delay and energy in a cache-enabled dense small cell network. The authors formulate the energy-delay optimization problem as a mixed integer programming problem, where file placement, device association to the small cells, and power control are jointly considered. To model the tradeoff between energy consumption and end-to-end file delivery delay, a utility function linearly combining these two metrics is used as the objective function of the optimization problem. An efficient algorithm is proposed to approach the optimal association and power solution, which achieves the optimal tradeoff between energy consumption and end-to-end file delivery delay. In \cite{huang2018energy}, the authors showed that, with caching, the energy consumption can be reduced by extending the transmission time. However, this may incur wasted energy if the device never needs the cached content. Based on the random content request delay, the authors study the maximization of \ac{EE} subject to a hard delay constraint in an additive white Gaussian noise channel. It is shown that the \ac{EE} of a system with caching can be significantly improved with increasing content request probability and target transmission rate compared with the traditional on-demand scheme, in which the \ac{BS} transmits a content file only after it is requested by the user. However, the problem of energy consumption and joint communication and caching for clustered \ac{D2D} networks has not yet been addressed in the literature.
In this paper, we conduct a comprehensive performance analysis and optimization of joint communication and caching for a clustered \ac{D2D} network, where the devices have unused memory to cache some files, following a random probabilistic caching scheme. Our network model effectively characterizes the stochastic nature of channel fading and the clustered geographic locations of devices. Furthermore, this paper emphasizes the need for considering the traffic dynamics and rate of requests when studying the delay incurred in delivering requested content to devices. To the best of our knowledge, our work is the first in the literature that conducts a comprehensive spatial analysis of a doubly \ac{PCP} (also called doubly \ac{PPP} \cite{haenggi2012stochastic}) with the devices adopting a slotted-ALOHA random access technique to access a shared channel. The key advantage of adopting the slotted-ALOHA access protocol is that it is a simple yet fundamental medium access control (MAC) protocol, wherein no central controller exists to schedule the users' transmissions.
We also incorporate the spatio-temporal analysis in wireless caching networks by combining tools from stochastic geometry and queuing theory in order to analyze and minimize the average delay (see, for instance, \cite{zhong2015stability,zhong2017heterogeneous,7917340}).
The main contributions of this paper are summarized below.
\begin{itemize}
\item We consider a Thomas cluster process (TCP) where the devices are spatially distributed as groups in clusters. The clusters' centers are drawn from a parent PPP, and the clusters' members are normally distributed around the centers, forming a Gaussian PPP. This organization of the parent and offspring PPPs forms the so-called doubly PPP.
\item We conduct the coverage probability analysis where the devices adopt a slotted-ALOHA random access technique. We then jointly optimize the access probability and caching probability to maximize the cluster offloading gain: we first obtain the optimal channel access probability, and then provide a closed-form solution of the optimal caching subproblem. The energy consumption problem is then formulated, shown to be convex, and the optimal caching probability is derived.
\item By combining tools from stochastic geometry as well as queuing theory,
we minimize the per request weighted average delay by jointly optimizing bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the block coordinate descent (BCD) optimization technique, the joint minimization problem is solved in an iterative manner.
\item We validate our theoretical findings via simulations. Results show a significant improvement in the network performance metrics, namely, the offloading gain, energy consumption, and average delay as compared to other caching schemes proposed earlier in literature.
\end{itemize}
The rest of this paper is organized as follows. Section II and Section III discuss the system model and the offloading gain, respectively. The energy consumption is discussed in Section IV and the delay analysis is conducted in Section V. Numerical results are then presented in Section VI before we conclude the paper in Section VII.
\section{System Model}
\subsection{System Setup}
We model the location of the mobile devices with a \ac{TCP} in which the parent points are drawn from a \ac{PPP} $\Phi_p$ with density $\lambda_p$, and the daughter points are drawn from a Gaussian \ac{PPP} around each parent point. In fact, the \ac{TCP} is considered as a doubly \ac{PCP} where the daughter points are normally scattered in $\mathbb{R}^2$ with variance $\sigma^2$ around each parent point \cite{haenggi2012stochastic}.
The parent points and offspring are referred to as cluster centers and cluster members, respectively. The number of cluster members in each cluster is a Poisson random variable with mean $\overline{n}$. The density function of the location of a cluster member relative to its cluster center is
\begin{equation}
f_Y(y) = \frac{1}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad \quad y \in \mathbb{R}^2
\label{pcp}
\end{equation}
where $\lVert .\rVert$ is the Euclidean norm. The intensity function of a cluster is given by $\lambda_c(y) = \frac{\overline{n}}{2\pi\sigma^2}\textrm{exp}\big(-\frac{\lVert y\rVert^2}{2\sigma^2}\big)$. Therefore, the intensity of the entire process is given by $\lambda = \overline{n}\lambda_p$. We assume that the BSs' distribution follows another \ac{PPP} $\Phi_{bs}$ with density $\lambda_{bs}$, which is independent of $\Phi_p$.
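For intuition, one realization of this clustered deployment can be sampled as follows (a minimal Python sketch on a finite window, ignoring edge effects; the parameter values are illustrative):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_tcp(lam_p, n_bar, sigma, side):
    """One realization of the TCP: parents ~ PPP(lam_p) on a side x side
    window; each parent gets Poisson(n_bar) daughters scattered as
    N(0, sigma^2 I) around it (the Gaussian PPP of eq. (1))."""
    parents = rng.uniform(0, side, size=(rng.poisson(lam_p * side**2), 2))
    daughters = [x0 + sigma * rng.standard_normal((rng.poisson(n_bar), 2))
                 for x0 in parents]
    devices = np.vstack(daughters) if daughters else np.empty((0, 2))
    return parents, devices

# 20 clusters/km^2 = 20e-6 per m^2, n_bar = 5 devices, sigma = 10 m
parents, devices = sample_tcp(20e-6, 5, 10.0, 1000.0)
\end{verbatim}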
\subsection{Content Popularity and Probabilistic Caching Placement}
We assume that each device has a surplus memory of size $M$ designated for caching files.
The total number of files is $N_f> M$, and the set (library) of content indices is denoted as $\mathcal{F} = \{1, 2, \dots , N_f\}$. These files represent the content catalog that all the devices in a cluster may request, indexed in descending order of popularity. The probability that the $i$-th file is requested follows a Zipf's distribution given by,
\begin{equation}
q_i = \frac{ i^{-\beta} }{\sum_{k=1}^{N_f}k^{-\beta}},
\label{zipf}
\end{equation}
where $\beta$ is a parameter that reflects how skewed the popularity distribution is. For example, if $\beta= 0$, the popularity of the files has a uniform distribution. Increasing $\beta$ increases the disparity among the files' popularity such that lower-indexed files have higher popularity. By definition, $\sum_{i=1}^{N_f}q_i = 1$.
We use Zipf's distribution to model the popularity of files per cluster.
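For reference, a minimal Python sketch of (\ref{zipf}) (with illustrative library size and skewness):
\begin{verbatim}
import numpy as np

def zipf_popularity(n_f, beta):
    """Request probabilities q_i of the Zipf law; q[0] is the most popular file."""
    q = np.arange(1, n_f + 1) ** (-float(beta))
    return q / q.sum()

q = zipf_popularity(n_f=500, beta=1.0)   # sums to 1 by construction
\end{verbatim}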
\ac{D2D} communication is enabled within each cluster to deliver popular content. It is assumed that the devices adopt a slotted-ALOHA medium access protocol, where, during each time slot, each transmitter independently and randomly accesses the channel with the same probability $p$. This implies that multiple active \ac{D2D} links might coexist within a cluster. Therefore, $p$ is a design parameter that directly controls \textcolor{black}{(mainly)} the intra-cluster interference, as described later in the paper.
We adopt a random content placement where each device independently selects a file to cache according to a specific probability function $\textbf{b} = \{b_1, b_2, \dots, b_{N_{f}}\}$, where $b_i$ is the probability that a device caches the $i$-th file, $0 \leq b_i \leq 1$ for all $i=\{1, \dots, N_f\}$. To avoid duplicate caching of the same content within the memory of the same device, we follow a probabilistic caching approach proposed in \cite{blaszczyszyn2015optimal} and illustrated in Fig. \ref{prob_cache_example}.
\begin{figure}
\vspace{-0.4cm}
\begin{center}
\includegraphics[width=1.5in]{Figures/ch3/prob_cache_exam}
\caption {The cache memory of size $M = 3$ is equally divided into $3$ blocks of unit size. A random number $\in [0,1]$ is generated, and from each block the content $i$ whose $b_i$-segment intersects the generated random number is chosen. In this way, in the given example, the contents $\{1, 2, 4\}$ are chosen to be cached.}
\label{prob_cache_example}
\end{center}
\vspace{-0.8cm}
\end{figure}
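The construction of Fig. \ref{prob_cache_example} can be sketched in Python as follows (assuming, as required by the scheme of \cite{blaszczyszyn2015optimal}, that $\sum_i b_i = M$; every file $i$ is then cached with probability exactly $b_i$, and since $b_i\leq 1$ no file can be selected twice):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def probabilistic_placement(b, M):
    """Lay the probabilities b end to end over [0, M] (M unit blocks),
    draw one uniform u, and pick the content hit at position u + m
    inside block m; returns the set of cached file indices."""
    edges = np.concatenate(([0.0], np.cumsum(b)))  # file i occupies [edges[i], edges[i+1])
    u = rng.uniform()
    return {int(np.searchsorted(edges, m + u, side='right') - 1)
            for m in range(M)}
\end{verbatim}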
If a device caches the desired file, the device directly retrieves the content. However, if the device does not cache the file, the file can be downloaded from any neighboring device that caches the file (henceforth called catering device) in the same cluster. According to the proposed access model, the probability that a chosen catering device is admitted to access the channel is the access probability $p$. Finally, the device attaches to the nearest \ac{BS} as a last resort to download content that is not cached entirely within the device's cluster. We assume that the \ac{D2D} communication is operating as out-of-band D2D. \textcolor{black}{$W_{1}$ and $W_{2}$ denote respectively the bandwidth allocated to the \ac{D2D} and \ac{BS}-to-Device communication, and the total system bandwidth is denoted as $W=W_{1} + W_{2}$. It is assumed that device requests are served in a random manner, i.e., among the cluster devices, a random device request is chosen to be scheduled and content is served.}
In the following, we aim at studying and optimizing three important metrics, widely studied in the literature. The first metric is the offloading gain, defined as the probability of obtaining the requested file from the local cluster, either from the self-cache or from a neighboring device in the same cluster, with a rate higher than a required threshold $R_0$. The second metric is the energy consumption, which represents the energy dissipated when downloading files either from the \ac{BS}s or via \ac{D2D} communication. The third metric is the latency, which accounts for the weighted average delay over all the requests served via \ac{D2D} and \ac{BS}-to-Device communication.
\section{Maximum Offloading Gain}
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{Figures/ch3/distance1.png}
\vspace{-0.4cm}
\caption {Illustration of the representative cluster and one interfering cluster.}
\label{distance}
\vspace{-0.5cm}
\end{figure}
Without loss of generality, we conduct the analysis for a cluster whose center is at $x_0\in \Phi_p$ (referred to as representative cluster), and the device who requests the content (henceforth called typical device) is located at the origin. We denote the location of the \ac{D2D} transmitter by $y_0$ relative to $x_0$, where $x_0, y_0\in \mathbb{R}^2$. The distance from the typical device (\ac{D2D} receiver of interest) to this \ac{D2D} transmitter is denoted as $r=\lVert x_0+y_0\rVert$, which is a realization of a random variable $R$ whose distribution is described later. This setup is illustrated in Fig. \ref{distance}. It is assumed that a requested file is served from a randomly selected catering device, which is, in turn, admitted to access the channel based on the slotted-ALOHA protocol. The successful offloading probability is then given by
\begin{align}
\label{offloading_gain}
\mathbb{P}_o(p,\textbf{b}) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}})
\underbrace{\int_{r=0}^{\infty}f_R(r) \mathbb{P}(R_{1}(r)>R_0) \dd{r}}_{\mathbb{P}(R_1>R_0)},
\end{align}
where $R_{1}(r)$ is the achievable rate when downloading content from a catering device at a distance $r$ from the typical device with the distance \ac{PDF} $f_R(r)$. The first term on the right-hand side is the probability of requesting a locally cached file (self-cache), whereas the remaining term incorporates the probability that a requested file $i$ is cached among at least one cluster member and is downloadable with a rate greater than $R_0$. More precisely, since the number of devices per cluster has a Poisson distribution, the probability that there are $k$ devices per cluster is equal to $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$. Accordingly, the probability that there are $k$ devices caching content $i$ can be written as $\frac{(b_i\overline{n})^k e^{-b_i\overline{n}}}{k!}$. Hence, the probability that at least one device caches content $i$ is one minus the void probability (i.e., $k=0$), which equals $1 - e^{-b_i\overline{n}}$.
In the following, we first compute the probability $ \mathbb{P}(R_{1}(r)>R_0)$ given the distance $r$ between the typical device and a catering device, \textcolor{black}{then we conduct averaging over $r$ using the \ac{PDF} $f_R(r)$}. The received power at the typical device from a catering device located at $y_0$ relative to the cluster center is given by
\begin{align}
P &= P_d g_0 \lVert x_0+y_0\rVert^{-\alpha}= P_d g_0 r^{-\alpha}
\label{pwr}
\end{align}
where $P_d$ denotes the \ac{D2D} transmission power, \textcolor{black}{$g_0$ is the complex Gaussian fading channel coefficient between a catering device located at $y_0$ relative to its cluster center at $x_0$ and the typical device}, and $\alpha > 2$ is the path loss exponent. Under the above assumption, the typical device sees two types of interference, namely, the intra-and inter-cluster interference. We first describe the inter-cluster interference, then the intra-cluster interference is characterized. The set of active devices in any remote cluster is denoted as $\mathcal{B}^p$, where $p$ refers to the access probability. Similarly, the set of active devices in the local cluster is denoted as $\mathcal{A}^p$. Similar to (\ref{pwr}), the interference from the simultaneously active \ac{D2D} transmitters outside the representative cluster, at the typical device is given by
\begin{align}
I_{\Phi_p^{!}} &= \sum_{x \in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{y_x} \lVert x+y\rVert^{-\alpha}\\
& = \sum_{x\in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{u} u^{-\alpha}
\end{align}
where $\Phi_p^{!}=\Phi_p \setminus x_0$ for ease of notation, $y$ is the marginal distance between a potential interfering device and its cluster center at $x \in \Phi_p$ , $u = \lVert x+y\rVert$ is a realization of a random variable $U$ modeling the inter-cluster interfering distance (shown in Fig. \ref{distance}), \textcolor{black}{$g_{y_x} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading}, and $g_{u} = g_{y_x}$ for ease of notation. The intra-cluster interference is then given by
\begin{align}
I_{\Phi_c} &= \sum_{y\in \mathcal{A}^p} P_d g_{y_{x_0}} \lVert x_0+y\rVert^{-\alpha}\\
& = \sum_{y\in \mathcal{A}^p} P_d g_{h} h^{-\alpha}
\end{align}
where $y$ is the marginal distance between the intra-cluster interfering devices and the cluster center at $x_0 \in \Phi_p$, $h = \lVert x_0+y\rVert$ is a realization of a random variable $H$ modeling the intra-cluster interfering distance (shown in Fig. \ref{distance}), \textcolor{black}{$g_{y_{x_0}} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading, and $g_{h} = g_{y_{x_0}}$ for ease of notation}. From the thinning theorem \cite{haenggi2012stochastic}, the set of active transmitters following the slotted-ALOHA medium access forms a Gaussian \ac{PPP} $\Phi_{cp}$ whose intensity is given by
\begin{align}
\lambda_{cp} = p\lambda_{c}(y) = p\overline{n}f_Y(y) =\frac{p\overline{n}}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad y \in \mathbb{R}^2
\end{align}
Assuming that the thermal noise is negligible compared to the aggregate interference, the \ac{D2D} $\mathrm{SIR}$ at the typical device is written as
\begin{equation}
\gamma_{r}=\frac{P}{I_{\Phi_p^{!}} + I_{\Phi_c}} = \frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}}
\end{equation}
\textcolor{black}{A fixed-rate transmission model is adopted in our study}, where each transmitter (device or \ac{BS}) transmits at the fixed rate of log$_2[1+\theta]$ \SI{}{bits/sec/Hz}, where $\theta$ is a design parameter. Since the rate is fixed, the transmission is subject to outage due to fading and interference fluctuations. Consequently, the de facto average transmission rate (i.e., the average throughput) is given by
\begin{equation}
\label{rate_eqn}
R_i = W_i\textrm{ log$_{2}$}[1+ \theta]\mathrm{P_c},
\end{equation}
\textcolor{black}{where $i= 1,2$ for the \ac{D2D} and \ac{BS}-to-Device communication, respectively. $W_i$ is the bandwidth, $\theta$ is the pre-determined threshold for successful reception, $\mathrm{P_c} =\mathbb{E}(\textbf{1}\{\mathrm{SIR}>\theta\})$ is the coverage probability, and $\textbf{1}\{.\}$ is the indicator function. When served by a catering device $r$ apart from the origin, the achievable rate of the typical device under slotted-ALOHA medium access technique can be deduced from \cite[Equation (10)]{jing2012achievable} as}
\begin{equation}
\label{rate_eqn1}
R_{1}(r) = pW_{1} {\rm log}_2 \big(1 + \theta \big) \textbf{1}\{ \gamma_{r} > \theta\}
\end{equation}
Then, the probability $ \mathbb{P}(R_{1}(r)>R_0)$ is derived as follows.
\begin{align}
\mathbb{P}(R_{1}(r)>R_0)&= \mathbb{P} \Big(pW_{1} {\rm log}_2 (1 + \theta)\textbf{1}\{ \gamma_{r} > \theta\}
>R_0\Big) \nonumber \\
&=\mathbb{P} \Big(\textbf{1}\{ \gamma_{r} > \theta\}
>\frac{R_0}{pW_{1} {\rm log}_2 (1 + \theta )}\Big) \nonumber \\
&\overset{(a)}{=} \mathbb{P}\big(\gamma_r >\theta \big)=\mathbb{P}\Big(\frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}} > \theta\Big)
\end{align}
\textcolor{black}{where (a) follows from the assumption that $R_0 < pW_{1} {\rm log}_2 \big(1 + \theta \big)$, i.e., $\frac{R_0}{pW_{1} {\rm log}_2 \big(1 + \theta \big)}<1$, always holds; otherwise, it is infeasible to have $\mathbb{P}(R_{1}>R_0)$ greater than zero}. Rearranging the right-hand side, we get
\begin{align}
\mathbb{P}(R_{1}(r)>R_0)&= \mathbb{P}\Big( g_0 > \frac{\theta r^{\alpha}}{P_d} [I_{\Phi_p^{!}} + I_{\Phi_c}] \Big)
\nonumber \\
&\overset{(b)}{=}\mathbb{E}_{I_{\Phi_p^{!}},I_{\Phi_c}}\Big[\text{exp}\big(\frac{-\theta r^{\alpha}}{P_d}{[I_{\Phi_p^{!}} + I_{\Phi_c}] }\big)\Big]
\nonumber \\
\label{prob-R1-g-R0}
&\overset{(c)}{=} \mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)
\end{align}
where (b) follows from the assumption $g_0 \sim \mathcal{CN}(0,1)$, and (c) follows from the independence of the intra- and inter-cluster interference and the Laplace transform of them.
In what follows, we first derive the Laplace transform of interference to get $\mathbb{P}(R_{1}(r)>R_0)$. Then, we formulate the offloading gain maximization problem.
\begin{lemma}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_inter}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$, and $f_U(u|v) = \mathrm{Rice} (u| v, \sigma)$ represents Rice's \ac{PDF} of parameter $\sigma$, and $v=\lVert x\rVert$.
\end{lemma}
\begin{proof}
Please see Appendix A.
\end{proof}
\begin{lemma}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ can be approximated by
\begin{align}
\label{LT_intra}
\mathscr{L}_{I_{\Phi_c} }(s) \approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where $f_H(h) =\mathrm{Rayleigh}(h,\sqrt{2}\sigma)$ represents Rayleigh's \ac{PDF} with a scale parameter $\sqrt{2}\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix B.
\end{proof}
For the serving distance distribution $f_R(r)$, since both the typical device and a potential catering device have their locations drawn from a normal distribution with variance $\sigma^2$ around the cluster center, the serving distance by definition has a Rayleigh distribution with parameter $\sqrt{2}\sigma$, given by
\begin{align}
\label{rayleigh}
f_R(r)= \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}, \quad r>0
\end{align}
From (\ref{LT_inter}), (\ref{LT_intra}), and (\ref{rayleigh}), the offloading gain in (\ref{offloading_gain}) is written as
\begin{align}
\label{offloading_gain_1}
\mathbb{P}_o(p,\textbf{b}) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}}) \underbrace{\int_{r=0}^{\infty}
\frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)\dd{r}}_{\mathbb{P}(R_1>R_0)}.
\end{align}
Hence, the offloading gain maximization problem can be formulated as
\begin{align}
\label{optimize_eqn_p}
\textbf{P1:} \quad &\underset{p,\textbf{b}}{\text{max}} \quad \mathbb{P}_o(p,\textbf{b}) \\
\label{const110}
&\textrm{s.t.}\quad \sum_{i=1}^{N_f} b_i = M, \\
\label{const111}
& b_i \in [ 0, 1], \\
\label{const112}
& p \in [ 0, 1],
\end{align}
where (\ref{const110}) is the device cache size constraint, which is consistent with the illustration of the example in Fig. \ref{prob_cache_example}. \textcolor{black}{On the one hand, from the assumption that the fixed transmission rate $pW_{1} {\rm log}_2 \big(1 + \theta \big)$ is larger than the required threshold $R_0$, we have the condition $p>\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}$ on the access probability. On the other hand, from (\ref{prob-R1-g-R0}), with a further increase of the access probability $p$, the intra- and inter-cluster interference powers increase, and the probability $\mathbb{P}(R_{1}(r)>R_0)$ decreases accordingly. Intuitively, the optimal access probability for the offloading gain maximization is therefore chosen as $p^* = \frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)} + \epsilon$, where $\epsilon \to 0$. However, increasing the access probability $p$ above $p^*$ may lead to a higher \ac{D2D} average achievable rate $R_1$, as elaborated in the next section.} The obtained $p^*$ is now used to solve for the caching probability $\textbf{b}$ in the optimization problem below. Since $p$ and $\textbf{b}$ are separable in the structure of \textbf{P1}, it is possible to solve for $p^*$ first and then substitute to get $\textbf{b}^*$.
\begin{align}
\label{optimize_eqn_b_i}
\textbf{P2:} \quad &\underset{\textbf{b}}{\text{max}} \quad \mathbb{P}_o(p^*,\textbf{b}) \\
\label{const11}
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
The optimal caching probability is formulated in the next lemma.
\begin{lemma}
$\mathbb{P}_o(p^*,\textbf{b})$ is a concave function w.r.t. $\textbf{b}$ and the optimal caching probability $\textbf{b}^{*}$ that maximizes the offloading gain is given by
\[
b_{i}^{*}=\left\{
\begin{array}{ll}
1 \quad\quad\quad , v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)\\
0 \quad\quad\quad, v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)\\
\psi(v^{*}) \quad, {\rm otherwise}
\end{array}
\right.
\]
where $\psi(v^{*})$ is the solution of $v^{*}$ of (\ref{psii_offload}) in Appendix C that satisfies $\sum_{i=1}^{N_f} b_i^*=M$.
\end{lemma}
\begin{proof}
Please see Appendix C.
\end{proof}
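For completeness, the probability $\mathbb{P}(R_1>R_0)$ used throughout this section can be evaluated by a direct, if slow, numerical transcription of Lemmas 1 and 2 and the serving-distance \ac{PDF} (\ref{rayleigh}). In the sketch below, the power $P_d$ cancels in the $\mathrm{SIR}$ since all interferers are \ac{D2D} transmitters, so we work with $s=\theta r^{\alpha}$; the parameter values, including the access probability, are illustrative:
\begin{verbatim}
import numpy as np
from scipy import integrate, stats

alpha, sigma, theta = 4.0, 10.0, 1.0     # theta = 0 dB
lam_p, n_bar, p = 20e-6, 5, 0.2          # clusters/m^2, mean cluster size, ALOHA prob.

def L_inter(s):                          # Lemma 1: inter-cluster interference
    def phi(v):
        f = lambda u: s / (s + u**alpha) * stats.rice.pdf(u, b=v / sigma, scale=sigma)
        return integrate.quad(f, 0, np.inf, limit=200)[0]
    g = lambda v: (1 - np.exp(-p * n_bar * phi(v))) * v
    return np.exp(-2 * np.pi * lam_p * integrate.quad(g, 0, np.inf, limit=200)[0])

def L_intra(s):                          # Lemma 2: intra-cluster interference
    f = lambda h: s / (s + h**alpha) * stats.rayleigh.pdf(h, scale=np.sqrt(2) * sigma)
    return np.exp(-p * n_bar * integrate.quad(f, 0, np.inf, limit=200)[0])

def prob_R1_gt_R0():                     # average over the Rayleigh serving distance
    f = lambda r: (r / (2 * sigma**2) * np.exp(-r**2 / (4 * sigma**2))
                   * L_inter(theta * r**alpha) * L_intra(theta * r**alpha))
    return integrate.quad(f, 0, np.inf, limit=100)[0]
\end{verbatim}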
\section{Energy Consumption}
In this section, we formulate the energy consumption minimization problem for the clustered \ac{D2D} caching network. In fact, significant energy consumption occurs only when content is served via \ac{D2D} or \ac{BS}-to-Device transmission. We consider the time cost $c_{d_i}$ as the time it takes to download the $i$-th content from a neighboring device in the same cluster. Considering the size ${S}_i$ of the $i$-th ranked content, $c_{d_i}=S_i/R_1 $, where $R_1 $ denotes the average rate of the \ac{D2D} communication. Similarly, we have $c_{b_i} = S_i/R_2 $ when the $i$-th content is served by the \ac{BS} with average rate $R_2 $. The average energy consumption when downloading files by the devices in the representative cluster is given by
\begin{align}
\label{energy_avrg}
E_{av} = \sum_{k=1}^{\infty} E(\textbf{b}|k)\mathbb{P}(n=k)
\end{align}
where $\mathbb{P}(n=k)$ is the probability that there are $k$ devices in the representative cluster, \textcolor{black}{equal to $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$}, and $E(\textbf{b}|k)$ is the energy consumption conditioned on having $k$ devices in the cluster, written similarly to \cite{energy_efficiency} as
\begin{equation}
E(\textbf{b}|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\label{energy}
\end{equation}
where $\mathbb{P}_{j,i}^d$ and $\mathbb{P}_{j,i}^b$ represent the probability of obtaining the $i$-th content by the $j$-th device from the local cluster, i.e., via \ac{D2D} communication, and the BS, respectively. $P_b$ denotes the \ac{BS} transmission power. Given that there are $k$ devices in the cluster, it is obvious that $\mathbb{P}_{j,i}^b=(1-b_i)^{k}$, and $\mathbb{P}_{j,i}^d=(1 - b_i)\big(1-(1-b_i)^{k-1}\big)$.
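As a small illustration, $E(\textbf{b}|k)$ can be evaluated directly once the average rates $R_1$ and $R_2$ (computed next) are available; a Python sketch, with content sizes and rates in consistent units (bits and bits/sec here):
\begin{verbatim}
import numpy as np

def energy_given_k(b, q, S, k, Pd, Pb, R1, R2):
    """Conditional energy E(b|k): the hit probabilities are identical for
    all k devices, so the double sum reduces to k times the inner sum."""
    b, q, S = map(np.asarray, (b, q, S))
    P_d2d = (1 - b) * (1 - (1 - b)**(k - 1))   # i-th file fetched via D2D
    P_bs  = (1 - b)**k                         # i-th file fetched from the BS
    return k * np.sum(q * (P_d2d * Pd * S / R1 + P_bs * Pb * S / R2))
\end{verbatim}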
The average rates $R_1$ and $R_2$ are now computed to get a closed-form expression for $E(\textbf{b}|k)$.
From equation (\ref{rate_eqn}), we need to obtain the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ and \ac{BS}-to-Device coverage probability $\mathrm{P_{c_b}}$ to calculate $R_1$ and $R_2$, respectively. Given the number of devices $k$ in the representative cluster, the Laplace transform of the inter-cluster interference \textcolor{black}{is as obtained in (\ref{LT_inter})}. However, the intra-cluster interfering devices no longer represent a Gaussian \ac{PPP} since the number of devices is conditionally fixed, i.e., not a Poisson random number as before. To facilitate the analysis, for every realization $k$, we assume that the intra-cluster interfering devices form a Gaussian \ac{PPP} with intensity function given by $pkf_Y(y)$. Such an assumption is mandatory for analytical tractability.
From Lemma 2, the intra-cluster Laplace transform conditioning on $k$ can be approximated as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|k) &\approx {\rm exp}\Big(-pk \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\nonumber
\end{align}
and the conditional \ac{D2D} coverage probability is given by
\begin{align}
\label{p_b_d2d}
\mathrm{P_{c_d}} =
\int_{r=0}^{\infty} \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big|k\big)\dd{r}
\end{align}
%
\textcolor{black}{With the adopted slotted-ALOHA scheme, the access probability $p$ minimizing $E(\textbf{b}|k)$ is computed over the interval $[0,1]$ so as to maximize the \ac{D2D} achievable rate $R_1$ in (\ref{rate_eqn1}), with the condition $p>\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}$ holding to ensure that the probability $\mathbb{P}(R_{1}>R_0)$ is greater than zero.
As an illustrative example, in Fig.~\ref{prob_r_geq_r0_vs_p}, we plot the \ac{D2D} average achievable rate $R_1$ against the channel access probability $p$.
As evident from the plot, there is a certain access probability below which the rate $R_1$ tends to increase, since the channel access probability increases, and beyond which the rate $R_1$ decreases monotonically due to the effect of more interferers accessing the channel. In such a case, although we observe that increasing $p$ above $\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}=0.1$ improves the average achievable rate $R_1$, it comes at the price of a decreased $\mathbb{P}(R_{1}>R_0)$.}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/prob_r_geq_r0_vs_p2.eps}
\caption {\textcolor{black}{The \ac{D2D} average achievable rate $R_1$ versus the access probability $p$ ($\lambda_p = 20 \text{ clusters}/\SI{}{km}^2$, $\overline{n}=12$, $\sigma=\SI{30}{m}$, $\theta=\SI{0}{dB}$, $R_0/W_1=0.1\SI{}{bits/sec/Hz}$).}}
\label{prob_r_geq_r0_vs_p}
\end{center}
\vspace{-0.6cm}
\end{figure}
Analogously, under the \ac{PPP} $\Phi_{bs}$, and based on the nearest \ac{BS} association principle, it is shown in \cite{andrews2011tractable} that the \ac{BS} coverage probability can be expressed as
\begin{equation}
\mathrm{P_{c_b}} =\frac{1}{{}_2 F_1(1,-\delta;1-\delta;-\theta)},
\label{p_b_bs}
\end{equation}
where ${}_2 F_1(.)$ is the Gaussian hypergeometric function and $\delta = 2/\alpha$. Given the coverage probabilities $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ in (\ref{p_b_d2d}) and (\ref{p_b_bs}), respectively, $R_1$ and $R_2 $ can be calculated from (\ref{rate_eqn}), and hence $E(\textbf{b}|k)$ is expressed in a closed-form.
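Numerically, (\ref{p_b_bs}) is a one-liner; a minimal sketch:
\begin{verbatim}
from scipy.special import hyp2f1

alpha, theta = 4.0, 1.0               # path-loss exponent, SIR threshold (0 dB)
delta = 2.0 / alpha
P_cb = 1.0 / hyp2f1(1.0, -delta, 1.0 - delta, -theta)   # about 0.56 here
\end{verbatim}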
\subsection{Energy Consumption Minimization}
The energy minimization problem can be formulated as
\begin{align}
\label{optimize_eqn1}
&\textbf{P3:} \quad\underset{\textbf{b}}{\text{min}} \quad E(\textbf{b}|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\\
\label{const11}
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
In the next lemma, we prove the convexity condition for $E(\textbf{b}|k)$.
\begin{lemma}
\label{convex_E}
The energy consumption $E(\textbf{b}|k)$ is convex if $\frac{P_b}{R_2}>\frac{P_d}{R_1}$.
\end{lemma}
\begin{proof}
\textcolor{black}{We proceed by deriving the Hessian matrix of $E(\textbf{b}|k)$. The Hessian matrix of $E(\textbf{b}|k)$ w.r.t. the caching variables is \textbf{H}$_{i,j} = \frac{\partial^2 E(\textbf{b}|k)}{\partial b_i \partial b_j}$, $\forall i,j \in \mathcal{F}$. \textbf{H}$_{i,j}$ is a diagonal matrix whose $i$-th row and $j$-th column element is given by $k(k-1) S_i\Big(\frac{P_b}{R_2}-\frac{P_d}{R_1}\Big)q_i(1 - b_i)^{k-2}$.}
Since the obtained Hessian matrix is full-rank and diagonal, $\textbf{H}_{i,j}$ is positive semidefinite (and hence $E(\textbf{b}|k)$ is convex) if all the diagonal entries are nonnegative, i.e., when
$\frac{P_b}{R_2}>\frac{P_d}{R_1}$. In practice, it is reasonable to assume that $P_b \gg P_d$; as reported in \cite{ericsson}, the \ac{BS} transmission power is 100-fold the \ac{D2D} power.
\end{proof}
As a result of Lemma 3, the optimal caching probability can be computed to minimize $E(\textbf{b}|k)$.
\begin{lemma}
The optimal caching probability $\textbf{b}^{*}$ for the energy minimization problem \textbf{P3} is given by,
\begin{equation}
b_i^* = \Bigg[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2}\big)} \Big)^{\frac{1}{k-1}}\Bigg]^{+}
\label{energy11}
\end{equation}
where $v^{*}$ satisfies the maximum cache constraint $\sum_{i=1}^{N_f} \Big[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Big]^{+}=M$, and $[x]^+ =$ max$(x,0)$.
\end{lemma}
\begin{proof}
The proof proceeds in a similar manner to Lemma 3 and is omitted.
\end{proof}
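Numerically, the multiplier $v^*$ can be found by bisection, since $\sum_{i} b_i^*(v)$ is monotone in $v$. The following sketch assumes $k\geq 2$ and the convexity condition of Lemma \ref{convex_E}, and treats a nonpositive ratio inside the power as the corner case $b_i^*=1$:
\begin{verbatim}
import numpy as np

def energy_optimal_caching(q, S, k, Pd_R1, Pb_R2, M, iters=100):
    """Bisection on v* so that sum_i b_i*(v*) = M in the lemma above."""
    q, S = np.asarray(q), np.asarray(S)
    denom = k * q * S * (Pd_R1 - Pb_R2)          # negative under the convexity condition

    def b_of(v):
        ratio = (v + k**2 * q * S * Pd_R1) / denom
        return np.clip(1 - np.maximum(ratio, 0) ** (1.0 / (k - 1)), 0, 1)

    lo, hi = -1e9, 0.0                           # sum(b_of(v)) increases with v
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if b_of(mid).sum() > M else (mid, hi)
    return b_of(hi)
\end{verbatim}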
\begin{proposition} {\rm \textcolor{black}{By observing (\ref{energy11}), we can demonstrate the effects of content size and popularity on the optimal caching probability. $S_i$ appears in both the numerator and the denominator of the second term in (\ref{energy11}); however, its effect on the numerator is more significant due to the larger multiplier. The same property is observed for $q_i$. With the increase of $S_i$ or $q_i$, the magnitude of the second term in (\ref{energy11}) increases, and correspondingly, $b_i^*$ decreases. That is, a content with a larger size or a lower popularity has a smaller probability of being cached.}}
\end{proposition}
By substituting $b_i^*$ into (\ref{energy_avrg}), the average energy consumption per cluster is obtained. In the rest of the paper, we study and minimize the weighted average delay per request for the proposed system.
\section{Delay Analysis}
In this section, the delay analysis and minimization are discussed. A joint stochastic geometry and queueing theory model is exploited to study this problem. The delay analysis incorporates the study of a system of spatially interacting queues. To simplify the mathematical analysis, we further consider that only one \ac{D2D} link can be active within a cluster of $k$ devices, where $k$ is fixed. As shown later, such an assumption facilitates the analysis by deriving simple expressions. We begin by deriving the \ac{D2D} coverage probability under the above assumption, which is used later in this section.
\begin{lemma}
\label{coverage_lemma}
The \ac{D2D} coverage probability of the proposed clustered model with one active \ac{D2D} link within a cluster is given by
\begin{align}
\label{v_0}
\mathrm{P_{c_d}} = \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)} ,
\end{align}
where $Z(\theta,\alpha,\sigma) = (\pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2})$.
\end{lemma}
\begin{proof}
The result can be proved by using the displacement theory of the \ac{PPP} \cite{daley2007introduction}, and then proceeding in a similar manner to Lemma 1 and 2. We delegate this proof to the conference version of this paper \cite{amer2018minimizing}.
\end{proof}
In the following, we firstly describe the traffic model of the network, and then we formulate the delay minimization problem.
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/delay_queue3}
\caption {\textcolor{black}{The traffic model of request arrivals and departures in a given cluster. $Q_1$ and $Q_2$ are M/G/1 queues modeling requests served by \ac{D2D} and \ac{BS}-to-Device communication, respectively.}}
\label{delay_queue}
\end{center}
\vspace{-0.8cm}
\end{figure}
\subsection{Traffic Model}
We assume that the aggregate request arrival process from the devices in each cluster follows a Poisson arrival process with parameter $\zeta_{tot}$ (requests per time slot). As shown in Fig.~\ref{delay_queue}, the incoming requests are further divided according to where they are served from. $\zeta_{1}$ represents the arrival rate of requests served via the \ac{D2D} communication, whereas $\zeta_{2}$ is the arrival rate for those served from the \ac{BS}s. $\zeta_{3} = \zeta_{tot} - \zeta_{1} - \zeta_{2}$ denotes the arrival rate of requests served via the self-cache with zero delay. By definition, $\zeta_{1}$ and $\zeta_{2}$ are also Poisson arrival processes. Without loss of generality, we assume that the file size has a general distribution $G$ whose mean is denoted as $\overline{S}$ \SI{}{MBytes}. Hence, an M/G/1 queuing model is adopted whereby two non-interacting queues, $Q_1$ and $Q_2$, model the traffic in each cluster served via the \ac{D2D} and \ac{BS}-to-Device communication, respectively. Although $Q_1$ and $Q_2$ are non-interacting since the \ac{D2D} communication is assumed to be out-of-band, these two queues are spatially interacting with similar queues in other clusters. To recap, $Q_1$ and $Q_2$ are two M/G/1 queues with arrival rates $\zeta_{1}$ and $\zeta_{2}$, and service rates $\mu_1$ and $\mu_2$, respectively.
\subsection{Queue Dynamics}
It is worth highlighting that the two queues $Q_i$, $i \in \{1,2\}$, accumulate requests for files demanded by the cluster members, not the files themselves. First-in first-out (FIFO) scheduling is assumed, where a request that arrives first is scheduled first, either via the \ac{D2D} or the \ac{BS} communication according to whether the content is cached among the devices or not. The result of FIFO scheduling only relies on the time when the request arrives at the queue and is irrelevant to the particular device that issues the request. Given the parameter of the Poisson arrival process $\zeta_{tot}$, the arrival rates at the two queues are expressed respectively as
\begin{align}
\zeta_{1} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big), \\
\zeta_{2} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}
\end{align}
The network operation is depicted in Fig. \ref{delay_queue}, and described in detail below.
\begin{enumerate}
\item Given the memoryless property of the arrival process (Poisson arrival) along with the assumption that the service process is independent of the arrival process,
the number of requests in any queue at a future time only depends upon the current number in the system (at time $t$) and the arrivals or departures that occur within the interval $e$.
\begin{align}
Q_{i}(t+e) = Q_{i}(t) + \Lambda_{i}(e) - M_{i}(e)
\end{align}
where $\Lambda_{i}(e)$ is the number of arrivals in the time interval $(t,t+e)$, whose mean is $\zeta_i$ \SI{}{sec}$^{-1}$, and $M_{i}(e)$ is the number of departures in the time interval $(t,t+e)$, whose mean is $\mu_i = \frac{\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})W_i{\rm log}_2(1+\theta)}{\overline{S}}$ \SI{}{sec}$^{-1}$. It is worth highlighting that, unlike the spatial-only model studied in the previous sections, the term $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ is dependent on the traffic dynamics \textcolor{black}{since a request being served in a given cluster is interfered only from other clusters that also have requests to serve}. What is more noteworthy is that the mean service time $\tau_i = \frac{1}{\mu_i}$ follows the same distribution as the file size. These aspects will be revisited later in this section.
\item $\Lambda_{i}(e)$ is dependent only on $e$ because the arrival process is Poisson. $M_{i}(e)$ is $0$ if the service time of the file being served $\epsilon_i >e$. $M_{i}(e)$ is 1 if $\epsilon_1 <e$ and $\epsilon_2 + \epsilon_1>e$, and so on. As the service times $\epsilon_1, \epsilon_2, \dots, \epsilon_n$ are independent, neither $\Lambda_{i}(e)$ nor $M_{i}(e)$ depends on what happened prior to $t$. Thus, $Q_{i}(t+e)$ only depends upon $Q_{i}(t)$ and not the past history. Hence it is a \ac{CTMC} which obeys the stability conditions in \cite{szpankowski1994stability}.
\end{enumerate}
The following proposition provides the sufficient conditions for the stability of the buffers in the sense defined in \cite{szpankowski1994stability}, i.e., $\{Q_{i}\}$ has a limiting distribution for $t \rightarrow \infty$.
\begin{proposition} {\rm The \ac{D2D} and \ac{BS}-to-Device traffic modeling queues are stable, respectively, if and only if}
\begin{align}
\label{stable1}
\zeta_{1} < \mu_1 &= \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}} \\
\label{stable2}
\zeta_{2} < \mu_2 & =\frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}
\end{align}
\end{proposition}
\begin{proof}
We show sufficiency by proving that (\ref{stable1}) and (\ref{stable2}) guarantee stability in a dominant network, where all queues that have empty buffers make dummy transmissions. The dominant network is a fictitious system that is identical to the original system, except that terminals may choose to transmit even when their respective buffers are empty, in which case they simply transmit a dummy packet. If both systems are started from the same initial state and fed with the same arrivals, then the queues in the fictitious dominant system can never be shorter than the queues in the original system.
\textcolor{black}{Similar to the spatial-only network, in the dominant system, the typical receiver is seeing an interference from all other clusters whether they have requests to serve or not (dummy transmission).} This dominant system approach yields $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ equal to $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ for the \ac{D2D} and \ac{BS}-to-Device communication, respectively. Also, the obtained delay is an upper bound for the actual delay of the system. The necessity of (\ref{stable1}) and (\ref{stable2}) is shown as follows: If $\zeta_i>\mu_i$, then, by Loynes' theorem \cite{loynes1962stability}, it follows that lim$_{t\rightarrow \infty}Q_i(t)=\infty$ (a.s.) for all queues in the dominant network.
\end{proof}
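To make the stability conditions concrete, the following sketch computes $\mu_1$ and $\mu_2$ from Lemma \ref{coverage_lemma} and (\ref{p_b_bs}); the equal bandwidth split and the mean content size in bits are assumptions made for illustration:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, hyp2f1

alpha, sigma, theta, lam_p = 4.0, 10.0, 1.0, 20e-6
W1, W2, S_bar = 10e6, 10e6, 5e6          # Hz, Hz, bits (illustrative)

delta = 2 / alpha
Z = np.pi * lam_p * theta**delta * gamma(1 + delta) * gamma(1 - delta) \
    + 1 / (4 * sigma**2)
P_cd = 1 / (4 * sigma**2 * Z)            # D2D coverage, one active link per cluster
P_cb = 1 / hyp2f1(1, -delta, 1 - delta, -theta)   # BS coverage

mu1 = P_cd * W1 * np.log2(1 + theta) / S_bar      # requests/sec via D2D
mu2 = P_cb * W2 * np.log2(1 + theta) / S_bar      # requests/sec via the BS

def stable(zeta1, zeta2):
    """Proposition 2: both queues are stable iff zeta_i < mu_i."""
    return zeta1 < mu1 and zeta2 < mu2
\end{verbatim}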
Next, we conduct the analysis for the dominant system whose parameters are as follows. The content size has an exponential distribution of mean $\overline{S}$ \SI{}{MBytes}. The service times also obey an exponential distribution with means $\tau_1 = \frac{\overline{S}}{R_1}$ \SI{}{seconds} and $\tau_2 = \frac{\overline{S}}{R_2}$ \SI{}{seconds}. The rates $R_1$ and $R_2$ are calculated from (\ref{rate_eqn}) where $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ are from (\ref{v_0}) and (\ref{p_b_bs}), respectively. Accordingly, $Q_1$ and $Q_2$ are two continuous time independent (non-interacting) M/M/1 queues with service rates $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$ and $\mu_2 = \frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}$ \SI{}{sec}$^{-1}$, respectively.
\begin{proposition} {\rm The mean queue length $L_i$ of the $i$-th queue is given by}
\begin{align}
\label{queue_len}
L_i &= \rho_i + \frac{2\rho_i^2}{2(1 - \rho_i)},
\end{align}
\end{proposition}
\begin{proof}
We can easily calculate $L_i$ by observing that $Q_i$ are continuous time M/M/1 queues with arrival rates $\zeta_i$, service rates $\mu_i$, and traffic intensities $\rho_i = \frac{\zeta_i}{\mu_i}$. Then, by applying the Pollaczek-Khinchine formula \cite{Kleinrock}, $L_i$ is directly obtained.
\end{proof}
The average delay per request for each queue is calculated from
\begin{align}
D_1 &= \frac{L_1}{\zeta_1}= \frac{1}{\mu_1 - \zeta_1} = \frac{1}{W_1\mathcal{O}_{1} - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} \\
D_2 &= \frac{L_2}{\zeta_2}=\frac{1}{\mu_2 - \zeta_2} = \frac{1}{W_2\mathcal{O}_{2} - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
where $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, $\mathcal{O}_{2}= \frac{\mathrm{P_{c_b}} {\rm log}_2(1+\theta)}{\overline{S}}$ for notational simplicity. The weighted average delay $D$ is then expressed as
\begin{align}
D&= \frac{\zeta_{1}D_1 + \zeta_{2}D_2}{\zeta_{tot}} \nonumber \\
&= \frac{\sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)}{ \mathcal{O}_{1}W_1 - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} + \frac{\sum_{i=1}^{N_f}q_i (1-b_i)^{k}}{ \mathcal{O}_{2}W_2 - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
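The weighted average delay above is straightforward to evaluate; a minimal sketch, returning an infinite delay whenever a stability condition is violated:
\begin{verbatim}
import numpy as np

def weighted_delay(b, W1, q, k, zeta_tot, O1, O2, W):
    """D(b, W1) with O1 = P_cd log2(1+theta)/S_bar and
    O2 = P_cb log2(1+theta)/S_bar."""
    bb = 1 - np.asarray(b)
    zeta1 = zeta_tot * np.sum(q * (bb - bb**k))   # served via D2D
    zeta2 = zeta_tot * np.sum(q * bb**k)          # served via the BS
    mu1, mu2 = O1 * W1, O2 * (W - W1)
    if zeta1 >= mu1 or zeta2 >= mu2:
        return np.inf                             # unstable queue
    return (zeta1 / (mu1 - zeta1) + zeta2 / (mu2 - zeta2)) / zeta_tot
\end{verbatim}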
One important insight from the delay equation is that the caching probability $\textbf{b}$ controls the arrival rates $\zeta_{1}$ and $\zeta_{2}$ while the bandwidth determines the service rates $\mu_1$ and $\mu_2$. Therefore, it turns out to be of paramount importance to jointly optimize $\textbf{b}$ and $W_1$ to minimize the average delay. One relevant work is carried out in \cite{tamoor2016caching} where the authors investigate the storage-bandwidth tradeoffs for small cell \ac{BS}s that are subject to storage constraints. Subsequently, we formulate the weighted average delay joint caching and bandwidth minimization problem as
\begin{align}
\label{optimize_eqn3}
\textbf{P4:} \quad \quad&\underset{\textbf{b},{\rm W}_1}{\text{min}} \quad D(\textbf{b},W_1) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber \\
& 0 \leq W_1 \leq W, \\
\label{stab1}
&\zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big) < \mu_1, \\
\label{stab2}
&\zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k} < \mu_2,
\end{align}
where constraints (\ref{stab1}) and (\ref{stab2}) are the stability conditions for the queues $Q_1$ and $Q_2$, respectively. Although the objective function of \textbf{P4} is convex w.r.t. $W_1$, as derived below, the coupling of the optimization variables $\textbf{b}$ and $W_1$ makes \textbf{P4} a non-convex optimization problem. Therefore, \textbf{P4} cannot be solved directly using standard convex optimization techniques.
\textcolor{black}{By applying the \ac{BCD} optimization technique, \textbf{P4} can be solved in an iterative manner as follows. First, for a given caching probability $\textbf{b}$, we solve the bandwidth allocation subproblem. Afterwards, the obtained optimal bandwidth is used to update $\textbf{b}$}. The optimal bandwidth for the bandwidth allocation subproblem is given in the next lemma.
\begin{lemma}
The objective function of \textbf{P4} in (\ref{optimize_eqn3}) is convex w.r.t. $W_1$, and the optimal bandwidth allocation to the \ac{D2D} communication is given by
\begin{align}
\label{optimal-w-1}
W_1^* = \frac{\zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k}) +\varpi \big(\mathcal{O}_{2}W - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)}{\mathcal{O}_{1}+\varpi\mathcal{O}_{2}},
\end{align}
where $\overline{b}_i = 1 - b_i$ and $\varpi=\sqrt{\frac{\mathcal{O}_{1}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})}{\mathcal{O}_{2} \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}}}$.
\end{lemma}
\begin{proof}
$D(\textbf{b},W_1)$ can be written as
\begin{align}
\label{optimize_eqn3_p1}
\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-1} + \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-1}, \nonumber
\end{align}
The second derivative $\frac{\partial^2 D(\textbf{b},W_1)}{\partial W_1^2}$ is hence given by
\begin{align}
2\mathcal{O}_{1}^2\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-3} + 2\mathcal{O}_{2}^2\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-3}, \nonumber
\end{align}
The stability conditions require that $\mu_1 = \mathcal{O}_{1}W_1 > \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})$ and $\mu_2 =\mathcal{O}_{2}W_2 > \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}$. Also, $\overline{b}_i \geq \overline{b}_i^{k}$ by definition. Hence, $\frac{\partial^2 D(\textbf{b},W_1)}{\partial W_1^2} > 0$, and the objective function is a convex function of $W_1$. The optimal bandwidth allocation can be obtained from the \ac{KKT} conditions similar to problems \textbf{P2} and \textbf{P3}, with the details omitted for brevity.
\end{proof}
Given $W_1^*$ from the bandwidth allocation subproblem, the caching probability subproblem can be written as
\begin{align}
\textbf{P5:} \quad \quad&\underset{\textbf{b}}{\text{min}} \quad D(\textbf{b},W_1^*) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}), (\ref{stab1}), (\ref{stab2}) \nonumber
\end{align}
The caching probability subproblem \textbf{P5} is a sum of two fractional functions, where the first fraction is in the form of a concave over convex functions while the second fraction is in the form of a convex over concave functions. The first fraction structure, i.e., concave over convex functions, renders solving this problem using fractional programming (FP) very challenging.\footnote{A quadratic transform technique for tackling the multiple-ratio concave-convex FP problem is recently used to solve a minimization of fractional functions that has the form of convex over concave functions, whereby an equivalent problem is solved with the objective function reformulated as a difference between convex minus concave functions \cite{shen2018fractional}.}
\textcolor{black}{Moreover, the constraint (\ref{stab1}) is concave w.r.t. $\textbf{b}$.
Hence, we adopt the interior point method to obtain a locally optimal solution of $\textbf{b}$ given the optimal bandwidth $W_1^*$, which depends on the initial value input to the algorithm \cite{boyd2004convex}. Nonetheless, we can increase the probability of finding a near-optimal solution of problem \textbf{P5} by running the interior point method from multiple random initial values and then picking the solution with the lowest weighted average delay. The explained procedure is repeated until the value of \textbf{P4}'s objective function converges to a pre-specified accuracy.}
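A compact sketch of the described procedure is given below; SLSQP stands in for the interior point step (an implementation convenience, not the solver adopted above), stability is enforced crudely through an infinite objective value, and the random restarts mirror the multiple initial values just discussed:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def bcd(q, k, zeta_tot, O1, O2, W, M, rounds=20, restarts=5, seed=0):
    rng = np.random.default_rng(seed)
    nf = len(q)

    def split(b):                       # arrival rates zeta_1, zeta_2
        bb = 1 - b
        return (zeta_tot * np.sum(q * (bb - bb**k)),
                zeta_tot * np.sum(q * bb**k))

    def delay(b, W1):                   # objective of P4
        z1, z2 = split(b)
        m1, m2 = O1 * W1, O2 * (W - W1)
        if z1 >= m1 or z2 >= m2:
            return np.inf               # crude stability handling
        return (z1 / (m1 - z1) + z2 / (m2 - z2)) / zeta_tot

    def w1_star(b):                     # closed-form W1*(b) of the subproblem
        z1, z2 = split(b)
        varpi = np.sqrt(O1 * z1 / (O2 * z2))
        return (z1 + varpi * (O2 * W - z2)) / (O1 + varpi * O2)

    best = (None, None, np.inf)
    for _ in range(restarts):
        b = rng.random(nf); b *= M / b.sum()
        b = np.clip(b, 0, 1)            # approximately feasible start
        for _ in range(rounds):
            W1 = w1_star(b)
            b = minimize(lambda x: delay(x, W1), b, method='SLSQP',
                         bounds=[(0, 1)] * nf,
                         constraints=[{'type': 'eq',
                                       'fun': lambda x: x.sum() - M}]).x
        d = delay(b, w1_star(b))
        if d < best[2]:
            best = (b, w1_star(b), d)
    return best
\end{verbatim}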
\section{Numerical Results}
\begin{table}[ht]
\caption{Simulation Parameters}
\centering
\begin{tabular}{c c c}
\hline\hline
Description & Parameter & Value \\ [0.5ex]
\hline
\textcolor{black}{System bandwidth} & W & \SI{20}{\mega\hertz} \\
\ac{BS} transmission power & $P_b$ & \SI{43}{\deci\bel\of{m}} \\
\ac{D2D} transmission power & $P_d$ & \SI{23}{\deci\bel\of{m}} \\
Displacement standard deviation & $\sigma$ & \SI{10}{\metre} \\
Popularity index&$\beta$&1\\
Path loss exponent&$\alpha$&4\\
Library size&$N_f$&500 files\\
Cache size per device&$M$&10 files\\
Average number of devices per cluster&$\overline{n}$&5\\
Density of $\Phi_p$&$\lambda_{p}$&20 clusters/\SI{}{km}$^2$ \\
Average content size&$\textcolor{black}{\overline{S}}$&\SI{5}{MBits} \\
$\mathrm{SIR}$ threshold&$\theta$&\SI{0}{\deci\bel}\\
\textcolor{black}{Total request arrival rate}&$\zeta_{tot}$&\SI{2}{request/sec}\\
\hline
\end{tabular}
\label{ch3:table:sim-parameter}
\end{table}
First, we validate the developed mathematical model via Monte Carlo simulations. Then, we benchmark the proposed caching scheme against conventional caching schemes. Unless otherwise stated, the network parameters are selected as shown in Table \ref{ch3:table:sim-parameter}.
\subsection{Offloading Gain Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/prob_r_geq_r0.eps}
\caption {The probability that the \ac{D2D} achievable rate is greater than a threshold $R_0$ versus standard deviation $\sigma$.}
\label{prob_r_geq_r0}
\end{center}
\vspace{-0.5cm}
\end{figure}
In this subsection, we present the offloading gain performance for the proposed caching model.
In Fig.~\ref{prob_r_geq_r0}, we verify the accuracy of the analytical results for the probability $\mathbb{P}(R_1>R_0)$. The theoretical and simulated results are plotted together, and they are consistent. We can observe that the probability $\mathbb{P}(R_1>R_0)$ decreases monotonically with the increase of $\sigma$. This is because, as $\sigma$ increases, the serving distance increases and the inter-cluster interfering distance between out-of-cluster interferers and the typical device decreases; equivalently, the $\mathrm{SIR}$ decreases. It is also shown that $\mathbb{P}(R_1>R_0)$ decreases with the $\mathrm{SIR}$ threshold $\theta$, as the channel becomes more prone to outage when the $\mathrm{SIR}$ threshold $\theta$ increases.
\begin{figure*}
\centering
\subfigure[$p=p^*$. ]{\includegraphics[width=2.0in]{Figures/ch3/histogram_b_i_p_star1.eps}
\label{histogram_b_i_p_star}}
\subfigure[$p > p^*$.]{\includegraphics[width=2.0in]{Figures/ch3/histogram_b_i_p_leq_p_star1.eps}
\label{histogram_b_i_p_leq_p_star}}
\caption{Histogram of the optimal caching probability $\textbf{b}^*$ when (a) $p=p^*$ and (b) $p > p^*$.}
\label{histogram_b_i}
\vspace{-0.5cm}
\end{figure*}
To show the effect of $p$ on the caching probability, in Fig.~\ref{histogram_b_i}, we plot the histogram of the optimal caching probability at different values of $p$, where $p=p^*$ in Fig.~\ref{histogram_b_i_p_star} and $p>p^*$ in Fig.~\ref{histogram_b_i_p_leq_p_star}. It is clear from the histograms that the optimal caching probability $\textbf{b}^*$ tends to be more skewed when $p > p^*$, i.e., when $\mathbb{P}(R_1>R_0)$ decreases. This shows that file sharing becomes more difficult when $p$ is larger than the optimal access probability. More precisely, for $p>p^*$, the outage probability is high due to the aggressive interference. In such a low coverage probability regime, each device tends to cache the most popular files, leading to fewer opportunities for content transfer between the devices.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/offloading_prob_cach_compare1.eps}
\caption {The offloading probability versus the popularity exponent $\beta$ under different caching schemes, namely, \ac{PC}, Zipf, and \ac{CPF}, $\overline{n}=10$, $\sigma=$\SI{5}{\metre}.}
\label{offloading_gain_vs_beta}
\end{center}
\vspace{-0.5cm}
\end{figure}
Last but not least, Fig.~\ref{offloading_gain_vs_beta} manifests the prominent effect of the files' popularity on the offloading gain. We compare the offloading gain of three different caching schemes, namely, the proposed \ac{PC}, Zipf's caching (Zipf), and \ac{CPF}. We can see that the offloading gain under the \ac{PC} scheme attains the best performance as compared to the other schemes. Also, we note that the \ac{PC} and Zipf schemes attain the same offloading gain when $\beta=0$ owing to the uniformity of content popularity.
\subsection{\textcolor{black}{Energy Consumption Results}}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/energy_vs_beta7.eps}
\caption {Normalized energy consumption versus popularity exponent $\beta$.}
\label{energy_vs_beta}
\end{center}
\vspace{-0.5cm}
\end{figure}
This part presents the results for the energy consumption.
Fig.~\ref{energy_vs_beta} shows the energy consumption, \textcolor{black}{normalized to the mean number of devices per cluster}, versus $\beta$ under different caching schemes, namely, \ac{PC}, Zipf, and \ac{CPF}. We can see that the minimized energy consumption under the proposed \ac{PC} scheme attains the best performance as compared to the other schemes. Also, it is clear that the consumed energy decreases with $\beta$. This can be justified by the fact that as $\beta$ increases, a smaller set of files accounts for most of the requests, and these files are more likely to be cached among the devices under the \ac{PC}, \ac{CPF}, and Zipf schemes. These few files are therefore downloadable from the devices via low power \ac{D2D} communication.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/energy_vs_n6.eps}
\caption {\textcolor{black}{Normalized energy consumption versus the mean number of devices per cluster.}}
\label{energy_vs_n}
\end{center}
\vspace{-0.5cm}
\end{figure}
We plot the normalized energy consumption versus the \textcolor{black}{mean number of devices per cluster} in Fig.~\ref{energy_vs_n}. First, we see that energy consumption decreases with the mean number of devices per cluster. As the number of devices per cluster increases, it is more probable to obtain requested files via low power \ac{D2D} communication. When the number of devices per cluster is relatively large, the normalized energy consumption tends to flatten as most of the content becomes cached at the cluster devices.
\subsection{Delay Results}
\begin{figure*} [!h]
\centering
\subfigure[Weighted average delay versus the popularity exponent $\beta$. ]{\includegraphics[width=2.0in]{Figures/ch3/delay_compare3.eps}
\label{delay_compare}}
\subfigure[Optimal allocated bandwidth versus the popularity exponent $\beta$.]{\includegraphics[width=2.0in]{Figures/ch3/BW_compare4.eps}
\label{BW_compare}}
\caption{\textcolor{black}{Evaluation and comparison of average delay for the proposed joint \ac{PC} and bandwidth allocation scheme with the Zipf's baseline scheme against popularity exponent $\beta$, $N_f = 100$, $M = 4$, $k = 8$.}}
\label{delay-analysis}
\vspace{-0.5cm}
\end{figure*}
\textcolor{black}{The results in this part are devoted to the average delay metric. The performance of the proposed joint \ac{PC} and bandwidth allocation scheme is evaluated in Fig.~\ref{delay-analysis}, and the optimized bandwidth allocation is also shown. Firstly, in Fig.~\ref{delay_compare}, we compare the average delay for two different caching schemes, namely, \ac{PC} and Zipf's scheme. We can see that the minimized average delay under the proposed joint \ac{PC} and bandwidth allocation scheme attains substantially better performance as compared to the Zipf's scheme with fixed bandwidth allocation (i.e., $W_1=W_2=W/2$). Also, we see that, in general, the average delay monotonically decreases with $\beta$, since fewer files then account for most of the demand.
Secondly, Fig.~\ref{BW_compare} manifests the effect of the files' popularity $\beta$ on the allocated bandwidth. It is shown that the optimal \ac{D2D} allocated bandwidth $W_1^*$ increases monotonically with $\beta$. This can be interpreted as follows. When $\beta$ increases, fewer files become highly demanded. These files can be entirely cached among the devices. To cope with the correspondingly larger number of requests served via the \ac{D2D} communication, the \ac{D2D} allocated bandwidth needs to be increased.}
\textcolor{black}{Fig.~\ref{scaling} shows the geometrical scaling effects on the system performance, e.g., the effect of clusters' density $\lambda_p$ and the displacement standard deviation $\sigma$ on the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$, optimal allocated bandwidth $W_1^*$, and the average delay. In Fig.~\ref{cache_size}, we plot the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$. It is clear from the plot that $\mathrm{P_{c_d}}$ monotonically decreases with both $\sigma$ and $\lambda_p$. Obviously, increasing $\sigma$ and $\lambda_p$ results in larger serving distance, i.e., higher path-loss effect, and shorter interfering distance, i.e., higher interference power received by the typical device, respectively. This explains the encountered degradation for $\mathrm{P_{c_d}}$ with $\sigma$ and $\lambda_p$. }
In Fig.~\ref{optimal-w}, we plot the optimal allocated bandwidth $W_1^*$, normalized to $W$, versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$. Here too, $W_1^*$ tends to increase with both $\sigma$ and $\lambda_p$. This behavior can be directly understood from (\ref{optimal-w-1}), where $W_1^*$ is inversely proportional to $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, and $\mathrm{P_{c_d}}$ decreases with $\sigma$ and $\lambda_p$ as discussed above. More precisely, while the \ac{D2D} service rate $\mu_1$ tends to decrease with the decrease of $\mathrm{P_{c_d}}$ since $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$, the optimal allocated bandwidth $W_1^*$ tends to increase with the decrease of $\mathrm{P_{c_d}}$ to compensate for the service rate degradation, thereby minimizing the weighted average delay.
In Fig.~\ref{av-delay}, we plot the weighted average delay versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$. Following the same interpretations as in Fig.~\ref{cache_size} and Fig.~\ref{optimal-w}, we can notice that the weighted average delay monotonically increases with $\sigma$ and $\lambda_p$ due to the decrease of the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ and the \ac{D2D} service rate $\mu_1$ with $\sigma$ and $\lambda_p$.
\begin{figure*} [t!]
\vspace{-0.5cm}
\centering
\subfigure[\ac{D2D} coverage probability $\mathrm{P_{c_d}}$ versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/d2d-coverage-prob.eps}
\label{cache_size}}
\subfigure[Optimal allocated bandwidth versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/bw-allocated.eps}
\label{optimal-w}}
\subfigure[Weighted average delay versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/delay-sigma-lamda.eps}
\label{av-delay}}
\caption{Effect of geometrical parameters, e.g., clusters' density $\lambda_p$ and the displacement standard deviation $\sigma$ on the system performance, $\beta= 0.5$, $N_f = 100$, $M = 4$, $k = 8$.}
\vspace{-0.6cm}
\label{scaling}
\end{figure*}
\section{Conclusion}
In this work, we conduct a comprehensive analysis of the joint communication and caching for a clustered \ac{D2D} network with random probabilistic caching incorporated at the devices. We first maximize the offloading gain of the proposed system by jointly optimizing the channel access and caching probability. We obtain the optimal channel access probability, and the optimal caching probability is then characterized. We show that deviating from the optimal access probability $p^*$ makes file sharing more difficult. More precisely, the system is too conservative for small access probabilities, while the interference is too aggressive for larger access probabilities. Then, we minimize the energy consumption of the proposed clustered \ac{D2D} network. We formulate the energy minimization problem, show that it is convex, and obtain the optimal caching probability. We show that a content with a large size or low popularity has a small probability to be cached. Finally, we adopt a queuing model for the devices' traffic within each cluster to investigate the network average delay. Two M/G/1 queues are employed to model the \ac{D2D} and \ac{BS}-to-Device communications. We then derive an expression for the weighted average delay per request. We observe that the average delay depends on the caching probability and the allocated bandwidth, which control the arrival rates and service rates of the two modeling queues, respectively. Therefore, we minimize the per request weighted average delay by jointly optimizing the bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the \ac{BCD} optimization technique, the joint minimization problem can be solved in an iterative manner. Results show up to $10\%$, $17\%$, and $300\%$ improvement in the offloading gain, energy consumption, and average delay, respectively, compared to the Zipf's caching technique.
\begin{appendices}
\section{Proof of Lemma 1}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ can be evaluated as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= \mathbb{E} \Bigg[e^{-s \sum_{\Phi_p^{!}} \sum_{y \in \mathcal{B}^p} g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\Phi_{cp},g_{y_{x}}} \prod_{y \in \mathcal{B}^p} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\Phi_{cp}} \prod_{y \in \mathcal{B}^p} \mathbb{E}_{g_{y_{x}}} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(a)}{=} \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\Phi_{cp}} \prod_{y \in \mathcal{B}^p} \frac{1}{1+s \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(b)}{=} \mathbb{E}_{\Phi_p} \prod_{\Phi_p^{!}} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big)\Big)\dd{x}\Bigg)
\end{align}
where (a) follows from the Rayleigh fading assumption, (b)
follows from the \ac{PGFL} of the Gaussian \ac{PPP} $\Phi_{cp}$, and (c) follows from the \ac{PGFL} of the parent \ac{PPP} $\Phi_p$. By using the change of variables $z = x + y$ with $\dd z = \dd y$, we proceed as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &\overset{}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z\rVert^{-\alpha}}\Big)f_Y(z-x)\dd{z}\Big)\Big)\dd{x}\Bigg) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{u=0}^{\infty}\Big(1 - \frac{1}{1+s u^{-\alpha}}\Big)f_U(u|v)\dd{u}\Big)\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\big(-p\overline{n} \int_{u=0}^{\infty}
\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}\big)\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where (d) follows from converting the Cartesian coordinates to polar coordinates with $u=\lVert z\rVert$, and $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$.
\textcolor{black}{To clarify how in (d) the normal distribution $f_Y(z-x)$ is converted to the Rice distribution $f_U(u|v)$, consider a remote cluster centered at $x \in \Phi_p^!$, with a distance $v=\lVert x\rVert$ from the origin. Every interfering device belonging to the cluster centered at $x$ has its coordinates in $\mathbb{R}^2$ chosen independently from a Gaussian distribution with standard deviation $\sigma$. Then, by definition, the distance from such an interfering device to the origin, denoted as $u$, has a Rice distribution, denoted as $f_U(u|v)=\frac{u}{\sigma^2}\mathrm{exp}\big(-\frac{u^2 + v^2}{2\sigma^2}\big) I_0\big(\frac{uv}{\sigma^2}\big)$, where $I_0$ is the modified Bessel function of the first kind with order zero and $\sigma$ is the scale parameter.
Hence, Lemma 1 is proven.}
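As a quick numerical sanity check of this step (not part of the proof), the sketch below empirically compares the distance from a Gaussian-displaced interferer to the origin against the Rice density $f_U(u|v)$; the parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.stats import rice

sigma, v = 10.0, 25.0                    # scale parameter, cluster distance
rng = np.random.default_rng(0)
pts = np.array([v, 0.0]) + rng.normal(0, sigma, (200000, 2))
u = np.linalg.norm(pts, axis=1)          # distances to the origin

# empirical density vs. the Rice pdf with shape b = v/sigma and scale sigma
hist, edges = np.histogram(u, bins=60, density=True)
mids = 0.5 * (edges[:-1] + edges[1:])
pdf = rice.pdf(mids, b=v / sigma, scale=sigma)
print(np.max(np.abs(hist - pdf)))        # small value -> close agreement
\end{verbatim}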
\section{Proof of Lemma 2}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$, conditioned on the distance $v_0$ from the cluster center to the origin (see Fig.~\ref{distance}), is written as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &= \mathbb{E} \Bigg[e^{-s \sum_{y \in \mathcal{A}^p} g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \Bigg] \nonumber
\\
&= \mathbb{E}_{\Phi_{cp},g_{y_{x_0}}} \prod_{y \in\mathcal{A}^p} e^{-s g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&= \mathbb{E}_{\Phi_{cp}} \prod_{y \in\mathcal{A}^p} \mathbb{E}_{g_{y_{x_0}}} e^{-s g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&\overset{(a)}{=} \mathbb{E}_{\Phi_{cp}} \prod_{y \in\mathcal{A}^p} \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}
\nonumber
\\
&\overset{(b)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}\Big)f_{Y}(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z_0\rVert^{-\alpha}}\Big)f_{Y}(z_0-x_0)\dd{z_0}\Big)
\end{align}
where (a) follows from the Rayleigh fading assumption, (b) follows from the \ac{PGFL} of the Gaussian \ac{PPP} $\Phi_{cp}$, and (c) follows from the change of variables $z_0 = x_0 + y$ with $\dd z_0 = \dd y$. By converting the Cartesian coordinates to polar coordinates, with $h=\lVert z_0\rVert$, we get
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &\overset{}{=} {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\Big(1 - \frac{1}{1+s h^{-\alpha}}\Big)f_H(h|v_0)\dd{h}\Big) \nonumber
\\
&= {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h|v_0)\dd{h}\Big)
\end{align}
By neglecting the correlation of the intra-cluster interfering distances as in \cite{clustered_twc}, i.e., the common part $x_0$ in the intra-cluster interfering distances $\lVert x_0 + y\rVert$, $y \in \mathcal{A}^p$, we get
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s) &\approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
Similar to the serving distance \ac{PDF} $f_R(r)$, since both the typical device and a potential interfering device have their locations drawn from a normal distribution with variance $\sigma^2$ around the cluster center, then by definition, the intra-cluster interfering distance has a Rayleigh distribution with parameter $\sqrt{2}\sigma$, and given by $f_H(h)= \frac{h}{2 \sigma^2} {\rm e}^{\frac{-h^2}{4 \sigma^2}}$. Hence, Lemma 2 is proven.
\section{Proof of Lemma 3}
First, to prove concavity, we proceed as follows.
\begin{align}
\frac{\partial \mathbb{P}_o}{\partial b_i} &= q_i + q_i\big(\overline{n}(1-b_i)e^{-\overline{n}b_i}-(1-e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0) \\
\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2} &= -q_i\big(\overline{n}e^{-\overline{n}b_i} + \overline{n}^2(1-b_i)e^{-\overline{n}b_i} + \overline{n}e^{-\overline{n}b_i}\big)\mathbb{P}(R_1>R_0)
\end{align}
It is clear that the second derivative $\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2}$ is always negative, and
$\frac{\partial^2 \mathbb{P}_o}{\partial b_i \partial b_j}=0$ for all $i\neq j$. Hence, the Hessian matrix \textbf{H}$_{i,j}$ of $\mathbb{P}_o(p^*,\textbf{b})$ w.r.t. $\textbf{b}$ is negative semidefinite, and $\mathbb{P}_o(p^*,\textbf{b})$ is a concave function of $\textbf{b}$. Moreover, the constraints are linear, which implies that the \ac{KKT} conditions are necessary and sufficient for optimality. The Lagrangian function and the \ac{KKT} conditions are then employed to solve \textbf{P2}.
The Lagrangian function of the offloading gain maximization problem \textbf{P2} is given by
\begin{align}
\mathcal{L}(\textbf{b},w_i,\mu_i,v) =& \sum_{i=1}^{N_f} q_i b_i + q_i(1- b_i)(1-e^{-b_i\overline{n}})\mathbb{P}(R_1>R_0) \nonumber \\
&+ v(M-\sum_{i=1}^{N_f} b_i) + \sum_{i=1}^{N_f} w_i (b_i-1) - \sum_{i=1}^{N_f} \mu_i b_i
\end{align}
where $v$, $w_i$, and $\mu_i$ are the Lagrange multipliers associated with the equality constraint and the two inequality constraints, respectively. Now, the optimality conditions are written as
\begin{align}
\label{grad}
\grad_{\textbf{b}} \mathcal{L}(\textbf{b}^*,w_i^*,\mu_i^*,v^*) = q_i + q_i&\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*}-(1-e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0) -v^* + w_i^* -\mu_i^*= 0 \\
&w_i^* \geq 0 \\
&\mu_i^* \leq 0 \\
&w_i^* (b_i^* - 1) =0 \\
&\mu_i^* b_i^* = 0\\
&(M-\sum_{i=1}^{N_f} b_i^*) = 0
\end{align}
\begin{enumerate}
\item $w_i^* > 0$: We have $b_i^* = 1$, $\mu_i^*=0$, and
\begin{align}
&q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)= v^* - w_i^* \nonumber \\
\label{cond1_offload}
&v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)
\end{align}
\item $\mu_i^* < 0$: We have $b_i^* = 0$, and $w_i^*=0$, and
\begin{align}
& q_i + \overline{n}q_i\mathbb{P}(R_1>R_0) = v^* + \mu_i^* \nonumber \\
\label{cond2_offload}
&v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)
\end{align}
\item $0 <b_i^*<1$: We have $w_i^*=\mu_i^*=0$, and
\begin{align}
\label{psii_offload}
v^{*} = q_i + q_i\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*} - (1 - e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0)
\end{align}
\end{enumerate}
By combining (\ref{cond1_offload}), (\ref{cond2_offload}), and (\ref{psii_offload}), with the fact that $\sum_{i=1}^{N_f} b_i^*=M$, Lemma 3 is proven.
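For completeness, the following sketch turns this case analysis into a numerical routine: the interior case inverts (\ref{psii_offload}) per file, and an outer bisection on $v^*$ enforces $\sum_{i} b_i^* = M$ (assuming $0<M<N_f$). It is illustrative only; here \texttt{P} stands for $\mathbb{P}(R_1>R_0)$.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def optimal_caching(q, nbar, P, M):
    def g(i, b):  # stationarity value dP_o/db_i, decreasing in b
        return q[i] + q[i] * (nbar * (1 - b) * np.exp(-nbar * b)
                              - (1 - np.exp(-nbar * b))) * P

    def b_of_v(v):
        b = np.empty(len(q))
        for i in range(len(q)):
            if v >= g(i, 0.0):
                b[i] = 0.0                      # case 2: mu_i^* < 0
            elif v <= g(i, 1.0):
                b[i] = 1.0                      # case 1: w_i^* > 0
            else:                               # case 3: solve g(i, b) = v
                b[i] = brentq(lambda x: g(i, x) - v, 0.0, 1.0)
        return b

    lo = min(g(i, 1.0) for i in range(len(q)))  # all b_i = 1 here
    hi = max(g(i, 0.0) for i in range(len(q)))  # all b_i = 0 here
    v = brentq(lambda v: b_of_v(v).sum() - M, lo, hi)
    return b_of_v(v)
\end{verbatim}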
\end{appendices}
\bibliographystyle{IEEEtran}
\section{Introduction}
Caching at mobile devices significantly improves system performance by facilitating device-to-device (D2D) communications, which enhances the spectrum efficiency and alleviates the heavy burden on backhaul links \cite{Femtocaching}. Modeling the cache-enabled heterogeneous networks, including small cell base stations (SBSs) and mobile devices, follows two main directions in the literature. The first line of work focuses on the fundamental throughput scaling results by assuming a simple protocol channel model \cite{Femtocaching,golrezaei2014base}, known as the protocol model, where two devices can communicate if they are within a certain distance. The second line of work, defined as the physical interference model, considers a more realistic model for the underlying physical layer \cite{andreev2015analyzing,cache_schedule}. In the following, we review some of the works relevant to the second line, focusing mainly on the energy efficiency (EE) and delay analysis of wireless caching networks.
The physical interference model is based on the fundamental signal-to-interference (SIR) metric, and therefore, is applicable to any wireless communication system. Modeling devices' (users') locations as a PPP is widely employed in the literature, especially in the wireless caching area \cite{andreev2015analyzing,cache_schedule,energy_efficiency,ee_BS,hajri2018energy}. However, a realistic model for D2D caching networks requires that a given device typically has multiple proximate devices, where any of them can potentially act as a serving device. This deployment is known as clustered devices deployment, which can be characterized by cluster processes \cite{haenggi2012stochastic}. Unlike the popular PPP approach, the authors in \cite{clustered_twc,clustered_tcom,8070464} developed a stochastic geometry based model to characterize the performance of content placement in the clustered D2D network. In \cite{clustered_twc}, the authors discuss two strategies of content placement in a Poisson cluster process (PCP) deployment: first, when each device randomly chooses its serving device from its local cluster, and second, when each device connects to its $k$-th closest transmitting device from its local cluster. The authors characterize the optimal number of D2D transmitters that must be simultaneously activated in each cluster to maximize the area spectral efficiency. The performance of cluster-centric content placement is characterized in \cite{clustered_tcom}, where the content of interest in each cluster is cached closer to the cluster center, such that the collective performance of all the devices in each cluster is optimized. Inspired by the Matern hard-core point process, which captures pairwise interactions between nodes, the authors in \cite{8070464} devised a novel spatially correlated caching strategy called hard-core placement (HCP) such that the D2D devices caching the same content are never closer to each other than the exclusion radius.
Energy efficiency in wireless caching networks is widely studied in the literature \cite{energy_efficiency,ee_BS,hajri2018energy}.
For example, an optimal caching problem is formulated in \cite{energy_efficiency} to minimize the energy consumption of a wireless network. The authors consider a cooperative wireless caching network where relay nodes cooperate with the devices to cache the most popular files in order to minimize energy consumption. In \cite{ee_BS}, the authors investigate how caching at BSs can improve EE of wireless access networks. The condition when the EE can benefit from caching is characterized, and the optimal cache capacity that maximizes the network EE is found. It is shown that EE benefit from caching depends on content popularity, backhaul capacity, and interference level.
The authors in \cite{hajri2018energy} exploit the spatial repartitions of devices and the correlation in their content popularity profiles to improve the achievable EE. The EE optimization problem is decoupled into two related subproblems: the first one addresses the issue of content popularity modeling, and the second subproblem investigates the impact of exploiting the spatial repartitions of devices. The authors derive a closed-form expression of the achievable EE and find the optimal density of active small cells that maximizes it. It is shown that the small base station allocation algorithm improves the energy efficiency and hit probability. However, the problem of EE for D2D based caching is not yet addressed in the literature.
Recently, the joint optimization of delay and energy in wireless caching has been conducted; see, for instance, \cite{wu2018energy,huang2018energy,jiang2018energy}. The authors in \cite{wu2018energy} jointly optimize the delay and energy in a cache-enabled dense small cell network. The authors formulate the energy-delay optimization problem as a mixed integer programming problem, where file placement, device association to the small cells, and power control are jointly considered. To model the tradeoff between energy consumption and end-to-end file delivery delay, a utility function linearly combining these two metrics is used as the objective function of the optimization problem. An efficient algorithm is proposed to approach the optimal association and power solution, which could achieve the optimal tradeoff between energy consumption and end-to-end file delivery delay. In \cite{huang2018energy}, the authors showed that with caching, the energy consumption can be reduced by extending transmission time. However, it may incur wasted energy if the device never needs the cached content. Based on the random content request delay, the authors study the maximization of EE subject to a hard delay constraint in an additive white Gaussian noise channel. It is shown that the EE of a system with caching can be significantly improved with increasing content request probability and target transmission rate compared with the traditional on-demand scheme, in which the BS transmits a content file only after it is requested by the user. However, the problem of energy consumption and joint communication and caching for clustered D2D networks is not yet addressed in the literature.
In this paper, we conduct a comprehensive performance analysis and optimization of the joint communication and caching for a clustered D2D network, where the devices have unused memory to cache some files, following a random probabilistic caching scheme. Our network model effectively characterizes the stochastic nature of channel fading and the clustered geographic locations of devices. Furthermore, this paper also puts some emphasis on the need for considering the traffic dynamics and rate of requests when studying the delay incurred to deliver requests to devices. Our work is the first in the literature that conducts a comprehensive spatial analysis of a doubly Poisson cluster process (also called doubly Poisson point process \cite{haenggi2012stochastic}) with the devices adopting a slotted-ALOHA random access technique to access a shared channel. Also, we are the first to incorporate the spatio-temporal analysis in wireless caching networks by combining tools from stochastic geometry and queuing theory, in order to analyze and minimize the delay.
The main contributions of this paper are summarized below.
\begin{itemize}
\item We consider a Thomas cluster process (TCP) where the devices are spatially distributed as groups in clusters. The cluster centers are drawn from a parent PPP, and the cluster members are normally distributed around the centers, forming a Gaussian PPP. This organization of the parent and offspring PPPs forms the so-called doubly PPP.
\item \textcolor{red}{We conduct the coverage probability analysis} where the devices adopt a slotted-ALOHA random access technique. We then jointly optimize the access probability and caching probability to maximize the cluster offloading gain. A closed-form solution of the optimal caching probability sub-problem is provided.
\item The energy consumption problem is then formulated and shown to be convex and the optimal caching probability is also formulated.
\item By combining tools from stochastic geometry as well as queuing theory,
we minimize the per request weighted average delay by jointly optimizing bandwidth allocation between D2D and BSs' communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the block coordinate descent optimization technique, the joint minimization problem is solved in an iterative manner.
\item We validate our theoretical findings via simulations. Results show a significant improvement in the network performance metrics, namely, the offloading gain, energy consumption, and average delay, as compared to other caching schemes proposed earlier in the literature.
\end{itemize}
The rest of this paper is organized as follows. Section II and Section III discuss the system model and the maximum offloading gain respectively. The energy consumption is discussed in Section IV and the delay analysis is conducted in Section V. Numerical results are then presented in Section VI before we conclude the paper in Section VII.
\section{System Model}
\subsection{System Setup}
We model the location of the mobile devices with a Thomas cluster process in which the parent points are drawn from a PPP $\Phi_p$ with density $\lambda_p$, and the daughter points are drawn from a Gaussian PPP around each parent point. In fact, the TCP is considered as a doubly Poisson cluster process where the daughter points are normally scattered with variance $\sigma^2$ around each parent point in $\mathbb{R}^2$ \cite{haenggi2012stochastic}.
The parent points and offspring are referred to as cluster centers and cluster members, respectively. The number of cluster members in each cluster is a Poisson random variable with mean $\overline{n}$. The density function of the location of a cluster member relative to its cluster center is
\begin{equation}
f_Y(y) = \frac{1}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad \quad y \in \mathbb{R}^2
\label{pcp}
\end{equation}
where $\lVert .\rVert$ is the Euclidean norm. The intensity function of a cluster is given by $\lambda_c(y) = \frac{\overline{n}}{2\pi\sigma^2}\textrm{exp}\big(-\frac{\lVert y\rVert^2}{2\sigma^2}\big)$. Therefore, the intensity of the process is given by $\lambda = \overline{n}\lambda_p$. We assume that the BSs' distribution follows another PPP $\Phi_{bs}$ with density $\lambda_{bs}$, which is independent of $\Phi_p$.
\subsection{Content Popularity and Probabilistic Caching Placement}
We assume that each device has a surplus memory of size $M$ designated for caching files.
The total number of files is $N_f> M$ and the set (library) of content indices is denoted as $\mathcal{F} = \{1, 2, \dots , N_f\}$. These files represent the content catalog that all the devices in a cluster may request, which are indexed in a descending order of popularity. The probability that the $i$-th file is requested follows a Zipf's distribution given by,
\begin{equation}
q_i = \frac{ i^{-\beta} }{\sum_{k=1}^{N_f}k^{-\beta}},
\label{zipf}
\end{equation}
where $\beta$ is a parameter that reflects how skewed the popularity distribution is. For example, if $\beta= 0$, the popularity of the files has a uniform distribution. Increasing $\beta$ increases the disparity among the files popularity such that lower indexed files have higher popularity. By definition, $\sum_{i=1}^{N_f}q_i = 1$.
We use Zipf's distribution to model the popularity of files per cluster.
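For instance, the popularity vector of (\ref{zipf}) can be generated with the following short sketch (parameter values are examples).
\begin{verbatim}
import numpy as np

def zipf_popularity(N_f, beta):
    # q_i proportional to i^(-beta), normalized so that sum_i q_i = 1
    ranks = np.arange(1, N_f + 1, dtype=float)
    q = ranks ** (-beta)
    return q / q.sum()

q = zipf_popularity(N_f=500, beta=1.0)
\end{verbatim}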
D2D communication is enabled within each cluster to deliver popular content. It is assumed that the devices adopt a slotted-ALOHA medium access protocol, where each transmitter independently and randomly accesses the channel with the same probability $p$. This implies that multiple active D2D links might coexist within a cluster. Therefore, $p$ is a design parameter that directly controls the intra-cluster interference, as described later in the next section.
A probabilistic caching model is assumed, where the content is randomly and independently placed in the cache memories of different devices in the same cluster, according to the same distribution. The probability that a generic device stores a particular file $i$ is denoted as $b_i$, $0 \leq b_i \leq 1$ for all $i \in \mathcal{F}$. To avoid duplicate caching of the same content within the memory of the same device, we follow the caching approach proposed in \cite{blaszczyszyn2015optimal} and illustrated in Fig. \ref{prob_cache_example}.
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{prob_cache_exam}
\caption {The cache memory of size $M = 3$ is equally divided into $3$ blocks of unit size. Starting from content $i=1$ to $i=N_f$, each content sequentially fills these $3$ memory blocks by an amount $b_i$. The amounts (probabilities) $b_i$ eventually fill all $3$ blocks since $\sum_{i=1}^{N_f} b_i = M$ \cite{blaszczyszyn2015optimal}. Then a random number $\in [0,1]$ is generated, and a content $i$ is chosen from each block, whose $b_i$ fills the part intersecting with the generated random number. In this way, in the given example, the contents $\{1, 2, 4\}$ are chosen to be cached.}
\label{prob_cache_example}
\end{center}
\end{figure}
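A compact sketch of the placement scheme of Fig. \ref{prob_cache_example} is given below: the contents fill the interval $[0, M)$ in popularity order with lengths $b_i$, and a single uniform draw selects the content covering each of the $M$ unit blocks, yielding exactly $M$ distinct cached files with the prescribed marginals $b_i$ (names and values are illustrative).
\begin{verbatim}
import numpy as np

def sample_cache(b, M, rng):
    # content i occupies [edges[i], edges[i+1]) along [0, M), sum(b) = M
    edges = np.concatenate(([0.0], np.cumsum(b)))
    u = rng.uniform()                    # one draw selects all M blocks
    points = u + np.arange(M)            # one point per unit memory block
    return np.searchsorted(edges, points, side="right") - 1  # 0-based ids

rng = np.random.default_rng(0)
b = np.array([0.9, 0.8, 0.6, 0.5, 0.2])  # sums to M = 3
print(sample_cache(b, 3, rng))           # -> [0 1 3], i.e., files {1, 2, 4}
\end{verbatim}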
If a device caches the desired file, the device directly retrieves the content. However, if the device does not cache the file, it downloads it from any neighboring device that caches the file (henceforth called catering device) in the same cluster. According to the proposed access model, the probability that a chosen catering device is admitted to access the channel is the access probability $p$. Finally, the device attaches to the nearest BS as a last resort to download the content which is not cached entirely within the device's cluster. We assume that the D2D communication is operating as out-of-band D2D. $W_{1}$ and $W_{2}$ denote respectively the bandwidth allocated to the D2D and BSs' communication, and the total bandwidth for the system is denoted as $W=W_{1} + W_{2}$. \textcolor{blue}{It is assumed that device requests are served in a random manner, i.e., among the cluster devices, a random device request is chosen to be scheduled and its content is served.}
In the following, we aim at studying and optimizing three important metrics, widely studied in the literature. The first metric is the offloading gain, which is defined as the probability of obtaining the requested file from the local cluster, either from the self-cache or from a neighboring device in the same cluster, with a rate greater than a certain threshold. The second metric is the energy consumption, which represents the energy dissipated when downloading files either from the BSs or via D2D communication. The third metric is the latency, which accounts for the weighted average delay over all requests served from the BSs or via D2D communication.
\section{\textcolor{blue}{Maximum Offloading Gain}}
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{distance1.png}
\caption {Illustration of the representative cluster and one interfering cluster.}
\label{distance}
\end{figure}
Let us define the successful offloading probability as the probability that a device can find a requested file in its own cache, or in the caches of neighboring devices within the same cluster with D2D link data rate higher than a required threshold $R_0$. Without loss of generality, we conduct the analysis for a cluster whose center is $x_0\in \Phi_p$ (referred to as representative cluster), and the device who requests the content (henceforth called typical device) is located at the origin. We denote the location of the D2D-TX by $y_0$ w.r.t. $x_0$, where $x_0, y_0\in \mathbb{R}^2$. The distance from the typical device (D2D-RX of interest) to this D2D-TX is denoted as $r=\lVert x_0+y_0\rVert$, which is a realization of a random variable $R$ whose distribution is described later. This setup is illustrated in Fig. \ref{distance}. It is assumed that a requested file is served from a randomly selected catering device, which is, in turn, admitted to access the channel based on the slotted-ALOHA protocol. \textcolor{blue}{The successful offloading probability is then given by}
\begin{align}
\label{offloading_gain}
\mathbb{P}_o(p,b_i) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}})\int_{r=0}^{\infty}f(r) \mathbb{P}(R_{1}(r)>R_0) \dd{r},
\end{align}
where $R_{1}(r)$ is the achievable rate when downloading a content from a catering device at a distance $r$ from the typical device, with pdf $f(r)$. The first term on the right-hand side is the probability of requesting a locally cached file (self-cache), whereas the remaining term incorporates the probability that a requested file $i$ is cached among at least one cluster member and is downloadable with a rate greater than $R_0$. To clarify further, since the number of devices per cluster has a Poisson distribution, the probability that there are $k$ devices per cluster is equal to $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$. Accordingly, the probability that there are $k$ devices caching a content $i$ is equal to $\frac{(b_i\overline{n})^k e^{-b_i\overline{n}}}{k!}$. Hence, the probability that at least one device caches a content $i$ is one minus the void probability (i.e., $k=0$), which equals $1 - e^{-b_i\overline{n}}$.
In the following, we first compute the probability $ \mathbb{P}(R_{1}(r)>R_0)$ conditioning on the distance $r$ between the typical and catering device, then we relax this condition. The received power at the typical device from a catering D2D-TX located at $y_0$ from the cluster center is given by
\begin{align}
P &= P_d g_0 \lVert x_0+y_0\rVert^{-\alpha}= P_d g_0 r^{-\alpha}
\label{pwr}
\end{align}
where $P_d$ denotes the D2D transmission power, $g_0 \sim $ exp(1) is an i.i.d. exponential random variable which models Rayleigh fading, and $\alpha > 2$ is the path loss exponent. Under the above assumption, the typical device sees two types of interference, namely, the intra- and inter-cluster interference. We first describe the inter-cluster interference, then the intra-cluster interference is characterized. The set of active devices in any remote cluster is denoted as $\mathcal{B}^p$, where $p$ refers to the access probability. Similarly, the set of active devices in the local cluster is denoted as $\mathcal{A}^p$. Similar to (\ref{pwr}), the interference from the simultaneously active D2D-TXs outside the representative cluster, seen at the typical device, is given by
\begin{align}
I_{\Phi_p^{!}} &= \sum_{x \in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{yx} \lVert x+y\rVert^{-\alpha}\\
& = \sum_{x\in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{u} u^{-\alpha}
\end{align}
where $\Phi_p^{!}=\Phi_p \setminus x_0$ for ease of notation, $y$ is the marginal distance between a potential interfering device and its cluster center at $x \in \Phi_p$, $u = \lVert x+y\rVert$ is a realization of a random variable $U$ modeling the inter-cluster interfering distance (shown in Fig. \ref{distance}), $g_{yx} \sim $ exp(1) are i.i.d. exponential random variables modeling Rayleigh fading, and $g_{u} = g_{yx}$ for ease of notation. The intra-cluster interference is then given by
\begin{align}
I_{\Phi_c} &= \sum_{y\in \mathcal{A}^p} P_d g_{yx_0} \lVert x_0+y\rVert^{-\alpha}\\
& = \sum_{y\in \mathcal{A}^p} P_d g_{h} h^{-\alpha}
\end{align}
where $y$ is the marginal distance between the intra-cluster interfering devices and the cluster center at $x_0 \in \Phi_p$, $h = \lVert x_0+y\rVert$ is a realization of a random variable $H$ modeling the intra-cluster interfering distance (shown in Fig. \ref{distance}), $g_{yx_0} \sim $ exp(1) are i.i.d. exponential random variables modeling Rayleigh fading, and $g_{h} = g_{yx_0}$ for ease of notation. From the thinning theorem \cite{haenggi2012stochastic}, the set of active transmitters following the slotted-ALOHA medium access forms a PPP $\Phi_c^{p}$ \textcolor{red}{whose intensity is given by}
\begin{align}
\lambda_{cp} = p\lambda_{c}(y) = p\overline{n}f_Y(y) =\frac{p\overline{n}}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad y \in \mathbb{R}^2
\end{align}
Assuming that the thermal noise is neglected as compared to the aggregate interference, the $\mathrm{SIR}$ at the typical device is written as
\begin{equation}
\gamma_{r}=\frac{P}{I_{\Phi_p^{!}} + I_{\Phi_c}} = \frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}}
\end{equation}
\textcolor{magenta}{A fixed rate transmission model is adopted in our study}, where each TX (D2D or BS) transmits at the fixed rate of log$_2[1+\theta]$ bits/sec/Hz, where $\theta$ is a design parameter. Since the rate is fixed, the transmission is subject to outage due to fading and interference fluctuations. Consequently, the de facto average transmission rate (i.e., average throughput) is given by
\begin{equation}
\label{rate_eqn}
R = W\textrm{ log$_{2}$}[1+ \theta]\mathrm{P_c},
\end{equation}
where $W$ is the bandwidth, $\theta$ is the pre-determined threshold for successful reception, $\mathrm{P_c} =\mathbb{E}(\textbf{1}\{\mathrm{SIR}>\theta\})$ is the coverage probability, and $\textbf{1}\{.\}$ is the indicator function. The D2D communication rate under ALOHA scheme is then given by
\begin{equation}
\label{rate_eqn1}
R_{1}(r) = pW_{1} {\rm log}_2 \big(1 + \theta \big) \textbf{1}\{ \gamma_{r} > \theta\}
\end{equation}
Then, the probability $ \mathbb{P}(R_{1}(r)>R_0)$ is derived as follows.
\begin{align}
\mathbb{P}(R_{1}(r)>R_0)&= \mathbb{P} \big(pW_{1} {\rm log}_2 (1 + \theta)\textbf{1}\{ \gamma_{r} > \theta\}
>R_0\big) \nonumber \\
&=\mathbb{P} \big(\textbf{1}\{ \gamma_{r} > \theta\}
>\frac{R_0}{pW_{1} {\rm log}_2 (1 + \theta )}\big) \nonumber \\
&\overset{(a)}{=}\mathbb{E} \big(\textbf{1}\{ \gamma_{r} > \theta\}\big) \nonumber \\
&= \mathbb{P}\big(\frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}} > \theta\big)
\nonumber \\
&= \mathbb{P}\big( g_0 > \frac{\theta r^{\alpha}}{P_d} [I_{\Phi_p^{!}} + I_{\Phi_c}] \big)
\nonumber \\
&\overset{(b)}{=}\mathbb{E}_{I_{\Phi_p^{!}},I_{\Phi_c}}\Big[\text{exp}\big(\frac{-\theta r^{\alpha}}{P_d}{[I_{\Phi_p^{!}} + I_{\Phi_c}] }\big)\Big]
\nonumber \\
&\overset{(c)}{=} \mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)
\end{align}
\textcolor{blue}{where (a) follows from the assumption that $R_0 < pW_{1} {\rm log}_2 \big(1 + \theta \big)$ always holds; otherwise, it is infeasible to get $\mathbb{P}(R_{1}(r)>R_0)$ greater than zero}. (b) follows from the fact that $g_0$ follows an exponential distribution, and (c) follows from the independence of the intra- and inter-cluster interference and the Laplace transforms thereof.
In what follows, we first derive the Laplace transform of interference to get $\mathbb{P}(R_{1}(r)>R_0)$. Then, we formulate the offloading gain maximization problem.
\begin{lemma}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_inter}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$, and $f_U(u|v) = \mathrm{Rice} (u| v, \sigma)$ represents Rice's probability density function of parameter $\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix A.
\end{proof}
\begin{lemma}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_intra}
\mathscr{L}_{I_{\Phi_c} }(s) = {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where $f_H(h) =\mathrm{Rayleigh}(h,\sqrt{2}\sigma)$ represents Rayleigh's probability density function with a scale parameter $\sqrt{2}\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix B.
\end{proof}
For the serving distance distribution $f(r)$, since both the typical device and a potential catering device have their locations drawn from a normal distribution with variance $\sigma^2$ around the cluster center, the serving distance has a Rayleigh distribution with parameter $\sqrt{2}\sigma$, given by
\begin{align}
\label{rayleigh}
f(r)= \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}, \quad r>0
\end{align}
From (\ref{LT_inter}), (\ref{LT_intra}), and (\ref{rayleigh}), the offloading gain in (\ref{offloading_gain}) is characterized as
\begin{align}
\label{offloading_gain_1}
\mathbb{P}_o(p,b_i) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}}) \underbrace{\int_{r=0}^{\infty}
\frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)\dd{r}}_{\mathbb{P}(R_1>R_0)},
\end{align}
Hence, the offloading gain maximization problem can be formulated as
\begin{align}
\label{optimize_eqn_p}
\textbf{P1:} \quad &\underset{p, b_i}{\text{max}} \quad \mathbb{P}_o(p,b_i) \\
\label{const110}
&\textrm{s.t.}\quad \sum_{i=1}^{N_f} b_i = M, \\
\label{const111}
& b_i \in [ 0, 1], \\
\label{const112}
& p \in [ 0, 1],
\end{align}
where (\ref{const110}) is the device cache size constraint, which is consistent with the illustration of the example in Fig. \ref{prob_cache_example}. Since the offloading gain depends on the access probability $p$, and $p$ appears as a complex exponential term in $\mathbb{P}(R_1>R_0)$, it is hard to analytically characterize (e.g., show concavity of) the objective function or find a tractable expression for the optimal access probability. In order to tackle this, we propose to solve \textbf{P1} by \textcolor{blue}{first finding} the optimal $p^*$ that maximizes the probability $\mathbb{P}(R_{1}>R_0)$ over the interval $p \in [ 0, 1]$. Then, the obtained $p^*$ is used to solve for the caching probability $b_i$ in the optimization problem \textbf{P2} below. \textcolor{blue}{Since $p$ and $b_i$ are separable in the structure of \textbf{P1}, it is possible to solve numerically for $p^*$ and then substitute it to obtain $b_i^*$.}
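The numerical step just described can be sketched as follows: $\mathbb{P}(R_1>R_0)$ of (\ref{offloading_gain_1}) is evaluated by nested quadrature of Lemmas 1 and 2, and $p^*$ is then found by a simple grid search over $[0,1]$. All parameter values and tolerances are illustrative, and the feasibility condition $R_0 < pW_1\log_2(1+\theta)$ is enforced explicitly.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.stats import rice, rayleigh

lam_p, nbar, sigma, alpha, theta, Pd = 20e-6, 5, 10.0, 4.0, 1.0, 1.0
W1, R0 = 10e6, 1e6                       # D2D bandwidth [Hz], threshold [bps]

def varphi(s, v):                        # inner integral of Lemma 1
    f = lambda u: s / (s + u**alpha) * rice.pdf(u, v / sigma, scale=sigma)
    return quad(f, 0, np.inf, limit=200)[0]

def L_inter(s, p):                       # Lemma 1
    f = lambda v: (1 - np.exp(-p * nbar * varphi(s, v))) * v
    return np.exp(-2 * np.pi * lam_p * quad(f, 0, np.inf, limit=200)[0])

def L_intra(s, p):                       # Lemma 2
    f = lambda h: s / (s + h**alpha) * rayleigh.pdf(h, scale=np.sqrt(2)*sigma)
    return np.exp(-p * nbar * quad(f, 0, np.inf, limit=200)[0])

def P_R1_gt_R0(p):                       # underbraced term of the objective
    if p * W1 * np.log2(1 + theta) <= R0:
        return 0.0                       # R_1 can never exceed R_0
    f = lambda r: (rayleigh.pdf(r, scale=np.sqrt(2) * sigma)
                   * L_inter(theta * r**alpha / Pd, p)
                   * L_intra(theta * r**alpha / Pd, p))
    return quad(f, 0, np.inf, limit=100)[0]

grid = np.linspace(0.05, 1.0, 20)        # coarse, slow but direct search
p_star = grid[np.argmax([P_R1_gt_R0(p) for p in grid])]
\end{verbatim}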
\begin{align}
\label{optimize_eqn_b_i}
\textbf{P2:} \quad &\underset{b_i}{\text{max}} \quad \mathbb{P}_o(p^*,b_i) \\
\label{const11}
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
The optimal caching probability is formulated in the next lemma.
\begin{lemma}
$\mathbb{P}_o(p^*,b_i)$ is a concave function w.r.t. $b_i$ and the optimal caching probability $b_i^{*}$ that maximizes the offloading gain is given by
\[
b_{i}^{*}=\left\{
\begin{array}{ll}
1 \quad\quad\quad , v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)\\
0 \quad\quad\quad, v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)\\
\psi(v^{*}) \quad, {\rm otherwise}
\end{array}
\right.
\]
where $\psi(v^{*})$ denotes the solution of (\ref{psii_offload}) for $b_i^{*}$, with $v^{*}$ chosen such that $\sum_{i=1}^{N_f} b_i^*=M$.
\end{lemma}
\begin{proof}
Please see Appendix C.
\end{proof}
\section{\textcolor{blue}{Energy Consumption}}
In this section, we formulate the energy consumption minimization problem for the clustered D2D caching network. In fact, significant energy consumption occurs only when a content is served via D2D or BSs' transmission. We consider the time cost $c_{d_i}$ as the time it takes to download the $i$-th content from a neighboring device in the same cluster. Considering the size $S_i$ of the $i$-th ranked content, $c_{d_i}=S_i/R_1 $, where $R_1 $ denotes the average rate of the D2D communication. Similarly, we have $c_{b_i} = S_i/R_2 $ when the $i$-th content is served by the BS with average rate $R_2 $. The average energy consumption when downloading files by the devices in the representative cluster is given by
\begin{align}
\label{energy_avrg}
E_{av} = \sum_{k=1}^{\infty} E(b_i|k)\mathbb{P}(n=k)
\end{align}
where $\mathbb{P}(n=k)$ is the probability that there are $k$ devices in the cluster $x_0$, and $E(b_i|k)$ is the consumed energy conditioned on having $k$ devices within the cluster $x_0$, written, similar to \cite{energy_efficiency}, as
\begin{equation}
E(b_i|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\label{energy}
\end{equation}
where $\mathbb{P}_{j,i}^d$ and $\mathbb{P}_{j,i}^b$ represent the probability of obtaining the $i$-th content by the $j$-th device from the local cluster, i.e., via D2D communication, and the BS, respectively. $P_b$ denotes the BS transmission power. Given that there are $k$ devices per cluster, it is obvious that $\mathbb{P}_{j,i}^b=(1-b_i)^{k}$, and $\mathbb{P}_{j,i}^d=(1 - b_i)\big(1-(1-b_i)^{k-1}\big)$.
The average rates $R_1$ and $R_2$ are now computed to get a closed-form expression for $E(b_i|k)$.
From equation (\ref{rate_eqn}), we need to obtain the D2D coverage probability $\mathrm{P_{c_d}}$ and the BS coverage probability $\mathrm{P_{c_b}}$ to calculate $R_1$ and $R_2$, respectively. Given the number of devices $k$ in the representative cluster, the Laplace transform of the inter-cluster interference is as obtained in (\ref{LT_inter}). \textcolor{blue}{However, the intra-cluster interfering devices no longer form a Gaussian PPP since the number of devices becomes fixed, i.e., it is no longer a Poisson random number as before. To facilitate the analysis, for every realization $k$, we assume that the intra-cluster interfering devices form a Gaussian PPP with intensity function $pkf_Y(y)$.}
\textcolor{magenta}{Such an assumption is mandatory for tractability and is validated in the numerical section.}
From Lemma 2, the intra-cluster Laplace transform conditioning on $k$ can be approximated as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|k) &\approx {\rm exp}\Big(-pk \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\nonumber
\end{align}
and directly, the D2D coverage probability is given by
\begin{align}
\label{p_b_d2d}
\mathrm{P_{c_d}} =
\int_{r=0}^{\infty} \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big|k\big)\dd{r}
\end{align}
\textcolor{blue}{Again, the devices adopt the slotted-ALOHA with access probability $p$, which is computed over the interval [0,1] to maximize $\mathrm{P_{c_d}}$.} Similarly, under the PPP $\Phi_{bs}$, and based on the nearest BS association principle, it is shown in \cite{andrews2011tractable} that the BS coverage probability can be expressed as
\begin{equation}
\mathrm{P_{c_b}} =\frac{1}{{}_2 F_1(1,-\delta;1-\delta;-\theta)},
\label{p_b_bs}
\end{equation}
where ${}_2 F_1(.)$ is the Gaussian hypergeometric function and $\delta = 2/\alpha$. Given the coverage probabilities $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ from (\ref{p_b_d2d}) and (\ref{p_b_bs}), $R_1 $ and $R_2 $ can be calculated from (\ref{rate_eqn}), and hence $E(b_i|k)$ is expressed in a closed-form.
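These closed-form pieces can be evaluated directly. The sketch below (with illustrative parameter values, the mean content size used for every file, and $\mathrm{P_{c_d}}$ assumed precomputed from (\ref{p_b_d2d})) assembles $E(b_i|k)$.
\begin{verbatim}
import numpy as np
from scipy.special import hyp2f1

W1, W2, theta, alpha = 10e6, 10e6, 1.0, 4.0
Pd, Pb, Sbar = 0.2, 20.0, 5e6            # powers [W], mean size [bits]
delta = 2 / alpha
P_cb = 1 / hyp2f1(1, -delta, 1 - delta, -theta)   # BS coverage probability

def energy_given_k(b, q, k, P_cd):
    R1 = P_cd * W1 * np.log2(1 + theta)  # average D2D rate
    R2 = P_cb * W2 * np.log2(1 + theta)  # average BS rate
    cd, cb = Sbar / R1, Sbar / R2        # per-content download times
    P_d2d = (1 - b) * (1 - (1 - b) ** (k - 1))    # served via D2D
    P_bs = (1 - b) ** k                           # served via the BS
    # sum over the k devices and the N_f contents
    return k * np.sum(q * (P_d2d * Pd * cd + P_bs * Pb * cb))
\end{verbatim}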
\subsection{Energy Consumption Minimization}
The energy minimization problem can be formulated as
\begin{align}
\label{optimize_eqn1}
&\textbf{P3:} \quad\underset{b_i}{\text{min}} \quad E(b_i|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
In the next lemma, we prove the convexity condition for $E$.
\begin{lemma}
\label{convex_E}
The energy consumption $E(b_i|k)$ is convex if $\frac{P_b}{R_2}>\frac{P_d}{R_1}$.
\end{lemma}
\begin{proof}
We proceed by deriving the Hessian matrix of $E$. The Hessian matrix of $E(b_i|k)$ w.r.t. the caching variables $(b_1,\dots,b_{N_f})$ is \textbf{H}$_{i,j} = \frac{\partial^2 E(b_i|k)}{\partial b_i \partial b_j}$, $\forall i,j \in \mathcal{F}$. \textbf{H}$_{i,j}$ is a diagonal matrix whose $i$-th diagonal element is given by $k(k-1) S_i\Big(\frac{P_b}{R_2}-\frac{P_d}{R_1}\Big)q_i(1 - b_i)^{k-2}$.
Since the obtained Hessian matrix is diagonal, $\textbf{H}_{i,j}$ is positive semidefinite (and hence $E(b_i|k)$ is convex) if all the diagonal entries are nonnegative, i.e., when
$\frac{P_b}{R_2}>\frac{P_d}{R_1}$. In practice, it is reasonable to assume that $P_b \gg P_d$; as in \cite{ericsson}, the BS transmit power is 100 fold the D2D power.
\end{proof}
As a result of Lemma~\ref{convex_E}, the optimal caching probability can be computed to minimize $E(b_i|k)$.
\begin{lemma}
The optimal caching probability $b_i^{*}$ for the energy minimization problem \textbf{P3} is given by,
\begin{align}
b_i^* = \Bigg[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Bigg]^{+}
\end{align}
where $v^{*}$ satisfies the maximum cache constraint $\sum_{i=1}^{N_f} \Big[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Big]^{+}=M$, and $[x]^+ =$ max$(x,0)$.
\end{lemma}
\begin{proof}
The proof proceeds in a similar manner to Lemma 3 and so is omitted.
\end{proof}
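A numerical sketch of this closed form is given below; the multiplier $v^*$ is found by bisection on the cache constraint, and the convexity condition $\frac{P_b}{R_2}>\frac{P_d}{R_1}$ is assumed. Names and bracketing choices are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def b_star(v, q, S, k, Pd_R1, Pb_R2):
    num = v + k**2 * q * S * Pd_R1
    den = k * q * S * (Pd_R1 - Pb_R2)    # negative under the convexity cond.
    base = np.clip(num / den, 0.0, None) # guard the (k-1)-th root
    return np.maximum(1.0 - base ** (1.0 / (k - 1)), 0.0)

def solve_P3(q, S, k, M, Pd_R1, Pb_R2):
    lo = np.min(k*q*S*(Pd_R1 - Pb_R2) - k**2*q*S*Pd_R1)  # all b_i = 0
    hi = np.max(-k**2 * q * S * Pd_R1)                   # all b_i = 1
    v = brentq(lambda v: b_star(v, q, S, k, Pd_R1, Pb_R2).sum() - M, lo, hi)
    return b_star(v, q, S, k, Pd_R1, Pb_R2)
\end{verbatim}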
By substituting $b_i^*$ into (\ref{energy_avrg}), the average energy consumption per cluster is obtained. In the remainder of the paper, we study and minimize the weighted average delay per request for the proposed system.
\section{Delay Analysis}
In this section, the delay analysis and minimization are discussed. A joint stochastic geometry and queueing theory model is exploited to study this problem. The delay analysis incorporates the study of a system of spatially interacting queues. To simplify the mathematical analysis, we further consider that only one D2D link can be active within a cluster of $k$ devices, where $k$ is fixed. As shown later, such an assumption facilitates the analysis by deriving simple expressions.
\textcolor{blue}{We begin by deriving the D2D coverage probability under the above assumption, which is used later in this section.}
\begin{lemma}
\label{coverage_lemma}
The D2D coverage probability of the proposed clustered model with one active D2D link within a cluster is given by
\begin{align}
\label{v_0}
\mathrm{P_{c_d}} = \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)} ,
\end{align}
where $Z(\theta,\alpha,\sigma) = (\pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2})$.
\end{lemma}
\begin{proof}
The result can be proved by using the displacement theorem of the PPP \cite{daley2007introduction}, and then proceeding in a similar manner to Lemmas 1 and 2. The proof is presented in Appendix D for completeness.
\end{proof}
In the following, we firstly describe the traffic model of the network, then we formulate the delay minimization problem.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{delay_queue}
\caption {The traffic model of the requests in a given cluster. Two M/G/1 queues $Q_1$ and $Q_2$ are assumed which represent respectively the requests served by the D2D and BSs' communication.}
\label{delay_queue}
\end{center}
\end{figure}
\subsection{Traffic Model}
We assume that the aggregate request arrival process from the devices in each cluster follows a Poisson arrival process with parameter $\zeta_{tot}$ (requests per time slot). As shown in Fig.~\ref{delay_queue}, the incoming requests are further divided according to where they are served from. $\zeta_{1}$ represents the arrival rate of requests served via the D2D communication, whereas $\zeta_{2}$ is the arrival rate for those served from the BSs. $\zeta_{3} = \zeta_{tot} - \zeta_{1} - \zeta_{2}$ denotes the arrival rate of requests served via the self-cache with zero delay. By definition, $\zeta_{1}$ and $\zeta_{2}$ are also Poisson arrival processes. Without loss of generality, we assume that the file size has a general distribution $G$ whose mean is denoted as $\overline{S}$ MBytes. Hence, an M/G/1 queuing model is adopted whereby two non-interacting queues, $Q_1$ and $Q_2$, model the traffic in each cluster served via the D2D and BSs' communication, respectively. Although $Q_1$ and $Q_2$ are non-interacting as the D2D communication is assumed to be out-of-band, these two queues are spatially interacting with similar queues in other clusters. To recap, $Q_1$ and $Q_2$ are two M/G/1 queues with arrival rates $\zeta_{1}$ and $\zeta_{2}$, and service rates $\mu_1$ and $\mu_2$, respectively.
\subsection{Queue Dynamics}
It is worth highlighting that the two queues $Q_i$, $i \in \{1,2\}$, accumulate requests for files demanded by the cluster members, not the files themselves. First-in first-out (FIFO) scheduling is assumed, whereby a request for content that arrives first is scheduled first, either via the D2D or the BS communication depending on whether the content is cached among the devices or not. The outcome of FIFO scheduling relies only on the time when the request arrives at the queue and is irrelevant to the particular device that issues the request. Given the parameter of the Poisson arrival process $\zeta_{tot}$, the arrival rates at the two queues are expressed respectively as
\begin{align}
\zeta_{1} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big), \\
\zeta_{2} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}
\end{align}
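To make the split concrete, the following sketch evaluates $\zeta_1$, $\zeta_2$, and $\zeta_3$ for a Zipf popularity profile; the uniform caching probabilities used here are placeholders, not the optimized values.
\begin{verbatim}
# Sketch of the arrival-rate split: the Zipf popularity q_i and the caching
# probabilities b_i determine the fractions of the total request rate served
# via D2D, via the BS, and via the self-cache. b_i here are placeholders.
import numpy as np

def arrival_split(zeta_tot, q, b, k):
    zeta_1 = zeta_tot * np.sum(q * ((1 - b) - (1 - b) ** k))  # via D2D
    zeta_2 = zeta_tot * np.sum(q * (1 - b) ** k)              # via the BS
    zeta_3 = zeta_tot - zeta_1 - zeta_2                       # self-cache
    return zeta_1, zeta_2, zeta_3

N_f, M, beta, k = 500, 10, 1.0, 5
q = np.arange(1, N_f + 1) ** (-beta)
q /= q.sum()                          # Zipf popularity
b = np.full(N_f, M / N_f)             # placeholder caching, sum(b) = M
print(arrival_split(zeta_tot=1.0, q=q, b=b, k=k))
\end{verbatim}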
The network operation is depicted in Fig. \ref{delay_queue}, and described in detail below.
\begin{enumerate}
\item Given the memoryless property of the arrival process (Poisson arrival) along with the assumption that the service process is independent of the arrival process,
the number of requests in any queue at a future time only depends upon the current number in the system (at time $t$) and the arrivals or departures that occur within the interval $h$.
\begin{align}
Q_{i}(t+h) = Q_{i}(t) + \Lambda_{i}(h) - M_{i}(h)
\end{align}
where $\Lambda_{i}(h)$ is the number of arrivals in the time interval $(t,t+h)$, with mean arrival rate $\zeta_i$ [sec$^{-1}$], and $M_{i}(h)$ is the number of departures in the time interval $(t,t+h)$, with mean service rate $\mu_i = \frac{\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})W_i{\rm log}_2(1+\theta)}{\overline{S}}$ [sec$^{-1}$]. It is worth highlighting that, unlike the spatial-only model studied in the previous sections, the term $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ depends on the traffic dynamics, since a request being served in a given cluster is interfered with only by other clusters that also have requests to serve. Note also that the service time, whose mean is $\tau_i = \frac{1}{\mu_i}$, inherits its distribution from the file size. These aspects will be revisited later in this section.
\item $\Lambda_{i}(h)$ depends only on $h$ because the arrival process is Poisson. $M_{i}(h)$ is $0$ if the service time of the file being served satisfies $\epsilon_1 > h$, $M_{i}(h)$ is $1$ if $\epsilon_1 < h$ and $\epsilon_1 + \epsilon_2 > h$, and so on. As the service times $\epsilon_1, \epsilon_2, \dots, \epsilon_n$ are independent, neither $\Lambda_{i}(h)$ nor $M_{i}(h)$ depends on what happened prior to $t$. Thus, $Q_{i}(t+h)$ depends only upon $Q_{i}(t)$ and not on the past history. Hence, $\{Q_i(t)\}$ is a continuous-time Markov chain (CTMC) which obeys the stability conditions in \cite{szpankowski1994stability}.
\end{enumerate}
The following proposition provides the necessary and sufficient conditions for the stability of the buffers in the sense defined in \cite{szpankowski1994stability}, i.e., $\{Q_{i}\}$ has a limiting distribution as $t \rightarrow \infty$.
\begin{proposition} {\rm The D2D and BSs' traffic modeling queues are stable, respectively, if and only if}
\begin{align}
\label{stable1}
\zeta_1 < \mu_1 &= \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}} \\
\label{stable2}
\zeta_2 < \mu_2 & =\frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}
\end{align}
\end{proposition}
\begin{proof}
We show sufficiency by proving that (\ref{stable1}) and (\ref{stable2}) guarantee stability in a dominant network, where all queues that have empty buffers make dummy transmissions. In other words, as in the spatial-only network model, the typical receiver sees interference from all other clusters whether they have requests to serve or not. This dominant system approach yields $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ equal to $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ for the D2D and BS communication, respectively. Also, the obtained delay is an upper bound for the actual system delay. The necessity of (\ref{stable1}) and (\ref{stable2}) is shown as follows: if $\zeta_i>\mu_i$, then, by Loynes' theorem, it follows that $\lim_{t\rightarrow \infty}Q_i(t)=\infty$ (a.s.) for all queues in the dominant network.
\end{proof}
Next, we conduct the analysis for the dominant system whose parameters are as follows. The content size has an exponential distribution with mean $\overline{S}$ [MBytes]. The service times also obey the same exponential distribution with means $\tau_1 = \frac{\overline{S}}{R_1}$ [second] and $\tau_2 = \frac{\overline{S}}{R_2}$ [second]. The rates $R_1$ and $R_2$ are calculated from (\ref{rate_eqn}) where $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ are from (\ref{v_0}) and (\ref{p_b_bs}), respectively. Accordingly, $Q_1$ and $Q_2$ are two independent (non-interacting) continuous-time M/M/1 queues with service rates $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$ and $\mu_2 = \frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}$ [sec$^{-1}$], respectively. $W = W_1 + W_2$ is the total bandwidth allocated to the system.
\begin{proposition} {\rm The mean queue length $L_i$ of the $i$-th queue is given by}
\begin{align}
\label{queue_len}
L_i &= \rho_i + \frac{2\rho_i^2}{2(1 - \rho_i)},
\end{align}
\end{proposition}
\begin{proof}
We can easily calculate $L_i$ by observing that $Q_i$ are continuous time M/M/1 queues with arrival rates $\zeta_i$, service rates $\mu_i$, and traffic intensities $\rho_i = \frac{\zeta_i}{\mu_i}$. Thus, applying the Pollaczek-Khinchine formula \cite{Kleinrock}, $L_i$ is directly written as (\ref{queue_len}).
\end{proof}
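As a sanity check on the proposition, the following sketch simulates a single M/M/1 queue and compares the time-averaged number in system with $\rho_i + \frac{2\rho_i^2}{2(1-\rho_i)} = \frac{\rho_i}{1-\rho_i}$; the rates used are illustrative assumptions.
\begin{verbatim}
# Simulation check (under assumed rates) that an M/M/1 queue's mean number
# in system matches L = rho + 2*rho^2/(2*(1 - rho)) = rho/(1 - rho).
import random

def mm1_mean_queue_length(zeta, mu, n_events=200_000, seed=1):
    random.seed(seed)
    t, q, area = 0.0, 0, 0.0
    for _ in range(n_events):
        rate = zeta + (mu if q > 0 else 0.0)  # total event rate
        dt = random.expovariate(rate)
        area += q * dt                        # time-average accumulator
        t += dt
        if random.random() < zeta / rate:
            q += 1                            # arrival
        else:
            q -= 1                            # departure
    return area / t

zeta, mu = 0.6, 1.0
rho = zeta / mu
print(mm1_mean_queue_length(zeta, mu), rho / (1 - rho))  # both ~1.5
\end{verbatim}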
The average delay per request for each queue is calculated from
\begin{align}
D_1 &= \frac{L_1}{\zeta_1}= \frac{1}{\mu_1 - \zeta_1} = \frac{1}{W_1\mathcal{O}_{1} - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} \\
D_2 &= \frac{L_2}{\zeta_2}=\frac{1}{\mu_2 - \zeta_2} = \frac{1}{W_2\mathcal{O}_{2} - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
where $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, $\mathcal{O}_{2}= \frac{\mathrm{P_{c_b}} {\rm log}_2(1+\theta)}{\overline{S}}$ for notational simplicity, and $W_2=W-W_1$. The weighted average delay $D$ is then expressed as
\begin{align}
D&= \frac{\zeta_{1}D_1 + \zeta_{2}D_2}{\zeta_{tot}} \nonumber \\
&= \frac{\sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)}{ \mathcal{O}_{1}W_1 - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} + \frac{\sum_{i=1}^{N_f}q_i (1-b_i)^{k}}{ \mathcal{O}_{2}W_2 - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
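The expression for $D$ is easy to evaluate numerically. The sketch below computes $D_1$, $D_2$, and $D$ for a given caching vector and bandwidth split; all numeric values ($\mathcal{O}_1$, $\mathcal{O}_2$, bandwidths, rates) are illustrative assumptions chosen to satisfy the stability conditions.
\begin{verbatim}
# Sketch evaluating the weighted average delay D(b, W1) above.
# O1, O2 and all numbers are illustrative assumptions.
import numpy as np

def avg_delay(b, W1, W, q, k, zeta_tot, O1, O2):
    zeta_1 = zeta_tot * np.sum(q * ((1 - b) - (1 - b) ** k))
    zeta_2 = zeta_tot * np.sum(q * (1 - b) ** k)
    D1 = 1 / (O1 * W1 - zeta_1)        # D2D queue, stability required
    D2 = 1 / (O2 * (W - W1) - zeta_2)  # BS queue, stability required
    return (zeta_1 * D1 + zeta_2 * D2) / zeta_tot

q = np.arange(1, 501) ** -1.0
q /= q.sum()
b = np.full(500, 10 / 500)             # placeholder caching vector
print(avg_delay(b, W1=10e6, W=20e6, q=q, k=5,
                zeta_tot=1.0, O1=2e-7, O2=1e-7))
\end{verbatim}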
One important insight from the delay equation is that the caching probability $b_i$ controls the arrival rates $\zeta_{1}$ and $\zeta_{2}$ while the bandwidth determines the service rates $\mu_1$ and $\mu_2$. Therefore, it turns out to be of paramount importance to jointly optimize $b_i$ and $W_1$ to minimize the average delay. Consequently, we formulate the weighted average delay joint caching and bandwidth minimization problem as
\begin{align}
\label{optimize_eqn3}
\textbf{P4:} \quad \quad&\underset{b_i,{\rm W}_1}{\text{min}} \quad D(b_i,W_1) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber \\
& 0 \leq W_1 \leq W.
\end{align}
Although the objective function of \textbf{P4} is convex w.r.t. $W_1$, as derived below, the coupling of the optimization variables $b_i$ and $W_1$ makes \textbf{P4} a non-convex optimization problem. Therefore, \textbf{P4} cannot be solved directly using standard convex optimization techniques.
By applying the block coordinate descent optimization technique, \textbf{P4} can be solved in an iterative manner as follows. First, for a given caching probability $b_i$, we solve the bandwidth allocation subproblem. Afterwards, the obtained optimal bandwidth is used to update $b_i$. The optimal bandwidth for the bandwidth allocation subproblem is given in the next lemma.
\begin{lemma}
The objective function of \textbf{P4} in (\ref{optimize_eqn3}) is convex w.r.t. $W_1$, and the optimal bandwidth allocation to the D2D communication is given by
\begin{align}
W_1^* = \frac{\zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k}) +\varpi \big(\mathcal{O}_{2}W - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)}{\mathcal{O}_{1}+\varpi\mathcal{O}_{2}},
\end{align}
where $\overline{b}_i = 1 - b_i$, and $\varpi=\sqrt{\frac{\mathcal{O}_{1}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})}{\mathcal{O}_{2} \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}}}$.
\end{lemma}
\begin{proof}
$D(b_i,W_1|k)$ can be written as
\begin{align}
\label{optimize_eqn3_p1}
\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-1} + \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-1}, \nonumber
\end{align}
The second derivative $\frac{\partial^2 D(b_i,W_1|k)}{\partial W_1^2}$ is hence given by
\begin{align}
2\mathcal{O}_{1}^2\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-3} + 2\mathcal{O}_{2}^2\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-3}, \nonumber
\end{align}
The stability condition requires that $\mathcal{O}_{1}W_1 > \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})$ and $\mathcal{O}_{2}W_2 > \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}$. Also, $\overline{b}_i \geq \overline{b}_i^{k}$ by definition. Hence, $\frac{\partial^2 D(b_i,W_1|k)}{\partial W_1^2} > 0$, and the objective function is a convex function of $W_1$. The optimal bandwidth allocation can be obtained from the Karush-Kuhn-Tucker (KKT) conditions similar to problems \textbf{P2} and \textbf{P3}, with the details omitted for brevity.
\end{proof}
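For completeness, the closed form of the lemma can be coded directly. The following sketch computes $W_1^*$; all inputs are illustrative assumptions, and the result can be checked against a grid search over the delay expression.
\begin{verbatim}
# Sketch of the closed-form optimal D2D bandwidth W1* from the lemma above.
import numpy as np

def w1_star(q, b, k, zeta_tot, W, O1, O2):
    bb = 1 - b
    A = np.sum(q * (bb - bb ** k))   # D2D load factor
    B = np.sum(q * bb ** k)          # BS load factor
    varpi = np.sqrt((O1 * A) / (O2 * B))
    return (zeta_tot * A + varpi * (O2 * W - zeta_tot * B)) / (O1 + varpi * O2)

q = np.arange(1, 501) ** -1.0
q /= q.sum()
b = np.full(500, 10 / 500)           # placeholder caching vector
print(w1_star(q, b, k=5, zeta_tot=1.0, W=20e6, O1=2e-7, O2=1e-7))
\end{verbatim}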
Given $W_1^*$ from the bandwidth allocation subproblem, the caching probability subproblem can be written as
\begin{align}
\textbf{P5:} \quad \quad&\underset{b_i}{\text{min}} \quad D(b_i,W_1^*) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111})
\end{align}
The caching probability subproblem \textbf{P5} is a sum of two fractional functions, where the first fraction is a ratio of a concave to a convex function while the second fraction is a ratio of a convex to a concave function. The first structure, i.e., concave over convex, renders solving this problem using fractional programming very challenging.\footnote{Dinkelbach's transform can be used to solve a minimization of a fractional function that has the form of a convex over a concave function, whereby an equivalent problem is solved with the objective function being the difference between the convex (numerator) and concave (denominator) functions \cite{schaible1973fractional}.} Hence, we use the successive convex approximation technique to solve for $b_i$ given the optimal bandwidth $W_1^*$. This procedure, sketched below, is repeated until the value of \textbf{P4}'s objective function converges to a pre-specified accuracy.
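The following sketch illustrates one possible realization of this iteration. It reuses \texttt{avg\_delay} and \texttt{w1\_star} from the earlier sketches and, as a simple stand-in for the successive convex approximation step (an assumption, not the paper's exact update), applies a projected-gradient step on $b_i$ using the analytic gradient of $D$.
\begin{verbatim}
# Illustrative BCD loop for P4: closed-form W1* alternated with a
# projected-gradient step on b (a stand-in for the SCA update).
# Reuses avg_delay and w1_star from the sketches above.
import numpy as np

def project_capped_simplex(b, M, iters=60):
    """Project onto {0 <= b_i <= 1, sum(b) = M} by bisection on a shift."""
    lo, hi = b.min() - 1.0, b.max() + 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if np.clip(b - mid, 0.0, 1.0).sum() > M:
            lo = mid
        else:
            hi = mid
    return np.clip(b - (lo + hi) / 2, 0.0, 1.0)

def bcd(q, k, zeta_tot, W, O1, O2, M, steps=300, lr=1e-3):
    b = np.full(len(q), M / len(q))
    for _ in range(steps):
        W1 = w1_star(q, b, k, zeta_tot, W, O1, O2)    # bandwidth subproblem
        bb = 1 - b
        A, B = np.sum(q * (bb - bb ** k)), np.sum(q * bb ** k)
        dD_dA = O1 * W1 / (O1 * W1 - zeta_tot * A) ** 2
        dD_dB = O2 * (W - W1) / (O2 * (W - W1) - zeta_tot * B) ** 2
        dA_db = q * (k * bb ** (k - 1) - 1)           # dA/db_i
        dB_db = -k * q * bb ** (k - 1)                # dB/db_i
        grad = dD_dA * dA_db + dD_dB * dB_db
        b = project_capped_simplex(b - lr * grad, M)  # caching subproblem
    return b, W1
\end{verbatim}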
\section{Numerical Results}
The simulation setup is as follows. $W=20$ MHz, $P_b = 43$ dBm, $P_d = 23$ dBm, $\sigma=10$ m, $\beta=1, \alpha=4$, $N_f=500$ files, $M=10$ files, $\overline{n}=5$ devices, $\lambda_{p} =50$ clusters/km$^2$, $\overline{S}=3$ Mbits, and $\theta=0$ dB. This simulation setup will be used unless otherwise specified.
\subsection{Offloading Gain Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{prob_r_geq_r0.eps}
\caption {The probability that the achievable rate is greater than a threshold $R_0$ versus standard deviation $\sigma$.}
\label{prob_r_geq_r0}
\end{center}
\end{figure}
In this subsection, we present the offloading gain performance for the proposed caching model.
In Fig.~\ref{prob_r_geq_r0}, we verify the accuracy of the analytical results for the probability $\mathbb{P}(R_1>R_0)$. The theoretical and simulated results are plotted together, and they are consistent. We can observe that the probability $\mathbb{P}(R_1>R_0)$ decreases monotonically with the increase of $\sigma$. This is because as $\sigma$ increases, the serving distance increases and the inter-cluster interfering distance, between the out-of-cluster interferers and the typical device, decreases, and equivalently, the $\mathrm{SIR}$ decreases. It is also shown that $\mathbb{P}(R_1>R_0)$ decreases with the $\mathrm{SIR}$ threshold $\theta$ as the channel becomes more prone to be in outage when increasing the $\mathrm{SIR}$ threshold $\theta$.
In Fig.~\ref{prob_r_geq_r0_vs_p}, we plot the probability $\mathbb{P}(R_1>R_0)$ against the channel access probability $p$ at different thresholds $R_0$. As evident from the plot, there is an optimal $p^*$; below it the probability $\mathbb{P}(R_1>R_0)$ tends to increase as the channel access probability increases, and beyond it, the probability $\mathbb{P}(R_1>R_0)$ decreases monotonically due to the effect of more interferers accessing the channel. It is quite natural that $\mathbb{P}(R_1>R_0)$ is higher for a smaller rate threshold $R_0$. Also, we observe that the optimal access probability $p^*$ is smaller when $R_0$ decreases. This implies that a transmitting device can maximize the probability $\mathbb{P}(R_1>R_0)$ at the receiver when $R_0$ is smaller by accessing the channel less often, and correspondingly, receiving lower interference.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{prob_r_geq_r0_vs_p.eps}
\caption {The probability that the achievable rate is greater than a threshold $R_0$ versus the access probability $p$.}
\label{prob_r_geq_r0_vs_p}
\end{center}
\end{figure}
\begin{figure*}
\centering
\subfigure[$p=p^*$. ]{\includegraphics[width=3.0in]{histogram_b_i_p_star.eps}
\label{histogram_b_i_p_star}}
\subfigure[$p \neq p^*$.]{\includegraphics[width=3.0in]{histogram_b_i_p_leq_p_star.eps}
\label{histogram_b_i_p_leq_p_star}}
\caption{Histogram of the caching probability $b_i$ when (a) $p=p^*$ and (b) $p \neq p^*$.}
\label{histogram_b_i}
\end{figure*}
To show the effect of $p$ on the caching probability, in Fig.~\ref{histogram_b_i}, we plot the histogram of the optimal caching probability at different values of $p$, where $p=p^*$ in Fig.~\ref{histogram_b_i_p_star} and $p\neq p^*$ in Fig.~\ref{histogram_b_i_p_leq_p_star}. It is clear from the histograms that the optimal caching probability $b_i$ tends to be more skewed when $p\neq p^*$, i.e., when $\mathbb{P}(R_1>R_0)$ decreases. This shows that file sharing is more difficult when $p$ is not optimal. For example, if $p<p^*$, the system is too conservative owing to the small access probabilities. However, for $p>p^*$, the outage probability is high due to the aggressive interference. In such a low coverage probability regime, each device tends to cache the most popular files, leading to fewer opportunities for content transfer between devices.
\begin{figure*}
\centering
\subfigure[The offloading probability versus the popularity of files $\beta$ at different thresholds $R_0$. ]{\includegraphics[width=3.0in]{offloading_gain_vs_beta.eps}
\label{offloading_gain_vs_beta_R_0}}
\subfigure[The offloading probability versus the popularity of files $\beta$ under different caching schemes (PC, Zipf, CPF).]{\includegraphics[width=3.0in]{offloading_prob_cach_compare.eps}
\label{offloading_prob_cach_compare}}
\caption{The offloading probability versus the popularity of files $\beta$.}
\label{offloading_gain_vs_beta}
\end{figure*}
Last but not least, Fig.~\ref{offloading_gain_vs_beta} manifests the prominent effect of the files' popularity on the offloading gain. In Fig.~\ref{offloading_gain_vs_beta_R_0}, we plot the offloading gain against $\beta$ at different rate thresholds $R_0$. We note that the offloading gain monotonically increases with $\beta$, since fewer files are then frequently requested and these files can be entirely cached among the cluster devices. Also, we see that the offloading gain decreases with the increase of $R_0$ since the probability $\mathbb{P}(R_1>R_0)$ decreases with $R_0$. In Fig.~\ref{offloading_prob_cach_compare}, we compare the offloading gain of three different caching schemes, namely, the proposed probabilistic caching (PC), Zipf's caching (Zipf), and caching popular files (CPF). We can see that the offloading gain under the probabilistic caching scheme attains the best performance as compared to the other schemes. Also, we note that the PC and Zipf schemes achieve the same offloading gain when $\beta=0$ owing to the uniformity of content popularity.
\subsection{Energy Consumption Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{energy_vs_beta4.eps}
\caption {Normalized energy consumption versus popularity exponent $\beta$.}
\label{energy_vs_beta}
\end{center}
\end{figure}
In this subsection, we present the energy consumption results.
Fig.~\ref{energy_vs_beta} shows the energy consumption, normalized to the number of devices per cluster, versus $\beta$ under different caching schemes, namely, PC, Zipf, CPF, and uniform random caching (random). We can see that the minimized energy under the proposed probabilistic caching scheme attains the best performance as compared to other schemes. Also, it is clear that, except for the random uniform caching, the consumed energy decreases with $\beta$. This can be justified by the fact that as $\beta$ increases, fewer files are frequently requested which are more likely to be cached among the devices under PC, CPF, and the Zipf's caching schemes. These few files therefore are downloadable from the devices via low power D2D communication. In the random caching scheme, files are uniformly chosen for caching independently of their popularity.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{energy_vs_n3.eps}
\caption {Normalized energy consumption versus number of devices per cluster.}
\label{energy_vs_n}
\end{center}
\end{figure}
We plot the normalized energy consumption per device versus the number of devices per cluster in Fig.~\ref{energy_vs_n}. First, we see that the normalized energy consumption decreases with the number of devices. As the number of devices per cluster increases, it is more probable to obtain requested files via low power D2D communication. When the number of devices per cluster is relatively large, the normalized energy consumption tends to flatten as most of the content becomes cached at the cluster devices.
\subsection{Delay Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{delay_compare.eps}
\caption {Weighted average delay versus the popularity exponent $\beta$.}
\label{delay_compare}
\end{center}
\end{figure}
In Fig.~\ref{delay_compare}, we compare the average delay of three different caching schemes, PC, Zipf, and CPF. We can see that the jointly minimized average delay under the probabilistic caching scheme attains the best performance as compared to the other caching schemes. Also, we see that, in general, the average delay monotonically decreases with $\beta$, as fewer files then account for most of the demand. Fig.~\ref{BW_compare} manifests the effect of the files' popularity $\beta$ on the allocated bandwidth. It is shown that the optimal D2D allocated bandwidth $W_1^*$ keeps increasing with $\beta$. This can be interpreted as follows. When $\beta$ increases, fewer files become highly demanded, and these files can be entirely cached among the devices. To cope with the larger number of requests served via the D2D communication, the D2D allocated bandwidth needs to be increased.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{BW_compare1.eps}
\caption {Normalized bandwidth allocation versus the popularity exponent $\beta$.}
\label{BW_compare}
\end{center}
\end{figure}
\section{Conclusion}
In this work, we conduct a comprehensive analysis of the joint communication and caching for a clustered D2D network with random probabilistic caching incorporated at the devices. We first maximize the offloading gain of the proposed network by jointly optimizing the channel access probability and the caching probability. We solve for the channel access probability numerically, and the optimal caching probability is then characterized. Then, we minimize the energy consumption of the proposed clustered D2D network. We formulate the energy minimization problem, show that it is convex, and obtain the optimal caching probability. Finally, we minimize the per request weighted average delay by jointly optimizing the bandwidth allocation between D2D and BSs' communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the block coordinate descent optimization technique, the joint minimization problem can be solved in an iterative manner. Results show roughly up to $10\%$, $17\%$, and $140\%$ improvement in the offloading gain, energy consumption, and average delay, respectively, compared to the Zipf's caching technique.
\begin{appendices}
\section{Proof of Lemma 1}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$, evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$, can be evaluated as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= \mathbb{E} \Bigg[e^{-s \sum_{\Phi_p^{!}} \sum_{y \in \mathcal{B}^p} g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p,g_{y_{x}}} \prod_{y \in \mathcal{B}^p} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p} \prod_{y \in \mathcal{B}^p} \mathbb{E}_{g_{y_{x}}} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(a)}{=} \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p} \prod_{y \in \mathcal{B}^p} \frac{1}{1+s \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(b)}{=} \mathbb{E}_{\Phi_p} \prod_{\Phi_p^{!}} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big)\Big)\dd{x}\Bigg) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z\rVert^{-\alpha}}\Big)f_Y(z-x)\dd{z}\Big)\Big)\dd{x}\Bigg) \nonumber
\\
&\overset{(e)}{=} {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{u=0}^{\infty}\Big(1 - \frac{1}{1+s u^{-\alpha}}\Big)f_U(u|v)\dd{u}\Big)\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\big(-p\overline{n} \int_{u=0}^{\infty}
\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}\big)\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$. (a) follows from the Rayleigh fading assumption, (b)
follows from the probability generating functional (PGFL) of the PPP $\Phi_c^{p}$, (c) follows from the PGFL of the parent PPP $\Phi_p$, (d) follows from the change of variables $z = x + y$, and (e) follows from converting Cartesian coordinates to polar coordinates. Hence, the lemma is proven.
\section{Proof of Lemma 2}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$, evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$, is written as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &= \mathbb{E} \Bigg[e^{-s \sum_{y \in \mathcal{A}^p} g_{y} \lVert x_0 + y\rVert^{-\alpha}} \Bigg] \nonumber
\\
&= \mathbb{E}_{\mathcal{A}^p,g_{y}} \prod_{y \in\mathcal{A}^p} e^{-s g_{y} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&= \mathbb{E}_{\mathcal{A}^p} \prod_{y \in\mathcal{A}^p} \mathbb{E}_{g_{y}} e^{-s g_{y} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&\overset{(a)}{=} \mathbb{E}_{\mathcal{A}^p} \prod_{y \in\mathcal{A}^p} \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}
\nonumber
\\
&\overset{(b)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}\Big)f_{Y}(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z_0\rVert^{-\alpha}}\Big)f_{Y}(z_0-x_0)\dd{z_0}\Big) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\Big(1 - \frac{1}{1+s h^{-\alpha}}\Big)f_H(h|v_0)\dd{h}\Big) \nonumber
\\
&= {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h|v_0)\dd{h}\Big) \nonumber
\\
\mathscr{L}_{I_{\Phi_c} }(s) &\approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where (a) follows from the Rayleigh fading assumption, (b) follows from the PGFL of the PPP $\Phi_c^p$, (c) follows from the change of variables $z_0 = x_0 + y$, (d) follows from converting Cartesian coordinates to polar coordinates, and the approximation comes from neglecting the correlation of the intra-cluster interfering distances, i.e., the common part $v_0$, as in \cite{clustered_twc}. Hence, the lemma is proven.
\section{Proof of Lemma 3}
First, to prove concavity, we proceed as follows.
\begin{align}
\frac{\partial \mathbb{P}_o}{\partial b_i} &= q_i + q_i\big(\overline{n}(1-b_i)e^{-\overline{n}b_i}-(1-e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0)\nonumber \\
\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2} &= -q_i\big(\overline{n}e^{-\overline{n}b_i} + \overline{n}^2(1-b_i)e^{-\overline{n}b_i} + \overline{n}e^{-\overline{n}b_i}\big)\mathbb{P}(R_1>R_0)
\end{align}
It is clear that the second derivative $\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2}$ is negative, while the mixed partial derivatives vanish. Hence, the Hessian matrix \textbf{H}$_{i,j}$ of $\mathbb{P}_o(p^*,b_i)$ w.r.t. $b_i$ is diagonal with negative entries, i.e., negative semidefinite, and the function $\mathbb{P}_o(p^*,b_i)$ is concave with respect to $b_i$. Also, the constraints are linear, so the KKT conditions are necessary and sufficient for optimality. The dual Lagrangian function and the KKT conditions are then employed to solve \textbf{P2}.
The KKT Lagrangian function of the offloading gain maximization problem \textbf{P2} is given by
\begin{align}
\mathcal{L}(b_i,w_i,\mu_i,v) =& \sum_{i=1}^{N_f} q_i b_i + q_i(1- b_i)(1-e^{-b_i\overline{n}})\mathbb{P}(R_1>R_0) \nonumber \\
&+ v(M-\sum_{i=1}^{N_f} b_i) + \sum_{i=1}^{N_f} w_i (b_i-1) - \sum_{i=1}^{N_f} \mu_i b_i
\end{align}
where $v, w_i, \mu_i$ are the Lagrange multipliers associated with the equality constraint and the two inequality constraints, respectively. Now, the optimality conditions are written as
\begin{align}
\label{grad}
& \grad_{b_i} \mathcal{L}(b_i^*,w_i^*,\mu_i^*,v^*) = q_i + q_i\big(\overline{n}(1-b_i)e^{-\overline{n}b_i}-(1-e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0) -v^* + w_i^* -\mu_i^*= 0 \\
&w_i^* \geq 0 \\
&\mu_i^* \leq 0 \\
&w_i^* (b_i^* - 1) =0 \\
&\mu_i^* b_i^* = 0\\
&(M-\sum_{i=1}^{N_f} b_i^*) = 0
\end{align}
\begin{enumerate}
\item $w_i^* > 0$: We have $b_i^* = 1$, $\mu_i^*=0$, and
\begin{align}
&q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)= v^* - w_i^* \nonumber \\
\label{cond1_offload}
&v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)
\end{align}
\item $\mu_i^* < 0$: We have $b_i^* = 0$, and $w_i^*=0$, and
\begin{align}
& q_i + \overline{n}q_i\mathbb{P}(R_1>R_0) = v^* + \mu_i^* \nonumber \\
\label{cond2_offload}
&v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)
\end{align}
\item $0 <b_i^*<1$: We have $w_i^*=\mu_i^*=0$, and
\begin{align}
\label{psii_offload}
v^{*} = q_i + q_i\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i} - (1 - e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0)
\end{align}
\end{enumerate}
By combining (\ref{cond1_offload}), (\ref{cond2_offload}), and (\ref{psii_offload}), with the fact that $\sum_{i=1}^{N_f} b_i^*=M$, the lemma is proven.
\section{Proof of Lemma 6}
Under the assumption of one active D2D link within a cluster, there is no intra-cluster interference. Also, the Laplace transform of the inter-cluster interference is similar to that of a PPP \cite{andrews2011tractable} whose density is the same as that of the parent PPP. In fact, this holds by the displacement theorem of the PPP \cite{daley2007introduction}, where each interferer is a point of a PPP that is displaced randomly and independently of all other points. For the sake of completeness, we prove it here. Starting from the third line of the proof of Lemma 1, we get
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &\overset{(a)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \int_{v=0}^{\infty}\mathbb{E}_{u|v}\Big[1 -
e^{-s P_d g_{u} u^{-\alpha}} \Big]v\dd{v}\Bigg), \nonumber \\
&= \text{exp}\Big(-2\pi\lambda_p \mathbb{E}_{g_{u}} \big[\int_{v=0}^{\infty}\int_{u=0}^{\infty}\big(1 - e^{-s P_d g_{u} u^{-\alpha}} \big)f_U(u|v)\dd{u}v\dd{v}\big]\Big) \nonumber \\
\label{prelaplace}
&\overset{(b)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \underbrace{\int_{v=0}^{\infty}v\dd{v} - \int_{v=0}^{\infty}\int_{u=0}^{\infty} e^{-s P_d g_{u} u^{-\alpha}} f_{U}(u|v)\dd{u} v \dd{v}}_{\mathcal{R}(s,\alpha)}\Bigg)
\end{align}
where (a) follows from the PGFL of the parent PPP \cite{andrews2011tractable}, and (b) follows from $\int_{u=0}^{\infty} f_{U}(u|v)\dd{u} =1$. Now, we proceed by evaluating $\mathcal{R}(s,\alpha)$ as follows.
\begin{align}
\mathcal{R}(s,\alpha)&\overset{(c)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}
\int_{v=0}^{\infty} f_{U}(u|v)v \dd{v}\dd{u}\nonumber \\
&\overset{(d)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}u\dd{u} \nonumber \\
&\overset{(e)}{=} \int_{u=0}^{\infty}(1 - e^{-s P_d g_{u} u^{-\alpha}})u\dd{u} \nonumber \\
&\overset{(f)}{=} \frac{(s P_d g_{u})^{2/\alpha}}{\alpha} \int_{t=0}^{\infty}(1 - e^{-t})t^{-1-\frac{2}{\alpha}}\dd{t} \nonumber
\label{laplaceppp1}
&\overset{(g)}{=} \frac{(s P_d)^{2/\alpha}}{2} g_{u}^{2/\alpha} \Gamma(1 - 2/\alpha),
\end{align}
where (c) follows from changing the order of integration, (d) follows from $ \int_{v=0}^{\infty} f_{U}(u|v)v\dd{v} = u$, (e) follows from changing the dummy variable $v$ to $u$, (f) follows from the change of variables $t=s P_d g_{u}u^{-\alpha}$, and (g) follows from solving the integration in (f) by parts. Substituting the obtained value for $\mathcal{R}(s,\alpha)$ into (\ref{prelaplace}), and taking the expectation over the exponential random variable $g_u$, with the fact that $\mathbb{E}_{g_{u}} [g_{u}^{2/\alpha}] = \Gamma(1 + 2/\alpha)$, we get
\begin{align}
\label{laplace_trans}
\mathscr{L}_{I_{\Phi_p^{!}}} (s)&= {\rm exp}\Big(-\pi\lambda_p (sP_d )^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)\Big),
\end{align}
Substituting this expression with the distance probability density function $f_R(r)$ into the coverage probability equation yields
\begin{align}
\mathrm{P_{c_d}} &=\int_{r=0}^{\infty}
{\rm e}^{-\pi\lambda_p (sP_d)^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}\frac{r}{2\sigma^2}{\rm e}^{\frac{-r^2}{4\sigma^2}} \dd{r}, \nonumber \\
&\overset{(h)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-\pi\lambda_p \theta^{2/\alpha}r^{2} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}{\rm e}^{\frac{-r^2}{4\sigma^2}} \dd{r}, \nonumber \\
&\overset{(i)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-r^2Z(\theta,\alpha,\sigma)} \dd{r}, \nonumber \\
&= \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)}
\end{align}
where (h) comes from the substitution ($s = \frac{\theta r^{\alpha}}{P_d}$), and (i) from $Z(\theta,\alpha,\sigma) = (\pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2})$.
\end{appendices}
\bibliographystyle{IEEEtran}
\section{Introduction}
Caching at mobile devices significantly improves system performance by facilitating \ac{D2D} communications, which enhances the spectrum efficiency and alleviates the heavy burden on backhaul links \cite{Femtocaching}. Modeling the cache-enabled heterogeneous networks, including \ac{SBS} and mobile devices, follows two main directions in the literature. The first line of work focuses on the fundamental throughput scaling results by assuming a simple protocol channel model \cite{Femtocaching,golrezaei2014base,8412262}, known as the protocol model, where two devices can communicate if they are within a certain distance. The second line of work, defined as the physical interference model, considers a more realistic model for the underlying physical layer \cite{andreev2015analyzing,cache_schedule}. In the following, we review some of the works relevant to the second line, focusing mainly on the energy efficiency \ac{EE} and delay analysis of wireless caching networks.
The physical interference model is based on the fundamental \ac{SIR} metric, and therefore, is applicable to any wireless communication system. Modeling devices' locations as a \ac{PPP} is widely employed in the literature, especially, in the wireless caching area \cite{andreev2015analyzing,cache_schedule,energy_efficiency,ee_BS,hajri2018energy}. However, a realistic model for \ac{D2D} caching networks requires that a given device typically has multiple proximate devices, where any of them can potentially act as a serving device. This deployment is known as clustered devices deployment, which can be characterized by cluster processes \cite{haenggi2012stochastic}. Unlike the popular PPP approach, the authors in \cite{clustered_twc,clustered_tcom,8070464} developed a stochastic geometry based model to characterize the performance of content placement in the clustered \ac{D2D} network. In \cite{clustered_twc}, the authors discuss two strategies of content placement in a \ac{PCP} deployment. First, when each device randomly chooses its serving device from its local cluster, and secondly, when each device connects to its $k$-th closest transmitting device from its local cluster. The authors characterize the optimal number of \ac{D2D} transmitters that must be simultaneously activated in each cluster to maximize the area spectral efficiency. The performance of cluster-centric content placement is characterized in \cite{clustered_tcom}, where the content of interest in each cluster is cached closer to the cluster center, such that the collective performance of all the devices in each cluster is optimized. Inspired by the Matern hard-core point process, which captures pairwise interactions between nodes, the authors in \cite{8070464} devised a novel spatially correlated caching strategy called \ac{HCP} such that the \ac{D2D} devices caching the same content are never closer to each other than the exclusion radius.
Energy efficiency in wireless caching networks is widely studied in the literature \cite{energy_efficiency,ee_BS,hajri2018energy}.
For example, an optimal caching problem is formulated in \cite{energy_efficiency} to minimize the energy consumption of a wireless network. The authors consider a cooperative wireless caching network where relay nodes cooperate with the devices to cache the most popular files in order to minimize energy consumption. In \cite{ee_BS}, the authors investigate how caching at BSs can improve the \ac{EE} of wireless access networks. The condition under which the \ac{EE} can benefit from caching is characterized, and the optimal cache capacity that maximizes the network \ac{EE} is found. It is shown that the \ac{EE} benefit from caching depends on content popularity, backhaul capacity, and interference level.
The authors in \cite{hajri2018energy} exploit the spatial repartitions of devices and the correlation in their content popularity profiles to improve the achievable EE. The \ac{EE} optimization problem is decoupled into two related subproblems, the first one addresses the issue of content popularity modeling, and the second subproblem investigates the impact of exploiting the spatial repartitions of devices. The authors derive a closed-form expression of the achievable \ac{EE} and find the optimal density of active small cells to maximize the EE. It is shown that the small base station allocation algorithm improves the energy efficiency and hit probability. However, the problem of \ac{EE} for \ac{D2D} based caching is not yet addressed in the literature.
Recently, the joint optimization of delay and energy in wireless caching has been conducted; see, for instance, \cite{wu2018energy,huang2018energy,jiang2018energy}. The authors in \cite{wu2018energy} jointly optimize the delay and energy in a cache-enabled dense small cell network. The authors formulate the energy-delay optimization problem as a mixed integer programming problem, where file placement, device association to the small cells, and power control are jointly considered. To model the tradeoff between energy consumption and end-to-end file delivery delay, a utility function linearly combining these two metrics is used as the objective function of the optimization problem. An efficient algorithm is proposed to approach the optimal association and power solution, which could achieve the optimal tradeoff between energy consumption and end-to-end file delivery delay. In \cite{huang2018energy}, the authors showed that with caching, the energy consumption can be reduced by extending the transmission time. However, caching may incur wasted energy if the device never needs the cached content. Based on the random content request delay, the authors study the maximization of \ac{EE} subject to a hard delay constraint in an additive white Gaussian noise channel. It is shown that the \ac{EE} of a system with caching can be significantly improved with increasing content request probability and target transmission rate compared with the traditional on-demand scheme, in which the \ac{BS} transmits the content file only after it is requested by the user. However, the problem of energy consumption and joint communication and caching for clustered \ac{D2D} networks is not yet addressed in the literature.
In this paper, we conduct comprehensive performance analysis and optimization of the joint communication and caching for a clustered \ac{D2D} network, where the devices have unused memory to cache some files, following a random probabilistic caching scheme. Our network model effectively characterizes the stochastic nature of channel fading and clustered geographic locations of devices. Furthermore, this paper also puts emphasis on the need for considering the traffic dynamics and rate of requests when studying the delay incurred to deliver requests to devices. Our work is the first in the literature that conducts a comprehensive spatial analysis of a doubly \ac{PCP} (also called doubly \ac{PPP} \cite{haenggi2012stochastic}) with the devices adopting a slotted-ALOHA random access technique to access a shared channel. Also, we are the first to incorporate the spatio-temporal analysis in wireless caching networks by combining tools from stochastic geometry and queuing theory in order to analyze and minimize the average delay (see, for instance, \cite{zhong2015stability,stamatiou2010random,zhong2017heterogeneous,7917340}).
The main contributions of this paper are summarized below.
\begin{itemize}
\item We consider a \ac{TCP} where the devices are spatially distributed as groups in clusters. The cluster centers are drawn from a parent PPP, and the cluster members are normally distributed around the centers, forming a Gaussian PPP. This organization of the parent and offspring PPPs forms the so-called doubly PPP.
\item We conduct the coverage probability analysis where the devices adopt a slotted-ALOHA random access technique. We then jointly optimize the access probability and caching probability to maximize the cluster offloading gain. A closed-form solution of the optimal caching probability sub-problem is provided.
\item The energy consumption problem is then formulated and shown to be convex and the optimal caching probability is also formulated.
\item By combining tools from stochastic geometry as well as queuing theory,
we minimize the per request weighted average delay by jointly optimizing bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the \ac{BCD} optimization technique, the joint minimization problem is solved in an iterative manner.
\item We validate our theoretical findings via simulations. Results show a significant improvement in the network performance metrics, namely, the offloading gain, energy consumption, and average delay as compared to other caching schemes proposed earlier in literature.
\end{itemize}
The rest of this paper is organized as follows. Section II and Section III discuss the system model and the maximum offloading gain respectively. The energy consumption is discussed in Section IV and the delay analysis is conducted in Section V. Numerical results are then presented in Section VI before we conclude the paper in Section VII.
\section{System Model}
\subsection{System Setup}
We model the location of the mobile devices with a \ac{TCP} in which the parent points are drawn from a PPP $\Phi_p$ with density $\lambda_p$, and the daughter points are drawn from a Gaussian PPP around each parent point. In fact, the TCP is considered as a doubly \ac{PCP} where the daughter points are normally scattered with variance $\sigma^2$ around each parent point in $\mathbb{R}^2$ \cite{haenggi2012stochastic}.
The parent points and offspring are referred to as cluster centers and cluster members, respectively. The number of cluster members in each cluster is a Poisson random variable with mean $\overline{n}$. The density function of the location of a cluster member relative to its cluster center is
\begin{equation}
f_Y(y) = \frac{1}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad \quad y \in \mathbb{R}^2
\label{pcp}
\end{equation}
where $\lVert .\rVert$ is the Euclidean norm. The intensity function of a cluster is given by $\lambda_c(y) = \frac{\overline{n}}{2\pi\sigma^2}\textrm{exp}\big(-\frac{\lVert y\rVert^2}{2\sigma^2}\big)$. Therefore, the intensity of the process is given by $\lambda = \overline{n}\lambda_p$. We assume that the BSs' distribution follows another PPP $\Phi_{bs}$ with density $\lambda_{bs}$, which is independent of $\Phi_p$.
\subsection{Content Popularity and Probabilistic Caching Placement}
We assume that each device has a surplus memory of size $M$ designated for caching files.
The total number of files is $N_f> M$ and the set (library) of content indices is denoted as $\mathcal{F} = \{1, 2, \dots , N_f\}$. These files represent the content catalog that all the devices in a cluster may request, which are indexed in a descending order of popularity. The probability that the $i$-th file is requested follows a Zipf's distribution given by,
\begin{equation}
q_i = \frac{ i^{-\beta} }{\sum_{k=1}^{N_f}k^{-\beta}},
\label{zipf}
\end{equation}
where $\beta$ is a parameter that reflects how skewed the popularity distribution is. For example, if $\beta= 0$, the popularity of the files has a uniform distribution. Increasing $\beta$ increases the disparity among the files' popularity such that lower indexed files have higher popularity. By definition, $\sum_{i=1}^{N_f}q_i = 1$.
We use Zipf's distribution to model the popularity of files per cluster.
\ac{D2D} communication is enabled within each cluster to deliver popular content. It is assumed that the devices adopt a slotted-ALOHA medium access protocol, where each transmitter independently and randomly accesses the channel with the same probability $p$. This implies that multiple active \ac{D2D} links might coexist within a cluster. Therefore, $p$ is a design parameter that directly controls the intra-cluster interference, as described later in the next section.
A probabilistic caching model is assumed, where the content is randomly and independently placed in the cache memories of different devices in the same cluster, according to the same distribution. The probability that a generic device stores a particular file $i$ is denoted as $b_i$, $0 \leq b_i \leq 1$ for all $i \in \mathcal{F}$. To avoid duplicate caching of the same content within the memory of the same device, we follow the caching approach proposed in \cite{blaszczyszyn2015optimal} and illustrated in Fig. \ref{prob_cache_example}.
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/prob_cache_exam}
\caption {The cache memory of size $M = 3$ is equally divided into $3$ blocks of unit size. Starting from content $i=1$ to $i=N_f$, each content sequentially fills these $3$ memory blocks by an amount $b_i$. The amounts (probabilities) $b_i$ eventually fill all $3$ blocks since $\sum_{i=1}^{N_f} b_i = M$ \cite{blaszczyszyn2015optimal}. Then a random number in $[0,1]$ is generated, and from each block the content $i$ whose $b_i$ occupies the part intersecting the generated random number is chosen. In this way, in the given example, the contents $\{1, 2, 4\}$ are chosen to be cached.}
\label{prob_cache_example}
\end{center}
\end{figure}
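The construction in Fig. \ref{prob_cache_example} can be sampled with a single uniform draw. The sketch below is a minimal Python rendering of this placement, following \cite{blaszczyszyn2015optimal}; since each $b_i \leq 1$ spans at most one of the $M$ equally spaced selection points, no content is duplicated and exactly $M$ contents are selected.
\begin{verbatim}
# Sketch of the probabilistic placement of the figure above: the b_i fill M
# unit-size blocks sequentially; one uniform draw u selects the M points
# u, 1+u, ..., M-1+u, and content i is cached iff its interval is hit.
import math, random

def sample_cache(b, rng=random.random):
    """b: caching probabilities with sum(b) = M (an integer)."""
    u = rng()                           # single uniform draw in [0, 1)
    cache, level = [], 0.0
    for i, bi in enumerate(b):
        start, end = level, level + bi  # interval of content i on [0, M)
        m = math.ceil(start - u)        # smallest m with m + u >= start
        if m + u < end:                 # at most one hit since bi <= 1
            cache.append(i)
        level = end
    return cache

b = [0.7, 0.6, 0.3, 0.5, 0.9]           # sums to M = 3
print(sample_cache(b))                  # e.g. [0, 1, 4]
\end{verbatim}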
If a device caches the desired file, the device directly retrieves the content. However, if the device does not cache the file, it downloads it from any neighboring device that caches the file (henceforth called catering device) in the same cluster. According to the proposed access model, the probability that a chosen catering device is admitted to access the channel is the access probability $p$. Finally, the device attaches to the nearest \ac{BS} as a last resort to download the content which is not cached entirely within the device's cluster. We assume that the \ac{D2D} communication is operating as out-of-band D2D. $W_{1}$ and $W_{2}$ denote respectively the bandwidth allocated to the \ac{D2D} and \ac{BS}-to-Device communication, and the total bandwidth of the system is denoted as $W=W_{1} + W_{2}$. It is assumed that device requests are served in a random manner, i.e., among the cluster devices, a random device request is chosen to be scheduled and content is served.
In the following, we aim at studying and optimizing three important metrics, widely studied in the literature. The first metric is the offloading gain, which is defined as the probability of obtaining the requested file from the local cluster, either from the self-cache or from a neighboring device in the same cluster, with a rate greater than a certain threshold. The second metric is the energy consumption, which represents the dissipated energy when downloading files either from the BSs or via \ac{D2D} communication. The third metric is the latency, which accounts for the weighted average delay over all the requests served from the \ac{D2D} and \ac{BS}-to-Device communication.
\section{Maximum Offloading Gain}
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{Figures/ch3/distance1.png}
\caption {Illustration of the representative cluster and one interfering cluster.}
\label{distance}
\end{figure}
Let us define the successful offloading probability as the probability that a device can find a requested file in its own cache, or in the caches of neighboring devices within the same cluster with \ac{D2D} link rate higher than a required threshold $R_0$. Without loss of generality, we conduct the analysis for a cluster whose center is $x_0\in \Phi_p$ (referred to as the representative cluster), and the device which requests the content (henceforth called the typical device) is located at the origin. We denote the location of the \ac{D2D} transmitter by $y_0$ w.r.t. $x_0$, where $x_0, y_0\in \mathbb{R}^2$. The distance from the typical device (\ac{D2D} receiver of interest) to this \ac{D2D} transmitter is denoted as $r=\lVert x_0+y_0\rVert$, which is a realization of a random variable $R$ whose distribution is described later. This setup is illustrated in Fig. \ref{distance}. It is assumed that a requested file is served from a randomly selected catering device, which is, in turn, admitted to access the channel based on the slotted-ALOHA protocol. The successful offloading probability is then given by
\begin{align}
\label{offloading_gain}
\mathbb{P}_o(p,b_i) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}})\int_{r=0}^{\infty}f(r) \mathbb{P}(R_{1}(r)>R_0) \dd{r},
\end{align}
where $R_{1}(r)$ is the achievable rate when downloading content from a catering device at a distance $r$ from the typical device with pdf $f(r)$. The first term on the right-hand side is the probability of requesting a locally cached file (self-cache), whereas the remaining term incorporates the probability that a requested file $i$ is cached among at least one cluster member and can be downloaded with a rate greater than $R_0$. For further clarity, since the number of devices per cluster has a Poisson distribution, the probability that there are $k$ devices per cluster is equal to $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$. Accordingly, the probability that there are $k$ devices caching content $i$ is equal to $\frac{(b_i\overline{n})^k e^{-b_i\overline{n}}}{k!}$. Hence, the probability that at least one device caches content $i$ is one minus the void probability (i.e., $k=0$), which equals $1 - e^{-b_i\overline{n}}$.
In the following, we first compute the probability $ \mathbb{P}(R_{1}(r)>R_0)$ conditioning on the distance $r$ between the typical and catering device, then we relax this condition. The received power at the typical device from a catering \ac{D2D} transmitter located at $y_0$ from the cluster center is given by
\begin{align}
P &= P_d g_0 \lVert x_0+y_0\rVert^{-\alpha}= P_d g_0 r^{-\alpha}
\label{pwr}
\end{align}
where $P_d$ denotes the \ac{D2D} transmission power, $g_0 \sim $ exp(1) is an exponential random variable which models Rayleigh fading and $\alpha > 2$ is the path loss exponent. Under the above assumption, the typical device sees two types of interference, namely, intra- and inter-cluster interference. We first describe the inter-cluster interference, then the intra-cluster interference is characterized. The set of active devices in any remote cluster is denoted as $\mathcal{B}^p$, where $p$ refers to the access probability. Similarly, the set of active devices in the local cluster is denoted as $\mathcal{A}^p$. Similar to (\ref{pwr}), the interference from the simultaneously active \ac{D2D} transmitters outside the representative cluster, seen at the typical device, is given by
\begin{align}
I_{\Phi_p^{!}} &= \sum_{x \in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{yx} \lVert x+y\rVert^{-\alpha}\\
& = \sum_{x\in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{u} u^{-\alpha}
\end{align}
where $\Phi_p^{!}=\Phi_p \setminus x_0$ for ease of notation, $y$ is the marginal distance between a potential interfering device and its cluster center at $x \in \Phi_p$ , $u = \lVert x+y\rVert$ is a realization of a random variable $U$ modeling the inter-cluster interfering distance (shown in Fig. \ref{distance}), $g_{yx} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading, and $g_{u} = g_{yx}$ for ease of notation. The intra-cluster interference is then given by
\begin{align}
I_{\Phi_c} &= \sum_{y\in \mathcal{A}^p} P_d g_{yx_0} \lVert x_0+y\rVert^{-\alpha}\\
& = \sum_{y\in \mathcal{A}^p} P_d g_{h} h^{-\alpha}
\end{align}
where $y$ is the marginal distance between the intra-cluster interfering devices and the cluster center at $x_0 \in \Phi_p$, $h = \lVert x_0+y\rVert$ is a realization of a random variable $H$ modeling the intra-cluster interfering distance (shown in Fig. \ref{distance}), $g_{yx_0} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading, and $g_{h} = g_{yx_0}$ for ease of notation. From the thinning theorem \cite{haenggi2012stochastic}, the set of active transmitters following the slotted-ALOHA medium access forms a PPP $\Phi_{cp}$ whose intensity is given by
\begin{align}
\lambda_{cp} = p\lambda_{c}(y) = p\overline{n}f_Y(y) =\frac{p\overline{n}}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad y \in \mathbb{R}^2
\end{align}
Assuming that the thermal noise is neglected as compared to the aggregate interference, the $\mathrm{SIR}$ at the typical device is written as
\begin{equation}
\gamma_{r}=\frac{P}{I_{\Phi_p^{!}} + I_{\Phi_c}} = \frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}}
\end{equation}
A fixed rate transmission model is adopted in our study, where each transmitter (\ac{D2D} or BS) transmits at the fixed rate of log$_2[1+\theta]$ bits/sec/Hz, where $\theta$ is a design parameter. Since the rate is fixed, the transmission is subject to outage due to fading and interference fluctuations. Consequently, the de facto average transmission rate (i.e., average throughput) is given by
\begin{equation}
\label{rate_eqn}
R = W\textrm{ log$_{2}$}[1+ \theta]\mathrm{P_c},
\end{equation}
where $W$ is the bandwidth, $\theta$ is the pre-determined threshold for successful reception, $\mathrm{P_c} =\mathbb{E}(\textbf{1}\{\mathrm{SIR}>\theta\})$ is the coverage probability, and $\textbf{1}\{.\}$ is the indicator function. The \ac{D2D} communication rate under the slotted-ALOHA access scheme is then given by
\begin{equation}
\label{rate_eqn1}
R_{1}(r) = pW_{1} {\rm log}_2 \big(1 + \theta \big) \textbf{1}\{ \gamma_{r} > \theta\}
\end{equation}
Then, the probability $ \mathbb{P}(R_{1}(r)>R_0)$ is derived as follows.
\begin{align}
\mathbb{P}(R_{1}(r)>R_0)&= \mathbb{P} \big(pW_{1} {\rm log}_2 (1 + \theta)\textbf{1}\{ \gamma_{r} > \theta\}
>R_0\big) \nonumber \\
&=\mathbb{P} \big(\textbf{1}\{ \gamma_{r} > \theta\}
>\frac{R_0}{pW_{1} {\rm log}_2 (1 + \theta )}\big) \nonumber \\
&\overset{(a)}{=} \mathbb{P}\big(\frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}} > \theta\big)
\nonumber \\
&= \mathbb{P}\big( g_0 > \frac{\theta r^{\alpha}}{P_d} [I_{\Phi_p^{!}} + I_{\Phi_c}] \big)
\nonumber \\
&\overset{(b)}{=}\mathbb{E}_{I_{\Phi_p^{!}},I_{\Phi_c}}\Big[\text{exp}\big(\frac{-\theta r^{\alpha}}{P_d}{[I_{\Phi_p^{!}} + I_{\Phi_c}] }\big)\Big]
\nonumber \\
&\overset{(c)}{=} \mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)
\end{align}
where (a) follows from the assumption that $R_0 < pW_{1} {\rm log}_2 \big(1 + \theta \big)$ always holds; otherwise, it is infeasible to have $\mathbb{P}(R_{1}(r)>R_0)$ greater than zero. (b) follows from the fact that $g_0$ follows an exponential distribution, and (c) follows from the independence of the intra- and inter-cluster interference and the Laplace transforms thereof.
In what follows, we first derive the Laplace transform of interference to get $\mathbb{P}(R_{1}(r)>R_0)$. Then, we formulate the offloading gain maximization problem.
\begin{lemma}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_inter}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$, and $f_U(u|v) = \mathrm{Rice} (u| v, \sigma)$ represents Rice's \ac{PDF} of parameter $\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix A.
\end{proof}
\begin{lemma}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_intra}
\mathscr{L}_{I_{\Phi_c} }(s) = {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where $f_H(h) =\mathrm{Rayleigh}(h,\sqrt{2}\sigma)$ represents Rayleigh's \ac{PDF} with a scale parameter $\sqrt{2}\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix B.
\end{proof}
For the serving distance distribution $f(r)$, since both the typical device and a potential catering device have their locations drawn from a normal distribution with variance $\sigma^2$ around the cluster center, the serving distance has a Rayleigh distribution with parameter $\sqrt{2}\sigma$, given by
\begin{align}
\label{rayleigh}
f(r)= \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}, \quad r>0
\end{align}
From (\ref{LT_inter}), (\ref{LT_intra}), and (\ref{rayleigh}), the offloading gain in (\ref{offloading_gain}) is characterized as
\begin{align}
\label{offloading_gain_1}
\mathbb{P}_o(p,b_i) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}}) \underbrace{\int_{r=0}^{\infty}
\frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)\dd{r}}_{\mathbb{P}(R_1>R_0)},
\end{align}
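Before formulating the optimization problem, we note that the term $\mathbb{P}(R_1>R_0)$ above can be evaluated by direct numerical integration of the two Laplace transforms against the serving-distance pdf. The sketch below does so with SciPy; the Rice and Rayleigh densities match $f_U(u|v)$ and $f_H(h)$ of Lemmas 1 and 2, and all parameter values are illustrative assumptions.
\begin{verbatim}
# Numerical sketch of P(R1 > R0) in the equation above: the serving-distance
# Rayleigh pdf is integrated against the Laplace transforms of Lemmas 1-2.
# All parameter values are illustrative assumptions.
import numpy as np
from scipy import stats
from scipy.integrate import quad

p, n_bar, lambda_p = 0.3, 5, 50e-6   # access prob., mean devices, clusters/m^2
sigma, alpha, theta, P_d = 10.0, 4.0, 1.0, 1.0

def lt_intra(s):                     # Lemma 2, f_H = Rayleigh(sqrt(2)*sigma)
    g = lambda h: s / (s + h**alpha) \
        * stats.rayleigh.pdf(h, scale=np.sqrt(2) * sigma)
    return np.exp(-p * n_bar * quad(g, 0, np.inf)[0])

def lt_inter(s):                     # Lemma 1, f_U(.|v) = Rice(v, sigma)
    phi = lambda v: quad(lambda u: s / (s + u**alpha)
                         * stats.rice.pdf(u, v / sigma, scale=sigma),
                         0, np.inf)[0]
    outer = lambda v: (1 - np.exp(-p * n_bar * phi(v))) * v
    return np.exp(-2 * np.pi * lambda_p * quad(outer, 0, np.inf)[0])

def p_r1_gt_r0():
    f_r = lambda r: r / (2 * sigma**2) * np.exp(-r**2 / (4 * sigma**2))
    s = lambda r: theta * r**alpha / P_d
    return quad(lambda r: f_r(r) * lt_inter(s(r)) * lt_intra(s(r)),
                0, np.inf)[0]

print(p_r1_gt_r0())
\end{verbatim}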
Hence, the offloading gain maximization problem can be formulated as
\begin{align}
\label{optimize_eqn_p}
\textbf{P1:} \quad &\underset{p, b_i}{\text{max}} \quad \mathbb{P}_o(p,b_i) \\
\label{const110}
&\textrm{s.t.}\quad \sum_{i=1}^{N_f} b_i = M, \\
\label{const111}
& b_i \in [ 0, 1], \\
\label{const112}
& p \in [ 0, 1],
\end{align}
where (\ref{const110}) is the device cache size constraint, which is consistent with the illustration of the example in Fig. \ref{prob_cache_example}. Since the offloading gain depends on the access probability $p$, and $p$ appears in a complex exponential term in $\mathbb{P}(R_1>R_0)$, it is hard to analytically characterize (e.g., show concavity of) the objective function or find a tractable expression for the optimal access probability. To tackle this, we propose to solve \textbf{P1} by first finding the optimal $p^*$ that maximizes the probability $\mathbb{P}(R_{1}>R_0)$ over the interval $p \in [ 0, 1]$. Then, the obtained $p^*$ is used to solve for the caching probability $b_i$ in the optimization problem below. Since $p$ and $b_i$ are separable in the structure of \textbf{P1}, it is possible to solve numerically for $p^*$ and then substitute it to obtain $b_i^*$.
\begin{align}
\label{optimize_eqn_b_i}
\textbf{P2:} \quad &\underset{b_i}{\text{max}} \quad \mathbb{P}_o(p^*,b_i) \\
\label{const11}
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
The optimal caching probability is formulated in the next lemma.
\begin{lemma}
$\mathbb{P}_o(p^*,b_i)$ is a concave function w.r.t. $b_i$ and the optimal caching probability $b_i^{*}$ that maximizes the offloading gain is given by
\[
b_{i}^{*}=\left\{
\begin{array}{ll}
1, & v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)\\
0, & v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)\\
\psi(v^{*}), & {\rm otherwise}
\end{array}
\right.
\]
where $\psi(v^{*})$ is the solution of (\ref{psii_offload}) for $b_i^*$, with $v^{*}$ chosen such that $\sum_{i=1}^{N_f} b_i^*=M$.
\end{lemma}
\begin{proof}
Please see Appendix C.
\end{proof}
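Computationally, Lemma 3 has a water-filling flavor: $v^*$ acts as a water level against the decreasing marginal gain $\frac{\partial \mathbb{P}_o}{\partial b_i}$. The sketch below (with a placeholder value standing in for $\mathbb{P}(R_1>R_0)$) finds $v^*$ by bisection on the cache-size constraint.
\begin{verbatim}
# Water-filling solution of Lemma 3 via bisection (placeholder inputs).
import numpy as np
from scipy.optimize import brentq

N_f, M, n_bar, beta, P_cov = 500, 5, 10, 1.0, 0.5  # P_cov ~ P(R1 > R0)
q = np.arange(1, N_f + 1)**(-beta); q /= q.sum()   # Zipf popularities

def grad(b, qi):
    # dP_o/db_i, decreasing in b by concavity (Appendix C)
    return qi + qi*(n_bar*(1-b)*np.exp(-n_bar*b)
                    - (1 - np.exp(-n_bar*b)))*P_cov

def b_of_v(v):
    b = np.empty(N_f)
    for i, qi in enumerate(q):
        if v < grad(1.0, qi):   b[i] = 1.0          # first case of the lemma
        elif v > grad(0.0, qi): b[i] = 0.0          # second case
        else: b[i] = brentq(lambda x: grad(x, qi) - v, 0.0, 1.0)  # psi(v)
    return b

# bisection on v* so that the cache-size constraint sum(b) = M is met
v_star = brentq(lambda v: b_of_v(v).sum() - M,
                grad(1.0, q.min()), grad(0.0, q.max()))
b_star = b_of_v(v_star)
\end{verbatim}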
\section{Energy Consumption}
In this section, we formulate the energy consumption minimization problem for the clustered \ac{D2D} caching network. In fact, significant energy consumption occurs only when content is served via \ac{D2D} or \ac{BS}-to-Device transmission. We denote by $c_{d_i}$ the time it takes to download the $i$-th content from a neighboring device in the same cluster. Considering the size ${S}_i$ of the $i$-th ranked content, $c_{d_i}={S}_i/R_1 $, where $R_1 $ denotes the average rate of the \ac{D2D} communication. Similarly, we have $c_{b_i} = {S}_i/R_2 $ when the $i$-th content is served by the \ac{BS} with average rate $R_2 $. The average energy consumption for downloading files by the devices in the representative cluster is given by
\begin{align}
\label{energy_avrg}
E_{av} = \sum_{k=1}^{\infty} E(b_i|k)\mathbb{P}(n=k)
\end{align}
where $\mathbb{P}(n=k)$ is the probability that there are $k$ devices in the cluster $x_0$, and $E(b_i|k)$ is the energy consumption conditioned on having $k$ devices within the cluster $x_0$, written similarly to \cite{energy_efficiency} as
\begin{equation}
E(b_i|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\label{energy}
\end{equation}
where $\mathbb{P}_{j,i}^d$ and $\mathbb{P}_{j,i}^b$ represent the probabilities that the $j$-th device obtains the $i$-th content from the local cluster, i.e., via \ac{D2D} communication, and from the \ac{BS}, respectively. $P_b$ denotes the \ac{BS} transmission power. Given that there are $k$ devices per cluster, we have $\mathbb{P}_{j,i}^b=(1-b_i)^{k}$ and $\mathbb{P}_{j,i}^d=(1 - b_i)\big(1-(1-b_i)^{k-1}\big)$.
The average rates $R_1$ and $R_2$ are now computed to get a closed-form expression for $E(b_i|k)$.
From equation (\ref{rate_eqn}), we need to obtain the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ and the \ac{BS}-to-Device coverage probability $\mathrm{P_{c_b}}$ to calculate $R_1$ and $R_2$, respectively. Given the number of devices $k$ in the representative cluster, the Laplace transform of the inter-cluster interference is as obtained in (\ref{LT_inter}). However, the intra-cluster interfering devices no longer form a Gaussian \ac{PPP} since the number of devices is conditionally fixed, i.e., no longer a Poisson random number as before. To facilitate the analysis, for every realization $k$, we assume that the intra-cluster interfering devices form a Gaussian \ac{PPP} with intensity function $pkf_Y(y)$.
Such an assumption is needed for tractability and is validated in the numerical section.
From Lemma 2, the intra-cluster Laplace transform conditioning on $k$ can be approximated as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|k) &\approx {\rm exp}\Big(-pk \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\nonumber
\end{align}
and directly, the \ac{D2D} coverage probability is given by
\begin{align}
\label{p_b_d2d}
\mathrm{P_{c_d}} =
\int_{r=0}^{\infty} \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big|k\big)\dd{r}
\end{align}
With the adopted slotted-ALOHA scheme, the access probability $p$ is optimized over the interval $[0,1]$ to maximize $\mathrm{P_{c_d}}$ and, in turn, the \ac{D2D} achievable rate $R_1$. Analogously, under the PPP $\Phi_{bs}$, and based on the nearest \ac{BS} association principle, it is shown in \cite{andrews2011tractable} that the \ac{BS} coverage probability can be expressed as
\begin{equation}
\mathrm{P_{c_b}} =\frac{1}{{}_2 F_1(1,-\delta;1-\delta;-\theta)},
\label{p_b_bs}
\end{equation}
where ${}_2 F_1(.)$ is the Gaussian hypergeometric function and $\delta = 2/\alpha$. Given the coverage probabilities $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ in (\ref{p_b_d2d}) and (\ref{p_b_bs}), respectively, $R_1$ and $R_2 $ can be calculated from (\ref{rate_eqn}), and hence $E(b_i|k)$ is expressed in a closed-form.
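For reference, (\ref{p_b_bs}) is directly computable; the snippet below evaluates it for $\theta=1$ (\SI{0}{\deci\bel}) and $\alpha=4$, and obtains $R_2$ under the rate model $R_2 = \mathrm{P_{c_b}}W_2{\rm log}_2(1+\theta)$ used for the service rates in Section V (the bandwidth $W_2$ is a placeholder).
\begin{verbatim}
# Evaluating the BS coverage probability and the BS average rate (sketch).
import numpy as np
from scipy.special import hyp2f1

theta, alpha, W2 = 1.0, 4.0, 10e6     # 0 dB threshold; W2 is a placeholder
delta = 2/alpha
P_cb = 1.0 / hyp2f1(1, -delta, 1 - delta, -theta)
R2 = P_cb * W2 * np.log2(1 + theta)   # assumed rate model, cf. Section V
\end{verbatim}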
\subsection{Energy Consumption Minimization}
The energy minimization problem can be formulated as
\begin{align}
\label{optimize_eqn1}
&\textbf{P3:} \quad\underset{b_i}{\text{min}} \quad E(b_i|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
In the next lemma, we prove the convexity condition for $E$.
\begin{lemma}
\label{convex_E}
The energy consumption $E(b_i|k)$ is convex if $\frac{P_b}{R_2}>\frac{P_d}{R_1}$.
\end{lemma}
\begin{proof}
We proceed by deriving the Hessian matrix of $E(b_i|k)$ w.r.t. the caching variables $(b_1,\dots,b_{N_f})$, namely, \textbf{H}$_{i,j} = \frac{\partial^2 E(b_i|k)}{\partial b_i \partial b_j}$, $\forall i,j \in \mathcal{F}$. \textbf{H} is a diagonal matrix whose $i$-th diagonal entry is given by $k(k-1) S_i\Big(\frac{P_b}{R_2}-\frac{P_d}{R_1}\Big)q_i(1 - b_i)^{k-2}$.
Since the obtained Hessian matrix is diagonal, it is positive semidefinite (and hence $E(b_i|k)$ is convex) if all the diagonal entries are nonnegative, i.e., when
$\frac{P_b}{R_2}>\frac{P_d}{R_1}$. In practice, it is reasonable to assume that $P_b \gg P_d$; as reported in \cite{ericsson}, the \ac{BS} transmit power is 100-fold the \ac{D2D} power.
\end{proof}
As a result of Lemma \ref{convex_E}, the optimal caching probability can be computed to minimize $E(b_i|k)$.
\begin{lemma}
The optimal caching probability $b_i^{*}$ for the energy minimization problem \textbf{P3} is given by,
\begin{align}
b_i^* = \Bigg[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Bigg]^{+}
\end{align}
where $v^{*}$ satisfies the maximum cache constraint $\sum_{i=1}^{N_f} \Big[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Big]^{+}=M$, and $[x]^+ = \max(x,0)$.
\end{lemma}
\begin{proof}
The proof proceeds in a similar manner to the proof of Lemma 3 and is omitted.
\end{proof}
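The sketch below evaluates Lemma 5 numerically; the popularities, sizes, powers, and rates are placeholders chosen so that $\frac{P_b}{R_2}>\frac{P_d}{R_1}$, and $v^*$ is again found by bisection on the cache-size constraint.
\begin{verbatim}
# Optimal caching probabilities of Lemma 5 (placeholder parameters).
import numpy as np
from scipy.optimize import brentq

N_f, M, k, beta = 500, 5, 10, 1.0
q = np.arange(1, N_f + 1)**(-beta); q /= q.sum()
S = np.full(N_f, 3e6)                    # content sizes [bit] (equal sizes)
P_d, P_b, R1, R2 = 0.2, 20.0, 1e7, 1e7   # so that P_b/R2 > P_d/R1

def b_of_v(v):
    num = v + k**2 * q * S * (P_d/R1)
    den = k * q * S * (P_d/R1 - P_b/R2)  # negative under the convexity cond.
    base = np.clip(num/den, 0.0, None)   # guard the real (k-1)-th root
    return np.maximum(1.0 - base**(1.0/(k - 1)), 0.0)  # the [.]^+ operator

# sum(b_of_v) is nondecreasing in v here, so a sign change can be bracketed
vs = np.linspace(-1e3, 1e3, 20001)
g = np.array([b_of_v(v).sum() - M for v in vs])
j = int(np.argmax(np.sign(g[:-1]) != np.sign(g[1:])))  # first sign change
v_star = brentq(lambda v: b_of_v(v).sum() - M, vs[j], vs[j + 1])
b_star = b_of_v(v_star)
\end{verbatim}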
\begin{proposition} {\rm The optimal caching probability $b_i^*$ in \textbf{P3} is governed by the file size $S_i$ and by the gap between the two power-to-rate ratios $\frac{P_b}{R_2}$ and $\frac{P_d}{R_1}$ of the \ac{BS}-to-Device and \ac{D2D} communication, respectively.}
\end{proposition}
By substituting $b_i^*$ into (\ref{energy_avrg}), the average energy consumption per cluster is obtained. In the remainder of the paper, we study and minimize the weighted average delay per request for the proposed system.
\section{Delay Analysis}
In this section, the delay analysis and minimization are discussed. A joint stochastic geometry and queueing theory model is exploited to study this problem. The delay analysis incorporates the study of a system of spatially interacting queues. To simplify the mathematical analysis, we further consider that only one \ac{D2D} link can be active within a cluster of $k$ devices, where $k$ is fixed. As shown later, such an assumption facilitates the analysis by yielding simple closed-form expressions. We begin by deriving the \ac{D2D} coverage probability under this assumption, which is used later in this section.
\begin{lemma}
\label{coverage_lemma}
The \ac{D2D} coverage probability of the proposed clustered model with one active \ac{D2D} link within a cluster is given by
\begin{align}
\label{v_0}
\mathrm{P_{c_d}} = \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)} ,
\end{align}
where $Z(\theta,\alpha,\sigma) = (\pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2})$.
\end{lemma}
\begin{proof}
The result can be proved by using the displacement theory of the PPP \cite{daley2007introduction}, and then proceeding in a similar manner to Lemmas 1 and 2. The proof is presented in Appendix D for completeness.
\end{proof}
In the following, we first describe the traffic model of the network, and then we formulate the delay minimization problem.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/delay_queue2}
\caption {The traffic model of request arrivals and departures in a given cluster. Two M/G/1 queues, $Q_1$ and $Q_2$, represent the requests served via the \ac{D2D} and \ac{BS}-to-Device communication, respectively.}
\label{delay_queue}
\end{center}
\end{figure}
\subsection{Traffic Model}
We assume that the aggregate request arrival process from the devices in each cluster follows a Poisson arrival process with parameter $\zeta_{tot}$ (requests per time slot). As shown in Fig.~\ref{delay_queue}, the incoming requests are further split according to where they are served from: $\zeta_{1}$ represents the arrival rate of requests served via \ac{D2D} communication, whereas $\zeta_{2}$ is the arrival rate of those served from the BSs. $\zeta_{3} = \zeta_{tot} - \zeta_{1} - \zeta_{2}$ denotes the arrival rate of requests served via the self-cache with zero delay. By the splitting property of Poisson processes, $\zeta_{1}$ and $\zeta_{2}$ are also Poisson arrival processes. Without loss of generality, we assume that the file size has a general distribution $G$ with mean $\overline{S}$ MBytes. Hence, an M/G/1 queuing model is adopted whereby two non-interacting queues, $Q_1$ and $Q_2$, model the traffic in each cluster served via the \ac{D2D} and \ac{BS}-to-Device communication, respectively. Although $Q_1$ and $Q_2$ are non-interacting, as the \ac{D2D} communication is assumed to be out-of-band, these two queues spatially interact with similar queues in other clusters. To recap, $Q_1$ and $Q_2$ are two M/G/1 queues with arrival rates $\zeta_{1}$ and $\zeta_{2}$, and service rates $\mu_1$ and $\mu_2$, respectively.
\subsection{Queue Dynamics}
It is worth highlighting that the two queues $Q_i$, $i \in \{1,2\}$, accumulate requests for files demanded by the cluster members, not the files themselves. First-in first-out (FIFO) scheduling is assumed, where a request that arrives first is scheduled first, either via \ac{D2D} or \ac{BS} communication depending on whether or not the content is cached among the devices. The outcome of FIFO scheduling depends only on the time at which the request arrives at the queue and is irrelevant to the particular device that issues the request. Given the rate $\zeta_{tot}$ of the Poisson arrival process, the arrival rates at the two queues are expressed respectively as
\begin{align}
\zeta_{1} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big), \\
\zeta_{2} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}.
\end{align}
The network operation is depicted in Fig. \ref{delay_queue}, and described in detail below.
\begin{enumerate}
\item Given the memoryless property of the arrival process (Poisson arrival) along with the assumption that the service process is independent of the arrival process,
the number of requests in any queue at a future time only depends upon the current number in the system (at time $t$) and the arrivals or departures that occur within the interval $h$.
\begin{align}
Q_{i}(t+h) = Q_{i}(t) + \Lambda_{i}(h) - M_{i}(h)
\end{align}
where $\Lambda_{i}(h)$ is the number of arrivals in the time interval $(t,t+h)$, whose mean is $\zeta_i$ [sec$^{-1}$], and $M_{i}(h)$ is the number of departures in the time interval $(t,t+h)$, whose mean is $\mu_i = \frac{\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})W_i{\rm log}_2(1+\theta)}{\overline{S}}$ [sec$^{-1}$]. It is worth highlighting that, unlike the spatial-only model studied in the previous sections, the term $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ depends on the traffic dynamics, since a request being served in a given cluster experiences interference only from other clusters that also have requests to serve. What is more noteworthy is that the service time $\tau_i = \frac{1}{\mu_i}$ follows the same distribution as the file size. These aspects will be revisited later in this section.
\item $\Lambda_{i}(h)$ depends only on $h$ because the arrival process is Poisson. $M_{i}(h)$ is $0$ if the service time of the file being served satisfies $\epsilon_1 >h$, $1$ if $\epsilon_1 <h$ and $\epsilon_1 + \epsilon_2>h$, and so on. As the service times $\epsilon_1, \epsilon_2, \dots, \epsilon_n$ are independent, neither $\Lambda_{i}(h)$ nor $M_{i}(h)$ depends on what happened prior to $t$. Thus, $Q_{i}(t+h)$ only depends upon $Q_{i}(t)$ and not on the past history. Hence, the queue-length process is a \ac{CTMC} which obeys the stability conditions in \cite{szpankowski1994stability}.
\end{enumerate}
The following proposition provides the conditions for the stability of the buffers in the sense defined in \cite{szpankowski1994stability}, i.e., $\{Q_{i}\}$ has a limiting distribution as $t \rightarrow \infty$.
\begin{proposition} {\rm The \ac{D2D} and \ac{BS}-to-Device traffic modeling queues are stable, respectively, if and only if}
\begin{align}
\label{stable1}
\zeta_1 < \mu_1 &= \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}} \\
\label{stable2}
\zeta_2 < \mu_2 & =\frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}
\end{align}
\end{proposition}
\begin{proof}
We show sufficiency by proving that (\ref{stable1}) and (\ref{stable2}) guarantee stability in a dominant network, where all queues that have empty buffers make dummy transmissions. The dominant network is a fictitious system that is identical to the original system, except that terminals may choose to transmit even when their respective buffers are empty, in which case they simply transmit a dummy packet. If both systems are started from the same initial state and fed with the same arrivals, then the queues in the fictitious dominant system can never be shorter than the queues in the original system.
Similar to the spatial-only network, in the dominant system the typical receiver sees interference from all other clusters whether they have requests to serve or not. This dominant-system approach yields $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ equal to $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ for the \ac{D2D} and \ac{BS}-to-Device communication, respectively. Also, the obtained delay is an upper bound on the actual delay of the system. The necessity of (\ref{stable1}) and (\ref{stable2}) is shown as follows: if $\zeta_i>\mu_i$, then, by Loynes' theorem \cite{loynes1962stability}, it follows that $\lim_{t\rightarrow \infty}Q_i(t)=\infty$ (a.s.) for all queues in the dominant network.
\end{proof}
Next, we conduct the analysis for the dominant system whose parameters are as follows. The content size has an exponential distribution with mean $\overline{S}$ [MBytes]. The service times also obey the same exponential distribution with means $\tau_1 = \frac{\overline{S}}{R_1}$ [second] and $\tau_2 = \frac{\overline{S}}{R_2}$ [second]. The rates $R_1$ and $R_2$ are calculated from (\ref{rate_eqn}), where $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ are given by (\ref{v_0}) and (\ref{p_b_bs}), respectively. Accordingly, $Q_1$ and $Q_2$ are two continuous-time independent (non-interacting) M/M/1 queues with service rates $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$ and $\mu_2 = \frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}$ [sec$^{-1}$], respectively.
\begin{proposition} {\rm The mean queue length $L_i$ of the $i$-th queue is given by}
\begin{align}
\label{queue_len}
L_i &= \rho_i + \frac{\rho_i^2}{1 - \rho_i} = \frac{\rho_i}{1-\rho_i},
\end{align}
\end{proposition}
\begin{proof}
We can easily calculate $L_i$ by observing that $Q_i$ are continuous time M/M/1 queues with arrival rates $\zeta_i$, service rates $\mu_i$, and traffic intensities $\rho_i = \frac{\zeta_i}{\mu_i}$. Then, by applying the Pollaczek-Khinchine formula \cite{Kleinrock}, $L_i$ is directly obtained.
\end{proof}
The average delay per request for each queue is calculated from
\begin{align}
D_1 &= \frac{L_1}{\zeta_1}= \frac{1}{\mu_1 - \zeta_1} = \frac{1}{W_1\mathcal{O}_{1} - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} \\
D_2 &= \frac{L_2}{\zeta_2}=\frac{1}{\mu_2 - \zeta_2} = \frac{1}{W_2\mathcal{O}_{2} - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
where $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, $\mathcal{O}_{2}= \frac{\mathrm{P_{c_b}} {\rm log}_2(1+\theta)}{\overline{S}}$ for notational simplicity. The weighted average delay $D$ is then expressed as
\begin{align}
D&= \frac{\zeta_{1}D_1 + \zeta_{2}D_2}{\zeta_{tot}} \nonumber \\
&= \frac{\sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)}{ \mathcal{O}_{1}W_1 - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} + \frac{\sum_{i=1}^{N_f}q_i (1-b_i)^{k}}{ \mathcal{O}_{2}W_2 - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
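For concreteness, the sketch below evaluates $D(b_i,W_1)$ with $\mathrm{P_{c_d}}$ from (\ref{v_0}) and $\mathrm{P_{c_b}}$ from (\ref{p_b_bs}); the total request rate $\zeta_{tot}$ and mean size $\overline{S}$ are placeholders, and operating points violating the stability conditions are penalized with a large constant.
\begin{verbatim}
# Weighted average delay D(b, W1) of the two M/M/1 queues (illustrative).
import numpy as np
from scipy.special import hyp2f1, gamma

N_f, M, k, beta = 500, 5, 10, 1.0
theta, alpha, sigma, lam_p = 1.0, 4.0, 10.0, 50e-6
W, S_bar, zeta_tot = 20e6, 3e6, 2.0          # [Hz], [bit], requests/s
q = np.arange(1, N_f + 1)**(-beta); q /= q.sum()

Z = np.pi*lam_p*theta**(2/alpha)*gamma(1 + 2/alpha)*gamma(1 - 2/alpha) \
    + 1/(4*sigma**2)
P_cd = 1/(4*sigma**2*Z)                      # Lemma 6
P_cb = 1/hyp2f1(1, -2/alpha, 1 - 2/alpha, -theta)
O1 = P_cd*np.log2(1 + theta)/S_bar
O2 = P_cb*np.log2(1 + theta)/S_bar

def delay(b, W1):
    z1 = zeta_tot*np.sum(q*((1 - b) - (1 - b)**k))   # D2D arrival rate
    z2 = zeta_tot*np.sum(q*(1 - b)**k)               # BS arrival rate
    if O1*W1 <= z1 or O2*(W - W1) <= z2:             # stability conditions
        return 1e9                                   # penalty: infeasible
    return (z1/(O1*W1 - z1) + z2/(O2*(W - W1) - z2)) / zeta_tot
\end{verbatim}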
One important insight from the delay equation is that the caching probability $b_i$ controls the arrival rates $\zeta_{1}$ and $\zeta_{2}$ while the bandwidth determines the service rates $\mu_1$ and $\mu_2$. Therefore, it turns out to be of paramount importance to jointly optimize $b_i$ and $W_1$ to minimize the average delay. Subsequently, we formulate the weighted average delay joint caching and bandwidth minimization problem as
\begin{align}
\label{optimize_eqn3}
\textbf{P4:} \quad \quad&\underset{b_i,{\rm W}_1}{\text{min}} \quad D(b_i,W_1) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber \\
& 0 \leq W_1 \leq W.
\end{align}
Although the objective function of \textbf{P4} is convex w.r.t. $W_1$, as derived below, the coupling of the optimization variables $b_i$ and $W_1$ makes \textbf{P4} a non-convex optimization problem. Therefore, \textbf{P4} cannot be solved directly using standard convex optimization techniques.
By applying the \ac{BCD} optimization technique, \textbf{P4} can be solved in an iterative manner as follows. First, for a given caching probability $b_i$, we solve the bandwidth allocation subproblem. Afterwards, the obtained optimal bandwidth is used to update $b_i$. The optimal bandwidth for the bandwidth allocation subproblem is given in the next lemma.
\begin{lemma}
The objective function of \textbf{P4} in (\ref{optimize_eqn3}) is convex w.r.t. $W_1$, and the optimal bandwidth allocation to the \ac{D2D} communication is given by
\begin{align}
W_1^* = \frac{\zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k}) +\varpi \big(\mathcal{O}_{2}W - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)}{\mathcal{O}_{1}+\varpi\mathcal{O}_{2}},
\end{align}
where $\overline{b}_i = 1 - b_i$ and $\varpi=\sqrt{\frac{\mathcal{O}_{1}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})}{\mathcal{O}_{2} \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}}}$.
\end{lemma}
\begin{proof}
$D(b_i,W_1|k)$ can be written as
\begin{align}
\label{optimize_eqn3_p1}
\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-1} + \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-1}, \nonumber
\end{align}
The second derivative $\frac{\partial^2 D(b_i,W_1|k)}{\partial W_1^2}$ is hence given by
\begin{align}
2\mathcal{O}_{1}^2\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-3} + 2\mathcal{O}_{2}^2\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-3}, \nonumber
\end{align}
The stability condition requires that $\mathcal{O}_{1}W_1 > \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})$ and $\mathcal{O}_{2}W_2 > \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}$. Also, $\overline{b}_i \geq \overline{b}_i^{k}$ by definition. Hence, $\frac{\partial^2 D(b_i,W_1|k)}{\partial W_1^2} > 0$, and the objective function is a convex function of $W_1$. The optimal bandwidth allocation can be obtained from the Karush-Kuhn-Tucker (KKT) conditions similar to problems \textbf{P2} and \textbf{P3}, with the details omitted for brevity.
\end{proof}
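A direct transcription of this closed form (reusing $q$, $k$, $\mathcal{O}_1$, $\mathcal{O}_2$, $W$, and $\zeta_{tot}$ from the previous sketch) reads:
\begin{verbatim}
# Closed-form D2D bandwidth of Lemma 7 (reuses q, k, O1, O2, W, zeta_tot).
def W1_star(b):
    bb = 1 - b
    A = np.sum(q*(bb - bb**k))            # D2D load term
    B = np.sum(q*bb**k)                   # BS load term
    varpi = np.sqrt(O1*A/(O2*B))
    return (zeta_tot*A + varpi*(O2*W - zeta_tot*B)) / (O1 + varpi*O2)
\end{verbatim}
One can check numerically that, for a fixed feasible $b_i$, this expression matches the minimizer of $D(b_i,W_1)$ over $W_1$ whenever that minimizer is interior to $[0,W]$.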
Given $W_1^*$ from the bandwidth allocation subproblem, the caching probability subproblem can be written as
\begin{align}
\textbf{P5:} \quad \quad&\underset{b_i}{\text{min}} \quad D(b_i,W_1^*) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111})
\end{align}
The caching probability subproblem \textbf{P5} is a sum of two fractional functions, where the first fraction has the form of a concave over a convex function while the second has the form of a convex over a concave function. The first structure, i.e., concave over convex, renders solving this problem via fractional programming very challenging.\footnote{Dinkelbach's transform can be used for the minimization of fractional functions that have the form of convex over concave functions, whereby an equivalent problem is solved with the objective expressed as the difference between the convex (numerator) and concave (denominator) functions \cite{schaible1973fractional}.} Hence, we use the successive convex approximation technique to solve for $b_i$ given the optimal bandwidth $W_1^*$. This procedure is repeated until the value of \textbf{P4}'s objective function converges to a pre-specified accuracy.
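The sketch below illustrates one possible realization of this iteration; a generic constrained solver (SLSQP) is used here as a simple stand-in for the successive convex approximation step on \textbf{P5}, and it reuses \texttt{delay} and \texttt{W1\_star} from the previous sketches.
\begin{verbatim}
# Illustrative BCD loop for P4 (SLSQP replaces the SCA step on P5).
from scipy.optimize import minimize

b = np.full(N_f, M/N_f)                   # feasible start: sum(b) = M
prev = np.inf
for _ in range(20):
    W1 = W1_star(b)                       # bandwidth subproblem (Lemma 7)
    res = minimize(lambda x: delay(x, W1), b, method="SLSQP",
                   bounds=[(0.0, 1.0)]*N_f,
                   constraints=[{"type": "eq",
                                 "fun": lambda x: np.sum(x) - M}])
    b = res.x                             # updated caching probabilities
    cur = delay(b, W1_star(b))
    if abs(prev - cur) < 1e-9:            # stop at a pre-specified accuracy
        break
    prev = cur
\end{verbatim}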
\section{Numerical Results}
\begin{table}[ht]
\caption{Simulation Parameters}
\centering
\begin{tabular}{c c c}
\hline\hline
Description & Parameter & Value \\ [0.5ex]
\hline
Network bandwidth & W & \SI{20}{\mega\hertz} \\
\ac{BS} transmission power & $P_b$ & \SI{43}{\deci\bel\of{m}} \\
\ac{D2D} transmission power & $P_d$ & \SI{23}{\deci\bel\of{m}} \\
Displacement standard deviation & $\sigma$ & \SI{10}{\metre} \\
Popularity index&$\beta$&1\\
Path loss exponent&$\alpha$&4\\
Library size&$N_f$&500 files\\
Cache size per device&$M$&5 files\\
Mean number of devices per cluster&$\overline{n}$&10\\
Density of $\Phi_p$&$\lambda_{p}$&50 clusters/km$^2$ \\
Average content size&$\overline{S}$&\SI{3}{Mbits} \\
$\mathrm{SIR}$ threshold&$\theta$&\SI{0}{\deci\bel}\\
\hline
\end{tabular}
\label{ch3:table:sim-parameter}
\end{table}
At first, we validate the developed mathematical model via Monte Carlo simulations. Then we benchmark the proposed caching scheme against conventional caching schemes. Unless otherwise stated, the network parameters are selected as shown in Table \ref{ch3:table:sim-parameter}.
\subsection{Offloading Gain Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/prob_r_geq_r0.eps}
\caption {The probability that the achievable rate is greater than a threshold $R_0$ versus standard deviation $\sigma$.}
\label{prob_r_geq_r0}
\end{center}
\end{figure}
In this subsection, we present the offloading gain performance for the proposed caching model.
In Fig.~\ref{prob_r_geq_r0}, we verify the accuracy of the analytical results for the probability $\mathbb{P}(R_1>R_0)$. The theoretical and simulated results are plotted together, and they are consistent. We can observe that the probability $\mathbb{P}(R_1>R_0)$ decreases monotonically with the increase of $\sigma$. This is because as $\sigma$ increases, the serving distance increases and the inter-cluster interfering distance between the out-of-cluster interferers and the typical device decreases, and equivalently, the $\mathrm{SIR}$ decreases. It is also shown that $\mathbb{P}(R_1>R_0)$ decreases with the $\mathrm{SIR}$ threshold $\theta$ as the channel becomes more prone to be in outage when increasing the $\mathrm{SIR}$ threshold $\theta$.
In Fig.~\ref{prob_r_geq_r0_vs_p}, we plot the probability $\mathbb{P}(R_1>R_0)$ against the channel access probability $p$ at different thresholds $R_0$. As evident from the plot, we see that there is an optimal $p^*$; before it the probability $\mathbb{P}(R_1>R_0)$ tends to increase since the channel access probability increases, and beyond it, the probability $\mathbb{P}(R_1>R_0)$ decreases monotonically due to the effect of more interferers accessing the channel. It is quite natural that $\mathbb{P}(R_1>R_0)$ is higher when decreasing the rate threshold $R_0$. Also, we observe that the optimal access probability $p^*$ is smaller when $R_0$ decreases. This implies that a transmitting device can maximize the probability $\mathbb{P}(R_1>R_0)$ at the receiver when $R_0$ is smaller by accessing the channel less frequently, and correspondingly, receiving lower interference.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/prob_r_geq_r0_vs_p.eps}
\caption {The probability that the achievable rate is greater than a threshold $R_0$ versus the access probability $p$.}
\label{prob_r_geq_r0_vs_p}
\end{center}
\end{figure}
\begin{figure*}
\centering
\subfigure[$p=p^*$. ]{\includegraphics[width=3.0in]{Figures/ch3/histogram_b_i_p_star.eps}
\label{histogram_b_i_p_star}}
\subfigure[$p \neq p^*$.]{\includegraphics[width=3.0in]{Figures/ch3/histogram_b_i_p_leq_p_star.eps}
\label{histogram_b_i_p_leq_p_star}}
\caption{Histogram of the caching probability $b_i$ when (a) $p=p^*$ and (b) $p \neq p^*$.}
\label{histogram_b_i}
\end{figure*}
To show the effect of $p$ on the caching probability, in Fig.~\ref{histogram_b_i}, we plot the histogram of the optimal caching probability at different values of $p$, where $p=p^*$ in Fig.~\ref{histogram_b_i_p_star} and $p\neq p^*$ in Fig.~\ref{histogram_b_i_p_leq_p_star}. It is clear from the histograms that the optimal caching probability $b_i$ tends to be more skewed when $p\neq p^*$, i.e., when $\mathbb{P}(R_1>R_0)$ decreases. This shows that file sharing is more difficult when $p$ is not optimal. For example, if $p<p^*$, the system is too conservative owing to the small access probability. However, for $p>p^*$, the outage probability is high due to the aggressive interference. In such a low coverage probability regime, each device tends to cache the most popular files, leading to fewer opportunities for content transfer between devices.
\begin{figure*}
\centering
\subfigure[The offloading probability versus the popularity of files $\beta$ at different thresholds $R_0$. ]{\includegraphics[width=3.0in]{Figures/ch3/offloading_gain_vs_beta.eps}
\label{offloading_gain_vs_beta_R_0}}
\subfigure[The offloading probability versus the popularity of files $\beta$ under different caching schemes (PC, Zipf, CPF).]{\includegraphics[width=3.0in]{Figures/ch3/offloading_prob_cach_compare.eps}
\label{offloading_prob_cach_compare}}
\caption{The offloading probability versus the popularity of files $\beta$.}
\label{offloading_gain_vs_beta}
\end{figure*}
Last but not least, Fig.~\ref{offloading_gain_vs_beta} manifests the prominent effect of the files' popularity on the offloading gain. In Fig.~\ref{offloading_gain_vs_beta_R_0}, we plot the offloading gain against $\beta$ at different rate thresholds $R_0$. We note that the offloading gain monotonically increases with $\beta$, since fewer files are frequently requested and these files can be entirely cached among the cluster devices. Also, we see that the offloading gain decreases with the increase of $R_0$, since the probability $\mathbb{P}(R_1>R_0)$ decreases with $R_0$. In Fig.~\ref{offloading_prob_cach_compare}, we compare the offloading gain of three different caching schemes, namely, the proposed \ac{PC}, Zipf's caching (Zipf), and \ac{CPF}. We can see that the offloading gain under the \ac{PC} scheme attains the best performance compared to the other schemes. Also, we note that both the \ac{PC} and Zipf schemes achieve the same offloading gain when $\beta=0$ owing to the uniformity of content popularity.
\subsection{Energy Consumption Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/energy_vs_beta5.eps}
\caption {Normalized energy consumption versus popularity exponent $\beta$.}
\label{energy_vs_beta}
\end{center}
\end{figure}
This part presents the energy consumption results.
Fig.~\ref{energy_vs_beta} shows the energy consumption, normalized to the mean number of devices per cluster, versus $\beta$ under different caching schemes, namely, \ac{PC}, Zipf, \ac{CPF}, and \ac{RC}. We can see that the minimized energy under the proposed \ac{PC} scheme attains the best performance compared to the other schemes. Also, it is clear that, except for \ac{RC}, the consumed energy decreases with $\beta$. This can be justified by the fact that as $\beta$ increases, fewer files are frequently requested, and these files are more likely to be cached among the devices under the \ac{PC}, \ac{CPF}, and Zipf caching schemes. These few files are therefore downloadable from the devices via low-power \ac{D2D} communication. In the \ac{RC} scheme, files are uniformly chosen for caching independently of their popularity.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/energy_vs_n4.eps}
\caption {Normalized energy consumption versus the number of devices per cluster.}
\label{energy_vs_n}
\end{center}
\end{figure}
We plot the normalized energy consumption per device versus the mean number of devices per cluster in Fig.~\ref{energy_vs_n}. First, we see that the normalized energy consumption decreases with the number of devices. As the number of devices per cluster increases, it is more probable to obtain requested files via low-power \ac{D2D} communication. When the number of devices per cluster is relatively large, the normalized energy consumption tends to flatten as most of the content becomes cached at the cluster devices.
\subsection{Delay Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/delay_compare.eps}
\caption {Weighted average delay versus the popularity exponent $\beta$.}
\label{delay_compare}
\end{center}
\end{figure}
In Fig.~\ref{delay_compare}, we compare the average delay of three different caching schemes \ac{PC}, Zipf, and \ac{CPF}. We can see that the jointly minimized average delay under the \ac{PC} scheme attains the best performance compared to the other caching schemes. Also, we see that, in general, the average delay monotonically decreases with $\beta$, since fewer files account for most of the demand. Fig.~\ref{BW_compare} manifests the effect of the files' popularity $\beta$ on the allocated bandwidth. It is shown that the optimal \ac{D2D} allocated bandwidth $W_1^*$ keeps increasing with $\beta$. This can be interpreted as follows. When $\beta$ increases, fewer files become highly demanded, and these files can be entirely cached among the devices. To cope with the larger number of requests served via \ac{D2D} communication, the \ac{D2D} allocated bandwidth needs to be increased.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/BW_compare1.eps}
\caption {Normalized bandwidth allocation versus the popularity exponent $\beta$.}
\label{BW_compare}
\end{center}
\end{figure}
\section{Conclusion}
In this work, we conduct a comprehensive analysis of joint communication and caching for a clustered \ac{D2D} network with random probabilistic caching incorporated at the devices. We first maximize the offloading gain of the proposed system by jointly optimizing the channel access and caching probability. We solve for the channel access probability numerically, and the optimal caching probability is then characterized. We show that deviating from the optimal access probability $p^*$ makes file sharing more difficult. More precisely, the system is too conservative for small access probabilities, while the interference is too aggressive for larger access probabilities. Then, we minimize the energy consumption of the proposed clustered \ac{D2D} network. We formulate the energy minimization problem, show that it is convex, and obtain the optimal caching probability. Finally, we adopt a queuing model for the devices' traffic within each cluster to investigate the network average delay. Two M/G/1 queues are employed to model the \ac{D2D} and \ac{BS}-to-Device communications. We then derive an expression for the weighted average delay per request. We observe that the average delay depends on the caching probability and the allocated bandwidth, which control the arrival rates and service rates of the two modeling queues, respectively. Therefore, we minimize the per-request weighted average delay by jointly optimizing the bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the \ac{BCD} optimization technique, the joint minimization problem is solved in an iterative manner. Results show roughly up to $10\%$, $17\%$, and $140\%$ improvement gains in the offloading gain, energy consumption, and average delay, respectively, compared to the Zipf caching technique.
\begin{appendices}
\section{Proof of Lemma 1}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$, evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$, can be computed as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= \mathbb{E} \Bigg[e^{-s \sum_{\Phi_p^{!}} \sum_{y \in \mathcal{B}^p} g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p,g_{y_{x}}} \prod_{y \in \mathcal{B}^p} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p} \prod_{y \in \mathcal{B}^p} \mathbb{E}_{g_{y_{x}}} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(a)}{=} \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p} \prod_{y \in \mathcal{B}^p} \frac{1}{1+s \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(b)}{=} \mathbb{E}_{\Phi_p} \prod_{\Phi_p^{!}} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big)\Big)\dd{x}\Bigg) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z\rVert^{-\alpha}}\Big)f_Y(z-x)\dd{z}\Big)\Big)\dd{x}\Bigg) \nonumber
\\
&\overset{(e)}{=} {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{u=0}^{\infty}\Big(1 - \frac{1}{1+s u^{-\alpha}}\Big)f_U(u|v)\dd{u}\Big)\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\big(-p\overline{n} \int_{u=0}^{\infty}
\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}\big)\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$; (a) follows from the Rayleigh fading assumption, (b)
follows from the probability generating functional (PGFL) of the PPP $\Phi_{cp}$, (c) follows from the PGFL of the parent PPP $\Phi_p$, (d) follows from the change of variables $z = x + y$, and (e) follows from converting Cartesian to polar coordinates. Hence, Lemma 1 is proven.
\section {Proof of Lemma 2}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$, evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$, is written as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &= \mathbb{E} \Bigg[e^{-s \sum_{y \in \mathcal{A}^p} g_{y} \lVert x_0 + y\rVert^{-\alpha}} \Bigg] \nonumber
\\
&= \mathbb{E}_{\mathcal{A}^p,g_{y}} \prod_{y \in\mathcal{A}^p} e^{-s g_{y} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&= \mathbb{E}_{\mathcal{A}^p} \prod_{y \in\mathcal{A}^p} \mathbb{E}_{g_{y}} e^{-s g_{y} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&\overset{(a)}{=} \mathbb{E}_{\mathcal{A}^p} \prod_{y \in\mathcal{A}^p} \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}
\nonumber
\\
&\overset{(b)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}\Big)f_{Y}(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z_0\rVert^{-\alpha}}\Big)f_{Y}(z_0-x_0)\dd{z_0}\Big) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\Big(1 - \frac{1}{1+s h^{-\alpha}}\Big)f_H(h|v_0)\dd{h}\Big) \nonumber
\\
&= {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h|v_0)\dd{h}\Big) \nonumber
\\
\mathscr{L}_{I_{\Phi_c} }(s) &\approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where (a) follows from the Rayleigh fading assumption, (b) follows from the PGFL of the PPP $\Phi_c^p$, (c) follows from the change of variables $z_0 = x_0 + y$, (d) follows from converting Cartesian to polar coordinates, and the approximation comes from neglecting the correlation of the intra-cluster interfering distances, i.e., the common part $v_0$, as in \cite{clustered_twc}. Hence, Lemma 2 is proven.
\section {Proof of Lemma 3}
First, to prove concavity, we proceed as follows.
\begin{align}
\frac{\partial \mathbb{P}_o}{\partial b_i} &= q_i + q_i\big(\overline{n}(1-b_i)e^{-\overline{n}b_i}-(1-e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0)\nonumber \\
\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2} &= -q_i\big(\overline{n}e^{-\overline{n}b_i} + \overline{n}^2(1-b_i)e^{-\overline{n}b_i} + \overline{n}e^{-\overline{n}b_i}\big)\mathbb{P}(R_1>R_0)
\end{align}
It is clear that the second derivative $\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2}$ is negative, while the mixed partial derivatives vanish. Hence, the Hessian matrix \textbf{H}$_{i,j}$ of $\mathbb{P}_o(p^*,b_i)$ w.r.t. $b_i$ is diagonal with negative entries, i.e., negative semidefinite, and the function $\mathbb{P}_o(p^*,b_i)$ is concave with respect to $b_i$. Also, the constraints are linear, so the KKT conditions are necessary and sufficient for optimality. The Lagrangian function and the KKT conditions are then employed to solve \textbf{P2}.
The Lagrangian function of \textbf{P2} is given by
\begin{align}
\mathcal{L}(b_i,w_i,\mu_i,v) =& \sum_{i=1}^{N_f} q_i b_i + q_i(1- b_i)(1-e^{-b_i\overline{n}})\mathbb{P}(R_1>R_0) \nonumber \\
&+ v(M-\sum_{i=1}^{N_f} b_i) - \sum_{i=1}^{N_f} w_i (b_i-1) + \sum_{i=1}^{N_f} \mu_i b_i
\end{align}
where $v$, $w_i$, and $\mu_i$ are the Lagrange multipliers associated with the equality constraint and the inequality constraints $b_i \leq 1$ and $b_i \geq 0$, respectively. Now, the optimality conditions are written as
\begin{align}
\label{grad}
\grad_{b_i} \mathcal{L}(b_i^*,w_i^*,\mu_i^*,v^*) = q_i + q_i&\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*}-(1-e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0) -v^* - w_i^* +\mu_i^*= 0 \\
&w_i^* \geq 0 \\
&\mu_i^* \geq 0 \\
&w_i^* (b_i^* - 1) =0 \\
&\mu_i^* b_i^* = 0\\
&(M-\sum_{i=1}^{N_f} b_i^*) = 0
\end{align}
\begin{enumerate}
\item $w_i^* > 0$: We have $b_i^* = 1$, $\mu_i^*=0$, and
\begin{align}
&q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)= v^* + w_i^* \nonumber \\
\label{cond1_offload}
&v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)
\end{align}
\item $\mu_i^* > 0$: We have $b_i^* = 0$ and $w_i^*=0$, and
\begin{align}
& q_i + \overline{n}q_i\mathbb{P}(R_1>R_0) = v^* - \mu_i^* \nonumber \\
\label{cond2_offload}
&v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)
\end{align}
\item $0 <b_i^*<1$: We have $w_i^*=\mu_i^*=0$, and
\begin{align}
\label{psii_offload}
v^{*} = q_i + q_i\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*} - (1 - e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0)
\end{align}
\end{enumerate}
By combining (\ref{cond1_offload}), (\ref{cond2_offload}), and (\ref{psii_offload}), with the fact that $\sum_{i=1}^{N_f} b_i^*=M$, Lemma 3 is proven.
\section {Proof of Lemma 6}
Under the assumption of one active \ac{D2D} link within a cluster, there is no intra-cluster interference. Also, the Laplace transform of the inter-cluster interference is similar to that of a PPP \cite{andrews2011tractable} whose density is the same as that of the parent PPP. In fact, this is true according to the displacement theory of the PPP \cite{daley2007introduction}, where each interferer is a point of a PPP that is displaced randomly and independently of all other points. For completeness, we prove it here. Starting from the third line of the proof of Lemma 1, we get
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &\overset{(a)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \int_{v=0}^{\infty}\mathbb{E}_{u|v}\Big[1 -
e^{-s P_d g_{u} u^{-\alpha}} \Big]v\dd{v}\Bigg), \nonumber \\
&= \text{exp}\Big(-2\pi\lambda_p \mathbb{E}_{g_{u}} \big[\int_{v=0}^{\infty}\int_{u=0}^{\infty}\big(1 - e^{-s P_d g_{u} u^{-\alpha}} \big)f_U(u|v)\dd{u}v\dd{v}\big]\Big) \nonumber \\
\label{prelaplace}
&\overset{(b)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \underbrace{\int_{v=0}^{\infty}v\dd{v} - \int_{v=0}^{\infty}\int_{u=0}^{\infty} e^{-s P_d g_{u} u^{-\alpha}} f_{U}(u|v)\dd{u} v \dd{v}}_{\mathcal{R}(s,\alpha)}\Bigg)
\end{align}
where (a) follows from the PGFL of the parent PPP \cite{andrews2011tractable}, and (b) follows from $\int_{u=0}^{\infty} f_{U}(u|v)\dd{u} =1$. Now, we proceed by calculating the integrands of $\mathcal{R}(s,\alpha)$ as follows.
\begin{align}
\mathcal{R}(s,\alpha)&\overset{(c)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}
\int_{v=0}^{\infty} f_{U}(u|v)v \dd{v}\dd{u}\nonumber \\
&\overset{(d)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}u\dd{u} \nonumber \\
&\overset{(e)}{=} \int_{u=0}^{\infty}(1 - e^{-s P_d g_{u} u^{-\alpha}})u\dd{u} \nonumber \\
&\overset{(f)}{=} \frac{(s P_d g_{u})^{2/\alpha}}{\alpha} \int_{t=0}^{\infty}(1 - e^{-t})t^{-1-\frac{2}{\alpha}}\dd{t} \nonumber \\
\label{laplaceppp1}
&\overset{(g)}{=} \frac{(s P_d)^{2/\alpha}}{2} g_{u}^{2/\alpha} \Gamma(1 - 2/\alpha),
\end{align}
where (c) follows from changing the order of integration, (d) follows from $ \int_{v=0}^{\infty} f_{U}(u|v)v\dd{v} = u$, (e) follows from changing the dummy variable $v$ to $u$, (f) follows from the change of variables $t=s P_d g_{u}u^{-\alpha}$, and (g) follows from solving the integral in (f) by parts. Substituting the obtained value of $\mathcal{R}(s,\alpha)$ into (\ref{prelaplace}), and taking the expectation over the exponential random variable $g_u$, with the fact that $\mathbb{E}_{g_{u}} [g_{u}^{2/\alpha}] = \Gamma(1 + 2/\alpha)$, we get
\begin{align}
\label{laplace_trans}
\mathscr{L}_{I_{\Phi_p^{!}}} (s)&= {\rm exp}\Big(-\pi\lambda_p (sP_d )^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)\Big).
\end{align}
Substituting this expression with the distance \ac{PDF} $f_R(r)$ into the coverage probability equation yields
\begin{align}
\mathrm{P_{c_d}} &=\int_{r=0}^{\infty}
{\rm e}^{-\pi\lambda_p (sP_d)^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}\frac{r}{2\sigma^2}{\rm e}^{\frac{-r^2}{4\sigma^2}} \dd{r} \nonumber \\
&\overset{(h)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-\pi\lambda_p \theta^{2/\alpha}r^{2} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}{\rm e}^{\frac{-r^2}{4\sigma^2}} \dd{r} \nonumber \\
&\overset{(i)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-r^2Z(\theta,\alpha,\sigma)} \dd{r} \nonumber \\
&\overset{}{=} \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)}
\end{align}
where (h) comes from the substitution $s = \frac{\theta r^{\alpha}}{P_d}$, and (i) follows from the definition $Z(\theta,\alpha,\sigma) = \pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2}$.
\end{appendices}
\bibliographystyle{IEEEtran}
\section{Introduction}
Caching at mobile devices significantly improves system performance by facilitating \ac{D2D} communications, which enhances the spectrum efficiency and alleviates the heavy burden on backhaul links \cite{Femtocaching}. Modeling the cache-enabled heterogeneous networks, including \ac{SBS} and mobile devices, follows two main directions in the literature. The first line of work focuses on fundamental throughput scaling results by assuming a simple protocol channel model \cite{Femtocaching,golrezaei2014base,8412262}, known as the protocol model, where two devices can communicate only if they are within a certain distance. The second line of work, referred to as the physical interference model, considers a more realistic model for the underlying physical layer \cite{andreev2015analyzing,cache_schedule}. In the following, we review some of the works relevant to the second line, focusing mainly on the \ac{EE} and delay analysis of wireless caching networks.
The physical interference model is based on the fundamental \ac{SIR} metric, and therefore, is applicable to any wireless communication system. Modeling devices' locations as a \ac{PPP} is widely employed in the literature, especially in the wireless caching area \cite{andreev2015analyzing,cache_schedule,energy_efficiency,ee_BS,hajri2018energy}. However, a realistic model for \ac{D2D} caching networks requires that a given device typically has multiple proximate devices, any of which can potentially act as a serving device. This deployment is known as clustered device deployment, which can be characterized by cluster processes \cite{haenggi2012stochastic}. Unlike the popular \ac{PPP} approach, the authors in \cite{clustered_twc,clustered_tcom,8070464} developed a stochastic geometry based model to characterize the performance of content placement in the clustered \ac{D2D} network. In \cite{clustered_twc}, the authors discuss two strategies of content placement in a \ac{PCP} deployment: first, when each device randomly chooses its serving device from its local cluster, and second, when each device connects to its $k$-th closest transmitting device from its local cluster. The authors characterize the optimal number of \ac{D2D} transmitters that must be simultaneously activated in each cluster to maximize the area spectral efficiency. The performance of cluster-centric content placement is characterized in \cite{clustered_tcom}, where the content of interest in each cluster is cached closer to the cluster center, such that the collective performance of all the devices in each cluster is optimized. Inspired by the Matern hard-core point process, which captures pairwise interactions between nodes, the authors in \cite{8070464} devised a novel spatially correlated caching strategy called \ac{HCP} such that the \ac{D2D} devices caching the same content are never closer to each other than the exclusion radius.
Energy efficiency in wireless caching networks is widely studied in the literature \cite{energy_efficiency,ee_BS,hajri2018energy}.
For example, an optimal caching problem is formulated in \cite{energy_efficiency} to minimize the energy consumption of a wireless network. The authors consider a cooperative wireless caching network where relay nodes cooperate with the devices to cache the most popular files in order to minimize energy consumption. In \cite{ee_BS}, the authors investigate how caching at BSs can improve the \ac{EE} of wireless access networks. The condition under which \ac{EE} can benefit from caching is characterized, and the optimal cache capacity that maximizes the network \ac{EE} is found. It is shown that the \ac{EE} benefit from caching depends on content popularity, backhaul capacity, and interference level.
The authors in \cite{hajri2018energy} exploit the spatial repartitions of devices and the correlation in their content popularity profiles to improve the achievable EE. The \ac{EE} optimization problem is decoupled into two related subproblems, the first one addresses the issue of content popularity modeling, and the second subproblem investigates the impact of exploiting the spatial repartitions of devices. It is shown that the small base station allocation algorithm improves the energy efficiency and hit probability. However, the problem of \ac{EE} for \ac{D2D} based caching is not yet addressed in the literature.
Recently, the joint optimization of delay and energy in wireless caching has been conducted; see, for instance, \cite{wu2018energy,huang2018energy,jiang2018energy,yang2018cache}. The authors in \cite{wu2018energy} jointly optimize the delay and energy in a cache-enabled dense small cell network. The authors formulate the energy-delay optimization problem as a mixed integer programming problem, where file placement, device association to the small cells, and power control are jointly considered. To model the energy consumption and end-to-end file delivery-delay tradeoff, a utility function linearly combining these two metrics is used as the objective function of the optimization problem. An efficient algorithm is proposed to approach the optimal association and power solution, which can achieve the optimal tradeoff between energy consumption and end-to-end file delivery delay. In \cite{huang2018energy}, the authors showed that with caching, the energy consumption can be reduced by extending the transmission time. However, caching may incur wasted energy if the device never needs the cached content. Based on the random content request delay, the authors study the maximization of \ac{EE} subject to a hard delay constraint in an additive white Gaussian noise channel. It is shown that the \ac{EE} of a system with caching can be significantly improved with increasing content request probability and target transmission rate compared with the traditional on-demand scheme, in which the \ac{BS} transmits a content file only after it is requested by the user. However, the problem of energy consumption and joint communication and caching for clustered \ac{D2D} networks is not yet addressed in the literature.
In this paper, we conduct a comprehensive performance analysis and optimization of the joint communication and caching for a clustered \ac{D2D} network, where the devices have unused memory to cache some files, following a random probabilistic caching scheme. Our network model effectively characterizes the stochastic nature of channel fading and the clustered geographic locations of devices. Furthermore, this paper emphasizes the need for considering the traffic dynamics and rate of requests when studying the delay incurred in delivering requests to devices. To the best of our knowledge, our work is the first in the literature that conducts a comprehensive spatial analysis of a doubly \ac{PCP} (also called doubly \ac{PPP} \cite{haenggi2012stochastic}) with the devices adopting a slotted-ALOHA random access technique to access a shared channel. The key advantage of adopting the slotted-ALOHA access protocol is that it is a simple yet fundamental medium access control (MAC) protocol, wherein no central controller exists to schedule the users' transmissions.
We also incorporate the spatio-temporal analysis in wireless caching networks by combining tools from stochastic geometry and queuing theory in order to analyze and minimize the average delay (see, for instance, \cite{zhong2015stability,stamatiou2010random,zhong2017heterogeneous,7917340,kim2017ultra}).
The main contributions of this paper are summarized below.
\begin{itemize}
\item We consider a Thomas cluster process (TCP) where the devices are spatially distributed as groups in clusters. The cluster centers are drawn from a parent PPP, and the cluster members are normally distributed around the centers, forming a Gaussian PPP. This organization of the parent and offspring PPPs forms the so-called doubly PPP.
\item We conduct the coverage probability analysis where the devices adopt a slotted-ALOHA random access technique. We then jointly optimize the access probability and caching probability to maximize the cluster offloading gain. We obtain the optimal channel access probability, and then a closed-form solution of the optimal caching sub-problem is provided. The energy consumption problem is then formulated and shown to be convex and the optimal caching probability is also formulated.
\item By combining tools from stochastic geometry as well as queuing theory,
we minimize the per request weighted average delay by jointly optimizing bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the block coordinate descent (BCD) optimization technique, the joint minimization problem is solved in an iterative manner.
\item We validate our theoretical findings via simulations. Results show a significant improvement in the network performance metrics, namely, the offloading gain, energy consumption, and average delay as compared to other caching schemes proposed earlier in literature.
\end{itemize}
The rest of this paper is organized as follows. Section II and Section III discuss the system model and the offloading gain, respectively. The energy consumption is discussed in Section IV and the delay analysis is conducted in Section V. Numerical results are then presented in Section VI before we conclude the paper in Section VII.
\section{System Model}
\subsection{System Setup}
We model the location of the mobile devices with a \ac{TCP} in which the parent points are drawn from a \ac{PPP} $\Phi_p$ with density $\lambda_p$, and the daughter points are drawn from a Gaussian \ac{PPP} around each parent point. In fact, the TCP is considered as a doubly \ac{PCP} where the daughter points are normally scattered with variance $\sigma^2$ around each parent point in $\mathbb{R}^2$ \cite{haenggi2012stochastic}.
The parent points and offspring are referred to as cluster centers and cluster members, respectively. The number of cluster members in each cluster is a Poisson random variable with mean $\overline{n}$. The density function of the location of a cluster member relative to its cluster center is
\begin{equation}
f_Y(y) = \frac{1}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad \quad y \in \mathbb{R}^2
\label{pcp}
\end{equation}
where $\lVert .\rVert$ is the Euclidean norm. The intensity function of a cluster is given by $\lambda_c(y) = \frac{\overline{n}}{2\pi\sigma^2}\textrm{exp}\big(-\frac{\lVert y\rVert^2}{2\sigma^2}\big)$. Therefore, the intensity of the entire process is given by $\lambda = \overline{n}\lambda_p$. We assume that the BSs' distribution follows another \ac{PPP} $\Phi_{bs}$ with density $\lambda_{bs}$, which is independent of $\Phi_p$.
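For intuition, one realization of this process can be sampled in a bounded window as in the short sketch below (window size, seed, and units are arbitrary choices of ours).
\begin{verbatim}
# Sampling a Thomas cluster process in an L x L window (illustrative).
import numpy as np

rng = np.random.default_rng(0)
lam_p, n_bar, sigma, L = 50e-6, 10, 10.0, 1000.0   # per m^2, -, m, m

parents = rng.uniform(0, L, size=(rng.poisson(lam_p*L**2), 2))  # Phi_p
clusters = [c + sigma*rng.standard_normal((rng.poisson(n_bar), 2))
            for c in parents]                       # Gaussian offspring
devices = np.vstack(clusters) if clusters else np.empty((0, 2))
\end{verbatim}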
\subsection{Content Popularity and Probabilistic Caching Placement}
We assume that each device has a surplus memory of size $M$ designated for caching files.
The total number of files is $N_f> M$ and the set (library) of content indices is denoted as $\mathcal{F} = \{1, 2, \dots , N_f\}$. These files represent the content catalog that all the devices in a cluster may request, indexed in descending order of popularity. The probability that the $i$-th file is requested follows Zipf's distribution, given by
\begin{equation}
q_i = \frac{ i^{-\beta} }{\sum_{k=1}^{N_f}k^{-\beta}},
\label{zipf}
\end{equation}
where $\beta$ is a parameter that reflects how skewed the popularity distribution is. For example, if $\beta= 0$, the popularity of the files follows a uniform distribution. Increasing $\beta$ increases the disparity among the files' popularities such that lower-indexed files have higher popularity. By definition, $\sum_{i=1}^{N_f}q_i = 1$.
We use Zipf's distribution to model the popularity of files per cluster.
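For concreteness, the popularity vector can be generated as follows (a minimal Python sketch of ours; $N_f=500$ and $\beta=1$ match Table \ref{ch3:table:sim-parameter}):
\begin{verbatim}
import numpy as np

def zipf_popularity(n_files, beta):
    # q_i = i^{-beta} / sum_k k^{-beta}, i = 1..n_files
    ranks = np.arange(1, n_files + 1, dtype=float)
    weights = ranks ** (-float(beta))
    return weights / weights.sum()

q = zipf_popularity(500, 1.0)
assert abs(q.sum() - 1.0) < 1e-12
print(q[:3])   # the three most popular files
\end{verbatim}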
\ac{D2D} communication is enabled within each cluster to deliver popular content. It is assumed that the devices adopt a slotted-ALOHA medium access protocol where, during each time slot, each transmitter independently and randomly accesses the channel with the same probability $p$. This implies that multiple active \ac{D2D} links might coexist within a cluster. Therefore, $p$ is a design parameter that directly controls \textcolor{blue}{(mainly)} the intra-cluster interference, as described later in the paper.
We adopt a random content placement where each device independently selects a file to cache according to a specific probability function $\textbf{b} = \{b_1, b_2, \dots, b_{N_{f}}\}$, where $b_i$ is the probability that a device caches the $i$-th file, $0 \leq b_i \leq 1$ for all $i=\{1, \dots, N_f\}$. To avoid duplicate caching of the same content within the memory of the same device, we follow a probabilistic caching approach proposed in \cite{blaszczyszyn2015optimal} and illustrated in Fig. \ref{prob_cache_example}.
\begin{figure
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/prob_cache_exam}
\caption {The cache memory of size $M = 3$ is equally divided into $3$ blocks of unit size. Starting from content $i=1$ to $i=N_f$, each content sequentially fills these $3$ memory blocks by an amount $b_i$. The amounts (probabilities) $b_i$ eventually fill all $3$ blocks since $\sum_{i=1}^{N_f} b_i = M$ \cite{blaszczyszyn2015optimal}. Then a random number in $[0,1]$ is generated, and from each block the content $i$ whose amount $b_i$ intersects the generated random number is chosen. In this way, in the given example, the contents $\{1, 2, 4\}$ are chosen to be cached.}
\label{prob_cache_example}
\end{center}
\end{figure}
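The construction in Fig. \ref{prob_cache_example} can be sampled directly. The following Python sketch (ours; the caching vector is an arbitrary example) returns $M$ distinct file indices such that file $i$ is cached with probability $b_i$:
\begin{verbatim}
import numpy as np

def probabilistic_cache(b, M, rng):
    # Sample M distinct files so that file i is cached with probability
    # b[i]: lay the amounts b_i end-to-end on [0, M], draw one uniform
    # u in [0, 1), and pick the file covering u + m in each unit block m.
    assert abs(b.sum() - M) < 1e-9 and np.all((b >= 0) & (b <= 1))
    edges = np.cumsum(b)                     # right interval boundaries
    points = rng.uniform(0.0, 1.0) + np.arange(M)
    return np.searchsorted(edges, points, side="right")

rng = np.random.default_rng(1)
b = np.array([0.9, 0.7, 0.6, 0.5, 0.3])      # sums to M = 3
print(probabilistic_cache(b, 3, rng))        # three distinct 0-based indices
\end{verbatim}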
If a device caches the desired file, the device directly retrieves the content. However, if the device does not cache the file, the file can be downloaded from any neighboring device that caches the file (henceforth called catering device) in the same cluster. According to the proposed access model, the probability that a chosen catering device is admitted to access the channel is the access probability $p$. Finally, the device attaches to the nearest \ac{BS} as a last resort to download the content which is not cached entirely within the device's cluster. We assume that the \ac{D2D} communication operates as out-of-band D2D. \textcolor{blue}{$W_{1}$ and $W_{2}$ denote the bandwidth allocated to the \ac{D2D} and \ac{BS}-to-Device communication, respectively, and the total system bandwidth is denoted as $W=W_{1} + W_{2}$. It is assumed that device requests are served in a random manner, i.e., among the cluster devices, a random device request is chosen to be scheduled and its content served.}
In the following, we aim at studying and optimizing three important metrics that are widely studied in the literature. The first metric is the offloading gain, which is defined as the probability of obtaining the requested file from the local cluster, either from the self-cache or from a neighboring device in the same cluster, with a rate higher than a required threshold $R_0$. The second metric is the energy consumption, which represents the dissipated energy when downloading files either from the BSs or via \ac{D2D} communication. The third metric is the latency, which accounts for the weighted average delay over all the requests served via the \ac{D2D} and \ac{BS}-to-Device communication.
\section{Maximum Offloading Gain}
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{Figures/ch3/distance1.png}
\caption {Illustration of the representative cluster and one interfering cluster.}
\label{distance}
\end{figure}
Without loss of generality, we conduct the analysis for a cluster whose center is at $x_0\in \Phi_p$ (referred to as the representative cluster), and the device that requests the content (henceforth called the typical device) is located at the origin. We denote by $y_0$ the location of the \ac{D2D} transmitter relative to $x_0$, where $x_0, y_0\in \mathbb{R}^2$. The distance from the typical device (\ac{D2D} receiver of interest) to this \ac{D2D} transmitter is denoted as $r=\lVert x_0+y_0\rVert$, which is a realization of a random variable $R$ whose distribution is described later. This setup is illustrated in Fig. \ref{distance}. It is assumed that a requested file is served from a randomly selected catering device, which is, in turn, admitted to access the channel based on the slotted-ALOHA protocol. The successful offloading probability is then given by
\begin{align}
\label{offloading_gain}
\mathbb{P}_o(p,\textbf{b}) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}})
\underbrace{\int_{r=0}^{\infty}f_R(r) \mathbb{P}(R_{1}(r)>R_0) \dd{r}}_{\mathbb{P}(R_1>R_0)},
\end{align}
where $R_{1}(r)$ is the achievable rate when downloading content from a catering device at a distance $r$ from the typical device, with \ac{PDF} $f_R(r)$. The first term on the right-hand side is the probability of requesting a locally cached file (self-cache), whereas the remaining term incorporates the probability that a requested file $i$ is cached by at least one cluster member and is downloadable with a rate greater than $R_0$. More precisely, since the number of devices per cluster has a Poisson distribution, the probability that there are $k$ devices per cluster is equal to $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$. Accordingly, the probability that there are $k$ devices caching content $i$ can be written as $\frac{(b_i\overline{n})^k e^{-b_i\overline{n}}}{k!}$. Hence, the probability that at least one device caches content $i$ is one minus the void probability (i.e., $k=0$), which equals $1 - e^{-b_i\overline{n}}$.
In the following, we first compute the probability $ \mathbb{P}(R_{1}(r)>R_0)$ given the distance $r$ between the typical device and a catering device, \textcolor{blue}{then we conduct averaging over $r$ using the \ac{PDF} $f_R(r)$}. The received power at the typical device from a catering device located at $y_0$ relative to the cluster center is given by
\begin{align}
P &= P_d g_0 \lVert x_0+y_0\rVert^{-\alpha}= P_d g_0 r^{-\alpha}
\label{pwr}
\end{align}
where $P_d$ denotes the \ac{D2D} transmission power, $g_0$ is the complex Gaussian fading channel coefficient between a catering device located at $y_0$ relative to its cluster center at $x_0$ and the typical device, and $\alpha > 2$ is the path loss exponent. Under the above assumptions, the typical device sees two types of interference, namely, the intra- and inter-cluster interference. We first describe the inter-cluster interference, and then the intra-cluster interference is characterized. The set of active devices in any remote cluster is denoted as $\mathcal{B}^p$, where $p$ refers to the access probability. Similarly, the set of active devices in the local cluster is denoted as $\mathcal{A}^p$. Similar to (\ref{pwr}), the interference at the typical device from the simultaneously active \ac{D2D} transmitters outside the representative cluster is given by
\begin{align}
I_{\Phi_p^{!}} &= \sum_{x \in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{y_x} \lVert x+y\rVert^{-\alpha}\\
& = \sum_{x\in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{u} u^{-\alpha}
\end{align}
where $\Phi_p^{!}=\Phi_p \setminus x_0$ for ease of notation, $y$ is the marginal distance between a potential interfering device and its cluster center at $x \in \Phi_p$, $u = \lVert x+y\rVert$ is a realization of a random variable $U$ modeling the inter-cluster interfering distance (shown in Fig. \ref{distance}), $g_{y_x} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading, and $g_{u} = g_{y_x}$ for ease of notation. The intra-cluster interference is then given by
\begin{align}
I_{\Phi_c} &= \sum_{y\in \mathcal{A}^p} P_d g_{y_{x_0}} \lVert x_0+y\rVert^{-\alpha}\\
& = \sum_{y\in \mathcal{A}^p} P_d g_{h} h^{-\alpha}
\end{align}
where $y$ is the marginal distance between the intra-cluster interfering devices and the cluster center at $x_0 \in \Phi_p$, $h = \lVert x_0+y\rVert$ is a realization of a random variable $H$ modeling the intra-cluster interfering distance (shown in Fig. \ref{distance}), $g_{y_{x_0}} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading, and $g_{h} = g_{y_{x_0}}$ for ease of notation. From the thinning theorem \cite{haenggi2012stochastic}, the set of active transmitters following the slotted-ALOHA medium access forms a Gaussian \ac{PPP} $\Phi_{cp}$ whose intensity is given by
\begin{align}
\lambda_{cp} = p\lambda_{c}(y) = p\overline{n}f_Y(y) =\frac{p\overline{n}}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad y \in \mathbb{R}^2
\end{align}
Assuming that the thermal noise is negligible compared to the aggregate interference, the $\mathrm{SIR}$ at the typical device is written as
\begin{equation}
\gamma_{r}=\frac{P}{I_{\Phi_p^{!}} + I_{\Phi_c}} = \frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}}
\end{equation}
\textcolor{black}{A fixed rate transmission model is adopted in our study}, where each transmitter (\ac{D2D} or BS) transmits at the fixed rate of log$_2[1+\theta]$ \SI{}{bits/sec/Hz}, where $\theta$ is a design parameter. Since the rate is fixed, the transmission is subject to outage due to fading and interference fluctuations. Consequently, the de facto average transmission rate (i.e., average throughput) is given by
\begin{equation}
\label{rate_eqn}
R = W\,{\rm log}_2[1+ \theta]\,\mathrm{P_c},
\end{equation}
where $W$ is the bandwidth, $\theta$ is the pre-determined threshold for successful reception, $\mathrm{P_c} =\mathbb{E}(\textbf{1}\{\mathrm{SIR}>\theta\})$ is the coverage probability, and $\textbf{1}\{.\}$ is the indicator function. \textcolor{blue}{When served by a catering device $r$ apart from the origin, the achievable rate of the typical device under slotted-ALOHA medium access technique can be deduced from \cite[Equation (10)]{jing2012achievable} as}
\begin{equation}
\label{rate_eqn1}
R_{1}(r) = pW_{1} {\rm log}_2 \big(1 + \theta \big) \textbf{1}\{ \gamma_{r} > \theta\}
\end{equation}
Then, the probability $ \mathbb{P}(R_{1}(r)>R_0)$ is derived as follows.
\begin{align}
\mathbb{P}(R_{1}(r)>R_0)&= \mathbb{P} \big(pW_{1} {\rm log}_2 (1 + \theta)\textbf{1}\{ \gamma_{r} > \theta\}
>R_0\big) \nonumber \\
&=\mathbb{P} \big(\textbf{1}\{ \gamma_{r} > \theta\}
>\frac{R_0}{pW_{1} {\rm log}_2 (1 + \theta )}\big) \nonumber \\
&\overset{(a)}{=} \mathbb{E}\big(\textbf{1}\{ \gamma_{r} > \theta\}\big) = \mathbb{P}\big(\frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}} > \theta\big)
\nonumber \\
&= \mathbb{P}\big( g_0 > \frac{\theta r^{\alpha}}{P_d} [I_{\Phi_p^{!}} + I_{\Phi_c}] \big)
\nonumber \\
&\overset{(b)}{=}\mathbb{E}_{I_{\Phi_p^{!}},I_{\Phi_c}}\Big[\text{exp}\big(\frac{-\theta r^{\alpha}}{P_d}{[I_{\Phi_p^{!}} + I_{\Phi_c}] }\big)\Big]
\nonumber \\
\label{prob-R1-g-R0}
&\overset{(c)}{=} \mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)
\end{align}
\textcolor{blue}{where (a) follows from the assumption that $R_0 < pW_{1} {\rm log}_2 \big(1 + \theta \big)$ always holds; otherwise, it is infeasible for $\mathbb{P}(R_{1}(r)>R_0)$ to be greater than zero}. (b) follows from the fact that $g_0$ follows an exponential distribution, and (c) follows from the independence of the intra- and inter-cluster interference and their Laplace transforms.
In what follows, we first derive the Laplace transform of interference to get $\mathbb{P}(R_{1}(r)>R_0)$. Then, we formulate the offloading gain maximization problem.
\begin{lemma}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_inter}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$, $f_U(u|v) = \mathrm{Rice} (u| v, \sigma)$ denotes the Rice \ac{PDF} with parameters $v$ and $\sigma$, and $v=\lVert x\rVert$.
\end{lemma}
\begin{proof}
Please see Appendix A.
\end{proof}
\begin{lemma}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ can be approximated by
\begin{align}
\label{LT_intra}
\mathscr{L}_{I_{\Phi_c} }(s) \approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where $f_H(h) =\mathrm{Rayleigh}(h,\sqrt{2}\sigma)$ represents Rayleigh's \ac{PDF} with a scale parameter $\sqrt{2}\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix B.
\end{proof}
For the serving distance distribution $f_R(r)$: since both the typical device and a potential catering device have their locations drawn from a normal distribution with variance $\sigma^2$ around the cluster center, the serving distance by definition has a Rayleigh distribution with parameter $\sqrt{2}\sigma$, given by
\begin{align}
\label{rayleigh}
f_R(r)= \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}, \quad r>0
\end{align}
From (\ref{LT_inter}), (\ref{LT_intra}), and (\ref{rayleigh}), the offloading gain in (\ref{offloading_gain}) is written as
\begin{align}
\label{offloading_gain_1}
\mathbb{P}_o(p,\textbf{b}) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}}) \underbrace{\int_{r=0}^{\infty}
\frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)\dd{r}}_{\mathbb{P}(R_1>R_0)},
\end{align}
Hence, the offloading gain maximization problem can be formulated as
\begin{align}
\label{optimize_eqn_p}
\textbf{P1:} \quad &\underset{p,\textbf{b}}{\text{max}} \quad \mathbb{P}_o(p,\textbf{b}) \\
\label{const110}
&\textrm{s.t.}\quad \sum_{i=1}^{N_f} b_i = M, \\
\label{const111}
& b_i \in [ 0, 1], \\
\label{const112}
& p \in [ 0, 1],
\end{align}
where (\ref{const110}) is the device cache size constraint, which is consistent with the illustration of the example in Fig. \ref{prob_cache_example}. \textcolor{black}{On one hand, from the assumption that the fixed transmission rate $pW_{1} {\rm log}_2 \big(1 + \theta \big)$ is larger than the required threshold $R_0$, we have the condition $p>\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}$ on the access probability. On the other hand, from (\ref{prob-R1-g-R0}), with further increase of the access probability $p$, the intra- and inter-cluster interference powers increase, and the probability $\mathbb{P}(R_{1}(r)>R_0)$ decreases accordingly. Intuitively, the optimal access probability for the offloading gain maximization is therefore chosen as $p^* = \frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)} + \epsilon$, where $\epsilon \to 0$. However, increasing the access probability $p$ above $p^*$ may lead to a higher \ac{D2D} average achievable rate $R_1$, as elaborated in the next section.} The obtained $p^*$ is now used to solve for the caching probability $\textbf{b}$ in the optimization problem below. Since $p$ and $\textbf{b}$ are separable in the structure of \textbf{P1}, it is possible to solve for $p^*$ first and then substitute it to obtain $\textbf{b}^*$.
\begin{align}
\label{optimize_eqn_b_i}
\textbf{P2:} \quad &\underset{\textbf{b}}{\text{max}} \quad \mathbb{P}_o(p^*,\textbf{b}) \\
\label{const11}
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
The optimal caching probability is formulated in the next lemma.
\begin{lemma}
$\mathbb{P}_o(p^*,\textbf{b})$ is a concave function w.r.t. $\textbf{b}$ and the optimal caching probability $\textbf{b}^{*}$ that maximizes the offloading gain is given by
\[
b_{i}^{*}=\left\{
\begin{array}{ll}
1 \quad\quad\quad , v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)\\
0 \quad\quad\quad, v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)\\
\psi(v^{*}) \quad, {\rm otherwise}
\end{array}
\right.
\]
where $\psi(v^{*})$ is the solution of (\ref{psii_offload}) in Appendix C, with $v^{*}$ chosen such that $\sum_{i=1}^{N_f} b_i^*=M$.
\end{lemma}
\begin{proof}
Please see Appendix C.
\end{proof}
\section{Energy Consumption}
In this section, we formulate the energy consumption minimization problem for the clustered \ac{D2D} caching network. In fact, significant energy consumption occurs only when content is served via \ac{D2D} or \ac{BS}-to-Device transmission. We consider the time cost $c_{d_i}$ as the time it takes to download the $i$-th content from a neighboring device in the same cluster. Considering the size ${S}_i$ of the $i$-th ranked content, $c_{d_i}=S_i/R_1 $, where $R_1 $ denotes the average rate of the \ac{D2D} communication. Similarly, we have $c_{b_i} = S_i/R_2 $ when the $i$-th content is served by the \ac{BS} with average rate $R_2 $. The average energy consumption when downloading files by the devices in the representative cluster is given by
\begin{align}
\label{energy_avrg}
E_{av} = \sum_{k=1}^{\infty} E(\textbf{b}|k)\mathbb{P}(n=k)
\end{align}
where $\mathbb{P}(n=k)$ is the probability that there are $k$ devices in the representative cluster, \textcolor{blue}{equal to $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$}, and $E(\textbf{b}|k)$ is the energy consumption conditioning on having $k$ devices in the cluster, written similar to \cite{energy_efficiency} as
\begin{equation}
E(\textbf{b}|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\label{energy}
\end{equation}
where $\mathbb{P}_{j,i}^d$ and $\mathbb{P}_{j,i}^b$ represent the probability of obtaining the $i$-th content by the $j$-th device from the local cluster, i.e., via \ac{D2D} communication, and the BS, respectively. $P_b$ denotes the \ac{BS} transmission power. Given that there are $k$ devices in the cluster, it is obvious that $\mathbb{P}_{j,i}^b=(1-b_i)^{k}$, and $\mathbb{P}_{j,i}^d=(1 - b_i)\big(1-(1-b_i)^{k-1}\big)$.
The average rates $R_1$ and $R_2$ are now computed to get a closed-form expression for $E(\textbf{b}|k)$.
From equation (\ref{rate_eqn}), we need to obtain the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ and \ac{BS}-to-Device coverage probability $\mathrm{P_{c_b}}$ to calculate $R_1$ and $R_2$, respectively. Given the number of devices $k$ in the representative cluster, the Laplace transform of the inter-cluster interference \textcolor{blue}{is as obtained in (\ref{LT_inter})}. However, the intra-cluster interfering devices no longer represent a Gaussian \ac{PPP} since the number of devices is conditionally fixed, i.e., not a Poisson random number as before. To facilitate the analysis, for every realization $k$, we assume that the intra-cluster interfering devices form a Gaussian \ac{PPP} with intensity function given by $pkf_Y(y)$. Such an assumption is mandatory for analytical tractability.
From Lemma 2, the intra-cluster Laplace transform conditioning on $k$ can be approximated as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|k) &\approx {\rm exp}\Big(-pk \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\nonumber
\end{align}
and the conditional \ac{D2D} coverage probability is given by
\begin{align}
\label{p_b_d2d}
\mathrm{P_{c_d}} =
\int_{r=0}^{\infty} \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big|k\big)\dd{r}
\end{align}
%
\textcolor{blue}{With the adopted slotted-ALOHA scheme, the access probability $p$ that minimizes $E(\textbf{b}|k)$ is computed over the interval $[0,1]$ so as to maximize the \ac{D2D} achievable rate $R_1$ in (\ref{rate_eqn1}), subject to the condition $p>\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}$, which keeps the probability $\mathbb{P}(R_{1}>R_0)$ greater than zero.
As an illustrative example, in Fig.~\ref{prob_r_geq_r0_vs_p}, we plot the \ac{D2D} average achievable rate $R_1$ against the channel access probability $p$.
As evident from the plot, there is a certain access probability below which the rate $R_1$ tends to increase as the channel access probability increases, and beyond which the rate $R_1$ decreases monotonically due to the effect of more interferers accessing the channel. In such a case, although we observe that increasing $p$ above $\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}=0.1$ improves the average achievable rate $R_1$, it comes at the price of a decreased $\mathbb{P}(R_{1}>R_0)$.}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/prob_r_geq_r0_vs_p2.eps}
\caption {\textcolor{blue}{The \ac{D2D} average achievable rate $R_1$ versus the access probability $p$ ($\lambda_p = 20 \text{ clusters}/\SI{}{km}^2$, $\overline{n}=12$, $\sigma=\SI{30}{m}$, $\theta=\SI{0}{dB}$, $R_0/W_1=0.1\SI{}{bits/sec/Hz}$).}}
\label{prob_r_geq_r0_vs_p}
\end{center}
\end{figure}
Analogously, under the \ac{PPP} $\Phi_{bs}$, and based on the nearest \ac{BS} association principle, it is shown in \cite{andrews2011tractable} that the \ac{BS} coverage probability can be expressed as
\begin{equation}
\mathrm{P_{c_b}} =\frac{1}{{}_2 F_1(1,-\delta;1-\delta;-\theta)},
\label{p_b_bs}
\end{equation}
where ${}_2 F_1(\cdot)$ is the Gaussian hypergeometric function and $\delta = 2/\alpha$. Given the coverage probabilities $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ in (\ref{p_b_d2d}) and (\ref{p_b_bs}), respectively, $R_1$ and $R_2 $ can be calculated from (\ref{rate_eqn}), and hence $E(\textbf{b}|k)$ is expressed in a closed-form.
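For instance, (\ref{p_b_bs}) can be evaluated with scipy's Gaussian hypergeometric routine (a sketch of ours; the printed value $\approx 0.56$ corresponds to $\theta=\SI{0}{dB}$ and $\alpha=4$):
\begin{verbatim}
from scipy.special import hyp2f1

def bs_coverage(theta, alpha):
    # P_cb = 1 / 2F1(1, -delta; 1 - delta; -theta), with delta = 2/alpha
    delta = 2.0 / alpha
    return 1.0 / hyp2f1(1.0, -delta, 1.0 - delta, -theta)

print(bs_coverage(theta=1.0, alpha=4.0))   # approx. 0.56
\end{verbatim}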
\subsection{Energy Consumption Minimization}
The energy minimization problem can be formulated as
\begin{align}
\label{optimize_eqn1}
&\textbf{P3:} \quad\underset{\textbf{b}}{\text{min}} \quad E(\textbf{b}|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\\
\label{const11}
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
In the next lemma, we prove the convexity condition for $E(\textbf{b}|k)$.
\begin{lemma}
\label{convex_E}
The energy consumption $E(\textbf{b}|k)$ is convex if $\frac{P_b}{R_2}>\frac{P_d}{R_1}$.
\end{lemma}
\begin{proof}
\textcolor{blue}{We proceed by deriving the Hessian matrix of $E(\textbf{b}|k)$. The Hessian matrix of $E(\textbf{b}|k)$ w.r.t. the caching variables is \textbf{H}$_{i,j} = \frac{\partial^2 E(\textbf{b}|k)}{\partial b_i \partial b_j}$, $\forall i,j \in \mathcal{F}$. \textbf{H} is a diagonal matrix whose $i$-th diagonal element is given by $k(k-1) S_i\Big(\frac{P_b}{R_2}-\frac{P_d}{R_1}\Big)q_i(1 - b_i)^{k-2}$.}
Since the obtained Hessian matrix is diagonal, it is positive semidefinite (and hence $E(\textbf{b}|k)$ is convex) if all the diagonal entries are nonnegative, i.e., when
$\frac{P_b}{R_2}>\frac{P_d}{R_1}$. In practice, it is reasonable to assume that $P_b \gg P_d$; for instance, in \cite{ericsson}, the \ac{BS} transmission power is 100-fold the \ac{D2D} power.
\end{proof}
As a result of Lemma \ref{convex_E}, the optimal caching probability can be computed to minimize $E(\textbf{b}|k)$.
\begin{lemma}
The optimal caching probability $\textbf{b}^{*}$ for the energy minimization problem \textbf{P3} is given by,
\begin{equation}
b_i^* = \Bigg[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2}\big)} \Big)^{\frac{1}{k-1}}\Bigg]^{+}
\label{energy11}
\end{equation}
where $v^{*}$ satisfies the maximum cache constraint $\sum_{i=1}^{N_f} \Big[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Big]^{+}=M$, and $[x]^+ =$ max$(x,0)$.
\end{lemma}
\begin{proof}
The proof proceeds in a similar manner to Lemma 3 and is omitted.
\end{proof}
\begin{proposition} {\rm \textcolor{blue}{By observing (\ref{energy11}), we can demonstrate the effects of content size and popularity on the optimal caching probability. $S_i$ appears in both the numerator and the denominator of the second term in (\ref{energy11}); however, its effect on the numerator is more significant due to the larger multiplier. The same property is observed for $q_i$. With the increase of $S_i$ or $q_i$, the magnitude of the second term in (\ref{energy11}) increases, and correspondingly, $b_i^*$ decreases. That is, a content of larger size or lower popularity has a smaller probability of being cached.}}
\end{proposition}
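Numerically, rather than coding the closed form (\ref{energy11}), which requires care with the sign of the inner fraction, one can exploit the convexity in Lemma \ref{convex_E} and solve \textbf{P3} directly. A minimal sketch of ours follows; the cost ratios $P_d/R_1$ and $P_b/R_2$ and all other values are illustrative:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Illustrative values (ours): k devices, N_f files, cache size M.
k, N_f, M, beta = 8, 100, 4, 1.0
q = np.arange(1, N_f + 1, dtype=float) ** (-beta); q /= q.sum()
S = np.full(N_f, 5.0)                 # content sizes S_i (MBytes)
Pd_over_R1, Pb_over_R2 = 0.2, 2.0     # ratios with P_b/R_2 > P_d/R_1

def energy(b):
    # E(b|k) = k * sum_i q_i S_i [ (1-b_i)(1-(1-b_i)^{k-1}) P_d/R_1
    #                              + (1-b_i)^k P_b/R_2 ]
    bb = 1.0 - b
    return k * np.sum(q * S * (bb * (1.0 - bb ** (k - 1)) * Pd_over_R1
                               + bb ** k * Pb_over_R2))

res = minimize(energy, x0=np.full(N_f, M / N_f), method="SLSQP",
               bounds=[(0.0, 1.0)] * N_f,
               constraints=[{"type": "eq", "fun": lambda b: b.sum() - M}])
print(res.fun, res.x[:5])             # popular files get b_i close to 1
\end{verbatim}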
By substituting $b_i^*$ into (\ref{energy_avrg}), the average energy consumption per cluster is obtained. In the rest of the paper, we study and minimize the weighted average delay per request for the proposed system.
\section{Delay Analysis}
In this section, the delay analysis and minimization are discussed. A joint stochastic geometry and queueing theory model is exploited to study this problem. The delay analysis incorporates the study of a system of spatially interacting queues. To simplify the mathematical analysis, we further assume that only one \ac{D2D} link can be active within a cluster of $k$ devices, where $k$ is fixed. As shown later, such an assumption facilitates the analysis by yielding simple expressions. We begin by deriving the \ac{D2D} coverage probability under the above assumption, which is used later in this section.
\begin{lemma}
\label{coverage_lemma}
The \ac{D2D} coverage probability of the proposed clustered model with one active \ac{D2D} link within a cluster is given by
\begin{align}
\label{v_0}
\mathrm{P_{c_d}} = \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)} ,
\end{align}
where $Z(\theta,\alpha,\sigma) = (\pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2})$.
\end{lemma}
\begin{proof}
The result can be proved by using the displacement theorem of the \ac{PPP} \cite{daley2007introduction}, and then proceeding in a similar manner to Lemmas 1 and 2. The proof is presented in Appendix D for completeness.
\end{proof}
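A direct transcription of (\ref{v_0}), for quick numerical use (a sketch of ours; the parameter values are illustrative):
\begin{verbatim}
import numpy as np
from scipy.special import gamma

def d2d_coverage(theta, alpha, sigma, lam_p):
    # P_cd = 1/(4 sigma^2 Z) with
    # Z = pi lam_p theta^{2/a} Gamma(1+2/a) Gamma(1-2/a) + 1/(4 sigma^2)
    Z = (np.pi * lam_p * theta ** (2.0 / alpha)
         * gamma(1.0 + 2.0 / alpha) * gamma(1.0 - 2.0 / alpha)
         + 1.0 / (4.0 * sigma ** 2))
    return 1.0 / (4.0 * sigma ** 2 * Z)

# theta = 0 dB, alpha = 4, sigma = 10 m, 20 clusters/km^2 (values ours)
print(d2d_coverage(1.0, 4.0, 10.0, 20e-6))
\end{verbatim}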
In the following, we firstly describe the traffic model of the network, and then we formulate the delay minimization problem.
\begin{figure
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/delay_queue2}
\caption {\textcolor{black}{The traffic model of request arrivals and departures in a given cluster. $Q_1$ and $Q_2$ are M/G/1 queues modeling requests served by \ac{D2D} and \ac{BS}-to-Device communication, respectively.}}
\label{delay_queue}
\end{center}
\end{figure}
\subsection{Traffic Model}
We assume that the aggregate request arrival process from the devices in each cluster follows a Poisson arrival process with parameter $\zeta_{tot}$ (requests per unit time). As shown in Fig.~\ref{delay_queue}, the incoming requests are further divided according to where they are served from. $\zeta_{1}$ represents the arrival rate of requests served via the \ac{D2D} communication, whereas $\zeta_{2}$ is the arrival rate for those served from the BSs. $\zeta_{3} = \zeta_{tot} - \zeta_{1} - \zeta_{2}$ denotes the arrival rate of requests served via the self-cache with zero delay. By the thinning of a Poisson process, $\zeta_{1}$ and $\zeta_{2}$ are also Poisson arrival processes. Without loss of generality, we assume that the file size has a general distribution $G$ whose mean is denoted as $\overline{S}$ \SI{}{MBytes}. Hence, an M/G/1 queuing model is adopted whereby two non-interacting queues, $Q_1$ and $Q_2$, model the traffic in each cluster served via the \ac{D2D} and \ac{BS}-to-Device communication, respectively. Although $Q_1$ and $Q_2$ are non-interacting, as the \ac{D2D} communication is assumed to be out-of-band, these two queues are spatially interacting with similar queues in other clusters. To recap, $Q_1$ and $Q_2$ are two M/G/1 queues with arrival rates $\zeta_{1}$ and $\zeta_{2}$, and service rates $\mu_1$ and $\mu_2$, respectively.
\subsection{Queue Dynamics}
It is worth highlighting that the two queues $Q_i$, $i \in \{1,2\}$, accumulate requests for files demanded by the cluster members, not the files themselves. First-in first-out (FIFO) scheduling is assumed, whereby a request that arrives first is scheduled first, either via \ac{D2D} or \ac{BS} communication depending on whether or not the content is cached among the devices. The result of FIFO scheduling only relies on the time when the request arrives at the queue and is irrelevant to the particular device that issues the request. Given the parameter $\zeta_{tot}$ of the Poisson arrival process, the arrival rates at the two queues are expressed respectively as
\begin{align}
\zeta_{1} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big), \\
\zeta_{2} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}
\end{align}
The network operation is depicted in Fig. \ref{delay_queue}, and described in detail below.
\begin{enumerate}
\item Given the memoryless property of the arrival process (Poisson arrival) along with the assumption that the service process is independent of the arrival process,
the number of requests in any queue at a future time only depends upon the current number in the system (at time $t$) and the arrivals or departures that occur within the interval $e$.
\begin{align}
Q_{i}(t+e) = Q_{i}(t) + \Lambda_{i}(e) - M_{i}(e)
\end{align}
where $\Lambda_{i}(e)$ is the number of arrivals in the time interval $(t,t+e)$, with arrival rate $\zeta_i$ \SI{}{sec}$^{-1}$, and $M_{i}(e)$ is the number of departures in the time interval $(t,t+e)$, with service rate $\mu_i = \frac{\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})W_i{\rm log}_2(1+\theta)}{\overline{S}}$ \SI{}{sec}$^{-1}$. It is worth highlighting that, unlike the spatial-only model studied in the previous sections, the term $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ is dependent on the traffic dynamics \textcolor{black}{since a request being served in a given cluster is interfered with only by other clusters that also have requests to serve}. What is more noteworthy is that the mean service time $\tau_i = \frac{1}{\mu_i}$ follows the same distribution as the file size. These aspects will be revisited later in this section.
\item $\Lambda_{i}(e)$ depends only on $e$ because the arrival process is Poisson. $M_{i}(e)$ is $0$ if the service time of the file being served satisfies $\epsilon_1 >e$; $M_{i}(e)$ is $1$ if $\epsilon_1 <e$ and $\epsilon_1 + \epsilon_2>e$, and so on. As the service times $\epsilon_1, \epsilon_2, \dots, \epsilon_n$ are independent, neither $\Lambda_{i}(e)$ nor $M_{i}(e)$ depends on what happened prior to $t$. Thus, $Q_{i}(t+e)$ only depends upon $Q_{i}(t)$ and not on the past history. Hence, it is a \ac{CTMC} which obeys the stability conditions in \cite{szpankowski1994stability}.
\end{enumerate}
The following proposition provides the conditions for the stability of the buffers in the sense defined in \cite{szpankowski1994stability}, i.e., $\{Q_{i}\}$ has a limiting distribution as $t \rightarrow \infty$.
\begin{proposition} {\rm The \ac{D2D} and \ac{BS}-to-Device traffic modeling queues are stable, respectively, if and only if}
\begin{align}
\label{stable1}
\zeta_{1} < \mu_1 &= \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}} \\
\label{stable2}
\zeta_{2} < \mu_2 & =\frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}
\end{align}
\end{proposition}
\begin{proof}
We show sufficiency by proving that (\ref{stable1}) and (\ref{stable2}) guarantee stability in a dominant network, where all queues that have empty buffers make dummy transmissions. The dominant network is a fictitious system that is identical to the original system, except that terminals may choose to transmit even when their respective buffers are empty, in which case they simply transmit a dummy packet. If both systems are started from the same initial state and fed with the same arrivals, then the queues in the fictitious dominant system can never be shorter than the queues in the original system.
\textcolor{blue}{Similar to the spatial-only network, in the dominant system, the typical receiver is seeing an interference from all other clusters whether they have requests to serve or not (dummy transmission).} This dominant system approach yields $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ equal to $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ for the \ac{D2D} and \ac{BS}-to-Device communication, respectively. Also, the obtained delay is an upper bound for the actual delay of the system. The necessity of (\ref{stable1}) and (\ref{stable2}) is shown as follows: If $\zeta_i>\mu_i$, then, by Loynes' theorem \cite{loynes1962stability}, it follows that lim$_{t\rightarrow \infty}Q_i(t)=\infty$ (a.s.) for all queues in the dominant network.
\end{proof}
Next, we conduct the analysis for the dominant system whose parameters are as follows. The content size has an exponential distribution of mean $\overline{S}$ \SI{}{MBytes}. The service times also obey an exponential distribution with means $\tau_1 = \frac{\overline{S}}{R_1}$ \SI{}{seconds} and $\tau_2 = \frac{\overline{S}}{R_2}$ \SI{}{seconds}. The rates $R_1$ and $R_2$ are calculated from (\ref{rate_eqn}) where $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ are from (\ref{v_0}) and (\ref{p_b_bs}), respectively. Accordingly, $Q_1$ and $Q_2$ are two continuous time independent (non-interacting) M/M/1 queues with service rates $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$ and $\mu_2 = \frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}$ \SI{}{sec}$^{-1}$, respectively. \begin{proposition} {\rm The mean queue length $L_i$ of the $i$-th queue is given by}
\begin{align}
\label{queue_len}
L_i &= \rho_i + \frac{\rho_i^2}{1 - \rho_i} = \frac{\rho_i}{1 - \rho_i},
\end{align}
\end{proposition}
\begin{proof}
We can easily calculate $L_i$ by observing that $Q_i$ are continuous time M/M/1 queues with arrival rates $\zeta_i$, service rates $\mu_i$, and traffic intensities $\rho_i = \frac{\zeta_i}{\mu_i}$. Then, by applying the Pollaczek-Khinchine formula \cite{Kleinrock}, $L_i$ is directly obtained.
\end{proof}
The average delay per request for each queue is calculated from
\begin{align}
D_1 &= \frac{L_1}{\zeta_1}= \frac{1}{\mu_1 - \zeta_1} = \frac{1}{W_1\mathcal{O}_{1} - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} \\
D_2 &= \frac{L_2}{\zeta_2}=\frac{1}{\mu_2 - \zeta_2} = \frac{1}{W_2\mathcal{O}_{2} - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
where $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, $\mathcal{O}_{2}= \frac{\mathrm{P_{c_b}} {\rm log}_2(1+\theta)}{\overline{S}}$ for notational simplicity. The weighted average delay $D$ is then expressed as
\begin{align}
D&= \frac{\zeta_{1}D_1 + \zeta_{2}D_2}{\zeta_{tot}} \nonumber \\
&= \frac{\sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)}{ \mathcal{O}_{1}W_1 - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} + \frac{\sum_{i=1}^{N_f}q_i (1-b_i)^{k}}{ \mathcal{O}_{2}W_2 - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
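As a numerical companion (ours) to the delay expression above, the following sketch evaluates $D(\textbf{b},W_1)$, returning $\infty$ when the stability conditions are violated; $\mathcal{O}_1$ and $\mathcal{O}_2$ are passed in as precomputed constants:
\begin{verbatim}
import numpy as np

def weighted_delay(b, W1, W, k, q, zeta_tot, O1, O2):
    # D = (z1*D1 + z2*D2)/zeta_tot with D_i = 1/(mu_i - z_i);
    # O1, O2 are P_cd log2(1+theta)/S_bar and P_cb log2(1+theta)/S_bar.
    bb = 1.0 - np.asarray(b)
    z1 = zeta_tot * np.sum(q * (bb - bb ** k))   # D2D arrival rate
    z2 = zeta_tot * np.sum(q * bb ** k)          # BS arrival rate
    mu1, mu2 = O1 * W1, O2 * (W - W1)            # service rates
    if z1 >= mu1 or z2 >= mu2:                   # stability violated
        return np.inf
    return (z1 / (mu1 - z1) + z2 / (mu2 - z2)) / zeta_tot
\end{verbatim}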
One important insight from the delay equation is that the caching probability $\textbf{b}$ controls the arrival rates $\zeta_{1}$ and $\zeta_{2}$ while the bandwidth determines the service rates $\mu_1$ and $\mu_2$. Therefore, it turns out to be of paramount importance to jointly optimize $\textbf{b}$ and $W_1$ to minimize the average delay. One relevant work is carried out in \cite{tamoor2016caching} where the authors investigate the storage-bandwidth tradeoffs for small cell \ac{BS}s that are subject to storage constraints. Subsequently, we formulate the weighted average delay joint caching and bandwidth minimization problem as
\begin{align}
\label{optimize_eqn3}
\textbf{P4:} \quad \quad&\underset{\textbf{b},{\rm W}_1}{\text{min}} \quad D(\textbf{b},W_1) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber \\
& 0 \leq W_1 \leq W, \\
\label{stab1}
&\zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big) < \mu_1, \\
\label{stab2}
&\zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k} < \mu_2,
\end{align}
where constraints (\ref{stab1}) and (\ref{stab2}) are the stability conditions for the queues $Q_1$ and $Q_2$, respectively. Although the objective function of \textbf{P4} is convex w.r.t. $W_1$, as derived below, the coupling of the optimization variables $\textbf{b}$ and $W_1$ makes \textbf{P4} a non-convex optimization problem. Therefore, \textbf{P4} cannot be solved directly using standard convex optimization techniques.
\textcolor{black}{By applying the \ac{BCD} optimization technique, \textbf{P4} can be solved in an iterative manner as follows. First, for a given caching probability $\textbf{b}$, we solve the bandwidth allocation subproblem. Afterwards, the obtained optimal bandwidth is used to update $\textbf{b}$}. The optimal bandwidth for the bandwidth allocation subproblem is given in the next lemma.
\begin{lemma}
The objective function of \textbf{P4} in (\ref{optimize_eqn3}) is convex w.r.t. $W_1$, and the optimal bandwidth allocation to the \ac{D2D} communication is given by
\begin{align}
\label{optimal-w-1}
W_1^* = \frac{\zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k}) +\varpi \big(\mathcal{O}_{2}W - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)}{\mathcal{O}_{1}+\varpi\mathcal{O}_{2}},
\end{align}
where $\overline{b}_i = 1 - b_i$ and $\varpi=\sqrt{\frac{\mathcal{O}_{1}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})}{\mathcal{O}_{2} \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}}}$.
\end{lemma}
\begin{proof}
$D(\textbf{b},W_1)$ can be written as
\begin{align}
\label{optimize_eqn3_p1}
\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-1} + \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-1}, \nonumber
\end{align}
The second derivative $\frac{\partial^2 D(\textbf{b},W_1)}{\partial W_1^2}$ is hence given by
\begin{align}
2\mathcal{O}_{1}^2\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-3} + 2\mathcal{O}_{2}^2\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-3}, \nonumber
\end{align}
The stability conditions require that $\mu_1 = \mathcal{O}_{1}W_1 > \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})$ and $\mu_2 =\mathcal{O}_{2}W_2 > \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}$. Also, $\overline{b}_i \geq \overline{b}_i^{k}$ by definition. Hence, $\frac{\partial^2 D(\textbf{b},W_1)}{\partial W_1^2} > 0$, and the objective function is a convex function of $W_1$. The optimal bandwidth allocation can be obtained from the \ac{KKT} conditions similar to problems \textbf{P2} and \textbf{P3}, with the details omitted for brevity.
\end{proof}
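In code, (\ref{optimal-w-1}) reads as follows (a transcription of ours; the final clip to $[0,W]$ enforces the bandwidth constraint):
\begin{verbatim}
import numpy as np

def optimal_w1(b, W, k, q, zeta_tot, O1, O2):
    # Closed-form bandwidth split of the lemma above.
    bb = 1.0 - np.asarray(b)
    a1 = np.sum(q * (bb - bb ** k))   # D2D traffic weight
    a2 = np.sum(q * bb ** k)          # BS traffic weight
    varpi = np.sqrt((O1 * a1) / (O2 * a2))
    w1 = ((zeta_tot * a1 + varpi * (O2 * W - zeta_tot * a2))
          / (O1 + varpi * O2))
    return float(np.clip(w1, 0.0, W))  # enforce 0 <= W1 <= W
\end{verbatim}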
Given $W_1^*$ from the bandwidth allocation subproblem, the caching probability subproblem can be written as
\begin{align}
\textbf{P5:} \quad \quad&\underset{\textbf{b}}{\text{min}} \quad D(\textbf{b},W_1^*) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}), (\ref{stab1}), (\ref{stab2}) \nonumber
\end{align}
The caching probability subproblem \textbf{P5} is a sum of two fractional functions, where the first fraction has the form of a concave function over a convex function, while the second fraction has the form of a convex function over a concave function. The first fraction structure, i.e., concave over convex, renders solving this problem using fractional programming (FP) very challenging.\footnote{A quadratic transform technique for tackling the multiple-ratio concave-convex FP problem has recently been used to solve a minimization of fractional functions of the form convex over concave, whereby an equivalent problem is solved with the objective function reformulated as a difference between a convex and a concave function \cite{shen2018fractional}.}
\textcolor{blue}{Moreover, the constraint (\ref{const110}) is concave w.r.t. $\textbf{b}$.
Hence, we adopt the interior point method to obtain a locally optimal solution of $\textbf{b}$ given the optimal bandwidth $W_1^*$, which depends on the initial value input to the algorithm \cite{boyd2004convex}. Nonetheless, we can increase the probability of finding a near-optimal solution of problem \textbf{P5} by running the interior point method with multiple random initial values and then picking the solution with the lowest weighted average delay. The described procedure is repeated until the value of \textbf{P4}'s objective function converges to a pre-specified accuracy; a minimal sketch of this iterative procedure is given below.}
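To make the procedure concrete, the rough Python sketch below (ours) alternates the two subproblems; SLSQP with multiple random initial points stands in for the interior point method, and it reuses weighted\_delay() and optimal\_w1() from the earlier sketches. A careful implementation would treat the hard stability boundary more gracefully than returning $\infty$:
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def bcd_delay(W, k, q, M, zeta_tot, O1, O2, iters=20, restarts=5, seed=0):
    # BCD for P4: closed-form W1 step, then a numerical solve of the
    # caching subproblem P5 from several random initial points.
    rng = np.random.default_rng(seed)
    n = len(q)
    b = np.full(n, M / n)                        # feasible start
    w1 = W / 2.0
    for _ in range(iters):
        w1 = optimal_w1(b, W, k, q, zeta_tot, O1, O2)   # bandwidth step
        best = None
        for _ in range(restarts):                # multiple random initials
            x0 = rng.dirichlet(np.ones(n)) * M   # random point on sum = M
            res = minimize(weighted_delay, x0,
                           args=(w1, W, k, q, zeta_tot, O1, O2),
                           method="SLSQP", bounds=[(0.0, 1.0)] * n,
                           constraints=[{"type": "eq",
                                         "fun": lambda x: x.sum() - M}])
            if res.success and (best is None or res.fun < best.fun):
                best = res
        if best is not None:
            b = best.x                           # caching step
    return b, w1, weighted_delay(b, w1, W, k, q, zeta_tot, O1, O2)
\end{verbatim}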
\section{Numerical Results}
\begin{table}[ht]
\caption{Simulation Parameters}
\centering
\begin{tabular}{c c c}
\hline\hline
Description & Parameter & Value \\ [0.5ex]
\hline
\textcolor{black}{System bandwidth} & W & \SI{20}{\mega\hertz} \\
\ac{BS} transmission power & $P_b$ & \SI{43}{\deci\bel\of{m}} \\
\ac{D2D} transmission power & $P_d$ & \SI{23}{\deci\bel\of{m}} \\
Displacement standard deviation & $\sigma$ & \SI{10}{\metre} \\
Popularity index&$\beta$&1\\
Path loss exponent&$\alpha$&4\\
Library size&$N_f$&500 files\\
Cache size per device&$M$&10 files\\
Average number of devices per cluster&$\overline{n}$&5\\
Density of $\Phi_p$&$\lambda_{p}$&20 clusters/\SI{}{km}$^2$ \\
Average content size&$\textcolor{black}{\overline{S}}$&\SI{5}{MBytes} \\
$\mathrm{SIR}$ threshold&$\theta$&\SI{0}{\deci\bel}\\
\textcolor{black}{Total request arrival rate}&$\zeta_{tot}$&\SI{2}{request/sec}\\
\hline
\end{tabular}
\label{ch3:table:sim-parameter}
\end{table}
At first, we validate the developed mathematical model via Monte Carlo simulations. Then we benchmark the proposed caching scheme against conventional caching schemes. Unless otherwise stated, the network parameters are selected as shown in Table \ref{ch3:table:sim-parameter}.
\subsection{Offloading Gain Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/prob_r_geq_r0.eps}
\caption {The probability that the \ac{D2D} achievable rate is greater than a threshold $R_0$ versus standard deviation $\sigma$.}
\label{prob_r_geq_r0}
\end{center}
\end{figure}
In this subsection, we present the offloading gain performance for the proposed caching model.
In Fig.~\ref{prob_r_geq_r0}, we verify the accuracy of the analytical results for the probability $\mathbb{P}(R_1>R_0)$. The theoretical and simulated results are plotted together, and they are consistent. We can observe that the probability $\mathbb{P}(R_1>R_0)$ decreases monotonically with the increase of $\sigma$. This is because as $\sigma$ increases, the serving distance increases and the inter-cluster interfering distance between the out-of-cluster interferers and the typical device decreases; equivalently, the $\mathrm{SIR}$ decreases. It is also shown that $\mathbb{P}(R_1>R_0)$ decreases with the $\mathrm{SIR}$ threshold $\theta$, as the channel becomes more prone to outage when $\theta$ increases.
\begin{figure*}
\centering
\subfigure[$p=p^*$. ]{\includegraphics[width=3.0in]{Figures/ch3/histogram_b_i_p_star1.eps}
\label{histogram_b_i_p_star}}
\subfigure[$p > p^*$.]{\includegraphics[width=3.0in]{Figures/ch3/histogram_b_i_p_leq_p_star1.eps}
\label{histogram_b_i_p_leq_p_star}}
\caption{Histogram of the optimal caching probability $\textbf{b}^*$ when (a) $p=p^*$ and (b) $p > p^*$.}
\label{histogram_b_i}
\end{figure*}
To show the effect of $p$ on the caching probability, in Fig.~\ref{histogram_b_i}, we plot the histogram of the optimal caching probability at different values of $p$, where $p=p^*$ in Fig.~\ref{histogram_b_i_p_star} and $p>p^*$ in Fig.~\ref{histogram_b_i_p_leq_p_star}. It is clear from the histograms that the optimal caching probability $\textbf{b}^*$ tends to be more skewed when $p > p^*$, i.e., when $\mathbb{P}(R_1>R_0)$ decreases. This shows that file sharing is more difficult when $p$ is larger than the optimal access probability. More precisely, for $p>p^*$, the outage probability is high due to the aggressive interference. In such a low coverage probability regime, each device tends to cache the most popular files leading to fewer opportunities of content transfer between the devices.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/offloading_prob_cach_compare1.eps}
\caption {The offloading probability versus the popularity of files $\beta$ under different caching schemes, namely, \ac{PC}, Zipf, and \ac{CPF} ($\overline{n}=10$, $\sigma=$\SI{5}{\metre}).}
\label{offloading_gain_vs_beta}
\end{center}
\end{figure}
Last but not least, Fig.~\ref{offloading_gain_vs_beta} manifests the prominent effect of the files' popularity on the offloading gain. We compare the offloading gain of three different caching schemes, namely, the proposed \ac{PC}, Zipf's caching (Zipf), and \ac{CPF}. We can see that the offloading gain under the \ac{PC} scheme attains the best performance compared to the other schemes. Also, we note that the \ac{PC} and Zipf schemes achieve the same offloading gain when $\beta=0$ owing to the uniformity of content popularity.
\subsection{\textcolor{black}{Energy Consumption Results}}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/energy_vs_beta7.eps}
\caption {Normalized energy consumption versus popularity exponent $\beta$.}
\label{energy_vs_beta}
\end{center}
\end{figure}
The results in this part are given for the energy consumption.
Fig.~\ref{energy_vs_beta} shows the energy consumption, \textcolor{black}{normalized to the mean number of devices per cluster}, versus $\beta$ under different caching schemes, namely, \ac{PC}, Zipf, and \ac{CPF}. We can see that the minimized energy consumption under the proposed \ac{PC} scheme attains the best performance compared to the other schemes. Also, it is clear that the consumed energy decreases with $\beta$. This can be justified by the fact that as $\beta$ increases, fewer files are frequently requested, and these files are more likely to be cached among the devices under the \ac{PC}, \ac{CPF}, and Zipf schemes. These few files are therefore downloadable from the devices via low-power \ac{D2D} communication.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/energy_vs_n6.eps}
\caption {\textcolor{black}{Normalized energy consumption versus the mean number of devices per cluster.}}
\label{energy_vs_n}
\end{center}
\end{figure}
We plot the normalized energy consumption versus the \textcolor{black}{mean number of devices per cluster} in Fig.~\ref{energy_vs_n}. First, we see that energy consumption decreases with the mean number of devices per cluster. As the number of devices per cluster increases, it is more probable to obtain requested files via low power \ac{D2D} communication. When the number of devices per cluster is relatively large, the normalized energy consumption tends to flatten as most of the content becomes cached at the cluster devices.
\subsection{Delay Results}
\begin{figure*} [!h]
\centering
\subfigure[Weighted average delay versus the popularity exponent $\beta$. ]{\includegraphics[width=2.8in]{Figures/ch3/delay_compare3.eps}
\label{delay_compare}}
\subfigure[Optimal allocated bandwidth versus the popularity exponent $\beta$.]{\includegraphics[width=2.8in]{Figures/ch3/BW_compare4.eps}
\label{BW_compare}}
\caption{Evaluation and comparison of average delay for the proposed joint \ac{PC} and bandwidth allocation scheme with different baseline schemes against popularity exponent $\beta$, $N_f = 100$, $M = 4$, $k = 8$.}
\label{delay-analysis}
\end{figure*}
\textcolor{blue}{The results in this part are devoted to the average delay metric. The performance of the proposed joint \ac{PC} and bandwidth allocation scheme is evaluated in Fig.~\ref{delay-analysis}, and the optimized bandwidth allocation is also shown. Firstly, in Fig.~\ref{delay_compare}, we compare the average delay for two different caching schemes, namely, \ac{PC} and Zipf's scheme. We can see that the minimized average delay under the proposed joint \ac{PC} and bandwidth allocation scheme attains substantially better performance than the Zipf scheme with fixed bandwidth allocation (i.e., $W_1=W_2=W/2$). Also, we see that, in general, the average delay monotonically decreases with $\beta$, as fewer files account for the bulk of the demand.
Secondly, Fig.~\ref{BW_compare} manifests the effect of the files' popularity $\beta$ on the allocated bandwidth. It is shown that the optimal \ac{D2D} allocated bandwidth $W_1^*$ keeps increasing with $\beta$. This can be interpreted as follows. When $\beta$ increases, fewer files become highly demanded, and these files can be entirely cached among the devices. To cope with the correspondingly larger number of requests served via \ac{D2D} communication, the \ac{D2D} allocated bandwidth needs to be increased.}
\textcolor{blue}{Last but not least, Fig.~\ref{scaling} shows the geometrical scaling effects on the system performance, e.g., the effect of clusters' density $\lambda_p$ and the displacement standard deviation $\sigma$ on the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$, optimal allocated bandwidth $W_1^*$, and the average delay. In Fig.~\ref{cache_size}, we plot the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$. It is clear from the plot that $\mathrm{P_{c_d}}$ monotonically decreases with both $\sigma$ and $\lambda_p$. Obviously, increasing $\sigma$ and $\lambda_p$ results in larger serving distance, i.e., higher path-loss effect, and shorter interfering distance, i.e., higher interference power received by the typical device, respectively. This explains the encountered degradation for $\mathrm{P_{c_d}}$ with $\sigma$ and $\lambda_p$.
In Fig.~\ref{optimal-w}, we plot the optimal allocated bandwidth $W_1^*$, normalized to $W$, versus the displacement standard deviation $\sigma$ for different clusters' densities $\lambda_p$. Here also, it is quite obvious that $W_1^*$ tends to increase with both $\sigma$ and $\lambda_p$. This behavior can be directly understood from (\ref{optimal-w-1}), where $W_1^*$ is inversely proportional to $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, and $\mathrm{P_{c_d}}$ decreases with $\sigma$ and $\lambda_p$ as discussed above. More precisely, while the \ac{D2D} service rate $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$ degrades as $\mathrm{P_{c_d}}$ decreases, the optimal allocated bandwidth $W_1^*$ tends to increase so as to compensate for this service rate degradation and, eventually, minimize the weighted average delay.
In Fig.~\ref{av-delay}, we plot the weighted average delay versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$. Following the same interpretations as in Fig.~\ref{cache_size} and Fig.~\ref{optimal-w}, we can notice that the weighted average delay monotonically increases with $\sigma$ and $\lambda_p$.}
\begin{figure*} [t!]
\vspace{-0.5cm}
\centering
\subfigure[\ac{D2D} coverage probability $\mathrm{P_{c_d}}$ versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/d2d-coverage-prob.eps}
\label{cache_size}}
\subfigure[Optimal allocated bandwidth versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/bw-allocated.eps}
\label{optimal-w}}
\subfigure[Weighted average delay versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/delay-sigma-lamda.eps}
\label{av-delay}}
\caption{Effect of geometrical parameters, e.g., clusters' density $\lambda_p$ and the displacement standard deviation $\sigma$ on the system performance, $\beta= 0.5$, $N_f = 100$, $M = 4$, $k = 8$.}
\label{scaling}
\end{figure*}
\section{Conclusion}
In this work, we conduct a comprehensive analysis of the joint communication and caching for a clustered \ac{D2D} network with random probabilistic caching incorporated at the devices. We first maximize the offloading gain of the proposed system by jointly optimizing the channel access and caching probability. We obtain the optimal channel access probability, and the optimal caching probability is then characterized. We show that deviating from the optimal access probability $p^*$ makes file sharing more difficult. More precisely, the system is too conservative for small access probabilities, while the interference is too aggressive for larger access probabilities. Then, we minimize the energy consumption of the proposed clustered \ac{D2D} network. We formulate the energy minimization problem and show that it is convex and the optimal caching probability is obtained. We show that a content with a large size or low popularity has a small probability to be cached. Finally, we adopt a queuing model for the devices' traffic within each cluster to investigate the network average delay. Two M/G/1 queues are employed to model the \ac{D2D} and \ac{BS}-to-Device communications. We then derive an expression for the weighted average delay per request. We observe that the average delay is dependent on the caching probability and bandwidth allocated, which control respectively the arrival rates and service rates for the two modeling queues. Therefore, we minimize the per request weighted average delay by jointly optimizing bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the \ac{BCD} optimization technique, the joint minimization problem can be solved in an iterative manner. Results show up to $10\%$, $17\%$, and $300\%$ improvement gain in the offloading gain, energy consumption, and average delay, respectively, compared to the Zipf's caching technique.
\begin{appendices}
\section{Proof of lemma 1}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ can be evaluated as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= \mathbb{E} \Bigg[e^{-s \sum_{\Phi_p^{!}} \sum_{y \in \mathcal{B}^p} g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\Phi_{cp},g_{y_{x}}} \prod_{y \in \mathcal{B}^p} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\Phi_{cp}} \prod_{y \in \mathcal{B}^p} \mathbb{E}_{g_{y_{x}}} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(a)}{=} \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\Phi_{cp}} \prod_{y \in \mathcal{B}^p} \frac{1}{1+s \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(b)}{=} \mathbb{E}_{\Phi_p} \prod_{\Phi_p^{!}} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big)\Big)\dd{x}\Bigg)
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$; (a) follows from the Rayleigh fading assumption, (b)
follows from the \ac{PGFL} of Gaussian \ac{PPP} $\Phi_{cp}$, and (c) follows from the \ac{PGFL} of the parent \ac{PPP} $\Phi_p$. By using change of variables $z = x + y$ with $\dd z = \dd y$, we proceed as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &\overset{}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z\rVert^{-\alpha}}\Big)f_Y(z-x)\dd{z}\Big)\Big)\dd{x}\Bigg) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{u=0}^{\infty}\Big(1 - \frac{1}{1+s u^{-\alpha}}\Big)f_U(u|v)\dd{u}\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\big(-p\overline{n} \int_{u=0}^{\infty}
\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}\big)v\dd{v}\Big)\Bigg) \nonumber
\\
&= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where (d) follows from converting the Cartesian coordinates to polar coordinates with $u=\lVert z\rVert$ and $v=\lVert x\rVert$.
\textcolor{blue}{To clarify how in (d) the normal distribution $f_Y(z-x)$ is converted into the Rice distribution $f_U(u|v)$, consider a remote cluster centered at $x \in \Phi_p^!$, at a distance $v=\lVert x\rVert$ from the origin. Every interfering device belonging to the cluster centered at $x$ has its coordinates in $\mathbb{R}^2$ chosen independently from a Gaussian distribution with standard deviation $\sigma$. Then, by definition, the distance from such an interfering device to the origin, denoted as $u$, has a Rice distribution, given by $f_U(u|v)=\frac{u}{\sigma^2}\mathrm{exp}\big(-\frac{u^2 + v^2}{2\sigma^2}\big) I_0\big(\frac{uv}{\sigma^2}\big)$, where $I_0$ is the modified Bessel function of the first kind with order zero and $\sigma$ is the scale parameter.
Hence, Lemma 1 is proven.}
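As a numerical sanity check (not part of the derivation), the double integral of Lemma 1 is easy to evaluate directly; the following Python sketch uses illustrative parameter values (all assumed, not taken from the paper) and the exponentially scaled Bessel function for numerical stability:
\begin{verbatim}
import numpy as np
from scipy import integrate, special

# Illustrative (assumed) parameters
lambda_p = 20e-6        # parent PPP density in clusters/m^2
p, n_bar = 0.3, 12      # access probability, mean devices per cluster
sigma, alpha = 30.0, 4.0

def rice_pdf(u, v):
    # f_U(u|v): Rice PDF with scale sigma; i0e is the exponentially
    # scaled Bessel I0, which avoids overflow for large arguments
    return (u / sigma**2) * special.i0e(u * v / sigma**2) \
           * np.exp(-(u - v)**2 / (2 * sigma**2))

def phi(s, v):
    # phi(s, v) = int_0^inf s / (s + u^alpha) f_U(u|v) du
    g = lambda u: s / (s + u**alpha) * rice_pdf(u, v)
    return integrate.quad(g, 0, np.inf, limit=200)[0]

def laplace_inter(s):
    # Lemma 1: exp(-2 pi lambda_p int_0^inf (1 - e^{-p n_bar phi}) v dv)
    g = lambda v: (1.0 - np.exp(-p * n_bar * phi(s, v))) * v
    val = integrate.quad(g, 0, np.inf, limit=200)[0]
    return np.exp(-2.0 * np.pi * lambda_p * val)

print(laplace_inter(s=1.0e6))   # s = theta r^alpha / P_d, for example
\end{verbatim}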
\section{Proof of Lemma 2}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$, conditioned on the distance $v_0$ from the cluster center to the origin (see Fig.~\ref{distance}), is written as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &= \mathbb{E} \Bigg[e^{-s \sum_{y \in \mathcal{A}^p} g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \Bigg] \nonumber
\\
&= \mathbb{E}_{\Phi_{cp},g_{y_{x_0}}} \prod_{y \in\mathcal{A}^p} e^{-s g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&= \mathbb{E}_{\Phi_{cp}} \prod_{y \in\mathcal{A}^p} \mathbb{E}_{g_{y_{x_0}}} e^{-s g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&\overset{(a)}{=} \mathbb{E}_{\Phi_{cp}} \prod_{y \in\mathcal{A}^p} \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}
\nonumber
\\
&\overset{(b)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}\Big)f_{Y}(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z_0\rVert^{-\alpha}}\Big)f_{Y}(z_0-x_0)\dd{z_0}\Big)
\end{align}
where (a) follows from the Rayleigh fading assumption, (b) follows from the \ac{PGFL} of the Gaussian \ac{PPP} $\Phi_{cp}$, and (c) follows from the change of variables $z_0 = x_0 + y$ with $\dd{z_0} = \dd{y}$. By converting the Cartesian coordinates to polar coordinates, with $h=\lVert z_0\rVert$, we get
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &\overset{}{=} {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\Big(1 - \frac{1}{1+s h^{-\alpha}}\Big)f_H(h|v_0)\dd{h}\Big) \nonumber
\\
&= {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h|v_0)\dd{h}\Big)
\end{align}
By neglecting the correlation of the intra-cluster interfering distances as in \cite{clustered_twc}, i.e., neglecting the common part $x_0$ in the intra-cluster interfering distances $\lVert x_0 + y\rVert$, $y \in \mathcal{A}^p$, we get
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s) &\approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
Hence, Lemma 2 is proven.
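Analogously to the check after Lemma 1, the single-integral approximation of Lemma 2 can be evaluated with a few lines of Python (parameter values are assumed for illustration):
\begin{verbatim}
import numpy as np
from scipy import integrate

p, n_bar, sigma, alpha = 0.3, 12, 30.0, 4.0   # assumed values

def rayleigh_pdf(h):
    # f_H(h): Rayleigh PDF with scale sqrt(2) * sigma
    return (h / (2 * sigma**2)) * np.exp(-h**2 / (4 * sigma**2))

def laplace_intra(s):
    g = lambda h: s / (s + h**alpha) * rayleigh_pdf(h)
    return np.exp(-p * n_bar * integrate.quad(g, 0, np.inf)[0])

print(laplace_intra(1.0e6))
\end{verbatim}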
\section{Proof of Lemma 3}
First, to prove concavity, we proceed as follows.
\begin{align}
\frac{\partial \mathbb{P}_o}{\partial b_i} &= q_i + q_i\big(\overline{n}(1-b_i)e^{-\overline{n}b_i}-(1-e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0) \\
\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2} &= -q_i\big(\overline{n}e^{-\overline{n}b_i} + \overline{n}^2(1-b_i)e^{-\overline{n}b_i} + \overline{n}e^{-\overline{n}b_i}\big)\mathbb{P}(R_1>R_0)
\end{align}
It is clear that the second derivative $\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2}$ is always negative, and
$\frac{\partial^2 \mathbb{P}_o}{\partial b_i \partial b_j}=0$ for all $i\neq j$. Hence, the Hessian matrix \textbf{H}$_{i,j}$ of $\mathbb{P}_o(p^*,\textbf{b})$ w.r.t. $\textbf{b}$ is negative semidefinite, and $\mathbb{P}_o(p^*,\textbf{b})$ is a concave function of $\textbf{b}$. Moreover, the constraints are linear, which implies that the \ac{KKT} conditions are both necessary and sufficient for optimality. The Lagrangian function and the \ac{KKT} conditions are then employed to solve \textbf{P2}.
The Lagrangian function of the offloading gain maximization problem \textbf{P2} is given by
\begin{align}
\mathcal{L}(\textbf{b},w_i,\mu_i,v) =& \sum_{i=1}^{N_f} q_i b_i + q_i(1- b_i)(1-e^{-b_i\overline{n}})\mathbb{P}(R_1>R_0) \nonumber \\
&+ v(M-\sum_{i=1}^{N_f} b_i) + \sum_{i=1}^{N_f} w_i (b_i-1) - \sum_{i=1}^{N_f} \mu_i b_i
\end{align}
where $v$, $w_i$, and $\mu_i$ are the Lagrange multipliers associated with the equality constraint and the two inequality constraints, respectively. Now, the optimality conditions are written as
\begin{align}
\label{grad}
\frac{\partial \mathcal{L}(\textbf{b}^*,w_i^*,\mu_i^*,v^*)}{\partial b_i} = q_i + q_i&\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*}-(1-e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0) -v^* + w_i^* -\mu_i^*= 0 \\
&w_i^* \geq 0 \\
&\mu_i^* \leq 0 \\
&w_i^* (b_i^* - 1) =0 \\
&\mu_i^* b_i^* = 0\\
&(M-\sum_{i=1}^{N_f} b_i^*) = 0
\end{align}
\begin{enumerate}
\item $w_i^* > 0$: We have $b_i^* = 1$, $\mu_i^*=0$, and
\begin{align}
&q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)= v^* - w_i^* \nonumber \\
\label{cond1_offload}
&v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)
\end{align}
\item $\mu_i^* < 0$: We have $b_i^* = 0$, and $w_i^*=0$, and
\begin{align}
& q_i + \overline{n}q_i\mathbb{P}(R_1>R_0) = v^* + \mu_i^* \nonumber \\
\label{cond2_offload}
&v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)
\end{align}
\item $0 <b_i^*<1$: We have $w_i^*=\mu_i^*=0$, and
\begin{align}
\label{psii_offload}
v^{*} = q_i + q_i\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*} - (1 - e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0)
\end{align}
\end{enumerate}
By combining (\ref{cond1_offload}), (\ref{cond2_offload}), and (\ref{psii_offload}), with the fact that $\sum_{i=1}^{N_f} b_i^*=M$, Lemma 3 is proven.
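Numerically, the threshold structure above lends itself to a nested bisection: the left-hand side of (\ref{psii_offload}) is monotonically decreasing in $b_i^*$, and $\sum_i b_i^*(v^*)$ is non-increasing in $v^*$. The following Python sketch illustrates this; the Zipf popularity profile and the value of $\mathbb{P}(R_1>R_0)$ are assumptions chosen only for illustration:
\begin{verbatim}
import numpy as np

# Assumed instance: Zipf popularity and a fixed P(R1 > R0)
N_f, M, n_bar, beta, P_hit = 100, 8, 12, 0.8, 0.35
q = np.arange(1, N_f + 1.0) ** -beta
q /= q.sum()

def dPo_db(qi, b):
    # per-file derivative of P_o, cf. the gradient condition above
    return qi + qi * (n_bar * (1 - b) * np.exp(-n_bar * b)
                      - (1 - np.exp(-n_bar * b))) * P_hit

def b_of_v(v):
    b = np.empty(N_f)
    for i, qi in enumerate(q):
        if v < dPo_db(qi, 1.0):       # below the b_i* = 1 threshold
            b[i] = 1.0
        elif v > dPo_db(qi, 0.0):     # above the b_i* = 0 threshold
            b[i] = 0.0
        else:                         # dPo/db is decreasing in b
            lo, hi = 0.0, 1.0
            for _ in range(60):
                mid = 0.5 * (lo + hi)
                lo, hi = (mid, hi) if dPo_db(qi, mid) > v else (lo, mid)
            b[i] = 0.5 * (lo + hi)
    return b

# Outer bisection on v*: sum_i b_i*(v) is non-increasing in v
lo, hi = 0.0, float(dPo_db(q[0], 0.0))
for _ in range(60):
    v = 0.5 * (lo + hi)
    lo, hi = (lo, v) if b_of_v(v).sum() < M else (v, hi)
b_star = b_of_v(0.5 * (lo + hi))
print(b_star.sum())                   # ~= M, the cache size constraint
\end{verbatim}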
\section{Proof of Lemma 6}
Under the assumption of one active \ac{D2D} link within a cluster, there is no intra-cluster interference. Moreover, the Laplace transform of the inter-cluster interference is similar to that of a \ac{PPP} \cite{andrews2011tractable} whose density equals that of the parent \ac{PPP}. This is indeed a consequence of the displacement theorem of the \ac{PPP} \cite{daley2007introduction}, since each interferer is a point of the parent \ac{PPP} that is displaced randomly and independently of all other points. For the sake of completeness, we prove it here. Starting from the third line of the proof of Lemma 1, we get
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &\overset{(a)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \int_{v=0}^{\infty}\mathbb{E}_{u|v}\Big[1 -
e^{-s P_d g_{u} u^{-\alpha}} \Big]v\dd{v}\Bigg), \nonumber \\
&= \text{exp}\Big(-2\pi\lambda_p \mathbb{E}_{g_{u}} \big[\int_{v=0}^{\infty}\int_{u=0}^{\infty}\big(1 - e^{-s P_d g_{u} u^{-\alpha}} \big)f_U(u|v)\dd{u}v\dd{v}\big]\Big) \nonumber \\
\label{prelaplace}
&\overset{(b)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \underbrace{\int_{v=0}^{\infty}v\dd{v} - \int_{v=0}^{\infty}\int_{u=0}^{\infty} e^{-s P_d g_{u} u^{-\alpha}} f_{U}(u|v)\dd{u} v \dd{v}}_{\mathcal{R}(s,\alpha)}\Bigg)
\end{align}
where (a) follows from the \ac{PGFL} of the parent \ac{PPP} \cite{andrews2011tractable}, and (b) follows from $\int_{u=0}^{\infty} f_{U}(u|v)\dd{u} =1$. Now, we proceed by calculating the integrands of $\mathcal{R}(s,\alpha)$ as follows.
\begin{align}
\mathcal{R}(s,\alpha)&\overset{(c)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}
\int_{v=0}^{\infty} f_{U}(u|v)v \dd{v}\dd{u}\nonumber \\
&\overset{(d)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}u\dd{u} \nonumber \\
&\overset{(e)}{=} \int_{u=0}^{\infty}(1 - e^{-s P_d g_{u} u^{-\alpha}})u\dd{u} \nonumber \\
&\overset{(f)}{=} \frac{(s P_d g_{u})^{2/\alpha}}{\alpha} \int_{t=0}^{\infty}(1 - e^{-t})t^{-1-\frac{2}{\alpha}}\dd{t} \nonumber \\
\label{laplaceppp1}
&\overset{(g)}{=} \frac{(s P_d)^{2/\alpha}}{2} g_{u}^{2/\alpha} \Gamma(1 - 2/\alpha),
\end{align}
where (c) follows from changing the order of integration, (d) follows from $ \int_{v=0}^{\infty} f_{U}(u|v)v\dd{v} = u$, (e) follows from renaming the dummy variable $v$ as $u$ and combining the two integrals, (f) follows from the change of variables $t=s P_d g_{u}u^{-\alpha}$, and (g) follows from solving the integral in (f) by parts. Substituting the obtained value of $\mathcal{R}(s,\alpha)$ into (\ref{prelaplace}), and taking the expectation over the exponential random variable $g_u$, with the fact that $\mathbb{E}_{g_{u}} [g_{u}^{2/\alpha}] = \Gamma(1 + 2/\alpha)$, we get
\begin{align}
\label{laplace_trans}
\mathscr{L}_{I_{\Phi_p^{!}}} (s)&= {\rm exp}\Big(-\pi\lambda_p (sP_d )^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)\Big),
\end{align}
Substituting this expression together with the distance \ac{PDF} $f_R(r)$ into the coverage probability equation yields
\begin{align}
\mathrm{P_{c_d}} &=\int_{r=0}^{\infty}
{\rm e}^{-\pi\lambda_p (sP_d)^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}\frac{r}{2\sigma^2}{\rm e}^{\frac{-r^2}{4\sigma^2}} \dd{r} \nonumber \\
&\overset{(h)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-\pi\lambda_p \theta^{2/\alpha}r^{2} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}{\rm e}^{\frac{-r^2}{4\sigma^2}} \dd{r} \nonumber \\
&\overset{(i)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-r^2Z(\theta,\alpha,\sigma)} \dd{r} \nonumber \\
&\overset{}{=} \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)}
\end{align}
where (h) follows from the substitution $s = \frac{\theta r^{\alpha}}{P_d}$, and (i) follows from the definition $Z(\theta,\alpha,\sigma) = \pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2}$.
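The last integration step can be double-checked numerically; a small Python sketch with assumed parameter values:
\begin{verbatim}
import numpy as np
from scipy import integrate
from scipy.special import gamma

lambda_p, sigma, alpha, theta = 20e-6, 30.0, 4.0, 1.0  # assumed
Z = np.pi * lambda_p * theta**(2/alpha) \
    * gamma(1 + 2/alpha) * gamma(1 - 2/alpha) + 1/(4 * sigma**2)
closed_form = 1 / (4 * sigma**2 * Z)
numeric = integrate.quad(
    lambda r: r / (2 * sigma**2) * np.exp(-Z * r**2), 0, np.inf)[0]
print(closed_form, numeric)   # the two values coincide
\end{verbatim}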
\end{appendices}
\bibliographystyle{IEEEtran}
\section{Introduction}
Caching at mobile devices significantly improves system performance by facilitating \ac{D2D} communications, which enhances the spectrum efficiency and alleviates the heavy burden on backhaul links \cite{Femtocaching}. Modeling cache-enabled heterogeneous networks, including \ac{SBS}s and mobile devices, follows two main directions in the literature. The first line of work focuses on fundamental throughput scaling results by assuming a simple channel model \cite{Femtocaching,golrezaei2014base,8412262,amer2017delay}, known as the protocol model, where two devices can communicate if they are within a certain distance. The second line of work, defined as the physical interference model, considers a more realistic model for the underlying physical layer \cite{andreev2015analyzing,cache_schedule}. In the following, we review some of the works relevant to the second line, focusing mainly on the \ac{EE} and delay analysis of wireless caching networks.
The physical interference model is based on the fundamental \ac{SIR} metric and is therefore applicable to any wireless communication system. Modeling devices' locations as a \ac{PPP} is widely employed in the literature, especially in the wireless caching area \cite{andreev2015analyzing,cache_schedule,energy_efficiency,ee_BS,hajri2018energy}. However, a realistic model for \ac{D2D} caching networks requires that a given device typically has multiple proximate devices, any of which can potentially act as a serving device. This deployment is known as clustered devices' deployment, which can be characterized by cluster processes \cite{haenggi2012stochastic}. Unlike the popular \ac{PPP} approach, the authors in \cite{clustered_twc,clustered_tcom,8070464} developed a stochastic geometry-based model to characterize the performance of content placement in clustered \ac{D2D} networks. In \cite{clustered_twc}, the authors discuss two strategies of content placement in a \ac{PCP} deployment: first, each device randomly chooses its serving device from its local cluster; second, each device connects to its $k$-th closest transmitting device in its local cluster. The authors characterize the optimal number of \ac{D2D} transmitters that must be simultaneously activated in each cluster to maximize the area spectral efficiency. The performance of cluster-centric content placement is characterized in \cite{clustered_tcom}, where the content of interest in each cluster is cached closer to the cluster center, such that the collective performance of all the devices in each cluster is optimized. Inspired by the Matern hard-core point process, which captures pairwise interactions between nodes, the authors in \cite{8070464} devised a novel spatially correlated caching strategy called \ac{HCP} such that the \ac{D2D} devices caching the same content are never closer to each other than the exclusion radius.
Energy efficiency in wireless caching networks is widely studied in the literature \cite{energy_efficiency,ee_BS,hajri2018energy}.
For example, an optimal caching problem is formulated in \cite{energy_efficiency} to minimize the energy consumption of a wireless network. The authors consider a cooperative wireless caching network where relay nodes cooperate with the devices to cache the most popular files in order to minimize energy consumption. In \cite{ee_BS}, the authors investigate how caching at \ac{BS}s can improve the \ac{EE} of wireless access networks. The condition under which the \ac{EE} benefits from caching is characterized, and the optimal cache capacity that maximizes the network \ac{EE} is found. It is shown that the \ac{EE} benefit of caching depends on content popularity, backhaul capacity, and interference level.
The authors in \cite{hajri2018energy} exploit the spatial repartitions of devices and the correlation in their content popularity profiles to improve the achievable EE. The \ac{EE} optimization problem is decoupled into two related subproblems, the first one addresses the issue of content popularity modeling, and the second subproblem investigates the impact of exploiting the spatial repartitions of devices. It is shown that the small base station allocation algorithm improves the energy efficiency and hit probability. However, the problem of \ac{EE} for \ac{D2D} based caching is not yet addressed in the literature.
Recently, the joint optimization of delay and energy in wireless caching has been conducted; see, for instance, \cite{wu2018energy,huang2018energy,jiang2018energy}. The authors in \cite{wu2018energy} jointly optimize the delay and energy in a cache-enabled dense small cell network. They formulate the energy-delay optimization problem as a mixed integer programming problem, where file placement, device association to the small cells, and power control are jointly considered. To model the tradeoff between energy consumption and end-to-end file delivery delay, a utility function linearly combining these two metrics is used as the objective function of the optimization problem. An efficient algorithm is proposed to approach the optimal association and power solution, which can achieve the optimal tradeoff between energy consumption and end-to-end file delivery delay. In \cite{huang2018energy}, the authors show that, with caching, the energy consumption can be reduced by extending the transmission time; however, energy may be wasted if the device never needs the cached content. Based on the random content request delay, the authors study the maximization of \ac{EE} subject to a hard delay constraint in an additive white Gaussian noise channel. It is shown that the \ac{EE} of a system with caching can be significantly improved with increasing content request probability and target transmission rate, compared with the traditional on-demand scheme, in which the \ac{BS} transmits a content file only after it is requested by the user. However, the problem of energy consumption and joint communication and caching for clustered \ac{D2D} networks has not yet been addressed in the literature.
In this paper, we conduct a comprehensive performance analysis and optimization of joint communication and caching for a clustered \ac{D2D} network, where the devices have unused memory to cache some files, following a random probabilistic caching scheme. Our network model effectively characterizes the stochastic nature of channel fading and the clustered geographic locations of devices. Furthermore, this paper emphasizes the need to consider the traffic dynamics and rate of requests when studying the delay incurred to deliver requests to devices. To the best of our knowledge, our work is the first in the literature that conducts a comprehensive spatial analysis of a doubly \ac{PCP} (also called doubly \ac{PPP} \cite{haenggi2012stochastic}) with the devices adopting a slotted-ALOHA random access technique to access a shared channel. The key advantage of adopting the slotted-ALOHA access protocol is that it is a simple yet fundamental medium access control (MAC) protocol, wherein no central controller exists to schedule the users' transmissions.
We also incorporate the spatio-temporal analysis in wireless caching networks by combining tools from stochastic geometry and queuing theory in order to analyze and minimize the average delay (see, for instance, \cite{zhong2015stability,zhong2017heterogeneous,7917340}).
The main contributions of this paper are summarized below.
\begin{itemize}
\item We consider a Thomas cluster process (TCP) where the devices are spatially distributed as groups in clusters. The clusters' centers are drawn from a parent PPP, and the clusters' members are normally distributed around the centers, forming a Gaussian PPP. This organization of the parent and offspring PPPs forms the so-called doubly PPP.
\item We conduct the coverage probability analysis where the devices adopt a slotted-ALOHA random access technique. We then jointly optimize the access probability and caching probability to maximize the cluster offloading gain. We obtain the optimal channel access probability, and a closed-form solution of the optimal caching sub-problem is provided. The energy consumption problem is then formulated and shown to be convex, and the optimal caching probability is derived.
\item By combining tools from stochastic geometry as well as queuing theory,
we minimize the per request weighted average delay by jointly optimizing bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the block coordinate descent (BCD) optimization technique, the joint minimization problem is solved in an iterative manner.
\item We validate our theoretical findings via simulations. Results show a significant improvement in the network performance metrics, namely, the offloading gain, energy consumption, and average delay as compared to other caching schemes proposed earlier in literature.
\end{itemize}
The rest of this paper is organized as follows. Section II and Section III discuss the system model and the offloading gain, respectively. The energy consumption is discussed in Section IV and the delay analysis is conducted in Section V. Numerical results are then presented in Section VI before we conclude the paper in Section VII.
\section{System Model}
\subsection{System Setup}
We model the location of the mobile devices with a \ac{TCP} in which the parent points are drawn from a \ac{PPP} $\Phi_p$ with density $\lambda_p$, and the daughter points are drawn from a Gaussian \ac{PPP} around each parent point. In fact, the \ac{TCP} is considered a doubly \ac{PCP} where the daughter points are normally scattered with variance $\sigma^2$ per dimension around each parent point \cite{haenggi2012stochastic}.
The parent points and offspring are referred to as cluster centers and cluster members, respectively. The number of cluster members in each cluster is a Poisson random variable with mean $\overline{n}$. The density function of the location of a cluster member relative to its cluster center is
\begin{equation}
f_Y(y) = \frac{1}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad \quad y \in \mathbb{R}^2
\label{pcp}
\end{equation}
where $\lVert .\rVert$ is the Euclidean norm. The intensity function of a cluster is given by $\lambda_c(y) = \frac{\overline{n}}{2\pi\sigma^2}\textrm{exp}\big(-\frac{\lVert y\rVert^2}{2\sigma^2}\big)$. Therefore, the intensity of the entire process is given by $\lambda = \overline{n}\lambda_p$. We assume that the BSs' distribution follows another \ac{PPP} $\Phi_{bs}$ with density $\lambda_{bs}$, which is independent of $\Phi_p$.
\subsection{Content Popularity and Probabilistic Caching Placement}
We assume that each device has a surplus memory of size $M$ designated for caching files.
The total number of files is $N_f> M$ and the set (library) of content indices is denoted as $\mathcal{F} = \{1, 2, \dots , N_f\}$. These files represent the content catalog that all the devices in a cluster may request, which are indexed in a descending order of popularity. The probability that the $i$-th file is requested follows a Zipf's distribution given by,
\begin{equation}
q_i = \frac{ i^{-\beta} }{\sum_{k=1}^{N_f}k^{-\beta}},
\label{zipf}
\end{equation}
where $\beta$ is a parameter that reflects how skewed the popularity distribution is. For example, if $\beta= 0$, the popularity of the files follows a uniform distribution. Increasing $\beta$ increases the disparity among the files' popularities, such that lower-indexed files have higher popularity. By definition, $\sum_{i=1}^{N_f}q_i = 1$.
We use Zipf's distribution to model the popularity of files per cluster.
\ac{D2D} communication is enabled within each cluster to deliver popular content. It is assumed that the devices adopt a slotted-ALOHA medium access protocol where, during each time slot, each transmitter independently and randomly accesses the channel with the same probability $p$. This implies that multiple active \ac{D2D} links might coexist within a cluster. Therefore, $p$ is a design parameter that directly controls \textcolor{black}{(mainly)} the intra-cluster interference, as described later in the paper.
We adopt a random content placement where each device independently selects a file to cache according to a specific probability function $\textbf{b} = \{b_1, b_2, \dots, b_{N_{f}}\}$, where $b_i$ is the probability that a device caches the $i$-th file, with $0 \leq b_i \leq 1$ for all $i \in \{1, \dots, N_f\}$. To avoid duplicate caching of the same content within the memory of the same device, we follow the probabilistic caching approach proposed in \cite{blaszczyszyn2015optimal} and illustrated in Fig. \ref{prob_cache_example}.
\begin{figure
\vspace{-0.4cm}
\begin{center}
\includegraphics[width=1.5in]{Figures/ch3/prob_cache_exam}
\caption {The cache memory of size $M = 3$ is equally divided into $3$ blocks of unit size, over which the caching probabilities $b_i$ are laid end to end. A random number $\in [0,1]$ is generated, and from each block the content $i$ whose $b_i$ segment intersects the generated number is chosen. In this way, in the given example, the contents $\{1, 2, 4\}$ are chosen to be cached.}
\label{prob_cache_example}
\end{center}
\vspace{-0.8cm}
\end{figure}
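For concreteness, the block construction of Fig.~\ref{prob_cache_example} can be coded in a few lines. The following Python sketch is a minimal rendering of that construction (assuming $\sum_i b_i = M$ and $0 \le b_i \le 1$); it returns $M$ distinct files such that file $i$ is selected with probability $b_i$:
\begin{verbatim}
import numpy as np

def probabilistic_cache(b, rng=np.random.default_rng()):
    # Lay the probabilities end to end over [0, M], draw one uniform
    # offset in [0, 1], and pick the file covering offset + j in each
    # unit block j = 0, ..., M-1 (cf. the figure above).
    edges = np.cumsum(b)                  # right edge of each file's span
    M = int(round(edges[-1]))
    offset = rng.uniform()
    return np.searchsorted(edges, offset + np.arange(M), side='right')

b = np.array([0.7, 0.5, 0.4, 0.9, 0.5])  # sums to M = 3
print(probabilistic_cache(b))            # e.g. [0 1 3], i.e. files {1,2,4}
\end{verbatim}
Since the $M$ sample points are spaced exactly one unit apart and each file's span has length at most one, no file can be picked twice, while file $i$ is covered with probability exactly $b_i$.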
If a device caches the desired file, the device directly retrieves the content. However, if the device does not cache the file, the file can be downloaded from any neighboring device that caches the file (henceforth called catering device) in the same cluster. According to the proposed access model, the probability that a chosen catering device is admitted to access the channel is the access probability $p$. Finally, the device attaches to the nearest \ac{BS} as a last resort to download content that is not cached entirely within the device's cluster. We assume that the \ac{D2D} communication operates as out-of-band D2D. \textcolor{black}{$W_{1}$ and $W_{2}$ denote, respectively, the bandwidth allocated to the \ac{D2D} and \ac{BS}-to-Device communication, and the total system bandwidth is denoted as $W=W_{1} + W_{2}$. It is assumed that device requests are served in a random manner, i.e., among the cluster devices, one request is chosen uniformly at random to be scheduled and served.
In the following, we aim at studying and optimizing three important metrics, widely studied in the literature. The first metric is the offloading gain, which is defined as the probability of obtaining the requested file from the local cluster, either from the self-cache or from a neighboring device in the same cluster, with a rate higher than a required threshold $R_0$. The second metric is the energy consumption, which represents the dissipated energy when downloading files either from the \ac{BS}s or via \ac{D2D} communication. The third metric is the latency, which accounts for the weighted average delay over all the requests served via \ac{D2D} and \ac{BS}-to-Device communication.}
\section{Maximum Offloading Gain}
\begin{figure}[t]
\centering
\includegraphics[width=0.35\textwidth]{Figures/ch3/distance1.png}
\vspace{-0.4cm}
\caption {Illustration of the representative cluster and one interfering cluster.}
\label{distance}
\vspace{-0.5cm}
\end{figure}
Without loss of generality, we conduct the analysis for a cluster whose center is at $x_0\in \Phi_p$ (referred to as the representative cluster), and the device that requests the content (henceforth called the typical device) is located at the origin. We denote the location of the \ac{D2D} transmitter by $y_0$ relative to $x_0$, where $x_0, y_0\in \mathbb{R}^2$. The distance from the typical device (the \ac{D2D} receiver of interest) to this \ac{D2D} transmitter is denoted as $r=\lVert x_0+y_0\rVert$, which is a realization of a random variable $R$ whose distribution is described later. This setup is illustrated in Fig.~\ref{distance}. It is assumed that a requested file is served by a randomly selected catering device, which is, in turn, admitted to access the channel based on the slotted-ALOHA protocol. The successful offloading probability is then given by
\begin{align}
\label{offloading_gain}
\mathbb{P}_o(p,\textbf{b}) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}})
\underbrace{\int_{r=0}^{\infty}f_R(r) \mathbb{P}(R_{1}(r)>R_0) \dd{r}}_{\mathbb{P}(R_1>R_0)},
\end{align}
where $R_{1}(r)$ is the achievable rate when downloading content from a catering device at distance $r$ from the typical device, with distance \ac{PDF} $f_R(r)$. The first term on the right-hand side is the probability of requesting a locally cached file (self-cache), whereas the remaining term incorporates the probability that a requested file $i$ is cached at least at one cluster member and is downloadable with a rate greater than $R_0$. More precisely, since the number of devices per cluster is Poisson distributed, the probability that there are $k$ devices per cluster equals $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$. Accordingly, the probability that there are $k$ devices caching content $i$ can be written as $\frac{(b_i\overline{n})^k e^{-b_i\overline{n}}}{k!}$. Hence, the probability that at least one device caches content $i$ is one minus the void probability (i.e., the probability that $k=0$), which equals $1 - e^{-b_i\overline{n}}$.
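This void-probability step is easy to verify by simulation; a short Monte Carlo sketch in Python, with assumed values of $b_i$ and $\overline{n}$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
n_bar, b_i = 12, 0.25                     # assumed values
# number of cluster members caching file i ~ Poisson(b_i * n_bar)
cachers = rng.poisson(b_i * n_bar, size=10**6)
print((cachers > 0).mean(), 1 - np.exp(-b_i * n_bar))  # ~identical
\end{verbatim}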
In the following, we first compute the probability $ \mathbb{P}(R_{1}(r)>R_0)$ given the distance $r$ between the typical device and a catering device, \textcolor{black}{then we conduct averaging over $r$ using the \ac{PDF} $f_R(r)$}. The received power at the typical device from a catering device located at $y_0$ relative to the cluster center is given by
\begin{align}
P &= P_d g_0 \lVert x_0+y_0\rVert^{-\alpha}= P_d g_0 r^{-\alpha}
\label{pwr}
\end{align}
where $P_d$ denotes the \ac{D2D} transmission power, \textcolor{black}{$g_0$ is the complex Gaussian fading channel coefficient between a catering device located at $y_0$ relative to its cluster center at $x_0$ and the typical device}, and $\alpha > 2$ is the path loss exponent. Under the above assumptions, the typical device sees two types of interference, namely, the intra- and inter-cluster interference. We first describe the inter-cluster interference; the intra-cluster interference is characterized thereafter. The set of active devices in any remote cluster is denoted as $\mathcal{B}^p$, where $p$ refers to the access probability. Similarly, the set of active devices in the local cluster is denoted as $\mathcal{A}^p$. Similar to (\ref{pwr}), the interference at the typical device from the simultaneously active \ac{D2D} transmitters outside the representative cluster is given by
\begin{align}
I_{\Phi_p^{!}} &= \sum_{x \in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{y_x} \lVert x+y\rVert^{-\alpha}\\
& = \sum_{x\in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{u} u^{-\alpha}
\end{align}
where $\Phi_p^{!}=\Phi_p \setminus x_0$ for ease of notation, $y$ is the marginal distance between a potential interfering device and its cluster center at $x \in \Phi_p$ , $u = \lVert x+y\rVert$ is a realization of a random variable $U$ modeling the inter-cluster interfering distance (shown in Fig. \ref{distance}), \textcolor{black}{$g_{y_x} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading}, and $g_{u} = g_{y_x}$ for ease of notation. The intra-cluster interference is then given by
\begin{align}
I_{\Phi_c} &= \sum_{y\in \mathcal{A}^p} P_d g_{y_{x_0}} \lVert x_0+y\rVert^{-\alpha}\\
& = \sum_{y\in \mathcal{A}^p} P_d g_{h} h^{-\alpha}
\end{align}
where $y$ is the marginal distance between the intra-cluster interfering devices and the cluster center at $x_0 \in \Phi_p$, $h = \lVert x_0+y\rVert$ is a realization of a random variable $H$ modeling the intra-cluster interfering distance (shown in Fig. \ref{distance}), \textcolor{black}{$g_{y_{x_0}} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading, and $g_{h} = g_{y_{x_0}}$ for ease of notation}. From the thinning theorem \cite{haenggi2012stochastic}, the set of active transmitters under the slotted-ALOHA medium access forms a Gaussian \ac{PPP} $\Phi_{cp}$ whose intensity is given by
\begin{align}
\lambda_{cp} = p\lambda_{c}(y) = p\overline{n}f_Y(y) =\frac{p\overline{n}}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad y \in \mathbb{R}^2
\end{align}
Assuming that the thermal noise is neglected as compared to the aggregate interference, the \ac{D2D} $\mathrm{SIR}$ at the typical device is written as
\begin{equation}
\gamma_{r}=\frac{P}{I_{\Phi_p^{!}} + I_{\Phi_c}} = \frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}}
\end{equation}
\textcolor{black}{A fixed rate transmission model is adopted in our study}, where each transmitter (device or \ac{BS}) transmits at the fixed rate of log$_2[1+\theta]$ \SI{}{bits/sec/Hz}, where $\theta$ is a design parameter. Since the rate is fixed, the transmission is subject to outage due to fading and interference fluctuations. Consequently, the de facto average transmission rate (i.e., average throughput) is given by
\begin{equation}
\label{rate_eqn}
R_i = W_i\textrm{ log$_{2}$}[1+ \theta]\mathrm{P_c},
\end{equation}
\textcolor{black}{where $i= 1,2$ for the \ac{D2D} and \ac{BS}-to-Device communication, respectively. $W_i$ is the bandwidth, $\theta$ is the pre-determined threshold for successful reception, $\mathrm{P_c} =\mathbb{E}(\textbf{1}\{\mathrm{SIR}>\theta\})$ is the coverage probability, and $\textbf{1}\{.\}$ is the indicator function. When served by a catering device $r$ apart from the origin, the achievable rate of the typical device under slotted-ALOHA medium access technique can be deduced from \cite[Equation (10)]{jing2012achievable} as}
\begin{equation}
\label{rate_eqn1}
R_{1}(r) = pW_{1} {\rm log}_2 \big(1 + \theta \big) \textbf{1}\{ \gamma_{r} > \theta\}
\end{equation}
Then, the probability $ \mathbb{P}(R_{1}(r)>R_0)$ is derived as follows.
\begin{align}
\mathbb{P}(R_{1}(r)>R_0)&= \mathbb{P} \Big(pW_{1} {\rm log}_2 (1 + \theta)\textbf{1}\{ \gamma_{r} > \theta\}
>R_0\Big) \nonumber \\
&=\mathbb{P} \Big(\textbf{1}\{ \gamma_{r} > \theta\}
>\frac{R_0}{pW_{1} {\rm log}_2 (1 + \theta )}\Big) \nonumber \\
&\overset{(a)}{=} \mathbb{P}\big(\gamma_r >\theta \big)=\mathbb{P}\Big(\frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}} > \theta\Big)
\end{align}
\textcolor{black}{where (a) follows from the assumption that $R_0 < pW_{1} {\rm log}_2 \big(1 + \theta \big)$, i.e., $\frac{R_0}{pW_{1} {\rm log}_2 \big(1 + \theta \big)}<1$, always holds, otherwise, it is infeasible to get $\mathbb{P}(R_{1}>R_0)$ greater than zero}. Rearranging the right-hand side, we get
\begin{align}
\mathbb{P}(R_{1}(r)>R_0)&= \mathbb{P}\Big( g_0 > \frac{\theta r^{\alpha}}{P_d} [I_{\Phi_p^{!}} + I_{\Phi_c}] \Big)
\nonumber \\
&\overset{(b)}{=}\mathbb{E}_{I_{\Phi_p^{!}},I_{\Phi_c}}\Big[\text{exp}\big(\frac{-\theta r^{\alpha}}{P_d}{[I_{\Phi_p^{!}} + I_{\Phi_c}] }\big)\Big]
\nonumber \\
\label{prob-R1-g-R0}
&\overset{(c)}{=} \mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)
\end{align}
where (b) follows from the assumption $g_0 \sim \mathcal{CN}(0,1)$, and (c) follows from the independence of the intra- and inter-cluster interference and the definition of their Laplace transforms.
In what follows, we first derive the Laplace transform of interference to get $\mathbb{P}(R_{1}(r)>R_0)$. Then, we formulate the offloading gain maximization problem.
\begin{lemma}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_inter}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$, and $f_U(u|v) = \mathrm{Rice} (u| v, \sigma)$ represents Rice's \ac{PDF} of parameter $\sigma$, and $v=\lVert x\rVert$.
\end{lemma}
\begin{proof}
Please see Appendix A.
\end{proof}
\begin{lemma}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ can be approximated by
\begin{align}
\label{LT_intra}
\mathscr{L}_{I_{\Phi_c} }(s) \approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where $f_H(h) =\mathrm{Rayleigh}(h,\sqrt{2}\sigma)$ represents Rayleigh's \ac{PDF} with a scale parameter $\sqrt{2}\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix B.
\end{proof}
For the serving distance distribution $f_R(r)$: since both the typical device and a potential catering device have their locations drawn from a normal distribution with variance $\sigma^2$ around the cluster center, the serving distance, by definition, follows a Rayleigh distribution with parameter $\sqrt{2}\sigma$, given by
\begin{align}
\label{rayleigh}
f_R(r)= \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}, \quad r>0
\end{align}
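This Rayleigh claim holds because the difference of two i.i.d. zero-mean Gaussian vectors is Gaussian with per-dimension variance $2\sigma^2$, whose norm is Rayleigh with scale $\sqrt{2}\sigma$; a quick Monte Carlo check in Python (illustrative $\sigma$):
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 30.0, 10**6
# typical and catering devices, both Gaussian around the same center
d = rng.normal(0, sigma, (n, 2)) - rng.normal(0, sigma, (n, 2))
r = np.hypot(d[:, 0], d[:, 1])
# Rayleigh(sqrt(2) * sigma) has mean sigma * sqrt(pi)
print(r.mean(), sigma * np.sqrt(np.pi))   # ~identical
\end{verbatim}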
From (\ref{LT_inter}), (\ref{LT_intra}), and (\ref{rayleigh}), the offloading gain in (\ref{offloading_gain}) is written as
\begin{align}
\label{offloading_gain_1}
\mathbb{P}_o(p,\textbf{b}) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}}) \underbrace{\int_{r=0}^{\infty}
\frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)\dd{r}}_{\mathbb{P}(R_1>R_0)},
\end{align}
Hence, the offloading gain maximization problem can be formulated as
\begin{align}
\label{optimize_eqn_p}
\textbf{P1:} \quad &\underset{p,\textbf{b}}{\text{max}} \quad \mathbb{P}_o(p,\textbf{b}) \\
\label{const110}
&\textrm{s.t.}\quad \sum_{i=1}^{N_f} b_i = M, \\
\label{const111}
& b_i \in [ 0, 1], \\
\label{const112}
& p \in [ 0, 1],
\end{align}
where (\ref{const110}) is the device cache size constraint, which is consistent with the illustrative example in Fig.~\ref{prob_cache_example}. \textcolor{black}{On the one hand, from the assumption that the fixed transmission rate $pW_{1} {\rm log}_2 \big(1 + \theta \big)$ must exceed the required threshold $R_0$, we have the condition $p>\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}$ on the access probability. On the other hand, from (\ref{prob-R1-g-R0}), a further increase of the access probability $p$ raises the intra- and inter-cluster interference powers, and the probability $\mathbb{P}(R_{1}(r)>R_0)$ decreases accordingly. Intuitively, the optimal access probability for offloading gain maximization is therefore chosen as $p^* = \frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)} + \epsilon$ with $\epsilon \to 0^{+}$, i.e., just above the feasibility threshold. However, increasing the access probability $p$ above $p^*$ may lead to a higher \ac{D2D} average achievable rate $R_1$, as elaborated in the next section.} The obtained $p^*$ is now used to solve for the caching probability $\textbf{b}$ in the optimization problem below. Since $p$ and $\textbf{b}$ are separable in the structure of \textbf{P1}, it is possible to solve for $p^*$ first and then substitute it to obtain $\textbf{b}^*$.
\begin{align}
\label{optimize_eqn_b_i}
\textbf{P2:} \quad &\underset{\textbf{b}}{\text{max}} \quad \mathbb{P}_o(p^*,\textbf{b}) \\
\label{const11}
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
The optimal caching probability is formulated in the next lemma.
\begin{lemma}
$\mathbb{P}_o(p^*,\textbf{b})$ is a concave function w.r.t. $\textbf{b}$ and the optimal caching probability $\textbf{b}^{*}$ that maximizes the offloading gain is given by
\[
b_{i}^{*}=\left\{
\begin{array}{ll}
1 \quad\quad\quad , v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)\\
0 \quad\quad\quad, v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)\\
\psi(v^{*}) \quad, {\rm otherwise}
\end{array}
\right.
\]
where $\psi(v^{*})$ is the solution for $b_i^*$ of (\ref{psii_offload}) in Appendix C, with $v^{*}$ chosen such that $\sum_{i=1}^{N_f} b_i^*=M$.
\end{lemma}
\begin{proof}
Please see Appendix C.
\end{proof}
\section{Energy Consumption}
In this section, we formulate the energy consumption minimization problem for the clustered \ac{D2D} caching network. In fact, significant energy consumption occurs only when content is served via \ac{D2D} or \ac{BS}-to-Device transmission. We consider the time cost $c_{d_i}$ as the time it takes to download the $i$-th content from a neighboring device in the same cluster. Considering the size ${S}_i$ of the $i$-th ranked content, $c_{d_i}=S_i/R_1 $, where $R_1 $ denotes the average rate of the \ac{D2D} communication. Similarly, we have $c_{b_i} = S_i/R_2 $ when the $i$-th content is served by the \ac{BS} with average rate $R_2 $. The average energy consumption when downloading files by the devices in the representative cluster is given by
\begin{align}
\label{energy_avrg}
E_{av} = \sum_{k=1}^{\infty} E(\textbf{b}|k)\mathbb{P}(n=k)
\end{align}
where $\mathbb{P}(n=k)$ is the probability that there are $k$ devices in the representative cluster, \textcolor{black}{equal to $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$}, and $E(\textbf{b}|k)$ is the energy consumption conditioning on having $k$ devices in the cluster, written similar to \cite{energy_efficiency} as
\begin{equation}
E(\textbf{b}|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\label{energy}
\end{equation}
where $\mathbb{P}_{j,i}^d$ and $\mathbb{P}_{j,i}^b$ represent the probability of obtaining the $i$-th content by the $j$-th device from the local cluster, i.e., via \ac{D2D} communication, and the BS, respectively. $P_b$ denotes the \ac{BS} transmission power. Given that there are $k$ devices in the cluster, it is obvious that $\mathbb{P}_{j,i}^b=(1-b_i)^{k}$, and $\mathbb{P}_{j,i}^d=(1 - b_i)\big(1-(1-b_i)^{k-1}\big)$.
The average rates $R_1$ and $R_2$ are now computed to get a closed-form expression for $E(\textbf{b}|k)$.
From equation (\ref{rate_eqn}), we need to obtain the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ and \ac{BS}-to-Device coverage probability $\mathrm{P_{c_b}}$ to calculate $R_1$ and $R_2$, respectively. Given the number of devices $k$ in the representative cluster, the Laplace transform of the inter-cluster interference \textcolor{black}{is as obtained in (\ref{LT_inter})}. However, the intra-cluster interfering devices no longer represent a Gaussian \ac{PPP} since the number of devices is conditionally fixed, i.e., not a Poisson random number as before. To facilitate the analysis, for every realization $k$, we assume that the intra-cluster interfering devices form a Gaussian \ac{PPP} with intensity function given by $pkf_Y(y)$. Such an assumption is mandatory for analytical tractability.
From Lemma 2, the intra-cluster Laplace transform conditioning on $k$ can be approximated as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|k) &\approx {\rm exp}\Big(-pk \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\nonumber
\end{align}
and the conditional \ac{D2D} coverage probability is given by
\begin{align}
\label{p_b_d2d}
\mathrm{P_{c_d}} =
\int_{r=0}^{\infty} \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big|k\big)\dd{r}
\end{align}
%
\textcolor{black}{With the adopted slotted-ALOHA scheme, the access probability $p$ that minimizes $E(\textbf{b}|k)$ is computed over the interval $[0,1]$ by maximizing the \ac{D2D} achievable rate $R_1$ in (\ref{rate_eqn1}), subject to the condition $p>\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}$, which keeps the probability $\mathbb{P}(R_{1}>R_0)$ greater than zero.
As an illustrative example, in Fig.~\ref{prob_r_geq_r0_vs_p}, we plot the \ac{D2D} average achievable rate $R_1$ against the channel access probability $p$.
As evident from the plot, there is a certain access probability below which the rate $R_1$ tends to increase with the channel access probability, and beyond which $R_1$ decreases monotonically owing to the larger number of interferers accessing the channel. In such a case, although increasing $p$ above $\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}=0.1$ improves the average achievable rate $R_1$, it comes at the price of a decreased $\mathbb{P}(R_{1}>R_0)$.}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/prob_r_geq_r0_vs_p2.eps}
\caption {\textcolor{black}{The \ac{D2D} average achievable rate $R_1$ versus the access probability $p$ ($\lambda_p = 20 \text{ clusters}/\SI{}{km}^2$, $\overline{n}=12$, $\sigma=\SI{30}{m}$, $\theta=\SI{0}{dB}$, $R_0/W_1=0.1\SI{}{bits/sec/Hz}$).}}
\label{prob_r_geq_r0_vs_p}
\end{center}
\vspace{-0.6cm}
\end{figure}
Analogously, under the \ac{PPP} $\Phi_{bs}$, and based on the nearest \ac{BS} association principle, it is shown in \cite{andrews2011tractable} that the \ac{BS} coverage probability can be expressed as
\begin{equation}
\mathrm{P_{c_b}} =\frac{1}{{}_2 F_1(1,-\delta;1-\delta;-\theta)},
\label{p_b_bs}
\end{equation}
where ${}_2 F_1(.)$ is the Gaussian hypergeometric function and $\delta = 2/\alpha$. Given the coverage probabilities $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ in (\ref{p_b_d2d}) and (\ref{p_b_bs}), respectively, $R_1$ and $R_2 $ can be calculated from (\ref{rate_eqn}), and hence $E(\textbf{b}|k)$ is expressed in a closed-form.
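As an illustration, $\mathrm{P_{c_b}}$, and hence $R_2$ from (\ref{rate_eqn}), is a one-liner once SciPy's hypergeometric function is available; the threshold and bandwidth below are assumed values:
\begin{verbatim}
import numpy as np
from scipy.special import hyp2f1

alpha, theta = 4.0, 1.0                   # assumed: theta = 0 dB
delta = 2.0 / alpha
P_cb = 1.0 / hyp2f1(1.0, -delta, 1.0 - delta, -theta)
W2 = 10e6                                 # assumed BS bandwidth in Hz
R2 = W2 * np.log2(1 + theta) * P_cb       # average BS-to-Device rate
print(P_cb, R2)
\end{verbatim}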
\subsection{Energy Consumption Minimization}
The energy minimization problem can be formulated as
\begin{align}
\label{optimize_eqn1}
&\textbf{P3:} \quad\underset{\textbf{b}}{\text{min}} \quad E(\textbf{b}|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\\
\label{const11}
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
In the next lemma, we prove the convexity condition for $E(\textbf{b}|k)$.
\begin{lemma}
\label{convex_E}
The energy consumption $E(\textbf{b}|k)$ is convex if $\frac{P_b}{R_2}>\frac{P_d}{R_1}$.
\end{lemma}
\begin{proof}
\textcolor{black}{We proceed by deriving the Hessian matrix of $E(\textbf{b}|k)$. The Hessian matrix of $E(\textbf{b}|k)$ w.r.t. the caching variables is \textbf{H}$_{i,j} = \frac{\partial^2 E(\textbf{b}|k)}{\partial b_i \partial b_j}$, $\forall i,j \in \mathcal{F}$. \textbf{H} is a diagonal matrix whose $i$-th diagonal element is given by $k(k-1) S_i\Big(\frac{P_b}{R_2}-\frac{P_d}{R_1}\Big)q_i(1 - b_i)^{k-2}$.}
Since the obtained Hessian matrix is diagonal, $\textbf{H}$ is positive semidefinite (and hence $E(\textbf{b}|k)$ is convex) if all the diagonal entries are nonnegative, i.e., when
$\frac{P_b}{R_2}>\frac{P_d}{R_1}$. In practice, it is reasonable to assume that $P_b \gg P_d$; for instance, in \cite{ericsson}, the \ac{BS} transmission power is 100 fold the \ac{D2D} power.
\end{proof}
As a result of Lemma~\ref{convex_E}, the optimal caching probability minimizing $E(\textbf{b}|k)$ can be computed.
\begin{lemma}
The optimal caching probability $\textbf{b}^{*}$ for the energy minimization problem \textbf{P3} is given by,
\begin{equation}
b_i^* = \Bigg[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2}\big)} \Big)^{\frac{1}{k-1}}\Bigg]^{+}
\label{energy11}
\end{equation}
where $v^{*}$ satisfies the maximum cache constraint $\sum_{i=1}^{N_f} \Big[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Big]^{+}=M$, and $[x]^+ =$ max$(x,0)$.
\end{lemma}
\begin{proof}
The proof proceeds in a similar manner to Lemma 3 and is omitted.
\end{proof}
\begin{proposition} {\rm \textcolor{black}{By inspecting (\ref{energy11}), we can see the effects of content size and popularity on the optimal caching probability. $S_i$ appears in both the numerator and the denominator
of the second term in (\ref{energy11}); however, its effect on the numerator is more significant due to the larger multiplier. The same property holds for $q_i$. As $S_i$ or $q_i$ increases, the magnitude of the second term in (\ref{energy11}) increases and, correspondingly, $b_i^*$ decreases. That is, a content with a larger size or a lower popularity has a smaller probability of being cached.}}
\end{proposition}
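A compact way to obtain $\textbf{b}^{*}$ from (\ref{energy11}) is to bisect on the multiplier $v^{*}$, since $\sum_i b_i^*(v^*)$ is monotone in $v^{*}$. The following Python sketch uses an assumed Zipf demand, equal content sizes, and assumed per-bit costs satisfying $P_b/R_2 > P_d/R_1$ (all values are illustrative):
\begin{verbatim}
import numpy as np

# Assumed instance (illustrative values only)
N_f, M, k, beta = 50, 5, 10, 0.8
q = np.arange(1, N_f + 1.0) ** -beta
q /= q.sum()
S = np.full(N_f, 3.0e7)                  # content sizes in bits
Pd_R1, Pb_R2 = 2.5e-8, 4.0e-7            # P_d/R_1 < P_b/R_2 (convexity)

def b_of_v(v):
    num = v + k**2 * q * S * Pd_R1
    den = k * q * S * (Pd_R1 - Pb_R2)    # negative under the condition
    frac = np.clip(num / den, 0.0, None) # negative -> solution above 1
    b = 1.0 - frac ** (1.0 / (k - 1))
    return np.clip(b, 0.0, 1.0)          # the [.]^+ and b <= 1 caps

# sum_i b_i*(v) is non-decreasing in v here, so bisect on v*
lo, hi = -1e3, 1e3
for _ in range(100):
    v = 0.5 * (lo + hi)
    lo, hi = (lo, v) if b_of_v(v).sum() > M else (v, hi)
b_star = b_of_v(0.5 * (lo + hi))
print(b_star.sum())                      # ~= M, the cache constraint
\end{verbatim}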
By substituting $b_i^*$ into (\ref{energy_avrg}), the average energy consumption per cluster is obtained. In the rest of the paper, we study and minimize the weighted average delay per request for the proposed system.
\section{Delay Analysis}
In this section, the delay analysis and minimization are discussed. A joint stochastic geometry and queueing theory model is exploited to study this problem. The delay analysis incorporates the study of a system of spatially interacting queues. To simplify the mathematical analysis, we further consider that only one \ac{D2D} link can be active within a cluster of $k$ devices, where $k$ is fixed. As shown later, such an assumption facilitates the analysis by deriving simple expressions. We begin by deriving the \ac{D2D} coverage probability under the above assumption, which is used later in this section.
\begin{lemma}
\label{coverage_lemma}
The \ac{D2D} coverage probability of the proposed clustered model with one active \ac{D2D} link within a cluster is given by
\begin{align}
\label{v_0}
\mathrm{P_{c_d}} = \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)} ,
\end{align}
where $Z(\theta,\alpha,\sigma) = (\pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2})$.
\end{lemma}
\begin{proof}
The result can be proved by using the displacement theorem of the \ac{PPP} \cite{daley2007introduction} and then proceeding in a manner similar to the proofs of Lemmas 1 and 2. The details are provided in the conference version of this paper \cite{amer2018minimizing}.
\end{proof}
In the following, we firstly describe the traffic model of the network, and then we formulate the delay minimization problem.
\begin{figure
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/delay_queue3}
\caption {\textcolor{black}{The traffic model of request arrivals and departures in a given cluster. $Q_1$ and $Q_2$ are M/G/1 queues modeling requests served by \ac{D2D} and \ac{BS}-to-Device communication, respectively.}}
\label{delay_queue}
\end{center}
\vspace{-0.8cm}
\end{figure}
\subsection{Traffic Model}
We assume that the aggregate request arrival process from the devices in each cluster follows a Poisson arrival process with parameter $\zeta_{tot}$ (requests per time slot). As shown in Fig.~\ref{delay_queue}, the incoming requests are further divided according to where they are served from. $\zeta_{1}$ represents the arrival rate of requests served via \ac{D2D} communication, whereas $\zeta_{2}$ is the arrival rate of those served from the \ac{BS}s. $\zeta_{3} = \zeta_{tot} - \zeta_{1} - \zeta_{2}$ denotes the arrival rate of requests served via the self-cache with zero delay. By definition, $\zeta_{1}$ and $\zeta_{2}$ are also Poisson arrival processes. Without loss of generality, we assume that the file size has a general distribution $G$ whose mean is denoted as $\overline{S}$ \SI{}{MBytes}. Hence, an M/G/1 queuing model is adopted whereby two non-interacting queues, $Q_1$ and $Q_2$, model the traffic in each cluster served via \ac{D2D} and \ac{BS}-to-Device communication, respectively. Although $Q_1$ and $Q_2$ are non-interacting, as the \ac{D2D} communication is assumed to be out-of-band, these two queues spatially interact with similar queues in other clusters. To recap, $Q_1$ and $Q_2$ are two M/G/1 queues with arrival rates $\zeta_{1}$ and $\zeta_{2}$, and service rates $\mu_1$ and $\mu_2$, respectively.
\subsection{Queue Dynamics}
It is worth highlighting that the two queues $Q_i$, $i \in \{1,2\}$, accumulate requests for files demanded by the cluster members, not the files themselves. First-in first-out (FIFO) scheduling is assumed, whereby a request that arrives first is scheduled first, either via \ac{D2D} or \ac{BS} communication depending on whether or not the content is cached among the cluster devices. The outcome of FIFO scheduling relies only on the time at which the request arrives at the queue and is irrelevant to the particular device that issues the request. Given the parameter $\zeta_{tot}$ of the Poisson arrival process, the arrival rates at the two queues are expressed respectively as
\begin{align}
\zeta_{1} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big), \\
\zeta_{2} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}
\end{align}
The network operation is depicted in Fig. \ref{delay_queue}, and described in detail below.
\begin{enumerate}
\item Given the memoryless property of the arrival process (Poisson arrival) along with the assumption that the service process is independent of the arrival process,
the number of requests in any queue at a future time only depends upon the current number in the system (at time $t$) and the arrivals or departures that occur within the interval $e$.
\begin{align}
Q_{i}(t+e) = Q_{i}(t) + \Lambda_{i}(e) - M_{i}(e)
\end{align}
where $\Lambda_{i}(e)$ is the number of arrivals in the time interval $(t,t+e)$, with mean $\zeta_i e$, and $M_{i}(e)$ is the number of departures in the same interval, with mean $\mu_i e$, where $\mu_i = \frac{\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})W_i{\rm log}_2(1+\theta)}{\overline{S}}$ \SI{}{sec}$^{-1}$ is the service rate. It is worth highlighting that, unlike the spatial-only model studied in the previous sections, the term $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ is dependent on the traffic dynamics \textcolor{black}{since a request being served in a given cluster is interfered with only by other clusters that also have requests to serve}. What is more noteworthy is that the mean service time $\tau_i = \frac{1}{\mu_i}$ follows the same distribution as the file size. These aspects will be revisited later in this section.
\item $\Lambda_{i}(e)$ depends only on $e$ because the arrival process is Poisson. $M_{i}(e)$ is $0$ if the service time of the request currently in service satisfies $\epsilon_1 >e$; it is $1$ if $\epsilon_1 <e$ and $\epsilon_1 + \epsilon_2>e$, and so on. As the service times $\epsilon_1, \epsilon_2, \dots, \epsilon_n$ are independent, neither $\Lambda_{i}(e)$ nor $M_{i}(e)$ depends on what happened prior to $t$. Thus, $Q_{i}(t+e)$ depends only upon $Q_{i}(t)$ and not on the past history. Hence, it is a \ac{CTMC} which obeys the stability conditions in \cite{szpankowski1994stability}.
\end{enumerate}
The following proposition provides the sufficient conditions for the stability of the buffers in the sense defined in \cite{szpankowski1994stability}, i.e., $\{Q_{i}\}$ has a limiting distribution for $t \rightarrow \infty$.
\begin{proposition} {\rm The \ac{D2D} and \ac{BS}-to-Device traffic modeling queues are stable, respectively, if and only if}
\begin{align}
\label{stable1}
\zeta_{1} < \mu_1 &= \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}} \\
\label{stable2}
\zeta_{2} < \mu_2 & =\frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}
\end{align}
\end{proposition}
\begin{proof}
We show sufficiency by proving that (\ref{stable1}) and (\ref{stable2}) guarantee stability in a dominant network, where all queues that have empty buffers make dummy transmissions. The dominant network is a fictitious system that is identical to the original system, except that terminals may choose to transmit even when their respective buffers are empty, in which case they simply transmit a dummy packet. If both systems are started from the same initial state and fed with the same arrivals, then the queues in the fictitious dominant system can never be shorter than the queues in the original system.
\textcolor{black}{Similar to the spatial-only network, in the dominant system, the typical receiver is seeing an interference from all other clusters whether they have requests to serve or not (dummy transmission).} This dominant system approach yields $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ equal to $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ for the \ac{D2D} and \ac{BS}-to-Device communication, respectively. Also, the obtained delay is an upper bound for the actual delay of the system. The necessity of (\ref{stable1}) and (\ref{stable2}) is shown as follows: If $\zeta_i>\mu_i$, then, by Loynes' theorem \cite{loynes1962stability}, it follows that lim$_{t\rightarrow \infty}Q_i(t)=\infty$ (a.s.) for all queues in the dominant network.
\end{proof}
Next, we conduct the analysis for the dominant system whose parameters are as follows. The content size has an exponential distribution of mean $\overline{S}$ \SI{}{MBytes}. The service times also obey an exponential distribution with means $\tau_1 = \frac{\overline{S}}{R_1}$ \SI{}{seconds} and $\tau_2 = \frac{\overline{S}}{R_2}$ \SI{}{seconds}. The rates $R_1$ and $R_2$ are calculated from (\ref{rate_eqn}) where $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ are from (\ref{v_0}) and (\ref{p_b_bs}), respectively. Accordingly, $Q_1$ and $Q_2$ are two continuous time independent (non-interacting) M/M/1 queues with service rates $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$ and $\mu_2 = \frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}$ \SI{}{sec}$^{-1}$, respectively. \begin{proposition} {\rm The mean queue length $L_i$ of the $i$-th queue is given by}
\begin{align}
\label{queue_len}
L_i &= \rho_i + \frac{\rho_i^2}{1 - \rho_i} = \frac{\rho_i}{1 - \rho_i},
\end{align}
\end{proposition}
\begin{proof}
We can easily calculate $L_i$ by observing that $Q_i$ are continuous time M/M/1 queues with arrival rates $\zeta_i$, service rates $\mu_i$, and traffic intensities $\rho_i = \frac{\zeta_i}{\mu_i}$. Then, by applying the Pollaczek-Khinchine formula \cite{Kleinrock}, $L_i$ is directly obtained.
\end{proof}
The average delay per request for each queue is calculated from
\begin{align}
D_1 &= \frac{L_1}{\zeta_1}= \frac{1}{\mu_1 - \zeta_1} = \frac{1}{W_1\mathcal{O}_{1} - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} \\
D_2 &= \frac{L_2}{\zeta_2}=\frac{1}{\mu_2 - \zeta_2} = \frac{1}{W_2\mathcal{O}_{2} - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
where $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, $\mathcal{O}_{2}= \frac{\mathrm{P_{c_b}} {\rm log}_2(1+\theta)}{\overline{S}}$ for notational simplicity. The weighted average delay $D$ is then expressed as
\begin{align}
D&= \frac{\zeta_{1}D_1 + \zeta_{2}D_2}{\zeta_{tot}} \nonumber \\
&= \frac{\sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)}{ \mathcal{O}_{1}W_1 - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} + \frac{\sum_{i=1}^{N_f}q_i (1-b_i)^{k}}{ \mathcal{O}_{2}W_2 - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
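For concreteness, the following Python snippet evaluates the arrival rates, service rates, and the weighted average delay $D$ from the expressions above. It is a minimal illustrative sketch: the coverage probabilities and the caching vector used below are placeholder assumptions, not outputs of the analysis.
\begin{verbatim}
import numpy as np

# Illustrative parameters (assumed for demonstration only)
W, W1 = 20e6, 10e6        # total and D2D bandwidth [Hz]
theta = 1.0               # SIR threshold (0 dB)
S_bar = 5e6               # mean content size [bits]
zeta_tot = 2.0            # total request arrival rate [requests/sec]
Pc_d, Pc_b = 0.6, 0.56    # placeholder coverage probabilities
k, Nf, beta = 8, 100, 0.5

q = np.arange(1, Nf + 1) ** (-beta); q /= q.sum()  # Zipf popularities
b = np.minimum(1.0, 4.0 * q / q[0])                # placeholder caching vector

O1 = Pc_d * np.log2(1 + theta) / S_bar
O2 = Pc_b * np.log2(1 + theta) / S_bar
zeta1 = zeta_tot * np.sum(q * ((1 - b) - (1 - b) ** k))  # D2D arrivals
zeta2 = zeta_tot * np.sum(q * (1 - b) ** k)              # BS arrivals
mu1, mu2 = O1 * W1, O2 * (W - W1)                        # service rates
assert zeta1 < mu1 and zeta2 < mu2, "stability conditions violated"

D = (zeta1 / (mu1 - zeta1) + zeta2 / (mu2 - zeta2)) / zeta_tot
print(f"D1 = {1/(mu1-zeta1):.3f} s, D2 = {1/(mu2-zeta2):.3f} s, D = {D:.3f} s")
\end{verbatim}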
One important insight from the delay equation is that the caching probability $\textbf{b}$ controls the arrival rates $\zeta_{1}$ and $\zeta_{2}$ while the bandwidth determines the service rates $\mu_1$ and $\mu_2$. Therefore, it turns out to be of paramount importance to jointly optimize $\textbf{b}$ and $W_1$ to minimize the average delay. One relevant work is carried out in \cite{tamoor2016caching} where the authors investigate the storage-bandwidth tradeoffs for small cell \ac{BS}s that are subject to storage constraints. Subsequently, we formulate the weighted average delay joint caching and bandwidth minimization problem as
\begin{align}
\label{optimize_eqn3}
\textbf{P4:} \quad \quad&\underset{\textbf{b},{\rm W}_1}{\text{min}} \quad D(\textbf{b},W_1) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber \\
& 0 \leq W_1 \leq W, \\
\label{stab1}
&\zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big) < \mu_1, \\
\label{stab2}
&\zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k} < \mu_2,
\end{align}
where constraints (\ref{stab1}) and (\ref{stab2}) are the stability conditions for the queues $Q_1$ and $Q_2$, respectively. Although the objective function of \textbf{P4} is convex w.r.t. $W_1$, as derived below, the coupling of the optimization variables $\textbf{b}$ and $W_1$ makes \textbf{P4} a non-convex optimization problem. Therefore, \textbf{P4} cannot be solved directly using standard convex optimization techniques.
\textcolor{black}{By applying the \ac{BCD} optimization technique, \textbf{P4} can be solved in an iterative manner as follows. First, for a given caching probability $\textbf{b}$, we solve the bandwidth allocation subproblem. Afterwards, the obtained optimal bandwidth is used to update $\textbf{b}$}. The optimal bandwidth for the bandwidth allocation subproblem is given in the next lemma.
\begin{lemma}
The objective function of \textbf{P4} in (\ref{optimize_eqn3}) is convex w.r.t. $W_1$, and the optimal bandwidth allocation to the \ac{D2D} communication is given by
\begin{align}
\label{optimal-w-1}
W_1^* = \frac{\zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k}) +\varpi \big(\mathcal{O}_{2}W - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)}{\mathcal{O}_{1}+\varpi\mathcal{O}_{2}},
\end{align}
where $\overline{b}_i = 1 - b_i$ and $\varpi=\sqrt{\frac{\mathcal{O}_{1}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})}{\mathcal{O}_{2} \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}}}$.
\end{lemma}
\begin{proof}
$D(\textbf{b},W_1)$ can be written as
\begin{align}
\label{optimize_eqn3_p1}
\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-1} + \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-1}. \nonumber
\end{align}
Noting that $W_2 = W - W_1$, the second derivative $\frac{\partial^2 D(\textbf{b},W_1)}{\partial W_1^2}$ is given by
\begin{align}
2\mathcal{O}_{1}^2\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-3} + 2\mathcal{O}_{2}^2\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-3}. \nonumber
\end{align}
The stability conditions require that $\mu_1 = \mathcal{O}_{1}W_1 > \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})$ and $\mu_2 =\mathcal{O}_{2}W_2 > \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}$. Also, $\overline{b}_i \geq \overline{b}_i^{k}$ since $\overline{b}_i \in [0,1]$ and $k \geq 1$. Hence, $\frac{\partial^2 D(\textbf{b},W_1)}{\partial W_1^2} > 0$, and the objective function is a convex function of $W_1$. The optimal bandwidth allocation is obtained from the \ac{KKT} conditions similar to problems \textbf{P2} and \textbf{P3}, with the details omitted for brevity.
\end{proof}
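For reference, the closed form in (\ref{optimal-w-1}) can be evaluated directly; the following Python helper is a minimal sketch of this computation (the function name and argument conventions are ours):
\begin{verbatim}
import numpy as np

def optimal_w1(q, b, k, zeta_tot, W, O1, O2):
    """Closed-form D2D bandwidth allocation W1* of the lemma above."""
    bb = 1.0 - b                        # \bar{b}_i
    A = np.sum(q * (bb - bb ** k))      # D2D traffic term
    B = np.sum(q * bb ** k)             # BS traffic term
    varpi = np.sqrt(O1 * A / (O2 * B))  # the constant varpi of the lemma
    return (zeta_tot * A + varpi * (O2 * W - zeta_tot * B)) / (O1 + varpi * O2)
\end{verbatim}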
Given $W_1^*$ from the bandwidth allocation subproblem, the caching probability subproblem can be written as
\begin{align}
\textbf{P5:} \quad \quad&\underset{\textbf{b}}{\text{min}} \quad D(\textbf{b},W_1^*) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}), (\ref{stab1}), (\ref{stab2}) \nonumber
\end{align}
The caching probability subproblem \textbf{P5} is a sum of two fractional functions, where the first fraction has the form of a concave function over a convex function, while the second fraction has the form of a convex function over a concave function. The structure of the first fraction, i.e., concave over convex, renders solving this problem using fractional programming (FP) very challenging.\footnote{A quadratic transform technique for tackling the multiple-ratio concave-convex FP problem was recently used to solve a minimization of fractional functions that have the form of convex over concave functions, whereby an equivalent problem is solved with the objective function reformulated as a difference between a convex and a concave function \cite{shen2018fractional}.}
\textcolor{black}{Moreover, the constraint (\ref{stab1}) is concave w.r.t. $\textbf{b}$.
Hence, we adopt the interior point method to obtain a locally optimal solution of $\textbf{b}$ given the optimal bandwidth $W_1^*$; the quality of this solution depends on the initial value fed to the algorithm \cite{boyd2004convex}. Nonetheless, we can increase the probability of finding a near-optimal solution of problem \textbf{P5} by running the interior point method with multiple random initial values and then picking the solution with the lowest weighted average delay. This procedure is repeated until the value of \textbf{P4}'s objective function converges to a pre-specified accuracy.}
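A schematic sketch of this alternating procedure is given below, assuming the \texttt{optimal\_w1} helper from the previous sketch. SLSQP from scipy is used merely as a convenient stand-in for the interior point step; the solver choice, iteration caps, and random restarts are illustrative assumptions, not the exact implementation used here.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def bcd_delay_minimization(q, k, zeta_tot, W, O1, O2, M,
                           n_restarts=5, tol=1e-6, seed=None):
    """Alternate the closed-form W1* with a numerical update of b (sketch)."""
    rng = np.random.default_rng(seed)
    Nf = len(q)

    def loads(b):
        bb = 1.0 - b
        return (zeta_tot * np.sum(q * (bb - bb ** k)),
                zeta_tot * np.sum(q * bb ** k))

    def delay(b, W1):                       # weighted average delay D(b, W1)
        z1, z2 = loads(b)
        return (z1 / (O1 * W1 - z1) + z2 / (O2 * (W - W1) - z2)) / zeta_tot

    best = (None, np.inf)
    for _ in range(n_restarts):             # multiple random initial values
        b = np.clip(rng.dirichlet(np.ones(Nf)) * M, 0.0, 1.0)
        D_prev = np.inf
        for _ in range(100):                # BCD iterations
            W1 = optimal_w1(q, b, k, zeta_tot, W, O1, O2)
            cons = [{"type": "eq",   "fun": lambda x: np.sum(x) - M},
                    {"type": "ineq", "fun": lambda x, W1=W1:
                         O1 * W1 - loads(x)[0]},        # stability of Q1
                    {"type": "ineq", "fun": lambda x, W1=W1:
                         O2 * (W - W1) - loads(x)[1]}]  # stability of Q2
            res = minimize(delay, b, args=(W1,), method="SLSQP",
                           bounds=[(0.0, 1.0)] * Nf, constraints=cons)
            b = res.x
            if abs(D_prev - res.fun) < tol:
                break
            D_prev = res.fun
        if res.fun < best[1]:
            best = (b.copy(), res.fun)
    return best                             # (caching vector, delay)
\end{verbatim}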
\section{Numerical Results}
\begin{table}[ht]
\caption{Simulation Parameters}
\centering
\begin{tabular}{c c c}
\hline\hline
Description & Parameter & Value \\ [0.5ex]
\hline
\textcolor{black}{System bandwidth} & W & \SI{20}{\mega\hertz} \\
\ac{BS} transmission power & $P_b$ & \SI{43}{\deci\bel\of{m}} \\
\ac{D2D} transmission power & $P_d$ & \SI{23}{\deci\bel\of{m}} \\
Displacement standard deviation & $\sigma$ & \SI{10}{\metre} \\
Popularity index&$\beta$&1\\
Path loss exponent&$\alpha$&4\\
Library size&$N_f$&500 files\\
Cache size per device&$M$&10 files\\
Average number of devices per cluster&$\overline{n}$&5\\
Density of $\Phi_p$&$\lambda_{p}$&20 clusters/\SI{}{km}$^2$ \\
Average content size&$\textcolor{black}{\overline{S}}$&\SI{5}{MBits} \\
$\mathrm{SIR}$ threshold&$\theta$&\SI{0}{\deci\bel}\\
\textcolor{black}{Total request arrival rate}&$\zeta_{tot}$&\SI{2}{request/sec}\\
\hline
\end{tabular}
\label{ch3:table:sim-parameter}
\end{table}
At first, we validate the developed mathematical model via Monte Carlo simulations. Then we benchmark the proposed caching scheme against conventional caching schemes. Unless otherwise stated, the network parameters are selected as shown in Table \ref{ch3:table:sim-parameter}.
\subsection{Offloading Gain Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/prob_r_geq_r0.eps}
\caption {The probability that the \ac{D2D} achievable rate is greater than a threshold $R_0$ versus standard deviation $\sigma$.}
\label{prob_r_geq_r0}
\end{center}
\vspace{-0.5cm}
\end{figure}
In this subsection, we present the offloading gain performance for the proposed caching model.
In Fig.~\ref{prob_r_geq_r0}, we verify the accuracy of the analytical results for the probability $\mathbb{P}(R_1>R_0)$. The theoretical and simulated results are plotted together, and they are consistent. We can observe that the probability $\mathbb{P}(R_1>R_0)$ decreases monotonically as $\sigma$ increases. This is because a larger $\sigma$ increases the serving distance and decreases the inter-cluster interfering distance between out-of-cluster interferers and the typical device; equivalently, the $\mathrm{SIR}$ decreases. It is also shown that $\mathbb{P}(R_1>R_0)$ decreases with the $\mathrm{SIR}$ threshold $\theta$, as the channel becomes more prone to outage when $\theta$ increases.
\begin{figure*}
\centering
\subfigure[$p=p^*$. ]{\includegraphics[width=2.0in]{Figures/ch3/histogram_b_i_p_star1.eps}
\label{histogram_b_i_p_star}}
\subfigure[$p > p^*$.]{\includegraphics[width=2.0in]{Figures/ch3/histogram_b_i_p_leq_p_star1.eps}
\label{histogram_b_i_p_leq_p_star}}
\caption{Histogram of the optimal caching probability $\textbf{b}^*$ when (a) $p=p^*$ and (b) $p > p^*$.}
\label{histogram_b_i}
\vspace{-0.5cm}
\end{figure*}
To show the effect of $p$ on the caching probability, in Fig.~\ref{histogram_b_i}, we plot the histogram of the optimal caching probability at different values of $p$, where $p=p^*$ in Fig.~\ref{histogram_b_i_p_star} and $p>p^*$ in Fig.~\ref{histogram_b_i_p_leq_p_star}. It is clear from the histograms that the optimal caching probability $\textbf{b}^*$ tends to be more skewed when $p > p^*$, i.e., when $\mathbb{P}(R_1>R_0)$ decreases. This shows that file sharing becomes more difficult when $p$ is larger than the optimal access probability. More precisely, for $p>p^*$, the outage probability is high due to the aggressive interference. In such a low coverage probability regime, each device tends to cache the most popular files, leading to fewer opportunities for content transfer between the devices.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/offloading_prob_cach_compare1.eps}
\caption {The offloading probability versus the popularity of files $\beta$ under different caching schemes, namely, \ac{PC}, Zipf, and \ac{CPF}, $\overline{n}=10$, $\sigma=$\SI{5}{\metre}.}
\label{offloading_gain_vs_beta}
\end{center}
\vspace{-0.5cm}
\end{figure}
Last but not least, Fig.~\ref{offloading_gain_vs_beta} demonstrates the prominent effect of the files' popularity on the offloading gain. We compare the offloading gain of three different caching schemes, namely, the proposed \ac{PC}, Zipf's caching (Zipf), and \ac{CPF}. We can see that the offloading gain under the \ac{PC} scheme attains the best performance among the three schemes. Also, we note that both the \ac{PC} and Zipf schemes achieve the same offloading gain when $\beta=0$ owing to the uniformity of the content popularity.
\subsection{\textcolor{black}{Energy Consumption Results}}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/energy_vs_beta7.eps}
\caption {Normalized energy consumption versus popularity exponent $\beta$.}
\label{energy_vs_beta}
\end{center}
\vspace{-0.5cm}
\end{figure}
The results in this part are given for the energy consumption.
Fig.~\ref{energy_vs_beta} shows the energy consumption, \textcolor{black}{normalized to the mean number of devices per cluster}, versus $\beta$ under different caching schemes, namely, \ac{PC}, Zipf, and \ac{CPF}. We can see that the minimized energy consumption under the proposed \ac{PC} scheme attains the best performance among the three schemes. Also, it is clear that the consumed energy decreases with $\beta$. This can be justified by the fact that, as $\beta$ increases, fewer files are frequently requested, and these files are more likely to be cached among the devices under the \ac{PC}, \ac{CPF}, and Zipf schemes. These few files are therefore downloadable from the devices via low power \ac{D2D} communication.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/energy_vs_n6.eps}
\caption {\textcolor{black}{Normalized energy consumption versus the mean number of devices per cluster.}}
\label{energy_vs_n}
\end{center}
\vspace{-0.5cm}
\end{figure}
We plot the normalized energy consumption versus the \textcolor{black}{mean number of devices per cluster} in Fig.~\ref{energy_vs_n}. First, we see that energy consumption decreases with the mean number of devices per cluster. As the number of devices per cluster increases, it is more probable to obtain requested files via low power \ac{D2D} communication. When the number of devices per cluster is relatively large, the normalized energy consumption tends to flatten as most of the content becomes cached at the cluster devices.
\subsection{Delay Results}
\begin{figure*} [!h]
\centering
\subfigure[Weighted average delay versus the popularity exponent $\beta$. ]{\includegraphics[width=2.0in]{Figures/ch3/delay_compare3.eps}
\label{delay_compare}}
\subfigure[Optimal allocated bandwidth versus the popularity exponent $\beta$.]{\includegraphics[width=2.0in]{Figures/ch3/BW_compare4.eps}
\label{BW_compare}}
\caption{\textcolor{black}{Evaluation and comparison of average delay for the proposed joint \ac{PC} and bandwidth allocation scheme with the Zipf's baseline scheme against popularity exponent $\beta$, $N_f = 100$, $M = 4$, $k = 8$.}}
\label{delay-analysis}
\vspace{-0.5cm}
\end{figure*}
\textcolor{black}{The results in this part are devoted to the average delay metric. The performance of the proposed joint \ac{PC} and bandwidth allocation scheme is evaluated in Fig.~\ref{delay-analysis}, and the optimized bandwidth allocation is also shown. Firstly, in Fig.~\ref{delay_compare}, we compare the average delay for two different caching schemes, namely, \ac{PC} and the Zipf scheme. We can see that the minimized average delay under the proposed joint \ac{PC} and bandwidth allocation scheme attains substantially better performance than the Zipf scheme with fixed bandwidth allocation (i.e., $W_1=W_2=W/2$). Also, we see that, in general, the average delay monotonically decreases with $\beta$, since fewer files then account for most of the demand.
Secondly, Fig.~\ref{BW_compare} shows the effect of the files' popularity $\beta$ on the allocated bandwidth. It is shown that the optimal \ac{D2D} allocated bandwidth $W_1^*$ keeps increasing with $\beta$. This can be interpreted as follows. When $\beta$ increases, fewer files become highly demanded, and these files can be entirely cached among the devices. To cope with the correspondingly larger number of requests served via the \ac{D2D} communication, the \ac{D2D} allocated bandwidth needs to be increased.}
\textcolor{black}{Fig.~\ref{scaling} shows the geometrical scaling effects on the system performance, e.g., the effect of clusters' density $\lambda_p$ and the displacement standard deviation $\sigma$ on the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$, optimal allocated bandwidth $W_1^*$, and the average delay. In Fig.~\ref{cache_size}, we plot the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$. It is clear from the plot that $\mathrm{P_{c_d}}$ monotonically decreases with both $\sigma$ and $\lambda_p$. Obviously, increasing $\sigma$ and $\lambda_p$ results in larger serving distance, i.e., higher path-loss effect, and shorter interfering distance, i.e., higher interference power received by the typical device, respectively. This explains the encountered degradation for $\mathrm{P_{c_d}}$ with $\sigma$ and $\lambda_p$. }
In Fig.~\ref{optimal-w}, we plot the optimal allocated bandwidth $W_1^*$ normalized to $W$ versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$. Here too, $W_1^*$ tends to increase with both $\sigma$ and $\lambda_p$. This behavior can be directly understood from (\ref{optimal-w-1}), where $W_1^*$ is inversely proportional to $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, and $\mathrm{P_{c_d}}$ decreases with $\sigma$ and $\lambda_p$ as discussed above. More precisely, while the \ac{D2D} service rate $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$ decreases with $\mathrm{P_{c_d}}$, the optimal allocated bandwidth $W_1^*$ increases as $\mathrm{P_{c_d}}$ decreases in order to compensate for the service rate degradation, thereby minimizing the weighted average delay.
In Fig.~\ref{av-delay}, we plot the weighted average delay versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$. Following the same interpretations as in Fig.~\ref{cache_size} and Fig.~\ref{optimal-w}, we can notice that the weighted average delay monotonically increases with $\sigma$ and $\lambda_p$ due to the decrease of the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ and the \ac{D2D} service rate $\mu_1$ with $\sigma$ and $\lambda_p$.
\begin{figure*} [t!]
\vspace{-0.5cm}
\centering
\subfigure[\ac{D2D} coverage probability $\mathrm{P_{c_d}}$ versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/d2d-coverage-prob.eps}
\label{cache_size}}
\subfigure[Optimal allocated bandwidth versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/bw-allocated.eps}
\label{optimal-w}}
\subfigure[Weighted average delay versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/delay-sigma-lamda.eps}
\label{av-delay}}
\caption{Effect of geometrical parameters, e.g., clusters' density $\lambda_p$ and the displacement standard deviation $\sigma$ on the system performance, $\beta= 0.5$, $N_f = 100$, $M = 4$, $k = 8$.}
\vspace{-0.6cm}
\label{scaling}
\end{figure*}
\section{Conclusion}
In this work, we conduct a comprehensive analysis of the joint communication and caching for a clustered \ac{D2D} network with random probabilistic caching incorporated at the devices. We first maximize the offloading gain of the proposed system by jointly optimizing the channel access and caching probability. We obtain the optimal channel access probability, and the optimal caching probability is then characterized. We show that deviating from the optimal access probability $p^*$ makes file sharing more difficult. More precisely, the system is too conservative for small access probabilities, while the interference is too aggressive for larger access probabilities. Then, we minimize the energy consumption of the proposed clustered \ac{D2D} network. We formulate the energy minimization problem, show that it is convex, and obtain the optimal caching probability. We show that a content with a large size or low popularity has a small probability to be cached. Finally, we adopt a queuing model for the devices' traffic within each cluster to investigate the network average delay. Two M/M/1 queues are employed to model the \ac{D2D} and \ac{BS}-to-Device communications. We then derive an expression for the weighted average delay per request. We observe that the average delay depends on the caching probability and the allocated bandwidth, which control, respectively, the arrival rates and the service rates of the two modeling queues. Therefore, we minimize the per request weighted average delay by jointly optimizing the bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the \ac{BCD} optimization technique, the joint minimization problem can be solved in an iterative manner. Results show up to $10\%$, $17\%$, and $300\%$ improvement in the offloading gain, energy consumption, and average delay, respectively, compared to the Zipf caching technique.
\begin{appendices}
\section{Proof of lemma 1}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ can be evaluated as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= \mathbb{E} \Bigg[e^{-s \sum_{x \in \Phi_p^{!}} \sum_{y \in \mathcal{B}^p} g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{x \in \Phi_p^{!}} \mathbb{E}_{\Phi_{cp},g_{y_{x}}} \prod_{y \in \mathcal{B}^p} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{x \in \Phi_p^{!}} \mathbb{E}_{\Phi_{cp}} \prod_{y \in \mathcal{B}^p} \mathbb{E}_{g_{y_{x}}} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(a)}{=} \mathbb{E}_{\Phi_p} \Bigg[\prod_{x \in \Phi_p^{!}} \mathbb{E}_{\Phi_{cp}} \prod_{y \in \mathcal{B}^p} \frac{1}{1+s \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(b)}{=} \mathbb{E}_{\Phi_p} \prod_{x \in \Phi_p^{!}} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big)\Big)\dd{x}\Bigg)
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$; (a) follows from the Rayleigh fading assumption, (b) follows from the \ac{PGFL} of the Gaussian \ac{PPP} $\Phi_{cp}$, and (c) follows from the \ac{PGFL} of the parent \ac{PPP} $\Phi_p$. Using the change of variables $z = x + y$ with $\dd z = \dd y$, we proceed as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z\rVert^{-\alpha}}\Big)f_Y(z-x)\dd{z}\Big)\Big)\dd{x}\Bigg) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{u=0}^{\infty}\Big(1 - \frac{1}{1+s u^{-\alpha}}\Big)f_U(u|v)\dd{u}\Big)\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{u=0}^{\infty}
\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}\Big)\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where (d) follows from converting the Cartesian coordinates to polar coordinates with $u=\lVert z\rVert$ and $v=\lVert x\rVert$.
\textcolor{black}{To clarify how in (d) the normal distribution $f_Y(z-x)$ is converted to the Rice distribution $f_U(u|v)$, consider a remote cluster centered at $x \in \Phi_p^!$, with a distance $v=\lVert x\rVert$ from the origin. Every interfering device belonging to the cluster centered at $x$ has its coordinates in $\mathbb{R}^2$ chosen independently from a Gaussian distribution with standard deviation $\sigma$. Then, by definition, the distance from such an interfering device to the origin, denoted as $u$, has a Rice distribution, denoted as $f_U(u|v)=\frac{u}{\sigma^2}\mathrm{exp}\big(-\frac{u^2 + v^2}{2\sigma^2}\big) I_0\big(\frac{uv}{\sigma^2}\big)$, where $I_0$ is the modified Bessel function of the first kind with order zero and $\sigma$ is the scale parameter.
Hence, Lemma 1 is proven.}
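Numerically, the expression in Lemma 1 can be evaluated by nested quadrature. The following Python sketch illustrates this (function names are ours; the exponentially scaled Bessel function \texttt{i0e} avoids overflow in the Rice pdf, and the units of $\lambda_p$, $\sigma$, and the distances must be consistent, e.g., meters and per \SI{}{m^2}):
\begin{verbatim}
import numpy as np
from scipy import integrate, special

def rice_pdf(u, v, sigma):
    # f_U(u|v); uses i0(x) = exp(x) * i0e(x) for numerical stability
    return (u / sigma**2) * np.exp(-(u - v)**2 / (2 * sigma**2)) \
           * special.i0e(u * v / sigma**2)

def varphi(s, v, alpha, sigma):
    # varphi(s, v) = int_0^inf s / (s + u^alpha) f_U(u|v) du
    f = lambda u: s / (s + u**alpha) * rice_pdf(u, v, sigma)
    return integrate.quad(f, 0.0, np.inf)[0]

def laplace_inter(s, p, n_bar, lam_p, alpha, sigma):
    """Inter-cluster interference Laplace transform of Lemma 1."""
    g = lambda v: (1.0 - np.exp(-p * n_bar * varphi(s, v, alpha, sigma))) * v
    return np.exp(-2 * np.pi * lam_p * integrate.quad(g, 0.0, np.inf)[0])
\end{verbatim}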
\section {Proof of lemma 2}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$, conditioned on the distance $v_0$ from the cluster center to the origin (see Fig.~\ref{distance}), is written as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &= \mathbb{E} \Bigg[e^{-s \sum_{y \in \mathcal{A}^p} g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \Bigg] \nonumber
\\
&= \mathbb{E}_{\Phi_{cp},g_{y_{x_0}}} \prod_{y \in\mathcal{A}^p} e^{-s g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&= \mathbb{E}_{\Phi_{cp}} \prod_{y \in\mathcal{A}^p} \mathbb{E}_{g_{y_{x_0}}} e^{-s g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&\overset{(a)}{=} \mathbb{E}_{\Phi_{cp}} \prod_{y \in\mathcal{A}^p} \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}
\nonumber
\\
&\overset{(b)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}\Big)f_{Y}(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z_0\rVert^{-\alpha}}\Big)f_{Y}(z_0-x_0)\dd{z_0}\Big)
\end{align}
where (a) follows from the Rayleigh fading assumption, (b) follows from the \ac{PGFL} of the Gaussian \ac{PPP} $\Phi_{cp}$, and (c) follows from the change of variables $z_0 = x_0 + y$ with $\dd z_0 = \dd y$. By converting the Cartesian coordinates to polar coordinates, with $h=\lVert z_0\rVert$, we get
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &\overset{}{=} {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\Big(1 - \frac{1}{1+s h^{-\alpha}}\Big)f_H(h|v_0)\dd{h}\Big) \nonumber
\\
&= {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h|v_0)\dd{h}\Big)
\end{align}
By neglecting the correlation of the intra-cluster interfering distances, i.e., the common part $x_0$ in the intra-cluster interfering distances $\lVert x_0 + y\rVert$, $y \in \mathcal{A}^p$, as in \cite{clustered_twc}, we get
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s) &\approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
Similar to the serving distance \ac{PDF} $f_R(r)$, since both the typical device and a potential interfering device have their locations drawn from a normal distribution with variance $\sigma^2$ around the cluster center, the intra-cluster interfering distance has, by definition, a Rayleigh distribution with parameter $\sqrt{2}\sigma$, given by $f_H(h)= \frac{h}{2 \sigma^2} {\rm e}^{\frac{-h^2}{4 \sigma^2}}$. Hence, Lemma 2 is proven.
\section {Proof of lemma 3}
First, to prove concavity, we proceed as follows.
\begin{align}
\frac{\partial \mathbb{P}_o}{\partial b_i} &= q_i + q_i\big(\overline{n}(1-b_i)e^{-\overline{n}b_i}-(1-e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0) \\
\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2} &= -q_i\big(\overline{n}e^{-\overline{n}b_i} + \overline{n}^2(1-b_i)e^{-\overline{n}b_i} + \overline{n}e^{-\overline{n}b_i}\big)\mathbb{P}(R_1>R_0)
\end{align}
It is clear that the second derivative $\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2}$ is always negative, and
$\frac{\partial^2 \mathbb{P}_o}{\partial b_i \partial b_j}=0$ for all $i\neq j$. Hence, the Hessian matrix \textbf{H}$_{i,j}$ of $\mathbb{P}_o(p^*,\textbf{b})$ w.r.t. $\textbf{b}$ is negative semidefinite, and $\mathbb{P}_o(p^*,\textbf{b})$ is a concave function of $\textbf{b}$. Also, the constraints are linear, which implies that the \ac{KKT} conditions are necessary and sufficient for optimality. The Lagrangian function and the \ac{KKT} conditions are then employed to solve \textbf{P2}.
The Lagrangian function of the offloading gain maximization problem \textbf{P2} is given by
\begin{align}
\mathcal{L}(\textbf{b},w_i,\mu_i,v) =& \sum_{i=1}^{N_f} q_i b_i + q_i(1- b_i)(1-e^{-b_i\overline{n}})\mathbb{P}(R_1>R_0) \nonumber \\
&+ v(M-\sum_{i=1}^{N_f} b_i) + \sum_{i=1}^{N_f} w_i (b_i-1) - \sum_{i=1}^{N_f} \mu_i b_i
\end{align}
where $v$, $w_i$, and $\mu_i$ are the dual variables associated with the equality constraint and the two inequality constraints, respectively. Now, the optimality conditions are written as
\begin{align}
\label{grad}
\grad_{\textbf{b}} \mathcal{L}(\textbf{b}^*,w_i^*,\mu_i^*,v^*) = q_i + q_i&\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*}-(1-e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0) -v^* + w_i^* -\mu_i^*= 0 \\
&w_i^* \geq 0 \\
&\mu_i^* \leq 0 \\
&w_i^* (b_i^* - 1) =0 \\
&\mu_i^* b_i^* = 0\\
&(M-\sum_{i=1}^{N_f} b_i^*) = 0
\end{align}
\begin{enumerate}
\item $w_i^* > 0$: We have $b_i^* = 1$, $\mu_i^*=0$, and
\begin{align}
&q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)= v^* - w_i^* \nonumber \\
\label{cond1_offload}
&v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)
\end{align}
\item $\mu_i^* < 0$: We have $b_i^* = 0$, and $w_i^*=0$, and
\begin{align}
& q_i + \overline{n}q_i\mathbb{P}(R_1>R_0) = v^* + \mu_i^* \nonumber \\
\label{cond2_offload}
&v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)
\end{align}
\item $0 <b_i^*<1$: We have $w_i^*=\mu_i^*=0$, and
\begin{align}
\label{psii_offload}
v^{*} = q_i + q_i\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*} - (1 - e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0)
\end{align}
\end{enumerate}
By combining (\ref{cond1_offload}), (\ref{cond2_offload}), and (\ref{psii_offload}), with the fact that $\sum_{i=1}^{N_f} b_i^*=M$, Lemma 3 is proven.
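Computationally, the proof above suggests a simple procedure: each candidate dual value $v$ yields a caching vector through the three cases, and $v^*$ is found by bisection so that $\sum_{i} b_i^* = M$. A minimal Python sketch of this procedure (helper names are ours; $P$ denotes the scalar $\mathbb{P}(R_1>R_0)$) is:
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

def caching_from_dual(v, q, n_bar, P):
    """Per-file optimal b_i(v) from the KKT cases of Lemma 3."""
    b = np.empty_like(q)
    for i, qi in enumerate(q):
        lo = qi - qi * (1 - np.exp(-n_bar)) * P   # stationarity value at b=1
        hi = qi + n_bar * qi * P                  # stationarity value at b=0
        if v <= lo:
            b[i] = 1.0
        elif v >= hi:
            b[i] = 0.0
        else:                                     # interior case: solve for b
            g = lambda x: qi + qi * (n_bar * (1 - x) * np.exp(-n_bar * x)
                                     - (1 - np.exp(-n_bar * x))) * P - v
            b[i] = brentq(g, 0.0, 1.0)
    return b

def solve_lemma3(q, n_bar, P, M):
    """Bisect the dual variable v* so that sum(b_i) = M."""
    h = lambda v: caching_from_dual(v, q, n_bar, P).sum() - M
    v_star = brentq(h, 0.0, (q * (1 + n_bar * P)).max())
    return caching_from_dual(v_star, q, n_bar, P)
\end{verbatim}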
\end{appendices}
\bibliographystyle{IEEEtran}
\section{Introduction}
Caching at mobile devices significantly improves system performance by facilitating device-to-device (D2D) communications, which enhances the spectrum efficiency and alleviates the heavy burden on backhaul links \cite{Femtocaching}. Modeling the cache-enabled heterogeneous networks, including small cell base stations (SBSs) and mobile devices, follows two main directions in the literature. The first line of work focuses on the fundamental throughput scaling results by assuming a simple protocol channel model \cite{Femtocaching,golrezaei2014base}, known as the protocol model, where two devices can communicate if they are within a certain distance. The second line of work, defined as the physical interference model, considers a more realistic model for the underlying physical layer \cite{andreev2015analyzing,cache_schedule}. In the following, we review some of the works relevant to the second line, focusing mainly on the energy efficiency (EE) and delay analysis of wireless caching networks.
The physical interference model is based on the fundamental signal-to-interference ratio (SIR) metric, and therefore, is applicable to any wireless communication system. Modeling devices' (users') locations as a PPP is widely employed in the literature, especially in the wireless caching area \cite{andreev2015analyzing,cache_schedule,energy_efficiency,ee_BS,hajri2018energy}. However, a realistic model for D2D caching networks requires that a given device typically has multiple proximate devices, any of which can potentially act as a serving device. This deployment is known as clustered device deployment, which can be characterized by cluster processes \cite{haenggi2012stochastic}. Unlike the popular PPP approach, the authors in \cite{clustered_twc,clustered_tcom,8070464} developed stochastic geometry-based models to characterize the performance of content placement in clustered D2D networks. In \cite{clustered_twc}, the authors discuss two strategies of content placement in a Poisson cluster process (PCP) deployment: first, each device randomly chooses its serving device from its local cluster; second, each device connects to its $k$-th closest transmitting device in its local cluster. The authors characterize the optimal number of D2D transmitters that must be simultaneously activated in each cluster to maximize the area spectral efficiency. The performance of cluster-centric content placement is characterized in \cite{clustered_tcom}, where the content of interest in each cluster is cached closer to the cluster center, such that the collective performance of all the devices in each cluster is optimized. Inspired by the Matern hard-core point process, which captures pairwise interactions between nodes, the authors in \cite{8070464} devised a novel spatially correlated caching strategy called hard-core placement (HCP) such that the D2D devices caching the same content are never closer to each other than the exclusion radius.
Energy efficiency in wireless caching networks is widely studied in the literature \cite{energy_efficiency,ee_BS,hajri2018energy}.
For example, an optimal caching problem is formulated in \cite{energy_efficiency} to minimize the energy consumption of a wireless network. The authors consider a cooperative wireless caching network where relay nodes cooperate with the devices to cache the most popular files in order to minimize energy consumption. In \cite{ee_BS}, the authors investigate how caching at BSs can improve EE of wireless access networks. The condition when the EE can benefit from caching is characterized, and the optimal cache capacity that maximizes the network EE is found. It is shown that EE benefit from caching depends on content popularity, backhaul capacity, and interference level.
The authors in \cite{hajri2018energy} exploit the spatial repartitions of devices and the correlation in their content popularity profiles to improve the achievable EE. The EE optimization problem is decoupled into two related subproblems: the first one addresses the issue of content popularity modeling, and the second one investigates the impact of exploiting the spatial repartitions of devices. The authors derive a closed-form expression of the achievable EE and find the optimal density of active small cells that maximizes it. It is shown that the small base station allocation algorithm improves the energy efficiency and hit probability. However, the problem of EE for D2D-based caching is not yet addressed in the literature.
Recently, the joint optimization of delay and energy in wireless caching has been conducted; see, for instance, \cite{wu2018energy,huang2018energy,jiang2018energy}. The authors in \cite{wu2018energy} jointly optimize the delay and energy in a cache-enabled dense small cell network. The authors formulate the energy-delay optimization problem as a mixed integer programming problem, where file placement, device association to the small cells, and power control are jointly considered. To model the energy consumption and end-to-end file delivery delay tradeoff, a utility function linearly combining these two metrics is used as an objective function of the optimization problem. An efficient algorithm is proposed to approach the optimal association and power solution, which could achieve the optimal tradeoff between energy consumption and end-to-end file delivery delay. In \cite{huang2018energy}, the authors showed that with caching, the energy consumption can be reduced by extending the transmission time; however, caching may incur wasted energy if the device never needs the cached content. Based on the random content request delay, the authors study the maximization of EE subject to a hard delay constraint in an additive white Gaussian noise channel. It is shown that the EE of a system with caching can be significantly improved with increasing content request probability and target transmission rate compared with the traditional on-demand scheme, in which the BS transmits a content file only after it is requested by the user. However, the problem of energy consumption and joint communication and caching for clustered D2D networks is not yet addressed in the literature.
In this paper, we conduct a comprehensive performance analysis and optimization of the joint communication and caching for a clustered D2D network, where the devices have unused memory to cache some files, following a random probabilistic caching scheme. Our network model effectively characterizes the stochastic nature of channel fading and the clustered geographic locations of devices. Furthermore, this paper also puts some emphasis on the need for considering the traffic dynamics and the rate of requests when studying the delay incurred to deliver requests to devices. Our work is the first in the literature to conduct a comprehensive spatial analysis of a doubly Poisson cluster process (also called doubly Poisson point process \cite{haenggi2012stochastic}) with the devices adopting a slotted-ALOHA random access technique to access a shared channel. Also, we are the first to incorporate the spatio-temporal analysis in wireless caching networks by combining tools from stochastic geometry and queuing theory, in order to analyze and minimize the delay.
The main contributions of this paper are summarized below.
\begin{itemize}
\item We consider a Thomas cluster process (TCP) where the devices are spatially distributed as groups in clusters. The cluster centers are drawn from a parent PPP, and the cluster members are normally distributed around the centers, forming a Gaussian PPP. This organization of the parent and offspring PPPs forms the so-called doubly PPP.
\item \textcolor{red}{We conduct the coverage probability analysis} where the devices adopt a slotted-ALOHA random access technique. We then jointly optimize the access probability and caching probability to maximize the cluster offloading gain. A closed-form solution of the optimal caching probability sub-problem is provided.
\item The energy consumption problem is then formulated and shown to be convex and the optimal caching probability is also formulated.
\item By combining tools from stochastic geometry as well as queuing theory,
we minimize the per request weighted average delay by jointly optimizing bandwidth allocation between D2D and BSs' communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the block coordinate descent optimization technique, the joint minimization problem is solved in an iterative manner.
\item We validate our theoretical findings via simulations. Results show a significant improvement in the network performance metrics, namely, the offloading gain, energy consumption, and average delay, as compared to other caching schemes proposed earlier in the literature.
\end{itemize}
The rest of this paper is organized as follows. Section II and Section III discuss the system model and the maximum offloading gain respectively. The energy consumption is discussed in Section IV and the delay analysis is conducted in Section V. Numerical results are then presented in Section VI before we conclude the paper in Section VII.
\section{System Model}
\subsection{System Setup}
We model the location of the mobile devices with a Thomas cluster process in which the parent points are drawn from a PPP $\Phi_p$ with density $\lambda_p$, and the daughter points are drawn from a Gaussian PPP around each parent point. In fact, the TCP is considered as a doubly Poisson cluster process where the daughter points are normally scattered with variance $\sigma^2$ around each parent point in $\mathbb{R}^2$ \cite{haenggi2012stochastic}.
The parent points and offspring are referred to as cluster centers and cluster members, respectively. The number of cluster members in each cluster is a Poisson random variable with mean $\overline{n}$. The density function of the location of a cluster member relative to its cluster center is
\begin{equation}
f_Y(y) = \frac{1}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad \quad y \in \mathbb{R}^2
\label{pcp}
\end{equation}
where $\lVert .\rVert$ is the Euclidean norm. The intensity function of a cluster is given by $\lambda_c(y) = \frac{\overline{n}}{2\pi\sigma^2}\textrm{exp}\big(-\frac{\lVert y\rVert^2}{2\sigma^2}\big)$. Therefore, the intensity of the process is given by $\lambda = \overline{n}\lambda_p$. We assume that the BSs' distribution follows another PPP $\Phi_{bs}$ with density $\lambda_{bs}$, which is independent of $\Phi_p$.
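For intuition, a realization of such a Thomas cluster process can be sampled as follows (a minimal Python sketch on a finite window; edge effects are ignored, units must be consistent, and the helper name is ours):
\begin{verbatim}
import numpy as np

def sample_tcp(lam_p, n_bar, sigma, side, rng=None):
    """Sample a Thomas cluster process on a [0, side]^2 window.

    Parent centers: PPP of density lam_p; each cluster draws a
    Poisson(n_bar) number of members scattered N(0, sigma^2 I)
    around its center."""
    rng = np.random.default_rng(rng)
    n_parents = rng.poisson(lam_p * side**2)
    parents = rng.uniform(0.0, side, size=(n_parents, 2))
    members = [c + sigma * rng.standard_normal((rng.poisson(n_bar), 2))
               for c in parents]
    return parents, np.vstack(members) if members else np.empty((0, 2))
\end{verbatim}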
\subsection{Content Popularity and Probabilistic Caching Placement}
We assume that each device has a surplus memory of size $M$ designated for caching files.
The total number of files is $N_f> M$ and the set (library) of content indices is denoted as $\mathcal{F} = \{1, 2, \dots , N_f\}$. These files represent the content catalog that all the devices in a cluster may request, indexed in descending order of popularity. The probability that the $i$-th file is requested follows a Zipf distribution given by
\begin{equation}
q_i = \frac{ i^{-\beta} }{\sum_{k=1}^{N_f}k^{-\beta}},
\label{zipf}
\end{equation}
where $\beta$ is a parameter that reflects how skewed the popularity distribution is. For example, if $\beta= 0$, the popularity of the files has a uniform distribution. Increasing $\beta$ increases the disparity among the files popularity such that lower indexed files have higher popularity. By definition, $\sum_{i=1}^{N_f}q_i = 1$.
We use Zipf's distribution to model the popularity of files per cluster.
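For instance, the popularity vector can be computed as in the short Python sketch below (the helper name is ours; the parameter values are illustrative):
\begin{verbatim}
import numpy as np

def zipf_popularity(Nf, beta):
    """Zipf request probabilities q_i defined above."""
    q = np.arange(1, Nf + 1, dtype=float) ** (-beta)
    return q / q.sum()

q = zipf_popularity(Nf=500, beta=1.0)   # illustrative values
print(q[:3], q.sum())                   # most popular files; sums to 1
\end{verbatim}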
D2D communication is enabled within each cluster to deliver popular content. It is assumed that the devices adopt a slotted-ALOHA medium access protocol, where each transmitter independently and randomly accesses the channel with the same probability $p$. This implies that multiple active D2D links might coexist within a cluster. Therefore, $p$ is a design parameter that directly controls the intra-cluster interference, as described in the next section.
A probabilistic caching model is assumed, where the content is randomly and independently placed in the cache memories of different devices in the same cluster, according to the same distribution. The probability that a generic device stores a particular file $i$ is denoted as $b_i$, $0 \leq b_i \leq 1$ for all $i \in \mathcal{F}$. To avoid duplicate caching of the same content within the memory of the same device, we follow the caching approach proposed in \cite{blaszczyszyn2015optimal} and illustrated in Fig. \ref{prob_cache_example}.
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{prob_cache_exam}
\caption {The cache memory of size $M = 3$ is equally divided into $3$ blocks of unit size. Starting from content $i=1$ to $i=N_f$, each content sequentially fills these $3$ memory blocks by an amount $b_i$. The amounts (probabilities) $b_i$ eventually fill all $3$ blocks since $\sum_{i=1}^{N_f} b_i = M$ \cite{blaszczyszyn2015optimal}. Then a random number $\in [0,1]$ is generated, and a content $i$ is chosen from each block, whose $b_i$ fills the part intersecting with the generated random number. In this way, in the given example, the contents $\{1, 2, 4\}$ are chosen to be cached.}
\label{prob_cache_example}
\end{center}
\end{figure}
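The block construction of the figure can be implemented compactly: place the cumulative caching probabilities on a line of length $M$, draw a single uniform offset $u \in [0,1)$, and pick the file covering position $u+m$ in each unit block $m$. The following Python sketch (helper name ours) selects each file $i$ with marginal probability $b_i$, and the $M$ selected files are distinct whenever $b_i \leq 1$:
\begin{verbatim}
import numpy as np

def sample_cache(b, M, rng=None):
    """Draw M distinct cached files with marginals b_i (needs sum(b) = M)."""
    rng = np.random.default_rng(rng)
    assert abs(np.sum(b) - M) < 1e-9
    edges = np.concatenate(([0.0], np.cumsum(b)))  # file i spans [e_i, e_i+1)
    picks = rng.random() + np.arange(M)            # same offset in each block
    return np.searchsorted(edges, picks, side="right") - 1
\end{verbatim}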
If a device caches the desired file, the device directly retrieves the content. However, if the device does not cache the file, it downloads it from any neighboring device that caches the file (henceforth called catering device) in the same cluster. According to the proposed access model, the probability that a chosen catering device is admitted to access the channel is the access probability $p$. Finally, the device attaches to the nearest BS as a last resort to download the content which is not cached entirely within the device's cluster. We assume that the D2D communication is operating as out-of-band D2D. $W_{1}$ and $W_{2}$ denote respectively the bandwidth allocated to the D2D and BSs' communication, and the total bandwidth for the system is denoted as $W=W_{1} + W_{2}$. \textcolor{blue}{It is assumed that device requests are served in a random manner, i.e., among the cluster devices, a random device request is chosen to be scheduled and a content is served.}
In the following, we aim at studying and optimizing three important metrics that are widely studied in the literature. The first metric is the offloading gain, which is defined as the probability of obtaining the requested file from the local cluster, either from the self-cache or from a neighboring device in the same cluster, with a rate greater than a certain threshold. The second metric is the energy consumption, which represents the dissipated energy when downloading files either from the BSs or via D2D communication. The third metric is the latency, which accounts for the weighted average delay over all requests served from the BSs or via D2D communication.
\section{\textcolor{blue}{Maximum Offloading Gain}}
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{distance1.png}
\caption {Illustration of the representative cluster and one interfering cluster.}
\label{distance}
\end{figure}
Let us define the successful offloading probability as the probability that a device can find a requested file in its own cache, or in the caches of neighboring devices within the same cluster with D2D link data rate higher than a required threshold $R_0$. Without loss of generality, we conduct the analysis for a cluster whose center is $x_0\in \Phi_p$ (referred to as representative cluster), and the device who requests the content (henceforth called typical device) is located at the origin. We denote the location of the D2D-TX by $y_0$ w.r.t. $x_0$, where $x_0, y_0\in \mathbb{R}^2$. The distance from the typical device (D2D-RX of interest) to this D2D-TX is denoted as $r=\lVert x_0+y_0\rVert$, which is a realization of a random variable $R$ whose distribution is described later. This setup is illustrated in Fig. \ref{distance}. It is assumed that a requested file is served from a randomly selected catering device, which is, in turn, admitted to access the channel based on the slotted-ALOHA protocol. \textcolor{blue}{The successful offloading probability is then given by}
\begin{align}
\label{offloading_gain}
\mathbb{P}_o(p,b_i) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}})\int_{r=0}^{\infty}f(r) \mathbb{P}(R_{1}(r)>R_0) \dd{r},
\end{align}
where $R_{1}(r)$ is the achievable rate when downloading a content from a catering device at a distance $r$ from the typical device, with pdf $f(r)$. The first term on the right-hand side is the probability of requesting a locally cached file (self-cache), whereas the remaining term incorporates the probability that a requested file $i$ is cached at least at one cluster member and is downloadable with a rate greater than $R_0$. To clarify further, since the number of devices per cluster has a Poisson distribution, the probability that there are $k$ devices per cluster is equal to $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$. Accordingly, the probability that there are $k$ devices caching a content $i$ is equal to $\frac{(b_i\overline{n})^k e^{-b_i\overline{n}}}{k!}$. Hence, the probability that at least one device caches a content $i$ is one minus the void probability (i.e., $k=0$), which equals $1 - e^{-b_i\overline{n}}$.
In the following, we first compute the probability $ \mathbb{P}(R_{1}(r)>R_0)$ conditioning on the distance $r$ between the typical and catering device, then we relax this condition. The received power at the typical device from a catering D2D-TX located at $y_0$ from the cluster center is given by
\begin{align}
P &= P_d g_0 \lVert x_0+y_0\rVert^{-\alpha}= P_d g_0 r^{-\alpha}
\label{pwr}
\end{align}
where $P_d$ denotes the D2D transmission power, $g_0 \sim $ exp(1) is an exponential random variable modeling Rayleigh fading, and $\alpha > 2$ is the path loss exponent. Under the above assumption, the typical device sees two types of interference, namely, the intra- and inter-cluster interference. We first describe the inter-cluster interference, then the intra-cluster interference is characterized. The set of active devices in any remote cluster is denoted as $\mathcal{B}^p$, where $p$ refers to the access probability. Similarly, the set of active devices in the local cluster is denoted as $\mathcal{A}^p$. Similar to (\ref{pwr}), the interference from the simultaneously active D2D-TXs outside the representative cluster, seen at the typical device, is given by
\begin{align}
I_{\Phi_p^{!}} &= \sum_{x \in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{yx} \lVert x+y\rVert^{-\alpha}\\
& = \sum_{x\in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{u} u^{-\alpha}
\end{align}
where $\Phi_p^{!}=\Phi_p \setminus x_0$ for ease of notation, $y$ is the marginal distance between a potential interfering device and its cluster center at $x \in \Phi_p$, $u = \lVert x+y\rVert$ is a realization of a random variable $U$ modeling the inter-cluster interfering distance (shown in Fig. \ref{distance}), $g_{yx} \sim $ exp(1) are i.i.d. exponential random variables modeling Rayleigh fading, and $g_{u} = g_{yx}$ for ease of notation. The intra-cluster interference is then given by
\begin{align}
I_{\Phi_c} &= \sum_{y\in \mathcal{A}^p} P_d g_{yx_0} \lVert x_0+y\rVert^{-\alpha}\\
& = \sum_{y\in \mathcal{A}^p} P_d g_{h} h^{-\alpha}
\end{align}
where $y$ is the marginal distance between the intra-cluster interfering devices and the cluster center at $x_0 \in \Phi_p$, $h = \lVert x_0+y\rVert$ is a realization of a random variable $H$ modeling the intra-cluster interfering distance (shown in Fig. \ref{distance}), $g_{yx_0} \sim $ exp(1) are i.i.d. exponential random variables modeling Rayleigh fading, and $g_{h} = g_{yx_0}$ for ease of notation. From the thinning theorem \cite{haenggi2012stochastic}, the set of active transmitters following the slotted-ALOHA medium access forms a PPP $\Phi_c^{p}$ \textcolor{red}{whose intensity function is given by}
\begin{align}
\lambda_{cp} = p\lambda_{c}(y) = p\overline{n}f_Y(y) =\frac{p\overline{n}}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad y \in \mathbb{R}^2
\end{align}
Assuming that the thermal noise is negligible as compared to the aggregate interference, the $\mathrm{SIR}$ at the typical device is written as
\begin{equation}
\gamma_{r}=\frac{P}{I_{\Phi_p^{!}} + I_{\Phi_c}} = \frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}}
\end{equation}
\textcolor{magenta}{A fixed rate transmission model is adopted in our study}, where each TX (D2D or BS) transmits at a fixed rate of log$_2[1+\theta]$ bits/sec/Hz, where $\theta$ is a design parameter. Since the rate is fixed, the transmission is subject to outage due to fading and interference fluctuations. Consequently, the de facto average transmission rate (i.e., average throughput) is given by
\begin{equation}
\label{rate_eqn}
R = W\textrm{ log$_{2}$}[1+ \theta]\mathrm{P_c},
\end{equation}
where $W$ is the bandwidth, $\theta$ is the pre-determined threshold for successful reception, $\mathrm{P_c} =\mathbb{E}(\textbf{1}\{\mathrm{SIR}>\theta\})$ is the coverage probability, and $\textbf{1}\{.\}$ is the indicator function. The D2D communication rate under ALOHA scheme is then given by
\begin{equation}
\label{rate_eqn1}
R_{1}(r) = pW_{1} {\rm log}_2 \big(1 + \theta \big) \textbf{1}\{ \gamma_{r} > \theta\}
\end{equation}
Then, the probability $ \mathbb{P}(R_{1}(r)>R_0)$ is derived as follows.
\begin{align}
\mathbb{P}(R_{1}(r)>R_0)&= \mathbb{P} \big(pW_{1} {\rm log}_2 (1 + \theta)\textbf{1}\{ \gamma_{r} > \theta\}
>R_0\big) \nonumber \\
&=\mathbb{P} \big(\textbf{1}\{ \gamma_{r} > \theta\}
>\frac{R_0}{pW_{1} {\rm log}_2 (1 + \theta )}\big) \nonumber \\
&\overset{(a)}{=}\mathbb{E} \big(\textbf{1}\{ \gamma_{r} > \theta\}\big) \nonumber \\
&= \mathbb{P}\big(\frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}} > \theta\big)
\nonumber \\
&= \mathbb{P}\big( g_0 > \frac{\theta r^{\alpha}}{P_d} [I_{\Phi_p^{!}} + I_{\Phi_c}] \big)
\nonumber \\
&\overset{(b)}{=}\mathbb{E}_{I_{\Phi_p^{!}},I_{\Phi_c}}\Big[\text{exp}\big(\frac{-\theta r^{\alpha}}{P_d}{[I_{\Phi_p^{!}} + I_{\Phi_c}] }\big)\Big]
\nonumber \\
&\overset{(c)}{=} \mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)
\end{align}
\textcolor{blue}{where (a) follows from the assumption that $R_0 < pW_{1} {\rm log}_2 \big(1 + \theta \big)$ always holds; otherwise, it is infeasible to have $\mathbb{P}(R_{1}(r)>R_0)$ greater than zero}. (b) follows from the fact that $g_0$ follows an exponential distribution, and (c) follows from the independence of the intra- and inter-cluster interference and their Laplace transforms.
In what follows, we first derive the Laplace transform of interference to get $\mathbb{P}(R_{1}(r)>R_0)$. Then, we formulate the offloading gain maximization problem.
\begin{lemma}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_inter}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$, and $f_U(u|v) = \mathrm{Rice} (u| v, \sigma)$ represents Rice's probability density function of parameter $\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix A.
\end{proof}
\begin{lemma}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_intra}
\mathscr{L}_{I_{\Phi_c} }(s) = {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where $f_H(h) =\mathrm{Rayleigh}(h,\sqrt{2}\sigma)$ represents Rayleigh's probability density function with a scale parameter $\sqrt{2}\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix B.
\end{proof}
For the serving distance distribution $f(r)$, since both the typical device and a potential catering device have their locations drawn from a normal distribution with variance $\sigma^2$ around the cluster center, the serving distance has a Rayleigh distribution with parameter $\sqrt{2}\sigma$, given by
\begin{align}
\label{rayleigh}
f(r)= \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}, \quad r>0
\end{align}
From (\ref{LT_inter}), (\ref{LT_intra}), and (\ref{rayleigh}), the offloading gain in (\ref{offloading_gain}) is characterized as
\begin{align}
\label{offloading_gain_1}
\mathbb{P}_o(p,b_i) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}}) \underbrace{\int_{r=0}^{\infty}
\frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)\dd{r}}_{\mathbb{P}(R_1>R_0)}.
\end{align}
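The probability $\mathbb{P}(R_1>R_0)$ above can be computed by averaging the product of the two Laplace transforms over the Rayleigh serving distance. The Python sketch below illustrates this; it reuses the \texttt{laplace\_inter} helper sketched earlier after the proof of Lemma 1, and the nested quadrature is slow and meant for validation only:
\begin{verbatim}
import numpy as np
from scipy import integrate

def laplace_intra(s, p, n_bar, alpha, sigma):
    """Intra-cluster Laplace transform of Lemma 2 (Rayleigh(sqrt(2)*sigma))."""
    f_H = lambda h: h / (2 * sigma**2) * np.exp(-h**2 / (4 * sigma**2))
    g = lambda h: s / (s + h**alpha) * f_H(h)
    return np.exp(-p * n_bar * integrate.quad(g, 0.0, np.inf)[0])

def prob_R1_gt_R0(p, n_bar, lam_p, alpha, sigma, theta, P_d=1.0):
    """Average the two Laplace transforms over the serving distance pdf."""
    f_R = lambda r: r / (2 * sigma**2) * np.exp(-r**2 / (4 * sigma**2))
    def integrand(r):
        s = theta * r**alpha / P_d
        return (f_R(r) * laplace_inter(s, p, n_bar, lam_p, alpha, sigma)
                       * laplace_intra(s, p, n_bar, alpha, sigma))
    return integrate.quad(integrand, 0.0, np.inf)[0]
\end{verbatim}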
Hence, the offloading gain maximization problem can be formulated as
\begin{align}
\label{optimize_eqn_p}
\textbf{P1:} \quad &\underset{p, b_i}{\text{max}} \quad \mathbb{P}_o(p,b_i) \\
\label{const110}
&\textrm{s.t.}\quad \sum_{i=1}^{N_f} b_i = M, \\
\label{const111}
& b_i \in [ 0, 1], \\
\label{const112}
& p \in [ 0, 1],
\end{align}
where (\ref{const110}) is the device cache size constraint, which is consistent with the illustration of the example in Fig. \ref{prob_cache_example}. Since the offloading gain depends on the access probability $p$, and $p$ appears in a complex exponential term in $\mathbb{P}(R_1>R_0)$, it is hard to analytically characterize (e.g., show concavity of) the objective function or find a tractable expression for the optimal access probability. In order to tackle this, we propose to solve \textbf{P1} by \textcolor{blue}{first finding} the optimal $p^*$ that maximizes the probability $\mathbb{P}(R_{1}>R_0)$ over the interval $p \in [ 0, 1]$. Then, the obtained $p^*$ is used to solve for the caching probability $b_i$ in the optimization problem below. \textcolor{blue}{Since $p$ and $b_i$ are separable in the structure of \textbf{P1}, it is possible to solve numerically for $p^*$ and then substitute it to get $b_i^*$.}
\begin{align}
\label{optimize_eqn_b_i}
\textbf{P2:} \quad &\underset{b_i}{\text{max}} \quad \mathbb{P}_o(p^*,b_i) \\
\label{const11}
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
The optimal caching probability is formulated in the next lemma.
\begin{lemma}
$\mathbb{P}_o(p^*,b_i)$ is a concave function w.r.t. $b_i$ and the optimal caching probability $b_i^{*}$ that maximizes the offloading gain is given by
\[
b_{i}^{*}=\left\{
\begin{array}{ll}
1 \quad\quad\quad , v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)\\
0 \quad\quad\quad, v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)\\
\psi(v^{*}) \quad, {\rm otherwise}
\end{array}
\right.
\]
where $\psi(v^{*})$ is the solution of (\ref{psii_offload}) for $b_i^{*}$, with the dual variable $v^{*}$ chosen to satisfy $\sum_{i=1}^{N_f} b_i^*=M$.
\end{lemma}
\begin{proof}
Please see Appendix C.
\end{proof}
\section{\textcolor{blue}{Energy Consumption}}
In this section, we formulate the energy consumption minimization problem for the clustered D2D caching network. In fact, significant energy consumption occurs only when a content is served via D2D or BSs' transmission. We consider the time cost $c_{d_i}$ as the time it takes to download the $i$-th content from a neighboring device in the same cluster. Considering the size $S_i$ of the $i$-th ranked content, $c_{d_i}=S_i/R_1 $, where $R_1 $ denotes the average rate of the D2D communication. Similarly, we have $c_{b_i} = S_i/R_2 $ when the $i$-th content is served by the BS with average rate $R_2 $. The average energy consumption when downloading files by the devices in the representative cluster is given by
\begin{align}
\label{energy_avrg}
E_{av} = \sum_{k=1}^{\infty} E(b_i|k)\mathbb{P}(n=k)
\end{align}
where $\mathbb{P}(n=k)$ is the probability that there are $k$ devices in the cluster $x_0$, and $E(b_i|k)$ is the consumed energy conditioning on having $k$ devices within the cluster $x_0$, written similar to \cite{energy_efficiency} as
\begin{equation}
E(b_i|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\label{energy}
\end{equation}
where $\mathbb{P}_{j,i}^d$ and $\mathbb{P}_{j,i}^b$ represent the probability of obtaining the $i$-th content by the $j$-th device from the local cluster, i.e., via D2D communication, and the BS, respectively. $P_b$ denotes the BS transmission power. Given that there are $k$ devices per cluster, it is obvious that $\mathbb{P}_{j,i}^b=(1-b_i)^{k}$, and $\mathbb{P}_{j,i}^d=(1 - b_i)\big(1-(1-b_i)^{k-1}\big)$.
The average rates $R_1$ and $R_2$ are now computed to get a closed-form expression for $E(b_i|k)$.
From equation (\ref{rate_eqn}), we need to obtain the D2D coverage probability $\mathrm{P_{c_d}}$ and the BS coverage probability $\mathrm{P_{c_b}}$ to calculate $R_1$ and $R_2$, respectively. Given the number of devices $k$ in the representative cluster, the Laplace transform of the inter-cluster interference is as obtained in (\ref{LT_inter}). \textcolor{blue}{However, the intra-cluster interfering devices no longer form a Gaussian PPP since the number of devices becomes fixed, i.e., it is not a Poisson random number as before. To facilitate the analysis, for every realization $k$, we assume that the intra-cluster interfering devices form a Gaussian PPP with intensity function $pkf_Y(y)$.}
\textcolor{magenta}{Such an assumption is mandatory for tractability and is validated in the numerical section.}
From Lemma 2, the intra-cluster Laplace transform conditioning on $k$ can be approximated as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|k) &\approx {\rm exp}\Big(-pk \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\nonumber
\end{align}
and directly, the D2D coverage probability is given by
\begin{align}
\label{p_b_d2d}
\mathrm{P_{c_d}} =
\int_{r=0}^{\infty} \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big|k\big)\dd{r}
\end{align}
\textcolor{blue}{Again, the devices adopt the slotted-ALOHA with access probability $p$, which is computed over the interval [0,1] to maximize $\mathrm{P_{c_d}}$.} Similarly, under the PPP $\Phi_{bs}$, and based on the nearest BS association principle, it is shown in \cite{andrews2011tractable} that the BS coverage probability can be expressed as
\begin{equation}
\mathrm{P_{c_b}} =\frac{1}{{}_2 F_1(1,-\delta;1-\delta;-\theta)},
\label{p_b_bs}
\end{equation}
where ${}_2 F_1(.)$ is the Gaussian hypergeometric function and $\delta = 2/\alpha$. Given the coverage probabilities $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ from (\ref{p_b_d2d}) and (\ref{p_b_bs}), $R_1 $ and $R_2 $ can be calculated from (\ref{rate_eqn}), and hence $E(b_i|k)$ is expressed in a closed-form.
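As an illustration of this step, $\mathrm{P_{c_b}}$ in (\ref{p_b_bs}) can be evaluated directly with a standard special-function library; the bandwidth value below is a hypothetical placeholder, and the rate model $R = \mathrm{P_c}\, W \log_2(1+\theta)$ is assumed to match (\ref{rate_eqn}).
\begin{verbatim}
import numpy as np
from scipy.special import hyp2f1

alpha, theta = 4.0, 1.0                   # theta = 0 dB
delta = 2.0 / alpha
P_cb = 1.0 / hyp2f1(1.0, -delta, 1.0 - delta, -theta)   # BS coverage prob.
W2 = 10e6                                 # hypothetical BS bandwidth [Hz]
R2 = P_cb * W2 * np.log2(1.0 + theta)     # average BS rate
\end{verbatim}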
\subsection{Energy Consumption Minimization}
The energy minimization problem can be formulated as
\begin{align}
\label{optimize_eqn1}
&\textbf{P3:} \quad\underset{b_i}{\text{min}} \quad E(b_i|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
In the next lemma, we derive a sufficient condition for the convexity of $E(b_i|k)$.
\begin{lemma}
\label{convex_E}
The energy consumption $E(b_i|k)$ is convex if $\frac{P_b}{R_2}>\frac{P_d}{R_1}$.
\end{lemma}
\begin{proof}
We proceed by deriving the Hessian matrix of $E$. The Hessian matrix of $E(b_i|k)$ w.r.t. the caching variables $(b_1,\dots,b_{N_f})$ is \textbf{H}$_{i,j} = \frac{\partial^2 E(b_i|k)}{\partial b_i \partial b_j}$, $\forall i,j \in \mathcal{F}$. \textbf{H} is a diagonal matrix whose $i$-th diagonal entry is given by $k(k-1) S_i\Big(\frac{P_b}{R_2}-\frac{P_d}{R_1}\Big)q_i(1 - b_i)^{k-2}$.
Since the obtained Hessian matrix is diagonal, $\textbf{H}$ is positive semidefinite (and hence $E(b_i|k)$ is convex) if all the diagonal entries are nonnegative, i.e., when
$\frac{P_b}{R_2}>\frac{P_d}{R_1}$. In practice, it is reasonable to assume that $P_b \gg P_d$; as reported in \cite{ericsson}, the BS transmit power is roughly 100 times the D2D transmit power.
\end{proof}
As a result of Lemma 3, the optimal caching probability can be computed to minimize $E(b_i|k)$.
\begin{lemma}
The optimal caching probability $b_i^{*}$ for the energy minimization problem \textbf{P3} is given by,
\begin{align}
b_i^* = \Bigg[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Bigg]^{+}
\end{align}
where $v^{*}$ satisfies the maximum cache constraint $\sum_{i=1}^{N_f} \Big[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Big]^{+}=M$, and $[x]^+ = \max(x,0)$.
\end{lemma}
\begin{proof}
The proof proceeds in a similar manner to Lemma 3 and so is omitted.
\end{proof}
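Mirroring the earlier sketch for \textbf{P2}, $b_i^*$ can be obtained by bracketing $v^*$ so that the cache constraint is met; all numerical values below (content sizes and the ratios $P_d/R_1$, $P_b/R_2$) are hypothetical.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

k, M, Nf, beta = 5, 10, 100, 1.0
q = np.arange(1, Nf + 1) ** (-beta); q /= q.sum()
S = np.full(Nf, 3e6)                      # hypothetical content sizes [bits]
Pd_R1, Pb_R2 = 0.2e-6, 4.0e-6             # hypothetical P_d/R_1 and P_b/R_2

def b_of_v(v):
    base = (v + k**2*q*S*Pd_R1) / (k*q*S*(Pd_R1 - Pb_R2))
    base = np.clip(base, 0.0, None)       # base < 0 maps to the b_i = 1 corner
    return np.maximum(1.0 - base ** (1.0/(k - 1)), 0.0)

v_lo = -10 * (k**2*q*S*Pd_R1 + k*q*S*(Pb_R2 - Pd_R1)).max()
v_star = brentq(lambda v: b_of_v(v).sum() - M, v_lo, 0.0)
b_star = b_of_v(v_star)
\end{verbatim}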
By substituting $b_i^*$ into (\ref{energy_avrg}), the average energy consumption per cluster is obtained. In the remainder of the paper, we study and minimize the weighted average delay per request for the proposed system.
\section{Delay Analysis}
In this section, the delay analysis and minimization are discussed. A joint stochastic geometry and queueing theory model is exploited to study this problem. The delay analysis incorporates the study of a system of spatially interacting queues. To simplify the mathematical analysis, we further consider that only one D2D link can be active within a cluster of $k$ devices, where $k$ is fixed. As shown later, such an assumption facilitates the analysis by yielding simple expressions.
\textcolor{blue}{We begin by deriving the D2D coverage probability under the above assumption, which is used later in this section.}
\begin{lemma}
\label{coverage_lemma}
The D2D coverage probability of the proposed clustered model with one active D2D link within a cluster is given by
\begin{align}
\label{v_0}
\mathrm{P_{c_d}} = \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)} ,
\end{align}
where $Z(\theta,\alpha,\sigma) = (\pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2})$.
\end{lemma}
\begin{proof}
The result can be proved by using the displacement theory of the PPP \cite{daley2007introduction}, and then proceeding in a similar manner to Lemma 1 and 2. The proof is presented in Appendix D for completeness.
\end{proof}
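Numerically, (\ref{v_0}) is straightforward to evaluate; the sketch below uses the simulation parameters of the numerical section (with $\lambda_p$ converted to clusters/m$^2$) and is meant only as a sanity check of the closed form.
\begin{verbatim}
import numpy as np
from scipy.special import gamma

lam_p = 50e-6                             # 50 clusters/km^2 in m^-2
theta, alpha, sigma = 1.0, 4.0, 10.0      # theta = 0 dB, sigma in meters
Z = (np.pi * lam_p * theta**(2/alpha) * gamma(1 + 2/alpha)
     * gamma(1 - 2/alpha) + 1.0/(4*sigma**2))
P_cd = 1.0 / (4 * sigma**2 * Z)           # D2D coverage, here approx. 0.91
\end{verbatim}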
In the following, we firstly describe the traffic model of the network, then we formulate the delay minimization problem.
\begin{figure}
\begin{center}
\includegraphics[width=3.5in]{delay_queue}
\caption {The traffic model of the requests in a given cluster. Two M/G/1 queues $Q_1$ and $Q_2$ are assumed which represent respectively the requests served by the D2D and BSs' communication.}
\label{delay_queue}
\end{center}
\end{figure}
\subsection{Traffic Model}
We assume that the aggregate request arrival process from the devices in each cluster follows a Poisson arrival process with parameter $\zeta_{tot}$ (requests per time slot). As shown in Fig.~\ref{delay_queue}, the incoming requests are further divided according to where they are served from. $\zeta_{1}$ represents the arrival rate of requests served via the D2D communication, whereas $\zeta_{2}$ is the arrival rate for those served from the BSs. $\zeta_{3} = \zeta_{tot} - \zeta_{1} - \zeta_{2}$ denotes the arrival rate of requests served via the self-cache with zero delay. By the splitting property of the Poisson process, $\zeta_{1}$ and $\zeta_{2}$ are also Poisson arrival processes. Without loss of generality, we assume that the file size has a general distribution $G$ whose mean is denoted as $\overline{S}$ MBytes. Hence, an M/G/1 queuing model is adopted whereby two non-interacting queues, $Q_1$ and $Q_2$, model the traffic in each cluster served via the D2D and BSs' communication, respectively. Although $Q_1$ and $Q_2$ are non-interacting as the D2D communication is assumed to be out-of-band, these two queues are spatially interacting with similar queues in other clusters. To recap, $Q_1$ and $Q_2$ are two M/G/1 queues with arrival rates $\zeta_{1}$ and $\zeta_{2}$, and service rates $\mu_1$ and $\mu_2$, respectively.
\subsection{Queue Dynamics}
It is worth highlighting that the two queues $Q_i$, $i \in \{1,2\}$, accumulate requests for files demanded by the cluster members, not the files themselves. First-in first-out (FIFO) scheduling is assumed, whereby a request that arrives first is scheduled first, either via D2D or BS communication depending on whether or not the content is cached among the devices. The outcome of FIFO scheduling depends only on the time at which the request arrives at the queue and is independent of the particular device that issues the request. Given the parameter $\zeta_{tot}$ of the Poisson arrival process, the arrival rates at the two queues are expressed respectively as
\begin{align}
\zeta_{1} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big), \\
\zeta_{2} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}
\end{align}
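A short numerical sketch of these thinning relations follows; $\zeta_{tot}$, $k$, and the caching vector are hypothetical, and the residual rate $\zeta_3$ corresponds to self-cached requests.
\begin{verbatim}
import numpy as np

zeta_tot, k, Nf, beta, M = 2.0, 5, 100, 1.0, 10.0   # hypothetical values
q = np.arange(1, Nf + 1) ** (-beta); q /= q.sum()
b = np.full(Nf, M / Nf)                              # any b with sum(b) = M
zeta1 = zeta_tot * np.sum(q * ((1 - b) - (1 - b)**k))
zeta2 = zeta_tot * np.sum(q * (1 - b)**k)
zeta3 = zeta_tot - zeta1 - zeta2                     # self-cache arrivals
\end{verbatim}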
The network operation is depicted in Fig. \ref{delay_queue}, and described in detail below.
\begin{enumerate}
\item Given the memoryless property of the arrival process (Poisson arrival) along with the assumption that the service process is independent of the arrival process,
the number of requests in any queue at a future time only depends upon the current number in the system (at time $t$) and the arrivals or departures that occur within the interval $h$.
\begin{align}
Q_{i}(t+h) = Q_{i}(t) + \Lambda_{i}(h) - M_{i}(h)
\end{align}
where $\Lambda_{i}(h)$ is the number of arrivals in the time interval $(t,t+h)$, whose mean is $\zeta_i$ [sec$^{-1}$], and $M_{i}(h)$ is the number of departures in the time interval $(t,t+h)$, whose mean is $\mu_i = \frac{\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})W_i{\rm log}_2(1+\theta)}{\overline{S}}$ [sec$^{-1}$]. It is worth highlighting that, unlike the spatial-only model studied in the previous sections, the term $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ is dependent on the traffic dynamics \textcolor{blue}{since a request being served in a given cluster is interfered with only by other clusters that also have requests to serve}. What is more noteworthy is that the service time follows the same distribution as the file size, with mean $\tau_i = \frac{1}{\mu_i}$. These aspects will be revisited later in this section.
\item $\Lambda_{i}(h)$ is dependent only on $h$ because the arrival process is Poisson. $M_{i}(h)$ is $0$ if the service time of the file being served $\epsilon_i >h$. $M_{i}(h)$ is $1$ if $\epsilon_1 <h$ and $\epsilon_1 + \epsilon_2>h$, and so on. As the service times $\epsilon_1, \epsilon_2, \dots, \epsilon_n$ are independent, neither $\Lambda_{i}(h)$ nor $M_{i}(h)$ depend on what happened prior to $t$. Thus, $Q_{i}(t+h)$ only depends upon $Q_{i}(t)$ and not the past history. Hence it is a continuous-time Markov chain (CTMC) which obeys the stability conditions in \cite{szpankowski1994stability}.
\end{enumerate}
The following proposition provides necessary and sufficient conditions for the stability of the buffers in the sense defined in \cite{szpankowski1994stability}, i.e., $\{Q_{i}\}$ has a limiting distribution for $t \rightarrow \infty$.
\begin{proposition} {\rm The D2D and BSs' traffic modeling queues are stable, respectively, if and only if}
\begin{align}
\label{stable1}
\zeta_1 < \mu_1 &= \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}} \\
\label{stable2}
\zeta_2 < \mu_2 & =\frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}
\end{align}
\end{proposition}
\begin{proof}
We show sufficiency by proving that (\ref{stable1}) and (\ref{stable2}) guarantee stability in a dominant network, where all queues that have empty buffers make dummy transmissions. \textcolor{red}{In other words, as in the spatial-only network model, the typical receiver sees interference from all other clusters whether or not they have requests to serve.} This dominant system approach yields $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ equal to $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ for the D2D and BS communication, respectively. Also, the obtained delay is an upper bound for the actual system delay. The necessity of (\ref{stable1}) and (\ref{stable2}) is shown as follows: if $\zeta_i>\mu_i$, then, by Loynes' theorem, it follows that lim$_{t\rightarrow \infty}Q_i(t)=\infty$ (a.s.) for all queues in the dominant network.
\end{proof}
Next, we conduct the analysis for the dominant system whose parameters are as follows. The content size has an exponential distribution with mean $\overline{S}$ [MBytes]. The service times also obey the same exponential distribution with means $\tau_1 = \frac{\overline{S}}{R_1}$ [second] and $\tau_2 = \frac{\overline{S}}{R_2}$ [second]. The rates $R_1$ and $R_2$ are calculated from (\ref{rate_eqn}) where $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ are from (\ref{v_0}) and (\ref{p_b_bs}), respectively. Accordingly, $Q_1$ and $Q_2$ are two independent (non-interacting) continuous-time M/M/1 queues with service rates $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$ and $\mu_2 = \frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}$ [sec$^{-1}$], respectively. \textcolor{black}{$W = W_1 + W_2$ is the total bandwidth allocated to the system}.
\begin{proposition} {\rm The mean queue length $L_i$ of the $i$-th queue is given by}
\begin{align}
\label{queue_len}
L_i &= \rho_i + \frac{2\rho_i^2}{2(1 - \rho_i)},
\end{align}
\end{proposition}
\begin{proof}
We can easily calculate $L_i$ by observing that $Q_i$ are continuous time M/M/1 queues with arrival rates $\zeta_i$, service rates $\mu_i$, and traffic intensities $\rho_i = \frac{\zeta_i}{\mu_i}$. Thus, applying the Pollaczek-Khinchine formula \cite{Kleinrock}, $L_i$ is directly written as (\ref{queue_len}).
\end{proof}
The average delay per request for each queue is calculated from
\begin{align}
D_1 &= \frac{L_1}{\zeta_1}= \frac{1}{\mu_1 - \zeta_1} = \frac{1}{W_1\mathcal{O}_{1} - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} \\
D_2 &= \frac{L_2}{\zeta_2}=\frac{1}{\mu_2 - \zeta_2} = \frac{1}{W_2\mathcal{O}_{2} - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
where $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, $\mathcal{O}_{2}= \frac{\mathrm{P_{c_b}} {\rm log}_2(1+\theta)}{\overline{S}}$ for notational simplicity, and $W_2=W-W_1$. The weighted average delay $D$ is then expressed as
\begin{align}
D&= \frac{\zeta_{1}D_1 + \zeta_{2}D_2}{\zeta_{tot}} \nonumber \\
&= \frac{\sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)}{ \mathcal{O}_{1}W_1 - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} + \frac{\sum_{i=1}^{N_f}q_i (1-b_i)^{k}}{ \mathcal{O}_{2}W_2 - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
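For illustration, the weighted delay can be evaluated directly once $b_i$ and $W_1$ are fixed; the spectral terms $\mathcal{O}_1$, $\mathcal{O}_2$ and all other values in the sketch below are hypothetical placeholders chosen to satisfy the stability conditions.
\begin{verbatim}
import numpy as np

zeta_tot, k, Nf, beta, M = 2.0, 5, 100, 1.0, 10.0
q = np.arange(1, Nf + 1) ** (-beta); q /= q.sum()
b = np.full(Nf, M / Nf)
O1, O2, W, W1 = 1e-7, 2e-7, 20e6, 10e6       # hypothetical spectral terms

A = np.sum(q * ((1 - b) - (1 - b)**k))        # D2D load, zeta_1 / zeta_tot
B = np.sum(q * (1 - b)**k)                    # BS load,  zeta_2 / zeta_tot
D = A/(O1*W1 - zeta_tot*A) + B/(O2*(W - W1) - zeta_tot*B)
\end{verbatim}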
One important insight from the delay equation is that the caching probability $b_i$ controls the arrival rates $\zeta_{1}$ and $\zeta_{2}$ while the bandwidth determines the service rates $\mu_1$ and $\mu_2$. Therefore, it turns out to be of paramount importance to jointly optimize $b_i$ and $W_1$ to minimize the average delay. Consequently, we formulate the weighted average delay joint caching and bandwidth minimization problem as
\begin{align}
\label{optimize_eqn3}
\textbf{P4:} \quad \quad&\underset{b_i,{\rm W}_1}{\text{min}} \quad D(b_i,W_1) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber \\
& 0 \leq W_1 \leq W,
\end{align}
Although the objective function of \textbf{P4} is convex w.r.t. $W_1$, as derived below, the coupling of the optimization variables $b_i$ and $W_1$ makes \textbf{P4} a non-convex optimization problem. Therefore, \textbf{P4} cannot be solved directly using standard convex optimization techniques.
\textcolor{blue}{By applying the block coordinate descent optimization technique, \textbf{P4} can be solved in an iterative manner as follows. First, for a given caching probability $b_i$, we solve the bandwidth allocation subproblem. Afterwards, the obtained optimal bandwidth is used to update $b_i$}. The optimal bandwidth for the bandwidth allocation subproblem is given in the next lemma.
\begin{lemma}
The objective function of \textbf{P4} in (\ref{optimize_eqn3}) is convex w.r.t. $W_1$, and the optimal bandwidth allocation to the D2D communication is given by
\begin{align}
W_1^* = \frac{\zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k}) +\varpi \big(\mathcal{O}_{2}W - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)}{\mathcal{O}_{1}+\varpi\mathcal{O}_{2}},
\end{align}
where $\overline{b}_i = 1 - b_i$, and $\varpi=\sqrt{\frac{\mathcal{O}_{1}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})}{\mathcal{O}_{2} \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}}}$
\end{lemma}
\begin{proof}
$D(b_i,W_1|k)$ can be written as
\begin{align}
\label{optimize_eqn3_p1}
\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-1} + \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-1}, \nonumber
\end{align}
The second derivative $\frac{\partial^2 D(b_i,W_1|k)}{\partial W_1^2}$ is hence given by
\begin{align}
2\mathcal{O}_{1}^2\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-3} + 2\mathcal{O}_{2}^2\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-3}, \nonumber
\end{align}
The stability condition requires that $\mathcal{O}_{1}W_1 > \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})$ and $\mathcal{O}_{2}W_2 > \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}$. Also, $\overline{b}_i \geq \overline{b}_i^{k}$ by definition. Hence, $\frac{\partial^2 D(b_i,W_1|k)}{\partial W_1^2} > 0$, and the objective function is a convex function of $W_1$. The optimal bandwidth allocation can be obtained from the Karush-Kuhn-Tucker (KKT) conditions similar to problems \textbf{P2} and \textbf{P3}, with the details omitted for brevity.
\end{proof}
Given $W_1^*$ from the bandwidth allocation subproblem, the caching probability subproblem can be written as
\begin{align}
\textbf{P5:} \quad \quad&\underset{b_i}{\text{min}} \quad D(b_i,W_1^*) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111})
\end{align}
The caching probability subproblem \textbf{P5} is a sum of two fractional functions, where the first fraction has the form of a concave function over a convex one, while the second has the form of a convex function over a concave one. The first structure, i.e., concave over convex, renders solving this problem using fractional programming very challenging.\footnote{Dinkelbach's transform can be used to solve a minimization of fractional functions that have the form of convex over concave functions, whereby an equivalent problem is solved with the objective function as the difference between the convex (numerator) and concave (denominator) functions \cite{schaible1973fractional}.} Hence, we use the successive convex approximation technique to solve for $b_i$ given the optimal bandwidth $W_1^*$. This procedure is repeated until the value of \textbf{P4}'s objective function converges to a pre-specified accuracy.
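A minimal sketch of the overall alternating procedure is given below. All numerical values are hypothetical, and a generic constrained solver (SLSQP) stands in for the successive convex approximation step of the $b_i$-update; the closed-form $W_1^*$ of the previous lemma is used verbatim.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

zeta_tot, k, Nf, beta, M, W = 2.0, 5, 50, 1.0, 10.0, 20e6
q = np.arange(1, Nf + 1) ** (-beta); q /= q.sum()
O1, O2 = 1e-7, 2e-7                       # hypothetical spectral terms

def loads(b):
    bb = 1.0 - b
    return np.sum(q*(bb - bb**k)), np.sum(q*bb**k)

def W1_opt(b):                            # closed-form W_1^* of the lemma
    A, B = loads(b)
    w = np.sqrt(O1*A / (O2*B))
    return (zeta_tot*A + w*(O2*W - zeta_tot*B)) / (O1 + w*O2)

def delay(b, W1):                         # weighted average delay D(b, W1)
    A, B = loads(b)
    d1, d2 = O1*W1 - zeta_tot*A, O2*(W - W1) - zeta_tot*B
    return A/d1 + B/d2 if min(d1, d2) > 0 else 1e9   # penalize instability

b = np.full(Nf, M / Nf)                   # feasible starting point
for it in range(10):                      # block coordinate descent loop
    W1 = W1_opt(b)
    res = minimize(lambda x: delay(x, W1), b, method='SLSQP',
                   bounds=[(0.0, 1.0)] * Nf,
                   constraints={'type': 'eq', 'fun': lambda x: x.sum() - M})
    b = res.x
\end{verbatim}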
\section{Numerical Results}
The simulation setup is as follows. \textcolor{red}{$W=20$ MHz}. $P_b = 43$ dBm, $P_d = 23$ dBm, $\sigma=10$ m, $\beta=1, \alpha=4$, $N_f=500$ files, $M=10$ files, \textcolor{red}{$\overline{n}=5$ devices}, $\lambda_{p} =50$ clusters/km$^2$, $\overline{S}=3$ Mbits, and $\theta=0$ dB. This simulation setup will be used unless otherwise specified.
\subsection{Offloading Gain Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{prob_r_geq_r0.eps}
\caption {The probability that the achievable rate is greater than a threshold $R_0$ versus standard deviation $\sigma$.}
\label{prob_r_geq_r0}
\end{center}
\end{figure}
In this subsection, we present the offloading gain performance for the proposed caching model.
In Fig.~\ref{prob_r_geq_r0}, we verify the accuracy of the analytical results for the probability $\mathbb{P}(R_1>R_0)$. The theoretical and simulated results are plotted together, and they are consistent. We can observe that the probability $\mathbb{P}(R_1>R_0)$ decreases monotonically with $\sigma$. This is because, as $\sigma$ increases, the serving distance increases while the distance between the out-of-cluster interferers and the typical device decreases; equivalently, the $\mathrm{SIR}$ decreases. It is also shown that $\mathbb{P}(R_1>R_0)$ decreases with the $\mathrm{SIR}$ threshold $\theta$, as the channel becomes more prone to outage when $\theta$ increases.
In Fig.~\ref{prob_r_geq_r0_vs_p}, we plot the probability $\mathbb{P}(R_1>R_0)$ against the channel access probability $p$ at different thresholds $R_0$. As evident from the plot, there is an optimal $p^*$: below it, the probability $\mathbb{P}(R_1>R_0)$ tends to increase as the channel is accessed more often, and beyond it, $\mathbb{P}(R_1>R_0)$ decreases monotonically owing to the larger number of interferers accessing the channel. It is quite natural that $\mathbb{P}(R_1>R_0)$ is higher for a smaller rate threshold $R_0$. We also observe that the optimal access probability $p^*$ is smaller when $R_0$ decreases. This implies that a transmitting device can maximize the probability $\mathbb{P}(R_1>R_0)$ at the receiver, when $R_0$ is smaller, by accessing the channel less frequently and, correspondingly, receiving lower interference.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{prob_r_geq_r0_vs_p.eps}
\caption {The probability that the achievable rate is greater than a threshold $R_0$ versus the access probability $p$.}
\label{prob_r_geq_r0_vs_p}
\end{center}
\end{figure}
\begin{figure*}
\centering
\subfigure[$p=p^*$. ]{\includegraphics[width=3.0in]{histogram_b_i_p_star.eps}
\label{histogram_b_i_p_star}}
\subfigure[$p \neq p^*$.]{\includegraphics[width=3.0in]{histogram_b_i_p_leq_p_star.eps}
\label{histogram_b_i_p_leq_p_star}}
\caption{Histogram of the caching probability $b_i$ when (a) $p=p^*$ and (b) $p \neq p^*$.}
\label{histogram_b_i}
\end{figure*}
To show the effect of $p$ on the caching probability, in Fig.~\ref{histogram_b_i}, we plot the histogram of the optimal caching probability at different values of $p$, where $p=p^*$ in Fig.~\ref{histogram_b_i_p_star} and $p\neq p^*$ in Fig.~\ref{histogram_b_i_p_leq_p_star}. It is clear from the histograms that the optimal caching probability $b_i$ tends to be more skewed when $p\neq p^*$, i.e., when $\mathbb{P}(R_1>R_0)$ decreases. This shows that file sharing is more difficult when $p$ is not optimal. For example, if $p<p^*$, the system is too conservative owing to the small access probabilities. However, for $p>p^*$, the outage probability is high due to the aggressive interference. In such a low coverage probability regime, each device tends to cache the most popular files, leading to fewer opportunities for content transfer between devices.
\begin{figure*}
\centering
\subfigure[The offloading probability versus the popularity of files $\beta$ at different thresholds $R_0$. ]{\includegraphics[width=3.0in]{offloading_gain_vs_beta.eps}
\label{offloading_gain_vs_beta_R_0}}
\subfigure[The offloading probability versus the popularity of files $\beta$ under different caching schemes (PC, Zipf, CPF).]{\includegraphics[width=3.0in]{offloading_prob_cach_compare.eps}
\label{offloading_prob_cach_compare}}
\caption{The offloading probability versus the popularity of files $\beta$.}
\label{offloading_gain_vs_beta}
\end{figure*}
Last but not least, Fig.~\ref{offloading_gain_vs_beta} manifests the prominent effect of the files' popularity on the offloading gain. In Fig.~\ref{offloading_gain_vs_beta_R_0}, we plot the offloading gain against $\beta$ at different rate thresholds $R_0$. We note that the offloading gain monotonically increases with $\beta$, since fewer files are frequently requested and these files can be entirely cached among the cluster devices. Also, we see that the offloading gain decreases with the increase of $R_0$, since the probability $\mathbb{P}(R_1>R_0)$ decreases with $R_0$. In Fig.~\ref{offloading_prob_cach_compare}, we compare the offloading gain of three different caching schemes, namely, the proposed probabilistic caching (PC), Zipf's caching (Zipf), and caching popular files (CPF). We can see that the offloading gain under the probabilistic caching scheme attains the best performance as compared to the other schemes. Also, we note that both the PC and Zipf schemes attain the same offloading gain when $\beta=0$ owing to the uniformity of content popularity.
\subsection{Energy Consumption Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{energy_vs_beta4.eps}
\caption {Normalized energy consumption versus popularity exponent $\beta$.}
\label{energy_vs_beta}
\end{center}
\end{figure}
The results in this part are given for the energy consumption.
Fig.~\ref{energy_vs_beta} shows the energy consumption, normalized to the number of devices per cluster, versus $\beta$ under different caching schemes, namely, PC, Zipf, CPF, and uniform random caching (random). We can see that the minimized energy under the proposed probabilistic caching scheme attains the best performance as compared to the other schemes. Also, it is clear that, except for uniform random caching, the consumed energy decreases with $\beta$. This can be justified by the fact that as $\beta$ increases, fewer files are frequently requested, and these files are more likely to be cached among the devices under the PC, CPF, and Zipf caching schemes. These few files are therefore downloadable from the devices via low-power D2D communication. In the random caching scheme, files are uniformly chosen for caching independently of their popularity.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{energy_vs_n3.eps}
\caption {Normalized energy consumption versus number of devices per cluster.}
\label{energy_vs_n}
\end{center}
\end{figure}
We plot the normalized energy consumption per device versus the number of devices per cluster in Fig.~\ref{energy_vs_n}. First, we see that the normalized energy consumption decreases with the number of devices. As the number of devices per cluster increases, it is more probable to obtain requested files via low power D2D communication. When the number of devices per cluster is relatively large, the normalized energy consumption tends to flatten as most of the content becomes cached at the cluster devices.
\subsection{Delay Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{delay_compare.eps}
\caption {Weighted average delay versus the popularity exponent $\beta$.}
\label{delay_compare}
\end{center}
\end{figure}
In Fig.~\ref{delay_compare}, we compare the average delay of three different caching schemes, PC, Zipf, and CPF. We can see that the jointly minimized average delay under the probabilistic caching scheme attains the best performance as compared to the other caching schemes. Also, we see that, in general, the average delay monotonically decreases with $\beta$, as fewer files account for most of the demand. Fig.~\ref{BW_compare} manifests the effect of the files' popularity $\beta$ on the allocated bandwidth. It is shown that the optimal D2D allocated bandwidth $W_1^*$ keeps increasing with $\beta$. This can be interpreted as follows. When $\beta$ increases, fewer files become highly demanded, and these files can be entirely cached among the devices. To cope with the larger number of requests served via D2D communication, the D2D allocated bandwidth needs to be increased.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{BW_compare1.eps}
\caption {Normalized bandwidth allocation versus the popularity exponent $\beta$.}
\label{BW_compare}
\end{center}
\end{figure}
\section{Conclusion}
In this work, we conduct a comprehensive analysis of the joint communication and caching for a clustered D2D network with random probabilistic caching incorporated at the devices. We first maximize the offloading gain of the proposed network by jointly optimizing the channel access probability and the caching probability. We solve for the channel access probability numerically, and the optimal caching probability is then characterized. Next, we minimize the energy consumption of the proposed clustered D2D network. We formulate the energy minimization problem, show that it is convex, and obtain the optimal caching probability. Finally, we minimize the per-request weighted average delay by jointly optimizing the bandwidth allocation between D2D and BSs' communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the block coordinate descent optimization technique, the joint minimization problem can be solved in an iterative manner. Results show roughly up to $10\%$, $17\%$, and $140\%$ improvement in the offloading gain, energy consumption, and average delay, respectively, compared to the Zipf caching technique.
\begin{appendices}
\section{Proof of Lemma 1}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$, evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$, can be derived as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= \mathbb{E} \Bigg[e^{-s \sum_{\Phi_p^{!}} \sum_{y \in \mathcal{B}^p} g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p,g_{y_{x}}} \prod_{y \in \mathcal{B}^p} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p} \prod_{y \in \mathcal{B}^p} \mathbb{E}_{g_{y_{x}}} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(a)}{=} \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p} \prod_{y \in \mathcal{B}^p} \frac{1}{1+s \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(b)}{=} \mathbb{E}_{\Phi_p} \prod_{\Phi_p^{!}} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big)\dd{x}\Bigg) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z\rVert^{-\alpha}}\Big)f_Y(z-x)\dd{z}\Big)\dd{x}\Bigg) \nonumber
\\
&\overset{(e)}{=} {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{u=0}^{\infty}\Big(1 - \frac{1}{1+s u^{-\alpha}}\Big)f_U(u|v)\dd{u}\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\big(-p\overline{n} \int_{u=0}^{\infty}
\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}\big)v\dd{v}\Big)\Bigg) \nonumber
\\
&= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$. (a) follows from the Rayleigh fading assumption, (b)
follows from the probability generating functional (PGFL) of the PPP $\Phi_c^{p}$, (c) follows from the PGFL of the parent PPP $\Phi_p$, (d) follows from the change of variables $z = x + y$, and (e) follows from converting Cartesian to polar coordinates. Hence, the lemma is proven.
\section{Proof of Lemma 2}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$, evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$, is written as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &= \mathbb{E} \Bigg[e^{-s \sum_{y \in \mathcal{A}^p} g_{y} \lVert x_0 + y\rVert^{-\alpha}} \Bigg] \nonumber
\\
&= \mathbb{E}_{\mathcal{A}^p,g_{y}} \prod_{y \in\mathcal{A}^p} e^{-s g_{y} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&= \mathbb{E}_{\mathcal{A}^p} \prod_{y \in\mathcal{A}^p} \mathbb{E}_{g_{y}} e^{-s g_{y} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&\overset{(a)}{=} \mathbb{E}_{\mathcal{A}^p} \prod_{y \in\mathcal{A}^p} \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}
\nonumber
\\
&\overset{(b)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}\Big)f_{Y}(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z_0\rVert^{-\alpha}}\Big)f_{Y}(z_0-x_0)\dd{z_0}\Big) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\Big(1 - \frac{1}{1+s h^{-\alpha}}\Big)f_H(h|v_0)\dd{h}\Big) \nonumber
\\
&= {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h|v_0)\dd{h}\Big) \nonumber
\\
\mathscr{L}_{I_{\Phi_c} }(s) &\approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where (a) follows from the Rayleigh fading assumption, (b) follows from the PGFL of the PPP $\Phi_c^p$, (c) follows from the change of variables $z_0 = x_0 + y$, (d) follows from converting Cartesian to polar coordinates, and the approximation comes from neglecting the correlation of the intra-cluster interfering distances, i.e., the common part $v_0$, as in \cite{clustered_twc}. Hence, the lemma is proven.
\section{Proof of Lemma 3}
First, to prove concavity, we proceed as follows.
\begin{align}
\frac{\partial \mathbb{P}_o}{\partial b_i} &= q_i + q_i\big(\overline{n}(1-b_i)e^{-\overline{n}b_i}-(1-e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0)\nonumber \\
\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2} &= -q_i\big(\overline{n}e^{-\overline{n}b_i} + \overline{n}^2(1-b_i)e^{-\overline{n}b_i} + \overline{n}e^{-\overline{n}b_i}\big)\mathbb{P}(R_1>R_0)
\end{align}
The cross derivatives $\frac{\partial^2 \mathbb{P}_o}{\partial b_i \partial b_j}$, $i \neq j$, are zero, and it is clear that the second derivative $\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2}$ is negative. Hence, the Hessian matrix \textbf{H}$_{i,j}$ of $\mathbb{P}_o(p^*,b_i)$ w.r.t. $b_i$ is negative semidefinite, and the function $\mathbb{P}_o(p^*,b_i)$ is concave with respect to $b_i$. Also, the constraints are linear, so the KKT conditions are both necessary and sufficient for optimality. The dual Lagrangian function and the KKT conditions are then employed to solve \textbf{P2}.
The Lagrangian function of the offloading gain maximization problem \textbf{P2} is given by
\begin{align}
\mathcal{L}(b_i,w_i,\mu_i,v) =& \sum_{i=1}^{N_f} q_i b_i + q_i(1- b_i)(1-e^{-b_i\overline{n}})\mathbb{P}(R_1>R_0) \nonumber \\
&+ v(M-\sum_{i=1}^{N_f} b_i) + \sum_{i=1}^{N_f} w_i (b_i-1) - \sum_{i=1}^{N_f} \mu_i b_i
\end{align}
where $v$, $w_i$, and $\mu_i$ are the dual variables associated with the equality constraint and the two inequality constraints, respectively. Now, the optimality conditions are written as,
\begin{align}
\label{grad}
& \grad_{b_i} \mathcal{L}(b_i^*,w_i^*,\mu_i^*,v^*) = q_i + q_i\big(\overline{n}(1-b_i)e^{-\overline{n}b_i}-(1-e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0) -v^* + w_i^* -\mu_i^*= 0 \\
&w_i^* \geq 0 \\
&\mu_i^* \leq 0 \\
&w_i^* (b_i^* - 1) =0 \\
&\mu_i^* b_i^* = 0\\
&(M-\sum_{i=1}^{N_f} b_i^*) = 0
\end{align}
\begin{enumerate}
\item $w_i^* > 0$: We have $b_i^* = 1$, $\mu_i^*=0$, and
\begin{align}
&q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)= v^* - w_i^* \nonumber \\
\label{cond1_offload}
&v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)
\end{align}
\item $\mu_i^* < 0$: We have $b_i^* = 0$, and $w_i^*=0$, and
\begin{align}
& q_i + \overline{n}q_i\mathbb{P}(R_1>R_0) = v^* + \mu_i^* \nonumber \\
\label{cond2_offload}
&v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)
\end{align}
\item $0 <b_i^*<1$: We have $w_i^*=\mu_i^*=0$, and
\begin{align}
\label{psii_offload}
v^{*} = q_i + q_i\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*} - (1 - e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0)
\end{align}
\end{enumerate}
By combining (\ref{cond1_offload}), (\ref{cond2_offload}), and (\ref{psii_offload}), with the fact that $\sum_{i=1}^{N_f} b_i^*=M$, the lemma is proven.
\section{Proof of Lemma 6}
Under the assumption of one active D2D link within a cluster, there is no intra-cluster interference. Also, the Laplace transform of the inter-cluster interference is similar to that of the PPP \cite{andrews2011tractable} whose density is the same as that of the parent PPP. In fact, this is true according to the displacement theory of the PPP \cite{daley2007introduction}, where each interferer is a point of a PPP that is displaced randomly and independently of all other points. For the sake of completeness, we prove it here. Starting from the third line of the proof of Lemma 1, we get
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &\overset{(a)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \int_{v=0}^{\infty}\mathbb{E}_{u|v}\Big[1 -
e^{-s P_d g_{u} u^{-\alpha}} \Big]v\dd{v}\Bigg), \nonumber \\
&= \text{exp}\Big(-2\pi\lambda_p \mathbb{E}_{g_{u}} \big[\int_{v=0}^{\infty}\int_{u=0}^{\infty}\big(1 - e^{-s P_d g_{u} u^{-\alpha}} \big)f_U(u|v)\dd{u}v\dd{v}\big]\Big) \nonumber \\
\label{prelaplace}
&\overset{(b)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \underbrace{\int_{v=0}^{\infty}v\dd{v} - \int_{v=0}^{\infty}\int_{u=0}^{\infty} e^{-s P_d g_{u} u^{-\alpha}} f_{U}(u|v)\dd{u} v \dd{v}}_{\mathcal{R}(s,\alpha)}\Bigg)
\end{align}
where (a) follows from the PGFL of the parent PPP \cite{andrews2011tractable}, and (b) follows from $\int_{u=0}^{\infty} f_{U}(u|v)\dd{u} =1$. Now, we proceed by evaluating the integral $\mathcal{R}(s,\alpha)$ as follows.
\begin{align}
\mathcal{R}(s,\alpha)&\overset{(c)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}
\int_{v=0}^{\infty} f_{U}(u|v)v \dd{v}\dd{u}\nonumber \\
&\overset{(d)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}u\dd{u} \nonumber \\
&\overset{(e)}{=} \int_{u=0}^{\infty}(1 - e^{-s P_d g_{u} u^{-\alpha}})u\dd{u} \nonumber \\
&\overset{(f)}{=} \frac{(s P_d g_{u})^{2/\alpha}}{\alpha} \int_{t=0}^{\infty}(1 - e^{-t})t^{-1-\frac{2}{\alpha}}\dd{t} \nonumber \\
\label{laplaceppp1}
&\overset{(g)}{=} \frac{(s P_d)^{2/\alpha}}{2} g_{u}^{2/\alpha} \Gamma(1 - 2/\alpha),
\end{align}
where (c) follows from changing the order of integration, (d) follows from $ \int_{v=0}^{\infty} f_{U}(u|v)v\dd{v} = u$, (e) follows from changing the dummy variable $v$ to $u$, (f) follows from the change of variables $t=s P_d g_{u}u^{-\alpha}$, and (g) follows from solving the integral in (f) by parts. Substituting the obtained value for $\mathcal{R}(s,\alpha)$ into (\ref{prelaplace}), and taking the expectation over the exponential random variable $g_u$, with the fact that $\mathbb{E}_{g_{u}} [g_{u}^{2/\alpha}] = \Gamma(1 + 2/\alpha)$, we get
\begin{align}
\label{laplace_trans}
\mathscr{L}_{I_{\Phi_p^{!}}} (s)&= {\rm exp}\Big(-\pi\lambda_p (sP_d )^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)\Big),
\end{align}
Substituting this expression with the distance probability density function $f_R(r)$ into the coverage probability equation yields
\begin{align}
\mathrm{P_{c_d}} &=\int_{r=0}^{\infty}
{\rm e}^{-\pi\lambda_p (sP_d)^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}\frac{r}{2\sigma^2}{\rm e}^{\frac{-r^2}{4\sigma^2}} {\rm dr} , \nonumber \\
&\overset{(h)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-\pi\lambda_p \theta^{2/\alpha}r^{2} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}{\rm e}^{\frac{-r^2}{4\sigma^2}} {\rm dr} , \nonumber \\
&\overset{(i)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-r^2Z(\theta,\sigma,\alpha)} {\rm dr} , \nonumber \\
&\overset{}{=} \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)}
\end{align}
where (h) comes from the substitution ($s = \frac{\theta r^{\alpha}}{P_d}$), and (i) from $Z(\theta,\alpha,\sigma) = (\pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2})$.
\end{appendices}
\bibliographystyle{IEEEtran}
\section{Introduction}
Caching at mobile devices significantly improves system performance by facilitating \ac{D2D} communications, which enhances the spectrum efficiency and alleviates the heavy burden on backhaul links \cite{Femtocaching}. Modeling the cache-enabled heterogeneous networks, including \ac{SBS} and mobile devices, follows two main directions in the literature. The first line of work focuses on the fundamental throughput scaling results by assuming a simple protocol channel model \cite{Femtocaching,golrezaei2014base,8412262}, known as the protocol model, where two devices can communicate if they are within a certain distance. The second line of work, defined as the physical interference model, considers a more realistic model for the underlying physical layer \cite{andreev2015analyzing,cache_schedule}. In the following, we review some of the works relevant to the second line, focusing mainly on the energy efficiency (\ac{EE}) and delay analysis of wireless caching networks.
The physical interference model is based on the fundamental \ac{SIR} metric, and therefore, is applicable to any wireless communication system. Modeling devices' locations as a \ac{PPP} is widely employed in the literature, especially, in the wireless caching area \cite{andreev2015analyzing,cache_schedule,energy_efficiency,ee_BS,hajri2018energy}. However, a realistic model for \ac{D2D} caching networks requires that a given device typically has multiple proximate devices, where any of them can potentially act as a serving device. This deployment is known as clustered devices deployment, which can be characterized by cluster processes \cite{haenggi2012stochastic}. Unlike the popular PPP approach, the authors in \cite{clustered_twc,clustered_tcom,8070464} developed a stochastic geometry based model to characterize the performance of content placement in the clustered \ac{D2D} network. In \cite{clustered_twc}, the authors discuss two strategies of content placement in a \ac{PCP} deployment. First, when each device randomly chooses its serving device from its local cluster, and secondly, when each device connects to its $k$-th closest transmitting device from its local cluster. The authors characterize the optimal number of \ac{D2D} transmitters that must be simultaneously activated in each cluster to maximize the area spectral efficiency. The performance of cluster-centric content placement is characterized in \cite{clustered_tcom}, where the content of interest in each cluster is cached closer to the cluster center, such that the collective performance of all the devices in each cluster is optimized. Inspired by the Matern hard-core point process, which captures pairwise interactions between nodes, the authors in \cite{8070464} devised a novel spatially correlated caching strategy called \ac{HCP} such that the \ac{D2D} devices caching the same content are never closer to each other than the exclusion radius.
Energy efficiency in wireless caching networks is widely studied in the literature \cite{energy_efficiency,ee_BS,hajri2018energy}.
For example, an optimal caching problem is formulated in \cite{energy_efficiency} to minimize the energy consumption of a wireless network. The authors consider a cooperative wireless caching network where relay nodes cooperate with the devices to cache the most popular files in order to minimize energy consumption. In \cite{ee_BS}, the authors investigate how caching at BSs can improve \ac{EE} of wireless access networks. The condition when \ac{EE} can benefit from caching is characterized, and the optimal cache capacity that maximizes the network \ac{EE} is found. It is shown that \ac{EE} benefit from caching depends on content popularity, backhaul capacity, and interference level.
The authors in \cite{hajri2018energy} exploit the spatial repartitions of devices and the correlation in their content popularity profiles to improve the achievable EE. The \ac{EE} optimization problem is decoupled into two related subproblems, the first one addresses the issue of content popularity modeling, and the second subproblem investigates the impact of exploiting the spatial repartitions of devices. The authors derive a closed-form expression of the achievable \ac{EE} and find the optimal density of active small cells to maximize the EE. It is shown that the small base station allocation algorithm improves the energy efficiency and hit probability. However, the problem of \ac{EE} for \ac{D2D} based caching is not yet addressed in the literature.
Recently, the joint optimization of delay and energy in wireless caching has been conducted; see, for instance, \cite{wu2018energy,huang2018energy,jiang2018energy}. The authors in \cite{wu2018energy} jointly optimize the delay and energy in a cache-enabled dense small cell network. The authors formulate the energy-delay optimization problem as a mixed integer programming problem, where file placement, device association to the small cells, and power control are jointly considered. To model the tradeoff between energy consumption and end-to-end file delivery delay, a utility function linearly combining these two metrics is used as the objective function of the optimization problem. An efficient algorithm is proposed to approach the optimal association and power solution, which can achieve the optimal tradeoff between energy consumption and end-to-end file delivery delay. In \cite{huang2018energy}, the authors show that, with caching, the energy consumption can be reduced by extending the transmission time. However, caching may incur wasted energy if the device never needs the cached content. Based on the random content request delay, the authors study the maximization of \ac{EE} subject to a hard delay constraint in an additive white Gaussian noise channel. It is shown that the \ac{EE} of a system with caching can be significantly improved, with increasing content request probability and target transmission rate, compared with the traditional on-demand scheme, in which the \ac{BS} transmits a content file only after it is requested by the user. However, the problem of energy consumption and joint communication and caching for clustered \ac{D2D} networks has not yet been addressed in the literature.
In this paper, we conduct comprehensive performance analysis and optimization of the joint communication and caching for a clustered \ac{D2D} network, where the devices have unused memory to cache some files, following a random probabilistic caching scheme. Our network model effectively characterizes the stochastic nature of channel fading and clustered geographic locations of devices. Furthermore, this paper also puts emphasis on the need for considering the traffic dynamics and rate of requests when studying the delay incurred to deliver requests to devices. Our work is the first in the literature that conducts a comprehensive spatial analysis of a doubly \ac{PCP} (also called doubly \ac{PPP} \cite{haenggi2012stochastic}) with the devices adopting a slotted-ALOHA random access technique to access a shared channel. Also, we are the first to incorporate the spatio-temporal analysis in wireless caching networks by combining tools from stochastic geometry and queuing theory in order to analyze and minimize the average delay (see, for instance, \cite{zhong2015stability,stamatiou2010random,zhong2017heterogeneous,7917340}).
The main contributions of this paper are summarized below.
\begin{itemize}
\item We consider a \ac{TCP} where the devices are spatially distributed as groups in clusters. The cluster centers are drawn from a parent PPP, and the cluster members are normally distributed around the centers, forming a Gaussian PPP. This organization of the parent and offspring PPPs forms the so-called doubly PPP.
\item \textcolor{blue}{We conduct the coverage probability analysis} where the devices adopt a slotted-ALOHA random access technique. We then jointly optimize the access probability and caching probability to maximize the cluster offloading gain. A closed-form solution of the optimal caching probability sub-problem is provided.
\item The energy consumption problem is then formulated and shown to be convex and the optimal caching probability is also formulated.
\item By combining tools from stochastic geometry as well as queuing theory,
we minimize the per request weighted average delay by jointly optimizing bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the \ac{BCD} optimization technique, the joint minimization problem is solved in an iterative manner.
\item We validate our theoretical findings via simulations. Results show a significant improvement in the network performance metrics, namely, the offloading gain, energy consumption, and average delay as compared to other caching schemes proposed earlier in literature.
\end{itemize}
The rest of this paper is organized as follows. Section II and Section III discuss the system model and the maximum offloading gain respectively. The energy consumption is discussed in Section IV and the delay analysis is conducted in Section V. Numerical results are then presented in Section VI before we conclude the paper in Section VII.
\section{System Model}
\subsection{System Setup}
We model the location of the mobile devices with a \ac{TCP} in which the parent points are drawn from a PPP $\Phi_p$ with density $\lambda_p$, and the daughter points are drawn from a Gaussian PPP around each parent point. In fact, the TCP is considered as a doubly \ac{PCP} where the daughter points are normally scattered with variance $\sigma^2 \in \mathbb{R}^2$ around each parent point \cite{haenggi2012stochastic}.
The parent points and offspring are referred to as cluster centers and cluster members, respectively. The number of cluster members in each cluster is a Poisson random variable with mean $\overline{n}$. The density function of the location of a cluster member relative to its cluster center is
\begin{equation}
f_Y(y) = \frac{1}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad \quad y \in \mathbb{R}^2
\label{pcp}
\end{equation}
where $\lVert .\rVert$ is the Euclidean norm. The intensity function of a cluster is given by $\lambda_c(y) = \frac{\overline{n}}{2\pi\sigma^2}\textrm{exp}\big(-\frac{\lVert y\rVert^2}{2\sigma^2}\big)$. Therefore, the intensity of the process is given by $\lambda = \overline{n}\lambda_p$. We assume that the BSs' distribution follows another PPP $\Phi_{bs}$ with density $\lambda_{bs}$, which is independent of $\Phi_p$.
\subsection{Content Popularity and Probabilistic Caching Placement}
We assume that each device has a surplus memory of size $M$ designated for caching files.
The total number of files is $N_f> M$ and the set (library) of content indices is denoted as $\mathcal{F} = \{1, 2, \dots , N_f\}$. These files represent the content catalog that all the devices in a cluster may request, which are indexed in a descending order of popularity. The probability that the $i$-th file is requested follows a Zipf's distribution given by,
\begin{equation}
q_i = \frac{ i^{-\beta} }{\sum_{k=1}^{N_f}k^{-\beta}},
\label{zipf}
\end{equation}
where $\beta$ is a parameter that reflects how skewed the popularity distribution is. For example, if $\beta= 0$, the popularity of the files has a uniform distribution. Increasing $\beta$ increases the disparity among the files popularity such that lower indexed files have higher popularity. By definition, $\sum_{i=1}^{N_f}q_i = 1$.
We use Zipf's distribution to model the popularity of files per cluster.
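For illustration, a few lines suffice to generate this popularity profile (the parameter values here are hypothetical):
\begin{verbatim}
import numpy as np

Nf, beta = 500, 1.0                       # library size, popularity exponent
q = np.arange(1, Nf + 1) ** (-beta)
q /= q.sum()                              # Zipf popularity; q sums to one
\end{verbatim}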
\ac{D2D} communication is enabled within each cluster to deliver popular content. It is assumed that the devices adopt a slotted-ALOHA medium access protocol, where each transmitter independently and randomly accesses the channel with the same probability $p$. This implies that multiple active \ac{D2D} links might coexist within a cluster. Therefore, $p$ is a design parameter that directly controls the intra-cluster interference, as described later in the next section.
A probabilistic caching model is assumed, where the content is randomly and independently placed in the cache memories of different devices in the same cluster, according to the same distribution. The probability that a generic device stores a particular file $i$ is denoted as $b_i$, $0 \leq b_i \leq 1$ for all $i \in \mathcal{F}$. To avoid duplicate caching of the same content within the memory of the same device, we follow the caching approach proposed in \cite{blaszczyszyn2015optimal} and illustrated in Fig. \ref{prob_cache_example}.
\begin{figure}
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/prob_cache_exam}
\caption {The cache memory of size $M = 3$ is equally divided into $3$ blocks of unit size. Starting from content $i=1$ to $i=N_f$, each content sequentially fills these $3$ memory blocks by an amount $b_i$. The amounts (probabilities) $b_i$ eventually fill all $3$ blocks since $\sum_{i=1}^{N_f} b_i = M$ \cite{blaszczyszyn2015optimal}. Then a random number $\in [0,1]$ is generated, and content $i$ is chosen from each block, whose $b_i$ fills the part intersecting with the generated random number. In this way, in the given example, the contents $\{1, 2, 4\}$ are chosen to be cached.}
\label{prob_cache_example}
\end{center}
\end{figure}
If a device caches the desired file, the device directly retrieves the content. However, if the device does not cache the file, it downloads it from any neighboring device that caches the file (henceforth called the catering device) in the same cluster. According to the proposed access model, the probability that a chosen catering device is admitted to access the channel is the access probability $p$. Finally, the device attaches to the nearest \ac{BS} as a last resort to download the content which is not cached entirely within the device's cluster. We assume that the \ac{D2D} communication is operating as out-of-band D2D. \textcolor{red}{$W_{1}$ and $W_{2}$ denote respectively the bandwidth allocated to the \ac{D2D} and \ac{BS}-to-Device communication, and the total bandwidth of the system is denoted as $W=W_{1} + W_{2}$}. \textcolor{blue}{It is assumed that device requests are served in a random manner, i.e., among the cluster devices, a random device request is chosen to be scheduled and its content is served.}
In the following, we aim at studying and optimizing three important metrics, widely studied in the literature. The first metric is the offloading gain, defined as the probability of obtaining the requested file from the local cluster, either from the self-cache or from a neighboring device in the same cluster with a rate greater than a certain threshold. The second metric is the energy consumption, which represents the dissipated energy when downloading files either from the BSs or via \ac{D2D} communication. The third metric is the latency, which accounts for the weighted average delay over all the requests served from the \ac{D2D} and \ac{BS}-to-Device communication.
\section{\textcolor{blue}{Maximum Offloading Gain}}
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{Figures/ch3/distance1.png}
\caption {Illustration of the representative cluster and one interfering cluster.}
\label{distance}
\end{figure}
Let us define the successful offloading probability as the probability that a device can find a requested file in its own cache, or in the caches of neighboring devices within the same cluster with \ac{D2D} link rate higher than a required threshold $R_0$. Without loss of generality, we conduct the analysis for a cluster whose center is $x_0\in \Phi_p$ (referred to as representative cluster), and the device who requests the content (henceforth called typical device) is located at the origin. We denote the location of the \ac{D2D} transmitter by $y_0$ w.r.t. $x_0$, where $x_0, y_0\in \mathbb{R}^2$. The distance from the typical device (\ac{D2D} receiver of interest) to this \ac{D2D} transmitter is denoted as $r=\lVert x_0+y_0\rVert$, which is a realization of a random variable $R$ whose distribution is described later. This setup is illustrated in Fig. \ref{distance}. It is assumed that a requested file is served from a randomly selected catering device, which is, in turn, admitted to access the channel based on the slotted-ALOHA protocol. \textcolor{blue}{The successful offloading probability is then given by}
\begin{align}
\label{offloading_gain}
\mathbb{P}_o(p,b_i) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}})\int_{r=0}^{\infty}f(r) \mathbb{P}(R_{1}(r)>R_0) \dd{r},
\end{align}
where $R_{1}(r)$ is the achievable rate when downloading content from a catering device at a distance $r$ from the typical device, and $f(r)$ is the pdf of this distance. The first term on the right-hand side is the probability of requesting a locally cached file (self-cache), whereas the remaining term incorporates the probability that a requested file $i$ is cached at least at one cluster member and can be downloaded with a rate greater than $R_0$. To clarify further, since the number of devices per cluster has a Poisson distribution, the probability that there are $k$ devices per cluster is equal to $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$. Accordingly, the probability that there are $k$ devices caching content $i$ is equal to $\frac{(b_i\overline{n})^k e^{-b_i\overline{n}}}{k!}$. Hence, the probability that at least one device caches content $i$ is one minus the void probability (i.e., $k=0$), which equals $1 - e^{-b_i\overline{n}}$.
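As a quick sanity check of this expression, the sketch below evaluates $\mathbb{P}_o$ for a uniform caching vector and a hypothetical value of $\mathbb{P}(R_{1}>R_0)$; both are assumptions for illustration only.
\begin{verbatim}
# Toy evaluation of the offloading gain (assumed Zipf popularity and a
# hypothetical link-success probability); b_i = M/N_f is a feasible choice.
import numpy as np

N_f, M, n_bar, beta = 500, 5, 10.0, 1.0
q = 1.0 / np.arange(1, N_f + 1)**beta
q /= q.sum()                              # Zipf popularity q_i
b = np.full(N_f, M / N_f)                 # uniform caching, sum(b) = M
P_link = 0.6                              # hypothetical P(R_1 > R_0)
P_o = np.sum(q*b + q*(1 - b)*(1 - np.exp(-b*n_bar))*P_link)
print(P_o)
\end{verbatim}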
In the following, we first compute the probability $ \mathbb{P}(R_{1}(r)>R_0)$ conditioning on the distance $r$ between the typical and catering device, then we relax this condition. The received power at the typical device from a catering \ac{D2D} transmitter located at $y_0$ from the cluster center is given by
\begin{align}
P &= P_d g_0 \lVert x_0+y_0\rVert^{-\alpha}= P_d g_0 r^{-\alpha}
\label{pwr}
\end{align}
where $P_d$ denotes the \ac{D2D} transmission power, $g_0 \sim $ exp(1) is an exponential random variable modeling Rayleigh fading, and $\alpha > 2$ is the path loss exponent. Under the above assumptions, the typical device experiences two types of interference, namely, intra- and inter-cluster interference. We first describe the inter-cluster interference; then the intra-cluster interference is characterized. The set of active devices in any remote cluster is denoted as $\mathcal{B}^p$, where $p$ refers to the access probability. Similarly, the set of active devices in the local cluster is denoted as $\mathcal{A}^p$. Similar to (\ref{pwr}), the interference at the typical device from the simultaneously active \ac{D2D} transmitters outside the representative cluster is given by
\begin{align}
I_{\Phi_p^{!}} &= \sum_{x \in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{yx} \lVert x+y\rVert^{-\alpha}\\
& = \sum_{x\in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{u} u^{-\alpha}
\end{align}
where $\Phi_p^{!}=\Phi_p \setminus x_0$ for ease of notation, $y$ is the location of a potential interfering device relative to its cluster center at $x \in \Phi_p$, $u = \lVert x+y\rVert$ is a realization of a random variable $U$ modeling the inter-cluster interfering distance (shown in Fig. \ref{distance}), $g_{yx} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading, and $g_{u} = g_{yx}$ for ease of notation. The intra-cluster interference is then given by
\begin{align}
I_{\Phi_c} &= \sum_{y\in \mathcal{A}^p} P_d g_{yx_0} \lVert x_0+y\rVert^{-\alpha}\\
& = \sum_{y\in \mathcal{A}^p} P_d g_{h} h^{-\alpha}
\end{align}
where $y$ is the location of an intra-cluster interfering device relative to the cluster center at $x_0 \in \Phi_p$, $h = \lVert x_0+y\rVert$ is a realization of a random variable $H$ modeling the intra-cluster interfering distance (shown in Fig. \ref{distance}), $g_{yx_0} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading, and $g_{h} = g_{yx_0}$ for ease of notation. From the thinning theorem \cite{haenggi2012stochastic}, the set of active transmitters following the slotted-ALOHA medium access forms a PPP $\Phi_{cp}$ whose intensity is given by
\begin{align}
\lambda_{cp} = p\lambda_{c}(y) = p\overline{n}f_Y(y) =\frac{p\overline{n}}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad y \in \mathbb{R}^2
\end{align}
Assuming that thermal noise is negligible compared to the aggregate interference, the $\mathrm{SIR}$ at the typical device is written as
\begin{equation}
\gamma_{r}=\frac{P}{I_{\Phi_p^{!}} + I_{\Phi_c}} = \frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}}
\end{equation}
A fixed rate transmission model is adopted in our study, where each transmitter (\ac{D2D} or BS) transmits at the fixed rate of log$_2[1+\theta]$ bits/sec/Hz, where $\theta$ is a design parameter. Since the rate is fixed, the transmission is subject to outage due to fading and interference fluctuations. Consequently, the effective average transmission rate (i.e., average throughput) is given by
\begin{equation}
\label{rate_eqn}
R = W\textrm{ log$_{2}$}[1+ \theta]\mathrm{P_c},
\end{equation}
where $W$ is the bandwidth, $\theta$ is the pre-determined threshold for successful reception, $\mathrm{P_c} =\mathbb{E}(\textbf{1}\{\mathrm{SIR}>\theta\})$ is the coverage probability, and $\textbf{1}\{.\}$ is the indicator function. The \ac{D2D} communication rate under the slotted-ALOHA access scheme is then given by
\begin{equation}
\label{rate_eqn1}
R_{1}(r) = pW_{1} {\rm log}_2 \big(1 + \theta \big) \textbf{1}\{ \gamma_{r} > \theta\}
\end{equation}
Then, the probability $ \mathbb{P}(R_{1}(r)>R_0)$ is derived as follows.
\begin{align}
\mathbb{P}(R_{1}(r)>R_0)&= \mathbb{P} \big(pW_{1} {\rm log}_2 (1 + \theta)\textbf{1}\{ \gamma_{r} > \theta\}
>R_0\big) \nonumber \\
&=\mathbb{P} \big(\textbf{1}\{ \gamma_{r} > \theta\}
>\frac{R_0}{pW_{1} {\rm log}_2 (1 + \theta )}\big) \nonumber \\
&\overset{(a)}{=} \mathbb{P}\big(\frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}} > \theta\big)
\nonumber \\
&= \mathbb{P}\big( g_0 > \frac{\theta r^{\alpha}}{P_d} [I_{\Phi_p^{!}} + I_{\Phi_c}] \big)
\nonumber \\
&\overset{(b)}{=}\mathbb{E}_{I_{\Phi_p^{!}},I_{\Phi_c}}\Big[\text{exp}\big(\frac{-\theta r^{\alpha}}{P_d}{[I_{\Phi_p^{!}} + I_{\Phi_c}] }\big)\Big]
\nonumber \\
&\overset{(c)}{=} \mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)
\end{align}
where (a) follows from the assumption that $R_0 < pW_{1} {\rm log}_2 \big(1 + \theta \big)$ always holds; otherwise, $\mathbb{P}(R_{1}(r)>R_0)$ is zero. (b) follows from the fact that $g_0$ follows an exponential distribution, and (c) follows from the independence of the intra- and inter-cluster interference and the definition of their Laplace transforms.
In what follows, we first derive the Laplace transform of interference to get $\mathbb{P}(R_{1}(r)>R_0)$. Then, we formulate the offloading gain maximization problem.
\begin{lemma}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_inter}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$, and $f_U(u|v) = \mathrm{Rice} (u| v, \sigma)$ represents Rice's \ac{PDF} of parameter $\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix A.
\end{proof}
\begin{lemma}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_intra}
\mathscr{L}_{I_{\Phi_c} }(s) = {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where $f_H(h) =\mathrm{Rayleigh}(h,\sqrt{2}\sigma)$ represents Rayleigh's \ac{PDF} with a scale parameter $\sqrt{2}\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix B.
\end{proof}
For the serving distance distribution $f(r)$, since both the typical device and a potential catering device have their locations drawn from a normal distribution with variance $\sigma^2$ around the cluster center, the serving distance has a Rayleigh distribution with parameter $\sqrt{2}\sigma$ and is given by
\begin{align}
\label{rayleigh}
f(r)= \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}, \quad r>0
\end{align}
From (\ref{LT_inter}), (\ref{LT_intra}), and (\ref{rayleigh}), the offloading gain in (\ref{offloading_gain}) is characterized as
\begin{align}
\label{offloading_gain_1}
\mathbb{P}_o(p,b_i) &= \sum_{i=1}^{N_f} \Big[q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}}) \underbrace{\int_{r=0}^{\infty}
\frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)\dd{r}}_{\mathbb{P}(R_1>R_0)}\Big].
\end{align}
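The integral defining $\mathbb{P}(R_1>R_0)$ lends itself to standard numerical quadrature. The following self-contained Python sketch evaluates it under illustrative assumptions (parameter values in the spirit of Table~\ref{ch3:table:sim-parameter}, converted to meters; $P_d$ normalized to one):
\begin{verbatim}
# Numerical sketch of P(R_1 > R_0) via nested quadrature (illustrative
# parameters; slow, for checking purposes only).
import numpy as np
from scipy import integrate
from scipy.stats import rice, rayleigh

sigma, alpha, n_bar, p = 10.0, 4.0, 10.0, 0.3
lam_p, theta, P_d = 50e-6, 1.0, 1.0        # 50 clusters/km^2, theta = 0 dB

def L_inter(s):
    def varphi(v):
        f = lambda u: s/(s + u**alpha) * rice.pdf(u, b=v/sigma, scale=sigma)
        return integrate.quad(f, 0, np.inf, limit=200)[0]
    g = lambda v: (1 - np.exp(-p*n_bar*varphi(v))) * v
    return np.exp(-2*np.pi*lam_p * integrate.quad(g, 0, np.inf, limit=200)[0])

def L_intra(s):
    f = lambda h: s/(s + h**alpha) * rayleigh.pdf(h, scale=np.sqrt(2)*sigma)
    return np.exp(-p*n_bar * integrate.quad(f, 0, np.inf, limit=200)[0])

def P_link():
    f_r = lambda r: r/(2*sigma**2) * np.exp(-r**2/(4*sigma**2))
    h = lambda r: f_r(r)*L_inter(theta*r**alpha/P_d)*L_intra(theta*r**alpha/P_d)
    return integrate.quad(h, 0, 200.0, limit=100)[0]  # f(r) ~ 0 beyond 200 m

print(P_link())
\end{verbatim}
Here \texttt{rice.pdf} and \texttt{rayleigh.pdf} play the roles of $f_U(u|v)$ and $f_H(h)$; the nested quadrature is slow but serves as an independent check on the Monte Carlo validation reported later.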
The offloading gain maximization problem can hence be formulated as
\begin{align}
\label{optimize_eqn_p}
\textbf{P1:} \quad &\underset{p, b_i}{\text{max}} \quad \mathbb{P}_o(p,b_i) \\
\label{const110}
&\textrm{s.t.}\quad \sum_{i=1}^{N_f} b_i = M, \\
\label{const111}
& b_i \in [ 0, 1], \\
\label{const112}
& p \in [ 0, 1],
\end{align}
where (\ref{const110}) is the device cache size constraint, which is consistent with the illustration of the example in Fig. \ref{prob_cache_example}. Since the offloading gain depends on the access probability $p$, and $p$ appears in a complex exponential term in $\mathbb{P}(R_1>R_0)$, it is hard to analytically characterize (e.g., show concavity of) the objective function or find a tractable expression for the optimal access probability. In order to tackle this, we propose to solve \textbf{P1} by first finding the optimal $p^*$ that maximizes the probability $\mathbb{P}(R_{1}>R_0)$ over the interval $p \in [ 0, 1]$. Then, the obtained $p^*$ is used to solve for the caching probability $b_i$ in the optimization problem below. Since $p$ and $b_i$ are separable in the structure of \textbf{P1}, it is possible to solve numerically for $p^*$ and then substitute it to obtain $b_i^*$.
\begin{align}
\label{optimize_eqn_b_i}
\textbf{P2:} \quad &\underset{b_i}{\text{max}} \quad \mathbb{P}_o(p^*,b_i) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
The optimal caching probability is formulated in the next lemma.
\begin{lemma}
$\mathbb{P}_o(p^*,b_i)$ is a concave function w.r.t. $b_i$ and the optimal caching probability $b_i^{*}$ that maximizes the offloading gain is given by
\[
b_{i}^{*}=\begin{cases}
1, & v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)\\
0, & v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)\\
\psi(v^{*}), & {\rm otherwise}
\end{cases}
\]
where $\psi(v^{*})$ is the solution of (\ref{psii_offload}) for $b_i^*$, with the multiplier $v^{*}$ chosen such that $\sum_{i=1}^{N_f} b_i^*=M$.
\end{lemma}
\begin{proof}
Please see Appendix C.
\end{proof}
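Operationally, the lemma amounts to a water-filling-like bisection on the multiplier $v^{*}$. A minimal sketch follows (illustrative assumptions: Zipf popularity, and a hypothetical value $\mathbb{P}(R_1>R_0)=0.6$ standing in for the integral computed above):
\begin{verbatim}
# Bisection on the multiplier v* of Lemma 3 (illustrative inputs).
import numpy as np
from scipy.optimize import brentq

N_f, M, n_bar, beta, P = 500, 5, 10.0, 1.0, 0.6
q = 1.0/np.arange(1, N_f + 1)**beta; q /= q.sum()

def b_of_v(v):
    b = np.empty(N_f)
    for i, qi in enumerate(q):
        lo = qi - qi*(1 - np.exp(-n_bar))*P     # gradient at b_i = 1
        hi = qi + n_bar*qi*P                    # gradient at b_i = 0
        if v <= lo:   b[i] = 1.0
        elif v >= hi: b[i] = 0.0
        else:                                   # interior: solve for b_i
            g = lambda x: qi + qi*(n_bar*(1 - x)*np.exp(-n_bar*x)
                                   - (1 - np.exp(-n_bar*x)))*P - v
            b[i] = brentq(g, 0.0, 1.0)
    return b

v_lo, v_hi = 0.0, q[0]*(1 + n_bar*P)            # sum(b) is decreasing in v
for _ in range(60):
    v = 0.5*(v_lo + v_hi)
    if b_of_v(v).sum() > M: v_lo = v
    else:                   v_hi = v
b_star = b_of_v(v)
print(b_star.sum(), b_star[:5])                 # sums to M; most popular first
\end{verbatim}
The per-content scalar equation is solved by \texttt{brentq}, which is valid because the derivative in (\ref{psii_offload}) decreases monotonically between its values at $b_i=0$ and $b_i=1$.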
\section{Energy Consumption}
In this section, we formulate the energy consumption minimization problem for the clustered \ac{D2D} caching network. In fact, significant energy consumption occurs only when content is served via \ac{D2D} or \ac{BS}-to-Device transmission. We consider the time cost $c_{d_i}$ as the time it takes to download the $i$-th content from a neighboring device in the same cluster. Considering the size ${S}_i$ of the $i$-th ranked content, $c_{d_i}={S}_i/R_1$, where $R_1$ denotes the average rate of the \ac{D2D} communication. Similarly, we have $c_{b_i} = {S}_i/R_2$ when the $i$-th content is served by the \ac{BS} with average rate $R_2$. The average energy consumption when downloading files by the devices in the representative cluster is given by
\begin{align}
\label{energy_avrg}
E_{av} = \sum_{k=1}^{\infty} E(b_i|k)\mathbb{P}(n=k)
\end{align}
where $\mathbb{P}(n=k)$ is the probability that there are $k$ devices in the cluster $x_0$, and $E(b_i|k)$ is the energy consumption conditioning on having $k$ devices within the cluster $x_0$, written similar to \cite{energy_efficiency} as
\begin{equation}
E(b_i|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\label{energy}
\end{equation}
where $\mathbb{P}_{j,i}^d$ and $\mathbb{P}_{j,i}^b$ represent the probability of obtaining the $i$-th content by the $j$-th device from the local cluster, i.e., via \ac{D2D} communication, and the BS, respectively. $P_b$ denotes the \ac{BS} transmission power. Given that there are $k$ devices per cluster, it is obvious that $\mathbb{P}_{j,i}^b=(1-b_i)^{k}$, and $\mathbb{P}_{j,i}^d=(1 - b_i)\big(1-(1-b_i)^{k-1}\big)$.
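Given placeholder values for the average rates (their derivation follows next), the conditional energy in (\ref{energy}) can be evaluated directly; all numbers in the sketch below are illustrative assumptions:
\begin{verbatim}
# Toy evaluation of E(b_i|k): powers in Watts, sizes in Mbits, rates in
# Mbit/s (hypothetical values), so each summand is in Joules.
import numpy as np

N_f, M, k, beta = 500, 5, 10, 1.0
q = 1.0/np.arange(1, N_f + 1)**beta; q /= q.sum()
b = np.full(N_f, M/N_f)                    # illustrative caching vector
S = np.full(N_f, 3.0)                      # content sizes S_i [Mbits]
P_d, P_b, R1, R2 = 0.2, 20.0, 18.0, 11.0   # assumed powers and average rates
P_jd = (1 - b)*(1 - (1 - b)**(k - 1))      # served via D2D
P_jb = (1 - b)**k                          # served via the BS
E_k = k*np.sum(q*(P_jd*P_d*S/R1 + P_jb*P_b*S/R2))
print(E_k)                                 # expected energy for k requests [J]
\end{verbatim}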
The average rates $R_1$ and $R_2$ are now computed to get a closed-form expression for $E(b_i|k)$.
From equation (\ref{rate_eqn}), we need to obtain the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ and the \ac{BS}-to-Device coverage probability $\mathrm{P_{c_b}}$ to calculate $R_1$ and $R_2$, respectively. Given the number of devices $k$ in the representative cluster, the Laplace transform of the inter-cluster interference is as obtained in (\ref{LT_inter}). However, the intra-cluster interfering devices no longer form a Gaussian PPP since the number of devices is conditionally fixed, i.e., no longer a Poisson random number. To facilitate the analysis, for every realization $k$, we assume that the intra-cluster interfering devices form a Gaussian PPP with intensity function $pkf_Y(y)$.
Such an assumption is mandatory for tractability and is validated in the numerical section.
From Lemma 2, the intra-cluster Laplace transform conditioning on $k$ can be approximated as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|k) &\approx {\rm exp}\Big(-pk \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\nonumber
\end{align}
and directly, the \ac{D2D} coverage probability is given by
\begin{align}
\label{p_b_d2d}
\mathrm{P_{c_d}} =
\int_{r=0}^{\infty} \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big|k\big)\dd{r}
\end{align}
With the adopted slotted-ALOHA scheme, the access probability $p$ is optimized over the interval $[0,1]$ to maximize $\mathrm{P_{c_d}}$ and, in turn, the \ac{D2D} achievable rate $R_1$. Analogously, under the PPP $\Phi_{bs}$, and based on the nearest \ac{BS} association principle, it is shown in \cite{andrews2011tractable} that the \ac{BS} coverage probability can be expressed as
\begin{equation}
\mathrm{P_{c_b}} =\frac{1}{{}_2 F_1(1,-\delta;1-\delta;-\theta)},
\label{p_b_bs}
\end{equation}
where ${}_2 F_1(.)$ is the Gaussian hypergeometric function and $\delta = 2/\alpha$. Given the coverage probabilities $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ in (\ref{p_b_d2d}) and (\ref{p_b_bs}), respectively, $R_1$ and $R_2 $ can be calculated from (\ref{rate_eqn}), and hence $E(b_i|k)$ is expressed in a closed-form.
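The hypergeometric form of (\ref{p_b_bs}) is available directly in scientific computing libraries; for instance (assumed $\theta$ and $\alpha$):
\begin{verbatim}
# Nearest-BS coverage probability via the Gauss hypergeometric function.
from scipy.special import hyp2f1
theta, alpha = 1.0, 4.0               # 0 dB threshold, path loss exponent
delta = 2.0/alpha
print(1.0/hyp2f1(1.0, -delta, 1.0 - delta, -theta))  # ~0.56 = 1/(1 + pi/4)
\end{verbatim}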
\subsection{Energy Consumption Minimization}
The energy minimization problem can be formulated as
\begin{align}
\label{optimize_eqn1}
&\textbf{P3:} \quad\underset{b_i}{\text{min}} \quad E(b_i|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
In the next lemma, we prove the convexity condition for $E$.
\begin{lemma}
\label{convex_E}
The energy consumption $E(b_i|k)$ is convex if $\frac{P_b}{R_2}>\frac{P_d}{R_1}$.
\end{lemma}
\begin{proof}
We proceed by deriving the Hessian matrix of $E(b_i|k)$ w.r.t. the caching variables $(b_1,\dots,b_{N_f})$, namely, \textbf{H}$_{i,j} = \frac{\partial^2 E(b_i|k)}{\partial b_i \partial b_j}$, $\forall i,j \in \mathcal{F}$. \textbf{H} is a diagonal matrix whose $i$-th diagonal element is given by $k(k-1) S_i\Big(\frac{P_b}{R_2}-\frac{P_d}{R_1}\Big)q_i(1 - b_i)^{k-2}$.
Since the obtained Hessian matrix is diagonal, it is positive semidefinite (and hence $E(b_i|k)$ is convex) if all the diagonal entries are nonnegative, i.e., when
$\frac{P_b}{R_2}>\frac{P_d}{R_1}$. In practice, it is reasonable to assume that $P_b \gg P_d$; e.g., in \cite{ericsson}, the \ac{BS} transmit power is 100 times the \ac{D2D} power.
\end{proof}
As a result of Lemma \ref{convex_E}, the optimal caching probability can be computed to minimize $E(b_i|k)$.
\begin{lemma}
The optimal caching probability $b_i^{*}$ for the energy minimization problem \textbf{P3} is given by,
\begin{align}
b_i^* = \Bigg[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Bigg]^{+}
\end{align}
where $v^{*}$ satisfies the maximum cache constraint $\sum_{i=1}^{N_f} \Big[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Big]^{+}=M$, and $[x]^+ =$ max$(x,0)$.
\end{lemma}
\begin{proof}
The proof proceeds in a similar manner to Lemma 3 and is omitted.
\end{proof}
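As with the offloading problem, the multiplier $v^{*}$ can be found by bisection, since $\sum_i b_i^*(v)$ is monotone in $v$. A minimal sketch with hypothetical power-to-rate ratios satisfying the convexity condition of Lemma \ref{convex_E}:
\begin{verbatim}
# Bisection on v* for the energy-optimal caching probability (illustrative).
import numpy as np

N_f, M, k, beta = 500, 5, 10, 1.0
q = 1.0/np.arange(1, N_f + 1)**beta; q /= q.sum()
S = np.full(N_f, 3.0)                 # content sizes [Mbits], illustrative
Pd_R1, Pb_R2 = 0.2/18.0, 20.0/11.0    # hypothetical ratios, Pb/R2 > Pd/R1

def b_of_v(v):
    A = k**2 * q * S * Pd_R1
    B = k * q * S * (Pd_R1 - Pb_R2)   # negative under the convexity condition
    r = np.maximum((v + A)/B, 0.0)    # clamp: ratio < 0 maps to b_i = 1
    return np.clip(1.0 - r**(1.0/(k - 1)), 0.0, 1.0)

v_lo, v_hi = -1e3, 1e3                # sum(b) grows from 0 to N_f over v
for _ in range(80):
    v = 0.5*(v_lo + v_hi)
    if b_of_v(v).sum() > M: v_hi = v
    else:                   v_lo = v
b_star = b_of_v(v)
print(b_star.sum(), b_star[:5])
\end{verbatim}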
\begin{proposition} {\rm The optimal caching probability $b_i^*$ of \textbf{P3} is governed by the content size $S_i$ and by the gap between the two power-to-rate ratios $\frac{P_b}{R_2}$ and $\frac{P_d}{R_1}$ of the \ac{BS}-to-Device and \ac{D2D} links, respectively.}
\end{proposition}
By substituting $b_i^*$ into (\ref{energy_avrg}), the average energy consumption per cluster is obtained. In the remainder of the paper, we study and minimize the weighted average delay per request for the proposed system.
\section{Delay Analysis}
In this section, the delay analysis and minimization are discussed. A joint stochastic geometry and queueing theory model is exploited to study this problem. The delay analysis incorporates the study of a system of spatially interacting queues. To simplify the mathematical analysis, we further consider that only one \ac{D2D} link can be active within a cluster of $k$ devices, where $k$ is fixed. As shown later, such an assumption facilitates the analysis and yields simple expressions. We begin by deriving the \ac{D2D} coverage probability under the above assumption, which is used later in this section.
\begin{lemma}
\label{coverage_lemma}
The \ac{D2D} coverage probability of the proposed clustered model with one active \ac{D2D} link within a cluster is given by
\begin{align}
\label{v_0}
\mathrm{P_{c_d}} = \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)} ,
\end{align}
where $Z(\theta,\alpha,\sigma) = (\pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2})$.
\end{lemma}
\begin{proof}
The result can be proved by using the displacement theory of the PPP \cite{daley2007introduction}, and then proceeding in a similar manner to Lemma 1 and 2. The proof is presented in Appendix D for completeness.
\end{proof}
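The closed form in (\ref{v_0}) is immediate to evaluate; with the parameter values of Table~\ref{ch3:table:sim-parameter} (an illustrative check):
\begin{verbatim}
# Closed-form D2D coverage probability with one active link per cluster.
import numpy as np
from scipy.special import gamma

lam_p = 50e-6                          # 50 clusters/km^2 in m^-2
theta, alpha, sigma = 1.0, 4.0, 10.0   # 0 dB, path loss 4, 10 m spread
Z = np.pi*lam_p*theta**(2/alpha)*gamma(1 + 2/alpha)*gamma(1 - 2/alpha) \
    + 1/(4*sigma**2)
print(1/(4*sigma**2*Z))                # ~0.91 with these values
\end{verbatim}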
In the following, we firstly describe the traffic model of the network, and then we formulate the delay minimization problem.
\begin{figure}[t]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/delay_queue2}
\caption {The traffic model of request arrivals and departures in a given cluster. Two M/G/1 queues are assumed, $Q_1$ and $Q_2$, that represent respectively requests served by the \ac{D2D} and \ac{BS}-to-Device communication.}
\label{delay_queue}
\end{center}
\end{figure}
\subsection{Traffic Model}
We assume that the aggregate request arrival process from the devices in each cluster follows a Poisson arrival process with parameter $\zeta_{tot}$ (requests per time slot). As shown in Fig.~\ref{delay_queue}, the incoming requests are further divided according to where they are served from. $\zeta_{1}$ represents the arrival rate of requests served via the \ac{D2D} communication, whereas $\zeta_{2}$ is the arrival rate for those served from the BSs. $\zeta_{3} = \zeta_{tot} - \zeta_{1} - \zeta_{2}$ denotes the arrival rate of requests served via the self-cache with zero delay. By the splitting property of the Poisson process, $\zeta_{1}$ and $\zeta_{2}$ are also Poisson arrival processes. Without loss of generality, we assume that the file size has a general distribution $G$ whose mean is denoted as $\overline{S}$ MBytes. Hence, an M/G/1 queuing model is adopted whereby two non-interacting queues, $Q_1$ and $Q_2$, model the traffic in each cluster served via the \ac{D2D} and \ac{BS}-to-Device communication, respectively. Although $Q_1$ and $Q_2$ are non-interacting, as the \ac{D2D} communication is assumed to be out-of-band, these two queues spatially interact with similar queues in other clusters. To recap, $Q_1$ and $Q_2$ are two M/G/1 queues with arrival rates $\zeta_{1}$ and $\zeta_{2}$, and service rates $\mu_1$ and $\mu_2$, respectively.
\subsection{Queue Dynamics}
It is worth highlighting that the two queues $Q_i$, $i \in \{1,2\}$, accumulate requests for files demanded by the cluster members, not the files themselves. First-in first-out (FIFO) scheduling is assumed, whereby a request that arrives first is scheduled first, either over the \ac{D2D} or the \ac{BS} communication depending on whether or not the content is cached among the devices. The outcome of FIFO scheduling depends only on the time at which the request arrives at the queue and is independent of the particular device that issues the request. Given the parameter $\zeta_{tot}$ of the Poisson arrival process, the arrival rates at the two queues are expressed respectively as
\begin{align}
\zeta_{1} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big), \\
\zeta_{2} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}
\end{align}
The network operation is depicted in Fig. \ref{delay_queue}, and described in detail below.
\begin{enumerate}
\item Given the memoryless property of the arrival process (Poisson arrival) along with the assumption that the service process is independent of the arrival process,
the number of requests in any queue at a future time only depends upon the current number in the system (at time $t$) and the arrivals or departures that occur within the interval $h$.
\begin{align}
Q_{i}(t+h) = Q_{i}(t) + \Lambda_{i}(h) - M_{i}(h)
\end{align}
where $\Lambda_{i}(h)$ is the number of arrivals in the time interval $(t,t+h)$, whose mean is $\zeta_i$ [sec$^{-1}$], and $M_{i}(h)$ is the number of departures in the time interval $(t,t+h)$, whose mean is $\mu_i = \frac{\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})W_i{\rm log}_2(1+\theta)}{\overline{S}}$ [sec$^{-1}$]. It is worth highlighting that, unlike the spatial-only model studied in the previous sections, the term $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ is dependent on the traffic dynamics since a request being served in a given cluster is interfered with only by other clusters that also have requests to serve. What is more noteworthy is that the service time follows the same type of distribution as the file size, with mean $\tau_i = \frac{1}{\mu_i}$. These aspects will be revisited later in this section.
\item $\Lambda_{i}(h)$ depends only on $h$ because the arrival process is Poisson. $M_{i}(h)$ is $0$ if the service time of the file being served satisfies $\epsilon_i >h$. $M_{i}(h)$ is $1$ if $\epsilon_1 <h$ and $\epsilon_1 + \epsilon_2>h$, and so on. As the service times $\epsilon_1, \epsilon_2, \dots, \epsilon_n$ are independent, neither $\Lambda_{i}(h)$ nor $M_{i}(h)$ depends on what happened prior to $t$. Thus, $Q_{i}(t+h)$ only depends upon $Q_{i}(t)$ and not on the past history. Hence, it is a \ac{CTMC} which obeys the stability conditions in \cite{szpankowski1994stability}.
\end{enumerate}
The following proposition provides the conditions for the stability of the buffers in the sense defined in \cite{szpankowski1994stability}, i.e., $\{Q_{i}\}$ has a limiting distribution as $t \rightarrow \infty$.
\begin{proposition} {\rm The \ac{D2D} and \ac{BS}-to-Device traffic modeling queues are stable, respectively, if and only if}
\begin{align}
\label{stable1}
\zeta_1 < \mu_1 &= \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}} \\
\label{stable2}
\zeta_2 < \mu_2 & =\frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}
\end{align}
\end{proposition}
\begin{proof}
We show sufficiency by proving that (\ref{stable1}) and (\ref{stable2}) guarantee stability in a dominant network, where all queues that have empty buffers make dummy transmissions. The dominant network is a fictitious system that is identical to the original system, except that terminals may choose to transmit even when their respective buffers are empty, in which case they simply transmit a dummy packet. If both systems are started from the same initial state and fed with the same arrivals, then the queues in the fictitious dominant system can never be shorter than the queues in the original system.
Similar to the spatial-only network, in the dominant system, the typical receiver sees interference from all other clusters whether they have requests to serve or not. This dominant system approach yields $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ equal to $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ for the \ac{D2D} and \ac{BS}-to-Device communication, respectively. Also, the obtained delay is an upper bound for the actual delay of the system. The necessity of (\ref{stable1}) and (\ref{stable2}) is shown as follows: If $\zeta_i>\mu_i$, then, by Loynes' theorem \cite{loynes1962stability}, it follows that lim$_{t\rightarrow \infty}Q_i(t)=\infty$ (a.s.) for all queues in the dominant network.
\end{proof}
Next, we conduct the analysis for the dominant system whose parameters are as follows. The content size has an exponential distribution with mean $\overline{S}$ [MBytes]. The service times also obey the same exponential distribution with means $\tau_1 = \frac{\overline{S}}{R_1}$ [second] and $\tau_2 = \frac{\overline{S}}{R_2}$ [second]. The rates $R_1$ and $R_2$ are calculated from (\ref{rate_eqn}), where $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ are obtained from (\ref{v_0}) and (\ref{p_b_bs}), respectively. Accordingly, $Q_1$ and $Q_2$ are two continuous-time independent (non-interacting) M/M/1 queues with service rates $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$ and $\mu_2 = \frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}$ [sec$^{-1}$], respectively.
\begin{proposition} {\rm The mean queue length $L_i$ of the $i$-th queue is given by}
\begin{align}
\label{queue_len}
L_i &= \rho_i + \frac{\rho_i^2}{1 - \rho_i} = \frac{\rho_i}{1 - \rho_i},
\end{align}
\end{proposition}
\begin{proof}
We can easily calculate $L_i$ by observing that $Q_i$ are continuous time M/M/1 queues with arrival rates $\zeta_i$, service rates $\mu_i$, and traffic intensities $\rho_i = \frac{\zeta_i}{\mu_i}$. Then, by applying the Pollaczek-Khinchine formula \cite{Kleinrock}, $L_i$ is directly obtained.
\end{proof}
The average delay per request for each queue is calculated from
\begin{align}
D_1 &= \frac{L_1}{\zeta_1}= \frac{1}{\mu_1 - \zeta_1} = \frac{1}{W_1\mathcal{O}_{1} - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} \\
D_2 &= \frac{L_2}{\zeta_2}=\frac{1}{\mu_2 - \zeta_2} = \frac{1}{W_2\mathcal{O}_{2} - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
where $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, $\mathcal{O}_{2}= \frac{\mathrm{P_{c_b}} {\rm log}_2(1+\theta)}{\overline{S}}$ for notational simplicity. The weighted average delay $D$ is then expressed as
\begin{align}
D&= \frac{\zeta_{1}D_1 + \zeta_{2}D_2}{\zeta_{tot}} \nonumber \\
&= \frac{\sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)}{ \mathcal{O}_{1}W_1 - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} + \frac{\sum_{i=1}^{N_f}q_i (1-b_i)^{k}}{ \mathcal{O}_{2}W_2 - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
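For a concrete feel of this expression, the sketch below evaluates $D$ at an illustrative operating point; $\mathcal{O}_1$ and $\mathcal{O}_2$ are built from hypothetical coverage probabilities and $\overline{S}=3$ Mbits (all values are assumptions):
\begin{verbatim}
# Evaluating the weighted average delay D (uniform caching, even split).
import numpy as np

N_f, M, k, beta = 500, 5, 10, 1.0
q = 1.0/np.arange(1, N_f + 1)**beta; q /= q.sum()
b = np.full(N_f, M/N_f); bb = 1 - b
zeta_tot, W, W1 = 1.0, 20e6, 10e6         # requests/s, Hz
O1 = 0.25*np.log2(1 + 1.0)/3e6            # P_cd log2(1+theta)/S_bar
O2 = 0.56*np.log2(1 + 1.0)/3e6            # P_cb log2(1+theta)/S_bar
n1 = np.sum(q*(bb - bb**k))               # D2D load term
n2 = np.sum(q*bb**k)                      # BS load term
D = n1/(O1*W1 - zeta_tot*n1) + n2/(O2*(W - W1) - zeta_tot*n2)
print(D)                                  # ~1 s at this operating point
\end{verbatim}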
One important insight from the delay equation is that the caching probability $b_i$ controls the arrival rates $\zeta_{1}$ and $\zeta_{2}$, while the bandwidth determines the service rates $\mu_1$ and $\mu_2$. Therefore, it is of paramount importance to jointly optimize $b_i$ and $W_1$ to minimize the average delay. Subsequently, we formulate the joint caching and bandwidth allocation problem that minimizes the weighted average delay per request as
\begin{align}
\label{optimize_eqn3}
\textbf{P4:} \quad \quad&\underset{b_i,{\rm W}_1}{\text{min}} \quad D(b_i,W_1) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber \\
& 0 \leq W_1 \leq W,
\end{align}
Although the objective function of \textbf{P4} is convex w.r.t. $W_1$, as derived below, the coupling of the optimization variables $b_i$ and $W_1$ makes \textbf{P4} a non-convex optimization problem. Therefore, \textbf{P4} cannot be solved directly using standard convex optimization techniques.
By applying the \ac{BCD} optimization technique, \textbf{P4} can be solved in an iterative manner as follows. First, for a given caching probability $b_i$, we solve the bandwidth allocation subproblem. Afterwards, the obtained optimal bandwidth is used to update $b_i$. The optimal bandwidth for the bandwidth allocation subproblem is given in the next lemma.
\begin{lemma}
The objective function of \textbf{P4} in (\ref{optimize_eqn3}) is convex w.r.t. $W_1$, and the optimal bandwidth allocation to the \ac{D2D} communication is given by
\begin{align}
W_1^* = \frac{\zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k}) +\varpi \big(\mathcal{O}_{2}W - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)}{\mathcal{O}_{1}+\varpi\mathcal{O}_{2}},
\end{align}
where $\overline{b}_i = 1 - b_i$ and $\varpi=\sqrt{\frac{\mathcal{O}_{1}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})}{\mathcal{O}_{2} \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}}}$.
\end{lemma}
\begin{proof}
$D(b_i,W_1|k)$ can be written as
\begin{align}
\label{optimize_eqn3_p1}
\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-1} + \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-1}, \nonumber
\end{align}
The second derivative $\frac{\partial^2 D(b_i,W_1|k)}{\partial W_1^2}$ is hence given by
\begin{align}
2\mathcal{O}_{1}^2\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-3} + 2\mathcal{O}_{2}^2\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-3}, \nonumber
\end{align}
The stability condition requires that $\mathcal{O}_{1}W_1 > \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})$ and $\mathcal{O}_{2}W_2 > \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}$. Also, $\overline{b}_i \geq \overline{b}_i^{k}$ by definition. Hence, $\frac{\partial^2 D(b_i,W_1|k)}{\partial W_1^2} > 0$, and the objective function is a convex function of $W_1$. The optimal bandwidth allocation can be obtained from the Karush-Kuhn-Tucker (KKT) conditions similar to problems \textbf{P2} and \textbf{P3}, with the details omitted for brevity.
\end{proof}
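The closed form of $W_1^*$ is straightforward to evaluate; a self-contained sketch with the same illustrative quantities as in the delay evaluation above:
\begin{verbatim}
# Closed-form optimal D2D bandwidth share (illustrative setup).
import numpy as np

N_f, M, k, beta = 500, 5, 10, 1.0
q = 1.0/np.arange(1, N_f + 1)**beta; q /= q.sum()
bb = 1 - np.full(N_f, M/N_f)              # bb = 1 - b_i, uniform caching
zeta_tot, W = 1.0, 20e6
O1, O2 = 0.25/3e6, 0.56/3e6               # hypothetical O_1, O_2 (0 dB)
n1, n2 = np.sum(q*(bb - bb**k)), np.sum(q*bb**k)
varpi = np.sqrt(O1*n1/(O2*n2))
W1 = (zeta_tot*n1 + varpi*(O2*W - zeta_tot*n2))/(O1 + varpi*O2)
print(W1/W)                               # optimal D2D share of the bandwidth
\end{verbatim}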
Given $W_1^*$ from the bandwidth allocation subproblem, the caching probability subproblem can be written as
\begin{align}
\textbf{P5:} \quad \quad&\underset{b_i}{\text{min}} \quad D(b_i,W_1^*) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111})
\end{align}
The caching probability subproblem \textbf{P5} is a sum of two fractional functions, where the first fraction is in the form of a concave over a convex function while the second fraction is in the form of a convex over a concave function. The first fraction structure, i.e., concave over convex, renders solving this problem using fractional programming very challenging.\footnote{Dinkelbach's transform can be used to solve a minimization of fractional functions that has the form of convex over concave functions, whereby an equivalent problem is solved with the objective function as the difference between the convex (numerator) and concave (denominator) functions \cite{schaible1973fractional}.} Hence, we use the successive convex approximation technique to solve for $b_i$ given the optimal bandwidth $W_1^*$. The explained procedure is repeated until the value of \textbf{P4}'s objective function converges to a pre-specified accuracy.
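Before moving to the numerical results, the BCD procedure can be summarized in code. The sketch below is a simplified stand-in: the caching update is one projected-gradient step using the analytic gradient of $D$, rather than a full SCA iteration, and all parameter values are assumptions.
\begin{verbatim}
# Schematic BCD loop for P4 (illustrative stand-in, small library).
import numpy as np

N_f, M, k, beta = 50, 5, 10, 1.0
q = 1.0/np.arange(1, N_f + 1)**beta; q /= q.sum()
zeta, W = 1.0, 20e6
O1, O2 = 0.25/3e6, 0.56/3e6               # hypothetical O_1, O_2

def loads(b):
    bb = 1 - b
    return np.sum(q*(bb - bb**k)), np.sum(q*bb**k)

def delay(b, W1):
    n1, n2 = loads(b)
    return n1/(O1*W1 - zeta*n1) + n2/(O2*(W - W1) - zeta*n2)

def solve_W1(b):                          # closed-form bandwidth subproblem
    n1, n2 = loads(b)
    varpi = np.sqrt(O1*n1/(O2*n2))
    return (zeta*n1 + varpi*(O2*W - zeta*n2))/(O1 + varpi*O2)

def proj(b):                              # project onto {0<=b<=1, sum b = M}
    lo, hi = b.min() - 1.0, b.max()
    for _ in range(60):
        tau = 0.5*(lo + hi)
        if np.clip(b - tau, 0, 1).sum() > M: lo = tau
        else:                                hi = tau
    return np.clip(b - tau, 0, 1)

def grad_b(b, W1):                        # analytic gradient of D w.r.t. b_i
    n1, n2 = loads(b); bb = 1 - b
    dn1 = q*(-1 + k*bb**(k - 1)); dn2 = -k*q*bb**(k - 1)
    g1 = O1*W1/(O1*W1 - zeta*n1)**2
    g2 = O2*(W - W1)/(O2*(W - W1) - zeta*n2)**2
    return g1*dn1 + g2*dn2

b = proj(np.full(N_f, M/N_f))
for _ in range(200):                      # alternate the two subproblems
    W1 = solve_W1(b)
    b = proj(b - 1e-3*grad_b(b, W1))
print(delay(b, W1), W1/W)
\end{verbatim}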
\section{Numerical Results}
\begin{table}[ht]
\caption{Simulation Parameters}
\centering
\begin{tabular}{c c c}
\hline\hline
Description & Parameter & Value \\ [0.5ex]
\hline
Network bandwidth & W & \SI{20}{\mega\hertz} \\
\ac{BS} transmission power & $P_b$ & \SI{43}{\deci\bel\of{m}} \\
\ac{D2D} transmission power & $P_d$ & \SI{23}{\deci\bel\of{m}} \\
Displacement standard deviation & $\sigma$ & \SI{10}{\metre} \\
Popularity index&$\beta$&1\\
Path loss exponent&$\alpha$&4\\
Library size&$N_f$&500 files\\
Cache size per device&$M$&5 files\\
Mean number of devices per cluster&$\overline{n}$&10\\
Density of $\Phi_p$&$\lambda_{p}$&50 clusters/km$^2$ \\
Average content size&$\overline{S}$&\SI{3}{Mbits} \\
$\mathrm{SIR}$ threshold&$\theta$&\SI{0}{\deci\bel}\\
\hline
\end{tabular}
\label{ch3:table:sim-parameter}
\end{table}
At first, we validate the developed mathematical model via Monte Carlo simulations. Then we benchmark the proposed caching scheme against conventional caching schemes. Unless otherwise stated, the network parameters are selected as shown in Table \ref{ch3:table:sim-parameter}.
\subsection{Offloading Gain Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/prob_r_geq_r0.eps}
\caption {The probability that the achievable rate is greater than a threshold $R_0$ versus standard deviation $\sigma$.}
\label{prob_r_geq_r0}
\end{center}
\end{figure}
In this subsection, we present the offloading gain performance for the proposed caching model.
In Fig.~\ref{prob_r_geq_r0}, we verify the accuracy of the analytical results for the probability $\mathbb{P}(R_1>R_0)$. The theoretical and simulated results are plotted together, and they are consistent. We can observe that the probability $\mathbb{P}(R_1>R_0)$ decreases monotonically with the increase of $\sigma$. This is because as $\sigma$ increases, the serving distance increases and the inter-cluster interfering distance between the out-of-cluster interferers and the typical device decreases, and equivalently, the $\mathrm{SIR}$ decreases. It is also shown that $\mathbb{P}(R_1>R_0)$ decreases with the $\mathrm{SIR}$ threshold $\theta$ as the channel becomes more prone to be in outage when increasing the $\mathrm{SIR}$ threshold $\theta$.
In Fig.~\ref{prob_r_geq_r0_vs_p}, we plot the probability $\mathbb{P}(R_1>R_0)$ against the channel access probability $p$ at different thresholds $R_0$. As evident from the plot, we see that there is an optimal $p^*$; before it the probability $\mathbb{P}(R_1>R_0)$ tends to increase since the channel access probability increases, and beyond it, the probability $\mathbb{P}(R_1>R_0)$ decreases monotonically due to the effect of more interferers accessing the channel. It is quite natural that $\mathbb{P}(R_1>R_0)$ is higher when decreasing the rate threshold $R_0$. Also, we observe that the optimal access probability $p^*$ is smaller when $R_0$ decreases. This implies that a transmitting device can maximize the probability $\mathbb{P}(R_1>R_0)$ at the receiver when $R_0$ is smaller by accessing the channel less frequently, and correspondingly, receiving lower interference.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/prob_r_geq_r0_vs_p.eps}
\caption {The probability that the achievable rate is greater than a threshold $R_0$ versus the access probability $p$.}
\label{prob_r_geq_r0_vs_p}
\end{center}
\end{figure}
\begin{figure*}
\centering
\subfigure[$p=p^*$. ]{\includegraphics[width=3.0in]{Figures/ch3/histogram_b_i_p_star.eps}
\label{histogram_b_i_p_star}}
\subfigure[$p \neq p^*$.]{\includegraphics[width=3.0in]{Figures/ch3/histogram_b_i_p_leq_p_star.eps}
\label{histogram_b_i_p_leq_p_star}}
\caption{Histogram of the caching probability $b_i$ when (a) $p=p^*$ and (b) $p \neq p^*$.}
\label{histogram_b_i}
\end{figure*}
To show the effect of $p$ on the caching probability, in Fig.~\ref{histogram_b_i}, we plot the histogram of the optimal caching probability at different values of $p$, where $p=p^*$ in Fig.~\ref{histogram_b_i_p_star} and $p\neq p^*$ in Fig.~\ref{histogram_b_i_p_leq_p_star}. It is clear from the histograms that the optimal caching probability $b_i$ tends to be more skewed when $p\neq p^*$, i.e., when $\mathbb{P}(R_1>R_0)$ decreases. This shows that file sharing is more difficult when $p$ is not optimal. For example, if $p<p^*$, the system is too conservative owing to the small access probabilities. However, for $p>p^*$, the outage probability is high due to the aggressive interference. In such a low coverage probability regime, each device tends to cache the most popular files, leading to fewer opportunities of content transfer between devices.
\begin{figure*}
\centering
\subfigure[The offloading probability versus the popularity of files $\beta$ at different thresholds $R_0$. ]{\includegraphics[width=3.0in]{Figures/ch3/offloading_gain_vs_beta.eps}
\label{offloading_gain_vs_beta_R_0}}
\subfigure[The offloading probability versus the popularity of files $\beta$ under different caching schemes (PC, Zipf, CPF).]{\includegraphics[width=3.0in]{Figures/ch3/offloading_prob_cach_compare.eps}
\label{offloading_prob_cach_compare}}
\caption{The offloading probability versus the popularity of files $\beta$.}
\label{offloading_gain_vs_beta}
\end{figure*}
Last but not least, Fig.~\ref{offloading_gain_vs_beta} manifests the prominent effect of the files' popularity on the offloading gain. In Fig.~\ref{offloading_gain_vs_beta_R_0}, we plot the offloading gain against $\beta$ at different rate thresholds $R_0$. We note that the offloading gain monotonically increases with $\beta$, since fewer files are frequently requested and these files can be entirely cached among the cluster devices. Also, we see that the offloading gain decreases with the increase of $R_0$, since the probability $\mathbb{P}(R_1>R_0)$ decreases with $R_0$. In Fig.~\ref{offloading_prob_cach_compare}, we compare the offloading gain of three different caching schemes, namely, the proposed \ac{PC}, Zipf's caching (Zipf), and \ac{CPF}. We can see that the offloading gain under the \ac{PC} scheme attains the best performance as compared to the other schemes. Also, we note that both the \ac{PC} and Zipf's schemes attain the same offloading gain when $\beta=0$ owing to the uniformity of content popularity.
\subsection{Energy Consumption Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/energy_vs_beta5.eps}
\caption {Normalized energy consumption versus popularity exponent $\beta$.}
\label{energy_vs_beta}
\end{center}
\end{figure}
This subsection presents the energy consumption results.
Fig.~\ref{energy_vs_beta} shows the energy consumption, normalized to the mean number of devices per cluster, versus $\beta$ under different caching schemes, namely, \ac{PC}, Zipf, \ac{CPF}, and \ac{RC}. We can see that the minimized energy under the proposed \ac{PC} scheme attains the best performance as compared to the other schemes. Also, it is clear that, except for \ac{RC}, the consumed energy decreases with $\beta$. This can be justified by the fact that as $\beta$ increases, fewer files are frequently requested, and these files are more likely to be cached among the devices under the \ac{PC}, \ac{CPF}, and Zipf's caching schemes. These few files are therefore downloadable from the devices via low power \ac{D2D} communication. In the \ac{RC} scheme, files are uniformly chosen for caching independently of their popularity.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/energy_vs_n4.eps}
\caption {Normalized energy consumption versus the number of devices per cluster.}
\label{energy_vs_n}
\end{center}
\end{figure}
We plot the normalized energy consumption per device versus the mean number of devices per cluster in Fig.~\ref{energy_vs_n}. First, we see that the normalized energy consumption decreases with the number of devices. As the number of devices per cluster increases, it is more probable to obtain requested files via low power \ac{D2D} communication. When the number of devices per cluster is relatively large, the normalized energy consumption tends to flatten as most of the content becomes cached at the cluster devices.
\subsection{Delay Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/delay_compare.eps}
\caption {Weighted average delay versus the popularity exponent $\beta$.}
\label{delay_compare}
\end{center}
\end{figure}
In Fig.~\ref{delay_compare}, we compare the average delay of three different caching schemes, namely, \ac{PC}, Zipf, and \ac{CPF}. We can see that the jointly minimized average delay under the \ac{PC} scheme attains the best performance as compared to the other caching schemes. Also, we see that, in general, the average delay monotonically decreases with $\beta$, as fewer files account for most of the demand. Fig.~\ref{BW_compare} manifests the effect of the files' popularity $\beta$ on the allocated bandwidth. It is shown that the optimal \ac{D2D} allocated bandwidth $W_1^*$ keeps increasing with $\beta$. This can be interpreted as follows. When $\beta$ increases, fewer files become highly demanded, and these files can be entirely cached among the devices. To cope with the correspondingly larger number of requests served via \ac{D2D} communication, the \ac{D2D} allocated bandwidth needs to be increased.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/BW_compare1.eps}
\caption {Normalized bandwidth allocation versus the popularity exponent $\beta$.}
\label{BW_compare}
\end{center}
\end{figure}
\section{Conclusion}
In this work, we conduct a comprehensive analysis of the joint communication and caching for a clustered \ac{D2D} network with random probabilistic caching incorporated at the devices. We first maximize the offloading gain of the proposed system by jointly optimizing the channel access and caching probability. We solve for the channel access probability numerically, and the optimal caching probability is then characterized. We show that deviating from the optimal access probability $p^*$ makes file sharing more difficult. More precisely, the system is too conservative for small access probabilities, while the interference is too aggressive for larger access probabilities. Then, we minimize the energy consumption of the proposed clustered \ac{D2D} network. We formulate the energy minimization problem, show that it is convex, and obtain the optimal caching probability. Finally, we adopt a queuing model for the devices' traffic within each cluster to investigate the network average delay. Two M/G/1 queues are employed to model the \ac{D2D} and \ac{BS}-to-Device communications. We then derive an expression for the weighted average delay per request. We observe that the average delay depends on the caching probability and the allocated bandwidth, which control respectively the arrival rate and service rate of the two modeling queues. Therefore, we minimize the per-request weighted average delay by jointly optimizing the bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the \ac{BCD} optimization technique, the joint minimization problem is solved in an iterative manner. Results show roughly up to $10\%$, $17\%$, and $140\%$ improvement gains in the offloading gain, energy consumption, and average delay, respectively, compared to Zipf's caching technique.
\begin{appendices}
\section{Proof of Lemma 1}
Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$, evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$, can be evaluated as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= \mathbb{E} \Bigg[e^{-s \sum_{\Phi_p^{!}} \sum_{y \in \mathcal{B}^p} g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p,g_{y_{x}}} \prod_{y \in \mathcal{B}^p} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p} \prod_{y \in \mathcal{B}^p} \mathbb{E}_{g_{y_{x}}} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(a)}{=} \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\mathcal{B}^p} \prod_{y \in \mathcal{B}^p} \frac{1}{1+s \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(b)}{=} \mathbb{E}_{\Phi_p} \prod_{\Phi_p^{!}} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Bigg(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big)\Bigg)\dd{x}\Bigg) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Bigg(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z\rVert^{-\alpha}}\Big)f_Y(z-x)\dd{z}\Big)\Bigg)\dd{x}\Bigg) \nonumber
\\
&\overset{(e)}{=} {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Bigg(1 - {\rm exp}\Big(-p\overline{n} \int_{u=0}^{\infty}\Big(1 - \frac{1}{1+s u^{-\alpha}}\Big)f_U(u|v)\dd{u}\Big)\Bigg)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}}\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$; (a) follows from the Rayleigh fading assumption, (b)
follows from the probability generating functional (PGFL) of the PPP $\Phi_{cp}$, (c) follows from the PGFL of the parent PPP $\Phi_p$, (d) follows from the change of variables $z = x + y$, and (e) follows from converting the Cartesian coordinates to polar coordinates. Hence, Lemma 1 is proven.
\section {Proof of Lemma 2}
Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$, evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$, is written as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &= \mathbb{E} \Bigg[e^{-s \sum_{y \in \mathcal{A}^p} g_{y} \lVert x_0 + y\rVert^{-\alpha}} \Bigg] \nonumber
\\
&= \mathbb{E}_{\mathcal{A}^p,g_{y}} \prod_{y \in\mathcal{A}^p} e^{-s g_{y} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&= \mathbb{E}_{\mathcal{A}^p} \prod_{y \in\mathcal{A}^p} \mathbb{E}_{g_{y}} e^{-s g_{y} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&\overset{(a)}{=} \mathbb{E}_{\mathcal{A}^p} \prod_{y \in\mathcal{A}^p} \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}
\nonumber
\\
&\overset{(b)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}\Big)f_{Y}(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z_0\rVert^{-\alpha}}\Big)f_{Y}(z_0-x_0)\dd{z_0}\Big) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\Big(1 - \frac{1}{1+s h^{-\alpha}}\Big)f_H(h|v_0)\dd{h}\Big) \nonumber
\\
&= {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h|v_0)\dd{h}\Big) \nonumber
\\
\mathscr{L}_{I_{\Phi_c} }(s) &\approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where (a) follows from the Rayleigh fading assumption, (b) follows from the PGFL of the PPP $\Phi_c^p$, (c) follows from the change of variables $z_0 = x_0 + y$, (d) follows from converting the Cartesian coordinates to polar coordinates, and the approximation comes from neglecting the correlation of the intra-cluster interfering distances, i.e., the common part $v_0$, as in \cite{clustered_twc}. Hence, Lemma 2 is proven.
\section {Proof of Lemma 3}
First, to prove concavity, we proceed as follows.
\begin{align}
\frac{\partial \mathbb{P}_o}{\partial b_i} &= q_i + q_i\big(\overline{n}(1-b_i)e^{-\overline{n}b_i}-(1-e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0)\nonumber \\
\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2} &= -q_i\big(\overline{n}e^{-\overline{n}b_i} + \overline{n}^2(1-b_i)e^{-\overline{n}b_i} + \overline{n}e^{-\overline{n}b_i}\big)\mathbb{P}(R_1>R_0)
\end{align}
It is clear that the second derivative $\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2}$ is negative, while the mixed partial derivatives vanish. Hence, the Hessian matrix \textbf{H}$_{i,j}$ of $\mathbb{P}_o(p^*,b_i)$ w.r.t. $b_i$ is diagonal with negative diagonal entries, i.e., negative semidefinite, and the function $\mathbb{P}_o(p^*,b_i)$ is concave with respect to $b_i$. Also, the constraints are linear, which implies that the necessary and sufficient conditions for optimality hold. The dual Lagrangian function and the KKT conditions are then employed to solve \textbf{P2}.
The Lagrangian function of \textbf{P2} is given by
\begin{align}
\mathcal{L}(b_i,w_i,\mu_i,v) =& \sum_{i=1}^{N_f} \Big[q_i b_i + q_i(1- b_i)(1-e^{-b_i\overline{n}})\mathbb{P}(R_1>R_0)\Big] \nonumber \\
&+ v(M-\sum_{i=1}^{N_f} b_i) + \sum_{i=1}^{N_f} w_i (b_i-1) - \sum_{i=1}^{N_f} \mu_i b_i
\end{align}
where $v$, $w_i$, and $\mu_i$ are the Lagrange multipliers associated with the equality constraint and the two inequality constraints, respectively. The optimality conditions are then written as
\begin{align}
\label{grad}
\grad_{b_i} \mathcal{L}(b_i^*,w_i^*,\mu_i^*,v^*) = q_i + q_i&\big(\overline{n}(1-b_i)e^{-\overline{n}b_i}-(1-e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0) -v^* + w_i^* -\mu_i^*= 0 \\
&w_i^* \geq 0 \\
&\mu_i^* \leq 0 \\
&w_i^* (b_i^* - 1) =0 \\
&\mu_i^* b_i^* = 0\\
&(M-\sum_{i=1}^{N_f} b_i^*) = 0
\end{align}
\begin{enumerate}
\item $w_i^* > 0$: We have $b_i^* = 1$, $\mu_i^*=0$, and
\begin{align}
&q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)= v^* - w_i^* \nonumber \\
\label{cond1_offload}
&v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)
\end{align}
\item $\mu_i^* < 0$: We have $b_i^* = 0$, and $w_i^*=0$, and
\begin{align}
& q_i + \overline{n}q_i\mathbb{P}(R_1>R_0) = v^* + \mu_i^* \nonumber \\
\label{cond2_offload}
&v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)
\end{align}
\item $0 <b_i^*<1$: We have $w_i^*=\mu_i^*=0$, and
\begin{align}
\label{psii_offload}
v^{*} = q_i + q_i\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*} - (1 - e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0)
\end{align}
\end{enumerate}
By combining (\ref{cond1_offload}), (\ref{cond2_offload}), and (\ref{psii_offload}), with the fact that $\sum_{i=1}^{N_f} b_i^*=M$, Lemma 3 is proven.
\section {Proof of Lemma 6}
Under the assumption of one active \ac{D2D} link within a cluster, there is no intra-cluster interference. Also, the Laplace transform of the inter-cluster interference is similar to that of the PPP \cite{andrews2011tractable} whose density is the same as that of the parent PPP. In fact, this is true according to the displacement theory of the PPP \cite{daley2007introduction}, where each interferer is a point of a PPP that is displaced randomly and independently of all other points. For the sake of completeness, we prove it here. Starting from the third line of the proof of Lemma 1, we get
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &\overset{(a)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \int_{v=0}^{\infty}\mathbb{E}_{u|v}\Big[1 -
e^{-s P_d g_{u} u^{-\alpha}} \Big]v\dd{v}\Bigg), \nonumber \\
&= \text{exp}\Big(-2\pi\lambda_p \mathbb{E}_{g_{u}} \big[\int_{v=0}^{\infty}\int_{u=0}^{\infty}\big(1 - e^{-s P_d g_{u} u^{-\alpha}} \big)f_U(u|v)\dd{u}v\dd{v}\big]\Big) \nonumber \\
\label{prelaplace}
&\overset{(b)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \underbrace{\int_{v=0}^{\infty}v\dd{v} - \int_{v=0}^{\infty}\int_{u=0}^{\infty} e^{-s P_d g_{u} u^{-\alpha}} f_{U}(u|v)\dd{u} v \dd{v}}_{\mathcal{R}(s,\alpha)}\Bigg)
\end{align}
where (a) follows from the PGFL of the parent PPP \cite{andrews2011tractable}, and (b) follows from $\int_{u=0}^{\infty} f_{U}(u|v)\dd{u} =1$. Now, we proceed by calculating the integrands of $\mathcal{R}(s,\alpha)$ as follows.
\begin{align}
\mathcal{R}(s,\alpha)&\overset{(c)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}
\int_{v=0}^{\infty} f_{U}(u|v)v \dd{v}\dd{u}\nonumber \\
&\overset{(d)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}u\dd{u} \nonumber \\
&\overset{(e)}{=} \int_{u=0}^{\infty}(1 - e^{-s P_d g_{u} u^{-\alpha}})u\dd{u} \nonumber \\
&\overset{(f)}{=} \frac{(s P_d g_{u})^{2/\alpha}}{\alpha} \int_{t=0}^{\infty}(1 - e^{-t})t^{-1-\frac{2}{\alpha}}\dd{t} \nonumber \\
\label{laplaceppp1}
&\overset{(g)}{=} \frac{(s P_d)^{2/\alpha}}{2} g_{u}^{2/\alpha} \Gamma(1 - 2/\alpha),
\end{align}
where (c) follows from changing the order of integration, (d) follows from $ \int_{v=0}^{\infty} f_{U}(u|v)v\dd{v} = u$, (e) follows from changing the dummy variable $v$ to $u$, (f) follows from the change of variables $t=s P_d g_{u}u^{-\alpha}$, and (g) follows from solving the integral in (f) by parts. Substituting the obtained value for $\mathcal{R}(s,\alpha)$ into (\ref{prelaplace}), and taking the expectation over the exponential random variable $g_u$, with the fact that $\mathbb{E}_{g_{u}} [g_{u}^{2/\alpha}] = \Gamma(1 + 2/\alpha)$, we get
\begin{align}
\label{laplace_trans}
\mathscr{L}_{I_{\Phi_p^{!}}} (s)&= {\rm exp}\Big(-\pi\lambda_p (sP_d )^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)\Big),
\end{align}
Substituting this expression with the distance \ac{PDF} $f_R(r)$ into the coverage probability equation yields
\begin{align}
\mathrm{P_{c_d}} &=\int_{r=0}^{\infty}
{\rm e}^{-\pi\lambda_p (sP_d)^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}\frac{r}{2\sigma^2}{\rm e}^{\frac{-r^2}{4\sigma^2}} \dd{r} , \nonumber \\
&\overset{(h)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-\pi\lambda_p \theta^{2/\alpha}r^{2} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}{\rm e}^{\frac{-r^2}{4\sigma^2}} \dd{r} , \nonumber \\
&\overset{(i)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-r^2Z(\theta,\sigma,\alpha)} \dd{r} , \nonumber \\
&\overset{}{=} \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)}
\end{align}
where (h) follows from the substitution $s = \frac{\theta r^{\alpha}}{P_d}$, and (i) follows from defining $Z(\theta,\alpha,\sigma) = \pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2}$.
\end{appendices}
\bibliographystyle{IEEEtran}
\section{Introduction}
Caching at mobile devices significantly improves system performance by facilitating \ac{D2D} communications, which enhances the spectrum efficiency and alleviates the heavy burden on backhaul links \cite{Femtocaching}. Modeling of cache-enabled heterogeneous networks, including \ac{SBS} and mobile devices, follows two main directions in the literature. The first line of work focuses on fundamental throughput scaling results by assuming a simple protocol channel model \cite{Femtocaching,golrezaei2014base,8412262}, known as the protocol model, where two devices can communicate if they are within a certain distance. The second line of work, defined as the physical interference model, considers a more realistic model for the underlying physical layer \cite{andreev2015analyzing,cache_schedule}. In the following, we review some of the works relevant to the second line, focusing mainly on the \ac{EE} and delay analysis of wireless caching networks.
The physical interference model is based on the fundamental \ac{SIR} metric, and therefore, is applicable to any wireless communication system. Modeling devices' locations as a \ac{PPP} is widely employed in the literature, especially in the wireless caching area \cite{andreev2015analyzing,cache_schedule,energy_efficiency,ee_BS,hajri2018energy}. However, a realistic model for \ac{D2D} caching networks requires that a given device typically has multiple proximate devices, any of which can potentially act as a serving device. This deployment is known as clustered device deployment, which can be characterized by cluster processes \cite{haenggi2012stochastic}. Unlike the popular \ac{PPP} approach, the authors in \cite{clustered_twc,clustered_tcom,8070464} developed a stochastic geometry based model to characterize the performance of content placement in the clustered \ac{D2D} network. In \cite{clustered_twc}, the authors discuss two strategies of content placement in a \ac{PCP} deployment: first, where each device randomly chooses its serving device from its local cluster, and second, where each device connects to its $k$-th closest transmitting device in its local cluster. The authors characterize the optimal number of \ac{D2D} transmitters that must be simultaneously activated in each cluster to maximize the area spectral efficiency. The performance of cluster-centric content placement is characterized in \cite{clustered_tcom}, where the content of interest in each cluster is cached closer to the cluster center, such that the collective performance of all the devices in each cluster is optimized. Inspired by the Matern hard-core point process, which captures pairwise interactions between nodes, the authors in \cite{8070464} devised a novel spatially correlated caching strategy called \ac{HCP}, such that the \ac{D2D} devices caching the same content are never closer to each other than the exclusion radius.
Energy efficiency in wireless caching networks is widely studied in the literature \cite{energy_efficiency,ee_BS,hajri2018energy}.
For example, an optimal caching problem is formulated in \cite{energy_efficiency} to minimize the energy consumption of a wireless network. The authors consider a cooperative wireless caching network where relay nodes cooperate with the devices to cache the most popular files in order to minimize energy consumption. In \cite{ee_BS}, the authors investigate how caching at BSs can improve the \ac{EE} of wireless access networks. The condition under which the \ac{EE} can benefit from caching is characterized, and the optimal cache capacity that maximizes the network \ac{EE} is found. It is shown that the \ac{EE} benefit of caching depends on content popularity, backhaul capacity, and interference level.
The authors in \cite{hajri2018energy} exploit the spatial repartitions of devices and the correlation in their content popularity profiles to improve the achievable \ac{EE}. The \ac{EE} optimization problem is decoupled into two related subproblems: the first addresses the issue of content popularity modeling, and the second investigates the impact of exploiting the spatial repartitions of devices. It is shown that the small base station allocation algorithm improves the energy efficiency and hit probability. However, the problem of \ac{EE} for \ac{D2D}-based caching has not yet been addressed in the literature.
Recently, the joint optimization of delay and energy in wireless caching has been studied, see, for instance, \cite{wu2018energy,huang2018energy,jiang2018energy,yang2018cache}. The authors in \cite{wu2018energy} jointly optimize the delay and energy in a cache-enabled dense small cell network. The authors formulate the energy-delay optimization problem as a mixed integer programming problem, where file placement, device association to the small cells, and power control are jointly considered. To model the tradeoff between energy consumption and end-to-end file delivery delay, a utility function linearly combining these two metrics is used as the objective function of the optimization problem. An efficient algorithm is proposed to approach the optimal association and power solution, which could achieve the optimal tradeoff between energy consumption and end-to-end file delivery delay. In \cite{huang2018energy}, the authors showed that, with caching, the energy consumption can be reduced by extending the transmission time; however, caching may incur wasted energy if the device never needs the cached content. Based on the random content request delay, the authors study the maximization of \ac{EE} subject to a hard delay constraint in an additive white Gaussian noise channel. It is shown that the \ac{EE} of a system with caching can be significantly improved with increasing content request probability and target transmission rate compared with the traditional on-demand scheme, in which the \ac{BS} transmits a content file only after it is requested by the user. However, the problem of energy consumption and joint communication and caching for clustered \ac{D2D} networks has not yet been addressed in the literature.
In this paper, we conduct a comprehensive performance analysis and optimization of joint communication and caching for a clustered \ac{D2D} network, where the devices have unused memory to cache some files, following a random probabilistic caching scheme. Our network model effectively characterizes the stochastic nature of channel fading and the clustered geographic locations of devices. Furthermore, this paper emphasizes the need for considering the traffic dynamics and rate of requests when studying the delay incurred to deliver requests to devices. To the best of our knowledge, our work is the first in the literature that conducts a comprehensive spatial analysis of a doubly \ac{PCP} (also called doubly \ac{PPP} \cite{haenggi2012stochastic}) with the devices adopting a slotted-ALOHA random access technique to access a shared channel. The key advantage of adopting the slotted-ALOHA access protocol is that it is a simple yet fundamental medium access control (MAC) protocol, wherein no central controller exists to schedule the users' transmissions.
We also incorporate the spatio-temporal analysis in wireless caching networks by combining tools from stochastic geometry and queuing theory in order to analyze and minimize the average delay (see, for instance, \cite{zhong2015stability,stamatiou2010random,zhong2017heterogeneous,7917340,kim2017ultra}).
The main contributions of this paper are summarized below.
\begin{itemize}
\item We consider a Thomas cluster process (TCP) where the devices are spatially distributed as groups in clusters. The cluster centers are drawn from a parent PPP, and the cluster members are normally distributed around the centers, forming a Gaussian PPP. This organization of the parent and offspring PPPs forms the so-called doubly PPP.
\item We conduct the coverage probability analysis where the devices adopt a slotted-ALOHA random access technique. We then jointly optimize the access probability and caching probability to maximize the cluster offloading gain. We obtain the optimal channel access probability, and then a closed-form solution of the optimal caching sub-problem is provided. The energy consumption problem is then formulated and shown to be convex, and the optimal caching probability is derived.
\item By combining tools from stochastic geometry as well as queuing theory,
we minimize the per request weighted average delay by jointly optimizing bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the block coordinate descent (BCD) optimization technique, the joint minimization problem is solved in an iterative manner.
\item We validate our theoretical findings via simulations. Results show a significant improvement in the network performance metrics, namely, the offloading gain, energy consumption, and average delay as compared to other caching schemes proposed earlier in literature.
\end{itemize}
The rest of this paper is organized as follows. Section II and Section III discuss the system model and the offloading gain, respectively. The energy consumption is discussed in Section IV and the delay analysis is conducted in Section V. Numerical results are then presented in Section VI before we conclude the paper in Section VII.
\section{System Model}
\subsection{System Setup}
We model the location of the mobile devices with a \ac{TCP} in which the parent points are drawn from a \ac{PPP} $\Phi_p$ with density $\lambda_p$, and the daughter points are drawn from a Gaussian \ac{PPP} around each parent point. In fact, the \ac{TCP} is considered as a doubly \ac{PCP} where the daughter points are normally scattered with variance $\sigma^2$ around each parent point in $\mathbb{R}^2$ \cite{haenggi2012stochastic}.
The parent points and offspring are referred to as cluster centers and cluster members, respectively. The number of cluster members in each cluster is a Poisson random variable with mean $\overline{n}$. The density function of the location of a cluster member relative to its cluster center is
\begin{equation}
f_Y(y) = \frac{1}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad \quad y \in \mathbb{R}^2
\label{pcp}
\end{equation}
where $\lVert .\rVert$ is the Euclidean norm. The intensity function of a cluster is given by $\lambda_c(y) = \frac{\overline{n}}{2\pi\sigma^2}\textrm{exp}\big(-\frac{\lVert y\rVert^2}{2\sigma^2}\big)$. Therefore, the intensity of the entire process is given by $\lambda = \overline{n}\lambda_p$. We assume that the BSs' distribution follows another \ac{PPP} $\Phi_{bs}$ with density $\lambda_{bs}$, which is independent of $\Phi_p$.
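For intuition, one realization of this process is easy to sample. The following Python sketch (with an illustrative square observation window; the density, cluster size, and scattering values are example inputs, not prescribed by the analysis) draws the parent \ac{PPP} and the Gaussian daughter points:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

lambda_p = 20e-6   # parent density in clusters per m^2 (20 clusters/km^2)
n_bar = 5          # mean number of devices per cluster
sigma = 10.0       # scattering standard deviation (m)
side = 2000.0      # side length of the observation window (m)

# Parent PPP: Poisson number of cluster centers, uniform in the window.
n_parents = rng.poisson(lambda_p * side ** 2)
parents = rng.uniform(0.0, side, size=(n_parents, 2))

# Daughters: Poisson(n_bar) members per cluster, Gaussian around the center.
clusters = [c + sigma * rng.standard_normal((rng.poisson(n_bar), 2))
            for c in parents]
devices = np.vstack(clusters) if clusters else np.empty((0, 2))
print(n_parents, "clusters,", len(devices), "devices")
\end{verbatim}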
\subsection{Content Popularity and Probabilistic Caching Placement}
We assume that each device has a surplus memory of size $M$ designated for caching files.
The total number of files is $N_f> M$ and the set (library) of content indices is denoted as $\mathcal{F} = \{1, 2, \dots , N_f\}$. These files represent the content catalog that all the devices in a cluster may request, indexed in descending order of popularity. The probability that the $i$-th file is requested follows a Zipf distribution given by
\begin{equation}
q_i = \frac{ i^{-\beta} }{\sum_{k=1}^{N_f}k^{-\beta}},
\label{zipf}
\end{equation}
where $\beta$ is a parameter that reflects how skewed the popularity distribution is. For example, if $\beta= 0$, the popularity of the files has a uniform distribution. Increasing $\beta$ increases the disparity among the files' popularity such that lower indexed files have higher popularity. By definition, $\sum_{i=1}^{N_f}q_i = 1$.
We use Zipf's distribution to model the popularity of files per cluster.
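As a quick illustration, a few lines of Python (a minimal sketch; the library size and $\beta$ below are illustrative) compute the request probabilities $q_i$:
\begin{verbatim}
import numpy as np

def zipf_popularity(n_files, beta):
    """Request probabilities q_i proportional to i^(-beta), normalized."""
    ranks = np.arange(1, n_files + 1)
    weights = ranks ** (-float(beta))
    return weights / weights.sum()

q = zipf_popularity(500, beta=1.0)
print(q[:3], q.sum())   # top-ranked probabilities; q sums to 1
\end{verbatim}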
\ac{D2D} communication is enabled within each cluster to deliver popular content. It is assumed that the devices adopt a slotted-ALOHA medium access protocol, where each transmitter, during each time slot, independently and randomly accesses the channel with the same probability $p$. This implies that multiple active \ac{D2D} links might coexist within a cluster. Therefore, $p$ is a design parameter that directly controls \textcolor{blue}{(mainly)} the intra-cluster interference, as described later in the paper.
We adopt a random content placement where each device independently selects a file to cache according to a specific probability function $\textbf{b} = \{b_1, b_2, \dots, b_{N_{f}}\}$, where $b_i$ is the probability that a device caches the $i$-th file, $0 \leq b_i \leq 1$ for all $i \in \{1, \dots, N_f\}$. To avoid duplicate caching of the same content within the memory of the same device, we follow a probabilistic caching approach proposed in \cite{blaszczyszyn2015optimal} and illustrated in Fig. \ref{prob_cache_example}.
\begin{figure
\begin{center}
\includegraphics[width=2.5in]{Figures/ch3/prob_cache_exam}
\caption {The cache memory of size $M = 3$ is equally divided into $3$ blocks of unit size. Starting from content $i=1$ to $i=N_f$, each content sequentially fills these $3$ memory blocks by an amount $b_i$. The amounts (probabilities) $b_i$ eventually fill all $3$ blocks since $\sum_{i=1}^{N_f} b_i = M$ \cite{blaszczyszyn2015optimal}. Then a random number $\in [0,1]$ is generated, and content $i$ is chosen from each block, whose $b_i$ fills the part intersecting with the generated random number. In this way, in the given example, the contents $\{1, 2, 4\}$ are chosen to be cached.}
\label{prob_cache_example}
\end{center}
\end{figure}
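For concreteness, the block-filling construction of Fig. \ref{prob_cache_example} can be sketched in a few lines of Python (the caching vector below is the hypothetical one of the figure; file indices are 0-based):
\begin{verbatim}
import numpy as np

def probabilistic_cache(b, rng):
    """One draw of the block-filling placement sketched above.

    b: caching probabilities with 0 <= b_i <= 1 and sum(b) = M (integer).
    Returns M distinct 0-indexed file indices; file i is selected with
    probability b[i], and b_i <= 1 ensures at most one point per file.
    """
    M = int(round(b.sum()))
    edges = np.concatenate(([0.0], np.cumsum(b)))  # file i spans (edges[i], edges[i+1]]
    points = rng.uniform(0.0, 1.0) + np.arange(M)  # one point per unit block
    return np.searchsorted(edges, points) - 1

rng = np.random.default_rng(1)
b = np.array([1.0, 0.7, 0.6, 0.7])                 # M = 3, as in the figure
print(probabilistic_cache(b, rng))                 # e.g. [0 1 3], i.e., files {1, 2, 4}
\end{verbatim}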
If a device caches the desired file, the device directly retrieves the content. However, if the device does not cache the file, the file can be downloaded from any neighboring device that caches the file (henceforth called catering device) in the same cluster. According to the proposed access model, the probability that a chosen catering device is admitted to access the channel is the access probability $p$. Finally, the device attaches to the nearest \ac{BS} as a last resort to download the content which is not cached entirely within the device's cluster. We assume that the \ac{D2D} communication is operating as out-of-band D2D. \textcolor{blue}{$W_{1}$ and $W_{2}$ denote the bandwidth allocated to the \ac{D2D} and \ac{BS}-to-Device communication, respectively, and the total system bandwidth is denoted as $W=W_{1} + W_{2}$. It is assumed that device requests are served in a random manner, i.e., among the cluster devices, a random device request is chosen to be scheduled and its content is served.}
In the following, we aim at studying and optimizing three important metrics, widely studied in the literature. The first metric is the offloading gain, which is defined as the probability of obtaining the requested file from the local cluster, either from the self-cache or from a neighboring device in the same cluster, with a rate higher than a required threshold $R_0$. The second metric is the energy consumption, which represents the dissipated energy when downloading files either from the BSs or via \ac{D2D} communication. The third is the latency, which accounts for the weighted average delay over all the requests served from the \ac{D2D} and \ac{BS}-to-Device communication.
\section{Maximum Offloading Gain}
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{Figures/ch3/distance1.png}
\caption {Illustration of the representative cluster and one interfering cluster.}
\label{distance}
\end{figure}
Without loss of generality, we conduct the analysis for a cluster whose center is at $x_0\in \Phi_p$ (referred to as the representative cluster), and the device that requests the content (henceforth called the typical device) is located at the origin. We denote the location of the \ac{D2D} transmitter by $y_0$ relative to $x_0$, where $x_0, y_0\in \mathbb{R}^2$. The distance from the typical device (\ac{D2D} receiver of interest) to this \ac{D2D} transmitter is denoted as $r=\lVert x_0+y_0\rVert$, which is a realization of a random variable $R$ whose distribution is described later. This setup is illustrated in Fig. \ref{distance}. It is assumed that a requested file is served from a randomly selected catering device, which is, in turn, admitted to access the channel based on the slotted-ALOHA protocol. The successful offloading probability is then given by
\begin{align}
\label{offloading_gain}
\mathbb{P}_o(p,\textbf{b}) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}})
\underbrace{\int_{r=0}^{\infty}f_R(r) \mathbb{P}(R_{1}(r)>R_0) \dd{r}}_{\mathbb{P}(R_1>R_0)},
\end{align}
where $R_{1}(r)$ is the achievable rate when downloading content from a catering device at a distance $r$ from the typical device with \ac{PDF} $f_R(r)$. The first term on the right-hand side is the probability of requesting a locally cached file (self-cache), whereas the remaining term incorporates the probability that a requested file $i$ is cached by at least one cluster member and is downloadable with a rate greater than $R_0$. More precisely, since the number of devices per cluster has a Poisson distribution, the probability that there are $k$ devices per cluster is equal to $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$. Accordingly, the probability that there are $k$ devices caching content $i$ can be written as $\frac{(b_i\overline{n})^k e^{-b_i\overline{n}}}{k!}$. Hence, the probability that at least one device caches content $i$ is one minus the void probability (i.e., $k=0$), which equals $1 - e^{-b_i\overline{n}}$.
In the following, we first compute the probability $ \mathbb{P}(R_{1}(r)>R_0)$ given the distance $r$ between the typical device and a catering device, \textcolor{blue}{then we conduct averaging over $r$ using the \ac{PDF} $f_R(r)$}. The received power at the typical device from a catering device located at $y_0$ relative to the cluster center is given by
\begin{align}
P &= P_d g_0 \lVert x_0+y_0\rVert^{-\alpha}= P_d g_0 r^{-\alpha}
\label{pwr}
\end{align}
where $P_d$ denotes the \ac{D2D} transmission power, $g_0$ is the complex Gaussian fading channel coefficient between a catering device located at $y_0$ relative to its cluster center at $x_0$ and the typical device, and $\alpha > 2$ is the path loss exponent. Under the above assumption, the typical device sees two types of interference, namely, the intra- and inter-cluster interference. We first describe the inter-cluster interference, then the intra-cluster interference is characterized. The set of active devices in any remote cluster is denoted as $\mathcal{B}^p$, where $p$ refers to the access probability. Similarly, the set of active devices in the local cluster is denoted as $\mathcal{A}^p$. Similar to (\ref{pwr}), the interference from the simultaneously active \ac{D2D} transmitters outside the representative cluster, seen at the typical device, is given by
\begin{align}
I_{\Phi_p^{!}} &= \sum_{x \in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{y_x} \lVert x+y\rVert^{-\alpha}\\
& = \sum_{x\in \Phi_p^{!}} \sum_{y\in \mathcal{B}^p} P_d g_{u} u^{-\alpha}
\end{align}
where $\Phi_p^{!}=\Phi_p \setminus x_0$ for ease of notation, $y$ is the marginal distance between a potential interfering device and its cluster center at $x \in \Phi_p$, $u = \lVert x+y\rVert$ is a realization of a random variable $U$ modeling the inter-cluster interfering distance (shown in Fig. \ref{distance}), $g_{y_x} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading, and $g_{u} = g_{y_x}$ for ease of notation. The intra-cluster interference is then given by
\begin{align}
I_{\Phi_c} &= \sum_{y\in \mathcal{A}^p} P_d g_{y_{x_0}} \lVert x_0+y\rVert^{-\alpha}\\
& = \sum_{y\in \mathcal{A}^p} P_d g_{h} h^{-\alpha}
\end{align}
where $y$ is the marginal distance between the intra-cluster interfering devices and the cluster center at $x_0 \in \Phi_p$, $h = \lVert x_0+y\rVert$ is a realization of a random variable $H$ modeling the intra-cluster interfering distance (shown in Fig. \ref{distance}), $g_{y_{x_0}} \sim $ exp(1) are \ac{i.i.d.} exponential random variables modeling Rayleigh fading, and $g_{h} = g_{y_{x_0}}$ for ease of notation. From the thinning theorem \cite{haenggi2012stochastic}, the set of active transmitters following the slotted-ALOHA medium access forms Gaussian \ac{PPP} $\Phi_{cp}$ whose intensity is given by
\begin{align}
\lambda_{cp} = p\lambda_{c}(y) = p\overline{n}f_Y(y) =\frac{p\overline{n}}{2\pi\sigma^2}\textrm{exp}\Big(-\frac{\lVert y\rVert^2}{2\sigma^2}\Big), \quad y \in \mathbb{R}^2
\end{align}
Assuming that the thermal noise is neglected as compared to the aggregate interference, the $\mathrm{SIR}$ at the typical device is written as
\begin{equation}
\gamma_{r}=\frac{P}{I_{\Phi_p^{!}} + I_{\Phi_c}} = \frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}}
\end{equation}
\textcolor{black}{A fixed rate transmission model is adopted in our study}, where each transmitter (\ac{D2D} or BS) transmits at the fixed rate of log$_2[1+\theta]$ \SI{}{bits/sec/Hz}, where $\theta$ is a design parameter. Since the rate is fixed, the transmission is subject to outage due to fading and interference fluctuations. Consequently, the de facto average transmission rate (i.e., average throughput) is given by
\begin{equation}
\label{rate_eqn}
R = W\textrm{ log$_{2}$}[1+ \theta]\mathrm{P_c},
\end{equation}
where $W$ is the bandwidth, $\theta$ is the pre-determined threshold for successful reception, $\mathrm{P_c} =\mathbb{E}(\textbf{1}\{\mathrm{SIR}>\theta\})$ is the coverage probability, and $\textbf{1}\{.\}$ is the indicator function. \textcolor{blue}{When served by a catering device $r$ apart from the origin, the achievable rate of the typical device under slotted-ALOHA medium access technique can be deduced from \cite[Equation (10)]{jing2012achievable} as}
\begin{equation}
\label{rate_eqn1}
R_{1}(r) = pW_{1} {\rm log}_2 \big(1 + \theta \big) \textbf{1}\{ \gamma_{r} > \theta\}
\end{equation}
Then, the probability $ \mathbb{P}(R_{1}(r)>R_0)$ is derived as follows.
\begin{align}
\mathbb{P}(R_{1}(r)>R_0)&= \mathbb{P} \big(pW_{1} {\rm log}_2 (1 + \theta)\textbf{1}\{ \gamma_{r} > \theta\}
>R_0\big) \nonumber \\
&=\mathbb{P} \big(\textbf{1}\{ \gamma_{r} > \theta\}
>\frac{R_0}{pW_{1} {\rm log}_2 (1 + \theta )}\big) \nonumber \\
&\overset{(a)}{=} \mathbb{E}\big(\textbf{1}\{ \gamma_{r} > \theta\}\big) = \mathbb{P}\big(\frac{P_d g_0 r^{-\alpha}}{I_{\Phi_p^{!}} + I_{\Phi_c}} > \theta\big)
\nonumber \\
&= \mathbb{P}\big( g_0 > \frac{\theta r^{\alpha}}{P_d} [I_{\Phi_p^{!}} + I_{\Phi_c}] \big)
\nonumber \\
&\overset{(b)}{=}\mathbb{E}_{I_{\Phi_p^{!}},I_{\Phi_c}}\Big[\text{exp}\big(\frac{-\theta r^{\alpha}}{P_d}{[I_{\Phi_p^{!}} + I_{\Phi_c}] }\big)\Big]
\nonumber \\
\label{prob-R1-g-R0}
&\overset{(c)}{=} \mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)
\end{align}
\textcolor{blue}{where (a) follows from the assumption that $R_0 < pW_{1} {\rm log}_2 \big(1 + \theta \big)$ always holds, otherwise, it is infeasible to have $\mathbb{P}(R_{1}(r)>R_0)$ greater than zero}. (b) follows from the fact that $g_0$ follows an exponential distribution, and (c) follows from the independence of the intra- and inter-cluster interference, with the expectation factorizing into the product of their Laplace transforms.
In what follows, we first derive the Laplace transform of interference to get $\mathbb{P}(R_{1}(r)>R_0)$. Then, we formulate the offloading gain maximization problem.
\begin{lemma}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ is given by
\begin{align}
\label{LT_inter}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$, $f_U(u|v) = \mathrm{Rice} (u| v, \sigma)$ represents Rice's \ac{PDF} of parameter $\sigma$, and $v=\lVert x\rVert$.
\end{lemma}
\begin{proof}
Please see Appendix A.
\end{proof}
\begin{lemma}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$ evaluated at $s=\frac{\theta r^{\alpha}}{P_d}$ can be approximated by
\begin{align}
\label{LT_intra}
\mathscr{L}_{I_{\Phi_c} }(s) \approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
where $f_H(h) =\mathrm{Rayleigh}(h,\sqrt{2}\sigma)$ represents Rayleigh's \ac{PDF} with a scale parameter $\sqrt{2}\sigma$.
\end{lemma}
\begin{proof}
Please see Appendix B.
\end{proof}
For the serving distance distribution $f_R(r)$: since both the typical device and a potential catering device have their locations drawn from a normal distribution with variance $\sigma^2$ around the cluster center, by definition, the serving distance has a Rayleigh distribution with scale parameter $\sqrt{2}\sigma$, given by
\begin{align}
\label{rayleigh}
f_R(r)= \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}, \quad r>0
\end{align}
From (\ref{LT_inter}), (\ref{LT_intra}), and (\ref{rayleigh}), the offloading gain in (\ref{offloading_gain}) is written as
\begin{align}
\label{offloading_gain_1}
\mathbb{P}_o(p,\textbf{b}) &= \sum_{i=1}^{N_f} q_i b_i + q_i(1 - b_i)(1 - e^{-b_i\overline{n}}) \underbrace{\int_{r=0}^{\infty}
\frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big)\dd{r}}_{\mathbb{P}(R_1>R_0)}.
\end{align}
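Before proceeding, we note that $\mathbb{P}(R_1>R_0)$ admits a direct numerical evaluation. The following Python sketch (coarse quadrature grids and illustrative parameter values; here $P_d$ is folded into $s$, i.e., $s=\theta r^{\alpha}$, under the assumption that all \ac{D2D} links transmit with the same power) combines Lemmas 1 and 2 with the Rayleigh serving-distance \ac{PDF}:
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid
from scipy.stats import rayleigh
from scipy.special import i0e

lambda_p, n_bar, sigma = 20e-6, 5, 10.0   # clusters/m^2, devices, meters
alpha, theta, p = 4.0, 1.0, 0.3           # path loss, SIR threshold (0 dB), access prob.

def rice_pdf(u, v, sig):
    # Numerically stable Rice pdf via the scaled Bessel function i0e
    return (u / sig**2) * i0e(u * v / sig**2) * np.exp(-(u - v)**2 / (2 * sig**2))

h = np.linspace(1e-3, 20 * sigma, 2000)            # intra-cluster distances
f_h = rayleigh.pdf(h, scale=np.sqrt(2) * sigma)
v = np.linspace(1e-3, 2000.0, 300)[:, None]        # parent distances |x|
u = np.linspace(1e-3, 2300.0, 1200)[None, :]       # inter-cluster distances
f_u = rice_pdf(u, v, sigma)

r = np.linspace(1e-3, 8 * sigma, 50)               # serving distance grid
f_r = rayleigh.pdf(r, scale=np.sqrt(2) * sigma)
vals = np.empty_like(r)
for i, ri in enumerate(r):
    s = theta * ri**alpha                          # P_d cancels (same Tx power)
    L_c = np.exp(-p * n_bar * trapezoid(s / (s + h**alpha) * f_h, h))
    phi = trapezoid(s / (s + u**alpha) * f_u, u[0], axis=1)
    L_p = np.exp(-2 * np.pi * lambda_p *
                 trapezoid((1 - np.exp(-p * n_bar * phi)) * v[:, 0], v[:, 0]))
    vals[i] = f_r[i] * L_c * L_p
print("P(R_1 > R_0) approx", trapezoid(vals, r))
\end{verbatim}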
Hence, the offloading gain maximization problem can be formulated as
\begin{align}
\label{optimize_eqn_p}
\textbf{P1:} \quad &\underset{p,\textbf{b}}{\text{max}} \quad \mathbb{P}_o(p,\textbf{b}) \\
\label{const110}
&\textrm{s.t.}\quad \sum_{i=1}^{N_f} b_i = M, \\
\label{const111}
& b_i \in [ 0, 1], \\
\label{const112}
& p \in [ 0, 1],
\end{align}
where (\ref{const110}) is the device cache size constraint, which is consistent with the illustration of the example in Fig. \ref{prob_cache_example}. \textcolor{black}{On the one hand, from the assumption that the fixed transmission rate $pW_{1} {\rm log}_2 \big(1 + \theta \big)$ is larger than the required threshold $R_0$, we have the condition $p>\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}$ on the access probability. On the other hand, from (\ref{prob-R1-g-R0}), with further increase of the access probability $p$, the intra- and inter-cluster interference powers increase, and the probability $\mathbb{P}(R_{1}(r)>R_0)$ decreases accordingly. Intuitively, therefore, the optimal access probability for the offloading gain maximization is chosen just above this threshold, i.e., $p^* = \frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)} + \epsilon$, where $\epsilon \to 0$. However, increasing the access probability $p$ further above $p^*$ may lead to a higher \ac{D2D} average achievable rate $R_1$, as elaborated in the next section.} The obtained $p^*$ is now used to solve for the caching probability $\textbf{b}$ in the optimization problem below. Since $p$ and $\textbf{b}$ are separable in the structure of \textbf{P1}, it is possible to solve for $p^*$ first and then substitute it to get $\textbf{b}^*$.
\begin{align}
\label{optimize_eqn_b_i}
\textbf{P2:} \quad &\underset{\textbf{b}}{\text{max}} \quad \mathbb{P}_o(p^*,\textbf{b}) \\
\label{const11}
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
The optimal caching probability is formulated in the next lemma.
\begin{lemma}
$\mathbb{P}_o(p^*,\textbf{b})$ is a concave function w.r.t. $\textbf{b}$ and the optimal caching probability $\textbf{b}^{*}$ that maximizes the offloading gain is given by
\[
b_{i}^{*}=\left\{
\begin{array}{ll}
1 \quad\quad\quad , v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)\\
0 \quad\quad\quad, v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)\\
\psi(v^{*}) \quad, {\rm otherwise}
\end{array}
\right.
\]
where $\psi(v^{*})$ is the solution of (\ref{psii_offload}) in Appendix C, with $v^{*}$ chosen such that $\sum_{i=1}^{N_f} b_i^*=M$.
\end{lemma}
\begin{proof}
Please see Appendix C.
\end{proof}
\section{Energy Consumption}
In this section, we formulate the energy consumption minimization problem for the clustered \ac{D2D} caching network. In fact, significant energy consumption occurs only when content is served via \ac{D2D} or \ac{BS}-to-Device transmission. We consider the time cost $c_{d_i}$ as the time it takes to download the $i$-th content from a neighboring device in the same cluster. Considering the size ${S}_i$ of the $i$-th ranked content, $c_{d_i}=S_i/R_1 $, where $R_1 $ denotes the average rate of the \ac{D2D} communication. Similarly, we have $c_{b_i} = S_i/R_2 $ when the $i$-th content is served by the \ac{BS} with average rate $R_2 $. The average energy consumption when downloading files by the devices in the representative cluster is given by
\begin{align}
\label{energy_avrg}
E_{av} = \sum_{k=1}^{\infty} E(\textbf{b}|k)\mathbb{P}(n=k)
\end{align}
where $\mathbb{P}(n=k)$ is the probability that there are $k$ devices in the representative cluster, \textcolor{blue}{equal to $\frac{\overline{n}^k e^{-\overline{n}}}{k!}$}, and $E(\textbf{b}|k)$ is the energy consumption conditioning on having $k$ devices in the cluster, written similar to \cite{energy_efficiency} as
\begin{equation}
E(\textbf{b}|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\label{energy}
\end{equation}
where $\mathbb{P}_{j,i}^d$ and $\mathbb{P}_{j,i}^b$ represent the probability of obtaining the $i$-th content by the $j$-th device from the local cluster, i.e., via \ac{D2D} communication, and the BS, respectively. $P_b$ denotes the \ac{BS} transmission power. Given that there are $k$ devices in the cluster, it is obvious that $\mathbb{P}_{j,i}^b=(1-b_i)^{k}$, and $\mathbb{P}_{j,i}^d=(1 - b_i)\big(1-(1-b_i)^{k-1}\big)$.
The average rates $R_1$ and $R_2$ are now computed to get a closed-form expression for $E(\textbf{b}|k)$.
From equation (\ref{rate_eqn}), we need to obtain the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ and \ac{BS}-to-Device coverage probability $\mathrm{P_{c_b}}$ to calculate $R_1$ and $R_2$, respectively. Given the number of devices $k$ in the representative cluster, the Laplace transform of the inter-cluster interference \textcolor{blue}{is as obtained in (\ref{LT_inter})}. However, the intra-cluster interfering devices no longer represent a Gaussian \ac{PPP} since the number of devices is conditionally fixed, i.e., not a Poisson random number as before. To facilitate the analysis, for every realization $k$, we assume that the intra-cluster interfering devices form a Gaussian \ac{PPP} with intensity function given by $pkf_Y(y)$. Such an assumption is mandatory for analytical tractability.
From Lemma 2, the intra-cluster Laplace transform conditioning on $k$ can be approximated as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|k) &\approx {\rm exp}\Big(-pk \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\nonumber
\end{align}
and the conditional \ac{D2D} coverage probability is given by
\begin{align}
\label{p_b_d2d}
\mathrm{P_{c_d}} =
\int_{r=0}^{\infty} \frac{r}{2 \sigma^2} {\rm e}^{\frac{-r^2}{4 \sigma^2}}
\mathscr{L}_{I_{\Phi_p^{!}}}\big(s=\frac{\theta r^{\alpha}}{P_d}\big) \mathscr{L}_{I_{\Phi_c}} \big(s=\frac{\theta r^{\alpha}}{P_d}\big|k\big)\dd{r}
\end{align}
%
\textcolor{blue}{With the adopted slotted-ALOHA scheme, the access probability $p$ that minimizes $E(\textbf{b}|k)$ is computed over the interval $[0,1]$ by maximizing the \ac{D2D} achievable rate $R_1$ in (\ref{rate_eqn1}), provided that the condition $p>\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}$ holds so that the probability $\mathbb{P}(R_{1}>R_0)$ remains greater than zero.
As an illustrative example, in Fig.~\ref{prob_r_geq_r0_vs_p}, we plot the \ac{D2D} average achievable rate $R_1$ against the channel access probability $p$.
As evident from the plot, there is a certain access probability below which the rate $R_1$ tends to increase as the channel access probability grows, and beyond which the rate $R_1$ decreases monotonically due to the effect of more interferers accessing the channel. In such a case, although we observe that increasing $p$ above $\frac{R_0}{W_{1} {\rm log}_2 \big(1 + \theta \big)}=0.1$ improves the average achievable rate $R_1$, it comes at the price of a decreased $\mathbb{P}(R_{1}>R_0)$.}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/prob_r_geq_r0_vs_p2.eps}
\caption {\textcolor{blue}{The \ac{D2D} average achievable rate $R_1$ versus the access probability $p$ ($\lambda_p = 20 \text{ clusters}/\SI{}{km}^2$, $\overline{n}=12$, $\sigma=\SI{30}{m}$, $\theta=\SI{0}{dB}$, $R_0/W_1=0.1\SI{}{bits/sec/Hz}$).}}
\label{prob_r_geq_r0_vs_p}
\end{center}
\end{figure}
Analogously, under the \ac{PPP} $\Phi_{bs}$, and based on the nearest \ac{BS} association principle, it is shown in \cite{andrews2011tractable} that the \ac{BS} coverage probability can be expressed as
\begin{equation}
\mathrm{P_{c_b}} =\frac{1}{{}_2 F_1(1,-\delta;1-\delta;-\theta)},
\label{p_b_bs}
\end{equation}
where ${}_2 F_1(\cdot)$ is the Gaussian hypergeometric function and $\delta = 2/\alpha$. Given the coverage probabilities $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ in (\ref{p_b_d2d}) and (\ref{p_b_bs}), respectively, $R_1$ and $R_2 $ can be calculated from (\ref{rate_eqn}), and hence $E(\textbf{b}|k)$ is expressed in closed form.
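As a quick sanity check (assuming scipy is available), (\ref{p_b_bs}) evaluates in one line:
\begin{verbatim}
from scipy.special import hyp2f1

alpha, theta = 4.0, 1.0                      # theta = 0 dB
delta = 2.0 / alpha
P_cb = 1.0 / hyp2f1(1.0, -delta, 1.0 - delta, -theta)
print(P_cb)                                  # approx 0.5601 for these values
\end{verbatim}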
\subsection{Energy Consumption Minimization}
The energy minimization problem can be formulated as
\begin{align}
\label{optimize_eqn1}
&\textbf{P3:} \quad\underset{\textbf{b}}{\text{min}} \quad E(\textbf{b}|k) = \sum_{j=1}^{k} \sum_{i=1}^{N_f}\big[\mathbb{P}_{j,i}^d q_i P_d c_{d_i} + \mathbb{P}_{j,i}^b q_i P_b c_{b_i}\big]
\\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber
\end{align}
In the next lemma, we prove the convexity condition for $E(\textbf{b}|k)$.
\begin{lemma}
\label{convex_E}
The energy consumption $E(\textbf{b}|k)$ is convex if $\frac{P_b}{R_2}>\frac{P_d}{R_1}$.
\end{lemma}
\begin{proof}
\textcolor{blue}{We proceed by deriving the Hessian matrix of $E(\textbf{b}|k)$. The Hessian matrix of $E(\textbf{b}|k)$ w.r.t. the caching variables is \textbf{H}$_{i,j} = \frac{\partial^2 E(\textbf{b}|k)}{\partial b_i \partial b_j}$, $\forall i,j \in \mathcal{F}$. \textbf{H} is a diagonal matrix whose $i$-th diagonal element is given by $k(k-1) S_i\Big(\frac{P_b}{R_2}-\frac{P_d}{R_1}\Big)q_i(1 - b_i)^{k-2}$.}
Since the obtained Hessian matrix is diagonal, it is positive semidefinite (and hence $E(\textbf{b}|k)$ is convex) if all the diagonal entries are nonnegative, i.e., when
$\frac{P_b}{R_2}>\frac{P_d}{R_1}$. In practice, it is reasonable to assume that $P_b \gg P_d$; as reported in \cite{ericsson}, the \ac{BS} transmission power is 100 fold the \ac{D2D} power.
\end{proof}
As a result of Lemma 4, the optimal caching probability can be computed to minimize $E(\textbf{b}|k)$.
\begin{lemma}
The optimal caching probability $\textbf{b}^{*}$ for the energy minimization problem \textbf{P3} is given by,
\begin{equation}
b_i^* = \Bigg[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2}\big)} \Big)^{\frac{1}{k-1}}\Bigg]^{+}
\label{energy11}
\end{equation}
where $v^{*}$ satisfies the maximum cache constraint $\sum_{i=1}^{N_f} \Big[ 1 - \Big(\frac{v^* + k^2q_iS_i\frac{P_d}{R_1 }}{kq_iS_i\big(\frac{P_d}{R_1 }-\frac{P_b}{R_2 }\big)} \Big)^{\frac{1}{k-1}}\Big]^{+}=M$, and $[x]^+ =$ max$(x,0)$.
\end{lemma}
\begin{proof}
The proof proceeds in a similar manner to Lemma 3 and is omitted.
\end{proof}
\begin{proposition} {\rm \textcolor{blue}{By inspecting (\ref{energy11}), we can demonstrate the effects of content size and popularity on the optimal caching probability. $S_i$ appears in both the numerator and the denominator
of the second term in (\ref{energy11}); however, its effect on the numerator is more significant due to the larger multiplier. The same property holds for $q_i$. With the increase of $S_i$ or the decrease of $q_i$, the magnitude of the second term in (\ref{energy11}) increases, and correspondingly, $b_i^*$ decreases. That is, a content with larger size or lower popularity has a smaller probability of being cached.}}
\end{proposition}
By substituting $b_i^*$ into (\ref{energy_avrg}), the average energy consumption per cluster is obtained. In the rest of the paper, we study and minimize the weighted average delay per request for the proposed system.
\section{Delay Analysis}
In this section, the delay analysis and minimization are discussed. A joint stochastic geometry and queueing theory model is exploited to study this problem. The delay analysis incorporates the study of a system of spatially interacting queues. To simplify the mathematical analysis, we further consider that only one \ac{D2D} link can be active within a cluster of $k$ devices, where $k$ is fixed. As shown later, such an assumption facilitates the analysis by yielding simple expressions. We begin by deriving the \ac{D2D} coverage probability under the above assumption, which is used later in this section.
\begin{lemma}
\label{coverage_lemma}
The \ac{D2D} coverage probability of the proposed clustered model with one active \ac{D2D} link within a cluster is given by
\begin{align}
\label{v_0}
\mathrm{P_{c_d}} = \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)} ,
\end{align}
where $Z(\theta,\alpha,\sigma) = (\pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2})$.
\end{lemma}
\begin{proof}
The result can be proved by using the displacement theorem of the \ac{PPP} \cite{daley2007introduction}, and then proceeding in a similar manner to Lemmas 1 and 2. The proof is presented in Appendix D for completeness.
\end{proof}
In the following, we firstly describe the traffic model of the network, and then we formulate the delay minimization problem.
\begin{figure
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/delay_queue2}
\caption {\textcolor{black}{The traffic model of request arrivals and departures in a given cluster. $Q_1$ and $Q_2$ are M/G/1 queues modeling requests served by \ac{D2D} and \ac{BS}-to-Device communication, respectively.}}
\label{delay_queue}
\end{center}
\end{figure}
\subsection{Traffic Model}
We assume that the aggregate request arrival process from the devices in each cluster follows a Poisson arrival process with parameter $\zeta_{tot}$ (requests per time slot). As shown in Fig.~\ref{delay_queue}, the incoming requests are further divided according to where they are served from. $\zeta_{1}$ represents the arrival rate of requests served via the \ac{D2D} communication, whereas $\zeta_{2}$ is the arrival rate for those served from the BSs. $\zeta_{3} = \zeta_{tot} - \zeta_{1} - \zeta_{2}$ denotes the arrival rate of requests served via the self-cache with zero delay. By the splitting property of a Poisson process, $\zeta_{1}$ and $\zeta_{2}$ are also Poisson arrival processes. Without loss of generality, we assume that the file size has a general distribution $G$ whose mean is denoted as $\overline{S}$ \SI{}{MBytes}. Hence, an M/G/1 queuing model is adopted whereby two non-interacting queues, $Q_1$ and $Q_2$, model the traffic in each cluster served via the \ac{D2D} and \ac{BS}-to-Device communication, respectively. Although $Q_1$ and $Q_2$ are non-interacting as the \ac{D2D} communication is assumed to be out-of-band, these two queues are spatially interacting with similar queues in other clusters. To recap, $Q_1$ and $Q_2$ are two M/G/1 queues with arrival rates $\zeta_{1}$ and $\zeta_{2}$, and service rates $\mu_1$ and $\mu_2$, respectively.
\subsection{Queue Dynamics}
It is worth highlighting that the two queues $Q_i$, $i \in \{1,2\}$, accumulate requests for files demanded by the cluster members, not the files themselves. First-in first-out (FIFO) scheduling is assumed, where a request that arrives first is scheduled first, either via \ac{D2D} communication if the content is cached among the devices or via \ac{BS} communication otherwise. The result of FIFO scheduling relies only on the time when the request arrives at the queue and is independent of the particular device that issues the request. Given the parameter of the Poisson arrival process $\zeta_{tot}$, the arrival rates at the two queues are expressed respectively as
\begin{align}
\zeta_{1} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big), \\
\zeta_{2} &= \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}
\end{align}
The network operation is depicted in Fig. \ref{delay_queue}, and described in detail below.
\begin{enumerate}
\item Given the memoryless property of the arrival process (Poisson arrival) along with the assumption that the service process is independent of the arrival process,
the number of requests in any queue at a future time only depends upon the current number in the system (at time $t$) and the arrivals or departures that occur within the interval $e$.
\begin{align}
Q_{i}(t+e) = Q_{i}(t) + \Lambda_{i}(e) - M_{i}(e)
\end{align}
where $\Lambda_{i}(e)$ is the number of arrivals in the time interval $(t,t+e)$, whose mean is $\zeta_i$ \SI{}{sec}$^{-1}$, and $M_{i}(e)$ is the number of departures in the time interval $(t,t+e)$, whose mean is $\mu_i = \frac{\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})W_i{\rm log}_2(1+\theta)}{\overline{S}}$ \SI{}{sec}$^{-1}$. It is worth highlighting that, unlike the spatial-only model studied in the previous sections, the term $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ is dependent on the traffic dynamics \textcolor{black}{since a request being served in a given cluster is interfered only from other clusters that also have requests to serve}. What is more noteworthy is that the mean service time $\tau_i = \frac{1}{\mu_i}$ follows the same distribution as the file size. These aspects will be revisited later in this section.
\item $\Lambda_{i}(e)$ is dependent only on $e$ because the arrival process is Poisson. $M_{i}(e)$ is $0$ if the service time of the file being served $\epsilon_1 >e$, $M_{i}(e)$ is $1$ if $\epsilon_1 <e$ and $\epsilon_1 + \epsilon_2>e$, and so on. As the service times $\epsilon_1, \epsilon_2, \dots, \epsilon_n$ are independent, neither $\Lambda_{i}(e)$ nor $M_{i}(e)$ depends on what happened prior to $t$. Thus, $Q_{i}(t+e)$ only depends upon $Q_{i}(t)$ and not the past history. Hence it is a \ac{CTMC} which obeys the stability conditions in \cite{szpankowski1994stability}.
\end{enumerate}
The following proposition provides the sufficient conditions for the stability of the buffers in the sense defined in \cite{szpankowski1994stability}, i.e., $\{Q_{i}\}$ has a limiting distribution for $t \rightarrow \infty$.
\begin{proposition} {\rm The \ac{D2D} and \ac{BS}-to-Device traffic modeling queues are stable, respectively, if and only if}
\begin{align}
\label{stable1}
\zeta_{1} < \mu_1 &= \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}} \\
\label{stable2}
\zeta_{2} < \mu_2 & =\frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}
\end{align}
\end{proposition}
\begin{proof}
We show sufficiency by proving that (\ref{stable1}) and (\ref{stable2}) guarantee stability in a dominant network, where all queues that have empty buffers make dummy transmissions. The dominant network is a fictitious system that is identical to the original system, except that terminals may choose to transmit even when their respective buffers are empty, in which case they simply transmit a dummy packet. If both systems are started from the same initial state and fed with the same arrivals, then the queues in the fictitious dominant system can never be shorter than the queues in the original system.
\textcolor{blue}{Similar to the spatial-only network, in the dominant system, the typical receiver is seeing an interference from all other clusters whether they have requests to serve or not (dummy transmission).} This dominant system approach yields $\mathbb{E}(\textbf{1} \{\mathrm{SIR}>\theta\})$ equal to $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ for the \ac{D2D} and \ac{BS}-to-Device communication, respectively. Also, the obtained delay is an upper bound for the actual delay of the system. The necessity of (\ref{stable1}) and (\ref{stable2}) is shown as follows: If $\zeta_i>\mu_i$, then, by Loynes' theorem \cite{loynes1962stability}, it follows that lim$_{t\rightarrow \infty}Q_i(t)=\infty$ (a.s.) for all queues in the dominant network.
\end{proof}
Next, we conduct the analysis for the dominant system whose parameters are as follows. The content size has an exponential distribution of mean $\overline{S}$ \SI{}{MBytes}. The service times also obey an exponential distribution with means $\tau_1 = \frac{\overline{S}}{R_1}$ \SI{}{seconds} and $\tau_2 = \frac{\overline{S}}{R_2}$ \SI{}{seconds}. The rates $R_1$ and $R_2$ are calculated from (\ref{rate_eqn}) where $\mathrm{P_{c_d}}$ and $\mathrm{P_{c_b}}$ are from (\ref{v_0}) and (\ref{p_b_bs}), respectively. Accordingly, $Q_1$ and $Q_2$ are two continuous time independent (non-interacting) M/M/1 queues with service rates $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$ and $\mu_2 = \frac{\mathrm{P_{c_b}} W_2{\rm log}_2(1+\theta)}{\overline{S}}$ \SI{}{sec}$^{-1}$, respectively. \begin{proposition} {\rm The mean queue length $L_i$ of the $i$-th queue is given by}
\begin{align}
\label{queue_len}
L_i &= \rho_i + \frac{\rho_i^2}{1 - \rho_i},
\end{align}
\end{proposition}
\begin{proof}
We can easily calculate $L_i$ by observing that $Q_i$ are continuous time M/M/1 queues with arrival rates $\zeta_i$, service rates $\mu_i$, and traffic intensities $\rho_i = \frac{\zeta_i}{\mu_i}$. Then, by applying the Pollaczek-Khinchine formula \cite{Kleinrock}, $L_i$ is directly obtained.
\end{proof}
The average delay per request for each queue is calculated from
\begin{align}
D_1 &= \frac{L_1}{\zeta_1}= \frac{1}{\mu_1 - \zeta_1} = \frac{1}{W_1\mathcal{O}_{1} - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} \\
D_2 &= \frac{L_2}{\zeta_2}=\frac{1}{\mu_2 - \zeta_2} = \frac{1}{W_2\mathcal{O}_{2} - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
where $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, $\mathcal{O}_{2}= \frac{\mathrm{P_{c_b}} {\rm log}_2(1+\theta)}{\overline{S}}$ for notational simplicity. The weighted average delay $D$ is then expressed as
\begin{align}
D&= \frac{\zeta_{1}D_1 + \zeta_{2}D_2}{\zeta_{tot}} \nonumber \\
&= \frac{\sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)}{ \mathcal{O}_{1}W_1 - \zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big)} + \frac{\sum_{i=1}^{N_f}q_i (1-b_i)^{k}}{ \mathcal{O}_{2}W_2 - \zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k}}
\end{align}
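For later reuse, the weighted average delay above is straightforward to evaluate numerically; a small Python helper (a sketch that treats a violated stability condition as infinite delay) is:
\begin{verbatim}
import numpy as np

def weighted_delay(b, W1, q, k, zeta_tot, O1, O2, W):
    z1 = zeta_tot * np.sum(q * ((1 - b) - (1 - b) ** k))  # D2D arrivals
    z2 = zeta_tot * np.sum(q * (1 - b) ** k)              # BS arrivals
    mu1, mu2 = O1 * W1, O2 * (W - W1)                     # service rates
    if z1 >= mu1 or z2 >= mu2:
        return np.inf                                     # queues unstable
    return (z1 / (mu1 - z1) + z2 / (mu2 - z2)) / zeta_tot
\end{verbatim}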
One important insight from the delay equation is that the caching probability $\textbf{b}$ controls the arrival rates $\zeta_{1}$ and $\zeta_{2}$ while the bandwidth determines the service rates $\mu_1$ and $\mu_2$. Therefore, it turns out to be of paramount importance to jointly optimize $\textbf{b}$ and $W_1$ to minimize the average delay. One relevant work is carried out in \cite{tamoor2016caching} where the authors investigate the storage-bandwidth tradeoffs for small cell \ac{BS}s that are subject to storage constraints. Subsequently, we formulate the weighted average delay joint caching and bandwidth minimization problem as
\begin{align}
\label{optimize_eqn3}
\textbf{P4:} \quad \quad&\underset{\textbf{b},{\rm W}_1}{\text{min}} \quad D(\textbf{b},W_1) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}) \nonumber \\
& 0 \leq W_1 \leq W, \\
\label{stab1}
&\zeta_{tot} \sum_{i=1}^{N_f}q_i\big((1 - b_i) -(1-b_i)^{k}\big) < \mu_1, \\
\label{stab2}
&\zeta_{tot} \sum_{i=1}^{N_f}q_i (1-b_i)^{k} < \mu_2,
\end{align}
where constraints (\ref{stab1}) and (\ref{stab2}) are the stability conditions for the queues $Q_1$ and $Q_2$, respectively. Although the objective function of \textbf{P4} is convex w.r.t. $W_1$, as derived below, the coupling of the optimization variables $\textbf{b}$ and $W_1$ makes \textbf{P4} a non-convex optimization problem. Therefore, \textbf{P4} cannot be solved directly using standard convex optimization techniques.
\textcolor{black}{By applying the \ac{BCD} optimization technique, \textbf{P4} can be solved in an iterative manner as follows. First, for a given caching probability $\textbf{b}$, we calculate the bandwidth allocation subproblem. Afterwards, the obtained optimal bandwidth is used to update $\textbf{b}$}. The optimal bandwidth for the bandwidth allocation subproblem is given in the next Lemma.
\begin{lemma}
The objective function of \textbf{P4} in (\ref{optimize_eqn3}) is convex w.r.t. $W_1$, and the optimal bandwidth allocation to the \ac{D2D} communication is given by
\begin{align}
\label{optimal-w-1}
W_1^* = \frac{\zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k}) +\varpi \big(\mathcal{O}_{2}W - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)}{\mathcal{O}_{1}+\varpi\mathcal{O}_{2}},
\end{align}
where $\overline{b}_i = 1 - b_i$ and $\varpi=\sqrt{\frac{\mathcal{O}_{1}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})}{\mathcal{O}_{2} \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}}}$.
\end{lemma}
\begin{proof}
$D(\textbf{b},W_1)$ can be written as
\begin{align}
\label{optimize_eqn3_p1}
\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-1} + \sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-1}, \nonumber
\end{align}
The second derivative $\frac{\partial^2 D(\textbf{b},W_1)}{\partial W_1^2}$ is hence given by
\begin{align}
2\mathcal{O}_{1}^2\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big(\mathcal{O}_{1}W_1 - \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})\big)^{-3} + 2\mathcal{O}_{2}^2\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big(\mathcal{O}_{2}W_2 - \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}\big)^{-3}, \nonumber
\end{align}
The stability conditions require that $\mu_1 = \mathcal{O}_{1}W_1 > \zeta_{tot}\sum_{i=1}^{N_f}q_i(\overline{b}_i - \overline{b}_i^{k})$ and $\mu_2 =\mathcal{O}_{2}W_2 > \zeta_{tot}\sum_{i=1}^{N_f}q_i\overline{b}_i^{k}$. Also, $\overline{b}_i \geq \overline{b}_i^{k}$ by definition. Hence, $\frac{\partial^2 D(\textbf{b},W_1)}{\partial W_1^2} > 0$, and the objective function is a convex function of $W_1$. The optimal bandwidth allocation can be obtained from the \ac{KKT} conditions similar to problems \textbf{P2} and \textbf{P3}, with the details omitted for brevity.
\end{proof}
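Equation (\ref{optimal-w-1}) is directly computable; a Python sketch of this bandwidth step (reusing the notation above) is:
\begin{verbatim}
import numpy as np

def optimal_W1(b, q, k, zeta_tot, O1, O2, W):
    A = np.sum(q * ((1 - b) - (1 - b) ** k))   # fraction of load on D2D
    B = np.sum(q * (1 - b) ** k)               # fraction of load on the BS
    varpi = np.sqrt((O1 * A) / (O2 * B))
    return (zeta_tot * A + varpi * (O2 * W - zeta_tot * B)) / (O1 + varpi * O2)
\end{verbatim}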
Given $W_1^*$ from the bandwidth allocation subproblem, the caching probability subproblem can be written as
\begin{align}
\textbf{P5:} \quad \quad&\underset{\textbf{b}}{\text{min}} \quad D(\textbf{b},W_1^*) \\
&\textrm{s.t.}\quad (\ref{const110}), (\ref{const111}), (\ref{stab1}), (\ref{stab2}) \nonumber
\end{align}
The caching probability subproblem \textbf{P5} is a sum of two fractional functions, where the first fraction is in the form of a concave over convex functions while the second fraction is in the form of a convex over concave functions. The first fraction structure, i.e., concave over convex functions, renders solving this problem using fractional programming (FP) very challenging.\footnote{A quadratic transform technique for tackling the multiple-ratio concave-convex FP problem is recently used to solve a minimization of fractional functions that has the form of convex over concave functions, whereby an equivalent problem is solved with the objective function reformulated as a difference between convex minus concave functions \cite{shen2018fractional}.}
\textcolor{blue}{Moreover, the constraint (\ref{const110}) is concave w.r.t. $\textbf{b}$.
Hence, we adopt the interior point method to obtain a locally optimal solution of $\textbf{b}$ given the optimal bandwidth $W_1^*$, which depends on the initial value input to the algorithm \cite{boyd2004convex}. Nonetheless, we can increase the probability of finding a near-optimal solution of problem \textbf{P5} by running the interior point method with multiple random initial values and then picking the solution with the lowest weighted average delay. The explained procedure is repeated until the value of \textbf{P4}'s objective function converges to a pre-specified accuracy.}
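Putting the two subproblems together, the \ac{BCD} loop can be sketched as follows (a minimal sketch reusing the helpers above; scipy's SLSQP serves as a stand-in for the interior point step, and the iterates are assumed to stay inside the stability region where the delay is finite):
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

def bcd(q, k, zeta_tot, O1, O2, W, M, iters=20, tol=1e-6):
    b = np.full(len(q), min(1.0, M / len(q)))      # feasible starting point
    prev = np.inf
    for _ in range(iters):
        W1 = optimal_W1(b, q, k, zeta_tot, O1, O2, W)       # closed form
        res = minimize(lambda x: weighted_delay(x, W1, q, k, zeta_tot,
                                                O1, O2, W),
                       x0=b, bounds=[(0.0, 1.0)] * len(q),
                       constraints={"type": "eq",
                                    "fun": lambda x: x.sum() - M},
                       method="SLSQP")
        b = res.x
        if prev - res.fun < tol:                   # objective converged
            break
        prev = res.fun
    return b, W1, res.fun
\end{verbatim}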
\section{Numerical Results}
\begin{table}[ht]
\caption{Simulation Parameters}
\centering
\begin{tabular}{c c c}
\hline\hline
Description & Parameter & Value \\ [0.5ex]
\hline
\textcolor{black}{System bandwidth} & W & \SI{20}{\mega\hertz} \\
\ac{BS} transmission power & $P_b$ & \SI{43}{\deci\bel\of{m}} \\
\ac{D2D} transmission power & $P_d$ & \SI{23}{\deci\bel\of{m}} \\
Displacement standard deviation & $\sigma$ & \SI{10}{\metre} \\
Popularity index&$\beta$&1\\
Path loss exponent&$\alpha$&4\\
Library size&$N_f$&500 files\\
Cache size per device&$M$&10 files\\
Average number of devices per cluster&$\overline{n}$&5\\
Density of $\Phi_p$&$\lambda_{p}$&20 clusters/\SI{}{km}$^2$ \\
Average content size&$\textcolor{black}{\overline{S}}$&\SI{5}{MBytes} \\
$\mathrm{SIR}$ threshold&$\theta$&\SI{0}{\deci\bel}\\
\textcolor{black}{Total request arrival rate}&$\zeta_{tot}$&\SI{2}{request/sec}\\
\hline
\end{tabular}
\label{ch3:table:sim-parameter}
\end{table}
At first, we validate the developed mathematical model via Monte Carlo simulations. Then we benchmark the proposed caching scheme against conventional caching schemes. Unless otherwise stated, the network parameters are selected as shown in Table \ref{ch3:table:sim-parameter}.
\subsection{Offloading Gain Results}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/prob_r_geq_r0.eps}
\caption {The probability that the \ac{D2D} achievable rate is greater than a threshold $R_0$ versus standard deviation $\sigma$.}
\label{prob_r_geq_r0}
\end{center}
\end{figure}
In this subsection, we present the offloading gain performance for the proposed caching model.
In Fig.~\ref{prob_r_geq_r0}, we verify the accuracy of the analytical results for the probability $\mathbb{P}(R_1>R_0)$. The theoretical and simulated results are plotted together, and they are consistent. We can observe that the probability $\mathbb{P}(R_1>R_0)$ decreases monotonically with the increase of $\sigma$. This is because, as $\sigma$ increases, the serving distance increases while the inter-cluster interfering distance between the out-of-cluster interferers and the typical device decreases, and equivalently, the $\mathrm{SIR}$ decreases. It is also shown that $\mathbb{P}(R_1>R_0)$ decreases with the $\mathrm{SIR}$ threshold $\theta$, as the channel becomes more prone to outage for larger $\theta$.
\begin{figure*}
\centering
\subfigure[$p=p^*$. ]{\includegraphics[width=3.0in]{Figures/ch3/histogram_b_i_p_star1.eps}
\label{histogram_b_i_p_star}}
\subfigure[$p > p^*$.]{\includegraphics[width=3.0in]{Figures/ch3/histogram_b_i_p_leq_p_star1.eps}
\label{histogram_b_i_p_leq_p_star}}
\caption{Histogram of the optimal caching probability $\textbf{b}^*$ when (a) $p=p^*$ and (b) $p > p^*$.}
\label{histogram_b_i}
\end{figure*}
To show the effect of $p$ on the caching probability, in Fig.~\ref{histogram_b_i}, we plot the histogram of the optimal caching probability at different values of $p$, where $p=p^*$ in Fig.~\ref{histogram_b_i_p_star} and $p>p^*$ in Fig.~\ref{histogram_b_i_p_leq_p_star}. It is clear from the histograms that the optimal caching probability $\textbf{b}^*$ tends to be more skewed when $p > p^*$, i.e., when $\mathbb{P}(R_1>R_0)$ decreases. This shows that file sharing is more difficult when $p$ is larger than the optimal access probability. More precisely, for $p>p^*$, the outage probability is high due to the aggressive interference. In such a low coverage probability regime, each device tends to cache the most popular files leading to fewer opportunities of content transfer between the devices.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/offloading_prob_cach_compare1.eps}
\caption {The offloading probability versus the popularity of files $\beta$ under different caching schemes, namely, \ac{PC}, Zipf, and \ac{CPF} ($\overline{n}=10$, $\sigma=$\SI{5}{\metre}).}
\label{offloading_gain_vs_beta}
\end{center}
\end{figure}
Last but not least, Fig.~\ref{offloading_gain_vs_beta} manifests the prominent effect of the files' popularity on the offloading gain. We compare the offloading gain of three different caching schemes, namely, the proposed \ac{PC}, Zipf's caching (Zipf), and \ac{CPF}. We can see that the offloading gain under the \ac{PC} scheme attains the best performance as compared to the other schemes. Also, we note that both the \ac{PC} and Zipf schemes achieve the same offloading gain when $\beta=0$ owing to the uniformity of content popularity.
\subsection{\textcolor{black}{Energy Consumption Results}}
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/energy_vs_beta7.eps}
\caption {Normalized energy consumption versus popularity exponent $\beta$.}
\label{energy_vs_beta}
\end{center}
\end{figure}
The results in this part are given for the energy consumption.
Fig.~\ref{energy_vs_beta} shows the energy consumption, \textcolor{black}{normalized to the mean number of devices per cluster}, versus $\beta$ under different caching schemes, namely, \ac{PC}, Zipf, and \ac{CPF}. We can see that the minimized energy consumption under the proposed \ac{PC} scheme attains the best performance as compared to the other schemes. Also, it is clear that the consumed energy decreases with $\beta$. This can be justified by the fact that, as $\beta$ increases, fewer files are frequently requested, and these files are more likely to be cached among the devices under the \ac{PC}, \ac{CPF}, and Zipf schemes. These few files are therefore downloadable from the devices via low power \ac{D2D} communication.
\begin{figure}[!h]
\begin{center}
\includegraphics[width=3.5in]{Figures/ch3/energy_vs_n6.eps}
\caption {\textcolor{black}{Normalized energy consumption versus the mean number of devices per cluster.}}
\label{energy_vs_n}
\end{center}
\end{figure}
We plot the normalized energy consumption versus the \textcolor{black}{mean number of devices per cluster} in Fig.~\ref{energy_vs_n}. First, we see that energy consumption decreases with the mean number of devices per cluster. As the number of devices per cluster increases, it is more probable to obtain requested files via low power \ac{D2D} communication. When the number of devices per cluster is relatively large, the normalized energy consumption tends to flatten as most of the content becomes cached at the cluster devices.
\subsection{Delay Results}
\begin{figure*} [!h]
\centering
\subfigure[Weighted average delay versus the popularity exponent $\beta$. ]{\includegraphics[width=2.8in]{Figures/ch3/delay_compare3.eps}
\label{delay_compare}}
\subfigure[Optimal allocated bandwidth versus the popularity exponent $\beta$.]{\includegraphics[width=2.8in]{Figures/ch3/BW_compare4.eps}
\label{BW_compare}}
\caption{Evaluation and comparison of average delay for the proposed joint \ac{PC} and bandwidth allocation scheme with different baseline schemes against popularity exponent $\beta$, $N_f = 100$, $M = 4$, $k = 8$.}
\label{delay-analysis}
\end{figure*}
\textcolor{blue}{The results in this part are devoted to the average delay metric. The performance of the proposed joint \ac{PC} and bandwidth allocation scheme is evaluated in Fig.~\ref{delay-analysis}, and the optimized bandwidth allocation is also shown. Firstly, in Fig.~\ref{delay_compare}, we compare the average delay for two different caching schemes, namely, \ac{PC} and Zipf's scheme. We can see that the minimized average delay under the proposed joint \ac{PC} and bandwidth allocation scheme attains substantially better performance as compared to the Zipf's scheme with fixed bandwidth allocation (i.e., $W_1=W_2=W/2$). Also, we see that, in general, the average delay monotonically decreases with $\beta$, since fewer files then receive the highest demand.
Secondly, Fig.~\ref{BW_compare} manifests the effect of the files' popularity $\beta$ on the allocated bandwidth. It is shown that optimal \ac{D2D} allocated bandwidth $W_1^*$ continues increasing with $\beta$. This can be interpreted as follows. When $\beta$ increases, a fewer number of files become highly demanded. These files can be entirely cached among the devices. To cope with such a larger number of requests served via the \ac{D2D} communication, the \ac{D2D} allocated bandwidth needs to be increased.}
\textcolor{blue}{Last but not least, Fig.~\ref{scaling} shows the geometrical scaling effects on the system performance, e.g., the effect of clusters' density $\lambda_p$ and the displacement standard deviation $\sigma$ on the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$, optimal allocated bandwidth $W_1^*$, and the average delay. In Fig.~\ref{cache_size}, we plot the \ac{D2D} coverage probability $\mathrm{P_{c_d}}$ versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$. It is clear from the plot that $\mathrm{P_{c_d}}$ monotonically decreases with both $\sigma$ and $\lambda_p$. Obviously, increasing $\sigma$ and $\lambda_p$ results in larger serving distance, i.e., higher path-loss effect, and shorter interfering distance, i.e., higher interference power received by the typical device, respectively. This explains the encountered degradation for $\mathrm{P_{c_d}}$ with $\sigma$ and $\lambda_p$.
In Fig.~\ref{optimal-w}, we plot the optimal allocated bandwidth $W_1^*$ normalized to $W$ versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$. Here also, it is quite obvious that $W_1^*$ tends to increase with both $\sigma$ and $\lambda_p$. This behavior can be directly understood from (\ref{optimal-w-1}), where $W_1^*$ is inversely proportional to $\mathcal{O}_{1} = \frac{\mathrm{P_{c_d}} {\rm log}_2(1+\theta)}{\overline{S}}$, and $\mathrm{P_{c_d}}$ decreases with $\sigma$ and $\lambda_p$ as discussed above. More precisely, while the \ac{D2D} service rate $\mu_1 = \frac{\mathrm{P_{c_d}} W_1{\rm log}_2(1+\theta)}{\overline{S}}$ decreases with $\mathrm{P_{c_d}}$, the optimal allocated bandwidth $W_1^*$ tends to increase as $\mathrm{P_{c_d}}$ decreases, so as to compensate for the service rate degradation and, eventually, minimize the weighted average delay.
In Fig.~\ref{av-delay}, we plot the weighted average delay versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$. Following the same interpretations as in Fig.~\ref{cache_size} and Fig.~\ref{optimal-w}, we can notice that the weighted average delay monotonically increases with $\sigma$ and $\lambda_p$.}
\begin{figure*} [t!]
\vspace{-0.5cm}
\centering
\subfigure[\ac{D2D} coverage probability $\mathrm{P_{c_d}}$ versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/d2d-coverage-prob.eps}
\label{cache_size}}
\subfigure[Optimal allocated bandwidth versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/bw-allocated.eps}
\label{optimal-w}}
\subfigure[Weighted average delay versus the displacement standard deviation $\sigma$ for different clusters' density $\lambda_p$.]{\includegraphics[width=2.0in]{Figures/ch3/delay-sigma-lamda.eps}
\label{av-delay}}
\caption{Effect of geometrical parameters, e.g., clusters' density $\lambda_p$ and the displacement standard deviation $\sigma$ on the system performance, $\beta= 0.5$, $N_f = 100$, $M = 4$, $k = 8$.}
\label{scaling}
\end{figure*}
\section{Conclusion}
In this work, we conduct a comprehensive analysis of the joint communication and caching for a clustered \ac{D2D} network with random probabilistic caching incorporated at the devices. We first maximize the offloading gain of the proposed system by jointly optimizing the channel access and caching probability. We obtain the optimal channel access probability, and the optimal caching probability is then characterized. We show that deviating from the optimal access probability $p^*$ makes file sharing more difficult. More precisely, the system is too conservative for small access probabilities, while the interference is too aggressive for larger access probabilities. Then, we minimize the energy consumption of the proposed clustered \ac{D2D} network. We formulate the energy minimization problem, show that it is convex, and obtain the optimal caching probability. We show that a content with a large size or low popularity has a small probability of being cached. Finally, we adopt a queuing model for the devices' traffic within each cluster to investigate the network average delay. Two M/G/1 queues are employed to model the \ac{D2D} and \ac{BS}-to-Device communications. We then derive an expression for the weighted average delay per request. We observe that the average delay depends on the caching probability and the allocated bandwidth, which respectively control the arrival rates and service rates of the two queues. Therefore, we minimize the per request weighted average delay by jointly optimizing bandwidth allocation between \ac{D2D} and \ac{BS}-to-Device communication and the caching probability. The delay minimization problem is shown to be non-convex. Applying the \ac{BCD} optimization technique, the joint minimization problem can be solved in an iterative manner. Results show up to $10\%$, $17\%$, and $300\%$ improvement in the offloading gain, energy consumption, and average delay, respectively, compared to Zipf's caching technique.
\begin{appendices}
\section{Proof of Lemma 1}
The Laplace transform of the inter-cluster aggregate interference $I_{\Phi_p^{!}}$ can be evaluated as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &= \mathbb{E} \Bigg[e^{-s \sum_{\Phi_p^{!}} \sum_{y \in \mathcal{B}^p} g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\Phi_{cp},g_{y_{x}}} \prod_{y \in \mathcal{B}^p} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&= \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\Phi_{cp}} \prod_{y \in \mathcal{B}^p} \mathbb{E}_{g_{y_{x}}} e^{-s g_{y_{x}} \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(a)}{=} \mathbb{E}_{\Phi_p} \Bigg[\prod_{\Phi_p^{!}} \mathbb{E}_{\Phi_{cp}} \prod_{y \in \mathcal{B}^p} \frac{1}{1+s \lVert x + y\rVert^{-\alpha}} \Bigg] \nonumber \\
&\overset{(b)}{=} \mathbb{E}_{\Phi_p} \prod_{\Phi_p^{!}} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x + y\rVert^{-\alpha}}\Big)f_Y(y)\dd{y}\Big)\Big)\dd{x}\Bigg)
\end{align}
where $\varphi(s,v) = \int_{u=0}^{\infty}\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}$; (a) follows from the Rayleigh fading assumption, (b)
follows from the \ac{PGFL} of the Gaussian \ac{PPP} $\Phi_{cp}$, and (c) follows from the \ac{PGFL} of the parent \ac{PPP} $\Phi_p$. Using the change of variables $z = x + y$ with $\dd z = \dd y$, we proceed as
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &\overset{}{=} {\rm exp}\Bigg(-\lambda_p \int_{\mathbb{R}^2}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z\rVert^{-\alpha}}\Big)f_Y(z-x)\dd{z}\Big)\Big)\dd{x}\Bigg) \nonumber
\\
&\overset{(d)}{=} {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\Big(-p\overline{n} \int_{u=0}^{\infty}\Big(1 - \frac{1}{1+s u^{-\alpha}}\Big)f_U(u|v)\dd{u}\Big)\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Bigg(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm exp}\big(-p\overline{n} \int_{u=0}^{\infty}
\frac{s}{s+ u^{\alpha}}f_U(u|v)\dd{u}\big)\Big)v\dd{v}\Bigg) \nonumber
\\
&= {\rm exp}\Big(-2\pi\lambda_p \int_{v=0}^{\infty}\Big(1 - {\rm e}^{-p\overline{n} \varphi(s,v)}\Big)v\dd{v}\Big),
\end{align}
where (d) follows from converting the Cartesian coordinates to polar coordinates with $u=\lVert z\rVert$.
\textcolor{blue}{To clarify how in (d) the normal distribution $f_Y(z-x)$ is converted to the Rice distribution $f_U(u|v)$, consider a remote cluster centered at $x \in \Phi_p^!$, at a distance $v=\lVert x\rVert$ from the origin. Every interfering device belonging to the cluster centered at $x$ has its coordinates in $\mathbb{R}^2$ chosen independently from a Gaussian distribution with standard deviation $\sigma$. Then, by definition, the distance from such an interfering device to the origin, denoted as $u$, follows a Rice distribution $f_U(u|v)=\frac{u}{\sigma^2}\mathrm{exp}\big(-\frac{u^2 + v^2}{2\sigma^2}\big) I_0\big(\frac{uv}{\sigma^2}\big)$, where $I_0$ is the modified Bessel function of the first kind with order zero and $\sigma$ is the scale parameter.
Hence, Lemma 1 is proven.}
\section{Proof of Lemma 2}
The Laplace transform of the intra-cluster aggregate interference $I_{\Phi_c}$, conditioned on the distance $v_0$ from the cluster center to the origin (see Fig.~\ref{distance}), is written as
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &= \mathbb{E} \Bigg[e^{-s \sum_{y \in \mathcal{A}^p} g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \Bigg] \nonumber
\\
&= \mathbb{E}_{\Phi_{cp},g_{y_{x_0}}} \prod_{y \in\mathcal{A}^p} e^{-s g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&= \mathbb{E}_{\Phi_{cp}} \prod_{y \in\mathcal{A}^p} \mathbb{E}_{g_{y_{x_0}}} e^{-s g_{y_{x_0}} \lVert x_0 + y\rVert^{-\alpha}} \nonumber
\\
&\overset{(a)}{=} \mathbb{E}_{\Phi_{cp}} \prod_{y \in\mathcal{A}^p} \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}
\nonumber
\\
&\overset{(b)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert x_0 + y\rVert^{-\alpha}}\Big)f_{Y}(y)\dd{y}\Big) \nonumber
\\
&\overset{(c)}{=} {\rm exp}\Big(-p\overline{n} \int_{\mathbb{R}^2}\Big(1 - \frac{1}{1+s \lVert z_0\rVert^{-\alpha}}\Big)f_{Y}(z_0-x_0)\dd{z_0}\Big)
\end{align}
where (a) follows from the Rayleigh fading assumption, (b) follows from the \ac{PGFL} of the Gaussian \ac{PPP} $\Phi_{cp}$, and (c) follows from the change of variables $z_0 = x_0 + y$ with $\dd z_0 = \dd y$. By converting the Cartesian coordinates to polar coordinates, with $h=\lVert z_0\rVert$, we get
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s|v_0) &\overset{}{=} {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\Big(1 - \frac{1}{1+s h^{-\alpha}}\Big)f_H(h|v_0)\dd{h}\Big) \nonumber
\\
&= {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h|v_0)\dd{h}\Big)
\end{align}
By neglecting the correlation of the intra-cluster interfering distances as in \cite{clustered_twc}, i.e., the common part $x_0$ in the intra-cluster interfering distances $\lVert x_0 + y\rVert$, $y \in \mathcal{A}^p$, we get
\begin{align}
\mathscr{L}_{I_{\Phi_c} }(s) &\approx {\rm exp}\Big(-p\overline{n} \int_{h=0}^{\infty}\frac{s}{s+ h^{\alpha}}f_H(h)\dd{h}\Big)
\end{align}
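Here, the unconditioned density $f_H(h)$ can be justified as follows (a brief argument, consistent with the serving distance density used in the proof of Lemma 6): the typical device and an intra-cluster interferer have i.i.d. Gaussian displacements of variance $\sigma^2$ around the common cluster center, hence their separation is Gaussian with variance $2\sigma^2$, and the distance $H$ is Rayleigh distributed,
\begin{align}
f_H(h) = \frac{h}{2\sigma^2} \, {\rm e}^{-\frac{h^2}{4\sigma^2}}, \quad h \geq 0.
\end{align}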
Hence, Lemma 2 is proven.
\section{Proof of Lemma 3}
First, to prove concavity, we proceed as follows.
\begin{align}
\frac{\partial \mathbb{P}_o}{\partial b_i} &= q_i + q_i\big(\overline{n}(1-b_i)e^{-\overline{n}b_i}-(1-e^{-\overline{n}b_i})\big)\mathbb{P}(R_1>R_0) \\
\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2} &= -q_i\big(\overline{n}e^{-\overline{n}b_i} + \overline{n}^2(1-b_i)e^{-\overline{n}b_i} + \overline{n}e^{-\overline{n}b_i}\big)\mathbb{P}(R_1>R_0)
\end{align}
It is clear that the second derivative $\frac{\partial^2 \mathbb{P}_o}{\partial b_i^2}$ is always negative, and
$\frac{\partial^2 \mathbb{P}_o}{\partial b_i \partial b_j}=0$ for all $i\neq j$. Hence, the Hessian matrix \textbf{H}$_{i,j}$ of $\mathbb{P}_o(p^*,\textbf{b})$ w.r.t. $\textbf{b}$ is negative semidefinite, and $\mathbb{P}_o(p^*,\textbf{b})$ is a concave function of $\textbf{b}$. Also, the constraints are linear, which implies that the \ac{KKT} conditions are both necessary and sufficient for optimality. The Lagrangian function and the \ac{KKT} conditions are then employed to solve \textbf{P2}.
The \ac{KKT} Lagrangian function of the offloading gain maximization problem \textbf{P2} is given by
\begin{align}
\mathcal{L}(\textbf{b},w_i,\mu_i,v) =& \sum_{i=1}^{N_f} \Big[ q_i b_i + q_i(1- b_i)(1-e^{-b_i\overline{n}})\mathbb{P}(R_1>R_0) \Big] \nonumber \\
&+ v(M-\sum_{i=1}^{N_f} b_i) - \sum_{i=1}^{N_f} w_i (b_i-1) + \sum_{i=1}^{N_f} \mu_i b_i
\end{align}
where $v$, $w_i$, and $\mu_i$ are the Lagrange multipliers associated with the equality constraint and the two inequality constraints, respectively. Now, the optimality conditions are written as
\begin{align}
\label{grad}
\frac{\partial \mathcal{L}}{\partial b_i}\Big|_{(\textbf{b}^*,w_i^*,\mu_i^*,v^*)} = q_i + q_i&\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*}-(1-e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0) -v^* - w_i^* +\mu_i^*= 0 \\
&w_i^* \geq 0 \\
&\mu_i^* \geq 0 \\
&w_i^* (b_i^* - 1) =0 \\
&\mu_i^* b_i^* = 0\\
&(M-\sum_{i=1}^{N_f} b_i^*) = 0
\end{align}
\begin{enumerate}
\item $w_i^* > 0$: We have $b_i^* = 1$, $\mu_i^*=0$, and
\begin{align}
&q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)= v^* + w_i^* \nonumber \\
\label{cond1_offload}
&v^{*} < q_i -q_i(1-e^{-\overline{n}})\mathbb{P}(R_1>R_0)
\end{align}
\item $\mu_i^* > 0$: We have $b_i^* = 0$ and $w_i^*=0$, and
\begin{align}
& q_i + \overline{n}q_i\mathbb{P}(R_1>R_0) = v^* - \mu_i^* \nonumber \\
\label{cond2_offload}
&v^{*} > q_i + \overline{n}q_i\mathbb{P}(R_1>R_0)
\end{align}
\item $0 <b_i^*<1$: We have $w_i^*=\mu_i^*=0$, and
\begin{align}
\label{psii_offload}
v^{*} = q_i + q_i\big(\overline{n}(1-b_i^*)e^{-\overline{n}b_i^*} - (1 - e^{-\overline{n}b_i^*})\big)\mathbb{P}(R_1>R_0)
\end{align}
\end{enumerate}
By combining (\ref{cond1_offload}), (\ref{cond2_offload}), and (\ref{psii_offload}), with the fact that $\sum_{i=1}^{N_f} b_i^*=M$, Lemma 3 is proven.
\section{Proof of Lemma 6}
Under the assumption of one active \ac{D2D} link within a cluster, there is no intra-cluster interference. Also, the Laplace transform of the inter-cluster interference is similar to that of a \ac{PPP} \cite{andrews2011tractable} whose density is the same as that of the parent \ac{PPP}. In fact, this is true according to the displacement theorem of the \ac{PPP} \cite{daley2007introduction}, whereby each interferer is a point of a \ac{PPP} that is displaced randomly and independently of all other points. For the sake of completeness, we prove it here. Starting from the third line of the proof of Lemma 1, we get
\begin{align}
\mathscr{L}_{I_{\Phi_p^{!}}}(s) &\overset{(a)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \int_{v=0}^{\infty}\mathbb{E}_{u|v}\Big[1 -
e^{-s P_d g_{u} u^{-\alpha}} \Big]v\dd{v}\Bigg), \nonumber \\
&= \text{exp}\Big(-2\pi\lambda_p \mathbb{E}_{g_{u}} \big[\int_{v=0}^{\infty}\int_{u=0}^{\infty}\big(1 - e^{-s P_d g_{u} u^{-\alpha}} \big)f_U(u|v)\dd{u}v\dd{v}\big]\Big) \nonumber \\
\label{prelaplace}
&\overset{(b)}{=} \text{exp}\Bigg(-2\pi\lambda_p \mathbb{E}_{g_{u}} \underbrace{\int_{v=0}^{\infty}v\dd{v} - \int_{v=0}^{\infty}\int_{u=0}^{\infty} e^{-s P_d g_{u} u^{-\alpha}} f_{U}(u|v)\dd{u} v \dd{v}}_{\mathcal{R}(s,\alpha)}\Bigg)
\end{align}
where (a) follows from the \ac{PGFL} of the parent \ac{PPP} \cite{andrews2011tractable}, and (b) follows from $\int_{u=0}^{\infty} f_{U}(u|v)\dd{u} =1$. Now, we proceed by evaluating $\mathcal{R}(s,\alpha)$ as follows.
\begin{align}
\mathcal{R}(s,\alpha)&\overset{(c)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}
\int_{v=0}^{\infty} f_{U}(u|v)v \dd{v}\dd{u}\nonumber \\
&\overset{(d)}{=} \int_{v=0}^{\infty}v\dd{v} - \int_{u=0}^{\infty}e^{-s P_d g_{u} u^{-\alpha}}u\dd{u} \nonumber \\
&\overset{(e)}{=} \int_{u=0}^{\infty}(1 - e^{-s P_d g_{u} u^{-\alpha}})u\dd{u} \nonumber \\
&\overset{(f)}{=} \frac{(s P_d g_{u})^{2/\alpha}}{\alpha} \int_{t=0}^{\infty}(1 - e^{-t})t^{-1-\frac{2}{\alpha}}\dd{t} \nonumber \\
\label{laplaceppp1}
&\overset{(g)}{=} \frac{(s P_d)^{2/\alpha}}{2} g_{u}^{2/\alpha} \Gamma(1 - 2/\alpha),
\end{align}
where (c) follows from changing the order of integration, (d) follows from $ \int_{v=0}^{\infty} f_{U}(u|v)v\dd{v} = u$, (e) follows from renaming the dummy variable $v$ to $u$ and combining the two integrals, (f) follows from the change of variables $t=s P_d g_{u}u^{-\alpha}$, and (g) follows from solving the integral in (f) by parts (a short verification is given below). Substituting the obtained value for $\mathcal{R}(s,\alpha)$ into (\ref{prelaplace}), and taking the expectation over the exponential random variable $g_u$, with the fact that $\mathbb{E}_{g_{u}} [g_{u}^{2/\alpha}] = \Gamma(1 + 2/\alpha)$, we get
\begin{align}
\label{laplace_trans}
\mathscr{L}_{I_{\Phi_p^{!}}} (s)&= {\rm exp}\Big(-\pi\lambda_p (sP_d )^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)\Big).
\end{align}
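For completeness, the integration by parts used in (g) proceeds as follows (a quick check, valid for $\alpha>2$ so that the boundary term vanishes):
\begin{align}
\int_{t=0}^{\infty}(1 - e^{-t})\,t^{-1-\frac{2}{\alpha}}\dd{t} = \Big[-\frac{\alpha}{2}(1-e^{-t})\,t^{-2/\alpha}\Big]_{0}^{\infty} + \frac{\alpha}{2}\int_{t=0}^{\infty} e^{-t}\, t^{-2/\alpha}\dd{t} = \frac{\alpha}{2}\,\Gamma(1-2/\alpha).
\end{align}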
Substituting (\ref{laplace_trans}) together with the distance \ac{PDF} $f_R(r)$ into the coverage probability expression yields
\begin{align}
\mathrm{P_{c_d}} &=\int_{r=0}^{\infty}
{\rm e}^{-\pi\lambda_p (sP_d)^{2/\alpha} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}\frac{r}{2\sigma^2}{\rm e}^{\frac{-r^2}{4\sigma^2}} \dd{r} \nonumber \\
&\overset{(h)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-\pi\lambda_p \theta^{2/\alpha}r^{2} \Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)}{\rm e}^{\frac{-r^2}{4\sigma^2}} \dd{r} \nonumber \\
&\overset{(i)}{=}\int_{r=0}^{\infty} \frac{r}{2\sigma^2}
{\rm e}^{-r^2Z(\theta,\sigma,\alpha)} \dd{r} \nonumber \\
&\overset{}{=} \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)}
\end{align}
where (h) follows from the substitution $s = \frac{\theta r^{\alpha}}{P_d}$, and (i) follows from defining $Z(\theta,\alpha,\sigma) = \pi\lambda_p \theta^{2/\alpha}\Gamma(1 + 2/\alpha)\Gamma(1 - 2/\alpha)+ \frac{1}{4\sigma^2}$.
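The final step is the elementary Gaussian integral (again a quick check):
\begin{align}
\int_{r=0}^{\infty} r\, {\rm e}^{-r^2 Z}\dd{r} = \Big[-\frac{{\rm e}^{-r^2 Z}}{2Z}\Big]_{0}^{\infty} = \frac{1}{2Z},
\end{align}
which, together with the prefactor $\frac{1}{2\sigma^2}$, gives $\mathrm{P_{c_d}} = \frac{1}{4\sigma^2 Z(\theta,\alpha,\sigma)}$.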
\end{appendices}
\bibliographystyle{IEEEtran}
\section{Introduction}
There has been much recent work on the subject of how gradient flow (GF) \cite{Narayanan:2006rf, Luscher:2009eq} is related to the renormalization group (RG), due to the characteristic smoothing properties of GF resembling the high-mode elimination aspect of RG. Inspired by this resemblance, work has been done on the lattice to use gradient flow to define a continuous blocking transformation with which anomalous dimensions have been computed \cite{Carosso:2018bmz,Carosso:2018rep}, with encouraging results. On the analytic side, work has been done \cite{Makino:2018rys, Abe:2018zdc, Sonoda:2019ibh} connecting GF to the framework of functional (or exact) RG (FRG), which is a method by which one defines RG transformations nonperturbatively and continuously in the context of continuum field theory \cite{Wegner:1972ih, Wilson:1973jj, Polchinski:1983gv, Wetterich:1989xg}.\footnote{See \cite{Kopietz:2010zz, Rosten:2010vm} for exemplary reviews of the subject.} In particular, it has been noted that certain definitions of a GF effective action lead to a kind of Langevin equation \cite{Abe:2018zdc}, and most recently, that the connected $n-$point functions of a particular FRG effective theory are equal to the GF observables up to proportionality \cite{Sonoda:2019ibh}.
In this work, we will contribute to these discussions by demonstrating the equivalence of certain kinds of FRG transformations with a class of stochastic (Markov) processes on field space.\footnote{See \cite{Pavliotis:2014} for a mathematician's introduction to stochastic processes, \cite{ZinnJustin:2002ru} for a physicist's introduction, and \cite{Damgaard:1987rr, ZinnJustin:2002ru} for an introduction to their field-theoretical generalization in the context of stochastic quantization.} It has been noted before \cite{Gaite:2000jv, Pawlowski:2017rhn, ZinnJustin:2007zz} that the functional RG equations for effective actions, when written in terms of effective Boltzmann weights, are of the form of a Fokker-Planck (FP) equation, whose solution is therefore a probability distribution over effective fields. Taking this observation seriously, and recalling that Fokker-Planck distributions can be thought of as being generated by a Langevin equation on the degrees of freedom appearing in the FP distribution, one may ask what kinds of Langevin equation generate the FRG effective actions. In what follows, we will define an RG transformation by a particularly simple (linear) choice of Langevin equation, and show by direct calculation that the transition functions resemble the constraint functionals found in the literature of FRG. The effective action for the specific case of $\phi^4$ theory in three dimensions will then be discussed, and the existence of a nontrivial IR fixed point will be checked to 1-loop order in perturbation theory. It will therefore become apparent that although the stationary distribution of the FP equation would be expected to be gaussian, a simple rescaling of variables allows for an interacting fixed point solution.\footnote{This may not be surprising from the RG perspective, of course, but it may be unexpected from the standpoint of stochastic processes, where the stationary distributions of the Smoluchowski equation are expected to involve the potential whose gradient appears in the Langevin equation\cite{Pavliotis:2014}.}
The equivalence we discuss here is a formulation of the Monte Carlo Renormalization Group (MCRG) principle for FRG. Recall that the kind of MCRG discussed by Swendsen \cite{Swendsen:1979gn} in the 1980's provided a prescription for computing observables in an effective theory by computing \emph{blocked} observables in a bare theory, that is, without having to know the effective action. A similar property will be found for the stochastic RG transformation, namely, that effective observables may be computed from the stochastic observables generated by the Langevin equation, whose initial condition is the bare field. The MCRG property will be valid for both lattice and continuum theories alike, thereby suggesting the possibility of computing general observables in an effective theory on the lattice by integrating a Langevin equation on top of the ensemble generated in the MCMC simulation of the corresponding bare theory.
The relationship to gradient flow will then follow from an observation made by Wilson and Kogut \cite{Wilson:1973jj}, and recently connected to gradient flow by Sonoda and Suzuki \cite{Sonoda:2019ibh}. In the context of the stochastic RG transformation, it follows from the MCRG equivalence that the connected expectation values of an FRG effective theory are equal to gradient-flowed expectations up to additive corrections that depend on the choice of Langevin equation, and which decay exponentially at large distances. This relationship implies that the measurement of gradient-flowed quantities is sufficient for the determination of long-distance critical properties of the theory, in much the same way as spin-blocked observables at large distances, avoiding the necessity of performing a full Langevin equation simulation.
A virtue of the characterization of FRG in terms of stochastic processes is that the properties of the Langevin equation guarantee certain traditionally desired properties of an FRG transformation: a composition law for repeated applications of the transformation, a correct initial condition for the effective theory, and a natural characterization of the fixed points of the transformation. Moreover, that the effective Boltzmann factor satisfies a FP equation implies that observables of the effective theory satisfy a similar equation involving the generator of the Markov process. An analysis of this equation for discrete, small time steps leads to the stochastic RG instantiation of usual RG scaling laws of correlations of the fundamental field, as well as of composite operators built from it. In particular, by virtue of the stochastic MCRG equivalence, one is led to correlator ratio formulae of the sort described in \cite{Carosso:2018bmz,Carosso:2018rep}, implying a method for measuring anomalous dimensions of scaling operators close to a critical fixed point.
\section{Stochastic processes and FRG}
Here we discuss the general framework for stochastic RG. The RG transformation will be defined by a Langevin equation on the degrees of freedom of a field theory. The simplicity of the equation will allow for an explicit calculation of the probability distribution which it generates, and the functional form of the distribution will entail an equivalence to conventional FRG transformations. A brief consideration of the observables generated by the stochastic process will lead to the MCRG equivalence between the effective theory and the stochastic observables. Lastly, we will comment on the pitfalls of a seemingly simpler definition of the effective theory.
\paragraph{The Langevin equation.}
We will define an RG transformation by a stochastic process $\phi_t$ on field space over $\mathbb{R}^d$ determined by a Langevin equation (LE) of the form
\begin{equation} \label{LE_momspace}
\partial_t \phi_t(p) = -\omega(p) \phi_t(p) + \eta_t(p), \quad \phi_0(p) = \varphi(p),
\end{equation}
where $\omega(p)$ is positive for $\|p\|>0$ and $\omega(0) \geq 0$, e.g. $\omega(p) = p^2$, where $p^2 := \| p \|^2$.\footnote{Of course, the realm of stochastic quantization \cite{Damgaard:1987rr} deals with writing field theory expectation values as equilibrium limits of a stochastic process on field space. Here, however, the bare theory is kept as a traditional field theory, and the stochasticity applies to the RG transformation only.} The ``time'' $t$ in this equation does not denote physical time, but rather an ``RG time'' which we will call \emph{flow time}, or simply \emph{time}. The noise $\eta_t(p)$ is chosen to be gaussian-distributed according to the measure
\begin{equation} \label{noise_distribution}
\mathrm{d} \mu_{0} (\eta) := \mathrm{c}(\mn\Lambda_0, \Omega) \exp\Big[ - \frac{1}{2\Omega} \int_I \mathrm{d} t \; (\eta_t, K_{0}^{-1}\eta_t)\Big] \mathscr{D} \eta,
\end{equation}
where the notation $(\phi, M \chi)$ denotes a quadratic form, written variously as\footnote{We abbreviate $\int_x = \int_{\mathbb{R}^d} \mathrm{d}^d x$ and $\int_p = \int_{\mathbb{R}^d} \mathrm{d}^d p / (2\pi)^d$ when no confusion arises.}
\begin{equation}
(\phi, M \chi) = \int_{xy} \phi (x) M(x,y) \chi(y) = \int_{pk} \phi(p) M(p,k) \chi(k).
\end{equation}
The cutoff function $K_{0}(p)$ suppresses noise momentum modes greater than $\mn\Lambda_0$, e.g. $K_{0}(p) = \mathrm{e}^{-p^2/\mn\Lambda_0^2}$ under Schwinger regularization.\footnote{The LE and measure $\mathrm{d} \mu_0$ can easily be written for a lattice theory, in which case the cutoff function $K_0$ is not necessary, as the lattice naturally regulates the noise at the bare scale.} Expectation values with respect to the noise distribution of functions $O(\eta)$ are defined by
\begin{equation}
\mathbb{E}_{\mu_0}[O(\eta)] := \int \! O(\eta) \mathrm{d} \mu_0 (\eta).
\end{equation}
The first two moments of $\mu_{0}$ are then
\begin{equation}
\mathbb{E}_{\mu_0}[ \eta_t(p) ] = 0, \quad \mathbb{E}_{\mu_0}[ \eta_t(p) \eta_s(k) ] = \Omega \; \delta(t-s) \; (2\pi)^d \delta(p+k) K_{0}(k).
\end{equation}
Later we will take the initial condition $\phi_0 = \varphi$ to be distributed according to a measure $\mathrm{d} \rho_0 (\varphi)$ corresponding to the bare theory of interest, the cutoff of which is chosen to be $\mn\Lambda_0$. Hence, the cutoff for the noise is chosen to match the cutoff of the bare theory.
Turning back to eq. (\ref{noise_distribution}), the constant $\mathrm{c}(\mn\Lambda_0, \Omega)$ is chosen to normalize $\mathrm{d} \mu_{0}$ to unity, $\Omega$ is the (dimensionless) variance of the noise, and $I \subset \mathbb{R}$ is an arbitrary time interval large enough to include all desired times. In position space, the Langevin equation takes the form of a stochastic heat equation
\begin{equation}
\partial_t \phi_t(x) = - (\omega \phi_t)(x) + \eta_t(x), \quad \phi_0(x) = \varphi(x).
\end{equation}
For the case $\omega(p) = p^2$, one has $(\omega \phi)(x) = -\Delta \phi (x) = -\partial_\mu \partial_\mu \phi(x)$. In position space, therefore, we see that the equation becomes a stochastic partial differential equation.
The form of the momentum space equation above is a simple field-theoretic generalization of the well-known Ornstein-Uhlenbeck process (i.e. damped Brownian motion) $q_t$ with Langevin equation and solution \cite{ZinnJustin:2002ru}, respectively,
\begin{equation}
\dot q_t = - \omega q_t + \eta_t, \quad q_t = \mathrm{e}^{-\omega t} q_0 + \int_0^t \mathrm{d} s \; \mathrm{e}^{-\omega(t-s)} \eta_s,
\end{equation}
where $\eta_t$ is gaussian white noise, so it is quite simple to find the solution. One treats the noise term like a non-homogeneous part of the equation, finding
\begin{equation}
\phi_t(p) = f_t(p) \varphi(p) + \int_0^t \mathrm{d} s \; f_{t-s}(p) \eta_s(p),
\end{equation}
where $f_t(p)$ is a generalized momentum space heat kernel of the form
\begin{equation}
f_t(p) = \mathrm{e}^{-\omega(p)t}, \quad f_t(z) = \int_p \mathrm{e}^{i p \cdot z} f_t(p).
\end{equation}
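(A one-line check: via the integrating factor $\mathrm{e}^{\omega(p)t}$, the Langevin equation is equivalent to $\partial_t \big(\mathrm{e}^{\omega(p)t}\phi_t(p)\big) = \mathrm{e}^{\omega(p)t}\eta_t(p)$, and integrating from $0$ to $t$ reproduces the solution above.)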
In position space, one finds
\begin{equation}
\phi_t(x) = (f_t \varphi)(x) + \int_0^t \mathrm{d} s \; (f_{t-s}\eta_s)(x).
\end{equation}
We will sometimes denote the solution's dependence on initial condition and noise by $\phi_t[\varphi; \eta]$. The first term on the right-hand side implies that the mean of $\phi_t(x)$ satisfies the free gradient flow equation, i.e. ``heat'' equation, corresponding to the differential operator $\omega$.
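Explicitly, since the noise has zero mean, taking the noise average of the solution gives
\begin{equation}
\mathbb{E}_{\mu_0}\big[\phi_t(p)\big] = f_t(p)\,\varphi(p), \qquad \partial_t\, \mathbb{E}_{\mu_0}\big[\phi_t(p)\big] = -\omega(p)\, \mathbb{E}_{\mu_0}\big[\phi_t(p)\big].
\end{equation}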
\paragraph{The Fokker-Planck distribution.}
With the explicit solution in-hand, one can compute the probability distribution of fields $\phi$ at time $t$ given $\varphi$ at $t=0$. We say that the Langevin equation \emph{generates} a Fokker-Planck (FP) distribution $P(\phi,t;\varphi,0)$ defined by
\begin{equation}
P(\phi,t;\varphi,0) := \mathbb{E}_{\mu_0} \big[ \delta(\phi - \phi_t[\varphi;\eta])\big] = \int \mathscr{D} \lambda \; \mathbb{E}_{\mu_0} \big[ \mathrm{e}^{i(\lambda,\phi-\phi_t [\varphi; \eta])}\big].
\end{equation}
From the definition of noise expectations, we then find
\begin{equation}
P(\phi, t; \varphi, 0) = \mathrm{c}(\mn\Lambda_0, \Omega) \int \! \mathscr{D} \lambda \int \! \mathscr{D} \eta \; \exp\Big[i(\lambda, \phi - \phi_t[\varphi; \eta]) -\frac{1}{2\Omega} \int_I \mathrm{d} s (\eta_s, K^{-1}_{0} \eta_s)\Big].
\end{equation}
Substituting in the explicit solution for $\phi_t$, the integrand becomes
\begin{align}
\exp&\Big[i(\lambda, \phi - f_t \varphi) - i \int_0^t \mathrm{d} s\; (\lambda, f_{t-s} \eta_s) - \frac{1}{2\Omega} \int_I \mathrm{d} s \; (\eta_s, K_{0}^{-1} \eta_s)\Big] \nonumber \\
& = C \exp\Big[i(\lambda, \phi - f_t \varphi) - \int_0^t \mathrm{d} s\; \Big( i (\lambda, f_{t-s} \eta_s) + \frac{1}{2\Omega} (\eta_s, K_{0}^{-1} \eta_s) \Big) \Big],
\end{align}
the constant $C$ involving times $s > t$, which divides out of any noise averages and will now be dropped. The noise integral over the relevant $\eta_s$'s is a standard gaussian integral, which yields
\begin{equation}
P(\phi, t; \varphi, 0) = \int \! \mathscr{D}\lambda \; \exp\Big[i(\lambda, \phi - f_t \varphi) - \frac{\Omega}{2} \int_0^t \mathrm{d} s \; (f_{t-s}^\top \lambda, K_{0} f_{t-s}^\top
\lambda)\Big].
\end{equation}
Next, note that the $s-$integral (which does not care about $\lambda$ or $K_{0}$) produces a kernel
\begin{equation} \label{Adef}
A_t := \Omega \int_0^t \mathrm{d} s \; f_{t-s} K_{0} f_{t-s}^\top,
\end{equation}
which in momentum space is given by a diagonal matrix,
\begin{equation} \label{A_momspace}
A_t(p,k) = \Omega (2\pi)^d \delta(p+k) K_{0}(p) \; \frac{1-\mathrm{e}^{-2 \omega(p) t}}{2\omega(p)}.
\end{equation}
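This follows from the elementary $s-$integral $\int_0^t \mathrm{d} s \; \mathrm{e}^{-2\omega(p)(t-s)} = \big(1-\mathrm{e}^{-2\omega(p)t}\big)/2\omega(p)$, using that $f_{t-s}$ and $K_{0}$ are diagonal in momentum space.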
We will sometimes denote $A^{-1}_t$ by $B_t$; the inverse exists by virtue of the restrictions on $\omega(p)$. The remaining $\lambda-$integral is also gaussian, and evaluates to
\begin{equation} \label{FP_dist}
P(\phi, t; \varphi, 0) = \big[\det 2 \pi B_t \big]^{\frac{1}{2}} \exp\Big[- \frac{1}{2} \big(\phi - f_t\varphi, B_t (\phi - f_t \varphi)\big)\Big].
\end{equation}
Here we note that similar functionals to this one are common to several formulations of FRG \cite{Wilson:1973jj, Wetterich:1989xg, Igarashi:2009tj}. We will call such a functional a \emph{gaussian constraint functional} or a \emph{transition function} (when emphasizing it's probabilistic interpretation). In momentum space, the exponent is explicitly
\begin{equation} \label{constraint_functional}
- \frac{1}{2} \int_p \; \frac{2\omega(p) K_{0}^{-1}(p)}{\Omega\big(1-\mathrm{e}^{-2\omega(p)t}\big)} \big(\phi(p) - \mathrm{e}^{-\omega(p)t} \varphi(p)\big)\big(\phi(-p) - \mathrm{e}^{-\omega(p)t} \varphi(-p)\big).
\end{equation}
One observes that the mean of the field $\phi$ is set to the flowed field $f_t(p) \varphi(p)$ within a functional variance determined by $A_t(p)$. Thus the effect of the stochastic RG transformation is to produce a low-mode fluctuating field, in the sense that the mean value of modes of $\phi$ with $p^2 \gg 1/t$ is exponentially suppressed. This suggests that the effective cutoff of the resulting theory is roughly $\mn\Lambda_t \sim 1/\sqrt{t}$; a more precise identification will be made later.
For reasons explained in the next subsection, we may write the transition function as $P_t(\phi,\varphi)$ rather than $P(\phi,t;\varphi,0)$, and we will sometimes suppress the initial condition by writing $P_t(\phi)$. The transition function is a Green function for the Fokker-Planck equation\footnote{The tensor products are in the space of position or momentum indices.}
\begin{align} \label{FP_transfn}
& \frac{\partial P_t(\phi)}{\partial t} = \mathrm{tr} \; \frac{\delta}{\delta \phi} \otimes \Big( \frac{1}{2} \mn\Sigma(\phi,t) \fdelphiO{P_t(\phi)} + \mathscr{B}(\phi,t) P_t(\phi) \Big), \nonumber \\
& \lim_{t \to 0} P(\phi,t; \varphi,0) = \delta(\phi - \varphi),
\end{align}
where the drift $\mathscr{B}$ and diffusion matrix $\mn\Sigma$ are defined by \cite{Pavliotis:2014} (the sign of $\mathscr{B}$ is chosen to match the convention of eq. (\ref{FP_transfn}))
\begin{align}
\mathscr{B}(\phi,t) &= -\lim_{t' \to t} \frac{1}{t'-t} \int_{\phi'} (\phi' - \phi) P(\phi',t';\phi,t), \\
\mn\Sigma(\phi,t) &= \lim_{t' \to t} \frac{1}{t'-t} \int_{\phi'} (\phi' - \phi) \otimes (\phi' - \phi) P(\phi',t';\phi,t).
\end{align}
With the explicit solution eq. (\ref{FP_dist}), we compute
\begin{align}
\mathscr{B}(\phi,t) &= \omega \phi, \\
\mn\Sigma(\phi,t) &= \Omega K_{0}.
\end{align}
If the initial condition $\varphi$ is distributed according to a measure $\mathrm{d} \rho_0(\varphi) = \mathrm{e}^{-S_0(\varphi)} \mathscr{D} \varphi$ corresponding to a bare theory, then the effective distribution
\begin{equation}
\rho_t(\phi) = \int \! P(\phi,t;\varphi,0) \; \mathrm{d} \rho_0(\varphi)
\end{equation}
also satisfies the FP equation, with initial condition $\rho_0(\varphi)$. For the specific choice eq. (\ref{LE_momspace}) of Langevin equation above, we find
\begin{equation} \label{FP_EFT}
\frac{\partial \rho_t(\phi)}{\partial t} = \frac{1}{2} \mathrm{tr} \; K_0 \; \frac{\delta}{\delta \phi} \otimes \Big( \fdelphiO{\rho_t(\phi)} + 2 \mn\Delta_{\omega}^{-1} \phi \; \rho_t(\phi) \Big),
\end{equation}
where we have set $\Omega = 1$, and $\mn\Delta_{\omega} = K_{0} \omega^{-1}$ is a \emph{regularized} propagator.
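One readily checks (a direct verification) that the time-independent gaussian density
\begin{equation}
\rho_*(\phi) \propto \exp\Big[-\big(\phi, \mn\Delta_{\omega}^{-1} \phi\big)\Big] = \exp\Big[-\big(\phi, K_{0}^{-1}\omega\, \phi\big)\Big]
\end{equation}
annihilates the right-hand side of eq. (\ref{FP_EFT}), since $\fdelphiO{\rho_*} = -2\mn\Delta_{\omega}^{-1}\phi\,\rho_*$; this is the stationary distribution that reappears in section 3.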
The drift term $(\omega \phi_t)(x)$ may be regarded as the functional derivative of what we might call a ``flow action''
\begin{equation}
\hat S(\phi) = \frac{1}{2} (\phi, \omega \phi), \quad \partial_t \phi_t = - \fdelphiO{\hat S}(\phi_t) + \eta_t,
\end{equation}
in which case one would have $\mathscr{B} = \fdelphiO{\hat S}$.
For arbitrary choices of $\hat S$, the Langevin equation may become nonlinear and the (still linear) FP equation generalizes to
\begin{equation} \label{FP_seed}
\frac{\partial \rho_t(\phi)}{\partial t} = \frac{1}{2} \mathrm{tr} \; K_{0} \frac{\delta}{\delta \phi} \otimes \Big( \fdelphiO{\rho_t(\phi)} + 2 K^{-1}_{0} \fdelphiO{\hat S(\phi)} \rho_t(\phi) \Big).
\end{equation}
Similar equations are common in the FRG literature, where they are sometimes referred to as linear FRG equations or generalized heat equations \cite{Rosten:2010vm}. Of course, by writing $\rho_t = \mathrm{e}^{-S_t}$ and letting $\mathrm{d} \rho_0(\varphi) = \mathrm{e}^{-S_0(\varphi)} \mathscr{D}\varphi$, one recovers functional PDEs for the effective action $S_t(\phi)$ given some bare action $S_0(\varphi)$, similar to the Polchinski equation.
There are many possibilities for how to generalize the scheme presented above. First, one could choose a different distribution for the noise, perhaps even a non-gaussian one. Second, one could generalize the flow action to be arbitrarily complicated in $\phi$, thereby making the Langevin equation non-linear, but these will generate FP distributions which are more difficult to calculate, and it is unclear what is gained by doing so, especially because we will find that the linear equation is sufficient to define a proper RG transformation. For theories whose field variables are in compact spaces, or theories with local symmetries, however, one will need non-linear LEs to ensure that the flow preserves the symmetry; such equations will likely resemble those found in the context of stochastic quantization \cite{Damgaard:1987rr, Batrouni:1985jn}.
\paragraph{Further properties and MCRG.} Although the distribution above has the same form as the constraint functionals found in the FRG literature, a notable difference here is that the kernel $B_t$ is \emph{determined} by the associated Langevin equation, bearing a fixed relation to the choice of drift $\omega$. The initial condition for the transition function, $P_0(\phi,\varphi) = \delta(\phi - \varphi)$, is guaranteed by the fact that it is generated by a LE with initial condition $\varphi$. As a distribution, it is furthermore normalized such that for all $t \geq 0$,
\begin{equation}
\int \! \mathscr{D} \phi \; P_t(\phi,\varphi) = 1,
\end{equation}
and in particular, the integral is independent of the field $\varphi$. These conditions allow one to define the effective theory in a more conventional way by inserting unity into the partition function $Z$ of the bare theory as
\begin{equation}
Z = \int \! \mathrm{d} \rho_0(\varphi) = \int \! \mathscr{D} \varphi \! \int \! \mathscr{D} \phi \; P_t(\phi,\varphi) \; \mathrm{e}^{-S_0(\varphi)},
\end{equation}
thereby defining a Boltzmann weight of effective (low-mode) fields
\begin{equation} \label{eff_action}
\mathrm{d} \rho_t(\phi) = \frac{1}{Z} \; \mathrm{e}^{-S_t(\phi)} \mathscr{D} \phi , \qquad \mathrm{e}^{-S_t(\phi)} := \int \! \mathscr{D} \varphi \; P_t(\phi,\varphi) \; \mathrm{e}^{-S_0(\varphi)},
\end{equation}
and the partition function remains invariant.
The stochastic process generated by a Langevin equation is a Markov process, so that future states depend only on the present state, so long as the noise at different times is uncorrelated. This kind of feature was desirable at least in Wilson's philosophy of RG, where any particular blocking step could be carried out by knowing only the previous step. In terms of the abstract distribution $P$ this implies, $\forall \; t > s \geq 0$,
\begin{equation}
P(t,0) = P(t, s)P(s, 0), \quad \mathrm{or} \quad P(\phi, t; \varphi, 0) = \int \! \mathscr{D} \chi \; P(\phi, t; \chi, s) P(\chi, s; \varphi, 0).
\end{equation}
By considering time-homogeneous Langevin equations (i.e. no explicit $t-$dependence in the LE or the noise variance), the transition function depends only on the difference $t-s$, and we can write $P(\phi, t; \chi, s) = P_{t-s}(\phi, \chi)$.\footnote{The noise variance can be chosen to depend on time, but this spoils the convenience of time-homogeneity.} This entails that the set $\{P_t : t\geq 0\}$ form a semigroup of operators and may be written in terms of a generator $\mathcal{L}$ as $P_t = \mathrm{e}^{t\mathcal{L}}$ \cite{Pavliotis:2014}. We will discuss $\mathcal{L}$ in the last section. For now we simply note that $\mathcal{L}$ is the adjoint of the functional differential operator appearing in the FP equation.
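For the linear Langevin equation above, $\mathcal{L}$ acts on observables as (a standard result for linear Langevin dynamics, written in the conventions used here with $\Omega = 1$)
\begin{equation}
(\mathcal{L} F)(\phi) = -\Big(\omega \phi, \fdelphiO{F}\Big) + \frac{1}{2} \, \mathrm{tr} \; K_{0} \, \frac{\delta}{\delta \phi} \otimes \fdelphiO{F},
\end{equation}
so that noise averages evolve according to $\partial_t \, \mathbb{E}_{\mu_0}[F(\phi_t)] = \mathbb{E}_{\mu_0}[(\mathcal{L} F)(\phi_t)]$.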
Next, consider the usual definition of the expectation value of an operator $\mathcal{O}$ in the effective theory,
\begin{equation}
\langle \mathcal{O}(\phi) \rangle_{S_t} := \frac{1}{Z} \int \! \mathscr{D} \phi \; \mathcal{O}(\phi) \; \mathrm{e}^{- S_t(\phi)}.
\end{equation}
By inserting the definition eq. (\ref{eff_action}), and noting that
\begin{equation}
\int \! \mathscr{D} \phi \; \mathcal{O}(\phi) P(\phi, t; \varphi, 0) = \mathbb{E}_{\mu_0} \big[ \mathcal{O}(\phi_t[\varphi;\eta]) \big],
\end{equation}
where $\phi_t[\varphi; \eta]$ denotes the solution of the LE, one readily obtains the equality
\begin{equation} \label{Equivalence}
\big\langle \mathcal{O}(\phi) \big\rangle_{S_t} = \big\langle \mathbb{E}_{\mu_0} \big[\mathcal{O}(\phi_t[\varphi; \eta]) \big] \big\rangle_{S_0}.
\end{equation}
This formula states the equivalence of a low-mode FRG effective theory (in the Polchinski sense, i.e., no rescaling is yet performed) and a double expectation value over the bare fields and the random noise. Since the right-hand side may be calculated without knowledge of the effective action, it further constitutes a generalization of MCRG to FRG for all observables. Notice that there are just as many degrees of freedom $\phi$ as there are $\varphi$ (this is especially clear on the lattice).
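To make the lattice procedure suggested by eq. (\ref{Equivalence}) concrete, the following minimal sketch (an illustration only: the function names, the Euler--Maruyama discretization, and the units $a = \Omega = 1$ are our choices, and the explicit scheme requires a time step below $1/2d$ for stability) integrates the Langevin equation with $\omega = -\Delta$ on a periodic lattice and estimates an effective-theory expectation by the double average:
\begin{verbatim}
import numpy as np

def laplacian(phi):
    # nearest-neighbor lattice Laplacian with periodic boundaries (a = 1)
    lap = -2.0 * phi.ndim * phi
    for axis in range(phi.ndim):
        lap += np.roll(phi, 1, axis) + np.roll(phi, -1, axis)
    return lap

def langevin_flow(phi0, t, n_steps, rng):
    # Euler-Maruyama integration of  dphi = laplacian(phi) dt + dW,
    # the lattice version of the linear Langevin equation; the lattice
    # itself regulates the noise, so no cutoff function K_0 is needed.
    dt = t / n_steps
    phi = phi0.astype(float)
    for _ in range(n_steps):
        eta = rng.normal(0.0, np.sqrt(dt), size=phi.shape)
        phi = phi + dt * laplacian(phi) + eta
    return phi

def effective_expectation(configs, observable, t, n_steps, n_noise, rng):
    # MCRG average: expectation in the effective theory S_t obtained as
    # a double average over bare configurations and the noise.
    vals = [observable(langevin_flow(c, t, n_steps, rng))
            for c in configs for _ in range(n_noise)]
    return float(np.mean(vals))
\end{verbatim}
Here \texttt{configs} would be an ensemble drawn from $\mathrm{e}^{-S_0}$ by standard MCMC, and the inner loop over noise realizations implements the average $\mathbb{E}_{\mu_0}$.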
In the next section, we will explore various properties of the effective action $S_t(\phi)$ defined above. First, however, one might wonder why the noise average is necessary in eq. (\ref{Equivalence}), when compared with the corresponding statement for a spin-blocked theory \cite{Swendsen:1979gn},
\begin{equation}
\big\langle \mathcal{O}(\phi) \big\rangle_{S_b} = \langle \mathcal{O}(B_b\varphi) \rangle_{S_0},
\end{equation}
where $B_b$ denotes the blocking operator. This is perhaps clarified by the fact that when spin-blocking, there are fewer blocked spins than bare spins, so the blocked expectation values really involve an integration over ``extra'' degrees of freedom, from the perspective of the effective theory; here the role is played by noise. If one were to choose a blocked lattice of the same size as the original, so that the bare Boltzmann factor were integrated against a delta functional over the whole lattice, the resulting blocked action would be trivial, namely, $S_0(B^{-1}_b \phi)$. Likewise in the continuum, it has long been assumed \cite{Wetterich:1989xg} that a pure $\delta-$function constraint functional is not sufficient to define a non-trivial Wilson action for continuum FRG. Let us elaborate on this. One might have wanted to define the Wilson action through
\begin{equation} \label{delta_constraint}
\mathrm{e}^{-S_t(\phi)} = \int \! \mathscr{D} \varphi \; \delta(\phi - f_t \varphi) \; \mathrm{e}^{-S_0(\varphi)},
\end{equation}
where $f_t\varphi$ is the solution of a gradient flow equation such as
\begin{equation}
\partial_t \phi_t(x) = \Delta \phi_t(x),
\end{equation}
or some generalization thereof. The problem with this definition is that it generates a trivial effective action, in the sense to be described. In momentum space, the solution is simply $(f_t \varphi)(p) = \mathrm{e}^{-p^2 t} \varphi(p)$, so one can do a linear change of variables in eq. (\ref{delta_constraint}) and compute
\begin{equation}
S_t(\phi) = -\mathrm{tr} \ln f_t + S_0(f_t^{-1} \phi).
\end{equation}
Hence, the couplings of the new action are exactly computable, and because their dependence on $t$ is trivially determined by how many powers of $\phi$ and $p^2$ appear in each term, without involving any loop corrections, one verifies that the resulting ``Wilson action'' is not acceptable.
Lastly, we remark that the inadequacy of the definition eq. (\ref{delta_constraint}) \emph{does not} mean that the observables computed from gradient-flowed fields are not useful for studying certain RG properties of the system. At the end of the next section, in particular, we will describe how gradient-flowed observables are sufficient for studying the \emph{long-distance} properties of the theory.
\section{The effective theory}
In what follows, the nontriviality of the effective action determined by the stochastic RG transformation will be discussed for the example case of $\phi^4_3$ theory. We will show that by a rescaling of variables, the existence of an interacting IR fixed point of the transformation becomes possible. From the point of view of stochastic processes, the result implies that, by merely rescaling degrees of freedom appropriately, the stationary solutions of the Fokker-Planck equation may be non-gaussian even though the Langevin equation is linear. Lastly, the correlation functions of the effective theory will be related to gradient-flowed correlations.
\paragraph{The effective action.} That the EFT defined by a gaussian constraint functional for $\Omega > 0$ is nontrivial can be understood as follows. One may insert the expression eq. (\ref{FP_dist}) for the transition function into the definition of the effective action eq. (\ref{eff_action}), and then expand out the exponent of $P_t(\phi,\varphi)$; the part proportional to $\varphi^2$ modifies the bare theory propagator, and the part linear in $\varphi$ acts as a source term with $J = f_t B_t \phi$. The remaining $\phi^2$ term contributes to the $\phi$ propagator. The result is a relation between effective and bare actions:\footnote{In a sense, this constitutes an exact solution to the FP equation, giving the finite$-t$ distribution $\rho_t$ in terms of the cumulants of $\rho_0$.}
\begin{equation}
S_t(\phi) = F_t + \frac{1}{2} (\phi, B_t \phi) - W_{0}^{(t)}(B_t f_t \phi),
\end{equation}
where $F_t$ is due to the normalization of $P_t(\phi,\varphi)$, and $W^{(t)}_0(J) = \ln \langle \mathrm{e}^{(J,\varphi)} \rangle_{S_0^{(t)}}$ is the generator of connected Green functions for the bare theory $S_0$ with a modified $t-$dependent inverse propagator
\begin{equation}
[\mn\Delta^{(t)}_{0}]^{-1} := \mn\Delta_{0}^{-1} + h_t, \quad h_t := f_t B_t f^\top_t.
\end{equation}
Expanding the generator term in $\phi$ yields a formula which allows for the systematic computation of effective vertices,
\begin{equation}
W_0^{(t)}(B_t f_t \phi) = \sum_{n=0}^\infty \frac{1}{n!} \int_{\boldsymbol p} [W_0^{(t)}]^{(n)}(\boldsymbol p) (B_t f_t \phi)(p_1) \cdots (B_t f_t \phi)(p_n),
\end{equation}
where the set of $n$ momenta $p_i$ is denoted by $\boldsymbol p$. It is then apparent that the effective action for any finite $t$ is indeed non-trivial, since the vertices of $S_t$ contain the dynamics of the bare theory via the $[W^{(t)}_0]^{(n)}$.
The scale $\mn\Lambda_t$ of the effective theory may be determined by looking at the effective 2-point function at tree level, after isolating the quadratic part of $S_t(\phi)$:
\begin{equation}
\langle \phi(p_1) \phi(p_2) \rangle_{S_t}^\mathrm{tree} = (2\pi)^d\delta(p_\mathrm{tot}) \Big[ A_t(p) + f_t^2(p) \mn\Delta_0(p) \Big] = (2\pi)^d \delta(p_\mathrm{tot}) \Big[ A_t(p) + \frac{\mathrm{e}^{- p^2(a_0^2 + 2 t)}}{p^2 + m_0^2} \Big],
\end{equation}
where the inverse cutoff $a_0 = \mn\Lambda_0^{-1}$ has been used, and we recall that $A_t$ is given by eq. (\ref{A_momspace}). In position space, the first term decays rapidly at large distances with respect to the second. The second term is a Schwinger-regularized propagator; we therefore observe that the effective cutoff induced by the stochastic RG transformation is
\begin{equation}
\mn\Lambda_t^{-2} = \mn\Lambda_0^{-2} + 2t, \quad \mathrm{or} \quad \mn\Lambda_t = \frac{\mn\Lambda_0}{\sqrt{1+ 2\tau}},
\end{equation}
where the dimensionless flow time $\tau = \mn\Lambda_0^2 t$ has been introduced. We will take another look at the effective correlation functions and the function $A_t$ in the next section.
We can make sense of the odd-looking factors of $f_t$ and $B_t$ that appear in the effective action as follows. First, the additive $h_t$ in the propagator $\mn\Delta_0^{(t)}$ acts as a sliding IR cutoff for the \emph{bare} theory, since\footnote{We choose to consider the mass term in $S_0$ as part of the interaction $V_0(\phi)$ from now on.}
\begin{equation}
\lim_{p\to 0} h_t(p) = \frac{1}{t},
\end{equation}
which means that as $t$ increases, more of the bare field modes get integrated out. For example, in the case of $\phi^4_d$ theory (discussed in more detail in the next subsection), the momentum-independent part of the 1-loop contribution to the amputated effective 4-point vertex in $W^{(t)}_0$ is proportional to
\begin{equation}
\int_{\mathbb{R}^d} \frac{\mathrm{d}^d k}{(2\pi)^d} [\mn\Delta^{(t)}_{0}(k)]^2 = \int_{\mathbb{R}^d} \frac{\mathrm{d}^d k}{(2\pi)^d} \; \frac{\mathrm{e}^{-2 k^2 a_0^2}}{\big(k^2 + h_t(k) \big)^2} \; .
\end{equation}
We observe that the presence of $h_t$ in the denominator, combined with the multiplicative bare cutoff function, effectively restricts the domain of integration to $\| p \| \in [\mn\Lambda_t, \mn\Lambda_0]$, similarly to what one would have found in a standard (sharp) high-mode elimination RG step $\mn\Lambda_0 \to \mn\Lambda$, where the domain of the integral would be $\| p \| \in [\mn\Lambda, \mn\Lambda_0]$ (see \cite{Kopietz:2010zz} for details). Next, note that the argument $B_t f_t \phi$ of $W^{(t)}_0$ in $S_t$ implies that the $[W^{(t)}_0]^{(n)}$ vertices are multiplied by a factor of
\begin{equation}
B_t(p) f_t(p) = K_{0}^{-1}(p) \frac{2 \omega(p) f_t(p)}{1-f_t^2(p)}
\end{equation}
for each factor of $\phi(p)$. Since the vertices $[W^{(t)}_0]^{(n)}$ are connected $n-$point functions, which have $n$ factors of external propagators $\mn\Delta_0^{(t)}(p_i) \propto K_0(p_i)$ attached to them, we see that the effective vertices decay like $f_t(p_i) = \mathrm{e}^{-p^2_i t}$ and therefore strongly suppress the $\| p_i \| \gg \mn\Lambda_t$ contribution of the $n-$point functions. Moreover, the leading momentum behavior of the products of $B_t f_t$ with $\mn\Delta^{(t)}_0$ demonstrates that they are, in a sense, \emph{amputated},
\begin{equation}
B_t(p) f_t(p) \mn\Delta^{(t)}_0(p) = 1 - \frac{1}{2} (p^2 t)^2 + O(p^8 t^4),
\end{equation}
in a manner similar to what one finds with sharp high-mode elimination. Thus, in sum, the effective vertices are amputated connected $n-$point functions to leading order in external momenta, which are heavily damped in the UV $(\| p \| \gg \mn\Lambda_t)$, and whose loop corrections effectively involve domains of integration $\| p \| \in [\mn\Lambda_t, \mn\Lambda_0]$. It is also noteworthy that the external momentum dependence implied by the amputation formula above goes like powers of $p^2 / \mn\Lambda_t^2$, for $\mn\Lambda_t^{-2} \gg \mn\Lambda_0^{-2}$, as one expects from the general philosophy of effective field theory.
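In fact, for $\omega(p) = p^2$ the product evaluates in closed form (a short computation using the expressions for $f_t$, $B_t$, and $h_t$ above):
\begin{equation}
B_t(p) f_t(p) \mn\Delta^{(t)}_0(p) = \frac{2 \mathrm{e}^{-p^2 t}}{1 + \mathrm{e}^{-2 p^2 t}} = \mathrm{sech} \, (p^2 t),
\end{equation}
whose Taylor expansion reproduces the formula above.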
\paragraph{Flow of the couplings and rescaling.} At first sight it appears that the effective action, written as an integration against the bare density, eq. (\ref{eff_action}), has a trivial infinite flow time limit, as
\begin{equation}
\lim_{t \to \infty} \rho_t(\phi) = \big[\det 2 \pi B_\infty \big]^{\frac{1}{2}} \exp\Big[- \frac{1}{2} \big(\phi, B_\infty \phi\big)\Big],
\end{equation}
where $B_\infty(p) = 2 K_{0}^{-1}(p) \omega(p)$, due to the exponential decay of $f_t$. Indeed, it is well-known that the Ornstein-Uhlenbeck process has a gaussian stationary distribution. It is also well-known that the stationary distribution of a non-linear Langevin equation with drift $\delta \hat S(\phi) / \delta \phi$ is given by the Boltzmann factor $\exp [ - \hat S(\phi)]$. And yet, we know that the stationary distribution of an RG transformation should correspond to a fixed point theory, which in many cases is interacting.
The first feature to note in response to this paradox is that the properties of the drift $\omega$ imply that the zero mode of the bare field is not suppressed (see eq. (\ref{constraint_functional})); only its variance changes. Since the zero-mode theory is not gaussian, in general, the flowed distribution will also have a non-gaussian zero mode effective action, implying that the long-distance physics is non-trivial. This would suggest, however, that the infinite-time degrees of freedom do not propagate. To further clarify the situation, we will look at the flow of the most relevant effective couplings as the RG time $t$ increases, and then we will address the role of rescaling of degrees of freedom.
As an example of where an IR fixed point is known to exist by other means, we choose to analyze $\phi^4_3$ theory perturbatively, treating the mass term also as a perturbation. Denoting the coefficient of $p^2$ in the quadratic part of $S_t(\phi)$ by $c_t$, and the momentum-independent parts of the quadratic and quartic terms, respectively, by $m^2_t, \; \lambda_t$, we find
\begin{align}
c_t &= 1 + O(\lambda_0^2), \\
m^2_t & = m^2_0 + \frac{\lambda_0}{2} I^d_0(t) + O(\lambda_0^2, \lambda_0 m^2_0), \\
\lambda_t & = \lambda_0 - \frac{3\lambda_0^2}{2} C^d_0(t) - 2 \lambda_0^2 t I^d_0(t) + O(\lambda_0^3, \lambda_0 m^2_0),
\end{align}
at 1-loop order, where the loop integrals are given by
\begin{align}
I^d_{0}(t) & = \int_{\mathbb{R}^d} \! \frac{\mathrm{d}^d p}{(2 \pi)^d} \frac{\mathrm{e}^{-p^2 a_0^2}}{p^2 + h_t(p)} = \Omega_d \int_{\mathbb{R}_+} \! \mathrm{d} p \; p^{d-3} \mathrm{e}^{-p^2 a_0^2} \tanh p^2 t, \\
C^d_{0}(t) & = \int_{\mathbb{R}^d} \! \frac{\mathrm{d}^d p}{(2\pi)^d} \; \frac{\mathrm{e}^{-2 p^2 a_0^2}}{\big(p^2 + h_t(p) \big)^2} = \Omega_d \int_{\mathbb{R}_+} \! \mathrm{d} p \; p^{d-5} \mathrm{e}^{-2 p^2 a_0^2} \tanh^2 p^2 t,
\end{align}
and $\Omega_d = S_{d-1} / (2\pi)^d$. The first integral is superficially divergent, but for $a_0 > 0$, it has a finite $t\to\infty$ limit, and one may compute
\begin{equation}
t \frac{\mathrm{d}}{\mathrm{d} t} I^d_0(t) = \Omega_d \alpha_1 \; t^{1-d/2} + O(t^{-d/2} a_0^2),
\end{equation}
where $\alpha_1 \approx 0.379064$ for $d=3$. The second integral $C^d_0(t)$ exists even for $a_0=0$, and its time derivative is
\begin{equation}
t \frac{\mathrm{d}}{\mathrm{d} t} C^d_0(t) = \Omega_d \alpha_2 \; t^{2 - d/2} + O(t^{2 - d/2 - \delta} a_0^{2\delta}),
\end{equation}
where $\delta > 0$ and $\alpha_2 \approx 0.594978$ for $d = 3$.\footnote{Recall that $\phi^4_3$ theory is superrenormalizable, having only two superficially divergent diagrams: the snail and the sunset diagrams.} Hence, to 1-loop order, we find for the derivatives of effective couplings
\begin{align}
t \frac{\mathrm{d}}{\mathrm{d} t} m^2_t & = \frac{\lambda_0}{2} \Omega_d \alpha_1 \; t^{1-d/2} + O(t^{-d/2} a_0^2), \\
t \frac{\mathrm{d}}{\mathrm{d} t} \lambda_t & = - \lambda_0^2 \Omega_d \big( \smallfrac{3}{2} \alpha_2 + 2 \alpha_1 \big) \; t^{2-d/2} + O(t^{2 - d/2 - \delta} a_0^{2\delta}).
\end{align}
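The constants $\alpha_1, \alpha_2$ follow from substituting $x = p^2 t$ and taking $a_0 \to 0$ in the convergent leading pieces (a quick check of the quoted values):
\begin{equation}
\alpha_1 = \frac{1}{2} \int_0^\infty \mathrm{d} x \; x^{\frac{d-2}{2}} \, \mathrm{sech}^2 x, \qquad \alpha_2 = \int_0^\infty \mathrm{d} x \; x^{\frac{d-4}{2}} \, \tanh x \; \mathrm{sech}^2 x,
\end{equation}
which evaluate numerically to the values quoted above for $d=3$.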
These expressions do not clearly indicate any nontrivial fixed-point behavior at this order in perturbation theory. To proceed further, one must cast the flow equations in terms of rescaled dimensionless quantities, as one usually does to study RG flows. We will find below that such quantities naturally arise after a passive momentum and field redefinition.
Now we introduce dimensionless rescaled variables using the effective scale $\mn\Lambda_t$ to give dimension \cite{Morris:1993qb}. Dimensionless momenta $\bar p$ are defined by setting
\begin{equation}
p =: \mn\Lambda_t \bar p.
\end{equation}
The kinetic term in the effective action therefore becomes
\begin{equation}
\frac{1}{2} \int_{\bar p} \mn\Lambda_t^{d + 2} \bar p^2 \phi(\mn\Lambda_t \bar p) \phi(-\mn\Lambda_t \bar p).
\end{equation}
This motivates a change of field variables $\phi \to \mn\Phi$, where $\mn\Phi$ is dimensionless:
\begin{equation} \label{field_cov}
\phi(\bar p \mn\Lambda_t) =: \mn\Lambda_t^{d_\phi} \mn\Phi(\bar p),
\end{equation}
with $d_\phi = -d/2-1$ being the canonical mass dimension of $\phi$ in momentum space. After doing so, the kinetic term is of the canonical form
\begin{equation}
\frac{1}{2} \int_{\bar p} \bar p^2 \mn\Phi(\bar p) \mn\Phi(- \bar p)
\end{equation}
at 1-loop order, while the mass and quartic terms pick up factors of $\mn\Lambda_t$ which define dimensionless couplings $r_t, \; u_t$ by
\begin{equation}
r_t := \mn\Lambda_t^{-2} m^2_t, \qquad u_t := \mn\Lambda_t^{d-4} \lambda_t.
\end{equation}
We note that these rescalings are all quite familiar when written in terms of the scale factor
\begin{equation}
b_t := \frac{\mn\Lambda_0}{\mn\Lambda_t} \quad \Rightarrow \quad r_t = b_t^2 \hat m_t^2, \quad u_t = b_t^{4-d} \hat \lambda_t,
\end{equation}
reflecting that the mass and the 4-point coupling are relevant at the gaussian fixed point (hats denote quantities rendered dimensionless with $\mn\Lambda_0$).
Next, we compute the RG flow equations which describe how the dimensionless variables change with the flow time $t$. In the expression for the derivatives above, one replaces $m^2_0$ and $\lambda_0$ by $m^2_t$ and $\lambda_t$, valid at this order in perturbation theory. The derivatives of the dimensionless couplings with respect to $b_t$ are then
\begin{align}
b_t \frac{\mathrm{d} r_t}{\mathrm{d} b_t} &= 2 r_t + \beta_1 u_t, \\
b_t \frac{\mathrm{d} u_t}{\mathrm{d} b_t} &= (4 - d) u_t - \beta_2 u^2_t,
\end{align}
up to terms of order $b^{-2}_t$, since $t = \frac{1}{2} \mn\Lambda_t^{-2}(1-b^{-2}_t)$, and where $\beta_1 = 2^{\frac{1}{2}} \Omega_3 \alpha_1, \; \beta_2 = 2^{\frac{1}{2}} \Omega_3 (\frac{3}{2} \alpha_2 + 2 \alpha_1)$ in $d=3$. As $b_t \to \infty$, the second equation has a nontrivial stationary solution $u_*$, and implies a corresponding critical value $r_*$, which for $d = 3$ are given, at 1-loop order, by $u_* \approx 8.46, \; r_* \approx -0.12$. Linearizing about the fixed point and computing the left-eigenvalues $y_i$ of the stability matrix, one finds that $y_2 = 2, \; y_4 = - 1$, which are crude approximations to the precisely-known values $y_2 = 1.58831(76), \; y_4 = -0.845(10)$ at the Wilson-Fisher fixed point \cite{Hasenbusch:1999mw}.\footnote{The values at this loop order from sharp high-mode elimination combined with epsilon expansion in \cite{Kopietz:2010zz} are $y_2 = 1.67, \; y_4 = -1$, which, however, treats the mass non-perturbatively. As a step in that direction, extending the analysis above to include terms of order $ru$ yields modified exponents $y_2 \approx 1.63, \; y_4 \approx -1.33$; we stress that our formalism is not expected to do any better than the epsilon expansion.}
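The quoted fixed-point data follow directly from these flow equations. The sketch below (an illustrative re-derivation under the stated 1-loop approximations, not the original analysis code) reproduces $u_*$, $r_*$, and the eigenvalues $y_i$ from the constants $\alpha_1$ and $\alpha_2$:
\begin{verbatim}
import numpy as np

# 1-loop constants for d = 3, using the values quoted in the text
alpha1, alpha2 = 0.379064, 0.594978
Omega3 = 1.0 / (2.0 * np.pi**2)              # S_2 / (2 pi)^3
beta1 = np.sqrt(2.0) * Omega3 * alpha1
beta2 = np.sqrt(2.0) * Omega3 * (1.5 * alpha2 + 2.0 * alpha1)

# Nontrivial zero of b du/db = u - beta2 u^2, and the induced r_*
u_star = 1.0 / beta2
r_star = -0.5 * beta1 * u_star               # from 2 r + beta1 u = 0
print(u_star, r_star)                        # ~ 8.46 and ~ -0.1

# Stability matrix of the linearized flow; eigenvalues are y_2, y_4
M = np.array([[2.0, beta1],
              [0.0, 1.0 - 2.0 * beta2 * u_star]])
print(np.linalg.eigvals(M))                  # [2., -1.]
\end{verbatim}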
Thus we observe the existence of an IR fixed point in perturbation theory, as we expect in $\phi^4_3$ theory. If we worked to $O(\lambda_0^2)$, we would find, as usual, that a wave function renormalization factor $\zeta_t = b_t^{\gamma_\phi}$ must be included to keep the kinetic term coefficient normalized to $c_t := 1$, so that eq. (\ref{field_cov}) is replaced by
\begin{equation}
\phi(\bar p \mn\Lambda_t) =: \mn\Lambda_t^{d_\phi} \zeta_t^{-1} \mn\Phi(\bar p) = \mn\Lambda_0^{d_\phi} b_t^{-\Delta_\phi} \mn\Phi(\bar p),
\end{equation}
which modifies the scaling dimension $\Delta_\phi$ of $\phi$ to include an \emph{anomalous} dimension $\gamma_\phi = O(u_t^2)$, which has a non-zero $t\to\infty$ limit.
The existence of an IR fixed point for the dimensionless, rescaled effective action implies that the expectation values of rescaled effective observables
\begin{equation}
\langle \mn\Phi(\bar p_1) \cdots \mn\Phi(\bar p_n) \rangle_{S_t} = b_t^{n \Delta_\phi} \mn\Lambda_0^{-nd_\phi} \langle \phi(p_1) \cdots \phi(p_n) \rangle_{S_t}
\end{equation}
can have nontrivial infinite flow time limits. In terms of the stochastic RG transformation of section 2, this is written as
\begin{equation}
\langle \mn\Phi(\bar p_1) \cdots \mn\Phi(\bar p_n) \rangle_{S_t} = b_t^{n \Delta_\phi} \mn\Lambda_0^{-nd_\phi} \big\langle \mathbb{E}_{\mu_0} \big[\phi_t(p_1) \cdots \phi_t(p_n) \big] \big\rangle_{S_0}.
\end{equation}
Since the stochastic RG transformation was generated by a linear Langevin equation, it may be surprising that, by simply rescaling the correlation functions, one can arrive at a non-gaussian stationary distribution of the Fokker-Planck equation. We also note that the quantities $\mn\Lambda_0^{-d_\phi} \phi_t$ correspond directly to the dimensionless field variables one would work with on the lattice.
\paragraph{Correlation functions.}
Wilson and Kogut demonstrated a relation between effective $n$-point functions and the bare $n$-point functions in their FRG scheme \cite{Wilson:1973jj}. Recently, the authors of \cite{Sonoda:2019ibh} have noted that this relation is an equivalence between effective correlations and gradient-flowed correlations. In the context of the stochastic approach here, the corresponding relation is given in terms of generators $W(J)$ of connected Green functions by
\begin{equation}
W_t(J) = \frac{1}{2}(J,A_t J) + W_0(f_t J),
\end{equation}
where $A_t$ is given by eq. (\ref{Adef}). This relation is simply derived by shifting $\phi' = \phi - f_t \varphi$ in eq. (\ref{eff_action}) and using the definition of the generator,
\begin{equation}
\mathrm{e}^{W_t(J)} := \frac{1}{Z_0} \int \! \mathscr{D} \phi \; \mathrm{e}^{-S_t(\phi) + (J,\phi)},
\end{equation}
with $Z_0$ being the free theory partition function \cite{Kopietz:2010zz, ZinnJustin:2002ru}. It follows that the 2-point functions of $S_t$ and $S_0$ are related by
\begin{equation}
W^{(2)}_t(x,y) = A_t(x,y) + \int_{x'y'} f_t(x,x') f_t(y,y') W^{(2)}_0(x',y'),
\end{equation}
and higher $n$-point functions are related by
\begin{equation}
W^{(n)}_t(x_1,\dots, x_n) = \int_{\boldsymbol x'} f_t(x_1,x_1') \cdots f_t(x_n,x_n') W^{(n)}_0(x_1',\dots,x_n').
\end{equation}
The function $A_t(x,y)$ is determined by the choice of Langevin equation. In the case $\omega(p) = p^2$, for example, one finds an expression in terms of upper incomplete gamma functions
\begin{equation}
A_t(z,0) = \frac{1}{8 \pi^{d/2} z^{d-2}} \Big[\Gamma\Big(\frac{d}{2}-1, \frac{z^2}{4 a_t^2}\Big) - \Gamma\Big(\frac{d}{2}-1, \frac{z^2}{4 a_0^2}\Big) \Big],
\end{equation}
where the inverse effective cutoff $a_t = \mn\Lambda_t^{-1}$ was used. For large separations $\| z \| \gg a_t$, this quantity decays as a gaussian. The effective propagator is therefore asymptotically equal to the gradient-flowed propagator in $\|x-y\|$ (so long as the correlation length satisfies $\xi \gg a_t$):
\begin{equation}
\langle \phi(x) \phi(y) \rangle_{S_t} \longrightarrow \langle (f_t\varphi)(x)(f_t\varphi)(y) \rangle_{S_0}.
\end{equation}
Note also that if no cutoff function were imposed on the gaussian noise $\eta_t$, there would be a short-distance singularity in $A_t(z,0)$, regardless of whether the bare theory was regulated.
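A minimal numerical sketch of this kernel for $d = 3$ is given below; it assumes only the displayed formula together with the identity $\Gamma(s,x) = \Gamma(s)\,Q(s,x)$, where $Q$ is the regularized upper incomplete gamma function, and the cutoffs and separations are illustrative numbers:
\begin{verbatim}
import numpy as np
from scipy.special import gamma, gammaincc

def A_t(z, a0, at, d=3):
    # Gamma(s, x) = Gamma(s) * gammaincc(s, x) in scipy's conventions
    s = d / 2.0 - 1.0
    G = lambda x: gamma(s) * gammaincc(s, x)
    return (G(z**2 / (4.0 * at**2)) - G(z**2 / (4.0 * a0**2))) / (
        8.0 * np.pi**(d / 2.0) * z**(d - 2.0))

# Gaussian decay once the separation exceeds the effective cutoff a_t
for z in (1.0, 2.0, 4.0, 8.0):
    print(z, A_t(z, a0=0.1, at=1.0))
\end{verbatim}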
The connected correlators of composite operators are also simply related to their gradient flow counterparts, except that we must be careful to define the generators of their $m$-point functions properly. For example, in the case of $\mathcal{O} = \phi^2$, the generator of correlators $W_t^{(0,m)}$ is defined by \cite{ZinnJustin:2002ru, Amit:1984ms}
\begin{equation}
\mathrm{e}^{W_t(L)} := \frac{1}{Z_0} \int \! \mathscr{D} \phi \; \mathrm{e}^{-S_t(\phi) + \frac{1}{2}(L,\phi^2)}.
\end{equation}
By inserting the definition of $S_t(\phi)$, one may compute the relation between effective and bare generators exactly, as the integrals involved are gaussian. We note, however, that given the simplicity of the Langevin equation, we can just as easily use the explicit solution $\phi_t[\varphi; \eta]$ to compute expectations. For example, the 2-point correlator of the $\phi^2$ composite operator is
\begin{equation}
\langle \phi^2(x) \phi^2(y) \rangle_{S_t}^\mrm{c} = \langle (f_t\varphi)^2(x)(f_t\varphi)^2(y) \rangle_{S_0}^\mrm{c} + 4 A_t(x-y)\langle (f_t\varphi)(x)(f_t\varphi)(y) \rangle_{S_0}^\mrm{c} + 2 A_t(x-y)^2,
\end{equation}
where the connected part of a correlator of local operators $A, \; B$ is defined by
\begin{equation}
\langle A(x) B(y) \rangle^\mrm{c} := \langle A(x) B(y) \rangle - \langle A(x) \rangle \langle B(y) \rangle,
\end{equation}
which again shows the asymptotic equivalence of effective and gradient-flowed quantities.
In sum, we have found that the correlation functions of composite operators in the effective theory are equal to the gradient-flowed correlators, up to terms proportional to powers of $A_t(x-y)$, which is itself determined by the drift term $\omega$. Thus, so long as the drift is chosen to imply an exponentially decaying $A_t$, the flowed observables are sufficient to determine the long-distance properties of the effective theory.
\section{Ratio formulae}
The fact that the transition functional $P_t$ satisfies the Fokker-Planck equation implies that observables at finite $t$ satisfy
\begin{equation}
\frac{\partial}{\partial t} \langle \mathcal{O}(\phi) \rangle_{S_t} = \langle \mathcal{L} \mathcal{O}(\phi) \rangle_{S_t},
\end{equation}
where the \emph{generator} $\mathcal{L}$ of the Markov process is a linear differential operator given by
\begin{equation}
\mathcal{L} = \frac{1}{2} \mathrm{tr} \Big( \mn\Sigma(\phi,t) \frac{\delta}{\delta \phi} \otimes \frac{\delta}{\delta \phi} - 2 \mathscr{B}(\phi,t) \otimes \frac{\delta}{\delta \phi} \Big).
\end{equation}
For the flow we have been considering, the generator takes the form
\begin{equation}
\mathcal{L} = \frac{1}{2} \mathrm{tr} \Big( K_0 \frac{\delta}{\delta \phi} \otimes \frac{\delta}{\delta \phi} \Big) - \mathrm{tr} \Big( \omega \phi \otimes \frac{\delta}{\delta \phi} \Big),
\end{equation}
where $\omega$ is (minus) the laplacian operator. After a small timestep $\epsilon$, then, successive observables are related by
\begin{equation}
\langle \mathcal{O}(\phi) \rangle_{S_{t+\epsilon}} = \langle \mathcal{O}(\phi) \rangle_{S_{t}} + \epsilon \langle \mathcal{L} \mathcal{O}(\phi) \rangle_{S_t} + O(\epsilon^2).
\end{equation}
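A zero-dimensional illustration may be helpful here (an analogy only: a single Ornstein-Uhlenbeck mode standing in for the field). With $\mathcal{L} = \frac{K_0}{2}\partial_x^2 - \omega x \,\partial_x$ and $\mathcal{O} = x^2$, one has $\mathcal{L}\mathcal{O} = K_0 - 2\omega x^2$, and iterating the update above integrates $\mathrm{d}\langle x^2\rangle/\mathrm{d}t = K_0 - 2\omega\langle x^2\rangle$:
\begin{verbatim}
# Toy check of <O>_{t+eps} = <O>_t + eps <L O>_t for one OU mode
w, K0, x2 = 1.0, 2.0, 0.5        # illustrative numbers
eps, T = 1.0e-3, 5.0
for _ in range(int(T / eps)):
    x2 += eps * (K0 - 2.0 * w * x2)
print(x2, K0 / (2.0 * w))        # relaxes to the stationary value 1.0
\end{verbatim}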
Applied to $n$-point functions, the formula reads\footnote{This formula corresponds to the spin-blocking equation
\begin{equation}
\langle B_b \varphi (m_1) \cdots B_b \varphi (m_n) \rangle_{S} = \langle \varphi (m_1) \cdots \varphi(m_n) \rangle_{S} + O(\varepsilon / \Delta m),
\end{equation}
where $B_b \varphi(m) = b^{-d}\sum_\varepsilon \varphi(m+\varepsilon)$ is the blocking operator, $\varepsilon \leq b$, and $\Delta m$ stands for the differences $|m_i - m_j| \gg b \; \forall i \neq j$. This follows from the usual correlator scaling relations of \emph{rescaled} spins $\varphi_b(n/b) := b^{\Delta_\phi} (B_b \varphi)(n)$,
\begin{equation}
\langle \varphi_b (m_1/b) \cdots \varphi_b (m_n / b) \rangle_{S_b} = b^{n \Delta_\phi} \langle \varphi (m_1) \cdots \varphi(m_n) \rangle_{S} + O(\varepsilon / \Delta m),
\end{equation}
that one finds in textbooks, e.g. \cite{Cardy:1996xt, Amit:1984ms}.
}
\begin{equation} \label{npoint_step}
\langle \phi(x_1) \cdots \phi(x_n) \rangle_{S_{t+\epsilon}} = \langle \phi(x_1) \cdots \phi(x_n) \rangle_{S_{t}} + O(\epsilon).
\end{equation}
Writing both sides in terms of the rescaled theory variables, $\phi(x) \propto \mn\Lambda_t^{\Delta_\phi} \mn\Phi(\bar x)$, where the dimensionless position is defined by $x = \mn\Lambda_t^{-1} \bar x$, one finds
\begin{equation}
\mn\Lambda_{t+\epsilon}^{n\Delta_\phi} \langle \mn\Phi(\bar x_1) \cdots \mn\Phi(\bar x_n) \rangle_{S_{t+\epsilon}} = \mn\Lambda_t^{n\Delta_\phi} \big[ \langle \mn\Phi( \bar y_1) \cdots \mn\Phi(\bar y_n) \rangle_{S_{t}} + O(\epsilon)\big].
\end{equation}
Motivated by the definition of scale changes $b_t = \mn\Lambda_0 / \mn\Lambda_t$ with respect to the bare scale, we introduce the \emph{relative} scale change $b_\epsilon(t) := b_{t+\epsilon} / b_t = \mn\Lambda_t / \mn\Lambda_{t+\epsilon}$. Since the rescaled positions at different scales, $\bar x$ and $\bar y$, refer to the \emph{same} dimensionful position $x$ defined at the bare scale (i.e. in units of $a_0 = \mn\Lambda_0^{-1}$), it follows that $\bar y = b_\epsilon \bar x$, and we may write the previous formula as
\begin{equation}
\langle \mn\Phi(\bar x_1) \cdots \mn\Phi(\bar x_n) \rangle_{S_{t+\epsilon}} = b_\epsilon(t)^{n\Delta_\phi} \big[\langle \mn\Phi(b_\epsilon \bar x_1) \cdots \mn\Phi(b_\epsilon \bar x_n) \rangle_{S_{t}} + O(\epsilon)\big].
\end{equation}
To the extent that we may neglect the $O(\epsilon)$ terms (which we justify in the appendix), we therefore find a familiar RG scaling relation,
\begin{equation} \label{phi_ratios}
\langle \mn\Phi(\bar x_1) \cdots \mn\Phi(\bar x_n) \rangle_{S_{t+\epsilon}} \approx b_\epsilon(t)^{n\Delta_\phi} \langle \mn\Phi(b_\epsilon \bar x_1) \cdots \mn\Phi(b_\epsilon \bar x_n) \rangle_{S_{t}}.
\end{equation}
This formula is the stochastic RG analogue of a spin-blocked correlator scaling relation.
We may now derive a scaling relation for correlations with (scaling) operator insertions. Consider a 1-parameter family of actions $S_t(\theta) = S_t - \theta \mathcal{V}_\mathcal{O}$, where $\mathcal{V}_\mathcal{O}(\mn\Phi)$ is the volume-integral of a local operator $\mathcal{O}[\mn\Phi(\bar x)]$ (which may be polynomial in $\mn\Phi$). $n$-point functions of $\mn\Phi$ with insertions of $\mathcal{V}_\mathcal{O}$ in the rescaled effective theory may be written as
\begin{equation}
\langle \mathcal{V}_\mathcal{O}(\mn\Phi) \mn\Phi(\bar x_1) \cdots \mn\Phi(\bar x_n) \rangle_{S_t} = \frac{\mathrm{d}}{\mathrm{d} \theta} \langle \mn\Phi(\bar x_1) \cdots \mn\Phi(\bar x_n) \rangle_{S_t(\theta)} \Big|_{\theta = 0}.
\end{equation}
Now consider a small time step $\epsilon$ as before. If $\mathcal{V}_\mathcal{O}$ is an operator in the rescaled action whose coefficient transforms as $g_{t+\epsilon} = b_\epsilon^{y_g} g_t$ (obtained by diagonalizing the linearized flow equations for the couplings), and if we introduce $\theta$ as a perturbation of $S_{t+\epsilon}$, then eq. (\ref{phi_ratios}) together with the previous equation imply
\begin{equation}
\langle \mathcal{V}_\mathcal{O}(\mn\Phi) \mn\Phi(\bar x_1) \cdots \mn\Phi(\bar x_n) \rangle_{S_{t+\epsilon}} \approx b_\epsilon(t)^{n\Delta_\phi - y_g} \langle \mathcal{V}_\mathcal{O}(\mn\Phi) \mn\Phi(\bar y_1) \cdots \mn\Phi(\bar y_n) \rangle_{S_t}.
\end{equation}
The minus sign in front of $y_g$ arises because $\theta$ is the coupling at time $t+\epsilon$; the corresponding coupling at the previous step is therefore $b^{-y_g}_\epsilon \theta$. Since $\mathrm{d}^d \bar y = b_\epsilon^{d} \mathrm{d}^d \bar x$, one may infer the scaling of insertions of the local operators $\mathcal{O}[\mn\Phi(\bar x)]$:
\begin{equation} \label{rescaled_scaling}
\langle \mathcal{O}[\mn\Phi(\bar x)] \mn\Phi(\bar x_1) \cdots \mn\Phi(\bar x_n) \rangle_{S_{t+\epsilon}} \approx b_\epsilon(t)^{n\Delta_\phi + \Delta_\mathcal{O}} \langle \mathcal{O}[\mn\Phi(b_\epsilon \bar x)] \mn\Phi( b_\epsilon \bar x_1) \cdots \mn\Phi(b_\epsilon \bar x_n) \rangle_{S_t}.
\end{equation}
In other words, the scaling operators transform like $\mathcal{O}[\mn\Phi(\bar x)]|_{S_{t+\epsilon}} = b_\epsilon^{\Delta_\mathcal{O}} \mathcal{O}[\mn\Phi(\bar y)]|_{S_t}$ inside of expectation values, where $\Delta_\mathcal{O} = d - y_g$ is the scaling dimension of $\mathcal{O}$.
Writing the rescaled variables in eq. (\ref{rescaled_scaling}) back in terms of $\phi$, and by using the MCRG equivalence between expectations of $\phi$ and $f_t \varphi$, one finds that the factors of $b_\epsilon^{n\Delta_\phi}$ cancel, and the remaining gradient-flowed quantities satisfy a ratio formula:
\begin{equation}
\frac{\langle \mathcal{O}[f_{t+\epsilon} \varphi(x)] f_{t+\epsilon} \varphi(x_1) \cdots f_{t+\epsilon} \varphi(x_n) \rangle_{S_0}}{\langle \mathcal{O}[f_{t} \varphi(x)] f_{t} \varphi(x_1) \cdots f_{t} \varphi(x_n) \rangle_{S_0}} \approx b_\epsilon(t)^{\Delta_\mathcal{O} - m\Delta_\phi},
\end{equation}
where $m$ is the number of factors of $\phi$ in $\mathcal{O}$. To reiterate, the position arguments on both sides are the \emph{same} physical positions in units of $a_0$. Since the scaling dimension of $\mathcal{O}$ can always be written as $\Delta_\mathcal{O} = m d_\phi + \gamma_\mathcal{O}$, with $d_\phi$ being the \emph{canonical} dimension of $\phi$ in position space, the exponent of $b_\epsilon(t)$ is just $\gamma_\mathcal{O} - m\gamma_\phi$.\footnote{If $\mathcal{O}$ contains $\ell$ derivatives, then the scaling dimension will be $\Delta_\mathcal{O} = m d_\phi + \ell + \gamma_\mathcal{O}$.}
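In a simulation, the ratio formula is inverted to estimate exponents; a sketch with purely illustrative (hypothetical) inputs:
\begin{verbatim}
import numpy as np

def exponent_from_ratio(R, Lam_t, Lam_t_eps):
    # Solve R = b_eps^(Delta_O - m Delta_phi) for the exponent,
    # with b_eps = Lam_t / Lam_t_eps the relative scale change
    return np.log(R) / np.log(Lam_t / Lam_t_eps)

# Hypothetical measured ratio and effective cutoffs (illustration only)
print(exponent_from_ratio(R=0.91, Lam_t=2.0, Lam_t_eps=1.9))
\end{verbatim}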
\section{Conclusion}
In this work, it has been demonstrated that a class of stochastic processes on field space defined by a linear Langevin equation may be used to define an FRG transformation on the degrees of freedom of a given bare theory. By suitably rescaling the effective degrees of freedom, the Fokker-Planck distribution generated by the Langevin equation can approach a non-gaussian stationary distribution, in accordance with the requirement that an RG transformation must be able to support a nontrivial fixed point whenever the dynamics of the theory allows one; this was explicitly checked in the case of $\phi^4_3$ theory. By equating the observables of the effective theory with stochastic observables, a form of MCRG for exact RG has been established, implying the possibility of directly simulating such effective theories on the lattice by supplementing an ensemble average with the integration of a Langevin equation. By further analyzing the properties of effective theory observables, it was found that gradient-flowed correlators are asymptotically (in operator separation) equal to their effective theory counterparts, establishing that gradient-flowed quantities measured at large distances suffice to determine the long-distance properties of the effective theory. Lastly, the Markov property of the stochastic process was used to derive a scaling formula for the ratios of gradient-flow observables with composite operator insertions, thereby implying a means of measuring anomalous dimensions of such operators.
The utility of the relation between stochastic RG and gradient flow lies in avoiding the need for a full Langevin equation simulation, provided that the observables one measures are correlators at large distances. At short distances, the presence of additive contributions from the function $A_t$ implies that the gradient flow observables are not sufficient, and one should properly simulate the full transformation. The fact that the stochastic RG transformation is continuous, together with the observation that the effective action flows towards an IRFP, then suggests the possibility of a continuous counterpart to the equations proposed in \cite{Swendsen:1979gn}, which allowed for the measurement of the discrete RG stability matrix using correlations between volume-averaged operators in the action. Work in this direction is underway.
\paragraph{Acknowledgements.} The author thanks Anna Hasenfratz, Ethan Neil, Tom DeGrand, and Masafumi Fukuma for useful discussions and suggestions. This work has been supported by the U. S. Department of Energy under grant number DE-SC0010005.
\section*{Abstract}
Experiments and computer simulations have established that liquid water's surfaces can deviate in important ways from familiar bulk behavior. Even in the simplest case of an air-water interface, distinctive layering, orientational biases, and hydrogen bond arrangements have been reported, but an overarching picture of their origins and relationships has been incomplete. Here we show that a broad set of such observations can be understood through an analogy with the basal face of crystalline ice.
Using simulations, we specifically demonstrate that water and ice surfaces share a set of structural features
suggesting the presence of nanometer-scale ice-like domains at the air-water interface. Most prominent is a shared characteristic layering of molecular density and orientation perpendicular to the interface.
Similarities in two-point correlations of hydrogen bond network geometry point to shared ice-like intermolecular structure in the parallel direction as well.
Our results bolster and significantly extend previous conceptions of ice-like structure at the liquid's boundary, and suggest that the much-discussed quasi-liquid layer on ice evolves subtly above the melting point into a quasi-ice layer at the surface of liquid water.
\section{Conclusion}
We have presented results of computer simulations that strongly support an analogy between the air-ice and air-water interfaces.
The analogy is not meant to suggest extended periodic structure at the liquid's surface, but instead to highlight the presence of nanometer-scale domains in which molecular layering, orientation, and hydrogen bond arrangements mimic those at the basal face of ice.
This conclusion echoes previous work that pointed less strongly to similar conceptions of liquid water's surface, but with important amendments.
Lee et al.\cite{Lee1984} posited that external forces which drive close molecular packing are an essential ingredient for the formation of ice-like layering at a hydrophobic substrate.
We have shown that the necessary driving forces are in fact intrinsic to aqueous interfaces, even without a hard wall to pack against.
The signatures and consequences of these forces, however, are pronounced only in an instantaneous interface analysis that accounts for shape fluctuations of soft interfaces.
Fan et al.\cite{Fan2009} drew a conclusion similar to ours for molecular organization perpendicular to the air-water interface, but viewed such organization to be absent in parallel directions.
We have shown that correlation lengths for ice-like structure are in fact nearly equal in perpendicular and parallel directions.
Lateral organization, however, is evident only in those observables that break symmetry appropriately, such as the two-point correlation function $S(r)$ for projected bond order parameters.
The ice-like structure we have reported at the surface of liquid water could be viewed as a phenomenological mirror image of the quasi-liquid layer at the surface of ice.
For ice, the interface with vapor necessarily severs many hydrogen bonds within the ideal lattice structure, allowing a set of low-energy crystal defects to be populated even at fairly low temperature.
From this perspective, the quasi-liquid layer constitutes a buildup of low-energy defects
that become increasingly common at modest temperatures\cite{Bishop2009}, consistent with changes in
VSFG peak amplitudes over a broad range of temperature\cite{Shen2006,Smit2017B,Tang2018,Tang2020}.
In the opposite way, a vapor interface with liquid water offers few ways to satisfy a significant fraction of hydrogen bonds.
The limited collection of low-energy liquid surface configurations results in a kind of quasi-ice layer at the surface of water.
The analogy between ice and liquid water presented here neglects known\cite{Bjerrum1952,Watkins2010} non-classical defects (i.e., Bjerrum defects) and is based on simulations of fully classical, non-reactive, rigid point-charge models of water.
Neglected effects of electronic polarization, molecular distortions, and proton disorder certainly impact surface structure and thermodynamics.
Nevertheless, rigid water models reproduce phase behavior\cite{Vega2005Phases} and liquid spectra\cite{Schmidt2007}, can match ice geometry, and provide reasonable estimates of surface tension\cite{Vega2007}.
Ab initio simulations of the water surface measure a surface structure that is similar to simpler point-charge models\cite{Leontyev2011,Kessler2015,Pezzotti2017,Pezzotti2018,Besford2018,Wohlfahrt2020}.
Given the strong connections we have demonstrated between the air-ice and air-liquid surfaces in a rigid point-charge model,
we expect that detailed electronic effects will be secondary to the pronounced consequences of molecular geometry that are common to any reasonable microscopic description of water.
\section{Acknowledgments}
This work was supported by the U.S. Department
of Energy, Office of Basic Energy Sciences, through the Chemical
Sciences Division (CSD) of Lawrence Berkeley National Laboratory
(LBNL), under Contract DE-AC02-05CH11231.
\clearpage
\section{Introduction}
The boundary of a macroscopic liquid -- whether at an electrode, a macromolecular surface, or an interface with a coexisting phase -- breaks symmetries and imposes constraints that can alter microscopic structure and response in important ways.
In the case of water, changes in the statistics of intermolecular arrangements have been discussed as key factors in, e.g., surface chemistry\cite{Ocampo1982,Kang2005,PinheiroMoreira2013,Bartels-Rausch2014,Gerber2015,Hudait2018}, atmospheric aerosol behavior\cite{Gerber2015,Laskin2003,Finlayson-Pitts2010}, and ice nucleation\cite{Ocampo1982,Hudait2018,Haynes1992,Cox2015,Bi2016,Qiu2017,Hudait2019}.
The detailed molecular physical origin of surface effects, however, remains a subject of uncertainty and debate in many of these areas\cite{Shin2018,Shin-Willard,Serva2018,Besford2018,Kessler2015,Bjorneholm2016}.
The subtlety of such interfacial phenomena arises in part from the inherently transient and short-ranged character of liquid structure.
In the bulk liquid phase, microscopic structure is apparent only in those observables that break translational symmetry, either by referencing multiple points in space and/or by tagging a specific molecule.
An extended surface breaks translational symmetry along the perpendicular Cartesian coordinate $z$, so that structure is manifest even in single-point observables,
such as the density profile $\rho(z) = (N/A)\langle \delta(z-z_1)\rangle$ where $N$ is the number of molecules, $A$ is the interface's area, and $z_1$ is the vertical position of a particular molecule.
But for soft aqueous interfaces, such as the air-water boundary, profiles of this kind are typically nondescript -- a smooth crossover from the density of one phase to the other, or a slight net molecular orientation decaying rapidly into the bulk phases.
It is difficult to draw or defend any detailed inference from such measurements, even with the nearly unlimited resolution of molecular simulation.
Experimentally, microscopic characterization of liquid-vapor interfaces is hindered further by the complicated nature of surface-specific spectroscopies. Vibrational sum-frequency generation (VSFG) reports specifically on molecular environments that lack centrosymmetry, and is therefore well suited to study interfacial structure that mediates frequencies of bond vibration.
Mapping VSFG spectra back onto intermolecular structure, however, typically requires uncontrolled approximations and the assistance of molecular simulation.
Even then, conflicting conclusions have been drawn for some systems.
Early VSFG measurements of water's surface\cite{Du1993,Du1994} revealed a strong signal near 3700 cm$^{-1}$ that is broadly agreed to originate in hydroxyl vibration of water molecules with H atoms exposed to the vapor phase, often termed the ``dangling'' or ``free'' OH group.
Spectral features in the range 3100-3500 cm$^{-1}$, consistent with hydroxyl stretching of intact hydrogen bonds, have been variously interpreted in terms of coupling among vibrational modes within and among molecules\cite{Raymond2003,Sovago2008,Auer2008,Yang2010,Stiopkin2011,Ishiyama2012,Medders2016},
or in terms of discrete populations of distinct hydrogen bonding
environments\cite{Buch2005,Walker2006,Ji2008,Nihonyanagi2009,Pieniazek2011,Stiopkin2011,Byrnes2011,Ishiyama2012,Sun2018,Smolentsev2017,Tang2020}.
Several results from these observations and simulations of air-water interfaces have inspired analogies with the crystalline phase.
The aforementioned 3700 cm$^{-1}$ VSFG peak, commonly attributed to dangling OH groups, is observed as well for the interface between vapor and crystalline ice. Indeed, dangling OH groups are a characteristic feature of ice's low-energy basal plane.
In addition, phase-sensitive VSFG measurements exhibit a pronounced band around 3200 cm$^{-1}$, consistent with OH stretching in bulk ice\cite{Ji2008}.
Based on these similarities, Shen and Ostroverkhov\cite{Shen2006} suggested that ice-like structure is a characteristic feature of the liquid's boundary.
Fan et al.\cite{Fan2009} supported this connection by computing depth-resolved probability distributions of molecular orientation from molecular dynamics simulations of the air-water interface.
They observed a clear anisotropy within two molecular diameters of the average interfacial height, with layered features that are reminiscent of ice's basal facet but are highly broadened and only weakly distinct from one another.
These results support a loose structural resemblance between air-water and air-ice interfaces.
As noted in Fan et al.\cite{Fan2009}, nondescript structural profiles like $\rho(z)$ are to be expected at soft interfaces, which are highly permissive of capillary wave-like fluctuations in interfacial topography.
By contrast, liquid structure at hard interfaces can exhibit pronounced spatial variations that extend several molecular diameters into the liquid phase.
In computer simulations of water confined between flat hydrophobic plates, Lee et al.\cite{Lee1984} revealed substantial oscillations in average microscopic density that decay on a nanometer length scale; distributions of molecular orientation are also strongly peaked near the interface.
A detailed comparison between the liquid's surface and the crystalline structure of ice is thus more straightforward in this case of a hard, flat, hydrophobic interface.
Lee et al. found a close correspondence: the spacing between peaks of $\rho(z)$ suggests a significant deviation from bulk liquid structure, aligning more closely with the separation between layers of ice in the direction normal to the basal face.
Similarly, preferred molecular orientations at the peaks of $\rho(z)$ are roughly consistent with hydrogen bonding directions in the corresponding ice layers, though with exceptions that suggest specific distortions of the basal plane.
More recent computational studies of soft aqueous interfaces have emphasized that the smooth density profile $\rho(z)$ belies a sharpness that is evident in any representative molecular configuration.
For a given lateral position $(x,y)$ in such a configuration, the dense liquid environment gives way to dilute vapor over a very small range of $z$ centered at
$z_{\rm inst}(x,y)$.
With the vertical coordinate referenced to this local ``instantaneous interface", Willard and Chandler demonstrated that the air-water interface is in fact highly structured\cite{Willard2010}, with average features even more pronounced than Lee et al. reported for water at a hard hydrophobic interface.
With this definition of depth (and related definitions\cite{Chacon2003,Sega2015,Partay2008}), distinct interfacial layers have been identified for a variety of microscopic properties, including density\cite{Willard2010,Sega2015,Kessler2015}, orientation\cite{Partay2008,Willard2010,Kessler2015,Wohlfahrt2020}, hydrogen bond formation\cite{Partay2008,Kessler2015,Serva2018}, bond network connectivity\cite{Partay2008,Zhou2017,Pezzotti2017, Pezzotti2018}, molecular dipole\cite{Shin-Willard}, and hydrogen bonding lifetime and bond libration\cite{Tong2016,Zhou2017}.
Models and physical pictures have been offered to rationalize these microscopic patterns at the air-water interface\cite{Lee1984,Wilson1988,Ismail2006,Fan2009,Bonthuis2011,Vaikuntanathan2016,Pezzotti2017,Shin2018,Shin-Willard}, which become apparent only after an accounting for its fluctuating surface topography.
But, to our knowledge, an overarching structural analogy with ice has not been evaluated from the instantaneous interface perspective.
This paper presents such an evaluation, based on molecular simulations, whose results strongly encourage the notion of ice-like organization at the air-water interface.
Specifically, we will show that most sharp features present in instantaneous depth profiles of molecular density and orientation can be anticipated from molecular arrangements at the basal face of an ideal ice lattice.
Where this correspondence fails, similar deviations are observed at the surface of ice at finite temperature (but still well below the melting point), due to low-energy defects that distort its ideal lattice structure.
This analogy highlights local structural motifs that are common to the surfaces of ice and water, but does not imply correlations in liquid structure that extend beyond a few molecular diameters.
Instead, we find that the accumulation of defects at the air-ice interface progressively degrades lateral correlations as temperature increases, producing distinctive features that are shared on both sides of the melting transition.
\subsection{Ice structure}
The essential structure of ice has been known for nearly 100 years, since the extended tetrahedral network of hexagonal ice was proposed following the rules of Bernal and Fowler\cite{Bernal1933}.
Linus Pauling argued that the fundamental building block of ice must be the ``puckered'' or ``chair''-configuration hexagon\cite{Pauling1935} (comprising, e.g., the orange and yellow molecules of Fig.~\ref{fig:two-layer-schematic} labeled L1A and L1B).
These puckered hexagons tessellate across a plane, forming a bilayer of slightly higher (orange in Fig.~\ref{fig:two-layer-schematic}) and slightly lower (yellow in Fig.~\ref{fig:two-layer-schematic}) waters that constitute the basal (0001) face of ice.
Each water molecule, at a vertex of three hexagons, forms three hydrogen bonds within the bilayer and reaches either upward (toward the vapor) or else downward to form a fourth hydrogen bond with an adjacent hexagonal bilayer.
Hexagonal ice can therefore be viewed as stacks of puckered hexagonal planes that are interconnected with ``pillars'' in the form of hydrogen bonds.
We will denote these bilayers L1, L2, etc.
The upward- and downward-reaching molecules within layer L$X$ will be designated L$X$A and L$X$B, respectively (see Fig.~\ref{fig:two-layer-schematic}).
\begin{figure}[t]
\includegraphics[width=\linewidth]{Introduction/2layer-annotated.pdf}
\caption{
Ideal lattice structure at the basal plane of ice, with outward-pointing normal vector $\hat{\bf n}$.
Colors highlight a characteristic layering of chair-configuration hexagonal motifs. Only the positions of molecular centers, and the connectivity of hydrogen bonds, are prescribed by this ideal structure. For concreteness we show one configuration of protons consistent with the Bernal-Fowler ice rules. The lattice continues indefinitely in the left, right, and downward directions; L0 marks the beginning of the vapor phase.
}
\label{fig:two-layer-schematic}
\end{figure}
At temperatures well below freezing, macroscopic ice crystals tend to terminate with a full bilayer of the basal plane exposed\cite{Materer1997}.
In a real system, this termination generates some degree of restructuring, but the unaltered ``ideal" ice surface serves as a useful reference for our work.
Because the outermost molecules (all those in L1A) are lacking one hydrogen bond, they are also the most likely participants in rearrangements at finite temperature.
As the melting point is approached, discrete defects accumulate at the ice surface\cite{Bishop2009,Buch2005,Buch2008,Smit2017B}, which develops a much-discussed\cite{Rosenberg2005,Li2007,Conde2008,Bishop2009,Watkins2011,Limmer2014,Michaelides2017,AlejandraSanchez2017,Pickering2018,Kling2018,Mohandesi2018,Slater2019} boundary region (called a pre-melted, quasi-liquid, or liquid-like layer) with fluid-like characteristics.
This region of 1-3 ice bilayers\cite{Conde2008,Kling2018} clearly mediates the contact of coexisting crystal and vapor phases, but the nature of its onset\cite{Kling2018}, and precise criteria for its presence, remain controversial.
\section{Methods}
\textit{\textbf{Simulation protocols.}}
Configurational ensembles for air-water and air-ice interfaces were sampled using molecular dynamics (MD) simulations performed with the LAMMPS\cite{lammps} MD package.
Forces were computed from the TIP4P/Ice model of water\cite{Abascal2005},
which was designed to accurately reproduce the thermodynamics of
phase coexistence between ice and liquid water\cite{Vega2005Phases,Abascal2005},
under periodic boundary conditions with Ewald summation of electrostatic interactions.
In all simulations, intramolecular bond distance constraints were imposed using the RATTLE algorithm\cite{Andersen1983} and constant temperature $T$ was maintained using a Nosé-Hoover thermostat with a damping frequency of $(200\ \mathrm{fs})^{-1}$\cite{Hoover1985}.
The air-water simulation included $N=522$ molecules in a cell with dimensions $25\times25\times100\ \text{\AA}^3$. The resulting liquid slab (initialized as a $25\times25\times25\ \text{\AA}^3$ cube using the packmol\cite{packmol} package) spans the $x$ and $y$ dimensions of the simulation cell.
This system was held at $T=298$ K and propagated with a time step of 2 fs, during both 4 ns of equilibration and a subsequent 5 ns production trajectory.
Air-ice simulations included $N=96$ molecules in a cell with dimensions $13.5212\times 15.6129 \times 14.72\ \text{\AA}^3$.
We constructed an initial ice slab by truncating an ideal, periodic ice crystal (generated according to the scheme of Hayward and Reimers\cite{Hayward1997}) so that the basal plane is exposed to vapor.
This system was advanced for 10 ns with a time step of 2 fs at a variety of temperatures.
The system was made deliberately small in an effort to thoroughly equilibrate and broadly sample at low temperatures via parallel tempering.
Simulation replicas were exchanged at temperatures spaced in 10 K increments from 17 K to 287 K, with four additional simulations at 22 K, 32 K, 272 K, and 282 K.
By exchanging replicas among 32 simulations, we explored a broad range of fluctuations even 135 K below the melting point.
As further assistance to sampling at low temperature, we hold the subsurface structure of ice fixed.
Over most of the temperature range that we consider, previous work suggests that L4 would be nearly static in the absence of constraints\cite{Conde2008, AlejandraSanchez2017, Kling2018}.
Accordingly, L4 is kept fixed throughout all ice simulations.
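As a bookkeeping cross-check of the replica ladder (a sketch, not the run script):
\begin{verbatim}
# 28 base temperatures in 10 K steps plus the 4 extras give 32 replicas
temps = sorted(set(range(17, 288, 10)) | {22, 32, 272, 282})
assert len(temps) == 32
\end{verbatim}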
\vspace{0.1cm}
\noindent
\textit{\textbf{Instantaneous interface analysis.}}
For air-water interfaces, all depth-dependent properties are referenced to an instantaneous interface. For a given configuration the interface's height $z_{\rm inst}(x,y)$ is determined at a grid of lateral positions $(x,y)$ according to the coarse-graining scheme of Willard and Chandler\cite{Willard2010}.
A local, vapor-facing surface normal vector $\hat{\bm n}({\bm s})$ is then estimated at each surface grid point ${\bm s} = (x,y,z_{\rm inst}(x,y))$.
The instantaneous depth of molecule $i$ is defined to be
\[
d_i = (\bm{s} - \bm{r}_i) \cdot \hat{\bm n}(\bm{s}), \quad\quad\textrm{(air-water interface)}
\]
where $\bm{s}$ is the surface grid point closest to $\bm{r}_i$.
Throughout this work, the position ${\bm r}_i$ of a TIP4P/Ice molecule refers to the center of its excluded volume, i.e., the oxygen atom.
The air-water density profile we report is thus
$\rho_{\rm inst}(d) = (N/A)\langle \delta(d-d_i) \rangle$.
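The bookkeeping for $d_i$ can be sketched as follows, assuming the coarse-grained surface grid points and their outward unit normals have already been computed following \cite{Willard2010} (an illustration, not the production analysis code):
\begin{verbatim}
import numpy as np

def instantaneous_depth(r_i, surf_pts, surf_normals):
    # d_i = (s - r_i) . n_hat(s), with s the nearest surface grid point;
    # surf_pts: (M, 3) interface points, surf_normals: (M, 3) unit normals
    k = np.argmin(np.linalg.norm(surf_pts - r_i, axis=1))
    return float(np.dot(surf_pts[k] - r_i, surf_normals[k]))
\end{verbatim}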
At temperatures of interest, undulations of the air-ice interface are quite limited.
The static subsurface layer L4 in our simulations acts as an anchor that further removes uncertainty in the interface's location.
In contrast to the soft liquid interface, the absolute vertical coordinate $z$ suffices as a detailed measure of depth in this case.
We therefore define
\[
d_i = z^{\rm L1A} - z_i + 0.8\ \text{\AA},
\quad\quad\textrm{(air-ice interface)}
\]
where $z^{\rm L1A}$ denotes the crystalline height of the outermost half-bilayer of the basal plane.
A shift of $0.8\ \text{\AA}$ is introduced to facilitate comparison with the air-water interface.
The use of absolute height in the case of ice aids the detection and characterization of surface defects.
\vspace{0.1cm}
\noindent
\textit{\textbf{Orientational statistics.}}
We characterize the orientation of an interfacial water molecule by computing statistics of the angle $\theta_{\rm OH}$ between a hydroxyl bond vector ${\bm r}_{\rm OH}$ and the surface normal $\hat{\bm n}$.
Specifically, we determine a depth-dependent probability distribution $P(\cos\theta_{\rm OH};d)$ of $\cos\theta_{\rm OH}$, and also a joint distribution $P(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2};d)$ for the pair of OH bond vectors that together define a water molecule's orientation up to an angle of azimuthal symmetry.
The singlet distribution is uniform in an isotropic bulk liquid (or vapor).
The joint distribution, however, is nonuniform even in an isotropic environment (an exact calculation for the bulk case is given in SI). We follow previous work in referencing $P(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2};d)$ to its bulk form.
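A minimal sketch of the corresponding estimators, assuming molecular coordinates, local normals, and depths are already in hand (not the production analysis code):
\begin{verbatim}
import numpy as np

def cos_theta_OH(r_O, r_H, n_hat):
    # Cosine of the angle between one OH bond and the local surface normal
    b = r_H - r_O
    return float(np.dot(b, n_hat) / np.linalg.norm(b))

def depth_resolved_hist(depths, cosines, d_edges, c_edges):
    # P(cos theta; d): 2D histogram, normalized within each depth bin
    H, _, _ = np.histogram2d(depths, cosines, bins=(d_edges, c_edges))
    return H / np.maximum(H.sum(axis=1, keepdims=True), 1)
\end{verbatim}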
\vspace{0.1cm}
\noindent
\textit{\textbf{Lateral correlation of ice-like motifs.}}
In following sections we will show that a local structural motif characteristic of the air-ice interface is recapitulated in detail at the air-water interface.
For the idealized crystal surface, this arrangement comprising a few molecular centers is repeated along lattice vectors parallel to the surface, in perfect registry, forming the outermost layers of ice's basal face.
At a liquid surface, which is azimuthally symmetric, an average long-range alignment of local motifs is prohibited.
Resemblance between the two surfaces is thus necessarily limited to a microscopic scale.
We assess this length scale through a correlation function $S(r)$ that quantifies motifs' alignment as a function of lateral distance $r$.
The repeated structural element we consider centers on a molecule in the outermost bilayer (L1), which ideally forms hydrogen bonds with three other molecules in L1.
To characterize the relative alignment of two such motifs,
we first project the position of each L1 molecule (those with $0<d<1.3$ \text{\AA}) onto the local instantaneous interface,
\[
\tilde{\bf r} = {\bf r} - ({\bf r} \cdot \hat{\bf n})\hat{\bf n}.
\]
For the idealized crystal, these projected positions lie at the vertices of a honeycomb lattice in two dimensions.
For the liquid surface, we anticipate honeycomb-like domains, limited in size by orientational decoherence and the presence of structural defects.
Bond orientation parameters,
\begin{equation}
q_m = \frac{1}{N_{\rm b}(m)} \sum_{j=1}^{N_{\rm b}(m)} e^{3i \theta_{mj}},
\label{equ:steinhardt}
\end{equation}
akin to those of Steinhardt and Nelson\cite{Steinhardt1983} allow a simple statistical analysis of these domains.
The sum in Eq.~\ref{equ:steinhardt} runs over the $N_{\rm b}(m)$ surface neighbors of a surface molecule $m$.
(Neighbors are identified by intermolecular distance, $|\bm r_m - \bm r_j|<3$ \AA, a range encompassing all strong hydrogen bonds.)
$\theta_{mj}$
is the angle between the bond vector $\tilde{\bf r}_{mj} = \tilde{\bf r}_m -\tilde{\bf r}_j$ and an arbitrary
lateral axis in the laboratory frame. For a pair of surface molecules $m$ and $k$, the product $q_m q_k^*$
indicates the alignment of the respective local coordination environments.
A perfect honeycomb
gives $q_m q_k^* = \pm 1$, depending on the vertices occupied by $m$ and $k$. Our order parameter for the
alignment of ice-like surface motifs is a conditional average of this product,
\begin{equation}
S(r) = \frac{\braket{q_m q_k^* \delta(r-|{\bf r}_{mk}|)} } {\braket{\delta(r-|{\bf r}_{mk}|)}}, \label{Eq:S_of_r}
\end{equation}
for surface molecules separated by a distance $r$.
Uncorrelated random bond orientations give $S(r)=0$, while the ideal basal face of ice gives
a series of values $S(r)=\pm 1$ at discrete distances corresponding to vertex pairs in the projected honeycomb lattice.
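A sketch implementing Eqs.~\ref{equ:steinhardt} and \ref{Eq:S_of_r} for a set of projected positions is given below; for simplicity it identifies neighbors by the projected 2D distance, whereas the analysis above uses the full intermolecular distance (an illustration, not the production code):
\begin{verbatim}
import numpy as np

def bond_order(pos2d, cutoff=3.0):
    # Three-fold bond orientation parameters q_m for projected
    # L1 positions pos2d, an (N, 2) array in Angstroms
    q = np.zeros(len(pos2d), dtype=complex)
    for m in range(len(pos2d)):
        d = pos2d - pos2d[m]
        r = np.hypot(d[:, 0], d[:, 1])
        nb = (r > 1e-9) & (r < cutoff)   # surface neighbors of m
        if nb.any():
            theta = np.arctan2(d[nb, 1], d[nb, 0])
            q[m] = np.mean(np.exp(3j * theta))
    return q

def S_of_r(pos2d, q, bins):
    # Conditional average S(r) = <q_m q_k*> over pairs at distance r
    i, k = np.triu_indices(len(pos2d), k=1)
    r = np.linalg.norm(pos2d[i] - pos2d[k], axis=1)
    w = np.real(q[i] * np.conj(q[k]))
    num, _ = np.histogram(r, bins=bins, weights=w)
    den, _ = np.histogram(r, bins=bins)
    return np.where(den > 0, num / np.maximum(den, 1), 0.0)
\end{verbatim}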
\section{Results and Discussion}
To establish connections between the surfaces of water's liquid and solid phases, we will compare three distinct systems.
One is an ideal crystal of ice Ih terminated at a specific lattice plane.
In the included figures, the precisely defined features of an ideal crystal surface will be indicated by black lines and points.
The second system we will consider is a finite-temperature crystal in coexistence with vapor.
In figures, we show simulation results for $T=137$ K,
the lowest temperature that is frequently visited by replica exchange in our parallel tempering.
This system is thus well equilibrated (within the constraint of fixed subsurface molecules in L4), but sufficiently cold that the ice surface is highly organized.
Substantial deviations from the ideal crystal structure are in most cases well localized in space (aside from occasional transitions of the entire surface to a cubic ice arrangement).
By most standards a quasi-liquid layer, whose fluidity and structural variability are reminiscent of ambient liquid water, is absent under these conditions.
Simulation results for this finite (but low) temperature system are presented in Figs. 2, 5, and 6 as blue lines.
Our final system is the much-studied air-water interface, as represented by the TIP4P/Ice model at 298 K.
Simulation results for the liquid surface are shown in Figs. 2, 5, and 6 as red lines.
We characterize each of these three systems in several ways, emphasizing different aspects of microscopic organization at aqueous surfaces.
We first examine density profiles that highlight discrete molecular layering, made evident at the liquid surface by referring depth to an instantaneous interface.
We then compare orientational structure of the revealed layers, focusing specifically on statistics of molecular dipoles and
of a water molecule's two OH bond vectors.
Finally, we quantify the degree of lateral alignment among molecular coordination environments in each system's outermost layer.
These comparisons paint a consistent picture: The ice surface's quasi-liquid layer, a semi-structured region
in which crystalline organization is locally evident but globally indistinct, is an apt description of the air-water interface as well.
\vspace{0.1cm}
\noindent
\textit{\textbf{Depth profiles of molecular density.}}
An ideal crystal truncated at a lattice plane exhibits distinct molecular layers parallel to the surface, separated by distances that are dictated by the lattice structure.
In the case of ice's basal face, these strata group naturally into a series of evenly spaced bilayers
(L1, L2, L3, $\ldots$), as described above and illustrated in Fig.~\ref{fig:two-layer-schematic}.
The 0.96 \text{\AA}\ spacing within one of these bilayers (i.e., the vertical distance between L$X$A and L$X$B peaks) is several
times smaller than the 3.68 \text{\AA}\ spacing between adjacent bilayers (i.e., the vertical distance from L1A to L2A).
Although ice is built from the same tetrahedral hydrogen bonding motif as the liquid, neither of these separation distances
corresponds closely to lengths that are familiar from standard measures of liquid structure, e.g., peaks in the oxygen-oxygen radial distribution function $g_{\rm OO}(r)$ at 2.81 \text{\AA}, 4.46 \text{\AA}, $\ldots$\cite{Piaggi2019}.
Our simulations of the ice surface at low but finite temperature yield density profiles (Fig.~\ref{fig:density-ice-water}, blue) whose peak positions follow that of the ideal crystal lattice.
At 137 K bilayer substructure is clearly evident, although
broadened L1A and L1B peaks overlap significantly.
\begin{figure}[!t]
\includegraphics[width=\linewidth]{Results/cos-depth-density-composite}
\caption{
Layering of molecular density and orientational patterns at the air-water and air-ice interfaces. (a) Density profiles
for the two surfaces, scaled by their maximum values over the range shown. In the liquid-vapor case (red), depth is measured relative to the instantaneous interface. Ice results are shown for $T=137$ K (blue). Black lines indicate layer and sublayer divisions in an ideal ice lattice. (b) Probability distributions of molecular orientation at the liquid surface (colors). $P(\cos\theta_{\rm OH};d)$ is normalized separately at each depth $d$. Black circles show expectations from an ideal lattice.
}
\label{fig:density-ice-water}
\end{figure}
The density of liquid water at its interface with air is also structured, as Willard and Chandler strikingly
demonstrated\cite{Willard2010}.
Fig.~\ref{fig:density-ice-water} shows the results of such an analysis in red for the TIP4P/Ice model at liquid-vapor coexistence.
This depth profile closely matches those calculated previously for other point-charge models of water\cite{Willard2010,Sega2015,Serva2018} and agrees qualitatively with profiles computed by ab initio molecular dynamics\cite{Kessler2015}.
Two pronounced peaks appear, at $d\approx 1.9$ \text{\AA}\ and $d\approx 4.7$ \text{\AA}.
The first marks a well-defined outermost molecular layer, and has a shape that loosely suggests structure within the layer.
Specifically, the increase from gas to liquid density is not uniformly steep and includes multiple inflection points.
In addition, and in sharp contrast to $g_{\rm OO}(r)$, the first density peak is at least as broad as the second.
A subtle third peak can just be resolved near $d\approx 8.5$ \AA.
Such characterizations of the air-water interface have inspired several detailed layer-by-layer analyses\cite{Partay2008,Sega2015,Kessler2015,Zhou2017,Pezzotti2017, Pezzotti2018,Serva2018,Shin-Willard}.
The precise boundaries of each layer differ among studies, but some important general conclusions are consistently reached.
Kessler et al.\cite{Kessler2015} separated the liquid water density profile into layers 3 \text{\AA}\ in width (denoted L0, L1, and L2), and designated the upper 1.2 \text{\AA}\ of L1 as L1$^\parallel$ due to a unique internal structure that we will discuss later.
They further observed that orientations, hydrogen bonding, and residence time differed measurably in each layer\cite{Kessler2015}.
Gaigeot and coworkers extended the layer-by-layer analysis\cite{Pezzotti2017,Pezzotti2018,Serva2018} by noting substantial intra-layer connectivity, especially within L1$^\parallel$.
The similarity of the liquid and icy density profiles in Fig.~\ref{fig:density-ice-water} suggests that the layering at the liquid's surface may be similar in nature to the crystalline structure at ice's basal face.
In particular, peaks of the liquid profile, designated in previous work as L1, L2, $\ldots$, align well with the ice bilayers we have denoted L1, L2, $\ldots$.
We therefore define the boundaries of liquid surface layers with reference to the corresponding layers of ice, which are spaced by 3.68 \text{\AA}.
Setting the boundary between L1 and L2 at $d=3.68$ \text{\AA}\ (very near the first local minimum of $\rho_{\rm inst}(d)$), we arrive at the layer divisions indicated in Fig.~\ref{fig:density-ice-water}a by dashed lines.
Substructure within the ice bilayers can further help to explain peculiarities of the air-water density profile.
We associate the shoulder of the liquid's first peak with L1A molecules of ice's basal face, and the main peak with L1B.
Setting the boundary between L1A and L1B at $d=1.3$ \text{\AA}\ (near an inflection point where the peak shoulder ends), we obtain the sublayer divisions indicated in Fig.~\ref{fig:density-ice-water}a by dotted lines.
For the ideal ice lattice, populations of L1A and L1B are equal.
In the analogy we are proposing, substantial density has shifted from the liquid's L1A into L1B, giving a twofold difference in sublayer-averaged densities, $\rho_{\rm inst}^{\rm L1A} / \rho_{\rm inst}^{\rm L2A} = 0.53$ (Table~SI.2).
The net density of L1 in our definition is slightly lower than that of other layers, $\rho_{\rm inst}^{\rm L1} / \rho_{\rm inst}^{\rm L2} = 0.92$.
Relative peak widths in the liquid density profile can be rationalized within the ice analogy.
The distinctness of molecular layers rapidly attenuates moving toward the translationally symmetric bulk liquid.
Density peaks are expected to broaden as a result, just as successive peaks in $g_{\rm OO}(r)$ broaden with increasing distance.
Bilayer substructure, however, enhances the peak widths of layers that are most ordered.
L1A and L1B are separated in depth at the liquid surface, even if their density contributions strongly overlap.
As crystalline order attenuates moving toward the bulk liquid, bilayers' internal separation weakens.
Resultant merging of A and B sublayers acts to reduce the width of a layer's net density peak.
The comparably broad peaks observed for L1 and L2 in simulations could be viewed as a consequence of these countervailing trends.
Other faces of ice Ih also feature characteristic layering, but their density profiles do not align with the liquid surface as well as the basal face. Density profiles of the primary prismatic and secondary prismatic faces of ice are included in
Figs.~SI.2 and SI.3.
The primary prismatic face, whose boat hexagons are closely related to chair hexagons at the basal face, exhibits similar layering but at slightly wider intervals than the basal plane.
The secondary prismatic face is an especially poor match to the layering of the liquid surface.
The orientational structures at the other ice faces also do not match
the liquid as closely as the basal face (compare, e.g., Fig.~\ref{fig:coscos-summary} to
Fig.~SI.5).
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Results/coscos-Summary}
\caption{
Joint probability distributions $P(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2};d)$ for the orientations of a water molecule's two OH bond vectors, computed for liquid-vapor (top) and ice-vapor (bottom) interfaces. Each plot shows results aggregated for a specific sublayer (L0, L1A, or L1B) and scaled by the probability distribution for an isotropic bulk environment. Black circles show expectations for these sublayers from an ideal lattice, including L0 as a continuation of the periodic L$X$A / L$X$B pattern.
}
\label{fig:coscos-summary}
\end{figure*}
\vspace{0.1cm}
\noindent
\textit{\textbf{Orientational statistics of water's outermost layers.}}
At temperatures of interest ice Ih is proton disordered, so that the directions along which a particular water molecule donates hydrogen bonds are not completely determined. These directions are nonetheless highly constrained by the ordered molecular lattice, allowing only four possible OH bond vector orientations for each molecular lattice site.
At the basal surface of ice, allowed orientations reflect the puckered hexagonal motif that tessellates to form each bilayer of ice.
For the ideal crystal surface, water molecules in L1A sacrifice one potential hydrogen bonding site directly to vapor, while forming hydrogen bonds with their three closest neighbors, all in L1B. The angle $\theta_{\rm OH}^{\rm L1A}$ thus has two possible values, satisfying $\cos \theta_{\rm OH}^{\rm L1A} \approx +1$ and $\cos \theta_{\rm OH}^{\rm L1A} \approx -1/3$, for outward- and inward-pointing OH groups, respectively. Likewise, every water molecule in L1B forms three hydrogen bonds with neighbors in L1A, $\cos \theta_{\rm OH}^{\rm L1B} \approx +1/3$, and forms a fourth hydrogen bond pointing directly away from the surface, $\cos \theta_{\rm OH}^{\rm L1B} \approx -1$, that connects to the subsequent layer, L2A. The oscillatory upward-reaching (L$X$A) and downward-reaching (L$X$B) structure repeats in each ice layer.
This pattern is indicated in Fig. \ref{fig:density-ice-water}b by black circles at allowed angles for each molecular depth in the ideal lattice.
Molecules at the air-water interface show a very similar pattern of orientational preferences.
The singlet distribution $P(\cos\theta_{\rm OH};d)$, plotted in Fig.~\ref{fig:density-ice-water}b, exhibits a series of peaks that align well with the collection of allowed hydrogen bond orientations at ice's basal face.
Relative to ice, peaks of $P(\cos\theta_{\rm OH};d)$ are broad at the liquid's surface, with FWHM $\approx30^{\circ}$ throughout L1 and broader still for L2.
These peaks are most pronounced at depths shifted from ice-based expectations by about 0.5 \text{\AA}\ in L1B and L2.
An ice-like progression of features near $\cos\theta_{\rm OH}=\pm 1$ and $\pm 1/3$ is nonetheless unmistakable.
Some of these orientational similarities have been discussed in earlier work.
In particular, the surface-exposed ``dangling'' OH ($\cos\theta_{\rm OH}= +1$ in L1A) has been demonstrated spectroscopically\cite{Du1993,Du1994,Fan2009}, and the corresponding outward-facing OH of the basal plane of ice was noted by Du et al.\cite{Du1994}.
Fan et al.\cite{Fan2009} further observed in MD simulation that a second layer of water tends to point one OH toward bulk ($\cos\theta_{\rm OH}= -1$ in L1B).
By accounting for surface shape fluctuations, the comparison in Fig.~\ref{fig:density-ice-water}b is much more precise than in previous studies.
The more detailed joint distribution $P(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2};d)$ reinforces this close similarity in orientational structure.
For the ideal crystal, each of the sublayers L1A, L1B, L2A, \ldots allows three possible ordered pairs $(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2})$, namely $(1,-1/3)$, $(-1/3,1)$, and $(-1/3,-1/3)$ in L$X$A, and $(-1,+1/3)$, $(+1/3,-1)$, and $(+1/3,+1/3)$ in L$X$B, indicated by black circles in Fig.~\ref{fig:coscos-summary}.
Simulations of ice at
137 K yield peaks near these expected values for L1A and L1B, broadened by thermal fluctuations.
For L1A at the liquid surface, probability is highest in the same intervals of $(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2})$ as in ice, and likewise for L1B.
This agreement is demonstrated in Fig.~\ref{fig:coscos-summary} by plotting sublayer-resolved histograms, which integrate $\rho_{\rm inst}(d) P(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2};d)$ over the range of $d$ defining a given sublayer. The peaks of these histograms align well for all three of our systems in L1A, and also in L1B. Peaks are unsurprisingly broader and less pronounced in the liquid case.
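For reference, the ideal-lattice markers shown in Figs.~\ref{fig:density-ice-water}b and \ref{fig:coscos-summary} reduce to a small table of values (a sketch of the overlay data, following the ideal geometry described above):
\begin{verbatim}
# Ideal-lattice orientation markers per sublayer: singlet cos(theta)
# values and allowed (cos theta_OH1, cos theta_OH2) pairs
third = 1.0 / 3.0
singlet = {"LXA": [+1.0, -third], "LXB": [-1.0, +third]}
pairs = {"LXA": [(+1.0, -third), (-third, +1.0), (-third, -third)],
         "LXB": [(-1.0, +third), (+third, -1.0), (+third, +third)]}
\end{verbatim}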
\pagebreak[4]
\vspace{0.1cm}
\noindent
\textit{\textbf{Ice-defect features at the liquid surface.}}
Joint angle distributions for finite-temperature ice differ from ideal lattice expectations in several ways.
Because the ice surface at 137 K remains highly ordered overall, we expect that these differences are associated with discrete defects in crystal structure.
Simulations of rigid point-charge water models have revealed a handful of such structural defects on the pristine ice basal surface\cite{Buch2008,Bishop2009,Watkins2011,Pedersen2015,Slater2019}, whose lifetimes and energies have been quantified\cite{Watkins2011,Pedersen2015,Slater2019}.
These defect states preserve or nearly preserve the total number of hydrogen bonds, at the cost of distortions to the idealized tetrahedral bond geometry of ice.
With a low energetic cost, they serve as the primary source of discrete fluctuations at the air-ice interface for low but finite temperatures.
Two of these discrete defects, depicted in Fig.~\ref{fig:defects-4panel}, are particularly helpful for understanding the orientational statistics we have computed.
\begin{figure}[!t]
\centering
\includegraphics[width=\linewidth]{Results/Defects}
\caption{
Discrete defects in ice surface structure that strongly influence statistics of molecular orientation at $T=137$ K.
Hydrogen bonds are indicated in schematics (a) and (c) by green dotted lines.
Joint probability distributions $P(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2};d)$ in (b) and (d) are averaged
over the depth range indicated and scaled by the bulk result, and black circles indicate
expectations from the ideal ice lattice.
(a) Surface self-interstitial defect viewed from the side (the surface normal $\hat{\bf n}$ points upward).
Three members of L1A (orange) each form one hydrogen bond with the admolecule (black) in L0, which sits atop the L1 hexagon as the apex of a pyramid. (b) Distribution of molecular orientation in the L0 layer of ice. Red circles show orientational preferences of the admolecule defect. (c) Top-down view of a vacancy defect (the surface normal $\hat{\bf n}$ points out of the page) at the basal surface of ice. Only molecules in L1 are shown. In this snapshot of a highly variable defect structure, one water (center) forms five strained hydrogen bonds, which require the nearby molecules (gray) to lie nearly parallel to the interface and to reside at a depth between the ideal L1A and L1B sublayers. (d) Distribution of molecular orientation in a narrow range of depth near the L1A/L1B boundary.
}
\label{fig:defects-4panel}
\end{figure}
In the first defect, a surface self-interstitial, a water molecule sits above the outermost bilayer, forming three hydrogen bonds with L1A molecules. This admolecule does not simply continue the periodic structure of the layers beneath it; in such a continuation it would form only one hydrogen bond with L1A, incurring a large energetic penalty.
Instead, the admolecule sacrifices a single bond and, as a result, is oriented much as in L$X$A. Red circles in
Fig.~\ref{fig:defects-4panel}b show these expectations for an admolecule defect, which align very closely with our simulation results for L0 in finite temperature ice.
As a more subtle signature of this defect, the L1A molecules that bond with the admolecule must shift slightly out of their regular tetrahedral orientation, consistent with faint features near $\pm$(0.75, -0.4) in the ice L1A panel of Fig.~\ref{fig:coscos-summary}.
Our results for L0 at the liquid surface show a remarkably similar pattern.
In the liquid case, L0 samples are rare almost by definition: The instantaneous interface would be deformed upwards by an admolecule, which could well be assigned to L1 as a result.
A population of strongly protruding molecules is nonetheless present, and its orientational preferences are most reminiscent of L$X$A.
By comparison with finite-temperature ice, a peak at $(-1/3,-1/3)$ is missing in the joint distribution for liquid L0.
We attribute this absence to the awkwardness of detecting admolecules with a Willard-Chandler interface, which could assign the corresponding population to L1A instead.
The second discrete defect we consider is a vacancy in L1.
The outright removal of an L1A molecule from the ideal crystal surface severs three hydrogen bonds.
To offset this loss, surrounding L1A molecules rearrange to form a number of new bonds that are strained.
This rearrangement is not precisely defined, introducing a variable mix of polygons of degree 4 or higher\cite{Bishop2009}.
Fig.~\ref{fig:defects-4panel}c shows an example vacancy structure, in which one water forms five strained hydrogen bonds.
This molecule and several of its neighbors lie nearly parallel to the surface and sit at a depth near the L1A/L1B boundary.
The L1B joint angle distribution for finite-temperature ice exhibits a peak near $(0,0)$,
which we associate with the vacancy defect. In this orientation, a water molecule's nuclei all lie in a plane parallel to the interface.
The joint histogram in Fig. \ref{fig:defects-4panel}d, which is limited to depths near the L1A/L1B boundary, confirms that parallel orientation is strongest in this inter-sublayer region of the ice surface.
A parallel surface region (L1$^\parallel$) has already been reported for the liquid surface\cite{Willard2010,Ishiyama2012,Kessler2015,Pezzotti2017,Pezzotti2018,Serva2018}, with many similarities to the defective ice structures we have described.
As in ice, the sublayer with enhanced parallel character is situated near the L1A/L1B boundary.
Like the vacancy defect structure, the parallel sublayer exhibits many hydrogen bonds between molecules in the parallel region\cite{Kessler2015,Pezzotti2018,Serva2018}.
The joint angle distributions we have calculated make clear that the parallel region in ice is a small sub-population of L1, made prominent only by limiting attention to the narrow strip of depths between L1A and L1B.
Assessing the population of the liquid's parallel region, however, is made difficult by a lack of sharp features in distributions of depth or orientation.
Indeed, previous definitions of this region vary significantly, as do the resulting counts of molecules it includes.
In SI we examine several classification criteria, their physical basis, and the molecular populations they imply.
This analysis classifies roughly 10-30\% of molecules in liquid L1 as parallel in character.
\vspace{0.1cm}
\noindent
\textit{\textbf{Sub-bilayer surface structure.}}
The dipole ${\bf m}$ of a water molecule -- a linear combination of its two OH vectors -- obeys statistics that are completely determined by the joint distribution $P(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2};d)$.
The depth-resolved average dipole $\braket{m_{\rm n}(d)} = \braket{{\bf m}\cdot \hat{\bf n}({\bf s})}$ at a surface nonetheless reports on hydrogen bond network properties that are only subtly encoded in this distribution. Simulation results for $\braket{m_{\rm n}(d)}$ reveal yet another similarity between liquid water and ice surfaces.
Any model of tetrahedral bonding units whose donating and accepting sites are statistically equivalent gives $\braket{m_{\rm n}(d)}=0$ as a requirement of symmetry.
For models like TIP4P/Ice, the average molecular dipole therefore reports on the statistical differences between donated and accepted hydrogen bonds.
Fig.~\ref{fig:dipole-water-iceT-135} shows the dipole profile computed for the liquid's surface (red), which closely resembles previous results for similar models\cite{Shin-Willard}.
We also show the dipole profile for finite-temperature ice (blue), which recapitulates the most prominent features of the liquid result. Specifically, in both cases $\braket{m_{\rm n}(d)}$ crosses zero in L1A and is minimum near the L1A/L1B boundary.
For ice, patterned structure in $\braket{m_{\rm n}(d)}$ continues many layers from the interface, while for the liquid $\braket{m_{\rm n}(d)}$ is distinguishable from zero only in L1 and L2 ($d < 7$ \text{\AA}).
In the SI, we argue that the ice result indicates distortions in each sublayer that systematically displace upwards-pointing OH groups in L$X$A towards the bulk, and downwards-pointing OH groups in L$X$B towards the surface, with the exception of L1A where the displacement direction is reversed.
For the liquid surface, our ice analogy thus suggests that the dipole profile arises from subtle but systematic displacements of water molecules according to the direction of their hydrogen bonds and the sublayers they occupy.
We also propose a minimal model, $m_{\rm est}(d) = \sum_i \bar{m}_i^{\alpha(d)} \rho_i(d)$ (see SI
Eq.~SI.12), that captures the essential surface dipole structure as simple deviations from L$X$A and L$X$B positions.
\begin{figure}[!t]
\includegraphics[width=\linewidth]{Results/dipole-water-iceT-135.pdf}
\caption{
Average $z$-component of the molecular dipole moment, $\braket{m_{\rm n}(d)}$, computed as a function of depth for the ice surface (137 K, blue) and liquid water surface (298 K, red). Dashed and dotted black lines indicate ice layer boundaries
just as in Fig.~\ref{fig:density-ice-water}. Ice results are shown only at depths where density is sufficient to obtain reliable averages. Systematic sub-bilayer dipole structure is reproduced qualitatively by $m_{\rm est}$ (green), the result of a simple model (see SI) that distinguishes molecules by the layers in which their hydrogen bonding partners reside.
}
\label{fig:dipole-water-iceT-135}
\end{figure}
\vspace{0.1cm}
\noindent
\textit{\textbf{Lateral decoherence of ice-like structure.}}
Structure parallel to the interface is slightly more challenging to characterize than layering in the perpendicular direction of broken symmetry.
The correlation function $S(r)$ in Eq.~\ref{Eq:S_of_r} is designed for this purpose, quantifying the lateral alignment of local structural motifs that in ice are coherently arranged over macroscopic distances.
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{Results/registers-byZ-selectedT}
\caption{
Lateral alignment of hydrogen bonding geometries in the range of depth corresponding to the L1 layer of ice. Results for the bond order parameter correlation function $S(r)$ are shown for ice at a range of temperatures spanned by parallel tempering simulations with the L4 subsurface layer held fixed (including a temperature just above $T_{\rm m}$, at which the system would melt without the ice base constraint). The liquid result at 298 K was obtained without subsurface constraints.
Stars indicate ideal lattice expectations at vertex-vertex distances
$r = \ell [(\sqrt{3} i_x/2 )^2 + ( i_y/2 )^2]^{1/2}$
of the hexagonal lattice that results from projecting the L1 layer onto the plane of the interface. $i_x$ and $i_y$ are integers whose sum must be even, and $\ell = 2.6$ \AA\ is the side length of projected hexagons.
Sharp changes at $r=1.15 \ell$ coincide with the cutoff for identifying bonding partners that contribute to each molecule's bond orientation parameter $q_m$.
}
\label{fig:registers-byz-selectedt}
\end{figure*}
For two lattice sites in the L1 layer of an ideal basal face, separated by a distance $r$, the correlator $S_{\rm ideal}(r)$ has unit magnitude and a sign determined by the sublayers they inhabit: $S_{\rm ideal}(r)=+1$ for two sites in L1A (also for two sites in L1B), while $S_{\rm ideal}(r)=-1$ for sites in different sublayers.
The resulting pattern of $\pm 1$ values is indicated in Fig. \ref{fig:registers-byz-selectedt} by stars. At 137 K this pattern is closely followed. The overall scale of $S(r)$ is slightly diminished, even at $r=0$, because local hydrogen bonding environments are not perfectly equiangular, but strong positive and negative features appear very near those of the ideal lattice. Excursions of molecules away from their ideal lattice sites broaden these features. Such excursions also generate destructive interference where $S_{\rm ideal}(r)=-S_{\rm ideal}(r')$ at nearby distances $r$ and $r'$, so that peaks and troughs in $S(r)$ are shifted slightly away from their ideal locations.
At the liquid surface, $S(r)$ follows the very same pattern of positive and negative features as for finite-temperature ice, but its magnitude decays with $r$ over a scale of $\approx 7$~\text{\AA}. This resemblance demonstrates that ice-like structure at the air-water interface is not limited to vertical layering of density and orientational preferences of individual molecules. Lateral honeycomb patterning akin to ice's basal face is clearly evident in $S(r)$, with a coherence length of 2-3 molecular diameters. This length scale is nearly identical to the depth of coherent layering demonstrated by $\rho_{\rm inst}(d)$ and $P(\cos\theta_{\rm OH};d)$, which decay completely within 7-8 \text{\AA}\ of the liquid's outermost layer.
Fig. \ref{fig:registers-byz-selectedt} shows results as well for $S(r)$ at several temperatures spanning the range from 137 K to ambient temperature. The smooth progression of $S(r)$ with increasing $T$ casts the liquid surface as the endpoint of a gradual disordering of ice that begins already at 137 K. At such low temperature this decorrelation can only be driven by small-amplitude vibrations and sparse discrete defects. Sufficiently accumulated at higher $T$, individual defects are hardly distinct, but Fig. \ref{fig:registers-byz-selectedt} suggests that their accumulation indeed underlies the surface structure of ice near melting. The resemblance between $S(r)$ for ice at $T\approx T_{\rm m}$ and for liquid at 298 K is striking, and would likely be stronger still if constraints on subsurface layers were relaxed in our simulations. Liquid and high-$T$ solid surfaces thus appear to share the semi-structured exterior often described as a quasi-liquid layer on ice. From the perspective advanced in this paper, it could equally well be described as a quasi-ice layer on the surface of liquid water.
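The ideal-lattice reference values (stars in Fig.~\ref{fig:registers-byz-selectedt}) can be generated from the projected honeycomb alone: vertex pairs on the same sublattice contribute $S_{\rm ideal}=+1$ and mixed pairs contribute $S_{\rm ideal}=-1$. A small self-contained sketch, assuming only the projected side length $\ell=2.6$~\text{\AA}\ quoted in the caption:
\begin{verbatim}
# Sketch: S_ideal(r) for the honeycomb obtained by projecting L1 onto the
# interface plane. Same-sublattice pairs give +1, mixed pairs give -1.
import numpy as np

ell = 2.6                              # projected hexagon side length (Angstrom)
a = np.sqrt(3) * ell                   # triangular Bravais lattice constant
a1 = np.array([a, 0.0])
a2 = np.array([a / 2, a * np.sqrt(3) / 2])

sites, label = [], []
for i in range(-4, 5):
    for j in range(-4, 5):
        origin = i * a1 + j * a2
        sites += [origin, origin + np.array([0.0, ell])]  # A and B sublattices
        label += [+1, -1]
sites, label = np.array(sites), np.array(label)

# distances from the central A-sublattice vertex at the origin (label +1),
# so S_ideal for each pair is simply the partner's sublattice label
r = np.linalg.norm(sites, axis=1)
order = np.argsort(r)
for d, s in list(zip(r[order], label[order]))[1:8]:
    print(f"r = {d:5.2f} A   S_ideal = {s:+d}")
# first shells: r = ell (-1), sqrt(3) ell (+1), 2 ell (-1), ...
\end{verbatim}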
\section*{Supplementary Information}
\subsection*{The Joint Cosine Distribution}
The depth-dependent orientational distribution $P(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2};d)$ is
nonuniform due not only to surface effects but also to a geometric bias that exists even in an isotropic bulk environment.
Here we work out the exact form of this geometric bias for a model water molecule whose rigid geometry imposes
an OH bond distance $\ell_{\rm OH}$, an HOH bond angle $\gamma$, and a corresponding HH distance
$\ell_{\rm HH} = 2 \ell_{\rm OH} \sin(\gamma/2)$.
We determine $P_{\rm bulk}(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2})$ by replacing the HOH bond angle
constraint with a smooth potential energy $u(r)$, which allows small but finite deviations of the HH
distance $r$ away from its ideal value $\ell_{\rm HH}$. In the limit of a highly stiff potential, the
constrained result should be recovered.
We first write $P_{\rm bulk}(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2})$ as a marginal of the Boltzmann distribution,
\[
P_{\rm bulk}(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2}) = q^{-1} \int d\hat{\bf R}_1
\int d\hat{\bf R}_2 e^{-\beta u(r)}
\delta(\cos\theta_{{\rm OH}_1}-\hat{\bf R}_1\cdot\hat{\bf z})
\delta(\cos\theta_{{\rm OH}_2}-\hat{\bf R}_2\cdot\hat{\bf z}),
\]
where $\hat{\bf R}_j$ is a unit vector along the OH$_j$ bond, $\beta = 1/(k_{\rm B}T)$, $q$ is the
normalizing partition function
\[
q = \int d\hat{\bf R}_1
\int d\hat{\bf R}_2 e^{-\beta u(r)},
\label{equ:part_fn}
\]
and $r = \ell_{\rm OH} |\hat{\bf R}_1 - \hat{\bf R}_2|$.
Arbitrarily setting the azimuthal angle of OH$_1$ at $\phi_1=0$, we obtain
\[
P_{\rm bulk}(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2}) =
\frac{2\pi}{q}
\int_0^{2\pi} d\phi_2
e^{-\beta u(r)}
\label{equ:marginal}
\]
Given an allowed set of angles $(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2})$,
the fluctuating distance
\[
r = \ell_{\rm OH}[2 - 2(\cos\theta_{{\rm OH}_1}\cos\theta_{{\rm OH}_2} +
\sin\theta_{{\rm OH}_1}\sin\theta_{{\rm OH}_2} \cos\phi_2 )]^{1/2}
\]
is equal to the ideal distance $\ell_{\rm HH}$ at two symmetry-related roots $\phi_2 =
\pm \bar{\phi_2}$.
(For a disallowed pair of OH orientations, no value of $\phi_2$ satisfies $r=\ell_{\rm HH}$.)
For small deviations $\delta\phi_2$ about either of these roots,
\[
r-\ell_{\rm HH} =
\frac{\ell_{\rm OH}^2}{\ell_{\rm HH}}\sin\theta_{{\rm OH}_1}\sin\theta_{{\rm OH}_2} \sin\bar{\phi_2} \delta\phi_2
+ {\cal O}(\delta\phi_2^2)
\]
A change of integration variables in Eq.~\ref{equ:marginal} then gives
\[
P_{\rm bulk}(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2}) =
\frac{4\pi}{q}
\frac{\ell_{\rm HH}}{\ell_{\rm OH}^2}
(\sin\theta_{{\rm OH}_1}\sin\theta_{{\rm OH}_2} \sin\bar{\phi_2})^{-1}
\int dr
e^{-\beta u(r)}
\label{equ:pbulk},
\]
where integration limits have been extended arbitrarily in light of the stiff restraining potential.
This result can be simplified by noting that the roots of $r = \ell_{\rm HH}$ satisfy
\[
\sin\bar{\phi_2} = \left[1-\left(
\frac{1-\cos\theta_{{\rm OH}_1}\cos\theta_{{\rm OH}_2} - \ell_{\rm HH}^2/(2\ell_{\rm OH}^2)}
{\sin\theta_{{\rm OH}_1}\sin\theta_{{\rm OH}_2}}
\right)^2\right]^{1/2}
\label{equ:root}
\]
The partition function $q$ can be similarly evaluated, noting first that the integral
over $\hat{\bf R}_2$ in Eq. \ref{equ:part_fn} is insensitive to $\hat{\bf R}_1$ by symmetry. Choosing $\hat{\bf R}_1 = \hat{\bf z}$ for simplicity, we obtain
\[
q = 8\pi^2 \int d\cos\theta_{{\rm OH}_2}
e^{-\beta u(r)}.
\]
Expanding $r = 2 \ell_{\rm OH} \sin(\theta_{{\rm OH}_2}/2)$ for $\cos\theta_{{\rm OH}_2} \approx \cos\gamma$ and changing the variable of integration then gives
\[
q = 8\pi^2 \frac{\ell_{\rm HH}}{\ell_{\rm OH}^2}
\int dr
e^{-\beta u(r)} ,
\label{equ:part_fn_full}
\]
where limits of integration have again been extended arbitrarily.
Combining Eqs. \ref{equ:pbulk}, \ref{equ:root}, and \ref{equ:part_fn_full}, we finally obtain
\[
P_{\rm bulk}(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2}) =
(2\pi)^{-1}\bigg[
(1-\cos^2\theta_{{\rm OH}_1})(1-\cos^2\theta_{{\rm OH}_2}) -
(1-\cos\theta_{{\rm OH}_1}\cos\theta_{{\rm OH}_2}-b)^2
\bigg]^{-1/2},
\label{equ:joint-cos}
\]
where
\[
b =
\frac{\ell_{\rm HH}^2}{2\ell_{\rm OH}^2}
= 2 \sin^2\left(\frac{\gamma}{2}\right).
\]
For a perfectly tetrahedral molecular geometry, $\cos\gamma = -1/3$ and $b= \frac{4}{3}$.
For TIP4P/Ice, the angle $\gamma = 104.52^{\circ}$ is slightly less obtuse, giving $b\approx 1.25$.
Histograms measured in MD simulations agree well with Eq.~\ref{equ:joint-cos}.
All cosine-cosine plots presented in the manuscript are normalized to the isotropic distribution measured in bulk MD, i.e., we plot $P(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2};d)/P_{\rm bulk}(\cos\theta_{{\rm OH}_1},\cos\theta_{{\rm OH}_2})$.
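As an independent check of Eq.~\ref{equ:joint-cos}, one can sample isotropic orientations of a single rigid molecule and histogram the two cosines directly; the following self-contained sketch (sample size arbitrary) reproduces the analytic result to within sampling noise.
\begin{verbatim}
# Sketch: Monte Carlo check of the analytic P_bulk (Eq. joint-cos).
import numpy as np

rng = np.random.default_rng(0)
gamma = np.radians(104.52)             # TIP4P/Ice HOH angle
b = 2 * np.sin(gamma / 2) ** 2         # b = l_HH^2 / (2 l_OH^2) ~ 1.25

def p_bulk(c1, c2):
    """Analytic result; zero outside the sterically allowed region."""
    arg = (1 - c1**2) * (1 - c2**2) - (1 - c1 * c2 - b) ** 2
    out = np.zeros_like(arg)
    ok = arg > 0
    out[ok] = 1.0 / (2 * np.pi * np.sqrt(arg[ok]))
    return out

# Isotropic rigid-body orientations: OH1 uniform on the unit sphere,
# OH2 uniform on the cone at angle gamma about OH1.
n = 1_000_000
c1 = rng.uniform(-1, 1, n)
ph = rng.uniform(0, 2 * np.pi, n)
s1 = np.sqrt(1 - c1**2)
oh1 = np.stack([s1 * np.cos(ph), s1 * np.sin(ph), c1], axis=1)
ref = np.where(np.abs(oh1[:, 2:3]) < 0.9, [0.0, 0.0, 1.0], [1.0, 0.0, 0.0])
e1 = np.cross(oh1, ref)
e1 /= np.linalg.norm(e1, axis=1, keepdims=True)
e2 = np.cross(oh1, e1)
phi = rng.uniform(0, 2 * np.pi, n)[:, None]
oh2 = np.cos(gamma) * oh1 + np.sin(gamma) * (np.cos(phi) * e1 + np.sin(phi) * e2)

H, xe, ye = np.histogram2d(c1, oh2[:, 2], bins=60,
                           range=[[-1, 1], [-1, 1]], density=True)
# H agrees with p_bulk evaluated at the bin centers, up to sampling noise.
\end{verbatim}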
\clearpage
\subsection*{Extent of parallel tempering}
The parallel tempered simulations exchange over a range of temperatures from 17 K to 287 K.
Fig.~SI.\ref{fig:exchange-limit} presents illustrative temperature histories of four trajectories.
Although significant exchange occurs at all temperatures, simulations that began cold tend to stay cold while those that began warm tend to stay warm, with a soft boundary near 160 K.
Due to the low rate of exchange, the data from temperatures below 137 K are unlikely to be well sampled, and so only temperatures at or above 137 K are used to form conclusions in this work.
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{SI-figures/exchange_limit.png}
\caption{
Representative temperature histories of four trajectories during parallel-tempered simulations of ice, shown as a histogram (top) and as a function of timestep (bottom).
}
\label{fig:exchange-limit}
\end{figure*}
\clearpage
\subsection*{Layering in the ice crystal}
The density profiles of the primary prismatic face (Fig.~SI.\ref{fig:density-PP}) and secondary prismatic face (Fig.~SI.\ref{fig:density-SP}) exhibit clear layering that can also be related to the liquid water surface.
Some layering at intervals of approximately 3 \text{\AA}\ follows from the average bond length of water.
However, neither prismatic face matches the density profile of the liquid water surface as closely as the basal face of ice does.
The density layering at the ice basal surface diminishes smoothly as temperature increases (Fig.~SI.\ref{fig:density-allTs}), with L1A in particular losing clear structure near melting.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{SI-figures/PP-Density-ice-T-135-water.png}
\caption{Comparison of the density profile of the primary prismatic face (blue) with the liquid water surface (red).}
\label{fig:density-PP}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{SI-figures/SP-Density-ice-T-135-water.png}
\caption{Comparison of the density profile of the secondary prismatic face (blue) with the liquid water surface (red).}
\label{fig:density-SP}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\linewidth]{SI-figures/Density-allTs.png}
\caption{
Summary of the change in density profile with increasing temperature according to parallel tempered simulations.
Layers L1 to L3 are displayed here; L4 is held stationary throughout each simulation, which partly accounts for the sharp peak in L3B.
}
\label{fig:density-allTs}
\end{figure}
\clearpage
\subsection*{Orientations in the primary and secondary ice faces}
\begin{figure*}
\centering
\includegraphics[width=\linewidth]{SI-figures/Prismatics-orientations.png}
\caption{Joint OH orientational measurements of the primary prismatic (top) and secondary prismatic (bottom) faces of ice. }
\label{fig:prismatics-orientations}
\end{figure*}
Water molecular orientations measured at the primary and secondary prismatic faces of ice (Fig.~SI.\ref{fig:prismatics-orientations}) are substantially different from those measured at the liquid water surface. In contrast, as discussed in the main text, the basal face of ice exhibits molecular orientations remarkably similar to those of the liquid water surface.
\clearpage
\subsection*{Alternative definitions of L1$^\parallel$}
While a parallel layer at the water surface has been observed elsewhere\cite{Willard2010,Kessler2015,Pezzotti2018}, no consistent boundary is used to define the parallel region.
``Parallel'' in this region means that water molecules tend to point both hydrogens mainly in the plane of the interface, a tendency usually quantified by the angle between the surface normal $\hat n$ and either ${\rm OH}_i$ or the dipole $\mu$.
In crystalline ice, the L1A distribution of $\cos \theta_{{\rm OH}_i}$ is centered around either $+1.0$ or $-0.33$, and the corresponding L1A dipole $\cos \theta_{\mu}$ is centered around either $+0.6$ or $-0.33$.
The peak positions swap in L1B, with peaks in $\cos \theta_{{\rm OH}_i}$ at combinations of $-1.0$ and $+0.33$.
In defining the parallel layer, angular cutoffs therefore need to avoid the known L1A and L1B peaks around $\cos \theta_{{\rm OH}_i} = \pm 0.33$.
A window $\cos \theta^\parallel \in [-0.25,0.25]$ corresponds to an angular range of about 30$^{\circ}$.
At the liquid surface, $\cos \theta_\mu \in [-0.25,0.25]$ suggests that about 32\% of water molecules in L1 are parallel, but requiring both $\cos \theta_{{\rm OH}_i} \in [-0.25, 0.25]$ leaves only about 8.5\% of L1 waters.
The resulting parallel population, 8.5-32\% depending on the criterion, is a minority of L1 waters but roughly double the equivalent measurement in bulk.
A narrow slice of increased parallel character certainly exists near the L1A/B boundary.
The L1$^\parallel$\ region also exhibits a high degree of interconnected hydrogen bonds\cite{Kessler2015,Pezzotti2018,Serva2018}.
For a water molecule sitting in L1$^\parallel$, first-neighbor hydrogen bonds can connect to other water molecules within L1$^\parallel$\ (intra), above L1$^\parallel$\ (upper), or below L1$^\parallel$\ (lower).
Serva et al.\cite{Serva2018} measured that connections between L1$^\parallel$\ molecules (intra) occur at nearly twice the rate observed in bulk liquid.
Setting L1$^\parallel$\ boundaries to $1.0 < d < 2.8\ \text{\AA}$ maximizes ``intra'' bonding by capturing the lower portion of L1A and nearly all of L1B.
The parallel character is likely caused by a mixture of the underlying ice-like structure, the low density of bonding neighbors within L1A, and a reversion to isotropic orientations at higher temperature.
The abnormal increase in ``intra'' hydrogen bonding in L1B can be rationalized as a tendency of L1A molecules to cross into the L1B region (main text Fig.~4 and Fig.~SI.\ref{fig:L1-slices}), and is still consistent with the quasi-ice picture of the liquid water surface.
Table~SI.\ref{Tbl:upperintralower} summarizes upper/intra/lower hydrogen bonding measured at the layer boundaries.
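The bookkeeping behind Table~SI.\ref{Tbl:upperintralower} reduces to binning each hydrogen-bond partner by depth relative to the slab boundaries, as in the sketch below (inputs are hypothetical; the bond list itself would be constructed with the criterion of White et al.\cite{White2000}).
\begin{verbatim}
# Sketch: average upper/intra/lower hydrogen-bond counts for a depth slab.
# Hypothetical inputs: depth[i]; bonds[i] = list of H-bond partner indices
# of molecule i; members = indices of molecules inside [d_lo, d_hi).

def count_upper_intra_lower(depth, bonds, members, d_lo, d_hi):
    up = intra = low = 0
    for i in members:
        for j in bonds[i]:
            if depth[j] < d_lo:        # partner closer to the vapor
                up += 1
            elif depth[j] < d_hi:      # partner inside the same slab
                intra += 1
            else:                      # partner deeper toward bulk
                low += 1
    n = max(len(members), 1)
    return up / n, intra / n, low / n
\end{verbatim}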
Fig.~SI.\ref{fig:L1-slices}, which tracks the progression of orientations through L1, reveals three main regimes: L1A, L1$^\parallel$, and L1B.
The smooth transition suggests that L1A molecules shift to and from L1B by pointing both OH in the plane of the interface and passing through L1$^\parallel$.
While there is a characteristic parallel region between $d=1.0$ and $d=1.6$~\text{\AA}, the total density of molecules in that region is small.
Overall, the majority of liquid L1 water molecules are primarily classified as L1A or L1B, and much of the parallel population can be rationalized as a transition between L1A and L1B positions.
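For concreteness, the window criteria compared above reduce to simple tests on orientation cosines; a minimal sketch with illustrative array names:
\begin{verbatim}
# Sketch: alternative "parallel" classifications for L1 molecules.
# Hypothetical inputs: cos_mu[i] = cos(theta_mu); cos1[i], cos2[i] = OH cosines.
import numpy as np

def parallel_by_dipole(cos_mu, window=0.25):
    """Dipole criterion: |cos theta_mu| <= window (~32% of L1 at 298 K)."""
    return np.abs(cos_mu) <= window

def parallel_by_both_oh(cos1, cos2, window=0.25):
    """Stricter criterion: both OH cosines in the window (~8.5% of L1)."""
    return (np.abs(cos1) <= window) & (np.abs(cos2) <= window)

# fraction of L1 molecules classified as parallel, e.g.:
# frac = parallel_by_dipole(cos_mu[in_L1]).mean()
\end{verbatim}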
\begin{figure*}[t]
\centering
\includegraphics[width=\linewidth]{SI-figures/L1-slices.png}
\caption{
Progression of water molecule orientations by 0.3 \text{\AA}\ slice of the liquid water surface.
Grid lines at $\cos \theta_{{\rm OH}_i}=0$ are emphasized in red.
}
\label{fig:L1-slices}
\end{figure*}
\clearpage
\begin{table*}
\centering
\begin{tabular}{|c|c|c|c|}
\hline
FOR TIP4P/ICE @ 298 K & WIDTH & UPPER / INTRA / LOWER & TOTAL HBONDS \\
\hline
BULK & 1.3 & 1.35 / 0.82 / 1.35 & 3.52 \\
\hline
BULK & 1.8 & 1.20 / 1.13 / 1.19 & 3.52 \\
\hline
BULK & 2.38 & 1.02 / 1.49 / 1.01 & 3.52 \\
\hline
BULK & 3.68 & 0.68 / 2.17 / 0.67 & 3.52 \\
\hline
L1 & 3.68 & 0.01 / 2.43 / 0.72 & 3.16 \\
\hline
L1 $\div$ BULK & 3.68 & 0.01 / 1.12 / 1.07 & 0.90 \\
\hline
L1A & 1.3 & 0.01 / 0.43 / 1.92 & 2.37 \\
\hline
L1A $\div$ BULK & 1.3 & 0.01 / 0.53 / 1.43 & 0.67 \\
\hline
L1$^\parallel$ & 1.8 & 0.32 / 1.81 / 1.07 & 3.20 \\
\hline
L1$^\parallel$ $\div$ BULK & 1.8 & 0.27 / 1.60 / 0.90 & 0.91 \\
\hline
L1B & 2.38 & 0.47 / 1.99 / 0.89 & 3.36 \\
\hline
L1B $\div$ BULK & 2.38 & 0.47 / 1.34 / 0.88 & 0.95 \\
\hline
L2 & 3.68 & 0.67 / 2.21 / 0.66 & 3.53 \\
\hline
L2A & 1.3 & 1.30 / 0.86 / 1.38 & 3.54 \\
\hline
L2B & 2.38 & 1.05 / 1.50 / 0.99 & 3.53 \\
\hline \hline
\end{tabular}
\begin{tabular}{|c|c|c|c|}
\hline
FOR TIP4P/ICE @ 137 K & WIDTH & UPPER / INTRA / LOWER & TOTAL HBONDS \\
\hline
L1 & 3.68 & 0.03 / 2.99 / 0.52 & 3.54 \\
\hline
L1A & 1.3 & 0.06 / 0.11 / 2.90 & 3.07 \\
\hline
L1B & 2.38 & 2.40 / 0.59 / 0.93 & 3.92 \\
\hline
L2 & 3.68 & 0.51 / 2.99 / 0.50 & 4.00 \\
\hline
L2A & 1.3 & 1.01 / 0.01 / 2.98 & 4.01 \\
\hline
L2B & 2.38 & 2.93 / 0.09 / 0.99 & 4.00 \\
\hline
\end{tabular}
\caption{Average upper/intra/lower hydrogen bonding behavior of various layer slices of TIP4P/Ice for liquid (298 K) and ice (137 K, L4 held stationary). The definition of a hydrogen bond follows the procedure of White et al.\cite{White2000}.}
\label{Tbl:upperintralower}
\end{table*}
\begin{table*}
\centering
\begin{tabular}{|c|c|}
\hline
FOR TIP4P/ICE @ 298 K & Layer molecule count relative to count in L4 \\
\hline
L0 $\div$ L4 & 0.005 \\
\hline
L1 $\div$ L4 & 0.94 \\
\hline
L1A$\div$L4 & 0.19 \\
\hline
L1B$\div$L4 & 0.76 \\
\hline
L2$\div$L4 & 1.02 \\
\hline
L2A$\div$L4 & 0.36 \\
\hline
L2B$\div$L4 & 0.66 \\
\hline
L3$\div$L4 & 0.99 \\
\hline
L3A$\div$L4 & 0.36 \\
\hline
L3B$\div$L4 & 0.64 \\
\hline
\end{tabular}
\caption{
Density of water molecules in each surface layer, relative to L4 (effectively bulk liquid).
}
\label{Tbl:liquid-layer-counts}
\end{table*}
\clearpage
\subsection*{Minimal model of the surface dipole: $m_{\rm est}$}
To model the average dipole profile for finite-temperature ice, we classify each water molecule as either ``inter" or ``intra", according to the locations of its hydrogen bonding partners.
An ``intra'' molecule directs both OH groups within its own layer.
An ``inter'' molecule directs one OH group within the layer and the other towards a different layer.
In L$X$A an ``inter" molecule's dipole points strongly up along $\bm{\hat n}$ ($\cos \theta_{\rm OH}\approx 1,\ \cos\theta_\mu\approx 0.58$), while an ``intra" molecule's dipole points more weakly down ($\cos\theta_{\rm OH} = \cos\theta_\mu \approx -1/3$).
In a symmetric way, within L$X$B an ``inter" molecule's dipole points strongly down ($\cos\theta_{\rm OH}\approx -1$), while an ``intra" molecule's dipole points more weakly up ($\cos\theta_{\rm OH}\approx +1/3$).
Less than approximately one in three of the molecules in a given sublayer of ice should be of type ``inter", lest the system as a whole acquire a macroscopically large dipole moment.
For a tetrahedral bonding geometry, this constraint implies that each sublayer is non-dipolar on average.
Systematic displacements of ``inter" and ``intra" molecules in opposite directions, however, can yield a nonzero dipole profile $\braket{m_{\rm n}(d)}\neq 0$ while still respecting the requirement that $\braket{m_{\rm n}(d)}\rho(d)$ integrates to zero over the sublayer.
For layers $X>1$, our simulation results thus suggest that ``inter" molecules in L$X$A shift downwards (towards bulk) while L$X$A ``intra" molecules shift upwards (towards the surface).
Symmetrically in L$X$B, ``inter" molecules instead shift upwards, and ``intra" molecules downwards.
The outermost surface causes a unique exception: L1A reverses the regular dipole pattern, with ``inter" molecules (i.e., those with dangling OH groups) shifting upwards.
To test this understanding of the ice dipole profile, we attempt to reconstruct it as a weighted sum of characteristic dipole moments $\bar{m}_i^\alpha$, where $i$ indicates ``inter" or ``intra", and $\alpha$ indicates a specific sublayer.
This estimate,
\[
m_{\rm est}(d) = \sum_i \bar{m}_i^{\alpha(d)} \rho_i(d),
\label{equ:mest}
\]
is determined by computing $\bar{m}_i^\alpha$ as an average dipole over the corresponding molecular population in computer simulations.
In Eq.~SI.\ref{equ:mest}, ${\rho_{\rm inter}}(d)$ and ${\rho_{\rm intra}}(d)$ are density profiles for ``inter'' and ``intra'' populations, and $\alpha(d)$ returns the sublayer corresponding to depth $d$.
To extend this classification to warmer temperatures, where only near-ice structure is expected, we label a molecule ``intra'' when $|\cos \theta_{{\rm OH}_1} - \cos\theta_{{\rm OH}_2}| < 0.3$, indicating that both OH vectors likely point within the layer, and ``inter'' otherwise.
The comparison in Fig.~5 of the main text suggests that this estimate, and its underlying physical picture, are generally sound.
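The estimator itself is compact; the following sketch shows one possible implementation, in which the sublayer map $\alpha$, bin edges, and input arrays are schematic stand-ins rather than our analysis code.
\begin{verbatim}
# Sketch: m_est(d) = sum_i mbar_i^{alpha(d)} rho_i(d), i in {inter, intra}.
# Hypothetical inputs: depth, cos1, cos2, m_n (dipole component along n) per
# molecule; alpha: vectorized map from depth to an integer sublayer index;
# d_edges: depth-bin edges; area: interfacial area for density normalization.
import numpy as np

def m_est_profile(depth, cos1, cos2, m_n, alpha, d_edges, area):
    inter = np.abs(cos1 - cos2) > 0.3       # classification used in the text
    centers = 0.5 * (d_edges[:-1] + d_edges[1:])
    dz = np.diff(d_edges)
    est = np.zeros_like(centers)
    for k, (c, w) in enumerate(zip(centers, dz)):
        lay = alpha(np.array([c]))[0]       # sublayer containing this bin
        for pop in (inter, ~inter):         # "inter" then "intra"
            in_bin = pop & (depth >= d_edges[k]) & (depth < d_edges[k + 1])
            in_lay = pop & (alpha(depth) == lay)
            if in_lay.any():
                mbar = m_n[in_lay].mean()   # characteristic dipole mbar_i^alpha
                rho = in_bin.sum() / (area * w)   # population density rho_i(d)
                est[k] += mbar * rho
    return centers, est
\end{verbatim}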
We make four observations about the average surface dipole of rigid point-charge water.
First, that there is an oscillating bilayer-by-bilayer dipole pattern at the ice surface at non-zero temperatures.
Second, that the bilayer in ice exhibits sub-bilayer structure due to subtle asymmetries in donor and acceptor geometries.
Third, that the dipole within L1 is similar between ice and liquid water, with L1A highly similar, and that this is likely due to a shared layering-with-defects structure.
And fourth, that the sub-bilayer structure reverses within L1A, consistent with the picture that the ``intra'' molecules dip well below their regular height to cross into L1B, while L1A ``inter'' molecules (those with a dangling hydrogen bond) sit abnormally high.
\section*{Acknowledgements}
\noindent
We thank A.~V.~Luchinsky for interesting discussions
and providing the~models based on QCD factorisation for
\mbox{${\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace\jpsi3\pip2{\ensuremath{\pion^-}}\xspace$} and
\mbox{${\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$}~decays.
We~express our gratitude to our colleagues in the~CERN
accelerator departments for the~excellent performance of the~LHC.
We~thank the~technical and administrative staff at the~LHCb
institutes.
We~acknowledge support from CERN and from the~national
agencies: CAPES, CNPq, FAPERJ and FINEP\,(Brazil);
NSFC\,(China);
CNRS/IN2P3\,(France);
BMBF, DFG and MPG\,(Germany);
INFN\,(Italy);
FOM and NWO\,(The~Netherlands);
MNiSW and NCN\,(Poland);
MEN/IFA\,(Romania);
MinES and FASO\,(Russia);
MinECo\,(Spain);
SNSF and SER\,(Switzerland);
NASU\,(Ukraine);
STFC\,(United Kingdom);
NSF\,(USA).
We~acknowledge the~computing resources that are provided by CERN,
IN2P3\,(France),
KIT and DESY\,(Germany),
INFN\,(Italy),
SURF\,(The~Netherlands),
PIC\,(Spain),
GridPP\,(United Kingdom),
RRCKI and Yandex LLC\,(Russia),
CSCS\,(Switzerland),
IFIN\nobreakdash-HH\,(Romania),
CBPF\,(Brazil),
PL\nobreakdash-GRID\,(Poland) and OSC\,(USA).
We~are indebted to the~communities behind the~multiple open
source software packages on which we depend.
Individual groups or members have received support from
AvH Foundation\,(Germany),
EPLANET, Marie Sk\l{}odowska\nobreakdash-Curie Actions and ERC\,(European Union),
Conseil G\'{e}n\'{e}ral de Haute\nobreakdash-Savoie,
Labex ENIGMASS and OCEVU,
R\'{e}gion Auvergne\,(France),
RFBR and Yandex LLC\,(Russia),
GVA, XuntaGal and GENCAT\,(Spain),
Herchel Smith Fund, The~Royal Society, Royal Commission for the~Exhibition
of 1851 and the~Leverhulme Trust\,(United Kingdom).
\section{Detector and simulation}
\label{sec:Detector}
The~\mbox{LHCb}\xspace detector~\cite{Alves:2008zz,LHCb-DP-2014-002} is a single-arm forward spectrometer
covering the~pseudorapidity range~\mbox{$2<\eta<5$}, designed for the~study of particles
containing {\ensuremath{\Pb}}\xspace or {\ensuremath{\Pc}}\xspace~quarks.
The~detector includes a~high-precision tracking system consisting of
a~silicon-strip vertex detector surrounding the~${\ensuremath{\Pp}}\xspace\proton$~interaction region,
a~large-area silicon-strip detector located upstream of a~dipole magnet
with a~bending power of about~$4{\mathrm{\,Tm}}$,
and three stations of silicon\nobreakdash-strip detectors and straw drift tubes placed
downstream of the~magnet.
The~tracking system provides a~measurement of momentum, \mbox{$p$}\xspace,
of charged particles with a~relative uncertainty that varies from~0.5\%
at low momentum to~1.0\% at~200\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace.
The~minimum distance of a~track to a~primary vertex\,(PV),
the~impact parameter, is measured with
a~resolution of~\mbox{$(15+29/\mbox{$p_{\mathrm{ T}}$}\xspace)\ensuremath{{\,\upmu\mathrm{m}}}\xspace$},
where~\mbox{$p_{\mathrm{ T}}$}\xspace is the~component of the~momentum transverse to the~beam in\,\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace.
Different types of charged hadrons are distinguished using information from
two ring\nobreakdash-imaging Cherenkov detectors\,(RICH).
Photons, electrons and hadrons are identified by a~calorimeter system consisting of
scintillating\nobreakdash-pad and preshower detectors,
an~electromagnetic calorimeter and a~hadronic calorimeter.
Muons are identified by a~system composed of alternating layers
of iron and multiwire proportional chambers.
The~online event selection is performed by a~trigger~\cite{LHCb-DP-2012-004},
which consists of a~hardware stage, based on information from
the~calorimeter and muon systems, followed by a~software stage,
which applies a~full event reconstruction.
The~hardware trigger selects
muon candidates with $\mbox{$p_{\mathrm{ T}}$}\xspace>1.48\,(1.76)\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ or
pairs of opposite-sign muon candidates
with a~requirement that the~product of the~muon transverse momenta
is larger than $1.7\,(2.6)\,\mathrm{GeV}^2/c^2$
for data collected at $\sqrt{s}=7\,(8)\ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}\xspace}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace$.
The subsequent software trigger is composed of two stages,
the~first of which performs a~partial event reconstruction,
while full event reconstruction is done at the~second stage.
In the~software trigger
the~invariant mass of well\nobreakdash-reconstructed pairs of
oppositely charged muons forming a~good\nobreakdash-quality
two\nobreakdash-track vertex is required to exceed 2.7\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace,
and the~two\nobreakdash-track vertex
is required to be significantly displaced from all PVs.
The~analysis technique reported below is validated using simulated events.
In~the~simulation, ${\ensuremath{\Pp}}\xspace\proton$~collisions are generated using
\mbox{\textsc{Pythia}}\xspace~\cite{Sjostrand:2006za,*Sjostrand:2007gs} with a~specific \mbox{LHCb}\xspace configuration~\cite{LHCb-PROC-2010-056}.
Decays of hadronic particles are described by \mbox{\textsc{EvtGen}}\xspace~\cite{Lange:2001uf},
in which final\nobreakdash-state radiation is generated using \mbox{\textsc{Photos}}\xspace~\cite{Golonka:2005pn}.
A~model assuming QCD~factorisation
is implemented to generate the~decays $\mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace$ and $\mbox{$\Bu\to\psitwos\pip\pip\pim$}\xspace$~\cite{Lesha}.
The~interaction of the~generated particles with the~detector and its response
are implemented using the~\mbox{\textsc{Geant4}}\xspace toolkit~\cite{Allison:2006ve, *Agostinelli:2002hh} as described in Ref.~\cite{LHCb-PROC-2011-006}.
\section{Efficiencies and systematic uncertainties}
\label{sec:effic}
The two ratios of branching fractions defined in Eq.~\ref{eq:rate_fivepi} are measured as
\begin{align*}
R_{5\pi} & = \dfrac{N_{{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace 3\pip2{\ensuremath{\pion^-}}\xspace}}{N_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace[\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace]{\ensuremath{\kaon^+}}\xspace}}
\times
\dfrac{\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace[\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace]{\ensuremath{\kaon^+}}\xspace}}{\upvarepsilon_{{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace 3\pip2{\ensuremath{\pion^-}}\xspace}}\times\BF({\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace)~,
\\
R_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace} &= \dfrac{N_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace[\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace]{\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace}}{N_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace[\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace]{\ensuremath{\kaon^+}}\xspace}}
\times\dfrac{\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace[\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace]{\ensuremath{\kaon^+}}\xspace}}{\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace[\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace]{\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace}}~,
\end{align*}
where $N_{X}$ represents the~observed signal yield and $\upvarepsilon_{X}$ denotes
the~efficiency for the~corresponding decay.
The~known value of~$(34.46\,\pm\,0.30)\%$~\cite{PDG} is used for the~\mbox{$\psitwos\to\jpsi\pip\pim$}\xspace branching fraction.
The~efficiency is determined as the~product of the~geometric acceptance
and the~detection, reconstruction, selection and trigger efficiencies.
The~efficiencies for hadron identification as a~function of the~kinematic parameters
and event multiplicity are determined from data, using calibration samples of kaons and pions from the
self-identifying decays ${\ensuremath{\D^{*+}}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\D^0}}\xspace{\ensuremath{\pion^+}}\xspace$~followed by ${\ensuremath{\D^0}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\kaon^-}}\xspace{\ensuremath{\pion^+}}\xspace$~\cite{LHCb-DP-2012-003}.
The~remaining efficiencies are determined using simulated events.
To determine the~overall efficiency for the~\mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace channel,
the~individual efficiencies for the resonant and non-resonant components are averaged
according to the~measured proportions found in the data,
\begin{equation*}
k \equiv \dfrac{N_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace[\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace]{\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace}}{N_{{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace 3\pip2{\ensuremath{\pion^-}}\xspace}} = 0.44\pm0.06~.
\end{equation*}
The~ratio $k$~is calculated taking into account the~correlation in the~observed values in the~numerator and denominator.
The~ratios of the~efficiency for the~normalization channel $\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\kaon^+}}\xspace}$
to the~efficiencies for resonant, $\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace {\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace}$,
and non-resonant decays $\upvarepsilon_{{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace 3\pip2{\ensuremath{\pion^-}}\xspace, \mathrm{NR}}$,
are determined to be
\begin{eqnarray*}
\dfrac{\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\kaon^+}}\xspace}}{\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace {\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace}} &= & 6.75\pm0.13~, \\
\dfrac{\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\kaon^+}}\xspace}}{\upvarepsilon_{{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace 3\pip2{\ensuremath{\pion^-}}\xspace, \mathrm{NR}}} &= & 4.18\pm0.05~.
\end{eqnarray*}
The ratio of efficiencies for the normalisation channel to that of the \mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace~mode is given by
\begin{equation*}
\dfrac{\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\kaon^+}}\xspace}}{\upvarepsilon_{{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace 3\pip2{\ensuremath{\pion^-}}\xspace}} =
k\times\dfrac{\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\kaon^+}}\xspace}}{\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace {\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace}} +
\left(1-k\right)\times\dfrac{\upvarepsilon_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\kaon^+}}\xspace}}{\upvarepsilon_{{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace 3\pip2{\ensuremath{\pion^-}}\xspace,{\mathrm{NR}}}} = 5.31\pm0.06~.
\end{equation*}
The~statistical uncertainty in the ratio $k$~is accounted for in the~calculation of the~statistical uncertainty for the~ratio $R_{5\pi}$.
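For transparency, the weighted ratio and its quoted uncertainty follow from elementary error propagation over the numbers above; the short sketch below reproduces them (the uncertainty on $k$ is excluded, as noted).
\begin{verbatim}
# Sketch: weighted efficiency ratio for the B+ -> J/psi 3pi+ 2pi- channel.
import math

k = 0.44                       # resonant fraction (its 0.06 uncertainty is
                               # propagated into the statistical error instead)
r_res, dr_res = 6.75, 0.13     # eps(psi(2S)K) / eps(psi(2S) pi+ pi+ pi-)
r_nr,  dr_nr  = 4.18, 0.05     # eps(psi(2S)K) / eps(J/psi 5pi, non-resonant)

ratio = k * r_res + (1 - k) * r_nr
dratio = math.hypot(k * dr_res, (1 - k) * dr_nr)
print(f"eps ratio = {ratio:.2f} +/- {dratio:.2f}")   # 5.31 +/- 0.06
\end{verbatim}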
Since the decay products in the channels under study have similar kinematics, many
systematic uncertainties cancel in the~ratio\,(for instance those related to muon identification).
The~different contributions to the~systematic uncertainties affecting this analysis are described below.
The~resulting individual uncertainties are presented in Table~\ref{table:system}.
\begin{table}[t]
\caption{\small Relative systematic uncertainties~(in \%) for the ratios of branching fractions. The total uncertainty is the quadratic sum of the individual components.}
\begin{center}
\begin{tabular}{p{8cm}cc}
Source & $R_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace}$ & $R_{5\pi}$ \\
\hline
Fit model & $4.6$ & $2.4$ \\
Decay model & $5.9$ & $1.1$ \\
Hadron interactions & $2\times1.4$ & $2\times1.4$ \\
Track reconstruction & $1.9$ & $1.8$ \\
Hadron identification & $0.3$ & $0.3$ \\
Size of the simulation sample & $1.9$ & $1.2$ \\
Trigger & $1.1$ & $1.1$ \\
$\BF({\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace)$ & $0.9$ & --- \\
\hline
Total & $8.5$ & $4.7$
\end{tabular}
\end{center}
\label{table:system}
\end{table}
The dominant uncertainty arises from the~imperfect knowledge of the~shape
of the~signal and the background
in the~{\ensuremath{\Bu}}\xspace and {\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace~mass distributions.
The~dependence of the~signal yields on the~fit model is studied by varying the~signal
and background parametrisations.
The~systematic uncertainties are determined for the~ratios of event yields in
different channels by taking the~maximum deviation of the~ratio
obtained with the~alternative model
with respect to the~baseline fit model.
The~uncertainty determined for
$R_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace}$ and $R_{5\pi}$ is $4.6\%$ and $2.4\%$, respectively.
To assess the systematic uncertainty related to the~\mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace\,(\mbox{$\Bu\to\psitwos\pip\pip\pim$}\xspace)~decay model
used in the~simulation, the~reconstructed mass distribution of the~three\nobreakdash-pion\,(five\nobreakdash-pion) system
in simulated events is reweighted to reproduce the~distribution observed in data.
There is a~maximum change in efficiency of~$5.9\%$ for the~resonant mode
and~$4.7\%$ for the~non\nobreakdash-resonant mode leading to a~$1.1\%$ change
in the~total efficiency, which is taken as the~systematic uncertainty for the~decay model.
Further uncertainties arise from the differences between data and simulation,
in particular those affecting the efficiency for the reconstruction of charged-particle tracks.
The~first uncertainty arises from the simulation of hadronic interactions in the detector,
which has an~uncertainty of~$1.4\%$ per track~\cite{LHCb-DP-2013-002}.
Since the signal and normalisation channels differ by two tracks in the~final state,
the~corresponding uncertainty is assigned to be $2.8\%$.
The~small difference in the track\nobreakdash-finding efficiency between data and simulation
is corrected using a~data\nobreakdash-driven technique~\cite{LHCb-DP-2013-002}.
The~uncertainties in the~correction factors are propagated to
the~efficiency ratios by means of pseudoexperiments.
This~results in a~systematic uncertainty
of $1.9\%$ and $1.8\%$ for the ratios of $R_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace}$ and $R_{5\pi}$, respectively.
The uncertainties on the efficiency of hadron identification due to the~limited size of the calibration sample
are also propagated to the efficiency ratios by means of pseudoexperiments.
The~resulting uncertainties are equal to $0.3\%$ for both branching fraction ratios.
Additional uncertainties related to the~limited size of the simulation sample are $1.9\%$ and $1.2\%$ for $R_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace}$ and $R_{5\pi}$, respectively.
The trigger is highly efficient in selecting decays with two muons in the~final state.
The~trigger efficiency for events with a~\mbox{${\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Pmu^+\Pmu^-}}\xspace$} produced in beauty hadron decays
is studied using data in high\nobreakdash-yield modes and
a~systematic uncertainty of $1.1\%$ is \mbox{assigned} based on
the~comparison of the~ratio of trigger efficiencies for
high\nobreakdash-yield samples of~\mbox{${\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\kaon^+}}\xspace$} and \mbox{${\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\kaon^+}}\xspace$}~decays
in data and simulation~\cite{LHCb-PAPER-2012-010}.
\section{Introduction}
\label{sec:Introduction}
The~{\ensuremath{\B^+}}\xspace~meson is a bound state of a heavy ${\ensuremath{\overline \bquark}}\xspace$~quark and a ${\ensuremath{\Pu}}\xspace$~quark, with
well known properties and a large number of decay modes~\cite{PDG},
but little is known about decays of {\ensuremath{\Bu}}\xspace~mesons to a~{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace~meson plus
a~large number of light hadrons.
The~\mbox{${\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace\jpsi3\pip2{\ensuremath{\pion^-}}\xspace$}~decay channel is of particular interest,
since it is one of the~highest multiplicity final states currently experimentally accessible.
Evidence for the corresponding decay of the~{\ensuremath{\B_\cquark^+}}\xspace~meson has recently been reported by
the~LHCb collaboration~\cite{LHCb-PAPER-2014-009},
with the~measured branching fraction
and qualitative behaviour of the multipion system consistent
with expectations from QCD factorisation~\cite{Bauer:1986bm,Wirbel:1988ft}.
In~this~scheme, the~\mbox{${\ensuremath{\B_\cquark^+}}\xspace\ensuremath{\rightarrow}\xspace\jpsi3\pip2{\ensuremath{\pion^-}}\xspace$}~decay
is characterized by the~form factors of the~\mbox{${\ensuremath{\B_\cquark^+}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace\PW^+$} transition and
the~spectral functions for the~conversion of the~$\PW^+$~boson into
light hadrons~\cite{Lesha,Likhoded:2013iua,Berezhnoy:2011is,Likhoded:2009ib}.
Different decay topologies contribute to decays of {\ensuremath{\B^+}}\xspace~mesons
into charmonia and light hadrons, affecting
the~dynamics of the multipion system and enabling
the~role of factorisation in {\ensuremath{\B^+}}\xspace~meson decays to be probed.
This~paper describes an~analysis of \mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace~decays,
including decays to the~same final state that proceed through an~intermediate {\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace~resonance.
Charge\nobreakdash-conjugate modes are implied throughout the paper.
The~ratios of the branching fractions for each of these decays to that of the~normalisation decay
${\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\kaon^+}}\xspace$,
\begin{align}
\begin{split}
R_{5\pi} &\equiv\frac{\BF(\mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace)}{\BF(\mbox{$\Bu\to\psitwos\Kp$}\xspace)}~, \\
R_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace} &\equiv\frac{\BF(\mbox{$\Bu\to\psitwos\pip\pip\pim$}\xspace)}{\BF(\mbox{$\Bu\to\psitwos\Kp$}\xspace)}~,
\end{split}
\label{eq:rate_fivepi}
\end{align}
are measured,
where the ${\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace$~meson is reconstructed in the ${\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$~final state and the
${\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace$~meson~is reconstructed in its dimuon decay channel.
In~addition, a~search for intermediate resonances
in the~multipion system is performed
and a~phase\nobreakdash-space model is compared to the~data and to
the~predictions from QCD factorisation~\cite
{Bauer:1986bm,Wirbel:1988ft,Lesha,Likhoded:2013iua,Berezhnoy:2011is,Likhoded:2009ib}.
The~results are based on ${\ensuremath{\Pp}}\xspace\proton$~collision data corresponding to an~integrated
luminosity of $1.0\ensuremath{\mbox{\,fb}^{-1}}\xspace$ and $2.0\ensuremath{\mbox{\,fb}^{-1}}\xspace$ collected by the~\mbox{LHCb}\xspace experiment
at centre\nobreakdash-of\nobreakdash-mass energies of $\sqrt{s}=7\ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}\xspace}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace$ and $8\ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}\xspace}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace$, respectively.
\texttt{\textbackslash Kpm} & {\ensuremath{\kaon^\pm}}\xspace & \texttt{\textbackslash Kmp} & {\ensuremath{\kaon^\mp}}\xspace & \texttt{\textbackslash KS} & {\ensuremath{\kaon^0_{\mathrm{ \scriptscriptstyle S}}}}\xspace \\
\texttt{\textbackslash KL} & {\ensuremath{\kaon^0_{\mathrm{ \scriptscriptstyle L}}}}\xspace & \texttt{\textbackslash Kstarz} & {\ensuremath{\kaon^{*0}}}\xspace & \texttt{\textbackslash Kstarzb} & {\ensuremath{\Kbar{}^{*0}}}\xspace \\
\texttt{\textbackslash Kstar} & {\ensuremath{\kaon^*}}\xspace & \texttt{\textbackslash Kstarb} & {\ensuremath{\Kbar{}^*}}\xspace & \texttt{\textbackslash Kstarp} & {\ensuremath{\kaon^{*+}}}\xspace \\
\texttt{\textbackslash Kstarm} & {\ensuremath{\kaon^{*-}}}\xspace & \texttt{\textbackslash Kstarpm} & {\ensuremath{\kaon^{*\pm}}}\xspace & \texttt{\textbackslash Kstarmp} & {\ensuremath{\kaon^{*\mp}}}\xspace \\
\texttt{\textbackslash etaz} & \ensuremath{\Peta}\xspace & \texttt{\textbackslash etapr} & \ensuremath{\Peta^{\prime}}\xspace & \texttt{\textbackslash phiz} & \ensuremath{\Pphi}\xspace \\
\texttt{\textbackslash omegaz} & \ensuremath{\Pomega}\xspace & \\
\end{tabular*}
\subsubsection{Heavy mesons}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash D} & {\ensuremath{\PD}}\xspace & \texttt{\textbackslash Db} & {\ensuremath{\Dbar}}\xspace & \texttt{\textbackslash DorDbar} & \kern 0.18em\optbar{\kern -0.18em D}{}\xspace \\
\texttt{\textbackslash Dz} & {\ensuremath{\D^0}}\xspace & \texttt{\textbackslash Dzb} & {\ensuremath{\Dbar{}^0}}\xspace & \texttt{\textbackslash Dp} & {\ensuremath{\D^+}}\xspace \\
\texttt{\textbackslash Dm} & {\ensuremath{\D^-}}\xspace & \texttt{\textbackslash Dpm} & {\ensuremath{\D^\pm}}\xspace & \texttt{\textbackslash Dmp} & {\ensuremath{\D^\mp}}\xspace \\
\texttt{\textbackslash Dstar} & {\ensuremath{\D^*}}\xspace & \texttt{\textbackslash Dstarb} & {\ensuremath{\Dbar{}^*}}\xspace & \texttt{\textbackslash Dstarz} & {\ensuremath{\D^{*0}}}\xspace \\
\texttt{\textbackslash Dstarzb} & {\ensuremath{\Dbar{}^{*0}}}\xspace & \texttt{\textbackslash Dstarp} & {\ensuremath{\D^{*+}}}\xspace & \texttt{\textbackslash Dstarm} & {\ensuremath{\D^{*-}}}\xspace \\
\texttt{\textbackslash Dstarpm} & {\ensuremath{\D^{*\pm}}}\xspace & \texttt{\textbackslash Dstarmp} & {\ensuremath{\D^{*\mp}}}\xspace & \texttt{\textbackslash Ds} & {\ensuremath{\D^+_\squark}}\xspace \\
\texttt{\textbackslash Dsp} & {\ensuremath{\D^+_\squark}}\xspace & \texttt{\textbackslash Dsm} & {\ensuremath{\D^-_\squark}}\xspace & \texttt{\textbackslash Dspm} & {\ensuremath{\D^{\pm}_\squark}}\xspace \\
\texttt{\textbackslash Dsmp} & {\ensuremath{\D^{\mp}_\squark}}\xspace & \texttt{\textbackslash Dss} & {\ensuremath{\D^{*+}_\squark}}\xspace & \texttt{\textbackslash Dssp} & {\ensuremath{\D^{*+}_\squark}}\xspace \\
\texttt{\textbackslash Dssm} & {\ensuremath{\D^{*-}_\squark}}\xspace & \texttt{\textbackslash Dsspm} & {\ensuremath{\D^{*\pm}_\squark}}\xspace & \texttt{\textbackslash Dssmp} & {\ensuremath{\D^{*\mp}_\squark}}\xspace \\
\texttt{\textbackslash B} & {\ensuremath{\PB}}\xspace & \texttt{\textbackslash Bbar} & {\ensuremath{\kern 0.18em\overline{\kern -0.18em \PB}{}}}\xspace & \texttt{\textbackslash Bb} & {\ensuremath{\Bbar}}\xspace \\
\texttt{\textbackslash BorBbar} & \kern 0.18em\optbar{\kern -0.18em B}{}\xspace & \texttt{\textbackslash Bz} & {\ensuremath{\B^0}}\xspace & \texttt{\textbackslash Bzb} & {\ensuremath{\Bbar{}^0}}\xspace \\
\texttt{\textbackslash Bu} & {\ensuremath{\B^+}}\xspace & \texttt{\textbackslash Bub} & {\ensuremath{\B^-}}\xspace & \texttt{\textbackslash Bp} & {\ensuremath{\Bu}}\xspace \\
\texttt{\textbackslash Bm} & {\ensuremath{\Bub}}\xspace & \texttt{\textbackslash Bpm} & {\ensuremath{\B^\pm}}\xspace & \texttt{\textbackslash Bmp} & {\ensuremath{\B^\mp}}\xspace \\
\texttt{\textbackslash Bd} & {\ensuremath{\B^0}}\xspace & \texttt{\textbackslash Bs} & {\ensuremath{\B^0_\squark}}\xspace & \texttt{\textbackslash Bsb} & {\ensuremath{\Bbar{}^0_\squark}}\xspace \\
\texttt{\textbackslash Bdb} & {\ensuremath{\Bbar{}^0}}\xspace & \texttt{\textbackslash Bc} & {\ensuremath{\B_\cquark^+}}\xspace & \texttt{\textbackslash Bcp} & {\ensuremath{\B_\cquark^+}}\xspace \\
\texttt{\textbackslash Bcm} & {\ensuremath{\B_\cquark^-}}\xspace & \texttt{\textbackslash Bcpm} & {\ensuremath{\B_\cquark^\pm}}\xspace & \\
\end{tabular*}
\subsubsection{Onia}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash jpsi} & {\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace & \texttt{\textbackslash psitwos} & {\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace & \texttt{\textbackslash psiprpr} & {\ensuremath{\Ppsi(3770)}}\xspace \\
\texttt{\textbackslash etac} & {\ensuremath{\Peta_\cquark}}\xspace & \texttt{\textbackslash chiczero} & {\ensuremath{\Pchi_{\cquark 0}}}\xspace & \texttt{\textbackslash chicone} & {\ensuremath{\Pchi_{\cquark 1}}}\xspace \\
\texttt{\textbackslash chictwo} & {\ensuremath{\Pchi_{\cquark 2}}}\xspace & \texttt{\textbackslash OneS} & {\Y1S} & \texttt{\textbackslash TwoS} & {\Y2S} \\
\texttt{\textbackslash ThreeS} & {\Y3S} & \texttt{\textbackslash FourS} & {\Y4S} & \texttt{\textbackslash FiveS} & {\Y5S} \\
\texttt{\textbackslash chic} & {\ensuremath{\Pchi_{c}}}\xspace & \\
\end{tabular*}
\subsubsection{Baryons}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash proton} & {\ensuremath{\Pp}}\xspace & \texttt{\textbackslash antiproton} & {\ensuremath{\overline \proton}}\xspace & \texttt{\textbackslash neutron} & {\ensuremath{\Pn}}\xspace \\
\texttt{\textbackslash antineutron} & {\ensuremath{\overline \neutron}}\xspace & \texttt{\textbackslash Deltares} & {\ensuremath{\PDelta}}\xspace & \texttt{\textbackslash Deltaresbar} & {\ensuremath{\overline \Deltares}}\xspace \\
\texttt{\textbackslash Xires} & {\ensuremath{\PXi}}\xspace & \texttt{\textbackslash Xiresbar} & {\ensuremath{\overline \Xires}}\xspace & \texttt{\textbackslash Lz} & {\ensuremath{\PLambda}}\xspace \\
\texttt{\textbackslash Lbar} & {\ensuremath{\kern 0.1em\overline{\kern -0.1em\PLambda}}}\xspace & \texttt{\textbackslash LorLbar} & \kern 0.18em\optbar{\kern -0.18em \PLambda}{}\xspace & \texttt{\textbackslash Lambdares} & {\ensuremath{\PLambda}}\xspace \\
\texttt{\textbackslash Lambdaresbar} & {\ensuremath{\Lbar}}\xspace & \texttt{\textbackslash Sigmares} & {\ensuremath{\PSigma}}\xspace & \texttt{\textbackslash Sigmaresbar} & {\ensuremath{\overline \Sigmares}}\xspace \\
\texttt{\textbackslash Omegares} & {\ensuremath{\POmega}}\xspace & \texttt{\textbackslash Omegaresbar} & {\ensuremath{\overline \POmega}}\xspace & \texttt{\textbackslash Lb} & {\ensuremath{\Lz^0_\bquark}}\xspace \\
\texttt{\textbackslash Lbbar} & {\ensuremath{\Lbar{}^0_\bquark}}\xspace & \texttt{\textbackslash Lc} & {\ensuremath{\Lz^+_\cquark}}\xspace & \texttt{\textbackslash Lcbar} & {\ensuremath{\Lbar{}^-_\cquark}}\xspace \\
\texttt{\textbackslash Xib} & {\ensuremath{\Xires_\bquark}}\xspace & \texttt{\textbackslash Xibz} & {\ensuremath{\Xires^0_\bquark}}\xspace & \texttt{\textbackslash Xibm} & {\ensuremath{\Xires^-_\bquark}}\xspace \\
\texttt{\textbackslash Xibbar} & {\ensuremath{\Xiresbar{}_\bquark}}\xspace & \texttt{\textbackslash Xibbarz} & {\ensuremath{\Xiresbar{}_\bquark^0}}\xspace & \texttt{\textbackslash Xibbarp} & {\ensuremath{\Xiresbar{}_\bquark^+}}\xspace \\
\texttt{\textbackslash Xic} & {\ensuremath{\Xires_\cquark}}\xspace & \texttt{\textbackslash Xicz} & {\ensuremath{\Xires^0_\cquark}}\xspace & \texttt{\textbackslash Xicp} & {\ensuremath{\Xires^+_\cquark}}\xspace \\
\texttt{\textbackslash Xicbar} & {\ensuremath{\Xiresbar{}_\cquark}}\xspace & \texttt{\textbackslash Xicbarz} & {\ensuremath{\Xiresbar{}_\cquark^0}}\xspace & \texttt{\textbackslash Xicbarm} & {\ensuremath{\Xiresbar{}_\cquark^-}}\xspace \\
\texttt{\textbackslash Omegac} & {\ensuremath{\Omegares^0_\cquark}}\xspace & \texttt{\textbackslash Omegacbar} & {\ensuremath{\Omegaresbar{}_\cquark^0}}\xspace & \texttt{\textbackslash Omegab} & {\ensuremath{\Omegares^-_\bquark}}\xspace \\
\texttt{\textbackslash Omegabbar} & {\ensuremath{\Omegaresbar{}_\bquark^+}}\xspace & \\
\end{tabular*}
\subsection{Physics symbols}
\subsubsection{Decays}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash BF} & {\ensuremath{\mathcal{B}}}\xspace & \texttt{\textbackslash BRvis} & {\ensuremath{\BF_{\mathrm{{vis}}}}} & \texttt{\textbackslash BR} & \BF \\
\texttt{\textbackslash decay[2] \textbackslash decay\{\Pa\}\{\Pb \Pc\}} & \decay{\Pa}{\Pb \Pc} & \texttt{\textbackslash ra} & \ensuremath{\rightarrow}\xspace & \texttt{\textbackslash to} & \ensuremath{\rightarrow}\xspace \\
\end{tabular*}
\subsubsection{Lifetimes}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash tauBs} & {\ensuremath{\tau_{{\ensuremath{\B^0_\squark}}\xspace}}}\xspace & \texttt{\textbackslash tauBd} & {\ensuremath{\tau_{{\ensuremath{\B^0}}\xspace}}}\xspace & \texttt{\textbackslash tauBz} & {\ensuremath{\tau_{{\ensuremath{\B^0}}\xspace}}}\xspace \\
\texttt{\textbackslash tauBu} & {\ensuremath{\tau_{{\ensuremath{\Bu}}\xspace}}}\xspace & \texttt{\textbackslash tauDp} & {\ensuremath{\tau_{{\ensuremath{\D^+}}\xspace}}}\xspace & \texttt{\textbackslash tauDz} & {\ensuremath{\tau_{{\ensuremath{\D^0}}\xspace}}}\xspace \\
\texttt{\textbackslash tauL} & {\ensuremath{\tau_{\mathrm{ L}}}}\xspace & \texttt{\textbackslash tauH} & {\ensuremath{\tau_{\mathrm{ H}}}}\xspace & \\
\end{tabular*}
\subsubsection{Masses}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash mBd} & {\ensuremath{m_{{\ensuremath{\B^0}}\xspace}}}\xspace & \texttt{\textbackslash mBp} & {\ensuremath{m_{{\ensuremath{\Bu}}\xspace}}}\xspace & \texttt{\textbackslash mBs} & {\ensuremath{m_{{\ensuremath{\B^0_\squark}}\xspace}}}\xspace \\
\texttt{\textbackslash mBc} & {\ensuremath{m_{{\ensuremath{\B_\cquark^+}}\xspace}}}\xspace & \texttt{\textbackslash mLb} & {\ensuremath{m_{{\ensuremath{\Lz^0_\bquark}}\xspace}}}\xspace & \\
\end{tabular*}
\subsubsection{EW theory, groups}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash grpsuthree} & {\ensuremath{\mathrm{SU}(3)}}\xspace & \texttt{\textbackslash grpsutw} & {\ensuremath{\mathrm{SU}(2)}}\xspace & \texttt{\textbackslash grpuone} & {\ensuremath{\mathrm{U}(1)}}\xspace \\
\texttt{\textbackslash ssqtw} & {\ensuremath{\sin^{2}\!\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash csqtw} & {\ensuremath{\cos^{2}\!\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash stw} & {\ensuremath{\sin\theta_{\mathrm{W}}}}\xspace \\
\texttt{\textbackslash ctw} & {\ensuremath{\cos\theta_{\mathrm{W}}}}\xspace & \texttt{\textbackslash ssqtwef} & {\ensuremath{{\sin}^{2}\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash csqtwef} & {\ensuremath{{\cos}^{2}\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace \\
\texttt{\textbackslash stwef} & {\ensuremath{\sin\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash ctwef} & {\ensuremath{\cos\theta_{\mathrm{W}}^{\mathrm{eff}}}}\xspace & \texttt{\textbackslash gv} & {\ensuremath{g_{\mbox{\tiny V}}}}\xspace \\
\texttt{\textbackslash ga} & {\ensuremath{g_{\mbox{\tiny A}}}}\xspace & \texttt{\textbackslash order} & {\ensuremath{\mathcal{O}}}\xspace & \texttt{\textbackslash ordalph} & {\ensuremath{\mathcal{O}(\alpha)}}\xspace \\
\texttt{\textbackslash ordalsq} & {\ensuremath{\mathcal{O}(\alpha^{2})}}\xspace & \texttt{\textbackslash ordalcb} & {\ensuremath{\mathcal{O}(\alpha^{3})}}\xspace & \\
\end{tabular*}
\subsubsection{QCD parameters}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash as} & {\ensuremath{\alpha_s}}\xspace & \texttt{\textbackslash MSb} & {\ensuremath{\overline{\mathrm{MS}}}}\xspace & \texttt{\textbackslash lqcd} & {\ensuremath{\Lambda_{\mathrm{QCD}}}}\xspace \\
\texttt{\textbackslash qsq} & {\ensuremath{q^2}}\xspace & \\
\end{tabular*}
\subsubsection{CKM, CP violation}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash eps} & {\ensuremath{\varepsilon}}\xspace & \texttt{\textbackslash epsK} & {\ensuremath{\varepsilon_K}}\xspace & \texttt{\textbackslash epsB} & {\ensuremath{\varepsilon_B}}\xspace \\
\texttt{\textbackslash epsp} & {\ensuremath{\varepsilon^\prime_K}}\xspace & \texttt{\textbackslash CP} & {\ensuremath{C\!P}}\xspace & \texttt{\textbackslash CPT} & {\ensuremath{C\!PT}}\xspace \\
\texttt{\textbackslash rhobar} & {\ensuremath{\overline \rho}}\xspace & \texttt{\textbackslash etabar} & {\ensuremath{\overline \eta}}\xspace & \texttt{\textbackslash Vud} & {\ensuremath{V_{\uquark\dquark}}}\xspace \\
\texttt{\textbackslash Vcd} & {\ensuremath{V_{\cquark\dquark}}}\xspace & \texttt{\textbackslash Vtd} & {\ensuremath{V_{\tquark\dquark}}}\xspace & \texttt{\textbackslash Vus} & {\ensuremath{V_{\uquark\squark}}}\xspace \\
\texttt{\textbackslash Vcs} & {\ensuremath{V_{\cquark\squark}}}\xspace & \texttt{\textbackslash Vts} & {\ensuremath{V_{\tquark\squark}}}\xspace & \texttt{\textbackslash Vub} & {\ensuremath{V_{\uquark\bquark}}}\xspace \\
\texttt{\textbackslash Vcb} & {\ensuremath{V_{\cquark\bquark}}}\xspace & \texttt{\textbackslash Vtb} & {\ensuremath{V_{\tquark\bquark}}}\xspace & \texttt{\textbackslash Vuds} & {\ensuremath{V_{\uquark\dquark}^\ast}}\xspace \\
\texttt{\textbackslash Vcds} & {\ensuremath{V_{\cquark\dquark}^\ast}}\xspace & \texttt{\textbackslash Vtds} & {\ensuremath{V_{\tquark\dquark}^\ast}}\xspace & \texttt{\textbackslash Vuss} & {\ensuremath{V_{\uquark\squark}^\ast}}\xspace \\
\texttt{\textbackslash Vcss} & {\ensuremath{V_{\cquark\squark}^\ast}}\xspace & \texttt{\textbackslash Vtss} & {\ensuremath{V_{\tquark\squark}^\ast}}\xspace & \texttt{\textbackslash Vubs} & {\ensuremath{V_{\uquark\bquark}^\ast}}\xspace \\
\texttt{\textbackslash Vcbs} & {\ensuremath{V_{\cquark\bquark}^\ast}}\xspace & \texttt{\textbackslash Vtbs} & {\ensuremath{V_{\tquark\bquark}^\ast}}\xspace & \\
\end{tabular*}
\subsubsection{Oscillations}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash dm} & {\ensuremath{\Delta m}}\xspace & \texttt{\textbackslash dms} & {\ensuremath{\Delta m_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash dmd} & {\ensuremath{\Delta m_{{\ensuremath{\Pd}}\xspace}}}\xspace \\
\texttt{\textbackslash DG} & {\ensuremath{\Delta\Gamma}}\xspace & \texttt{\textbackslash DGs} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash DGd} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Pd}}\xspace}}}\xspace \\
\texttt{\textbackslash Gs} & {\ensuremath{\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash Gd} & {\ensuremath{\Gamma_{{\ensuremath{\Pd}}\xspace}}}\xspace & \texttt{\textbackslash MBq} & {\ensuremath{M_{{\ensuremath{\PB}}\xspace_{\ensuremath{\Pq}}\xspace}}}\xspace \\
\texttt{\textbackslash DGq} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Pq}}\xspace}}}\xspace & \texttt{\textbackslash Gq} & {\ensuremath{\Gamma_{{\ensuremath{\Pq}}\xspace}}}\xspace & \texttt{\textbackslash dmq} & {\ensuremath{\Delta m_{{\ensuremath{\Pq}}\xspace}}}\xspace \\
\texttt{\textbackslash GL} & {\ensuremath{\Gamma_{\mathrm{ L}}}}\xspace & \texttt{\textbackslash GH} & {\ensuremath{\Gamma_{\mathrm{ H}}}}\xspace & \texttt{\textbackslash DGsGs} & {\ensuremath{\Delta\Gamma_{{\ensuremath{\Ps}}\xspace}/\Gamma_{{\ensuremath{\Ps}}\xspace}}}\xspace \\
\texttt{\textbackslash Delm} & {\mbox{$\Delta m $}}\xspace & \texttt{\textbackslash ACP} & {\ensuremath{{\mathcal{A}}^{{\ensuremath{C\!P}}\xspace}}}\xspace & \texttt{\textbackslash Adir} & {\ensuremath{{\mathcal{A}}^{\mathrm{ dir}}}}\xspace \\
\texttt{\textbackslash Amix} & {\ensuremath{{\mathcal{A}}^{\mathrm{ mix}}}}\xspace & \texttt{\textbackslash ADelta} & {\ensuremath{{\mathcal{A}}^\Delta}}\xspace & \texttt{\textbackslash phid} & {\ensuremath{\phi_{{\ensuremath{\Pd}}\xspace}}}\xspace \\
\texttt{\textbackslash sinphid} & {\ensuremath{\sin\!\phid}}\xspace & \texttt{\textbackslash phis} & {\ensuremath{\phi_{{\ensuremath{\Ps}}\xspace}}}\xspace & \texttt{\textbackslash betas} & {\ensuremath{\beta_{{\ensuremath{\Ps}}\xspace}}}\xspace \\
\texttt{\textbackslash sbetas} & {\ensuremath{\sigma(\beta_{{\ensuremath{\Ps}}\xspace})}}\xspace & \texttt{\textbackslash stbetas} & {\ensuremath{\sigma(2\beta_{{\ensuremath{\Ps}}\xspace})}}\xspace & \texttt{\textbackslash stphis} & {\ensuremath{\sigma(\phi_{{\ensuremath{\Ps}}\xspace})}}\xspace \\
\texttt{\textbackslash sinphis} & {\ensuremath{\sin\!\phis}}\xspace & \\
\end{tabular*}
\subsubsection{Tagging}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash edet} & {\ensuremath{\varepsilon_{\mathrm{ det}}}}\xspace & \texttt{\textbackslash erec} & {\ensuremath{\varepsilon_{\mathrm{ rec/det}}}}\xspace & \texttt{\textbackslash esel} & {\ensuremath{\varepsilon_{\mathrm{ sel/rec}}}}\xspace \\
\texttt{\textbackslash etrg} & {\ensuremath{\varepsilon_{\mathrm{ trg/sel}}}}\xspace & \texttt{\textbackslash etot} & {\ensuremath{\varepsilon_{\mathrm{ tot}}}}\xspace & \texttt{\textbackslash mistag} & \ensuremath{\omega}\xspace \\
\texttt{\textbackslash wcomb} & \ensuremath{\omega^{\mathrm{comb}}}\xspace & \texttt{\textbackslash etag} & {\ensuremath{\varepsilon_{\mathrm{tag}}}}\xspace & \texttt{\textbackslash etagcomb} & {\ensuremath{\varepsilon_{\mathrm{tag}}^{\mathrm{comb}}}}\xspace \\
\texttt{\textbackslash effeff} & \ensuremath{\varepsilon_{\mathrm{eff}}}\xspace & \texttt{\textbackslash effeffcomb} & \ensuremath{\varepsilon_{\mathrm{eff}}^{\mathrm{comb}}}\xspace & \texttt{\textbackslash efftag} & {\ensuremath{\etag(1-2\omega)^2}}\xspace \\
\texttt{\textbackslash effD} & {\ensuremath{\etag D^2}}\xspace & \texttt{\textbackslash etagprompt} & {\ensuremath{\varepsilon_{\mathrm{ tag}}^{\mathrm{Pr}}}}\xspace & \texttt{\textbackslash etagLL} & {\ensuremath{\varepsilon_{\mathrm{ tag}}^{\mathrm{LL}}}}\xspace \\
\end{tabular*}
\subsubsection{Key decay channels}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash BdToKstmm} & \decay{\Bd}{\Kstarz\mup\mun} & \texttt{\textbackslash BdbToKstmm} & \decay{\Bdb}{\Kstarzb\mup\mun} & \texttt{\textbackslash BsToJPsiPhi} & \decay{\Bs}{\jpsi\phi} \\
\texttt{\textbackslash BdToJPsiKst} & \decay{\Bd}{\jpsi\Kstarz} & \texttt{\textbackslash BdbToJPsiKst} & \decay{\Bdb}{\jpsi\Kstarzb} & \texttt{\textbackslash BsPhiGam} & \decay{\Bs}{\phi \g} \\
\texttt{\textbackslash BdKstGam} & \decay{\Bd}{\Kstarz \g} & \texttt{\textbackslash BTohh} & \decay{\B}{\Ph^+ \Ph'^-} & \texttt{\textbackslash BdTopipi} & \decay{\Bd}{\pip\pim} \\
\texttt{\textbackslash BdToKpi} & \decay{\Bd}{\Kp\pim} & \texttt{\textbackslash BsToKK} & \decay{\Bs}{\Kp\Km} & \texttt{\textbackslash BsTopiK} & \decay{\Bs}{\pip\Km} \\
\end{tabular*}
\subsubsection{Rare decays}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash BdKstee} & \decay{\Bd}{\Kstarz\epem} & \texttt{\textbackslash BdbKstee} & \decay{\Bdb}{\Kstarzb\epem} & \texttt{\textbackslash bsll} & \decay{\bquark}{\squark \ell^+ \ell^-} \\
\texttt{\textbackslash AFB} & \ensuremath{A_{\mathrm{FB}}}\xspace & \texttt{\textbackslash FL} & \ensuremath{F_{\mathrm{L}}}\xspace & \texttt{\textbackslash AT\#1 \textbackslash AT2} & \AT2 \\
\texttt{\textbackslash btosgam} & \decay{\bquark}{\squark \g} & \texttt{\textbackslash btodgam} & \decay{\bquark}{\dquark \g} & \texttt{\textbackslash Bsmm} & \decay{\Bs}{\mup\mun} \\
\texttt{\textbackslash Bdmm} & \decay{\Bd}{\mup\mun} & \texttt{\textbackslash ctl} & \ensuremath{\cos{\theta_\ell}}\xspace & \texttt{\textbackslash ctk} & \ensuremath{\cos{\theta_K}}\xspace \\
\end{tabular*}
\subsubsection{Wilson coefficients and operators}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash C\#1 \textbackslash C9} & \C9 & \texttt{\textbackslash Cp\#1 \textbackslash Cp7} & \Cp7 & \texttt{\textbackslash Ceff\#1 \textbackslash Ceff9 } & \Ceff9 \\
\texttt{\textbackslash Cpeff\#1 \textbackslash Cpeff7} & \Cpeff7 & \texttt{\textbackslash Ope\#1 \textbackslash Ope2} & \Ope2 & \texttt{\textbackslash Opep\#1 \textbackslash Opep7} & \Opep7 \\
\end{tabular*}
\subsubsection{Charm}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash xprime} & \ensuremath{x^{\prime}}\xspace & \texttt{\textbackslash yprime} & \ensuremath{y^{\prime}}\xspace & \texttt{\textbackslash ycp} & \ensuremath{y_{\CP}}\xspace \\
\texttt{\textbackslash agamma} & \ensuremath{A_{\Gamma}}\xspace & \texttt{\textbackslash dkpicf} & \decay{\Dz}{\Km\pip} & \\
\end{tabular*}
\subsubsection{QM}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash bra[1] \textbackslash bra\{a\}} & \bra{a} & \texttt{\textbackslash ket[1] \textbackslash ket\{b\}} & \ket{b} & \texttt{\textbackslash braket[2] \textbackslash braket\{a\}\{b\}} & \braket{a}{b} \\
\end{tabular*}
\subsection{Units}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash unit[1] \textbackslash unit\{kg\}} & \unit{kg} & \\
\end{tabular*}
\subsubsection{Energy and momentum}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash tev} & \ifthenelse{\boolean{inbibliography}}{\ensuremath{~T\kern -0.05em eV}\xspace}{\ensuremath{\mathrm{\,Te\kern -0.1em V}}}\xspace & \texttt{\textbackslash gev} & \ensuremath{\mathrm{\,Ge\kern -0.1em V}}\xspace & \texttt{\textbackslash mev} & \ensuremath{\mathrm{\,Me\kern -0.1em V}}\xspace \\
\texttt{\textbackslash kev} & \ensuremath{\mathrm{\,ke\kern -0.1em V}}\xspace & \texttt{\textbackslash ev} & \ensuremath{\mathrm{\,e\kern -0.1em V}}\xspace & \texttt{\textbackslash gevc} & \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace \\
\texttt{\textbackslash mevc} & \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace & \texttt{\textbackslash gevcc} & \ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace & \texttt{\textbackslash gevgevcccc} & \ensuremath{{\mathrm{\,Ge\kern -0.1em V^2\!/}c^4}}\xspace \\
\texttt{\textbackslash mevcc} & \ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace & \\
\end{tabular*}
\subsubsection{Distance and area}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash km} & \ensuremath{\mathrm{ \,km}}\xspace & \texttt{\textbackslash m} & \ensuremath{\mathrm{ \,m}}\xspace & \texttt{\textbackslash ma} & \ensuremath{{\mathrm{ \,m}}^2}\xspace \\
\texttt{\textbackslash cm} & \ensuremath{\mathrm{ \,cm}}\xspace & \texttt{\textbackslash cma} & \ensuremath{{\mathrm{ \,cm}}^2}\xspace & \texttt{\textbackslash mm} & \ensuremath{\mathrm{ \,mm}}\xspace \\
\texttt{\textbackslash mma} & \ensuremath{{\mathrm{ \,mm}}^2}\xspace & \texttt{\textbackslash mum} & \ensuremath{{\,\upmu\mathrm{m}}}\xspace & \texttt{\textbackslash muma} & \ensuremath{{\,\upmu\mathrm{m}^2}}\xspace \\
\texttt{\textbackslash nm} & \ensuremath{\mathrm{ \,nm}}\xspace & \texttt{\textbackslash fm} & \ensuremath{\mathrm{ \,fm}}\xspace & \texttt{\textbackslash barn} & \ensuremath{\mathrm{ \,b}}\xspace \\
\texttt{\textbackslash mbarn} & \ensuremath{\mathrm{ \,mb}}\xspace & \texttt{\textbackslash mub} & \ensuremath{{\mathrm{ \,\upmu b}}}\xspace & \texttt{\textbackslash nb} & \ensuremath{\mathrm{ \,nb}}\xspace \\
\texttt{\textbackslash invnb} & \ensuremath{\mbox{\,nb}^{-1}}\xspace & \texttt{\textbackslash pb} & \ensuremath{\mathrm{ \,pb}}\xspace & \texttt{\textbackslash invpb} & \ensuremath{\mbox{\,pb}^{-1}}\xspace \\
\texttt{\textbackslash fb} & \ensuremath{\mbox{\,fb}}\xspace & \texttt{\textbackslash invfb} & \ensuremath{\mbox{\,fb}^{-1}}\xspace & \texttt{\textbackslash ab} & \ensuremath{\mbox{\,ab}}\xspace \\
\texttt{\textbackslash invab} & \ensuremath{\mbox{\,ab}^{-1}}\xspace & \\
\end{tabular*}
\subsubsection{Time }
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash sec} & \ensuremath{\mathrm{{\,s}}}\xspace & \texttt{\textbackslash ms} & \ensuremath{{\mathrm{ \,ms}}}\xspace & \texttt{\textbackslash mus} & \ensuremath{{\,\upmu{\mathrm{ s}}}}\xspace \\
\texttt{\textbackslash ns} & \ensuremath{{\mathrm{ \,ns}}}\xspace & \texttt{\textbackslash ps} & \ensuremath{{\mathrm{ \,ps}}}\xspace & \texttt{\textbackslash fs} & \ensuremath{\mathrm{ \,fs}}\xspace \\
\texttt{\textbackslash mhz} & \ensuremath{{\mathrm{ \,MHz}}}\xspace & \texttt{\textbackslash khz} & \ensuremath{{\mathrm{ \,kHz}}}\xspace & \texttt{\textbackslash hz} & \ensuremath{{\mathrm{ \,Hz}}}\xspace \\
\texttt{\textbackslash invps} & \ensuremath{{\mathrm{ \,ps^{-1}}}}\xspace & \texttt{\textbackslash invns} & \ensuremath{{\mathrm{ \,ns^{-1}}}}\xspace & \texttt{\textbackslash yr} & \ensuremath{\mathrm{ \,yr}}\xspace \\
\texttt{\textbackslash hr} & \ensuremath{\mathrm{ \,hr}}\xspace & \\
\end{tabular*}
\subsubsection{Temperature}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash degc} & \ensuremath{^\circ}{C}\xspace & \texttt{\textbackslash degk} & \ensuremath {\mathrm{ K}}\xspace & \\
\end{tabular*}
\subsubsection{Material lengths, radiation}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash Xrad} & \ensuremath{X_0}\xspace & \texttt{\textbackslash NIL} & \ensuremath{\lambda_{int}}\xspace & \texttt{\textbackslash mip} & MIP\xspace \\
\texttt{\textbackslash neutroneq} & \ensuremath{\mathrm{ \,n_{eq}}}\xspace & \texttt{\textbackslash neqcmcm} & \ensuremath{\mathrm{ \,n_{eq} / cm^2}}\xspace & \texttt{\textbackslash kRad} & \ensuremath{\mathrm{ \,kRad}}\xspace \\
\texttt{\textbackslash MRad} & \ensuremath{\mathrm{ \,MRad}}\xspace & \texttt{\textbackslash ci} & \ensuremath{\mathrm{ \,Ci}}\xspace & \texttt{\textbackslash mci} & \ensuremath{\mathrm{ \,mCi}}\xspace \\
\end{tabular*}
\subsubsection{Uncertainties}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash sx} & \sx & \texttt{\textbackslash sy} & \sy & \texttt{\textbackslash sz} & \sz \\
\texttt{\textbackslash stat} & \ensuremath{\mathrm{\,(stat)}}\xspace & \texttt{\textbackslash syst} & \ensuremath{\mathrm{\,(syst)}}\xspace & \\
\end{tabular*}
\subsubsection{Maths}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash order} & {\ensuremath{\mathcal{O}}}\xspace & \texttt{\textbackslash chisq} & \ensuremath{\chi^2}\xspace & \texttt{\textbackslash chisqndf} & \ensuremath{\chi^2/\mathrm{ndf}}\xspace \\
\texttt{\textbackslash chisqip} & \ensuremath{\chi^2_{\text{IP}}}\xspace & \texttt{\textbackslash chisqvs} & \ensuremath{\chi^2_{\text{VS}}}\xspace & \texttt{\textbackslash chisqvtx} & \ensuremath{\chi^2_{\text{vtx}}}\xspace \\
\texttt{\textbackslash chisqvtxndf} & \ensuremath{\chi^2_{\text{vtx}}/\mathrm{ndf}}\xspace & \texttt{\textbackslash deriv} & \ensuremath{\mathrm{d}} & \texttt{\textbackslash gsim} & \gsim \\
\texttt{\textbackslash lsim} & \lsim & \texttt{\textbackslash mean[1] \textbackslash mean\{x\}} & \mean{x} & \texttt{\textbackslash abs[1] \textbackslash abs\{x\}} & \abs{x} \\
\texttt{\textbackslash Real} & \ensuremath{\mathcal{R}e}\xspace & \texttt{\textbackslash Imag} & \ensuremath{\mathcal{I}m}\xspace & \texttt{\textbackslash PDF} & PDF\xspace \\
\texttt{\textbackslash sPlot} & \mbox{\em sPlot}\xspace & \\
\end{tabular*}
\subsection{Kinematics}
\subsubsection{Energy, Momenta}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash Ebeam} & \ensuremath{E_{\mbox{\tiny BEAM}}}\xspace & \texttt{\textbackslash sqs} & \ensuremath{\protect\sqrt{s}}\xspace & \texttt{\textbackslash ptot} & \mbox{$p$}\xspace \\
\texttt{\textbackslash pt} & \mbox{$p_{\mathrm{ T}}$}\xspace & \texttt{\textbackslash et} & \mbox{$E_{\mathrm{ T}}$}\xspace & \texttt{\textbackslash mt} & \mbox{$M_{\mathrm{ T}}$}\xspace \\
\texttt{\textbackslash dpp} & \ensuremath{\Delta p/p}\xspace & \texttt{\textbackslash msq} & \ensuremath{m^2}\xspace & \texttt{\textbackslash dedx} & \ensuremath{\mathrm{d}\hspace{-0.1em}E/\mathrm{d}x}\xspace \\
\end{tabular*}
\subsubsection{PID}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash dllkpi} & \ensuremath{\mathrm{DLL}_{\kaon\pion}}\xspace & \texttt{\textbackslash dllppi} & \ensuremath{\mathrm{DLL}_{\proton\pion}}\xspace & \texttt{\textbackslash dllepi} & \ensuremath{\mathrm{DLL}_{\electron\pion}}\xspace \\
\texttt{\textbackslash dllmupi} & \ensuremath{\mathrm{DLL}_{\muon\pi}}\xspace & \\
\end{tabular*}
\subsubsection{Geometry}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash degrees} & \ensuremath{^{\circ}}\xspace & \texttt{\textbackslash krad} & \ensuremath{\mathrm{ \,krad}}\xspace & \texttt{\textbackslash mrad} & \ensuremath{\mathrm{ \,mrad}}\xspace \\
\texttt{\textbackslash rad} & \ensuremath{\mathrm{ \,rad}}\xspace & \\
\end{tabular*}
\subsubsection{Accelerator}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash betastar} & \ensuremath{\beta^*} & \texttt{\textbackslash lum} & \lum & \texttt{\textbackslash intlum[1] \textbackslash intlum\{2 \,\ensuremath{\mbox{\,fb}^{-1}}\xspace\}} & \intlum{2 \,\ensuremath{\mbox{\,fb}^{-1}}\xspace} \\
\end{tabular*}
\subsection{Software}
\subsubsection{Programs}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash bcvegpy} & \mbox{\textsc{Bcvegpy}}\xspace & \texttt{\textbackslash boole} & \mbox{\textsc{Boole}}\xspace & \texttt{\textbackslash brunel} & \mbox{\textsc{Brunel}}\xspace \\
\texttt{\textbackslash davinci} & \mbox{\textsc{DaVinci}}\xspace & \texttt{\textbackslash dirac} & \mbox{\textsc{Dirac}}\xspace & \texttt{\textbackslash evtgen} & \mbox{\textsc{EvtGen}}\xspace \\
\texttt{\textbackslash fewz} & \mbox{\textsc{Fewz}}\xspace & \texttt{\textbackslash fluka} & \mbox{\textsc{Fluka}}\xspace & \texttt{\textbackslash ganga} & \mbox{\textsc{Ganga}}\xspace \\
\texttt{\textbackslash gaudi} & \mbox{\textsc{Gaudi}}\xspace & \texttt{\textbackslash gauss} & \mbox{\textsc{Gauss}}\xspace & \texttt{\textbackslash geant} & \mbox{\textsc{Geant4}}\xspace \\
\texttt{\textbackslash hepmc} & \mbox{\textsc{HepMC}}\xspace & \texttt{\textbackslash herwig} & \mbox{\textsc{Herwig}}\xspace & \texttt{\textbackslash moore} & \mbox{\textsc{Moore}}\xspace \\
\texttt{\textbackslash neurobayes} & \mbox{\textsc{NeuroBayes}}\xspace & \texttt{\textbackslash photos} & \mbox{\textsc{Photos}}\xspace & \texttt{\textbackslash powheg} & \mbox{\textsc{Powheg}}\xspace \\
\texttt{\textbackslash pythia} & \mbox{\textsc{Pythia}}\xspace & \texttt{\textbackslash resbos} & \mbox{\textsc{ResBos}}\xspace & \texttt{\textbackslash roofit} & \mbox{\textsc{RooFit}}\xspace \\
\texttt{\textbackslash root} & \mbox{\textsc{Root}}\xspace & \texttt{\textbackslash spice} & \mbox{\textsc{Spice}}\xspace & \texttt{\textbackslash urania} & \mbox{\textsc{Urania}}\xspace \\
\end{tabular*}
\subsubsection{Languages}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash cpp} & \mbox{\textsc{C\raisebox{0.1em}{{\footnotesize{++}}}}}\xspace & \texttt{\textbackslash ruby} & \mbox{\textsc{Ruby}}\xspace & \texttt{\textbackslash fortran} & \mbox{\textsc{Fortran}}\xspace \\
\texttt{\textbackslash svn} & \mbox{\textsc{SVN}}\xspace & \\
\end{tabular*}
\subsubsection{Data processing}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash kbytes} & \ensuremath{{\mathrm{ \,kbytes}}}\xspace & \texttt{\textbackslash kbsps} & \ensuremath{{\mathrm{ \,kbits/s}}}\xspace & \texttt{\textbackslash kbits} & \ensuremath{{\mathrm{ \,kbits}}}\xspace \\
\texttt{\textbackslash mbytes} & \ensuremath{{\mathrm{ \,Mbytes}}}\xspace & \texttt{\textbackslash mbps} & \ensuremath{{\mathrm{ \,Mbyte/s}}}\xspace & \texttt{\textbackslash mbsps} & \ensuremath{{\mathrm{ \,Mbytes/s}}}\xspace \\
\texttt{\textbackslash gbytes} & \ensuremath{{\mathrm{ \,Gbytes}}}\xspace & \texttt{\textbackslash gbsps} & \ensuremath{{\mathrm{ \,Gbytes/s}}}\xspace & \texttt{\textbackslash tbytes} & \ensuremath{{\mathrm{ \,Tbytes}}}\xspace \\
\texttt{\textbackslash tbpy} & \ensuremath{{\mathrm{ \,Tbytes/yr}}}\xspace & \texttt{\textbackslash dst} & DST\xspace & \\
\end{tabular*}
\subsection{Detector related}
\subsubsection{Detector technologies}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash nonn} & \ensuremath{\mathrm{{ \mathit{n^+}} \mbox{-} on\mbox{-}{ \mathit{n}}}}\xspace & \texttt{\textbackslash ponn} & \ensuremath{\mathrm{{ \mathit{p^+}} \mbox{-} on\mbox{-}{ \mathit{n}}}}\xspace & \texttt{\textbackslash nonp} & \ensuremath{\mathrm{{ \mathit{n^+}} \mbox{-} on\mbox{-}{ \mathit{p}}}}\xspace \\
\texttt{\textbackslash cvd} & CVD\xspace & \texttt{\textbackslash mwpc} & MWPC\xspace & \texttt{\textbackslash gem} & GEM\xspace \\
\end{tabular*}
\subsubsection{Detector components, electronics}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash tell1} & TELL1\xspace & \texttt{\textbackslash ukl1} & UKL1\xspace & \texttt{\textbackslash beetle} & Beetle\xspace \\
\texttt{\textbackslash otis} & OTIS\xspace & \texttt{\textbackslash croc} & CROC\xspace & \texttt{\textbackslash carioca} & CARIOCA\xspace \\
\texttt{\textbackslash dialog} & DIALOG\xspace & \texttt{\textbackslash sync} & SYNC\xspace & \texttt{\textbackslash cardiac} & CARDIAC\xspace \\
\texttt{\textbackslash gol} & GOL\xspace & \texttt{\textbackslash vcsel} & VCSEL\xspace & \texttt{\textbackslash ttc} & TTC\xspace \\
\texttt{\textbackslash ttcrx} & TTCrx\xspace & \texttt{\textbackslash hpd} & HPD\xspace & \texttt{\textbackslash pmt} & PMT\xspace \\
\texttt{\textbackslash specs} & SPECS\xspace & \texttt{\textbackslash elmb} & ELMB\xspace & \texttt{\textbackslash fpga} & FPGA\xspace \\
\texttt{\textbackslash plc} & PLC\xspace & \texttt{\textbackslash rasnik} & RASNIK\xspace & \\
\texttt{\textbackslash can} & CAN\xspace & \texttt{\textbackslash lvds} & LVDS\xspace & \texttt{\textbackslash ntc} & NTC\xspace \\
\texttt{\textbackslash adc} & ADC\xspace & \texttt{\textbackslash led} & LED\xspace & \texttt{\textbackslash ccd} & CCD\xspace \\
\texttt{\textbackslash hv} & HV\xspace & \texttt{\textbackslash lv} & LV\xspace & \texttt{\textbackslash pvss} & PVSS\xspace \\
\texttt{\textbackslash cmos} & CMOS\xspace & \texttt{\textbackslash fifo} & FIFO\xspace & \texttt{\textbackslash ccpc} & CCPC\xspace \\
\end{tabular*}
\subsubsection{Chemical symbols}
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash cfourften} & \ensuremath{\mathrm{ C_4 F_{10}}}\xspace & \texttt{\textbackslash cffour} & \ensuremath{\mathrm{ CF_4}}\xspace & \texttt{\textbackslash cotwo} & \cotwo \\
\texttt{\textbackslash csixffouteen} & \csixffouteen & \texttt{\textbackslash mgftwo} & \mgftwo & \texttt{\textbackslash siotwo} & \siotwo \\
\end{tabular*}
\subsection{Special Text }
\begin{tabular*}{\linewidth}{@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l@{\extracolsep{\fill}}l@{\extracolsep{0.5cm}}l}
\texttt{\textbackslash eg} & \mbox{\itshape e.g.}\xspace & \texttt{\textbackslash ie} & \mbox{\itshape i.e.}\xspace & \texttt{\textbackslash etal} & \mbox{\itshape et al.}\xspace \\
\texttt{\textbackslash etc} & \mbox{\itshape etc.}\xspace & \texttt{\textbackslash cf} & \mbox{\itshape cf.}\xspace & \texttt{\textbackslash ffp} & \mbox{\itshape ff.}\xspace \\
\texttt{\textbackslash vs} & \mbox{\itshape vs.}\xspace & \\
\end{tabular*}
\section{Study of the multipion system}\label{seq:multi}
A~search for intermediate light resonances is performed using
the~set of events that do not decay through the~{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace~resonance.
For this, an~additional criterion is applied: the~mass of every \mbox{$\jpsi\pip\pim$}\xspace~combination
is required to lie outside a~$\pm6\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$~window around the~known
{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace~meson mass~\cite{PDG}.
The~invariant-mass distribution for \mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace candidates selected
with the~veto on the~{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace resonance is shown
in Fig.~\ref{fig:5pi_nr}(a).
A~clear peak, corresponding to the~non\nobreakdash-resonant
${\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace\jpsi3\pip2{\ensuremath{\pion^-}}\xspace$~decay, is visible.
The~signal yield for this channel is determined from
an~extended unbinned maximum likelihood fit with the~function described above.
The~observed signal yield is~$80\pm15$
with a~statistical significance of 6.8~standard deviations.
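As a~rough illustration of the~procedure, and not the~LHCb implementation, the~following toy sketch performs an~extended unbinned maximum\nobreakdash-likelihood fit of a~Gaussian peak over an~exponential background and estimates the~significance from the~likelihood ratio via Wilks' theorem; every number in it (mass window, yields, resolution) is an~invented placeholder.
\begin{verbatim}
# Toy sketch (illustration only, not the LHCb fit): extended unbinned
# maximum-likelihood fit, significance from the likelihood ratio.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import expon, norm

rng = np.random.default_rng(1)
lo, hi = 5.0, 5.6                                  # mass window, GeV/c^2
sig = rng.normal(5.279, 0.010, 80)                 # toy signal
bkg = lo + rng.exponential(0.4, 400)               # toy background
data = np.concatenate([sig, bkg[bkg < hi]])

def nll(p):                                        # extended negative log L
    ns, nb, mu, sg, tau = p
    if ns < 0 or nb <= 0 or sg <= 0 or tau <= 0:
        return 1e12
    fs = norm.pdf(data, mu, sg)
    fb = expon.pdf(data - lo, scale=tau) / expon.cdf(hi - lo, scale=tau)
    return (ns + nb) - np.sum(np.log(ns * fs + nb * fb))

fit = minimize(nll, [50, 400, 5.28, 0.01, 0.4], method="Nelder-Mead")
bkg_only = minimize(lambda q: nll([0.0, *q]),      # yield fixed to zero
                    fit.x[1:], method="Nelder-Mead")
signif = np.sqrt(2.0 * (bkg_only.fun - fit.fun))
print(f"yield = {fit.x[0]:.0f}, significance = {signif:.1f} sigma")
\end{verbatim}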
\begin{figure}[t]
\setlength{\unitlength}{1mm}
\centering
\begin{picture}(150,60)
\put( 0,0){\includegraphics*[width=75mm]{Fig3a.pdf}}
\put(75,0){\includegraphics*[width=75mm]{Fig3b.pdf}}
\put( -1,16 ) {\begin{sideways}Candidates/$(4\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$\end{sideways} }
\put( 75,31 ) {\begin{sideways}$N/(40\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$\end{sideways} }
\put( 25,1 ) {$m(\mbox{$\jpsi3\pip2\pim$}\xspace)$}
\put( 102,1 ) {$m({\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace)$}
\put( 57,1 ) {$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$ }
\put( 132,1 ) {$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$ }
\put( 55,50 ) {LHCb}
\put( 130,50 ) {LHCb}
\put( 16,50 ) {a)}
\put( 91,50 ) {b)}
\end{picture}
\caption { \small
(a)~Mass distribution of the~selected \mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace~candidates with
the~additional requirement that
every \mbox{$\jpsi\pip\pim$}\xspace combination be outside of $\pm6\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$ around
the~known {\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace~mass.
The~total fit function,
the~{\ensuremath{\B^+}}\xspace~signal contribution and
the~combinatorial background are shown
with thick solid\,(orange),
thin solid\,(red) and dashed\,(blue) lines, respectively.
(b)~Sum of the~background\nobreakdash-subtracted mass distributions
for all possible ${\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$~combinations.
The~factorisation\nobreakdash-based model prediction is shown by a~solid\,(red) line,
and the~expectation from the~phase\nobreakdash-space model
is shown by a~dashed\,(blue) line.
The~total fit function, shown with a~dotted\,(green) line, is an~incoherent sum of
a~relativistic Breit\nobreakdash-Wigner function with
the~mean and natural width fixed to the~known $\Prho^{0}$~values
and a~phase\nobreakdash-space function multiplied by a~second\nobreakdash-order polynomial.}
\label{fig:5pi_nr}
\end{figure}
The~resonance structure is investigated
in the~${\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$,
${\ensuremath{\pion^+}}\xspace\pip$,
${\ensuremath{\pion^-}}\xspace\pim$,
${\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$,
${\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace\pim$,
${\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^+}}\xspace$,
$2\pip2{\ensuremath{\pion^-}}\xspace$,
$3{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$ and~$3\pip2{\ensuremath{\pion^-}}\xspace$~combinations
of final\nobreakdash-state particles using the~\mbox{\em sPlot}\xspace technique,
with the~reconstructed $\mbox{$\jpsi3\pip2\pim$}\xspace$~mass as the~discriminating variable.
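For reference, the~sWeights used by this technique can be written compactly; the~sketch below follows the~general sPlot prescription of Pivk and Le~Diberder, with the~component pdfs and yields assumed to come from the~fit to the~discriminating mass.
\begin{verbatim}
# Schematic sWeight computation (general sPlot prescription); the pdfs
# f_j and yields N_j are assumed to come from the discriminating-mass fit.
import numpy as np

def sweights(x, pdfs, yields):
    F = np.column_stack([f(x) for f in pdfs])   # F[i, j] = f_j(x_i)
    denom = F @ np.asarray(yields)              # sum_k N_k f_k(x_i)
    Vinv = (F.T / denom**2) @ F                 # inverse yield covariance
    V = np.linalg.inv(Vinv)
    return (F @ V) / denom[:, None]             # column j = weights w_j

# usage sketch: w_sig = sweights(mass, [f_sig, f_bkg], [N_sig, N_bkg])[:, 0]
# then fill, e.g., the m(pi+ pi-) histogram with weights w_sig.
\end{verbatim}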
The~resulting background\nobreakdash-subtracted mass distribution
of all possible ${\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$~combinations is shown in Fig.~\ref{fig:5pi_nr}(b),
along with the~theoretical predictions from the~factorisation approach
and the~phase\nobreakdash-space model~\cite{Lesha,Likhoded:2013iua,Berezhnoy:2011is,Likhoded:2009ib}.
A~structure is seen that can be associated with the~$\Prho^{0}$~meson.
The~distribution is fitted with a~sum of a~relativistic Breit\nobreakdash-Wigner~function
with the~mean and natural width fixed to the~known $\Prho^{0}$~values plus
a~phase\nobreakdash-space shape multiplied by
a~second\nobreakdash-order polynomial.
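Schematically, and neglecting the~mass dependence of the~width and any barrier factors, the~Breit\nobreakdash-Wigner term has the~form
\begin{equation*}
f_{\mathrm{BW}}(m)\propto\frac{m_0^2\,\Gamma_0^2}{\left(m^2-m_0^2\right)^2+m_0^2\,\Gamma_0^2}\,,
\end{equation*}
with $m_0$ and $\Gamma_0$ fixed to the~known $\Prho^{0}$~mass and natural width.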
No~significant narrow structures are observed for other
\mbox{multipion} combinations.
The~distributions for all other combinations of pions
are compared with predictions of both
a~factorisation approach and a~phase\nobreakdash-space model,
as shown in Fig.~\ref{fig:pions_nr}.
For all fits, the~\ensuremath{\chi^2}\xspace per degree of freedom, $\ensuremath{\chi^2}\xspace/\mathrm{ndf}$, is given in Table~\ref{tab:chisq_nr}.
The~prediction from the~factorisation approach
is found to be in somewhat better agreement
with the~data than that from the~phase\nobreakdash-space model,
giving better $\ensuremath{\chi^2}\xspace/\mathrm{ndf}$~values for eight out of
nine distributions examined.
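The~quoted comparison reduces to a~standard binned $\ensuremath{\chi^2}\xspace$ between each prediction and the~background\nobreakdash-subtracted histograms; a~minimal sketch, with all inputs as placeholders, is given below.
\begin{verbatim}
# Schematic chi2/ndf between a normalised model prediction and a
# background-subtracted histogram; all inputs are placeholders.
import numpy as np

def chi2_ndf(counts, errors, prediction, n_free=0):
    mask = errors > 0                        # skip empty bins
    resid = (counts - prediction)[mask] / errors[mask]
    return np.sum(resid**2) / (mask.sum() - n_free)
\end{verbatim}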
\begin{figure}[t]
\setlength{\unitlength}{1mm}
\centering
\begin{picture}(150,180)
\put( 5,135){\includegraphics*[width=60mm,height=45mm]{Fig4a.pdf}}
\put(75,135){\includegraphics*[width=60mm,height=45mm]{Fig4b.pdf}}
\put( 5,90 ){\includegraphics*[width=60mm,height=45mm]{Fig4c.pdf}}
\put(75,90 ){\includegraphics*[width=60mm,height=45mm]{Fig4d.pdf}}
\put( 5,45 ){\includegraphics*[width=60mm,height=45mm]{Fig4e.pdf}}
\put(75,45 ){\includegraphics*[width=60mm,height=45mm]{Fig4f.pdf}}
\put( 5,0 ){\includegraphics*[width=60mm,height=45mm]{Fig4g.pdf}}
\put(75,0 ){\includegraphics*[width=60mm,height=45mm]{Fig4h.pdf}}
\put( 16,172){\small{a)}}
\put( 86,172){\small{b)}}
\put( 16,127){\small{c)}}
\put( 86,127){\small{d)}}
\put( 16,82 ){\small{e)}}
\put( 86,82 ){\small{f)}}
\put( 16,37 ){\small{g)}}
\put( 86,37 ){\small{h)}}
\put( 49,172){\small{LHCb}}
\put(119,172){\small{LHCb}}
\put( 49,127){\small{LHCb}}
\put(119,127){\small{LHCb}}
\put( 49,82 ){\small{LHCb}}
\put(119,82 ){\small{LHCb}}
\put( 49,37 ){\small{LHCb}}
\put(119,37 ){\small{LHCb}}
\put( 5,153){\begin{sideways}\small{$N/(40\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$}\end{sideways}}
\put(75,153){\begin{sideways}\small{$N/(40\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$}\end{sideways}}
\put( 5,108){\begin{sideways}\small{$N/(60\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$}\end{sideways}}
\put(75,108){\begin{sideways}\small{$N/(60\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$}\end{sideways}}
\put( 5,63 ){\begin{sideways}\small{$N/(60\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$}\end{sideways}}
\put(75,63 ){\begin{sideways}\small{$N/(50\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$}\end{sideways}}
\put( 5,18 ){\begin{sideways}\small{$N/(50\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$}\end{sideways}}
\put(75,18 ){\begin{sideways}\small{$N/(50\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$}\end{sideways}}
\put( 26,135){\small{$m({\ensuremath{\pion^-}}\xspace\pim)$}}
\put( 96,135){\small{$m({\ensuremath{\pion^+}}\xspace\pip)$}}
\put( 26,90 ){\small{$m({\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace)$}}
\put( 96,90 ){\small{$m({\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace\pim)$}}
\put( 26,45 ){\small{$m({\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^+}}\xspace)$}}
\put( 96,45 ){\small{$m(2\pip2{\ensuremath{\pion^-}}\xspace)$}}
\put( 26,0 ){\small{$m(3{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace)$}}
\put( 96,0 ){\small{$m(3\pip2{\ensuremath{\pion^-}}\xspace)$}}
\put( 48,135){\small{$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$}}
\put(118,135){\small{$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$}}
\put( 48,90 ){\small{$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$}}
\put(118,90 ){\small{$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$}}
\put( 48,45 ){\small{$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$}}
\put(118,45 ){\small{$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$}}
\put( 48,0 ){\small{$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$}}
\put(118,0 ){\small{$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$}}
\end{picture}
\caption {\small
Distributions of
(a)\,${\ensuremath{\pion^-}}\xspace\pim$,
(b)\,${\ensuremath{\pion^+}}\xspace\pip$,
(c)\,${\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$,
(d)\,${\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace\pim$,
(e)\,${\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^+}}\xspace$,
(f)\,$2\pip2{\ensuremath{\pion^-}}\xspace$,
(g)\,$3{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$ and
(h)\,$3\pip2{\ensuremath{\pion^-}}\xspace$~masses in the~\mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace decay.
The~prediction from the~factorisation\nobreakdash-based model is shown by solid\,(red) lines,
and the~expectation from the~phase\nobreakdash-space model is shown by dashed\,(blue) lines.
}
\label{fig:pions_nr}
\end{figure}
\begin{table}[t]
\centering
\caption{ \small
The~\ensuremath{\chi^2}\xspace per degree of freedom for the
factorisation\nobreakdash-based
and phase\nobreakdash-space models
for the multipion system in non\nobreakdash-resonant
\mbox{${\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace 3\pip2{\ensuremath{\pion^-}}\xspace$}~decays.
} \label{tab:chisq_nr}
\vspace*{3mm}
\begin{tabular*}{0.85\textwidth}{@{\hspace{5mm}}l@{\extracolsep{\fill}}cc@{\hspace{5mm}}}
Multipion system & Factorisation model & Phase-space model \\
\hline
${\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$ & 0.7 & 2.6 \\
${\ensuremath{\pion^-}}\xspace\pim$ & 2.8 & 3.7 \\
${\ensuremath{\pion^+}}\xspace\pip$ & 1.7 & 4.2 \\
${\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$ & 1.8 & 2.3 \\
${\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace\pim$ & 2.8 & 5.0 \\
${\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^+}}\xspace$ & 1.0 & 2.5 \\
$2\pip2{\ensuremath{\pion^-}}\xspace$ & 3.5 & 4.4 \\
$3{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$ & 0.7 & 1.0 \\
$3\pip2{\ensuremath{\pion^-}}\xspace$ & 2.2 & 1.7
\end{tabular*}
\end{table}
In a~similar way intermediate light resonances
are searched for in the three\nobreakdash-pion system recoiling against
${\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$ in ${\ensuremath{\Bu}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$~decays.
The~resonant structure is investigated
in the~${\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$,
${\ensuremath{\pion^+}}\xspace\pip$ and ${\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$~combinations.
The~distributions for these combinations of pions
are compared with predictions of both
the~factorisation approach and a~phase\nobreakdash-space model,
as shown in Fig.~\ref{fig:pions_res}.
The corresponding $\ensuremath{\chi^2}\xspace/\mathrm{ndf}$ values are summarised in Table~\ref{tab:chisq_res}.
Similarly to the~non\nobreakdash-resonant case, the~prediction from the~factorisation approach
is found to be in somewhat better agreement
with the~data than that from the~phase\nobreakdash-space model.
\begin{figure}[t]
\setlength{\unitlength}{1mm}
\centering
\begin{picture}(150,120)
\put( 0,55 ){\includegraphics*[width=70mm]{Fig5a.pdf}}
\put(70,55 ){\includegraphics*[width=70mm]{Fig5b.pdf}}
\put(35,0 ){\includegraphics*[width=70mm]{Fig5c.pdf}}
\put(17,101){a)}
\put(87,101){b)}
\put(92,45 ){c)}
\put( 50,101){LHCb}
\put(120,101){LHCb}
\put( 52,45 ){LHCb}
\put( 0,81 ){\begin{sideways}\small{ N/(100$\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$) }\end{sideways}}
\put(71,81 ){\begin{sideways}\small{ N/(100$\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$) }\end{sideways}}
\put(35,26 ){\begin{sideways}\small{ N/(100$\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$) }\end{sideways}}
\put( 30,57 ){\small{$m({\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace)$}}
\put(100,57 ){\small{$m({\ensuremath{\pion^+}}\xspace\pip)$}}
\put( 65,2 ){\small{$m({\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace)$}}
\put( 53,56 ){\small{$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$}}
\put(123,56 ){\small{$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$}}
\put( 88,1 ){\small{$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$}}
\end{picture}
\caption {\small
Distributions of
(a)\,${\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$,
(b)\,${\ensuremath{\pion^+}}\xspace\pip$ and
(c)\,${\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$~masses
in the~\mbox{${\ensuremath{\Bu}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$}~decay.
The~prediction from the~factorisation\nobreakdash-based model is shown by solid\,(red) lines,
and the~expectation from the~phase\nobreakdash-space model is shown by dashed\,(blue) lines. }
\label{fig:pions_res}
\end{figure}
\begin{table}[t]
\centering
\caption{ \small
The~\ensuremath{\chi^2}\xspace per degree of freedom for the
factorisation\nobreakdash-based
and phase\nobreakdash-space models
for the multipion system recoiling against ${\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace$
in \mbox{${\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace {\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$}~decays.
}
\label{tab:chisq_res}
\vspace*{3mm}
\begin{tabular*}{0.85\textwidth}{@{\hspace{5mm}}l@{\extracolsep{\fill}}cc@{\hspace{5mm}}}
Multipion system & Factorisation model & Phase-space model \\
\hline
${\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$ & 0.5 & 1.3 \\
${\ensuremath{\pion^+}}\xspace\pip$ & 0.8 & 0.7 \\
${\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$ & 1.3 & 1.6
\end{tabular*}
\end{table}
\section{Results and summary}
\label{sec:results}
A~search for the~decay \mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace is performed using a~data sample
corresponding to an~integrated luminosity of~$3.0\ensuremath{\mbox{\,fb}^{-1}}\xspace$,
collected by the~\mbox{LHCb}\xspace experiment.
A~total of $139\pm18$~signal events are observed,
representing the~first observation of this decay channel.
Around~half of the {\ensuremath{\Bu}}\xspace candidates are found to decay through
the~{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace~resonance.
The~observed yield of
\mbox{$\Bu\to\psitwos\pip\pip\pim$}\xspace~decays is $61\pm10$ events,
which constitutes the~first observation of this decay channel.
Using the ${\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\kaon^+}}\xspace$~decay as a normalisation channel,
the~ratios of the~branching fractions are measured to be
\begin{align*}
\label{eq:results}
R_{5\pi} &= \dfrac{\BF(\mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace)}{\BF(\mbox{$\Bu\to\psitwos\Kp$}\xspace)} = (1.88\pm0.17\pm0.09)\times10^{-2}~, \\
R_{{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace} &= \dfrac{\BF(\mbox{$\Bu\to\psitwos\pip\pip\pim$}\xspace)}{\BF(\mbox{$\Bu\to\psitwos\Kp$}\xspace)} = (3.04\pm0.50\pm0.26)\times10^{-2}~,
\end{align*}
where the~first uncertainties are statistical and
the~second are systematic.
The~ratio $R_{5\pi}$ also contains
the~contribution from $\mbox{$\Bu\to\psitwos[\to\jpsi\pip\pim]\pip\pip\pim$}\xspace$~decays.
The~multipion distributions in the~\mbox{$\jpsi3\pip2\pim$}\xspace final state\,(vetoing the~{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace meson contribution)
and in the~${\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace{\ensuremath{\pion^+}}\xspace\pip{\ensuremath{\pion^-}}\xspace$ final state are studied.
A~structure that can be associated with the~$\Prho^{0}$~meson is seen in
the~${\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$ combinations of the~$\mbox{$\jpsi3\pip2\pim$}\xspace$ final state.
The~multipion distributions are compared with the~theoretical predictions from
the~factorisation approach and a~phase\nobreakdash-space model.
The~prediction from the~factorisation approach is found to be in somewhat better agreement
with the~data than the prediction from the~phase\nobreakdash-space model.
\section{Candidate selection}
\label{sec:selection}
The~decays \mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace, \mbox{$\Bu\to\psitwos\pip\pip\pim$}\xspace and \mbox{$\Bu\to\psitwos\Kp$}\xspace are reconstructed using the decay modes \mbox{$\jpsi\to\mumu$}\xspace and \mbox{$\psitwos\to\jpsi\pip\pim$}\xspace followed by \mbox{$\jpsi\to\mumu$}\xspace.
Similar selection criteria are applied to all channels in order to minimize the systematic uncertainties.
Muon, pion and kaon candidates are selected from well\nobreakdash-reconstructed tracks
and are identified using information from the~RICH, calorimeter and muon detectors.
Muon candidates are required to have a~transverse momentum larger than~$550\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace$.
Both pion and kaon candidates are required to have a~transverse momentum larger than~$250\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c}}\xspace$
and momentum between~$3.2\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ and~$150\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c}}\xspace$ to allow good particle identification.
To reduce combinatorial background due to tracks from the ${\ensuremath{\Pp}}\xspace\proton$~interaction vertex,
only tracks that are inconsistent with originating from a primary vertex\,(PV) are used.
Pairs of oppositely charged muons originating from a~common vertex are combined to
form ${\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{\Pmu^+\Pmu^-}}\xspace$~candidates. The~mass of the~dimuon combination is required to be
between $3.020$ and $3.135\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace$. The~asymmetric mass range around
the~known ${\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace$~meson mass~\cite{PDG}
is chosen to include the low\nobreakdash-mass tail due to final\nobreakdash-state radiation.
To~form a~{\ensuremath{\Bu}}\xspace~candidate, the~selected~{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace candidates are combined with
$3\pip2{\ensuremath{\pion^-}}\xspace$ or ${\ensuremath{\kaon^+}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$~candidates for the~signal and control decays, respectively.
Each~{\ensuremath{\Bu}}\xspace~candidate is associated with the~PV with respect to which it has the smallest~\ensuremath{\chi^2_{\text{IP}}}\xspace,
which is defined as the~difference in the~vertex fit~\ensuremath{\chi^2}\xspace of the~PV with and without
the~particle under consideration.
To~improve the~mass resolution, a~kinematic fit~\cite{Hulsbergen:2005pu} is applied.
In~this fit the~mass of the~${\ensuremath{\Pmu^+}}\xspace\mun$~combination is fixed to the~known {\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace mass,
and the~{\ensuremath{\Bu}}\xspace~candidate's momentum vector is required to originate at the~associated PV.
A~good-quality fit is required to further suppress combinatorial background.
In~addition, the~measured decay time of the~{\ensuremath{\Bu}}\xspace candidate, calculated with respect
to the~associated PV, is required to be larger than~$200\ensuremath{{\,\upmu\mathrm{m}}}\xspace/c$, to suppress
background from particles coming from the~PV.
\section{Signal and normalisation yields}
\label{sec:signals}
The~mass distribution for selected \mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace candidates
is shown in~Fig.~\ref{fig:5pi}(a).
The~signal yield is determined with an~extended unbinned maximum likelihood fit to the~distribution.
The~signal is modelled with a~Gaussian function with power law tails on both sides~\cite{LHCb-PAPER-2011-013},
where the~tail parameters are fixed from simulation
and the~peak position and the~width of the~Gaussian function are allowed to vary.
The~combinatorial background is modelled with a~uniform distribution.
No~peaking backgrounds from misreconstructed or
partially reconstructed decays of beauty hadrons
are expected in the~fit range.
The~resolution parameter obtained from the~fit is found to be~$6\pm1\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$
and is in good agreement with the~expectation from simulation.
The~observed signal yield is~$139\pm18$.
\begin{figure}[t]
\setlength{\unitlength}{1mm}
\centering
\begin{picture}(150,60)
\put( 0,0){\includegraphics*[width=75mm]{Fig1a.pdf}}
\put(75,0){\includegraphics*[width=75mm]{Fig1b.pdf}}
\put( 0,16 ) {\begin{sideways}Candidates/$(4\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$\end{sideways} }
\put( 75,33 ) {\begin{sideways}$N/(2\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$\end{sideways} }
\put( 25,1 ) { $m(\mbox{$\jpsi3\pip2\pim$}\xspace)$}
\put( 102,1 ) { $m(\mbox{$\jpsi\pip\pim$}\xspace)$}
\put( 55,1 ) { $\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$ }
\put( 130,1 ) { $\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$ }
\put( 55,50 ) {LHCb}
\put( 130,50 ) {LHCb}
\put( 16,50 ) {a)}
\put( 91,50 ) {b)}
\end{picture}
\caption { \small
(a)~Mass distribution of the~selected \mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace~candidates.
(b)~Sum of mass distributions for all background\nobreakdash-subtracted \mbox{$\jpsi\pip\pim$}\xspace~combinations.
The~total fit function is shown with thick solid\,(orange) lines and
the~signal contribution with thin solid\,(red) lines.
The~dashed\,(blue) lines represent
the~combinatorial background and non\nobreakdash-resonance component
for plots~(a) and~(b), respectively.
}
\label{fig:5pi}
\end{figure}
The~statistical significance for the~observed signal is determined as
\mbox{$\mathcal{S}_{\sigma}=\sqrt{-2\log \mathcal{L}_\mathrm{B}/\mathcal{L}_{\mathrm{S+B}}}$},
where ${\mathcal{L}_{\mathrm{S+B}}}$ and
${\mathcal{L}_{\mathrm{B}}}$ denote the~likelihood associated
with the~signal\nobreakdash-plus\nobreakdash-background and
background\nobreakdash-only hypothesis,
respectively.
The~statistical significance of the~\mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace signal is in excess of 10~standard deviations.
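For illustration, a~minimal sketch of how such a~significance is evaluated from the negative log-likelihoods of the two fits is given below; the numerical inputs are placeholders, not the values of this analysis.
\begin{verbatim}
import math

def significance(nll_b, nll_sb):
    # S = sqrt(-2 log(L_B / L_{S+B})) = sqrt(2 (NLL_B - NLL_{S+B})),
    # with NLL the negative log-likelihood of each fit.
    return math.sqrt(2.0 * (nll_b - nll_sb))

# Placeholder fit outputs (illustrative only):
print(significance(nll_b=1540.0, nll_sb=1475.0))  # about 11.4
\end{verbatim}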
For the~selected {\ensuremath{\Bu}}\xspace~candidates, a~search for resonant structure is performed
in the~$\mbox{$\jpsi\pip\pim$}\xspace$~combinations of final\nobreakdash-state particles.
There are six possible \mbox{$\jpsi\pip\pim$}\xspace~combinations that can be formed
from the~\mbox{$\jpsi3\pip2\pim$}\xspace final state.
The~background\nobreakdash-subtracted distribution of all six possible
combinations in the~narrow range around the~known {\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace~meson mass
is shown in Fig.~\ref{fig:5pi}(b), where each event enters six times.
The~$sPlot$~technique is used for background subtraction~\cite{Pivk:2004ty} with the~\mbox{$\jpsi3\pip2\pim$}\xspace mass
as the~discriminating variable.
The~signal yield of \mbox{$\Bu\to\psitwos[\to\jpsi\pip\pim]\pip\pip\pim$}\xspace is determined using an~extended unbinned maximum likelihood fit
to the~background-subtracted ${\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace$~mass distribution.
The~{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace component is modelled with a~Gaussian function with power law tails on both sides,
where the~tail parameters are fixed from simulation.
The~non\nobreakdash-resonant component
is modelled with the~phase\nobreakdash-space shape multiplied by a~linear function.
The~mass resolution obtained from the~fit is~$1.9\pm0.3\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$, in good agreement
with the expectation from simulation.
The~observed signal yield is~$61\pm10$.
The~\mbox{$\Bu\to\psitwos[\to\jpsi\pip\pim]\Kp$}\xspace decay
is used as a~normalisation channel for
the~measurements of the~relative branching fractions.
The~mass distribution for selected \mbox{${\ensuremath{\B^+}}\xspace\ensuremath{\rightarrow}\xspace{\ensuremath{{\PJ\mskip -3mu/\mskip -2mu\Ppsi\mskip 2mu}}}\xspace{\ensuremath{\pion^+}}\xspace{\ensuremath{\pion^-}}\xspace{\ensuremath{\kaon^+}}\xspace$}
candidates is shown in Fig.~\ref{fig:psik}(a).
An~extended unbinned maximum likelihood fit to the~distribution
is performed using the~model described above for the~signal and
an~exponential function for the~background.
The~mass resolution parameter obtained from the~fit is~$6.60\pm0.02\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$,
again in good agreement with the~expectations from simulation.
The~background-subtracted mass distribution of the~\mbox{$\jpsi\pip\pim$}\xspace system in the~region
of the~{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace mass is shown in Fig.~\ref{fig:psik}(b).
\begin{figure}[t]
\setlength{\unitlength}{1mm}
\centering
\begin{picture}(150,60)
\put( 0,0){\includegraphics*[width=75mm]{Fig2a.pdf}}
\put(75,0){\includegraphics*[width=75mm]{Fig2b.pdf}}
\put( -3,16 ) {\begin{sideways}Candidates/$(4\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$\end{sideways} }
\put( 74,33 ) {\begin{sideways}$N/(1\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace)$\end{sideways} }
\put( 25,1 ) {$m(\mbox{$\jpsi\pip\pim$}\xspace{\ensuremath{\kaon^+}}\xspace)$}
\put( 102,1 ) {$m(\mbox{$\jpsi\pip\pim$}\xspace)$}
\put( 57,1 ) {$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$ }
\put( 132,1 ) {$\left[\!\ensuremath{{\mathrm{\,Ge\kern -0.1em V\!/}c^2}}\xspace\right]$ }
\put( 55,50 ) {LHCb}
\put( 130,50 ) {LHCb}
\put( 16,50 ) {a)}
\put( 91,50 ) {b)}
\end{picture}
\caption { \small
Mass distributions of (a)~the selected \mbox{$\Bu\to\psitwos\Kp$}\xspace~candidates and
(b)~the background\nobreakdash-subtracted \mbox{$\jpsi\pip\pim$}\xspace~combinations.
The~total fit function is shown with thick solid\,(orange) lines and
the~signal contribution with thin solid\,(red) lines.
The~dashed\,(blue) lines represent
the~combinatorial background and non\nobreakdash-resonance component
for plots~(a) and~(b), respectively.
}
\label{fig:psik}
\end{figure}
The signal yield of \mbox{$\Bu\to\psitwos[\to\jpsi\pip\pim]\Kp$}\xspace is determined using an extended unbinned maximum likelihood fit
to the~\mbox{$\jpsi\pip\pim$}\xspace distribution,
where the background is subtracted using the $sPlot$ technique with the~$\mbox{$\jpsi\pip\pim$}\xspace{\ensuremath{\kaon^+}}\xspace$ mass
as the~discriminating variable.
The~{\ensuremath{\Ppsi{(2\mathrm{S})}}}\xspace and the~non\nobreakdash-resonant components are modelled with
the~same functions used for the~signal channel.
The~mass resolution obtained from the fit is~\mbox{$2.35\pm0.02\ensuremath{{\mathrm{\,Me\kern -0.1em V\!/}c^2}}\xspace$}.
The~signal yields are summarized in Table~\ref{tab:fit_results}.
\begin{table}[t]
\centering
\caption{\small Signal yields, $N$, of {\ensuremath{\Bu}}\xspace decay channels. Uncertainties are statistical only.}
\begin{tabular}{lc}
Channel & $N$({\ensuremath{\Bu}}\xspace) \\ \hline
\mbox{$\Bu\to\jpsi3\pip2\pim$}\xspace & $\phantom{0}139\pm18$ \\
\mbox{$\Bu\to\psitwos[\to\jpsi\pip\pim]\pip\pip\pim$}\xspace & $\phantom{00}61\pm10$ \\
\mbox{$\Bu\to\psitwos[\to\jpsi\pip\pim]\Kp$}\xspace & $13554\pm118$ \\
\end{tabular}
\label{tab:fit_results}
\end{table}
\section{Introduction}
\label{secintro}
\IEEEPARstart{T}{he} grating coupler is a crucial component in integrated photonics, which couples light into or out of integrated photonic circuits \cite{mekis2010grating, marchetti2019coupling}. Its coupling efficiency is important to the performance of the whole system. Considerable efforts have been devoted to designing grating couplers with high coupling efficiency \cite{taillaert2004compact, chen2010apodized, marchetti2017high, wang2005embeddedslant, roelkens2006highoverlay, su2018fully, verslegers2019method, ding2014fully, bozzola2015optimising, michaels2018inverse}. A uniform grating coupler has an exponential scattering intensity distribution and, consequently, a maximum theoretical coupling efficiency of 80\% with a Gaussian beam, due to the mismatch between an exponential distribution and a Gaussian distribution \cite{orobtchouk2000high}.
Therefore, to achieve higher coupling efficiency, the grating couplers must be apodized, which means that the geometric parameters like periodicity and etch length are changed along the grating coupler.
A well-established description of the apodized grating coupler is to model the grating as a continuous scatterer with a position-dependent scattering strength \cite{mekis2010grating, taillaert2004compact}. With this model,
the ideal scattering strength can be expressed as a function of the target output. Using a mapping between the scattering strength and the geometric parameters of the grating, a concrete design of the apodized grating coupler can be found.
Guided by this model, grating couplers with high coupling efficiency and near-unity mode-matching efficiency have been demonstrated \cite{taillaert2004compact, chen2010apodized, ding2014fully}.
However, the previous model, though successful, has an important limitation. The optimized scattering strength predicted by the previous model, which is referred to as the {ideal scattering strength}, is a continuous function starting from zero and usually unbounded \cite{mekis2010grating}. But in practice, the scattering strength can neither be arbitrarily large due to the finite scattering strength of the grating teeth, nor be arbitrarily small, due to the minimal feature size allowed in fabrication.
In this study, we extend the previous model by including the constraints of upper and lower bounds of the scattering strength. We provide a formalism that can determine the globally optimal scattering strength under these constraints.
This extension results in modest improvement in coupling efficiency for the ``standard'' grating couplers in silicon photonics coupling with single mode fiber. However, our extension becomes important for grating couplers coupling with more complex beams \cite{demirtzioglou2019apodized, zhou2019ultra, liu2016chip, nadovich2016forked, nadovich2017focused}, such as optical beams with non-zero orbital angular momentum, or in integrated photonic systems using materials other than silicon \cite{krasnokutska2019high, chen2016high, maire2008high}.
The paper is organized as follows: A brief review of the coupling efficiency and the previous {ideal model} is presented in Sec. \ref{sec:review}. In Sec. \ref{sec:theory}, we discuss our extension of the {ideal model} and provide an algorithm to obtain the scattering strength. As a validation of our model, in Sec. \ref{sec:design_procedure}, we present our design procedure with an illustrative design of a grating coupler on a silicon-on-insulator (SOI) platform, which has coupling efficiency comparable to the state of the art. We extend both the {ideal model} and our extension to two-dimensional non-focusing gratings in Sec. \ref{sec:2d_nonfocus}. In Sec. \ref{sec:design_oam}, we analytically design and numerically demonstrate a highly efficient grating coupler coupling to a Laguerre-Gaussian beam carrying orbital angular momentum. We extend our model to the design of a fan-shaped focusing grating coupler in Sec. \ref{sec:2d_focusing}. In Sec. \ref{sec:implication} we further discuss some implications of our model, such as how the upper and lower bounds of the scattering strength constrain the upper limit of the coupling efficiency. We conclude in Sec. \ref{sec:conclusion}.
\section{Review of the {ideal model}}
\label{sec:review}
In this section, we outline the description of the coupling efficiency of the grating coupler system, and briefly review the previous {ideal model} that provides the {ideal scattering strength}.
Consider a grating coupler that couples the power in the guided mode of a waveguide to a target mode profile (Fig. \ref{fig:schematic}). The coupling efficiency ($\eta$) can be obtained through an overlap integral \cite{michaels2018inverse}:
\begin{equation}
\label{eq:couplingefficiency}
\eta = \frac{1}{P_{\textrm{wg}}P_\textrm{t}}\Big|\iint \frac{1}{2} \bm{E}\times \bm{H}_\textrm{t}^* \cdot d\bm{S} \Big|^2,
\end{equation}
where $P_\textrm{wg}$ and $P_\textrm{t}$ are the power in the guided wave and in the target mode respectively, $\bm{E}$ is the electric field scattered by the grating coupler, and $\bm{H}_\textrm{t}$ is the magnetic field of the target mode profile. The overlap integral can be carried out above the grating coupler. Equation (\ref{eq:couplingefficiency}) suggests that to maximize the coupling efficiency, the polarization, amplitude distribution and phase distribution of the scattering field should match that of the target mode.
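As an illustration of (\ref{eq:couplingefficiency}), a minimal numerical sketch of the overlap integral is given below for a single transverse field component sampled on a uniform grid; the field arrays and normalizations are assumed inputs, not outputs of any specific solver.
\begin{verbatim}
import numpy as np

def coupling_efficiency(E, Ht, dS, P_wg, P_t):
    # Discretized overlap integral for one transverse component:
    # eta = |sum(0.5 * E * conj(Ht)) * dS|^2 / (P_wg * P_t).
    overlap = 0.5 * np.sum(E * np.conj(Ht)) * dS
    return np.abs(overlap) ** 2 / (P_wg * P_t)
\end{verbatim}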
In the setup of Fig. \ref{fig:schematic}, which is a two-dimensional system having translational symmetry along the $x$-direction, the polarization matching is guaranteed by symmetry.
The phase matching is also well-studied. For a uniform grating, it is achieved by choosing the grating pitch to compensate the momentum mismatch between the guided mode and the target output beam \cite{mekis2010grating, marchetti2019coupling}. For an apodized grating, the phase matching can be satisfied with small adjustments of the grating pitch \cite{michaels2018inverse}.
Assuming that the polarization matching and phase matching can be achieved, we focus on the matching between the amplitude distributions. Under this assumption, the coupling efficiency becomes
\begin{equation}
\label{eq:couplingefficiency2}
\eta = \frac{1}{P_{\textrm{wg}}P_\textrm{t}}\Big|\int dx \int dz A(x, z) A_\textrm{t}(x, z) \Big|^2,
\end{equation}
where $|A(\bm{r})|^2$ represents the intensity of the field at $\bm{r}$. The guided wave is propagating along $z$-direction, the $y$-direction is perpendicular to the grating coupler, and the remaining transverse direction is the $x$-direction (Fig. \ref{fig:schematic}).
When the grating teeth are parallel, as in most common cases, the $x$- and $z$-dependence of the amplitude are separable, i.e. $A(x, z) = A_\textrm{x}(x)A_\textrm{z}(z)$. (We can choose $A_\textrm{x}(x)$ such that it is dimensionless and takes peak value of unity.) Furthermore, for target modes like Gaussian beams, the decomposition $A_\textrm{t}(x, z) = A_\textrm{tx}(x)A_\textrm{tz}(z)$ also holds. Since the amplitude matching in the transverse direction is straightforward \cite{taillaert2004compact}, the remaining challenge is to design the apodized grating coupler to maximize the following overlap integral along the longitudinal $z$-direction:
\begin{equation}
\label{eq:overlapz}
\eta_\textrm{z} = \frac{1}{P_\textrm{wg} P_\textrm{t}}\Big|\int dz A_\textrm{z}(z) A_\textrm{tz}(z) \Big|^2.
\end{equation}
Equation (\ref{eq:overlapz}) is equivalent to the coupling efficiency of a one-dimensional grating coupler system, where both the grating coupler and the target mode extend uniformly along the $x$-direction. The power terms $P_\textrm{wg}$ and $P_\textrm{t}$ then have units of power per unit length along the $x$-direction.
Due to the Cauchy-Schwarz inequality
\begin{align}
\label{eq:CauchySchwarz}
\eta_\textrm{z} \leq \frac{\int dz |A_\textrm{z}(z)|^2 \int dz |A_\textrm{tz}(z)|^2}{P_\textrm{wg} P_\textrm{t}},
\end{align}
unity coupling efficiency is achieved only if the amplitude matching is achieved and the guided power is entirely scattered to one side of the grating coupler, i.e.
\begin{align}
\label{eq:CauchySchwarz_A}
A_\textrm{z}(z) & = C_1 A_\textrm{tz}(z) ,\\
\label{eq:CauchySchwarz_Pwg}
P_\textrm{wg} & = \int dz |A_\textrm{z}(z)|^2,
\end{align}
where $C_1$ is a constant, and $P_\textrm{t} = \int dz |A_\textrm{tz}(z)|^2$ is always satisfied.
\begin{figure}
\centering
\includegraphics[width = 0.76\linewidth]{appodized_grating_schematic_3.png}
\caption{Schematic of an apodized grating coupler (a) and its cross section (b).}
\label{fig:schematic}
\end{figure}
Based on the discussion above, the well-established model, which we refer to as the \textit{ideal model} in this paper, for such a one-dimensional grating coupler system is summarized as follows. The apodized grating coupler (Fig. \ref{fig:schematic}) is modeled as a continuous scatterer whose scattering strength depends on $z$, such that
\begin{align}
\label{eq:alpha_power_decay}
\frac{dP(z)}{dz} = - 2\alpha(z) P(z),
\end{align}
where $P(z)$ is the remaining power in the guided mode and $\alpha(z)$ is the scattering strength of the grating. The {ideal model} also assumes no reflection in the grating coupler region. The intensity of the scattering light is
\begin{equation}
\label{eq:scattered_light_1}
S(z) = 2\alpha(z)P(z).
\end{equation}
To maximize the coupling efficiency, the amplitude matching condition (\ref{eq:CauchySchwarz_A}) should be satisfied, such that $S(z) = S_\textrm{t}(z)$, where $S_\textrm{t}(z)$ is the target intensity distribution. Together with (\ref{eq:alpha_power_decay}) and (\ref{eq:scattered_light_1}), we find that
\begin{align}
\frac{dP(z)}{dz} & = -S_\textrm{t}(z), \\
\label{eq:P_1} P(z) & = P(-\infty) - \int_{-\infty}^z S_\textrm{t}(t) dt, \\
\label{eq:alpha_1} \alpha(z) & = \frac{1}{2}\frac{S_\textrm{t}(z)}{P(-\infty) - \int_{-\infty}^z S_\textrm{t}(t) dt},
\end{align}
where $P(-\infty) = P_\textrm{wg}$ and $-\infty$ ($+\infty$) denotes any position before (after) the grating coupler region. Equation (\ref{eq:alpha_1}) resembles Eq. (3) in \cite{taillaert2004compact}.
Furthermore, if the guided power is entirely extracted, $P(-\infty) = \int_{-\infty}^{+\infty} S_\textrm{t}(t)dt$. Therefore, the scattering strength is
\begin{equation}
\label{eq:alpha_complete_scattering}
\alpha(z) = \frac{1}{2}\frac{S_\textrm{t}(z)}{\int_{z}^\infty S_\textrm{t}(t) dt},
\end{equation}
which is consistent with Eq. (3) in \cite{mekis2010grating}.
On the other hand, if only a portion ($\zeta$) of the guided power is scattered by the grating coupler,
\begin{equation}
\zeta P(-\infty) = \int_{-\infty}^{+\infty} S_\textrm{t}(t)dt,
\end{equation}
the scattering strength (\ref{eq:alpha_1}) becomes
\begin{equation}
\label{eq:alpha_partial_scattering}
\alpha(z) = \frac{1}{2}\frac{\zeta S_\textrm{t}(z)}{\int_{-\infty}^\infty S_\textrm{t}(t)dt - \zeta \int_{-\infty}^z S_\textrm{t}(t) dt},
\end{equation}
which is used in \cite{mehta2017precise, nadovich2017focused}. This scattering strength is regarded as the optimum for a fixed scattered portion $\zeta$, but obviously not the optimal scattering strength in general, since part of the power remains in the waveguide (i.e. $\zeta \neq 1$). In this paper, we refer to the scattering strength defined in either (\ref{eq:alpha_complete_scattering}) or (\ref{eq:alpha_partial_scattering}) as the \textit{ideal scattering strength}.
From (\ref{eq:alpha_complete_scattering}) and (\ref{eq:alpha_partial_scattering}), we find that multiplying $S_\textrm{t}(z)$ by a constant factor has no influence on the optimal scattering strength. Thus, we can simply take $S_\textrm{t}(z) = |A_\textrm{tz}(z)|^2$.
Moreover, if only a portion ($\xi$) of the guided power is scattered to the target mode side, where $\xi$ is independent of $z$, the optimal scattering strength is unchanged, and the optimal coupling efficiency is reduced by a factor of $\xi$.
We also find that the numerator and denominator in (\ref{eq:alpha_complete_scattering}) both approach $0$ as $z$ increases. When the target mode is a Gaussian beam, $\alpha(z)$ is a monotonically increasing function of $z$ \cite{mekis2010grating}. However, the scattering strength cannot be arbitrarily large in practice; (\ref{eq:alpha_partial_scattering}) is therefore sometimes used instead, because for $\zeta < 1$ the resulting $\alpha(z)$ approaches 0 as $z$ approaches infinity and is thus upper bounded \cite{nadovich2017focused}.
Due to this property, (\ref{eq:alpha_partial_scattering}) is used when the fidelity of the output mode, which is the similarity between the output mode and the target mode, is more important than the coupling efficiency \cite{mehta2017precise, nadovich2017focused}.
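To make the behaviour of the {ideal scattering strength} concrete, the following sketch evaluates (\ref{eq:alpha_complete_scattering}) and (\ref{eq:alpha_partial_scattering}) for a Gaussian target intensity on a finite grid; the beam parameters and grid are illustrative assumptions, not taken from any specific design.
\begin{verbatim}
import numpy as np

w0, z0 = 5.2, 10.0                  # assumed waist and beam center (um)
z = np.linspace(0.0, 20.0, 2001)
dz = z[1] - z[0]
S_t = np.exp(-2.0 * (z - z0) ** 2 / w0 ** 2)  # Gaussian target intensity

tail = (np.cumsum(S_t[::-1]) * dz)[::-1]      # int_z^inf   S_t dt
head = np.cumsum(S_t) * dz                    # int_-inf^z  S_t dt
total = S_t.sum() * dz

alpha_full = 0.5 * S_t / tail                 # complete extraction; diverges
zeta = 0.95                                   # assumed scattered portion
alpha_part = 0.5 * zeta * S_t / (total - zeta * head)  # partial; bounded
\end{verbatim}
Plotting \texttt{alpha\_full} and \texttt{alpha\_part} reproduces the qualitative behaviour discussed above: the former grows without bound towards the end of the grating, while the latter remains finite.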
\section{Theory and Algorithm}
\label{sec:theory}
The previous {ideal model} reviewed in Sec. \ref{sec:review} generally gives an {ideal scattering strength} that starts from zero and increases to a large value for a long grating.
In this section, we present our extension to the previous {ideal model} by taking the upper and lower bound of the scattering strength into account.
We consider a grating coupler with length $L$ whose scattering strength is upper- and lower-bounded ($\alpha \in [\alpha_{\textrm{min}}, \alpha_{\textrm{max}}] \cup \{0\}$).
Below we refer to such a range of $\alpha$ as the \textit{feasible range}. The scattering strength certainly cannot be arbitrarily large for a given index contrast in a grating. Also, since there is usually a constraint on the minimum feature size of the grating, the scattering strength cannot be arbitrarily small either if it is non-zero. On the other hand, it can take the value of zero, which corresponds to a uniform region with no grating. Here, by ``lower bound'', we refer only to non-zero values of $\alpha$.
As a starting point, in this section, we consider only the one-dimensional grating coupler system and focus only on the amplitude matching.
With these simplifications, the coupling efficiency is
\begin{equation}
\label{eq:coupling_finite_length}
\eta = \xi \Big|\int_0^L \sqrt{S(z)} A_\textrm{t}(z)dz \Big|^2,
\end{equation}
where $\xi$, describing the directivity of the grating \cite{marchetti2019coupling}, is the portion of light scattered towards the side of the target mode, and the grating coupler is placed in $z \in [0, L]$. Comparing with (\ref{eq:overlapz}), we set the power in the guided mode at $z=0$ to be unity. The amplitude of the target mode is normalized such that $P_\textrm{t}=1$, and the subscript $z$ is omitted for simplicity.
The optimized grating coupler, characterized by $\alpha(z)$, should maximize the coupling efficiency. In practice, the portion of light that is scattered towards the target side of the grating is determined by the technology, such as layer thicknesses and etching depth \cite{wang2005embeddedslant, bozzola2015optimising, marchetti2017high, roelkens2006highoverlay}, and can be optimized separately. Thus, we take $\xi$ as a constant independent of the grating apodization.
With a $z$-dependent scattering strength, the power remaining in the guided mode and the scattering intensity are obtained from (\ref{eq:alpha_power_decay}) and (\ref{eq:scattered_light_1}):
\begin{align}
\label{eq:power_remaining}
P(z) & = \exp\Big[ -2\int_0^z \alpha(t)dt \Big], \\
\label{eq:scattered_light_2}
S(z) & = 2\alpha(z)\exp\Big[ -2\int_0^z \alpha(t)dt \Big].
\end{align}
Since both $S(z)$ and $A_\textrm{t}(z)$ are real and $\xi$ is a constant, we find that to maximize the coupling efficiency is equivalent to maximize the term in the absolute sign of (\ref{eq:coupling_finite_length}):
\begin{equation}
\label{eq:f}
f = \int_0^L \sqrt{2\alpha(z)} \exp\Big[ -\int_0^z \alpha(t)dt \Big] A_\textrm{t}(z) dz.
\end{equation}
Since $f$ is a functional of $\alpha(z)$, the optimization problem can be formulated as:
\begin{align}
\label{eq:optimization}
\begin{split}
\textrm{Objective:} \; & \textrm{max} \, f(\alpha(z)), \\
\textrm{Subject to:} \; & \alpha(z) \in [\alpha_{\textrm{min}}, \alpha_{\textrm{max}}]\cup\{0\}.
\end{split}
\end{align}
We now prove that the constrained optimization problem has a global optimum given a target mode, by first showing that the problem exhibits optimal substructure \cite{cormen2009introduction}.
Optimal substructure refers to the fact that the optimal scattering strength for the whole grating must also be the optimal scattering strength for any tail portion of the grating coupler, i.e. the grating segment $[z, L]$ for any $z$. The existence of the optimal substructure allows us to derive a recursive relation to determine the optimal scattering strength, since the optimal scattering strength at position $z$ depends only on the target mode and the optimal scattering strength after $z$. Therefore, we can determine the optimal scattering strength from the end to the beginning of the grating, and we show that the derived scattering strength is the global optimum.
To illustrate the optimal substructure of the optimization problem,
we separate $f$ into two parts:
\begin{align}
\label{eq:f_separation}
f & = f^-(z) + \exp \Big [{-\int_0^{z}\alpha(t)dt} \Big ] f^+(z),
\end{align}
where
\begin{align}
\label{eq:f_minus}
f^-(z) & = \int_0^{z} \sqrt{2\alpha(s)} \exp\Big[{-\int_0^s \alpha(t)dt}\Big] A_\textrm{t}(s) ds, \\
\label{eq:f_plus}
f^+(z) & = \int_{z}^L \sqrt{2\alpha(s)} \exp \Big[{-\int_z^s \alpha(t)dt}\Big] A_\textrm{t}(s) ds.
\end{align}
$f^-(z)$ depends only on the grating scattering strength between 0 and $z$, while $f^+(z)$ depends only on the grating scattering strength between $z$ and $L$. Comparing (\ref{eq:f}) and (\ref{eq:f_plus}), we find that $f = f^+(0)$. Also, $\xi|f^+(z)|^2$ has the physical meaning of the coupling efficiency of the grating coupler with grating region restricted to $[z, L]$. To maximize $f$, the scattering strength must also maximize $f^+(z)$ for any $z \in [0, L]$. This can be straightforwardly proved by contradiction (Appendix \ref{app:optimal_substructure}). Thus, the optimization problem has optimal substructure.
Due to the optimal substructure, we can adopt a recursive approach to study the sub-problem of finding the optimal scattering strength in region $[z, L]$ to maximize $f^+(z)$.
The relevant question is: Assuming the optimal scattering strength in $[z+\Delta z, L]$ has been found to be $\alpha^\star(s)$ for all $s \in [z+\Delta z, L]$ such that $f^+(z+\Delta z)$ is maximized, what is the optimal $\alpha(s)$ in $s \in [z, z+\Delta z]$ such that $f^+(z)$ is maximized? Since $\Delta z$ is infinitesimal, we can approximate the scattering strength in $[z, z+\Delta z]$ as a constant $\alpha_z$. Therefore, $f^+(z)$ is a function of $\alpha_z$ only, and hence $\alpha_z$ can be determined. Thus, we obtain the optimal scattering strength in the region $[z, L]$. In fact, one can further show that $f^+(z)$ is a concave function of $\alpha_z$ and reaches its maximum either at the extreme point or at a boundary of the feasible range.
With this recursive approach starting from the end of the grating, we can get the optimal scattering strength for the whole grating coupler. The above derivation also proves that the obtained scattering strength is a global optimum.
We proceed to give the explicit recursive relation to obtain the optimal scattering strength in $[z, z+\Delta z]$.
The dependence of $f^+(z)$ on $\alpha_z$ is:
\begin{align}
\label{eq:f_plus_deltaz}
\begin{split}
f^+(z; \alpha_z) = \,& \sqrt{2\alpha_z} \int_0^{\Delta z} \exp(-\alpha_z t) A_\textrm{t}(z + t) dt \\ \,& + \exp(-\alpha_z \Delta z)f^+(z + \Delta z)
\end{split}
\end{align}
Taking the derivative with respect to $\alpha_z$, where we assume $\alpha_z > 0$ and postpone treating the case $\alpha_z = 0$, we find
\begin{align}
\label{eq:f_plus_derive_1}
\begin{split}
\frac{\partial f^+(z; \alpha_z)}{\partial \alpha_z} = &\, \Big[ \frac{1}{\sqrt{2 \alpha_z}} A_\textrm{t}(z) - f^+(z + \Delta z)\Big]\Delta z \\ & \, + O[(\Delta z)^2] ,
\end{split}\\
\label{eq:f_plus_derive_2}
\frac{\partial^2f^+(z; \alpha_z)}{\partial \alpha_z^2} = &\, -(2\alpha_z)^{-\frac{3}{2}} A_\textrm{t}(z) \Delta z + O[(\Delta z)^2].
\end{align}
As $\Delta z \to 0$, $O[(\Delta z)^2]$ terms are negligible comparing with the terms proportional to $\Delta z$. To maximize $f^+(z)$, we set $\partial f^+(z) /\partial \alpha_z = 0$ and find the condition for the extreme point:
\begin{equation}
\label{eq:alphaz_extreme}
\frac{1}{\sqrt{2 \alpha_z}} A_\textrm{t}(z) = f^+(z).
\end{equation}
The second derivative (\ref{eq:f_plus_derive_2}) is negative, which implies that $f^+(z)$ is a concave function of $\alpha_z$ and the extreme point is a maximum.
Thus, without considering the feasible range, the optimal scattering strength is:
\begin{align}
\label{eq:alpha_opt}
\alpha_z = \frac{A_\textrm{t}^2(z)}{2 \Big\{\int_{z}^L \sqrt{2 \alpha^\star(s)}\exp \Big[ -\int_{z}^s \alpha^\star(t)dt \Big] A_\textrm{t}(s) ds \Big\}^2},
\end{align}
where the superscript $\star$ indicates the optimum within the feasible range. Since $\alpha(z)$ should not have singularities, the integrations starting from $z$ in (\ref{eq:alpha_opt}) are equivalent to those starting from $z^+$. Equation (\ref{eq:alpha_opt}) suggests that $\alpha_z$ depends on $\alpha^\star(s)$ for $s \in (z, L]$, but not $s \in [0, z)$. Hence, (\ref{eq:alpha_opt}) can be regarded as a recursive relation. Furthermore, since $f^+(z)$ is a concave function of $\alpha_z$, the maximum is at the boundary if $\alpha_z \notin [\alpha_\textrm{min}, \alpha_\textrm{max}]$.
Moreover, as $\alpha(z)=0$ is also eligible, one needs to compare $f^+(z; \alpha_z=\alpha_\textrm{min})$ and $f^+(z; \alpha_z=0)$ to choose between $0$ and $\alpha_\textrm{min}$ if $\alpha_z$ obtained from (\ref{eq:alpha_opt}) is smaller than $\alpha_\textrm{min}$. In summary, the optimal scattering strength at $z$ within the feasible range is
\begin{align}
\label{eq:alphaconstraint}
\alpha^\star(z) =
\begin{cases}
0 & \alpha_z < \alpha_{\textrm{min}} \textrm{, } f^+(z; \alpha_\textrm{min}) \leq f^+(z; 0) \\
\alpha_\textrm{min} & \alpha_z < \alpha_{\textrm{min}} \textrm{, } f^+(z; \alpha_\textrm{min}) > f^+(z; 0) \\
\alpha_z & \alpha_{\textrm{min}} \leq \alpha_z \leq \alpha_{\textrm{max}} \\
\alpha_{\textrm{max}} & \alpha_{\textrm{max}} < \alpha_z
\end{cases}.
\end{align}
Equations (\ref{eq:alpha_opt}) and (\ref{eq:alphaconstraint}) are the recursive relation for the optimal scattering strength.
In the numerical implementation of solving for $\alpha(z)$, we take a finite but small step $\Delta z$.
We discretize the interval $[0, L]$ into $N$ equal segments, and denote $\Delta z = L/N$, $z_i = \frac{i}{N}L$, $\alpha_i = \alpha_{z_i}$, $\alpha_i^\star = \alpha^\star(z_i)$, and $A_i = A(z_i)$, where $i = 0, 1, \hdots, N$. The discretized version of the extreme point condition (\ref{eq:alphaz_extreme}) is
\begin{align}
\label{eq:alpha_extreme_discrete}
\begin{split}
& \, \frac{1}{\sqrt{2\alpha_i}}A_i = \frac{1}{2}\sqrt{2\alpha_i}A_i\Delta z \\
& + \sum_{j=i+1}^{N-1}\sqrt{2\alpha_j^\star}\exp \big[-\big(\frac{\alpha_i}{2} + \sum_{k=i+1}^{j-1}\alpha_k^\star + \frac{\alpha_j^\star }{2}\big) \Delta z \big] A_j \Delta z \\
& + \frac{1}{2}\sqrt{2\alpha_N^\star} \exp \big[-\big( \frac{\alpha_i}{2} + \sum_{k=i+1}^{N-1} \alpha_k^\star + \frac{\alpha_N^\star}{2} \big) \Delta z \big] A_N \Delta z.
\end{split}
\end{align}
for $i = 0, 1, \hdots, N-1$.
Equation (\ref{eq:alpha_extreme_discrete}) is a transcendental equation for $\alpha_i$, which can usually be solved iteratively in only a few steps.
The scattering strength at the end of the grating coupler must be as large as possible in order to minimize the remaining guided power.
Therefore,
\begin{equation}
\label{eq:alpha_N}
\alpha^\star_N = \alpha_{\textrm{max}}.
\end{equation}
Thus, we can numerically solve (\ref{eq:alpha_extreme_discrete}) from $i = N-1$ to $i = 0$.
For each $i$, after solving (\ref{eq:alpha_extreme_discrete}), we obtain the optimum $\alpha_i^\star$ based on the condition (\ref{eq:alphaconstraint}), such that $\alpha_i^\star \in [\alpha_\textrm{min}, \alpha_\textrm{max}]$ or $\alpha_i^\star = 0$.
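For concreteness, a minimal Python sketch of this backward recursion is given below. It uses a midpoint approximation for the local integral and a simple fixed-point iteration for the transcendental equation (\ref{eq:alpha_extreme_discrete}), followed by the clipping rule (\ref{eq:alphaconstraint}); it is an illustration of the algorithm under these discretization assumptions, not a drop-in design tool.
\begin{verbatim}
import numpy as np

def f_plus_step(alpha, A_i, dz, f_next):
    # f^+(z_i) when alpha is constant over [z_i, z_i + dz], given
    # f^+(z_i + dz) = f_next (midpoint rule for the local integral).
    local = np.sqrt(2.0 * alpha) * np.exp(-0.5 * alpha * dz) * A_i * dz
    return local + np.exp(-alpha * dz) * f_next

def optimal_alpha(A, dz, a_min, a_max, n_iter=20):
    # Backward recursion for the bounded optimal scattering strength.
    # A: samples of the target amplitude A_t(z_i) on a uniform grid.
    N = len(A)
    alpha = np.zeros(N)
    alpha[-1] = a_max                   # scatter maximally at the grating end
    f_next = f_plus_step(a_max, A[-1], dz, 0.0)
    for i in range(N - 2, -1, -1):
        a = a_max                       # fixed point of the extreme condition
        for _ in range(n_iter):
            f_i = max(f_plus_step(a, A[i], dz, f_next), 1e-12)
            a = A[i] ** 2 / (2.0 * f_i ** 2)
        if a < a_min:                   # clip to the feasible range:
            # compare f^+(z; a_min) against f^+(z; 0) = f_next
            a = a_min if f_plus_step(a_min, A[i], dz, f_next) > f_next else 0.0
        elif a > a_max:
            a = a_max
        alpha[i] = a
        f_next = f_plus_step(a, A[i], dz, f_next)
    return alpha
\end{verbatim}
In practice one would verify the convergence of the inner iteration and refine $\Delta z$; for smooth target amplitudes a few iterations suffice, consistent with the discussion above.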
As a final remark, the position of the target beam is usually also a design parameter, which can be taken into account in our model straightforwardly. Since the maximal $f$ can be obtained for any target beam position ($z_0$), searching for the optimal $z_0$ is a single-variable optimization problem. It can be solved by either brute force or gradient descent.
In summary, we extend the {ideal model} by considering explicitly the upper and lower bounds of the scattering strength. We prove that the scattering strength given by (\ref{eq:alpha_opt}) and (\ref{eq:alphaconstraint}) is the optimum. An algorithm is also provided for solving $\alpha(z)$ numerically.
\section{Design procedure}
\label{sec:design_procedure}
In this section, we outline the deterministic design procedure towards generating a highly efficient grating coupler corresponding to available fabrication technology. This procedure is similar to previous work with the {ideal model} \cite{taillaert2004compact, marchetti2019coupling}. We repeat it for completeness and discuss details of maintaining the phase matching.
For illustration, we demonstrate the design procedure by designing a one-dimensional grating coupler (Fig. \ref{fig:schematic}(b)).
\textit{Step 1: Specify technology and experimental conditions.}
The fabrication technology and experimental conditions constrain the achievable parameters for the design. Such parameters include waveguide core material, core thickness, top and bottom cladding material and thickness, and minimal feature size. The industry-standard testing platform gives further requirements for the fiber incident angle.
For demonstration, we choose a silicon-on-insulator platform with 220 nm silicon thickness and 2 $\mu$m bottom oxide thickness. The top oxide cladding is assumed to be thick. The etch depth is 70 nm and the minimal feature size is 80 nm, which can be achieved by electron beam lithography or deep-ultraviolet photolithography \cite{selvaraja2009fabrication, hu2004sub}. The single-mode fiber incident angle is 10$^\circ$ in air, which corresponds to $\theta=6.9^\circ$ in the silicon dioxide cladding. The free-space wavelength is 1550 nm. The target mode in the single-mode fiber can be approximated as a Gaussian beam with beam waist $w_0 = 5.2$ $\mu$m, and we consider TE polarization (electric field polarized in the $x$-direction).
This fabrication technology and these experimental conditions are chosen since they are close to the industrial standard. Nevertheless, our model and design procedure can be applied to other technologies. The parameters of the fabrication technology can also be optimized by comparing the optimal grating couplers under different technologies.
\begin{figure}
\centering
\includegraphics[width = 0.86\linewidth]{mapping_and_pitch_2.png}
\caption{(a) Schematic of a unit cell in a grating coupler. $\Lambda$ is the pitch of the grating, $l_e$ is the etch length, $d_e$ is the etch depth. The effective indices of the guided modes in the etched part and unetched part are labeled as $n_{e}$ and $n_\textrm{wg}$. (b) Pitch as a function of the etch length that satisfies the phase matching condition (\ref{eq:pitch_analytical}). (c) Grating scattering strength as a function of the etch length. (b) and (c) are for a 220 nm thick silicon slab with 70 nm etch depth, surrounded by silica.}
\label{fig:mapping}
\end{figure}
\textit{Step 2: Generate mapping between scattering strength and grating structures.}
A mapping from the geometrical parameters of a single unit cell of the grating to the scattering strength $\alpha$ is required for the design.
Although sophisticated scattering elements can be used in a single unit cell \cite{ding2014fully, chen2016high, halir2009waveguide, purwaha2019broadband, qin2012high, watanabe2017perpendicular},
as a simple demonstration, we choose the scattering element per unit cell as a single etch trench (Fig. \ref{fig:mapping}(a)). The relevant parameters are then the etch length, and the length of the unit cell which corresponds to the pitch of the grating.
For a given etch length, the length of the unit cell is determined by the phase-matching condition as follows.
Assume that the accumulated phase per unit cell for the guided wave is $ \phi_{\Lambda} = 2\pi n_{e}l_e/{\lambda} + 2\pi n_\textrm{wg}(\Lambda - l_e)/{\lambda}$,
where $\Lambda$ is the pitch, $l_e$ is the etch length, $\lambda$ is the free space wavelength, $n_{e}$ and $n_\textrm{wg}$ are the effective indices of the etched and unetched slab waveguide \cite{michaels2018inverse}. To satisfy the phase matching condition, $\phi_\Lambda = 2\pi + 2\pi n_c \Lambda \sin\theta / \lambda$, where $n_c$ is the refractive index of the cladding, and $\theta$ is the target beam propagation angle in the cladding layer. With these assumptions, the grating pitch is
\begin{equation}
\label{eq:pitch_analytical}
\Lambda = \frac{\lambda + l_e (n_\textrm{wg} - n_{e})}{n_\textrm{wg} - n_c \sin{\theta}},
\end{equation}
which is illustrated in Fig. \ref{fig:mapping}(b) for the chosen fabrication technology. This prediction of grating pitch works well empirically, especially for shallow etch or low index contrast between core and cladding.
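As a quick numerical check of (\ref{eq:pitch_analytical}), the sketch below evaluates the pitch for a given etch length; the effective-index values are illustrative placeholders and should be replaced by values from a slab-mode solver for the actual layer stack.
\begin{verbatim}
import math

def grating_pitch(l_e, lam=1.55, n_wg=2.85, n_e=2.55, n_c=1.444,
                  theta_deg=6.9):
    # Phase-matched pitch (um); the index values here are assumptions.
    return (lam + l_e * (n_wg - n_e)) / \
           (n_wg - n_c * math.sin(math.radians(theta_deg)))

print(grating_pitch(0.26))  # pitch for a 260 nm etch trench
\end{verbatim}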
We then numerically simulate a long uniform grating for each etch length and the corresponding pitch.
The scattering strength is extracted by fitting the remaining power in the guided mode along the propagation direction to an exponential decay.
In such a simulation, the grating should be long enough such that the guided power at the end of the grating is negligible. The mapping between the etch length and scattering strength is shown in Fig. \ref{fig:mapping}(c).
The simulation of the uniform grating also provides information on the emission phase of the grating with a specific etch length. The emission phase describes the phase of the scattered field, which can be extracted from the following field overlap integral with the target mode:
\begin{equation}
\label{eq:emission_phase}
f_{\textrm{e}} = \frac{1}{2} \int E(z)\times H_{\textrm{t}}^*(z) dz,
\end{equation}
where $E(z)$ is the scattered field. The target mode position is
kept invariant for simulations with different etch lengths. The emission phase $\phi_\textrm{e} = \angle f_\textrm{e}$ can be used later to adjust the separation between neighboring grating trenches with different etch lengths.
\textit{Step 3: Determine the scattering strength.}
With the mapping from the grating structures to the scattering strength and the parameters defined by the fabrication technology, we can find the optimal scattering strength using the algorithm presented in Sec. \ref{sec:theory}.
For the chosen technology, the mapping shown in Fig. \ref{fig:mapping}(c) indicates that $\alpha_\textrm{min} = 0.02$ $\mu$m$^{-1}$, which is limited by the 80 nm minimal feature size, and $\alpha_\textrm{max} = 0.09$ $\mu$m$^{-1}$, which is achieved at an etch length of $l_e = 0.26$ $\mu$m. The grating length is chosen to be longer than the Gaussian beam size ($L=17$ $\mu$m). Using these parameters, our model gives the optimal scattering strength, which is shown in Fig. \ref{fig:design_SOI}(a), and the optimal beam center position, which is at $z=6.3$ $\mu$m.
As a comparison, we also show the scattering strength given by the {ideal model}, illustrated by the black dashed curve in Fig. \ref{fig:design_SOI}(a). It grows monotonically and exceeds the upper bound of the scattering strength even before the beam center.
\begin{figure}
\centering
\includegraphics[width = 0.64\linewidth]{design_SOI_220_2.png}
\caption{(a) The optimal scattering strength given by our model (solid blue curve), in comparison with the {ideal model} (dashed black curve), which grows monotonically with $z$. (b) The etch length of the grating obtained directly from the scattering strength (solid blue curve) and for each trench in the grating (orange circle). (c) The adjustment of the separation between neighbor grating trenches. (d) Cross section of the grating.}
\label{fig:design_SOI}
\end{figure}
\textit{Step 4: Determine each etch length and position.}
Once the optimal scattering strength is obtained, we use the mapping (Fig. \ref{fig:mapping}(c)) to retrieve the corresponding etch length at $z$. Since the minimal feature size is 80 nm and the maximal scattering strength is achieved at an etch length of 260 nm, we only use the mapping for $80\textrm{ nm}\leq l_e \leq 260\textrm{ nm}$. The solid curve in Fig. \ref{fig:design_SOI}(b) shows the etch length at all possible positions along the grating coupler.
To fix the position of each grating trench, we start from $z=0$, choose the corresponding etch length predicted by our model, and use the estimated grating pitch (\ref{eq:pitch_analytical}) to get the position of the next grating trench, until the end of the grating. Additional care needs to be taken when neighboring grating trenches have different etch lengths, due to their different emission phases.
When the adjacent trenches, denoted as the $i^{\textrm{th}}$ and $(i+1)^{\textrm{th}}$ trenches, have different emission phases, $\phi_{e,i}$ and $\phi_{e, i+1}$, their separation should have a small deviation from $\Lambda(z_i) - l_e(z_i)$ to restore the phase matching condition. This small change ($\Delta l$) is
\begin{equation}
\label{eq:delta_l}
\Delta l_i = \frac{\lambda}{2\pi (n_{\textrm{wg}} - n_\textrm{c}\sin\theta)}(\phi_{e,i} - \phi_{e, i+1}).
\end{equation}
With this correction, the separation between adjacent trenches becomes $\Lambda(z_i) - l_e(z_i) + \Delta l_i$. This adjustment of the grating trench separation is shown in Fig. \ref{fig:design_SOI}(c) for the demonstrated example. After this adjustment, we obtain the position and etch length of each grating trench, as shown in Fig. \ref{fig:design_SOI}(b), which completes the design procedure. A cross section view of the grating is shown in Fig. \ref{fig:design_SOI}(d) for illustration.
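The trench-placement loop of this step can be summarized by the sketch below; here \texttt{alpha\_of\_z} is the optimal profile from Step 3, while \texttt{alpha\_to\_le}, \texttt{le\_to\_pitch} and \texttt{le\_to\_phase} are hypothetical names standing for the mappings extracted in Step 2, assumed to be provided, e.g. as interpolants.
\begin{verbatim}
import math

def place_trenches(alpha_of_z, alpha_to_le, le_to_pitch, le_to_phase,
                   L, lam, n_wg, n_c, theta):
    # Walk along the grating, converting the scattering strength into
    # trench positions and etch lengths; the trench-to-trench distance
    # is the nominal pitch plus the emission-phase correction Delta l.
    trenches, z = [], 0.0
    while z < L:
        le = alpha_to_le(alpha_of_z(z))
        trenches.append((z, le))
        z_next = z + le_to_pitch(le)
        if z_next < L:
            le_next = alpha_to_le(alpha_of_z(z_next))
            dphi = le_to_phase(le) - le_to_phase(le_next)
            z_next += lam * dphi / (2.0 * math.pi
                                    * (n_wg - n_c * math.sin(theta)))
        z = z_next
    return trenches
\end{verbatim}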
Similar grating designs, where an apodized section is followed by a uniform grating, have been presented in \cite{chen2010apodized}. Our model here provides a rigorous foundation for such designs.
\textit{Post verification and further optimization}
Numerical simulation of this deterministically designed grating coupler shows a coupling efficiency of 61.4\%, which is comparable to grating coupler designs with sophisticated optimization \cite{bozzola2015optimising}. The major loss is due to the scattering towards the substrate, which can be suppressed by improving the grating directivity, such as using a thicker silicon core layer, placing a bottom mirror, or using slanted gratings or two grating layers \cite{bozzola2015optimising, marchetti2017high, wang2005embeddedslant, ding2014fully, su2018fully, michaels2018inverse, van2007efficient}. To further optimize the grating coupler, one can use this deterministic design as an initial point, since a good initial guess is usually important for many optimization algorithms \cite{michaels2018inverse}.
\section{Two-dimensional apodized non-focusing grating couplers}
\label{sec:2d_nonfocus}
In this section, we investigate the optimal scattering strength in a two-dimensional non-focusing grating coupler, as shown in Fig. \ref{fig:schematic}(a),
which can be used to couple to high-order modes of a multi-mode fiber or vortex beams.
In these cases, the apodization can be different at different transverse positions, which implies that the grating trenches in Fig. \ref{fig:schematic}(a) may not be parallel or continuous. We discuss the {ideal model} and our model in sequence.
We assume that the power flow of the guided mode is along the $z$-direction with no power flow in the transverse $x$-direction. This assumption is approximately valid in non-focusing grating couplers. As in Secs. \ref{sec:review} and \ref{sec:theory}, we focus only on the amplitude matching and assume that the phase matching and polarization matching are satisfied. Thus, the coupling efficiency is given by (\ref{eq:couplingefficiency2}) and the terms in the integrand ($A(x, z)$ and $A_\textrm{t}(x, z)$) are real. The coupling efficiency is maximized when the overlap integral $\int dz A(x, z) A_\textrm{t}(x, z)$ is maximized at each $x$. Therefore, the problem of finding the optimal scattering strength can be divided into sub-problems of finding the optimal scattering strength of a one-dimensional grating at each $x$-cut.
The scattering strength of the {ideal model} is a direct extension of (\ref{eq:alpha_1}):
\begin{align}
\label{eq:alpha_2d_1}
\alpha(x, z) = \frac{1}{2}\frac{S_\textrm{t}(x, z)}{P(x, -\infty) - \int_{-\infty}^z S_\textrm{t}(x, t)dt},
\end{align}
where $P(x, -\infty)$ is the guided power per unit length before the grating region and at transverse position $x$, and the total guided power is $P_\textrm{wg} = \int dx P(x, -\infty)$. Equation (\ref{eq:alpha_2d_1}) should be applied with caution, since it is not obvious that multiplying $S_\textrm{t}(x, z)$ by a constant has no influence on $\alpha(x, z)$. Unlike in the one-dimensional counterpart (\ref{eq:alpha_1}), where $P(-\infty)$ can be rescaled depending on $S_\textrm{t}(z)$, in the two-dimensional grating the initial guided powers at different transverse positions are related and hence cannot be scaled independently. In fact, to use (\ref{eq:alpha_2d_1}), one should take $S_\textrm{t}(x, z) = b(x)|A_\textrm{t}(x, z)|^2$ where the coefficient $b(x)$ depends on $P(x, -\infty)$.
Therefore, we proceed to transform (\ref{eq:alpha_2d_1}) into a form such that the scattering strength $\alpha(x, z)$ is invariant when $S_\textrm{t}(x, z)$ is multiplied by a constant, and take $S_\textrm{t}(x, z) = |A_\textrm{t}(x, z)|^2$.
In the case that all the guided power is extracted, $P(x, -\infty) = \int_{-\infty}^{+\infty}S_\textrm{t}(x, t)dt$, the scattering strength is
\begin{equation}
\label{eq:alpha_2d_complete}
\alpha(x, z) = \frac{1}{2}\frac{S_\textrm{t}(x, z)}{\int_z^{+\infty}S_\textrm{t}(x, t)dt}.
\end{equation}
The scattered intensity distribution is
\begin{align}
\label{eq:S_2d_nonfocus_complete}
S(x, z) = P(x, -\infty)\frac{S_\textrm{t}(x, z)}{\int_{-\infty}^{+\infty} S_\textrm{t}(x, t) dt}
\end{align}
We assume the portion of the scattered power that is scattered towards the side of the target mode is $\xi$. Then, the amplitude of the scattered field is $A(x, z) = \sqrt{\xi S(x, z)}$. Substituting into (\ref{eq:couplingefficiency2}), the coupling efficiency with such optimal scattering strength is
\begin{align}
\label{eq:coupling_efficiency_2d_complete}
\eta = \frac{\xi}{P_\textrm{wg}P_\textrm{t}}\Big| \int dx \sqrt{P(x, -\infty)} \sqrt{\int_{-\infty}^{+\infty} dz S_\textrm{t}(x, z)}\Big|^2
\end{align}
Equation (\ref{eq:coupling_efficiency_2d_complete}) can be utilized to find the optimal waveguide cross section in the transverse direction.
When only part of the guided power is extracted, the extension of (\ref{eq:alpha_partial_scattering}) to the two-dimensional model is non-trivial and has not been considered thoroughly in previous studies. To achieve the amplitude matching between the scattered field and the target mode, the portion of the guided power scattered by the grating ($\zeta$) should be a function of the transverse position:
\begin{align}
\label{eq:zeta_x}
\zeta(x) = \frac{\int_{-\infty}^{\infty}S(x, t)dt}{P(x, -\infty)}.
\end{align}
To match $S(x, z)$ with $S_\textrm{t}(x, z)$, the choice of $\zeta(x)$ can be obtained in the following approach. Define $\zeta_\textrm{t}(x)$ as
\begin{align}
\label{eq:zeta_t}
\zeta_\textrm{t}(x) = \frac{\int_{-\infty}^{\infty}S_\textrm{t}(x, t)dt}{P(x, -\infty)}.
\end{align}
We have the freedom of choosing the portion of scattered power at a specific transverse position, for instance, at $x = x_\textrm{m}$, set $\zeta(x_\textrm{m}) = \zeta_\textrm{m}$. Then, $\zeta(x)$ is fixed through
\begin{align}
\label{eq:zeta_choice}
\zeta(x) = \zeta_\textrm{m}\frac{\zeta_\textrm{t}(x)}{\zeta_\textrm{t}(x_\textrm{m})}.
\end{align}
A practical choice of $x_\textrm{m}$ is the maximum of $\zeta_\textrm{t}(x)$ in the range where $P(x, -\infty)$ is also substantial. Near the transverse edge of the grating, the guided power, which appears in the denominator of (\ref{eq:zeta_t}), approaches zero, and
$\zeta(x)$ determined by (\ref{eq:zeta_choice}) may be larger than unity. Nevertheless, the apodization design is insignificant in those regions due to the negligible scattering power. In those regions, we can set $\zeta(x) \lesssim 1$ if $\zeta(x)$ given by (\ref{eq:zeta_choice}) is larger than one. The portion of the total scattered power is
\begin{align}
\label{eq:zeta_total}
\zeta = \frac{1}{P_\textrm{wg}} \int dx \zeta(x) P(x, -\infty).
\end{align}
With this choice of $\zeta(x)$, the scattering strength is
\begin{align}
\label{eq:alpha_2d_partial}
\alpha(x, z) = \frac{1}{2}\frac{\zeta(x)S_\textrm{t}(x, z)}{\int_{-\infty}^{+\infty}S_\textrm{t}(x, t)dt - \zeta(x)\int_{-\infty}^z S_\textrm{t}(x, t)dt},
\end{align}
which is the two-dimensional extension of (\ref{eq:alpha_partial_scattering}). By tuning $\zeta_\textrm{m}$, one can ensure that the scattering strength is below an upper bound. The scattering strength obtained by (\ref{eq:alpha_2d_partial}) ensures amplitude matching and maximal coupling efficiency for the given portion of the scattered power ($\zeta$). The coupling efficiency is $\eta \lesssim \xi \zeta$. It is generally not the optimum when the coupling efficiency is the only objective.
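A compact sketch of this construction, from $\zeta_\textrm{t}(x)$ through the choice (\ref{eq:zeta_choice}) to the two-dimensional scattering strength (\ref{eq:alpha_2d_partial}), is given below; the target intensity and input power profiles are assumed inputs on a uniform grid, and the edge clipping follows the discussion above.
\begin{verbatim}
import numpy as np

def alpha_2d_partial(S_t, P_in, dz, i_m, zeta_m):
    # S_t[ix, iz]: target intensity; P_in[ix]: guided power per unit
    # length before the grating; i_m, zeta_m: reference x index and
    # the scattered portion chosen there.
    tot = S_t.sum(axis=1) * dz               # int S_t dz at each x
    zeta_t = tot / P_in
    zeta = zeta_m * zeta_t / zeta_t[i_m]     # Eq. for zeta(x)
    zeta = np.minimum(zeta, 0.999)           # keep zeta(x) < 1 at the edges
    head = np.cumsum(S_t, axis=1) * dz       # int_{-inf}^{z} S_t dt
    return 0.5 * zeta[:, None] * S_t / \
           (tot[:, None] - zeta[:, None] * head)
\end{verbatim}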
The extension of our model, (\ref{eq:alpha_opt}) and (\ref{eq:alphaconstraint}), to two-dimensional non-focusing gratings is straightforward. Due to the assumption of no power exchange along the transverse $x$-direction, the scattering strength $\alpha(x, z)$ at different $x$ can be obtained independently using (\ref{eq:alpha_opt}) and (\ref{eq:alphaconstraint}).
We demonstrate such a two-dimensional non-focusing grating coupler design in Sec. \ref{sec:design_oam}.
\section{Two-dimensional grating coupling to a vortex beam}
\label{sec:design_oam}
The two-dimensional extension of our model provides an approach to design apodized grating couplers for sophisticated beams.
To demonstrate this capability, we study in this section a design of a two-dimensional apodized grating coupler that couples to a Laguerre-Gaussian beam with orbital angular momentum. Such grating couplers have been studied previously \cite{zhou2019ultra, nadovich2016forked, nadovich2017focused, liu2016chip}, but their coupling efficiencies remain much lower than those achieved with a simple Gaussian beam. We demonstrate here that the coupling efficiency with a vortex beam is 57.6\%, only a few percent lower than the 61.4\% coupling efficiency with a Gaussian beam shown in Sec. \ref{sec:design_procedure}.
We choose exactly the same technology considerations as discussed in Sec. \ref{sec:design_procedure}. The Laguerre-Gaussian beam radius is $w_0 = 5.2$ $\mu$m and the topological charge of the beam is $l = 1$. The width of the input waveguide is $20$ $\mu$m ($-10 \textrm{ } \mu \textrm{m} \leq x \leq 10 \textrm{ }\mu$m), which best matches the target beam in the $x$-direction. The length of the grating along the $z$-direction is also 20 $\mu$m ($0 \leq z \leq 20$ $\mu$m). The beam center is located at $x_0=0, \, z_0=7$ $\mu$m.
The same design procedure as outlined in Sec. \ref{sec:design_procedure} is carried out for the design at each $x$. The only change is in \textit{Step 4}, where the separation between neighboring trenches is adjusted to achieve phase matching. For the vortex beam, the target mode has an additional phase
\begin{align}
\label{eq:phase_oam}
\psi(x, z) = l\varphi(x, z),
\end{align}
where $l$ is the topological charge and $\varphi(x, z) = \textrm{atan}[(x - x_0)/(z - z_0)]$. The change in separation becomes
\begin{align}
\label{eq:delta_l_oam}
\Delta l_i(x) = \frac{[\phi_{e,i}(x) - \psi_i(x)] - [\phi_{e, i+1}(x) - \psi_{i+1}(x)]}{2\pi(n_\textrm{wg} - n_\textrm{c}\sin\theta)}\lambda,
\end{align}
where $\psi_i(x) = \psi(x, z_i)$. Equation (\ref{eq:delta_l_oam}) is an extension of (\ref{eq:delta_l}) when the target mode has an extra phase in addition to the phase due to the non-zero incident angle $\theta$.
The position of the starting grating trench at each \textit{x}-cut ($z_0(x)$) is chosen such that the phase $2\pi(n_\textrm{wg} - n_\textrm{c}\sin\theta)z_0(x)/\lambda - \psi(x, z_0(x))$ is independent of $x$, which ensures that phase matching is satisfied along the transverse direction.
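As a hedged illustration of this phase-matching step, the correction (\ref{eq:delta_l_oam}) can be evaluated directly; the effective index, incidence angle, and the scattered-phase values $\phi_{e,i}$ below are hypothetical placeholders rather than values from our simulations.
\begin{verbatim}
import numpy as np

lam, n_wg, n_c, theta = 1.55e-6, 2.85, 1.0, np.deg2rad(10)  # assumed
l, x0, z0 = 1, 0.0, 7e-6

def psi(x, z):                      # extra vortex phase, as defined above
    return l * np.arctan2(x - x0, z - z0)

def delta_l(phi_i, phi_ip1, x, z_i, z_ip1):
    num = (phi_i - psi(x, z_i)) - (phi_ip1 - psi(x, z_ip1))
    return num * lam / (2 * np.pi * (n_wg - n_c * np.sin(theta)))

# placeholder scattered phases of two neighboring trenches at x = 2 um
print(delta_l(0.30, 0.10, 2e-6, 6.0e-6, 6.7e-6))
\end{verbatim}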
\begin{figure}
\centering
\includegraphics[width=0.96\linewidth]{design_SOI_220_OAM_2.png}
\caption{Grating coupler design for the Laguerre-Gaussian beam with $l=1$. (a) The optimal scattering strength obtained from our model. (b) The etched trenches on the grating coupler. (c) The amplitude distribution associated with the scattering strength in (a). (d) The amplitude of $E_x$ at a plane 1 $\mu$m above the grating. (e) The additional phase $\psi$ of the Laguerre-Gaussian beam. (f) The phase of $E_x$ at a plane 1 $\mu$m above the grating after subtracting $2\pi n_\textrm{c} \sin \theta z/\lambda$.}
\label{fig:design_oam}
\end{figure}
The scattering strength obtained by our model is shown in Fig. \ref{fig:design_oam}(a). The scattering amplitude $A(x, z) = \sqrt{S(x, z)}$ associated with this scattering strength is presented in Fig. \ref{fig:design_oam}(c). Since our objective is to maximize the coupling efficiency, rather than preserving the mode fidelity, the amplitude distribution is different from the ``donut'' distribution of the Laguerre-Gaussian mode with $l = 1$.
After determining each etch length from the scattering strength and each trench position at different $x$, we obtain the shape and position of each grating trench, as shown in Fig. \ref{fig:design_oam}(b).
As a validation of the design procedure, we simulate the designed device using the three-dimensional finite-difference time-domain method. The grating coupler is excited by the fundamental guided mode of a 20 $\mu$m wide and 220 nm thick silicon waveguide. We monitor the field at 1 $\mu$m above the grating. The amplitude and phase of the transverse electric field $E_x$ are respectively shown in Fig. \ref{fig:design_oam}(d) and (f), where the phase associated with the nonzero incident angle is subtracted. The phase of the scattered field matches well with the phase of the target beam, shown in Fig. \ref{fig:design_oam}(e). In the simulation, we observe that 62.4\% of the power is scattered upwards and the coupling efficiency is $\eta = 57.6$\%. The mode matching efficiency, which is the ratio between the coupling efficiency and the portion of light scattered upwards, is as high as 92\%. Compared with the one-dimensional grating coupler coupling to a simple Gaussian beam ($\eta = 61.4$\%), the reduction in coupling efficiency is only 3.8 percentage points. This coupling efficiency is also significantly higher than previously designed and demonstrated couplers for the vortex beam, with efficiencies of $\sim$ 32\% \cite{nadovich2017focused, nadovich2016forked}. This demonstration suggests that our model and design procedure indeed generate highly efficient grating couplers, and can be applied to achieve efficient coupling to complicated target modes.
\section{Two-dimensional apodized focusing grating couplers}
\label{sec:2d_focusing}
\begin{figure}
\centering
\includegraphics[width = 0.6\linewidth]{schematic_fanout.png}
\caption{Schematic of a focusing apodized grating coupler with a top-down view. The darker blue part indicates the etched grating trenches. The input waveguide connects with the fan-shape slab at the coordinate origin. $r$ represents the distance to the origin, and $\vartheta$ is the angle measured from $z$-axis.}
\label{fig:schematic_fanout}
\end{figure}
In this section, we consider the design of a coupler that couples from a single-mode on-chip waveguide to a free-space beam. In silicon photonics, due to the high index of silicon, such a single-mode waveguide typically has a cross-section of approximately 0.5 $\mu$m, which is far smaller than the size of the free-space beam. In principle, one can first use a waveguide taper structure to expand the waveguide mode on-chip, and then use the non-focusing grating coupler that we have already designed to couple to a free-space beam. However, such a taper structure typically occupies substantial chip area \cite{mekis2010grating}. Here, instead, we consider a more compact design where a focusing coupler is used to directly couple from a single-mode on-chip waveguide to a free-space beam.
In a typical focusing grating coupler, the single-mode waveguide is connected to a fan-shape free-propagation region and the grating trenches are located on curves close to concentric ellipses rather than parallel lines as in non-focusing grating couplers \cite{van2007compact, mehta2017precise, nadovich2017focused}. A schematic of such a focusing grating coupler is illustrated in Fig. \ref{fig:schematic_fanout}. In this section, we extend both the {ideal model} and our model to find the optimal scattering strength for the focusing grating. We believe that such an extension is practically useful, but it has not been considered systematically in previous studies \cite{mehta2017precise, nadovich2017focused}.
The power flow in the fan-shape region is almost along the radial direction, so we assume that the power exchange between different angles $\vartheta$ is zero. With this assumption, we can treat the apodization design for different angles separately.
As in Sec. \ref{sec:2d_nonfocus}, the phase matching and polarization matching are omitted and we investigate only the amplitude matching, since the phase matching in focusing grating couplers is well studied \cite{mekis2010grating, becker2019out, marchetti2019coupling}. The coupling efficiency is
\begin{align}
\label{eq:couplingefficiency_focusing}
\eta = \frac{1}{P_\textrm{wg}P_\textrm{t}}\Big| \int d\vartheta \int r A(r, \vartheta) A_\textrm{t}(r, \vartheta) dr \Big|^2,
\end{align}
where the amplitude is expressed in polar coordinates, $A(r, \vartheta) = A(x, z)$, with $z = r\cos\vartheta$ and $x = r\sin\vartheta$.
The total guided power on a constant radius surface in the fan-shape region is
\begin{equation}
\label{eq:power_r}
P(r) = \int r P(r, \vartheta) d\vartheta,
\end{equation}
where $P(r)$ is the total guided power at $r$, and $P(r, \vartheta)$ is the power per unit length at position $(r, \vartheta)$. Suppose that the grating region starts at $r = r_0$. The power decay along the radial direction is
\begin{align}
\label{eq:power_decay_r}
\frac{\partial}{\partial r}[r P(r, \vartheta)] = -2\alpha(r, \vartheta) r P(r, \vartheta).
\end{align}
The scattered power per unit area is
\begin{equation}
\label{eq:scatter_r_theta}
S(r, \vartheta) = 2\alpha(r, \vartheta)P(r, \vartheta).
\end{equation}
To maximize the coupling efficiency, the scattered power should match the target mode, i.e. $S(r, \vartheta) = S_\textrm{t}(r, \vartheta)$, where $S_\textrm{t}(r, \vartheta)$ is the target intensity distribution expressed with polar coordinates. With (\ref{eq:power_decay_r}) and (\ref{eq:scatter_r_theta}), we have
\begin{align}
\label{eq:power_decay_r_target}
& \frac{\partial}{\partial r}[rP(r,\vartheta)] = -rS_\textrm{t}(r, \vartheta), \\
\label{eq:power_r_target}
& P(r, \vartheta) = \frac{r_0 P(r_0, \vartheta)}{r} - \frac{1}{r}\int_{r_0}^r tS_\textrm{t}(t, \vartheta) dt, \\
\label{eq:alpha_2d_focusing_general}
& \alpha(r, \vartheta) = \frac{1}{2}\frac{rS_\textrm{t}(r, \vartheta)}{r_0 P(r_0, \vartheta) - \int_{r_0}^r tS_\textrm{t}(t, \vartheta) dt}.
\end{align}
Equation (\ref{eq:alpha_2d_focusing_general}) is an extension of (\ref{eq:alpha_1}) for the one-dimensional grating and (\ref{eq:alpha_2d_1}) for the non-focusing grating.
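A direct numerical evaluation of (\ref{eq:alpha_2d_focusing_general}) can be sketched as follows; the Gaussian target and the boundary power $r_0P(r_0,\vartheta)$, set here so that 20\% of the guided power remains unscattered, are illustrative assumptions only.
\begin{verbatim}
import numpy as np

r0 = 15e-6
r = np.linspace(r0, r0 + 20e-6, 400)       # radial grid (m)
th = np.linspace(-0.3, 0.3, 121)           # angular grid (rad)
R, TH = np.meshgrid(r, th, indexing="ij")

# Hypothetical Gaussian target intensity centered on the axis.
S_t = np.exp(-2 * ((R - 25e-6)**2 + (R * TH)**2) / (5.2e-6)**2)

tS = R * S_t
seg = 0.5 * (tS[1:, :] + tS[:-1, :]) * np.diff(r)[:, None]
cum = np.vstack([np.zeros((1, th.size)), np.cumsum(seg, axis=0)])
I_full = cum[-1, :]                        # int_{r0}^{rmax} t S_t dt

r0P = 1.2 * I_full                         # assumed 20% residual power
alpha = 0.5 * tS / (r0P[None, :] - cum)    # the equation above
\end{verbatim}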
Similar to the discussion in Sec. \ref{sec:2d_nonfocus}, we would like to transform (\ref{eq:alpha_2d_focusing_general}) into a scale-invariant form, such that it is transparent that multiplying $S_\textrm{t}(r, \vartheta)$ by a constant does not change $\alpha(r, \vartheta)$. Then we can simply take $S_\textrm{t}(r, \vartheta) = |A_\textrm{t}(r, \vartheta)|^2$ to obtain the scattering strength.
In the case that all the guided power is scattered, the boundary condition is $r_0 P(r_0, \vartheta) = \int_{r_0}^{+\infty}tS_\textrm{t}(t, \vartheta)dt$, and the scattering strength is
\begin{align}
\label{eq:alpha_2d_focusing_complete}
\alpha(r, \vartheta) = \frac{1}{2}\frac{r S_\textrm{t}(r, \vartheta)}{\int_r^{+\infty}t S_\textrm{t}(t, \vartheta) dt}.
\end{align}
The corresponding scattering intensity distribution is
\begin{align}
\label{eq:S_design_2d_focusing}
S(r, \vartheta) = r_0 P(r_0, \vartheta)\frac{S_\textrm{t}(r, \vartheta)}{\int_{r_0}^{+\infty} t S_\textrm{t}(t, \vartheta) dt}.
\end{align}
Recalling that $A(r, \vartheta) = \sqrt{\xi S(r, \vartheta)}$ and $A_\textrm{t}(r, \vartheta) = \sqrt{S_\textrm{t}(r, \vartheta)}$, the coupling efficiency (\ref{eq:couplingefficiency_focusing}) is:
\begin{align}
\label{eq:coupling_efficiency_complete_focusing}
\eta = \frac{\xi}{P_\textrm{wg}P_\textrm{t}}\Big | \int d\vartheta \sqrt{r_0P(r_0, \vartheta)} \sqrt{\int_{r_0}^{+\infty} r S_\textrm{t}(r, \vartheta)dr}\Big|^2.
\end{align}
The open angle of the fan-shape region and the position of the target beam center are important design parameters for the focusing grating couplers. The optimal open angle with respect to the beam center position can be found using (\ref{eq:coupling_efficiency_complete_focusing}), which is demonstrated in Sec. \ref{sec:implication}.
When only part of the guided power is scattered out and the scattering strength is chosen to maximize the mode matching, the portion of scattered power should be a function of the angle. The treatment is similar to that in Sec. \ref{sec:2d_nonfocus}. Let
\begin{align}
\label{eq:zeta_2d_target}
\zeta_\textrm{t}(\vartheta) = \frac{\int_{r_0}^{+\infty} tS_\textrm{t}(t, \vartheta)dt}{r_0 P(r_0, \vartheta)}.
\end{align}
We have the freedom to choose the portion of scattered power at a specific angle, for instance $\zeta(\vartheta_\textrm{m}) = \zeta_\textrm{m}$. Then, the scattered portion as a function of angle is \begin{align}
\label{eq:zeta_2d_design}
\zeta(\vartheta) = \zeta_\textrm{m}\frac{\zeta_\textrm{t}(\vartheta)}{\zeta_\textrm{t}(\vartheta_\textrm{m})}.
\end{align}
The practical choice of $\vartheta_\textrm{m}$ and the subtleties of adjusting $\zeta(\vartheta)$ near the edge of the fan-shape region are similar to the discussion in Sec. \ref{sec:2d_nonfocus}.
The portion of the total scattered power is
\begin{equation}
\label{eq:zeta_total_2d_focusing}
\zeta = \frac{1}{P_\textrm{wg}}\int d\vartheta \zeta(\vartheta)r_0 P(r_0, \vartheta).
\end{equation}
Using the boundary condition $\zeta(\vartheta) r_0 P(r_0, \vartheta) = \int_{r_0}^{+\infty}t S_\textrm{t}(t, \vartheta)dt$, the scattering strength is
\begin{align}
\label{eq:alpha_2d_focusing_partial}
\alpha(r, \vartheta) = \frac{1}{2}\frac{\zeta(\vartheta) r S_\textrm{t}(r, \vartheta)}{\int_{r_0}^{+\infty} t S_\textrm{t}(t, \vartheta) dt - \zeta(\vartheta) \int_{r_0}^{r}t S_\textrm{t}(t, \vartheta) dt}.
\end{align}
The corresponding coupling efficiency is $\eta \lesssim \xi \zeta$.
Our model can also be extended to the design of apodized focusing gratings. From (\ref{eq:power_decay_r}) and (\ref{eq:scatter_r_theta}), the scattering intensity associated with a scattering strength distribution is
\begin{align}
\label{eq:scattering_2d_focusing}
S(r, \vartheta) = 2\alpha(r, \vartheta) \frac{r_0}{r}P(r_0, \vartheta)\exp\Big[-2\int_{r_0}^{r} \alpha(t, \vartheta) dt\Big].
\end{align}
The coupling efficiency given by (\ref{eq:couplingefficiency_focusing}) is
\begin{align}
\label{eq:coupling_efficiency_2d_ourmodel}
\begin{split}
\eta = &\, \frac{\xi}{P_\textrm{wg}P_\textrm{t}}\Big | \int d\vartheta \sqrt{r_0 P(r_0, \vartheta)} \int_{r_0}^{r_0+L}\sqrt{2\alpha(r, \vartheta)}\\
&\cdot \exp\Big[ -\int_{r_0}^r \alpha(t, \vartheta)dt \Big] \sqrt{r} A_\textrm{t}(r, \vartheta)dr \Big|^2,
\end{split}
\end{align}
where we assume that the grating lies between $r_0$ and $r_0+L$, and $A(r, \vartheta) = \sqrt{\xi S(r, \vartheta)}$ is used. Since each term inside the absolute value in (\ref{eq:coupling_efficiency_2d_ourmodel}) is real, the maximal coupling efficiency is achieved when the following term is maximized for each $\vartheta$:
\begin{align}
\label{eq:f_2d_focusing}
\begin{split}
f(\vartheta) = & \, \int_{r_0}^{r_0 + L} \sqrt{2\alpha(r, \vartheta)}\exp \Big[-\int_{r_0}^r \alpha(t, \vartheta) dt \Big] \\ & \cdot \sqrt{r} A_\textrm{t}(r, \vartheta) dr.
\end{split}
\end{align}
The physical intuition is that the scattering strength along each angle can be designed independently of the other angles.
Compared to (\ref{eq:f}), the only difference in (\ref{eq:f_2d_focusing}) is the replacement of $A_\textrm{t}(z)$ by $\sqrt{r}A_\textrm{t}(r, \vartheta)$, besides a trivial shift of origin and change of integration variable. These changes do not influence the optimal substructure of the optimization problem outlined in (\ref{eq:optimization}). Therefore, the optimal scattering strength can be found following the derivation in Sec. \ref{sec:theory}. The optimal scattering strength is
\begin{align}
\label{eq:alpha_opt_2d_focusing}
\alpha^\star(r, \vartheta) = \frac{r A_\textrm{t}^2(r, \vartheta)}{2\Big\{ \int_r^{r_0+L}e^{-\int_r^s\alpha^\star(t, \vartheta)dt}\sqrt{2\alpha^\star(s, \vartheta)s} A_\textrm{t}(s, \vartheta)ds \Big\}^2},
\end{align}
with the same adjustments outlined in (\ref{eq:alphaconstraint}) to explicitly take the upper and lower bounds of the scattering strength into account.
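One way to evaluate this self-consistent formula at a fixed angle is a fixed-point iteration, sketched below; convergence is not guaranteed in general, and the dynamic-programming algorithm of Sec. \ref{sec:theory} remains the reliable route. The bounds and the Gaussian target amplitude are illustrative assumptions.
\begin{verbatim}
import numpy as np

r0, L = 15e-6, 20e-6
a_min, a_max = 0.0, 0.15e6                 # assumed bounds (1/m)
r = np.linspace(r0, r0 + L, 800)
A_t = np.exp(-((r - 25e-6) / 5.2e-6)**2)   # target amplitude at this angle

def f_plus(alpha):
    # f^+(r) = int_r^{r0+L} exp(-int_r^s alpha) sqrt(2 alpha(s) s) A_t ds
    I = np.concatenate([[0.0],
        np.cumsum(0.5 * (alpha[1:] + alpha[:-1]) * np.diff(r))])
    g = np.exp(-I) * np.sqrt(2 * alpha * r) * A_t
    seg = 0.5 * (g[1:] + g[:-1]) * np.diff(r)
    tail = np.concatenate([np.cumsum(seg[::-1])[::-1], [0.0]])
    return np.exp(I) * tail

alpha = np.full_like(r, 0.5 * a_max)       # initial guess
for _ in range(200):                       # fixed-point iteration
    fp = np.maximum(f_plus(alpha), 1e-12)
    alpha = np.clip(r * A_t**2 / (2 * fp**2), a_min, a_max)
\end{verbatim}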
\section{Implications of our model}
\label{sec:implication}
Equipped with our model, we can study how the upper limit of the coupling efficiency depends on the upper and lower bounds of the scattering strength and the grating length. With the extension of the {ideal model} to the two-dimensional focusing gratings, we also investigate the optimal open angle of the fan-shape region with respect to the beam center position.
For practical purposes, we choose the target mode in this section to be a Gaussian beam with beam radius $w_0 = 5.2$ $\mu$m, which resembles the mode in a single-mode fiber.
\begin{figure}[h]
\centering
\includegraphics[width = 0.6\linewidth]{GC_theory_alpha_sweep.png}
\caption{The upper bound of the coupling efficiency limited by the maximal scattering strength, where the length of the grating coupler is much longer than the Gaussian beam waist. Curves with different colors represent different lower bounds of the scattering strength.}
\label{fig:alpha_sweep}
\end{figure}
We first study how the finite range of scattering strength influences the upper limit of the coupling efficiency. We assume that all the scattered power is towards the target mode side ($\xi=1$) and the mode matching in transverse dimension is perfect. To avoid the influence of finite coupling length, the grating coupler is much longer than the Gaussian beam waist. Figure \ref{fig:alpha_sweep} shows the upper limit of the coupling efficiency as a function of the upper bound of the scattering strength, where different curves represent different lower bounds of the scattering strength.
The coupling efficiency, including the asymptotic limit ($\alpha_\textrm{max} \rightarrow \infty$), decreases mildly with the lower bound of the scattering strength.
We also find that the coupling efficiency grows with $\alpha_\textrm{max}$ but starts to saturate when $\alpha_\textrm{max}$ is above about 0.15 $\mu\textrm{m}^{-1}$.
This implies an intrinsic challenge for low-index-contrast gratings coupling with a single-mode fiber, where $\alpha_\textrm{max}$ is small. Nevertheless, this challenge can be bypassed with techniques to expand the target mode size \cite{chen2016high}. A discussion of how the optimal scattering strength varies with a scaling of the target mode size is presented in Appendix \ref{app:scaling}, which is useful for generalizing the above results to different Gaussian beam radii.
\begin{figure}
\centering
\includegraphics[width = 0.64\linewidth]{GC_theory_L_sweep_eta_zg.png}
\caption{(a) The upper bound of the coupling efficiency limited by the length of the grating coupler. Curves with different colors represent different upper bounds of the scattering strength. The lower bound of the scattering strength is set to zero. (b) The corresponding optimal position of the Gaussian beam center.}
\label{fig:L_sweep}
\end{figure}
To achieve compact grating couplers \cite{zhou2019ultra, sun2013large}, shorter grating length is preferred. Our model also predicts the influence of the finite grating length. Figure \ref{fig:L_sweep}(a) shows the upper limit of the coupling efficiency as a function of the grating length, where different curves represent different $\alpha_\textrm{max}$ while $\alpha_\textrm{min}$ is set to zero. The corresponding optimal position of the Gaussian beam center is shown in Fig. \ref{fig:L_sweep}(b). We find that the coupling efficiency increases with the grating length, and the increment becomes insignificant when the length exceeds about 12 $\mu$m ($L \sim 2.4w_0$). This trend consistently holds for different $\alpha_\textrm{max}$. Therefore, the length of the grating coupler should be larger than $\sim 2.4 w_0$ to avoid significant efficiency decrease. We also observe from Fig. \ref{fig:L_sweep}(a) that large $\alpha_\textrm{max}$ has greater advantage when the grating is shorter. This is consistent with previous studies using full etch and short apodization region for compact grating couplers \cite{sun2013large}.
\begin{figure}
\centering
\includegraphics[width=0.7\linewidth]{GC_2D_model_ideal_cos_thetac_20200121.png}
\caption{The upper limit of the coupling efficiency given by the {ideal model} for the focusing grating as a function of the position of the Gaussian beam center ($r_c$) and the open angle of the fan-shape region ($\vartheta_0$).}
\label{fig:fan_angle}
\end{figure}
As a demonstration of the extension of the {ideal model} for the two-dimensional focusing grating, we investigate the optimal open angle as a function of the position of the Gaussian beam center. Suppose that the Gaussian beam center is located at $(r = r_c, \vartheta=0)$, and the open angle of the fan-shape region is $\vartheta_0$, i.e. $\vartheta \in [-\vartheta_0, \vartheta_0]$. The guided power in the fan-shape free propagation region can be approximated by \cite{mehta2017precise}
\begin{align}
\label{eq:P_wg_focusing}
P(r_0, \vartheta) = \frac{P_\textrm{wg}}{r_0 \vartheta_0}\cos^2\Big( \frac{\pi \vartheta}{2 \vartheta_0}\Big), \textrm{ } \vartheta \in [-\vartheta_0, \vartheta_0].
\end{align}
The resulting upper limit of the coupling efficiency given by (\ref{eq:coupling_efficiency_complete_focusing}), where $\xi=1$ and the target mode is a Gaussian beam, is shown in Fig. \ref{fig:fan_angle} for different open angles and beam positions. The open angle that maximizes the coupling efficiency for each beam position is highlighted by the red dashed curve. The upper limit of the coupling efficiency is between 98\% and 99\% along this curve. Figure \ref{fig:fan_angle} suggests that the optimal open angle should decrease as the beam center moves farther away from the waveguide terminal. This result can serve as guidance in designing focusing grating couplers.
\section{Conclusion}
\label{sec:conclusion}
In conclusion, we explicitly consider the upper and lower bounds of the scattering strength in an analytical model for designing apodized grating couplers. We prove that the obtained scattering strength is the global optimum and provide an algorithm to compute the scattering strength numerically. Equipped with our model, we present a deterministic design procedure to generate a highly efficient apodized grating coupler. The demonstrated design in a silicon-on-insulator platform has a coupling efficiency comparable with previous designs obtained after sophisticated optimization. We further extend both the previous {ideal model} and our model to two-dimensional non-focusing and focusing gratings. Using the two-dimensional extension of our model, we demonstrate a grating coupler coupling to a vortex beam with topological charge $l=1$. The mode matching efficiency is 92\% and the total coupling efficiency of 57.6\% is only 3.8 percentage points lower than the design for a standard Gaussian beam. We finally discuss how the finite scattering strength and coupling length influence the upper bounds of the coupling efficiency of a grating coupler coupling with a single-mode fiber.
Our systematic modeling of the apodized grating coupler can provide guidance for designing highly efficient grating couplers in different fabrication technologies and for various target modes. The deterministic design procedure presented in this study can generate highly efficient grating couplers quickly or serve as a near-optimum starting point for more advanced optimization.
\appendices
\section{Optimal substructure of the constraint optimization problem}
\label{app:optimal_substructure}
In this Appendix, we prove that the constraint optimization problem described by (\ref{eq:optimization}) exhibits optimal substructure by showing that the optimal scattering strength $\alpha^\star(z)$, which maximizes $f$, must also maximize $f^+(z)$ for any $z \in [0, L]$.
We prove this by contradiction.
Suppose the scattering strength $\alpha^\star(z)$ maximizes $f$ but does not maximize $f^+(z)$ for some $z \in [0, L]$, say $z = z_1$. Since $f^+(0) = f$ and $f^+(L)=0$, $z_1$ lies in the open interval $(0, L)$. By this assumption, there exists another scattering strength distribution $\alpha_1(s)$ for $s \in [z_1, L]$ such that $f^+(z_1)$ takes a larger value, i.e. $f^+(z_1; \alpha_1) > f^+(z_1; \alpha^\star)$. Hence, we can construct another scattering strength distribution $\alpha_2$ by taking $\alpha^\star$ in $[0, z_1)$ and $\alpha_1$ in $[z_1, L]$, i.e.
\begin{align}
\label{eq:alpha_construct}
\alpha_2(z) =
\begin{cases}
\alpha^\star(z) & 0 \leq z < z_1 \\
\alpha_1(z) & z_1 \leq z \leq L
\end{cases}.
\end{align}
We then compare the overlap integrals $f$ of the scattering strengths $\alpha^\star(z)$ and $\alpha_2(z)$. Recall the separation of $f$ into two parts at $z = z_1$:
\begin{align}
\label{eq:f_separation_repeat}
f = f^-(z_1) + \exp \Big [{-\int_0^{z_1}\alpha(t)dt} \Big ] f^+(z_1),
\end{align}
where $f^-$ and $f^+$ are given in (\ref{eq:f_minus}) and (\ref{eq:f_plus}) with $z = z_1$. We find that $f^-(z_1)$ and the factor $\exp[-\int_0^{z_1} \alpha(t) dt]$ are the same for the scattering strengths $\alpha^\star(z)$ and $\alpha_2(z)$, since the scattering strengths in $[0, z_1)$ are the same. Because the scattering strength $\alpha_2(z)$ gives a larger $f^+(z_1)$, the overall overlap integral $f$ is also larger, i.e. $f(\alpha_2) > f(\alpha^\star)$. This contradicts the assumption that the scattering strength $\alpha^\star(z)$ maximizes $f$. Thus, the scattering strength $\alpha^\star(z)$ that maximizes $f$ must also maximize $f^+(z)$ for any $z \in [0, L]$. The optimization problem (\ref{eq:optimization}) therefore exhibits optimal substructure.
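The optimal-substructure property can also be checked numerically on a toy discretization (the grid, strength levels, and target profile below are arbitrary choices, not data from this work): a backward pass that optimizes each cell against the already-optimal tail reproduces the brute-force maximizer of the discretized $f$.
\begin{verbatim}
import numpy as np
from itertools import product

L, n, a_max = 12e-6, 6, 0.15e6
dz = L / n
zc = (np.arange(n) + 0.5) * dz
A_t = np.exp(-((zc - 6e-6) / 5.2e-6)**2)
levels = [0.0, 0.5 * a_max, a_max]

def f_plus(alphas, i):
    # discretized f^+ starting from cell i with strengths `alphas`
    acc, tot = 0.0, 0.0
    for a, At in zip(alphas, A_t[i:]):
        acc += np.sqrt(2 * a) * np.exp(-tot) * At * dz
        tot += a * dz
    return acc

best_bf = max(product(levels, repeat=n), key=lambda al: f_plus(al, 0))

tail = []                               # backward pass using optimal tails
for i in reversed(range(n)):
    a_i = max(levels, key=lambda a: f_plus([a] + tail, i))
    tail = [a_i] + tail

assert np.isclose(f_plus(tail, 0), f_plus(best_bf, 0))
\end{verbatim}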
\section{Optimal scattering strength with a scaling of the target mode size}
\label{app:scaling}
In this Appendix, we study how the optimal scattering strength changes when the target mode is transformed by a scaling operation. For simplicity, we only consider the one-dimensional system.
Suppose the target mode after the transformation ($\tilde{S}_\textrm{t}$) is related to the original target mode ($S_\textrm{t}$) by:
\begin{equation}
\label{eq:target_mode_scaling}
\tilde{S}_\textrm{t}(z) = S_\textrm{t}(\gamma z),
\end{equation}
where $\gamma$ is the scaling factor. We find that the optimal scattering strength matching the target mode after the scaling transformation ($\tilde{\alpha}(z)$) is related to the original optimal scattering strength ($\alpha(z)$) by:
\begin{align}
\label{eq:alpha_scaling}
\tilde{\alpha}(z) = \gamma \alpha (\gamma z).
\end{align}
One can check that this relation holds for the {ideal model} (\ref{eq:alpha_complete_scattering}) and (\ref{eq:alpha_partial_scattering}). It also holds for our model (\ref{eq:alpha_opt}), if the upper and lower bounds of the scattering strength and the length of the grating coupler are transformed accordingly:
\begin{align}
\label{eq:alpha_min_scale}
\tilde{\alpha}_\textrm{min} & = \gamma \alpha_\textrm{min}, \\
\tilde{\alpha}_\textrm{max} & = \gamma \alpha_\textrm{max}, \\
\tilde{L} & = L/\gamma,
\end{align}
where the quantities with a tilde are those matching the target mode after the scaling transformation.
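As an illustrative sanity check, the scaling relation (\ref{eq:alpha_scaling}) can be verified symbolically for a Gaussian target, taking the complete-scattering {ideal model} in the form $\alpha(z) = S_\textrm{t}(z)/2\int_z^{\infty}S_\textrm{t}(t)dt$:
\begin{verbatim}
import sympy as sp

z, t, g = sp.symbols("z t gamma", positive=True)

S = sp.exp(-z**2)                      # original Gaussian target intensity
tail = sp.integrate(sp.exp(-t**2), (t, z, sp.oo))
alpha = S / (2 * tail)

# the same construction for the scaled target S_t(gamma*z)
tail_s = sp.integrate(sp.exp(-(g * t)**2), (t, z, sp.oo))
alpha_s = sp.exp(-(g * z)**2) / (2 * tail_s)

print(sp.simplify(alpha_s - g * alpha.subs(z, g * z)))   # -> 0
\end{verbatim}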
\section*{Acknowledgment}
The authors would like to thank Dr. Min Teng from imec USA Nanoelectronics Design Center Inc., Mr. Nathan Z. Zhao and Prof. Meir Orenstein for helpful discussions. We acknowledge support from the ACHIP program, funded by the Gordon and Betty Moore Foundation (GBMF4744). Z. Zhao also acknowledges support from Stanford Graduate Fellowship.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
The interaction between strong laser fields and atoms and molecules is a
topic of intense research. A series of nonlinear phenomena appear when
atoms and molecules are subjected to intense laser pulses
\cite{Robert,K.Yam,R.Bart,J.H.}. High-order harmonic generation
(HHG) is one of the most studied effects among these nonlinear
phenomena, with the result that the coherently emitted light can be
harnessed to produce trains of extremely short pulses
on the time scale of electron motion \cite{P.B,P.Ago}. A simple
model \cite{Corkum}, based on the idea of tunnelling ionization of
an electron in an atom, allows for interpreting the harmonic
generation process in terms of recolliding electron trajectories
and leads to a universal cutoff law for the maximum in photon
energy in harmonic generation. The extra degrees of freedom and
nonspherical symmetry of molecules lead to further new phenomena
such as molecular high-order harmonic generation (MHOHG)
\cite{P.B, P.Ago, J.Itatani}, charge resonance enhanced ionization
(CREI) \cite{T.Zou, T.Seideman} and bond softening \cite{G.Yao,
SK}. The theoretical understanding of these mechanisms requires
solving the time-dependent Schr\"{o}dinger equation for all
electrons and all nuclear degrees of freedom. However, such
numerical solutions exist only for the smallest systems, such as
atoms \cite{JPHansen, Parker}, H$_2$ and H$_2^+$ \cite{Chelkowski,
Kreibich, ADBandrauk, XBB, Liang}. For large systems the
computation is quite expensive due to the number of degrees of
freedom. In contrast, the well-tested time-dependent density
functional theory at the level of the time-dependent local-density
approximation (TDLDA) \cite{NATO} serves as a powerful tool to
study the electron dynamics of multi-electron systems
\cite{F.Calvayrac,Takashi,Ben-Nun}. Recently, Madsen \emph{et al}
\cite{C.B.Madsen} studied MHOHG of polyatomic molecules using a
quantum-mechanical three-step model.
The application of two-color laser fields to HHG is a fascinating
research topic. One can control the formation of the
harmonic spectrum and the yields of ionic or molecular fragments by
changing the relative phase between the fundamental-frequency and
second-harmonic fields in the two-color laser field. At
present, in experiment and theory, two-color laser fields have been
applied to atoms \cite{Kim,Schumacher,Zhang}, molecules
\cite{SSheely,Chelkowski,Haruimiya,Xuebin,Chen} and clusters
\cite{Nguyen,F.S.Zhang,Y.P.}. However, much less has been done for
more complicated molecules. In this paper we apply TDLDA,
augmented with an average-density self-interaction correction
(ADSIC) \cite{C.Legrand} to investigate theoretically the
ionization and MHOHG of ethylene in one-color and two-color laser
fields.
\section{Theory}
In this section, we present the real-time TDLDA method. The
molecule is described as a system composed of valence electrons
and ions. The interaction between ions and electrons is described
by means of norm-conserving pseudopotentials.
Valence electrons are treated by TDLDA, augmented with an
average-density self-interaction correction (ADSIC)
\cite{C.Legrand}. They are represented by single-particle orbitals
$\phi_{j}(\textbf{r},t)$ satisfying the time-dependent Kohn-Sham
(TDKS) equation \cite{10},
\begin{eqnarray}
i\frac{\partial}{\partial{t}}\phi_{j}(\textbf{r},t) & = & \
\hat{H}_{Ks}\phi_{j}(\textbf{r},t) \nonumber\\
& = &
(-\frac{\nabla^{2}}{2}+V_{eff}(\textbf{r},t))\phi_{j}(\textbf{r},t),
\nonumber \\
& & {} j=1,...,N.
\end{eqnarray}
$V_{eff}$ is the Kohn-Sham effective potential, composed of four parts,
\begin{eqnarray}
V_{eff}(\textbf{r},t) = V_{ion}(\textbf{r},t)+V_{ext}(\textbf{r},t)\nonumber \\
+V_{H}[n](\textbf{r},t)+V_{xc}[n](\textbf{r},t),
\end{eqnarray}
where $V_{ion} = \sum_{I}V_{ps}(\textbf{r}-\textbf{R}_I)$ is the ionic
background potential, $V_{ext}$ is the external potential, $V_{H}$
stands for the time-dependent Hartree part, and the final part is the
exchange-correlation (xc) potential. The electron density is given
by
\begin{eqnarray}
n(\textbf{r},t) = \sum_{j}|\phi_{j}(\textbf{r},t)|^{2},
\end{eqnarray}
and the Hartree potential $V_{H}[n](\textbf{r},t)$ is defined as
\begin{eqnarray}
V_{H}[n](\textbf{r},t) =
\int{d^{3}r'\frac{n({\textbf{r}}',t)}{|\textbf{r}-{\textbf{r}}'|}}.
\end{eqnarray}
The xc potential $V_{xc}[n](\textbf{r},t)$ is a functional of the
time-dependent density and has to be approximated in practice. The
simplest choice consists in the TDLDA, defined as
\begin{eqnarray}
V_{xc}^{TDLDA}[n](\textbf{r},t) =
d\epsilon_{xc}^{hom}(n)/dn|_{n=n(\textbf{r},t)},
\end{eqnarray}
where $\epsilon_{xc}^{hom}(n)$ is the xc energy density of the
homogeneous electron gas. For $\epsilon_{xc}^{hom}$ we use the
parametrization of Perdew and Zunger \cite{19}. The pseudopotential
for the covalent molecule, including its nonlocal part, is taken from
\cite{20}.
The ground state wavefunctions are determined by the damped
gradient method \cite{Paris}. The TDLDA equations are solved
numerically by the time-splitting technique \cite{22}. The nonlocal
part of the Hamiltonian is handled in an additional propagator and
treated with a third-order Taylor expansion of the exponential
\cite{Paris}. Absorbing boundary conditions are employed to avoid
unphysical reflection of escaping electrons \cite{24}.
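To make the time-splitting step concrete, a minimal one-dimensional single-particle sketch is given below; it illustrates only the split-operator propagation and a mask-type absorbing boundary, not the full three-dimensional TDLDA machinery, and the soft-core potential is a placeholder.
\begin{verbatim}
import numpy as np

# 1D split-operator step in atomic units (illustration only).
N, Lbox, dt = 512, 60.0, 0.05
x = np.linspace(-Lbox / 2, Lbox / 2, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=x[1] - x[0])
V0 = -1.0 / np.sqrt(x**2 + 1.0)            # placeholder soft-core potential
edge = np.clip((np.abs(x) - 0.8 * Lbox / 2) / (0.2 * Lbox / 2), 0.0, 1.0)
mask = np.cos(0.5 * np.pi * edge)**2       # absorbing boundary mask

def step(psi, E_t):
    Vt = V0 + x * E_t                      # length-gauge laser coupling
    psi = np.exp(-0.5j * dt * Vt) * psi                  # half kick
    psi = np.fft.ifft(np.exp(-0.5j * dt * k**2) * np.fft.fft(psi))
    return np.exp(-0.5j * dt * Vt) * psi * mask          # kick + absorb
\end{verbatim}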
The laser field, neglecting the magnetic field component, can
be written as
\begin{eqnarray}
E(t)= E_{0}f(t)[\cos(\omega t)+A \cos(2\omega t+\phi)]
\end{eqnarray}
where $E_{0} \propto {\sqrt{I}}$, with $I$ denoting the laser
intensity, $\omega$ is the laser frequency and $f(t)$ is the pulse
profile. $A$ is the electric-field-strength ratio between the two
frequencies and $\phi$ is the relative phase. In this paper, $A$ is
chosen as 0 and 0.5 for the one-color and the two-color
laser fields, respectively, and $\phi=0$.
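For definiteness, this field with the parameters used below (and the ramp profile quoted in the next section) can be assembled as follows; the conversion factors to atomic units are standard.
\begin{verbatim}
import numpy as np

FS = 41.341                            # 1 fs in atomic units of time
E0 = np.sqrt(1e14 / 3.509e16)          # peak field (a.u.) at 1e14 W/cm^2
w = 2.72 / 27.211                      # 2.72 eV in hartree
A, phi = 0.5, 0.0                      # two-color case

def ramp(t, t_on=8 * FS, t_off=8 * FS, T=30 * FS):
    up = np.clip(t / t_on, 0.0, 1.0)
    down = np.clip((T - t) / t_off, 0.0, 1.0)
    return np.minimum(up, down)

t = np.linspace(0.0, 30 * FS, 4096)
E = E0 * ramp(t) * (np.cos(w * t) + A * np.cos(2 * w * t + phi))
\end{verbatim}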
The laser-induced electron dipole moment can be obtained by
\begin{eqnarray}
D(t)= \int \textbf{r}\,n(\textbf{r},t)\,d^{3}r.
\end{eqnarray}
The Fourier transform of $D(t)$ gives the MHOHG power spectrum
$|D_F(\omega)|^2$.
The harmonic intensity as a function of harmonic frequency
$\omega$ and emission time $\beta$ can be obtained by the
time-frequency analysis \cite{Antoine}
\begin{eqnarray}
D_G(\omega,\beta)=|\int D(t)e^{i\omega
t}e^{-(t-\beta)^2/2\alpha^2}dt|^2
\end{eqnarray}
where the window function width $\alpha$ is chosen as one-tenth of
the laser optical period.
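Given a sampled dipole signal $D(t)$, the power spectrum $|D_F(\omega)|^2$ and the Gabor profile $D_G(\omega,\beta)$ reduce to a few lines; this sketch assumes uniform sampling, and the parameter \texttt{width} is the window width $\alpha$, one-tenth of the optical period as stated above.
\begin{verbatim}
import numpy as np

def hhg_spectrum(D, dt):
    # |D_F(omega)|^2 from a uniformly sampled dipole signal D(t)
    DF = np.fft.rfft(D) * dt
    omega = 2 * np.pi * np.fft.rfftfreq(len(D), d=dt)
    return omega, np.abs(DF)**2

def gabor(D, t, omega, beta, width):
    # D_G(omega, beta) with a Gaussian window of width `width`
    win = np.exp(-(t - beta)**2 / (2 * width**2))
    return np.abs(np.trapz(D * win * np.exp(1j * omega * t), t))**2
\end{verbatim}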
The number of escaped electrons is defined as
\begin{eqnarray}
N_{esc} = N_{t=0} - \int_{V}d^{3}r\, n(\textbf{r},t)
\end{eqnarray}
where $V$ is a volume surrounding the molecule. A more detailed link
with experiment is provided by the probabilities $P^k(t)$ of finding
the excited molecule in one of the possible charge states $k$ to
which it can ionize. The corresponding formula can be found in \cite{24}.
\section{Results and discussion}
Ethylene is the simplest organic $\pi$ system, with $D_{2h}$
symmetry. In our calculation, there are 12 valence electrons. It
lies in the $x$-$y$ plane with the center of mass at the origin. The
laser polarization is along the $x$ direction, which is parallel to the
axis of the CC double bond. The laser time profile is chosen to be a ramp
envelope. The ramp-on and ramp-off times are both 8~fs and the
duration is 30~fs. For the one-color laser field, $I=
10^{14}$~W/cm$^2$ and $\omega= 2.72$~eV and for the two-color
laser field, $I_1=10^{14}$~W/cm$^2$, $I_2=I_{1}/4$ and
${\omega}_1=2.72$~eV, ${\omega}_2=5.44$~eV. It should be noted
that the two-color laser field is non-symmetric, as in
\cite{Xuebin}: the negative and positive amplitudes of
the electric field strength are not equal.
\subsection{Ionization properties}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=6cm,angle=0]{fig1.eps}
\caption{Time evolutions of the number of escaped electrons of
ethylene in the one-color and the two-color laser fields. Solid
line: one-color ($\omega= 2.72$~eV) laser field with
$I=10^{14}$~W/cm$^2$. Dashed line: two-color laser field with
${\omega}_1=2.72$~eV, $I_1=10^{14}$~W/cm$^2$ and
${\omega}_2=5.44$~eV, $I_2=I_{1}/4$, the relative phase
$\phi=0$.}\label{escelectron}
\end{center}
\end{figure}
Fig.~\ref{escelectron} shows the time evolutions of the number of
escaped electrons of ethylene in the one-color and the two-color
laser fields. One can find that the electron emission takes the
same pattern in both cases. The ionization starts at around 5~fs
with a constant rate in both cases, while it is obvious that
electrons start to escape a little earlier in the two-color laser
field. Then in both cases the electron emission becomes slower and
slower. Finally, at around 42~fs it saturates at 1.18 in the
two-color case and 0.42 in the one-color case, 4~fs
before the laser pulses are switched off. Moreover, it is
obvious that the ionization is enhanced by the two-color laser
field.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm,angle=0]{fig3.eps}
\caption{The ionization probabilities of ethylene in the one-color
laser field (a) and two-color laser field (b) for the same cases
as in Fig.~\ref{escelectron}.}\label{probab}
\end{center}
\end{figure}
Fig.~\ref{probab} represents ionization probabilities of various
charge states for ethylene in one-color and two-color laser
fields. From Fig.~\ref{probab}(a) we can see that in the first
5~fs, there is mainly neutral ethylene in the one-color laser
field and the probabilities $P^{(1+)}$ and $P^{(2+)}$ are almost
zero. Then from 5~fs to 42~fs, the probability of neutral ethylene
decreases more and more slowly and reaches a saturated value 0.62
before the laser is switched off. During this period, $P^{(1+)}$
and $P^{(2+)}$ increase more and more slowly and they are also
saturated at 42~fs. Their final values are 0.3 and 0.05
respectively. The probability of $P^{(3+)}$ is so small that it is
not visible in the figure. Furthermore, one can find that in the
course of the one-color laser pulse, neutral ethylene predominates
the main probability.
For the ionization probabilities of ethylene in the two-color
laser field, as shown in Fig.~\ref{probab}(b), it is obvious that
the time evolution pattern is different from that in the one-color
case. In the first 5~fs, there is mainly neutral ethylene in the
two-color laser field. From 5~fs to 42~fs, $P^{(0)}$ decreases
more and more slowly and reaches a saturated value 0.28. This
value is smaller than that in Fig.~\ref{probab}(a). It is
noteworthy that from 5~fs to 10~fs, $P^{(0)}$ drops quickly from 1
to 0.62 in the two-color case while it takes 37~fs for $P^{(0)}$
to drop from 1 to 0.62 in the one-color case. In
Fig.~\ref{probab}(b), $P^{(1+)}$ starts to increase quickly at
around 5~fs and it exceeds $P^{(0)}$ at around 23~fs. Finally, it
is saturated at 0.4. For $P^{(2+)}$, it starts to increase at
around 7~fs, which is a little later than $P^{(1+)}$, but earlier
than $P^{(3+)}$. The increase trends of $P^{(2+)}$ and $P^{(3+)}$
are quite similar. Both of them increase slowly and reach
saturated values 0.22 and 0.06 respectively at around 42~fs. We
can also find that after 23~fs, $P^{(1+)}$ dominates mainly the
charge state.
From the above discussion we can say that the probability of a
higher charge state is larger for the two-color than for the
one-color laser field. This is related to the fact that the local
maximum of the field amplitude for the two-color laser field is
larger than for the one-color case. This is consistent with the
result from the interaction between atoms and laser fields
\cite{Tong}.
\subsection{Analysis of MHOHG}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm,angle=0]{fig2.eps}
\caption{(a) and (b): the time evolution of dipole moments of
ethylene along $x$, $y$ and $z$ directions in the one-color and
the two-color laser fields. (c) and (d): MHOHG spectra of
ethylene in the one-color and two-color laser fields. The laser
parameters are the same as those in
Fig.~\ref{escelectron}.}\label{HHG}
\end{center}
\end{figure}
Figs.~\ref{HHG}(a) and (b) exhibit the time evolution of dipole
moments along $x$, $y$ and $z$ directions of ethylene in the
one-color ($\omega$) and the two-color ($\omega+2\omega$) laser
field respectively. One can find that in both cases the dipole
moments along $x$ direction are distinct while those along $y$ and
$z$ directions are very small. This is because the laser
polarization is along the $x$ direction. However, $D_x$ displays
different behaviors in the two cases. In Fig.~\ref{HHG}(a), $D_x$
is relatively symmetric, while in Fig.~\ref{HHG}(b) it is
not symmetric because of the non-symmetric property of the
electric field strength of the two-color laser field.
The MHOHG spectra calculated by our TDLDA in the one-color and the
two-color laser fields are shown in Figs.~\ref{HHG}(c) and
(d). It is well known that the chosen laser
intensities and frequencies are certain essential physical
parameters which allow for quasistatic interpretations of strong
field laser-atom processes \cite{Corkum, Scrinzi}. In this paper,
the ponderomotive energies $U_p=e^2E_0^2/4m\omega^2$ for $\omega=2.72$~eV and
$2\omega=5.44$~eV are 1.93~eV and 0.12~eV, respectively. The
Keldysh parameter $\gamma$ separating multiphoton and tunnelling
ionization regimes for ethylene ionization potential $I_p=11.5$~eV
is $\gamma=\sqrt{I_p/2U_p}=1.7$. Thus the present parameters place
our calculations in the multiphoton regime rather than the tunnelling ionization regime.
According to the atomic HG cut-off law, $E_{max}=I_p+3.17U_p$, the
MHOHG cutoff of ethylene in the one-color laser field is at the 6th order.
From the MHOHG spectrum in Fig.~\ref{HHG}(c) we can find a clear
cut-off at the 9th order. Although this value is not exactly the
same as the value predicted by the atomic HG cut-off law, it is
quite near and we attribute the difference to the fact that
ethylene is a polyatomic molecule while the cut-off law is for
atoms. One can also find in Fig.~\ref{HHG}(c) that the MHOHG
spectrum of ethylene consists of a series of peaks, first
decreasing in amplitude and then reaching a plateau. The plateau
is rather short, from the 5th order harmonic to the 9th order,
followed by a continuous decrease. Thus the MHOHG spectrum of
ethylene behaves basically like an atom with recollision of the
electron with the compact molecular ion being the principal HG
mechanism. Furthermore, one can find in Fig.~\ref{HHG}(c) that
ethylene generates odd harmonics.
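The quasistatic estimates quoted above ($U_p$, $\gamma$, and the cutoff order) can be reproduced with the standard conversion $U_p[\textrm{eV}] \approx 9.33\times 10^{-14}\, I[\textrm{W/cm}^2]\, \lambda^2[\mu \textrm{m}^2]$; this is illustrative arithmetic only.
\begin{verbatim}
import numpy as np

Ip, w1, I1 = 11.5, 2.72, 1e14          # eV, eV, W/cm^2
lam = 1.2398 / w1                      # wavelength in microns
Up1 = 9.33e-14 * I1 * lam**2           # ~1.93 eV
Up2 = 9.33e-14 * (I1 / 4) * (lam / 2)**2   # ~0.12 eV
gamma = np.sqrt(Ip / (2 * Up1))        # ~1.7
cutoff = (Ip + 3.17 * Up1) / w1        # ~6.5, cutoff near the 6th order
print(Up1, Up2, gamma, cutoff)
\end{verbatim}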
Comparing the harmonic spectrum in Fig.~\ref{HHG}(d) to that in
Fig.~\ref{HHG}(c), we can see that the shapes of the MHOHG spectra
are different. In Fig.~\ref{HHG}(d), the plateau is not so
obvious and the decrease lasts longer. This is due to the
interference effect of the $1\omega$ and $2\omega$ components of
the two-color laser field. It can also be found that even
harmonics are well produced in the two-color case. One explanation
of this is that an even harmonic results from the sum of an odd
number of $2\omega$ photons plus two $1\omega$ photons. The other
interpretation is that even harmonic generation requires broken
reflection symmetry, because only then can a squared dipole
excitation be transformed into a single dipole
signal, i.e., $\hat{D}^2{\longrightarrow}\hat{D}$. That transition
cannot be mediated by a reflection symmetric system because parity
is then conserved, but $\hat{D}$ has negative parity while
$\hat{D}^2$ has positive parity. Ethylene is symmetric and a
two-color laser field breaks the symmetry so that a series of even
harmonics are produced.
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=8cm,angle=0]{fig4.eps}
\caption{Time profiles of the 5th and the 6th harmonics
in the one-color laser field (solid line) and the two-color
laser field (dotted line). The laser parameters are the same as
those in Fig.~\ref{escelectron}.}\label{Harmonics}
\end{center}
\end{figure}
To investigate the detailed temporal structure of the MHOHG
spectrum of ethylene, we perform the Gabor transform of the dipole
moments in the one-color and the two-color laser field
respectively. The laser parameters are the same as those in
Fig.~\ref{escelectron}. Here we consider two typical harmonics
which are both in the plateau, one is odd harmonic (5th) and the
other one is even harmonic (6th), as shown in
Fig.~\ref{Harmonics}. It should be noted that the optical cycle
refers to the one-color laser frequency. In
Fig.~\ref{Harmonics}(a) we can find that in the one-color laser
field, the time profile of the 5th harmonic is a relatively
smooth function of the driving laser pulse, which indicates that
the multiphoton mechanism dominates this lower harmonic regime.
However, for the two-color case, the time profile of the 5th
harmonic is stronger and clearly exhibits one burst within
each optical cycle from the 5th optical cycle to the 25th optical
cycle. This can be attributed to the recollision of the electronic
wave packet with the ionic cores. Because the first five and the
final five optical cycles correspond to the ramp-on and ramp-off
times of the laser pulse, there are only two and four bursts,
respectively, in these two parts. For the 6th harmonic, as shown in Fig.~\ref{Harmonics}(b),
it is obvious that in the one-color laser field, the time profile
of the 6th harmonic is very weak, whereas in the two-color laser
field, it is almost two orders of magnitude stronger. Again, it
clearly shows one burst within each optical cycle from the 5th
optical cycle to the 25th optical cycle.
\section{Conclusions}
In this paper, we have simulated the MHOHG spectra and ionization
of ethylene induced by the one-color and the two-color laser
fields in the multiphoton regime with TDLDA. We find that the
ionization and the ionization probability of higher charge state
are enhanced by the two-color laser field. It is shown that the
MHOHG spectrum of ethylene in the one-color laser field exhibits
the typical atom HHG spectrum and odd order harmonics are
produced. The two-color laser field can result in the breaking of
the symmetry and generate the even order harmonics. Furthermore,
the detailed temporal structure of MHOHG spectrum of ethylene is
obtained by means of the time-frequency transform providing new
insights of the MHOHG mechanisms in the one-color and two-color
laser fields.
\section*{ACKNOWLEDGEMENTS}
This work was supported by the National Natural Science Foundation
of China (Grants No. 10575012 and No. 10435020), the National
Basic Research Program of China (Grant No. 2006CB806000), the
Doctoral Station Foundation of Ministry of Education of China
(Grant No. 200800270017), the scholarship program of China
Scholarship Council and the French-German exchange program PROCOPE
Grant No. 04670PG.
\section{Introduction: Sum of distances and related problems}
\subsection{Problem statement and overview of results}
Let $Z_N=\{z_1,\dots,z_N\}$ be a set of $N$ points on the unit sphere $S^{n-1}$ in $\reals^n$. Denote by
$\tau_n(Z_N)=\sum_{i,j=1}^N \|z_i-z_j\|$ the sum of pairwise distances between the points in $Z_N$ and let
$\tau(n,N)=\sup_{Z_N}\tau_n(Z_N)$ be the largest attainable sum of distances over all sets of cardinality $N$.
The problem of estimating $\tau(n,N)$ was introduced by Fejes T{\'o}th \cite{FejesToth1956} and it has been studied in
a large number of follow-up papers. The main body of results in the literature are concerned with the asymptotic
regime of fixed $n$ and $N\to\infty.$ In particular, it is known that
\begin{equation}\label{eq:old-bounds}
cN^{1-1/n}\le W(S^{n-1}) N^2-\tau(n,N)\le CN^{1-1/n},
\end{equation}
where $W(S^{n-1})=\iint \|x-y\|dxdy$ is the average distance on the sphere and $c, C$ are some positive constants that depend only on $n$.
The upper bound in \eqref{eq:old-bounds} is due to Alexander \cite{Alexander1972} and the lower bound was proved by
Beck \cite{Beck1984}. Kuijlaars and Saff \cite{Kuijlaars1998} extended these results to bounds on the $s$-Riesz energy of spherical sets for all $s>0$, and Brauchart et al. \cite{Brauchart2012} computed the next terms of the asymptotics; see also Ch.6 in
a comprehensive monograph by Borodachov et al.~\cite{BHS2019} for a recent overview.
In this paper we adopt a different view, allowing both the dimension $n$ and the cardinality $N$ to increase in a certain related way.
The main emphasis of this work is on obtaining explicit lower and upper bounds on the sum of distances of a spherical set $Z_N$ for
$N\sim \delta n^\alpha$, $0< \alpha\le 2.$ Upper bounds apply uniformly for all spherical sets, while to derive lower bounds we need to assume that the minimum pairwise distance is bounded from below (otherwise the sum of distances can be made arbitrarily small).
If the minimum distance is large, then the neighbors of a point are naturally placed on the orthogonal sphere, and the distance to them is about $\sqrt 2.$ This suggests that the main term in the asymptotic expression for the sum of distances is $\sqrt2 N^2,$ and it is easy to obtain a bound of the form $\tau(n,N)\le \sqrt2 N^2(1+o(1)),$
as shown below in Sec.~\ref{sec:discrepancy}.
Our main results are related to refinements of this claim. Using linear programming, we derive lower and upper
bounds for the sum of distances of codes of small size. For large minimum separation the
asymptotics of our lower and upper bounds are very close, showing that for good codes, placing the neighbors on the equatorial hyperplane is a rigid structure, enforced by the parameters. For a number of code families, the sum of distances behaves as
$\sqrt2 N^2,$ and the bound is asymptotically tight. We compute lower-order terms in a number of examples, including codes obtained from equiangular line sets, spherical embeddings of strongly regular graphs (tight frames), and spherical embeddings of some classes of small-size binary codes. Numerical calculations, some of which we include, confirm that the sum of distances of these codes follows closely the upper bound.
\subsection{Sum of distances and Stolarsky's invariance}\label{sec:discrepancy}
The sum of distances in a spherical code enjoys several links with other problems in
geometry of spherical sets. One of them is related to the theory of uniform distributions on the sphere. A
sequence of spherical sets $(Z_N)_N$ is called asymptotically uniformly distributed if for every closed set $A\subset S^{n-1}$
$$
\lim_{N\to\infty}\frac{|Z_N\cap A|}N=\sigma_n(A),
$$
where $\sigma_n$ is the normalized (surface area) measure on the sphere.
To quantify the proximity of a sequence of sets $Z_N$ to the uniform distribution on $S^{n-1},$ define
the {\em quadratic discrepancy} of $Z_N:$
\begin{equation}\label{eq:disc}
D^{L_2}(Z_N):=\int_{-1}^1 \int_{S^{n-1}}\Big|\frac 1N\sum_{j=1}^N \bb_{C(x,t)}(z_j)-\sigma_n(C(x,t))\Big|^2 d\sigma_n(x) dt,
\end{equation}
where $C(x,t)=\{y\in S^{n-1}: (x \cdot y) \ge t\}$ is a {\em spherical cap} of radius $\arccos t$ and center at $x.$ A classic result
states that a sequence of sets $Z_N$ is asymptotically uniformly distributed if and only if $\lim_{N\to\infty} D^{L_2}(Z_N)=0;$
see e.g., \cite[Theorem 6.1.5]{BHS2019}.
A fundamental relation between $\tau_n(Z_N)$ and $D^{L_2}(Z_N)$ states that the sum of these two quantities is a constant that depends only
on $N$ and $n.$ Namely,
\begin{equation}\label{eq:Stol}
c_n D^{L_2}(Z_N)=W(S^{n-1})-\frac 1{N^2}\tau_n(Z_N),
\end{equation}
where $c_n=\frac{n\sqrt\pi\Gamma(n/2)}{\Gamma((n+1)/2)}$ is a universal constant that depends only on the dimension of the sphere.
This relation was proved by Stolarsky \cite{Stolarsky1973} and is currently known as {\em Stolarsky's invariance principle}.
The average distance on the sphere is given by
$\int_0^\pi 2\sin(\theta/2)\sin^{n-1}\theta d\theta/\int_0^\pi \sin^{n-1}\theta d\theta,$ which evaluates to
\begin{equation*}
W(S^{n})=\frac{2^n\Gamma((n+1)/2)^2}{\sqrt\pi\Gamma(n+\nicefrac12)}=\sqrt 2-\frac 1{4\sqrt 2n}+O(n^{-2}),
\end{equation*}
Since $D^{L_2}(Z_N)\ge 0,$ the following bound is immediate: For any code $Z_N\subset S^{n-1}$
\begin{equation}\label{eq:disc-dist}
\tau_n(Z_N)\le N^2\Big(\sqrt 2-\frac 1{4\sqrt 2n}+O(n^{-2})\Big).
\end{equation}
This inequality in effect states a well-known fact that the average of a negative-definite kernel over a subset is at most the average over the entire space. It also forms a very particular case of a recent general result in \cite{BDHSS2020}.
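As a quick numerical illustration of \eqref{eq:disc-dist} (not needed in the sequel), i.i.d. uniform points nearly saturate the bound, since the expected value of $\tau_n(Z_N)$ equals $N(N-1)W(S^{n-1})$ in that case; in the sketch below (Python, requiring SciPy), $W(S^{n-1})$ is the displayed formula with $n$ replaced by $n-1$.
\begin{verbatim}
import numpy as np
from scipy.special import gamma as G

rng = np.random.default_rng(0)
n, N = 8, 200
Z = rng.standard_normal((N, n))
Z /= np.linalg.norm(Z, axis=1, keepdims=True)   # uniform on S^{n-1}

tau = np.linalg.norm(Z[:, None, :] - Z[None, :, :], axis=-1).sum()

d = n - 1                                       # W(S^d) with d = n-1
W = 2**d * G((d + 1) / 2)**2 / (np.sqrt(np.pi) * G(d + 0.5))
print(tau / N**2, W)                            # tau/N^2 <= W
\end{verbatim}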
{\em Remarks}
1. On account of \eqref{eq:Stol}, the problem of maximizing the sum of distances is equivalent to minimizing the quadratic discrepancy, i.e.,
the sum of distances serves as a proxy for uniformity: a set of $N$ points on the sphere is ``more uniform'' if the sum of pairwise distances is large for its size.
2. Sequences $(Z_N)$ with average distance $\sqrt 2(1+o(1))$ are asymptotically uniformly distributed. As we have already pointed out, many
sequences of codes satisfy this condition; moreover, as shown below, spherical codes obtained from the binary Kerdock and dual BCH codes match the second term in \eqref{eq:disc-dist}, implying a faster rate of convergence to the limit.
3. Extensions and generalizations of Stolarsky's invariance were proposed in recent works
\cite{BilykLacey2017,Bilyk2018,Skriganov2019,Skriganov2020,Barg2021}. In particular, \cite{Barg2021} studied quadratic
discrepancy of binary codes, deriving explicit expressions as well as some bounds. Below in Sec.~\ref{sec:binary} we
point out that this problem is closely related to the sum-of-distances problem in the spherical case, and translate our results on bounds to the binary case. This link also motivates studying the asymptotic regime of $n\to\infty$ for spherical codes because this is the only possible asymptotics in the binary space.
\subsection{Details of our approach}
Viewing the distance $\|x-y\|$ as a two-point potential on the sphere, we can relate the problem of estimating
$\tau(n,N)$ to the search
for spherical configurations with the minimum potential energy. The above references \cite{Kuijlaars1998}, \cite{Brauchart2012},
\cite{BHS2019}, and many others adopt this point of view, considering the energy minimization for general classes of potential functions on the sphere. A line of works on energy
minimization, initiated by Yudin \cite{Yudin1993,Kolushov1997} and developed by Cohn and Kumar \cite{coh07b}, uses the linear programming bounds on codes to derive results about optimality as well as lower bounds on the energy of spherical codes.
Extending the approach of earlier works by Yudin and Levenshtein \cite{lev83a,lev98}, the authors of
\cite{coh07b} proved optimality of several known spherical codes for all {\em absolutely monotone} potentials\footnote{A potential $h(t):[-1,1]\to\reals$ is called absolutely monotone if for every $n\ge0$ the
derivative $h^{(n)}(t)$ exists and is positive for all $t$.} and called
such codes universally optimal. In particular, denoting $t=t(x,y)=x\cdot y,$ we immediately observe that
the potential $L(t)=-\|x-y\|=-\sqrt{2(1-t)}$ is absolutely monotone, and thus all the known universally optimal codes are
maximizers of the sum of distances.
While the results of \cite{coh07b} apply to specific spherical codes, a suite of {\em universal bounds} on the potential energy
was derived in recent papers of Boyvalenkov, Dragnev, Hardin, Saff and Stoyanova~\cite{BDHSS2015,BDHSS2016,BDHSS2019,BDHSS2020}. While the bounds can be written in a general form relying on the Levenshtein formalism, explicit expressions are difficult to come by. We derive an explicit form of the first few bounds in the Levenshtein hierarchy and evaluate them for the families of spherical codes mentioned above, limiting ourselves to the potential $L(t).$
Our approach can be summarized as follows.
Given an absolutely monotone potential $h$, define the minimum $h$-energy over all spherical sets of size $N$ by $E_h(n,N):=\inf_{Z_N} E_h(Z_N)$, where $E_h(Z_N)=\sum_{i,j=1}^N h(z_i\cdot z_j).$ This quantity is bounded from below
as follows:
\begin{equation} \label{ulb-general}
E_h(n,N) \geq N^2\sum_{i=0}^{k-1+\varepsilon} \rho_i h(\alpha_i),
\end{equation}
where the positive integer $k$, the value $\varepsilon \in \{0,1\}$, and the real parameters $(\rho_i,\alpha_i)$, $i=0,1,\ldots,k-1+\varepsilon$,
are functions of $N$ and $n$ as explained in \cite{BDHSS2016} and in Section \ref{sec:proofs-bounds} below.
The bound \eqref{ulb-general} was called a {\em universal lower bound} (ULB) in \cite{BDHSS2016}.
For given $k$ and $\varepsilon$ we obtain a degree-$m$ bound, where the term ``degree'' refers to the degree
of the polynomial used in the corresponding linear programming problem.
The bound of degree $m=2k-1+\varepsilon$ applies to the values of code cardinality in the segment
$D^{\ast}(n,m)\le N< D^{\ast}(n,m+1),$ where $D^{\ast}(n,m):= \binom{n+k-1-\varepsilon}{n-1}+\binom{n+k-2}{n-1}$. The first few segments are as follows:
$$
[2,n),\; [n+1,2n),\; [2n,\nicefrac12 n(n+3)),\; [\nicefrac12 n(n+3),n(n+1)),\; [n(n+1),\nicefrac16 n(n^2+6n+5)).
$$
The results of \cite{BDHSS2016}
also imply the optimal choice of the polynomial, so the bounds we obtain cannot be improved by choosing a different polynomial of
degree $\le m.$
Similarly, it is possible to bound $E_h(n,N)$ from above under the condition that the maximum inner product $s$ between distinct vectors in $Z_N$ satisfies the condition $\limsup s<1.$ Universal upper bounds for spherical codes of fixed cardinality and minimum distance were derived in \cite{BDHSS2020}. To this end,
the linear programming functional $f_0N-f(1)$ is minimized on the set of polynomials
$$
\Big\{f(t)=\sum_{i=0}^d f_i P_i^{(n)}(t): f(t) \geq h(t),t \in [-1,s]; f_i \leq 0, i \geq 1 \Big\},
$$
where $P_i^{(n)}(t)$ are the Gegenbauer polynomials.
In \cite{BDHSS2020}, the authors use a specific choice of the polynomials $f(t)$
as explained in Section \ref{sec:proofs-bounds}. This leads to the bound \begin{equation} \label{uub-general}
E_h(n,N) \leq \left(\frac{N}{N_1}-1\right)Nf(1)-N^2\sum_{i=0}^{k-1+\varepsilon} \rho_i h(\alpha_i),
\end{equation}
where this time the parameters $(\rho_i,\alpha_i)$ are functions of the dimension $n$ and the minimum separation $s,$ and $N_1$ is the corresponding Levenshtein bound (see Sec.~\ref{sec:proofs-bounds} for additional details).
\section{Bounds}\label{sec:bounds}
General bounds on the energy of spherical codes obtained earlier in \cite{BDHSS2016} and \cite{BDHSS2020} apply to the sum of distances, although obtaining explicit expressions is not immediate.
In this section we list the lower and upper bounds on the sum of distances obtained from the general results in the cited works, deferring the proofs to Sec.~\ref{sec:proofs-bounds}. We limit ourselves to the first three bounds in the sequence of lower and upper bounds, noting that even in this case, the resulting expressions are unusually cumbersome.
\subsection{Upper bounds}
The following bounds on the maximum sum of distances of a spherical code in $n$ dimensions hold true:
\begin{numcases}
{\tau_n(N) \leq} \tau_1(n,N):=N\sqrt{2N(N-1)}\label{eq:lb1}
\\[.1in]
\tau_2(n,N):=\frac{N\left(2N(N-n-1)+(N-2)\sqrt{2nN(n-1)(N-2)}\right)}{Nn+N-4n} \label{eq:lb2}\\[.1in]
\tau_3(n,N) :=N \sqrt{\frac{2N(nA_1+2(N-n-1)^2B_1)}{n^2(n-1)^2+4n(N-n-1)(N-2n)}} \label{eq:lb3}
\end{numcases}
where the first bound applies for $2\le N\le n+1,$ the second for $n+1\le N\le 2n,$ the third for $2n\le N\le n(n+3)/2,$
and where
\begin{gather} A_1= Nn^3+(2N-1)n^2-(N-1)(7N-2)n+(N-1)^2(2N+3), \label{eq:A1}\\
B_1=\sqrt{n(n-1)N(N-n-1)}. \label{eq:B1}
\end{gather}
Bound \eqref{eq:lb1} is attained by the simplex code, bound \eqref{eq:lb2} is attained by the biorthogonal
code, and bound \eqref{eq:lb3} is attained by all codes that attain the 3rd Levenshtein bound \cite[p.620]{lev98}.
Due to \eqref{eq:disc}, these codes have the smallest quadratic discrepancy among all codes of their size.
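The bounds are straightforward to evaluate numerically. The sketch below (added for illustration; the biorthogonal code serves as a test case for \eqref{eq:lb2}) implements \eqref{eq:lb1}--\eqref{eq:lb3}:
\begin{verbatim}
from math import sqrt

def tau1(n, N):                    # eq:lb1, 2 <= N <= n+1
    return N * sqrt(2*N*(N - 1))

def tau2(n, N):                    # eq:lb2, n+1 <= N <= 2n
    return N * (2*N*(N - n - 1)
                + (N - 2)*sqrt(2*n*N*(n - 1)*(N - 2))) / (N*n + N - 4*n)

def tau3(n, N):                    # eq:lb3, 2n <= N <= n(n+3)/2
    A1 = N*n**3 + (2*N - 1)*n**2 - (N - 1)*(7*N - 2)*n + (N - 1)**2*(2*N + 3)
    B1 = sqrt(n*(n - 1)*N*(N - n - 1))
    D = n**2*(n - 1)**2 + 4*n*(N - n - 1)*(N - 2*n)
    return N * sqrt(2*N*(n*A1 + 2*(N - n - 1)**2*B1) / D)

# biorthogonal code (N = 2n): its sum of distances is 4n + 4n(n-1)sqrt(2)
n = 4
print(tau2(n, 2*n), 4*n + 4*n*(n - 1)*sqrt(2))   # both 83.88...
\end{verbatim}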
In the asymptotics of $n\to\infty$ bounds \eqref{eq:lb1} and \eqref{eq:lb2} yield
\begin{align}
\tau_1(n,N)&= \sqrt{2}N^2-\frac N2+O(1) \quad\text{if $N\sim\delta n, 0<\delta\le 1$} \label{eq:lb1a}\\
\tau_2(n,N)&=\sqrt{2}N^2-2\Big(1-\delta-\frac{1-\nicefrac32\delta}{\sqrt 2}\Big) N+O(1) \quad \text{if $N\sim \delta n, 1\le \delta\le 2 $} \label{eq:lb2a}
\end{align}
Note that the bound \eqref{eq:lb2a} is slightly tighter than \eqref{eq:lb1a} because of a larger second term, which is at least
$0.7N$ for all $\delta \ge 1.$ The bound \eqref{eq:lb2a} is also uniformly better than \eqref{eq:disc-dist} for all $N=\delta n, \delta\in[1,2].$
The bound \eqref{eq:lb3} is valid for $N\le n(n+3)/2.$ Writing $N\sim \delta n^\alpha$, we note that
its asymptotic behavior depends on $\alpha.$ For instance, for $N=\delta n^2$ we obtain
\begin{equation} \label{eq:lb3a}
\tau_3(n,N)= \sqrt 2 N^2 - \frac {\sqrt{2\delta}}{8} N^{3/2} + O(N).
\end{equation}
Here the second term of the asymptotics coincides with the bound obtained from the average distance \eqref{eq:disc-dist}.
\subsection{Lower bounds}
Let $Z_N$ be a spherical code in $n$ dimensions, and assume that the minimum distance between distinct points $z_i,z_j\in Z_N$ is bounded
from below, i.e., that $z_i\cdot z_j\le s,$ where $s\in[-1,1)$ is some number.
Denote by $\tau_n(N,s)=\inf_{Z_N}\tau_n(Z_N)$ the smallest possible sum of
distances for such codes. We have
\begin{equation}
\tau_n(N,s)\ge \tau^{(i)}(n,N,s), \quad i=1,2,3,
\end{equation}
where
\begin{equation}\label{eq:ub1}
\tau^{(1)}(n,N,s)=N(N-1)\sqrt{2(1-s)},
\end{equation}
\begin{equation}\label{eq:ub2}
\tau^{(2)}(n,N,s)=\frac{N\left(2N(1-ns^2)-2n(1-s^2)+N(n-1)\sqrt{2(1-s)}\right)}{n(1-s^2)},
\end{equation}
applicable for $n+1\le N \le 2n$ and $s \in \left[\frac{N-2n}{n(N-2)},0\right]$, and
\begin{equation} \label{eq:ub3}
\tau^{(3)}(n,N,s)= \frac{N[A_5((1-s)(1+ns)A_4+B_4\sqrt{(1-s)B_5})-2N(1+2s+ns^2)C_4\sqrt{A_6}]}{n(1-s)(1+2s+ns^2)^2C_4\sqrt{2B_5}},
\end{equation}
applicable for $2n\le N \le n(n+3)/2$ and
\begin{equation}\label{eq:s}
s \in \left[\frac{\sqrt{n^2(n-1)^2+4n(N-n-1)(N-2n)}-n(n-1)}{2n(N-n-1)},\frac{\sqrt{n+3}-1}{n+2}\right],
\end{equation}
where the notation in \eqref{eq:ub3} is as follows:
\begin{equation}\label{eq:AB}
\begin{aligned}
A_4 &= n(n+2)(n+3)s^4 + 2(3n^2+13n+8)s^3+2(n^2+12n+23)s^2+2(2n^2+5n+17)s+9n+3, \\
B_4 &= 2(n-1)((n+1)s+2)((n-2)s^2-2ns-1), \\
C_4 &= 2n(n+2)s^3-(n^2-5n-2)s^2-6ns-n-5, \\
A_5 &= N(1-ns^2)-n(1-s)((n+1)s+2), \\
B_5 &=\frac{(n+1)s+2}{1+ns}, \\
A_6 &= \frac{(1-s)(A_2+2(1+ns)^2B_2)}{1+ns}, \\
A_2 &= (1+ns)^5(1-s)+(n-1)^2((n+1)s+2), \\
B_2 &= (n-1)\sqrt{(1-s)(1+ns)((n+1)s+2)}.
\end{aligned}
\end{equation}
{\em Remarks.}
1. Note that expression \eqref{eq:ub1} yields a trivial bound on the sum of distances, assuming that every pair of code points is at distance $\sqrt{2(1-s)}.$ It is included for completeness because it follows by optimizing the linear polynomial in the linear programming problem.
2. The bounds \eqref{eq:ub2} and \eqref{eq:ub3} are proved for $s$ in the specified intervals above but are valid for larger $s$.
We will discuss this in more detail in Sec.~\ref{sec:proofs-bounds}.
3. Using Mathematica, we can compute the asymptotic behavior of $\tau^{(3)}(n,N,s)$ for $n\to\infty.$ Since it depends on $s$, we do not include general expressions, leaving this for the examples.
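Given their bulk, expressions \eqref{eq:ub2} and \eqref{eq:ub3} are best handled mechanically. A direct transcription into Python (an illustration we include; for \eqref{eq:ub2} the biorthogonal code with $N=2n$, $s=0$ serves as a sanity check):
\begin{verbatim}
from math import sqrt

def tau_low2(n, N, s):                 # eq:ub2
    return N*(2*N*(1 - n*s**2) - 2*n*(1 - s**2)
              + N*(n - 1)*sqrt(2*(1 - s))) / (n*(1 - s**2))

def tau_low3(n, N, s):                 # eq:ub3, notation of eq:AB
    A4 = (n*(n+2)*(n+3)*s**4 + 2*(3*n**2 + 13*n + 8)*s**3
          + 2*(n**2 + 12*n + 23)*s**2 + 2*(2*n**2 + 5*n + 17)*s + 9*n + 3)
    B4 = 2*(n - 1)*((n + 1)*s + 2)*((n - 2)*s**2 - 2*n*s - 1)
    C4 = 2*n*(n + 2)*s**3 - (n**2 - 5*n - 2)*s**2 - 6*n*s - n - 5
    A5 = N*(1 - n*s**2) - n*(1 - s)*((n + 1)*s + 2)
    B5 = ((n + 1)*s + 2) / (1 + n*s)
    A2 = (1 + n*s)**5*(1 - s) + (n - 1)**2*((n + 1)*s + 2)
    B2 = (n - 1)*sqrt((1 - s)*(1 + n*s)*((n + 1)*s + 2))
    A6 = (1 - s)*(A2 + 2*(1 + n*s)**2*B2) / (1 + n*s)
    num = N*(A5*((1 - s)*(1 + n*s)*A4 + B4*sqrt((1 - s)*B5))
             - 2*N*(1 + 2*s + n*s**2)*C4*sqrt(A6))
    den = n*(1 - s)*(1 + 2*s + n*s**2)**2*C4*sqrt(2*B5)
    return num / den

n = 4
print(tau_low2(n, 2*n, 0.0))           # 4n + 4n(n-1)sqrt(2) = 83.88...
\end{verbatim}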
\section{Examples of codes of small size}
In this section we consider several families of spherical codes that attain the asymptotic
extremum of the sum of distances. We focus on sets with a small number of distinct distances because the sum of distances is easier
to compute, and because their cardinalities fit the range of the parameters used to derive the bounds in the previous section.
\subsection{Equiangular lines}
A family of $M$ equiangular lines in $\reals^n$ with common inner product $s$ defines a spherical code
$Z_N$ with $N=2M$ vectors, each of which has inner product $s$ with $M-1$ other vectors and $-s$ with their opposites. The sum
of distances in $Z_N$ equals
\begin{align}\label{eq:sum}
\tau_n(Z_N)&=\sum_{i,j=1}^{N}\|z_i-z_j\|=N((M-1)(\sqrt{2-2s}+\sqrt{2+2s})+2)\nonumber
\\&=\frac{N^2}{\sqrt{2}}(\sqrt{1-s}+\sqrt{1+s})(1+o(1)).
\end{align}
For small $s$ we can write $ \sqrt{1-s}+\sqrt{1+s}= 2-\frac{s^2}4+O(s^4),$ so the sum of distances will be close to
the value $\sqrt2 N^2$ given by the bound \eqref{eq:lb3a}. We give a few examples.
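A two-line numerical check of \eqref{eq:sum} (our illustration):
\begin{verbatim}
from math import sqrt

def tau_equiangular(M, s):     # eq:sum with N = 2M
    N = 2*M
    return N*((M - 1)*(sqrt(2 - 2*s) + sqrt(2 + 2*s)) + 2)

M, s = 10**4, 0.01
print(tau_equiangular(M, s) / (sqrt(2)*(2*M)**2))   # close to 1 for small s
\end{verbatim}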
{\sc Examples.}
1. Constructions with $M=\Theta(n^2).$ There are several constructions of large-size sets of equiangular lines,
starting with de Caen's family \cite{deCaen2000}; see also \cite{Jedwab2015}. In all these constructions $s \to 0,$ and thus the sum of distances equals $\tau_n(Z_N)=\sqrt 2N^2(1+o(1)),$ showing that such families yield asymptotically optimal spherical codes. For instance, de Caen's family yields codes $Z_{N}$ with the parameters
$$n=3\cdot 2^{2r-1}-1, \ N=\frac 49 (n+1)^2, \ s=\frac 1{2^r+1}, \quad r\ge 1,$$
and we find from \eqref{eq:sum} that $\tau_n(Z_N)=\sqrt 2 N^2-\frac{1}{4\sqrt 2}N^{3/2}+O(N^{5/4}).$ At the same time, on account of \eqref{eq:lb3a} and \eqref{eq:ub3}, any
sequence of codes $Z_N$ with $N\approx \frac49 n^2$ and $s=\sqrt{3/(2n)}$ satisfies
$$
\sqrt 2 N^2 - \frac{1}{5\sqrt 2} N^{7/4}-O(N^{3/2}) \le \tau_n(Z_N) \le \sqrt 2 N^2 - \frac 1{6\sqrt 2} N^{3/2}+O(N)
$$
(computations for the lower bound performed with Mathematica).
We give examples of the bounds on the sum of distances of de Caen's codes and of their true value for the first few values of $r.$
{\small
\begin{table}[H]
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$r$ & $n$ & $N$ & Upper bound $\tau_3(n,N)$ & $\tau_n(Z_N)$ & Lower bound $\tau^{(3)}(n,N,s)$ \\[0.03in]
\hline
3 & 95 & 4096 & $2.369344\cdot10^7$ & $2.368643\cdot10^7$ & $2.341901\cdot10^7$ \\[0.02in]
\hline
4 & 383 & 65536 & $6.0719880\cdot10^9$ & $6.071317\cdot10^9$ & $6.036098\cdot10^9$ \\[0.02in]
\hline
5 & 1535 & 1048576 & $1.5548171\cdot10^{12}$ & $1.554765\cdot10^{12}$ & $1.550113\cdot10^{12}$ \\[0.02in]
\hline
6 & 6143 & 16777216 & $3.9805762\cdot10^{14}$ & $3.980539\cdot10^{14}$ & $3.974463\cdot10^{14}$ \\[0.02in]
\hline
7 & 24575 & 268435456 & $1.0190430\cdot10^{17}$ & $1.019041\cdot10^{17}$ & $1.018254\cdot10^{17}$ \\
\hline
\end{tabular}
\end{table}
}
2. Below by $M_s(n)$ we denote the maximum number of equiangular lines in $n$ dimensions. It is known \cite{Lemmens1973} that $M_{1/3}(n)=2(n-1).$ Putting $N=4(n-1)$ for a given $n,$ we obtain a spherical code $Z_N$ with sum of
distances equal to
$$
\tau_n(Z_N)=N((M-1)(\sqrt{4/3}+\sqrt{8/3})+2)=N^2\frac{1+\sqrt 2}{\sqrt{3}}(1+o(1)).
$$
The constant factor in this expression is approximately $1.39.$
A more detailed calculation shows that
$$
\lim_{n\to\infty}\frac{\tau_n(Z_N)}{\tau_3(n,N)}=(\sqrt 6(\sqrt 2-1))^{-1}\approx 0.9856.
$$
3. Further, by \cite{Neumaier1989}, $M_{1/5}(n)=\lfloor 3(n-1)/2\rfloor$ for all sufficiently large $n$. This set of lines
yields a spherical code with sum of distances
$
\tau_n(Z_N)=N^2((\sqrt{2}+\sqrt{3})/\sqrt{5})(1+o(1))\approx 1.40 N^2,$ which is again very close to \eqref{eq:lb3a}.
It is not difficult to check that
$\lim_{n\to\infty}\tau_n(Z_N)/\tau_3(n,N)=(\sqrt2+\sqrt3)/\sqrt{10}\approx 0.9949.$
4. A recent paper by Jiang and Polyanskii \cite{Jiang2020a} shows that $M_{1/(1+2\sqrt 2)}(n)=3n/2+O(1),$ yielding a spherical
code of size $N=3n+O(1).$ For this code, the constant factor in \eqref{eq:sum} equals
$$
\frac 1{\sqrt2}\Big(\sqrt{1-\frac{1}{1+2\sqrt 2}}+\sqrt{1+\frac{1}{1+2\sqrt 2}}\Big)\approx 1.40189.
$$
In the limit of $n\to\infty$, the ratio $\tau_n(Z_N)/\tau_3(n,N)$ tends to approximately $0.991.$
\subsection{Strongly regular graphs and tight frames}
Here we consider the sum-of-distances function for spherical codes obtained from strongly regular graphs. A $k$-regular graph on $v$ vertices is strongly
regular (SRG) if every pair of adjacent vertices has $a$ common neighbors and every pair of nonadjacent vertices has $c$ common
neighbors. Below we use the notation $\SRG(v,k,a,c)$. Spherical embeddings of SRGs were introduced by Delsarte, Goethals, and Seidel \cite{del77b} and they are obtained by projecting $\reals^v$ on the eigenspaces of the adjacency matrix of the graph.
An SRG has three eigenspaces that correspond to the eigenvalues $k,r_1,r_2.$ Let $\Delta=(a-c)^2+4(k-c)$; then the eigenvalues of the adjacency matrix other than $k$ have the form
$$
r_1=\frac 12(a-c+\sqrt\Delta),\quad r_2=\frac 12(a-c-\sqrt\Delta),
$$
and the dimensions of the corresponding eigenspaces are
\begin{equation}\label{eq:dimensions}
n_{1,2}=\frac 12\Big(v-1\pm\frac{(v-1)(c-a)-2k}{\sqrt\Delta}\Big)
\end{equation}
(see e.g., \cite[p.~118]{Brouwer2012}).
Projecting $\reals^v$ on the eigenspace $W_{r_1}$ that corresponds to $r_1,$ we obtain a spherical code in $\reals^{n_1}$ with $N=v$ points and inner products
\begin{equation}\label{eq:stheta}
s_1=\frac{r_1}{k},\quad s_2=-\frac{1+r_1}{v-1-k}.
\end{equation}
A similar procedure for $r_2$ yields a spherical code in $\reals^{n_2}$ with $N$ points and inner products
\begin{equation}\label{eq:stau}
s_1=-\frac{1+r_2}{v-1-k},\quad s_2 =\frac {r_2}k,
\end{equation}
where in both cases $s_1>0>s_2.$ Details of the calculations can be found, for instance, in \cite[Sec.~9.4]{Ericson2001}.
The distribution of distances in the obtained spherical codes does not depend on the point $z_i\in Z_N.$ If
the code is obtained by projecting on $W_{r_1}$, then the number of neighbors of a point with inner product $r_1/k$ is $k$, and if it is obtained by projecting on $W_{r_2},$ then the number of neighbors of a point with inner product $r_2/k$ is $k$. Thus, in both cases, the number of neighbors with the remaining value of the inner product is $N-k-1.$
Combining \eqref{eq:dimensions}, \eqref{eq:stheta}, and \eqref{eq:stau}, we obtain
\begin{proposition} The spherical code $Z_N$ obtained by projecting an $\SRG(v,k,a,c)$ on the eigenspace $W_{\theta}, \theta=r_1,r_2$ forms a spherical code in $\reals^{n_{1,2}}$ of size $N=v$ whose sum of distances equals
\begin{equation}\label{eq:stf}
\tau_{n_{_{1,2}}}(Z_N)=N(\sqrt{2k(k-\theta)}+\sqrt{2(N-1-k)(N+\theta-k)}),
\end{equation}
where $\theta=r_1$ or $r_2$ as appropriate.
\end{proposition}
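For illustration, the computation behind \eqref{eq:stf} can be sketched as follows (our snippet; the Petersen graph $\SRG(10,3,0,1)$ is used as a test case):
\begin{verbatim}
from math import sqrt

def srg_embedding_sums(v, k, a, c):
    # sums of distances (eq:stf) of the two spherical embeddings
    Delta = (a - c)**2 + 4*(k - c)
    N = v
    return [N*(sqrt(2*k*(k - th)) + sqrt(2*(N - 1 - k)*(N + th - k)))
            for th in ((a - c + sqrt(Delta))/2, (a - c - sqrt(Delta))/2)]

print(srg_embedding_sums(10, 3, 0, 1))   # Petersen graph
\end{verbatim}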
{\em Remark:} Families of spherical codes considered below attain a sum of distances that can be written in the form
$\tau_n(Z_N)=\sqrt 2 N^2(1+o(1)).$ A sufficient condition for this is that the eigenvalues are small compared to $N$, as can be seen upon rewriting \eqref{eq:stf} in the form
$$
\tau_{n}=\sqrt 2 N^2\Big(\sqrt{\frac{k(k-\theta)}{N^2}}+\sqrt{\frac{(N-k-1)(N-k+\theta)}{N^2}}\;\Big).
$$
As long as $\theta=o(N),$ as is the case in the examples below, the main term of the asymptotic expression will be $\tau_n=\sqrt2 N^2.$
\noindent{\sc Examples.}
The families of graphs considered below are taken from the online database \cite{Brouwer}.
1. Graph of points on a quadric in PG$(2m,q).$ The parameters of the SRG are
$$
v=\frac{q^{2m}-1}{q-1}, k=\frac{q(q^{2m-2}-1)}{q-1}, a=\frac{q^2(q^{2m-4}-1)}{q-1}+q-1, c=\frac{q^{2m-2}-1}{q-1},
$$
and the eigenvalues are $r_{1,2}=\pm q^{m-1}-1.$
Spherical embeddings of this graph give tight frames in dimensions \eqref{eq:dimensions}
$$
n_{1,2}=\frac12(N-1\pm q^m)\approx\frac12(N\pm\sqrt N),
$$
which is easily seen since $\sqrt\Delta=2q^{m-1}.$
The size of the code $Z_N=Z_N(r_1)$ is $N=v$ and the sum of distances is computed from \eqref{eq:stf} and equals
$$
\tau_{n_1}(Z_N)=N\sqrt{2(q^m+1)}\Big[\frac{q^{m-1}-1}{q-1}\sqrt{q(q^{m-1}+1)}+q^{\frac{3m-2}2}\Big].
$$
Taking $m\to\infty,$ we compute
\begin{equation}\label{eq:second}
\tau_{n_1}(Z_N)=\sqrt2 N^2-\frac{5}{4\sqrt2}N+O(1).
\end{equation}
Since in this case $N\approx2n_1-2\sqrt {2n_1},$ the appropriate bound to look at is $\tau_2(n,N)$ with $\delta=2.$ The second term of the sum of distances in \eqref{eq:second} is approximately $-0.884 N$ while the second term in \eqref{eq:lb2a} is $-2(\sqrt 2-1)N\approx -0.828N.$
Likewise, the projection on the eigenspace $W_{r_2}$ gives a spherical code $Z_N=Z_N(r_2)$ whose sum of distances equals
$$
\tau_{n_2}(Z_N)=N\sqrt{2(q^m-1)}\Big[\frac{q^{m-1}+1}{q-1}\sqrt{q(q^{m-1}-1)}+q^{\frac{3m-2}2}\Big].
$$
For large $m$ this behaves as $\sqrt 2 N^2-\frac{5}{4\sqrt2}N+O(1),$ exhibiting similar behavior as the code
in dimension $n_1.$
2. Graph of points on a hyperbolic quadric in PG$(2m-1,q)$. The parameters of the SRG are
\begin{equation}\label{eq:he}
v=\frac{q^{2m-1}-1}{q-1}+q^{m-1}, k=\frac{q(q^{2m-3}-1)}{q-1}+q^{m-1}, a=k-q^{2m-3}-1, c=k/q,
\end{equation}
and the eigenvalues are $r_{1}=q^{m-1}-1$ and $r_2=-q^{m-2}-1.$ Taking $\epsilon=1,$ we obtain that
the dimensions of the spherical embeddings of this graph are
$$
n_1=\frac{q(q^{m-2}+1)(q^m-1)}{q^2-1}, \quad n_2=\frac{q^2(q^{2m-2}-1)}{q^2-1}
$$
and thus, $n_1\approx N/(q+1), n_2\approx Nq/(q+1).$ The sum of distances in $Z_N(r_1)$ is found to be
$$
\tau_{n_1}(Z_N)=N\sqrt{2q(q^{m-1}+1)}\Big[\frac{q^{m-1}-1}{q-1}\sqrt{q^{m-2}+1}+q^{\frac{3m}2-2}\Big].
$$
For large $m$ we obtain $\tau_n(Z_N)=\sqrt 2 N^2-\frac{q+4}{4\sqrt 2}N-O(1).$ At the same time, from the bound
\eqref{eq:lb3} we obtain an upper estimate of the form $\sqrt 2 N^2-O(N),$ giving the second term of the same order, although with a
smaller constant factor.
Turning to the code $Z_N$ obtained by projecting on the eigenspace $W_{r_2},$ we find that
$$
\tau_{n_2}(Z_N)=N\sqrt{2(q^m-1)}\Big[\frac{q^{m-2}+1}{q-1}\sqrt{q(q^{m-1}-1)}+q^{\frac{3m}2-2}\Big],
$$
yielding $\tau_{n_2}(Z_N)=\sqrt 2N^2-\frac{4q-1}{4q\sqrt 2}N-O(1)$, with similar conclusions in regards to asymptotics of the upper bound.
{\em Remark:} Spherical codes obtained in the described way have an additional property of forming tight frames for $\reals^{n_{1,2}}.$
A spherical code $Z_N=\{z_1,\dots,z_N\}$ forms a {\em tight frame} for $\reals^d$ if $\sum_{i=1}^N (x \cdot z_i)^2=A\|x\|^2$ for any $x\in \reals^d$, where $A$ is a constant. A necessary and sufficient condition for the tight frame property to hold is the equality
\cite{Benedetto2003}
\begin{equation}\label{eq:fp}
\sum_{i,j=1}^N (z_i\cdot z_j)^2=\frac {N^2}{n}.
\end{equation}
It turns out that all two-distance tight frames are obtained as spherical embeddings of SRGs \cite{BGOY15,Waldron2009}. Moreover, as shown in \cite{Benedetto2003}, $N^2/n$ is the smallest value of the sum in \eqref{eq:fp} over all $(n,N)$ spherical
codes. Thus, two-distance tight frames form spherical codes of size $N$ in $\reals^n$ that have asymptotically maximum sum of distances while minimizing the sum of squares of the inner products.
\subsection{Spherical embeddings of binary codes} Infinite series of asymptotically optimal spherical codes can be obtained by spherical embeddings of binary codes.
Let $Z_N\subset \cX_n = \{0,1\}^n$ be a binary code of length $n$, and denote by $A_w=\frac{1}{N}|\{(a,b)\in Z_N^2: d_H(a,b)=w\}|$ the
average number of neighbors of a code vector at Hamming distance $w$. The $(n+1)$-tuple $(A_0=1,A_1,\dots,A_n)$ is called
the distance distribution of the code $Z_N$. For a vector $z\in\cX_n$ denote by $\tilde z$ the $n$-dimensional real vector given by $\tilde z_i=(-1)^{z_i}/\sqrt n, i=1,\dots,n,$ and let $\tilde Z_N\subset S^{n-1}$ be the spherical embedding of the code $Z_N.$
Since $\|\tilde x-\tilde y\|=2\sqrt{d_H(x,y)/n},$ the sum
of distances in $\tilde Z_N$ can be written as
\begin{equation}\label{eq:2s}
\sum_{i,j=1}^N\|\tilde z_i-\tilde z_j\|=\frac{2N}{\sqrt n}\sum_{w=0}^n A_w\sqrt w.
\end{equation}
Using this correspondence, we give several examples of asymptotically optimal families of spherical codes.
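Formula \eqref{eq:2s} is immediate to implement. As a toy check (ours), taking $Z_N=\cX_n$ itself, so that $A_w=\binom nw$, gives a ratio $\tau_n(\tilde Z_N)/\sqrt2 N^2$ close to $1$:
\begin{verbatim}
from math import comb, sqrt

def tau_embedded(n, N, A):
    # A maps w -> A_w (distance distribution); implements eq:2s
    return (2*N/sqrt(n)) * sum(Aw*sqrt(w) for w, Aw in A.items())

n = 10; N = 2**n
A = {w: comb(n, w) for w in range(1, n + 1)}
print(tau_embedded(n, N, A) / (sqrt(2)*N**2))   # approaches 1 as n grows
\end{verbatim}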
\subsubsection{Sidelnikov codes} In \cite[Thm.~7]{Sid1971} Sidelnikov constructed a class of binary linear codes $C_r, r\ge1$
with the parameters
$
n=\frac{2^{4r}-1}{2^r+1}, \;N=2^{4r}.
$
The distance distribution of the codes has two nonzero components (in addition to $A_0=1$):
\begin{equation*}
\begin{aligned}
&w_1=\frac{2^{4r-1}-2^{2r-1}}{2^r+1}, \;A_{w_1}=2^{4r}-n-1,\\
&w_2=\frac{2^{4r-1}+2^{3r-1}}{2^r+1},\;A_{w_2}=n.
\end{aligned}
\end{equation*}
Let us compute the sum of distances of the spherically embedded Sidelnikov codes.
Using \eqref{eq:2s},
we obtain
$$
\frac {2N}{\sqrt n} (A_{w_1}\sqrt{w_1}+A_{w_2}\sqrt{w_2})=\sqrt2 \Big(N^2 - \nicefrac 1{8}N^{5/4}-\nicefrac{7}{16}N-\nicefrac{13}{128}N^{3/4}\Big)+O(N^{1/2}).
$$
At the same time, the bounds \eqref{eq:lb3a} and \eqref{eq:ub3} imply that for any sequence of codes $Z_N$ with $N$ as above and
$s=1-2w_1/n$
$$
\sqrt 2 N^{2}-\frac{1}{2\sqrt 2}N^{7/4}-O(N^{11/8}) \le \tau_n(Z_N)\le \sqrt2 \Big(N^2 - \nicefrac 1{8}N^{5/4}-\nicefrac{7}{16}N-\nicefrac{5}{128}N^{3/4}\Big)+O(N^{1/2}),
$$
and so for large $n$ the true value agrees with the upper bound in the first three terms.
The first few values of the sum of distances together with the bounds of Sec.~\ref{sec:bounds} are shown in the table below.
{\small
\begin{table}[H]
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$r$ & $n$ & $N$ & Upper bound $\tau_3(n,N)$ & $\tau_n(Z_N)$ & Lower bound $\tau^{(3)}(n,N,s)$ \\[0.03in]
\hline
1 & 5 & 16 & $345.4941208$ & $345.4941208$ & $345.4941208$ \\[.02in]
\hline
2 & 51 & 256 & $92338.0198$ & $92334.5230$ & $91959.9016$ \\[.02in]
\hline
3 & 455 & 4096 & $2.371820900\cdot10^7$ & $2.371817158\cdot10^7$ & $2.369984979\cdot10^7$ \\[.02in]
\hline
4 & 3855 & 65536 & $6.0737748\cdot10^9$ & $6.0737745\cdot10^9$ & $6.073097678\cdot10^9$ \\[.02in]
\hline
5 & 31775 & 1048576 & $1.554937673\cdot10^{12}$ & $1.554937671\cdot10^{12}$ & $1.554914842\cdot10^{12}$ \\
\noalign{\smallskip}\hline
\end{tabular}
\end{table}
}
\vspace*{-.2in}The relative difference between the upper bound and the true value for $r=5$ is about $10^{-9},$ and the upper and lower bounds on the sum of distances are also rather close.
We next discuss several families of spherical codes obtained from binary codes of cardinality $N\approx n^2$ that share the following common property: they have a small number of nonzero distances concentrated around $n/2.$ Since the factor $\sqrt w\approx \sqrt{n/2}$
for large $n$ can be taken outside the sum in \eqref{eq:2s}, and since the nonzero coefficients $A_w$ add to $N-1$, all such families satisfy
$$
\tau_n(Z_N)\sim \sqrt 2 N^2(1+o(1)),
$$
differing only in the lower terms of the asymptotics.
\subsubsection{Kerdock codes} \cite[\S15.5]{mac91}. Binary Kerdock codes form a family of nonlinear codes of length $n=2^{2m}, m\ge 2$ and cardinality
$N=n^2.$ The distribution of Hamming distances does not depend on the code point and the nonzero entries $(A_i)$ are as follows:
$$
A_0=A_n=1, A_{(n\pm\sqrt n)/2}=n(n/2-1), A_{n/2}=2(n-1).
$$
From \eqref{eq:2s}, the sum of distances of the spherical Kerdock code equals
$
\tau_{n}(\tilde Z_N)=\sqrt2 N^2 - \frac{N^{3/2}}{4\sqrt 2} +O(N),
$
which agrees with the bound \eqref{eq:disc-dist}, \eqref{eq:lb3a}. Note that
for general completely monotone potentials, the first-term optimality of the Kerdock codes was previously observed in \cite{BDHSS2015}.
\subsubsection{Dual BCH codes} \cite[\S15.4]{mac91}. Let $Z_N$ be a linear binary BCH code of length $n=2^r-1$, $r\ge 3$ with distance $5$.
Suppose first that $r$ is odd; then the dual code $(Z_N)^\bot$ has cardinality $N=2^{2r}$ and weight distribution $A_0=1$ and
$$
A_{\frac{n+1}2\pm \sqrt{\frac{n+1}2}}=n\Big(\frac {n+1}4\mp\frac{\sqrt{n+1}}{2\sqrt 2}\Big), \; A_{\frac{n+1}2}=\frac{n(n+3)}2.
$$
For $r$ even the dual BCH code of length $2^r-1$ and distance 5 has five nonzero weights, namely $\frac{n+1}2\pm\sqrt{n+1},
\frac{n+1}2\pm\frac{\sqrt{n+1}}2,$ and $\frac{n+1}2.$
Using \eqref{eq:2s}, we find that the sum of distances in both cases comes out to be $\tau_{n}(\tilde Z_N)=
\sqrt 2(N^2- N^{3/2}/8)-O(N).$ Note that $\tau_n(\tilde Z_N)$ follows closely the upper bound \eqref{eq:disc-dist}.
%
Many more similar examples can be given using the known results on binary codes with few weights. At the same time, obviously there are sequences of binary codes $(Z_N)$ that do not attain $\tau_n(\tilde Z_N)\sim \sqrt 2 N^2$. For instance, consider the code $Z_N$ formed of $\binom n2$ vectors of Hamming weight 2; then the pairwise distances are 2 and 4, and a calculation shows that $\tau_n(\tilde Z_N)\sim (2N)^{7/4}.$
\section{Sum of distances and bounds for quadratic discrepancy of binary codes}\label{sec:binary}
An analog of Stolarsky's identity \eqref{eq:Stol} for the Hamming space $\cX_n=\{0,1\}^n$ was recently derived in \cite{Barg2021}. To state it, define a function $\lambda:\cX_n\times\cX_n\to {\mathbb N}$ on pairs of binary $n$-vectors given by
$
\lambda(x,y)=\frac 12\sum_{z\in \cX_n} |d_H(x,z)-d_H(y,z)|.
$
For a binary code $Z_N\subset\cX_n$ define the {\em quadratic discrepancy} as follows:
\begin{equation*}
D_b^{L_2}(Z_N)=\sum_{t=0}^n\sum_{x\in \cX_n}\Big(\frac{|B(x,t)\cap Z_N|}{N}-\frac{v(t)}{2^n}\Big)^2,
\end{equation*}
where $B(x,t)=\{y\in \cX_n: d_H(x,y)\le t\}$ is the Hamming ball and $v(t)=\sum_{i=0}^t\binom ni$ is its volume. We also use the subscript
$b$ to differentiate this quantity from its spherical counterpart defined in \eqref{eq:disc}.
An analog of relation \eqref{eq:disc} in the binary case has the following form:
\begin{equation*}
D_b^{L_2}(Z_N)=\frac n{2^{n+1}}{\binom{2n}n}-\frac1{N^2}\sum_{i,j=1}^N\lambda(d_H(z_i,z_j)).
\end{equation*}
As it turns out, $\lambda(x,y)$ depends only on the distance $d_H(x,y).$ Denoting this distance by $w=2i$, we obtain \cite[Eq.(19)]{Barg2021}
\begin{equation}\label{eq:lambda}
\lambda(w-1)=\lambda(w)=2^{n-w}i\binom{w}{i}, 1\le i\le \lfloor n/2\rfloor,
\end{equation}
and thus the average value of $\lambda(\cdot)$ over the code can be written in the form
\begin{equation}\label{eq:dd}
\frac1{N^2}\sum_{i,j=1}^N\lambda(d_H(z_i,z_j))=\frac1{N}\sum_{w=1}^n A_w\lambda(w),
\end{equation}
where $(A_w,w=1,\dots,n)$ is the distribution of distances in $Z_N$ defined above.
Thus, the value of discrepancy of the code is determined once we know the average ``energy'' for the potential $\lambda,$ denoted
$\langle \lambda\rangle_{Z_N}$. Some estimates of this quantity were proved in \cite{Barg2021}.
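For reference, $\lambda(w)$ and the average \eqref{eq:dd} can be computed as follows (a small sketch we add; $\lambda(0)=0$ since $\lambda(x,x)=0$, and the formula covers $1\le w\le 2\lfloor n/2\rfloor$):
\begin{verbatim}
from math import comb

def lam(n, w):
    # eq:lambda: lambda(2i - 1) = lambda(2i) = 2^(n - 2i) * i * binom(2i, i)
    if w == 0:
        return 0
    i = (w + 1) // 2
    return 2**(n - 2*i) * i * comb(2*i, i)

def avg_lambda(n, N, A):
    # right-hand side of eq:dd for a distance distribution {w: A_w}
    return sum(Aw * lam(n, w) for w, Aw in A.items()) / N
\end{verbatim}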
In this section we note that the bounds on the sum of distances derived above in Sec.~\ref{sec:bounds} imply bounds on
$\langle \lambda\rangle_{Z_N}$ via the spherical embedding, and thus also imply bounds on $D_b^{L_2}.$
Our results are based on the following simple observation.
\begin{proposition}\label{prop:bs} Let $n$ be even, let $Z_N\subset\cX_n$ be a binary code, and let $\tilde Z_N\subset S^{n-1}$ be its spherical embedding. We have
\begin{equation}\label{eq:bs}
\langle \lambda\rangle_{Z_N}\le \frac{2^{n-1}}{N^2}\sqrt{\frac{n}{\pi}}\tau_n(\tilde Z_N).
\end{equation}
\end{proposition}
\begin{proof} Assume that $n$ is even. From \eqref{eq:dd} and \eqref{eq:lambda} we obtain
\begin{align*}
\frac 1N\sum_{i,j=1}^N \lambda(d_H(z_i,z_j))&=\sum_{w=1}^n A_w\lambda(w) \leq \sum_{i=1}^{n/2}(A_{2i-1}+A_{2i}) 2^{n}\sqrt {i/\pi}\\&= \frac{2^{n-1/2}}{\sqrt{\pi}}\sum_{i=1}^{n/2}(A_{2i-1}+A_{2i})\sqrt{2i}\le \frac{2^{n}}{\sqrt{\pi}}\sum_{w=1}^n A_w\sqrt w,
\end{align*}
where for the first inequality we used the estimate $i\binom{2i}i\le \sqrt{i/\pi}\,2^{2i},$ valid for all $i.$
Substituting the value of the sum from \eqref{eq:2s}, we obtain the claim.
\end{proof}
With minor differences, this result is also valid for odd $n$.
Earlier results \cite[Thm.5.2]{Barg2021} give several estimates for the average value of $\lambda;$ for instance,
for $n=2l-1$, $l$ even
\begin{equation*}\label{eq:lambda1}
\langle \lambda\rangle_{Z_N} \le\lambda(l)(1-\frac1{2N}).
\end{equation*}
Using this inequality and estimating the binomial coefficient, we obtain
\begin{equation}\label{eq:upper-lp}
\langle \lambda\rangle_{Z_N}\le 2^{n-l} \frac l2\binom{l}{l/2}\le 2^{n-1/2} \sqrt{ l/{\pi}},
\end{equation}
valid for all odd $n$.
While in \cite{Barg2021} inequality \eqref{eq:upper-lp} is proved by linear programming in the Hamming space, similar estimates are also obtained
from \eqref{eq:bs} and the upper bounds \eqref{eq:lb1}-\eqref{eq:lb3} (for $N$ in the range of their applicability), and they largely
coincide with earlier results. For instance, using \eqref{eq:bs} and a bound of the form \eqref{eq:lb3a} with $N=\delta n^2,$
we obtain $\langle \lambda\rangle_{Z_N}\le 2^{n-\frac12}\sqrt{\frac n\pi}(1-O(N^{-1/2})),$ which is only slightly inferior to \eqref{eq:upper-lp}.
In summary, spherical embeddings of binary codes give an alternative way of proving lower bounds for their quadratic discrepancy.
\section{Proofs of the bounds}\label{sec:proofs-bounds}
In this section we prove the bounds on the sum of distances stated in Sec.~\ref{sec:bounds}, using the energy function $E_h(n,N)$ with
$h(t)=L(t)=-\sqrt{2(1-t)}$ (the negative distance). Accordingly, the upper and lower bounds of Sec.~\ref{sec:bounds} exchange their roles. All the derivatives $L^{(i)}(t)$, $i \geq 1$, are defined and positive in $[-1,1)$ and $\lim_{t \to 1^-} L^{(i)}(t)=+\infty$; $L(t)+2$ is nonnegative and increasing in $[-1,1]$, and thus $L(t)$ is absolutely monotone up to an additive constant.
\subsection{Derivation of the necessary parameters}
Here we explain the choice of the parameters in the Levenshtein framework used to derive the bounds.
The parameters $k$, $\varepsilon$, $m=2k-1+\varepsilon$, and $(\rho_i,\alpha_i)$, $i=0,1,\ldots,k-1+\varepsilon$,
originate from the paper of Levenshtein \cite{lev92} (see also \cite[Section 5]{lev98}), where the author used them
to establish the optimality property of his bound on the size of codes (see Theorem 5.39 in \cite{lev98}).
For each positive integer $m=2k-1+\varepsilon$, where $\varepsilon \in \{0,1\}$ accounts for the parity of $m$, Levenshtein
used the $m$th degree polynomial
$$
f_m^{(n,s)}(t)=(t-\alpha_0)^{2-\varepsilon}(t-\alpha_{k-1+\varepsilon}) \prod_{i=1}^{k-2+\varepsilon} (t-\alpha_i)^2
$$
to obtain his universal upper bound $N \leq L_m(n,s)$ on the maximal cardinality of a code $Z_N(n,s)$ with fixed $n$ and $s$.
The numbers $\alpha_0 < \alpha_1 < \cdots < \alpha_{k-1+\varepsilon}$ belong to $[-1,1)$, with $\alpha_{k-1+\varepsilon}=s$; moreover,
$\alpha_0=-1$ if and only if $\varepsilon=1$. The polynomial $f_m$ can be written in the form
\begin{equation} \label{lev-poly}
f_m^{(n,s)}(t)=(t+1)^{\varepsilon} \left(P_k(t)P_{k-1}(s) - P_k(s)P_{k-1}(t)\right)^2/(t-s),
\end{equation}
where $P_i(t)=P_i^{(\frac{n-1}{2},\frac{n-3}{2}+\varepsilon)}(t)$ is a Jacobi polynomial normalized to satisfy $P_i(1)=1$.
For small $m$ the zeros $\alpha_i$ of $f_m$ can be easily found.
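In code, the normalized Jacobi polynomials and the representation \eqref{lev-poly} take the following form (a sketch assuming SciPy's conventions; the normalization uses $P_i^{(a,b)}(1)=\binom{i+a}{i}$):
\begin{verbatim}
from scipy.special import eval_jacobi, binom

def P(i, t, n, eps):
    # Jacobi polynomial with (alpha, beta) = ((n-1)/2, (n-3)/2 + eps),
    # normalized so that P(i, 1) = 1
    a, b = (n - 1)/2, (n - 3)/2 + eps
    return eval_jacobi(i, a, b, t) / binom(i + a, i)

def f_m(t, m, s, n):
    # Levenshtein polynomial (lev-poly); at t = s one takes the limit
    eps = (m + 1) % 2
    k = (m + 1 - eps) // 2
    num = (P(k, t, n, eps)*P(k - 1, s, n, eps)
           - P(k, s, n, eps)*P(k - 1, t, n, eps))
    return (t + 1)**eps * num**2 / (t - s)
\end{verbatim}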
The quadrature formula
\begin{equation} \label{QF}
f_0=\frac{f(1)}{L_m(n,s)}+\sum_{i=0}^{k-1+\varepsilon} \rho_i f(\alpha_i),
\end{equation}
which is exact for all real polynomials $f(t)=\sum_{i=0}^{d} f_i P_i^{(n)}(t)$ of degree $d \leq m$, reveals a strong relation
between the Levenshtein bounds and the energy bounds. In our context, this formula
can be used to calculate the weights $\rho_i$; see, for example, \cite{BDL1999}, where formulas for $\rho_i$ for odd $m$ were derived from a Vandermonde-type system obtained by setting $f(t)=t^i$, $i$ odd.
Formula \eqref{QF} is instrumental in the representation \eqref{ulb-general} of the ULB and the proof of its optimality in \cite{BDHSS2016}. The quantity $f_0N-f(1)$ which provides lower linear programming bounds for the energy $E_h(Z_N)$ (assuming that certain conditions on the polynomial $f(t)$ are met) is computed by \eqref{QF} to give the right-hand side of \eqref{ulb-general}. The reason for this is that $f(\alpha_i)=h(\alpha_i)$ from Hermite interpolation.
We next explain the derivation of the universal upper bound (UUB) from \cite{BDHSS2020} which is based on the optimal choice of polynomials
$$
f(t)=-\lambda f_m^{(n,s)}(t) + g_T(t)
$$
(one for each value of the degree). Here $f_m^{(n,s)}(t)$ is the degree-$m$ Levenshtein polynomial \eqref{lev-poly},
$g_T(t)$ interpolates the potential function at the multiset $T$ which consists of the roots of $f_m^{(n,s)}(t)$
(counted with their multiplicities) and $\lambda=\max\{g_i/\ell_i : 1 \leq i \leq m-1\}$ is a positive constant, where
\[ f_m^{(n,s)}(t)=\sum_{i=0}^m \ell_i P_i^{(n)}(t), \ g_T(t)=\sum_{i=0}^{m-1} g_i P_i^{(n)}(t) \]
are the Gegenbauer expansions of $f_m^{(n,s)}(t)$ and $g_T(t)$, respectively (note that $\ell_i>0$ for every $i \leq m$).
The parameter $N_1=L_m(n,s) \geq N$ is involved; the equality $N_1=N$ holds if and only if there exists a universally optimal code of size $N$ in $n$ dimensions (in this case, the ULB and UUB coincide\footnote{Having said that,
we may view the difference between the ULB and UUB as a measure of how far the codes are from being universally optimal.}). In our
computations of UUBs below we first find the Hermite interpolant $g_T(t)$, then $\lambda$, and finally compute the bound \eqref{uub-general}.
\subsection{Lower bounds}
\begin{proposition} \label{prop2-1}
For $2\le N\le n+1$ we have
\begin{equation} \label{ulb-deg1}
E_L(n,N) \geq -\tau_1(n,N).
\end{equation}
For $n+1\le N\le 2n$, we have
\begin{equation} \label{ulb-deg2}
E_L(n,N) \geq -\tau_2(n,N).
\end{equation}
For $2n\le N \le n(n+3)/2$, we have
\begin{equation} \label{ulb-deg3}
E_L(n,N) \geq -\tau_3(n,N),
\end{equation}
where $\tau_1,\tau_2,\tau_3$ are defined in \eqref{eq:lb1}-\eqref{eq:lb3}.
\end{proposition}
These estimates constitute the first three bounds in \eqref{ulb-general}, beginning with expressing the
parameters $(\rho_i,\alpha_i)$ as functions of the dimension $n$ and cardinality $N$. In all three proofs below we first
find the roots $\alpha_i$ of the Levenshtein polynomial \eqref{lev-poly} giving $L_m(n,s)=N$ for $m=1,2,3$, respectively.
This is equivalent to solving in $s$ the equation $L_m(n,s)=N$. Then we give the weights $\rho_i$ computed by
setting suitable polynomials (we used $f(t)=1,t,t^2,t^3$; for example $f(t)=1$ gives the identity
$\sum_{i=0}^{k-1+\varepsilon} \rho_i =1-1/N$) in the quadrature formula \eqref{QF}.
{\it Proof of \eqref{ulb-deg1}}.
For the degree 1 bound \eqref{ulb-deg1} we have $\alpha_0=-1/(N-1)$ and $\rho_0 = -1/(N\alpha_0) = (N-1)/N$. Therefore
\[ E_L(n,N) \geq N^2\rho_0 L(\alpha_0) = N(N-1)L(\alpha_0)= -N\sqrt{2N(N-1)}. \]
{\it Proof of \eqref{ulb-deg2}.} For degree 2 (with $k=1$ and $\varepsilon=1$) we have
$\alpha_0=-1$, $\alpha_1 = -\frac{2n-N}{n(N-2)}$, $\rho_0 = \frac{N-n-1}{Nn+N-4n}$ and
$\rho_1 = \frac{n(N-2)^2}{N(Nn+N-4n)}$. Since $L(-1)=-2$ and $L(\alpha_1)=-\sqrt{\frac{2N(n-1)}{n(N-2)}}$, we obtain that
the expression $N^2(\rho_0 L(\alpha_0) + \rho_1 L(\alpha_1))$ from \eqref{ulb-general}
is equal to $-\tau_2(n,N)$ as given in \eqref{eq:lb2}.
{\it Proof of \eqref{ulb-deg3}.}
For the degree-3 lower bound we take $k=2$ and $\varepsilon=0$. By \eqref{ulb-general} we have
\begin{equation} \label{ulb-3}
E_L(n,N) \geq N^2 (\rho_0 L(\alpha_0)+\rho_1 L(\alpha_1)),
\end{equation}
where $N \in [D^{\ast}(n,3),D^{\ast}(n,4)]=[2n,n(n+3)/2]$, and
\[ \alpha_{0,1} = \frac{-n(n-1) \pm \sqrt{D}}{2n(N-n-1)}, \ \ D=n^2(n-1)^2+4n(N-n-1)(N-2n), \]
are the roots of the quadratic equation $n(N-n-1)s^2+n(n-1)s+2n-N=0$ obtained from the equality $L_3(n,s)=N$.
Further, the weights $\rho_0$ and $\rho_1$ satisfy the formulas
\[ \rho_0N=\frac{1-\alpha_1^2}{\alpha_0(\alpha_1^2-\alpha_0^2)}, \ \ \rho_1N=\frac{1-\alpha_0^2}{\alpha_1(\alpha_0^2-\alpha_1^2)} \]
(note that the numerators resemble the potential $L(t)$ computed for $\alpha_0,\alpha_1$; this will make
our expressions symmetric). In the sequel,
we use the following symmetric expressions for $\alpha_0$ and $\alpha_1$:
\begin{gather*}
\alpha_0+\alpha_1=-\frac{n-1}{N-n-1}, \ \ \alpha_0 \alpha_1=- \frac{N-2n}{n(N-n-1)}, \ \ \alpha_0^2-\alpha_1^2=\frac{(n-1)\sqrt{D}}{n(N-n-1)^2}, \\
(1-\alpha_0)(1-\alpha_1)=\frac{(n-1)N}{n(N-n-1)}, \ \ (1+\alpha_0)(1+\alpha_1)=\frac{(n-1)(N-2n)}{n(N-n-1)}.
\end{gather*}
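These identities (and similar ones below) are routine to verify with a computer algebra system, e.g., in SymPy (our check):
\begin{verbatim}
import sympy as sp

n, N = sp.symbols('n N', positive=True)
s = sp.symbols('s')
# alpha_0, alpha_1 are the roots of n(N-n-1)s^2 + n(n-1)s + 2n - N = 0
a0, a1 = sp.solve(n*(N - n - 1)*s**2 + n*(n - 1)*s + 2*n - N, s)
print(sp.simplify(a0 + a1 + (n - 1)/(N - n - 1)))            # 0
print(sp.simplify(a0*a1 + (N - 2*n)/(n*(N - n - 1))))        # 0
print(sp.simplify((1 - a0)*(1 - a1) - (n - 1)*N/(n*(N - n - 1))))  # 0
\end{verbatim}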
Our task is to express the bound \eqref{ulb-3} via $n$ and $N$. Using the above equalities, we obtain
\begin{align*}
E_L(n,N) &\geq N (\rho_0N L(\alpha_0)+\rho_1N L(\alpha_1)) \\
&= -N \left(\frac{(1-\alpha_1^2)\sqrt{2(1-\alpha_0)}}{\alpha_0(\alpha_1^2-\alpha_0^2)}+
\frac{(1-\alpha_0^2)\sqrt{2(1-\alpha_1)}}{\alpha_1(\alpha_0^2-\alpha_1^2)} \right) \\
&= -\frac{n^2N(N-n-1)^3}{(n-1)(N-2n)\sqrt{D}}\left(\alpha_1(1-\alpha_1^2)\sqrt{2(1-\alpha_0)}-\alpha_0(1-\alpha_0^2)\sqrt{2(1-\alpha_1)}\right).
\end{align*}
Consider the expression $S=\alpha_1(1-\alpha_1^2)\sqrt{2(1-\alpha_0)}-\alpha_0(1-\alpha_0^2)\sqrt{2(1-\alpha_1)}$.
We compute
\begin{align*}
\frac{S^2}{2} &= \frac{(n-1)\left(A-B\right)N}{n(N-n-1)},
\end{align*}
and thus
$$
S=\sqrt{\frac{2(A-B)(n-1)N}{n(N-n-1)}},
$$
where we have denoted
\begin{align*}
A
&= \frac{(n-1)(N-2n)^2[Nn^3+(2N-1)n^2-(N-1)(7N-2)n+(N-1)^2(2N+3)]}{n^2(N-n-1)^5}
\end{align*}
and
\begin{align*}
B
&= - \frac{2(n-1)(N-2n)^2\sqrt{(n-1)N}}{(n(N-n-1))^{5/2}}.
\end{align*}
Therefore
\[ E_L(n,N) \geq -\frac{nN(N-n-1)^2}{(N-2n)\sqrt{D}} \sqrt{\frac{2(A-B)nN(N-n-1)}{n-1}} \]
which gives \eqref{ulb-deg3} and \eqref{eq:lb3}.
\subsection{Upper bounds}
In this section we prove bounds \eqref{eq:ub1}-\eqref{eq:ub3}, deriving an explicit form of the first three universal upper bounds for $Z_N(n,s)$ codes from \cite{BDHSS2020} for $L(t)$ as functions of
$n$, $N$ and $s$. In addition to the parameters $(\rho_i,\alpha_i)$ as explained above (but now related to $N_1=L_m(n,s)$ instead
of $N$), we need to find the polynomial $g_T(t)$, then the real parameter $\lambda$ and finally the polynomial $f(t)$ as explained in the last paragraph of Section 5.1.
Recall again that because of the sign change, the inequalities \eqref{eq:ub1}-\eqref{eq:ub3} are inverted.
\begin{proposition} \label{prop2-4}
For $N \in [2,n+1]$ and $s \in [-1/(N-1),-1/n]$, we have
\begin{equation} \label{uub-deg1}
E_L(n,N,s) \leq -\tau^{(1)}(n,N,s).
\end{equation}
For $N \in [n+1,2n]$ and $s \in [(N-2n)/(n(N-2)),0]$, we have
\begin{equation} \label{uub-deg2}
E_L(n,N,s) \leq -\tau^{(2)}(n,N,s).
\end{equation}
For $N \in [2n,n(n+3)/2]$ and $s \in \left[\frac{\sqrt{n^2(n-1)^2+4n(N-n-1)(N-2n)}-n(n-1)}{2n(N-n-1)},\frac{\sqrt{n+3}-1}{n+2}\right]$, we have
\begin{equation} \label{uub-deg3}
E_L(n,N,s) \leq -\tau^{(3)}(n,N,s),
\end{equation}
where the quantities $\tau^{(1)},\tau^{(2)},\tau^{(3)}$ are defined in \eqref{eq:ub1}-\eqref{eq:ub3} above.
\end{proposition}
\begin{remark}
We set upper limits for $s$ in all three cases as suggested implicitly by the framework in \cite{BDHSS2020}.
The bounds are valid beyond these limits but most likely they can be improved by polynomials of higher degrees.
\end{remark}
{\it Proof of \eqref{uub-deg1}.}
For fixed $n$, $N \in [2,n+1]$ and $s \in [-1/(N-1),-1/n]$, we consider the degree 1 UUB
\begin{equation*}
E_L(n,N,s) \leq N\left(\frac{N}{L_1(n,s)}-1\right)f(1) +N^2\rho_0 L(s),
\end{equation*}
where the parameters come as follows: $L_1(n,s)=(s-1)/s=: N_1$ is the first Levenshtein bound,
\[ f(t)=-\lambda f_1^{(n,s)}(t)+g_T(t)=-\lambda (t-s)+g_T(t) \]
is our linear programming polynomial, and $\alpha_0=s$, $\rho_0=-1/(N_1 s) =1/(1-s)$
are the Levenshtein parameters corresponding to $s$ (i.e., to $N_1$). The polynomial $g_T(t)$ is constant and is found from
$g_T(s)=L(s)$. Then $\lambda=0$ and $f(t)=L(s)$ give the bound
\[ E_L(n,N,s) \leq \left(\frac{N}{N_1}-1\right)NL(s) +N^2\rho_0L(s)=N(N-1)L(s). \]
\begin{remark} As already observed, this bound is straightforward upon estimating all terms in the energy sum $E_L(Z_N)$
by the constant $L(s)$.
\end{remark}
{\it Proof of \eqref{uub-deg2}.}
For fixed $n$, $N \in [n+1,2n]$ and $s \in [(N-2n)/(n(N-2)),0]$, we consider the degree 2 UUB following the derivation in \cite{BDHSS2020}
\begin{equation} \label{uub-2}
E_L(n,N,s) \leq N\left(\frac{N}{L_2(n,s)}-1\right)f(1) +N^2(\rho_0 L(\alpha_0)+\rho_1 L(\alpha_1)),
\end{equation}
where the parameters are defined as follows: $N_1:=L_2(n,s)=2n(1-s)/(1-ns)$ is the second Levenshtein bound,
\[ f(t)=-\lambda f_2^{(n,s)}(t)+g_T(t)=-\lambda (t+1)(t-s)+g_T(t) \]
is our linear programming polynomial (to be described below), and
\[ \alpha_0=-1, \ \alpha_1=s, \ \rho_0=\frac{N_1-n-1}{N_1 n+N_1-4n}, \ \rho_1=\frac{n(N_1-2)^2}{N_1(N_1 n+N_1-4n)} \]
are the Levenshtein parameters corresponding to $s$ (compare with the parameters in the proof of \eqref{ulb-deg2}).
The polynomial $g_T(t)$ with $T=\{-1,s\}$, i.e. $g(-1)=L(-1)$, $g(s)=L(s)$, becomes
\[ g(t)=\frac{L(s)-L(-1)}{1+s}t+\frac{L(s)+sL(-1)}{1+s}=\frac{(2-\sqrt{2(1-s)})t-2s-\sqrt{2(1-s)}}{1+s}. \]
The coefficient $\lambda$ is chosen to make $f_1=0$ in the Gegenbauer expansion $f(t)=f_2P_2^{(n)}(t)+f_1P_1^{(n)}(t)+f_0$
(this choice is unique). This gives $\lambda=\frac{2-\sqrt{2(1-s)}}{1-s^2}$ and
\[ f(t)=-\frac{(2-\sqrt{2(1-s)})t^2-2s^2+\sqrt{2(1-s)}}{1-s^2}. \]
Therefore, \eqref{uub-2} gives
\[ E_L(n,N,s) \leq N\left(\frac{N}{N_1}-1\right)(-2) +
N^2\left(\frac{(N_1-n-1)(-2)}{N_1n+N_1-4n}+\frac{n(N_1-2)^2(-\sqrt{2(1-s)})}{N_1(N_1n+N_1-4n)}\right), \]
implying \eqref{uub-deg2}.
{\it Proof of \eqref{uub-deg3}.} For fixed $n$, $N$ and $s$ as in the condition \eqref{eq:s}, we derive the degree 3 UUB
\begin{equation} \label{uub-21}
E_L(n,N,s) \leq N\left(\frac{N}{L_3(n,s)}-1\right)f(1) +N^2(\rho_0 L(\alpha_0)+\rho_1 L(\alpha_1)),
\end{equation}
where the parameters are defined as follows:
\[ N_1:=L_3(n,s)=\frac{n(1-s)((n+1)s+2)}{1-ns^2} \]
is the third Levenshtein bound,
\[ f(t)=-\lambda f_3^{(n,s)}(t)+g_T(t)=-\lambda (t-\alpha_0)^2(t-s)+g_T(t), \]
\[ \alpha_0=\frac{-n(n-1)-\sqrt{D_1}}{2n(N_1-n-1)} = -\frac{1+s}{1+ns}, \ \alpha_1=\frac{-n(n-1)+\sqrt{D_1}}{2n(N_1-n-1)} = s, \]
\[ D_1=n^2(n-1)^2+4n(N_1-n-1)(N_1-2n) = \frac{n^2(n-1)^2(1+2s+ns^2)^2}{(1-ns^2)^2}, \]
\[ \rho_0=\frac{1-\alpha_1^2}{N_1\alpha_0(\alpha_1^2-\alpha_0^2)} = \frac{(1+ns)^3}{n((n+1)s+2)(1+2s+ns^2)}, \]
\[ \rho_1=\frac{1-\alpha_0^2}{N_1\alpha_1(\alpha_0^2-\alpha_1^2)} = \frac{n-1}{n(1-s)(1+2s+ns^2)}, \]
are the Levenshtein parameters corresponding to $s$ (note that they depend on $n$ and $s$ only).
The ULB part $\rho_0 L(\alpha_0)+\rho_1 L(\alpha_1)$ in \eqref{uub-21} can be found as in the proof of \eqref{ulb-deg3}. We obtain
\begin{equation} \label{uub-22}
E_L(n,N,s) \leq \frac{N}{N_1}\left((N-N_1)f(1)-N\sqrt{\frac{2N_1(nA_1+2(N_1-n-1)^2B_1)}{D_1}}\right),
\end{equation}
where $A_1$ and $B_1$ are as in \eqref{eq:A1} and \eqref{eq:B1}, respectively, but with $N_1$ instead of $N$. Writing
\eqref{uub-22} in terms of $n$ and $s$, we obtain
\begin{eqnarray*}
A_1 &=& \frac{(n-1)^2[(1+ns)^5(1-s)+(n-1)^2((n+1)s+2)]}{(1-ns^2)^3}, \\
B_1 &=& \frac{n(n-1)\sqrt{(1-s)(1+ns)((n+1)s+2)}}{1-ns^2}.
\end{eqnarray*}
Now let us eliminate $N_1$ from the ULB part. Using the above expression for $D_1$ and the equality $N_1-n-1=(n-1)(1+ns)/(1-ns^2)$,
we find
\begin{equation} \label{eq:uub3-ulb-part}
E_L(n,N,s) \leq \frac{N}{N_1}\left((N-N_1)f(1)-\frac{N\sqrt{2(1-s)((n+1)s+2)(A_2+2(1+ns)^2B_2)}}{(1+2s+ns^2)(1-ns^2)}\right),
\end{equation}
where
\[ A_2 = (1+ns)^5(1-s)+(n-1)^2((n+1)s+2), \ \ B_2= (n-1)\sqrt{(1-s)(1+ns)((n+1)s+2)}. \]
Next we find $f(t)$ in order to compute $f(1)$. The polynomial $g_T(t)=at^2+bt+c$ interpolates $L(t)$ in $T=\{\alpha_0,\alpha_0,\alpha_1\}$, i.e.
$g(\alpha_0)=L(\alpha_0)$, $g^\prime (\alpha_0)=L^\prime(\alpha_0)$, and $g(\alpha_1)=L(\alpha_1)$.
Resolving this to find $a$, $b$ and $c$, we obtain the Gegenbauer expansion of $f(t)$ as follows
\begin{eqnarray*}
&& f(t)=-\frac{\lambda(n-1)}{n+2} P_3^{(n)}(t)+\frac{(n-1)(a+\lambda(2\alpha_0+\alpha_1))}{n}P_2^{(n)}(t) \\
&& +\, \left(b-\frac{\lambda((\alpha_0^2+2\alpha_0\alpha_1)(n+2)+3)}{n+2}\right)P_1^{(n)}(t) +
\frac{\lambda(\alpha_0^2\alpha_1 n +2\alpha_0+\alpha_1)+a+cn}{n} P_0^{(n)}(t),
\end{eqnarray*}
where
\begin{eqnarray*}
a &=& \frac{L(\alpha_1)-L(\alpha_0)-L^\prime(\alpha_0)(\alpha_1-\alpha_0)}{(\alpha_1-\alpha_0)^2}, \\
b &=& \frac{L^\prime(\alpha_0)(\alpha_1^2-\alpha_0^2)-2\alpha_0(L(\alpha_1)-L(\alpha_0))}{(\alpha_1-\alpha_0)^2}, \\
c &=& \frac{\alpha_0^2(L(\alpha_1)-L(\alpha_0))-\alpha_0\alpha_1(\alpha_1-\alpha_0)L^\prime(\alpha_0)+(\alpha_1-\alpha_0)^2L(\alpha_0)}
{(\alpha_1-\alpha_0)^2}.
\end{eqnarray*}
According to the rule in Theorem 3.2 from \cite{BDHSS2020}, the coefficient $\lambda$ has to be chosen as $\max\{g_1/\ell_1,g_2/\ell_2\},$ which is equivalent to the choice between
$\{ f_1=0,f_2<0\}$ and $\{f_1<0,f_2=0\}$. We will prove below that $f_2<0$, i.e., that the first of these conditions is realized
for all $n$ and $s$ under consideration.
The equality $f_1=0$ gives
\[ \lambda=\frac{b(n+2)}{(\alpha_0^2+2\alpha_0\alpha_1)(n+2)+3} \iff \lambda = \frac{(n+2)(L^\prime(\alpha_0)(\alpha_1^2-\alpha_0^2)-2\alpha_0(L(\alpha_1)-L(\alpha_0)))}
{(\alpha_1-\alpha_0)^2((\alpha_0^2+2\alpha_0\alpha_1)(n+2)+3)}. \]
Then
\[ f(1) = -\lambda (1-\alpha_0)^2(1-\alpha_1)+a+b+c = \frac{A_3 (L(\alpha_1)-L(\alpha_0))+B_3 L(\alpha_0)-C_3 L^\prime(\alpha_0)}{B_3}, \]
where
\begin{eqnarray*}
A_3 &=& (1-\alpha_0)^2((n+2)(1+\alpha_0)^2-n-1) = \frac{(n-1)((n+1)s+2)^2((n-2)s^2-2ns-1)}{(1+ns)^4}, \\
B_3 &=& (\alpha_1-\alpha_0)^2((\alpha_0^2+2\alpha_0\alpha_1)(n+2)+3) \\
&=& -\frac{(1+2s+ns^2)^2(2n(n+2)s^3-(n^2-5n-2)s^2-6ns-n-5)}{(1+ns)^4}, \\
C_3 &=& (1-\alpha_0)(1-\alpha_1)(\alpha_1-\alpha_0)((n+2)(\alpha_0+\alpha_1+\alpha_0\alpha_1)+3) \\
&=& \frac{(n-1)(1-s)((n+1)s+2)(1+2s+ns^2)((n+2)s^2+2s-1)}{(1+ns)^3}. \\
\end{eqnarray*}
Therefore
\[ f(1) = \frac{((n+1)s+2)[(1-s)(1+ns)A_4 + B_4B_5\sqrt{1-s}]} {(1+2s+ns^2)^2C_4B_5\sqrt{2}}, \]
where $A_4$, $B_4$, $B_5$ and $C_4$ are as given in Equation \eqref{eq:AB} in Section 2.
Substituting these parameters into \eqref{eq:uub3-ulb-part}, we obtain \eqref{eq:ub3}.
The condition $f_2<0$ is equivalent to $\lambda(2\alpha_0+s) + a < 0$. This gives the inequality
\[ \frac{6B_6\sqrt{(1-s)(1+ns)((n+1)s+2)} - C_6}{C_4} < 0, \]
where
\begin{eqnarray*}
B_6 &=& (n-2)(n+1)s^2-4s-n-1, \\
C_6 &=& n^3(n+2)s^6 + 3n^2(n+2)s^5 - 3n(n^2-n-2)s^4 + 2(3n^3-6n^2-8n-4)s^3 + \\
&& 3(3n^2-16n-14)s^2 - 3(2n^2+5n+18)s - 11n - 13. \\
\end{eqnarray*}
We have $C_4<0$ since $2n(n+2)s^3<n+5$ follows for $n \geq 3$ and $0<s<(-1+\sqrt{n+3})/(n+2)$ (just use that $s<1/\sqrt{n+2}$).
It remains to see that $6B_6\sqrt{(1-s)(1+ns)((n+1)s+2)} > C_6$. Since $B_6<0$ for $0<s<(-1+\sqrt{n+3})/(n+2)$, we need to prove
that $C_6^2> 36B_6^2 (1-s)(1+ns)((n+1)s+2)$. This inequality reduces to a degree-8 polynomial inequality (in $s$) shown to hold
true by a computer algebra system.
\bibliographystyle{amsplain}
\section{Introduction}
We consider the problem of filtering: designing algorithms for the causal estimation of a real valued signal from noisy observations. The filtering algorithm observes at each iteration a noisy signal component, and is required to estimate the corresponding underlying signal component based on the current and past noisy observations alone.
\\
We consider finite fixed-length linear filters that combine the current and several last noisy observations for prediction of the current underlying signal component. Performance is measured by the mean square error over the entire signal. Following the setting in \cite{MWPaper}, we assume that the underlying signal is an arbitrary bounded signal, possibly even adversarial, and that it is corrupted by an additive zero-mean, time-independent, bounded noise with known constant variance \footnote{The justification of \cite{MWPaper} for assuming that the variance is a known constant is that this variance could be learned by sending a training sequence in the beginning of transmission.}.
\\
The approach taken in this paper is to construct a {\it universal} filter - i.e. an adaptive filter whose performance we compare to an optimal offline filter with full knowledge of the signal and noise. The metric of performance is thus regret - or the difference between the total mean squared error incurred by our adaptive filter, and the total mean squared error of the offline benchmark filter.
\\
The question of competing with a fixed offline filter was successfully tackled in \cite{MWPaper}. In this paper we consider a more challenging task: competing with the best offline changing filter, where restrictions are placed on how often this optimal offline filter is allowed to change. A more stringent metric of performance that fully captures this notion of competing with an adaptive offline benchmark is called {\it adaptive regret}: it is the maximum regret incurred by the algorithm on any subinterval.
\\
We present and analyze simple, efficient and intuitive algorithms that attain logarithmic adaptive regret. This bound is tight, and resolves a question posed by Moon and Weissman in \cite{MWPaper}. Along the way, we introduce a simple universal algorithm for filtering, improving the previously known best running time from quadratic in the number of filter coefficients to linear.
\subsection{Related Work}
There has been much work over the years on the problem of estimating a real-valued signal from noisy observations with respect to the MMSE loss. Classical results assume a model in which the underlying signal is stochastic with some known parameters, i.e. the first and second moments, or require the signal to be stationary, such as the classical work of \cite{WienerPaper}. The special case of linear MMSE filters has received special attention due to its simplicity \cite{LinearEstimationRef}. For more recent results on MMSE estimation see \cite{RobustMMRef1, RobustMMRef2, RobustMMRef3,UnsuperAdapFilterRef}.
\\
In this work we follow the non-stochastic setting of \cite{MWPaper}: no generating model is assumed for the underlying signal and stochastic assumptions are made on the added noise (that it is zero-mean, time-independent with known fixed variance). In this setting, while considering finite linear filters, \cite{MWPaper} presented an online algorithm that achieves logarithmic expected regret with respect to the entire signal. The computational complexity of their algorithm is quadratic in the linear filter size.
\\
In what follows we build on recent results from the emerging online learning framework called online convex optimization \cite{ZinkPaper,LogRegretPaper}. For our adaptive regret algorithm, we use tools from the framework presented in \cite{AdapRegretPaper} to derive an algorithm that achieves logarithmic expected regret on any interval of the signal.
\section{Preliminaries}
\subsection{Online convex optimization}
In the setting of online convex optimization (OCO) an online algorithm $\mathcal{A}$ is iteratively required to make a prediction by choosing a point $x_t$ in some convex set $\mathcal{K}$. The algorithm then incurs a loss $l_t(x_t)$, where $l_t(x):\mathcal{K}\rightarrow\mathbb{R}$ is a convex function. The emphasis in this model is that on iteration $t$, $\mathcal{A}$ has knowledge only of the loss functions from previous iterations $l_1(x),...,l_{t-1}(x)$, and thus $l_t(x)$ may be chosen arbitrarily and even adversarially. The standard goal in this setting is to minimize the difference between the overall loss of $\mathcal{A}$ and that of the best fixed point $x^*\in{\mathcal{K}}$ in hindsight. This difference is called regret and it is formally given by,
\begin{eqnarray*}
R_T(\mathcal{A}) = \sum_{t=1}^Tl_t(x_t) - \min_{x\in{\mathcal{K}}}\sum_{t=1}^Tl_t(x)
\end{eqnarray*}
A stronger measure of performance requires the algorithm to have little regret on any interval $I=[r,s]\subseteq{[T]}$ with respect to the best fixed point $x_I^*\in{\mathcal{K}}$ in hindsight on this interval. This measure is called adaptive regret and it is given by,
\begin{eqnarray*}
AR_T(\mathcal{A}) = \sup_{I = [r,s]\subseteq{[T]}}\lbrace{\sum_{t=r}^sl_t(x_t) - \min_{x\in{\mathcal{K}}}\sum_{t=r}^sl_t(x)}\rbrace
\end{eqnarray*}
\subsection{Problem Setting}
Let $x_t$ be a real-valued, possibly adversarial, signal bounded in the range $[-B_X...B_X]$. The signal $x_t$ is corrupted by an additive zero-mean time independent noise $n_t$ bounded in the range $[-B_N...B_N]$ with known time-invariant variance $\sigma^2$. An estimator observes at time $t$ the noisy signal $y_t = x_t + n_t$, and is required to predict $x_t$ by taking a linear combination of the observations $y_t,y_{t-1},...,y_{t-d+1}$ where $d$ is the order of the filter. That is, the estimator chooses at time $t$ a filter $w_t\in{\mathbb{R}^d}$ and predicts according to $w_t^{\top}Y_t$ where $Y_t\in{\mathbb{R}^d}$ and $Y_t(i) = y_{t-i+1}$, $1\leq i \leq d$. The loss of the estimator after $T$ iterations is given by the mean-square-error $\frac{1}{T}\sum_{t=1}^T(x_t - w_t^{\top}Y_t)^2$. \\
In case $x_t$ is observable to the online algorithm, minimizing the regret and the adaptive regret is fairly easy using the framework of OCO with the loss functions $l_t(w_t) = (x_t -w_t^{\top}Y_t)^2$. However, in our case the algorithm only observes the noisy signal $y_t$, and thus online convex optimization algorithms could not be directly used. Denoting $\hat{l}_t(w) = (y_t - w^{\top}Y_t)^2 + 2w^{\top}c$ where $c\in{\mathbb{R}^d}$, $c = (\sigma^2,0,...,0)$, it was pointed out in \cite{MWPaper} that if $w_t$ depends only on the observations $y_1,...,y_{t-1}$, then for any $w\in{\mathbb{R}^d}$ it holds that,
\begin{eqnarray}\label{ExpectationEq}
\mathbb{E}\left[{\sum_{t=1}^T\hat{l}_t(w_t) - \sum_{t=1}^T\hat{l}_t(w)}\right] = \mathbb{E}\left[{\sum_{t=1}^Tl_t(w_t) - \sum_{t=1}^Tl_t(w)}\right]
\end{eqnarray}
Thus by using OCO algorithms with the estimated loss functions $\hat{l}_t(w)$ we may minimize the expected regret with respect to the actual losses $l_t(w)$. Thus a simple algorithm such as \cite{ZinkPaper} immediately gives an $O(\sqrt{T})$ bound on the expected regret as well as on the expected adaptive regret with respect to the true losses $l_t(w)$, as long as we limit the choice of the filter to a euclidean ball of constant radius.
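Concretely, this baseline only needs the gradient $\nabla\hat{l}_t(w) = -2Y_t(y_t - w^{\top}Y_t) + 2c$; a minimal sketch (ours, with step sizes $\eta_t\propto 1/\sqrt t$ and projection onto the ball of radius $R$):
\begin{verbatim}
import numpy as np

def ogd_filter(y, d, sigma2, R, eta0=1.0):
    # online gradient descent on the estimated losses \hat{l}_t
    T = len(y)
    c = np.zeros(d); c[0] = sigma2
    w = np.zeros(d)
    x_hat = np.zeros(T)
    for t in range(T):
        Y = np.array([y[t - i] if t - i >= 0 else 0.0 for i in range(d)])
        x_hat[t] = w @ Y                       # predict x_t
        grad = -2*Y*(y[t] - w @ Y) + 2*c       # gradient of \hat{l}_t at w
        w = w - eta0/np.sqrt(t + 1) * grad
        nrm = np.linalg.norm(w)
        if nrm > R:                            # project back onto the ball
            w *= R/nrm
    return x_hat
\end{verbatim}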
\subsection{Using Strong-Convexity and Exp-Concavity}
Given a function $f(x): \mathcal{K}\rightarrow\mathbb{R}$ we denote by $\nabla{}f(x)$ the gradient vector of $f$ at point $x$ and by $\nabla^2f(x)$ the matrix of second derivatives, also known as the Hessian, of $f$ at point $x$. $f(x)$ is convex at point $x$ if and only if $\nabla^2f(x) \succeq{0}$, that is its Hessian is positive semidefinite at $x$.\\
We say that $f$ is {\it $H$-strongly-convex}, for some $H>0$, if for all $x\in{\mathcal{K}}$ it holds that $\nabla^2f(x) \succeq{}H\textbf{I}$, where $\textbf{I}$ is the identity matrix of proper dimension. That is all the eigenvalues of $\nabla^2f(x)$ are lower bounded by $H$ for all $x\in{\mathcal{K}}$. \\
We say that $f$ is {\it $\alpha$-exp-concave}, for some $\alpha > 0$, if the function $\exp{(-\alpha{}f(x))}$ is a concave function of $x\in{\mathcal{K}}$. It is easy to show that given a function $f$ such that $\nabla^2f(x) \succeq{}H\textbf{I}$ for all $x\in{\mathcal{K}}$ and $\max_{x\in{\mathcal{K}}}\Vert{\nabla{}f(x)}\Vert_2 \leq G$, it holds that $f$ is $\frac{H}{G^2}$-exp-concave.\\
In case all loss functions are $H$-strongly-convex or $\alpha$-exp-concave, there exist algorithms that achieve logarithmic regret and adaptive regret \cite{LogRegretPaper, AdapRegretPaper}. \\
In our case, the Hessian of the loss function $\hat{l}_t(w)$ is given by the random matrix $\nabla^2\hat{l}_t(w) = 2Y_tY_t^{\top}$, which is positive semidefinite, and, with $X_t,N_t\in{\mathbb{R}^d}$ defined from the clean signal and the noise analogously to $Y_t$, it holds that
\begin{eqnarray}\label{ExpectationOfMatrix}
\mathbb{E}\left[{Y_tY_t^{\top}}\right] = \mathbb{E}\left[{X_tX_t^{\top} + N_tX_t^{\top} + X_tN_t^{\top} + N_tN_t^{\top}}\right]
= X_tX_t^{\top} + \sigma^2\textbf{I} \succeq \sigma^2\textbf{I}
\end{eqnarray}
Nevertheless, in the worst case $\hat{l}_t(w)$ need not be strongly-convex or exp-concave, and thus algorithms such as \cite{LogRegretPaper, AdapRegretPaper} could not be directly used in order to get logarithmic expected regret and adaptive regret.
\section{A Simple Gradient Decent Filter}
In this section we describe how the problem of the loss functions $\hat{l}_t$ not necessarily being strongly-convex or exp-concave could be overcome, and introduce a simple gradient descent algorithm based on \cite{LogRegretPaper} that achieves $O(\log{T})$ expected regret. \\
For time $t$ and filter $w\in{\mathbb{R}^d}$ we define the following loss functions.
\begin{eqnarray}\label{NewLossFunc}
L^k_t(w) = \sum_{\tau = t-k+1}^t\hat{l}_{\tau}(w) + (w-w_t)^{\top}\left({(k-d+1)\sigma^2\textbf{I} - \sum_{\tau=t-k+d}^tY_{\tau}Y_{\tau}^{\top}}\right)(w-w_t)
\end{eqnarray}
where $w_t$ is the filter that was used by the algorithm for prediction at time $t$ and $k\in{\mathbb{N}^+}$ is a parameter. \\
Our gradient descent filtering algorithm is given below.
\begin{algorithm}
\caption{GDFilter}
\label{GDFilter}
\begin{algorithmic}[1]
\STATE Input: $k \in{\mathbb{N}^+}$, $H\in{\mathbb{R}^+}$, $R\in{\mathbb{R}^+}$.
\STATE Let $w_1 = \textbf{0}_d$
\FOR{$c = 1...$}
\FOR{$t = (c-1)k+1...ck$}
\STATE predict: $x_t = w_c^{\top}Y_t$.
\ENDFOR
\STATE $\eta_c \leftarrow \frac{1}{Hc}$
\STATE $\tilde{w}_{c+1} \leftarrow w_c - \eta_c\nabla{}L^k_c(w_c)$.
\IF{$\Vert{\tilde{w}_{c+1}}\Vert > R$}
\STATE $w_{c+1} \leftarrow \tilde{w}_{c+1}\cdot{}\frac{R}{\Vert{\tilde{w}_{c+1}}\Vert}$.
\ELSE
\STATE $w_{c+1} \leftarrow \tilde{w}_{c+1}$.
\ENDIF
\ENDFOR
\end{algorithmic}
\end{algorithm}
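A direct NumPy transcription of Algorithm \ref{GDFilter} may look as follows (our sketch; note that the regularization term in \eqref{NewLossFunc} has zero gradient at $w=w_c$, so $\nabla L^k_c(w_c)$ reduces to the sum of the gradients of $\hat{l}_t$ over the block):
\begin{verbatim}
import numpy as np

def gd_filter(y, d, sigma2, R, k=None, H=None):
    # GDFilter with the choices k = 2d, H = d*sigma^2 of Theorem 1
    k = 2*d if k is None else k
    H = d*sigma2 if H is None else H
    T = len(y)
    c_vec = np.zeros(d); c_vec[0] = sigma2
    w = np.zeros(d)
    x_hat = np.zeros(T)
    for c in range(1, T//k + 1):
        grad = np.zeros(d)
        for t in range((c - 1)*k, c*k):        # block c, 0-based times
            Y = np.array([y[t - i] if t - i >= 0 else 0.0 for i in range(d)])
            x_hat[t] = w @ Y
            grad += -2*Y*(y[t] - w @ Y) + 2*c_vec
        w = w - grad/(H*c)                     # eta_c = 1/(H c)
        nrm = np.linalg.norm(w)
        if nrm > R:                            # project onto ball of radius R
            w *= R/nrm
    return x_hat
\end{verbatim}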
We have the following theorem and corollary.
\begin{theorem}\label{GDThr}
Let $w_t$ be the filter used by algorithm \ref{GDFilter} for prediction in time $t$. Let $k$ = $2d$ and $H = d\sigma^2$. Algorithm \ref{GDFilter} achieves the following regret bound,
\begin{eqnarray*}
\mathbb{E}\left[{\sum_{t=1}^Tl_t(w_t)}\right] - \min_{w\in{\mathbb{R}^d},\Vert{w}\Vert \leq R}\mathbb{E}\left[{\sum_{t=1}^Tl_t(w)}\right] = O\left({\frac{d^3R^2(B_X+B_N)^4}{\sigma^2}\log{T}}\right) \\
\end{eqnarray*}
\end{theorem}
\begin{corr}
Let $w_t$ be the filter used by algorithm \ref{GDFilter} for prediction in time $t$. Let $k=2d$, $H = d\sigma^2$ and let $R = \frac{\sqrt{d}B_X^2}{\sigma^2}$. It holds that,
\begin{eqnarray*}
\mathbb{E}\left[{\sum_{t=1}^Tl_t(w_t)}\right] - \min_{w\in{\mathbb{R}^d}}\mathbb{E}\left[{\sum_{t=1}^Tl_t(w)}\right] = O\left({\frac{d^4B_X^4(B_X+B_N)^4}{\sigma^6}\log{T}}\right)
\end{eqnarray*}
\end{corr}
Basically the new loss function (\ref{NewLossFunc}) sums several consecutive losses and adds a regularization expression. We show that since the regularization expression depends on the actual choices of the filtering algorithm, achieving low regret with respect to $L^k_t(w)$ implies low regret with respect to the losses $l_t(w)$. Moreover, as we will show, the combination of summing several losses and adding regularization ensures that $L^k_t(w)$ is always strongly-convex for a proper choice of $k$, and thus we can use the algorithms in \cite{LogRegretPaper, AdapRegretPaper} to get logarithmic regret. \\
It holds that,
\begin{eqnarray}\label{StrongConvexInq}
\nabla^2L^k_t(w) &=& \sum_{\tau = t-k+1}^t\nabla^2\hat{l}_{\tau}(w) + 2\left({(k-d+1)\sigma^2\textbf{I} - \sum_{\tau=t-k+d}^tY_{\tau}Y_{\tau}^{\top}}\right) \nonumber \\
&=& 2\sum_{\tau = t-k+1}^tY_{\tau}Y_{\tau}^{\top} + 2(k-d+1)\sigma^2\textbf{I} - 2\sum_{\tau=t-k+d}^tY_{\tau}Y_{\tau}^{\top} \nonumber \\
&\succeq & 2(k-d+1)\sigma^2\textbf{I}
\end{eqnarray}
Thus for $k\geq d$, $L^k_t(w)$ is always $2(k-d+1)\sigma^2$-strongly-convex and $2(k-d+1)\sigma^2/G^2$-exp-concave where $G = \max_{w,t}\Vert{\nabla{}L^k_t(w)}\Vert$.\\
We thus use the gradient descent algorithm in \cite{LogRegretPaper} by partitioning the iterations into disjoint blocks of length $k$ each; our algorithm updates its filter every $k$ iterations according to the loss function $L^k_t(w)$ for $t=ck$, $c\in{\mathbb{Z}}$ and predicts using the same filter on all iterations in the same block. The value of $k$ is assumed to be a constant independent of $T$. \\
Abusing notation, we switch between $L^k_c(w)$ and $L^k_{ck}(w)$ interchangeably where we use $L^k_c(w)$ to refer to the loss on block number $c$ of length $k$. \\
The following Lemma plays a key part in our analysis.
\begin{lemma}\label{KeyLemma}
Let $\mathcal{A}$ be a filtering algorithm that updates its filter every $k$ iterations. Denote by $w_t$ the filter used for prediction on iteration $t$ and denote by $w_c$ the filter used to predict on the entire block $c$, that is on iterations $((c-1)\cdot{k}+1)...c\cdot{k}$. It holds that
\begin{eqnarray*}
\mathbb{E}\left[{\sum_{t=1}^Tl_t(w_t) - \sum_{t=1}^Tl_t(w)}\right] \leq \mathbb{E}\left[{\sum_{c=1}^{T/k}L^k_{ck}(w_c) - \sum_{c=1}^{T/k}L^k_{ck}(w)}\right]
\end{eqnarray*}
\end{lemma}
\begin{proof}
First, we assume w.l.o.g. that $T=b\cdot{k}$ for some $b\in{\mathbb{N}^+}$. Otherwise, $T=b\cdot{k} + a$ with $0 < a < k$, and the regret on the additional $a$ iterations is a constant independent of $T$, so we can ignore it in the regret bound.\\
We now have,
\begin{eqnarray}\label{sum1}
&&\sum_{c=1}^{T/k}L^k_{ck}(w_c) - \sum_{c=1}^{T/k}L^k_{ck}(w) \\
&=& \sum_{c=1}^{T/k}\left({\sum_{t=(c-1)k+1}^{ck}\hat{l}_t(w_c) + (w_{c}-w_{c})^{\top}\left({(k-d+1)\sigma^2\textbf{I} - \sum_{\tau=ck-k+d}^{ck}Y_{\tau}Y_{\tau}^{\top}}\right)(w_{c}-w_{c})}\right) \nonumber \\
&-& \sum_{c=1}^{T/k}\left({\sum_{t=(c-1)k+1}^{ck}\hat{l}_t(w) + (w-w_{c})^{\top}\left({(k-d+1)\sigma^2\textbf{I} - \sum_{\tau=ck-k+d}^{ck}Y_{\tau}Y_{\tau}^{\top}}\right)(w-w_{c})}\right) \nonumber \\
&=& \sum_{t=1}^T\left({\hat{l}_t(w_t) - \hat{l}_t(w)}\right) - \sum_{c=1}^{T/k}(w-w_{c})^{\top}\left({(k-d+1)\sigma^2\textbf{I} - \sum_{\tau=(c-1)k+1}^{ck}Y_{\tau}Y_{\tau}^{\top}}\right)(w-w_{c}) \nonumber
\end{eqnarray}
Since $\mathcal{A}$ updates its filter every $k$ iterations, we have that $w_{ck}$ depends only on the random variables $n_1,...,n_{(c-1)k}$. Thus, using (\ref{ExpectationOfMatrix}), we have for all $c$ that,
\begin{eqnarray*}
&&\mathbb{E}\left[{(w-w_{c})^{\top}\left({(k-d+1)\sigma^2\textbf{I} - \sum_{\tau=(c-1)k+1}^{ck}Y_{\tau}Y_{\tau}^{\top}}\right)(w-w_{c})}\right]\\
&=& (k-d+1)\sigma^2\mathbb{E}[\Vert{w-w_{c}}\Vert^2] - \mathbb{E}\left[{\sum_{\tau=(c-1)k+1}^{ck}Y_{\tau}Y_{\tau}^{\top}}\right]\circ{}\mathbb{E}\left[{(w-w_{c})(w-w_{c})^{\top}}\right]\\
&=& (k-d+1)\sigma^2\mathbb{E}[\Vert{w-w_{c}}\Vert^2] \\
&-& \left({\sum_{\tau=(c-1)k+1}^{ck}X_{\tau}X_{\tau}^{\top}+(k-d+1)\sigma^2\textbf{I}}\right)\circ{}\mathbb{E}\left[{(w-w_{c})(w-w_{c})^{\top}}\right]\\
&=& -\sum_{\tau=(c-1)k+1}^{ck}X_{\tau}X_{\tau}^{\top}\circ{}\mathbb{E}\left[{(w-w_{c})(w-w_{c})^{\top}}\right] \leq 0
\end{eqnarray*}
Overall by taking expectation over (\ref{sum1}) we get
\begin{eqnarray*}
\mathbb{E}\left[{\sum_{c=1}^{T/k}L^k_{ck}(w_c) - \sum_{c=1}^{T/k}L^k_{ck}(w)}\right] \geq \mathbb{E}\left[{\sum_{t=1}^T\left(\hat{l}_t(w_t) - \hat{l}_t(w)\right)}\right]
\end{eqnarray*}
The lemma now follows from (\ref{ExpectationEq}).
\end{proof}
According to Lemma \ref{KeyLemma}, we can reduce our discussion to algorithms that predict in disjoint blocks of length $k$ and achieve low regret with respect to the loss function $L^k_c(w)$. \\
In order to derive precise regret bounds, we bound $G = \max_{w,t}\Vert{\nabla{}L^k_t(w)}\Vert$:
\begin{eqnarray*}
\nabla{}L^k_t(w) = 2\sum_{\tau=t-k+1}^tY_\tau(y_\tau - w^{\top}Y_\tau) + 2\left({(k-d+1)\sigma^2\textbf{I} - \sum_{\tau=t-k+d}^tY_{\tau}Y_{\tau}^{\top}}\right)(w-w_t)
\end{eqnarray*}
Thus by simple algebra we have,
\begin{eqnarray*}
G^2 &=& O\left({k^2d(B_X+B_N)^2R^2d(B_X+B_N)^2 + k^2d^2(B_X+B_N)^4R^2}\right) \\
&=& O\left({k^2d^2R^2(B_X+B_N)^4}\right)
\end{eqnarray*}
where $R$ is a bound on the magnitude of the filter; that is, we consider only filters $w\in{\mathbb{R}^d}$ such that $\Vert{w}\Vert_2 \leq R$. $R$ needs to be bounded since the regret of online convex optimization algorithms grows with $G$.\\
As pointed out in \cite{MWPaper}, for
\begin{eqnarray*}
w^* = \arg\min_{w\in{\mathbb{R}^d}}\mathbb{E}\left[{(1/T)\sum_{t=1}^T\left({x_t - w^{\top}Y_t}\right)^2}\right]
\end{eqnarray*}
it holds that $\Vert{w^*}\Vert \leq \frac{\sqrt{d}B_X^2}{\sigma^2}$. \\
We denote by $G(k,R)$ an upper bound on $\max_{w,t}\Vert{\nabla{}L^k_t(w)}\Vert$ parametrized by $k,R$. \\
For the complete proof of the theorem and corollary the reader is referred to the appendix.
\section{An Adaptive Algorithm}
In this section we present an algorithm that is based on the framework from \cite{AdapRegretPaper} and achieves logarithmic expected regret on any interval $I=[r,s]\subseteq{[T]}$. Our algorithm is given below.
\begin{algorithm}
\caption{AdaptiveFilter}
\label{AdaptiveFilter}
\begin{algorithmic}[1]
\STATE Input: $k\in{\mathbb{N}^+}$, $\alpha\in{\mathbb{R}^+}$.
\STATE Let $E^1,...,E^T$ be online convex optimization algorithms.
\STATE Let $p_1\in{\mathbb{R}^T},p_1^{(1)} = 1, \forall{j:1<j\leq T}, p_1^{(j)} = 0$.
\FOR{$c = 1...$}
\STATE $\forall{j \leq c}, w_c^{(j)} \leftarrow E^j(L^k_1,...,L^k_{(c-1)})$ (the filter of the $j$-th algorithm).
\STATE $w_c \leftarrow \sum_{j=1}^cp_c^{(j)}w_c^{(j)}$.
\FOR{$t = (c-1)k+1...ck$}
\STATE predict: $x_t = w_c^{\top}Y_t$.
\ENDFOR
\STATE $\hat{p}_{c+1}^{(c+1)} = 0$ and for $i\in{[c]}$, \begin{eqnarray*}
\hat{p}_{c+1}^{(i)} = \frac{p_c^{(i)}e^{-\alpha{}L^k_c(w_c^{(i)})}}{\sum_{j=1}^cp_c^{(j)}e^{-\alpha{}L^k_c(w_c^{(j)})}}
\end{eqnarray*}
\STATE $p_{c+1}^{(c+1)} = 1/(c+1)$ and for $i\in{[c]}: p_{c+1}^{(i)} = (1-(c+1)^{-1})\hat{p}_{c+1}^{(i)}$ (adding expert $E^{(c+1)}$).
\ENDFOR
\end{algorithmic}
\end{algorithm}
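The end-of-block bookkeeping in Algorithm \ref{AdaptiveFilter} (mixing the expert filters, exponentially reweighting the live experts, and adding a new expert) can be sketched in Python as follows; the per-expert block losses $L^k_c(w_c^{(j)})$ are assumed to be computed elsewhere.
\begin{verbatim}
import numpy as np

def mixed_filter(p, filters):
    # w_c = sum_j p_c^{(j)} w_c^{(j)}; filters has shape (c, d)
    return p @ filters

def update_weights(p, losses, alpha, c):
    # exponential-weights step over the c live experts
    scores = p * np.exp(-alpha * losses)
    p_hat = scores / scores.sum()
    # add expert E^{c+1} with weight 1/(c+1) and shrink the rest
    return np.concatenate([(1.0 - 1.0 / (c + 1)) * p_hat,
                           [1.0 / (c + 1)]])
\end{verbatim}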
We have the following theorem and corollary.
\begin{theorem}\label{AdaptiveThr}
Let $w_t$ be the filter used by algorithm \ref{AdaptiveFilter} for prediction at time $t$. Let $k=2d$ and let $\alpha = \frac{d\sigma^2}{G(2d,R)^2}$. For all $I=[r,s]\subseteq{[T]}$, algorithm \ref{AdaptiveFilter} achieves the following regret bound,
\begin{eqnarray*}
\mathbb{E}\left[{\sum_{t=r}^sl_t(w_t)}\right] - \min_{w\in{\mathbb{R}^d},\Vert{w}\Vert \leq R}\mathbb{E}\left[{\sum_{t=r}^sl_t(w)}\right] = O\left({\frac{d^3R^2(B_X+B_N)^4}{\sigma^2}\log{T}}\right)\\
\end{eqnarray*}
\end{theorem}
\begin{corr}
Let $w_t$ be the filter used by algorithm \ref{AdaptiveFilter} for prediction at time $t$. Let $k=2d$, $R = \frac{\sqrt{d}B_X^2}{\sigma^2}$ and let $\alpha = \frac{d\sigma^2}{G(2d,R)^2}$. For all $I=[r,s]\subseteq{[T]}$, algorithm \ref{AdaptiveFilter} achieves the following regret bound,
\begin{eqnarray*}
\mathbb{E}\left[{\sum_{t=r}^sl_t(w_t)}\right] - \min_{w\in{\mathbb{R}^d}}\mathbb{E}\left[{\sum_{t=r}^sl_t(w)}\right] = O\left({\frac{d^4B_X^4(B_X+B_N)^4}{\sigma^6}\log{T}}\right)
\end{eqnarray*}
\end{corr}
As in the previous section, we take the approach of partitioning the iterations into disjoint blocks of length $k$ and optimizing over the loss functions $L^k_t$. \\
The algorithm is based on the well-known experts framework where each expert, in our case, is a gradient descent filter as presented in the previous section. On each block $c$, the algorithm adds a new expert that starts producing predictions from block $c+1$ onward. The experts algorithm predicts on each iteration by combining the filters of all experts using a weighted sum according to the weight of each expert. The key idea behind this framework is that an expert added at block $r$ achieves low regret on all intervals starting in $r$. Given such an interval, the experts algorithm itself achieves low regret on the interval with respect to this specific expert, and thus has low regret on the interval. \\
Expert $E^r$ could be thought of as an algorithm that plays $w_c = 0$ for all $c<r$ and starting at block $r$ plays according to algorithm \ref{GDFilter}. \\
For the complete proof of the theorem and corollary the reader is referred to the appendix.
\section{Addendum of 17 June 2012 (\textit{Revised 18 July 2012})}
\subsection{$\phi^4$ Chains with Two Log-Thermostat Particles}
The comprehensive investigation of the ``$\phi^4$'' atomistic model for heat
flow carried out by Aoki and
Kusnezov \cite{Aoki-Kusnezov,Aoki-Kusnezov-2,Aoki-Kusnezov-3} showed that this
model behaves ``normally'', even in one space dimension. Heat flows through a
chain of $\phi^4$ particles according to Fourier's Law \cite{Hoover1},
$Q_x = -\kappa (dT/dx)$.
Thus this model can provide good test cases for the logarithmic oscillator
thermostat. The model we investigate here is a periodic chain of 20
one-dimensional particles. Two are ``log-thermostat'' particles, characterized
by their individual specified ``thermostat temperatures'' $\{T\}$, and
interacting with their lattice sites $\{q_0\}$ with a logarithmic potential:
\[
\phi_{\rm log} = (T/2)\ln ( \delta q^2 + 0.1) \ ; \ \delta q = q - q_0 \ .
\]
The remaining eighteen are $\phi^4$ particles, tethered to their lattice sites
$\{q_0\}$ with a \textit{quartic} potential:
\[
\phi_{\rm tether} = (1/4)(q-q_0)^4 \ .
\]
In addition to these two types of lattice-site potentials all 20
nearest-neighbor pairs interact with a Hooke's-Law potential,
\[
\phi(q_i,q_{i+1}) = (\kappa /2)(|q_i - q_{i+1}| - 1 )^2 \ ;
\ \kappa = 1.00 \ {\rm or} \ 0.10 \ .
\]
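For reference, the total potential energy of such a chain can be sketched in a few lines of Python (forces then follow by differentiation); the two thermostat indices and their temperatures are passed in as a dictionary, and the positions \texttt{q} and lattice sites \texttt{q0} are arrays.
\begin{verbatim}
import numpy as np

def chain_potential(q, q0, thermo, kappa=0.1):
    # thermo: dict {particle index: thermostat temperature T}
    phi = 0.0
    for i, dq in enumerate(q - q0):
        if i in thermo:                      # log-thermostat site
            phi += 0.5 * thermo[i] * np.log(dq**2 + 0.1)
        else:                                # quartic phi^4 tether
            phi += 0.25 * dq**4
    n = len(q)
    for i in range(n):                       # periodic Hooke pairs
        phi += 0.5 * kappa * (abs(q[i] - q[(i + 1) % n]) - 1.0)**2
    return phi
\end{verbatim}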
Because the log-thermostat model is imagined to be ``weakly coupled'' to the
chain, we considered a model with a much smaller force constant $\kappa = 0.1$
linking the two thermostat particles to their four neighbors in the chain.
Experiments with $\kappa = 0.01$ showed no tendency at all towards equilibration,
even in simulations of $10^9$ time steps. With initial velocities $\pm 1$
alternating along the chain, the long-time-averaged temperatures along the chain
reflect the initial conditions rather than the thermostat temperatures, ending
up with all the time-averaged kinetic temperatures near
$\langle \ p^2 \ \rangle = 0.5$.
The log-thermostats are apparently unable to absorb much energy in a reasonable
time \cite{Melendez}.
Equilibrium simulations with alternating initial velocities $\pm \sqrt{0.2}$
along the chain and with both specified thermostat temperatures equal to $0.10$
were more nearly successful. Figure \ref{eq} shows that the time-averaged
kinetic temperatures along the chain are within 8\% of the specified temperature
$0.10$ after a simulation of $10^9$ time steps, corresponding to a time of one
million in reduced units. Evidently, under propitious conditions log-thermostat
temperature control \textit{can} approach equilibrium on a sufficiently long
timescale.
\begin{figure}
\includegraphics[width=0.5\textwidth,angle=-90]{eq.ps}
\caption{\label{eq}Equilibrium temperature profiles for a 20-particle periodic
chain.\vspace{10mm}}
\end{figure}
We next carried out a similar, but \textit{nonequilibrium} simulation, with the
same initial conditions but with different specified thermostat temperatures:
$0.05$ for thermostat Particle 5 and $0.15$ for thermostat Particle 15. The
temperature profile which resulted (again with $10^9$ fourth-order Runge-Kutta
time steps) was scarcely different to the equilibrium one (see figure
\ref{neq}). The log-thermostats were \textit{unable} to provide a nonequilibrium
temperature profile.
\begin{figure}
\includegraphics[width=0.5\textwidth,angle=-90]{neq.ps}
\caption{\label{neq}``Nonequilibrium'' temperature profiles for a 20-particle
chain.}
\end{figure}
But why do logarithmic thermostats fail? Apart from the large time intervals
mentioned above \cite{Melendez}, there is another more fundamental reason for
their failure in \textit{nonequilibrium} problems, traceable to their
Hamiltonian heritage \cite{Hoover2}: Deterministic nonequilibrium heat-flow
problems generate \textit{fractal} phase-space distributions, with a vanishing
phase volume. A Hamiltonian system obeying Liouville's Theorem in phase space,
$df(q,p)/dt \equiv 0$ , simply \textit{cannot} produce a fractal.
Aoki and Kusnezov showed that heat flow through a $\phi^4$ chain generates
fractal phase-space distributions, with a dimensionality reduced from the
equilibrium Gibbs'
distribution \cite{Aoki-Kusnezov,Aoki-Kusnezov-2,Aoki-Kusnezov-3}. Hoover
\textit{et alii} \cite{Hoover-Aoki} showed that similar fractals result using
seven different thermostat types (none of which obeys the equilibrium version of
Liouville's Theorem).
\newpage
\subsection{Lennard-Jones Potentials}
Even the original one-dimensional simulations proposed by Campisi \textit{et
alii} for a couple of particles turn out to imply very long simulation times. We
carried out simulations with one and two particles, setting the mass of the
logarithmic oscillator equal to ten particle masses. A classic fourth-order
Runge-Kutta integrator took $2 \cdot 10^9$ time steps to generate a reasonable
reproduction of the energy histogram presented in \cite{Campisi} (see figure
\ref{densities}).
\begin{figure}
\includegraphics[width=0.5\textwidth,angle=-90]{densities-1D.ps}
\caption{\label{densities}
Probability distribution for $E_S/E_{tot}$ in the original numerical
experiment proposed by Campisi \textit{et alii} in \cite{Campisi}. $E_S$ is
the energy of a system interacting with the logarithmic oscillator. The
theoretical prediction follows the solid line (red for two particles and
blue for only one). The black points correspond to the numerical results for
$t = 0.5\cdot 10^6, 10^6$ and $2 \cdot 10^6$ for a system of two particles,
as in the original article (the time step was set to $\Delta t = 0.001$).
The blue points correspond to a system of only one thermostated particle,
which also takes about the same time to converge to the prediction.}
\end{figure}
In their discussion of a three-dimensional simulation, Campisi \textit{et alii}
pointed out that an increase in the number of particles led to a very
significant departure from the predicted velocity distribution. The solution
suggested was simply to increase the total energy of the system by
$\Delta E \propto 3Nk_BT/2$. This solution, however, leads to the exponential
increase in the typical lengths and times for the logarithmic oscillator that we
have already explained above.
Our investigations reveal that unless the initial conditions and the problem are
carefully ``tuned'', the thermostat is ineffective at equilibrium, even for
extraordinarily long simulation times. The situation away from equilibrium is
worse yet, as the thermostat fails to act rapidly enough to effect change.
We conclude that log-thermostats are not useful in most practical applications,
whether simulations or experiments.
We would like to thank Campisi \textit{et alii} for correcting a mistake in the
previous version of this article \cite{Campisi-Reply}.
\section{Addendum 29 January 2013}
Our comment was published in Physical Review Letters on the 11$^{\rm th}$ January
2013 \cite{PRL-Comment}, followed by a reply \cite{PRL-Reply} where Campisi and
his colleagues proposed a new experimental arrangement for the logarithmic
oscillator, without the unreasonable time or length scales that we had
described. The number of degrees of freedom in the original experiment was
reduced to one third by forcing the neutral atoms and logarithmic oscillator ion
to move along a single dimension. Table \ref{Campisi-table}, taken from the
reply, illustrates the exponential growth of mean free times $\tau$ and box
lengths $L$ as the required precision $H_{KS}$ or the number of particles $N$
increase.
Campisi \textit{et alii} claimed that this version of the experiment could be
implemented with present day cold-atom technology \cite{Bloch}. Having no prior
experience with cold-atom physics, we contacted Prof. I. Bloch, who kindly lent
us some of his time and confirmed that such a precise one-dimensional setup,
though ``challenging'', should be feasible in principle. We are grateful for his
helpful comments.
Although the magnitudes shown in the table are correct, they are slightly
misleading because they assume that the system of interest begins at (or very
near) the ``thermostat temperature''. However, if we assume that the initial
temperature is off by $\Delta T$ degrees, then the logarithmic oscillator will
have to absorb at least $\Delta E = Nk_B\Delta T/2$ units of energy. For $N=20$
and $\Delta T = 5 \mathrm{K}$, for example, the energy absorbed must be about
$\Delta E = 50 k_B$. Compare this value to those in the table, where the total
energy of system plus oscillator never exceeds $30k_B$.
\begin{table}
\caption{Total energy, box lengths and mean free times for the logarithmic
oscillator experiment as a function of the number of degrees of freedom,
$N$, and the required precision, $H_{KS}$, measured as a Kolmogorov-Smirnov
distance (from Campisi \textit{et alii} \cite{PRL-Reply}).}
\centering
\begin{tabular}{c c c c c}
\hline\hline
$N$ & $H_{KS}$ & $E_{tot}/k_B$ & $L$ [m] & $\tau$ [s] \\
\hline
20 & 0.005 & 16.45 & $3\times10^{-1}$ & $1\times10^{-3}$ \\
20 & 0.01 & 14.8 & $5\times10^{-2}$ & $3\times10^{-4}$ \\
20 & 0.02 & 13.1 & $9\times10^{-3}$ & $5\times10^{-5}$ \\
\hline
30 & 0.02 & 18.1 & $1\times10^{0}$ & $5\times10^{-3}$ \\
40 & 0.02 & 23.1 & $2\times10^{2}$ & $5\times10^{-1}$ \\
50 & 0.02 & 28 & $3\times10^{4}$ & $6\times10^{1}$ \\
\hline\hline
\end{tabular}
\label{Campisi-table}
\end{table}
Logarithmic oscillators indeed ``possess an infinite heat capacity'', but this
statement is easily misunderstood. The logarithmic oscillator's mean kinetic
temperature is not a function of its energy (if one considers time averages
with intervals that are very large compared to the period of oscillation).
In practice, though, a logarithmic oscillator \textit{cannot} absorb an
arbitrary amount of heat because any physical potential will lack a singularity
at the origin and the size of the experiment, $L$, will limit the amount of
energy that the oscillator may absorb, so that
\[\Delta E_{max.} = \frac{1}{2}k_BT\ \ln \left(\frac{L^2 + b^2}{b^2}\right),\]
which is an extremely slowly growing function of $L$.
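For illustration, with a (hypothetical) cutoff $b = 1$ nm in a box of length $L = 1$ m, $\Delta E_{max.} \approx k_BT\ln(L/b) \approx 21\,k_BT$; enlarging the box tenfold adds only about $2.3\,k_BT$ more.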
Our comment pointed out that applying \textit{two} logarithmic oscillators, with
different temperatures, to a chaotic Hamiltonian system failed to create the
expected linear temperature gradient. In their Reply, Campisi \textit{et alii}
disregarded this observation, arguing that their Letter suggested temperatures
that varied in time and not in space, so that our simulations were not relevant
to their work. This conclusion strikes us as ill-conceived. Unless they can
somehow explain how to change a system's temperature \textit{homogeneously}, one
would expect to find that a \textit{time}-varying temperature would necessarily
create gradients in \textit{space}.
Consequently we stand by our claim that the logarithmic oscillator cannot be
used as an effective thermostat in practical applications.
\section{Introduction} \label{intoduction}
DNNs have recently obtained remarkable performance on various applications \cite{he2016deep,krizhevsky2012imagenet}. Their effective training, however, often requires pre-collecting large-scale, finely annotated samples. When the training dataset contains a certain amount of noisy (incorrect) labels, overfitting tends to occur easily, naturally leading to poor generalization performance \cite{zhang2016understanding}. In fact, such biased training data are commonly encountered in practice, since the data are generally collected by coarse annotation sources, like crowdsourcing systems \cite{bi2014learning} or search engines \cite{liang2016learning,zhuang2017attend}. Such a robust deep learning issue is thus critical in machine learning and computer vision.
One of the most classical methods for handling this issue is to employ robust losses, which are not unduly affected by noisy labels, to replace the conventional noise-sensitive ones in guiding the training process \cite{manwani2013noise}. For example, as compared with the commonly used cross entropy (CE) loss, the mean absolute error (MAE), as well as the simplest 0-1 loss for classification, can be more robust against noisy labels \cite{ghosh2017robust} due to their evident suppression of large loss values (as clearly depicted in Fig.\ref{fig1}), and thus tends to reduce the negative influence brought by outlier samples with evidently corrupted labels. Compared with other robust learning techniques for defending against noisy labels, like sample reweighting \cite{kumar2010self,chang2017active,wang2017robust,jiang2018mentornet,ren2018learning,shu2019meta}, loss correction \cite{goldberger2016training,sukhbaatar2014training,patrini2017making,hendrycks2018using}, and label correction \cite{lee2018cleannet,li2017learning,veit2017learning,tanaka2018joint}, such a robust-loss-designing methodology is superior in its concise implementation scheme and solid theoretical basis in robust statistics and generalization theory \cite{huber2011robust,liu2014robust,masnadi2009design,patrini2017making}.
\begin{figure}[t]
\centering
\subfigcapskip=-1.5mm
\subfigure[Adaptive robust loss in PolySoft]{
\label{fig1a}
\includegraphics[width=0.22\textwidth]{./fig/PolySoft.pdf}} \ \ \
\subfigure[Adaptive robust loss in GCE]{
\label{fig1b}
\includegraphics[width=0.22\textwidth]{./fig/GCE.pdf}} \\ \vspace{-4mm}
\subfigure[Adaptive robust loss in SL]{
\label{fig1c}
\includegraphics[width=0.22\textwidth]{./fig/SL.pdf}} \ \ \
\vspace{-0.2cm}
\subfigure[Adaptive robust loss in Bi-Tempered]{
\label{fig1d}
\includegraphics[width=0.22\textwidth]{./fig/Bi.pdf}} \vspace{1mm}
\caption{Comparison of different loss functions. In each figure, the 0-1 loss, CE loss, original robust loss, and those learned by our method under three different noise rates on CIFAR-10 are shown. The robust losses included in (a)-(d) are PolySoft \cite{gong2018decomposition}, GCE \cite{zhang2018generalized}, SL \cite{wang2019symmetric} and Bi-Tempered \cite{amid2019robust}, respectively.}\label{fig1}
\vspace{-5mm}
\end{figure}
Along this research line, besides the aforementioned MAE and 0-1 loss, various forms of robust losses have been designed for such robust learning issues with noisy labels. For example, the 0-1 loss is verified to be robust for binary classification \cite{manwani2013noise,ghosh2015making}. However, since the 0-1 loss is not continuous and the corresponding learning algorithm can hardly be executed efficiently and accurately, many surrogate loss functions have been proposed to approximate it \cite{bartlett2006convexity,masnadi2009design,nock2009efficient}, such as the ramp loss \cite{brooks2011support} and the unhinged loss \cite{van2015learning}, which are also proved to be robust to label noise under certain conditions \cite{ghosh2015making}. Specifically, \cite{manwani2013noise} formally defines a loss to be ``noise-tolerant'' if minimization under it with noisy labels achieves the same solution as that with noise-free labels. \cite{ghosh2017robust} further relaxes the definition with loss bounded conditions to make it useful in guiding the construction of rational robust losses in practice. Inspired by this formulation, several losses have been designed very recently and proved to satisfy loss bounded conditions, like GCE \cite{zhang2018generalized} and SL \cite{wang2019symmetric}, both originating from the CE loss by introducing some noise-robust factor.
Although these robust loss functions help improve the robustness of a learning algorithm against noisy labels, they still have evident deficiencies in practice. On the one hand, they inevitably involve hyperparameter(s) to control their robustness extents against different noise rates. These hyperparameters need to be manually preset or tuned by heuristic strategies like cross-validation, naturally leading to efficiency issues and difficulties in practical implementation. On the other hand, the relatively complex form of a robust loss increases the non-convexity of the objective function used for training the network. Together with the non-convexity brought by complicated network architectures, the minimization over all involved network parameters is highly non-convex, making the problem easily trapped in an unexpected solution with poor generalization capability, even under properly preset hyperparameters in the robust loss. It is thus still challenging to construct a generally useful algorithm for the problem.
To alleviate the aforementioned issues, this paper presents an adaptive hyperparameter learning strategy to automatically tune the hyperparameters and thus learn the robust loss from data. Specifically, this study makes three-fold contributions.
\begin{itemize}
\item The proposed algorithm realizes a mutual amelioration between automatically tuning the hyperparameters involved in a robust loss and learning suitable network parameters. To the best of our knowledge, this is the first work to handle adaptive learning of robust losses (with explicit and concise forms) under noisy labels.
\item Four kinds of SOTA robust loss functions, including GCE \cite{zhang2018generalized}, SL \cite{wang2019symmetric}, Bi-Tempered \cite{amid2019robust} and PolySoft \cite{gong2018decomposition}, are integrated into our algorithm,
showing the generality of our algorithm for adaptive robust loss learning with noisy labels. Especially, besides GCE and SL, which have been proved to be theoretically robust under the loss bounded condition, we also prove that the loss bounded conditions hold for Bi-Tempered and PolySoft, implying their intrinsic robustness. These robust losses are thus all with sound theoretical guarantee and potentially useful.
\item Comprehensive experiments substantiate the effectiveness of the proposed algorithm, especially its superiority over the conventional hyperparameter setting strategy. Specifically, from the experiments, it is interestingly seen that through iteratively ameliorating both robust loss hyperparameters and deep network parameters, our algorithm is capable of exploring a good solution for the problem with evidently better generalization than that extracted by the conventional hyperparameter tuning strategy with even carefully tuned hyperparameters. This might show a new potential way for exploring solutions with better generalization for such highly non-convex robust learning problems.
\end{itemize}
The paper is organized as follows. Section \ref{related_work} reviews the related works. Section \ref{ARL_algo} introduces the four robust loss forms used in this paper, and proves the theoretical robustness theory for Bi-Tempered and PolySoft. Our main algorithm for adaptive robust loss learning is also presented in this section. Experiments are demonstrated in Section \ref{experiment}, and a conclusion is finally made.
\vspace{-1mm}
\section{Related Work} \label{related_work}\vspace{-1mm}
\textbf{Deep learning with noisy labels.} There are various approaches raised for handling robust learning issues under noisy labels, which can be roughly divided into four categories: label correction, loss correction, sample reweighting and robust loss setting.
The label correction approach aims to correct noisy labels to their true ones via a supplemental clean label inference step, characterized by directed graphical models \cite{xiao2015learning}, conditional random fields~\cite{vahdat2017toward} or knowledge graphs \cite{li2017learning}. Comparatively, the loss correction approach assumes that there exists a noise transition matrix defining the probability of one class changed to another. Typically, \cite{sukhbaatar2014training,goldberger2016training} modeled the matrix using a linear layer on the top of the DNNs, Forward \cite{patrini2017making} used the noise transition matrix to modify the loss function, and GLC \cite{hendrycks2018using} used additional meta data to estimate the transition matrix.
\textbf{Robust loss approach.}
Based on the noise-tolerant definition given by \cite{natarajan2013learning}, it has been proven that 0-1 loss, sigmoid loss, ramp loss, and probit loss are all noise-tolerant under some conditions \cite{ghosh2015making}. \cite{ghosh2015making} further relaxed this definition as a bounded loss condition to make the theory better feasible in practice. Recently, curriculum loss, a tighter upper bound of the 0-1 loss, was proposed in \cite{lyu2019curriculum}, which can adaptively select samples for training as a curriculum learning process. Generalized to the multi-class problem, MAE (Mean Absolute Error) has been proved to be robust to symmetric label noise and class-conditional noise.
Under the advanced noise-robust understanding provided by \cite{ghosh2017robust}, some new robust losses have been designed very recently on the basis of the classical CE loss, expected to perform well in real practice. Zhang et al. \cite{zhang2018generalized} demonstrated that it is hard to train DNNs with MAE, and proposed to combine the MAE and CE losses to obtain a new loss function, GCE, which behaves like a weighted MAE, to handle the noisy label issue. \cite{wang2019symmetric} observed that the learning procedure of DNNs with the CE loss is class-biased, and proposed a Reverse Cross Entropy (RCE) to help robust learning. Besides, \cite{amid2019robust} presented a robust loss called Bi-Tempered by introducing two tunable temperatures to the traditional softmax layer and CE loss, which makes the loss bounded and heavy-tailed. Xu et al. \cite{xu2019l_dmi} provided a novel information-theoretic robust loss function, different from the distance-based losses aforementioned.
\textbf{Sample reweighting approach.} The main idea of this approach is to assign weights to the losses of all training samples, and iteratively update these weights based on the loss values during the training process \cite{kumar2010self,jiang2014easy,zhao2015self}. Such a loss-weight function is generally set as monotonically decreasing, enforcing a learning effect that samples with larger losses, which are more likely to be noisily labeled than small-loss samples, receive smaller weights so as to suppress their effect on training. In this manner, the negative influence of noisy labels can possibly be alleviated. An interesting result is that such a monotonically decreasing weighting function makes the re-weighting learning process equivalent to solving an implicit robust loss function \cite{meng2017theoretical}, which establishes a close relationship between this strategy and the robust loss approach. Very recently, some advanced sample reweighting methods have been raised inspired by the idea of meta-learning \cite{shu2019meta}, which possess a much more complicated weighting scheme than the conventional reweighting strategies. This makes them able to deal with more general data bias cases other than noisy labels, like class imbalance. The fine theoretical basis, like noise tolerance, however, is lost and almost impossible to recover due to their complex implementation forms.
\textbf{Learning adaptive loss.} Some other methods have attempted to directly learn a good proxy for an underlying evaluation loss. For example, learning to teach \cite{wu2018learning} dynamically learned the loss through a teacher network outputting the coefficient matrix of a general loss function. Xu et al. \cite{xu2018autoloss} learned a discrete optimization schedule that alternates between different loss functions at different time-points. Adaptive Loss Alignment \cite{huang2019addressing} extended the work in \cite{wu2018learning} to the loss-metric mismatch problem. \cite{grabocka2019learning} tried to learn surrogate losses for non-differentiable and non-decomposable loss functions. These methods, however, generally attain losses with complicated forms, making them hard to analyze theoretically with robust loss theory.
\textbf{Hyperparameter optimization.} Hyperparameter optimization was historically investigated by selecting proper values for each hyperparameter to obtain better performance on a validation set. Typical methods include grid search, random search \cite{bergstra2012random}, Bayesian optimization \cite{snoek2012practical,swersky2013multi}, etc. Recently, meta-learning based strategies have been increasingly investigated \cite{franceschi2018bilevel,franceschi2017forward,maclaurin2015gradient,pedregosa2016hyperparameter}. This paper can be seen as a specific exploration of this methodology for adaptive robust loss learning with noisy labels.
\vspace{-1mm}
\section{Adaptive Robust Loss Learning} \label{ARL_algo}\vspace{-1mm}
\subsection{Preliminaries}\vspace{-1mm}
We consider the problem of $c$-class classification. Let $\mathcal{X} \subset \mathbb{R}^d$ be the feature space, and $\mathcal{Y}=\{1,2,\cdots,c\}$ be the label space. Assume the DNN architecture is with a softmax output layer.
Denote the network as a function with input $\mathbf{x}\in \mathcal{X}$ and output $f(\mathbf{x};\mathbf{w})$, where $f: \mathcal{X}\rightarrow \mathbb{R}^c$ and
$\mathbf{w}$ represents the network parameters. $f_j(\mathbf{x};\mathbf{w})$, representing the $j$-th component ($j=1,\cdots,c$) of $f(\mathbf{x};\mathbf{w})$, then satisfies $\sum_{j=1}^{c} f_j(\mathbf{x};\mathbf{w}) =1, f_j(\mathbf{x};\mathbf{w})\geq 0$. Given training data $D = \{(\mathbf{x}_i, \mathbf{y}_{i})\}_{i=1}^{N} \in(\mathcal{X} \times \mathcal{Y})^N$, for any loss function $\mathcal{L}$, the (empirical) risk of the network classifier is defined as:
\begin{align*}
\mathcal{L}(D,\mathbf{w}) = \frac{1}{N}\sum_{i=1}^N \mathcal{L}(f(\mathbf{x}_i;\mathbf{w}),\mathbf{y}_i).
\end{align*}
The commonly used CE loss can be written as:
\begin{align}
\mathcal{L}_{CE}(D,\mathbf{w})= -\frac{1}{N}\sum_{i=1}^N \sum_{j=1}^c y_{ij} \log f_j(\mathbf{x}_i;\mathbf{w}), \label{LCE}
\end{align}
where $y_{ij}$ denotes the $j$-th component of $\mathbf{y}_i$. Generally in all components of $\mathbf{y}_i$, only one is $1$ and all others are $0$.
\subsection{Typical Robust Loss Forms}
We first introduce the forms of some typical robust losses.
\textbf{Generalized Cross Entropy (GCE).} To exploit the benefits of both the noise-tolerant property of MAE and the implicit weighting scheme of CE for better learning, Zhang et al., \cite{zhang2018generalized} proposed the GCE loss as follows:
\begin{align}\label{eqgce}
\begin{split}
\mathcal{L}_{GCE}(D,\mathbf{w};{\color{red} q}) &= \frac{1}{N} \sum_{i=1}^{N}\frac{(1- f_{j_i}(\mathbf{x}_i)^{\color{red} q})}{{\color{red} q}},
\end{split}
\end{align}
where $j_i$ denotes the index $j$ such that $y_{ij}=1$ for each $i$, and $q\in (0,1]$. The GCE loss degenerates to the CE loss when $q$ approaches 0 and becomes the MAE loss when $q = 1$.
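For reference, a minimal PyTorch-style sketch of Eq. (\ref{eqgce}) is given below, assuming integer class targets; it is meant as an illustration rather than a tuned implementation.
\begin{verbatim}
import torch

def gce_loss(logits, targets, q):
    # f_{j_i}(x_i): softmax probability of the labeled class
    probs = torch.softmax(logits, dim=1)
    f_y = probs.gather(1, targets.view(-1, 1)).squeeze(1)
    # (1 - f^q)/q: tends to CE as q -> 0, proportional to MAE at q = 1
    return ((1.0 - f_y.pow(q)) / q).mean()
\end{verbatim}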
\textbf{Symmetric Cross Entropy (SL).} Wang et al. \cite{wang2019symmetric} proposed an extra term for CE to make it noise tolerant and designed the Reverse Cross Entropy (RCE):
\begin{align*}
\mathcal{L}_{RCE} = -\frac{1}{N}\sum_{i=1}^N \sum_{j\neq j_i} Af_j(\mathbf{x}_i;\mathbf{w}),
\end{align*}
where $A<0$ is a preset constant. The SL loss is defined as:
\begin{align}\label{eqsl}
\mathcal{L}_{SL}(D,\mathbf{w};{\color{red}\gamma_1,\gamma_2}) = {\color{red}\gamma_1}\mathcal{L}_{CE} +{\color{red}\gamma_2}\mathcal{L}_{RCE}.
\end{align}
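A compact PyTorch-style sketch of Eq. (\ref{eqsl}) follows; the value of $A$ below is a placeholder rather than a recommendation.
\begin{verbatim}
import torch
import torch.nn.functional as F

def sl_loss(logits, targets, gamma1, gamma2, A=-4.0):
    # CE term plus the reverse term: -A * (1 - f_{j_i}) per sample,
    # since sum_{j != j_i} f_j = 1 - f_{j_i}
    ce = F.cross_entropy(logits, targets)
    probs = torch.softmax(logits, dim=1)
    f_y = probs.gather(1, targets.view(-1, 1)).squeeze(1)
    rce = (-A * (1.0 - f_y)).mean()  # A < 0, so rce >= 0
    return gamma1 * ce + gamma2 * rce
\end{verbatim}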
\textbf{Bi-Tempered logistic Loss (Bi-Tempered).} Amid et al. \cite{amid2019robust} replaced the logarithm and exponential of the logistic loss with the corresponding ``tempered'' versions $\log_t$ and $\exp_t$, making the loss function bounded to handle large-margin outliers, and the softmax function heavy-tailed to handle small-margin mislabeled examples. Specifically, they define $
\log_t(x)=\frac{1}{1-t}(x^{1-t}-1)
$
and
$
\exp_t(x)=[1+(1-t)x]_+^{1/(1-t)},
$
where $[\cdot]_+=\max\{\cdot,0\}$. The Bi-Tempered Loss function is then defined as~\cite{amid2019robust}:
\begin{align}\label{eqbi}
\begin{split}
\mathcal{L}_{Bi}(D,\mathbf{w};{\color{red}t_1,t_2})& = -\frac{1}{N}\sum_{i=1}^N [\log_{{\color{red}t_1}}\hat{f}_{j_i,{\color{red}t_2}}(\mathbf{x}_i) \\
&+\frac{1}{2-{\color{red}t_1}}(1-\sum_{j=1}^c\hat{f}_{j,{\color{red}t_2}}(\mathbf{x}_i)^{2-{\color{red}t_1}})],
\end{split}
\end{align}
where $0\leq t_1<1, t_2>1$, $\hat{f}_{j,t} =\exp_{t} (z_j-\gamma_t(\mathbf{z}))$, $z_j$ is the input of softmax layer,
and $\gamma_t(\mathbf{z})$ is calculated by letting $\sum_{j=1}^c \exp_{t} (z_j-\gamma_{t}(\mathbf{z}))=1$.
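The tempered functions themselves are straightforward to implement, as the following sketch shows; note that $\gamma_t(\mathbf{z})$ has no closed form and must be computed numerically so that the tempered probabilities sum to one, an iteration we omit here.
\begin{verbatim}
import torch

def log_t(x, t):
    # tempered logarithm; recovers log(x) as t -> 1
    return (x.pow(1.0 - t) - 1.0) / (1.0 - t)

def exp_t(x, t):
    # tempered exponential [1 + (1-t)x]_+^{1/(1-t)}; exp(x) as t -> 1
    return torch.clamp(1.0 + (1.0 - t) * x,
                       min=0.0).pow(1.0 / (1.0 - t))
\end{verbatim}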
\textbf{Polynomial Soft Weighting loss (PolySoft).} Self-paced learning (SPL) is a typical sample reweighting strategy to handle noisy labels by setting a monotonically decreasing weighting function \cite{kumar2010self,jiang2014easy,zhao2015self}. It has been proved that such a re-weighting learning process is equivalent to minimizing a latent robust loss \cite{meng2017theoretical}, and it thus can also be seen as a standard robust loss method. Recently, Gong et al. \cite{gong2018decomposition} proposed a polynomial soft weighting scheme for SPL, which can generally approximate monotonically decreasing weighting functions. By setting the CE loss as the basis loss form, the latent robust loss of this method is:
\begin{align}\label{eqlatent}
\begin{split}
&\mathcal{L}_{Poly}(D,\mathbf{w};{\color{red} \lambda,d}) = \\
&\begin{cases}
\frac{({\color{red}d}-1){\color{red}\lambda}}{{\color{red}d}}\left[1- (1-\frac{\mathcal{L}_{CE}(D,\mathbf{w})}{{\color{red}\lambda}})^{\frac{{\color{red}d}}{{\color{red}d}-1}}\right],& \mathcal{L}_{CE}<{\color{red}\lambda},\\
\frac{({\color{red}d}-1){\color{red}\lambda}}{{\color{red}d}},& \mathcal{L}_{CE} \geq {\color{red}\lambda},
\end{cases}
\end{split}
\end{align}
where $\mathcal{L}_{CE}$ is defined as in Eq. (\ref{LCE}).
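A direct sketch of Eq. (\ref{eqlatent}) is given below, applied here to a tensor of per-sample CE values, which is our reading of the SPL reweighting scheme; the clamp handles both branches of the piecewise definition at once.
\begin{verbatim}
import torch

def polysoft_loss(ce, lam, d):
    # (1 - CE/lam)_+^{d/(d-1)}: zero whenever CE >= lam, so the loss
    # saturates at (d-1)*lam/d for large CE values
    inner = torch.clamp(1.0 - ce / lam, min=0.0).pow(d / (d - 1.0))
    return ((d - 1.0) * lam / d) * (1.0 - inner).mean()
\end{verbatim}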
\subsection{Adaptive Robust Loss Learning Algorithm}
It can be observed that all aforementioned robust loss functions contain hyperparameter(s), e.g., $q$ in $\mathcal{L}_{GCE}$ (Eq.(\ref{eqgce})), $\gamma_1,\gamma_2$ in $\mathcal{L}_{SL}$ (Eq.(\ref{eqsl})), $t_1,t_2$ in $\mathcal{L}_{Bi}$ (Eq.(\ref{eqbi})) and $\lambda,d$ in $\mathcal{L}_{Poly}$ (Eq.(\ref{eqlatent})). Instead of manually presetting or tuning them by cross-validation, we provide the following algorithm to adaptively learn these hyperparameter(s), by borrowing the idea of recent meta-learning techniques \cite{schmidhuber1992learning,thrun2012learning,finn2017model,franceschi2018bilevel,shu2018small,shu2019meta}.
\textbf{The Meta-learning Objective.} Given training dataset $D$, the net parameters are trained by optimizing the following minimization problem under certain robust loss $\mathcal{L}_{Train}$:
\begin{align}\label{eqroust}
\mathbf{w}^*(\Lambda) = \arg\min_{\mathbf{w}} \mathcal{L}_{Train}(D,\mathbf{w};\Lambda)
\end{align}
where $\Lambda$ denotes the hyperparameter set of $\mathcal{L}_{Train}$.
\begin{algorithm}[t]
\vspace{0mm}
\renewcommand{\algorithmicrequire}{\textbf{Input:}}
\renewcommand{\algorithmicensure}{\textbf{Output:}}
\caption{The Adaptive Robust Loss (ARL) Algorithm}
\label{alg1}
\begin{algorithmic}[1] \small
\REQUIRE Training data $D$, meta data $D_{meta}$, batch size $n,m$, max iterations $T$.
\ENSURE Classifier network parameter $\mathbf{w}$, robust loss hyperparameter $\Lambda$.
\STATE Initialize classifier network parameter $\mathbf{w}^{(0)}$ and robust loss $\mathcal{L}_R$ hyperparameter $\Lambda^{(0)}$.
\FOR{$t=0$ {\bfseries to} $T-1$}
\STATE $\{x,y\} \leftarrow$ SampleMiniBatch($D,n$).
\STATE $\{x^{(m)},y^{(m)}\} \leftarrow$ SampleMiniBatch($D_{meta},m$).
\STATE Update $\Lambda^{(t)}$ by Eq. (\ref{eqlambda}).
\STATE Update $\mathbf{w}^{(t)}$ by Eq. (\ref{eqpara}).
\ENDFOR
\end{algorithmic}
\end{algorithm}
Our method aims to automatically learn the hyperparameters $\Lambda$ in a meta-learning manner \cite{finn2017model,ren2018learning,shu2019meta}. Specifically, assume that we have a small amount of meta data (i.e., data with clean labels) $D_{meta} = \{x_i^{(m)},y_i^{(m)}\}_{i=1}^M$, representing the meta-knowledge of the ground-truth sample-label distribution, where $M$ is the number of meta-samples, and $M\ll N$. We can then formulate a meta-loss minimization problem with respect to $\Lambda$ as:
\begin{align} \label{eqmeta}
\Lambda^* = \arg\min_{\Lambda} \mathcal{L}_{Meta}(D_{meta},\mathbf{w}^*(\Lambda)),
\end{align}
where $\mathcal{L}_{Meta}$ represents the loss imposed on the meta data. Since the meta data are all clean, $\mathcal{L}_{Meta}$ is taken as a conventional loss form without hyperparameters, like the CE loss.
\textbf{Learning Algorithm.} Calculating the optimal $\mathbf{w}^*$ and $\Lambda^*$ requires two nested loops of optimization, making the exact solution expensive to obtain \cite{franceschi2018bilevel}. Here we adopt an online approximation strategy \cite{finn2017model, shu2019meta} to jointly update both sets of parameters in an iterative manner to guarantee the efficiency of the algorithm.
At iteration step $t$, we need to update the hyperparameter $\Lambda^{(t)}$ on the basis of the net parameter $\mathbf{w}^{(t-1)}$ and hyperparameter $\Lambda^{(t-1)}$ obtained in the last iteration by minimizing the meta loss defined in Eq.(\ref{eqmeta}). To guarantee efficiency and general feasibility, SGD is employed to optimize the parameters on a mini-batch $D_m$ of $m$ samples from $D_{meta}$, i.e.,
\begin{align} \label{eqlambda}
\Lambda^{(t)} = \Lambda^{(t-1)} - \beta\nabla_{\Lambda} \mathcal{L}_{Meta} (D_m, \tilde{\mathbf{w}}^{(t)}(\Lambda))\Big|_{\Lambda^{(t-1)}},
\end{align}
where the following equation is used to formulate $\tilde{\mathbf{w}}^{(t)}(\Lambda)$ on a mini-batch $D_n$ of training samples from $D$:
\begin{align}
\tilde{\mathbf{w}}^{(t)}(\Lambda) = \mathbf{w}^{(t-1)} - \alpha \nabla_{\mathbf{w}} \mathcal{L}_{Train}(D_n,\mathbf{w};\Lambda)\Big|_{\mathbf{w}^{(t-1)}},
\end{align}
which is inspired by MAML \cite{finn2017model}, and $\alpha,\beta$ are the step sizes.
Having obtained the parameter $\Lambda^{(t)}$, the network parameters $\mathbf{w}^{(t)}$ can then be updated by:
\begin{align}\label{eqpara}
\mathbf{w}^{(t)}= \mathbf{w}^{(t-1)} - \alpha \nabla_{\mathbf{w}}\mathcal{L}_{Train}(D_n,\mathbf{w};\Lambda^{(t)})\Big|_{\mathbf{w}^{(t-1)}}.
\end{align}
The Adaptive Robust Loss (ARL) algorithm is summarized in Algorithm \ref{alg1}. All gradient computations can be efficiently implemented by automatic differentiation techniques and easily generalized to any deep learning architecture, using popular deep learning frameworks like PyTorch \cite{paszke2017automatic}. The algorithm can thus be readily integrated with any robust loss to make its hyperparameters automatically learnable. Specifically, we denote the ARL algorithms on the robust losses defined in Eq.(\ref{eqgce}),(\ref{eqsl}),(\ref{eqbi}) and (\ref{eqlatent}) as A-GCE, A-SL, A-Bi-Tempered and A-PolySoft, respectively.
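To make the update steps concrete, the following PyTorch-style sketch implements one iteration of Algorithm \ref{alg1} for a hypothetical functional helper \texttt{model\_fn} that applies the network with an explicit parameter list; the robust loss is assumed differentiable in $\Lambda$, and plain SGD is used for clarity.
\begin{verbatim}
import torch

def arl_iteration(w, Lam, batch, meta_batch, robust_loss,
                  ce_loss, model_fn, alpha, beta):
    x, y = batch
    xm, ym = meta_batch
    # virtual one-step update w_tilde(Lam); keep the graph so the
    # meta loss can be differentiated w.r.t. Lam
    L_tr = robust_loss(model_fn(w, x), y, Lam)
    g = torch.autograd.grad(L_tr, w, create_graph=True)
    w_tilde = [wi - alpha * gi for wi, gi in zip(w, g)]
    # update Lam on the clean meta mini-batch
    L_meta = ce_loss(model_fn(w_tilde, xm), ym)
    g_Lam = torch.autograd.grad(L_meta, Lam)[0]
    Lam = (Lam - beta * g_Lam).detach().requires_grad_()
    # actual network update under the new Lam
    L_tr = robust_loss(model_fn(w, x), y, Lam)
    g = torch.autograd.grad(L_tr, w)
    w = [(wi - alpha * gi).detach().requires_grad_()
         for wi, gi in zip(w, g)]
    return w, Lam
\end{verbatim}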
\begin{table*}[t]
\caption{Test accuracy (\%) of all competing methods on CIFAR-10 and CIFAR-100 under different noise rates. The best results are in bold.}\label{table1} \vspace{1mm}
\centering
\begin{footnotesize}
\begin{tabular}{c|c|c|c|c|c|c|c|c}
\toprule
\multirow{3}{*}{Models} & \multirow{3}{*}{Datasets} & \multirow{3}{*}{Methods} &\multicolumn{4}{c|}{Symmetric Noise} & \multicolumn{2}{c}{Asymmetric Noise} \\
\cline{4-9}
& & & \multicolumn{4}{c|}{Noise Rate $\eta$} & \multicolumn{2}{c}{Noise Rate $\eta$} \\
\cline{4-9}
& & & 0 & 0.2 & 0.4 & 0.6 & 0.2 & 0.4 \\
\hline
\hline
\multirow{24}{*}{ResNet-32} &\multirow{12}{*}{CIFAR-10} & CE & 92.89$\pm$0.32 & 76.83$\pm$2.30 & 70.77$\pm$2.31 & 63.21$\pm$4.22 & 76.83$\pm$2.30 & 70.77$\pm$2.31 \\
& &Forward & \textbf{93.03$\pm$0.11} & 86.49$\pm$0.15 & 80.51$\pm$0.28 & 75.55$\pm$2.25 & 87.38$\pm$0.48 & 78.98$\pm$0.35 \\
& &DMI & 90.91$\pm$0.20 & 87.59$\pm$0.21 & 85.13$\pm$0.10 & 80.23$\pm$0.39 & 89.08$\pm$0.49 & 79.33$\pm$0.65 \\
& & Meta-Weight-Net & 92.04$\pm$0.15 &89.19$\pm$0.57 & 86.10$\pm$0.18 & 81.31$\pm$0.37 & 90.33$\pm$0.61 & 87.54$\pm$0.23 \\
& &PolySoft & 91.40$\pm$0.39 & 87.53$\pm$0.48 & 81.49$\pm$0.34 & 75.87$\pm$0.25 & 85.99$\pm$1.77 &82.71$\pm$0.99 \\
& & {\color{blue}A-PolySoft} & 92.12$\pm$0.12 & \textbf{89.73$\pm$0.20} & \textbf{87.22$\pm$0.36} & \textbf{82.49$\pm$0.30} & \textbf{90.41$\pm$0.16} & \textbf{87.75$\pm$0.23 } \\
& & GCE& 90.03$\pm$0.30 &88.51$\pm$0.37 &85.48$\pm$0.16 & 81.29$\pm$0.23 & 88.55$\pm$0.22 & 83.31$\pm$0.14 \\
& & {\color{blue}A-GCE}& 91.47$\pm$0.19 &89.07$\pm$0.27 & 86.36$\pm$0.14 & 81.64$\pm$0.11 & 89.51$\pm$0.07 &86.35$\pm$0.17 \\
& & SL & 89.37$\pm$0.13 & 88.76$\pm$ 0.56 & 85.84$\pm$0.74 & 81.38$\pm$1.39 & 87.63$\pm$0.34 & 83.48$\pm$0.48 \\
& & {\color{blue}A-SL} & 91.50$\pm$0.16 & 89.53$\pm$0.22 & 86.36$\pm$0.41 & 82.19$\pm$0.30 & 89.54$\pm$0.28 & 86.45$\pm$0.20 \\
& & Bi-Tempered& 90.11$\pm$0.23 & 88.51$\pm$0.31 & 84.93$\pm$0.67 & 77.82$\pm$0.79 & 88.23$\pm$ 0.23 & 82.43$\pm$0.23 \\
& & {\color{blue}A-Bi-Tempered} & 92.24$\pm$0.20 & 89.37$\pm$0.09 & 86.32$\pm$0.28 & 81.70$\pm$0.21 & 89.88$\pm$0.30 & 86.86$\pm$0.28 \\ \cline{2-9}\cline{2-9}
&\multirow{12}{*}{CIFAR-100} & CE& \textbf{70.50$\pm$0.12 } & 50.86$\pm$0.27&43.01$\pm$1.16 & 34.43$\pm$0.94 & 50.86$\pm$0.27 & 43.01$\pm$1.16 \\
& & Forward& 67.81$\pm$0.61 &63.75$\pm$0.38 & 57.53$\pm$0.15 & 46.44$\pm$1.03 & 64.28$\pm$0.23 & 57.90$\pm$0.57 \\
& & DMI& 68.40$\pm$0.23 & 62.66$\pm$0.05& 56.95$\pm$0.11 & 46.30$\pm$0.10 & 64.05$\pm$0.18 & 58.08$\pm$0.22 \\
& &Meta-Weight-Net & 69.13$\pm$0.33 & 64.22$\pm$0.28 & 58.64$\pm$0.47& 47.43$\pm$0.76 & 64.22$\pm$0.28 &58.64$\pm$0.47 \\
& & PolySoft & 68.26$\pm$0.25 & 62.41$\pm$0.38 & 56.16$\pm$0.30 & 45.23$\pm$0.47 & 63.05$\pm$0.61 & 56.09$\pm$ 0.26 \\
& &{\color{blue}A-PolySoft} & 68.92$\pm$0.41 & \textbf{65.37$\pm$1.43} & \textbf{ 61.38$\pm $0.47} & \textbf{ 52.23$\pm$0.63 } & \textbf{64.42$\pm$0.26} & \textbf{58.73$\pm$0.17 } \\
& & GCE & 67.39$\pm$0.12 & 63.97$\pm$0.43 &58.33$\pm$0.35 & 41.73$\pm$0.36 & 62.07$\pm$0.41 & 55.25$\pm$0.09 \\
& &{\color{blue}A-GCE}& 67.57$\pm$0.32 & 64.58$\pm$0.30 & 58.50$\pm$0.15 & 42.16$\pm$0.63 & 62.46$\pm$ 0.52 &56.75$\pm$0.44 \\
& & SL & 66.43$\pm$0.43 & 52.46$\pm$0.18 & 51.28$\pm$0.73 & 38.39$\pm$1.53 & 52.04$\pm$0.89& 44.01$\pm$1.91 \\
& & {\color{blue}A-SL} & 68.07$\pm$0.51 & 63.73$\pm$0.27 & 57.99$\pm$0.37 & 45.75$\pm$0.66 & 63.25$\pm$0.33 & 56.83$\pm$0.19 \\
& & Bi-Tempered& 67.68$\pm$0.25 & 63.45$\pm$0.48& 57.25$\pm$0.16 & 44.72$\pm$0.39 & 63.12$\pm$0.28 & 55.37$\pm$0.56 \\
& &{\color{blue}A-Bi-Tempered}& 69.32$\pm$0.19 & 64.48$\pm$0.53& 59.26$\pm$0.12 & 48.62$\pm$0.32 & 63.78$\pm$0.27 & 56.56$\pm$0.08 \\
\bottomrule
\end{tabular} \vspace{-2mm}
\end{footnotesize}
\end{table*}
\subsection{Noise-robust Properties of Utilized Losses}
It can be seen from Fig.\ref{fig1} that the adopted GCE, SL, Bi-Tempered and PolySoft losses are all robust ameliorations of the CE loss against noisy labels. All of them tend to be flat when the loss becomes larger, so as to suppress the negative influence of the large losses brought by noisy samples with incorrect labels. In theory, they all satisfy loss bounded conditions, and are noise-tolerant under certain conditions.
This property has been proved for GCE \cite{zhang2018generalized} and SL \cite{wang2019symmetric}, and we then provide the related results for the other two.
Denote the true label of $x$ as $\hat{y}$, in contrast to its noisy label $y$, and $D_c$ and $D_n$ as the underlying distributions of clean and noisy data, respectively. Let $R_{\mathcal{L}}(f)=\mathbb{E}_{D_c} [\mathcal{L}(f(x),\hat{y})]$ be the risk of classifier $f$ under clean labels, and $R_{\mathcal{L}}^{\eta}(f)=\mathbb{E}_{D_n} [\mathcal{L}(f(x),y)]$ as the risk of classifier $f$ under label noise rate $\eta$.
A loss function $\mathcal{L}$ is defined to be noise tolerant \cite{manwani2013noise,ghosh2017robust} if $\hat{f}$ on noisy data has the same misclassification probability
as that of $f^*$ on clean data, where $\hat{f}$ and $f^*$ are the global minimizers of $R_{\mathcal{L}}^{\eta}(f)$ and $R_{\mathcal{L}}(f)$, respectively.
To make this definition more feasible in practice, Ghosh et al. \cite{ghosh2017robust} further relaxed it as the bounded loss condition and proved that under it the loss still possesses certain robustness capacity in theory, as shown for GCE \cite{zhang2018generalized} and SL \cite{wang2019symmetric}. We can further prove that PolySoft and Bi-Tempered also possess these properties, as provided in the following theorems. The detailed proofs are listed in the supplementary material.
\begin{theorem}\label{TH1}
Under the symmetric noise with $\eta \leq 1-\frac{1}{c}$, and $\lambda \geq \log c, d>1$, the PolySoft loss in Eq. (\ref{eqlatent}) satisfies
\begin{align*}
0\leq R_{\mathcal{L}}^{\eta}(f^*)-R_{\mathcal{L}}^{\eta}(\hat{f}) \leq A,
A' \leq R_{\mathcal{L}}(f^*)-R_{\mathcal{L}}(\hat{f})\leq 0,
\end{align*}
where $A = \frac{c(d-1)\eta}{d(c-1)}(\lambda-\log c)\geq 0$, $A' = \frac{c(d-1)\eta}{d(c-1-\eta c)}[\log c -\lambda]<0$. Especially, when $\lambda = \log c$, we have $R_{\mathcal{L}}^{\eta}(f^*)=R_{\mathcal{L}}^{\eta}(\hat{f})$.
\end{theorem}
The theorem clarifies that the PolySoft loss satisfies the loss bounded condition, and is noise tolerant when $\lambda = \log c$.
\begin{theorem}\label{TH2}
Under the symmetric noise with $\eta \leq 1-\frac{1}{c}$, and $ 0\leq t_1<1, t_2>1$, the Bi-Tempered loss in Eq.(\ref{eqbi}) satisfies
\begin{align*}
0\leq R_{\mathcal{L}}^{\eta}(f^*)-R_{\mathcal{L}}^{\eta}(\hat{f}) \leq A,
A' \leq R_{\mathcal{L}}(f^*)-R_{\mathcal{L}}(\hat{f})\leq 0,
\end{align*}
where $A = \frac{\eta}{1-t_1}-\frac{\eta(c-c^{t_1})}{(c-1)(1-t_1)(2-t_1)}>0$, $A' =\frac{\eta(c-c^{t_1})}{(c-1-\eta c)(1-t_1)(2-t_1)} -\frac{\eta(c-1)}{(1-t_1)(c-1-\eta c)}<0$.
\end{theorem}
This theorem illustrates that the Bi-Tempered loss satisfies the loss bounded condition. Albeit not noise tolerant, it still possesses certain theoretical robustness \cite{zhang2018generalized}.
\vspace{-1mm}
\section{Experimental Results} \label{experiment}\vspace{-1mm}
To evaluate the capability of the ARL algorithm, we implement experiments on CIFAR-10, CIFAR-100 and Tiny-ImageNet, as well as a
large-scale real-world noisy dataset, Clothing1M.
\vspace{-1mm}
\subsection{Experimental Setup}\vspace{-1mm}
\textbf{Datasets.} We first verify the effectiveness of our method on two benchmark datasets: CIFAR-10 and CIFAR-100 \cite{krizhevsky2009learning}, consisting of $32\times32$ color images arranged in 10 and 100 classes, respectively. Both datasets contain 50,000 training and 10,000 test images. We randomly select 1,000 clean images in the validation set as meta data. Then, we verify our method on a larger and harder dataset called Tiny-ImageNet (T-ImageNet briefly), containing 200 classes with 100K training, 10K validation and 10K test images of size $64\times64$. We randomly sample 10 clean images per class as meta data. These datasets are popularly used for the evaluation of learning with noisy labels in the previous literature \cite{reed2014training,patrini2017making,goldberger2016training}.
\textbf{Noise setting.} We test two types of label noise: symmetric noise and asymmetric (class-dependent) noise. \textbf{Symmetric} noisy labels are generated by flipping the labels of a given proportion of training samples to one of the other class labels uniformly \cite{zhang2016understanding}. For \textbf{asymmetric} noisy labels, we use the setting in \cite{shu2019meta}, where the label of each sample is independently flipped to two classes with the same probability. Also, we consider a more realistic \textbf{hierarchical} corruption in CIFAR-100 as described in \cite{hendrycks2018using}, which applies uniform corruption only to semantically similar classes.
\textbf{Baselines.} We compare the ARL algorithm with the following state-of-the-art methods, and implement all methods with the default settings in the original papers by PyTorch. 1) \textbf{CE}, which uses the CE loss to train the DNNs on noisy datasets. 2) \textbf{Forward} \cite{patrini2017making}, which corrects the prediction by the label transition matrix. 3) \textbf{DMI }\cite{xu2019l_dmi}, which uses a mutual information based robust loss to train the DNNs.
4) \textbf{Meta-Weight-Net }\cite{shu2019meta}, which uses an MLP net to learn the weighting function in a data-driven fashion, representing the SOTA sample reweighting methods. 5) \textbf{PolySoft} \cite{gong2018decomposition}, 6) \textbf{GCE} \cite{zhang2018generalized}, 7) \textbf{SL} \cite{wang2019symmetric}, 8) \textbf{Bi-Tempered} \cite{amid2019robust} represent the SOTA robust loss methods.
The meta data in these methods are used as a validation set for cross-validation to search for the best hyperparameters, except for Meta-Weight-Net.
\textbf{Network structure.} We use ResNet-32 \cite{he2016deep} as our classifier network models for CIFAR-10 and CIFAR-100 dataset, and a 18-layer Preact ResNet \cite{he2016deep} for T-ImageNet.
\textbf{Experimental setup.} We train the models with SGD, with an initial learning rate of 0.1, momentum 0.9, weight decay $1\times10^{-3}$, and mini-batch size 100.
For our proposed methods, with ResNet-32 models the learning rate decays by 0.1 at epochs 40 and 50 out of 60 total epochs; with Preact ResNet-18 models, it decays by 0.1 at epochs 30 and 60 out of 90 total epochs. We use SGD to optimize the hyperparameters, with the same learning rate setting as the classifier for the different experiments.
\textbf{Hyperparameter setting.} For PolySoft, GCE, SL and Bi-Tempered, we used the optimal hyperparameters from the original papers or carefully searched for them by cross-validation. For our method, those hyperparameters are automatically learned.
\vspace{-1mm}
\subsection{Robustness Performance Evaluation}\vspace{-1mm}
\textbf{Results on CIFAR-10 and CIFAR-100.} The classification accuracies on CIFAR-10 and CIFAR-100 under symmetric and asymmetric noise are reported in Table \ref{table1} over 5 random runs. As can be seen, our {\color{blue}ARL algorithm} (in blue) improves on the original algorithms by a large margin for almost all noise rates and all datasets. Table \ref{table2} shows classification accuracies under the more realistic hierarchical label corruption on the CIFAR-100 dataset. Our ARL algorithm again improves the accuracy of the original algorithms, and A-PolySoft outperforms all other baseline methods.
\begin{table*}[t]
\caption{Test accuracy (\%) of ResNet-32 on CIFAR-100 with hierarchical noisy labels. The best results are in bold.}\label{table2} \vspace{1mm}
\centering
\begin{footnotesize}
\begin{tabular}{c|c|c|c|c|c|c|c}
\toprule
\multicolumn{2}{c|}{Methods} & CE & Forward & DMI & Meta-Weight-Net & PolySoft & {\color{blue}A-PolySoft} \\ \hline
\multirow{2}{*}{Noise Rate $\eta$} & 0.2 & 51.31$\pm$0.27 & 64.35$\pm$0.33 & 64.51$\pm$0.08 & 64.38$\pm$0.38 & 63.51$\pm$0.38 & \textbf{65.42$\pm$0.15} \\
& 0.4 &45.23$\pm$1.16 & 59.74$\pm$0.19 &60.09$\pm$0.10 & 59.41$\pm$0.79 & 58.63$\pm$0.12 & \textbf{60.46$\pm$0.18} \\ \hline \hline
\multicolumn{2}{c|}{Methods}& GCE & {\color{blue}A-GCE} & SL & {\color{blue}A-SL} & Bi-Tempered & {\color{blue}A-Bi-Tempered} \\ \hline
\multirow{2}{*}{Noise Rate $\eta$} & 0.2 & 62.72$\pm$0.36 & 63.31$\pm$0.40 & 56.38$\pm$0.50 & 63.30$\pm$0.19 & 63.45$\pm$0.20 &64.99$\pm$0.25 \\
& 0.4 &58.03$\pm$0.81 & 58.46$\pm$0.33 &48.34$\pm$0.33 &57.94$\pm$0.71 & 57.90$\pm$0.18 &59.90$\pm$0.53\\
\bottomrule
\end{tabular}\vspace{-3mm}
\end{footnotesize}
\end{table*}
\begin{table}
\caption{Test accuracy (\%) on T-ImageNet under different noise fractions. The best results are in bold. }\label{tableT} \vspace{1mm}
\centering
\begin{footnotesize}
\begin{tabular}{c|c|c|c|c|c|c}
\toprule
\multirow{3}{*}{Methods} &\multicolumn{4}{c|}{Symmetric Noise} & \multicolumn{2}{c}{Asymmetric Noise} \\
\cline{2-7}
& \multicolumn{4}{c|}{Noise Rate $\eta$} & \multicolumn{2}{c}{Noise Rate $\eta$} \\
\cline{2-7}
& 0 & 0.2 & 0.4 & 0.6 & 0.2 & 0.4 \\
\hline
\hline
CE & 55.01 & 43.94 & 35.14 & 20.45 & 42.12 & 33.58 \\
Forward & \textbf{ 55.29} & 46.57 & 38.01 & 24.43 & 44.98 & 36.99 \\
DMI & 54.50 & 46.10 & 40.35 & 25.23 & 44.82 & 36.68 \\
MW-Net & 53.58 &48.31 & 43.33 & 28.23 & 45.17 & 37.72 \\
PolySoft & 52.18 & 46.86 &40.76 & 21.48 & 43.99 & 36.11 \\
{\color{blue}A-PolySoft} & 54.18 & \textbf{49.24} & \textbf{43.67 } &\textbf{28.46 } & \textbf{48.65} & \textbf{ 40.50 } \\
GCE& 53.11 &47.72 & 38.96 & 23.93 & 45.62 & 35.32 \\
{\color{blue} A-GCE}& 53.46 & 48.22 & 41.40 & 24.11 & 46.18 & 36.47 \\
SL & 52.48 & 44.33 & 35.18 & 21.82 & 44.18 & 34.69 \\
{\color{blue} A-SL} & 53.34 & 48.99 & 38.29 & 22.39 &47.68 & 37.78 \\
Bi-Tem& 52.09 & 45.90 & 35.36 & 21.32 & 44.14 & 34.37 \\
{\color{blue}A-Bi-Tem} & 54.22 & 46.67 & 37.36 & 22.10 & 46.91 & 35.43 \\
\bottomrule
\end{tabular}\vspace{-1mm}
\end{footnotesize}
\end{table}
Combining Tables \ref{table1} and \ref{table2}, it can be observed that: 1) The performance of PolySoft and Bi-Tempered drops quickly as the noise rate exceeds 0.4, and A-PolySoft and A-Bi-Tempered improve the accuracy by around 5\% on CIFAR-100 thereafter. 2) SL drops more quickly on CIFAR-100 than on CIFAR-10, and A-SL improves the accuracy more evidently, by over 10\% on CIFAR-100 with a 20\% noise rate. 3) A-PolySoft outperforms the SOTA sample reweighting method Meta-Weight-Net, possibly attributable to its monotonically decreasing form of robust loss, making it noise robust, as illustrated in Section \ref{understand}.
\textbf{Results on T-ImageNet.} To verify our approach in a more complex scenario, we summarize in Table \ref{tableT} the test accuracy on T-ImageNet under different noise settings. As we can see, for both noise settings with different noise rates, A-PolySoft outperforms the other baselines. Meanwhile, our ARL algorithm stably improves the original algorithms.
\begin{table*}[t]
\caption{Test accuracy (\%) of different models on real-world noisy dataset Clothing1M. The best results are in bold.}\label{Table4} \vspace{1mm}
\centering
\begin{footnotesize}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c}
\toprule
Methods & CE & Forward & DMI & MW-Net & PolySoft & {\color{blue}A-PolySoft} & GCE & {\color{blue}A-GCE} &SL & {\color{blue}A-SL}& Bi-Tem &{\color{blue}A-Bi-Tem} \\ \hline
Accuracy & 68.94 & 70.83 & 72.46 & 73.72 & 69.96 & 73.76 & 69.75 & 70.55 & 71.02 & 71.83 & 69.89 & 70.14 \\
\bottomrule
\end{tabular} \vspace{-2mm}
\end{footnotesize}
\end{table*}
\begin{figure}[t]
\centering
\subfigcapskip=-1mm
\subfigure[Sample weight distribution of Meta-Weight-Net]{
\label{fig2a}
\includegraphics[width=0.48\textwidth]{./fig/MWcifar10-40.pdf}} \\ \vspace{-3mm}
\subfigure[Sample weight distribution of A-PolySoft]{
\label{fig2b}
\includegraphics[width=0.48\textwidth]{./fig/cifar-40.pdf}}
\caption{Sample weight distribution on CIFAR-10 dataset under 40\% symmetric noise experiments during training process. (a)-(b) present the sample weights produced by Meta-Weight-Net and A-PolySoft, respectively.}\label{fig2}
\vspace{-4mm}
\end{figure}
\vspace{-1mm}
\subsection{Towards Understanding of ARL Algorithm} \label{understand}\vspace{-1mm}
\textbf{How ARL adapts to different noise extents.}
To understand how the ARL algorithm automatically adapts to the noise extent, we plot the learned loss under different noise rates on the CIFAR-10 dataset in Fig.~\ref{fig1}. It is easy to see that when the loss value is small, the learned loss is almost the same as the CE loss; when the loss grows beyond a certain threshold, the learned loss tends to be flat, thereby suppressing the effect of samples with incorrect labels, which typically have large losses. Furthermore, the learned loss flattens earlier when the noise rate is higher, which implies that the ARL algorithm can adjust the loss function according to the amount of noise so as to better encode and alleviate the noisy-label effect. This noise-rate-adapting capability of our algorithm finely explains the superior robustness observed in our experiments.
\begin{figure}[t]
\centering
\subfigcapskip=-1mm
\subfigure[CE on clean data. ]{
\label{figrea}
\includegraphics[width=0.23\textwidth]{./fig/CE.png}}
\subfigure[CE on 60\% noise data.]{
\label{figreb}
\includegraphics[width=0.23\textwidth]{./fig/CE-60.png}} \\ \vspace{-3mm}
\subfigure[Bi-Tem on 60\% noise data. ]{
\label{figrec}
\includegraphics[width=0.23\textwidth]{./fig/Bi-60.png}}
\subfigure[A-Bi-Tem on 60\% noise data.]{
\label{figred}
\includegraphics[width=0.23\textwidth]{./fig/ABi-60.png}} \\
\caption{2D representations extracted by A-Bi-Tempered and baselines on CIFAR-10 dataset with 60\% symmetric noisy labels.}\label{figre}
\vspace{-4mm}
\end{figure}
\begin{figure*}[t]
\centering
\subfigcapskip=-1mm
\subfigure[CIFAR-10 40\% Symmetric Noise]{
\label{figSLa}
\includegraphics[width=0.43\textwidth]{./fig/SL-10-40.pdf}} \ \ \
\subfigure[CIFAR-10 60\% Symmetric Noise]{
\label{figSLb}
\includegraphics[width=0.43\textwidth]{./fig/SL-10-60.pdf}} \\ \vspace{-3mm}
\subfigure[CIFAR-100 40\% Symmetric Noise]{
\label{figSLc}
\includegraphics[width=0.43\textwidth]{./fig/SL-100-40.pdf}} \ \ \
\subfigure[CIFAR-100 60\% Symmetric Noise]{
\label{figSLd}
\includegraphics[width=0.43\textwidth]{./fig/SL-100-60.pdf}} \\ \vspace{-1mm}
\caption{Test accuracy vs. number of epochs of A-SL and other comparison methods, SL, SL-Opt1, SL-Opt2 with different noise amounts on CIFAR-10 and CIFAR-100 datasets under symmetric noise.}\label{figSL}
\vspace{-4mm}
\end{figure*}
\textbf{Reweighting mechanism visualization.} To better understand why our algorithm helps learn more robust models during training, we plot in Fig.~\ref{fig2} how the weight distributions of clean and noisy training samples vary during the learning process of A-PolySoft. For comparison, we also plot the weight distribution produced by Meta-Weight-Net. It can be seen that the weights extracted by A-PolySoft distinguish clean from noisy samples much more clearly than those obtained by Meta-Weight-Net. Specifically, as our algorithm iterates, the weights of the clean samples gradually become larger than those of the noisy ones (most of which approach 0), and thus the negative influence of the noisy labels is effectively reduced. This clearly explains why our algorithm is able to consistently outperform Meta-Weight-Net.
\textbf{Representation Demonstration.} We further investigate the representations learned by the ARL algorithm compared to other baselines. We extract the high-dimensional representations of the data at the second-to-last dense layer of the classifiers learned by the different methods, and then project them to a 2D embedding using t-SNE \cite{maaten2008visualizing}.
As shown in Fig.~\ref{figre}, the representations learned by the A-Bi-Tempered algorithm (other methods are presented in the supplemental file) are evidently better than those learned by CE and Bi-Tempered, with more separated and clearly bounded clusters.
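For completeness, a minimal sketch of this projection step is given below. It is our own illustration rather than released code: the \texttt{avgpool} hook location, the \texttt{model} and \texttt{loader} objects, and all variable names are assumptions.
\begin{verbatim}
# Sketch (ours): project penultimate-layer features to 2D with t-SNE.
# Assumes `model` is a trained classifier whose second-to-last layer is
# `model.avgpool` (architecture-dependent) and `loader` yields (x, y).
import numpy as np
import torch
from sklearn.manifold import TSNE

feats, labels = [], []

def hook(module, inputs, output):
    # Record the flattened activations of the penultimate layer.
    feats.append(output.flatten(start_dim=1).detach().cpu().numpy())

handle = model.avgpool.register_forward_hook(hook)
model.eval()
with torch.no_grad():
    for x, y in loader:
        model(x)
        labels.append(y.numpy())
handle.remove()

emb = TSNE(n_components=2, init="pca").fit_transform(np.concatenate(feats))
\end{verbatim}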
\vspace{-1mm}
\subsection{Ablation Study}\vspace{-1mm}
The ARL algorithm mutually ameliorates the robust loss hyperparameter and the network parameters in an iterative manner. An important question is whether or not this mutual amelioration process helps explore solutions with better generalization. To clarify this, Fig.~\ref{figSL} compares four strategies: 1) SL: the conventional SL method using the hyperparameter optimally tuned by cross-validation; 2) SL-Opt1: the conventional SL method using the hyperparameter learned by A-SL at its last step; 3) SL-Opt2: conventional SL run in each step of A-SL, using the hyperparameter obtained by A-SL at the current step as its initialization; 4) A-SL: our algorithm. It can easily be observed that: 1) SL-Opt1 performs worse than SL, which means the hyperparameter adaptively learned by A-SL is actually not the optimal one for SL, whose hyperparameter is fixed throughout its iterations.
2) SL-Opt2 outperforms SL, implying that A-SL adaptively finds a proper hyperparameter for its robust loss and simultaneously explores a good initialization of the network parameters for this loss under its current hyperparameter, in a dynamical, mutually updating way.
3) A-SL outperforms SL-Opt2, showing that adaptively learning both the robust loss hyperparameter and the network parameters is a more suitable way to obtain good values for both of them than updating one with the other fixed, even when the fixed one is set as close to optimal as possible. All of these observations suggest that such a meta-learning regime might provide a rational way to explore solutions with better generalization for such non-convex robust learning problems.
\vspace{-1mm}
\subsection{Experiments on Real-world Noisy Dataset} \label{real-experiment}\vspace{-1mm}
We then verify the applicability of our algorithm on a real-world large-scale noisy dataset: Clothing1M, which contains 1 million images of clothing collected from online shopping websites, covering 14 classes, e.g., T-shirt, Shirt, Knitwear. The labels are generated from the text surrounding the images and are thus extremely noisy. The dataset also provides 50k, 14k, and 10k manually refined clean images for training, validation, and testing, respectively, but we did not use the 50k clean training data. We use the validation set as the meta dataset.
\textbf{Experimental setup.} Following previous works \cite{patrini2017making,tanaka2018joint}, we used a ResNet-50 pre-trained on ImageNet. For preprocessing, we resize each image to $256\times256$, crop the center $224\times224$ region as input, and perform normalization. We used SGD with momentum 0.9, weight decay $10^{-3}$, an initial learning rate of 0.01, and batch size 32. The learning rate of ResNet-50 is divided by 10 after 5 epochs (10 epochs in total). We also use SGD to optimize the hyperparameters, with an initial learning rate of 0.1, divided by 10 after 5 epochs.
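As a rough, non-authoritative sketch, the setup described above could be reproduced along the following lines; this is our reconstruction in PyTorch, not the released code, and the ImageNet normalization statistics are an assumption.
\begin{verbatim}
# Sketch (ours) of the Clothing1M setup described above.
import torch
import torchvision.transforms as T
from torchvision.models import resnet50

transform = T.Compose([
    T.Resize((256, 256)),          # resize the image to 256x256
    T.CenterCrop(224),             # crop the center 224x224 as input
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats (assumed)
                std=[0.229, 0.224, 0.225]),
])

model = resnet50(pretrained=True)  # pre-trained on ImageNet
optimizer = torch.optim.SGD(model.parameters(), lr=0.01,
                            momentum=0.9, weight_decay=1e-3)
# Divide the learning rate by 10 after 5 epochs (10 epochs in total).
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[5], gamma=0.1)
\end{verbatim}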
\textbf{Results.} The results are summarized in Table \ref{Table4}. Conventional algorithms need to search for a proper hyperparameter in a candidate set by cross-validation to obtain a satisfactory result, which is often expensive and hard to reproduce in real-world settings. Our ARL algorithm provides a new way to mutually ameliorate the hyperparameter and the network parameters, reducing the barrier to practical implementation. It can be seen that the ARL algorithm consistently improves the performance of each original algorithm, and that A-PolySoft obtains the highest accuracy among all the baselines.
\vspace{-1mm}
\section{Conclusion} \label{conclusion}\vspace{-1mm}
In this paper, we have proposed an adaptive hyperparameter learning strategy that learns the form of a robust loss function directly from data by automatically tuning its hyperparameter. Four state-of-the-art robust loss functions were integrated into our ARL framework to verify its validity. Comprehensive experiments have been conducted, and the empirical results show that the proposed method performs better than the conventional hyperparameter setting strategy. The learning fashion of iterative amelioration between the hyperparameter and the network parameters has shown good potential for providing a new way to explore solutions with better generalization for such highly non-convex robust loss optimization problems.
\newpage
{\small
\bibliographystyle{ieee_fullname}
\section{\label{s1} Introduction}
Defining and studying theories of random surfaces \cite{pol1}, or more generally random geometries, is a fundamental problem in science with a wide
range of applications, from string theory and quantum gravity to
statistical physics and probability theory. Most of the literature has
focused on random surfaces. In this case, the problem simplifies because
any metric $g$ can be put into a simple form using diffeomorphisms, the so-called conformal gauge,
\begin{equation}
\label{confgauge}
g=e^{2\sigma}g_{0}\, ,
\end{equation}
where $g_{0}$ is some reference background metric which depends on at most a finite number of parameters. The standard route to define random metrics is then to consider that the unconstrained scalar $\sigma$ is a gaussian free field \cite{DDK}. One obtains in this way a family of non-trivial models, the so-called Liouville theories, that are parametrized by a single positive constant multiplying the Liouville action
\begin{equation}\label{Liouvilleaction} S_{\text{L}}(g_{0},g) = \int\!\d^{2}x\,\sqrt{g_{0}}\bigl( g_{0}^{ab}\partial_{a}\sigma\partial_{b}\sigma + R_{0}\sigma\bigr)\, .\end{equation}
These models are natural two-dimensional generalizations of the brownian random paths. They are known to be related to the continuum limit of discretized versions of random geometries formulated using double-scaled matrix models \cite{kaza}. They are also of interest for probabilists who have recently proved a rigorous version
\cite{sheff} of the KPZ relation \cite{KPZ}.
In spite of its many successes, the Liouville formulation suffers from a certain number of shortcomings. For example, one of the fundamental applications of the model has been the theory of two-dimensional quantum gravity. A crucial requirement in quantum gravity is background-independence, which, in the present context, is equivalent to independence with respect to the reference metric $g_{0}$ in \eqref{confgauge}. However, the Liouville theory is background-dependent, because both the Liouville action \eqref{Liouvilleaction} and the path integral measure on the Liouville field, which is derived from the following metric in field space,
\begin{equation}
\label{Liouvillemetric}
\lVert\delta\sigma\rVert_{0}^{2} =\int_{\Sigma}\!\d^{2}x\,\sqrt{g_{0}}\,(\delta\sigma)^{2}\, ,\end{equation}
are background-dependent. A major result in Liouville theory \cite{DDK} is to demonstrate that the model can actually be made background-independent when coupled to a conformal field theory of central charge $c\leq 1$ by adjusting the parameter multiplying the action \eqref{Liouvilleaction}. The cases $c>1$ remain open, but even when $c\leq 1$ it would be desirable to understand why the Liouville model is the correct theory of quantum gravity. From first principles, the background-independent path integral measure $\mathscr D\sigma$ should be derived from the background-independent metric on the space $\mathscr M$ of all two-dimensional metrics
\begin{equation}
\label{Calabimetric}
\lVert\delta\sigma\rVert^{2} = \int_{\Sigma}\!\d^{2}x\,\sqrt{g}\,(\delta\sigma)^{2}=\int_{\Sigma}\!\d^{2}x\,\sqrt{g_{0}}\,e^{2\sigma}(\delta\sigma)^{2}\, .\end{equation}
This formula is very unusual, because it depends non-linearly on the field $\sigma$, and it is not clear a priori how to use it to build a path integral.
In this letter, we propose a new approach to the theory of random geometry, based on the profound geometrical properties of the space of metrics on a K\"ahler manifold \cite{Mabuchi, Mabuchi2, Tian}. In this framework, we can construct very simple regularized versions of
\eqref{Calabimetric} and $\mathscr D\sigma$. A wealth of new theories of random metrics can be considered. In particular, it is natural to study models for which the gravitational action is given by the Mabuchi functional
\cite{Mabuchi}, which is singled out by its unique geometrical properties. Remarkably, we show that the effective gravitational action for a massive scalar field coupled to gravity in two dimensions does contain the Mabuchi action on top of the standard Liouville term, yielding new quantum gravity models with profound geometrical features.
We shall restrict ourselves in the following to the simplest case of random metrics on the two dimensional sphere $\Sigma=\text{S}^{2}$. We can then choose the background metric in \eqref{confgauge} to be the round metric of area $A_{0}$,
\begin{equation}\label{roundmet} g_{0}=\frac{A_{0}}{\pi}\frac{|\d z|^{2}}{(1+|z|^{2})^{2}}\, ,\end{equation}
where $(z,\bar z)$ are the standard stereographic coordinates. Generalizations to arbitrary Riemann surfaces $\Sigma$ and to higher dimensional K\"ahler manifolds are presented elsewhere \cite{FKZ1,FKZ2}.
\section{\label{s2} The K\"ahler potential and the construction of
metrics}
Our starting point is to write the conformal factor $e^{2\sigma}$ in terms of the K\"ahler potential $\phi$ defined by the equation
\begin{equation}\label{phidef} e^{2\sigma} = \frac{A}{A_{0}} -\frac{1}{2}A\,\Delta_{0}\phi\, ,
\end{equation}
where $A$ is the area for the metric $g=e^{2\sigma}g_{0}$ and $\Delta_{0}$ the positive laplacian for the metric $g_{0}$. Equation \eqref{phidef} can always be solved for $A$ and $\phi$ in terms of $\sigma$, and the solution is unique up to constant shifts in $\phi$. \emph{We propose to focus on $\phi$ instead of $\sigma$ to define random metrics.}
The field $\phi$ must satisfy the fundamental inequality
\begin{equation}\label{constphi} \Delta_{0}\phi < 2/A_{0}\end{equation}
coming from the positivity of the metric $g$. To define a path integral over $\phi$, we must regularize the theory by introducing a UV cut-off and also solve the constraint \eqref{constphi}. Remarkably, this can be done in a very elegant way.
A simple method to regularize is to expand the field on spherical harmonics up to spin $N$,
corresponding to a short distance cut-off $\ell\sim A^{1/2}/N$, and then take the limit $N\rightarrow\infty$. In stereographic coordinates, it is not difficult to see that a basis for the space of all spherical harmonics of spin up to $N$ is given by the functions $f_{ij}= \bar s_{i}(\bar z) s_{j}(z) \lambda_{0}^{2}(z,\bar z)$, $0\leq i,j\leq N$, where the $(s_{i})_{0\leq i\leq N}$ forms a basis for the holomorphic polynomials of degrees up to $N$ and $1/\lambda_{0}^{2} = (1+|z|^{2})^{N}$. It is actually convenient to choose
\begin{equation}\label{sidef} s_{j}(z) = \sqrt{\frac{N!}{j!(N-j)!}}\, z^{j}\, ,\quad 0\leq j\leq N\, .\end{equation}
In order to make the constraint \eqref{constphi} tractable, the idea, which is based on profound methods in K\"ahler geometry \cite{Tian}, is to expand
\begin{equation}\label{phiexp} e^{2\pi N\phi_{N}} = \sum_{0\leq i,j\leq N}\lambda_{0}^{2}\,\bar s_{i}(\bar z) H_{ij} s_{j}(z)\end{equation}
instead of $\phi$ itself. The field $\phi_{N}$ is the regularized version of $\phi$ and the associated metric is called a Bergman metric. The matrix $H$ in \eqref{phiexp} must be hermitian since the K\"ahler potential is real. It is defined up to multiplication by strictly positive constants, since constant shifts in the K\"ahler potential are immaterial. We can thus impose the condition $\det H=1$. Moreover, \emph{$H$ must be positive-definite.} This ensures that the right hand side of eq.\ \eqref{phiexp} is strictly positive, and, less trivially, also ensures that \eqref{constphi} is automatically satisfied, as a little calculation using the Cauchy-Schwarz inequality shows. The converse statement is also true \cite{Tian}: any $\phi$ satisfying \eqref{constphi} can be approximated by an expansion like \eqref{phiexp}, for $N$ large enough, where $H$ is a positive-definite hermitian matrix given by
\begin{equation}\label{Hilbmap} H^{-1}_{ji} = \int_{\Sigma}\!\d^{2}x\,\sqrt{g}\, \lambda_{0}^{2}e^{-2\pi N\phi}\bar s_{i} s_{j}\, .\end{equation}
The fact that $\phi_{N}$, given by \eqref{phiexp}, converges to $\phi$ when $N\rightarrow\infty$ if $H$ is given by \eqref{Hilbmap} (with the same convergence property being true for the associated metric and all its derivatives) can be interpreted in terms of natural properties of the lowest Landau level for a charged particle on $\text{S}^{2}$ in a magnetic field of strength $\sim N\rightarrow\infty$. We cannot expand on this interesting point here but more details can be found in \cite{DK, FKZ2}.
The important conclusion is that the symmetric space $\mathscr M_{N}=\text{SL}(N+1,\mathbb C)/\text{SU}(N+1)$ of $(N+1)\times (N+1)$ positive-definite hermitian matrices $H$ of determinant one provides a regularization of the space $\mathscr M$ of all metrics on $\text{S}^{2}$. Metrics in $\mathscr M_{N}$ are parametrized by $H$ through the formulas \eqref{confgauge}, \eqref{phidef}, \eqref{phiexp} and the space $\mathscr M_{N}\times\mathbb R_{+}^{*}$, where the $\mathbb R_{+}^{*}$ factor parametrizes the total area, goes to the space of all metrics $\mathscr M$ when $N\rightarrow\infty$. A theory of random metrics can then be defined by choosing some probability measure on the space $\mathscr M_{N}$, corresponding to a particular matrix model. The continuum limit is associated with the large $N$ limit. Let us emphasize that the integral over angles is crucial in these models, because the relevant observables depend on the full matrix $H$ and not only on its eigenvalues, as, for example, the formulas \eqref{phiexp} and \eqref{Hilbmap} clearly show.
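To make the construction concrete, the following minimal numerical sketch (ours, not part of any standard package) samples a random positive-definite $H$ with $\det H=1$ and evaluates the regularized K\"ahler potential $\phi_{N}$ of \eqref{phiexp}, with the basis \eqref{sidef}, at a point of the sphere.
\begin{verbatim}
# Sketch (ours): evaluate the regularized Kahler potential phi_N for a
# random Bergman metric on S^2, with the basis s_j(z) defined in the text.
import math
import numpy as np

N = 8
rng = np.random.default_rng(0)

# Positive-definite Hermitian H with det H = 1: take H = M M^dagger,
# then rescale so that the determinant is one.
M = rng.normal(size=(N + 1, N + 1)) + 1j * rng.normal(size=(N + 1, N + 1))
H = M @ M.conj().T
H /= np.linalg.det(H).real ** (1.0 / (N + 1))

binom = np.array([math.comb(N, k) for k in range(N + 1)], dtype=float)
powers = np.arange(N + 1)

def phi_N(z):
    """Kahler potential at the stereographic coordinate z."""
    s = np.sqrt(binom) * z ** powers          # holomorphic sections s_j(z)
    lam2 = (1.0 + abs(z) ** 2) ** (-N)        # lambda_0^2
    val = lam2 * np.real(np.conj(s) @ H @ s)  # positive since H > 0
    return np.log(val) / (2.0 * np.pi * N)

print(phi_N(0.5 + 0.3j))
\end{verbatim}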
\section{\label{s3} Background-independent measure}
The metric \eqref{Calabimetric} expressed in terms of the variables $A$ and $\phi$ defined in \eqref{phidef} reads
\begin{equation}\label{Calabimet2}
\lVert\delta\sigma\rVert^{2} = \frac{(\delta A)^{2}}{4A} + \frac{A^{2}}{16}\int\!\d^{2}x\,\sqrt{g}\, (\Delta_{g}\delta\phi)^{2}\, ,\end{equation}
where $\Delta_{g}$ is the laplacian for the metric $g$. If we introduce the natural metric on the space of K\"ahler potentials $\phi$
\begin{equation}\label{Mabushimet}\lVert\delta\phi\rVert^{2} =
\int\!\d^{2}x\,\sqrt{g}\, (\delta\phi)^{2}\, ,\end{equation}
then the background-independent measures $\mathscr D\phi$ and $\mathscr D\sigma$ associated to \eqref{Calabimetric} and \eqref{Mabushimet} respectively are simply related,
\begin{equation}\label{measurerel} \mathscr D\sigma = A^{1/6}\d A\, \mathscr D\phi\, {\det}'(A\Delta_{g})\, ,\end{equation}
and thus the problems of defining $\mathscr D\sigma$ and $\mathscr D\phi$ are equivalent. The determinant ${\det}'(A\Delta_{g})$ of the laplacian of the metric of unit area $g/A$ excludes the constant zero mode because the constant shifts in $\phi$ are unphysical. It is defined as usual via the zeta-function regularization procedure. The overall power of $A$ in \eqref{measurerel} is derived in general from a one-loop calculation and the result indicated in \eqref{measurerel} corresponds to the Liouville theory on the sphere \cite{FKZ2}.
In the context of the regularized theory defined in the previous Section, the choice of background is related to the choice of a metric associated to the identity matrix $H=\mathbb{I}$, or equivalently to a choice of basis $(s_{j})_{0\leq j\leq N}$. For example, with the choice \eqref{sidef}, $H=\mathbb{I}$ corresponds to the round metric \eqref{roundmet}.
Changing the background amounts to changing $H$ into $MHM^{\dagger}$ for some invertible matrix $M$. The metric on $\mathscr M_{N}$ defined by
\begin{equation}\label{metMN} \lVert\delta H\rVert_{N}^{2} = \tr (H^{-1}\delta H)^{2}\, ,\end{equation}
as well as its associated volume form $\mathscr D_{N}H$, are thus manifestly background-independent.
Clearly, since at large $N$ the space $\mathscr M_{N}\times\mathbb R_{+}^{*}$ approximates the space $\mathscr M$ of all metrics, the measure $\mathscr D_{N}H$ must yield a regularized version of the background-independent measure on $\mathscr M$. This can be explicitly checked by studying the large $N$ asymptotics of \eqref{metMN}, again using the properties of the associated lowest Landau level problem. The result
\begin{equation}\label{dphidH} \lVert\delta\phi\rVert^{2}/A = \lim_{N\rightarrow\infty} \lVert\delta H\rVert_{N}^{2}/(8\pi^{2}N^{3})\end{equation}
shows that $\mathscr D_{N}H$ provides a definition of $\mathscr D\phi$ at large $N$. Path integrals over metrics are then defined by
\begin{equation}\label{defpath} \int_{\mathscr M}\!\mathscr D\phi\, e^{-S(\phi)}\sim\lim_{N\rightarrow\infty}\int_{\mathscr M_{N}}\!\mathscr D_{N}H\, e^{-S(\phi_{N}(H))}\, ,\end{equation}
where $\phi_{N}(H)$ is given by \eqref{phiexp} and $\sim$ means that suitable rescalings (renormalizations) need to be performed when taking the limit.
\section{\label{s4} A new effective action for gravity}
The Liouville action first entered the field of two-dimensional quantum gravity because it provides an integrated version of the conformal anomaly \cite{pol1}. This implies that the full metric dependence of the partition function of a matter \emph{conformal field theory} of central charge $c$ coupled to gravity can be easily found. Taking into account the ghost CFT coming from the gauge fixing \eqref{confgauge}, the path integral over metrics reduces to
\begin{equation}\label{qg1} \int_{\mathscr M}\!\mathscr D\sigma \, e^{\frac{c-26}{24\pi} S_{\text L}(g_{0},g)} Z(g_{0})\, ,\end{equation}
where $Z(g_{0})$ is the partition function of the matter plus ghost system. The model thus reduces to the study of two decoupled CFT, the matter/ghost theory and the Liouville theory. Background independence is equivalent to the fact that the total central charge must be zero.
What happens when one couples a matter field theory which is \emph{not} conformal to gravity has been much less studied, in particular in the continuum formalism (see e.g.\ \cite{Ising} for the Ising model on the lattice, and \cite{Zam} for a perturbative approach in the continuum formalism). The obvious difficulty is that the metric dependence of the matter partition function can no longer be deduced from the conformal anomaly, and thus the effective gravitational action $S_{\text{eff}}$, defined by
\begin{equation}\label{graveff} Z(g) = e^{-S_{\text{eff}}(g_{0},g)}Z(g_{0})\, ,\end{equation}
is no longer given by the Liouville action. In particular, we expect
$S_{\text{eff}}$ to be non-local in the Liouville field $\sigma$. Nevertheless, some fundamental properties, already present in the CFT case, must remain valid. First, background independence implies that the total gravity plus ghost plus non-conformal matter system must still be a conformal field theory of vanishing total central charge. Second, \eqref{graveff} implies that $S_{\text{eff}}$ must satisfy the following one-cocycle consistency conditions,
\begin{align}\label{cocycle1} S_{\text{eff}}(g_{1},g_{2}) & = -S_{\text{eff}}(g_{2},g_{1})\, ,\\
\label{cocycle2} S_{\text{eff}}(g_{1},g_{3}) &= S_{\text{eff}}(g_{1},g_{2}) + S_{\text{eff}}(g_{2},g_{3})\, .\end{align}
The above conditions are non-trivial but, unlike the CFT case, we cannot expect to derive from them a universal formula for $S_{\text{eff}}$. Let us thus
simplify the problem by studying an expansion when the mass scale governing the non-conformality of the matter theory is small. Then we may expect to see some universality emerging, at least at leading order. For example, we consider a massive scalar field $X$ with action
\begin{equation}\label{massivescalar} S_{\text{m}} = \frac{1}{8\pi} \int\!\d^{2}x\,
\sqrt{g}\,\bigl( g^{ab}\partial_{a}X\partial_{b}X + q R X+ m^{2}X^{2}\bigr)\, ,\end{equation}
where as usual $R$ is the Ricci scalar and $q$ an arbitrary dimensionless parameter, in the small $m^{2}$ expansion. This expansion is non-perturbative, because the mass term $m^{2}X^{2}$ is not a well-defined operator in the CFT at $m=0$. At fixed area, it will be valid for $m^{2}A\ll 1$. If we want to integrate over areas, then the cosmological constant should be chosen to be much larger than $m^{2}$. At leading order, as is usual in a small mass expansion, we may expect the effective action to be non-local with respect to $\sigma$, with terms of the form $A^{-1}\partial^{-2}\sigma$ or $m^{2}\partial^{-2}\sigma$ which, from \eqref{phidef}, may be made local in terms of $\phi$.
It turns out that there does exist an extremely natural functional of $\phi$, the so-called Mabuchi action \cite{Mabuchi}. It is given on the sphere by
\begin{equation}\label{Mabuchidef} S_{\text M}(g_{0},g) = \int_{\text{S}^{2}}\!\d^{2}x\, \sqrt{g_{0}}\bigl(-2\pi g_{0}^{ab}\partial_{a}\phi\partial_{b}\phi + 8\pi\phi/A_{0}- R_{0}\phi + 4\sigma e^{2\sigma}/A\bigr)\, ,\end{equation}
where the metrics $g$ and $g_{0}$ are related by \eqref{confgauge} and \eqref{phidef}.
The Mabuchi action satisfies all the required consistency conditions and actually shares many properties with the Liouville action. It is invariant under constant shifts of $\phi$, and thus well-defined on the space of metrics. It satisfies the cocycle conditions \eqref{cocycle1} and \eqref{cocycle2}, as can be checked straightforwardly. It is bounded from below and is convex in the metric \eqref{Mabushimet}, making it a suitable candidate to be used as an action in a path integral. It has a unique minimum, corresponding to the metric of constant scalar curvature, which is the round metric \eqref{roundmet} in the case of the sphere. It admits higher dimensional generalizations with similar properties. For these
reasons it plays a central r\^ole in geometry, in particular in the study of constant scalar curvature metrics on K\"ahler manifolds. Finally, a regularized version of $S_{\text M}$ can be naturally constructed \cite{dondet,FKZ1}.
We have computed explicitly the effective gravitational action for the massive scalar field \eqref{massivescalar}, to leading order in the small
mass expansion. It is convenient to add into the model a spectator CFT of central charge $c$. On the sphere and at fixed area, and when $q\not =0$, the result, up to terms of order $m^{2}A$, then reads
\begin{equation}\label{Seffscalar} S_{\text{eff}}(g_{0},g) = \frac{25-3q^{2}-c}{24\pi}
S_{\text L}(g_{0},g) + \frac{q^{2}}{4} S_{\text M}(g_{0},g)\, .\end{equation}
When $q=0$, the leading correction to the Liouville term, given by $\frac{m^{2}A}{16\pi} S_{\text M}(g_{0},g)$, is also governed by the Mabuchi action. It is also possible to consider correlation functions, for example
\begin{equation}\label{corrfunc} \Bigl\langle\prod_{j}\int\!\d^{2}x_{j}\,\sqrt{g}\, e^{ik_{j}X(x_{j})}\Bigr\rangle\, .\end{equation}
The gravitational dressing then involves new factors depending on the Mabuchi action as well as on a closely related functional called the Aubin-Yau action. Full details on these results are presented in \cite{FKZ2}.
A startling possibility is to adjust the parameters such that the Liouville contribution to the gravitational action vanishes altogether. Since the most natural measure in the present context is $\mathscr D\phi$ instead of $\mathscr D\sigma$, and the determinant factor in \eqref{measurerel} is equivalent to the contribution of a $c=-2$ CFT, this is achieved when $27-c-3q^{2}=0$ in our model. We are then left with an entirely new quantum gravity path integral,
\begin{equation}\label{MabQG} \int_{\mathscr M}\!\mathscr D\phi\, e^{-q^{2}S_{\text M}/4}\, ,\end{equation}
which can be regularized and studied using the tools presented in the previous sections, see e.g.\ \eqref{defpath}. Note that the coupling $q^{2}$ in front of the Mabuchi action can be made arbitrary, and thus the strength of the quantum fluctuations can be chosen at will.
\section{Conclusion}
The approach to the theory of random metrics that we have proposed provides a new point of view with a deep interplay between matrix models and geometrical techniques. It allows one to address fundamental questions in two-dimensional quantum gravity and suggests interesting new models. It also opens a window onto the theory of higher-dimensional fluctuating geometries. We hope it will provide many fruitful insights into these hard but fundamental issues in theoretical physics.
\section{Acknowledgments}
We would like to thank M.~Douglas for useful discussions.
This work is supported in part by the Belgian FRFC (grant 2.4655.07), the Belgian IISN (grants 4.4511.06 and 4.4514.08), the IAP Programme (Belgian Science Policy), the RFFI grant 11-01-00962 and the NSF grant DMS-0904252.
\section{Introduction}
A common procedure in Lattice QCD is to calculate a correlation function in a
certain channel and then fit it as a sum of several exponentials
\cite{QCDPAX,UKQCD,dong,my}.
The parameters of the fits are estimated by
minimizing $\chi^2$. To find the errors,
ideally the calculations should be repeated many times,
but this is impractical.
Usually the jackknife or the bootstrap is used.
Instead here we use Bayes's theorem to derive the parameters'
probability distribution for given data from the probability of the
data for given parameters.
Usually one determines the number of poles by
comparing $\chi^2$ for different models.
This comparison is very often ambiguous.
Here we use the parameters' probability distribution
to calculate relative probabilities of the possible models with given data.
We perform the Bayesian analysis for some model
functions imitating Lattice QCD propagators.
Then we apply this method to analyze SU(2) hadron propagators.
\section{Bayes' theorem and its applications}
$P(A|B)$ is the conditional probability that proposition $A$ is true,
given that proposition $B$ is true. Bayes' theorem reads:
\begin{eqnarray}
P(T|D,I) P(D|I) = P(D|T,I) P(T|I),
\end{eqnarray}
where $T$ is the theoretical model to be tested,
$D$ is the data, and $I$ is the prior information.
$P(T|D,I)$ is the posterior probability of the theoretical model.
$P(T|I)$ is the prior probability of the theoretical model.
$P(D|I)$ is the prior probability of the data; it will be always absorbed into
the normalization constant.
$P(D|T,I)$ is the direct probability of the data.
For shortness $I$ will be implicit in all formulae henceforth.
We are interested in calculating the posterior probability of a theoretical
model and its parameters $\{c,E\}$:
\begin{eqnarray}
P(\{c,E\}|D)=\frac{P(\{c,E\})}{P(D)} P(D|\{c,E\}). \nonumber
\end{eqnarray}
Given $P(\{c,E\})$ and $P(D|\{c,E\})$, we can~\cite{bret}:
I. Calculate the posterior probability
density $P(E_n|D)$
[$P(c_n|D)$] for the parameters $E_n$ [$c_n$] :
\begin{eqnarray}
P(E_n|D) = \int \prod_{i} dc_i \prod_{i\neq n} dE_i\;\, P(\{c,E\}|D).
\label{eq:distr}
\end{eqnarray}
II. Calculate the average values $\overline{E}_i$
and the standard deviations $\sigma_{E_i}$ (similarly for $c_i$)
\begin{eqnarray}
\overline{E}_n = \int dE_n\, E_n\, P(E_n|D), \nonumber \\
\sigma_{E_n}^2 = \int dE_n\, (E_n-\overline{E}_n)^2\, P(E_n|D),
\label{eq:value}
\end{eqnarray}
provided $P(E_n|D)$ is normalized.
III. Compare several models $T_i$,
(for example one pole, two pole and three pole models).
We cannot find the absolute probability of a theory,
since we do not have
the ``complete set'' of theories. But we can calculate the relative
probabilities of two theories:
\begin{eqnarray}
\frac{P(T_i|D)}{P(T_j|D)} = \frac{P(T_i) P(D|T_i)}
{P(T_j) P(D|T_j)}.
\label{eq:tot_rat}
\end{eqnarray}
$P(D|T)$ can be obtained from $P(D|\{c,E\})$ by integrating over all
parameters of the theory:
\begin{eqnarray}
P(D|T)= \int \{dc\} \{dE\} P(\{c,E\}) P(D|\{c,E\}).
\label{eq:tot_pr}
\end{eqnarray}
For the prior probability $P(\{c,E\})$ of the parameters of a model
in section \ref{sec:modelselection}
we make the ``least informative'' assumption~\cite{bret}
$P(c,E)\,dc\,dE \sim (dc/c)\,(dE/E)$.
This form is scale invariant.
Priors $P(\{c,E\})$ should be normalized.
The direct probability $P(D|\{c,E\})$ of the data $D$ can be calculated
relatively easily if the data is Gaussian distributed.
We generate ``fake'' data to be used in the analysis.
We use an n-pole model, add noise $e(t)$ to generate a sample of
$N$ ``propagators'' $g_{\alpha}(t)$= $\sum_{i=1}^n$ $c_i e^{-E_i t}+e(t)$
($\alpha=1,..,N$),
calculate the average
$G(t)$ and estimate the covariance matrix $C(t,t')$ from the data.
Here t is the discrete, ``lattice'' time.
We vary the number of ``propagators'' to control
the noise level in the data.
Here we use $N=360$ and $N=3600$, which corresponds to a
decrease in the noise level
by a factor of 3.
The probability distribution of $G(t)$ is \cite{TASI}
\begin{eqnarray}
P(D|\{c,E\})= e^{-\chi^2(\{c,E\})/2} ,
\label{eq:dir_pr_D}
\end{eqnarray}
where $\chi^2$ is calculated using the full covariance matrix \cite{TASI}.
The individual $g_\alpha(t)$ need not be Gaussian distributed,
as long as we average
over enough of them so that the $G(t)$ are.
For the ``fake'' data generated here, a Gaussian distribution is ensured by construction.
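For illustration, a minimal sketch of this construction in our own notation (the noise amplitude and all names are arbitrary choices) reads:
\begin{verbatim}
# Sketch (ours): generate "fake" one-pole propagators and the chi^2
# built from the full covariance matrix of the averaged data.
import numpy as np

rng = np.random.default_rng(1)
T, Nprop = 20, 360                       # lattice times, # of propagators
t = np.arange(T)
c_in, E_in = np.array([0.15]), np.array([0.485])

signal = (c_in[:, None] * np.exp(-np.outer(E_in, t))).sum(axis=0)
g = signal + 0.01 * rng.normal(size=(Nprop, T))   # noisy propagators

G = g.mean(axis=0)                       # averaged propagator
C = np.cov(g, rowvar=False) / Nprop      # covariance of the average

def chi2(c, E):
    """chi^2 for an n-pole model with amplitudes c and energies E."""
    model = (c[:, None] * np.exp(-np.outer(E, t))).sum(axis=0)
    r = G - model
    return r @ np.linalg.solve(C, r)
\end{verbatim}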
\section{Estimating the Parameters.}
\subsection{1 pole data.}
We generate data $D$ for the one pole model with
$c_1^{in}=0.15$ and $E_1^{in}=0.485$.
We use the one pole model to fit the data.
Here we assume the prior probability of the data $P(\{c,E\})$
to be constant.
Then the posterior probability of the parameters $P(\{c,E\}|D)$
is up to a constant equal to the direct probability of the data given
by equation (\ref{eq:dir_pr_D}).
The posterior probability density for $E_1$
\begin{eqnarray}
P(E_1|D) = \frac{\int dc_1 \hspace{0.05in} e^{-\chi^2(c_1,E_1)/2}}
{\int dc_1 dE_1 \hspace{0.05in} e^{-\chi^2(c_1,E_1)/2}}.
\label{eq:PcE}
\end{eqnarray}
These integrals call for Monte Carlo integration with the Metropolis
algorithm.
We generate a set of points $(c_1,E_1)$. Every point is characterized by
$\chi^2(c_1,E_1)$. We sample the vicinity of the minimum
of $\chi^2(c_1,E_1)$ [maximum of $\exp(-\chi^2(c_1,E_1)/2)$] (Fig.~1).
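A minimal Metropolis sketch (ours), reusing the \texttt{chi2} function from the previous code block, could look as follows; the step size and chain length are illustrative.
\begin{verbatim}
# Sketch (ours): Metropolis sampling of the posterior ~ exp(-chi^2/2).
import numpy as np

def metropolis(chi2_fn, theta0, step=0.005, nsteps=20000, seed=2):
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, dtype=float)
    f, chain = chi2_fn(theta), []
    for _ in range(nsteps):
        prop = theta + step * rng.normal(size=theta.shape)
        fp = chi2_fn(prop)
        # Accept with probability min(1, exp(-(chi2' - chi2)/2)).
        if np.log(rng.random()) < -(fp - f) / 2.0:
            theta, f = prop, fp
        chain.append(theta.copy())
    return np.array(chain)

# Parameters theta = (c_1, E_1); chi2 is defined in the earlier sketch.
chain = metropolis(lambda p: chi2(p[:1], p[1:]), theta0=[0.1, 0.4])
print(chain.mean(axis=0), chain.std(axis=0))  # posterior means and errors
\end{verbatim}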
\begin{picture}(,130)
\newsavebox{\onechi}
\newsavebox{\onechicap}
\savebox{\onechi}(50,80)
{\input{chi1.tex}}
\savebox{\onechicap}(50,30){\parbox{2.9in}
{Figure 1. $\chi^2$ vs. the point number $n$.}}
\put(71,30){\usebox{\onechi}}
\put(71,5){\usebox{\onechicap}}
\end{picture}
We make a scatter plot of $c_1$ vs.\ $E_1$ (Fig.~2) to
visualize this distribution. The density of the points
is proportional to the weight $\exp(-\chi^2(c_1,E_1)/2)$.
\begin{picture}(,160)
\newsavebox{\onepole}
\newsavebox{\onepolecap}
\savebox{\onepole}(50,100)
{\input{one_pole.tex}}
\savebox{\onepolecap}(50,30){\parbox{2.9in}
{Figure 2. Scatter plot of $\chi^2$ in ($c,E$) space.}}
\put(71,40){\usebox{\onepole}}
\put(71,0){\usebox{\onepolecap}}
\end{picture}
Evaluating the integrals (\ref{eq:PcE}) then amounts to histogramming the sampled
values of $E_1$, with bins large enough to make the distribution smooth (Fig.~3).
\begin{picture}(,130)
\newsavebox{\onee}
\newsavebox{\oneecap}
\savebox{\onee}(50,90)
{\input{one_e.tex}}
\savebox{\oneecap}(50,30){\parbox{2.9in}
{Figure 3. Probability distribution $P(E)$. }}
\put(71,30){\usebox{\onee}}
\put(71,0){\usebox{\oneecap}}
\end{picture}
Both equation (\ref{eq:value}) and the jackknife give
the same values and errors for the parameters,
$E= 0.4851(1)$ and $c=0.1499(2)$.
The present approach saves computational time
compared to the jackknife.
If one uses simulated annealing to fit the data, one covers the
same regions of the ($c_i$,$E_i$) space
as are needed to calculate the probability distributions~(\ref{eq:distr}).
With the probability distributions one immediately obtains the
parameters and errors,
whereas with the jackknife the fitting has to be repeated $N$ times.
\subsection{2 Pole data}
We repeat the analysis performed in section 3.1
for two-pole data when the poles are well separated:
$c_1^{in}$=0.1, $c_2^{in}$=0.1 $E_1^{in}$=0.5
$E_2^{in}$=0.6.
We use the two-pole model to fit the data.
The probability distributions for the energies and
coefficients are obtained and the parameters are estimated
just as in section 3.1.
The only complication is that now we have to deal with
the 4-d space of parameters $(c_1,E_1,c_2,E_2)$.
We perform a Monte Carlo integration as described above.
The 4-d probability distribution is visualized by projecting it onto two planes
$(c_1,E_1)$ and $(c_2,E_2)$.
Each blob in Fig. 4 is a projection of the 4-d distribution on a 2-d plane.
Each blob represents the probability distribution for one pair of $(c_i,E_i)$
after the second pair has been integrated out.
\begin{picture}(,150)
\newsavebox{\twopole}
\newsavebox{\twopolecap}
\savebox{\twopole}(50,100)
{\input{two_pole.tex}}
\savebox{\twopolecap}(50,40){\parbox{2.9in}
{Figure 4. Scatter plot of $\chi^2$
projected on $(c_1,E_1)$ and $(c_2,E_2)$ planes and
combined on one plane.}}
\put(71,40){\usebox{\twopole}}
\put(71,0){\usebox{\twopolecap}}
\end{picture}
\section{Model selection}
\label{sec:modelselection}
Unless one has some prior knowledge, determining
the number of poles present in the data
based on comparing $\chi^2$ for models with different numbers of poles
is very often ambiguous.
Table~\ref{tab:chi} contains the values of $\chi^2$ obtained when 1 and 2 pole
data are fit with 1 and 2 pole models.
We use the one pole data from section 3.1 and generate
2 pole data with the poles close to each other:
$c_1^{in}=0.05$ $c_2^{in}=0.1$ $E_1^{in}=0.49$ $E_2^{in}=0.5$.
With one exception
the corresponding values of $\chi^2$ are close
and do not allow for a definite
answer.
\begin{table}[h]
\begin{tabular}{cccc}\hline
data & \# of propagators & model &$\chi^2/dof$ \\ \hline
& &1 pole & 0.61 \\ \cline{3-4}
& 360 & 2 pole &0.57 \\ \cline {2-4}
1 pole & & 1 pole & 1.2 \\ \cline{3-4}
& 3600 & 2 pole & 1.3 \\ \hline \hline
& &1 pole &0.87 \\ \cline{3-4}
& 360 & 2 pole &0.99 \\ \cline {2-4}
2 pole & & 1 pole & 2.0 \\ \cline{3-4}
& 3600 & 2 pole& 1.2 \\ \hline
\end{tabular}
\caption{Values of $\chi^2$ for fitting 1 and 2 pole data
of different noise levels with 1 and 2 pole models.}
\label{tab:chi}
\end{table}
We need to calculate the total probabilities ratio (\ref{eq:tot_rat})
to determine the number of poles in the data.
We assume $P(2 pole)=P(1 pole)$, i.e. {\em a priori} these two models
are equally probable.
To calculate $ P(D|1 pole)$
we substitute
(\ref{eq:dir_pr_D}) into equation (\ref{eq:tot_pr})
(and similarly for $P(D|2 pole)$)
\begin{eqnarray}
P(D|1 pole)= \int dc_1\, dE_1\, \frac{e^{-\chi^2(c_1,E_1)/2}}
{c_1 E_1}
\label{eq:tot_pr1}
\end{eqnarray}
The integration is tricky since we are dealing with a function that varies
rapidly in the multidimensional space.
We use scatter plots to determine the areas of integration.
Table \ref{tab:ratio} contains integration results for the total probability
ratio $R=\frac{P(1 pole|D)}{P(2 pole|D)}$.
With a sufficiently low noise level the total probabilities ratio
picks the correct model.
\begin{table}[h]
\begin{tabular}{ccc}\hline
data & \# of propagators & R \\ \hline
& 360 & 3 \\ \cline{2-3}
1 pole & 3600 & 30 \\ \hline
& 360 & 3 \\ \cline{2-3}
2 pole & 3600& 0.02 \\ \hline
\end{tabular}
\caption{Total probabilities ratio $R$.}
\vspace{-0.3in}
\label{tab:ratio}
\end{table}
If we estimate the parameters
of the 2 pole data with 3600 propagators
using equation (\ref{eq:value}), we get the input
parameters back within the error bars.
The jackknife here gives unreasonably large errors
because
$\chi^2$ has several minima in the ($c_1,c_2,E_1,E_2$) space.
This can be seen from the graph of $\chi^2$ vs. $n$ (Fig.~7),
and the minima can also be clearly identified on the scatter plot (Fig.~8).
Two of the minima (1 and 3) have the same $\chi^2$.
When we perform the jackknife, the blind fitting finds either minimum 1 or 3,
which results in the unreasonably large error estimates.
\begin{picture}(,165)
\newsavebox{\twopolonechi}
\newsavebox{\twopolonechicap}
\savebox{\twopolonechi}(50,100)
{\input{two_pol_one_chi.tex}}
\savebox{\twopolonechicap}(50,30){\parbox{2.9in}
{Figure 7. $\chi^2$ has multiple minima.}}
\put(71,50){\usebox{\twopolonechi}}
\put(71,20){\usebox{\twopolonechicap}}
\end{picture}
\begin{picture}(,135)
\newsavebox{\twopolone}
\newsavebox{\twopolonecap}
\savebox{\twopolone}(50,100)
{\input{two_pol_one.tex}}
\savebox{\twopolonecap}(50,30){\parbox{2.9in}
{Figure 8. Three minima of $\chi^2$ from Fig.7
projected on the two-dimensional
plane $(c,E)$.}}
\put(71,40){\usebox{\twopolone}}
\put(71,0){\usebox{\twopolonecap}}
\end{picture}
\section{Analysis of SU(2) data}
Here we analyze hadron propagators in the pseudoscalar channel. The detailed
description of these data is given in \cite{my,colormy}.
The coupling constant $\beta=2.5$,
lattice spacing $a=0.09\pm0.012$ fm,
the lattice size is $12^3\times 24$.
The propagators analyzed here were calculated with $\kappa=0.146$
for 360 configurations.
The point source is at $t=5$. The fit is performed in the time range 6--20.
We repeat the analysis for 60 and 360 configurations to
study the data with different noise levels. For 60 configurations
the $\chi^2/dof$ is 1.0 for the 3 pole fit
and 1.17 for the 4 pole fit,
and the total probabilities ratio is
$\frac{P(3 pole|D)}{P(4 pole|D)}\sim 1$. For 360 configurations
the $\chi^2/dof$ is 0.52 for
the 3 pole model fit and 0.69 for
the 4 pole model fit, and
the total probabilities ratio for 360 configurations is
$\frac{P(3 pole|D)}{P(4 pole|D)}\sim 10$.
Here again, for low-noise data
we are able to choose between the
two models based on the quantitative estimate given by
the total probabilities ratio.
\section{Conclusion}
A new method has been introduced that can be used to analyze many-pole fits
of hadron propagators in Lattice QCD.
It has been used to estimate the many-pole model parameters and their
uncertainties.
It works in the presence of multiple minima, where the jackknife (at least in its
simple-minded form) fails, and it cuts the computational time.
The new method has been used to calculate
relative probabilities for different models, which can be crucial
in making the optimal choice of a model.
The scatter plots, which have been introduced as an auxiliary tool for the
multidimensional integration, can be used as an independent tool
for the many pole fit analysis.
I wish to thank Greg Kilcup for his suggestion to use Bayesian analysis.
\section{Introduction}
Given a set of points lying close to a union of linear subspaces, subspace clustering refers to the problem of identifying the number of subspaces, their dimensions, a basis for each subspace, and the clustering of the data points according to their subspace membership. This is an important problem with widespread applications in computer vision \cite{Vidal:CVPR04-multiaffine}, systems theory \cite{Ma-Vidal:HSCC05} and genomics \cite{Jiang:CAG14}.
\subsection{Existing work} \label{subsection:Literature}
Over the past $15$ years, various subspace clustering methods have appeared in the literature \cite{Vidal:SPM11-SC}. Early techniques, such as \emph{K-subspaces} \cite{Bradley:JGO00,Tseng:JOTA00} or \emph{Mixtures of Probabilistic PCA} \cite{Tipping-mixtures:99,Gruber-Weiss:CVPR04}, rely on solving a non-convex optimization problem by alternating between assigning points to subspaces and re-estimating a subspace for each group of points. As such, these methods are sensitive to initialization. Moreover, these methods require a-priori knowledge of the number of subspaces and their dimensions. This motivated the development of a family of purely algebraic methods, such as \emph{Generalized Principal Component Analysis} or GPCA \cite{Vidal:PAMI05}, which feature closed form solutions for various subspace configurations, such as hyperplanes \cite{Vidal:CVPR03-gpca,Vidal:CVPR04-gpca}. A little later, ideas from spectral clustering \cite{vonLuxburg:StatComp2007} led to a family of algorithms based on constructing an affinity between pairs of points. Some methods utilize local geometric information to construct the affinities \cite{Yan:ECCV06}. Such methods can estimate the dimension of the subspaces, but cannot handle data near the intersections. Other methods use global geometric information to construct the affinities, such as the \emph{spectral curvature} \cite{Chen:IJCV09}. Such methods can handle intersecting subspaces, but require the subspaces to be low-dimensional and of equal dimensions. In the last five years, methods from sparse representation theory, such as \emph{Sparse Subspace Clustering} \cite{Elhamifar:CVPR09,Elhamifar:ICASSP10,Elhamifar:TPAMI13}, low-rank representation, such as \emph{Low-Rank Subspace Clustering} \cite{Liu:ICML10,Favaro:CVPR11,Liu:TPAMI13,Vidal:PRL14}, and least-squares, such as \emph{Least-Squares-Regression Subspace Clustering} \cite{Lu:ECCV12}, have provided new ways for constructing affinity matrices using convex optimization techniques. Among them, sparse-representation based methods have become extremely attractive because they have been shown to provide affinities with guarantees of correctness as long as the subspaces are sufficiently separated and the data are well distributed inside the subspaces \cite{Elhamifar:TPAMI13,Soltanolkotabi:AS13}. Moreover, they have also been shown to handle noise \cite{Wang-Xu:ICML13} and outliers \cite{Soltanolkotabi:AS14}. However, existing results require the subspace dimensions to be small compared to the dimension of the ambient space. This is in sharp contrast with algebraic methods, which can handle the case of hyperplanes.
\subsection{Motivation} \label{subsection:Motivation}
This paper is motivated by the highly complementary properties of Sparse Subspace Clustering (SSC) and Algebraic Subspace Clustering (ASC), priorly known as GPCA:\footnote{Following the convention introduced in \cite{Vidal:GPCAbook}, we have taken the liberty to change the name from GPCA to ASC for two reasons. First, to have a consistent naming convention across many subspace clustering algorithms, such as ASC, SSC, LRSC, which is indicative of the its type (algebraic, sparse, low-rank). Second, we believe that GPCA is a more general name that is best suited for the entire family of subspace clustering algorithms, which are all generalizations of PCA.} On the one hand, theoretical results for SSC assume that the subspace dimensions are small compared to the dimension of the ambient space. Furthermore, SSC is known to be very robust in the presence of noise in the data. On the other hand, theoretical results for ASC are valid for subspaces of arbitrary dimensions, with the easiest case being that of hyperplanes, provided that an upper bound on the number of subspaces is known. However, all known implementations of ASC for subspaces of different dimensions, including the recursive algorithm proposed in \cite{Huang:CVPR04-ED}, are very sensitive to noise and are thus considered impractical. As a consequence, our motivation for this work is to develop an algorithm that enjoys the strong theoretical guarantees associated to ASC, but it is also robust to noise.
\subsection{Paper contributions}
\label{subsection:Contribution}
This paper features two main contributions.
As a first contribution, we propose a new ASC algorithm, called \emph{Filtrated Algebraic Subspace Clustering} (FASC), which can handle an unknown number of subspaces of possibly high and different dimensions, and give a rigorous proof of its correctness.\footnote{Partial results from the present paper have been presented without proofs in \cite{Tsakiris:Asilomar14}.} Our algorithm solves the following problem:
\begin{definition}[Algebraic subspace clustering problem] \label{dfn:AbstractSC}
Given a finite set of points $\mathcal{X}=\left\{\boldsymbol{x}_1,\dots,\boldsymbol{x}_N\right\}$ lying in general position\footnote{We will define formally the notion of points in general position in Definition \ref{dfn:GeneralPosition}.} inside a transversal subspace arrangement\footnote{We will define formally the notion of a transversal subspace arrangement in Definition \ref{dfn:transversal}.} $\mathcal{A}=\bigcup_{i=1}^n \mathcal{S}_i$, decompose $\mathcal{A}$ into its irreducible components, i.e., find the number of subspaces $n$ and a basis for each subspace $\mathcal{S}_i,i=1,\dots,n$.
\end{definition}
Our algorithm approaches this problem by selecting a suitable polynomial vanishing on the subspace arrangement $\mathcal{A}$. The gradient of this polynomial at a point $\boldsymbol{x}\in\mathcal{A}$ gives the normal vector to a hyperplane $\mathcal{V}_1$ containing the subspace $\mathcal{S}$ passing through the point. By intersecting the subspace arrangement with the hyperplane, we obtain a subspace sub-arrangement $\mathcal{A}_1 \subset \mathcal{A}$, which lives in an ambient space $\mathcal{V}_1$ of dimension one less than the original ambient dimension and still contains $\mathcal{S}$. By choosing another suitable polynomial that vanishes on $\mathcal{A}_1$, computing the gradient of this new polynomial at the same point, intersecting again with the new hyperplane $\mathcal{V}_2$, and so on, we obtain a \emph{descending filtration} $\mathcal{V}_1 \supset \mathcal{V}_2 \supset \cdots \supset \mathcal{S}$ of subspace arrangements, which eventually gives us the subspace $\mathcal{S}$ containing the point. This happens precisely after $c$ steps, where $c$ is the codimension of $\mathcal{S}$, when no non-trivial vanishing polynomial exists, and the ambient space $\mathcal{V}_c$, which is the orthogonal complement of the span of all the gradients used in the filtration, can be identified with $\mathcal{S}$. By repeating this procedure at another point not in the first subspace, we can identify the second subspace and so on, until all subspaces have been identified. Using results from algebraic geometry, we rigorously prove that this algorithm correctly identifies the number of subspaces, their dimensions and a basis for each subspace.
As a second contribution, we extend the ideas behind the purely abstract FASC algorithm to a working algorithm called \emph{Filtrated Spectral Algebraic Subspace Clustering} (FSASC), which is suitable for computations with noisy data.\footnote{A preliminary description of this method appeared in a workshop paper \cite{Tsakiris:FSASCICCV15}.} The first modification is that intersections with hyperplanes are replaced by projections onto them. In this way, points in the subspace contained by the hyperplane are preserved by the projection, while other points are generally shrunk. The second modification is that we compute a filtration at each data point and use the norm of point $\boldsymbol{x}_j$ at the end of the filtration associated to point $\boldsymbol{x}_i$ to define an affinity between these two points. The intuition is that the filtration associated to point $\boldsymbol{x}_i$ will in theory preserve the norms of all points lying in the same subspace as $\boldsymbol{x}_i$. This process leads to an affinity matrix of high intra-class and low cross-class connectivity, upon which spectral clustering is applied to obtain the clustering of the data. By experiments on real and synthetic data we demonstrate that the idea of filtrations leads to affinity matrices of superior quality, i.e., affinities with high intra- and low inter-cluster connectivity, and as a result to better clustering accuracy. In particular, FSASC is shown to be superior to state-of-the-art methods in the problem of motion segmentation using the Hopkins155 dataset \cite{Tron:CVPR07}.
Finally, we have taken the liberty of presenting in an appendix the foundations of the algebraic geometric theory of subspace arrangements relevant to Algebraic Subspace Clustering, in a manner that is both rigorous and accessible to the interested audience outside the algebraic geometry community, thus complementing existing reviews such as \cite{Ma:SIAM08}.
\subsection{Notation} \label{subsection:Notation}
For any positive integer $n$, we define $[n]:=\left\{1,2,\hdots,n\right\}$. We denote by $\mathbb{R}$ the real numbers. The right null space of a matrix $\boldsymbol{B}$ is denoted by $\mathcal{N}(\boldsymbol{B})$. If $\mathcal{S}$ is a subspace of $\mathbb{R}^D$, then $\dim(\mathcal{S})$ denotes the dimension of $\mathcal{S}$ and $\pi_{\mathcal{S}}: \mathbb{R}^D \rightarrow \mathcal{S}$ is the orthogonal projection of $\mathbb{R}^D$ onto $\mathcal{S}$. The symbol $\oplus$ denotes direct sum of subspaces. We denote the
orthogonal complement of a subspace $\mathcal{S}$ in $\mathbb{R}^D$ by $\mathcal{S}^\perp$. If $\boldsymbol{y}_1, \dots, \boldsymbol{y}_s$ are elements of $\mathbb{R}^D$, we denote by $\Span(\boldsymbol{y}_1,\dots,\boldsymbol{y}_s)$ the subspace of $\mathbb{R}^D$ spanned by these elements. For two vectors $\boldsymbol{x},\boldsymbol{y} \in \mathbb{R}^D$, the notation $\boldsymbol{x} \cong \boldsymbol{y}$ means that $\boldsymbol{x}$ and $\boldsymbol{y}$ are colinear.
We let $\mathbb{R}[x]=\mathbb{R}[x_1,\hdots,x_D]$ be the polynomial ring over the real numbers in $D$ indeterminates. We use $x$ to denote the vector of indeterminates $x=(x_1,\dots,x_D)$, while we reserve $\boldsymbol{x}$ to denote a data point $\boldsymbol{x}=(\chi_1,\dots,\chi_D)$ of $\mathbb{R}^D$. We denote by $\mathbb{R}[x]_\ell$ the set of all homogeneous\footnote{A polynomial in many variables is called homogeneous if all monomials appearing in the polynomial have the same degree.} polynomials of degree $\ell$ and similarly $\mathbb{R}[x]_{\le \ell}$ the set of all homogeneous polynomials of degree less than or equal to $\ell$. $\mathbb{R}[x]$ is an infinite dimensional real vector space, while $\mathbb{R}[x]_\ell$\ and $\mathbb{R}[x]_{\le \ell}$ are finite dimensional subspaces of $\mathbb{R}[x]$
of dimensions $\mathcal{M}_{\ell}(D):={ \ell+D-1 \choose \ell}$ and ${\ell + D\choose \ell}$, respectively. We denote by $\mathbb{R}(x)$ the field of all rational functions over $\mathbb{R}$ and indeterminates $x_1,\hdots,x_D$. If $\left\{p_1,\dots,p_s\right\}$ is a subset of $\mathbb{R}[x]$, we denote by $\langle p_1,\dots,p_s \rangle$ the ideal generated by $p_1,\dots,p_s$ (see Definition \ref{dfn:ideal}). If $\mathcal{A}$ is a subset of $\mathbb{R}^D$, we denote by $\mathcal{I}_{\mathcal{A}}$ the vanishing ideal of $\mathcal{A}$, i.e., the set of all elements of $\mathbb{R}[x]$ that vanish on $\mathcal{A}$ and similarly $\mathcal{I}_{\mathcal{A},\ell} :=\mathcal{I}_{\mathcal{A}} \cap \mathbb{R}[x]_\ell$ and $\mathcal{I}_{\mathcal{A},\le \ell} :=\mathcal{I}_{\mathcal{A}} \cap \mathbb{R}[x]_{\le \ell}$. Finally, for a point $\boldsymbol{x} \in \mathbb{R}^D$, and a set $\mathcal{I} \subset \mathbb{R}[x]$ of polynomials, $\nabla \mathcal{I}|_{\boldsymbol{x}}$ is the set of gradients of all the elements of $\mathcal{I}$ evaluated at $\boldsymbol{x}$.
\subsection{Paper organization}
\label{subsection:Organization}
The remainder of the paper is organized as follows: section \ref{section:ASC} provides a careful, yet concise review of the state-of-the-art in algebraic subspace clustering. In section \ref{section:geometricAASC} we discuss the FASC algorithm from a geometric viewpoint with as few technicalities as possible. Throughout Sections \ref{section:ASC} and \ref{section:geometricAASC}, we use a running example of two lines and a plane in $\mathbb{R}^3$ to illustrate various ideas; the reader is encouraged to follow these illustrations. We save the rigorous treatment of FASC for section \ref{section:mfAASC}, which consists of the technical heart of the paper. In particular, the listing of the FASC algorithm can be found in Algorithm \ref{alg:AASC} and the theorem establishing its correctness is Theorem \ref{thm:AASC}. In section \ref{section:FSASC} we describe FSASC, which is the numerical adaptation of FASC, and compare it to other state-of-the-art subspace clustering algorithms using both synthetic and real data. Finally, appendices \ref{appendix:CA}, \ref{appendix:AG} and \ref{appendix:SA} cover basic notions and results from commutative algebra, algebraic geometry and subspace arrangements respectively, mainly used throughout section \ref{section:mfAASC}.
\section{Review of Algebraic Subspace Clustering (ASC)} \label{section:ASC}
This section reviews the main ideas behind ASC. For the sake of simplicity, we first discuss ASC in the case of hyperplanes (section \ref{subsection:Hyperplanes}) and subspaces of equal dimension (section \ref{subsection:Equidimensional}), for which a closed-form solution can be found using a single polynomial. In the case of subspaces of arbitrary dimensions, the picture becomes more involved, but a closed-form solution from multiple polynomials is still available when the number of subspaces $n$ is known (section \ref{subsection:Known}) or an upper bound $m$ for $n$ is known (section \ref{subsection:GeneralCase}). In section \ref{subsection:RASC} we discuss one limitation of ASC due to computational complexity and a partial solution based on a recursive ASC algorithm. In section \ref{subsection:SASC-A} we discuss another limitation of ASC due to sensitivity to noise and a practical solution based on spectral clustering. We conclude in section \ref{subsection:challenges} with the main challenge that this paper aims to address.
\subsection{Subspaces of codimension $1$} \label{subsection:Hyperplanes}
The basic principles of ASC can be introduced more smoothly by considering the case where the union of subspaces is the union of $n$ hyperplanes $\mathcal{A} = \bigcup_{i=1}^n \mathcal{H}_i$ in $\mathbb{R}^D$. Each hyperplane $\mathcal{H}_i$ is uniquely defined by its unit length normal vector $\boldsymbol{b}_i \in \mathbb{R}^D$ as $\mathcal{H}_i = \{ \boldsymbol{x}\in\mathbb{R}^D : \boldsymbol{b}_i^\top \boldsymbol{x} = 0\}$. In the language of algebraic geometry this is equivalent to saying that $\mathcal{H}_i$ is the zero set of the polynomial $\boldsymbol{b}_i^{\top} x$ or equivalently $\mathcal{H}_i$ is the algebraic variety defined by the polynomial equation $\boldsymbol{b}_i^{\top} x=0$, where $\boldsymbol{b}_i^{\top} x= b_{i,1}x_1 + \cdots + b_{i,D} x_D$ with $\boldsymbol{b}_i:=(b_{i,1},\hdots,b_{i,D})^{\top}, x := (x_1,\hdots,x_D)^{\top}$. We write this more succinctly as $\mathcal{H}_i = \mathcal{Z}(\boldsymbol{b}_i^{\top} x)$. We then observe that a point $\boldsymbol{x}$ of $\mathbb{R}^D$ belongs to $\bigcup_{i=1}^n \mathcal{H}_i$ if and only if $\boldsymbol{x}$ is a root of the polynomial $p(x)=(\boldsymbol{b}_1^{\top}x)\cdots (\boldsymbol{b}_n^{\top}x)$, i.e., the union of hyperplanes $\mathcal{A}$ is the \emph{algebraic variety} $\mathcal{A}=\mathcal{Z}(p)$ (the zero set of $p$). Notice the important fact that $p$ is homogeneous of degree equal to the number $n$ of distinct hyperplanes and, moreover, it is the product of linear homogeneous polynomials $\boldsymbol{b}_i^{\top}x$, i.e., a product of \emph{linear forms}, each of which defines a distinct hyperplane $\mathcal{H}_i$ via the corresponding normal vector $\boldsymbol{b}_i$.
Given a set of points $\mathcal{X}=\left\{\boldsymbol{x}_j\right\}_{j=1}^N \subset \mathcal{A}$ in general position in the union of hyperplanes, the classic
\emph{polynomial differentiation} algorithm proposed in \cite{Vidal:CVPR04-gpca,Vidal:PAMI05} recovers the correct number of hyperplanes as well as their normal vectors by
\begin{enumerate}
\item embedding the data into a higher-dimensional space via a polynomial map,
\item finding the number of subspaces by analyzing the rank of the embedded data matrix,
\item finding the polynomial $p$ from the null space of the embedded data matrix,
\item finding the hyperplane normal vectors from the derivatives of $p$ at a nonsingular point $\boldsymbol{x}$ of $\mathcal{A}$.\footnote{A nonsingular point of a subspace arrangement is a point that lies in one and only one of the subspaces that constitute the arrangement.}
\end{enumerate}
More specifically, observe that the polynomial $p(x)=(\boldsymbol{b}_1^{\top}x)\cdots (\boldsymbol{b}_n^{\top}x)$ can be written as a linear combination of the set of all monomials of degree $n$ in $D$ variables, $\{x_1^{n}, x_1^{n-1} x_2, x_1^{n-1} x_3, \hdots, x_1 x_D^{n-1},\hdots,x_D^n\}$ as:
\begin{align}
p(x) = \sum_{n_1+n_2+\cdots+n_D = n} c_{n_1,n_2,\dots,n_D}x_1^{n_1}x_2^{n_2}\cdots x_D^{n_D} = \boldsymbol{c}^\top \nu_n(x).
\end{align}
In the above expression, $\boldsymbol{c}\in\mathbb{R}^{\mathcal{M}_n(D)}$ is the vector of all coefficients $c_{n_1,n_2,\dots,n_D}$, and $\nu_{n}$ is the \emph{Veronese} or \emph{polynomial embedding} of degree $n$, as it is known in the algebraic geometry and machine learning literature, respectively. It is defined by taking a point of $\mathbb{R}^D$ to a point of $\mathbb{R}^{\mathcal{M}_{n}(D)}$ under the rule
\begin{align}
(x_1,\hdots,x_D)^\top \stackrel{\nu_{n}}{\longmapsto} \left(x_1^{n}, x_1^{n-1} x_2, x_1^{n-1} x_3, \hdots, x_1 x_D^{n-1},\hdots,x_D^{n}\right)^{\top} ,
\end{align}
where $\mathcal{M}_{n}(D)$ is the dimension of the space of homogeneous polynomials of degree $n$ in $D$ indeterminates.
The image of the data set $\mathcal{X}$ under the Veronese embedding is used to form the so-called \emph{embedded data matrix}
\begin{align}
\nu_{\ell}(\mathcal{X}):=\begin{bmatrix} \nu_{\ell}(\boldsymbol{x}_1) & \cdots& \nu_{\ell}(\boldsymbol{x}_N) \end{bmatrix}^{\top}.
\end{align}
It is shown in \cite{Vidal:PAMI05} that when there are sufficiently many data points that are sufficiently well distributed in the subspaces, the correct number of hyperplanes is the smallest degree $\ell$ for which $\nu_{\ell}(\mathcal{X})$ drops rank by 1: $n = \min_{\ell \geq 1} \{ \ell : \rank ( \nu_{\ell} (\mathcal{X})) = \mathcal{M}_{\ell}(D) - 1\}$. Moreover, it is shown in \cite{Vidal:PAMI05} that the coefficient vector $\boldsymbol{c}$ of $p$ is the unique, up to scale, element of the one-dimensional right null space of $\nu_n(\mathcal{X})$.
It follows that the task of identifying the normals to the hyperplanes from $p$ is equivalent to extracting the linear factors of $p$.
This is achieved\footnote{A direct factorization has been shown to be possible as well \cite{Vidal:CVPR03-gpca}; however, this approach has not yet been generalized to the case of subspaces of different dimensions.} by observing that if we have a point $\boldsymbol{x} \in \mathcal{H}_i - \cup_{i' \neq i} \mathcal{H}_{i'}$, then the gradient $\nabla p|_{\boldsymbol{x}}$ of $p$ evaluated at $\boldsymbol{x}$
\begin{align}
\nabla p |_{\boldsymbol{x}} = \sum_{j=1}^n \boldsymbol{b}_j \prod_{j'\neq j} (\boldsymbol{b}_{j'}^\top \boldsymbol{x})
\end{align}
is equal to $\boldsymbol{b}_i$ up to a scale factor, because $\boldsymbol{b}_i^\top \boldsymbol{x} = 0$ and hence all the terms in the sum vanish except for the $i$th one (see Proposition \ref{prp:Grd} for a more general statement). Once the normal vectors have been identified, clustering the points of $\mathcal{X}$ is straightforward.
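For concreteness, the four steps above can be prototyped in a few lines of Python. The sketch below is our own illustration and not the implementation of \cite{Vidal:CVPR04-gpca,Vidal:PAMI05}; it assumes noiseless synthetic data, and the helper names \texttt{veronese} and \texttt{gradient} are ours (they realize $\nu_n$ and the evaluation of $\nabla p|_{\boldsymbol{x}}$ monomial by monomial):
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement

def veronese(X, n):
    # rows of X (N x D) -> degree-n Veronese embedding (N x M_n(D))
    monos = list(combinations_with_replacement(range(X.shape[1]), n))
    return np.column_stack([np.prod(X[:, m], axis=1) for m in monos])

def gradient(c, x, n):
    # gradient at x of p(y) = c^T nu_n(y), one monomial at a time
    D, g = len(x), np.zeros(len(x))
    for cj, m in zip(c, combinations_with_replacement(range(D), n)):
        for k in set(m):
            rest = list(m); rest.remove(k)
            g[k] += cj * m.count(k) * np.prod(x[rest])
    return g

rng = np.random.default_rng(0)
D, n_true = 3, 2
normals = [rng.standard_normal(D) for _ in range(n_true)]
X = np.vstack([v - (v @ b) / (b @ b) * b      # 50 points on each hyperplane
               for b in normals for v in rng.standard_normal((50, D))])

# steps 1-2: smallest degree at which the embedded data matrix drops rank by one
for ell in range(1, 6):
    V = veronese(X, ell)
    if np.linalg.matrix_rank(V, tol=1e-8) == V.shape[1] - 1:
        n = ell
        break

# step 3: c spans the one-dimensional right null space of nu_n(X)
c = np.linalg.svd(veronese(X, n))[2][-1]

# step 4: normals as gradients of p at one nonsingular point per hyperplane
for x in X[[0, 50]]:
    g = gradient(c, x, n)
    print(g / np.linalg.norm(g))              # +/- b_i / ||b_i||
\end{verbatim}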
\subsection{Subspaces of equal dimension} \label{subsection:Equidimensional}
Let us now consider a more general case, where we know that the subspaces are of equal and known dimension $d$. Such a case can be reduced to the case of hyperplanes, by noticing that a union of $n$ subspaces of dimension $d$ of $\mathbb{R}^{D}$ becomes a union of hyperplanes of $\mathbb{R}^{d+1}$ after a \emph{generic} projection $\pi_{d}:\mathbb{R}^{D} \rightarrow \mathbb{R}^{d+1}$. We note that any random orthogonal projection will almost surely preserve the number of subspaces and their dimensions, as the set of projections $\pi_{d}$ that do not have this preserving property is a zero measure subset of the set of orthogonal projections $\left\{\pi_{d} \in \mathbb{R}^{(d+1) \times D}: \pi_{d} \pi_{d}^\top = I_{{(d+1)}\times {(d+1)}} \right\}$.
When the common dimension $d$ is unknown, it can be estimated exactly by analyzing the right null space of the embedded data matrix, after projecting the data generically onto
subspaces of dimension $d'+1$, with $d'= D-1, D-2, \dots$ \cite{Vidal:PhD03}. More specifically, when $d'>d$, we have
that $\dim \mathcal{N}(\nu_{n}(\pi_{d'}(\mathcal{X})))>1$, while when $d' < d$ we have $\dim \mathcal{N}(\nu_{n}(\pi_{d'}(\mathcal{X})))=0$. On the other hand, $d'=d$ is the only case for which the null space is one-dimensional, and so $d$ is the unique $d'$ satisfying $\dim \mathcal{N}(\nu_{n}(\pi_{d'}(\mathcal{X})))=1$.
Finally, when both $n$ and $d$ are unknown, one can first recover $d$ as the smallest $d'$ such that there exists an $\ell$ for which $\dim \mathcal{N}(\nu_{\ell}(\pi_{d'}(\mathcal{X})))>0$, and subsequently recover $n$ as the smallest $\ell$ such that $\dim \mathcal{N}(\nu_{\ell}(\pi_{d}(\mathcal{X})))>0$; see \cite{Vidal:PhD03} for further details.
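As a hedged illustration of this selection rule, the following sketch (ours, not code from \cite{Vidal:PhD03}; a randomly drawn orthogonal projection stands in for a generic one, and the helper names are ours) takes $n=2$ lines in $\mathbb{R}^4$, so that $d=1$:
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement

def veronese(X, n):
    monos = list(combinations_with_replacement(range(X.shape[1]), n))
    return np.column_stack([np.prod(X[:, m], axis=1) for m in monos])

def nulldim(V, tol=1e-8):
    return V.shape[1] - np.linalg.matrix_rank(V, tol=tol)

rng = np.random.default_rng(1)
D, n = 4, 2                                  # two lines (d = 1) in R^4
X = np.vstack([np.outer(rng.standard_normal(40), rng.standard_normal(D))
               for _ in range(n)])

for d_prime in range(D - 1, 0, -1):
    # random orthogonal projection pi_{d'} : R^D -> R^{d'+1}
    P = np.linalg.qr(rng.standard_normal((D, d_prime + 1)))[0].T
    print(d_prime, nulldim(veronese(X @ P.T, n)))
# output: d'=3 -> 8, d'=2 -> 4, d'=1 -> 1; the null space is
# one-dimensional exactly at d' = d = 1
\end{verbatim}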
\subsection{Known number of subspaces of arbitrary dimensions} \label{subsection:Known}
When the dimensions of the subspaces are unknown and arbitrary, the problem becomes much more complicated, even if the number $n$ of subspaces is known, which is the case examined in this subsection. In such a case, a union of subspaces $\mathcal{A}=\mathcal{S}_1 \cup \cdots \cup \mathcal{S}_n$ of $\mathbb{R}^D$, henceforth called a \emph{subspace arrangement}, is still an algebraic variety.
The main difference from the case of hyperplanes is that, in general, multiple polynomials of degree $n$ are needed to define $\mathcal{A}$, i.e., $\mathcal{A}$ is the zero set of a finite collection of homogeneous polynomials of degree $n$ in $D$ indeterminates.
\begin{eg}\label{eg:setup}
Consider the union $\mathcal{A}$ of a plane $\mathcal{S}_1$ and two lines $\mathcal{S}_2,\mathcal{S}_3$ in general position in $\mathbb{R}^3$ (Fig. \ref{fig:TwoLinesOnePlane}).
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\filldraw[fill=lightgray] (-2,0,-2) -- (2,0,-2) -- (2,0,2) node[anchor=west]{$\mathcal{S}_1$} -- (-2,0,2) -- (-2,0,-2);
\draw[->] (-1.5,0,1.5) -- (-1.5,1,1.5) node[anchor=east]{$\boldsymbol{b}_1$};
\draw[->] (0,0,0) -- (1,0,0);
\draw[densely dotted] (0,0,0) -- (-1,0,0);
\draw[->] (0,0,0) -- (0,0,1) ;
\draw (0,0,0) -- (0,1.5,-1.5) node[anchor=east]{$\mathcal{S}_2$};
\draw[densely dotted] (0,0,-0.5) -- (0,0.5,-0.5);
\draw[densely dotted] (0,0,0) -- (0,0,-1);
\draw (0,0,0) -- (-1.6,1.6,-0.8) node[anchor=west]{$\mathcal{S}_3$};
\draw[densely dotted] (0,0,0) -- (-1.6,0,-0.8);
\draw[densely dotted] (-1,0,-0.5) -- (-1,1,-0.5);
\end{tikzpicture}
\caption{A union of two lines and one plane in general position in $\mathbb{R}^3$.}
\label{fig:TwoLinesOnePlane}
\end{figure}
Then
$\mathcal{A}=\mathcal{S}_1 \cup \mathcal{S}_2 \cup \mathcal{S}_3$ is the zero set of the degree-$3$ homogeneous polynomials
\begin{align}
p_1 &:= (\boldsymbol{b}_1^\top x) (\boldsymbol{b}_{2,1}^\top x)(\boldsymbol{b}_{3,1}^\top x), &
p_2 &:= (\boldsymbol{b}_1^\top x) (\boldsymbol{b}_{2,1}^\top x)(\boldsymbol{b}_{3,2}^\top x), \\
p_3 &:= (\boldsymbol{b}_1^\top x) (\boldsymbol{b}_{2,2}^\top x)(\boldsymbol{b}_{3,1}^\top x), &
p_4 &:= (\boldsymbol{b}_1^\top x) (\boldsymbol{b}_{2,2}^\top x)(\boldsymbol{b}_{3,2}^\top x),
\end{align}
where $\boldsymbol{b}_1$ is the normal vector to the plane $\mathcal{S}_1$ and
$\boldsymbol{b}_{i,j}, \, j=1,2$, are two linearly independent vectors that are orthogonal
to the line $\mathcal{S}_i, i=2,3$. These polynomials are linearly independent and form a basis
for the vector space $\mathcal{I}_{\mathcal{A},3}$ of the degree-$3$ homogeneous polynomials that
vanish on $\mathcal{A}$.\footnote{The interested reader is encouraged to prove this claim.}
\end{eg}
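While the footnote invites an algebraic proof, the dimension claim can at least be corroborated numerically. In the sketch below (ours; randomly drawn subspaces stand in for the general-position assumption, and \texttt{veronese} realizes the embedding $\nu_3$ of section \ref{subsection:Hyperplanes}), the null space of the embedded data matrix indeed has dimension $4$, in agreement with the basis $p_1,\dots,p_4$:
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement

def veronese(X, n):
    monos = list(combinations_with_replacement(range(X.shape[1]), n))
    return np.column_stack([np.prod(X[:, m], axis=1) for m in monos])

rng = np.random.default_rng(2)
plane = np.linalg.qr(rng.standard_normal((3, 2)))[0]   # basis of S_1
lines = [rng.standard_normal(3) for _ in range(2)]     # directions of S_2, S_3
X = np.vstack([rng.standard_normal((60, 2)) @ plane.T]
              + [np.outer(rng.standard_normal(30), v) for v in lines])

V3 = veronese(X, 3)                          # 120 x 10, since M_3(3) = 10
print(V3.shape[1] - np.linalg.matrix_rank(V3, tol=1e-8))   # prints 4
\end{verbatim}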
In contrast to the case of hyperplanes, when the subspace dimensions are different,
there may exist vanishing polynomials of degree strictly less than the number of
subspaces.
\begin{eg}\label{eg:degree2}
Consider the setting of Example \ref{eg:setup}. Then there exists a unique up to scale vanishing polynomial of degree $2$, which is the product of two linear forms: one form is $\boldsymbol{b}_1^\top x$, where $\boldsymbol{b}_1$ is the normal to the plane $\mathcal{S}_1$, and the other linear form is $\boldsymbol{f}^\top x$,
where $\boldsymbol{f}$ is the normal to the plane defined by the lines $\mathcal{S}_2$ and $\mathcal{S}_3$ (Fig. \ref{fig:normals-b-f}).
\begin{figure}[!h]
\centering
\begin{tikzpicture}
\filldraw[fill=lightgray] (-2,0,-2) -- (2,0,-2) -- (2,0,2) node[anchor=west]{$\mathcal{S}_1$} -- (-2,0,2) -- (-2,0,-2);
\filldraw[fill=gray] (-1.2,1.2,-0.6) -- (0,1.2,-1.2) -- (0,0,0) -- (-1.2,1.2,-0.6);
\draw (0.25,1.425,0.5)node[anchor=east]{$\boldsymbol{f}$};
\draw (-0.3,1.4,-1)node[anchor=east]{$\mathcal{H}_{23}$};
\draw[->] (-0.25,0.625,-0.5) -- (0.25,1.625,0.5);
\draw[->] (0,0,0) -- (1,0,0);
\draw[densely dotted] (0,0,0) -- (-1,0,0);
\draw[->] (-1.5,0,1.5) -- (-1.5,1,1.5) node[anchor=east]{$\boldsymbol{b}_1$};
\draw[->] (0,0,0) -- (0,0,1) ;
\draw (0,0,0) -- (0,1.5,-1.5) node[anchor=east]{$\mathcal{S}_2$};
\draw[densely dotted] (0,0,-0.5) -- (0,0.5,-0.5);
\draw[densely dotted] (0,0,0) -- (0,0,-1);
\draw (0,0,0) -- (-1.6,1.6,-0.8) node[anchor=west]{$\mathcal{S}_3$};
\draw[densely dotted] (0,0,0) -- (-1.6,0,-0.8);
\draw[densely dotted] (-1,0,-0.5) -- (-1,1,-0.5);
\end{tikzpicture}
\caption{The geometry of the unique degree-$2$ polynomial
$p(x)=(\boldsymbol{b}_1^\top x)(\boldsymbol{f}^\top x)$ that vanishes on
$\mathcal{S}_1\cup\mathcal{S}_2\cup \mathcal{S}_3$. $\boldsymbol{b}_1$ is the normal vector to plane $\mathcal{S}_1$ and $\boldsymbol{f}$ is the normal vector to the plane $\mathcal{H}_{23}$ spanned by lines $\mathcal{S}_2$ and $\mathcal{S}_3$.}
\label{fig:normals-b-f}
\end{figure}
\end{eg}
As Example \ref{eg:setup} shows, all the relevant geometric information is still encoded in the factors
of \emph{some} special basis\footnote{Strictly speaking, this is not always true. However, it is true if the subspace arrangement is general enough, in particular if it is transversal; see Definition \ref{dfn:transversal} and Theorem \ref{thm:I=J}.} of $\mathcal{I}_{\mathcal{A},n}$, that consists of degree-$n$ homogeneous
polynomials that factorize into the product of linear forms. However,
computing such a basis remains, to the best of our knowledge, an unsolved problem. Instead, one can only rely on computing (or being given) a general basis for the vector space $\mathcal{I}_{\mathcal{A},n}$. In our example such a basis could be
\begin{align}
p_1 + p_4, \, \, p_1 - p_4, \, \, p_2 + p_3, \, \, p_2 - p_3\end{align} and it can be seen that none of these polynomials is factorizable
into the product of linear forms. This
difficulty was not present in the case of hyperplanes, because there was only one
vanishing polynomial (up to scale) of degree $n$ and it had to be factorizable.
In spite of this difficulty, a solution can still be achieved in an elegant fashion by resorting to polynomial differentiation. The key fact that allows this approach is that any homogeneous polynomial $p$ of degree $n$ that vanishes on the subspace arrangement $\mathcal{A}$ is a linear combination of vanishing polynomials, each of which is a product of linear forms, with each distinct subspace contributing a vanishing linear form in every product (Theorem \ref{thm:I=J}). As a consequence (Proposition \ref{prp:Grd}), the gradient of $p$ evaluated at some point $\boldsymbol{x} \in \mathcal{S}_i - \cup_{i' \neq i} \mathcal{S}_{i'}$ lies in $\mathcal{S}_i^{\perp}$ and the linear span of the gradients at $\boldsymbol{x}$ of all such $p$ is precisely equal to $\mathcal{S}_i^{\perp}$. We can thus recover $\mathcal{S}_i$, remove it from $\mathcal{A}$ and then repeat the procedure to identify all the remaining subspaces. As stated in Theorem \ref{thm:ASC}, this process is provably correct as long as the subspace arrangement $\mathcal{A}$ is transversal, as defined next.
\begin{definition}[Transversal subspace arrangement \cite{Derksen:JPAA07}]\label{dfn:transversal}
\!\!\! A subspace arrangement $\mathcal{A} = \bigcup_{i=1}^n \mathcal{S}_i \subset \mathbb{R}^D$ is called transversal if, for any subset $\mathfrak{I}$ of $[n]$, the codimension of $\bigcap_{i \in \mathfrak{I}} \mathcal{S}_i$ is the minimum between $D$ and the sum of the codimensions of all $\mathcal{S}_i, \, i \in \mathfrak{I}$.
\end{definition}
\begin{remark}Transversality is a geometric condition on the subspaces, which in particular requires the dimensions of all possible intersections among subspaces to be as small as the dimensions of the subspaces allow (see Appendix \ref{appendix:SA} for a discussion).
\end{remark}
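Transversality is also easy to test numerically from bases of the orthogonal complements: since $\left(\bigcap_{i \in \mathfrak{I}} \mathcal{S}_i\right)^\perp = \sum_{i \in \mathfrak{I}} \mathcal{S}_i^\perp$, the codimension of the intersection equals the rank of the stacked normal bases. The following Python sketch is ours and purely illustrative:
\begin{verbatim}
import numpy as np
from itertools import combinations

def is_transversal(normal_bases, D):
    # normal_bases[i]: matrix whose columns span S_i^perp;
    # checks the definition on every nonempty subset of the arrangement
    n = len(normal_bases)
    for r in range(1, n + 1):
        for idx in combinations(range(n), r):
            stacked = np.hstack([normal_bases[i] for i in idx])
            codim = np.linalg.matrix_rank(stacked)  # codim of intersection
            if codim != min(D, sum(normal_bases[i].shape[1] for i in idx)):
                return False
    return True

rng = np.random.default_rng(6)
# the running example: a plane (codim 1) and two lines (codim 2) in R^3
bases = [rng.standard_normal((3, 1)),
         rng.standard_normal((3, 2)),
         rng.standard_normal((3, 2))]
print(is_transversal(bases, 3))              # True (almost surely)
\end{verbatim}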
\smallskip
\begin{theorem}[ASC by polynomial differentiation when $n$ is known, \cite{Vidal:PAMI05,Ma:SIAM08}] \label{thm:ASC}
Let $\mathcal{A} = \bigcup_{i=1}^n \mathcal{S}_i$ be a transversal subspace arrangement of $\mathbb{R}^D$, let $\boldsymbol{x} \in \mathcal{S}_i - \bigcup_{i' \neq i} \mathcal{S}_{i'}$ be a nonsingular point in $\mathcal{A}$, and let $\mathcal{I}_{\mathcal{A},n}$ be the vector space of all degree-$n$ homogeneous polynomials that vanish on $\mathcal{A}$. Then $\mathcal{S}_i$ is the orthogonal complement of the subspace spanned by all vectors of the form $\nabla p|_{\boldsymbol{x}}$, where $p \in \mathcal{I}_{\mathcal{A},n}$, i.e., $\mathcal{S}_i = \Span\left( \nabla \mathcal{I}_{\mathcal{A},n}|_{\boldsymbol{x}}\right)^\perp$.
\end{theorem}
Theorem \ref{thm:ASC} and its proof are illustrated in the next example.
\begin{eg}\label{eg:ASCtheorem}
Consider Example \ref{eg:setup} and recall that
$p_1 = (\boldsymbol{b}_1^\top x) (\boldsymbol{b}_{2,1}^\top x)(\boldsymbol{b}_{3,1}^\top x)$,
$p_2 = (\boldsymbol{b}_1^\top x) (\boldsymbol{b}_{2,1}^\top x)(\boldsymbol{b}_{3,2}^\top x)$,
$p_3 = (\boldsymbol{b}_1^\top x) (\boldsymbol{b}_{2,2}^\top x)(\boldsymbol{b}_{3,1}^\top x)$,
and
$p_4 = (\boldsymbol{b}_1^\top x) (\boldsymbol{b}_{2,2}^\top x)(\boldsymbol{b}_{3,2}^\top x)$.
Let $\boldsymbol{x}_2$ be a generic point in $\mathcal{S}_2 - \mathcal{S}_1 \cup \mathcal{S}_3$. Then
\begin{align}
\nabla p_1|_{\boldsymbol{x}_2} \cong \nabla p_2|_{\boldsymbol{x}_2} \cong \boldsymbol{b}_{2,1},\, \, \, \nabla p_3|_{\boldsymbol{x}_2} \cong \nabla p_4|_{\boldsymbol{x}_2} \cong \boldsymbol{b}_{2,2}.
\end{align}
Hence $\boldsymbol{b}_{2,1},\boldsymbol{b}_{2,2} \in \Span( \nabla \mathcal{I}_{\mathcal{A},3}|_{\boldsymbol{x}_2})$ and so
$\mathcal{S}_2 \supset \Span \left( \nabla \mathcal{I}_{\mathcal{A},3}|_{\boldsymbol{x}_2}\right)^\perp$. Conversely, let $p \in \mathcal{I}_{\mathcal{A},3}$. Then there exist $\alpha_i\in\mathbb{R},i=1,\dots,4$, such that $p = \sum_{i=1}^{4} \alpha_i p_i$ and so
\begin{align}
\nabla p|_{\boldsymbol{x}_2} = \sum_{i=1}^{4} \alpha_i \nabla p_i|_{\boldsymbol{x}_2} \in \Span(\boldsymbol{b}_{2,1},\boldsymbol{b}_{2,2})=\mathcal{S}_2^\perp.
\end{align} Hence $\nabla \mathcal{I}_{\mathcal{A},3}|_{\boldsymbol{x}_2} \subset \mathcal{S}_2^\perp$, and so
$\Span(\nabla \mathcal{I}_{\mathcal{A},3}|_{\boldsymbol{x}_2})^\perp \supset \mathcal{S}_2$. The two inclusions together give $\mathcal{S}_2 = \Span(\nabla \mathcal{I}_{\mathcal{A},3}|_{\boldsymbol{x}_2})^\perp$, as asserted by Theorem \ref{thm:ASC}.
\end{eg}
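Theorem \ref{thm:ASC} translates directly into a numerical recipe, which we sketch below on the running example; this is our own illustration (helper names ours), not the reference implementation of \cite{Vidal:PAMI05,Ma:SIAM08}:
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement

def veronese(X, n):
    monos = list(combinations_with_replacement(range(X.shape[1]), n))
    return np.column_stack([np.prod(X[:, m], axis=1) for m in monos])

def gradient(c, x, n):
    D, g = len(x), np.zeros(len(x))
    for cj, m in zip(c, combinations_with_replacement(range(D), n)):
        for k in set(m):
            rest = list(m); rest.remove(k)
            g[k] += cj * m.count(k) * np.prod(x[rest])
    return g

rng = np.random.default_rng(3)
plane = np.linalg.qr(rng.standard_normal((3, 2)))[0]           # S_1
line2, line3 = rng.standard_normal(3), rng.standard_normal(3)  # S_2, S_3
X = np.vstack([rng.standard_normal((60, 2)) @ plane.T,
               np.outer(rng.standard_normal(30), line2),
               np.outer(rng.standard_normal(30), line3)])

V3 = veronese(X, 3)
r = np.linalg.matrix_rank(V3, tol=1e-8)
basis = np.linalg.svd(V3)[2][r:]             # basis of I_{X,3} = I_{A,3}

x2 = X[60]                                   # nonsingular point of the line S_2
G = np.vstack([gradient(c, x2, 3) for c in basis])   # spans S_2^perp
direction = np.linalg.svd(G)[2][-1]          # spans span(G)^perp = S_2
print(abs(direction @ line2) / np.linalg.norm(line2))  # ~1.0
\end{verbatim}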
\subsection{Unknown number of subspaces of arbitrary dimensions} \label{subsection:GeneralCase}
As it turns out, when the number of subspaces $n$ is unknown, but an upper
bound $m \ge n$ is given, one can obtain the decomposition of the subspace
arrangement from the gradients of the vanishing polynomials of degree $m$,
precisely as in Theorem \ref{thm:ASC}, simply by replacing $n$ with $m$.
\begin{theorem}[ASC by polynomial differentiation when an upper bound on $n$ is known, \cite{Vidal:PAMI05,Ma:SIAM08}]
\label{thm:ubASC}
Let $\mathcal{A} = \bigcup_{i=1}^n \mathcal{S}_i$ be a transversal subspace arrangement of $\mathbb{R}^D$, let $\boldsymbol{x} \in \mathcal{S}_i - \bigcup_{i' \neq i} \mathcal{S}_{i'}$ be a nonsingular point in $\mathcal{A}$, and let $\mathcal{I}_{\mathcal{A},m}$ be the vector space of all degree-$m$ homogeneous polynomials that vanish on $\mathcal{A}$, where $m \ge n$. Then $\mathcal{S}_i$ is the orthogonal complement of the subspace spanned by all vectors of the form $\nabla p|_{\boldsymbol{x}}$, where $p \in \mathcal{I}_{\mathcal{A},m}$, i.e., $\mathcal{S}_i = \Span\left( \nabla \mathcal{I}_{\mathcal{A},m}|_{\boldsymbol{x}}\right)^\perp$.
\end{theorem}
\begin{eg}
Consider the setting of Examples \ref{eg:setup} and \ref{eg:degree2}. Suppose that we have the upper bound $m=4$ on the number of underlying subspaces $(n=3)$. It can be shown that the vector space $\mathcal{I}_{\mathcal{A},4}$ has\footnote{This can be verified by applying the dimension formula of Corollary 3.4 in \cite{Derksen:JPAA07}.} dimension $8$ and is spanned by the polynomials
\begin{align}
q_1 &:= (\boldsymbol{b}_1^\top x) (\boldsymbol{f}^\top x)^3,&
q_5 &:= (\boldsymbol{b}_1^\top x) (\boldsymbol{f}^\top x)(\boldsymbol{b}_{3}^\top x)^2, \\
q_2 &:= (\boldsymbol{b}_1^\top x)^2 (\boldsymbol{f}^\top x)^2, &
q_6 &:= (\boldsymbol{b}_1^\top x) (\boldsymbol{b}_2^\top x)^2(\boldsymbol{f}^\top x), \\
q_3 &:= (\boldsymbol{b}_1^\top x)^3 (\boldsymbol{f}^\top x), &
q_7 &:=(\boldsymbol{b}_1^\top x) (\boldsymbol{b}_{2}^\top x)^2(\boldsymbol{b}_3^\top x), \\
q_4 &:= (\boldsymbol{b}_1^\top x) (\boldsymbol{f}^\top x)^2(\boldsymbol{b}_{3}^\top x), &
q_8 &:= (\boldsymbol{b}_1^\top x) (\boldsymbol{b}_{2}^\top x)(\boldsymbol{b}_{3}^\top x)^2,
\end{align} where $\boldsymbol{b}_1$ is the normal to $\mathcal{S}_1$, $\boldsymbol{f}$ is the normal to the plane defined by lines $\mathcal{S}_2$ and $\mathcal{S}_3$, and $\boldsymbol{b}_i$ is a normal to line $\mathcal{S}_i$ that is linearly independent from $\boldsymbol{f}$, for $i=2,3$. Hence $\mathcal{S}_1 = \Span(\boldsymbol{b}_1)^\perp$ and $\mathcal{S}_i=\Span(\boldsymbol{f},\boldsymbol{b}_i)^\perp, i=2,3$. Then for a generic point $\boldsymbol{x}_2 \in \mathcal{S}_2 - \mathcal{S}_1 \cup \mathcal{S}_3$, we have that
\begin{align}
& \nabla q_1|_{\boldsymbol{x}_2}=\nabla q_2|_{\boldsymbol{x}_2}=\nabla q_4|_{\boldsymbol{x}_2}=\nabla q_6|_{\boldsymbol{x}_2}=\nabla q_7|_{\boldsymbol{x}_2}=0,\\
& \nabla q_3|_{\boldsymbol{x}_2}\cong\nabla q_5|_{\boldsymbol{x}_2}\cong \boldsymbol{f}, \, \, \, \nabla q_8|_{\boldsymbol{x}_2} \cong \boldsymbol{b}_2.
\end{align}
Hence $\boldsymbol{f},\boldsymbol{b}_2 \in \Span(\nabla \mathcal{I}_{\mathcal{A},4}|_{\boldsymbol{x}_2})$ and so
$\mathcal{S}_2 \supset \Span(\nabla\mathcal{I}_{\mathcal{A},4}|_{\boldsymbol{x}_2})^\perp$. Similarly to
Example \ref{eg:ASCtheorem}, since every element of $\mathcal{I}_{\mathcal{A},4}$ is a
linear combination of the $q_\ell,\ell=1,\dots,8$, we have
$\mathcal{S}_2 = \Span(\nabla\mathcal{I}_{\mathcal{A},4}|_{\boldsymbol{x}_2})^\perp$.
\end{eg}
\begin{remark}
Notice that both Theorems \ref{thm:ASC} and \ref{thm:ubASC} are statements about the abstract subspace arrangement $\mathcal{A}$, i.e., no finite subset $\mathcal{X}$ of $\mathcal{A}$ is explicitly considered. To pass from $\mathcal{A}$ to $\mathcal{X}$ and get similar theorems, we need to require $\mathcal{X}$ to be \emph{in general position} in $\mathcal{A}$, in some suitable sense. As one may suspect, this notion of general position must entail that
polynomials of degree $n$ (for Theorem \ref{thm:ASC}) or of degree $m$ (for Theorem \ref{thm:ubASC}) that vanish on $\mathcal{X}$ must also vanish on $\mathcal{A}$, and vice versa. In that case, we can compute the required basis for $\mathcal{I}_{\mathcal{A},n}$ simply by computing a basis for $\mathcal{I}_{\mathcal{X},n}$ by means of the Veronese embedding described in section \ref{subsection:Hyperplanes}, and similarly for $\mathcal{I}_{\mathcal{A},m}$. We will make the notion of general position precise in Definition \ref{dfn:GeneralPosition}.
\end{remark}
\subsection{Computational complexity and recursive ASC} \label{subsection:RASC}
Although Theorem \ref{thm:ubASC} is quite satisfactory from a theoretical point of view, using an upper bound $m\geq n$ for the number of subspaces comes with the practical disadvantage that the dimension of the Veronese embedding, $\mathcal{M}_m(D)$, grows exponentially with $m$. In addition, increasing $m$ also increases the number of polynomials in the null space of $\nu_m(\mathcal{X})$, some of which will eventually, as $m$ becomes large, be polynomials that simply fit the data $\mathcal{X}$ but do not vanish on $\mathcal{A}$. To reduce the computational complexity of the polynomial differentiation algorithm, one can consider vanishing polynomials of smaller degree, $m < n$, as suggested by Example \ref{eg:degree2}. While such vanishing polynomials may not be sufficient to cluster the data into $n$ subspaces, they still provide a clustering of the data into $m' \le n$ subspaces. We can then look at each of these $m'$ clusters and see if they can be partitioned further. For instance, in Example \ref{eg:degree2}, we can first cluster the data into two planes, the plane $\mathcal{S}_1$ and the plane $\mathcal{H}_{23}$ containing the two lines $\mathcal{S}_2$ and $\mathcal{S}_3$, and then partition the data lying in $\mathcal{H}_{23}$ into the two lines $\mathcal{S}_2$ and $\mathcal{S}_3$. This leads to the recursive ASC algorithm proposed in \cite{Huang:CVPR04-ED,Vidal:PAMI05}, which is based on finding the polynomials of the smallest possible degree $m$ that vanish on the data, computing the gradients of these vanishing polynomials to cluster the data into $m'\le n$ groups, and then repeating the procedure for each group until the data from each group can be fit by polynomials of degree $1$, in which case each group lies in a single linear subspace. While this recursive ASC algorithm is very intuitive, no rigorous proof of its correctness has appeared in the literature. In fact, there are examples where this recursive method provably fails in the sense of producing \emph{ghost subspaces} in the decomposition of $\mathcal{A}$. For instance, when partitioning the data from Example \ref{eg:degree2} into two planes $\mathcal{S}_1$ and $\mathcal{H}_{23}$, we may assign the data from the intersection of the two planes to $\mathcal{H}_{23}$. If this is the case, when trying to partition further the data of $\mathcal{H}_{23}$, we will obtain three lines: $\mathcal{S}_2$, $\mathcal{S}_3$ and the ghost line $\mathcal{S}_4=\mathcal{S}_1 \cap \mathcal{H}_{23}$ (see Fig. \ref{fig:ghost}).
\subsection{Instability in the presence of noise and spectral ASC}\label{subsection:SASC-A}
Another important issue with Theorem \ref{thm:ubASC} from a practical standpoint is its sensitivity to noise. More precisely, when implementing Theorem \ref{thm:ubASC} algorithmically, one is required to estimate the dimension of the null space of $\nu_m(\mathcal{X})$, which is an extremely challenging problem in the presence of noise. Moreover, small errors in the estimation of $\dim \mathcal{N}(\nu_m(\mathcal{X}))$ have been observed to have dramatic effects on the quality of the clustering, thus rendering algorithms that are directly based on Theorem \ref{thm:ubASC} unstable. While the recursive ASC algorithm of \cite{Huang:CVPR04-ED,Vidal:PAMI05} is more robust than such algorithms, it is still sensitive to noise, as considerable errors may occur in the partitioning process. In addition, the performance of the recursive algorithm is always subject to degradation due to the potential occurrence of ghost subspaces.
To enhance the robustness of ASC in the presence of noise and obtain a stable working algebraic algorithm, the standard practice has been to apply a variation of the polynomial differentiation algorithm based on spectral clustering \cite{Vidal:PhD03}. More specifically, given noisy data $\mathcal{X}$ lying close to a union of $n$ subspaces $\mathcal{A}$, one computes an approximate vanishing polynomial $p$ whose coefficients are given by the right singular vector of $\nu_n(\mathcal{X})$ corresponding to its smallest singular value. Given $p$, one computes the gradient of $p$ at each point in $\mathcal{X}$ (which gives a normal vector associated with each point in $\mathcal{X})$, and builds an affinity matrix between points $\boldsymbol{x}_j$ and $\boldsymbol{x}_{j'}$ as the cosine of the angle between their corresponding normal vectors, i.e.,
\begin{align}
\boldsymbol{C}_{jj',\text{angle}} = \Big | \Big \langle \frac{\nabla p|_{\boldsymbol{x}_j}}{||\nabla p|_{\boldsymbol{x}_j}||}, \frac{\nabla p|_{\boldsymbol{x}_{j'}}}{||\nabla p|_{\boldsymbol{x}_{j'}}||} \Big \rangle \Big |. \label{eq:ABA}
\end{align}
This affinity is then used as input to any spectral clustering algorithm (see \cite{vonLuxburg:StatComp2007} for a tutorial on spectral clustering) to obtain a clustering $\mathcal{X} = \bigcup_{i=1}^n\mathcal{X}_i$. We refer to this spectral ASC method with \emph{angle-based affinity} as SASC-A.
To gain some intuition about $\boldsymbol{C}$, suppose that $\mathcal{A}$ is a union of $n$ hyperplanes and that there is no noise in the data. Then $p$ must be of the form $p(x)=(\boldsymbol{b}_1^\top x)\cdots (\boldsymbol{b}_n^\top x)$. In this case $\boldsymbol{C}_{jj'}$ is simply the cosine of the angle between the normals to the hyperplanes that are associated with points $\boldsymbol{x}_j$ and $\boldsymbol{x}_{j'}$. If both points lie in the same hyperplane, their normals must be equal, and hence $\boldsymbol{C}_{jj'} = 1$. Otherwise, $\boldsymbol{C}_{jj'} < 1$ is the cosine of the angle between the two corresponding hyperplanes. Thus, assuming that the smallest angle between any two hyperplanes is sufficiently large and that the points are well distributed on the union of the hyperplanes, applying spectral clustering to the affinity matrix $\boldsymbol{C}$ will in general yield the correct clustering.
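A minimal rendition of SASC-A might look as follows. This is our own sketch of the procedure just described, not the implementation of \cite{Vidal:PhD03}; it assumes \texttt{scikit-learn} is available for the spectral clustering step, and the helpers are the same as in the earlier sketches:
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement
from sklearn.cluster import SpectralClustering

def veronese(X, n):
    monos = list(combinations_with_replacement(range(X.shape[1]), n))
    return np.column_stack([np.prod(X[:, m], axis=1) for m in monos])

def gradient(c, x, n):
    D, g = len(x), np.zeros(len(x))
    for cj, m in zip(c, combinations_with_replacement(range(D), n)):
        for k in set(m):
            rest = list(m); rest.remove(k)
            g[k] += cj * m.count(k) * np.prod(x[rest])
    return g

rng = np.random.default_rng(4)
D, n = 3, 2
normals = [rng.standard_normal(D) for _ in range(n)]
X = np.vstack([v - (v @ b) / (b @ b) * b
               for b in normals for v in rng.standard_normal((60, D))])
X += 0.01 * rng.standard_normal(X.shape)     # noisy points near two hyperplanes

c = np.linalg.svd(veronese(X, n))[2][-1]     # approximate vanishing polynomial
G = np.vstack([gradient(c, x, n) for x in X])
G /= np.linalg.norm(G, axis=1, keepdims=True)
C = np.abs(G @ G.T)                          # the angle-based affinity above
labels = SpectralClustering(n_clusters=n, affinity='precomputed',
                            random_state=0).fit_predict(C)
print((labels[:60] == labels[0]).mean(), (labels[60:] == labels[60]).mean())
\end{verbatim}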
Even though SASC-A is much more robust in the presence of noise than purely algebraic methods for the case of a union of hyperplanes, it is fundamentally limited by the fact that, theoretically, it applies only to unions of hyperplanes. Indeed, if the orthogonal complement of a subspace $\mathcal{S}$ has dimension greater than $1$, there may be points $\boldsymbol{x}, \boldsymbol{x}' $ inside $\mathcal{S}$ such that the angle between $\nabla p|_{\boldsymbol{x}}$ and $\nabla p|_{\boldsymbol{x}'}$ is as large as $90^\circ$. In such instances, points associated to the same subspace may be weakly connected and thus there is no guarantee for the success of spectral clustering.
\subsection{The challenge} \label{subsection:challenges}
As the discussion so far suggests, the state of the art in ASC can be summarized as follows:
\begin{enumerate}
\item A complete closed form solution to the abstract subspace clustering problem (Problem \ref{dfn:AbstractSC}) exists and can be found using the polynomial differentiation algorithm implied by Theorem \ref{thm:ubASC}.
\item All known algorithmic variants of the polynomial differentiation algorithm are sensitive to noise, especially for subspaces of arbitrary dimensions.
\item The recursive ASC algorithm described in section \ref{subsection:RASC} does not in
general solve the abstract subspace clustering problem (Problem \ref{dfn:AbstractSC}), and is in addition sensitive to noise.
\item The spectral algebraic algorithm described in section \ref{subsection:SASC-A} is less sensitive to noise, but is theoretically justified only for unions of hyperplanes.
\end{enumerate}
The above list reveals the challenge that we will be addressing in the rest of this paper: Develop an ASC algorithm that solves the abstract subspace clustering problem for perfect data, while at the same time being robust to noisy data.
\section{Filtrated Algebraic Subspace Clustering - Overview}\label{section:geometricAASC}
This section provides an overview of our proposed \emph{Filtrated Algebraic Subspace Clustering} (FASC) algorithm, which conveys the geometry of the key idea of this paper while keeping technicalities at a minimum. To that end, let us pretend for a moment that we have access to the entire set $\mathcal{A}$, so that we can manipulate it via set operations such as taking its intersection with some other set.
Then the idea behind FASC is to construct a \emph{descending filtration} of the given subspace arrangement $\mathcal{A} \subset \mathbb{R}^D$, i.e., a sequence of inclusions of subspace arrangements, that starts with
$\mathcal{A}$ and terminates after a finite number $c$ of steps with one of the irreducible components $\mathcal{S}$ of $\mathcal{A}$:\footnote{We will also be using the notation
$\mathcal{A} =: \mathcal{A}_0 \leftarrow \mathcal{A}_1 \leftarrow \mathcal{A}_2 \leftarrow \cdots$,
where the arrows denote embeddings.}
\begin{align}
\mathcal{A} =: \mathcal{A}_0 \supset \mathcal{A}_1 \supset \mathcal{A}_2 \supset \cdots \supset \mathcal{A}_c=\mathcal{S}.
\end{align}
The mechanism for generating such a filtration is to construct a strictly descending filtration of
intermediate ambient
spaces, i.e.,
\begin{align}
\mathcal{V}_0 \supset \mathcal{V}_1 \supset \mathcal{V}_2 \supset \cdots, \label{eq:AmbientSpaces}
\end{align}
such that $\mathcal{V}_0=\mathbb{R}^D$, $\dim (\mathcal{V}_{s+1}) = \dim (\mathcal{V}_{s})-1$, and
each $\mathcal{V}_s$ contains the same fixed irreducible component $\mathcal{S}$ of $\mathcal{A}$.
Then the filtration of
subspace arrangements is obtained by intersecting $\mathcal{A}$ with the filtration of ambient spaces,
i.e.,
\begin{align}
\mathcal{A}_0 := \mathcal{A} \supset \mathcal{A}_1 : = \mathcal{A} \cap \mathcal{V}_1 \supset \mathcal{A}_2 := \mathcal{A} \cap \mathcal{V}_2 \supset \cdots.
\end{align} This can be seen equivalently as constructing a descending filtration of pairs
$(\mathcal{V}_s, \mathcal{A}_s)$, where $\mathcal{A}_s$ is a subspace arrangement of $\mathcal{V}_s$:
\begin{align}
(\mathbb{R}^{D},\mathcal{A}) \leftarrow (\mathcal{V}_1 \cong \mathbb{R}^{D-1}, \mathcal{A}_1) \leftarrow (\mathcal{V}_2 \cong \mathbb{R}^{D-2}, \mathcal{A}_2) \leftarrow \cdots .
\end{align}
But how can we construct a filtration of ambient spaces \eqref{eq:AmbientSpaces} that satisfies
the apparently strong condition $\mathcal{V}_s \supset \mathcal{S}, \, \forall s$? The answer lies at the heart of ASC: to construct
$\mathcal{V}_1$ pick a suitable polynomial $p_1$ vanishing on $\mathcal{A}$ and evaluate its gradient at a nonsingular point $\boldsymbol{x}$ of $\mathcal{A}$. Notice that $\boldsymbol{x}$ will lie in some irreducible component $\mathcal{S}_{\boldsymbol{x}}$ of $\mathcal{A}$. Then take $\mathcal{V}_1$ to be the hyperplane of $\mathbb{R}^D$ defined by the gradient of $p_1$ at $\boldsymbol{x}$. We know from Proposition \ref{prp:Grd} that $\mathcal{V}_1$ must contain $\mathcal{S}_{\boldsymbol{x}}$. To construct $\mathcal{V}_2$ we apply essentially the same procedure to the pair $(\mathcal{V}_1,\mathcal{A}_1)$: take a suitable polynomial $p_2$ that vanishes on $\mathcal{A}_1$, but does not vanish on $\mathcal{V}_1$, and take $\mathcal{V}_2$ to be the hyperplane of $\mathcal{V}_1$ defined by $\pi_{\mathcal{V}_1}\left(\nabla p_2|_{\boldsymbol{x}}\right)$. As we will show in section \ref{section:mfAASC}, it is always the case that $\pi_{\mathcal{V}_1}\left(\nabla p_2|_{\boldsymbol{x}}\right) \perp \mathcal{S}_{\boldsymbol{x}}$ and so $\mathcal{V}_2 \supset \mathcal{S}_{\boldsymbol{x}}$. Now notice that after precisely $c$ such steps, where $c$ is the codimension of $\mathcal{S}_{\boldsymbol{x}}$, $\mathcal{V}_c$ will be a $(D-c)$-dimensional linear subspace of $\mathbb{R}^D$ that by construction contains $\mathcal{S}_{\boldsymbol{x}}$. But $\mathcal{S}_{\boldsymbol{x}}$ is also a $(D-c)$-dimensional subspace and the only possibility is that $\mathcal{V}_c = \mathcal{S}_{\boldsymbol{x}}$. Observe also that this is precisely the step where the filtration naturally terminates, since there is no polynomial that vanishes on $\mathcal{S}_{\boldsymbol{x}}$ but does not vanish on $\mathcal{V}_c$. The relations between the intermediate ambient spaces and subspace arrangements are illustrated in the commutative diagram of \eqref{eq:commutative-diagram}. The filtration in \eqref{eq:commutative-diagram} will yield the irreducible component $\mathcal{S}:=\mathcal{S}_{\boldsymbol{x}}$ of $\mathcal{A}$ that contains the nonsingular point $\boldsymbol{x} \in \mathcal{A}$ that we started with. We will be referring to such a point as \emph{the reference point}. We can also take without loss of generality $\mathcal{S}_{\boldsymbol{x}}=\mathcal{S}_1$. Having identified $\mathcal{S}_1$, we can pick a nonsingular point $\boldsymbol{x}' \in \mathcal{A}-\mathcal{S}_{\boldsymbol{x}}$ and construct a filtration of $\mathcal{A}$ as above with reference point $\boldsymbol{x}'$. Such a filtration will terminate with the irreducible component $\mathcal{S}_{\boldsymbol{x}'}$ of $\mathcal{A}$ containing $\boldsymbol{x}'$, which without loss of generality we take to be $\mathcal{S}_2$. Picking a new reference point $\boldsymbol{x}'' \in \mathcal{A}-\mathcal{S}_{\boldsymbol{x}}\cup\mathcal{S}_{\boldsymbol{x}'}$ and so on, we can identify the entire list of irreducible components of $\mathcal{A}$, as described in Algorithm \ref{alg:geometricAASC}.
\begin{equation}
\begin{tikzcd} \label{eq:commutative-diagram}
\mathbb{R}^D \arrow[r,leftarrow,"\cong"]
& \mathcal{V}_0 \arrow[r,leftarrow] \arrow[d,leftarrow] & \mathcal{A}_0 \arrow[r,leftarrow] \arrow[d,leftarrow] & \mathcal{S}_{\boldsymbol{x}} \\
\mathbb{R}^{D-1} \arrow[r,leftarrow,"\cong"]
& \mathcal{V}_1 \arrow[r,leftarrow]\arrow[d,leftarrow] & \mathcal{A}_1 \arrow[r,leftarrow]\arrow[d,leftarrow] & \mathcal{S}_{\boldsymbol{x}} \\
\mathbb{R}^{D-2} \arrow[r,leftarrow,"\cong"]
& \mathcal{V}_2 \arrow[r,leftarrow]\arrow[d,leftarrow] & \mathcal{A}_2 \arrow[r,leftarrow]\arrow[d,leftarrow] & \mathcal{S}_{\boldsymbol{x}} \\
& \vdots \arrow[d,leftarrow] & \vdots \arrow[d,leftarrow] & \\
\mathbb{R}^{D-c+1} \arrow[r,leftarrow,"\cong"]
& \mathcal{V}_{c-1} \arrow[r,leftarrow]\arrow[d,leftarrow] & \mathcal{A}_{c-1} \arrow[r,leftarrow]\arrow[d,leftarrow] & \mathcal{S}_{\boldsymbol{x}} \\
\mathbb{R}^{D-c} \arrow[r,leftarrow,"\cong"]
& \mathcal{V}_c \arrow[r,leftarrow,"\cong"] & \mathcal{A}_c \arrow[r,leftarrow,"\cong"] & \mathcal{S}_{\boldsymbol{x}}
\end{tikzcd}
\end{equation}
\begin{algorithm} \caption{Filtrated Algebraic Subspace Clustering (FASC) - Geometric Version}\label{alg:geometricAASC} \begin{algorithmic}[1]
\Procedure{FASC}{$\mathcal{A}$}
\State $\mathfrak{L} \gets \emptyset$; $\mathcal{L} \gets \emptyset$;
\While{$\mathcal{A} - \mathcal{L} \neq \emptyset$}
\State pick a nonsingular point $\boldsymbol{x}$ in $\mathcal{A} - \mathcal{L}$;
\State $\mathcal{V} \gets \mathbb{R}^D$;
\While{$\mathcal{V} \cap \mathcal{A} \subsetneq \mathcal{V}$}
\State find polynomial $p$ that vanishes on $\mathcal{A}\cap \mathcal{V}$ but not on $\mathcal{V}$, s.t. $\nabla p|_{\boldsymbol{x}} \neq \boldsymbol{0}$;
\State let $\mathcal{V}$ be the orthogonal complement of $\pi_{\mathcal{V}}(\nabla p|_{\boldsymbol{x}})$ in $\mathcal{V}$;
\EndWhile
\State $\mathfrak{L} \gets \mathfrak{L} \cup
\left\{\mathcal{V}\right\}$; $\mathcal{L} \gets \mathcal{L} \cup \mathcal{V}$;
\EndWhile
\State \Return $\mathfrak{L}$;
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{eg}
Consider the setting of Examples \ref{eg:setup} and \ref{eg:degree2}.
Suppose that in the first filtration the algorithm picks as reference point
$\boldsymbol{x} \in \mathcal{S}_2 - \mathcal{S}_1 \cup \mathcal{S}_3$. Suppose further that the algorithm
picks the polynomial $p(x) = (\boldsymbol{b}_1^\top x) (\boldsymbol{f}^\top x)$, which
vanishes on $\mathcal{A}$ but certainly not on $\mathbb{R}^3$. Then the first ambient space
$\mathcal{V}_1$ of the filtration associated to $\boldsymbol{x}$ is constructed as $\mathcal{V}_1 = \Span(\nabla p|_{\boldsymbol{x}})^\perp$.
Since $\nabla p|_{\boldsymbol{x}} \cong \boldsymbol{f}$, this gives that $\mathcal{V}_1$ is precisely the plane of
$\mathbb{R}^3$ with normal vector $\boldsymbol{f}$. Then $\mathcal{A}_1$ is constructed as $\mathcal{A}_1 = \mathcal{A} \cap \mathcal{V}_1$, which consists of the union of three lines $\mathcal{S}_2 \cup \mathcal{S}_3 \cup \mathcal{S}_4$,
where $\mathcal{S}_4$ is the intersection of $\mathcal{V}_1$ with $\mathcal{S}_1$ (see Figs. \ref{fig:ghost} and \ref{fig:FiltrationStep}).
\begin{figure}[t]
\subfigure[]{\label{fig:ghost}
\centering
\begin{tikzpicture}[scale=0.8]
\filldraw[fill=lightgray] (-2,0,-2) -- (2,0,-2) -- (2,0,2) node[anchor=west]{$\mathcal{S}_1$} -- (-2,0,2) -- (-2,0,-2);
\filldraw[fill=gray] (-1.2,1.2,-0.6) -- (0,1.2,-1.2) -- (0,0,0) -- (-1.2,1.2,-0.6);
\draw (0.25,1.425,0.5)node[anchor=east]{$\boldsymbol{f}$};
\draw (-0.15,1.45,-1)node[anchor=east]{$\mathcal{H}_{23}$};
\draw[->] (-0.25,0.625,-0.5) -- (0.25,1.625,0.5);
\draw[->] (-1.5,0,1.5) -- (-1.5,1,1.5) node[anchor=east]{$\boldsymbol{b}_1$};
\draw (0,0,0) -- (0,1.5,-1.5) node[anchor=east]{$\mathcal{S}_2$};
\draw[densely dotted] (0,0,-0.5) -- (0,0.5,-0.5);
\draw[densely dotted] (0,0,0) -- (0,0,-1);
\draw (0,0,0) -- (-1.6,1.6,-0.8) node[anchor=west]{$\mathcal{S}_3$};
\draw[densely dotted] (0,0,0) -- (-1.6,0,-0.8);
\draw[densely dotted] (-1,0,-0.5) -- (-1,1,-0.5);
\draw (-1.6,0,0.8) -- (1.2,0,-0.6) node[anchor=west]{$\mathcal{S}_4$};
\end{tikzpicture}}
\subfigure[]{\label{fig:FiltrationStep}
\centering
\begin{tikzpicture}[scale=0.8]
\filldraw[fill=gray] (-1.2,1.2,-0.6) -- (0,1.2,-1.2) -- (0,0,0) -- (-1.2,1.2,-0.6);
\filldraw[fill=gray] (-1.2,1.2,-0.6) -- (0,-1.2,1.2) -- (0,0,0) -- (-1.2,1.2,-0.6);
\filldraw[fill=gray] (1.2,-1.2,0.6) -- (0,1.2,-1.2) -- (0,0,0) -- (-1.2,1.2,-0.6);
\filldraw[fill=gray] (1.2,-1.2,0.6) -- (0,-1.2,1.2) -- (0,0,0) -- (1.2,-1.2,0.6);
\draw (0.25,-0.625,0.5) node[anchor=north]{$\mathcal{V}_1^{(1)}$};
\draw (0,-1.5,1.5) -- (0,1.5,-1.5) node[anchor=east]{$\mathcal{S}_2$};
\draw (1.6,-1.6,0.8) -- (-1.6,1.6,-0.8) node[anchor=west]{$\mathcal{S}_3$};
\draw (-1.6,0,0.8) -- (1.2,0,-0.6) node[anchor=west]{$\mathcal{S}_4$};
\end{tikzpicture}}
\subfigure[]{\label{fig:FiltrationStepNormals}
\begin{tikzpicture}[scale=0.8]
\centering
\filldraw[fill=gray] (-1.2,1.2,-0.6) -- (0,1.2,-1.2) -- (0,0,0) -- (-1.2,1.2,-0.6);
\filldraw[fill=gray] (-1.2,1.2,-0.6) -- (0,-1.2,1.2) -- (0,0,0) -- (-1.2,1.2,-0.6);
\filldraw[fill=gray] (1.2,-1.2,0.6) -- (0,1.2,-1.2) -- (0,0,0) -- (-1.2,1.2,-0.6);
\filldraw[fill=gray] (1.2,-1.2,0.6) -- (0,-1.2,1.2) -- (0,0,0) -- (1.2,-1.2,0.6);
\draw[->] (0,0.5,-0.5) -- (1,0.25,-0.75) node[anchor=south]{$\boldsymbol{b}_2$};
\draw[->] (1.2,-1.2,0.6) -- (1.7,-0.95,0.1) node[anchor=north]{$\boldsymbol{b}_3$};
\draw[->] (1,0,-0.5) -- (1.25,-0.625,0) node[anchor=west]{$\boldsymbol{b}_4$};
\draw (0.25,-0.625,0.5) node[anchor=north]{$\mathcal{V}_1^{(1)}$};
\draw (0,-1.5,1.5) -- (0,1.5,-1.5) node[anchor=east]{$\mathcal{S}_2$};
\draw (1.6,-1.6,0.8) -- (-1.6,1.6,-0.8) node[anchor=west]{$\mathcal{S}_3$};
\draw (-1.6,0,0.8) -- (1.2,0,-0.6) node[anchor=west]{$\mathcal{S}_4$};
\end{tikzpicture}}
\caption{\protect\subref{fig:ghost}: The plane spanned by lines $\mathcal{S}_2$ and $\mathcal{S}_3$ intersects the plane $\mathcal{S}_1$ at the line $\mathcal{S}_4$. \subref{fig:FiltrationStep}: Intersection of the original subspace arrangement $\mathcal{A}=\mathcal{S}_1 \cup \mathcal{S}_2 \cup \mathcal{S}_3$ with the intermediate ambient space $\mathcal{V}_1^{(1)}$, giving rise to the intermediate subspace arrangement $\mathcal{A}_1^{(1)}=\mathcal{S}_2 \cup \mathcal{S}_3 \cup \mathcal{S}_4$. \subref{fig:FiltrationStepNormals}: Geometry of the unique degree-$3$ polynomial $p(x)=(\boldsymbol{b}_2^\top x)(\boldsymbol{b}_3^\top x)(\boldsymbol{b}_4^\top x)$ that vanishes on $\mathcal{S}_2 \cup \mathcal{S}_3 \cup \mathcal{S}_4$ as a variety of the intermediate ambient space $\mathcal{V}_1^{(1)}$. $\boldsymbol{b}_i \perp \mathcal{S}_i, i=2,3,4$.}
\end{figure}
Since $\mathcal{A}_1 \subsetneq \mathcal{V}_1$, the algorithm takes one more step in the filtration. Suppose that the algorithm picks the polynomial $q(x)=(\boldsymbol{b}_2^\top x) (\boldsymbol{b}_3^\top x) (\boldsymbol{b}_4^\top x)$, where $\boldsymbol{b}_i$ is the unique (up to scale) vector of $\mathcal{V}_1$ that is orthogonal to $\mathcal{S}_i$, for $i=2,3,4$ (see Fig. \ref{fig:FiltrationStepNormals}).
Because of the general position assumption, none of the lines
$\mathcal{S}_2, \mathcal{S}_3, \mathcal{S}_4$ is orthogonal to another. Consequently,
$\nabla q|_{\boldsymbol{x}}= (\boldsymbol{b}_3^\top \boldsymbol{x})(\boldsymbol{b}_4^\top \boldsymbol{x}) \boldsymbol{b}_2 \neq 0$. Moreover, since $\boldsymbol{b}_2 \in \mathcal{V}_1$, we have that $\pi_{\mathcal{V}_1}\left(\nabla q|_{\boldsymbol{x}}\right)=\nabla q|_{\boldsymbol{x}} \cong \boldsymbol{b}_2$, whose orthogonal complement in $\mathcal{V}_1$ is a line $\mathcal{V}_2$ that must contain $\mathcal{S}_2$. Intersecting $\mathcal{A}_1$ with $\mathcal{V}_2$ we obtain $\mathcal{A}_2 = \mathcal{A}_1 \cap \mathcal{V}_2 = \mathcal{V}_2$ and the filtration terminates, its output being the irreducible component $\mathcal{S}_{\boldsymbol{x}} = \mathcal{S}_2 = \mathcal{V}_2$ of $\mathcal{A}$ associated to the reference point $\boldsymbol{x}$.
Continuing, the algorithm now picks a new reference point $\boldsymbol{x}' \in \mathcal{A} - \mathcal{S}_{\boldsymbol{x}}$, say $\boldsymbol{x}' \in \mathcal{S}_1$. A process similar to the one above will identify $\mathcal{S}_1$ as the intermediate ambient space $\mathcal{V}_1=\mathcal{S}_{\boldsymbol{x}'}$ of the filtration associated to $\boldsymbol{x}'$ that arises after one step. Then a third reference point will be chosen as $\boldsymbol{x}'' \in \mathcal{A} - \mathcal{S}_{\boldsymbol{x}} \cup \mathcal{S}_{\boldsymbol{x}'}$ and $\mathcal{S}_3$ will be identified as the intermediate ambient space $\mathcal{V}_2=\mathcal{S}_{\boldsymbol{x}''}$ of the filtration associated to $\boldsymbol{x}''$ that arises after two steps. Since the set $\mathcal{A} - \mathcal{S}_{\boldsymbol{x}} \cup \mathcal{S}_{\boldsymbol{x}'} \cup \mathcal{S}_{\boldsymbol{x}''}$ is empty, the algorithm will terminate and return $\{\mathcal{S}_{\boldsymbol{x}}, \mathcal{S}_{\boldsymbol{x}'},\mathcal{S}_{\boldsymbol{x}''}\}$, which is, up to a permutation, a decomposition of the original subspace arrangement into its constituent subspaces.
\end{eg}
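To see the filtration run on actual data, the following Python sketch (entirely ours: a numerical caricature of Algorithm \ref{alg:geometricAASC} on a sampled version of the running example, not the FSASC algorithm of section \ref{section:FSASC}; all tolerances and helper names are ours) performs one filtration. One honest caveat: since a finite sample generically contains no points on the ghost line $\mathcal{S}_4$, the second step below already finds a vanishing polynomial of degree $2$ rather than the degree-$3$ polynomial $q$ above; section \ref{section:mfAASC} treats such matters rigorously.
\begin{verbatim}
import numpy as np
from itertools import combinations_with_replacement

def veronese(X, n):
    monos = list(combinations_with_replacement(range(X.shape[1]), n))
    return np.column_stack([np.prod(X[:, m], axis=1) for m in monos])

def gradient(c, x, n):
    D, g = len(x), np.zeros(len(x))
    for cj, m in zip(c, combinations_with_replacement(range(D), n)):
        for k in set(m):
            rest = list(m); rest.remove(k)
            g[k] += cj * m.count(k) * np.prod(x[rest])
    return g

def one_filtration(X, x_ref, m, tol=1e-7):
    # returns an orthonormal basis of the terminating ambient space,
    # i.e., of the component of the arrangement containing x_ref
    B = np.eye(X.shape[1])                   # columns: basis of current V
    while True:
        near = np.linalg.norm(X - X @ B @ B.T, axis=1) < tol  # samples of A cap V
        Y, y = X[near] @ B, x_ref @ B        # coordinates in V
        found = False
        for k in range(1, m + 1):            # minimal-degree vanishing polynomial
            Vk = veronese(Y, k)
            r = np.linalg.matrix_rank(Vk, tol=tol)
            if r == Vk.shape[1]:
                continue                     # nothing of degree k vanishes on Y
            for c in np.linalg.svd(Vk)[2][r:]:
                g = gradient(c, y, k)
                if np.linalg.norm(g) > tol:  # need a nonzero gradient at x_ref
                    g /= np.linalg.norm(g)
                    # replace V by its hyperplane with normal g
                    B = B @ np.linalg.svd(np.outer(g, g))[2][1:].T
                    found = True
                    break
            if found:
                break
        if not found:
            return B                         # V cap A = V: the filtration stops

rng = np.random.default_rng(5)
plane = np.linalg.qr(rng.standard_normal((3, 2)))[0]
line2, line3 = rng.standard_normal(3), rng.standard_normal(3)
X = np.vstack([rng.standard_normal((60, 2)) @ plane.T,
               np.outer(rng.standard_normal(30), line2),
               np.outer(rng.standard_normal(30), line3)])

B = one_filtration(X, X[60], m=3)            # reference point on the line S_2
print(B.shape, abs(B[:, 0] @ line2) / np.linalg.norm(line2))  # (3, 1), ~1.0
\end{verbatim}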
Strictly speaking, Algorithm \ref{alg:geometricAASC} is not a valid algorithm in the
computer-science-theoretic sense, since it takes as input an infinite set $\mathcal{A}$, and
it involves operations such as checking equality of the infinite sets $\mathcal{V}$ and $\mathcal{A} \cap \mathcal{V}$. Moreover, the reader may reasonably ask:
\begin{enumerate}
\item Why is it the case that through the entire filtration associated with reference point $\boldsymbol{x}$ we can always find polynomials $p$ such that $\nabla p|_{\boldsymbol{x}} \neq 0$?
\item Why is it true that even if $\nabla p|_{\boldsymbol{x}} \neq 0$ then $\pi_{\mathcal{V}}(\nabla p|_{\boldsymbol{x}}) \neq 0$?
\end{enumerate} We address all of the above issues in the next section, which is devoted to rigorously establishing the theory of the FASC algorithm.\footnote{At this point the reader unfamiliar with algebraic geometry is encouraged to read the appendices before proceeding.}
\section{Filtrated Algebraic Subspace Clustering - Theory} \label{section:mfAASC}
This section formalizes the concepts outlined in section \ref{section:geometricAASC}. Section \ref{subsection:NatureInput} makes precise the notion of a set $\mathcal{X}$ being in \emph{general position inside a subspace arrangement $\mathcal{A}$}. Sections \ref{subsection:FirstStepFiltration}-\ref{subsection:MultipleSteps} establish the theory of a single filtration of a finite subset $\mathcal{X}$ lying in general position inside a transversal subspace arrangement $\mathcal{A}$, and culminate with the \emph{Algebraic Descending Filtration (ADF)} algorithm for identifying a single irreducible component of $\mathcal{A}$ (Algorithm \ref{alg:ADF}) and the theorem establishing its correctness (Theorem \ref{thm:ADF}). The ADF algorithm naturally leads us to the core contribution of this paper in section \ref{subsection:All}, which is the FASC algorithm for identifying all irreducible components of $\mathcal{A}$ (Algorithm \ref{alg:AASC}) and the theorem establishing its correctness (Theorem \ref{thm:AASC}).
\subsection{Data in general position in a subspace arrangement}\label{subsection:NatureInput}
From an algebraic geometric point of view, a union $\mathcal{A}$ of linear subspaces is the same as the set $\mathcal{I}_\mathcal{A}$ of polynomial functions that vanish on $\mathcal{A}$. However, from a computer-science-theoretic point of view, $\mathcal{A}$ and $\mathcal{I}_{\mathcal{A}}$ are quite different: $\mathcal{A}$ is an infinite set and hence it can not be given as input to any algorithm. On the other hand, even though $\mathcal{I}_{\mathcal{A}}$ is also an infinite set, it is generated as an \emph{ideal} by a finite set of polynomials, which can certainly serve as input to an algorithm. That said, from a machine-learning point of view, both $\mathcal{A}$ and $\mathcal{I}_{\mathcal{A}}$ are often unknown, and one is usually given only a finite set of points $\mathcal{X}$ in $\mathcal{A}$, from which we wish to compute the irreducible components $\mathcal{S}_1, \dots, \mathcal{S}_n$ of $\mathcal{A}$.
To lend ourselves the power of the algebraic-geometric machinery, while providing an algorithm of interest to the machine learning and computer science communities,
we adopt the following setting. The input to our algorithm will be the pair $(\mathcal{X},m)$, where $\mathcal{X}$ is a finite subset of an unknown union of linear subspaces $\mathcal{A}:=\bigcup_{i=1}^n\mathcal{S}_i$ of $\mathbb{R}^D$, and $m$ is an upper bound on $n$. To make the problem of recovering the decomposition $\mathcal{A}=\bigcup_{i=1}^n \mathcal{S}_i$ from $\mathcal{X}$ well-defined, it is necessary that $\mathcal{A}$ be uniquely identifiable from $\mathcal{X}$. In other words, $\mathcal{X}$ must be in general position inside $\mathcal{A}$, as defined next.
\begin{definition}[Points in general position] \label{dfn:GeneralPosition}
Let $\mathcal{X}=\left\{\boldsymbol{x}_1,\dots,\boldsymbol{x}_N\right\}$ be a finite subset of a subspace arrangement $\mathcal{A}=\mathcal{S}_1 \cup \cdots \cup \mathcal{S}_n$. We say that $\mathcal{X}$ is in general position in $\mathcal{A}$ with respect to degree $m$, if $m \ge n$ and $\mathcal{A} = \mathcal{Z}(\mathcal{I}_{\mathcal{X},m})$, i.e., if $\mathcal{A}$ is precisely the zero locus of all homogeneous polynomials of degree $m$ that vanish~on~$\mathcal{X}$.
\end{definition}
The intuitive geometric condition
$\mathcal{A} = \mathcal{Z}(\mathcal{I}_{\mathcal{X},m})$ of Definition \ref{dfn:GeneralPosition} guarantees that there are no \emph{spurious} polynomials of degree less than or equal to $m$ that vanish on $\mathcal{X}$.
\begin{proposition} \label{prp:GeneralPositionLowDegrees}
Let $\mathcal{X}$ be a finite subset of an arrangement $\mathcal{A}$ of $n$ linear subspaces of $\mathbb{R}^D$. Then $\mathcal{X}$ lies in general position inside $\mathcal{A}$ with respect to degree $m$ if and only if
$\mathcal{I}_{\mathcal{A},k} = \mathcal{I}_{\mathcal{X},k}, \, \, \, \forall k \le m$.
\end{proposition}
\begin{proof}
$(\Rightarrow)$ We first show that $\mathcal{I}_{\mathcal{A},m} = \mathcal{I}_{\mathcal{X},m}$. Since $\mathcal{A} \supset \mathcal{X}$, every homogeneous polynomial of degree $m$ that vanishes on $\mathcal{A}$ must vanish on $\mathcal{X}$, i.e., $\mathcal{I}_{\mathcal{A},m} \subset \mathcal{I}_{\mathcal{X},m}$. Conversely, the hypothesis $\mathcal{A} = \mathcal{Z}(\mathcal{I}_{\mathcal{X},m})$ implies that every polynomial of $\mathcal{I}_{\mathcal{X},m}$ must vanish on $\mathcal{A}$, i.e., $\mathcal{I}_{\mathcal{A},m} \supset \mathcal{I}_{\mathcal{X},m}$.
Now let $k < m$. As before, since $\mathcal{A} \supset \mathcal{X}$, we must have $\mathcal{I}_{\mathcal{A},k} \subset \mathcal{I}_{\mathcal{X},k}$. For the converse direction, suppose for the sake of contradiction that there exists some $p \in \mathcal{I}_{\mathcal{X},k}$ that does not vanish on $\mathcal{A}$. This means that there must exist an irreducible component of $\mathcal{A}$, say $\mathcal{S}_1$, such that $p$ does not vanish on $\mathcal{S}_1$. Let $\boldsymbol{\zeta}$ be a vector of $\mathbb{R}^D$ non-orthogonal to $\mathcal{S}_1$, i.e., the linear form $g(x) = \boldsymbol{\zeta}^\top x$ does not vanish on $\mathcal{S}_1$. Since $p$ vanishes on $\mathcal{X}$, so does the degree-$m$ polynomial $g^{m-k} p$, i.e., $g^{m-k} p \in \mathcal{I}_{\mathcal{X},m}$. But we have already shown that $\mathcal{I}_{\mathcal{X},m} = \mathcal{I}_{\mathcal{A},m}$, and so it must be the case that $g^{m-k} p \in \mathcal{I}_{\mathcal{A},m}$. Since $g^{m-k} p$ vanishes on $\mathcal{A}$, it must vanish on $\mathcal{S}_1$, i.e., $g^{m-k} p \in \mathcal{I}_{\mathcal{S}_1}$. Since by hypothesis $p \not\in \mathcal{I}_{\mathcal{S}_1}$, and since $\mathcal{I}_{\mathcal{S}_1}$ is a prime ideal (see Proposition \ref{prp:ISprime}), it must be the case that $g^{m-k} \in \mathcal{I}_{\mathcal{S}_1}$. But again because $\mathcal{I}_{\mathcal{S}_1}$ is a prime ideal, we must have that $g \in \mathcal{I}_{\mathcal{S}_1}$. But this is true if and only if $\boldsymbol{\zeta} \in \mathcal{S}_1^\perp$, which contradicts the definition of $\boldsymbol{\zeta}$.
$(\Leftarrow)$ Suppose $\mathcal{I}_{\mathcal{A},k} = \mathcal{I}_{\mathcal{X},k}, \, \, \, \forall k \le m$. We will show that $\mathcal{A} = \mathcal{Z}(\mathcal{I}_{\mathcal{X},m})$. But this is the same as showing that $\mathcal{A} = \mathcal{Z}(\mathcal{I}_{\mathcal{A},m})$, which is true, by Proposition \ref{prp:Regularity}.
\end{proof}
The next Proposition ensures the existence of points in general position with respect to any degree $m \ge n$.
\begin{proposition}
Let $\mathcal{A}$ be an arrangement of $n$ linear subspaces of $\mathbb{R}^D$ and let $m$ be any integer $\ge n$. Then there exists a finite subset $\mathcal{X} \subset \mathcal{A}$ that is in general position inside $\mathcal{A}$ with respect to degree $m$.
\end{proposition}
\begin{proof}
By Proposition \ref{prp:Regularity}, $\mathcal{I}_{\mathcal{A}}$ is generated by polynomials of degree $\le m$. Then, by Theorem 2.9 in \cite{Ma:SIAM08}, there exists a finite set $\mathcal{X} \subset \mathcal{A}$ such that $\mathcal{I}_{\mathcal{A},k} = \mathcal{I}_{\mathcal{X},k}, \, \, \, \forall k \le m$, which concludes the proof in view of Proposition \ref{prp:GeneralPositionLowDegrees}.
\end{proof}
Notice that there is a price to be paid by requiring $\mathcal{X}$ to be in general position, which is that we need the cardinality of $\mathcal{X}$ to be artificially large, especially when $m-n$ is large. In particular, since the dimension of $\mathcal{I}_{\mathcal{X}, m}$ must match the dimension of $\mathcal{I}_{\mathcal{A}, m}$, the cardinality of $\mathcal{X}$ must be at least $\mathcal{M}_m(D) - \dim (\mathcal{I}_{\mathcal{A}, m})$. For instance, in Example \ref{eg:setup}, where $D=3$ and $m=n=3$, this lower bound is $\mathcal{M}_3(3)-\dim(\mathcal{I}_{\mathcal{A},3})=10-4=6$ points.
The next result will be useful in the sequel.
\begin{lemma}\label{lem:GeneralPosition}
Suppose that $\mathcal{X}$ is in general position inside $\mathcal{A}$ with respect to degree $m$. Let $n'<n$. Then the set $\mathcal{X}^{(n')}:=\mathcal{X}-\bigcup_{i=1}^{n'}\mathcal{X}_i$, where $\mathcal{X}_i := \mathcal{X} \cap \mathcal{S}_i$, lies in general position inside the subspace arrangement $\mathcal{A}^{(n')}:=\mathcal{S}_{n'+1} \cup \cdots \cup \mathcal{S}_n$ with respect to degree $m-n'$.
\end{lemma}
\begin{proof}
We begin by noting that $m-n'$ is an upper bound on the number of subspaces
of the arrangement $\mathcal{A}^{(n')}$. According
to Proposition \ref{prp:GeneralPositionLowDegrees}, it is enough to prove that a homogeneous
polynomial $p$ of degree less than or equal to $m-n'$ vanishes on $\mathcal{X}^{(n')}$ if and only if it vanishes on $\mathcal{A}^{(n')}$.
So let $p$ be a homogeneous polynomial of degree less than or equal to $m-n'$. If
$p$ vanishes on $\mathcal{A}^{(n')}$, then it certainly vanishes on $\mathcal{X}^{(n')}$. It remains to prove the converse. So suppose that $p$ vanishes on
$\mathcal{X}^{(n')}$. Suppose that for each $i=1,\dots,n'$ we have a vector $\boldsymbol{\zeta}_i \perp \mathcal{S}_i$, such that $\boldsymbol{\zeta}_i \not\perp \mathcal{S}_{n'+1},\dots,\mathcal{S}_n$.
Next, define the polynomial $r(x)=(\boldsymbol{\zeta}_1^\top x)\cdots (\boldsymbol{\zeta}_{n'}^\top x) p(x)$.
Then $r$ has degree $\le m$ and vanishes on $\mathcal{X}$. Since $\mathcal{X}$ is in general position inside $\mathcal{A}$, $r$ must vanish on $\mathcal{A}$. For the sake of contradiction suppose that $p$ does not
vanish on $\mathcal{A}^{(n')}$. Then $p$ does not vanish on, say, $\mathcal{S}_{n}$.
On the other hand $r$ does vanish on $\mathcal{S}_{n}$, hence $r \in \mathcal{I}_{\mathcal{S}_n}$ or equivalently $(\boldsymbol{\zeta}_1^\top x)\cdots (\boldsymbol{\zeta}_{n'}^\top x) p(x) \in \mathcal{I}_{\mathcal{S}_n}$. Since $\mathcal{I}_{\mathcal{S}_n}$ is a prime ideal we must have either $\boldsymbol{\zeta}_i^\top x \in \mathcal{I}_{\mathcal{S}_n}$ for some $i \in [n']$ or
$p \in \mathcal{I}_{\mathcal{S}_n}$. Now, the latter can not be true by hypothesis, thus we must
have $\boldsymbol{\zeta}_i^\top x \in \mathcal{I}_{\mathcal{S}_n}$ for some $i \in [n']$. But this implies that $\boldsymbol{\zeta}_i \perp \mathcal{S}_n$, which contradicts the hypothesis on $\boldsymbol{\zeta}_i$. Hence it must be
the case that $p$ vanishes on $\mathcal{A}^{(n')}$.
To complete the proof we show that such vectors $\boldsymbol{\zeta}_i,i=1,\dots,n'$ always exist. It is enough to prove the existence of $\boldsymbol{\zeta}_1$. If every vector of $\mathbb{R}^D$ orthogonal to $\mathcal{S}_1$ were orthogonal to, say, $\mathcal{S}_{n'+1}$, then we would have that $\mathcal{S}_1^{\perp} \subset \mathcal{S}_{n'+1}^\perp$, or equivalently, $\mathcal{S}_1 \supset \mathcal{S}_{n'+1}$, which is impossible because the $\mathcal{S}_i$ are distinct irreducible components of $\mathcal{A}$ and none of them contains another. Hence, for each $j > n'$, the vectors of $\mathcal{S}_1^\perp$ that are orthogonal to $\mathcal{S}_j$ form a proper subspace of $\mathcal{S}_1^\perp$, and since $\mathcal{S}_1^\perp$ is not the union of finitely many proper subspaces, a vector $\boldsymbol{\zeta}_1$ avoiding all of them exists.
\end{proof}
\begin{remark}
Notice that the notion of points $\mathcal{X}$ lying in general position inside a subspace arrangement $\mathcal{A}$ is independent of the notion of transversality of $\mathcal{A}$ (Definition \ref{dfn:transversal}). Nevertheless, to facilitate the technical analysis by avoiding degenerate cases of subspace arrangements, in the rest of section \ref{section:mfAASC} we will assume that $\mathcal{A}$ is transversal. For a geometric interpretation of transversality as well as examples, the reader is encouraged to consult Appendix \ref{appendix:SA}.
\end{remark}
\subsection{Constructing the first step of a filtration} \label{subsection:FirstStepFiltration}
We will now show how to construct the first step of a descending filtration associated with a single irreducible component of $\mathcal{A}$, as in \eqref{eq:commutative-diagram}. Once again, we are given the pair $(\mathcal{X}, m)$, where $\mathcal{X}$ is a finite set in general position inside $\mathcal{A}$ with respect to degree $m$, $\mathcal{A}$ is transversal, and $m$ is an upper bound on the number $n$ of irreducible components of $\mathcal{A}$ (section \ref{subsection:NatureInput}).
To construct the first step of the filtration, we need to find a first hyperplane $\mathcal{V}_{1}$ of $\mathbb{R}^D$ that contains some irreducible component $\mathcal{S}_i$ of $\mathcal{A}$. According to Proposition \ref{prp:Grd}, it would be enough to have a polynomial $p_1$ that vanishes on the irreducible component $\mathcal{S}_i$ together with a point $\boldsymbol{x} \in \mathcal{S}_i$. Then
$\nabla p_1|_{\boldsymbol{x}}$ would be the normal to a hyperplane $\mathcal{V}_{1}$ containing $\mathcal{S}_i$.
Since every polynomial that vanishes on $\mathcal{A}$ necessarily vanishes on $\mathcal{S}_i, \forall i=1,\dots,n$, a reasonable choice is a vanishing polynomial of \emph{minimal degree} $k$, i.e., some $0\neq p_1 \in \mathcal{I}_{\mathcal{A},k}$, where $k$ is the smallest degree at which $\mathcal{I}_{\mathcal{A}}$ is non-zero. Since $\mathcal{X}$ is assumed in general position in $\mathcal{A}$ with respect to degree $m$, by Proposition \ref{prp:GeneralPositionLowDegrees} we will have $\mathcal{I}_{\mathcal{A},k} = \mathcal{I}_{\mathcal{X},k}$, and so our $p_1$ can be computed as an element of the right null space of the embedded data matrix $\nu_k(\mathcal{X})$. The next Lemma ensures that given any such $p_1$, there is always a point $\boldsymbol{x}$ in $\mathcal{X}$ such that $\nabla p_1|_{\boldsymbol{x}} \neq 0$.
\begin{lemma} \label{lem:existsx}
Let $0 \neq p_1 \in \mathcal{I}_{\mathcal{X},k}$ be a vanishing polynomial of minimal degree. Then there exists
$0 \neq \boldsymbol{x} \in \mathcal{X} $
such that $\nabla p_1 |_{\boldsymbol{x}} \neq 0$, and moreover, without loss of generality $\boldsymbol{x} \in \mathcal{S}_1 - \bigcup_{i > 1} \mathcal{S}_{i}$.
\end{lemma}
\begin{proof}
We first establish the existence of a point $\boldsymbol{x} \in \mathcal{X}$ such that $\nabla p_1|_{\boldsymbol{x}} \neq \boldsymbol{0}$. For the sake of contradiction, suppose that no such $\boldsymbol{x} \in \mathcal{X}$ exists. Since $0 \neq p_1 \in \mathcal{I}_{\mathcal{X},k}$, $p_1$ cannot be a constant polynomial, and so there exists some $j \in [D]$ such that the degree-$(k-1)$ polynomial $\frac{\partial p_1}{\partial x_{j}}$ is not the zero polynomial. Now, by hypothesis $\nabla p_1\big|_{\boldsymbol{x}}=\boldsymbol{0}, \, \forall \boldsymbol{x} \in \mathcal{X}$, hence $\frac{\partial p_1}{\partial x_j}\big|_{\boldsymbol{x}}=0, \, \forall \boldsymbol{x} \in \mathcal{X}$. But then $0 \neq \frac{\partial p_1}{\partial x_j} \in \mathcal{I}_{\mathcal{X},k-1}$, which would contradict the hypothesis that $k$ is the smallest index such that $\mathcal{I}_{\mathcal{X},k} \neq 0$. Hence there exists $\boldsymbol{x} \in \mathcal{X}$ such that $\nabla p_1|_{\boldsymbol{x}} \neq \boldsymbol{0}$. To show that $\boldsymbol{x}$ can be chosen to be non-zero, note that if $k=1$, then $\nabla p_1$ is a constant vector and we can take $\boldsymbol{x}$
to be any non-zero element of $\mathcal{X}$. If $k>1$ then $\nabla p_1|_{\boldsymbol{0}}=\boldsymbol{0}$ and so $\boldsymbol{x}$ must necessarily be different from zero.
Next, we establish that $\boldsymbol{x} \in \mathcal{S}_1 - \bigcup_{i > 1} \mathcal{S}_{i}$.
Without loss of generality we can assume that $\boldsymbol{x} \in \mathcal{X}_1:=\mathcal{X} \cap \mathcal{S}_1$. For the sake of contradiction, suppose that $\boldsymbol{x} \in \mathcal{S}_{1} \cap \mathcal{S}_{i}$ for some $i > 1$. Since $\boldsymbol{x} \neq \boldsymbol{0}$, there is some index $j \in [D]$ such that the $j^{th}$ coordinate of $\boldsymbol{x}$, denoted by $\chi_j$, is different from zero. Define $g(x) := x_j^{n-k} p_1(x)$. Then $g \in \mathcal{I}_{\mathcal{X},n}$ and by the general position assumption we also have that $g \in \mathcal{I}_{\mathcal{A},n}$. Since $\mathcal{A}$ is assumed transversal, by Theorem \ref{thm:I=J}, $g$ can be written in the form
\begin{align}
g = \sum_{r_i \in [c_i], \, i \in [n]} c_{r_1,\dots,r_n} l_{r_1,1} \cdots l_{r_n,n}, \label{eq:p_m}
\end{align}
where $c_{r_1,\dots,r_n} \in \mathbb{R}$ is a scalar coefficient, $l_{r_i,i}$ is a linear form vanishing on $\mathcal{S}_i$, and the summation runs over all multi-indices $(r_1,\dots,r_n) \in [c_1]\times \cdots \times [c_n]$. Then evaluating the gradient of the expression on the right of \eqref{eq:p_m} at $\boldsymbol{x}$, and using the hypothesis that $\boldsymbol{x} \in \mathcal{S}_{1} \cap \mathcal{S}_{i}$ for some $i>1$, we see that $\nabla g|_{\boldsymbol{x}}=\boldsymbol{0}$. However, evaluating the gradient of $g$ at $\boldsymbol{x}$ from the formula $g(x) := x_j^{n-k} p_1(x)$, we get $\nabla g|_{\boldsymbol{x}} = \chi_j^{n-k} \nabla p_1|_{\boldsymbol{x}} \neq \boldsymbol{0}$. This contradiction implies that the hypothesis $\boldsymbol{x} \in \mathcal{S}_1 \cap \mathcal{S}_i$ for some $i>1$ cannot be true, i.e., $\boldsymbol{x}$ lies only in the irreducible component $\mathcal{S}_1$.
\end{proof}
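For concreteness, the search for a minimal-degree vanishing polynomial $p_1$ (computed from the right null space of the embedded data matrix $\nu_k(\mathcal{X})$, as described before Lemma \ref{lem:existsx}) can be sketched in a few lines of \texttt{numpy}. The helper names, the monomial ordering, and the numerical tolerance \texttt{tol} are our own illustrative choices, and the sketch assumes exact (noiseless) data, so that a rank drop of $\nu_k(\mathcal{X})$ indicates a genuine vanishing polynomial:
\begin{verbatim}
import itertools
import numpy as np

def veronese(X, k):
    # Degree-k Veronese embedding: one row per point of X (an N x D
    # array), one column per degree-k monomial in the D coordinates.
    N, D = X.shape
    exps = list(itertools.combinations_with_replacement(range(D), k))
    V = np.ones((N, len(exps)))
    for j, e in enumerate(exps):
        for idx in e:
            V[:, j] *= X[:, idx]
    return V

def minimal_vanishing_polynomial(X, m, tol=1e-9):
    # Scan degrees k = 1, ..., m for the first nontrivial right null
    # space of nu_k(X); return the degree k and a coefficient vector
    # of a vanishing polynomial p_1 over the monomial basis.
    for k in range(1, m + 1):
        Vk = veronese(X, k)
        _, s, Vt = np.linalg.svd(Vk, full_matrices=True)
        rank = int((s > tol * s[0]).sum())
        if rank < Vk.shape[1]:
            return k, Vt[-1]
    return None
\end{verbatim}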
Using the notation established so far and setting $\boldsymbol{b}_1=\nabla p_1|_{\boldsymbol{x}}$, the hyperplane of $\mathbb{R}^D$ given by $\mathcal{V}_1 = \Span(\boldsymbol{b}_1)^\perp=\mathcal{Z}(\boldsymbol{b}_1^\top x)$ contains the irreducible component of $\mathcal{A}$ associated with the reference point $\boldsymbol{x}$, i.e., $\mathcal{V}_1 \supset \mathcal{S}_1$. Then we can define a subspace sub-arrangement $\mathcal{A}_{1}$ of $\mathcal{A}$ by
\begin{align}
\mathcal{A}_{1} := \mathcal{A} \cap \mathcal{V}_{1} = \mathcal{S}_1 \cup (\mathcal{S}_2 \cap \mathcal{V}_{1}) \cup \cdots \cup (\mathcal{S}_n \cap \mathcal{V}_{1}).
\end{align} Observe that $\mathcal{A}_{1}$ can be viewed as a subspace arrangement of $\mathcal{V}_{1}$, since $\mathcal{A}_{1} \subset \mathcal{V}_{1}$ (see also the commutative diagram of eq. \eqref{eq:commutative-diagram}). Certainly, our algorithm cannot directly manipulate the infinite sets $\mathcal{A}$ and $\mathcal{V}_1$. Nevertheless, these sets are algebraic varieties and as a consequence we can perform their intersection in the algebraic domain. That is, we can obtain a set of polynomials defining $\mathcal{A} \cap \mathcal{V}_1$, as shown next.\footnote{Lemma \ref{lem:GeneratorsIntersection} is a special case of Proposition \ref{prp:VarietiesIntersection}.}
\begin{lemma} \label{lem:GeneratorsIntersection}
$\mathcal{A}_1:=\mathcal{A} \cap \mathcal{V}_1$ is the zero set of the ideal generated by $\mathcal{I}_{\mathcal{X}, m}$ and $\boldsymbol{b}_1^\top x$, i.e.,
\begin{align}
\mathcal{A}_1=\mathcal{Z} \left(\mathfrak{a}_1 \right), \, \, \, \mathfrak{a}_1:=\langle \mathcal{I}_{\mathcal{X}, m}\rangle+\langle \boldsymbol{b}_1^\top x \rangle.
\end{align}
\end{lemma}
\begin{proof}
$(\Rightarrow):$ We will show that $\mathcal{A}_1 \subset \mathcal{Z} \left(\mathfrak{a}_1 \right)$. Let $w$ be a polynomial of $\mathfrak{a}_1$. Then by definition of $\mathfrak{a}_1$, $w$ can be written as $w=w_1+w_2$, where $w_1 \in \langle\mathcal{I}_{\mathcal{X}, m}\rangle$ and $w_2 \in \langle\boldsymbol{b}_1^\top x \rangle$. Now take any point $\boldsymbol{y} \in \mathcal{A}_1$. Since $\boldsymbol{y} \in \mathcal{A}$, and $\mathcal{I}_{\mathcal{X},m} = \mathcal{I}_{\mathcal{A},m}$, we must have $w_1(\boldsymbol{y})=0$. Since $\boldsymbol{y} \in \mathcal{V}_1$, we must have that $w_2(\boldsymbol{y})=0$. Hence $w(\boldsymbol{y})=0$, i.e., every point of $\mathcal{A}_1$ is inside the zero set of $\mathfrak{a}_1$.
$(\Leftarrow):$ We will show that $\mathcal{A}_1 \supset \mathcal{Z} \left(\mathfrak{a}_1 \right)$. Let $\boldsymbol{y} \in \mathcal{Z} \left(\mathfrak{a}_1 \right)$, i.e., every element of $\mathfrak{a}_1$ vanishes on $\boldsymbol{y}$. Hence every element of $\mathcal{I}_{\mathcal{X}, m}$ vanishes on $\boldsymbol{y}$, i.e., $\boldsymbol{y} \in \mathcal{Z}(\mathcal{I}_{\mathcal{X}, m})=\mathcal{A}$. In addition, every element of $\langle\boldsymbol{b}_1^\top x \rangle$ vanishes on $\boldsymbol{y}$, in particular $\boldsymbol{b}_1^\top \boldsymbol{y}=0$, i.e., $\boldsymbol{y} \in \mathcal{V}_1$.
\end{proof}
In summary, the computation of the vector $\boldsymbol{b}_1 \perp \mathcal{S}_1$ completes algebraically the first step of the filtration, which gives us the hyperplane $\mathcal{V}_1$ and the sub-variety $\mathcal{A}_1$. Then, there are two possibilities: $\mathcal{A}_1= \mathcal{V}_1$ or $\mathcal{A}_1 \subsetneq \mathcal{V}_1$. In the first case, we need to terminate the filtration, as explained in section \ref{subsection:SecondStep}, while in the second case we need to take one more step in the filtration, as explained in section \ref{subsection:MultipleSteps}.
\subsection{Deciding whether to take a second step in a filtration} \label{subsection:SecondStep}
If $\mathcal{A}_1 = \mathcal{V}_1$, we should terminate the filtration because in this case $\mathcal{V}_1 = \mathcal{S}_1$, as Lemma \ref{lem:V=A-geometry} shows, and so we have already identified one of the subspaces. Lemma \ref{lem:V=A-algebra} will give us an algebraic procedure for checking if the condition $\mathcal{A}_1 = \mathcal{V}_1$ holds true, while Lemma~\ref{lem:SinglePolynomialCheck} will give us a computationally more friendly procedure for checking the same condition.
\begin{lemma} \label{lem:V=A-geometry}
$\mathcal{V}_1 = \mathcal{A}_1$ if and only if $\mathcal{V}_1 = \mathcal{S}_1$.
\end{lemma}
\begin{proof}
$(\Rightarrow):$ Suppose $\mathcal{V}_1 = \mathcal{A}_1 \doteq \mathcal{S}_1 \cup (\mathcal{S}_2 \cap \mathcal{V}_{1}) \cup \cdots \cup (\mathcal{S}_n \cap \mathcal{V}_{1})$.
Taking the vanishing-ideal operator on both sides, we obtain
\begin{align} \label{eq:IA1}
\mathcal{I}_{\mathcal{V}_1} = \mathcal{I}_{\mathcal{S}_1} \cap \mathcal{I}_{\mathcal{S}_2 \cap \mathcal{V}_{1}} \cap \cdots \cap \mathcal{I}_{\mathcal{S}_n \cap \mathcal{V}_{1}}.
\end{align} Since $\mathcal{V}_1$ is a linear subspace, $\mathcal{I}_{\mathcal{V}_1}$ is a prime ideal by Proposition \ref{prp:ISprime}, and so by Proposition \ref{prp:ideals-intersection} $\mathcal{I}_{\mathcal{V}_1}$ must contain one of the ideals
$\mathcal{I}_{\mathcal{S}_1},\mathcal{I}_{\mathcal{S}_2 \cap \mathcal{V}_1},\dots,\mathcal{I}_{\mathcal{S}_n \cap \mathcal{V}_1}$. Suppose that $\mathcal{I}_{\mathcal{V}_1} \supset \mathcal{I}_{\mathcal{S}_i \cap \mathcal{V}_1}$ for some $i >1$. Taking the zero-set operator on both sides, and using Proposition \ref{prp:closure} and the fact that linear subspaces are closed in the Zariski topology, we obtain $\mathcal{V}_1 \subset \mathcal{S}_i \cap \mathcal{V}_1$, which implies that $\mathcal{V}_1 \subset \mathcal{S}_i$. Since $\mathcal{S}_1 \subset \mathcal{V}_1$, we must have that $\mathcal{S}_1 \subset \mathcal{S}_i$, which contradicts the assumption of transversality on $\mathcal{A}$. Hence it must be the case that $\mathcal{I}_{\mathcal{V}_1} \supset \mathcal{I}_{\mathcal{S}_1}$. Taking the zero-set operator on both sides we get $\mathcal{V}_1 \subset \mathcal{S}_1$, which implies that $\mathcal{V}_1 = \mathcal{S}_1$, since $\mathcal{S}_1 \subset \mathcal{V}_1$.
$(\Leftarrow):$ Suppose $\mathcal{V}_1 = \mathcal{S}_1$. Then $\mathcal{V}_1=\mathcal{S}_1 \subset \mathcal{A}_1 \subset \mathcal{V}_1 = \mathcal{S}_1$ and so $\mathcal{A}_1 = \mathcal{V}_1$.
\end{proof}
Knowing that a filtration terminates if $\mathcal{A}_1 = \mathcal{V}_1$, we need a mechanism for checking this condition. The next lemma shows how this can be done in the algebraic domain.
\begin{lemma} \label{lem:V=A-algebra}
$\mathcal{V}_1 = \mathcal{A}_1$ if and only if $\mathcal{I}_{\mathcal{X}, m} \subset \langle\boldsymbol{b}_1^\top x \rangle_{ m}$.
\end{lemma}
\begin{proof}
$(\Rightarrow):$ Suppose $\mathcal{A}_1=\mathcal{V}_1$. Then $\mathcal{A} \supset \mathcal{V}_1$ and by taking vanishing ideals on both sides we get $\mathcal{I}_{\mathcal{A}} \subset \mathcal{I}_{\mathcal{V}_1}=\langle\boldsymbol{b}_1^\top x \rangle$. Since $\mathcal{I}_{\mathcal{X}, m} = \mathcal{I}_{\mathcal{A}, m} \subset \mathcal{I}_{\mathcal{A}}$, it follows that $\mathcal{I}_{\mathcal{X}, m} \subset \langle\boldsymbol{b}_1^\top x \rangle_m$.
$(\Leftarrow):$ Suppose $\mathcal{I}_{\mathcal{X}, m} \subset \langle\boldsymbol{b}_1^\top x \rangle_{ m}$ and for the sake of contradiction suppose that $\mathcal{A}_1 \subsetneq \mathcal{V}_1$. In particular, from Lemma \ref{lem:V=A-geometry} we have that $\mathcal{S}_1 \subsetneq \mathcal{V}_1$. Hence, there exists a vector $\boldsymbol{\zeta}_1$ linearly independent from $\boldsymbol{b}_1$ such that $\boldsymbol{\zeta}_1 \perp \mathcal{S}_1$. Now for any $i > 1$, there exists $\boldsymbol{\zeta}_i$ linearly independent from $\boldsymbol{b}_1$ such that $\boldsymbol{\zeta}_i \perp \mathcal{S}_i$. For if not, then $\mathcal{I}_{\mathcal{S}_i} \subset \mathcal{I}_{\mathcal{V}_1}$ and so $\mathcal{S}_i \supset \mathcal{V}_1$, which leads to the contradiction $\mathcal{S}_i \supset \mathcal{S}_1$. Then the polynomial $(\boldsymbol{\zeta}_1^\top x) \cdots (\boldsymbol{\zeta}_n^\top x)$ is an element of $\mathcal{I}_{\mathcal{A},n}=\mathcal{I}_{\mathcal{X},n}$. Multiplying it by $(\boldsymbol{\zeta}_1^\top x)^{m-n}$ yields the degree-$m$ polynomial $(\boldsymbol{\zeta}_1^\top x)^{m-n+1} (\boldsymbol{\zeta}_2^\top x) \cdots (\boldsymbol{\zeta}_n^\top x)$, which still vanishes on $\mathcal{X}$, so by the hypothesis that $\mathcal{I}_{\mathcal{X}, m} \subset \langle\boldsymbol{b}_1^\top x \rangle_m$ we must have that
$(\boldsymbol{\zeta}_1^\top x)^{m-n+1} (\boldsymbol{\zeta}_2^\top x) \cdots (\boldsymbol{\zeta}_n^\top x) \in \langle\boldsymbol{b}_1^\top x\rangle$. But $\langle\boldsymbol{b}_1^\top x\rangle$ is a prime ideal and so one of the linear factors $\boldsymbol{\zeta}_1^\top x, \dots, \boldsymbol{\zeta}_n^\top x$ must lie in $\langle\boldsymbol{b}_1^\top x\rangle$. So suppose $\boldsymbol{\zeta}_j^\top x \in \langle\boldsymbol{b}_1^\top x\rangle$, for some $j \in [n]$. This implies that there must exist a polynomial $h$ such that
$\boldsymbol{\zeta}_j^\top x = h \, (\boldsymbol{b}_1^\top x)$. By degree considerations, we conclude that $h$ must be a constant, in which case the above equality implies that $\boldsymbol{\zeta}_j$ is a scalar multiple of $\boldsymbol{b}_1$. But this contradicts the definition of $\boldsymbol{\zeta}_j$. Hence it cannot be the case that $\mathcal{A}_1 \subsetneq \mathcal{V}_1$.
\end{proof}
Notice that checking the condition $\mathcal{I}_{\mathcal{X}, m} \subset \langle\boldsymbol{b}_1^\top x \rangle_{ m}$ in Lemma \ref{lem:V=A-algebra} requires computing a basis of $\mathcal{I}_{\mathcal{X},m}$ and checking whether each element of the basis is divisible by the linear form $\boldsymbol{b}_1^\top x$. Equivalently, to check the inclusion of finite-dimensional vector spaces $\mathcal{I}_{\mathcal{X}, m} \subset \langle\boldsymbol{b}_1^\top x \rangle_{ m}$ we need to compute a basis $\boldsymbol{B}_{\mathcal{X}, m}$ of $\mathcal{I}_{\mathcal{X},m}$ as well as a basis $\boldsymbol{B}$ of $\langle\boldsymbol{b}_1^\top x \rangle_{ m}$ and check whether the rank equality $\rank([\boldsymbol{B}_{\mathcal{X},m} \, \, \boldsymbol{B}]) = \rank(\boldsymbol{B})$ holds true. Note that a basis of $\langle\boldsymbol{b}_1^\top x \rangle_{ m}$ can be obtained in a straightforward manner by multiplying all monomials of degree $m-1$ with the linear form $\boldsymbol{b}_1^\top x$. On the other hand, computing a basis of $\mathcal{I}_{\mathcal{X},m}$ by computing a basis for the right nullspace of $\nu_m(\mathcal{X})$ can be computationally expensive, particularly when $m$ is large. If, however, the points $\mathcal{X} \cap \mathcal{S}_1$ are in general position in $\mathcal{S}_1$ with respect to degree $m$, then checking the condition $\mathcal{I}_{\mathcal{X},m} \subset \langle\boldsymbol{b}_1^\top x \rangle_{ m}$ can be done more efficiently, as we now explain. Let $\boldsymbol{V}_1=[\boldsymbol{v}_1,\dots,\boldsymbol{v}_{D-1}]$ be a basis for the vector space $\mathcal{V}_1$. Then $\mathcal{V}_1$ is isomorphic to $\mathbb{R}^{D-1}$ under the linear map $\sigma_{\boldsymbol{V}_1}: \mathcal{V}_1 \rightarrow \mathbb{R}^{D-1}$ that takes a vector $\boldsymbol{v} = \alpha_1 \boldsymbol{v}_1 + \cdots + \alpha_{D-1} \boldsymbol{v}_{D-1}$ to its coordinate representation $(\alpha_1,\dots,\alpha_{D-1})^\top$. The next result then says that checking the condition $\mathcal{V}_1 = \mathcal{A}_1$ is equivalent to checking the rank-deficiency of the embedded data matrix $\nu_{m}(\sigma_{\boldsymbol{V}_1}(\mathcal{X} \cap \mathcal{V}_1))$, which is computationally a simpler task than computing the right nullspace of $\nu_m(\mathcal{X})$.
\begin{lemma} \label{lem:SinglePolynomialCheck}
Suppose that $\mathcal{X}_1$ is in general position inside $\mathcal{S}_1$ with respect to degree $m$. Then
$\mathcal{V}_1 = \mathcal{A}_1$ if and only if the embedded data matrix $\nu_{m}(\sigma_{\boldsymbol{V}_1}(\mathcal{X} \cap \mathcal{V}_1))$ is full rank.
\end{lemma}
\begin{proof}
The statement is equivalent to the statement ``$\mathcal{V}_1=\mathcal{A}_1$ if and only if
$\mathcal{I}_{\mathcal{X} \cap \mathcal{V}_1, m}=\langle \boldsymbol{b}_1^\top x\rangle_{m}$", which we now prove.
$(\Rightarrow):$ Suppose $\mathcal{V}_1 = \mathcal{A}_1$. Then by Lemma \ref{lem:V=A-geometry} $\mathcal{V}_1=\mathcal{S}_1$, which implies that $\mathcal{I}_{\mathcal{S}_1}=\langle \boldsymbol{b}_1^\top x \rangle$. This in turn implies that $\mathcal{I}_{\mathcal{S}_1, m}=\langle \boldsymbol{b}_1^\top x\rangle_{ m}$. Now
$\mathcal{I}_{\mathcal{X} \cap \mathcal{V}_1, m} = \mathcal{I}_{\mathcal{X} \cap \mathcal{S}_1, m} = \mathcal{I}_{\mathcal{X}_1, m}$. By the general position hypothesis on $\mathcal{X}_1$ we have $\mathcal{I}_{\mathcal{S}_1, m} = \mathcal{I}_{\mathcal{X}_1, m}$. Hence $\mathcal{I}_{\mathcal{X} \cap \mathcal{V}_1, m} = \langle \boldsymbol{b}_1^\top x\rangle_{ m}$.
$(\Leftarrow):$ Suppose that $\mathcal{I}_{\mathcal{X} \cap \mathcal{V}_1, m} = \langle \boldsymbol{b}_1^\top x\rangle_{ m}$.
For the sake of contradiction, suppose that $\mathcal{A}_1 \subsetneq \mathcal{V}_1$. Since $\mathcal{A}_1$ is an arrangement of at most $m$ subspaces, there exists a homogeneous polynomial $p$ of degree at most $m$ that vanishes on $\mathcal{A}_1$ but does not vanish on $\mathcal{V}_1$ (multiplying $p$ by a suitable power of a linear form that does not vanish on $\mathcal{V}_1$, we may assume that $p$ has degree exactly $m$; since $\mathcal{I}_{\mathcal{V}_1}$ is prime, the product still does not vanish on $\mathcal{V}_1$). Since $\mathcal{X} \cap \mathcal{V}_1 \subset \mathcal{A}_1$, $p$ will vanish on $\mathcal{X} \cap \mathcal{V}_1$, i.e., $p \in \mathcal{I}_{\mathcal{X} \cap \mathcal{V}_1, m}$ or equivalently $p \in \langle \boldsymbol{b}_1^\top x\rangle_{ m}$ by hypothesis. But then $p$ vanishes on $\mathcal{V}_1$, which is a contradiction; hence it must be the case that $\mathcal{V}_1 = \mathcal{A}_1$.
\end{proof}
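As an illustration of the test in Lemma \ref{lem:SinglePolynomialCheck}, the following \texttt{numpy} sketch (reusing the \texttt{veronese} helper from the earlier snippet) performs the rank check on $\nu_{m}(\sigma_{\boldsymbol{V}_1}(\mathcal{X} \cap \mathcal{V}_1))$. The tolerance deciding which points lie on $\mathcal{V}_1$ is an ad hoc numerical choice, and the sketch assumes $\mathcal{X} \cap \mathcal{V}_1 \neq \emptyset$:
\begin{verbatim}
def V1_equals_A1(X, b1, m, tol=1e-9):
    # With V_1 = span(b1)^perp, decide whether nu_m of the coordinate
    # representation of X cap V_1 has full column rank (Lemma above).
    b1 = b1 / np.linalg.norm(b1)
    on_V1 = X[np.abs(X @ b1) < tol]       # points (numerically) on V_1
    # rows of Q form an orthonormal basis of V_1 = b1^perp
    Q = np.linalg.svd(b1.reshape(1, -1), full_matrices=True)[2][1:]
    Y = on_V1 @ Q.T                        # sigma_{V_1}(X cap V_1)
    Vm = veronese(Y, m)
    s = np.linalg.svd(Vm, compute_uv=False)
    rank = int((s > tol * s[0]).sum())
    return rank == Vm.shape[1]             # full rank  <=>  V_1 = A_1
\end{verbatim}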
\subsection{Taking multiple steps in a filtration and terminating}\label{subsection:MultipleSteps}
If $\mathcal{A}_1 \subsetneq \mathcal{V}_1$,
then it follows from Lemma \ref{lem:V=A-geometry} that $\mathcal{S}_1 \subsetneq \mathcal{V}_1$. Therefore, subspace $\mathcal{S}_1$ has not yet been identified in the first step of the filtration and we should take a second step. As before, we can start constructing the second step of our filtration by choosing a suitable vanishing polynomial $p_2$, such that its gradient at the reference point $\boldsymbol{x}$ is not collinear with $\boldsymbol{b}_1$. The next Lemma shows that such a $p_2$ always exists.
\begin{lemma} \label{lem:exists-q}
$\mathcal{X}$ admits a homogeneous vanishing polynomial $p_2$ of degree $\ell \le n$, such that $p_2 \not\in \mathcal{I}_{\mathcal{V}_1}$ and $\nabla p_2|_{\boldsymbol{x}} \not\in \Span(\boldsymbol{b}_{1})$.
\end{lemma}
\begin{proof} Since $\mathcal{A}_1 \subsetneq \mathcal{V}_1$, Lemma \ref{lem:V=A-geometry} implies that $\mathcal{S}_1 \subsetneq \mathcal{V}_1$. Then there exists a vector $\boldsymbol{\zeta}_1$ that is orthogonal to $\mathcal{S}_1$ and is linearly independent from $\boldsymbol{b}_1$. Since $\boldsymbol{x} \in \mathcal{S}_1 - \bigcup_{i>1}\mathcal{S}_i$, for each $i>1$ we can find a vector $\boldsymbol{\zeta}_i$ such that $\boldsymbol{\zeta}_i \not\perp \boldsymbol{x}$ and $\boldsymbol{\zeta}_i \perp \mathcal{S}_i$. Notice that the pairs $\boldsymbol{b}_1,\boldsymbol{\zeta}_i$ are linearly independent for $i>1$, since $\boldsymbol{b}_1 \perp \boldsymbol{x}$ but $\boldsymbol{\zeta}_i \not\perp \boldsymbol{x}$. Now, the polynomial $p_2:=(\boldsymbol{\zeta}_1^\top x)\cdots (\boldsymbol{\zeta}_n^\top x)$ has degree $n$ and vanishes on $\mathcal{A}$, hence $p_2 \in \mathcal{I}_{\mathcal{X},\le m}$. Moreover,
$\nabla p_2|_{\boldsymbol{x}} =
(\boldsymbol{\zeta}_2^\top \boldsymbol{x})\cdots (\boldsymbol{\zeta}_n^\top \boldsymbol{x}) \boldsymbol{\zeta}_1 \neq 0$, since by hypothesis $\boldsymbol{\zeta}_i^\top \boldsymbol{x} \neq 0, \forall i>1$. Since $\boldsymbol{\zeta}_1$ is linearly independent from $\boldsymbol{b}_1$, we have $\nabla p_2|_{\boldsymbol{x}} \not\in \Span(\boldsymbol{b}_1)$. Finally, $p_2$ does not vanish on $\mathcal{V}_1$, by a similar argument to the one used in the proof of Lemma \ref{lem:V=A-algebra}.
\end{proof}
\begin{remark}
Note that if $\ell$ is the degree of $p_2$ as in Lemma \ref{lem:exists-q}, and if $q_1,\dots,q_s$ is a basis for $\mathcal{I}_{\mathcal{X},\ell}$, then at least one of the $q_i$ satisfies the conditions of the Lemma. This is important algorithmically, because it implies that the search for our $p_2$ can be done sequentially. We can start by first computing a minimal-degree polynomial in $\mathcal{I}_{\mathcal{X},k}$ and checking whether it satisfies our requirements. If not, we can compute a second linearly independent polynomial and check again. We can continue in this fashion until we have computed a full basis for $\mathcal{I}_{\mathcal{X},k}$. If no suitable polynomial has been found, we can repeat the process for degree $k+1$, and so on, until we have reached degree $n$, if necessary. A sketch of this search is given after the remark.
\end{remark}
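The following sketch implements this sequential search in the \texttt{numpy} conventions of the earlier snippets (reusing \texttt{veronese}); for brevity it checks only the gradient condition $\nabla q_i|_{\boldsymbol{x}} \notin \Span(\boldsymbol{b}_1)$ and omits the membership test $q_i \notin \mathcal{I}_{\mathcal{V}_1}$, and the function names are hypothetical:
\begin{verbatim}
def grad_poly(c, x, k):
    # Gradient at x of the homogeneous degree-k polynomial whose
    # coefficient vector c is ordered as in veronese().
    D = x.size
    exps = itertools.combinations_with_replacement(range(D), k)
    g = np.zeros(D)
    for coef, e in zip(c, exps):
        cnt = np.bincount(e, minlength=D)    # exponent vector
        for i in range(D):
            if cnt[i] > 0:
                ei = cnt.copy(); ei[i] -= 1
                g[i] += coef * cnt[i] * np.prod(x ** ei)
    return g

def find_second_polynomial(X, x, b1, m, tol=1e-6):
    # Degree-by-degree search: scan a null-space basis of nu_ell(X)
    # for increasing ell until some gradient at x leaves span(b1).
    b1 = b1 / np.linalg.norm(b1)
    for ell in range(1, m + 1):
        _, s, Vt = np.linalg.svd(veronese(X, ell), full_matrices=True)
        rank = int((s > tol * s[0]).sum())
        for c in Vt[rank:]:                  # basis of I_{X, ell}
            g = grad_poly(c, x, ell)
            if np.linalg.norm(g - (g @ b1) * b1) > tol:
                return ell, c
    return None
\end{verbatim}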
By using a polynomial $p_2$ as in Lemma \ref{lem:exists-q}, Proposition \ref{prp:Grd} guarantees that $\nabla p_2|_{\boldsymbol{x}}$ will be
orthogonal to $\mathcal{S}_1$. Recall though that for the purpose of the filtration we are interested in constructing a hyperplane $\mathcal{V}_2$ of $\mathcal{V}_1$. Since there is no guarantee that $\nabla p_2|_{\boldsymbol{x}}$ is inside $\mathcal{V}_1$ (thus defining a hyperplane of $\mathcal{V}_1$),
we must project $\nabla p_2|_{\boldsymbol{x}}$ onto $\mathcal{V}_{1}$ and guarantee that this projection is still orthogonal to $\mathcal{S}_1$. The next Lemma ensures that this is always the case.
\begin{lemma} \label{lem:project-q}
Let $0 \neq p_2 \in \mathcal{I}_{\mathcal{X},\le m} - \mathcal{I}_{\mathcal{V}_1}$ be
such that $\nabla p_2|_{\boldsymbol{x}} \not\in \Span(\boldsymbol{b}_{1})$. Then $\boldsymbol{0} \neq
\pi_{\mathcal{V}_{1}}(\nabla p_2|_{\boldsymbol{x}}) \perp \mathcal{S}_1$.
\end{lemma}
\begin{proof}
For the sake of contradiction, suppose that $\pi_{\mathcal{V}_{1}}(\nabla p_2|_{\boldsymbol{x}})=0$.
Setting $\boldsymbol{b}_{11}:=\boldsymbol{b}_1$,
let us augment $\boldsymbol{b}_{11}$ to a basis $\boldsymbol{b}_{11},\boldsymbol{b}_{12}\dots,\boldsymbol{b}_{1c}$ for the orthogonal complement of $\mathcal{S}_1$ in $\mathbb{R}^D$. In fact, we can choose the vectors
$\boldsymbol{b}_{12},\dots,\boldsymbol{b}_{1c}$ to be a basis for the orthogonal complement of $\mathcal{S}_1$ inside $\mathcal{V}_{1}$. By Proposition
\ref{prp:VIS}, $p_2$ must have the form
\begin{align}
p_2(x)=q_1(x)(\boldsymbol{b}_{11}^\top x)+q_2(x) (\boldsymbol{b}_{12}^\top x)+\cdots +q_c(x) (\boldsymbol{b}_{1c}^\top x),
\end{align} where $q_1,\dots,q_c$ are homogeneous polynomials of
degree $\deg(p_2)-1$. Then
\begin{align}
\nabla p_2|_{\boldsymbol{x}} = q_1(\boldsymbol{x})\boldsymbol{b}_{11}+ q_2(\boldsymbol{x})\boldsymbol{b}_{12}+\cdots + q_c(\boldsymbol{x})\boldsymbol{b}_{1c}. \label{eq:NablaCombination}
\end{align} Projecting the above equation orthogonally onto $\mathcal{V}_{1}$ we get
\begin{align}
\pi_{\mathcal{V}_{1}}(\nabla p_2|_{\boldsymbol{x}})=q_2(\boldsymbol{x})\boldsymbol{b}_{12}+\cdots + q_c(\boldsymbol{x})\boldsymbol{b}_{1c},\label{eq:NablaProjection}
\end{align} which is zero by hypothesis. Since $\boldsymbol{b}_{12},\dots,\boldsymbol{b}_{1c}$ are linearly independent vectors of $\mathcal{V}_{1}$ it must be the case that
$q_2(\boldsymbol{x})=\cdots=q_c(\boldsymbol{x})=0$. But this implies that $\nabla p_2|_{\boldsymbol{x}}=q_1(\boldsymbol{x}) \boldsymbol{b}_{11}$, which contradicts the fact that $\nabla p_2|_{\boldsymbol{x}}$ is not collinear with $\boldsymbol{b}_{11}$. Hence it must be the case that
$0 \neq \pi_{\mathcal{V}_{1}}(\nabla p_2|_{\boldsymbol{x}})$. The fact that $\pi_{\mathcal{V}_{1}}(\nabla p_2|_{\boldsymbol{x}}) \perp \mathcal{S}_1$ follows from \eqref{eq:NablaProjection} and the fact that
by definition $\boldsymbol{b}_{12},\dots,\boldsymbol{b}_{1c}$ are orthogonal to $\mathcal{S}_1$.
\end{proof}
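In code, the update just established, forming the next normal as the residual of the new gradient against the normals collected so far, is a short orthogonal-projection computation. A minimal sketch with hypothetical names, assuming (as Lemma \ref{lem:project-q} guarantees) that the gradient does not already lie in the span of the collected normals:
\begin{verbatim}
def next_normal(B, grad_at_x):
    # b_{s+1} := pi_{V_s}(grad p_{s+1}|_x), with V_s = span(B)^perp,
    # where the columns of B are the normals b_1, ..., b_s so far.
    Q, _ = np.linalg.qr(B)                    # orthonormalize span(B)
    b = grad_at_x - Q @ (Q.T @ grad_at_x)     # residual = projection onto V_s
    return b / np.linalg.norm(b)
\end{verbatim}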
At this point, letting $\boldsymbol{b}_2:=\pi_{\mathcal{V}_{1}}(\nabla p_2|_{\boldsymbol{x}})$, we can define $\mathcal{V}_2 = \Span(\boldsymbol{b}_1,\boldsymbol{b}_2)^\perp$, which is a subspace of codimension $1$ inside $\mathcal{V}_1$ (and hence of codimension $2$ inside $\mathcal{V}_0:=\mathbb{R}^D$). As before, we can define a subspace sub-arrangement $\mathcal{A}_2$ of $\mathcal{A}_1$ by intersecting $\mathcal{A}_1$ with $\mathcal{V}_2$. Once again, this intersection can be realized in the algebraic domain as $\mathcal{A}_2 = \mathcal{Z}(\mathcal{I}_{\mathcal{X}, m},\boldsymbol{b}_1^\top x, \boldsymbol{b}_2^\top x)$. Next, we have a similar result as in Lemmas \ref{lem:V=A-geometry} and \ref{lem:V=A-algebra}, which we now prove in general form:
\begin{lemma}\label{lem:FiltrationTermination}
Let $\boldsymbol{b}_{1},\dots,\boldsymbol{b}_{s}$ be $s$ vectors orthogonal to $\mathcal{S}_1$ and define the intermediate ambient space
$\mathcal{V}_{s} := \Span(\boldsymbol{b}_{1},\cdots,\boldsymbol{b}_{s})^\perp$. Let $\mathcal{A}_{s}$ be the subspace arrangement obtained by intersecting
$\mathcal{A}$ with $\mathcal{V}_{s}$. Then the following are equivalent:
\begin{enumerate}[(i)]
\item $\mathcal{V}_{s}=\mathcal{A}_{s}$
\item $\mathcal{V}_{s}=\mathcal{S}_1$
\item $\mathcal{S}_1 = \Span(\boldsymbol{b}_{1},\dots,\boldsymbol{b}_s)^\perp$
\item $\mathcal{I}_{\mathcal{X}, m} \subset \langle \boldsymbol{b}_{1}^\top x, \dots, \boldsymbol{b}_{s}^\top x \rangle_{m}$.
\end{enumerate}
\end{lemma}
\begin{proof}
$(i)\Rightarrow (ii):$ By taking vanishing ideals on both sides of
$\mathcal{V}_s = \mathcal{S}_1 \cup \bigcup_{i>1} (\mathcal{S}_i \cap \mathcal{V}_s)$ we get
$\mathcal{I}_{\mathcal{V}_s} = \mathcal{I}_{\mathcal{S}_1} \cap \bigcap_{i>1} \mathcal{I}_{\mathcal{S}_i \cap \mathcal{V}_s}$. By using
Proposition \ref{prp:ideals-intersection} in a similar fashion as in the proof of Lemma \ref{lem:V=A-geometry}, we conclude that $\mathcal{V}_s = \mathcal{S}_1$.
$(ii)\Rightarrow (iii):$ This is obvious from the definition of $\mathcal{V}_s$.
$(iii)\Rightarrow (iv):$ Let $h \in \mathcal{I}_{\mathcal{X}, m}$. Then $h$ vanishes on $\mathcal{A}$ and hence on $\mathcal{S}_1$ and by Proposition \ref{prp:VIS} we must have that
$h \in \mathcal{I}_{\mathcal{S}_1}=\langle \boldsymbol{b}_{1}^\top x, \dots, \boldsymbol{b}_{s}^\top x \rangle$. $(iv)\Rightarrow (i):$ $\mathcal{I}_{\mathcal{X}, m} \subset \langle \boldsymbol{b}_{1}^\top x, \dots, \boldsymbol{b}_{s}^\top x \rangle_m$ can be written as $\mathcal{I}_{\mathcal{X}, m} \subset \mathcal{I}_{\mathcal{V}_s}$. By the general position assumption $\mathcal{I}_{\mathcal{A}, m} = \mathcal{I}_{\mathcal{X}, m}$ and so we have $\mathcal{I}_{\mathcal{A}, m} \subset \mathcal{I}_{\mathcal{V}_s}$. Taking zero sets on both sides we get $\mathcal{A} \supset \mathcal{V}_s$, and intersecting both sides of this relation with $\mathcal{V}_s$, we get $\mathcal{A}_s \supset \mathcal{V}_s$. Since $\mathcal{A}_s \subset \mathcal{V}_s$, this implies that $\mathcal{V}_s = \mathcal{A}_s$.
\end{proof}
Similarly to Lemma \ref{lem:SinglePolynomialCheck} we have:
\begin{lemma} \label{lem:SinglePolynomialCheck-general}
Let $\boldsymbol{V}_s=[\boldsymbol{v}_1,\dots,\boldsymbol{v}_{D-s}]$ be a basis for $\mathcal{V}_s$, and let $\sigma_{\boldsymbol{V}_s}:\mathcal{V}_s \rightarrow \mathbb{R}^{D-s}$ be the linear map that takes a vector $\boldsymbol{v} = \alpha_1 \boldsymbol{v}_1 + \cdots + \alpha_{D-s} \boldsymbol{v}_{D-s}$ to its coordinate representation $(\alpha_1,\dots,\alpha_{D-s})^\top$. Suppose that
$\mathcal{X}_1$ is in general position inside $\mathcal{S}_1$ with respect to degree $m$. Then
$\mathcal{V}_s = \mathcal{A}_s$ if and only if the embedded data matrix $\nu_{m}(\sigma_{\boldsymbol{V}_s}(\mathcal{X} \cap \mathcal{V}_s))$ is full rank.
\end{lemma}
By Lemma \ref{lem:FiltrationTermination}, if $\mathcal{I}_{\mathcal{X}, m} \subset \langle \boldsymbol{b}_{1}^\top x, \boldsymbol{b}_{2}^\top x \rangle_m$, the algorithm terminates the filtration and outputs the orthogonal basis $\left\{\boldsymbol{b}_1,\boldsymbol{b}_2\right\}$ for the orthogonal complement of the irreducible component $\mathcal{S}_1$ of $\mathcal{A}$. If on the other hand $\mathcal{I}_{\mathcal{X}, m} \not\subset \langle \boldsymbol{b}_{1}^\top x, \boldsymbol{b}_{2}^\top x \rangle_m$, then the algorithm picks a basis element $p_3$ of $\mathcal{I}_{\mathcal{X}, m}$ such that $p_3 \not\in \mathcal{I}_{\mathcal{V}_2}$ and $\nabla p_3 |_{\boldsymbol{x}} \not\in \Span(\boldsymbol{b}_1,\boldsymbol{b}_2)$, and defines a subspace $\mathcal{V}_3$ of codimension $1$ inside $\mathcal{V}_2$ using $\pi_{\mathcal{V}_2}\left(\nabla p_3 |_{\boldsymbol{x}}\right)$.\footnote{The proof of existence of such a $p_3$ is similar to the proof of Lemma \ref{lem:exists-q} and is omitted.} Setting $\boldsymbol{b}_3:= \pi_{\mathcal{V}_2}\left(\nabla p_3 |_{\boldsymbol{x}}\right)$, the algorithm uses Lemma \ref{lem:FiltrationTermination} to determine whether to terminate the filtration or take one more step, and so on.
The principles established in the previous sections formally lead us to the algebraic descending filtration Algorithm \ref{alg:ADF} and its correctness theorem, Theorem \ref{thm:ADF}.
\begin{algorithm} \caption{Algebraic Descending Filtration (ADF)}\label{alg:ADF}
\begin{algorithmic}[1]
\Procedure{ADF}{$p,\boldsymbol{x},\mathcal{X},m$}
\State $\mathfrak{B} \gets \nabla p|_{\boldsymbol{x}}$;
\While{$\mathcal{I}_{\mathcal{X}, m} \not\subset \langle \boldsymbol{b}^\top x: \boldsymbol{b} \in \mathfrak{B} \rangle $}
\State find $p \in \mathcal{I}_{\mathcal{X},\le m} - \langle \boldsymbol{b}^\top x: \boldsymbol{b} \in \mathfrak{B} \rangle$ s.t. $\nabla p|_{\boldsymbol{x}} \not\in \Span(\mathfrak{B})$;
\State $\mathfrak{B} \gets \mathfrak{B} \cup \left\{\pi_{\Span(\mathfrak{B})^\perp}\left(\nabla p|_{\boldsymbol{x}}\right)\right\}$;
\EndWhile
\State \Return $\mathfrak{B}$;
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{theorem}[Correctness of Algorithm \ref{alg:ADF}] \label{thm:ADF}
Let $\mathcal{X}=\left\{\boldsymbol{x}_1,\dots,\boldsymbol{x}_N\right\}$ be a finite set of points in general position (Definition \ref{dfn:GeneralPosition}) with respect to degree $m$ inside a transversal (Definition \ref{dfn:transversal}) arrangement $\mathcal{A}$ of at most $m$ linear subspaces of $\mathbb{R}^D$. Let $p$ be a polynomial of minimal degree that vanishes on $\mathcal{X}$. Then there always exists a nonsingular $\boldsymbol{x} \in \mathcal{X}$ such that $\nabla p|_{\boldsymbol{x}} \neq \boldsymbol{0}$, and for such an $\boldsymbol{x}$, the output $\mathfrak{B}$ of Algorithm \ref{alg:ADF} is an orthogonal basis for the orthogonal complement in $\mathbb{R}^D$ of the irreducible component of $\mathcal{A}$ that contains $\boldsymbol{x}$.
\end{theorem}
\subsection{The FASC algorithm} \label{subsection:All}
In Sections \ref{subsection:FirstStepFiltration}-\ref{subsection:MultipleSteps} we established the theory of a single filtration, according to which one starts with a nonsingular point $\boldsymbol{x}_1:=\boldsymbol{x} \in \mathcal{A} \cap \mathcal{X}$ and obtains an orthogonal basis $\boldsymbol{b}_{11},\dots,\boldsymbol{b}_{1c_1}$ for the orthogonal complement of the irreducible component $\mathcal{S}_1$ of $\mathcal{A}$ that contains the reference point $\boldsymbol{x}_1$. To obtain an orthogonal basis $\boldsymbol{b}_{21},\dots,\boldsymbol{b}_{2c_2}$ corresponding to a second irreducible component $\mathcal{S}_2$ of $\mathcal{A}$, our approach is the natural one: remove $\mathcal{X}_1$ from $\mathcal{X}$ and run a filtration on the set $\mathcal{X}^{(1)}:=\mathcal{X} - \mathcal{X}_1$. All we need for the theory of Sections \ref{subsection:FirstStepFiltration}-\ref{subsection:MultipleSteps} to be applicable to the set $\mathcal{X}^{(1)}$ is that $\mathcal{X}^{(1)}$ be in general position inside the arrangement $\mathcal{A}^{(1)}:=\mathcal{S}_{2} \cup \cdots \cup \mathcal{S}_n$, with respect to degree $m-1$. But this has been proved in Lemma \ref{lem:GeneralPosition}.
With Lemma \ref{lem:GeneralPosition} establishing the correctness of recursive application of a single filtration, the correctness of the FASC Algorithm \ref{alg:AASC} follows at once, as in Theorem \ref{thm:AASC}. Note that in Algorithm \ref{alg:AASC}, $n$ is the number of subspaces, while $\mathfrak{D}$ and $\mathfrak{L}$ are ordered sets, such that, up to a permutation, the $i$-th element of $\mathfrak{D}$ is $d_i = \dim \mathcal{S}_i$, and the $i$-th element of $\mathfrak{L}$ is an orthogonal basis for $\mathcal{S}_i^\perp$.
\begin{algorithm} \caption{Filtrated Algebraic Subspace Clustering}\label{alg:AASC} \begin{algorithmic}[1]
\Procedure{FASC}{$\mathcal{X} \in \mathbb{R}^{D \times N},m$}
\State $n \gets 0$; $\mathfrak{D} \gets \emptyset $; $\mathfrak{L} \gets \emptyset$;
\While{$\mathcal{X} \neq \emptyset$}
\State find polynomial $p$ of minimal degree that vanishes on $\mathcal{X}$;
\State find $\boldsymbol{x} \in \mathcal{X}$ s.t. $\nabla p|_{\boldsymbol{x}} \neq 0$;
\State $\mathfrak{B} \gets$ ADF$(p,\boldsymbol{x},\mathcal{X},m)$;
\State $\mathfrak{L} \gets \mathfrak{L} \cup \left\{\mathfrak{B}\right\}$;
\State $\mathfrak{D} \gets \mathfrak{D} \cup \left\{D - \card(\mathfrak{B}) \right\}$;
\State $\mathcal{X} \gets \mathcal{X} - \Span(\mathfrak{B})^\perp$;
\State $n \gets n+1$; $m \gets m-1$;
\EndWhile
\State \Return $n,\mathfrak{D},\mathfrak{L}$;
\EndProcedure
\end{algorithmic}
\end{algorithm}
\begin{theorem}[Correctness of Algorithm \ref{alg:AASC}] \label{thm:AASC}
Let $\mathcal{X}=\left\{\boldsymbol{x}_1,\dots,\boldsymbol{x}_N\right\}$ be a set in general position
with respect to degree $m$ (Definition \ref{dfn:GeneralPosition}) inside
a transversal (Definition \ref{dfn:transversal}) arrangement $\mathcal{A}$ of at most $m$ linear
subspaces of $\mathbb{R}^D$. For such an $\mathcal{X}$ and $m$, Algorithm \ref{alg:AASC} always terminates with output a set $\mathfrak{L}=\left\{\mathfrak{B}_1,\dots,\mathfrak{B}_n\right\}$, such that up to a permutation, $\mathfrak{B}_i$ is an orthogonal basis for the orthogonal complement of the $i^{th}$ irreducible component $\mathcal{S}_i$ of $\mathcal{A}$, i.e., $\mathcal{S}_i = \Span(\mathfrak{B}_i)^\perp, i=1,\dots,n$, and $\mathcal{A} = \bigcup_{i=1}^n \mathcal{S}_i$.
\end{theorem}
\section{Filtrated Spectral Algebraic Subspace Clustering} \label{section:FSASC}
In this section we show how FASC (Sections \ref{section:geometricAASC}-\ref{section:mfAASC}) can be adapted to a working subspace clustering algorithm that is robust to noise. As we will soon see, the success of such an algorithm depends on being able to 1) implement a single filtration in a robust fashion, and 2) combine multiple robust filtrations to obtain the clustering of the points.
\subsection{Implementing robust filtrations} \label{subsection:FSASC-filtrations}
Recall that the filtration component ADF (Algorithm \ref{alg:ADF}) of the FASC Algorithm \ref{alg:AASC} is based on computing a descending filtration of ambient
spaces $\mathcal{V}_1 \supset \mathcal{V}_2 \supset \cdots$. Recall that $\mathcal{V}_1$ is obtained as the hyperplane of $\mathbb{R}^D$ with normal vector $\nabla p|_{\boldsymbol{x}}$, where $\boldsymbol{x}$ is the reference point associated with the filtration, and $p$ a polynomial of minimal degree $k$ that vanishes on $\mathcal{X}$. In the absence of noise, the value of $k$ can be characterized as the smallest $\ell$ such that $\nu_{\ell}(\mathcal{X})$ drops rank (see section \ref{subsection:Hyperplanes} for notation). In the presence of noise, and assuming that $\mathcal{X}$ has cardinality at least ${m+D-1 \choose m}$, there will be in general no vanishing polynomial of degree $\le m$, i.e., the embedded data matrix $\nu_{\ell}(\mathcal{X})$ will have full column rank, for any $\ell \le m$. Hence, in the presence of noise we do not know a priori what the minimal degree $k$ is. On the other hand, we do know that $m \ge n$, which implies that the underlying subspace arrangement $\mathcal{A}$ admits vanishing polynomials of degree $m$. Thus a reasonable choice for an approximate vanishing polynomial $p_1:=p$ is the polynomial whose coefficients are given by the right singular vector of $\nu_m(\mathcal{X})$ that corresponds to the smallest singular value. Recall also that in the absence of noise we chose our reference point $\boldsymbol{x} \in \mathcal{X}$ such that $\nabla p_1|_{\boldsymbol{x}} \neq \boldsymbol{0}$. In the presence of noise this condition will almost surely be true for every point $\boldsymbol{x} \in \mathcal{X}$; one can then select the point that gives the largest gradient, i.e., we can pick as reference point an $\boldsymbol{x}$ that maximizes the norm of the gradient $\left\|\nabla p_1|_{\boldsymbol{x}}\right\|_2$.
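A minimal sketch of this noisy initialization, reusing the \texttt{veronese} and \texttt{grad\_poly} helpers from the earlier snippets (the function name is hypothetical):
\begin{verbatim}
def noisy_initialization(X, m):
    # p_1: right singular vector of nu_m(X) of smallest singular value;
    # reference point: the point of X with the largest gradient norm.
    c = np.linalg.svd(veronese(X, m), full_matrices=True)[2][-1]
    norms = [np.linalg.norm(grad_poly(c, x, m)) for x in X]
    j = int(np.argmax(norms))
    return c, j
\end{verbatim}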
Moving on, ADF constructs the filtration of $\mathcal{X}$ by intersecting $\mathcal{X}$ with the intermediate ambient spaces $\mathcal{V}_1 \supset \mathcal{V}_2 \supset \cdots$. In the presence of noise in the dataset $\mathcal{X}$, such intersections will almost surely be empty. As it turns out, we can replace the operation of intersecting $\mathcal{X}$ with the intermediate spaces $\mathcal{V}_s,s=1,2,\dots$, by projecting $\mathcal{X}$ onto $\mathcal{V}_s$. \emph{In the absence of noise}, the norm of the points of $\mathcal{X}$ that lie in $\mathcal{V}_s$ will remain unchanged after projection, while points that lie outside $\mathcal{V}_s$ will witness a drop in their norm upon projection onto $\mathcal{V}_s$. Points whose norm is reduced can then be removed and the end result of this process is equivalent to intersecting $\mathcal{X}$ with $\mathcal{V}_s$.
\emph{In the presence of noise} one can choose a threshold $\delta>0$, such that if the distance of a point from subspace $\mathcal{V}_s$ is less than $\delta$, then the point is maintained after projection onto $\mathcal{V}_s$, otherwise it is removed.
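A sketch of this projection-and-trimming step (the relative norm-loss criterion below matches the test used in Algorithm \ref{alg:FSASC}; the function name is hypothetical):
\begin{verbatim}
def project_and_trim(X, b, delta):
    # Project all points onto the hyperplane b^perp; keep a point iff
    # its relative norm loss (||x|| - ||pi(x)||) / ||x|| is <= delta.
    b = b / np.linalg.norm(b)
    Xp = X - np.outer(X @ b, b)
    loss = 1.0 - np.linalg.norm(Xp, axis=1) / np.linalg.norm(X, axis=1)
    keep = loss <= delta
    return Xp[keep], keep
\end{verbatim}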
But how to choose $\delta$? One reasonable way to proceed, is to consider the polynomial $p$ that corresponds to the right singular vector of $\nu_m(\mathcal{X})$ of smallest singular value,
and then consider the quantity
\begin{align}
\beta(\mathcal{X}) := \frac{1}{N} \sum_{j=1}^N \frac{\left|\boldsymbol{x}_j^\top \nabla p|_{\boldsymbol{x}_j}\right|}{\left\|\boldsymbol{x}_j\right\|_2 \left\|\nabla p|_{\boldsymbol{x}_j}\right\|_2}.
\end{align} Notice that in the absence of noise $\dim \mathcal{N}(\nu_m(\mathcal{X}))>0$ and consequently
$\beta(\mathcal{X})=0$. In the presence of noise however, $\beta(\mathcal{X})$ represents the average distance of a point $\boldsymbol{x}$ in the dataset to the hyperplane that it produces by means of $\nabla p|_{\boldsymbol{x}}$ (in the absence of noise this distance is zero by Proposition \ref{prp:Grd}). Hence intuitively, $\delta$ should be of the same order of magnitude as $\beta(\mathcal{X})$; a natural choice is to set $\delta:= \gamma \cdot \beta(\mathcal{X})$, where $\gamma$ is a user-defined parameter taking values close to $1$. Having projected $\mathcal{X}$ onto $\mathcal{V}_1$ and removed points whose distance from $\mathcal{V}_1$ is larger than $\delta$, we obtain a second approximate polynomial $p_2$ from the right singular vector of smallest singular value of the embedded data matrix of the remaining projected points and so on.
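In code, $\beta(\mathcal{X})$ and the resulting threshold are immediate (reusing \texttt{grad\_poly}; \texttt{gamma} stands for the user parameter $\gamma$ of the text):
\begin{verbatim}
def beta(X, c, m):
    # Average normalized distance of each point to the hyperplane
    # defined by the gradient of p at that same point.
    vals = []
    for x in X:
        g = grad_poly(c, x, m)
        vals.append(abs(x @ g) / (np.linalg.norm(x) * np.linalg.norm(g)))
    return float(np.mean(vals))

# threshold: delta = gamma * beta(X, c, m), with gamma close to 1
\end{verbatim}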
It remains to devise a robust criterion for terminating the filtration. Recall that the criterion for terminating the filtration in ADF is $\mathcal{I}_{\mathcal{X}, m} \subset \langle \boldsymbol{b}_1^\top x,\dots,\boldsymbol{b}_s^\top x\rangle_m$, where $\mathcal{V}_s=\Span(\boldsymbol{b}_1,\dots,\boldsymbol{b}_s)^\perp$. Checking this criterion is equivalent to checking the inclusion $\mathcal{I}_{\mathcal{X}, m} \subset \langle \boldsymbol{b}_1^\top x,\dots,\boldsymbol{b}_s^\top x\rangle_{ m}$ of finite dimensional vector spaces. In principle, this requires computing a basis for the vector space $\mathcal{I}_{\mathcal{X}, m}$. Now recall from section \ref{subsection:SASC-A}, that it is precisely this computation that renders the classic polynomial differentiation algorithm unstable to noise; the main difficulty being the correct estimation of $\dim\left(\mathcal{I}_{\mathcal{X}, m}\right)$, and the dramatic dependence of the quality of clustering on this estimate. Consequently, for the purpose of obtaining a robust algorithm, it is imperative to avoid such a computation. But we know from Lemma \ref{lem:SinglePolynomialCheck-general} that, if $\mathcal{X}_i:=\mathcal{X} \cap \mathcal{S}_i$ is in general position inside $\mathcal{S}_i$ with respect to degree $m$ for every $i \in [n]$, then the criterion for terminating the filtration is equivalent to checking whether in the coordinate representation of $\mathcal{V}_s$ the points $\mathcal{X} \cap \mathcal{V}_s$ admit a vanishing polynomial of degree $ m$. But this is computationally equivalent to checking whether $\mathcal{N}\left(\nu_m\left(\sigma_{\boldsymbol{V}_s}(\mathcal{X} \cap \mathcal{V}_s)\right)\right) \neq 0$; see notation in Lemma \ref{lem:SinglePolynomialCheck-general}. This is a much easier problem than estimating $\dim\left(\mathcal{I}_{\mathcal{X}, m}\right)$, and we solve it implicitly as follows. Recall that in the absence of noise, the norm of the reference point remains unchanged as it passes through the filtration. Hence, it is natural to terminate the filtration at step $s$, if the distance from the projected reference point\footnote{Here by projected reference point we mean the image of the reference point under all projections up to step $s$.} to $\mathcal{V}_{s+1}$ is more than $\delta$, i.e., if the projected reference point is among the points that are being removed upon projection from $\mathcal{V}_s$ to $\mathcal{V}_{s+1}$. To guard against overestimating the number of steps in the filtration, we enhance the termination criterion by additionally deciding to terminate at step $s$ if the number of points that survived the projection from $\mathcal{V}_s$ to $\mathcal{V}_{s+1}$ is less than a pre-defined integer $L$, which is to be thought of as the minimum number of points in a cluster.
\subsection{Combining multiple filtrations} \label{subsection:FSASC-MultipleFiltrations}
Having determined a robust algorithmic implementation for a single filtration, we face the following issue: In general, two points lying approximately in the same subspace $\mathcal{S}$ will produce different hyperplanes that approximately contain $\mathcal{S}$ with different levels of accuracy. In the noiseless case any point would be equally good. In the presence of noise though, the choice of the reference point $\boldsymbol{x}$ becomes significant. How should $\boldsymbol{x}$ be chosen? To deal with this problem in a robust fashion, it is once again natural to construct a single filtration for each point in $\mathcal{X}$ and define an affinity between points $j$ and $j'$ as
\begin{align} \label{eq:affinity}
\boldsymbol{C}_{jj',\text{FSASC}} = \begin{cases}
\| \pi_{s_j}^{(j)} \circ \cdots \circ \pi_1^{(j)} (\boldsymbol{x}_{j'}) \| & \text{if $\boldsymbol{x}_{j'}$ survives all $s_j$ steps}\\
0 & \text{otherwise},
\end{cases}
\end{align}
where $\pi_s^{(j)}$ is the projection from $\mathcal{V}_s$ to $\mathcal{V}_{s+1}$ associated to the filtration of point $\boldsymbol{x}_j$ and $s_j$ is the length of that filtration. This affinity captures the fact that if points $\boldsymbol{x}_j$ and $\boldsymbol{x}_{j'}$ are in the same subspace, then the norm of $\boldsymbol{x}_{j'}$ should not change from step $0$ to step $c$ of the filtration computed with reference point $\boldsymbol{x}_j$, where $c=D - \dim (\mathcal{S})$ is the codimension of the irreducible component $\mathcal{S}$ associated to reference point $\boldsymbol{x}_j$. Otherwise, if $\boldsymbol{x}_j$ and $\boldsymbol{x}_{j'}$ are in different subspaces, the norm of $\boldsymbol{x}_{j'}$ is expected to be reduced by the time the filtration reaches step $c$. In the case of noiseless data, only the points in the correct subspace survive step $c$ and their norms are precisely equal to one. In the case of noisy data, the affinity defined above will only be approximate.
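As an illustration, one row of this affinity can be assembled as follows, representing each filtration step by its $D \times D$ orthogonal-projection matrix in ambient coordinates (which preserves the norms used above). The input names are hypothetical; the per-step projections and the surviving index set are assumed to have been produced by a filtration seeded at $\boldsymbol{x}_j$:
\begin{verbatim}
def affinity_row(X, step_projections, survivors):
    # step_projections: list of D x D orthogonal-projection matrices
    #   pi_1, ..., pi_{s_j} of the filtration seeded at x_j;
    # survivors: indices of points that passed every trimming step.
    Y = X.copy()
    for P in step_projections:
        Y = Y @ P.T            # apply one filtration step to all points
    row = np.zeros(X.shape[0])
    row[survivors] = np.linalg.norm(Y[survivors], axis=1)
    return row
\end{verbatim}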
\subsection{The FSASC algorithm} \label{subsection:FSASC}
Having an affinity matrix as in eq. \eqref{eq:affinity}, standard spectral clustering techniques can be applied to obtain a clustering of $\mathcal{X}$ into $n$ groups. We emphasize that in contrast to the abstract case of Algorithm \ref{alg:AASC}, the number $n$ of clusters must be given as input to the algorithm. On the other hand, the algorithm does not require the subspace dimensions to be given: these are implicitly estimated by means of the filtrations. Finally, one may choose to implement the above scheme for $M$ distinct values of the parameter $\gamma$ and choose the affinity matrix that leads to the largest $n^{th}$ eigengap, as in Algorithm \ref{alg:FSASC}. The above discussion leads to the \emph{Filtrated Spectral Algebraic Subspace Clustering (FSASC)} Algorithm \ref{alg:FSASC}, in which
\begin{itemize}
\item $\textsc{Spectrum}\big(NL(\boldsymbol{C}+\boldsymbol{C}^{\top})\big)$ denotes the spectrum of the normalized Laplacian matrix of $\boldsymbol{C} + \boldsymbol{C}^{\top}$,
\item $\textsc{SpecClust}\big(\boldsymbol{C}^*+(\boldsymbol{C}^*)^\top,n\big)$ denotes spectral clustering being applied to $\boldsymbol{C}^*+\boldsymbol{C}^{*\top}$ to obtain $n$ clusters,
\item $\textsc{Vanishing}\big(\nu_n(\mathcal{X})\big)$ denotes the polynomial whose coefficients are the right singular vector of $\nu_n(\mathcal{X})$ corresponding to the smallest singular value,
\item $\pi \gets \left[ \mathbb{R}^d \rightarrow \mathcal{H} \xrightarrow{\sim} \mathbb{R}^{d-1} \right]$ is to be read as ``$\pi$ is assigned the composite linear transformation $\mathbb{R}^d \rightarrow \mathcal{H} \xrightarrow{\sim} \mathbb{R}^{d-1}$, where the first arrow is the orthogonal projection of $\mathbb{R}^{d}$ to hyperplane $\mathcal{H}$, and the second arrow is the linear isomorphism that maps a basis of $\mathcal{H}$ in $\mathbb{R}^d$ to the standard coordinate basis of $\mathbb{R}^{d-1}$".
\end{itemize}
\begin{algorithm} \caption{Filtrated Spectral Algebraic Subspace Clustering (FSASC)} \label{alg:FSASC}
\begin{spacing}{0.8}
\begin{algorithmic} [1]
\Procedure{FSASC}{$\mathcal{X}, D, n,L,\{\gamma_m\}_{m=1}^M$}
\If {$N < \mathcal{M}_n(D)$}
\State \Return('Not enough points');
\Else
\State eigengap $\gets 0$; $\boldsymbol{C}^* \gets 0_{N \times N}$;
\State $\boldsymbol{x}_j \gets \boldsymbol{x}_j / ||\boldsymbol{x}_j||, \, \forall j \in [N]$;
\State $p \gets \Call{Vanishing}{\nu_n(\mathcal{X})}$;
\State $\beta \gets \frac{1}{N} \sum_{j=1}^N \big|\langle \boldsymbol{x}_j, \frac{\nabla p|_{\boldsymbol{x}_j}}{||\nabla p|_{\boldsymbol{x}_j}||}\rangle\big|$;
\For {$k = 1 : M$}
\State $\delta \gets \beta \cdot \gamma_k, \, \boldsymbol{C} \gets 0_{N \times N}$;
\For {$j = 1 : N$}
\State $C_{j,:} \gets \Call{Filtration}{\mathcal{X},\boldsymbol{x}_j,p,L,\delta,n}$;
\EndFor
\State $\{\lambda_s\}_{s=1}^N \gets \Call{Spectrum}{NL(\boldsymbol{C}+\boldsymbol{C}^\top)}$ ;
\If {(eigengap $< \lambda_{n+1} - \lambda_n$)}
\State eigengap $\gets \lambda_{n+1} - \lambda_n$; $\boldsymbol{C}^* \gets \boldsymbol{C}$;
\EndIf
\EndFor
\State $\left\{\mathcal{Y}_i\right\}_{i=1}^n \gets \Call{SpecClust}{\boldsymbol{C}^*+\boldsymbol{C}^{*\top},n}$;
\State \Return $\left\{\mathcal{Y}_i\right\}_{i=1}^n$;
\EndIf
\EndProcedure
\Statex
\Function{Filtration}{$\mathcal{X},\boldsymbol{x},p,L,\delta,n$}
\State $d \gets D, \, \mathcal{J} \gets [N], q \gets p, \boldsymbol{c} \gets 0_{1 \times N}$;
\State flag $\gets 1$;
\While {($d >1$) and ($\text{flag}=1$)}
\State $\mathcal{H} \gets \langle \nabla q|_{\boldsymbol{x}} \rangle^\perp, \, \pi \gets \left[ \mathbb{R}^d \rightarrow \mathcal{H} \xrightarrow{\sim} \mathbb{R}^{d-1} \right]$;
\If {$(||\boldsymbol{x}|| - || \pi(\boldsymbol{x})||)/||\boldsymbol{x}|| > \delta$}
\If {$d = D$}
\State $\boldsymbol{c}(j') \gets || \pi(\boldsymbol{x}_{j'})||, \, \forall j' \in [N]$;
\EndIf
\State flag $\gets 0$;
\Else
\State $\mathcal{J} \gets \left\{j' \in [N] : \frac{||\boldsymbol{x}_{j'}|| - || \pi(\boldsymbol{x}_{j'})||}{||\boldsymbol{x}_{j'}||} \le \delta \right\}$
\If {$\left| \mathcal{J} \right| < L$}
\State flag $\gets 0$;
\Else
\State $\boldsymbol{c}(j') \gets || \pi(\boldsymbol{x}_{j'})||, \, \forall j' \in \mathcal{J}$;
\State $\boldsymbol{c}(j') \gets 0, \, \forall j' \in [N]- \mathcal{J}$;
\If {$|\mathcal{J}| < \mathcal{M}_n(d)$}
\State flag $\gets 0$;
\Else
\State $d \gets d -1, \boldsymbol{x} \gets \pi(\boldsymbol{x})$;
\State $\boldsymbol{x}_{j'} \gets \pi(\boldsymbol{x}_{j'}) \, \forall j' \in \mathcal{J}$;
\State $ \mathcal{X} \gets \left\{\boldsymbol{x}_{j'}: j' \in \mathcal{J} \right\}$;
\State $q \gets \Call{Vanishing}{\nu_n(\mathcal{X})}$;
\EndIf
\EndIf
\EndIf
\EndWhile
\State \Return($\boldsymbol{c}$);
\EndFunction
\end{algorithmic}
\end{spacing}
\end{algorithm}
\subsection{A distance-based affinity (SASC-D)} \label{subsection:SASC-D}
Observe that\footnote{We will henceforth be assuming that all points $\boldsymbol{x}_1,\dots,\boldsymbol{x}_N$ are normalized to unit $\ell_2$-norm.} the FSASC affinity \eqref{eq:affinity} between points $\boldsymbol{x}_j$ and $\boldsymbol{x}_{j'}$ can be interpreted as the distance of point $\boldsymbol{x}_{j'}$ to the
orthogonal complement of the final ambient space $\mathcal{V}_{s_j}$ of the filtration corresponding to reference point $\boldsymbol{x}_j$. If all irreducible components of $\mathcal{A}$ were hyperplanes, then the optimal length of each filtration would be $1$. Inspired by this observation, we may define a simple \emph{distance-based} affinity, alternative to the \emph{angle-based} affinity of eq. \eqref{eq:ABA}, by
\begin{align}
\boldsymbol{C}_{jj',\text{dist}}: = 1 - \frac{\left|\boldsymbol{x}_{j'}^\top \nabla p|_{\boldsymbol{x}_j}\right|}{\left\|\nabla p|_{\boldsymbol{x}_j}\right\|_2}. \label{eq:DBA}
\end{align} The affinity of eq. \eqref{eq:DBA} is theoretically justified only for hyperplanes, as $\boldsymbol{C}_{jj',\text{angle}}$ is; yet as we will soon see in the experiments, $\boldsymbol{C}_{jj',\text{dist}}$ is much more robust than $\boldsymbol{C}_{jj',\text{angle}}$ in the case of subspaces of different dimensions. We attribute this phenomenon to the fact that, in the absence of noise, it is always the case that $\boldsymbol{C}_{jj',\text{dist}}=1$ whenever $\boldsymbol{x}_j,\boldsymbol{x}_{j'}$ lie in the same irreducible component; as mentioned in section \ref{subsection:SASC-A}, this need not be the case for $\boldsymbol{C}_{jj',\text{angle}}$. We will be referring to the Spectral ASC method that uses affinity \eqref{eq:DBA} as
\emph{SASC-D}.
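The affinity of eq. \eqref{eq:DBA} is straightforward to compute from the single global polynomial; a sketch, assuming the rows of \texttt{X} are the unit-norm data points and reusing the \texttt{grad\_poly} helper from the earlier snippets:
\begin{verbatim}
def sasc_d_affinity(X, c, n):
    # C[j, j'] = 1 - |x_{j'}^T grad p|_{x_j}| / ||grad p|_{x_j}||_2
    G = np.array([grad_poly(c, x, n) for x in X])   # gradient at each x_j
    G = G / np.linalg.norm(G, axis=1, keepdims=True)
    return 1.0 - np.abs(G @ X.T)
\end{verbatim}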
\subsection{Discussion on the computational complexity} \label{subsection:complexity}
As mentioned in section \ref{section:ASC}, the main object that needs to be computed in algebraic subspace clustering is a vanishing polynomial $p$ in $D$ variables of degree $n$, where $D$ is the ambient dimension of the data and $n$ is the number of subspaces.
This amounts to computing a right null-vector of the $N \times \mathcal{M}_n(D)$ embedded data matrix $\nu_n(\mathcal{X})$, where $\mathcal{M}_n(D):={n+D-1 \choose n}$, and $N \ge \mathcal{M}_n(D)$. In practice, the data are noisy and there are usually no vanishing polynomials of degree $n$; instead one needs to compute the right singular vector of the embedded data matrix that corresponds to the smallest singular value.
Approximate iterative methods for performing this task do exist \cite{Schwetlick:LAA03,Liang:ETNA14,Stathopoulos:SIAM15}, and in this work we use the MATLAB function \texttt{svds.m}, which is based on an \emph{inverse-shift iteration} technique; see, e.g., the introduction of \cite{Liang:ETNA14}. Even though \texttt{svds.m} is in principle more efficient than computing the full SVD of $\nu_n(\mathcal{X})$ via the MATLAB function \texttt{svd.m}, the complexity of both functions is of the same order
\begin{align}
N \mathcal{M}_n(D)^2 = N { n+D-1 \choose n}^2, \label{eq:SVDcomplexity}
\end{align} which is the well-known complexity of SVD \cite{golub1996matrix} adapted to the dimensions of $\nu_n(\mathcal{X})$. This is because \texttt{svds.m} requires at each iteration the solution to a linear system of equations whose coefficient matrix has size of the same order as the size of $\nu_n(\mathcal{X})$.
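To get a feel for the numbers, the cost proxy of eq. \eqref{eq:SVDcomplexity} is easy to evaluate; for instance, in the synthetic setting of the experiments below ($N=600$, $n=3$, $D=9$) it is already of the order of $10^7$:
\begin{verbatim}
from math import comb

def svd_cost(N, n, D):
    # N * M_n(D)^2, with M_n(D) = C(n + D - 1, n)
    return N * comb(n + D - 1, n) ** 2

print(svd_cost(600, 3, 9))   # comb(11, 3) = 165; 600 * 165**2 = 16335000
\end{verbatim}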
Evidently, the complexity of \eqref{eq:SVDcomplexity} is prohibitive for large $D$ even for moderate values of $n$.
If we discount the spectral clustering step, this is precisely the complexity of SASC-A of
section \ref{subsection:SASC-A} as well as of SASC-D of section \ref{subsection:SASC-D}.
On the other hand, FSASC (Algorithm \ref{alg:FSASC}) is even more computationally demanding, as it requires the computation of a vanishing polynomial
at each step of every filtration, and there are as many filtrations as the total number of points. Assuming for simplicity that there is no noise and that the dimensions of all subspaces are equal to $d<D$, the complexity of a single filtration in FSASC is of the order of
\begin{align}
\sum_{i=0}^{D-d} N \left(\mathcal{M}_n(D-i) \right)^2 = N \sum_{i=0}^{D-d} { n+D-i-1 \choose n}^2.
\end{align} Since FSASC computes a filtration for each and every point, its total complexity (discounting the spectral clustering step and assuming that we are using a single value for the parameter $\gamma$) is
\begin{align}
N \, \sum_{i=0}^{D-d} N \left(\mathcal{M}_n(D-i) \right)^2 = N^2 \sum_{i=0}^{D-d} { n+D-i-1 \choose n}^2.
\end{align} Even though the filtrations are independent of each other, and hence fully parallelizable, the complexity of FSASC is still prohibitive for large-scale applications even after parallelization. Nevertheless, when the subspace dimensions are small, FSASC is applicable after one reduces the dimensionality of the data by means of a projection, as will be done in section \ref{subsection:MotionSegmentation}. In any case, we hope that the complexity issue of FSASC will be addressed in future research.
\section{Experiments}
\label{subsection:Experiments}
In this section we evaluate experimentally the proposed methods FSASC (Algorithm \ref{alg:FSASC}) and SASC-D (section \ref{subsection:SASC-D}) and compare them to other state-of-the-art subspace clustering methods, using synthetic data (section \ref{subsection:SyntheticExperiments}), as well as real motion segmentation data (section \ref{subsection:MotionSegmentation}).
\subsection{Experiments on synthetic data} \label{subsection:SyntheticExperiments}
We begin by randomly generating $n=3$ subspaces of various dimension configurations $(d_1,d_2,d_3)$ in $\mathbb{R}^9$. The choice $D=9$ for the ambient dimension is motivated by applications in two-view geometry \cite{Hartley-Zisserman04,Vidal:IJCV06-multibody}. Once the subspaces are randomly generated, we use a zero-mean unit-variance Gaussian distribution with support on each subspace to randomly sample $N_i = 200$ points per subspace.
The points of each subspace are then corrupted by additive zero-mean Gaussian noise with standard deviation $\sigma\in\{0,0.01, 0.03, 0.05\}$ and support in the orthogonal complement of the subspace. All data points are subsequently normalized to have unit Euclidean norm.
\begin{table}
\centering
\caption{Mean subspace clustering error in $\%$ over $100$ independent trials for synthetic data randomly generated in three random subspaces of $\mathbb{R}^{9}$ of dimensions $(d_1,d_2,d_3)$. The total number of points is $N=600$ with $200$ points associated to each subspace. We consider noiseless data $(\sigma=0)$ as well as data corrupted by zero-mean additive white noise of standard deviation $\sigma$ and support in the orthogonal complement of each subspace.}
\label{table:Synthetic_n3_error}
\ra{0.7}
\begin{tabular}{@{}l@{\, \, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c}\toprule[1pt] method & $(2,3,4)$ & $(4,5,6)$ & $(6,7,8)$ & $(2,5,8)$ & $(3,3,3)$ & $(6,6,6)$ & $(7,7,7)$ & $(8,8,8)$\\
\midrule[0.5pt]
& & & & $\sigma=0$\\
\cmidrule{5-5}
FSASC & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ \\
SASC-D & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ \\
SASC-A & $42$ & $39$ & $6$ & $14$ & $37$ & $24$ & $12$ & $\boldsymbol{0}$ \\
SSC & $\boldsymbol{0}$ & $1$ & $18$ & $49$ & $\boldsymbol{0}$ & $3$ & $14$ & $55$ \\
LRR & $\boldsymbol{0}$ & $3$ & $39$ & $5$ & $\boldsymbol{0}$ & $9$ & $42$ & $51$ \\
LRR-H & $\boldsymbol{0}$ & $3$ & $36$ & $6$ & $\boldsymbol{0}$ & $8$ & $38$ & $51$ \\
LRSC & $\boldsymbol{0}$ & $3$ & $39$ & $5$ & $\boldsymbol{0}$ & $9$ & $42$ & $51$ \\
LSR & $\boldsymbol{0}$ & $3$ & $39$ & $5$ & $\boldsymbol{0}$ & $9$ & $42$ & $51$ \\
LSR-H & $\boldsymbol{0}$ & $3$ & $32$ & $6$ & $\boldsymbol{0}$ & $8$ & $38$ & $51$ \\
\midrule[0.1pt]
& & & & $\sigma=0.01$\\
\cmidrule{5-5}
FSASC & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{1}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $5$ \\
SASC-D & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $1$ & $\boldsymbol{1}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{3}$ \\
SASC-A & $54$ & $45$ & $8$ & $24$ & $57$ & $36$ & $13$ & $\boldsymbol{3}$ \\
SSC & $2$ & $2$ & $18$ & $49$ & $\boldsymbol{0}$ & $3$ & $13$ & $55$ \\
LRR & $\boldsymbol{0}$ & $3$ & $38$ & $5$ & $\boldsymbol{0}$ & $9$ & $42$ & $51$ \\
LRR-H & $\boldsymbol{0}$ & $3$ & $36$ & $7$ & $\boldsymbol{0}$ & $8$ & $38$ & $51$ \\
LRSC & $\boldsymbol{0}$ & $3$ & $38$ & $5$ & $\boldsymbol{0}$ & $9$ & $42$ & $51$ \\
LSR & $\boldsymbol{0}$ & $3$ & $39$ & $5$ & $\boldsymbol{0}$ & $9$ & $42$ & $51$ \\
LSR-H & $\boldsymbol{0}$ & $3$ & $32$ & $6$ & $\boldsymbol{0}$ & $8$ & $38$ & $51$ \\
\midrule[0.1pt]
& & & & $\sigma=0.03$\\
\cmidrule{5-5}
FSASC & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{1}$ & $\boldsymbol{2}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{1}$ & $10$ \\
SASC-D & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $4$ & $3$ & $\boldsymbol{0}$ & $1$ & $2$ & $\boldsymbol{6}$ \\
SASC-A & $57$ & $46$ & $13$ & $31$ & $58$ & $37$ & $15$ & $7$ \\
SSC & $\boldsymbol{0}$ & $1$ & $20$ & $48$ & $\boldsymbol{0}$ & $3$ & $13$ & $55$ \\
\midrule[0.1pt]
& & & & $\sigma=0.05$\\
\cmidrule{5-5}
FSASC & $1$ & $\boldsymbol{0}$ & $\boldsymbol{2}$ & $\boldsymbol{3}$ & $1$ & $\boldsymbol{0}$ & $\boldsymbol{2}$ & $14$ \\
SASC-D & $1$ & $1$ & $7$ & $5$ & $1$ & $2$ & $5$ & $\boldsymbol{10}$ \\
SASC-A & $58$ & $46$ & $17$ & $36$ & $60$ & $39$ & $17$ & $11$ \\
SSC & $\boldsymbol{0}$ & $2$ & $20$ & $49$ & $\boldsymbol{0}$ & $3$ & $15$ & $55$ \\
LRR & $1$ & $3$ & $39$ & $6$ & $\boldsymbol{0}$ & $10$ & $42$ & $51$ \\
LRR-H & $1$ & $3$ & $36$ & $13$ & $\boldsymbol{0}$ & $8$ & $38$ & $52$ \\
LRSC & $1$ & $3$ & $39$ & $6$ & $\boldsymbol{0}$ & $10$ & $42$ & $51$ \\
LSR & $1$ & $3$ & $39$ & $6$ & $\boldsymbol{0}$ & $10$ & $42$ & $51$ \\
LSR-H & $1$ & $3$ & $32$ & $7$ & $\boldsymbol{0}$ & $8$ & $38$ & $51$ \\
\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Mean intra-cluster connectivity over $100$ independent trials for synthetic data randomly generated in three random subspaces of $\mathbb{R}^{9}$ of dimensions $(d_1,d_2,d_3)$. There are $200$ points associated to each subspace, which are corrupted by zero-mean additive white noise of standard deviation $\sigma$ and support in the orthogonal complement of each subspace.}
\label{table:Synthetic_n3_intra}
\ra{0.7}
\begin{tabular}{@{}l@{\, \, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c}\toprule[1pt] method & $(2,3,4)$ & $(4,5,6)$ & $(6,7,8)$ & $(2,5,8)$ & $(3,3,3)$ & $(6,6,6)$ & $(7,7,7)$ & $(8,8,8)$\\
\midrule[0.5pt]
& & & & $\sigma=0$\\
\cmidrule{5-5}
FSASC & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ \\
SASC-D & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ \\
SASC-A & $0.37$ & $0.37$ & $0.37$ & $0.39$ & $0.34$ & $0.41$ & $0.37$ & $\boldsymbol{1}$ \\
SSC & $10^{-3}$ & $0.01$ & $10^{-4}$ & $10^{-3} $ & $0.01$ & $0.02$ & $10^{-3}$ & $10^{-7}$ \\
LRR & $0.59$ & $0.37$ & $0.43$ & $0.31$ & $0.64$ & $0.41$ & $0.45$ & $0.50$ \\
LRR-H & $0.28$ & $0.23$ & $0.23$ & $0.19$ & $0.31$ & $0.24$ & $0.24$ & $0.26$ \\
LRSC & $0.59$ & $0.37$ & $0.43$ & $0.31$ & $0.64$ & $0.41$ & $0.45$ & $0.50$ \\
LSR & $0.59$ & $0.37$ & $0.42$ & $0.31$ & $0.64$ & $0.41$ & $0.45$ & $0.50$ \\
LSR-H & $0.28$ & $0.24$ & $0.24$ & $0.21$ & $0.31$ & $0.25$ & $0.25$ & $0.27$ \\
\midrule[0.5pt]
& & & & $\sigma=0.01$\\
\cmidrule{5-5}
FSASC & $0.05$ & $0.35$ & $0.43$ & $0.10$ & $0.09$ & $0.43$ & $0.42$ & $0.43$ \\
SASC-D & $\boldsymbol{0.91}$ & $\boldsymbol{0.93}$ & $\boldsymbol{0.85}$ & $\boldsymbol{0.84}$ & $\boldsymbol{0.94}$ & $\boldsymbol{0.91}$ & $\boldsymbol{0.87}$ & $\boldsymbol{0.85}$ \\
SASC-A & $0.32$ & $0.30$ & $0.12$ & $0.14$ & $0.30$ & $0.29$ & $0.24$ & $0.07$ \\
SSC & $10^{-3}$ & $0.01$ & $10^{-4}$ & $10^{-3} $ & $0.01$ & $0.02$ & $10^{-3}$ & $10^{-7}$ \\
LRR & $0.42$ & $0.37$ & $0.43$ & $0.31$ & $0.51$ & $0.41$ & $0.45$ & $0.50$ \\
LRR-H & $0.13$ & $0.23$ & $0.23$ & $0.17$ & $0.22$ & $0.24$ & $0.24$ & $0.26$ \\
LRSC & $0.42$ & $0.37$ & $0.43$ & $0.31$ & $0.52$ & $0.41$ & $0.45$ & $0.50$ \\
LSR & $0.41$ & $0.37$ & $0.42$ & $0.31$ & $0.51$ & $0.41$ & $0.45$ & $0.50$ \\
LSR-H & $0.11$ & $0.24$ & $0.24$ & $0.18$ & $0.21$ & $0.25$ & $0.25$ & $0.27$ \\
\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Mean inter-cluster connectivity in $\%$ over $100$ independent trials for synthetic data randomly generated in three random subspaces of $\mathbb{R}^{9}$ of dimensions $(d_1,d_2,d_3)$. There are $200$ points associated to each subspace, which are corrupted by zero-mean additive white noise of standard deviation $\sigma$ and support in the orthogonal complement of each subspace.}
\label{table:Synthetic_n3_inter}
\ra{0.7}
\begin{tabular}{@{}l@{\, \, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c@{\, \,}c}\toprule[1pt] method & $(2,3,4)$ & $(4,5,6)$ & $(6,7,8)$ & $(2,5,8)$ & $(3,3,3)$ & $(6,6,6)$ & $(7,7,7)$ & $(8,8,8)$\\
\midrule[0.5pt]
& & & & $\sigma=0$\\
\cmidrule{5-5}
FSASC & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{1}$ & $\boldsymbol{1}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{0}$ & $\boldsymbol{2}$ \\
SASC-D & $60$ & $60$ & $60$ & $60$ & $60$ & $60$ & $60$ & $60$ \\
SASC-A & $55$ & $55$ & $38$ & $43$ & $55$ & $50$ & $42$ & $35$ \\
SSC & $\boldsymbol{0}$ & $2$ & $22$ & $2$ & $\boldsymbol{0}$ & $7$ & $23$ & $46$ \\
LRR & $1$ & $49$ & $60$ & $45$ & $\boldsymbol{0}$ & $55$ & $60$ & $63$ \\
LRR-H & $\boldsymbol{0}$ & $18$ & $43$ & $9$ & $\boldsymbol{0}$ & $32$ & $44$ & $55$ \\
LRSC & $2$ & $49$ & $60$ & $45$ & $2$ & $55$ & $60$ & $63$ \\
LSR & $2$ & $49$ & $60$ & $43$ & $2$ & $56$ & $60$ & $64$ \\
LSR-H & $\boldsymbol{0}$ & $11$ & $24$ & $6$ & $\boldsymbol{0}$ & $19$ & $25$ & $30$ \\
\midrule[0.5pt]
& & & & $\sigma=0.01$\\
\cmidrule{5-5}
FSASC & $2$ & $4$ & $\boldsymbol{22}$ & $18$ & $2$ & $\boldsymbol{6}$ & $\boldsymbol{15}$ & $35$ \\
SASC-D & $62$ & $61$ & $60$ & $61$ & $62$ & $60$ & $60$ & $60$ \\
SASC-A & $63$ & $58$ & $46$ & $51$ & $64$ & $55$ & $47$ & $39$ \\
SSC & $\boldsymbol{0.1}$ & $\boldsymbol{1}$ & $23$ & $\boldsymbol{3} $ & $\boldsymbol{0.1}$ & $7$ & $23$ & $46$ \\
LRR & $17$ & $49$ & $60$ & $45$ & $16$ & $55$ & $60$ & $63$ \\
LRR-H & $1$ & $18$ & $43$ & $9$ & $1$ & $32$ & $44$ & $55$ \\
LRSC & $17$ & $49$ & $60$ & $45$ & $16$ & $55$ & $60$ & $63$ \\
LSR & $17$ & $49$ & $60$ & $46$ & $16$ & $55$ & $60$ & $64$ \\
LSR-H & $\boldsymbol{0.1}$ & $11$ & $24$ & $6$ & $\boldsymbol{0.1}$ & $19$ & $25$ & $\boldsymbol{30}$ \\
\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{table}
\centering
\caption{Mean running time of each method in seconds over $100$ independent trials for synthetic data randomly generated in three random subspaces of $\mathbb{R}^{9}$ of dimensions $(d_1,d_2,d_3)$. There are $200$ points associated to each subspace, which are corrupted by zero-mean additive white noise of standard deviation $\sigma=0.01$ and support in the orthogonal complement of each subspace. The reported running time is the time required to compute the affinity matrix, and it does not include the spectral clustering step. The experiment is run in MATLAB on a standard MacBook Pro with a dual-core 2.5GHz processor and a total of $4$GB cache memory.}
\label{table:Synthetic_n3_rt}
\ra{0.7}
\begin{tabular}{@{}l@{\, \, \,}r@{\, \,}r@{\, \,}r@{\, \,}r@{\, \,}r@{\, \,}r@{\, \,}r@{\, \,}r}\toprule[1pt] method & $(2,3,4)$ & $(4,5,6)$ & $(6,7,8)$ & $(2,5,8)$ & $(3,3,3)$ & $(6,6,6)$ & $(7,7,7)$ & $(8,8,8)$\\
\midrule[0.5pt]
& & & & $\sigma=0.01$\\
\cmidrule{5-5}
FSASC & $13.57$ & $12.11$ & $8.34$ & $13.90$ & $13.69$ & $10.67$ & $8.55$ & $6.01$ \\
SASC-D & $0.03$ & $0.03$ & $0.03$ & $0.03$ & $0.03$ & $0.03$ & $0.03$ & $0.03$ \\
SASC-A & $0.03$ & $0.03$ & $0.03$ & $0.03$ & $0.03$ & $0.03$ & $0.03$ & $0.03$ \\
SSC & $5.01$ & $4.84$ & $5.06$ & $6.59 $ & $4.90$ & $4.71$ & $4.80$ & $5.03$ \\
LRR & $0.54$ & $0.36$ & $0.34$ & $0.45$ & $0.53$ & $0.34$ & $0.34$ & $0.34$ \\
LRR-H & $0.65$ & $0.48$ & $0.45$ & $0.61$ & $0.65$ & $0.46$ & $0.46$ & $0.45$ \\
LRSC & $\boldsymbol{0.01}$ & $\boldsymbol{0.01}$ & $\boldsymbol{0.01}$ & $\boldsymbol{0.01}$ & $\boldsymbol{0.01}$ & $\boldsymbol{0.01}$ & $\boldsymbol{0.01}$ & $\boldsymbol{0.01}$ \\
LSR & $0.05$ & $0.05$ & $0.05$ & $0.07$ & $0.05$ & $0.05$ & $0.05$ & $0.05$ \\
LSR-H & $0.25$ & $0.25$ & $0.24$ & $0.32$ & $0.24$ & $0.24$ & $0.24$ & $0.24$ \\
\bottomrule[1pt]
\end{tabular}
\end{table}
Using data as above, we compare the proposed methods FSASC (Algorithm \ref{alg:FSASC}) and SASC-D (section \ref{subsection:SASC-D}) to SASC-A (section \ref{subsection:SASC-A}), the state of the art among algebraic subspace clustering methods, as well as to state-of-the-art \emph{self-expressiveness}-based methods, such as Sparse Subspace Clustering (SSC) \cite{Elhamifar:TPAMI13}, Low-Rank Representation (LRR) \cite{Liu:TPAMI13,Liu:ICML10}, Low-Rank Subspace Clustering (LRSC) \cite{Vidal:PRL14} and Least-Squares Regression subspace clustering (LSR) \cite{Lu:ECCV12}.
For FSASC we use $L=10$ and $\gamma=0.1$. For SSC we use the Lasso version with $\alpha_z=20$, where $\alpha_z$ is defined above equation (14) in \cite{Elhamifar:TPAMI13}, and $\rho=0.7$, where $\rho$ is the thresholding parameter of the SSC affinity (see MATLAB function \texttt{thrC.m} provided by the authors of \cite{Elhamifar:TPAMI13}). For LRR we use the ADMM version provided by its first author with $\lambda=4$ in equation (7) of \cite{Liu:PAMI12}. For LRSC we use the ADMM method proposed by its authors with $\tau=420$ and $\alpha=4000$, where $\alpha$ and $\tau$ are defined at problem $(P)$ of page $2$ in \cite{Vidal:PRL14}. Finally, for LSR we use equation (16) in \cite{Lu:ECCV12} with $\lambda=0.0048$. For both LRR and LSR we also report results with the heuristic post-processing of the affinity matrix proposed by the first author of \cite{Liu:PAMI12} in their MATLAB function \texttt{lrr\_motion\_seg.m}; we denote these versions of LRR and LSR by LRR-H and LSR-H respectively.
Notice that all compared methods are spectral methods, i.e., they produce a pairwise affinity matrix $\boldsymbol{C}$ upon which spectral clustering is applied. To evaluate the quality of the produced affinity, besides reporting the standard subspace clustering error, which is the percentage of misclassified points, we also report the \emph{intra-cluster} and \emph{inter-cluster connectivities} of the affinity matrices $\boldsymbol{C}$. As intra-cluster connectivity we use the minimum algebraic connectivity among the subgraphs corresponding to the ground truth clusters. The algebraic connectivity of a subgraph is the second smallest eigenvalue of its normalized Laplacian, and measures how well connected the subgraph is. In particular, values close to $1$ indicate that the subgraph is indeed well-connected (single connected component), while values close to $0$ indicate that the subgraph tends to split into at least two connected components. Clearly, from a clustering point of view, the latter situation is undesirable, since it may lead to over-segmentation. Finally, as inter-cluster connectivity we use the percentage of the $\ell_1$-norm of the affinity matrix $\boldsymbol{C}$ that corresponds to erroneous connections, i.e., the quantity $\sum_{\boldsymbol{x}_j \in \mathcal{S}_i, \boldsymbol{x}_{j'} \in \mathcal{S}_{i'}, i \neq i'} |\boldsymbol{C}_{j,j'}|/\|\boldsymbol{C}\|_1$.
The smaller the inter-cluster connectivity is, the fewer erroneous connections the affinity contains. To summarize, a high-quality affinity matrix is characterized by high intra-cluster and low inter-cluster connectivity, which is then expected to lead to small spectral clustering error.
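As a concrete illustration of these two measures, the following MATLAB sketch (our own, not code released with any of the compared methods; the affinity \texttt{C} and label vector \texttt{g} are assumed inputs, and the function would be saved as \texttt{connectivities.m}) computes both quantities from an affinity matrix and ground-truth labels.
\begin{verbatim}
% Sketch: intra- and inter-cluster connectivity of a symmetric,
% nonnegative affinity C, given ground-truth labels g (column vector).
function [intra, inter] = connectivities(C, g)
  intra = inf;
  for i = unique(g)'                % algebraic connectivity per cluster
    Ci = C(g == i, g == i);
    deg = sum(Ci, 2); deg(deg == 0) = eps;
    Ln = eye(size(Ci, 1)) - diag(deg.^-0.5) * Ci * diag(deg.^-0.5);
    ev = sort(eig(Ln));             % normalized Laplacian spectrum
    intra = min(intra, ev(2));      % second smallest eigenvalue
  end
  mask = bsxfun(@ne, g, g');        % pairs from different clusters
  inter = sum(abs(C(mask))) / sum(abs(C(:)));
end
\end{verbatim}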
Tables \ref{table:Synthetic_n3_error}-\ref{table:Synthetic_n3_inter} show the clustering error, and the intra-cluster and inter-cluster connectivities associated with each method, averaged over $100$ independent experiments. Inspection of Table \ref{table:Synthetic_n3_error} reveals that, in the absence of noise ($\sigma=0$), FSASC gives exactly zero error across all dimension configurations. This is in agreement with the theoretical results of section \ref{section:mfAASC}, which guarantee that, in the absence of noise, the only points that survive the filtration associated with some reference point are precisely the points lying in the same subspace as the reference point. Indeed, notice that in Table \ref{table:Synthetic_n3_intra} and for $\sigma=0$ the connectivity attains its maximum value $1$, indicating that the subgraphs corresponding to the ground truth clusters are fully connected. Moreover, in Table \ref{table:Synthetic_n3_inter} we see that for $\sigma=0$ the erroneous connections are either zero or negligible. This practically means that each point is connected to each and every other point from the same subspace, while not being connected to any other points, which is the ideal structure that an affinity matrix should have.
Remarkably, the proposed SASC-D, which is much simpler than FSASC, also gives zero error for zero noise. Table \ref{table:Synthetic_n3_intra} shows that SASC-D achieves perfect intra-cluster connectivity, while Table \ref{table:Synthetic_n3_inter} shows that
the inter-cluster connectivity associated with SASC-D is very large. This is clearly an undesirable feature, which nevertheless does not seem to affect the clustering error in this experiment, perhaps because the intra-cluster connectivity is very high. As we will see later (section \ref{subsection:MotionSegmentation}), though, the situation is different for real data, for which SASC-D performs worse than FSASC.
Going back to Table \ref{table:Synthetic_n3_error} and $\sigma=0$, we see that the improvement in performance of the proposed FSASC and SASC-D over the existing SASC-A
is dramatic: indeed, SASC-A succeeds only in the case of hyperplanes, i.e., when $d_1=d_2=d_3=8$. This is theoretically expected, since in the case of hyperplanes there is only one normal direction per subspace, and the gradient of the vanishing polynomial at a point in the hyperplane is guaranteed to recover this direction. However, when the subspaces have lower dimensions, as is the case, e.g., for the dimension configuration $(4,5,6)$, then there are infinitely many directions orthogonal to each subspace. Hence, a priori, the gradient of a vanishing polynomial may recover any such direction, and such directions could be dramatically different even for points in the same subspace (e.g., they could be orthogonal), thus leading to a clustering error of $39\%$.
As far as the rest of the self-expressiveness methods are concerned, Table \ref{table:Synthetic_n3_error} ($\sigma=0$) shows what we expect: the methods
give a perfect clustering when the subspace dimensions are small, e.g., for dimension configurations $(2,3,4)$ and $(3,3,3)$, they start to degrade as the subspace dimensions increase ($(4,5,6)$, $(6,6,6)$), and eventually they fail when
the subspace dimensions become large enough ($(6,7,8)$,$(7,7,7)$,$(8,8,8)$). To examine
the effect of the subspace dimension on the connectivity, let us consider SSC and the dimension configurations $(2,3,4)$ and $(2,5,8)$: Table \ref{table:Synthetic_n3_intra} ($\sigma=0$) shows
that for both of these configurations the intra-cluster connectivity has a small value of $10^{-3}$. This is expected, since SSC computes sparse affinities and it is known to produce weakly connected clusters. Now, Table \ref{table:Synthetic_n3_inter} ($\sigma=0$)
shows that the inter-cluster connectivity of SSC for $(2,3,4)$ is zero, i.e., there are no erroneous connections, and so, even though the intra-cluster connectivity is as small as $10^{-3}$, spectral clustering can still give a zero clustering error. On the other hand, for the case $(2,5,8)$ the inter-cluster connectivity is $2\%$, which, even though small, when coupled with the small intra-cluster connectivity of $10^{-3}$, leads to a spectral clustering error of $49\%$. Finally, notice that for the case of $(8,8,8)$ the intra-cluster connectivity is $10^{-7}$ and the inter-cluster connectivity is $46\%$, indicating that the quality of the produced affinity is very poor, thus explaining the corresponding clustering error of $55\%$.
\begin{table}[t!]
\centering
\caption{Mean subspace clustering error in $\%$ over $100$ independent trials for synthetic data randomly generated in four random subspaces of $\mathbb{R}^{9}$ of dimensions $(8,8,5,3)$. There are $200$ points associated to each subspace, which are corrupted by zero-mean additive white noise of standard deviation $\sigma=0, 0.01, 0.03, 0.05$ and support in the orthogonal complement of each subspace.}
\label{table:Synthetic_n3_nv4}
\ra{0.7}
\begin{tabular}{@{}l@{\, \, \, \,}r@{\, \, \, \, }r@{\, \, \, \, }r@{\, \, \, \, }r@{\, \, \, \, }r@{\, \, \, \, }r@{\, \, \, \, }r@{\, \, \, \, }r}\toprule[1pt] method / $\sigma$ & $0$ & $0.01$ & $0.03$ & $0.05$ \\
\midrule[0.5pt]
FSASC & $\boldsymbol{0}$ & $\boldsymbol{2.19}$ & $\boldsymbol{5.08}$ & $\boldsymbol{7.65}$ \\
SASC-D & $22.88$ & $17.83$ & $15.93$ & $17.44$ \\
SASC-A & $22.88$ & $27.21$ & $31.43$ & $36.36$ \\
SSC & $64.39$ & $64.17$ & $64.36$ & $64.13 $ \\
LRR & $42.86$ & $42.88$ & $43.04$ & $42.91$ \\
LRR-H & $42.08$ & $42.06$ & $42.23$ & $42.21$ \\
LRSC & $42.85$ & $42.88$ & $43.05$ & $42.90$ \\
LSR & $42.84$ & $42.85$ & $43.00$ & $42.93$ \\
LSR-H & $38.72$ & $38.74$ & $38.96$ & $39.86$ \\
\bottomrule[1pt]
\end{tabular}
\end{table}
When the data are corrupted by noise ($\sigma=0.01,0.03,0.05$), the rest of the Tables
\ref{table:Synthetic_n3_error}-\ref{table:Synthetic_n3_inter} show that FSASC is the best
method, with the exception of the case of hyperplanes. In this latter case, i.e., when $d_1=d_2=d_3=8$, the best method is SASC-D, with a clustering error of $6 \%$ when $\sigma=0.03$, as opposed to $10\%$ for FSASC. This is expected: for codimension-$1$ subspaces the length of each filtration should be precisely $1$, since in theory the filtration length equals the codimension of the subspace associated with the reference point. Since FSASC automatically determines this length based on the data and the value of the parameter $\gamma$, it is expected that when the data are noisy, errors will be made in the estimation of the filtration length. On the other hand, SASC-D is equivalent to FSASC with an a priori configured filtration length equal to $1$, thus performing better than FSASC. Certainly, giving FSASC more than one value for $\gamma$ as input, as shown in Algorithm \ref{alg:FSASC}, is expected to address this issue, but also to increase the running time of FSASC (see Table \ref{table:Synthetic_n3_rt} for average running times of the methods in the current experiment).
We conclude this section by demonstrating the interesting property of FSASC of being able to give the correct clustering by using vanishing polynomials of degree strictly less than the true number of subspaces. Towards that end, we consider a similar situation as above, except that now we have $n=4$ subspaces of dimensions $(8,8,5,3)$. Contrary to
SASC-D and SASC-A, for which the theory requires degree-$4$ polynomials,
FSASC is still applicable if one works with polynomials of degree $3$: the crucial observation is that for the dimension configuration $(8,8,5,3)$, the corresponding subspace arrangement always admits vanishing polynomials of degree $3$, and the same is true for every intermediate arrangement occurring in a filtration. For example, if one lets $\boldsymbol{b}_1$ be a normal vector to one of the $8$-dimensional subspaces, $\boldsymbol{b}_2$ a normal vector to the other, and $\boldsymbol{b}_3$ a normal vector to the $8$-dimensional subspace spanned by the $5$-dimensional and the $3$-dimensional subspaces, then the polynomial $p(x) = (\boldsymbol{b}_1^\top x) (\boldsymbol{b}_2^\top x) (\boldsymbol{b}_3^\top x)$ has degree $3$ and vanishes on the entire arrangement of the four subspaces. Interestingly, Table \ref{table:Synthetic_n3_nv4} shows that FSASC gives
zero error in the absence of noise and $7.65\%$ error for the worst case $\sigma =0.05$, while all other methods fail. In particular, the other two algebraic methods, i.e., SASC-D and SASC-A, are not able to cluster the data using a single vanishing polynomial of degree $3$.
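The existence of such a degree-$3$ vanishing polynomial is easy to verify numerically; the following MATLAB sketch is a sanity check of ours, with randomly drawn subspaces, confirming that $p$ vanishes on samples from a random $(8,8,5,3)$ arrangement up to round-off.
\begin{verbatim}
% Sketch: check that p(x) = (b1'x)(b2'x)(b3'x) vanishes on a random
% (8,8,5,3) arrangement in R^9.
D = 9; rng(0);
U1 = orth(randn(D,8)); U2 = orth(randn(D,8));
U3 = orth(randn(D,5)); U4 = orth(randn(D,3));
b1 = null(U1'); b2 = null(U2');     % normals to the two hyperplanes
b3 = null(orth([U3 U4])');          % normal to the span of S3 and S4
p  = @(X) (b1'*X) .* (b2'*X) .* (b3'*X);
X  = [U1*randn(8,50) U2*randn(8,50) U3*randn(5,50) U4*randn(3,50)];
max(abs(p(X)))                      % vanishes up to round-off error
\end{verbatim}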
\subsection{Experiments on real motion sequences} \label{subsection:MotionSegmentation}
We evaluate different methods on the Hopkins155 motion segmentation data set \cite{Tron:CVPR07}, which contains 155 videos of $n=2,3$ moving objects, each one with $N=100$-$500$ feature point trajectories of dimension $D=56$-$80$. While SSC, LRR, LRSC and LSR can operate directly on the raw data, algebraic methods require $\mathcal{M}_n(D) \leq N$. Hence, for algebraic methods, we project the raw data onto the subspace spanned by their $D$ principal components, where $D$ is the largest integer $\le 8$ such that $\mathcal{M}_n(D) \leq N$, and then normalize each point to have unit norm. We apply SSC to i) the raw data (SSC-raw) and ii) the raw points projected onto their first $8$ principal components and normalized to unit norm (SSC-proj). For FSASC we use $L=10$ and $\gamma=0.001,0.005,0.01,0.05,0.1, 0.5,1,5,10 $. LRR, LRSC and LSR use the same parameters as in section \ref{subsection:SyntheticExperiments}, while for SSC the parameters are $\alpha = 800$ and $\rho = 0.7$.
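The projection step for the algebraic methods can be summarized by the following MATLAB sketch; the raw data matrix \texttt{X} (trajectories as columns) and the uncentered SVD used for the principal components are assumptions of ours.
\begin{verbatim}
% Sketch: pick the largest D <= 8 with M_n(D) <= N, project onto the
% first D principal components, and normalize columns to unit norm.
D = 8;
while nchoosek(n + D - 1, n) > N, D = D - 1; end
[U, ~, ~] = svd(X, 'econ');         % principal directions of raw data
Xp = U(:, 1:D)' * X;                % projected data
Xp = bsxfun(@rdivide, Xp, sqrt(sum(Xp.^2, 1)));
\end{verbatim}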
The clustering errors and the intra/inter-cluster connectivities are reported in Table \ref{table:Hopkins155} and Fig. \ref{figure:Hopkins155-ordered-error}. Notice the clustering errors of about 5\% and 37\% for SASC-A for two and three motions, respectively. Notice how, upon replacing the angle-based affinity with the distance-based one, SASC-D already gives errors of around 5.5\% and 14\%. But most dramatically, notice how FSASC further reduces those errors to 0.8\% and 2.48\%. Moreover, even though the dimensions of the subspaces ($d_i \in \{1,2,3,4\}$ for motion segmentation) are low relative to the ambient space dimension ($D=56$-$80$), a case that is specifically suited to SSC, LRR, LRSC and LSR, projecting the data to $D\leq 8$, which makes the subspace dimensions comparable to the ambient dimension, is sufficient for FSASC to achieve superior performance relative to the best-performing algorithms on Hopkins155. We believe that this is because, overall, FSASC produces a much higher intra-cluster connectivity, without increasing the inter-cluster connectivity too much.
\setlength{\tabcolsep}{0.2em}
\begin{table}
\centering
\ra{0.7}
\caption{Mean clustering error ($E$) in $\%$, intra-cluster connectivity ($C_1$), and inter-cluster connectivity ($C_2$) in $\%$ for the Hopkins155 data set.} \label{table:Hopkins155}
\begin{tabular}
{@{}l@{\, \, \, }@{\, \, }r@{\, \, }r@{\, \, }r@{\, \, }r@{\, \, }r@{\, \, }r@{\, \, }r@{\, \, }r@{\, \, }r@{\, \,}r@{\, \, }r@{\, \, }r@{\, \, }r}\toprule[1pt]
\phantom{abc}&& \multicolumn{3}{c}{$2$ motions} & \phantom{ab}& \multicolumn{3}{c}{$3$ motions} &
\phantom{ab} & \multicolumn{3}{c}{all motions} & \\
\cmidrule{3-5} \cmidrule{7-9} \cmidrule{11-13}
method && $E$ & $C_1$ & $C_2$ && $E$ & $C_1$ & $C_2$ && $E$ & $C_1$ & $C_2$ \\ \midrule
FSASC && $\boldsymbol{0.80}$ & $0.18$ & $4$ && $\boldsymbol{2.48}$ & $0.10$ & $10$
&& $\boldsymbol{1.18}$ & $0.16$ & $5$ \\
SASC-D && $5.65$ & $0.82$ & $26$ && $14.0$ & $0.80$ & $46$ && $7.59$ & $0.81$ & $31$ \\
SASC-A && $4.99$ & $0.35$ & $5$ && $36.8$ & $0.09$ & $35$ && $12.2$ & $0.29$ & $12$ \\
SSC-raw && $1.53$ & $0.05$ & $2$ && $4.40$ & $0.04$ & $3$ && $2.18$ & $0.05$ & $2$ \\
SSC-proj && $5.87$ & $0.04$ & $3$ && $5.70$ & $0.03$ & 3 && $5.83$ & $0.03$ & $3$ \\
LRR && $4.26$ & $0.25$ & $19$ && $7.78$ & $0.25$ & $28$ && $5.05$ & $0.25$ & $21$ \\
LRR-H && $2.25$ & $0.05$ & $2$ && $3.40$ & $0.04$ & $3$ && $2.51$ & $0.05$ & $2$ \\
LRSC && $3.38$ & $0.25$ & $19$ && $7.42$ & $0.24$ & $28$ && $4.29$ & $0.25$ & $21$ \\
LSR && $3.60$ & $0.24$ & $18$ && $7.77$ & $0.23$ & $28$ && $ 4.54$ & $0.23$ & $21$ \\
LSR-H && $2.73$ & $0.04$ & $1$ && $2.60$ & $0.03$ & $2$ && $2.70$ & $0.04$ & $1$ \\
\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{figure}[t!]
\centering
\includegraphics[trim=20 0 50 50,clip,width=0.8\linewidth]{Hopkins155_errors_053115b}
\caption{Clustering error ratios for both $2$ and $3$ motions in Hopkins155, ordered increasingly for each method. Errors start from the $90$-th smallest error of each method.}\label{figure:Hopkins155-ordered-error}
\end{figure}
\begin{comment}
\begin{table}
\centering
\ra{0.7}
\caption{Clustering error ($\%$) for two digits $(1,i),\, i=0,2,...,9$ in the MNIST dataset.}\label{table:MNIST}
\begin{tabular}{@{}l@{\, \, \, }@{\, \, }c@{\, \, }c@{\, \, }c@{\, \, }c@{\, \, }c@{\, \, }c@{\, \, }c@{\, \, }c@{\, \, }c@{}}\toprule[1pt]{}
& \multicolumn{8}{c}{Digits Pair} & \phantom{ab}\\
\cmidrule{2-10}
method & $(1,0)$ & $(1,2)$ & $(1,3)$ & $(1,4)$ & $(1,5)$ & $(1,6)$ & $(1,7)$ & $(1,8)$ & $(1,9)$ \\ \midrule
FSASC & $\boldsymbol{0.50}$ & $4.67$ & $1.55$ & $3.31$ & $1.11$
& $1.62$ & $2.27$ & $4.88$ & $1.81$ \\
SASC-D & $4.91$ & $14.2$ & $10.3$ & $23.9$ & $8.55$ & $13.1$ & $10.2$ & $21.5$ & $17.5$ \\
SSC-raw & $1.12$ & $9.15$ & $2.66$ & $5.77$ & $2.78$ & $1.87$ & $2.90$ & $13.3$ & $2.00$ \\
LSR & $1.03$ & $5.26$ & $2.13$ & $19.1$ & $1.29$ & $1.50$ & $5.40$ & $15.3$ & $5.90$ \\
LSR-H & $0.74$ & $\boldsymbol{1.35}$ & $\boldsymbol{1.12}$ & $\boldsymbol{3.15}$ & $\boldsymbol{0.88}$ & $\boldsymbol{0.90}$ & $\boldsymbol{1.33}$ & $\boldsymbol{4.38}$ & $\boldsymbol{1.11}$ \\
\bottomrule[1pt]
\end{tabular}
\end{table}
\subsection{Experiments on handwritten digit clustering} \label{subsection:Digits}
In this section we consider the problem of clustering two digits, one of which is the digit $1$ \footnote{The reader may wonder why we have chosen one of the digits to be fixed and equal to $1$. Our reasoning behind that choice is Benford's law \cite{Kossovsky14}, according to which the digit $1$ appears with high probability as the leading digit of numbers appearing in daily life.}. For each pair $(1,i), \, i = 0,2,...,9,$ we randomly select $200$ images from the MNIST database \cite{Lecun:PIEEE1998gradient} corresponding to each digit and compute the clustering errors averaged over $100$ independent experiments. SSC, LRR and LSR operate on raw data. LRR and LSR parameters are the same as before. For FSASC we set $L=10$ and $\gamma=1$ and for SSC we set $\alpha=10$ and $\rho=0.7$. For the three algebraic methods we first project the raw data onto their first $13$ principal components and then normalize each point to have unit norm. For comparison, we also run SSC on the projected data. Mean errors are reported in Table \ref{table:MNIST} with SASC-A, LRR, LRSC omitted since they perform poorly (with LRR performing worse with heuristic post-processing). We also do not show the numbers for SSC-proj since they are very close to those of SSC-raw. As in the case of motion segmentation, we observe that FSASC outperforms SASC-D (this time by a large margin), which in turn significantly outperforms SASC-A. This once more confirms the superiority of FSASC over previous algebraic methods. In this experiment, FSASC is superior to SSC as well, and in fact the only method that performs better is LSR-H.
We conclude by noting that for more than two digits the performance of FSASC degrades significantly. This is partly due to the fact that projecting the original $784$-dimensional data onto dimension $D=13$, makes the subspaces less separated, and the clustering problem considerably harder.
\end{comment}
\section{Conclusions and Future Research}
We presented a novel family of subspace clustering algorithms, termed
\emph{Filtrated Algebraic Subspace Clustering} (FASC). The common theme
of these algorithms is the notion of a filtration of subspace arrangements.
The first algorithm
of the family, termed \emph{Filtrated Algebraic Subspace Clustering}
(FASC), receives as input a finite point set in general position inside
a subspace arrangement, together with an upper bound on the number of subspaces
in the arrangement. Then FASC provably returns the number of subspaces, their dimensions, as
well as a basis for the orthogonal complement of each subspace. The second algorithm
of the family, termed \emph{Filtrated Spectral Algebraic Subspace Clustering} (FSASC) is an adaptation of FASC to a working algorithm that is robust to noise.
In fact, by experiments on synthetic and real data we showed that FSASC is superior to state-of-the-art subspace clustering algorithms on several occasions.
Due to the power of the machinery of filtrations, FSASC is unique among other
subspace clustering algorithms in that it can handle robustly subspaces of
potentially very different dimensions, which can be arbitrarily close to or far from the dimension of the ambient space. This is an important feature distinguishing FSASC from state-of-the-art Sparse and Low-Rank methods, which are in principle applicable only when the subspace dimensions are sufficiently small relative to the ambient dimension. However, this advantage of FSASC comes at the cost of a large computational complexity. Future research will address the problem of reducing this complexity, with the aim of making FSASC applicable to large-scale datasets. Additional challenges to be undertaken include making FSASC robust to missing entries and outliers.
System identification (SI) is a long-standing problem that has fostered much research effort \cite{Ljung99, Pintelon12}.
A wide variety of SI methods have been developed in different frameworks (control theory, machine learning, information theory), and tailored to the specific situation at hand.
In each case, the following questions, among others, must be considered to choose the adequate SI method: is it possible to apply a forcing to the system of interest and observe its response (input-output SI), or is it only possible to measure a given observable (output-only or ``blind'' SI)?
Is a model of the system already available, with parameters to be identified (parameter identification), or has the model itself to be uncovered (model identification)?
Does the system exhibit nonlinear and/or transient behavior or can linear time invariance (LTI) be assumed?
Is the output corrupted by measurement noise?
Is the system itself subject to dynamic noise, i.e. external stochastic forcing?
Is the system chaotic?
Classical input-output SI techniques generally employ a state-space representation and estimate the parameters of a (postulated or physically derived) model by minimizing the error between the predicted and measured values of some output-based quantity, using e.g. maximum likelihood (ML), prediction error method (PEM), or least-squares (LS) \cite{Hamilton94, Shumway11, Pillonetto14, sovardi2016}.
Popular model classes include auto-regressive / moving-average (AR, MA, ARMAX) models \cite{Liu10}, finite impulse response (FIR) models \cite{Polifke14}, output error models (OEM) \cite{Ding10} and Volterra series \cite{King16}.
If physical insight is lacking, SI can take care of selecting an adequate model among several candidates, although a careful trade-off between accuracy and simplicity is needed; this kind of Occam's razor principle is typically applied with probabilistic (Bayesian) approaches \cite{Beck10} or sparsity-promoting algorithms \cite{Chen14}.
Methods based on machine learning use kernels \cite{Pillonetto10} rather than postulating a model in the first place.
Output-only SI methods have to rely on partial information, either because the system cannot be arbitrarily driven, or because the input cannot be measured.
Standard tools include Kalman filters \cite{Kalman60}, synchronization methods \cite{Yu08}, modal identification \cite{Nagarajaiah09} and reduced-order modeling \cite{Rowley17}.
Empirical dynamic modeling \cite{Ye15} allows for
model-free output-only SI.
As for input-output SI, sparse identification techniques are available for output-only SI \cite{Brunton2016}.
Rather than identifying a model or its parameters, some techniques allow the determination of a number of characteristics of a system:
distinguish between its chaotic and stochastic nature \cite{Zunino12}, unveil time delays \cite{Zunino10} or discover hidden patterns \cite{Crutchfield12} based on information theory (e.g. entropy and complexity);
detect causality with convergent cross mapping \cite{Sugihara12};
analyze periodicity and intermittency in noisy signals using recurrence quantification analysis \cite{Eckmann87, Zbilut98, Suresha16}.
The presence of measurement noise and dynamic noise often complicates the task of SI, deteriorating both its accuracy when identifying parameters and its ability to select plausible models, even though state-space representations can explicitly account for noise.
See \cite{Reynders08, Zhang11, Kwasniok12}
for some efforts towards better noisy SI.
However, one can take advantage of the very presence of dynamic noise to extract information and perform output-only SI: inherent stochastic forcing drives the system away from its deterministic equilibrium trajectory and makes it visit states that would not have been visited otherwise.
As proposed in \cite{friedrich2011report}, these
enriched statistics can then be processed to reconstruct the coefficients of the system's Langevin equation or corresponding Fokker-Planck equation \cite{Risken84} and identify the governing parameters.
\begin{figure*}[t]
\begin{center}
\includegraphics[width=150mm]{01.eps}
\end{center}
\caption{Summary and context of the paper. Center: stochastically forced dynamic system. In order to perform model-based output-only SI, models for the stochastic input and for the dynamic system (being in the present study a Van der Pol oscillator) are required. Left: effects of noise color on oscillator dynamics and statistics, with $x(t)$ being the system state and $A(t)$ its envelope (the energy is $\propto A^2$). Right: filtering the data to isolate the dynamics of interest can be needed. The corresponding filter bandwidth affects the statistics and dynamics of the data, which has to be accounted for in the parameter identification procedure.\label{fig:summary}}
\end{figure*}
In the present study this approach is adopted for output-only model-based SI for stochastically driven nonlinear oscillators: the parameters of a given analytical model are identified from the output signal of the system forced by a non-measurable random input.
Of course, in the case of linear harmonic oscillators, the system parameters (linear damping rate and resonance frequency) can be readily obtained, e.g. by estimating peak frequency and corresponding quality factor, which is not possible when nonlinearities are active.
In this context, accurate and robust output-only parameter identification requires:
\begin{enumerate}[(i)]
\item an adequate model of the system,
\item a model for the driving noise,
\item an appropriate data pre-processing.
\end{enumerate}
\noindent{}These aspects are pictured in \cref{fig:summary}, where a summary of the present work is sketched.
Regarding point (i), the selected model for this work is a Van der Pol oscillator (henceforth ``VDP''), which is a canonical model used in many different disciplines, such as electronics (since the pioneering work \cite{van1920theory}), biology and medicine \cite{van1928heart,jewett1998refinement,lucero2013modeling}, neurology \cite{fitzhugh1961impulses,nagumo1962active}, optics \cite{wirkus2002dynamics,barland2003experimental}, seismology \cite{cartwright1999dynamics}, sociology and economics \cite{glass1988clocks}, or thermoacoustic dynamics in turbulent combustors \cite{noiray16symp}, the latter being the application discussed in more detail in the second part of the paper. The stochastic differential equation of a Van der Pol oscillator driven by additive noise reads:
\begin{equation}
\label{VDPx}
\ddot{x}+\omega_0^{2}{x}=[2\nu-\kappa x^2]\dot{x} +\xi(t),
\end{equation}
where $x$ represents the state of the system, $f_0=\omega_0/2\pi$ the natural oscillation frequency, $\nu$ the linear growth rate, $\kappa$ the saturation constant and $\xi(t)$ the additive driving noise.\\
\noindent{}Concerning point (ii), the simplest model for $\xi$ in \cref{VDPx} is white noise, because it greatly simplifies the analytical derivations. However, a real stochastic forcing is always ``colored'', i.e. it always features a non-zero autocorrelation time and a non-constant spectral distribution.
One can find a wide collection of studies where the color of the noise plays a fundamental role in the system dynamics, in topics such as economics, biology and mechanical configurations \cite{perello2002, qing2015, Sapsis08}, as well as in the specific case of oscillators \cite{masoliver1993,xu2011,spanos1978}.
In the field of thermoacoustics, one can for instance refer to \cite{tony15} or \cite{waugh2011}, the latter investigating the effect of different types of noise on limit-cycle triggering. This suggests that it is essential to take the noise color into account in system identification.\\
In the first part of the present work, the widely used Ornstein-Uhlenbeck process serves as the driving source of the Van der Pol oscillator. Afterwards, another type of noise is introduced for the specific case of thermoacoustic instabilities in turbulent combustors. In both cases, the associated system dynamics and statistics are scrutinised and the effect of noise color on parameter identification is addressed. The need to properly filter the output data in order to reliably identify the parameters -- item (iii) in the aforementioned list -- is then discussed.
\section{Van der Pol oscillator driven by Ornstein-Uhlenbeck noise}
\label{OU}
\subsection{Effect of colored noise on oscillations statistics}
\label{OU1}
\noindent{}In this section, the noise that drives the Van der Pol oscillator is generated by an Ornstein-Uhlenbeck (OU) process. It is widely used in various contexts to account for finite correlation time effects of a stochastic forcing.
One therefore considers that $\xi$ in \cref{VDPx} satisfies the following Langevin equation:
\begin{equation}
\label{OU_de}
\dot{\xi}(t)=-\dfrac{1}{\tau_\xi}\xi(t)+\dfrac{\sqrt{\gamma}}{\tau_\xi}\zeta(t),
\end{equation}
where $\zeta$ is a unit-variance Gaussian white noise of intensity $\Gamma$, $\gamma$ is a constant coefficient, which will be used later in the paper to adjust the power of the noise $\xi$, and $\tau_\xi$ denotes its characteristic time constant.
In the frequency domain, the OU process $\widehat{\xi}$ results from filtering $\widehat{\zeta}$ with the following transfer function
\begin{equation}
\label{OU_transfer_function_noise}
H(s)=\frac{\widehat{\xi}(s)}{\widehat{\zeta}(s)}=\frac{\sqrt{\gamma}}{1+\tau_\xi s},
\end{equation}
where $s=i\omega$ is the Laplace variable. The power spectrum of $\xi$ is given by
\begin{equation}
\label{OU_power_spectrum_noise}
S_{\xi\xi}(\omega)=|H|^2S_{\zeta\zeta}=\frac{\Gamma}{2\pi}\frac{\gamma}{1+\omega^{2}\tau_\xi^{2}},
\end{equation}
It is useful to define the quantity
\begin{equation}
\label{Gamma_ou}
\Gamma_\text{e}=2\pi S_{\xi\xi}(\omega_0)=\Gamma\frac{\gamma}{1+\omega_0^{2}\tau_\xi^{2}},
\end{equation}
which is $2\pi$ times the power spectral density of $\xi$ at the oscillator eigenfrequency, and is referred to as the ``effective OU noise intensity'' in the remainder of the paper.\\
Considering that the target of this study is to quantitatively compare white and colored noise forcing on the oscillator, it is necessary to set a criterion regarding the input power. It is convenient to adjust the intensity of $\xi$ by using the coefficient $\gamma$ such that the powers provided by $\xi$ and by a white noise of intensity $\Gamma$ in a band $[\omega_{1};\omega_{2}]$ are equal, i.e. $\int_{\omega_1}^{\omega_2}S_{\xi\xi}d\omega=\int_{\omega_1}^{\omega_2}{\Gamma}/{2\pi}d\omega$, which yields:
\begin{equation}
\label{OU_intensity_noise}
\gamma=\frac{\tau_\xi(\omega_2-\omega_1)}{\text{atan}(\omega_{2}\tau_\xi)-\text{atan}(\omega_{1}\tau_\xi)}.
\end{equation}
A sensible choice is to define this ``iso-power band'' around the oscillator resonance frequency $\omega_0$: $[\omega_{1};\omega_{2}]=[\omega_{0}-\Delta\Omega;\omega_{0}+\Delta\Omega]$.
Here, $\Delta\Omega$ can vary between 0 (the band degenerating into the single angular frequency $\omega_{0}$) and $\omega_0$ (the band $[0;2\omega_0]$). The frequency range $\Delta\Omega$ will be referred to as the ``iso-power semi-bandwidth''. One can see in \cref{fig:OU_isopower} how this parameter affects the forcing noise power spectrum.
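A minimal MATLAB sketch of this forcing model is given below; the Euler--Maruyama discretization and all numerical values are illustrative choices of ours, not prescriptions of the analysis.
\begin{verbatim}
% Sketch: Euler-Maruyama simulation of the OU forcing xi, with gamma
% fixed by the iso-power condition; parameter values are illustrative.
f0 = 120; w0 = 2*pi*f0; Gamma = 1; tau = 0.5/f0;
dOm = 0.5*w0; w1 = w0 - dOm; w2 = w0 + dOm;   % iso-power band
gamma = tau*(w2 - w1)/(atan(w2*tau) - atan(w1*tau));
dt = 1e-5; nsteps = round(10/dt);             % 10 s of signal
xi = zeros(nsteps, 1);
for k = 1:nsteps-1
    xi(k+1) = xi(k) - xi(k)/tau*dt + sqrt(gamma*Gamma*dt)/tau*randn;
end
\end{verbatim}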
\begin{figure}[!t]
\begin{center}
\includegraphics[width=0.5\columnwidth]{02.eps}
\end{center}
\caption{Comparison between white noise (grey) and OU (red) noise power spectra, normalized by the white noise intensity $\Gamma$, for different iso-power semi-bandwidths $\Delta\Omega$. The power provided by the two types of noise is equal in the considered band (same area under the curve: note the linear scale). Note that $S_{\xi\xi}(\omega_0)=\Gamma_\text{e}/2\pi\neq \Gamma/2\pi$. \label{fig:OU_isopower}}
\end{figure}
The parameter $\tau_\xi$ is a direct measure of how ``colored'' the noise is: the shorter $\tau_\xi$, the closer $\xi$ is to white noise.
As $\tau_\xi$ goes to zero, the cut-off frequency goes to infinity, leading to a constant power spectrum, i.e. a white noise source. This is illustrated in \cref{fig:OU_fmax} (red spectra), together with the fact that the power spectral density of the oscillator response (blue spectra) is accordingly affected. Note that in the limit $\tau_\xi\rightarrow0$, one gets $\gamma\rightarrow1$ and $S_{\xi\xi}(\omega)\rightarrow{\Gamma}/{2\pi}=S_{\zeta\zeta}(\omega)$. The characteristic time $\tau_\xi$ is the noise correlation time, obtained via the autocorrelation function of $\xi$:
\begin{equation}
\label{OU_autocorrelation_function}
k_{\xi\xi}(t)=\Gamma\frac{\gamma}{2\tau_\xi}e^{-\frac{t}{\tau_\xi}}, \,\,\,\,\,\, \tau_\xi=\frac{1}{k_{\xi\xi}(0)}\int_{0}^{\infty}|k_{\xi\xi}(t)|dt.
\end{equation}
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=\textwidth]{03.eps}
\end{center}
\caption{Power spectra of the input $\xi$ and of the output $x$ for three different values of correlation time $\tau_\xi$, normalized by $T_0=1/f_0$. \label{fig:OU_fmax}}
\end{figure*}
\noindent Such an OU process is now considered as the driving force of the Van der Pol oscillator given by \cref{VDPx}. It is convenient to investigate the system in terms of its slowly-varying amplitude and phase dynamics with $x(t)\approx A(t)\cos{[\omega t + \varphi(t)]}=A(t)\cos{\phi(t)}$. This coordinate change is legitimate provided that $\nu\ll\omega_0$. Performing deterministic and stochastic averaging \cite{stratonovich1967} yields the following stochastic differential equation for the amplitude $A$:
\begin{equation}
\label{OU_A_dot_vdp_avg}
\dot{A}=A\left(\nu-\frac{\kappa}{8}A^{2}\right)+\frac{\Gamma_\text{e}}{4\omega_0^2 A}+\mu(t),\hspace{5mm}
\text{with}\quad\ \langle\mu\mu_\tau\rangle=\frac{\delta(\tau)\pi S_{\xi\xi}(\omega_0)}{\omega_0^2}=\frac{\Gamma_\text{e}}{2\omega_0^2}\delta(\tau).
\end{equation}
\noindent{} It is important to underline that the averaging procedure is valid only if $\tau_{\xi}\ll\tau_A$, where the amplitude correlation time $\tau_A$ is related to the system growth rate by $\tau_A \simeq \pi/|\nu|$ (see \cite{lax_1967,lax_2006,noiray16}). One can refer to \cref{fig:tscale} where the important time scales of the considered system are presented. It is also interesting to compare \cref{OU_A_dot_vdp_avg} to its white-noise-driven oscillator counterpart
\begin{equation}
\dot{A}=A\left(\nu-\frac{\kappa}{8}A^{2}\right)+\frac{\Gamma}{4\omega_{0}^{2}A}+\mu(t), \hspace{5mm}
\text{with}\quad \langle\mu\mu_\tau\rangle=\frac{\delta(\tau)\pi S_{\zeta\zeta}(\omega_0)}{\omega_0^2}=\frac{\Gamma}{2\omega_0^2}\delta(\tau).
\label{Aw_dot_vdp_avg}
\end{equation}
The two equations differ only in that $\Gamma_\text{e}$ replaces $\Gamma$. In the limit $\tau_\xi\rightarrow0$, $\Gamma_\text{e}\rightarrow\Gamma$, and \cref{OU_A_dot_vdp_avg} tends to \cref{Aw_dot_vdp_avg}.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.5\columnwidth]{04.eps}
\caption{Time scales involved in the stochastically forced oscillator: $\tau_\xi$ is the correlation time of the noise source $\xi$, $T_0=1/f_0$ is the oscillation period of $x$ and $\tau_A=\pi/|\nu|$ is the characteristic time scale of the envelope amplitude $A$. Note the two different time scales adopted for the two halves of the plot.}
\label{fig:tscale}
\end{center}
\end{figure}
Considering the Fokker-Planck equation associated with \cref{OU_A_dot_vdp_avg}, one can derive the stationary probability density function (PDF) for the amplitude of the VDP oscillator driven by an OU noise:
\begin{equation}
P_{\mathrm{ou}}(A)=\mathcal{N}_{\mathrm{ou}}A\exp{\left[\frac{4\omega_{0}^{2}}{\Gamma_\text{e}}\left(\frac{\nu A^{2}}{2}-\frac{\kappa A^{4}}{32}\right)\right]},
\label{OU_PAc}
\end{equation}
and for the white-noise driven VDP oscillator:
\begin{equation}
P_\text{w}(A)=\mathcal{N}_\text{w}A\exp{\left[\frac{4\omega_{0}^{2}}{\Gamma}\left(\frac{\nu A^{2}}{2}-\frac{\kappa A^{4}}{32}\right)\right]},
\label{PAw}
\end{equation}
where $\mathcal{N}_{\mathrm{ou}}$ and $\mathcal{N}_{\mathrm{w}}$ are two normalization constants such that $\int_{0}^{\infty}P(A)dA=1$.
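In practice, these stationary PDFs can be evaluated on an amplitude grid with a numerical normalization, as in the short MATLAB sketch below; the parameter values are again illustrative assumptions, and the exponent is shifted by its maximum to avoid numerical overflow.
\begin{verbatim}
% Sketch: evaluate P_ou(A) on a grid, normalizing numerically.
w0 = 2*pi*120; nu = 5; kappa = 1e3; Gamma = 1; tau = 0.5/120;
dOm = 0.5*w0;                        % gamma from the iso-power rule
gamma = tau*2*dOm/(atan((w0+dOm)*tau) - atan((w0-dOm)*tau));
Ge = Gamma*gamma/(1 + w0^2*tau^2);   % effective OU noise intensity
Ag = linspace(1e-4, 0.4, 2000);      % amplitude grid (assumed range)
E  = 4*w0^2/Ge*(nu*Ag.^2/2 - kappa*Ag.^4/32);
Pou = Ag .* exp(E - max(E));         % shift exponent to avoid overflow
Pou = Pou / trapz(Ag, Pou);          % enforce unit area
\end{verbatim}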
\begin{figure}
\begin{center}
\includegraphics[width=0.5\columnwidth]{05.eps}
\end{center}
\caption{Map of the coefficient ${\Gamma/\Gamma_\text{e}=(1+\omega_{0}^{2}\tau_\xi^{2})}/{\gamma}$. The closer this is to one, the closer the analytical expressions for $P_{\mathrm{ou}}$ and $P_{\mathrm{w}}$ are.\label{fig:OU_fact_map}}
\end{figure}
Apart from the normalization constants, $P_{\mathrm{ou}}$ and $P_{\mathrm{w}}$ differ by the factor $\Gamma/\Gamma_\text{e}={(1+\omega_{0}^{2}\tau_\xi^{2})}/{\gamma}$ in the exponential, which is depicted in \cref{fig:OU_fact_map}.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.5\columnwidth]{06.eps}
\end{center}
\caption{\color{black} Probability density function for two different linear growth rates $\nu$, two different iso-power semi-bandwidths $\Delta\Omega$, and five different dimensionless correlation times $\tau_\xi/T_0$ of the driving noise (where $T_0=2\pi/\omega_0$ is the oscillation period). Shaded area and solid lines respectively correspond to the PDFs of the VDP driven by white noise ($P_{\mathrm{w}}$ given in \cref{PAw}) and to the VDP driven by the OU noise ($P_{\mathrm{ou}}$ given in \cref{OU_PAc}) for the same parameters $\nu$, $\kappa$, $\omega_0$ and $\Gamma$. The amplitude $A$ is normalized by $A_{\mathrm{m}}$, which is the amplitude of the maximum of $P_{\mathrm{w}}$. \label{fig:OU_vdp_pdf}}
\end{figure}\\
\Cref{fig:OU_vdp_pdf} compares the amplitude PDFs of the oscillator driven by white noise (shaded area) and by the colored noise (solid lines) for the same system parameters $\nu$, $\kappa$, $\omega_0$ and $\Gamma$.
Two different iso-power semi-bandwidths $\Delta\Omega$ (columns), which were already considered in \cref{fig:OU_isopower}, as well as two different values of the linear growth rate $\nu$ (rows) are considered. In the case of a wide iso-power band, one can observe that $P_{\mathrm{ou}}$ significantly deviates from $P_{\mathrm{w}}$ when $\tau_\xi$ increases. One can note that for large enough $\tau_\xi$ and for $\Delta\Omega<\omega_0$, $P_{\mathrm{ou}}$ tends to a limit-case distribution\footnote{It can be proven that $\lim_{{\tau_\xi}\to\infty} (1+\omega_{0}^{2}\tau_\xi^{2})/\gamma=\omega_0^2/(\omega_0^2-\Delta\Omega^2)$, so except for the case $\Delta\Omega=\omega_0$, this limit is finite and $P_{\mathrm{ou}}$ asymptotically tends to a limit PDF. Remember that $\tau_\xi\ll \tau_A$ must hold anyway for the derivation of the equations to be valid.}. On the other hand, no significant difference among the PDFs can be noticed when $\Delta\Omega/\omega_0=0.1$.\\
\noindent{}To obtain a quantitative measure of the difference between the two PDFs, one can make use of the Hellinger distance:
\begin{equation}
\label{H}
H=\sqrt{1-B},
\end{equation}
where $ B=\int_{-\infty}^{+\infty}\sqrt{p(x)q(x)}\mathrm{d}x $ is the Bhattacharyya coefficient.
The Hellinger distance $H$ is a statistical quantity that measures the difference between two PDFs $p(x)$ and $q(x)$ of the same random variable, and ranges from 0 when $p(x)=q(x)$, to 1 when they do not overlap.
In the following, $H$ is computed to compare $P_\text{w}$ and $P_\text{ou}$ in a systematic way for different points $(\Delta\Omega,\tau_\xi,\nu)$ of the space of iso-power semi-bandwidth, correlation time and growth rate. The results are presented as colormaps in \cref{fig:OU_H}.
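Numerically, $H$ can be obtained directly from two sampled PDFs, e.g. with the following MATLAB fragment, assuming both PDFs are given on a common amplitude grid \texttt{Ag} and normalized to unit area:
\begin{verbatim}
% Sketch: Hellinger distance between sampled PDFs p and q on grid Ag.
B = trapz(Ag, sqrt(p .* q));   % Bhattacharyya coefficient
H = sqrt(1 - B);               % 0: identical PDFs, 1: disjoint supports
\end{verbatim}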
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{07.eps}
\end{center}
\caption{Hellinger distance (\ref{H}), quantifying the difference between the PDFs of OU noise and white noise driven VDP oscillators. a) Different maps in the space $(\Delta\Omega,\tau_\xi,\nu)$. b) Detail of one linearly stable and one linearly unstable case. c) Comparison of the PDFs at two points of the map. The correlation time of the noise is normalized by the correlation time of the pressure amplitude $\tau_A=\pi/|\nu|$. The linear growth rate $\nu$ is normalized by the oscillator's angular frequency $\omega_0$.\label{fig:OU_H}}
\end{figure*}
\noindent{}The linear growth rate $\nu$ has a minor effect: all the maps in \cref{fig:OU_H}.a are similar, but $H$ is slightly higher when $\nu<0$, due to the shift of the amplitude of maximum probability $A_{\mathrm{m}}$ observed in this case (see again \cref{fig:OU_vdp_pdf}). Focusing on the other two parameters in \cref{fig:OU_H}.b, $H$ is large in the upper-right corner of the map, i.e. for high values of $\Delta\Omega$ and $\tau_\xi$. The influence of $\tau_\xi$ is intrinsically related to the noise color: as discussed earlier, the shorter $\tau_\xi$, the closer $\xi$ is to a white noise. That is why the region of match between $P_{\mathrm{ou}}$ and $P_{\mathrm{w}}$ is wider for short correlation times. In case of a long $\tau_\xi$, the bandwidth $\Delta\Omega$ has a strong influence, leading for large values to a significant difference between $P_{\mathrm{ou}}$ and $P_{\mathrm{w}}$.
A large $\Delta\Omega$ means that the equality of power between white noise and colored noise is set in a wide band around the oscillator eigenfrequency. If $\tau_\xi$ is long enough to let the oscillator frequency $f_0$ fall in the decaying part of $S_{\xi\xi}$, the power spectral densities of the two forcing noises differ appreciably around $f_0$ (see again \cref{fig:OU_isopower}), and the response of the system changes significantly.
\subsection{Parameter identification and white-noise assumption}
\label{OU2}
\noindent{}In this section, the influence of the finite correlation time $\tau_\xi$ of the driving OU noise upon parameter identification strategies is investigated.
The problem is the following: the noise driving the oscillator is never white in practice. Therefore, the use of a white noise driven oscillator model as a basis for parameter identification can be brought into question.\\
One alternative would be to adopt a model featuring a noise source with finite correlation time, as exemplified in the previous section with the OU process. However, this would not make any difference if the adopted SI method relies on the statistics of the signal. In fact, looking at \cref{OU_PAc}, one can see that the analytical expression for $P_\text{ou}$ produces self-similar probability distributions. In other words, different combinations of $\Gamma$ and $\tau_\xi$ lead to the same output amplitude statistics.
However, if one is only interested in identifying the linear growth rate $\nu$ and the saturation coefficient $\kappa$, one should presumably be able to use a white noise driven VDP model as a basis for the SI.
Still, it has to be verified whether the presence of a non-zero autocorrelation time $\tau_\xi$ can affect the identification process: even though the amplitude PDFs of the two models are the same, the output time traces and spectra are different, especially for some combinations of parameters.\\
\noindent{}To verify the possibility of achieving a robust parameter identification of the linear growth rate $\nu$ and the saturation coefficient $\kappa$ using a white noise approximation, the following test is performed. A Van der Pol oscillator (see \cref{VDPx}), having the true parameters $\nu=\nu_\text{t}$ and $\kappa=\kappa_\text{t}$ and forced with an OU noise of intensity $\Gamma_\text{t}$ and correlation time $\tau_\xi$, is simulated in Simulink\textsuperscript{\textregistered}, and then the slowly-varying envelope $A(t)$ and phase $\varphi(t)$ of the output signal $x(t)$ are extracted.
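While Simulink\textsuperscript{\textregistered} is used for the time integration in this study, an equivalent plain MATLAB sketch is given below; the semi-implicit Euler scheme and the Hilbert-transform demodulation are illustrative choices of ours, and the variables \texttt{xi}, \texttt{nsteps}, \texttt{dt} and \texttt{w0} are those of the OU sketch above.
\begin{verbatim}
% Sketch: integrate the VDP oscillator driven by the OU noise xi,
% then extract the envelope A(t) and the slow phase varphi(t).
nu_t = 5; kappa_t = 1e3;              % assumed "true" parameters
x = zeros(nsteps, 1); v = zeros(nsteps, 1);
for k = 1:nsteps-1
    acc = (2*nu_t - kappa_t*x(k)^2)*v(k) - w0^2*x(k) + xi(k);
    v(k+1) = v(k) + acc*dt;
    x(k+1) = x(k) + v(k+1)*dt;        % semi-implicit Euler step
end
z = hilbert(x);                       % analytic signal
A = abs(z);                           % slowly-varying envelope
phi = unwrap(angle(z)) - w0*dt*(0:nsteps-1)';   % slow phase
\end{verbatim}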
\begin{figure*}
\begin{center}
{\includegraphics[width=\textwidth]{08.eps}}
\end{center}
\caption{a) Identified growth rate as a function of the filter bandwidth. b) Power spectrum of the signal (grey), and two filter windows (color highlight) for the VDP driven by OU noise. The thin black line is the spectrum of a VDP driven by a white noise of intensity $\Gamma_\text{e}$. c) Amplitude time traces (red and green) obtained from the two proposed filters, superimposed on the unfiltered signal (grey). In panel a) the corresponding points are highlighted.\label{fig:OU_filt_width}}
\end{figure*}
A parameter identification using the white noise driven model is then attempted, making use of approaches 3 and 4 proposed in \cite{noiray13}. They consist in finding the optimum parameters $\nu$, $\kappa$ and $\Gamma$ giving the best fit of $P(A)$ and $P(A\dot{\varphi})$ for method 3, and of the drift and diffusion coefficients of the Fokker-Planck equation for method 4. However, the identified parameters significantly differ from the actual values: $\nu_\text{id}=2.1\nu_\text{t}$, $\kappa_\text{id}=1.9\kappa_\text{t}$, $\Gamma_\text{id}=1.7\Gamma_\text{t}$ with approach 3, and $\nu_\text{id}=1.5\nu_\text{t}$, $\kappa_\text{id}=1.6\kappa_\text{t}$, $\Gamma_\text{id}=1.5\Gamma_\text{t}$ with approach 4.\\
\noindent{}As will become apparent, the parameter identification failed because of the lack of pre-processing of the data.
In fact, it is wrong to assume that the measured output spectrum $S_{xx}(\omega)$ can be generated by an equivalent white noise source: the actual driving noise spectral power distribution $S_{\xi\xi}(\omega)$ leaves a peculiar signature in $S_{xx}(\omega)$. However, it is indeed possible to reproduce, over a limited band around the oscillator frequency, the actual output of the colored noise driven VDP with a white noise forcing, because $S_{\xi\xi}$ is a smooth function of frequency. This is exemplified in \cref{fig:OU_filt_width}.b, where one can see the spectrum of a colored noise driven VDP (thick grey line), overlaying that of a VDP driven by a white noise of intensity $\Gamma_\text{e}$ (thin black line).\\
\noindent{}The next attempt is, therefore, to bandpass filter the signal obtained from the simulation in the band $f_0\pm\Delta f$, using a progressively narrower $\Delta f$\footnote{Note that $\Delta f$ is not related to $\Delta\Omega$: the first is the filter semi-width adopted to pre-process the data for parameter identification, the second is a semi-bandwidth arbitrarily chosen to define the driving noise intensity.}. The obtained identification of $\nu$ is presented in \cref{fig:OU_filt_width}.a as a function of $\Delta f$. If $\Delta f=f_0$, the identified parameter values are close to the ones obtained using the unfiltered data. Decreasing $\Delta f$, the identified growth rate $\nu_\text{id}$ converges to the actual one $\nu_\text{t}$ for $\Delta f/ f_0=0.3$. The same trend is found for the saturation constant $\kappa$. This indicates that it is necessary to filter the data around the frequency of interest in order to perform a reliable model-based output-only parameter identification.\\
\noindent{}One might be tempted to further reduce the filter bandwidth, in order to decrease the driving noise modeling inaccuracy even more. However, one can see that below $\Delta f=0.05$ the estimated $\nu$ again deviates from the actual one. This fact is explained through the other two panels of \cref{fig:OU_filt_width}. In panel b, the spectrum of the signal generated by the simulation of the OU noise driven VDP oscillator is presented, together with two different filter widths. The corresponding filtered time traces of the oscillation amplitude, used as data for the parameter identification, are plotted in panel c, superimposed on the unfiltered oscillator signal (grey). One can observe that if too narrow a band is considered, the signal is altered and substantially deviates from the original: the amplitude time trace follows the general trend, but no longer captures the high frequency content. This affects the statistics and dynamics of the data and, therefore, the outcome of the parameter identification. Hence, one must refrain from over-filtering the signal, in order to preserve the core information of the original signal.\\
\noindent{}In the next step, the parameter identification is performed for different colored noise parameters, to verify that adequate filtering is indeed the means of achieving a reliable identification. In \cref{fig:OU_ID_map} the results of this test are presented. Each panel includes the identification result (method 4 in \cite{noiray13} is adopted) of 100 different simulations of the system, each corresponding to a different combination of noise parameters $\Delta\Omega$ and $\tau_\xi$. The identification inaccuracy is given in terms of the relative error $\varepsilon=|\nu_\text{t}-\nu_\text{id}|/\nu_\text{t}$. In the left column, the identification results obtained using the raw data are presented. The iso-power semi-bandwidth $\Delta\Omega$ does not noticeably affect this error, as it just changes the value of $\Gamma_\text{e}$ to be identified. The noise correlation time has a dramatic impact on the identification error for long correlation times. However, the error vanishes if $\tau_\xi$ is very short, as in this case the driving noise gets closer to a white one.
In the right column of \cref{fig:OU_ID_map}, the same signals are bandpass filtered in a band $[f_0(1\pm0.5)]$ before the parameter identification is run. One can notice how the identification is considerably enhanced, leading to very accurate results regardless of the parameters of the noise source. This result consolidates the confidence in the output-only parameter identification methods, as even without knowing the noise parameters $\Gamma$, $\Delta\Omega$ and $\tau_\xi$ it is possible to obtain the correct oscillator parameters $\nu$ and $\kappa$ by just applying an adequate filter to the output signal.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.5\columnwidth]{09.eps}
\end{center}
\caption{Map of OU noise driven oscillator identification, for different noise correlation time $\tau_\xi$ and iso-power semi-bandwidth $\Delta\Omega$. a) Identification using the unfiltered data. b) Identification using signals filtered in the band $[f_0(1\pm0.5)]$. The identification result is given as relative error: $\varepsilon=|\nu_\text{t}-\nu_\text{id}|/\nu_\text{t}$.\label{fig:OU_ID_map}}
\end{figure}
\\
\noindent{}Summing up, it can be stated that, for an OU noise driven VDP oscillator, the parameter identification based on a white noise approximation will accurately estimate the linear growth rate $\nu$ and the saturation constant $\kappa$ if the signal is filtered before the analysis. The filtering bandwidth has to be:
\begin{itemize}
\item narrow enough, to have a satisfactory approximation of the real noise with a white one over the considered band,
\item not too narrow, to preserve the amplitude dynamics of the signal.
\end{itemize}
A sensible strategy is to use a progressively narrower filter for the data pre-processing, and repeat the identification process until the obtained parameters reach a plateau.\\
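A hedged sketch of this plateau strategy is given below; the routine \texttt{identify\_nu} stands for one of the output-only identification methods of \cite{noiray13} and is assumed to be available, while the filtering relies on standard zero-phase Butterworth bandpass filtering.
\begin{verbatim}
# Sketch of the plateau strategy: progressively narrow the bandpass filter
# and stop when the identified growth rate stabilizes. identify_nu() is a
# placeholder for an output-only identification routine (not shown here).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def bandpass_envelope(x, fs, f0, df):
    """Zero-phase bandpass in [f0-df, f0+df], then Hilbert envelope."""
    sos = butter(4, [f0 - df, f0 + df], btype="bandpass",
                 fs=fs, output="sos")
    return np.abs(hilbert(sosfiltfilt(sos, x)))

def identify_with_plateau(x, fs, f0, identify_nu, rel_tol=0.05):
    nu_prev, history = None, []
    for ratio in np.linspace(0.9, 0.05, 18):   # progressively narrower
        A = bandpass_envelope(x, fs, f0, ratio * f0)
        nu = identify_nu(A)
        history.append((ratio, nu))
        if nu_prev is not None and abs(nu - nu_prev) < rel_tol * abs(nu):
            return nu, history                 # plateau reached
        nu_prev = nu
    return nu_prev, history                    # no clear plateau found
\end{verbatim}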
In the next part of this work, the study will be carried out using a different type of noise source, which is also closer to the actual stochastic forcing characteristic of thermoacoustic systems.
\section{Thermoacoustic instabilities: Modeling}
\label{Thermoac}
\subsection{Practical context}
\label{intro_TA}
\noindent{}In gas turbine, aeronautics and aerospace applications, the race for more efficient, less polluting, more fuel- and operation-flexible systems is ongoing, driven by customer needs and environmental regulations \cite{lieuwen2012book}. The thermoacoustic instabilities taking place in the combustion chambers of these engines constitute a major difficulty to overcome \cite{poinsot2016}, because the resulting high amplitude acoustic levels induce high cycle fatigue of the combustor components and reduce their lifetime. The mechanisms ruling the constructive interaction between flames and acoustic modes are complex, and the occurrence of these instabilities at a given engine operating point is hard to predict.\\
{\noindent}Therefore, the development of reliable predictive methods is of primary importance. Currently, brute force Large Eddy Simulations cannot be routinely used in a combustor design optimisation context due to their prohibitive computational costs. Therefore, a significant portion of the research efforts concentrate on the development of Helmholtz solvers and low-order thermoacoustic network models that are combined with experiments or computationally-cheaper numerical simulations \cite{schuermans2010, han2015,silva2017,nicoud2007,bourgouin2015,oberleithner2015,campa2014,schmid2013,ghirardo2015}.\\
\noindent{}In parallel, it is also important to establish robust system identification methods in order to validate the aforementioned linear-stability prediction tools. It has been recently shown that thermoacoustic linear growth rates can be extracted from limit cycle dynamic pressure data recorded in real systems \cite{noiray13,noiray2013dynamic,noiray16,noiray16symp}, and compared to the ones obtained using predictive thermoacoustic methods. Such network model validation is performed in \cite{bothien2015analysis}.\\
In the context of the present work, this section deals with output-only parameter identification methods applied to thermoacoustic systems, where the measurable output is the acoustic pressure at one location in the combustion chamber while the unknown input is the stochastic forcing resulting from the intense turbulence in the combustor. This last contribution is often modelled as an additive forcing, and assumed to be a white noise. In reality, this noise is not delta-correlated as explained in section \ref{Combustion noise}. Therefore, in section \ref{VDP_model} a more accurate model of the actual noise is introduced and the equations for the Van der Pol forced by this specific noise source are derived. The impact of the selected model on the effectiveness of the parameter identification is afterward scrutinised in section \ref{Validity of the white noise approximation}.\\
Regarding the system modeling, a single thermoacoustic mode description is often adopted in order to keep the number of system parameters to be identified to a minimum. This allows the use of a single oscillator as a model of the system. However, the raw data, i.e. the acoustic pressure at a given location in the combustion chamber, result from the superposition of the contributions from all the combustor eigenmodes. Consequently, they cannot be directly treated and require pre-processing to isolate the information corresponding to the single eigenmode considered for parameter identification. This can be done by bandpass filtering the data \cite{noiray16symp} or by performing a modal projection if several simultaneous records at different locations in the chamber are available \cite{noiray2013dynamic}. These data manipulations can, however, change the outcome of output-only parameter identification methods, because the signal and its statistics can be appreciably altered. This problem is considered in section \ref{Filter size for system identification}.
\subsection{Colored random excitation}
\label{Combustion noise}
\begin{figure}
\begin{center}
\includegraphics[width=0.5\columnwidth]{10.eps}
\caption{Example of power spectra ($S_{qq}$ and $S_{pp}$) of turbulence-induced heat release rate fluctuations $\widehat{q}_{\mathrm{n}}$ and combustion noise of a flame radiating sound in the free field (adapted from reference \cite{rajaram2009}). $S_{pp}$ can be approximated with a bandpass model (-$\Circle$-), plotted also in the inset, in linear scale. \label{fig:exp_Spp_Sqq_open}}
\end{center}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{11.eps}
\caption{Block diagram for the sound field generated by turbulent flames in open and closed environments. The open loop configuration (a) corresponds to open flames radiating noise in the free field. When the flame is enclosed (b), a thermoacoustic feedback operates and the acoustic block is fed by the total heat release rate fluctuations $\widehat{q}=\widehat{q}_{\mathrm{a}}+\widehat{q}_{\mathrm{n}}$, where $\widehat{q}_{\mathrm{a}}$ and $\widehat{q}_{\mathrm{n}}$ respectively stand for the acoustically- and turbulence-induced components.\label{fig:network}}
\end{center}
\end{figure*}
\begin{figure}[t!]
\begin{center}
\includegraphics[width=\columnwidth]{12.eps}
\end{center}
\caption{a) Example of normalized combustion noise spectra measured for different open flames configurations (adapted from \cite{rajaram2009}). In the inset, the frequency of the spectrum maximum $f_{\mathrm{max}}$ is given as a function of the flow characteristics for a large set of operating conditions (see main text for definitions). b) Typical acoustic pressure spectrum recorded in a combustion chamber.\label{fig:exp_Spp_engine}}
\end{figure}
\noindent{}In thermoacoustics, the acoustic pressure satisfies the Helmholtz equation with heat release rate source in the volume of the domain and the impedance conditions on boundaries:
\begin{equation}
\label{helmholtz}
\nabla^{2}\widehat{p}(s,x)-\left(\frac{s}{c}\right)^2\widehat{p}(s,x)=-s\frac{(\gamma-1)}{c^{2}}\widehat{q}(s,x) \hspace{5mm} \text{in the domain},
\end{equation}
\begin{equation}
\label{helmholtz_BC}
\frac{\widehat{p}(s,x)}{\mathbf{\widehat{u}}(s,x)\cdot \mathbf{n}}=Z(s,x) \,\,\,\,\, \text{on boundaries},
\end{equation}
where $\widehat{p}$ and $\mathbf{\widehat{u}}$ are the acoustic pressure and velocity fluctuations, $s$ the Laplace variable, $x$ the position, $c$ the local speed of sound, $\gamma$ the specific heat ratio, $\widehat{q}$ the heat release rate fluctuation, $\mathbf{n}$ the outward normal to the boundary and $Z$ the acoustic impedance. This equation holds if the Mach number is low. If the flame is placed in an open environment, waves generated by the reaction zone are radiated away without reflections. In reference \cite{hirsch2007}, the radiated sound field in this situation is modelled as a function of the turbulence-induced heat release rate fluctuation and compared to experimental data. The formal solution of \cref{helmholtz} for a fluctuating heat release rate source in an open environment is:
\begin{equation}
\label{helmholtz_sol_OF}
\widehat{p}(s,x)=s\frac{(\gamma-1)}{4\pi r c^{2}}\int_{V_{f}}\widehat{q}(s,y)e^{\frac{s}{c}|x-y|}d^3y,
\end{equation}
where $x$ is the observer position in the far field and $r\approx|x|$ is the distance of the observer from the flame. This equation is valid when the flame brush, which extends over the volume $V_f$, is compact with respect to the considered acoustic wavelength.
An example of the far-field acoustic power spectral density $S_{pp}$ in such a configuration, i.e. the so-called \emph{combustion noise} \cite{strahle78}, is given in \cref{fig:exp_Spp_Sqq_open}, together with the power spectrum $S_{qq}$ of the integrated heat release rate oscillations. In this situation, the heat release rate fluctuations $\widehat{q}$ generating the sound field are only due to the non-coherent turbulent component $\widehat{q}_\text{n}$ (see \cref{fig:network}.a).\\
\noindent{}The combustion noise spectrum $S_{pp}$ features a maximum at frequency $f_{\mathrm{max}}$ and a bandpass signature, in contrast with the low-pass character of $S_{qq}$, having $f_{\mathrm{max}}$ as cut-off frequency. The two spectra are related to each other by \cref{helmholtz_sol_OF}, which is the topic of e.g. \cite{rajaram2009,ihme2009}.\\
All the authors, from the fundamental theoretical work by Clavin and Siggia \cite{clavin1991} to the systematic study by Rajaram and Lieuwen \cite{rajaram2009}, agree on the shape of the combustion noise spectrum $S_{pp}$. In \cite{rajaram2009} it has been shown that the normalized combustion noise power spectra of different burners operating under different conditions collapse on top of each other, indicating a general scaling law (see \cref{fig:exp_Spp_engine}.a). The combustion noise spectrum features a maximum, and varies like $S_{pp}(f) \propto f^{2}$ on the left side, and like $f^{-r}$, with $2<r<3.4$, on the right side. The peak frequency of the combustion noise spectrum can be estimated making use of experimental relations such as the one proposed in \cite{shivashankara1975}, involving dimensions, flow properties and chemical quantities. Alternatively, it has been observed in \cite{rajaram2009} that the Strouhal number $St=f_{\mathrm{max}}{L_{\mathrm{F}}}/{U_{\mathrm{avg}}}$ is in almost all cases close to 1, where $L_{\mathrm{F}}$ is the flame length and $U_\text{avg}$ the average velocity of the reactants mixture. Hence $f_{\mathrm{max}}\approx{U_{\mathrm{avg}}}/{L_{\mathrm{F}}}$, as shown in the inset of \cref{fig:exp_Spp_engine}.a.\\
\\
\noindent{}As exemplified in \cref{fig:exp_Spp_engine}, the acoustic signature dramatically changes when the flame is placed within a combustion chamber. In \cref{fig:exp_Spp_engine}.b, a single mode dominates the spectrum, with a sharp peak at frequency $f_0$, surrounded by several side peaks, which correspond to the other thermoacoustic eigenmodes. One can conveniently express the acoustic pressure at a given location $x$ as
\begin{equation}
\label{P_sum_eigenmodes}
p(x,t)=\sum_{i=1}^{\infty}\eta_{i}(t)\psi_i(x),
\end{equation}
where $\eta_i$ denotes the amplitude of the $i^{th}$ mode and $\psi_i$ the spatial distribution of the corresponding natural acoustic mode of the chamber.
This spatial projection leads to a set of coupled stochastic nonlinear differential equations for the modes $\boldsymbol{\eta}(t)=\left[\eta_1(t),\cdots,\eta_j(t),\cdots\right]^T$.
However, it is often possible to describe the dynamics of a single mode $j$ by neglecting the influence of other modes \cite{culick2006book}.
In this case, the mode amplitude $\eta_j$ is given by the nonlinear stochastic oscillator
\begin{equation}
\label{modeamp}
\ddot{\eta}_{j}+\omega^{2}_{j}{\eta}_{j}=g_{j}({\eta}_{j},\dot{\eta}_{j}) +\xi_{j},
\end{equation}
where $\omega_j=2\pi f_j$ is the angular frequency of the $j^\text{th}$ natural acoustic mode, $\xi_{j}(t)$ is the additive stochastic forcing coming from turbulence-induced processes. The term $g_j$ is a non-linear function which includes, amongst others, the effects of acoustic damping mechanisms and coherent heat release rate fluctuations (this last contribution is coherent in the sense that it depends on the acoustic field). One can see in \cref{fig:network}.b a diagram depicting the coherent feedback $\widehat{q}_\text{a}$ from a flame located in a combustion chamber. At the same time the flame is also influenced by the turbulent flow. The resulting heat release fluctuation is the aforementioned $\widehat{q}_\text{n}$, which was the only source in case of an open flame. The turbulence-induced flow perturbations exhibit a much smaller spatial correlation than the acoustic ones, which are correlated over the entire combustor. These quantities $\widehat{q}_\text{a}(\omega)$ and $\widehat{q}_\text{n}(\omega)$ can be measured in dedicated test rigs equipped with loudspeakers and microphones, as explained in e.g. \cite{paschereit2002}, and can be used afterwards in network models providing predictions of the system stability.\\
\subsection{Colored noise driven Van der Pol oscillator}
\label{VDP_model}
In the following, it is assumed that the non-linear function $g_j$ in \cref{modeamp} results from a linear acoustic damping and a non-linear flame feedback: $g({\eta},\dot{\eta})=\dot{q}_{\mathrm{a}} -\alpha\dot{\eta}$, where $\alpha$ is the damping constant, and subscripts are omitted from now on.
The flame response is expanded up to the third order in acoustic amplitude, which is often sufficient to characterise supercritical thermoacoustic bifurcations \cite{lieuwen03,boujo2016}: ${q}_{\mathrm{a}}=\beta\eta-{\kappa}\eta^3/3$. This assumption yields the already presented Van der Pol oscillator equation:
\begin{equation}
\label{VDP}
\ddot{\eta}+\omega_0^{2}{\eta}=[2\nu-\kappa\eta^{2}]\dot{\eta} +\xi(t)
\end{equation}
where $\nu=(\beta -\alpha)/2$ is the linear growth rate.\\
Considering $\xi$ as a white noise, i.e. a delta correlated forcing, simplifies the modeling approach and has been used in most of the studies dealing with stochastically forced thermoacoustic limit cycles (e.g., again \cite{lieuwen03,boujo2016}).\\
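As an illustration of how such a white noise driven oscillator can be simulated, a minimal semi-implicit Euler--Maruyama integration of \cref{VDP} is sketched below; the parameter values are arbitrary placeholders, chosen only such that $\nu\ll\omega_0$, and do not correspond to any case studied in this paper.
\begin{verbatim}
# Semi-implicit Euler-Maruyama integration of the white-noise-driven VDP
# oscillator: eta'' + omega0^2*eta = (2*nu - kappa*eta^2)*eta' + xi(t),
# with <xi(t)xi(t')> = Gamma*delta(t-t'). Placeholder parameter values.
import numpy as np

omega0 = 2.0 * np.pi * 150.0     # angular eigenfrequency [rad/s]
nu, kappa, Gamma = 10.0, 5.0, 1.0e6
dt, n_steps = 1.0e-5, 400_000    # 4 s of simulated signal

rng = np.random.default_rng(1)
eta, deta = 0.01, 0.0
out = np.empty(n_steps)
for k in range(n_steps):
    acc = (2.0 * nu - kappa * eta**2) * deta - omega0**2 * eta
    # The white-noise increment over dt has standard deviation
    # sqrt(Gamma*dt) for a noise of intensity Gamma.
    deta += acc * dt + np.sqrt(Gamma * dt) * rng.standard_normal()
    eta += deta * dt             # uses the updated velocity (semi-implicit)
    out[k] = eta
\end{verbatim}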
In the remainder of the paper the random forcing $\xi$ is assumed to result from the non-coherent heat release rate fluctuations $q_\text{n}$ only. As a result, $S_{\xi\xi}$ follows the same power law as the combustion noise and is therefore proportional to $S_{pp}$ \cite{ihme2009, liu2015}.\\
\begin{figure}[t!]
\begin{center}
\includegraphics[width=0.5\columnwidth]{13.eps}
\end{center}
\caption{Comparison between white noise (grey) and colored noise (red) power spectra, normalized by the white noise intensity, for different iso-power bandwidths $\Delta\Omega$. The power provided by the two types of noise is equal in the considered band (same area under the curve: note the linear scale). Note that $S_{\xi\xi}(\omega_0)=\Gamma_\text{e}/2\pi \neq \Gamma/2\pi$.\label{fig:isopower}}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=\textwidth]{14.eps}
\end{center}
\caption{Mutual position of noise and pressure spectrum maxima for three different adimensional noise correlation times $2\pi\tau_\xi/T_0$. Depending on the correlation time of the forcing noise, its spectrum maximum changes position accordingly (\cref{fmax_noise}). This modifies the response of the system, as can be observed on the output spectra. \label{fig:BP_fmax}}
\end{figure*}
\noindent{}In order to keep the problem tractable, $\xi$ is defined by
\begin{equation}
\label{transfer_function_noise}
\widehat{\xi}(s)=H(s)\,\widehat{\zeta}(s)=\frac{\sqrt{\gamma}\tau^{2}s}{(1+\tau s)^{2}}\,\widehat{\zeta}(s),
\end{equation}
where $\zeta$ is a Gaussian white noise of intensity $\Gamma$ (i.e. $S_{\zeta\zeta}=\Gamma/2\pi$), $\gamma$ is a constant used to adjust the power of the process $\xi$ and $\tau$ is its characteristic time constant.
The resulting power spectrum is given by $|H|^2S_{\zeta\zeta}$:
\begin{equation}
\label{power_spectrum_noise}
S_{\xi\xi}(\omega)=\frac{\Gamma}{2\pi}\frac{\gamma\omega^{2}\tau^{4}}{(1+\omega^{2}\tau^{2})^{2}},
\end{equation}
that features a maximum at
\begin{equation}
\label{fmax_noise}
f_\text{max}=\frac{1}{2\pi\tau}.
\end{equation}
One can again define an ``effective colored noise intensity'':
\begin{equation}
\label{Gamma_bp}
\Gamma_\text{e}=2\pi S_{\xi\xi}(\omega_0)=\Gamma\frac{\gamma\omega_0^{2}\tau^{4}}{(1+\omega_0^{2}\tau^{2})^{2}}.
\end{equation}
This model is a close approximation of actual experimental data, as shown in \cref{fig:exp_Spp_Sqq_open} (-$\Circle$-). It is also close to other models provided in the literature, such as the one in \cite{liu2015}, but, thanks to its simplicity, it allows for the analytical derivation that follows.\\
As done for the OU case, the colored noise power is equated to that of a white noise of intensity $\Gamma$ in the band $[\omega_1;\omega_2]=\omega_0\pm\Delta\Omega$, which yields:
\begin{equation}
\label{intensity_noise}
\gamma=\frac{2(\omega_{2}-\omega_{1})}{\tau}\left(\text{atan}(\omega_{2}\tau)-\text{atan}(\omega_{1}\tau)-\frac{\omega_{2}\tau}{1+\omega_{2}^{2}\tau^{2}}+\frac{\omega_{1}\tau}{1+\omega_{1}^{2}\tau^{2}}\right)^{-1}.
\end{equation}
One can see in \cref{fig:isopower} how the parameter $\Delta\Omega$ affects the forcing noise power spectrum.\\
The characteristic time $\tau$ is related to the noise correlation time $\tau_\xi$, that can be obtained via the autocorrelation function of $\xi$,
\begin{equation}
\label{autocorrelation_function}
k_{\xi\xi}(t)=\Gamma\frac{\gamma\tau^{2}}{4\sqrt{2\pi}}(\tau-t)e^{-\frac{t}{\tau}},
\end{equation}
\begin{equation}
\label{autocorrelation_time}
\tau_\xi=\frac{1}{k_{\xi\xi}(0)}\int_{0}^{\infty}|k_{\xi\xi}(t)|dt=\frac{2\tau}{e},
\end{equation}
where $e=\exp{(1)} \simeq 2.718$.\\
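A possible numerical realisation of this noise model, obtained by feeding discrete white noise through the transfer function $H(s)$ of \cref{transfer_function_noise}, is sketched below; the values of $\Gamma$ and $\tau$ are illustrative, and $\gamma$ would in practice be computed from \cref{intensity_noise}.
\begin{verbatim}
# Sketch: generate the bandpass colored noise xi(t) by filtering white
# noise through H(s) = sqrt(gamma)*tau^2*s / (1 + tau*s)^2.
import numpy as np
from scipy.signal import lti, lsim

fs = 50_000.0                       # sampling rate [Hz] (placeholder)
t = np.arange(0.0, 1.0, 1.0 / fs)
Gamma, tau = 1.0, 1.0e-3            # placeholder noise parameters
gamma = 1.0                         # in practice from the iso-power relation

f_max = 1.0 / (2.0 * np.pi * tau)   # spectral peak of S_xixi
tau_xi = 2.0 * tau / np.e           # correlation time of xi

# (1 + tau*s)^2 = tau^2*s^2 + 2*tau*s + 1
H = lti([np.sqrt(gamma) * tau**2, 0.0], [tau**2, 2.0 * tau, 1.0])

# Discrete white noise of intensity Gamma: per-sample variance Gamma*fs.
rng = np.random.default_rng(2)
zeta = np.sqrt(Gamma * fs) * rng.standard_normal(t.size)
_, xi, _ = lsim(H, U=zeta, T=t)     # colored noise realisation
\end{verbatim}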
The value of $\tau_\xi$ is related to the ``color'' of the noise: it determines where the maximum of the noise spectrum $f_\text{max}$ is located compared to the oscillator eigenfrequency $f_0$, affecting, as presented in \cref{fig:BP_fmax}, the response of the VDP. Focusing on a band around $f_0$, one can see how the oscillator is forced by a source whose power either increases with frequency (``blue'' noise), or is almost constant (close to a white noise), or decreases (``pink'' noise). The resulting output $p$ is, accordingly, substantially different.
\begin{figure}[h!]
\begin{center}
{\includegraphics[width=0.5\columnwidth]{15.eps}}
\end{center}
\caption{Map of the coefficient $\Gamma/\Gamma_\text{e}={(1+\omega_{0}^{2}\tau^{2})^{2}}/{\gamma\omega_{0}^{2}\tau^{4}}$. The closer this is to one, the closer the analytical expressions for $P_{\mathrm{c}}$ and $P_{\mathrm{w}}$ are. Note that the coefficient can be either greater than one (red scale) or smaller (blue scale).\label{fig:fact_map}}
\end{figure}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\columnwidth]{16.eps}
\end{center}
\caption{Probability density function for two different linear growth rates $\nu$, two different iso-power semi-bandwidth $\Delta\Omega$, and three different adimensional correlation times $2\pi\tau_\xi/T_0$ of the driving noise (where $T_0=2\pi/\omega_0$ is the acoustic period). Solid lines are the PDFs for colored noise VDP (\cref{PAc}), shaded area the white noise driven PDF (\cref{PAw}) for the same parameters. The amplitude $A$ is given as relative to $A_{\mathrm{m}}$, the amplitude where $P_{\mathrm{w}}(A)$ is maximum.\label{fig:vdp_pdf}}
\end{figure}
\begin{figure*}[t!]
\begin{center}
\includegraphics[width=\textwidth]{17.eps}
\end{center}
\caption{Hellinger distance (\ref{H}), quantifying the difference between the PDFs of colored noise and white noise driven VDP oscillators. a) Different maps in the space $(\Delta\Omega,\tau_\xi,\nu)$. b) Detail of a linearly unstable case. c) PDFs for two linearly unstable points.\label{fig:H}}
\end{figure*}
\\
\noindent{}The VDP equation is again recast in amplitude-phase coordinates. In this case, this substitution is legitimate because, in most practical cases, thermoacoustic systems satisfy the condition $\nu\ll\omega_0$. This means that the right hand side of \cref{modeamp} is much smaller than the left hand side, so that $\eta(t)\approx A(t)\cos{[\omega t + \varphi(t)]}=A(t)\cos{\phi(t)}$.
Adopting the colored noise model (\ref{power_spectrum_noise}) for $\xi$, deterministic and stochastic averaging yields the stochastic differential equation:
\begin{equation}
\dot{A}=A\left(\nu-\frac{\kappa}{8}A^{2}\right)+\frac{\Gamma_\text{e}}{4\omega_{0}^{2}A}+\mu(t), \hspace{5mm} \langle\mu\mu_\tau\rangle=\frac{\pi S_{\xi\xi}(\omega_0)\delta(\tau)}{\omega_0^2}=\frac{\Gamma_\text{e}}{2\omega_0^2}\delta(\tau),
\label{A_dot_vdp_avg}
\end{equation}
with $\Gamma_\text{e}$ given by \cref{Gamma_bp}.
\noindent{}Again, the averaging method is valid if the correlation times are such that $\tau_{\xi}\ll\tau_A$ \cite{stratonovich1967}. This is generally verified for practical cases. The amplitude correlation time is related to the growth rate by $\tau_A \simeq \pi/|\nu|$ \cite{noiray16}. Taking $\nu=10$ rad/s, for instance, $\tau_A=314$ ms, while the noise correlation time $\tau_\xi=2\tau/e\approx1/e\pi f_{\mathrm{max}}$ is generally smaller than 1 ms ($f_{\mathrm{max}} \geq 50$ Hz, see \cref{fig:exp_Spp_engine}.a).\\
The stationary probability distribution for the amplitude of the bandpass noise driven VDP oscillator is then:
\begin{equation}
P_{\text{c}}(A)=\mathcal{N}_\text{c}A\exp{\left[\frac{4\omega_{0}^{2}}{\Gamma_\text{e}}\left(\frac{\nu A^{2}}{2}-\frac{\kappa A^{4}}{32}\right)\right]},
\label{PAc}
\end{equation}
where $\mathcal{N}_{\mathrm{c}}$ is the normalization constant to have $\int_{0}^{\infty}P_\text{c}(A)dA=1$.\\
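For completeness, a small numerical sketch of \cref{PAc}, with the normalization constant obtained by quadrature, is given below; the parameter values are placeholders.
\begin{verbatim}
# Sketch: evaluate the stationary amplitude PDF P_c(A) of the colored
# noise driven VDP, normalizing by numerical quadrature. Placeholder values.
import numpy as np
from scipy.integrate import quad

omega0 = 2.0 * np.pi * 150.0
nu, kappa, Gamma_e = 10.0, 5.0, 4.0e7

def unnormalized_Pc(A):
    return A * np.exp(4.0 * omega0**2 / Gamma_e
                      * (nu * A**2 / 2.0 - kappa * A**4 / 32.0))

A_max = 5.0 * np.sqrt(8.0 * nu / kappa)    # well beyond the PDF peak
norm, _ = quad(unnormalized_Pc, 0.0, A_max)

def Pc(A):
    """Normalized PDF, integrating to 1 on [0, A_max]."""
    return unnormalized_Pc(A) / norm
\end{verbatim}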
%
Like for the OU case, \cref{A_dot_vdp_avg,PAc} have the same structure as their white noise driven counterparts \cref{Aw_dot_vdp_avg,PAw}, with the effective colored noise intensity $\Gamma_\text{e}$ (\cref{Gamma_bp}) replacing the white noise intensity $\Gamma$. Therefore, $P_{\mathrm{c}}$ and $P_{\mathrm{w}}$ differ only by the factor $\Gamma/\Gamma_\text{e}$ in the exponential. In \cref{fig:fact_map} one can see a map of this factor, as a function of the iso-power bandwidth $\Delta\Omega$ and of the source noise correlation time $\tau_\xi$.
\\
\noindent{}Comparing this map with the one for the OU noise (\cref{fig:OU_fact_map}), one can notice that $\Gamma$ and $\Gamma_\text{e}$ might differ whatever the correlation time. This is due to the fact that this type of noise does not converge to a white one for short $\tau_\xi$. Another difference is that this coefficient can be lower than one.\\
\noindent{}In line with this map, $P_{\mathrm{c}}$ and $P_{\mathrm{w}}$ can show significant differences, as depicted in \cref{fig:vdp_pdf}.
To compare $P_\text{c}$ and $P_\text{w}$ quantitatively, the Hellinger distance is plotted in \cref{fig:H}. As for the OU noise, for small $\Delta\Omega$, $H$ tends to 0. However, in this case, for large $\Delta\Omega$, $H$ is large whatever the correlation time of the noise source.\\
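A compact way of computing such a distance numerically, assuming the standard definition $H^2(P_{\mathrm c},P_{\mathrm w})=1-\int\sqrt{P_{\mathrm c}P_{\mathrm w}}\,\mathrm{d}A$, is sketched below; \texttt{Pc} and \texttt{Pw} stand for normalized PDF callables such as the one built above.
\begin{verbatim}
# Sketch: Hellinger distance between two normalized amplitude PDFs,
# assuming H^2 = 1 - int sqrt(Pc*Pw) dA (standard definition).
import numpy as np
from scipy.integrate import trapezoid

def hellinger(Pc, Pw, A_max, n=20_000):
    A = np.linspace(0.0, A_max, n)
    overlap = trapezoid(np.sqrt(Pc(A) * Pw(A)), A)
    return np.sqrt(max(0.0, 1.0 - overlap))
\end{verbatim}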
\section{Thermoacoustic instabilities: parameter identification}
\label{Results}
\noindent In this section, the white noise approximation is assessed in the context of parameter identification. As discussed before, the dynamics of a thermoacoustic mode can be seen as a SISO system. Although the output, represented by the pressure oscillations, is easily accessible via experimental measurements, the input, resulting from turbulence, is not known. Therefore, this system necessitates output-only parameter identification methods.
\subsection{Assessment of the white noise approximation}
\label{Validity of the white noise approximation}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\columnwidth]{18.eps}
\end{center}
\caption{Map of band-pass colored noise driven oscillator identification, for different noise correlation time $\tau_\xi$ and iso-power semi-bandwidth $\Delta\Omega$. a) Identification using the unfiltered data. b) Identification using signals filtered in the band $[f_0(1\pm0.5)]$. The identification result is given as relative error: $\varepsilon=|\nu_\text{t}-\nu_\text{id}|/\nu_\text{t}$.\label{fig:BP_ID_map}}
\end{figure}
\begin{figure*}[th!]
\begin{center}
\includegraphics[width=\textwidth]{19.eps}
\end{center}
\caption{Effects of three different filter bandwidths on the analysis of combustor pressure experimental data. a) Acoustic pressure spectrum and filters bands. b) Time traces resulting from the three different filtering. c) Detail of the envelopes over a time span of two $\tau_A$.\label{fig:filt_w}}
\end{figure*}
\noindent{}Following the same procedure as in \cref{OU2}, 100 test cases with fixed oscillator parameter $\nu=\nu_\text{t}$ and $\kappa=\kappa_\text{t}$, but different noise parameters $\tau_\xi$ and $\Delta\Omega$, are analysed to ensure that the identification methods relying on the white noise assumption are not biased by the actual noise spectrum and autocorrelation. The relative error $\varepsilon=|\nu_\text{t}-\nu_\text{id}|/\nu_\text{t}$ on the estimated oscillator linear growth rate $\nu_\text{id}$ is presented in \cref{fig:BP_ID_map}.\\
\noindent{}Like for the OU noise case, the identification might fail if the unfiltered data are used (left panel). It is interesting to notice that, compared to the OU case, the error is generally less severe. This is due to the spectral distribution of the bandpass noise, rapidly decaying in power at high and low frequencies. Another peculiar aspect is the distribution of the errors in the map. While for the OU noise low $\tau_\xi$ means a quasi-white noise forcing and, therefore, a small identification error, here short $\tau_\xi$ corresponds to a blue noise forcing.\\
As in the OU case, filtering the data prior to parameter identification improves the identification results (right panel of \cref{fig:BP_ID_map}). This is, again, due to a more accurate approximation of the real forcing spectrum with a white one in the considered frequency range. \\
The bandwidth of the filter adopted in the pre-processing of the data has to be chosen with care in order not to discard essential amplitude dynamics. In addition, practical acoustic spectra often feature neighboring peaks around the main one, due to the coexistence of several thermoacoustic modes in the combustor. This is a further constraint when one analyses experimental data and performs single-mode output-only parameter identification. These two aspects are covered in the following.
\subsection{Effect of data preprocessing on parameter identification}
\label{Filter size for system identification}
\begin{figure*}[h!]
\begin{center}
\includegraphics[width=\textwidth]{20.eps}
\end{center}
\caption{Double-oscillator simulation results. The model is made of two non-linear oscillators linearly coupled, and is fed by colored noise. Oscillator \#1 is linearly unstable, oscillator \#2 is stable. Values adopted for the parameters (refer to \cref{2VDP_eq}): $\omega_2=1.3\omega_1$, $\alpha_1/\omega_1=0.02$, $\beta_1/\omega_1=0.03$, $\alpha_2/\omega_1=0.03$, $\beta_2/\omega_1=-0.02$, $\kappa_1/\omega_1=\kappa_2/\omega_1=0.015$. The colored noise parameters (refer to \cref{power_spectrum_noise}) are: $f_\text{max}/f_1=0.2$, $\Delta\Omega/\omega_1=0.5$, $\Gamma/4\omega_1^2=1$.
a) Overview of the total pressure and forcing noise spectra. b) The spectral SPL of total output, single oscillators outputs $p_1$ and $p_2$, and forcing noise, in the frequency band that encloses the two oscillators' natural frequencies. c) The poles of the linearised coupled system move due to the feedback action, that can either decrease or increase the stability margin of each mode, or even fully destabilize a mode.\label{fig:vdp2}}
\end{figure*}
\begin{figure}[h!]
\begin{center}
\includegraphics[width=0.5\columnwidth]{21.eps}
\end{center}
\caption{Identified growth rate against the filter semi-bandwidth. The source signal is obtained via a Simulink\textsuperscript{\textregistered} simulation of a double VDP oscillator, of known parameter (e.g. $\nu_{\text{t}}$ is the true growth rate). The identification is performed on mode \#1, of eigenfrequency $f_1$, while another mode of eigenfrequency $f_2$ is in place. Three identification methods of \cite{noiray13} are used, respectively based on: the power spectral density of the amplitude (-$\bigtriangleup$-), the probability density function of the amplitude (-$\Circle$-) and the coefficients of the Fokker-Planck equation (-$\square$-).\label{fig:nu_id}}
\end{figure}
\noindent A typical combustor acoustic pressure spectrum features several peaks (\cref{fig:exp_Spp_engine}.b). The different modes acting in the domain are mutually coupled, each one influencing the response of the others. However, if the neighboring peaks are not too close, one can analyse one mode at a time, isolating its dynamics from those of the other modes. This is easily done by bandpass filtering the data and simplifies the system identification, since neither the parameters of neighboring modes, nor the coupling coefficients have to be taken into account.\\
\Cref{fig:filt_w} shows a typical situation and the effects of different filter bandwidths. A wider portion of this spectrum has already been shown in \cref{fig:exp_Spp_engine}.b. This experimental spectrum features a strong peak, corresponding to the dominant mode eigenfrequency, surrounded by two other small peaks. In order to identify the mode parameters accurately, removing the effect of the other modes, the signal is filtered around the main peak, i.e. in the band $[f_0-\Delta f; f_0+\Delta f]$. The maximum bandwidth is the one that discards neighboring peaks while keeping the main peak and its tails ($\Delta f/f_0=0.20$ in this case). One could also choose narrower bands ($\Delta f/f_0=0.10$ or $\Delta f/f_0=0.025$ in this example), obtaining different resulting time signals. Looking at the central panels of \cref{fig:filt_w}, one can see that in the first case (green), the dynamics on time scales comparable to the amplitude correlation time $\tau_A=\pi/|\nu|$ are preserved: compared to the widest filter (blue), only high-frequency amplitude oscillations are lost. This means that the essential dynamics are unaffected. In the second case (red), the general trend is followed, but too much information has been lost to reliably identify the parameters $\nu$ and $\kappa$.\\
In the following, a ``toy model" of two coupled oscillators driven by colored noise is used to illustrate this issue:
\begin{equation}
\label{2VDP_eq}
\begin{cases}
\ddot{\eta}_1+\alpha_1\dot{\eta}_1+\omega_1^2\eta_1=[\beta_1-\kappa_1\eta_1^2]\dot{\eta_1} + [\beta_2-\kappa_2\eta_2^2]\dot{\eta_2} + \xi \\
\ddot{\eta}_2+\alpha_2\dot{\eta}_2+\omega_2^2\eta_2=[\beta_1-\kappa_1\eta_1^2]\dot{\eta_1} + [\beta_2-\kappa_2\eta_2^2]\dot{\eta_2} + \xi \\
p=p_1+p_2=\psi_1\eta_1+\psi_2\eta_2.
\end{cases}
\end{equation}
The total output $p$, which is the sum of the outputs of the two oscillators $\eta_{1}$ and $\eta_{2}$ weighted by $\psi_1$ and $\psi_2$, features a spectrum, plotted in \cref{fig:vdp2}, that is similar to the experimental pressure spectrum shown in \cref{fig:exp_Spp_engine}. This figure also highlights what is hidden behind a single-mode approximation.\\
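A hedged sketch of how such a toy model can be integrated numerically is given below; white noise is used as a stand-in for the colored forcing, the parameter ratios follow the caption of \cref{fig:vdp2}, and the absolute scales are placeholders.
\begin{verbatim}
# Sketch: semi-implicit Euler-Maruyama integration of the two linearly
# coupled oscillators of the toy model, with a common stochastic forcing.
# White noise stands in for the colored forcing; scales are placeholders.
import numpy as np

w1 = 2.0 * np.pi * 150.0
w2 = 1.3 * w1
a1, b1 = 0.02 * w1, 0.03 * w1        # oscillator #1: linearly unstable
a2, b2 = 0.03 * w1, -0.02 * w1       # oscillator #2: linearly stable
k1 = k2 = 0.015 * w1
Gamma, dt, n = 1.0e6, 1.0e-5, 400_000

rng = np.random.default_rng(3)
e1, de1, e2, de2 = 0.01, 0.0, 0.0, 0.0
p = np.empty(n)
for i in range(n):
    fb = (b1 - k1 * e1**2) * de1 + (b2 - k2 * e2**2) * de2
    xi = np.sqrt(Gamma * dt) * rng.standard_normal()   # common forcing
    de1 += (-a1 * de1 - w1**2 * e1 + fb) * dt + xi
    de2 += (-a2 * de2 - w2**2 * e2 + fb) * dt + xi
    e1 += de1 * dt
    e2 += de2 * dt
    p[i] = e1 + e2          # psi_1 = psi_2 = 1 for simplicity
\end{verbatim}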
\noindent{}Note the difference between the spectra of $p_2$ without coupling (theoretical, thick blue) and with coupling (numerical, thin blue), especially for $f=f_1$. This difference appears because the oscillators are coupled and the linearly unstable oscillator \#1, characterized by a limit-cycle at $f_1=\omega_1/2\pi$, is forcing oscillator \#2, having eigenfrequency $f_2=1.3f_1=\omega_2/2\pi$. At the same time, the linearly stable mode (oscillator \#2) contributes to $S_{pp}$ around the eigenfrequency $f_1$ of the unstable mode (oscillator \#1).
Therefore, the response of the system at $f=f_1$ is not due to the oscillator \#1 only.
However, if the two peaks are distant enough and one is stronger than the other, these mutual contributions are negligible, compared to the direct output of the oscillator \#1 at its natural frequency (more than 20 dB of difference in this example). Restricting the discussion to this case, one can adopt the aforementioned single-mode approximation, and attempt a parameter identification on one mode at a time.\\
\noindent{}To test the sensitivity of the identification results to the filter bandwidth, the output signal is filtered with different bandwidths around the first eigenfrequency $f_1$. The aim is to extract the linear growth rate of the first, unstable oscillator, which has the true value $\nu_{\text{t}} \approx (\beta_1-\alpha_1)/2$. For this purpose, three different methods of \cite{noiray13} are used. The results are presented in \cref{fig:nu_id}. One can observe that, whatever the adopted identification method, for too narrow a filter bandwidth the identified growth rate is far from the true one, whereas it converges to $\nu_\text{t}$ for large enough windows. On the other hand, when the filter is too wide, the effect of the neighboring mode starts to bias the identification.
Therefore, when one analyses experimental data around a frequency of interest, there exist, for the filter bandwidth: i) a lower limit, given by the need not to alter the amplitude statistics,
ii) an upper limit, given by the distance from the neighboring peaks.
These constraints have to be satisfied in parallel with the one regarding the validity of the white noise approximation (\cref{Validity of the white noise approximation}). However, in most of the practical cases, neighboring peaks are close and the maximum filter bandwidth to satisfy condition ii) is narrow enough that the effect of noise color can be safely neglected.\\
On the other hand, it is clear that any identification attempt on a mode that is both highly unstable and very close to another mode will fail, because the filter to adopt to isolate the dynamics of one mode would be so narrow that condition i) is not fulfilled. In this situation a two-mode model would be required for parameter identification. As already suggested, it is advisable to iterate the parameter identification varying the applied filter bandwidth: one can be confident in the result if a plateau is observed.
\section{Conclusion}
\noindent In this work, the effects of the color of a stochastic excitation driving a Van der Pol oscillator have been investigated.
First, an Ornstein-Uhlenbeck process has been considered as the driving source. Then, a noise model, mimicking the stochastic forcing exerted by turbulence in thermoacoustic systems, has been used.
It has been shown that in both cases the envelope statistics are the same as those obtained with a white noise forcing, provided that an equivalent effective noise intensity is considered.
Then, the approximation of a colored noise by a white one has been assessed in the context of data analysis and parameter identification.
The main conclusion is that one can reliably identify the linear growth rate and saturation constant by band-pass filtering the data around the oscillator eigenfrequency.\\
This result is valid regardless of the parameter values and the nature of the forcing noise. This fact consolidates the output-only parameter identification methods proposed in \cite{noiray13}, because in real cases it might be impossible to determine the spectral distribution of the forcing noise.\\
\section*{Acknowledgement}
\noindent{}This research is supported by the Swiss National Science Foundation under Grant 160579.
\bibliographystyle{apsrev4-1}
\label{sec:1}
The failure of the Standard Model (SM) in describing phenomena like the baryon asymmetry of the universe (BAU) and the dark matter (DM) indicates that the SM cannot be considered a fundamental model. Nevertheless, the discovery of the Higgs boson~\cite{Aad:2012tfa,Chatrchyan:2012xdj}, the first observed scalar, has opened the way to consider the SM as an effective field theory (EFT) and also a window to the Higgs portal. To address the BAU and DM problems, many theories and models have been proposed beyond the SM, such as supersymmetric studies~\cite{Cline:2000kb,Huber:2006wf,Huber:2006ma,Pietroni:1992in,Davies:1996qn,Ham:2004nv,Menon:2009mz,Carena:2012np,Huber:2007vva,Kozaczuk:2014kva,Kozaczuk:2012xv,Jungman:1995df,Menon:2004wv,Cirigliano:2006dg,Cao:2011re}. Due to the attraction of the Higgs portal, it has always been of interest to investigate SM extensions which directly exploit the Higgs portal, like multi-scalar extensions~\cite{Damgaard:2015con,Vaskonen:2016yiu,Beniwal:2017eik,Chen:2017qcz,Espinosa:2011ax,Cline:1996mga,Fromme:2006cm,Kang:2017mkl,Dorsch:2013wja,Haarr:2016qzq,Gunion:1989ci,FileviezPerez:2008bj,Alanne:2014bra}. The existence of interactions between the Higgs and new scalars makes such models suitable candidates to explain the BAU, which needs a strong first-order electroweak phase transition (SFOEWPT), the gravitational waves (GW) produced by an SFOEWPT, and the DM. Moreover, such models have other benefits. First, they are simple and straightforward. Second, they may be renormalizable, so no new physics scale is needed. Third, they may be gauge independent, if there exists a barrier in the potential at tree level~\cite{Patel:2011th}.
Explaining the BAU requires baryogenesis~\cite{Kuzmin:1985mm,Shaposhnikov:1987tw,Dine:2003ax,Cline:2006ts,Canetti:2012zc}, which itself needs an SFOEWPT, i.e. $\frac{v_{c}}{T_{c}}\gtrsim1$, where $v_{c}$ is the Higgs vacuum expectation value (VeV) at the critical temperature $T_{c}$. This does not happen in the SM, but adding one or more new scalars to the SM potential may lead to an SFOEWPT. Depending on the structure of the new potential, two different kinds of phase transition (PT) can happen. One of them is the one-step PT, in which only initial and final phases exist: cooling down, the universe goes through a phase transition and breaks the electroweak symmetry. The other one is the two-step (or multi-step) PT, in which an intermediate phase (or more) also exists between the initial and final phases~\cite{Land:1992sm,Hammerschmitt:1994fn,Patel:2012pi,Huang:2014ifa,Blinov:2015sna}. The reader is referred to~\cite{Ellis:2018mja,Baker:2017zwx,Croon:2018erz,Beniwal:2018hyi,Huang:2017laj,Hashino:2018zsi,Mazumdar:2018dfl,Ghosh:2017fmr} for the most recent studies on the EWPT.
During the SFOEWPT, bubbles with non-zero VeV nucleate in the plasma. The stochastic GW background arising from the SFOEWPT can be generated by bubble collisions and shocks~\cite{Kosowsky:1991ua,Kosowsky:1992rz,Kosowsky:1992vn,Kamionkowski:1993fg,Caprini:2007xq,Huber:2008hg}, sound waves~\cite{Hindmarsh:2013xza,Giblin:2013kea,Giblin:2014qia,Hindmarsh:2015qta}, and Magnetohydrodynamic (MHD) turbulence~\cite{Caprini:2006jb,Kahniashvili:2008pf,Kahniashvili:2008pe,Kahniashvili:2009mf,Caprini:2009yp} in the plasma. Since the EWPT in the SM is a cross-over instead of a strong first-order one, the SM cannot predict GW produced by the EWPT; this is another reason to look beyond the SM. The recent observations of astrophysical GW~\cite{Abbott:2016blz,Abbott:2016nmj,Abbott:2017ylp,Abbott:2017vtc,Abbott:2017oio,TheLIGOScientific:2017qsa,Monitor:2017mdv,Abbott:2017gyy} have brought the hope of detecting the GW produced by the EWPT~\cite{Caprini:2015zlo,Weir:2017wfa,Caprini:2018mtu}. The reader is referred to~\cite{Ellis:2018mja,Croon:2018erz,Beniwal:2018hyi,Huang:2017laj,Hashino:2018zsi,Demidov:2017lzf,Mazumdar:2018dfl,Kobakhidze:2016mch,Kobakhidze:2017mru,Dev:2016feu} for the most recent studies on the GW produced by the EWPT.
As mentioned before, the SM cannot explain the DM, whose existence is well established by cosmological evidence. In the simplest approach, this shortcoming can be remedied by adding one (or more) gauge singlet scalar to the SM. Since the DM should be stable to provide the observed relic density $\Omega_{c}h^{2}=0.120\pm0.001$ reported by $\mathbf{Planck}$ $\mathbf{2018}$~\cite{Aghanim:2018eyx}, it is necessary to impose a discrete symmetry on the DM candidate, in the present study $S_{2} \rightarrow -S_{2}$. On the other hand, the global minimum of the potential at zero temperature must not spontaneously break this discrete symmetry, so necessarily $<S_{2}>=0$. The reader is referred to~\cite{Athron:2018ipf,Bernal:2018ins,Baum:2017enm,Baker:2017zwx,Croon:2018erz,Beniwal:2018hyi,Li:2018qip,Hashino:2018zsi,Bandyopadhyay:2017tlq,Yepes:2018zkk,Ghosh:2017fmr} for the most recent studies on the DM.
The present work is arranged as follows: In section \ref{sec:2}, the most general renormalizable extension of the SM is presented by adding two scalar sectors $S_{1}$ and $S_{2}$ to the usual SM potential\footnote{The model was first presented in~\cite{Tofighi:2015fia} without the GW and the DM discussions. Here, the results of~\cite{Tofighi:2015fia} are improved for the EWPT; also, the GW and the DM signals are investigated.}. Assigning a non-zero VeV to $S_{1}$, an SFOEWPT can occur in the model. Imposing a $Z_{2}$ symmetry on $S_{2}$ makes it a viable candidate for the DM. Also, constraints on the parameter space are discussed. The EWPT, GW and DM are investigated in sections \ref{sec:3}, \ref{sec:4} and \ref{sec:5}, respectively. Finally, some conclusions are presented in section \ref{sec:6}.
\section{The Model}
\label{sec:2}
The tree-level potential of the model is given by
\begin{equation} \label{eq:2.1}
\begin{split}
V = & - m^{2} H^{\dagger}H + \lambda (H^{\dagger}H)^2 + \kappa_{0} S_{1} + 2 (\kappa_{1} S_{1}+\kappa_{2} S_{1}^{2}+\kappa_{3} S_{2}^{2}) H^{\dagger}H \\
& + \frac{1}{2} m_{1}^{2} S_{1}^{2} + \frac{\lambda_{1}}{4} S_{1}^{4} + \kappa_{4} S_{1}^{3} + \frac{1}{2} m_{2}^{2} S_{2}^{2} + \frac{\lambda_{2}}{4} S_{2}^{4} + \kappa_{5} S_{1} S_{2}^{2}.
\end{split}
\end{equation}
The potential \ref{eq:2.1} is the usual SM potential with two extra gauge singlet scalars and interaction terms which provide a Higgs portal between the new scalars and the usual SM particles. $H$ stands for the complex Higgs doublet, $H=\begin{pmatrix}\chi_{1}+i\chi_{2} \\ \frac{1}{\sqrt{2}}(h+i\chi_{3}) \\ \end{pmatrix}$. $S_{2}$ stands for the DM, the symmetry $S_{2}\rightarrow-S_{2}$ being imposed. Acquiring a non-zero VeV, $S_{1}$ improves the strength of the EWPT. The linear term of $S_{1}$ can be neglected by a shift in the potential. The $Z_{2}$ symmetry forbids the existence of linear and cubic terms for $S_{2}$, so equation \ref{eq:2.1} is the most general renormalizable potential which can be built by adding two new scalars. In the unitary gauge at zero temperature, the theoretical fields can be reparameterized in terms of the physical fields,
\begin{equation} \label{eq:2.2}
H=\begin{pmatrix}0 \\ \frac{1}{\sqrt{2}}(h+v) \\ \end{pmatrix},\quad S_{1}=s_{1}+\chi,\quad S_{2}=s_{2},
\end{equation}
where $v=246.22 (GeV)$ and $\chi$ are the Higgs and $S_{1}$ VeVs, respectively. Without loss of generality, one can write
\begin{equation} \label{eq:2.3}
\begin{split}
V = & -\frac{1}{2} m^{2} h^{2} + \frac{\lambda}{4} h^{4} + (\kappa_{1} s_{1}+\kappa_{2} s_{1}^{2}+\kappa_{3} s_{2}^{2}) h^{2} \\
& + \frac{1}{2} m_{1}^{2} s_{1}^{2} + \frac{\lambda_{1}}{4} s_{1}^{4} + \kappa_{4} s_{1}^{3} + \frac{1}{2} m_{2}^{2} s_{2}^{2} + \frac{\lambda_{2}}{4} s_{2}^{4} + \kappa_{5} s_{1} s_{2}^{2}.
\end{split}
\end{equation}
In order to have a stable potential, it is required that~\cite{Espinosa:2011ax,Tofighi:2015fia}
\begin{equation} \label{eq:2.4}
\lambda>0,\quad \lambda_{1}>0,\quad \lambda_{2}>0,\quad \kappa_{2}>-\frac{\sqrt{\lambda \lambda_{1}}}{2},\quad \kappa_{3}>-\frac{\sqrt{\lambda \lambda_{2}}}{2}.
\end{equation}
The tadpole equations at $(v,\chi,0)$ read
\begin{equation} \label{eq:2.5}
\begin{split}
& m^{2}=\lambda v^{2} + 2 (\kappa_{1} \chi + \kappa_{2} \chi^{2}), \\
& m_{1}^{2}=-\lambda_{1} \chi^{2} - 3 \kappa_{4} \chi - 2 \kappa_{2} v^{2} - \frac{\kappa_{1} v^{2}}{\chi}.
\end{split}
\end{equation}
From the diagonalization of squared-mass matrix and the tadpole equations, one can get
\begin{equation} \label{eq:2.6}
\begin{split}
& \lambda=\frac{M_{1}^{2}sin^{2}(\theta)+M_{H}^{2}cos^{2}(\theta)}{2 v^{2}}, \\
& \kappa_{2}=\frac{(M_{H}^{2}-M_{1}^{2})sin(2\theta)}{8v\chi}-\frac{\kappa_{1}}{2\chi}, \\
& \lambda_{1}=\frac{1}{2\chi^{2}}\left(M_{1}^{2}cos^{2}(\theta)+M_{H}^{2}sin^{2}(\theta)+\frac{\kappa_{1}v^{2}}{\chi}-3 \chi \kappa_{4}\right), \\
& m_{2}^{2}=M_{2}^{2}-2\kappa_{3}v^{2}-2\kappa_{5}\chi,
\end{split}
\end{equation}
where $M_{H}=126 (GeV)$, $M_{1}$, $M_{2}$ and $\theta$ are the Higgs mass\footnote{The latest measurement of the Higgs mass is $M_{H}=125.09 (GeV)$~\cite{Aad:2015zhl}; however, a 1-3 GeV deviation is acceptable.}, the physical mass of $S_{1}$, the physical mass of $S_{2}$ (the DM mass) and the mixing angle, respectively. In Ref.~\cite{Profumo:2014opa}, by performing a global fit to the Higgs data from both $\mathbf{ATLAS}$ and $\mathbf{CMS}$, the constraint on the mixing angle was found to be $|\theta|\leq32.86^{\circ}$ at $95\%$ confidence level (CL). In Ref.~\cite{Chao:2016vfq}, by performing a universal Higgs fit, the upper limit on the mixing angle was found to be $|\theta|\leq30.14^{\circ}$ at $95\%$ CL. In the present work, a Monte Carlo scan is performed over the parameter space with
\begin{equation} \label{eq:2.7}
\begin{split}
& 5GeV \leq M_{1} \leq 750 GeV,\quad 5GeV \leq M_{2} \leq 750 GeV,\quad -23^{\circ}\leq \theta \leq23^{\circ}, \\
& -80GeV\leq \kappa_{1}\leq 80GeV,\quad 0.0001\leq \kappa_{3}\leq 0.1,\quad -80GeV\leq \kappa_{4}\leq 80GeV, \\
& -80GeV\leq \kappa_{5}\leq 80GeV,\quad 30 GeV\leq \chi\leq 120 GeV,\quad 0\leq \lambda_{2}\leq 4.
\end{split}
\end{equation}
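A minimal sketch of such a scan is given below: the parameter draws follow the ranges above, the derived couplings implement \eqref{eq:2.6}, and the acceptance test implements the stability conditions \eqref{eq:2.4}. The subsequent EWPT and DM checks, performed with external tools, are not reproduced here.
\begin{verbatim}
# Sketch of the Monte Carlo scan: draw physical inputs, derive the
# Lagrangian couplings via eq. (2.6) and keep points passing eq. (2.4).
import numpy as np

rng = np.random.default_rng(4)
v, MH = 246.22, 126.0                      # [GeV]

def draw_point():
    return dict(M1=rng.uniform(5, 750), M2=rng.uniform(5, 750),
                theta=np.deg2rad(rng.uniform(-23, 23)),
                kappa1=rng.uniform(-80, 80), kappa3=rng.uniform(1e-4, 0.1),
                kappa4=rng.uniform(-80, 80), kappa5=rng.uniform(-80, 80),
                chi=rng.uniform(30, 120), lambda2=rng.uniform(0, 4))

def derived_couplings(p):
    """lambda, kappa2, lambda1 and m2^2 from the inputs, eq. (2.6)."""
    s, c = np.sin(p["theta"]), np.cos(p["theta"])
    lam = (p["M1"]**2 * s**2 + MH**2 * c**2) / (2 * v**2)
    kap2 = ((MH**2 - p["M1"]**2) * np.sin(2 * p["theta"])
            / (8 * v * p["chi"]) - p["kappa1"] / (2 * p["chi"]))
    lam1 = (p["M1"]**2 * c**2 + MH**2 * s**2
            + p["kappa1"] * v**2 / p["chi"]
            - 3 * p["chi"] * p["kappa4"]) / (2 * p["chi"]**2)
    m2sq = p["M2"]**2 - 2 * p["kappa3"] * v**2 - 2 * p["kappa5"] * p["chi"]
    return lam, kap2, lam1, m2sq

def stable(lam, kap2, lam1, lam2, kap3):
    """Vacuum stability conditions, eq. (2.4)."""
    return (lam > 0 and lam1 > 0 and lam2 > 0
            and kap2 > -np.sqrt(lam * lam1) / 2
            and kap3 > -np.sqrt(lam * lam2) / 2)

points = []
while len(points) < 100:
    p = draw_point()
    lam, kap2, lam1, m2sq = derived_couplings(p)
    if stable(lam, kap2, lam1, p["lambda2"], p["kappa3"]):
        points.append(p)                   # candidate for the EWPT/DM checks
\end{verbatim}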
\section{Electroweak Phase Transition}
\label{sec:3}
To investigate the EWPT in a model, one needs the finite temperature effective potential given by
\begin{equation} \label{eq:3.1}
V_{eff}=V_{tree-level}+V_{1-loop}^{T=0}+V_{1-loop}^{T\neq0},
\end{equation}
where $V_{tree-level}$, $V_{1-loop}^{T=0}$ and $V_{1-loop}^{T\neq0}$ are the tree-level potential \eqref{eq:2.3}, the one-loop corrected potential at zero temperature (the so-called Coleman-Weinberg potential) and the one-loop finite temperature corrections, respectively. The last two read
\begin{equation} \label{eq:3.2}
\begin{split}
& V_{1-loop}^{T=0}=\pm\frac{1}{64\pi^2}\sum_{i=h,s_{1},s_{2},W,Z,t} n_i
m_i^4\left[\log\frac{m_i^2}{Q^2}-C_i\right],\\
& V_{1-loop}^{T\neq0}=\frac{T^4}{2\pi^2} \sum_{i=h,s_{1},s_{2},W,Z,t} n_i J_\pm \left[ \frac{m_{i}^{2}}{T^{2}}\right],\\
& J_\pm(\frac{m_{i}^{2}}{T^{2}}) = \pm \int_0^\infty dy\, y^2 \log\left(1\mp e^{-\sqrt{y^2+\frac{m_{i}^{2}}{T^{2}}}}\right),
\end{split}
\end{equation}
where $n_{i}$, $m_{i}$, $Q$ and $C_{i}$ denote the degrees of freedom, the field-dependent masses, the renormalization scale and the numerical constants, respectively. The degrees of freedom and the numerical constants are respectively given by $(n_{h,s_{1},s_{2}},n_{W},n_{Z},n_{t})=(1,6,3,12)$ and $(C_{W,Z},C_{h,s_{1},s_{2},t})=(5/6,3/2)$. The upper (lower) sign is for bosons (fermions). Assuming the longitudinal gauge boson polarizations are screened by the plasma, thermal masses only contribute to the scalars, so Daisy corrections become small and can be neglected. There are three possibilities to deal with the renormalization scale $Q$. The first is to add some counter terms to the effective potential \eqref{eq:3.1} to make it independent of $Q$ without shifting the VeV at zero temperature~\cite{Quiros:1999jp,Delaunay:2007wb}. The second is to set $Q$ at a proper scale, like $Q=160(GeV)$ (the running value of the top mass), $Q=246.22(GeV)$ (the EW scale) or $Q=1(TeV)$ (for supersymmetry purposes). The third is to take $Q$ as a free parameter to avoid shifting the VeV at zero temperature. Here, the last option is adopted.
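The thermal functions $J_\pm$ have no closed form but are straightforward to evaluate numerically; a minimal sketch, valid for real $m_i^2/T^2>0$, is given below.
\begin{verbatim}
# Sketch: numerical evaluation of the thermal functions J_± defined above,
# for real, positive m^2/T^2 (negative field-dependent mass-squared values
# would need a dedicated treatment, not shown here).
import numpy as np
from scipy.integrate import quad

def J(m2_over_T2, boson=True):
    sign = 1.0 if boson else -1.0          # upper sign: bosons
    def integrand(y):
        return sign * y**2 * np.log1p(
            -sign * np.exp(-np.sqrt(y**2 + m2_over_T2)))
    val, _ = quad(integrand, 0.0, np.inf, limit=200)
    return val

# Analytic check: J_+(0) = -pi^4/45 ~ -2.1646 for a massless boson.
print(J(1.0, boson=True), J(1.0, boson=False))
\end{verbatim}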
The main idea of the EWPT is that the early universe, which from the particle physics point of view may be described by the potential \eqref{eq:3.1}, is in a high phase\footnote{In this work, the high (low) phase denotes a phase which is the unstable (stable) vacuum for temperatures below $\mathrm{T_{c}}$.} with \begin{math}VeV=(<h>,<s1>,<s2>)^{high}\end{math} at high temperatures. Cooling down the universe, a new phase appears with \begin{math}VeV=(<h>,<s1>,<s2>)^{low}\end{math}. As the universe cools down further, the two phases become degenerate at the critical temperature $T_{c}$. Since the strength of the EWPT is governed by $\xi=\frac{v_{c}}{T_{c}}$, all that needs to be done is to calculate $v_{c}$ and $T_{c}$ from the following conditions:
\begin{equation} \label{eq:3.3}
\begin{split}
& \left.\frac{\partial V_{eff}}{\partial h}\right|_{(<h>,<s1>,<s2>)^{high},T=T_{c}} = 0,\quad \left.\frac{\partial V_{eff}}{\partial h}\right|_{(<h>,<s1>,<s2>)^{low},T=T_{c}} = 0,\\
& \left.\frac{\partial V_{eff}}{\partial s_{1}}\right|_{(<h>,<s1>,<s2>)^{high},T=T_{c}} = 0,\quad \left.\frac{\partial V_{eff}}{\partial s_{1}}\right|_{(<h>,<s1>,<s2>)^{low},T=T_{c}} = 0,\\
& \left.\frac{\partial V_{eff}}{\partial s_{2}}\right|_{(<h>,<s1>,<s2>)^{high},T=T_{c}} = 0,\quad \left.\frac{\partial V_{eff}}{\partial s_{2}}\right|_{(<h>,<s1>,<s2>)^{low},T=T_{c}} = 0,\\
& V_{eff}\Big|_{(<h>,<s1>,<s2>)^{high},T=T_{c}} = V_{eff}\Big|_{(<h>,<s1>,<s2>)^{low},T=T_{c}}.
\end{split}
\end{equation}
The last condition guarantees degeneracy and the others guarantee the existence of the high and low vacua. There is no analytical solution for the problem, so the calculations are implemented with the CosmoTransitions package~\cite{Wainwright:2011kj}. The benchmark points and the corresponding results are presented in tables \ref{tab:1} and \ref{tab:2}, respectively. Here, the effective potential is evaluated exactly with CosmoTransitions, in contrast to Ref.~\cite{Tofighi:2015fia}, which used the high temperature expansion; accordingly, the results of Ref.~\cite{Tofighi:2015fia} obtained with the high temperature expansion are improved upon. An extension of the SM with two new scalars was recently studied in Ref.~\cite{Chao:2017vrq}, but there are some differences between it and the present work. First, the high temperature expansion was used in~\cite{Chao:2017vrq}. Second, the cubic term $S_{1}^{3}$, which plays a crucial role in the EWPT as a tree-level barrier, was not considered in~\cite{Chao:2017vrq}.
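To fix ideas, the degeneracy conditions \eqref{eq:3.3} can be illustrated on a one-dimensional toy potential: one bisects on the free-energy difference between the two phases until they are degenerate. The potential below is a schematic placeholder; the actual three-field problem is handled by CosmoTransitions.
\begin{verbatim}
# Schematic T_c search for a 1D toy effective potential with a tree-level
# barrier; V_eff below is an illustration only, not the model potential.
import numpy as np
from scipy.optimize import brentq, minimize_scalar

def V_eff(phi, T):
    return (0.1 * (T**2 - 100.0**2) * phi**2
            - 0.15 * T * phi**3 + 0.1 * phi**4)

def broken_minimum(T):
    return minimize_scalar(V_eff, bounds=(1.0, 400.0), args=(T,),
                           method="bounded").x

def minima_gap(T):
    """V(low phase) - V(high phase); changes sign at T = T_c."""
    return V_eff(broken_minimum(T), T) - V_eff(0.0, T)

Tc = brentq(minima_gap, 120.0, 250.0)   # bracketing values chosen by hand
vc = broken_minimum(Tc)
xi = vc / Tc                            # strength of the transition
print(f"T_c = {Tc:.1f}, v_c = {vc:.1f}, xi = {xi:.2f}")
\end{verbatim}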
\begin{table}
\centering
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c | c |}
\hline
&$M_{1}(GeV)$&$M_{2}(GeV)$&$\theta$&$\chi(GeV)$&$\kappa_{1}$&$\kappa_{3}$&$\kappa_{4}$&$\kappa_{5}$&$\lambda_{2}$&$Q(GeV)$ \\
\hline
BM1 &25.27&655.22&-9.80&115&-40.72&0.0528&-4.04&-11.68&0.55&149\\
BM2 &65.74&337&17.16&65.75&-8.24&0.0241&-34&21.44&0.79&109\\
BM3 &83.16&235.91&-18.68&69.89&-40.53&0.0132&13.48&-15.44&3.62&160\\
BM4 &195.89&434.2&-20.42&96.03&-55.38&0.0322&-50.92&47.82&2.36&106.7\\
BM5 &226.06&126.33&-19.20&54.07&-29.63&0.0016&7.05&5.92&1.79&104.7\\
BM6 &254.18&420&-15.94&43.82&-35.19&0.0241&-13.01&48.3&1.74&91.19\\
BM7 &262.86&600&-21.9&53.04&-38.4&0.0618&-2.07&73.55&3.05&91.18\\
BM8 &305&325&-6&36&-47&0.0012&-2&-26.4&0.13&91.19\\
\hline
\end{tabular}
\caption{Benchmark points which provide the SFOEWPT.}\label{tab:1}
\end{table}
\begin{table}
\centering
\begin{tabular}{| c | c | c | c | c | c | c | c | c | c | c |}
\hline
&$\mathrm{VeV^{high}_{c}(GeV)}$&$\mathrm{VeV^{low}_{c}(GeV)}$&$\mathrm{T_{c}(GeV)}$&$\xi$ \\
\hline
BM1 &(0,6.66,0)&(152.44,58.12,0)&92.61&1.65\\
BM2 &(0,212.26,0)&(239.05,67.24,0)&60.33&3.96\\
BM3 &(0,2.13,0)&(117.2,27.16,0)&115.44&1.01\\
BM4 &(0,191.74,0)&(214.86,100.1,0)&97.06&2.21\\
BM5 &(0,110.24,0)&(164.43,76.63,0)&114.18&1.44\\
BM6 &(0,102.36,0)&(215.56,45.91,0)&97.22&2.22\\
BM7 &(0,113.79,0)&(222.88,48.07,0)&91.84&2.43\\
BM8 &(0,72.31,0)&(145.52,48.35,0)&118.13&1.23\\
\hline
\end{tabular}
\caption{The values of the VeV of the high and the low phases, $T_{c}$ and the strength of the SFOEWPT.}\label{tab:2}
\end{table}
\section{Gravitational Waves}
\label{sec:4}
The SFOEWPT may justify not only the BAU but also the GW signal produced by the EWPT. Actually, the EWPT occurs at a temperature lower than $T_{c}$, at which the first broken-phase bubbles nucleate in the symmetric-phase plasma of the early universe. The transition probability is given by $\Gamma(T)=\Gamma_{0}(T)e^{-S(T)}$, where $\Gamma_{0}(T)$ is of order $\mathcal{O}(T^{4})$ and $S$ is the 4-dimensional action of the critical bubbles. For temperatures sufficiently greater than zero, it can be assumed that $S=\frac{S_{3}}{T}$, where the 3-dimensional action is given by
\begin{equation} \label{eq:4.1}
S_{3} = 4\pi \int{dr r^{2} \left[ \frac{1}{2}\left(\partial_{r} \vec{\phi}\right)^2 + V_{eff}\right]}.
\end{equation}
Here, $\vec{\phi}=(h,s_{1},s_{2})$. The critical bubble profiles, which minimize the action \eqref{eq:4.1}, can be calculated from the equations of motion. The temperature at which a particular configuration gives a nucleation probability of order $\mathcal{O}(1)$ is the nucleation temperature $T_{n}$.
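To make the procedure concrete, the following Python sketch solves the bounce equation behind the action \eqref{eq:4.1} for a toy single-field potential, using the standard overshoot/undershoot shooting method; the potential and all numbers below are illustrative stand-ins rather than the three-field $V_{eff}$ of this model, whose profiles and $T_{n}$ are obtained with CosmoTransitions.
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Toy single-field analogue of the bounce problem behind Eq. (4.1):
# a quartic with a false vacuum at phi = 0 and a true vacuum near phi = 2.5.
V  = lambda p: 0.25*p**4 - 1.1*p**3 + p**2
dV = lambda p: p**3 - 3.3*p**2 + 2.0*p

def shoot(p0, rmax=60.0):
    # phi'' + (2/r) phi' = dV/dphi, with phi(0) = p0, phi'(0) = 0
    rhs = lambda r, y: [y[1], dV(y[0]) - 2.0*y[1]/max(r, 1e-12)]
    return solve_ivp(rhs, (1e-6, rmax), [p0, 0.0], max_step=0.05, rtol=1e-8)

lo, hi = 1.0, 2.45                    # bracket: guaranteed under/overshoot
for _ in range(60):                   # overshoot/undershoot bisection
    mid = 0.5*(lo + hi)
    if np.any(shoot(mid).y[0] < 0.0): # overshot past the false vacuum
        hi = mid
    else:                             # stopped short: undershoot
        lo = mid

sol = shoot(0.5*(lo + hi))
r, p, dp = sol.t, sol.y[0], sol.y[1]
S3 = 4.0*np.pi*np.trapz(r**2*(0.5*dp**2 + V(p) - V(0.0)), r)
print("S3 =", S3)  # at finite T, T_n follows from S3(T)/T ~ O(140)
\end{verbatim}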
GWs may be produced by the collisions of the bubbles at some temperature $T_{*}$; it is usually assumed that $T_{*}=T_{n}$. Supposing that the friction force is not enough to prevent the bubbles from running away, the GW signal is given by
\begin{equation} \label{eq:4.2}
\Omega_{GW} h^{2}\simeq\Omega_{col} h^{2}+\Omega_{sw} h^{2}+\Omega_{turb} h^{2}.
\end{equation}
As seen, the GW signal is the sum of the contributions from bubble collisions, sound waves and turbulence in the plasma, which respectively read~\cite{Kamionkowski:1993fg,Huber:2008hg,Hindmarsh:2013xza,Hindmarsh:2015qta,Binetruy:2012ze,Caprini:2009yp,Espinosa:2010hh,Caprini:2015zlo}
\begin{equation} \label{eq:4.3}
\begin{split}
& \Omega_{col} h^{2}=1.67 \times 10^{-5} \left(\frac{\beta}{H}\right)^{-2} \frac{0.11\, v_{b}^{3}}{0.42+v_{b}^{2}} \left(\frac{\kappa\, \alpha}{1+\alpha}\right)^{2} \left(\frac{g_{*}}{100}\right)^{-\frac{1}{3}}\frac{3.8\, \left(\frac{f}{f_{col}}\right)^{2.8}}{1+2.8\, \left(\frac{f}{f_{col}}\right)^{3.8}},\\
& \Omega_{sw} h^{2}=2.65 \times 10^{-6} \left(\frac{\beta}{H}\right)^{-1} v_{b} \left(\frac{\kappa_{v}\, \alpha}{1+\alpha}\right)^{2} \left(\frac{g_{*}}{100}\right)^{-\frac{1}{3}} \left(\frac{f}{f_{sw}}\right)^{3} \left(\frac{7}{4+3\left(\frac{f}{f_{sw}}\right)^{2}}\right)^{\frac{7}{2}},\\
& \Omega_{turb} h^{2}=3.35 \times 10^{-4} \left(\frac{\beta}{H}\right)^{-1} v_{b} \left(\frac{\epsilon\, \kappa_{v}\, \alpha}{1+\alpha}\right)^{\frac{3}{2}} \left(\frac{g_{*}}{100}\right)^{-\frac{1}{3}} \frac{\left(\frac{f}{f_{turb}}\right)^{3} \left(1+\frac{f}{f_{turb}}\right)^{-\frac{11}{3}} }{1+\frac{8\pi f}{h_{*}}},
\end{split}
\end{equation}
with
\begin{equation} \label{eq:4.4}
\begin{split}
& f_{col} = 16.5\times10^{-6}\, \mathrm{Hz}\, \left(\frac{0.62}{v_{b}^{2}-0.1\,v_{b}+1.8}\right) \left(\frac{\beta}{H}\right) \left(\frac{T_{n}}{100\, \mathrm{GeV}}\right) \left(\frac{g_{*}}{100}\right)^{\frac{1}{6}},\\
& f_{sw} = 1.9\times10^{-5}\, \mathrm{Hz}\, \left(\frac{1}{v_{b}}\right) \left(\frac{\beta}{H}\right) \left(\frac{T_{n}}{100\, \mathrm{GeV}}\right) \left(\frac{g_{*}}{100}\right)^{\frac{1}{6}},\\
& f_{turb} = 2.7\times10^{-5}\, \mathrm{Hz}\, \left(\frac{1}{v_{b}}\right) \left(\frac{\beta}{H}\right) \left(\frac{T_{n}}{100\, \mathrm{GeV}}\right) \left(\frac{g_{*}}{100}\right)^{\frac{1}{6}},\\
& h_{*} = 16.5\times10^{-6}\, \mathrm{Hz}\, \left(\frac{T_{n}}{100\, \mathrm{GeV}}\right) \left(\frac{g_{*}}{100}\right)^{\frac{1}{6}},\\
& \kappa=1-\frac{\alpha_{\infty}}{\alpha},\\
& \kappa_{v}=\frac{\alpha_{\infty}}{\alpha}\left(\frac{\alpha_{\infty}}{0.73+0.083\sqrt{\alpha_{\infty}}+\alpha_{\infty}}\right),\\
& \alpha_{\infty}=\frac{30}{24\, \pi^2 g_{*}} \left(\frac{v_{n}}{T_{n}}\right)^{2} \left(6\left(\frac{M_{W}}{v}\right)^{2} + 3\left(\frac{M_{Z}}{v}\right)^{2} + 6\left(\frac{M_{top}}{v}\right)^{2} \right).
\end{split}
\end{equation}
$v_{n}$ and $g_{*}$ are the Higgs VeV and the number of relativistic degrees of freedom at $T_{n}$, respectively. Here, $\epsilon=0.1$, and $g_{*}$ is read from the MicrOMEGAs package~\cite{Belanger:2018mqt,Barducci:2016pcb}. Three further parameters remain to be defined. One of them is the bubble wall velocity; since the bubbles are assumed to run away, $v_{b} \simeq 1$.
The two others, $\alpha$ and $\beta$, are given as follows
\begin{equation} \label{eq:4.6}
\begin{split}
& \alpha = \left.\frac{\rho_{vac}}{\rho^{*}}\right|_{T_{n}},\\
& \beta = \left.\left[ H\, T\, \frac{d}{dT}\left(\frac{S_{3}}{T}\right) \right]\right|_{T_{n}},
\end{split}
\end{equation}
where $\rho_{vac}=\left( V_{eff}^{high}-T dV_{eff}^{high}/dT \right)-\left( V_{eff}^{low}-T dV_{eff}^{low}/dT \right)$ is the latent heat (vacuum energy density) released by the EWPT, $\rho^{*}=g_{*}\pi^{2}T_{n}^{4}/30$ is the background energy density of the plasma, and $H$ is the Hubble parameter at $T_{n}$. Using the CosmoTransitions package~\cite{Wainwright:2011kj}, the parameters $\alpha$, $\beta/H$, $v_{n}$ and $T_{n}$ are calculated and presented in table \ref{tab:3}. In figure \ref{fig:1}, the GW signals are plotted versus frequency for the benchmark points of table \ref{tab:1}. To check whether the GW signals for the benchmark points of table \ref{tab:1} fall within the sensitivity of GW detectors, the sensitivity curves of the $\mathbf{eLISA}$, $\mathbf{ALIA}$, $\mathbf{DECIGO}$ and $\mathbf{BBO}$ detectors\footnote{The sensitivity curves of four representative configurations of $\mathbf{eLISA}$ are taken from~\cite{Caprini:2015zlo}. The $\mathbf{ALIA}$, the $\mathbf{DECIGO}$ and the $\mathbf{BBO}$ sensitivity curves are taken from \href{http://gwplotter.com}{GWPLOTTER}. The reader is referred to Ref.~\cite{Moore:2014lga} for details.} are also plotted in figure \ref{fig:1}. As seen from figure \ref{fig:1}, the dashed blue line, corresponding to the GW signal for BM7, may be detected by the $\mathbf{N2A1M5L6}$ and $\mathbf{N2A5M5L6}$ configurations of $\mathbf{eLISA}$ and by the $\mathbf{BBO}$ detector. The dashed red and yellow lines, corresponding to the GW signals for BM4 and BM6, respectively, may be detected by the $\mathbf{N2A5M5L6}$ configuration of $\mathbf{eLISA}$ and by the $\mathbf{BBO}$ detector. The dashed orange line, corresponding to BM2, may be detected by the $\mathbf{DECIGO}$ and $\mathbf{BBO}$ detectors. The dashed green, cyan and purple lines, corresponding to the GW signals for BM1, BM5 and BM8, respectively, cannot be detected by the mentioned detectors. The GW signal for BM3 is not strong enough to appear at the scale of figure \ref{fig:1}.
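For reference, the following Python lines evaluate Eqs.~\eqref{eq:4.2}--\eqref{eq:4.4} for BM7, with $\alpha$, $\beta/H$, $T_{n}$ and $v_{n}$ taken from table \ref{tab:3}; here $g_{*}=100$, $\epsilon=0.1$, $v_{b}=1$ and the SM values $M_{W}=80.4$, $M_{Z}=91.2$, $M_{top}=173$, $v=246$ (GeV) are assumed for illustration, whereas in the text $g_{*}$ is read from MicrOMEGAs.
\begin{verbatim}
import numpy as np

# BM7 inputs from table 3; g_*, masses and v are assumed (see text)
alpha, beta_H, Tn, vn = 0.5388, 150.54, 43.0, 243.12
gs, vb, eps = 100.0, 1.0, 0.1
MW, MZ, Mt, v = 80.4, 91.2, 173.0, 246.0

a_inf = 30.0/(24.0*np.pi**2*gs)*(vn/Tn)**2 \
        *(6.0*(MW/v)**2 + 3.0*(MZ/v)**2 + 6.0*(Mt/v)**2)
kap = 1.0 - a_inf/alpha
kap_v = (a_inf/alpha)*a_inf/(0.73 + 0.083*np.sqrt(a_inf) + a_inf)

fac = (Tn/100.0)*(gs/100.0)**(1.0/6.0)        # common factor in Eq. (4.4)
f_col = 16.5e-6*0.62/(vb**2 - 0.1*vb + 1.8)*beta_H*fac
f_sw, f_turb = 1.9e-5/vb*beta_H*fac, 2.7e-5/vb*beta_H*fac
h_star = 16.5e-6*fac

def omega_gw(f):
    col = (1.67e-5/beta_H**2*0.11*vb**3/(0.42 + vb**2)
           *(kap*alpha/(1 + alpha))**2*(gs/100.0)**(-1.0/3.0)
           *3.8*(f/f_col)**2.8/(1.0 + 2.8*(f/f_col)**3.8))
    sw = (2.65e-6/beta_H*vb*(kap_v*alpha/(1 + alpha))**2
          *(gs/100.0)**(-1.0/3.0)*(f/f_sw)**3
          *(7.0/(4.0 + 3.0*(f/f_sw)**2))**3.5)
    turb = (3.35e-4/beta_H*vb*(eps*kap_v*alpha/(1 + alpha))**1.5
            *(gs/100.0)**(-1.0/3.0)*(f/f_turb)**3
            *(1.0 + f/f_turb)**(-11.0/3.0)/(1.0 + 8.0*np.pi*f/h_star))
    return col + sw + turb

f = np.logspace(-6, 1, 400)                    # Hz
print(f[np.argmax(omega_gw(f))], omega_gw(f).max())
\end{verbatim}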
\begin{table}
\centering
\begin{tabular}{| c | c | c | c | c | c |}
\hline
&$\mathrm{VeV^{high}_{n}(GeV)}$&$\mathrm{VeV^{low}_{n}(GeV)}$&$\mathrm{T_{n}(GeV)}$&$\alpha$&$\beta/H$ \\
\hline
BM1 &(0,6.46,0)&(169.49,68.13,0)&89.15&0.0324&6291.32\\
BM2 &(0,212.75,0)&(244.78,63.20,0)&41.61&0.2595&18459.13\\
BM3 &(0,2.09,0)&(127.05,30.59,0)&114.43&0.0119&27039.55\\
BM4 &(0,194.67,0)&(243.13,93.35,0)&51.83&0.3131&130.43\\
BM5 &(0,110.54,0)&(185.04,69.54,0)&110.97&0.0245&5644.92\\
BM6 &(0,103.24,0)&(234.55,41.18,0)&77.05&0.0890&433.45\\
BM7 &(0,114.91,0)&(243.12,42.47,0)&43&0.5388&150.54\\
BM8 &(0,72.67,0)&(157.93,45.79,0)&115.38&0.0175&9306.63\\
\hline
\end{tabular}
\caption{The values of the VeV of the high and the low phases, $T_{n}$, $\alpha$ and $\beta/H$.}\label{tab:3}
\end{table}
\begin{figure}
\centering
\includegraphics[width=300pt]{fig1.eps}
\caption{The dashed blue, red, yellow, orange, green, cyan and purple lines represent the GW signal for BM7, BM4, BM6, BM2, BM1, BM5 and BM8, respectively. The solid black lines represent the sensitivity curves and are labeled by the name of the detectors. For eLISA, the sensitivity curves are labeled by the name of the configuration.}\label{fig:1}
\end{figure}
According to tables \ref{tab:2} and \ref{tab:3}, BM2 appears to be a special point. The value of $\beta/H$ is large at this point, while the nucleation temperature is not very close to the critical temperature; at the same time, $T_{n}$ is low and $\alpha$ is large.\footnote{The authors thank an anonymous referee for pointing this out.} To clarify the situation of BM2, the phase transition properties of BM2 are shown in figure \ref{fig:2}. As seen from subfigure \ref{fig:2}-(a), the slope of $S_{3}/T$ increases around $T_{n}$, which indicates that the parameter $\beta/H$ is large. The physics of this situation can be described by the tunneling profile, the norm of the phases as a function of temperature, and the contour levels of the potential with the tunneling path. As seen from subfigures \ref{fig:2}-(b) and \ref{fig:2}-(d), the center of the bubble is far away from the stable vacuum. Also, from subfigure \ref{fig:2}-(c), it is seen that the transition occurs at a temperature where the unstable vacuum is close to disappearing. The values of the potential at the high and low phases are $V^{high}_{eff}=-91583128.19\,\mathrm{GeV^{4}}$ and $V^{low}_{eff}=-101840540.47\,\mathrm{GeV^{4}}$, respectively, which give the pressure difference $\Delta p = 10257412.28\,\mathrm{GeV^{4}}$. The barrier is located at $(h, s_{1}, s_{2})=(19.84, 212.19, 0)$ with $V_{eff}=-91582001.71\,\mathrm{GeV^{4}}$, which gives the barrier height $\Delta V_{barrier \: height}=1126.48\,\mathrm{GeV^{4}}$. Clearly, the barrier height is very small: $\Delta V_{barrier \: height}/\Delta p = 0.0001$. For these reasons, the bubbles are extremely thick-walled. Since the barrier height is very small, the transition duration is very short and, accordingly, the parameter $\beta/H$ is quite large. This extremely thick-walled case is similar to a second-order phase transition, for which $\beta/H \rightarrow \infty$ and there is no barrier. There is one further point of interest for BM2. Due to the cubic term $s_{1}^{3}$, the model is expected to have a sizable tree-level barrier, as in the supercooled scenario discussed in~\cite{Kobakhidze:2017mru}, but this is not the case for BM2. At this point, the model mimics the behavior of supercooled phase transitions with the supercooling parameter $(T_{c}-T_{n})/T_{c}=0.31$, though the transition is short-lived.
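The quoted BM2 numbers can be cross-checked directly:
\begin{verbatim}
# cross-check of the BM2 values quoted above (GeV^4 and GeV)
V_high, V_low, V_bar = -91583128.19, -101840540.47, -91582001.71
dp  = V_high - V_low          # -> 10257412.28, the pressure difference
dVb = V_bar - V_high          # -> 1126.48, the barrier height
print(dVb/dp)                 # -> ~1.1e-4
Tc, Tn = 60.33, 41.61
print((Tc - Tn)/Tc)           # -> ~0.31, the supercooling parameter
\end{verbatim}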
\begin{figure}[!h]
\subfloat[\label{subfig-1:dummy}]{%
\includegraphics[width=0.5\textwidth]{fig2a.eps}
}
\hfill
\subfloat[\label{subfig-2:dummy}]{%
\includegraphics[width=0.5\textwidth]{fig2b.eps}
}
\hfill
\subfloat[\label{subfig-3:dummy}]{%
\includegraphics[width=0.5\textwidth]{fig2c.eps}
}
\hfill
\subfloat[\label{subfig-4:dummy}]{%
\includegraphics[width=0.5\textwidth]{fig2d.eps}
}
\caption{Phase transition properties of BM2: The subfigure (a) presents $S_{3}/T$ versus temperature, the dashed horizontal red line shows $S_{3}/T=140$ where nucleation occurs. The subfigure (b) presents the tunneling profile as a function of radius. The subfigure (c) presents the norms of high (green line) and low (blue line) phases as functions of temperature, the dashed vertical red line shows the nucleation temperature. The subfigure (d) presents the contour levels of the potential at the nucleation temperature $T_{n}=41.61 (GeV)$, the dashed black line shows the tunneling path.}
\label{fig:2}
\end{figure}
\section{Dark Matter}
\label{sec:5}
As mentioned earlier, imposing the $Z_{2}$ symmetry on $S_{2}$ makes it a viable candidate for the DM. Within the freeze-out formalism, the DM relic density can be calculated by solving the Boltzmann equation,
\begin{equation} \label{eq:5.1}
\frac{dn}{dt}=-3 H n-\langle\sigma v\rangle (n^{2}-n_{eq}^{2}),
\end{equation}
where $n$, $H$ and $\langle\sigma v\rangle$ are the number density of the DM particles, the Hubble parameter and the thermally-averaged cross section for the DM annihilation, respectively. It is customary to rewrite the Boltzmann equation in terms of $Y=n/s$, where $s$ is the total entropy density of the universe; the result is~\cite{Gondolo:1990dk}
\begin{equation} \label{eq:5.2}
\frac{dY}{dx}=-\left(\frac{45\,G\,g_{*}}{\pi}\right)^{-\frac{1}{2}}\frac{M\,h_{*}}{x^{2}}\left(1+\frac{1}{3}\frac{T}{h_{*}}\frac{dh_{*}}{dT}\right)\langle\sigma v\rangle(Y^{2}-Y_{eq}^{2}),
\end{equation}
where $x=M/T$, $M$ is the DM mass, and $h_{*}$ is the effective number of degrees of freedom for the entropy density. The DM relic abundance then reads,
\begin{equation} \label{eq:5.3}
\Omega_{DM} h^{2}\simeq (2.79\pm 0.05) \times 10^{8} \left(\frac{M}{GeV}\right) Y(0).
\end{equation}
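A minimal sketch of the freeze-out solution of Eqs.~\eqref{eq:5.2}--\eqref{eq:5.3} is given below; it assumes constant $g_{*}=h_{*}=100$ (so that $dh_{*}/dT=0$), a constant $\langle\sigma v\rangle$, and the standard non-relativistic form of $Y_{eq}$, with inputs that are illustrative rather than those of any benchmark point (for which MicrOMEGAs performs the full calculation).
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import kn

Mpl = 1.22e19                 # Planck mass (GeV); G = 1/Mpl^2
gs = hs = 100.0               # g_*, h_* held constant (dh_*/dT = 0)
g_dm = 1.0                    # DM internal degrees of freedom
M, sv = 300.0, 2.6e-9         # GeV; <sigma v> in GeV^-2 (~3e-26 cm^3/s)
G = 1.0/Mpl**2

def Yeq(x):                   # non-relativistic equilibrium yield
    return 45.0/(4.0*np.pi**4)*(g_dm/hs)*x*x*kn(2, x)

def rhs(x, Y):                # Eq. (5.2) with the bracket equal to 1
    pref = (45.0*G*gs/np.pi)**-0.5*M*hs/x**2
    return [-pref*sv*(Y[0]**2 - Yeq(x)**2)]

sol = solve_ivp(rhs, (1.0, 1000.0), [Yeq(1.0)],
                method='LSODA', rtol=1e-8, atol=1e-30)
print("Omega h^2 ~", 2.79e8*M*sol.y[0, -1])   # Eq. (5.3)
\end{verbatim}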
It is assumed that the SM particles interact only with the Higgs in this model, so the annihilation channels for the DM via the Higgs portal s-channel are $s_{2}s_{2}\rightarrow W^{+}W^{-},ZZ,f\bar{f}$. There also exist the channels $s_{2}s_{2}\rightarrow \phi_{i}\phi_{j}$ (with $\phi_{i(j)}=h,s_{1}$ and $i(j)=1,2$) via the $s$, $t$ and $u$ channels and four-point interactions.
The parameter space is constrained by direct detection DM searches. To this end, one needs to calculate the spin-independent cross section for DM-nucleon scattering\footnote{The spin-dependent case is not studied here, because the DM candidate is assumed to be a scalar.} and compare the result with the $\mathbf{XENON1T}$ $\mathbf{2018}$ experimental data~\cite{Aprile:2018dbl}. The spin-independent cross section is given by
\begin{equation} \label{eq:5.4}
\sigma_{SI}=\frac{4 M_{s_{2}}^{2} M_{N}^{2}}{\pi (M_{s_{2}}+M_{N})^{2}}\Big| \mathcal{M}_{s_{2}-N} \Big|^{2},
\end{equation}
where $M_{s_{2}}$, $M_{N}$ and $\mathcal{M}_{s_{2}-N}$ are the DM mass, the nucleon mass and the scattering amplitude in the low-energy limit, respectively. $\mathcal{M}_{s_{2}-N}$ is related to $\mathcal{M}_{s_{2}-quark}$; calculating the effective Lagrangian coefficients and the nucleon form factors, $\mathcal{M}_{s_{2}-N}$ can be obtained from $\mathcal{M}_{s_{2}-quark}$. Here, the model is implemented in SARAH~\cite{Staub:2008uz,Staub:2013tta,Staub:2009bi}, the model spectrum is obtained with SPheno~\cite{Porod:2011nf,Porod:2003um} and the DM properties are studied with MicrOMEGAs~\cite{Belanger:2018mqt,Barducci:2016pcb}. The results are presented in table \ref{tab:4}.
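As a simple numerical illustration of Eq.~\eqref{eq:5.4}, the following lines convert an assumed, purely illustrative amplitude into a cross section in pb, using $1\,\mathrm{GeV^{-2}}=0.3894\,\mathrm{mb}=3.894\times10^{8}\,\mathrm{pb}$:
\begin{verbatim}
import numpy as np

M_dm, M_N = 300.0, 0.939          # GeV (illustrative DM mass; nucleon mass)
M_amp = 1.1e-10                   # GeV^-2, made-up amplitude, chosen only
                                  # to land near the scale of table 4
sigma = 4*M_dm**2*M_N**2/(np.pi*(M_dm + M_N)**2)*abs(M_amp)**2   # GeV^-2
print(sigma*3.894e8, "pb")        # 1 GeV^-2 = 0.3894 mb = 3.894e8 pb
\end{verbatim}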
\begin{table}
\centering
\begin{tabular}{| c | c | c | c |}
\hline
&$\Omega_{DM} h^{2}$&$\sigma_{SI}^{proton}(pb)$&$\sigma_{SI}^{neutron}(pb)$\\
\hline
BM1 &$0.104$&$5.151\times10^{-12}$&$5.315\times10^{-12}$\\
BM2 &$0.12$&$6.469\times10^{-12}$&$6.667\times10^{-12}$\\
BM3 &$6.28\times10^{-2}$&$9.447\times10^{-12}$&$9.729\times10^{-12}$\\
BM4 &$0.109$&$5.038\times10^{-11}$&$5.195\times10^{-11}$\\
BM5 &$0.108$&$6.186\times10^{-12}$&$6.363\times10^{-12}$\\
BM6 &$0.12$&$4.223\times10^{-12}$&$4.354\times10^{-12}$\\
BM7 &$5.90\times10^{-2}$&$4.755\times10^{-11}$&$4.906\times10^{-11}$\\
BM8 &$0.12$&$2.480\times10^{-11}$&$2.556\times10^{-11}$\\
\hline
\end{tabular}
\caption{The values of the DM relic abundance and the spin-independent cross sections.}\label{tab:4}
\end{table}
As seen in table \ref{tab:4}, the relic density of all benchmark points is compatible with the $\mathbf{Planck}$ $\mathbf{2018}$ data, which report $\Omega_{c}h^{2}=0.120\pm0.001$\footnote{$0.05\leq \Omega_{c}h^{2}\leq 0.12$ would be acceptable.}. Moreover, the results are consistent with the $\mathbf{XENON1T}$ $\mathbf{2018}$ experiment, which sets an upper limit on the DM-nucleon spin-independent elastic scattering cross section below the $\mathbf{LUX}$ $\mathbf{2017}$~\cite{Akerib:2016vxi} and $\mathbf{PandaX}$-$\mathbf{II}$ $\mathbf{2017}$~\cite{Cui:2017nnn} limits. In the DM study, there are two differences from Ref.~\cite{Chao:2017vrq}. The first is the $s_{1}^{3}$ interaction, which gives a significant contribution to the DM annihilation through the $s_{1}$ s-channel, and consequently to the relic density. The second is the spin-independent cross section, which was taken to be zero in Ref.~\cite{Chao:2017vrq}; the more realistic case, as considered here, is a non-zero DM-nucleon cross section if weakly interacting massive particles (WIMPs) constitute the DM. This is the main idea behind the $\mathbf{LUX}$, $\mathbf{PandaX}$-$\mathbf{II}$ and $\mathbf{XENON1T}$ experiments.
\section{Conclusions}
\label{sec:6}
The main goal of this work has been to investigate the EWPT, GW and DM issues in an extension of the SM by two scalar degrees of freedom. To this end, it has been assumed that one of the new scalars acquires a non-zero VeV to assist the phase transition, while the other has no VeV so as to be a viable DM candidate. It has been shown that, if one takes the most general renormalizable form of the potential, the model can produce all the signals together. As seen from tables \ref{tab:2} and \ref{tab:3}, the model can have phase transitions from strong ($\xi \sim 1$) to very strong ($\xi \sim 4$). From figure \ref{fig:1}, the model presents GW signals in the frequency range from $10^{-5}\,\mathrm{Hz}$ to $10\,\mathrm{Hz}$, which are detectable by $\mathbf{eLISA}$, $\mathbf{BBO}$ and $\mathbf{DECIGO}$. From table \ref{tab:4}, the model provides DM signals which are in agreement with the $\mathbf{Planck}$ $\mathbf{2018}$ data and the $\mathbf{XENON1T}$ $\mathbf{2018}$ experiment. The DM candidate may be quite massive, with a mass greater than $100\,\mathrm{GeV}$, which corresponds to extremely cold DM; however, since the model has a rich parameter space, lighter DM candidates might be found by performing a Monte Carlo simulation on a computer cluster. Altogether, it can be concluded that the SFOEWPT, GW and DM signals can successfully be described by the present model as an extension of the SM with two additional real gauge singlet scalars. As a final note, it has been assumed that the GW production from bubble collisions follows the thin-wall and envelope approximations, as is usual in the literature. In this approximation, only the uncollided parts of the bubbles are taken into account as GW sources. Recently, it has been shown that the GW production from bubble collisions is analytically solvable~\cite{Jinno:2016vai,Jinno:2017fby}. Also, the possibility of using GWs and collider experiments to constrain the EWPT has been discussed in~\cite{Hashino:2018wee}. Studying the GW signals of the present model in light of these recent developments is left for future work.
\section{Acknowledgements}
\label{sec:7}
The authors would like to thank Ryusuke Jinno for comments on the GW production from bubble collisions.
\bibliographystyle{JHEP}
\section{Introduction}
Power exhaust is a key challenge in next-step fusion devices. Reducing the peak heat fluxes on the plasma-facing components to tolerable levels relies, to a large extent, on impurity radiation in the boundary layer of the tokamak. The impurity radiation pattern in turn depends on plasma transport, both parallel and perpendicular to the magnetic field. This problem is generally addressed by solving plasma fluid models coupled to kinetic neutrals (e.g.\ with the fluid code Soledge2d-EIRENE\cite{bufferand_near_2013}). Generally, the collision terms for the fluid dynamical equations are obtained by averaging the kinetic equation with the constants of motion.
The moment-averaged collision term can be determined by using different forms of ansatz for the distribution functions entering the collision operator of the kinetic equation. The two major ans\"atze used are the Chapman-Enskog ansatz\cite{chapman_mathematical_1952} and Grad's Hermite polynomial ansatz\cite{grad_asymptotic_1963}. The Chapman-Enskog method involves decomposing the distribution function in a small-parameter expansion, and forming a moment-averaged hierarchy of equations at each order, each with its own collisional contribution. This method has been the most widely used so far, owing to its quick convergence. Grad's Hermite polynomial ansatz, on the other hand, involves decomposing the distribution function in a series of orthogonal tensorial polynomials, leading to a hierarchy of fluid equations for each order of the Hermite polynomials. These polynomials have mathematical properties which are somewhat easier to manipulate algebraically; however, they come with no clear convergence rule. Often the convergence is either checked through brute-force methods\cite{struchtrup_macroscopic_2005}, or by direct comparison of the terms relevant to the physics in consideration\cite{ferziger_mathematical_1972,zhdanov_transport_2002}. Depending on the complexity of the collision operator, the treatment of the collision terms may become quite cumbersome.
Recently, in plasma physics oriented towards nuclear fusion, there has been a resurgence in the use of Grad's Hermite polynomial ansatz for calculating the collisional term, probably because of improved algebraic techniques and the availability of computer algebra systems. It has been used to calculate the moments of the kinetic equation and of the Landau collision operator\cite{landau_kinetic_1936} expressed in terms of Rosenbluth\cite{rosenbluth_fokker-planck_1957} potentials, both for the linearized collision operator\cite{ji_exact_2006} and for the fully non-linear collision operator\cite{ji_full_2009}, as well as for its extension to magnetized plasmas\cite{ji_framework_2014}. Moments of the Landau collision operator have also been extended to strong-flow cases\cite{hirvijoki_fluid_2016}, and closed-form analytic expressions for some of the involved integrals have been formulated\cite{pfefferle_exact_2017}. The Hermite polynomial ansatz, expressed as a product of Laguerre polynomials and irreducible monomials, has also been used to formulate and study drift-kinetic models\cite{jorge_drift-kinetic_2017} and gyrokinetic models\cite{jorge_nonlinear_2019} for the scrape-off layer\cite{frei_gyrokinetic_2020}, and to formulate a linear theory of electron plasma waves\cite{jorge_linear_2019}.
However, the Landau collision operator is only valid for warm plasmas where the weak-coupling conditions apply\cite{balescu1997statistical}. The generalization of such operators to different plasma regimes, for example accounting for shielding effects as in the Balescu-Lenard collision operator, leads to an increase in mathematical sophistication and a corresponding difficulty in solving\cite{balescu_irreversible_1960}. The alternative is to use the much simpler Boltzmann collision operator. Given that the Landau operator can be thought of as the Boltzmann operator in the weak-coupling limit\cite{balescu1997statistical}, we can expect the Boltzmann collision operator with the shielded Coulomb potential to provide quantitatively similar effects to the Landau or Balescu-Lenard collision operators, as long as the plasma is in local thermodynamic equilibrium\cite{silin_introduction_1971} and does not exhibit large-scale fluctuations\cite{klimontovich_kinetic_2013}. An added advantage of the Boltzmann collision operator is its use of explicit collision cross-sections.
Any coefficients derived in this manner would have the advantage of being applicable for a wide variety of gas and plasma dynamics merely by using the relevant cross-section for the system in question\cite{capitelli_transport_2013}, for example for ion-ion\cite{kihara_coefficients_1959,liboff_transport_1959,hahn_quantum_1971}, ion-neutral\cite{kihara_transport_1960}, neutral-neutral\cite{chapman_mathematical_1952,monchick_collision_1959,smith_automatic_1964, neufeld_empirical_1972,mason_transport_1954,rainwater_binary_1982}, and charge exchange collisions\cite{helander_fluid_1994,krasheninnikov_edge_2020}.
In the scope of this article, we focus on the derivation of fluid collision coefficients from the linearized Boltzmann collision operator. In the past, two sets of collisional terms, one valid for any temperature range and the other for temperatures close to the plasma common temperature, have been derived\cite{alievskii_1963_transport,yushmanov_diffusion_1980,zhdanov_transport_2002}. However, an explicit derivation process was not provided for the values of the collision coefficients. Therefore, in order to verify the accuracy of the coefficients provided, we first rederive the collision operator in terms of partial bracket integrals, and derive the exact values of the partial bracket integrals\cite{chapman_mathematical_1952,rat_transport_2001}. We provide, for the first time, expressions for calculating the general collisional terms up to rank-2, in a manner that can be implemented efficiently in modern computer algebra systems. We also explicitly provide the range of validity of our and the aforementioned coefficients, and clearly delineate the underlying assumptions. This should be useful for clearly defining the simulation parameter range of a number of code packages which have implemented certain versions of the previous collisional coefficients. For example, the previous two sets of coefficients, taken from Ref.\cite{zhdanov_transport_2002}, have been implemented in B2/SOLPS\cite{bergmann_implementation_1996,rozhansky_momentum_2015,sytova_impact_2018}, EDGE2D\cite{fichtmuller_multi-species_1998}, and more recently in Soledge2d-EIRENE\cite{bufferand_2019}, which solves an energy equation for each species (while the other codes mentioned solve only one total energy equation).
The article is organized as follows. We first give a brief introduction to the moment-averaged Boltzmann kinetic equation and the corresponding Boltzmann collision operator in Sec.\ref{sec:boltzmann}, demonstrating its conservation properties in the process. Then, we introduce the Hermite polynomial ansatz and Grad's method, including the expression of the ansatz as a product of Sonine polynomials and irreducible tensorial monomials, in Sec.\ref{sec:ansatz}. In Sec.\ref{sec:derivation_operator}, we present the derivation of the most general collision operator in terms of the partial bracket integrals. In Sec.\ref{sec:general_expressions}, we provide the general forms of the partial bracket integrals (with the full derivations in Appendix \ref{sec:bracket_integral_derivation}), along with the formulation of the cross-section integrals. Then we compare our expressions for the collision coefficients with previously derived expressions in Sec.\ref{sec:comparisons} for collisions between various species relevant to fusion, ranging from deuterium-tritium plasma collisions to collisions with heavy impurities such as tungsten. Finally, in Sec.\ref{sec:intuitive}, we compare calculations of approximate values of physically intuitive quantities such as the viscosity and the friction force, and provide recommendations on the ranges of validity of the two sets of coefficients.
\section{The Boltzmann equation for multi-species plasma}
\label{sec:boltzmann}
The Boltzmann equation for the distribution function $f_\alpha$ of species $\alpha$, written in the frame of the peculiar velocity $\mathbf{c_\alpha}=\mathbf{v_\alpha}-\mathbf{u}$, is given by
\begin{equation}
\frac{d f_\alpha}{d t}+\mathbf{c_\alpha}.\nabla{f_\alpha}+\frac{1}{m_\alpha}\mathbf{F}^*_\alpha.\nabla_{c_\alpha}{f_\alpha} -c_{\alpha s}\frac{\partial f_\alpha}{\partial c_{\alpha_r}}\frac{\partial u_r}{\partial x_s}= \sum_{\beta} J_{\alpha\beta}, \label{boltzmannc}
\end{equation}
where the common plasma flow velocity $\mathbf{u}$ is given by
\begin{equation}
\rho \mathbf{u} = \sum_\alpha \rho_\alpha \mathbf{u_\alpha},\ \rho = \sum_\alpha \rho_\alpha,
\end{equation}
where $\rho$ represents the mass density. The derivative $d/dt$ represents the full time derivative, given by $d/dt=\partial/\partial t+\mathbf{u}.\nabla$, and the force term $\mathbf{F_\alpha}$ and $d\mathbf{u}/dt$ are combined into the relative force in the moving frame, $\mathbf{F^*_\alpha}=\mathbf{F_\alpha}-m_\alpha d\mathbf{u}/dt$.
The LHS is referred to as the ``free-streaming'' part, and the RHS is the collisional contribution between species $\alpha$ and every other species of the system. The general ``gain-loss'' type Boltzmann collisional RHS is given by,
\begin{equation}
J_{\alpha\beta} = \iint (f^\prime_\alpha f^\prime_{1\beta}-f_\alpha f_{1\beta})g\sigma_{\alpha\beta}(g,\chi)d\Omega d\mathbf{c_{1\beta}}, \nonumber
\end{equation}
where the subscripts $\alpha$ and $1\beta$ label the two colliding particles, the prime refers to properties after the collision, $g$ is the relative velocity between the colliding particles, $\sigma_{\alpha\beta}$ is the collision cross-section, and $\Omega$ is the solid angle into which the collision scatters. For the specific case of a multi-species system, it takes the form
\begin{equation}
J_{\alpha\beta} = \iint \{f_\alpha(\mathbf{c}_\alpha^\prime) f_{1\beta}(\mathbf{c}_{1\beta}^\prime)-f_\alpha(\mathbf{c}_\alpha) f_{1\beta}(\mathbf{c}_{1\beta})\}g\sigma_{\alpha\beta}(g,\chi)d\Omega d\mathbf{c_{1\beta}}, \nonumber
\end{equation}
where the distribution functions for each species are only dependent on the velocity of the species. Such a form is valid for elastic collisions.
Now, for any quantity $\psi_\alpha$ depending purely on species $\alpha$, one can average Eq.~(\ref{boltzmannc}), which then attains the following form
\begin{multline}
\frac{d}{dt}n_\alpha \langle\psi_\alpha\rangle+n_\alpha\langle\psi_\alpha\rangle\nabla.\mathbf{u} +\nabla.(n_\alpha\langle\psi_\alpha\mathbf{c_\alpha}\rangle)\\
-n_\alpha\left\{\left\langle\frac{d\psi_\alpha}{dt}\right\rangle +\langle \mathbf{c_\alpha}.\nabla\psi_\alpha\rangle +\frac{1}{m_\alpha}\langle\mathbf{F}^*_\alpha.\nabla_{c_\alpha}{\psi_\alpha}\rangle\right.\\
\left. -\left(\left\langle c_{\alpha s}\frac{\partial \psi_\alpha}{\partial c_{\alpha_r}}\right\rangle\frac{\partial u_r}{\partial x_s}\right) \right\} = R_\alpha, \label{eq:transport}
\end{multline}
where $n_\alpha$ is the number density of the species $\alpha$, and where,
\begin{multline}
R_\alpha=\sum_{\beta} \int \psi_\alpha J_{\alpha\beta}d\mathbf{v_\alpha}\\
=\sum_{\beta}\iiint \psi_\alpha(f^\prime_\alpha f^\prime_{1\beta}-f_\alpha f_{1\beta})g\sigma_{\alpha\beta}(g,\chi)d\Omega d\mathbf{c_{\alpha}}d\mathbf{c_{1\beta}}.
\end{multline}
For elastic collisions, the moment-averaged collision operator can be transformed into
\begin{equation}
R_\alpha=\sum_{\beta}\iiint (\psi^\prime_\alpha-\psi_\alpha)f_\alpha(\mathbf{c}_\alpha) f_{1\beta}(\mathbf{c}_{1\beta})g\sigma_{\alpha\beta}(g,\chi)d\Omega d\mathbf{c_{\alpha}}d\mathbf{c_{1\beta}},
\label{eq:boltzmann2}
\end{equation}
since the distribution functions for any given species are purely a function of the species peculiar velocity.
By its construction, the averaged Boltzmann operator conserves mass and, by the choice of velocities, it conserves energy and momentum. One can notice that in the averaged collision operator of the form Eq.\,(\ref{eq:boltzmann2}), choosing $\psi_\alpha=m_\i$ leads to a strict conservation of mass in $R_{\i\j}$ for the averaged kinetic equation of each species. Hence, mass is strictly conserved. However, momentum and energy conservation can only be demonstrated over the sum of the averaged right hand sides of the kinetic equations for all species, i.e. $\sum_{\i,\j} R_{\i\j}=0$. In order to demonstrate this, it is sufficient to show that $R_{\i\j}+R_{\j\i}=0$. This goes as follows
\begin{multline}
R_{\alpha\beta}+R_{\j\i}=\int \psi_\alpha J_{\alpha\beta}d\mathbf{c_{\alpha}}+\int \psi_\beta J_{\j\i}d\mathbf{c_{\j}}\\
=\iiint (\psi^\prime_\alpha+\psi^\prime_\j-\psi_\alpha-\psi_\j)f_\alpha f_\beta g\sigma_{\alpha\beta}(g,\chi)d\Omega d\mathbf{c_{\alpha}}d\mathbf{c_{1\beta}}\nonumber.
\end{multline}
One can notice that for momentum and energy, the term $\psi^\prime_\alpha+\psi^\prime_\j-\psi_\alpha-\psi_\j$ vanishes as a result of the elastic nature of the collisions. Therefore, we now see that the Boltzmann collision operator is constructed to conserve mass, energy and momentum (and any linear combination of the three, for that matter) for any arbitrary form of the distribution function $f$. This fact is also useful for checking the validity of the solutions obtained in the succeeding sections, providing a safeguard against calculation errors.
We now proceed to choosing an ansatz in order to expand the collisional term.
\section{Sonine-Hermite polynomial ansatz and Grad's method}
\label{sec:ansatz}
In this section, we describe the modification of Grad's method\cite{grad_asymptotic_1963} used by Zhdanov\cite{zhdanov_transport_2002} in his previous papers. In the ansatz for the solution of the Boltzmann equation, it is assumed that the solution $f_\alpha$ is already close to the thermodynamic equilibrium distribution for species $\alpha$, $f_\alpha^{(0)}$, given by
\begin{align}
f_\alpha^{(0)}(\mathbf{c}_\i) &= n_\alpha \left( \frac{m_\alpha}{2\pi kT_\alpha}\right)^{3/2}\exp{\left(-\frac{m_\alpha c_\alpha^2}{2kT_\alpha}\right)}\nonumber\\
&=n_\alpha\left( \frac{\gamma_\alpha}{2\pi} \right)^{3/2}\exp{\left( -\frac{\gamma_\alpha}{2}c_\alpha^2\right)},
\label{eq:maxwellian}
\end{align}
where $\gamma_\alpha={m_\alpha}/{kT_\alpha}$. In order to solve the Boltzmann equation (\ref{boltzmannc}), Zhdanov and Yushmanov choose an ansatz of the form
\begin{equation}
f_\alpha(\mathbf{c}_\i) = f_\alpha^{(0)}(\mathbf{c}_\i)\sum_{m,n} 2^{2n}m_\alpha^{-2}\gamma_\alpha^{2n+m} \tau_{mn} b^{mn}_{\alpha r_1\ldots r_m}G^{mn}_{\alpha r_1\ldots r_m},\label{eq:ansatz}
\end{equation}
where
\begin{multline}
G_\alpha^{mn}(\mathbf{c_\alpha},\gamma_\alpha) = (-1)^n n! m_\alpha\gamma_\alpha^{-(n+m/2)}\\
\times S^n_{m+1/2}\left(\frac{\gamma_\alpha}{2}\mathbf{c}^2_\alpha\right)P^{(m)}(\gamma_\alpha^{1/2}\mathbf{c_\alpha}).
\label{eq:sonine-hermite}
\end{multline}
Here, $S^n_{m+1/2}$ are the Sonine polynomials, given by,
\begin{equation}
S^n_{m+1/2}\left(\frac{\gamma_\alpha}{2}\mathbf{c}^2_\alpha\right) = \sum_{p=0}^n \left(-\frac{\gamma_\alpha}{2}\mathbf{c}^2_\alpha\right)^p\frac{(m+n+1/2)!}{p!(n-p)!(m+p+1/2)!},\nonumber
\end{equation}
where the first few $S^n_{m+1/2}$ are
\begin{equation}
S^0_{m+1/2}\left(\frac{\gamma_\alpha}{2}\mathbf{c}^2_\alpha\right)=1,\ S^1_{m+1/2}\left(\frac{\gamma_\alpha}{2}\mathbf{c}^2_\alpha\right) = m+\frac{3}{2}-\frac{\gamma_\alpha}{2}\mathbf{c}^2_\alpha. \nonumber
\end{equation}
Further, $P^{(m)}$ are the irreducible projections of the tensorial monomial $\mathbf{c}_\alpha^m=c_{\alpha r_1}\ldots c_{\alpha r_m}$, derived by the following recurrence relation
\begin{equation}
P^{(m+1)}(\gamma_\alpha^{1/2}\mathbf{c_\alpha}) = \gamma_\alpha^{1/2}\mathbf{c_\alpha} P^{(m)} -\gamma_\alpha^{1/2}\frac{c_\alpha^2}{2m+1}\frac{\partial P^{(m)}}{\partial \mathbf{c_\alpha}},\nonumber
\end{equation}
with $P^{(0)}=1$. This expression combines an outer product with a gradient taken with respect to the rank-1 tensorial monomial $\gamma_\alpha^{1/2}\mathbf{c_\alpha}$. The first few $P^{(m)}(\gamma_\alpha^{1/2}\mathbf{c_\alpha})$ are given by
\begin{multline}
P^{(0)}(\gamma_\alpha^{1/2}\mathbf{c_\alpha})=1,\ P^{(1)}(\gamma_\alpha^{1/2}\mathbf{c_\alpha})=\gamma_\alpha^{1/2}\mathbf{c_\alpha},\\
P^{(2)}(\gamma_\alpha^{1/2}\mathbf{c_\alpha})=\gamma_\alpha\mathbf{c_\alpha}\mathbf{c_\alpha}-\frac{1}{3}\gamma_\alpha U c_\alpha^2.\nonumber
\end{multline}
As one can observe, each $P^{(m)}$ is a rank-$m$ irreducible tensor. We do not need to calculate beyond rank-2 within the scope of the current work.
The constant $\tau_{mn}$ arises as a result of internal contractions between $b_\alpha^{mn}$ and $G_\alpha^{mn}$, and is given by
\begin{equation}
\tau_{mn}= \frac{(2m+1)!(m+n)!}{n!(m!)^2(2m+2n+1)!}. \nonumber
\end{equation}
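These ingredients are straightforward to generate in a computer algebra system. The following sympy sketch (our own minimal illustration, not the Mathematica implementation used later in this work) builds $S^n_{m+1/2}$ as associated Laguerre polynomials $L^{(m+1/2)}_n$, the irreducible tensors $P^{(m)}$ via the recurrence above, and $\tau_{mn}$, and reproduces the low-order expressions quoted in the text.
\begin{verbatim}
import sympy as sp
from sympy import Rational, factorial, symbols, assoc_laguerre

x = symbols('x')
# Sonine polynomials S^n_{m+1/2} are associated Laguerre polynomials
def S(n, m):
    return sp.expand(assoc_laguerre(n, m + Rational(1, 2), x))

print(S(0, 1), '|', S(1, 1))   # -> 1 | 5/2 - x, as in the text

# irreducible tensors P^(m), with xi = gamma^(1/2) c
xi = sp.Array(symbols('xi1 xi2 xi3'))
xi2 = sum(v**2 for v in xi)

def P(m):
    if m == 0:
        return sp.Integer(1)
    T = xi
    for k in range(1, m):      # the recurrence of the text
        T = (sp.tensorproduct(xi, T)
             - Rational(1, 2*k + 1)*xi2*sp.derive_by_array(T, xi))
    return T

P2 = P(2)
print(sp.simplify(P2[0, 0] + P2[1, 1] + P2[2, 2]))   # -> 0 (traceless)

def tau(m, n):
    return factorial(2*m + 1)*factorial(m + n) \
        / (factorial(n)*factorial(m)**2*factorial(2*m + 2*n + 1))

print(tau(1, 1))   # -> 1/10
\end{verbatim}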
The forms mentioned in Refs.~\cite{grad_asymptotic_1963} and \cite{zhdanov_transport_2002} are cosmetically different because of the choice to use full factorial representations of functions and because of summing over full indices rather than over half-indices, but they are exactly the same. The coefficients $b^{mn}_\alpha$ are calculated as
\begin{equation}
n_\alpha b^{mn}_\alpha = \int G_\alpha^{mn} f_\alpha d\mathbf{c}_\alpha.
\end{equation}
Some values of $G_\alpha^{mn}$ are as follows
\begin{align}
G^{00}_\alpha &= m_\alpha,\ G^{10}_\alpha = m_\alpha \mathbf{c}_\alpha, \ G^{01}_\alpha=\frac{m_\alpha}{2}\left( c_\alpha^2-\frac{3}{\gamma_\alpha}\right),\nonumber\\
G^{11}_\alpha &= \frac{m_\alpha}{2}\mathbf{c}_\alpha \left( c_\alpha^2-
\frac{5}{\gamma_\alpha}\right),\ G^{20}_\alpha=m_\alpha\left(\mathbf{c}_\alpha\mathbf{c}_\alpha-\frac{1}{3}Uc_\alpha^2\right)\nonumber\\
G^{12}_\alpha &=\frac{m_\alpha}{4}\mathbf{c}_\alpha(c_\alpha^4-14\gamma_\alpha^{-1}c_\alpha^2+35\gamma_\alpha^{-2}),\nonumber\\
G^{21}_\alpha&=\frac{m_\alpha}{2}(c_\alpha^2-7\gamma_\alpha^{-1})\left(\mathbf{c}_\alpha\mathbf{c}_\alpha-\frac{1}{3}Uc_\alpha^2\right),\nonumber
\end{align}
and the corresponding $b^{mn}_\alpha$ are given by
\begin{align}
n_\alpha b^{00}_\alpha &= \rho_\alpha,\ n_\alpha b^{10}_\alpha = \rho_\alpha \mathbf{w}_\alpha,\ n_\alpha b^{01}_\alpha = 0,\nonumber\\
n_\alpha b^{11}_\alpha &= \mathbf{h}_\alpha,\ n_\alpha b^{20}_\alpha = \pi_\alpha,\nonumber\\
n_\alpha b^{12}_\alpha &= \mathbf{r}_\alpha,\ n_\alpha b^{21}_\alpha = \sigma_\alpha.\nonumber
\end{align}
Here, $b^{00}_\alpha,\ b^{01}_\alpha$ and $b^{10}_\alpha$ represent the intuitive hydrodynamical moments, the density, the diffusion velocity and the temperature: $\rho_\alpha$, $\mathbf{w}_\alpha=\mathbf{u}_\i-\mathbf{u}$ and $T_\alpha$. The higher moments $b^{11}_\alpha$ and $b^{20}_\alpha$ represent the thermodynamically privileged moments (in Balescu's nomenclature\cite{balescu_transport_1988}), the heat flux $\mathbf{h}_\alpha$ and the traceless pressure-stress tensor $\pi_\alpha$, which are privileged because they contribute to the entropy. The higher-order moments $b^{12}_\alpha$ and $b^{21}_\alpha$ are the non-privileged moments $\mathbf{r}_\alpha$ and $\sigma_\alpha$, which have no clear physical meaning, but which may improve the accuracy with which the moment equations represent the Boltzmann equation. As one can notice, these are all moments of rank at most 2. Moments of rank-0 are scalars, like the density $\rho_\i$ and the temperature $T_\i$, and constitute $N$ variables each. Moments of rank-1 are vectorial, like the momentum $m_\i\mathbf{w}_\i$ and the heat flux $\mathbf{h}_\i$, and contribute $3N$ variables each. Moments of rank-2 are tensorial, like the stress tensor $\pi_\i$ and $\sigma_\i$, and contribute $5N$ variables each (and not $9N$, since they are symmetric and traceless). In principle, one can construct a $5N$-moment system with just the hydrodynamical moments, a $13N$-moment system by including the thermodynamically privileged moments, and a $21N$-moment system by further including $\mathbf{r}_\i$ and $\sigma_\i$.
Furthermore, the Sonine-Hermite polynomials are chosen by Eq.\,(\ref{eq:sonine-hermite}) so as to form the moments in the most physically intuitive manner possible. They are related to the regular Sonine-Hermite polynomials in Ref.\,\onlinecite{ grad_asymptotic_1963}, as follows
\begin{equation}
H^{mn}_{\i}(\xi\leftarrow\gamma_\i^{1/2}\mathbf{c}_\i)=\frac{1}{m_\i}2^n\gamma_\i^{n+m/2}G^{mn}_\i(\mathbf{c}_\i,\gamma_\i), \nonumber
\end{equation}
which then reduce to the Hermite polynomials defined in Refs.\,\onlinecite{grad_note_1949, grad_principles_1958} by the following relation
\begin{equation}
H^{m}_{\i}(\xi)=\sum_{n=0}^{[m/2]}H^{(m-2n)n}_{\i}(\xi). \nonumber
\end{equation}
One limitation to note is that the choice of a zeroth-order function $f_\i^{(0)}$, defined at the common flow of the plasma, is valid for any difference of temperatures among the species, but requires the flow velocities of all species to be approximately equal when the number of moments retained is finite. In principle, if one retained infinitely many moments, then the solution space provided by Eq.\,(\ref{eq:ansatz}) would be the same as the one provided by choosing an $f_\i^{(0)}$ defined at the individual species' flow velocity\cite{suchy_collision_1996}. However, since we truncate this series at a very low number of moments, the solution spaces no longer overlap. Nevertheless, the assumption of the flows being close to each other is valid for the SOL/edge of tokamaks, since the exit velocities of all species are close to the sound speed $c_s$\cite{stangeby_plasma_2000}. In general, one must keep in mind the general ordering of the diffusion velocities as follows
\begin{equation}
|\mathbf{w}_\i|\ll\left(\frac{kT_\i}{m_\i}\right)^{1/2}.
\end{equation}
One can now introduce $\psi_\alpha=G^{mn}_\alpha$ in the averaged Boltzmann equation (\ref{eq:transport}), in order to compute the moments, obtaining an infinite hierarchy of transport equations.
Generally, on the LHS of the hierarchy of balance equations for the moments obtained in this manner, the $k^{th}$ moment equation contains both the $(k-1)^{th}$ and the $(k+1)^{th}$ moments. Hence, one has only $k$ equations for $k+1$ variables. In order to provide a closure, in Grad's method one truncates at a moment $k$, such that moments higher than $k$ are calculated using the expansion for $f_\alpha$ truncated at the $k^\mathrm{th}$ term, which approximates the higher moments in terms of the lower ones. This closes the set of equations obtained. Illustrations of Grad's closure and of the Zhdanov closure are beyond the scope of the current article and will be given in an upcoming article. In this article, we illustrate the development and solution of the RHS of the equation, i.e.\ the moment-averaged Boltzmann collision operator, and compare it to the values previously obtained by Zhdanov et al\cite{zhdanov_transport_2002,alievskii_1963_transport,yushmanov_diffusion_1980}.
\section{Derivation of the right hand side of the Boltzmann equation}
\label{sec:derivation_operator}
In the Boltzmann collision integral $J_{\alpha\beta}$, it is possible to choose a distribution function which takes the form
\begin{equation}
f_\alpha = f^{(0)}_\alpha (1+\Phi_\alpha), \nonumber
\end{equation}
which essentially represents the ansatz as a perturbed Maxwellian. The moment-averaged collision operator Eq.\,(\ref{eq:boltzmann2}) can be written as
\begin{multline}
R_{\alpha\beta}=\int \psi_\alpha J_{\alpha\beta}d\mathbf{c_{\alpha}}\\
\approx\iiint f^{(0)}_\alpha f^{(0)}_\beta(\psi^\prime_\alpha-\psi_\alpha)(1+\Phi_\alpha+\Phi_\beta)g\sigma_{\alpha\beta}(g,\chi)d\Omega d\mathbf{c_{\alpha}}d\mathbf{c_{1\beta}}, \nonumber
\end{multline}
on neglecting the $\Phi\Phi$ terms. This linearization of the collision operator makes it bilinear, i.e.\,it satisfies the relations
\begin{align}
J(f_\alpha,f_\beta+f_\gamma)&=J(f_\alpha,f_\beta)+J(f_\alpha,f_\gamma),\nonumber\\
J(f_\alpha+f_\gamma,f_\beta)&=J(f_\alpha,f_\beta)+J(f_\gamma,f_\beta)\nonumber\\
J(kf_\alpha,lf_\beta)&=klJ(f_\alpha,f_\beta), \nonumber
\end{align}
and correspondingly the moment-average $R(\psi,f_\alpha,f_\beta)$ is trilinear.
\begin{align}
R(\psi,f_\alpha,f_\beta+f_\gamma)&=R(\psi,f_\alpha,f_\beta)+R(\psi,f_\alpha,f_\gamma),\nonumber\\
R(\psi,f_\alpha+f_\gamma,f_\beta)&=R(\psi,f_\alpha,f_\beta)+R(\psi,f_\gamma,f_\beta),\nonumber\\
R(\psi+\eta,f_\alpha,f_\beta)&=R(\psi,f_\alpha,f_\beta)+R(\eta,f_\alpha,f_\beta),\nonumber\\
R(j\psi,kf_\alpha,lf_\beta)&=jklR(\psi,f_\alpha,f_\beta).
\end{align}
This allows us to decompose the moment average into sums of smaller terms, which is useful analytically. This is also similar to the properties exhibited by some other linearized operators, such as the linearized Landau operator\cite{balescu_transport_1988,helander_collisional_2005}. On substituting the Sonine-Hermite polynomial ansatz from Eq.\,(\ref{eq:ansatz}) for the distribution functions $f$, and setting the moment $\psi=G^{mn}_{\i}$ from Eq.\,(\ref{eq:sonine-hermite}), we obtain
\begin{multline}
R^{mnkl}_{\alpha\beta} = \iiint f^{(0)}_\alpha f^{(0)}_\beta \{G^{mn}_\alpha(\mathbf{c}_\alpha^\prime)-G^{mn}_\alpha(\mathbf{c}_\alpha)\}\\
\times\{1+
2^{2l}\gamma_\alpha^{2l+k}m_\alpha^{-2}\tau_{kl}G_\alpha^{kl}(\mathbf{c}_\alpha)b^{kl}_\alpha\\
+2^{2l}\gamma_\beta^{2l+k}m_\beta^{-2}\tau_{kl}G_\beta^{kl}(\mathbf{c}_\beta)b^{kl}_\beta
\}\\
\times g\sigma_{\alpha\beta}(g,\chi)d\Omega d\mathbf{c_{\alpha}}d\mathbf{c_{1\beta}}, \nonumber
\end{multline}
where $R^{mnkl}_{\alpha\beta}$ represents the part of $R^{mn}_{\alpha\beta}$ averaging over the $kl$ term of the ansatz Eq.\,(\ref{eq:ansatz}). Noting that $G^{kl}b^{kl}$ is an inner product, we now substitute the definition of $G^{mn}$, and use the following integral identity\cite{ji_exact_2006,weinert_spherical_1980}
\begin{multline}
\int P^{(m)}(P^{(k)}:W)G(v)d\mathbf{v}\\
=\frac{W}{2m+1}\delta_{km}\int P^{(m)}:P^{(m)}G(v)d\mathbf{v}, \label{eq:pmidentity}
\end{multline}
where $W$ is a symmetric, traceless tensor of rank $k$ that is not a function of $\mathbf{v}$. Furthermore, we define ``bracket'' integrals of the following form
\begin{equation}
n_\alpha n_\beta [F,G]=\iiint f^{(0)}_\alpha f^{(0)}_\beta G(F^\prime-F)g\sigma_{\alpha\beta}(g,\chi)d\Omega d\mathbf{c_{\alpha}}d\mathbf{c_{1\beta}},
\end{equation}
through which we can contract over index $k$ in $R^{mnkl}_{\alpha\beta}$ and write it as $R^{mnl}_{\alpha\beta}$, such that
\begin{equation}
R_{\alpha\beta}^{mnl}=(1-\delta_{m0}\delta_{l0})(A_{\alpha\beta}^{mnl}b^{ml}_\alpha+B_{\alpha\beta}^{mnl}b^{ml}_\beta)+\delta_{m0}\delta_{l0}C_{\alpha\beta}^{mnl},
\label{eq:collision_second_form}
\end{equation}
where $A_{\alpha\beta}^{mnl}$, $B_{\alpha\beta}^{mnl}$ and $C_{\alpha\beta}^{mnl}$ are given by
\begin{align}
A_{\alpha\beta}^{mnl}&= Q_{\alpha\beta}^{mnl}
\gamma_\alpha^{l-n}\times\nonumber\\
&\left[S^n_{m+1/2}\left(W_\alpha^2\right)P^{(m)}(\mathbf{W}_\alpha),S^l_{m+1/2}(W_\alpha^2)P^{(m)}(\mathbf{W}_\alpha)\right]\nonumber\\
B_{\alpha\beta}^{mnl}&=Q_{\alpha\beta}^{mnl}\frac{\gamma_\beta^{l+m/2}}{\gamma_\alpha^{n+m/2}}\frac{m_\alpha}{m_\beta}\times\nonumber\\
&\left[S^n_{m+1/2}(W_\alpha^2)P^{(m)}(\mathbf{W}_\alpha),S^l_{m+1/2}(W_\beta^2)P^{(m)}(\mathbf{W}_\beta)\right]\nonumber\\
C^{mnl}_{\alpha\beta} &= (-1)^n n! \gamma_\alpha^{-(n+m/2)}m_\alpha n_\i n_\j\left[S^n_{1/2}(W_\alpha^2),1\right],
\label{eq:AmnBmn1}
\end{align}
where
\begin{equation}
Q_{\alpha\beta}^{mnl}=(-1)^{n+l}2^{2l+m} \frac{(2m)!(m+l)!n!}{(m!)^2 (2m+2l+1)!}n_\alpha n_\beta.
\label{eq:qmnl}
\end{equation}
This moment-averaged collision operator is valid for any difference of masses or temperatures of the colliding species.
Having derived our expressions for the moment-averaged collision operator, we now illustrate a slightly different expression, derived originally in the appendix of Ref.\,\onlinecite{zhdanov_transport_2002} by Zhdanov, which brings with it some additional assumptions. The collision operator is linearized by choosing a distribution function defined at the common temperature of the plasma $T=\sum_\alpha n_\alpha T_\alpha/\sum_\alpha n_\alpha$, such that $f^{\prime(0)}_\alpha f^{\prime(0)}_\beta=f^{(0)}_\alpha f^{(0)}_\beta$, leading to
\begin{equation}
J_{\alpha\beta} = \iint f^{(0)}_\alpha f^{(0)}_\beta(\Phi^\prime_\alpha+\Phi^\prime_\beta-\Phi_\alpha-\Phi_\beta)g\sigma_{\alpha\beta}(g,\chi)d\Omega d\mathbf{c_{1\beta}}. \nonumber
\end{equation}
on neglecting the squared $\Phi\Phi$ terms. However, in such a form of the collision operator, Zhdanov et al assume that the temperature of each species is close to the common temperature, i.e. $|T-T_\i|\ll T$. One could term this the linearized Boltzmann collision integral under quasi-thermodynamic-equilibrium conditions. Such a scheme is well suited when the masses of the colliding species are of the same order, so that both species possess collisional relaxation timescales of the same order, leading to their distribution functions being close enough to a Maxwellian defined at the common temperature $T$. However, one needs to be careful when the masses of the species are of different orders, leading to different relaxation timescales, such as in the case of heavy impurities colliding with the plasma fuel species.
On integrating over a moment $\psi_\alpha=G^{mn}_\i$ to obtain the moment-averaged collision integral using the ansatz Eq.\,(\ref{eq:ansatz}) for the distribution function, and on noting that $b_\i^{\prime kl}=b_\i^{kl}$ for a given species $\i$ by construction, since the form of $f_\i$ remains the same, we can obtain
\begin{equation}
R_{\alpha\beta}^{mnl}=A_{\alpha\beta}^{mnl}b^{ml}_\alpha+B_{\alpha\beta}^{mnl}b^{ml}_\beta,
\label{eq:collision_first_form}
\end{equation}
where $A_{\alpha\beta}^{mnl}$ and $B_{\alpha\beta}^{mnl}$ are given by
\begin{align}
A_{\alpha\beta}^{mnl}&= Q_{\alpha\beta}^{mnl}
\gamma_\alpha^{l-n}\times\nonumber\\
&\left[S^l_{m+1/2}(W_\alpha^2)P^{(m)}(\mathbf{W}_\alpha),S^n_{m+1/2}\left(W_\alpha^2\right)P^{(m)}(\mathbf{W}_\alpha)\right]\nonumber\\
B_{\alpha\beta}^{mnl}&=Q_{\alpha\beta}^{mnl}\frac{\gamma_\beta^{l+m/2}}{\gamma_\alpha^{n+m/2}}\frac{m_\alpha}{m_\beta}\times\nonumber\\
&\left[S^l_{m+1/2}(W_\beta^2)P^{(m)}(\mathbf{W}_\beta),S^n_{m+1/2}(W_\alpha^2)P^{(m)}(\mathbf{W}_\alpha)\right]
\label{eq:singletempamnlbmnl}
\end{align}
where $Q_{\alpha\beta}^{mnl}$ has the same expression as Eq.\,(\ref{eq:qmnl}). This is also the collision operator employed for deriving the $21N$-moment single-temperature collision coefficients in Ref.\,\onlinecite{yushmanov_diffusion_1980}. Note, however, that the $A_{\alpha\beta}^{mnl},B_{\alpha\beta}^{mnl}$ here correspond to the $A_{\alpha\beta}^{mln},B_{\alpha\beta}^{mln}$ of Eq.\,(\ref{eq:AmnBmn1}).
The limitation of the moment-averaged collision operator (\ref{eq:collision_first_form}) is that it is only valid under quasi-thermodynamic-equilibrium conditions, with the species temperatures close to the plasma common temperature. The operator in Eq.\,(\ref{eq:collision_second_form}), however, carries no such assumption. Therefore, each individual moment-averaged term of Eq.\,(\ref{eq:collision_first_form}) will be less accurate than that of Eq.\,(\ref{eq:collision_second_form}) for an increasing difference in the temperatures of the colliding species.
It is worth discussing what the assumption $b^{\prime kl}=b^{kl}$ means. Since the $b^{kl}$ enter the distribution functions, this assumption essentially implies no change in the corresponding physical quantity between the pre-collision and post-collision distribution functions. For example, $n_\i b_\i^{10}=\rho_\i\mathbf{w}_\i$ carries the diffusion velocity of the species $\i$, and assuming that $b_\i^{10}= b_\i^{\prime 10}$ implies that the diffusion velocities of the pre-collision and post-collision distributions remain the same for the same species $\i$, not changing on the timescale of the collision. This also ensures that the distribution function $f$ has the same form for a given species $\i$ in all four pre-collision and post-collision distributions, as necessitated by the Boltzmann equation. This is a reasonable assumption when the duration of the collision is very small, as in the case of short-range forces, e.g.\,rigid-sphere collisions, where the collision lasts only an instant, or for weakly-coupled long-range forces such as the Coulomb potential, where again the small-angle collision duration is very small. However, for long-range interaction potentials with strong coupling, one has to be careful that the collision duration is much smaller than the timescale of the system evolution, and in general for such an interaction potential the Boltzmann collision operator is in any case not appropriate.
To use either expression of the moment-averaged collision operator, we need to calculate these bracket integrals for the required $(m,n)$ values in order to derive the desired forces. Because of Eq.\,(\ref{eq:pmidentity}), the only brackets that survive in the moment-averaged linearized collision operator are the ones possessing the same rank, which significantly reduces the number of terms one must calculate. Certain methods for deriving these bracket integrals are provided in Refs.\cite{chapman_mathematical_1952}, \cite{ferziger_mathematical_1972} and \cite{rat_transport_2001} for $m=1,2$, as they are well suited enough within the scope of the $21N$-moment scheme. In the next section we indicate the general expressions for the bracket integrals obtained by following these methods for a case of species $\alpha,\beta$ possessing different masses and different temperatures.
\section{General expressions for the bracket integrals}
\label{sec:general_expressions}
The general expressions for the rank-$m$ bracket integrals take the following forms
\begin{multline}
\left[S^p_{3/2}(W_\i^2)P^{(m)}(\mathbf{W}_\i),S^q_{3/2}(W_\j^2)P^{(m)}(\mathbf{W}_\j)\right]\\
\sim\sum_{rl} \frac{A^{pqrl,m}_{\i\j}}{k_{\i\j}^{r+3/2}}\Omega_{\i\j}^{lr},\nonumber
\end{multline}
and
\begin{multline}
\left[S^p_{3/2}(W_\alpha^2)P^{(m)}(\mathbf{W}_\alpha),S^q_{3/2}(W_\alpha^2)P^{(m)}(\mathbf{W}_\alpha)\right]\\
\sim\sum_{rl} \frac{A^{pqrl,m}_{\i\i}}{k_{\i\j}^{r+3/2}}\Omega_{\i\j}^{lr}.\nonumber
\end{multline}
The coefficients $A^{pqrl,m}_{\i\j}$ and $A^{pqrl,m}_{\i\i}$ are functions of the mass and temperature ratios of the species $\i$ and $\j$. The terms $\Omega_{\i\j}^{lr}$ are the effective cross-section moment integrals of Chapman and Cowling (henceforth referred to as the ``Chapman-Cowling integrals''), which depend on the potential of interaction between species $\i$ and $\j$. The factor $k_{\i\j}$ is an arbitrary function of the masses and temperatures of the colliding species, which can be chosen freely as long as $k_{\i\j}>0$. The choice also affects the forms of $A^{pqrl,m}_{\i\j}$, $A^{pqrl,m}_{\i\i}$ and $\Omega_{\i\j}^{lr}$, which depend on $k_{\i\j}$ individually, in such a way that, in principle, the overall bracket integral values do not depend on $k_{\i\j}$.
The exact derivations of the generalized coefficients $A^{pqrl,m}_{\i\j}$ and $A^{pqrl,m}_{\i\i}$ for the different bracket integrals up to rank-2, with all steps supplied for verification purposes, are provided in Appendix \ref{sec:bracket_integral_derivation}. For the purpose of our work, rank-2 suffices, as, in the context of the Boltzmann collision operator, only moments up to rank-2 have been considered in previous works. The solution of higher-order bracket integrals is out of the scope of the current article and is reserved for a future manuscript.
The ``reversed'' bracket integrals $\left[S^p_{3/2}(W_\j^2)P^{(m)}(\mathbf{W}_\j),S^q_{3/2}(W_\i^2)P^{(m)}(\mathbf{W}_\i)\right]$ required for the second form of the collision operator, and $\left[S^p_{3/2}(W_\beta^2)P^{(m)}(\mathbf{W}_\beta),S^q_{3/2}(W_\beta^2)P^{(m)}(\mathbf{W}_\beta)\right]$ required for verifying the conservation properties, can be obtained by the transformation $\i\rightleftharpoons\j$ in the expressions provided. Since the two types of bracket integrals are calculated independently of each other, any mistake in calculating them would lead to non-conservation of momentum or energy. Hence, demonstrating conservation of momentum and energy with the obtained quantities is an adequate testament to their accuracy (see Appendix \ref{sec:bracket_values}). Furthermore, the expressions for the bracket integrals are very amenable to implementation in computer algebra systems, as they are composed of sums and products which any computer algebra system should be able to perform. We implement the expressions for the bracket integrals and the moment-averaged collision term in Mathematica\cite{mathematica}.
\label{sec:cross_sections}
The Chapman-Cowling integral is written in the following form for our case
\begin{align}
\Omega_{\i\j}^{lr} =& \left(\frac{\pi}{d_{\i\j}}\right)^{1/2}\int_0^\infty \exp(-\zeta^2) \zeta^{2r+3} \phi^{(l)}_{\i\j} d\zeta,\\
\phi^{(l)}_{\i\j}=&\int^\pi_0 (1-\cos^l{\chi}) \sigma_{\alpha\beta}(g,\chi)\sin{\chi}\,d\chi,
\end{align}
where $\zeta=d_{\i\j}^{1/2}g$ and where the factor $d_{\i\j}$ is related to $k_{\i\j}$ by the following relation
\begin{equation}
d_{\i\j}=k_{\i\j}\left\{\mu_{\i\j}^2\left(\frac{\gamma_\i}{2m_\i^2}+\frac{\gamma_\j}{2m_\j^2}\right)\right\}. \nonumber
\end{equation}
Choosing $k_{\i\j}$ fixes $d_{\i\j}$ and vice versa; note therefore that $\Omega_{\i\j}^{lr}$ is a functional of $d_{\i\j}$ and of the effective cross-section $\phi^{(l)}_{\i\j}$.
Essentially, we are now left with calculating the effective cross-sections $\phi^{(l)}_{\i\j}$, which in turn depend on the physics of the particle-potential interaction. Therefore, the choice of potential is crucial to calculating these effective cross-sections accurately. We essentially have a choice between the pure Coulomb potential and the shielded Coulomb potential, as these are the only ones that apply to fully ionized plasmas in the fusion domain. However, there are always integrability and convergence issues with the potentials used in these calculations. For example, the integrals for the pure Coulomb potential diverge in the limit of small collision angles and large impact parameters, which constitute the majority of collisions in a hot plasma. This is usually mitigated by cutting off the integral at the Debye radius, which then leads to a converged integral. For the shielded Coulomb potential, for high-energy and small-angle interactions, some forms of the integrals obtained do not converge in the large impact parameter limit. Some physical approximations, such as ignoring large-angle collisions for the shielded-Coulomb part of the integration, are used to express the integrals approximately in forms that converge. From a modelling point of view, it is worth keeping in mind that these two potentials will give slightly different collisional coefficients, which provides a range of values that could be useful for comparison with experiments.
In general, the integration of the effective cross-sections with the shielded Coulomb potential is not so simple, and often has to be done manually for different values of $l$, with unique approximations applied at each value. However, it is possible to find the asymptotic values of the cross-sections through a perturbation method\cite{kihara_coefficients_1959}. For our case, the asymptotic form of the $\Omega$-integral for the shielded Coulomb potential reads
\begin{multline}
\Omega^{lr}_{\i\j,sh}(d_{\i\j})= l\Gamma(r)\left(\frac{\pi}{d_{\i\j}}\right)^{1/2}\frac{\Delta_{\i\j}^2}{4}\left(\frac{2kTd_{\i\j}}{\mu_{\i\j}}\right)^2\\
\times\left\{\ln\left(\frac{4\lambda_D}{\Delta_{\i\j}} \frac{\mu_{\i\j}}{2kTd_{\i\j}}\right)+A_r-C_l -2\ln\gamma\right\},
\label{eq:kihara_formula}
\end{multline}
where
\begin{align}
C_l&=\left\{\begin{tabular}{ll}
$(1+\frac{1}{3}+\frac{1}{5}+\ldots+\frac{1}{l})-\frac{1}{2l},$ & for odd $l$\\
$(1+\frac{1}{3}+\frac{1}{5}+\ldots+\frac{1}{l-1}),$ & for even $l$
\end{tabular}\right.\nonumber\\
A_r&=1+\frac{1}{2}+\frac{1}{3}+\ldots+\frac{1}{r-1},\ A_{1}=0,\nonumber
\end{align}
where $T$ is the plasma common temperature given by $T=(1/n)\sum_\i n_\i T_\i$, $\Delta_{\i\j}$ is the mean distance of closest approach (also called the ``particle diameter''), and $\lambda_D$ is the Debye length given by
\begin{equation}
\lambda_D^{-2}=\sum_\i \frac{ n_\i Z_\i^2 e^2}{\epsilon_0 kT_\i},\nonumber
\end{equation}
and $\gamma$ is the Euler-Mascheroni constant. This formula has the advantage of being easily implementable in computer algebra software. However, it is worth noting that this expression leads to the Chapman-Cowling integral for the shielded Coulomb potential (and hence also the bracket integrals) evaluating to different values depending on the chosen value of $d_{\i\j}$. This arises from two assumptions in the calculation of the cross-section from the shielded Coulomb potential in Refs.\,\onlinecite{kihara_coefficients_1959} and \onlinecite{liboff_transport_1959}. The first is the non-dimensionalization of the impact parameter with respect to the particle diameter and the Debye length, which then vanishes under the asymptotic lower limit of the collision integral, leaving the final result the same for any choice of the Debye length or the particle diameter. The second is that the model of collisions is essentially that of one particle being deflected by one potential. The cross-section integrals $\phi^{(l)}$ found in this manner depend, apart from the potential of interaction, only on the relative velocity $g$. However, the relative velocity $g$ in the Chapman-Cowling integral is scaled by a factor $d_{\i\j}^{1/2}$, and hence the value of $\phi^{(l)}$ also needs to be transformed to the scaled variable $\zeta=d_{\i\j}^{1/2}g$, leading to the additional factors of ${2kTd_{\i\j}}/{\mu_{\i\j}}$ seen in Eq.\,(\ref{eq:kihara_formula}). For our coefficients, we choose $d_{\i\j}={\mu_{\i\j}}/{2kT}$, in order to agree with the two aforementioned calculations of the cross-sections for the shielded Coulomb potential in Refs.\,\onlinecite{kihara_coefficients_1959} and \onlinecite{liboff_transport_1959}.
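To make the use of Eq.\,(\ref{eq:kihara_formula}) concrete, a minimal Python sketch of the formula together with the corrections $C_l$ and $A_r$ is given below; SI units are assumed, the function and variable names are ours, and the snippet is an illustration rather than our actual (Mathematica) implementation. With our choice $d_{\i\j}=\mu_{\i\j}/(2kT)$, the factor \texttt{w} below equals unity.
\begin{verbatim}
# Sketch of Eq. (eq:kihara_formula): asymptotic shielded-Coulomb
# Omega-integral with corrections C_l and A_r.  SI units; Delta
# is taken from Eq. (kihara_closest_distance).
import numpy as np
from math import gamma as Gamma

EPS0, QE, KB = 8.8541878128e-12, 1.602176634e-19, 1.380649e-23
EULER = 0.5772156649015329   # Euler-Mascheroni constant

def C_l(l):
    if l % 2:   # odd l: (1 + 1/3 + ... + 1/l) - 1/(2l)
        return sum(1.0/k for k in range(1, l + 1, 2)) - 1.0/(2*l)
    return sum(1.0/k for k in range(1, l, 2))   # even l

def A_r(r):
    return sum(1.0/k for k in range(1, r))      # A_1 = 0

def omega_shielded(l, r, Za, Zb, mu, T, lam_D, d):
    delta = abs(Za*Zb)*QE**2 / (4*np.pi*EPS0*KB*T)  # particle diameter
    w = 2*KB*T*d/mu                                 # = 1 for d = mu/(2kT)
    pref = l*Gamma(r)*np.sqrt(np.pi/d)*0.25*delta**2*w**2
    return pref*(np.log(4*lam_D/(delta*w))
                 + A_r(r) - C_l(l) - 2*np.log(EULER))
\end{verbatim}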
Choosing $d_{\i\j}$ in this manner also allows us to immediately use cross-section values from numerical and empirical databases such as AMJUEL\cite{reiter2000data}, LXCat\cite{pancheshnyi2012lxcat} and ADAS\cite{summers2011atomic}, because the cross-sections in these databases are often expressed as polynomials in ${\mu_{\i\j}g^2}/{2kT}$.
Some attention also needs to be paid to the chosen value of $\Delta_{\i\j}$. Following Liboff\cite{liboff_transport_1959}, Kihara et al\cite{kihara_coefficients_1959} and Hahn et al\cite{hahn_quantum_1971}, one may choose it as follows
\begin{equation}
\Delta_{\i\j}=\frac{|Z_\i Z_\j| e^2}{4\pi\epsilon_0 kT},\label{kihara_closest_distance}
\end{equation}
where $T$ is the common temperature of the plasma. Such a value of $\Delta_{\i\j}$ is also termed the ``particle diameter'' in the literature (where $\pi\Delta_{\i\j}^2$ would be the collision cross-section for the rigid-sphere collision case\cite{chapman_mathematical_1952}). Following Zhdanov\cite{zhdanov_transport_2002}, one could choose it as the average inverse impact parameter $\langle 1/b_0\rangle^{-1}$ over the distributions as follows
\begin{equation}
\Delta_{\i\j}=\frac{|Z_\i Z_\j| e^2 \gamma_{\i\j}}{12\pi\epsilon_0 \mu_{\i\j}},\label{zhdanov_closest_distance}
\end{equation}
where $\gamma_{\i\j}$ is given by $\gamma_{\i\j}=\gamma_\i\gamma_\j/(\gamma_\i+\gamma_\j)$, which for equal temperatures makes the logarithmic term in the expression equal to the Coulomb logarithm $\ln\Lambda_{\i\j}$. Whichever is chosen has to be used consistently. In Ref.\,\onlinecite{zhdanov_transport_2002}, Zhdanov treats the cross-section by using Eq.\,(\ref{zhdanov_closest_distance}) for the Coulomb logarithm, but Eq.\,(\ref{kihara_closest_distance}) for the mean distance of closest approach outside the logarithm. This is essentially the same approximation as in Rosenbluth et al\cite{rosenbluth_fokker-planck_1957}, but it has a non-negligible effect on $\Omega^{lr}_{\i\j}$ for high values of $(l,r)$, since neither $A_r$ nor $C_l$ converges for large indices, and they can become larger than the logarithmic term for large enough $(l,r)$. This remains a reasonable approximation, however, because we are only concerned with a small number of lower-order moments.
One can also, following Liboff's procedure\cite{liboff_transport_1959}, extrapolate the results provided by Bonnefoi\cite{bonnefoi_thesis_1975}, and one finds the same expression as Eq.\,(\ref{eq:kihara_formula}), but with $-2\gamma$ in place of $-2\ln\gamma$.
The cause of this is that, in Liboff's procedure, the general angle of deflection of the collision takes the form of the first-order modified Bessel function of the second kind, $K_1$, and one must use approximations on the order of the energy of the interaction to express the integral at any order as the square of $K_1$, as every other power of $K_1$ is non-integrable in the given limits. This leads to the integral having a correction of $A_r-\gamma$ instead of that in Eq.\,(\ref{eq:kihara_formula}).
One feature to note of these shielded Coulomb potential cross-sections is that they ignore any difference in shielding between the attractive and repulsive potential cases, because the particle diameter is assumed to be positive in both cases. Depending on the kind of plasma, this may be relevant\cite{capitelli_transport_2013, dangola_thermodynamic_2008}. However, for fusion plasmas it may safely be neglected, given that the logarithmic term is dominant for hot plasmas.
We also mention, for the sake of completeness, that the original formula for the Chapman-Cowling integrals used by Zhdanov for the values of $\Omega^{lr}_{\i\j}$ in Ref.\,\onlinecite{zhdanov_transport_2002}, found using the Coulomb potential with the Debye length cutoff, is given by
\begin{equation}
\Omega^{lr}_{\i\j}=\sqrt{\pi}l(r-1)! \left( \frac{Z_\i Z_\j e^2}{4\pi\epsilon_0} \right)^2 \frac{\ln{\Lambda_{\i\j}}}{\mu_{\i\j}^{1/2} (2kT)^{3/2}},
\label{eq:zhdanov_omegarl}
\end{equation}
where the plasma common temperature $T$ refers to $T=(1/n)\sum_\i n_\i T_\i$. This follows from Zhdanov's forms of the $\Omega$-integrals when the temperatures of the components are close to each other and when the Debye length $\lambda_D$ is much larger than the inverse of the average inverse impact parameter, i.e.
\begin{equation}
\lambda_D\gg \overline{\left(\frac{1}{b_0} \right)}^{-1},\nonumber
\end{equation}
and where $\Lambda_{\i\j}$ is the Coulomb logarithm given by
\begin{equation}
\Lambda_{\i\j}=\frac{12\pi\epsilon_0kT}{Z_\i Z_\j e^2}\lambda_D.\nonumber
\end{equation}
This agrees with the general result in Eq.\,(\ref{eq:kihara_formula}) to the order of the logarithmic term.
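For comparison, Eq.\,(\ref{eq:zhdanov_omegarl}) is straightforward to implement; the following sketch (our naming, SI units) mirrors the shielded-Coulomb snippet above.
\begin{verbatim}
# Sketch of Eq. (eq:zhdanov_omegarl): the pure-Coulomb
# Omega-integral with a Debye-length cutoff; SI units.
from math import pi, sqrt, log, factorial

EPS0, QE, KB = 8.8541878128e-12, 1.602176634e-19, 1.380649e-23

def omega_coulomb(l, r, Za, Zb, mu, T, lam_D):
    coul = Za*Zb*QE**2/(4*pi*EPS0)
    Lam = 12*pi*EPS0*KB*T*lam_D/(Za*Zb*QE**2)  # Coulomb log argument
    return (sqrt(pi)*l*factorial(r - 1)*coul**2*log(Lam)
            / (sqrt(mu)*(2*KB*T)**1.5))
\end{verbatim}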
We will calculate our cross-sections using Eq.\,(\ref{eq:kihara_formula}) together with Eq.\,(\ref{kihara_closest_distance}) and $d_{\i\j}=\mu_{\i\j}/(2kT)$. With expressions for the coefficients $A^{pqrl}_{\i\j/\i\i}$ and the cross-sections $\Omega^{lr}_{\i\j,sh}$, we can now proceed to calculate the collisional coefficients and compare them to the ones found in the existing literature.
\section{Range of Validity of Zhdanov's values}
\label{sec:comparisons}
Zhdanov et al have previously derived two sets of values for the collisional coefficients. The first set was derived for a multi-temperature, multi-component plasma without any explicit assumptions on the temperatures of the species (see Appendix \ref{sec:zhdanov13}). These $13N$-moment multi-temperature coefficients were also calculated using the linearized Boltzmann operator. The derivation method for a few lower-order moments is given in Refs.\,\onlinecite{zhdanov_transport_2002} and \onlinecite{zhdanov_influence_2013}. However, to the best of our knowledge, no explicit general derivation scheme was provided, making it difficult to verify some of the cumbersome higher-order moments, or to generate higher-order ones in case a larger number of moments is needed. These coefficients generally take the form
\begin{equation}
A^{mpq}_{\i\j},B^{mpq}_{\i\j}\sim\sum_{k=0}^2 \Theta_{\i\j}^k \sum_{rl} K_{\i\j,\i}^{m,pqrl} \Omega_{\i\j}^{rl},
\label{eq:zhdanov13_forms}
\end{equation}
where $K_{\i\j,\i}^{m,pqrl}$ is a term depending solely on the masses, temperatures and densities of the species, $\Omega_{\i\j}^{rl}$ is the Chapman-Cowling effective cross-section moment defined by Eq.\,(\ref{eq:zhdanov_omegarl_proper}),
and $\Theta_{\i\j}$ is given by
\begin{equation}
\Theta_{\i\j}=\left(1-\frac{T_\j}{T_\i}\right)\left/\left(1+\frac{m_\j}{m_\i}\right)\right..\nonumber
\end{equation}
We can verify that, on choosing $d_{\i\j}=\gamma_{\i\j}/2$, our coefficients equal the ones provided by Zhdanov (see Appendix C), thus verifying the coefficients provided in the known literature for this choice of $d_{\i\j}$. We can now also generate higher-order moments of this multi-temperature form if required.
The second set of collision coefficients, provided by Zhdanov in Refs.\,\onlinecite{zhdanov_transport_2002} and \onlinecite{yushmanov_diffusion_1980}, is derived using the form of the collision operator given by Eq.\,(\ref{eq:collision_first_form}), for $21N$ moments (see Appendix \ref{sec:zhdanov21}). These coefficients take the general form
\begin{equation}
A^{mpq}_{\i\j},B^{mpq}_{\i\j}\sim \sum_{rl} L_{\i\j,\i}^{m,pqrl} \Omega_{\i\j}^{rl},
\label{eq:zhdanov21_forms}
\end{equation}
where $L_{\i\j,\i}^{m,pqrl}$ is a term depending on the masses, temperatures and densities of the species. The bracket integrals are, however, evaluated at the plasma common temperature $T=\sum_\i n_\i T_\i/\sum_\i n_\i$. There still remains an individual species temperature dependence from the terms multiplying the bracket integrals (see Eqs.\,(\ref{eq:singletempamnlbmnl})). The values of $\Omega_{\i\j}^{rl}$ are proportional to the approximate formula Eq.\,(\ref{eq:zhdanov21Ncrosssections}), which is accurate to the order of the Coulomb logarithm.
When comparing our expressions derived with $d_{\i\j}=\mu_{\i\j}/(2kT)$ with the expressions in Eqs.\,(\ref{eq:zhdanov13_forms}) and (\ref{eq:zhdanov21_forms}), we can expect the $13N$-moment multi-temperature coefficients to agree in the case of equal temperatures, the $21N$-moment coefficients to agree modestly, and both to have a range of reasonable agreement in the vicinity of equal temperatures. To understand where the coefficients begin to diverge, we consider some physical situations relevant to SOL/edge physics: 1.\,a three-component plasma representing the fusion fuel, with electrons, deuterium (D) and tritium (T), the latter two at comparable densities, as D-T fusion is planned for current and future high-Q campaigns; 2.\,a three-component plasma with a light impurity at a significant fraction (10\%) of the main fuel species, i.e.\,electrons, hydrogen (H) and carbon (C), with the carbon originating from plasma-facing components made of graphite; 3.\,a three-component plasma with an injected mid-weight impurity at a small fraction (1\%) of the fuel species density, i.e.\,electrons, hydrogen and argon (Ar), often used for controlled experimentation with impurities or for other purposes; and finally 4.\,a three-component plasma with a heavy impurity at trace levels (0.001\%), i.e.\,electrons, hydrogen and tungsten (W), where the tungsten usually originates from walls and divertors made of tungsten. This choice of scenarios allows us to scan over operationally relevant mass and density ratios, while focusing on the effect of the temperature ratio. The values of the masses, charges, densities and temperatures we choose can be found in Table (\ref{table:values}). The temperatures are chosen so as to provide a range of temperature ratios spanning $0.1-2$.
\begin{table}
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline \rule{0pt}{2ex}
$\i-\j\rightarrow$ & \text{T-D} & \text{C-H} & \text{Ar-H} & \text{W-H} \\
\hline\hline
\rule{0pt}{3ex} $n_\i$ & $10^{19}$ & $10^{18}$ & $10^{17}$ & $10^{14}$\\
$Z_\i$ & $+1$ & $+6$ & $+7$ & $+7$\\
$m_\i$ & $3$ amu & $12$ amu & $40$ amu & $184$ amu\\
$T_\i$ & $100$ eV & $100$ eV & $100$ eV & $100$ eV\\
$n_\j$ & $10^{19}$ & $10^{19}$ & $10^{19}$ & $10^{19}$\\
$Z_\j$ & $+1$ & $+1$ & $+1$ & $+1$\\
$m_\j$ & $2$ amu & $1$ amu & $1$ amu & $1$ amu\\
$T_\j$ & $10-200$ eV& $10-200$ eV& $10-200$ eV& $10-200$ eV\\
\hline
\end{tabular}
\caption{Values of the constants used for the different operational cases chosen. At 100\,eV, the typical maximum charge state for higher-$Z$ impurities is around $+7$, which is why we limit the argon and tungsten charge states to $+7$.}
\label{table:values}
\end{table}
First, we look at coefficients for the D-T case whose physical significance can be intuitively understood, e.g.\ the friction force, governed by $A^{100}_{\i\j},B^{100}_{\i\j}$; the thermal gradient force, governed by $A^{110}_{\i\j},B^{110}_{\i\j}$; and the energy exchange term, given by $C^{010}_{\i\j}$.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/DT/forces/friction_force}
\includegraphics[width=\columnwidth]{figures/DT/forces/thermal_gradient_force}
\includegraphics[width=\columnwidth]{figures/DT/forces/energy_exchange}
\caption{Comparison of physically relevant moments, i.e., from top to bottom, the friction force, the thermal gradient force, and the energy exchange, between the two species D and T. The plots show the coefficient values plotted against the temperature ratio $T_D/T_T$. In the legends, $A^{mnl}_{\i\j}/B^{mnl}_{\i\j}$ refer to our coefficients, ``$13N$'' indicates Zhdanov's multi-temperature collisional coefficients, and ``$21N$'' refers to Zhdanov's single-temperature collisional coefficients.}
\label{fig:intuitive_terms}
\end{figure}
We can immediately notice in Fig.\,(\ref{fig:intuitive_terms}) that, in the vicinity of equal temperatures $T_D/T_T=1$, the curves for all coefficients generally follow each other quite closely. In particular, the curves for the $13N$-moment multi-temperature coefficients follow our obtained values much more closely than the $21N$-moment single-temperature ones. However, they deviate quite significantly going away from equal temperatures.
Based on these observations, the $13N$-moment multi-temperature coefficients agree better with ours in the vicinity of equal temperatures than the $21N$-moment single-temperature ones. However, to better recommend a range of validity and to quantitatively characterize the deviations, we plot the percentage differences in the coefficients, defined as the absolute value of the percentage difference of Zhdanov's two sets of coefficients with respect to our coefficients, Eq.\,(\ref{eq:collision_second_form}).
The percentage differences in the $13N$-moment multi-temperature coefficients are plotted in Fig.\,(\ref{fig:zhdanov13_errors}).
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/DT/diff_temp/same_cross_section_coefficient_errors_rank0}
\includegraphics[width=0.9\columnwidth]{figures/DT/diff_temp/same_cross_section_coefficient_errors_rank1A}
\includegraphics[width=0.9\columnwidth]{figures/DT/diff_temp/same_cross_section_coefficient_errors_rank1}
\includegraphics[width=0.9\columnwidth]{figures/DT/diff_temp/same_cross_section_coefficient_errors_rank2}
\caption{Plots of percentage differences between the multi-temperature coefficients calculated by Eq.\,(\ref{eq:zhdanov13_forms}) and ours calculated from Eq.\,(\ref{eq:collision_second_form}). The top plot shows the difference for the rank-0 coefficient, the two middle plots show the differences for the rank-1 coefficients, and the bottom plot shows the differences for the rank-2 coefficients.}
\label{fig:zhdanov13_errors}
\end{figure}
We notice exact agreement at equal temperatures, and the differences in the coefficients are very low in the vicinity of equal temperatures. However, they deviate rapidly as the temperature ratio decreases below 0.5. In particular, the coefficients related to the heat flux transmission, $A^{110}_{\i\j},B^{110}_{\i\j}$, deviate very quickly. This indicates that at significant temperature differences, the representation of the heat flux gains more importance. In Table (\ref{table:13n}), we show the maximum differences in the temperature ratio range $0.8-1.2$. Most coefficients differ by less than 15\% for all four physical cases, with the largest difference being about 43\% (for $B^{110}$ in the D-T case). Furthermore, we can notice that, for most coefficients, the heavier the impurity becomes, the lower the differences are. This indicates that the $13N$-moment multi-temperature coefficients may be more suitable for simulations involving heavier impurity species.
\begin{table}
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline
\rule{0pt}{2ex}
\text{Coefficient} & \text{D-T} & \text{C-H} & \text{Ar-H} & \text{W-H} \\
\hline\hline
\rule{0pt}{3ex}$C^{010}$& 5.29 & 6.19 & 5.32 & 5.20 \\
$A^{100}$& 5.29 & 6.19 & 5.32 & 5.20 \\
$B^{100}$& 5.29 & 6.19 & 5.32 & 5.20 \\
$A^{101}$& 14.50 & 26.06 & 23.28 & 23.39 \\
$B^{101}$& 14.50 & 26.06 & 23.28 & 23.39 \\
$A^{110}$& 12.51 & 19.77 & 18.27 & 18.37 \\
$B^{110}$& 42.95 & 14.89 & 2.18 & 2.48 \\
$A^{111}$& 5.73 & 6.92 & 6.19 & 6.11 \\
$B^{111}$& 6.77 & 13.36 & 12.15 & 12.20 \\
$A^{200}$& 6.65 & 8.60 & 7.50 & 7.39 \\
$B^{200}$& 3.24 & 3.25 & 2.75 & 2.65 \\
\hline
\end{tabular}
\caption{Table of maximum percentage differences in the $13N$-moment multi-temperature coefficients in the range $T_\i/T_\j=0.8-1.2$. It can be noticed that the differences remain reasonably low for small temperature differences.}
\label{table:13n}
\end{table}
We show the differences in the $21N$-moment single-temperature coefficients for the D-T case in Figs.\,(\ref{fig:zhdanov21_errors0}), (\ref{fig:zhdanov21_errorsA}) and (\ref{fig:zhdanov21_errorsB}).
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/DT/equal_temp/diff_cross_section_coefficient_errors_rank0}
\caption{Plot of percentage differences between the single-temperature coefficient $C^{210}_{TD}$ calculated by Eq.\,(\ref{eq:zhdanov21_forms}) and ours calculated from Eq.\,(\ref{eq:collision_second_form}). The spike near unity temperature ratio occurs because our collision coefficient changes sign, while the plotted error is the absolute value of the relative difference.}
\label{fig:zhdanov21_errors0}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/DT/equal_temp/diff_cross_section_coefficient_errors_rank11A}
\includegraphics[width=0.9\columnwidth]{figures/DT/equal_temp/diff_cross_section_coefficient_errors_rank12A}
\includegraphics[width=0.9\columnwidth]{figures/DT/equal_temp/diff_cross_section_coefficient_errors_rank2A}
\caption{Plots of percentage differences between the single-temperature coefficients $A^{mnl}_{TD}$ calculated by Eq.\,(\ref{eq:zhdanov21_forms}) and ours calculated from Eq.\,(\ref{eq:collision_second_form}). The top and middle plots show the differences for the rank-1 coefficients, and the bottom plot shows the differences for the rank-2 coefficients.}
\label{fig:zhdanov21_errorsA}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.9\columnwidth]{figures/DT/equal_temp/diff_cross_section_coefficient_errors_rank11B}
\includegraphics[width=0.9\columnwidth]{figures/DT/equal_temp/diff_cross_section_coefficient_errors_rank12B}
\includegraphics[width=0.9\columnwidth]{figures/DT/equal_temp/diff_cross_section_coefficient_errors_rank2B}
\caption{Plots of percentage differences between the single-temperature coefficients $B^{mnl}_{TD}$ calculated by Eq.\,(\ref{eq:zhdanov21_forms}) and ours calculated from Eq.\,(\ref{eq:collision_second_form}). The top and middle plots show the differences for the rank-1 coefficients, and the bottom plot shows the differences for the rank-2 coefficients.}
\label{fig:zhdanov21_errorsB}
\end{figure}
One can notice in these figures that the differences in the coefficients are significantly higher than those of the $13N$-moment multi-temperature case. Furthermore, the percentage differences in the higher-order moments, e.g.\ $A/B_{\i\j}^{11n}$, $A/B_{\i\j}^{12n}$ and $A/B_{\i\j}^{21n}$, are significantly higher than those of the lower-order ones. The same trends are observed for the carbon, argon and tungsten cases as well. In order to compare the differences, we show in Table (\ref{table:21n}) the maximum differences in the temperature ratio range $0.8-1.2$. We notice the same trend as before for the lower-order coefficients, namely a decrease in the percentage differences as the impurity species gets heavier. However, we also notice that the differences for the higher-order $A/B_{\i\j}^{11n}$, $A/B_{\i\j}^{12n}$ and $A/B_{\i\j}^{21n}$ moments can be up to three orders of magnitude larger than for the lower-order ones. Thus, compared to the $13N$-moment multi-temperature coefficients, the $21N$-moment single-temperature coefficients cannot be recommended for simulation purposes with significant temperature differences.
\begin{table}
\centering
\begin{tabular}{|c||c|c|c|c|}
\hline
\rule{0pt}{2ex}
\text{Coefficient} & \text{D-T} & \text{C-H} & \text{Ar-H} & \text{W-H} \\
\hline\hline
\rule{0pt}{3ex}$C^{010}$& 16.44 & 18.62 & 12.46 & 10.57 \\
$A^{100}$& 16.44 & 18.62 & 12.46 & 10.57 \\
$B^{100}$& 16.44 & 18.62 & 12.46 & 10.57 \\
$A^{101}$& 14.08 & 12.93 & 8.22 & 10.51 \\
$B^{101}$& 14.08 & 12.93 & 8.22 & 10.51 \\
$A^{102}$& 28.40 & 39.39 & 34.35 & 36.97 \\
$B^{102}$& 28.40 & 39.39 & 34.35 & 36.97 \\
$A^{110}$& 38.98 & 81.66 & 106.88 & 115.30 \\
$B^{110}$& 176.17 & 452.64 & 191.55 & 99.35 \\
$A^{111}$& 27.77 & 47.18 & 62.00 & 67.48 \\
$B^{111}$& 11.53 & 16.38 & 14.66 & 15.20 \\
$A^{112}$& 25.76 & 42.10 & 55.27 & 60.30 \\
$B^{112}$& 10.34 & 25.43 & 23.92 & 27.69 \\
$A^{120}$& 197.05 & 1004.96 & 1491.43 & 1790.87 \\
$B^{120}$& 336.60 & 282.77 & 98.32 & 99.65 \\
$A^{121}$& 40.53 & 102.39 & 134.03 & 144.63 \\
$B^{121}$& 24.87 & 207734.00 & 752.74 & 98.05 \\
$A^{122}$& 45.89 & 110.41 & 137.11 & 146.93 \\
$B^{122}$& 13.20 & 14.10 & 14.06 & 14.76 \\
$A^{200}$& 16.27 & 27.66 & 25.55 & 29.26 \\
$B^{200}$& 19.07 & 25.95 & 22.54 & 22.50 \\
$A^{201}$& 41.50 & 1180.76 & 4493.82 & 21616.10 \\
$B^{201}$& 10.41 & 14.15 & 16.71 & 19.93 \\
$A^{210}$& 41.79 & 1953.74 & 8207.40 & 40552.00 \\
$B^{210}$& 54.69 & 3883.28 & 192.60 & 98.86 \\
$A^{211}$& 234.01 & 4226.10 & 18208.20 & 189903.00 \\
$B^{211}$& 11.93 & 15.65 & 14.30 & 15.18 \\
\hline
\end{tabular}
\caption{Table of maximum percentage differences in the $21N$-moment single-temperature coefficients in the range $T_\i/T_\j=0.8-1.2$.}
\label{table:21n}
\end{table}
\section{Effect of Coefficients on Viscosity and Friction Force Calculations}
\label{sec:intuitive}
We have so far noted the differences in the coefficients numerically. However, it is also instructive to study some intuitive physical quantities such as the viscosity and the heat flux.
Obtaining values of the viscosity and heat flux essentially closes the $13N$-moment system of equations given by Eq.\,(\ref{eq:transport}), by expressing the higher-order moments, $\mathbf{h}$ and $\pi$ in this case, in terms of lower-order ones.
In order to obtain the closed set of equations, we neglect any electric and magnetic fields and follow the procedure in Ref.\,\onlinecite{zhdanov_effect_1962}, restricting the collisional terms to the $13N$-moment multi-temperature approximation. Clearly, this approximation yields viscosity and heat flux that are purely inertial in origin. In addition to neglecting the fields, we also assume that the higher-order moments evolve much more slowly in time and have much smaller gradients than the lower-order plasma dynamical moments. In terms of the Knudsen number $Kn\sim\lambda/L\sim\tau/T$, where $\lambda$ is the mean free path and $L$ the scale length of the system, and equivalently $\tau$ is the mean time between collisions and $T$ the timescale of the evolution of the system, the plasma dynamical moments are of the order of $Kn$, and we ignore every term of higher order than $Kn$. This implies that we neglect all time and space gradients of higher-order moments on the LHS of the moment-averaged Boltzmann equation (\ref{eq:transport}). Hence, the only terms that survive are the ones proportional to $\langle dG^{mn}_\i/dt\rangle$ (through the $T_\i$ dependence of $\gamma_\i$) for rank-1 quantities, and the ones proportional to $\partial u_r/\partial x_s$ for rank-2 quantities (as the $c_{\i s}$ derivative reduces the order of the moment).
Thus, the reduced evolution equation, now the steady-state equation, for the stress tensor $\pi_\i$ for the species $\i$ becomes
\begin{align}
2p_{\i} \varepsilon = \sum_{\j\neq\i}\left( \frac{A_{\i\j}^{200}}{n_\i}\pi_\i + \frac{B_{\i\j}^{200}}{n_\j}\pi_\j \right),
\label{eq:reduced_stress}
\end{align}
where $\varepsilon$ is given by
\begin{equation}
\varepsilon_{rs}=\left\{\frac{\partial u_r}{\partial x_s} \right\}=\frac{1}{2}\left(\frac{\partial u_r}{\partial x_s}+\frac{\partial u_s}{\partial x_r} \right)-\frac{1}{3}\delta_{rs}\frac{\partial u_l}{\partial x_l}.\nonumber
\end{equation}
This is the usual form in which one represents viscous forces: the viscosity $\eta$ multiplied by a traceless strain-rate tensor $\varepsilon$\cite{kulsrud_plasma_2020}. We can rewrite this equation in the following form
\begin{equation}
\sum_\gamma \frac{B^{*200}_{\alpha\gamma}}{n_\gamma}\pi_\gamma=2p_\i \varepsilon,
\end{equation}
where the sum over $\gamma$ runs over all species including $\i$, and where
\begin{equation}
B^{*200}_{\alpha\gamma}=\left\{\begin{tabular}{ll}
$\sum_{\j\neq\i}A_{\i\j}^{200}$ & $,\ \i=\gamma$\\
$B_{\i\gamma}^{200}$ & $,\ \i\neq\gamma$
\end{tabular}\right. .
\end{equation}
For $N$ species in the plasma, the set of $N$ equations corresponding to Eq.\,(\ref{eq:reduced_stress}) for all species can then be written compactly in matrix form
\begin{equation}
B^{*200}\Pi=P\varepsilon,
\end{equation}
where $B^{*200}$ is an $N\times N$ matrix whose elements are given by
\begin{equation}
B^{*200}=\left(\begin{tabular}{cccc}
$\frac{\sum_{\j\neq\i}A_{\i\j}^{200}}{n_\i}$ & $\frac{B_{\i\gamma}^{200}}{n_\gamma}$ & $\ldots$ & $\frac{B_{\i\omega}^{200}}{n_\omega}$\\
$\frac{B_{\gamma\i}^{200}}{n_\i}$ & $\ddots$ & $ $ & $\vdots$\\
$\vdots$ & $ $ & $\ddots$ & $\vdots$\\
$\frac{B_{\omega\i}^{200}}{n_\i}$ & $\ldots$ & $\ldots$ & $\frac{\sum_{\j\neq\omega}A_{\omega\j}^{200}}{n_\omega}$
\end{tabular}\right), \label{eq:bstar200}
\end{equation}
where $\omega$ labels an arbitrarily chosen $N$th species, and where $\Pi$ and $P$ are the column matrices with the values of $\pi_\gamma$ and $2p_\gamma$, respectively. On inverting the equation and comparing it to the classical form of the stress tensor, $\pi_\gamma=-2\eta_\gamma\varepsilon$, we find the column matrix $E$ of viscosities, given by
\begin{equation}
E=-\frac{1}{2}(B^{*200})^{-1}P,
\end{equation}
where the elements of $E$ are the partial viscosities $\eta_\gamma$ of each species. One can find the total viscosity $\eta=\sum_\gamma \eta_\gamma$ by the formula $\eta=Tr(E^T U)$, where $U$ is a column matrix of ones.
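As an illustration of this closure, a minimal Python sketch assembling $B^{*200}$ and solving for the partial viscosities is given below; the coefficient arrays are placeholders standing in for the collisional coefficients derived above, and \texttt{np.linalg.solve} is used in place of an explicit inverse.
\begin{verbatim}
# Sketch of the viscosity closure: assemble B*200 of Eq. (eq:bstar200)
# from rank-2 coefficients A200[a,b], B200[a,b] (placeholders) and
# densities n[a], then solve for the partial viscosities
# E = -(1/2) (B*200)^{-1} P and sum them.
import numpy as np

def total_viscosity(A200, B200, n, p):
    N = len(n)
    Bstar = np.empty((N, N))
    for a in range(N):
        for g in range(N):
            if a == g:
                Bstar[a, g] = sum(A200[a, b] for b in range(N) if b != a) / n[a]
            else:
                Bstar[a, g] = B200[a, g] / n[g]
    P = 2.0 * np.asarray(p, dtype=float)
    E = -0.5 * np.linalg.solve(Bstar, P)   # partial viscosities eta_gamma
    return E.sum()                         # total viscosity eta
\end{verbatim}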
Similarly, the reduced heat-flux equation for $\mathbf{h}_\i$ is given by
\begin{align}
\frac{5}{2}\frac{1}{\gamma_\i}\frac{p_\i}{T_\i}\nabla T_\i &= \sum_{\j\neq\i}\left( \frac{A_{\i\j}^{111}}{n_\i}\mathbf{h}_\i + \frac{B_{\i\j}^{111}}{n_\j}\mathbf{h}_\j\right)\nonumber\\
&+\sum_{\j\neq\i}\left( {A_{\i\j}^{110}}{m_\i}\mathbf{w}_\i + {B_{\i\j}^{110}}{m_\j}\mathbf{w}_\j\right),
\end{align}
which can then be written in a matrix form as
\begin{equation}
B^{*111}H+B^{*110}W=T,
\end{equation}
where $B^{*111}$ has the same form as $B^{*200}$ in Eq.\,(\ref{eq:bstar200}), with the $200$-index coefficients replaced by $111$-index coefficients.
$B^{*110}$ also has a similar form, with the $200$-index coefficients replaced by $110$-index coefficients, which multiply $m$ instead of $1/n$.
The column matrices $H$, $W$ and $T$ contain $\mathbf{h}_\gamma$, $\mathbf{w}_\gamma$, and $\frac{5}{2}\frac{1}{\gamma_\gamma}\frac{p_\gamma}{T_\gamma}\nabla T_\gamma$, respectively. The heat flux for all species can then similarly be written as
\begin{equation}
H=(B^{*111})^{-1}T-(B^{*111})^{-1}B^{*110}W.
\end{equation}
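The analogous sketch for the heat-flux closure, again with placeholder inputs and with $B^{*111}$ and $B^{*110}$ assembled as described above, is:
\begin{verbatim}
# Sketch of the heat-flux closure: solve B*111 H = T - B*110 W for
# the species heat fluxes (one spatial component).  Tcol collects the
# (5/2)(1/gamma_g)(p_g/T_g) grad T_g terms; W the flow velocities.
import numpy as np

def heat_fluxes(Bstar111, Bstar110, Tcol, W):
    rhs = np.asarray(Tcol, dtype=float) - Bstar110 @ np.asarray(W, dtype=float)
    return np.linalg.solve(Bstar111, rhs)
\end{verbatim}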
One can notice that approximating the heat fluxes in this manner reduces them to a linear combination of temperature gradients and flows. On substituting this value of the heat flux, the RHS of the momentum balance equation depends on only two terms, one proportional to the flows and the other proportional to the temperature gradients, hence recovering the familiar form of the collision term: a friction force dependent on the flow difference, and a thermal gradient force dependent on the difference of temperature gradients\cite{braginskii_transport_1965,stangeby_plasma_2000}. The heat-flux term then adds to the existing friction force term and augments it in the following manner
\begin{multline}
R^{1}_{\i\j,fric}=\left[ \left(A_{\i\j}^{100}m_\i \mathbf{w}_\i-\frac{A_{\i\j}^{101}}{n_\i}[(B^{*111})^{-1}B^{*110}W]_\i\right)\right.\\
+\left.\left(B_{\i\j}^{100}m_\j \mathbf{w}_\j-\frac{B_{\i\j}^{101}}{n_\j}[(B^{*111})^{-1}B^{*110}W]_\j\right)\right]
\end{multline}
and the thermal gradient force remains
\begin{multline}
R^{1}_{\i\j,therm}=\left[ \frac{A_{\i\j}^{101}}{n_\i}[(B^{*111})^{-1}T]_\i +\frac{B_{\i\j}^{101}}{n_\j}[(B^{*111})^{-1}T]_\j\right],
\end{multline}
where $[\dots]_\gamma$ indicates the element of the column matrix corresponding to the species $\gamma$.
To estimate the augmentation of the friction force, we compare the additional term to the original coefficients $A_{\i\j}^{100}m_\i$ and $B_{\i\j}^{100}m_\j$.
One can notice, however, that the friction force term $R^{1}_{\i\j,fric}$ now becomes quite complex, with the friction between any two species depending on the flow velocities of all species. This makes any straightforward comparison of the friction force coefficients cumbersome and prone to overinterpretation. Therefore, we only choose the coefficients of $\mathbf{w}_\i$ and $\mathbf{w}_\j$ from the heat-flux contributions $\mathbf{h}_\i$ and $\mathbf{h}_\j$ and compare them to the coefficients of the original friction force.
Furthermore, one can repeat the same procedure with a higher number of moments, as has been done in Ref.\,\onlinecite{yushmanov_diffusion_1980}; this is often referred to as the ``Zhdanov closure'' when applied to close a $21N$-moment set of equations with the moments of the collision operator given by Eq.\,(\ref{eq:collision_first_form}). It must be mentioned that the higher approximations to the viscosity, which depend on the electric and magnetic fields, can be decomposed into a linear combination of different viscosity contributions\cite{braginskii_transport_1965,alievskii_1963_transport,alievskii_viscous_1964}. Comparison of such approximations, with more nuanced field effects and a higher number of moments, is outside the scope of the current article, since a larger number of moments would require block matrices instead of the regular matrices used here; this will be part of our planned future work.
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/DT/viscosity}
\caption{Plot of the total viscosities for the deuterium-tritium plasma calculated using our coefficients, and Zhdanov's $13N$-moment multi-temperature and $21N$-moment single-temperature coefficients, plotted against the temperature ratio $T_D/T_T$. Notice the virtual overlap between the viscosities calculated from Zhdanov's $13N$-moment coefficients and ours in the vicinity of equal temperatures. In the legend, $\eta$ refers to our coefficients, the subscript ``$13N$'' indicates the viscosity calculated from Zhdanov's multi-temperature collisional coefficients, and similarly the subscript ``$21N$'' refers to that from Zhdanov's single-temperature collisional coefficients.}
\label{fig:DT_viscosity}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=\columnwidth]{figures/DT/friction_force_augmentationA}
\includegraphics[width=\columnwidth]{figures/DT/friction_force_augmentationB}
\caption{The augmentation of the friction force between deuterium and tritium by the contribution of the heat fluxes (in \%), for our coefficients and Zhdanov's, plotted against the temperature ratio $T_D/T_T$. In the legend, ``$13N$'' indicates the friction force augmentation for Zhdanov's multi-temperature collisional coefficients, and similarly ``$21N$'' refers to that for Zhdanov's single-temperature collisional coefficients.}
\label{fig:DT_friction_augmentation}
\end{figure}
For the particular case of a deuterium-tritium plasma, one can notice in Fig.\,(\ref{fig:DT_viscosity}) that the total viscosity $\eta$ for all three sets of coefficients follows the same trend in the vicinity of equal temperatures, with the $13N$-moment multi-temperature values practically overlapping with our exact values.
One can see in Tables (\ref{table:vis13n}) and (\ref{table:vis21n}) that the differences in viscosity for the $13N$-moment multi-temperature case are significantly lower than those of the $21N$-moment single-temperature case, and also that differences in viscosity generally seem to decrease with increasing mass ratio.
Furthermore, we can compare the viscosity values obtained here with the prescription for the parallel viscosity given by Braginskii (Ref.\,\onlinecite{braginskii_transport_1965}, page 229), who gives the magnitude of the viscosity as $\eta\sim nkT\tau_{\i\j}$, where $\tau_{\i\j}$ is the mean time between collisions of species $\i$ and $\j$, and $n=\sum_\i n_\i$. Values of $\tau_{\i\j}$ are provided by Braginskii for ions and electrons, but not for impurities. However, since Braginskii follows the Chapman-Enskog solution, we can use the original formula provided by Chapman and Cowling (Ref.\,\onlinecite{chapman_mathematical_1952}, Sec.\,9.81) and Zhdanov (Ref.\,\onlinecite{zhdanov_transport_2002}, Sec.\,3.1) for estimating the mean time between collisions $\tau_{\i\j}$,
\begin{equation}
\tau_{\i\j}=\frac{n\mu_{\i\j}[\mathcal{D}_{\i\j}]_1}{n_\j kT},
\end{equation}
where the first approximation to the diffusion coefficient $[\mathcal{D}_{\i\j}]_1$ is given by
\begin{equation}
[\mathcal{D}_{\i\j}]_1=\frac{3kT}{16n\mu_{\i\j}\Omega_{\i\j}^{11}}.
\end{equation}
On substituting the physical values from Table (\ref{table:values}) and the value of $\Omega_{\i\j}^{11}$ from Eq.\,(\ref{eq:kihara_formula}), one finds that the viscosity prescribed by Braginskii, $\eta\sim nkT\tau_{\i\j}$, falls within the same range of values as found in Fig.\,(\ref{fig:DT_viscosity}).
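A short sketch of this order-of-magnitude check is given below (our naming; $\Omega^{11}_{\i\j}$ can be supplied, e.g., by the shielded-Coulomb snippet above):
\begin{verbatim}
# Order-of-magnitude check eta ~ n k T tau_ab, with tau_ab from the
# first Chapman-Enskog approximation to the diffusion coefficient,
# [D_ab]_1 = 3 k T / (16 n mu_ab Omega_ab^{11}).  SI units.
KB = 1.380649e-23

def braginskii_eta(n_tot, n_b, mu, T, omega11):
    D1 = 3*KB*T / (16*n_tot*mu*omega11)
    tau = n_tot*mu*D1 / (n_b*KB*T)     # mean time between collisions
    return n_tot*KB*T*tau              # Braginskii-style viscosity
\end{verbatim}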
A similar trend can partly be observed in the augmentation of the friction force coefficients. In Fig.\,(\ref{fig:DT_friction_augmentation}), the increase in the friction force is indicated as a percentage of the original value. In general, with increasing mass ratio and decreasing impurity density, the augmentation of the term proportional to $\mathbf{w}_\i$ decreases while that of the term proportional to $\mathbf{w}_\j$ increases. This is expected, as the dominant part of the heat-flux contribution to the friction force arises from the species with the larger density. For the $21N$-moment single-temperature case, the augmentation of the term proportional to the background flow rises again for high mass ratios, as evidenced by the W-H case. The percentage differences in these computed physical quantities seem to have a minimum between the argon and tungsten cases, which indicates that even the relatively smaller differences in the $21N$-moment single-temperature coefficients may contribute to significant differences in physical quantities of interest. On this basis, we recommend caution in using the $21N$-moment single-temperature coefficients even for heavy impurities when temperature differences may be significant.
\begin{table}
\centering
\begin{tabular}{|l||c|c|c|c|}
\hline
&D-T&C-H&Ar-H&W-H \\
\hline\hline
\rule{0pt}{3ex}Viscosity $\eta$ &1.84&1.12 &0.86 &2.49 \\
Friction, $\mathbf{w}_\i$&163.03&69.21 &3.88 &4.34 \\
Friction, $\mathbf{w}_\j$&19.06&23.27 &22.27 & 15.24 \\
\hline
\end{tabular}
\caption{Table of percentage differences in the viscosity and in the friction force augmentations in $\mathbf{w}_\i$ and $\mathbf{w}_\j$ for the $13N$-moment multi-temperature coefficients in the $0.8-1.2$ temperature ratio range.}
\label{table:vis13n}
\end{table}
\begin{table}
\centering
\begin{tabular}{|l||c|c|c|c|}
\hline
&D-T&C-H&Ar-H&W-H \\
\hline\hline
\rule{0pt}{3ex}Viscosity $\eta$ &16.94&15.14 &6.14 &36.99\\
Friction, $\mathbf{w}_\i$&1509.65&336.12 &245.95 &119.45\\
Friction, $\mathbf{w}_\j$&11.20&10.13 & 3.80&231.76\\
\hline
\end{tabular}
\caption{Table of percentage differences in the viscosity and in the friction force augmentations in $\mathbf{w}_\i$ and $\mathbf{w}_\j$ for the $21N$-moment single-temperature coefficients in the $0.8-1.2$ temperature ratio range.}
\label{table:vis21n}
\end{table}
Thus, from these numerical results we can conclude: 1.\,the coefficients calculated from our expressions and from Zhdanov's $13N$-moment multi-temperature and $21N$-moment single-temperature expressions follow each other quite closely in the vicinity of equal temperatures; 2.\,the differences for the $21N$-moment single-temperature coefficients are higher than those for the $13N$-moment multi-temperature coefficients; 3.\,the differences in the coefficients become quite significant outside the vicinity of equal temperatures, but again the differences in the $13N$-moment multi-temperature coefficients are smaller than those in the $21N$-moment single-temperature case; 4.\,the differences in the higher-order moments become quite significant for the $21N$-moment single-temperature coefficients; 5.\,the differences in the coefficients generally decrease with increasing mass ratio; and 6.\,despite the agreement and similar trends in the physical quantities computed from the coefficients in the vicinity of equal temperatures, such as the viscosity and the friction force augmentation, the values computed from the $21N$-moment single-temperature coefficients may present significant differences as the mass ratio increases, especially in the augmentation of the friction force.
\section{Summary, Conclusions and Future Work}
In this article, we have generalized the calculation of moment-averaged collisional coefficients for a multi-component, multi-temperature plasma, without making any assumptions on the masses or temperatures of the colliding species, using the linearized Boltzmann collision operator for up to rank-2 tensorial moments. We started by taking an ansatz for the distribution function in terms of the Sonine polynomials and the irreducible tensorial Hermite monomials, Eq.\,(\ref{eq:ansatz}), and then expressed the moments in terms of these polynomials of different orders, Eq.\,(\ref{eq:sonine-hermite}). On taking moments defined in this manner and averaging over the Boltzmann collision operator, we obtained a generalized moment-averaged collision term, which expresses itself in terms of partial bracket integrals in Eq.\,(\ref{eq:collision_second_form}). We evaluated these partial bracket integrals analytically and derived general expressions up to rank 2. The collision operator found in this manner is valid for any range of masses and temperatures, but is restricted to flow differences much smaller than the thermal velocities of the plasma. Furthermore, the collision term automatically conserves energy and momentum due to the symmetry properties of the Boltzmann collision operator. The expressions for the collisional coefficients are very amenable to implementation in computer algebra systems; in our case they were implemented in Mathematica\cite{mathematica} and were verified to conserve mass, momentum and energy.
The collision coefficients are found to be essentially linear combinations of products of a term $A^{pqrl}_{\i\j/\i\i}$ depending purely on the masses and temperatures, and another depending purely on the collisional cross-section $\Omega^{lr}_{\i\j}$, which in turn depends only on the potential of interaction between the two colliding species and a factor $d_{\i\j}$. For the cross-sections, we chose a formula derived from the asymptotic values of the cross-section integral for the shielded Coulomb potential, Eq.\,(\ref{eq:kihara_formula}). We calculated our set of coefficients choosing $d_{\i\j}=\mu_{\i\j}/(2kT)$, as this choice agrees with the calculations of the effective cross-section integral for the shielded Coulomb potential in the previous literature.
Previously, Zhdanov et al had derived two sets of expressions for the collision coefficients and cross-sections, the first being derived for a multi-temperature plasma with terms provided up to $13N$ moments, and the second being derived at the plasma common temperature, with terms provided up to $21N$ moments. The derivation procedure for the former is not explicitly provided for all moments, and no expressions were provided for higher-order collisional moments. However, with our calculation procedure for the bracket integrals, and with the suitable factor $d_{\i\j}=\gamma_{\i\j}/2$, the provided moments were found to be accurately derived. The procedure for the latter $21N$-moment single-temperature set of coefficients was explicitly provided, but comes at the cost of assuming temperature differences much smaller than the plasma common temperature.
We then compared our expressions to the ones provided by Zhdanov for multiple cases of colliding species relevant to fusion. We find that in the vicinity of equal temperatures all sets of coefficients agree very well, but that they diverge away from equal temperatures. We further find that the differences in the $13N$-moment multi-temperature coefficients are much smaller than those in the $21N$-moment single-temperature ones (see Appendices \ref{sec:zhdanov13} and \ref{sec:zhdanov21}). We also find that the differences in the coefficients decrease as the mass ratio of the species increases. Furthermore, we use certain approximations in the Knudsen number to obtain reduced equations for the stress tensor and the heat flux, and use them to calculate the first inertial approximation to the total viscosity and the augmentation of the friction force contributed by the heat flux. We find that while the differences mostly follow the same patterns as the coefficients, for high mass ratio and low density, as in the case of a tungsten impurity in tokamak plasmas, the differences in these physical quantities become significant for the $21N$-moment single-temperature case. On this basis, we caution against using the $21N$-moment single-temperature coefficients where the temperature difference between species is significant. In the same vein, we find that the $13N$-moment multi-temperature coefficients agree better than the $21N$-moment single-temperature ones for small temperature differences, especially for modelling heavy-impurity transport, as for tungsten. For any significant temperature differences between species, we recommend using multi-temperature coefficients.
In the future, we plan to generalize the expressions for the linearized collision operator up to arbitrary rank-$m$ tensorial moments. We also plan to study in detail linear parallel closure schemes for the fluid equations derived from the moments under the low Knudsen number approximation, particularly the Zhdanov closure. Furthermore, the study of the closure can help prescribe the appropriate choice of the factor $d_{\i\j}$ through its effect on the transport coefficients calculated by the closure scheme. The aforementioned $21N$-moment single-temperature scheme derived by Zhdanov et al, often termed the Zhdanov closure, is of particular interest to us, since it has already been implemented in many fluid codes used by the fusion community.
\section*{Acknowledgments}
The authors would like to thank Prof.\,V.\,Rat (Université de Limoges, France) for his generous help in obtaining the relevant texts and for explaining technical details of his work in Ref.\,\onlinecite{rat_transport_2001}. Furthermore, the authors would also like to thank D.\,Brunetti (CCFE, UKAEA, UK), and O.\,Sauter and S.\,Brunner (SPC, EPFL, Switzerland) for their valuable comments on the paper. We also thank the anonymous referees for their comments, which helped improve the article significantly.
The projects leading to this publication have received funding from Excellence Initiative of Aix-Marseille Université - A*MIDEX, a French Investissement d’Avenir Programme, project TOP \& AMX-19-IET-013. This work has also been partly carried out within the framework of the EUROfusion Consortium and has received funding from the Euratom research and training programme 2014-2018 and 2019-2020 under grant agreement No.\,633053. The views and opinions expressed herein do not necessarily reflect those of the European Commission.
\section{Introduction}
A filament is a one-dimensional, smooth, connected structure
embedded in a multi-dimensional space.
Filaments arise in many applications.
For example, matter in the universe tends to concentrate
near filaments that comprise what is known
as the cosmic-web~\cite{Bond1996},
and the structure of that web can serve
as a tracer for estimating fundamental cosmological constants.
Other examples include neurofilaments and blood-vessel networks
in neuroscience~\cite{Lalonde2003},
fault lines in seismology~\cite{Fault},
and landmark paths in computer vision~\cite{Hile2009}.
Consider point-cloud data $X_1, X_2, \ldots, X_n$ in $\R^d$,
drawn independently from a density $p$ with compact support.
We define the filaments of the data distribution as the
\emph{ridges} of the probability density function $p$.
(See Section \ref{sec::ridges} for details.)
There are several alternative ways to formally define filaments~\cite{Eberly1996},
but the definition we use has several
useful statistical properties \cite{Genovese2012a}.
Figure~\ref{Fig:EX} shows two simple examples of point cloud data sets
and the filaments estimated by our method.
\begin{figure}
\centering
\subfigure
{
\includegraphics[scale =0.09]{figure1a}
}
\subfigure
{
\includegraphics[scale =0.09]{figure1b}
}
\caption{Examples of point cloud data with ridges (filaments).}
\label{Fig:EX}
\end{figure}
The problem of estimating filaments
has been studied in several fields
and a variety of methods have been developed,
including parametric~\cite{Stoica2007,Stoica2008};
nonparametric~\cite{Genovese2012b,Genovese2010,Genovese2012c};
gradient based~\cite{Genovese2012a,Sousbie2011,Novikov2006};
and topological~\cite{Dey2006,Lee1999,Cheng2005,Aanjaneya2012,Fabrizio2013}.
While all these methods provide filament estimates,
none provide an assessment of the estimate's uncertainty.
That filament estimates are random sets
is a significant challenge
in constructing valid uncertainty measures
\cite{Molchanov2005}.
In this paper, we introduce a local uncertainty measure for filament
estimates.
We characterize the asymptotic distribution of estimated filaments
and use it to
derive consistent estimates of the local uncertainty
measure and
to construct valid confidence sets
for the filament based on bootstrap resampling.
Our main results are as follows:
\begin{itemize}
\item We show that if the data distribution is smooth, so are the estimated filaments (Theorem~\ref{S1}).
\item We find the asymptotic distribution of the estimated local uncertainty and its convergence rate (Theorems~\ref{LU2} and~\ref{LU4}).
\item We construct valid and consistent, bootstrap confidence sets for the local uncertainty,
and thus pointwise confidence sets for the filament (Theorem~\ref{SB2}).
\end{itemize}
We apply our methods to point cloud data from examples in Astronomy
and Seismology
and demonstrate that they yield useful confidence sets.
\section{Background}
\subsection{Density Ridges} \label{sec::ridges}
Let $X_1,\cdots,X_n$ be a random sample from a
distribution with compact support in $\R^d$ that has density $p$.
Let $g(x) = \nabla p(x)$ and $H(x)$
denote the gradient and Hessian, respectively, of $p(x)$.
We begin by defining the \emph{ridges} of $p$,
as defined in~\cite{Genovese2012a,Ozertem2011,Eberly1996}.
While there are many possible definitions of ridges,
this definition gives stability in the underlying density,
estimability at a good rate of convergence, and fast reconstruction algorithms,
as described in \cite{Genovese2012a}.
In the rest of this paper, the filaments to be estimated are just the one-dimensional
ridges of $p$.
A mode of the density $p$ -- where the gradient $g$ is zero and all the eigenvalues of $H$
are negative -- can be viewed as a zero-dimensional ridge.
Ridges of dimension $0 < s < d$
generalize this to the zeros of a \emph{projected gradient}
where the $d - s$ smallest eigenvalues of $H$ are negative.
In particular for $s = 1$,
\begin{equation} \label{eq::ridge-def}
R \equiv \mbox{Ridge}(p) = \{x: G(x) = 0, \ \lambda_2(x) < 0\},
\end{equation}
where
\begin{align}
G(x) = V(x)V(x)^T g(x)
\end{align}
is the projected gradient.
Here, the matrix $V$ is defined as $V(x)=[v_2(x),\cdots, v_d(x)]$
for eigenvectors $v_1(x),v_2(x),\ldots,v_d(x)$ of $H(x)$
corresponding to eigenvalues
$\lambda_1(x) \ge \lambda_2(x)\ge \cdots \ge \lambda_d(x)$.
Because one-dimensional ridges are the primary concern of this paper,
we will refer to $R$ in (\ref{eq::ridge-def}) as the ``ridges'' of $p$.
Intuitively, at points on the ridge,
the gradient is aligned with the eigenvector of the largest eigenvalue
and the density curves downward sharply in directions orthogonal to that.
When $p$ is smooth and
the \emph{eigengap}
$\beta(x) = \lambda_1(x) - \lambda_2(x)$
is positive,
the ridges have all the essential properties of filaments.
That is, $R$ decomposes into a set of smooth curve-like
structures with high density and connectivity.
$R$ can also be characterized through Morse theory~\cite{Guest2001}
as the collection of $(d-1)$-critical-points
along with the local maxima,
also known as the set of 1-ascending manifolds with their local-maxima limit points~\cite{Sousbie2011}.
\subsection{Ridge Estimation}
We estimate the ridge in three steps: density estimation, thresholding, and ascent.
First, we estimate $p$ from the data $X_1,\ldots, X_n$.
Here, we use the well-known kernel density estimator (KDE) defined by
\begin{align}
\hat{p}_n(x) = \frac{1}{nh^d} \sum_{i=1}^n K\left(\frac{||x-X_i||}{h}\right),
\end{align}
where the kernel $K$ is a smooth, symmetric density function such as a Gaussian
and $h\equiv h_n>0$ is the bandwidth which controls the smoothness of the estimator.
Because ridge estimation can tolerate a fair degree of oversmoothing
(as shown in \cite{Genovese2012a}), we select $h$ by a simple rule
that tends to oversmooth somewhat, the multivariate Silverman's rule~\cite{Silverman1986}.
Under weak conditions, this estimator is consistent;
specifically, $||\hat p_n - p||_\infty\stackrel{P}{\to}0$ as $n\to \infty$.
(We say that $X_n$ converges in probability to $b$, written
$X_n \stackrel{P}{\to}b$ if, for every $\epsilon>0$,
$P(|X_n - b|>\epsilon)\to 0$ as $n\to\infty$.)
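For concreteness, a minimal Python sketch of this estimator with a Gaussian kernel and one common form of the multivariate Silverman rule is given below; \texttt{scipy.stats.gaussian\_kde} provides an equivalent off-the-shelf implementation, and the helper names are ours.
\begin{verbatim}
# Gaussian-kernel KDE with a simple multivariate Silverman bandwidth.
import numpy as np

def silverman_h(X):
    n, d = X.shape
    sigma = X.std(axis=0).mean()               # average marginal scale
    return sigma * (4.0/(d + 2))**(1.0/(d + 4)) * n**(-1.0/(d + 4))

def kde(x, X, h):
    """Evaluate the KDE at a single point x from data X (n, d)."""
    n, d = X.shape
    u2 = np.sum((x - X)**2, axis=1) / h**2
    K = np.exp(-0.5*u2) / (2*np.pi)**(d/2)     # Gaussian kernel
    return K.sum() / (n * h**d)
\end{verbatim}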
Second, we threshold the estimated density to eliminate low-probability regions
and the spurious ridges produced in $\hat p_n$ by random fluctuations.
Here, we remove points with estimated density less than $\tau ||\hat{p}_n||_{\infty}$
for a user-chosen threshold $0 < \tau < 1$.
Finally, for a set of points above the density threshold,
we follow the ascent lines of the projected gradient to the ridge,
which is the subspace constrained mean
shift (SCMS) algorithm~\cite{Ozertem2011}.
This procedure can be viewed as estimating the ridge by applying the Ridge operator to $\hat p_n$:
\begin{align}
\hat{R}_n = \mbox{Ridge}(\hat{p}_n).
\end{align}
Note that $\hat{R}_n$ is a random set.
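A simplified Python sketch of one SCMS iterate for the Gaussian-kernel estimator is shown below; it is our reading of the algorithm (normalization constants, which cancel, are dropped), not a transcription of Ref.~\cite{Ozertem2011}.
\begin{verbatim}
# One subspace-constrained mean-shift (SCMS) step: project the
# mean-shift vector onto the span of the Hessian eigenvectors
# orthogonal to the top one, so iterates ascend to the ridge.
import numpy as np

def scms_step(x, X, h):
    n, d = X.shape
    diff = X - x                                      # (n, d)
    K = np.exp(-0.5*np.sum(diff**2, axis=1) / h**2)   # kernel weights
    # Hessian of the KDE up to a positive constant factor:
    H = np.einsum('i,ij,ik->jk', K, diff, diff)/h**2 - K.sum()*np.eye(d)
    vals, vecs = np.linalg.eigh(H)    # eigenvalues in ascending order
    V = vecs[:, :d-1]                 # v_2, ..., v_d (drop top eigvec)
    m = diff.T @ K / K.sum()          # mean-shift vector
    return x + V @ (V.T @ m)          # projected-gradient ascent step
\end{verbatim}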
\subsection{Bootstrapping and Smooth Bootstrapping}
The bootstrap~\cite{Efron1979} is a statistical method for
assessing the variability of an estimator.
Let $X_1,\ldots, X_n$ be a random sample from a distribution $P$
and let $\theta (P)$ be some functional of $P$ to be estimated,
such as the mean of the distribution or (in our case) the ridge set of its density.
Given some procedure $\hat\theta(X_1,\ldots, X_n)$ for estimating $\theta(P)$
we estimate the variability of $\hat\theta$ by \emph{resampling} from
the original data.
Specifically, we draw a \emph{bootstrap sample}
$X_1^*,\ldots, X_n^*$ independently and with replacement
from the set of observed data points $\{X_1,\ldots,X_n\}$
and compute the estimate $\hat\theta^* = \hat\theta(X_1^*,\ldots, X_n^*)$
using the bootstrap sample as if it were the data set.
This process is repeated $B$ times, yielding $B$ bootstrap
samples and corresponding estimates $\hat\theta^*_1,\ldots,\hat\theta^*_B$.
The variability in these estimates is then used to assess the variability
in the original estimate $\hat\theta \equiv \hat\theta(X_1,\ldots, X_n)$.
For instance, if $\theta$ is a scalar, the variance of $\hat\theta$
is estimated by
$$
\frac{1}{B}\sum_{b=1}^B (\hat\theta_b^* - \overline{\theta})^2
$$
where
$\overline{\theta} = \frac1B \sum_{b=1}^B \hat\theta_b^*$.
Under suitable conditions, it can be shown that
this bootstrap variance estimate -- and confidence sets produced from it --
are consistent.
The \emph{smooth bootstrap} is a variant of the bootstrap that can be useful in function estimation problems;
the procedure is the same except that
the bootstrap sample is drawn from the estimated density $\hat p$
instead of from the original data. We use both variants below.
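A generic sketch of the resampling loop described above is given below (the smooth-bootstrap variant would instead draw the resample from $\hat p$):
\begin{verbatim}
# Bootstrap variance of a scalar estimator theta_hat.
import numpy as np

def bootstrap_variance(X, theta_hat, B=1000, seed=0):
    rng = np.random.default_rng(seed)
    n = len(X)
    stats = np.array([theta_hat(X[rng.integers(0, n, size=n)])
                      for _ in range(B)])
    return stats.var()

# Example: variability of the sample mean.
X = np.random.default_rng(1).normal(size=200)
print(bootstrap_variance(X, np.mean))
\end{verbatim}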
\section{Methods}
We measure the \emph{local uncertainty} in a filament (ridge) estimator $\hat R_n$
by the expected squared distance between a specified point on the original filament $R$
and the estimated filament:
\begin{equation} \label{eq::local-uncertainty}
\rho^2_n(x) = \begin{cases}
\E_{p} d^2(x, \hat{R}_n) & \mbox{if $x \in R$} \\
0 & \mbox{\rm otherwise}
\end{cases},
\end{equation}
where $d(x, A)$ is the distance function:
\begin{align}
d(x,A) = \underset{y\in A}{\inf}|x-y|.
\label{M:eq1}
\end{align}
The local uncertainty measure can be understood as the expected
squared distance from a given point on the original filament to the estimated
filament, based on a sample of size $n$. The theoretical analysis of $\rho^2_n(x)$ is
given in Theorem~\ref{LU4}.
\subsection{Estimating Local Uncertainty}
Because $\rho^2_n(x)$ is defined in terms of the unknown density $p$ and the unknown filament set $R$,
it must be estimated.
We use bootstrap resampling to do this, defining an estimate of local uncertainty
\emph{on the estimated filaments}.
For each of $B$ bootstrap samples, $X_1^{*(b)},\cdots,X^{*(b)}_n$,
we compute the kernel density estimator
$\hat{p}^{*(b)}_n$,
the ridge estimate
$\hat{R}^{*(b)}_n = \mbox{Ridge}(\hat{p}^{*(b)}_n)$,
and the squared distances
$\rho^{2}_{(b)}(x) = d^2(x, \hat{R}^{*(b)}_n)$ for all $x\in\hat R_n$.
We estimate $\rho^2_n(x)$ by
\begin{align}
\hat{\rho}^2_n(x) = \frac1B \sum_{b=1}^B \rho^{2}_{(b)}(x) \,\approx \E(d^2(x, \hat{R}^*_n)|X_1,\cdots,X_n),
\end{align}
for each $x \in \hat{R}_n$,
where the expectation is from the (known) bootstrap distribution.
Algorithm 1 provides pseudo-code for this procedure,
and Theorem~\ref{SB2} shows that the estimate is consistent
under smooth bootstrapping.
\begin{algorithm}
\caption{Local Uncertainty Estimator}
\begin{algorithmic}
\State \textbf{Input:} Data $\{ X_1,\ldots,X_n\}$.
\State 1. Estimate the filament from $\{ X_1,\ldots,X_n\}$; denote the estimate by $\hat{R}_n$.
\State 2. Generate $B$ bootstrap samples: $X^{*(b)}_1,\ldots,X^{*(b)}_n$ for $b = 1,\ldots,B$.
\State 3. For each bootstrap sample, estimate the filament, yielding $\hat{R}^{*(b)}_n$ for $b = 1,\ldots,B$.
\State 4. For each $x\in \hat{R}_n$, calculate $\rho^{2}_{(b)}(x) = d^2(x,\hat{R}^{*(b)}_n)$, $b=1,\ldots,B$.
\State 5. Define $\hat{\rho}^2_n(x) = \mbox{mean}\{\rho^{2}_{(1)}(x),\ldots,\rho^{2}_{(B)}(x)\}$.
\smallskip
\State \textbf{Output:} $\hat{\rho}^2_n(x)$.
\end{algorithmic}
\end{algorithm}
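A compact Python sketch of Algorithm 1 follows; \texttt{ridge\_estimator} is an assumed black box (e.g.\ KDE followed by SCMS) that returns the ridge as an array of points, and the vectorized distance computation is only one possible implementation.
\begin{verbatim}
import numpy as np

def local_uncertainty(X, ridge_estimator, B=100, rng=None):
    rng = np.random.default_rng(rng)
    n = len(X)
    R_hat = ridge_estimator(X)                      # step 1
    rho2 = np.zeros((B, len(R_hat)))
    for b in range(B):                              # steps 2-4
        Xb = X[rng.integers(0, n, size=n)]          # bootstrap sample
        Rb = ridge_estimator(Xb)
        # squared distance from each x in R_hat to the bootstrap ridge
        rho2[b] = ((R_hat[:, None, :] - Rb[None, :, :])**2).sum(-1).min(1)
    return R_hat, rho2, rho2.mean(axis=0)           # step 5
\end{verbatim}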
\subsection{Pointwise Confidence Sets}
Confidence sets provide another useful assessment of uncertainty.
A $1-\alpha$ confidence set is a random set computed from the data
that contains an unknown quantity with at least probability $1 - \alpha$.
We can construct a pointwise confidence set for filaments from the
distance function~\eqref{M:eq1}. For each point $x\in\hat{R}_n$, let
$r_{1-\alpha}(x)$ be the
$(1-\alpha)$ quantile of $d(x, \hat{R}^{*}_n)$
under the bootstrap distribution.
Then, define
\begin{align}
C_{1-\alpha}(X_1,\cdots,X_n) = \bigcup_{x\in \hat{R}_n} B(x,r_{1-\alpha}(x)).
\end{align}
This confidence set captures the local uncertainty:
for a point $x\in\hat{R}_n$ with low (high) local
uncertainty, the associated radius $r_{1-\alpha}(x)$ is
small (large).
Note, however, that the confidence set attains $1-\alpha$ coverage only around each individual point;
the coverage of the entire filament set is lower. That is, each point is covered with high probability, but the probability of simultaneously covering all points (the whole filament set) can be smaller.
\begin{algorithm}
\caption{Pointwise Confidence Set}
\begin{algorithmic}
\State \textbf{Input:} Data $\{ X_1,\ldots,X_n\}$; significance level $\alpha$.
\State 1. Estimate the filament from $\{ X_1,\ldots,X_n\}$; denote this by $\hat{R}_n$.
\State 2. Generate bootstrap samples $\{ X^{*(b)}_1,\ldots,X^{*(b)}_n\}$ for $b = 1,\ldots,B$.
\State 3. For each bootstrap sample, estimate the filament, call this $\hat{R}^{*(b)}_n$.
\State 4. For each $x\in \hat{R}_n$, calculate $\rho^{2}_{(b)}(x) = d^2(x,\hat{R}^{*(b)}_n)$, $b=1,\ldots,B$.
\State 5. Let $r_{1-\alpha}(x)= Q_{1-\alpha}(\rho_{(1)}(x),\ldots,\rho_{(B)}(x))$, where $Q_{1-\alpha}$ denotes the empirical $(1-\alpha)$ quantile.
\smallskip
\State \textbf{Output:} $\bigcup_{x \in \hat{R}_n} B(x, r_{1-\alpha}(x))$ where $B(x, r)$ is the closed ball with center $x$ and radius $r$.
\end{algorithmic}
\end{algorithm}
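Reusing the bootstrap distances from the sketch of Algorithm 1, Step 5 of Algorithm 2 is a one-line quantile computation (the ball union itself is typically only rendered graphically, as in the figures below):
\begin{verbatim}
import numpy as np

def pointwise_radius(rho2, alpha=0.1):
    # rho2 has shape (B, M): squared bootstrap distances for M ridge
    # points; r_{1-alpha}(x) is the empirical (1-alpha) quantile of
    # the distances d(x, R*_b) = sqrt(rho2_b(x))
    return np.quantile(np.sqrt(rho2), 1 - alpha, axis=0)
\end{verbatim}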
\section{Theoretical analysis}
For the filament set $R$, we assume that it can be decomposed into a finite partition
\[
\{R_1,\cdots,R_k\}
\]
such that each $R_i$ is a one-dimensional manifold. Such a partition can be constructed by the equation of traversal on page 56 of~\cite{Eberly1996}. For each $R_i$, we can parametrize it by a function $\phi_i(s): [0,1]\to R_i$ obtained from this traversal equation with suitable scaling.
For simplicity, in the following proofs we assume that the filament set $R$ is a single $R_i$, so that we can construct the parametrization $\phi$ easily. All theorems and lemmas we prove can be applied to the whole filament set $R=\bigcup_i R_i$ by repeating the process for each individual $R_i$.
\subsection{Smoothness of Density Ridges}
To study the properties of the uncertainty estimator,
we first need to establish some results about the smoothness of the filament.
The following theorem provides conditions for smoothness of the filaments.
Let $\mathbf{C}^{k}$ denote the collection of $k$ times continuously
differentiable functions.
\begin{thm}[Smoothness of Filaments] \label{S1}
Let $\phi(s): [0,1]\to R$ be a parameterization of the filament set $R$,
and for $s_0\in[0,1]$, let $U\subset R$ be an open set containing $\phi(s_0)$.
If $p$ is $\mathbf{C}^{k}$ and the eigengap $\beta(x) > 0$ for $x\in U$,
then $\phi(s)$ is $\mathbf{C}^{k-2}$ for $s\in \phi^{-1}(U)$.
\end{thm}
Theorem~\ref{S1} says that filaments from a smooth density will be smooth.
Moreover, estimated filaments from the KDE will be smooth if the kernel function is smooth.
In particular, if we use the Gaussian kernel, which is $\mathbf{C}^{\infty}$, then the
corresponding filaments will be $\mathbf{C}^{\infty}$ as well.
\subsection{Frenet Frame}
\begin{figure}
\center
\includegraphics[scale =0.5]{figure2}
\caption{An example of a Frenet frame in two dimensions.}
\label{FF1}
\end{figure}
In the arguments that follow, it is useful to have a well-defined
``moving'' coordinate system along a smooth curve.
Let $\gamma: \R \to \R^d$ be an arc-length parametrization for a $\mathbf{C}^{k+1}$
curve with $k\ge d$.
The \textit{Frenet frame}~\cite{Kuhnel2002}
along $\gamma$
is a smooth family of orthonormal bases at $\gamma(s)$
\begin{align*}
e_1(s), e_2(s), \cdots, e_d(s)
\end{align*}
such that $e_1(s) = \gamma'(s)$ determines the direction of the curve.
The other basis elements $e_2(s),\cdots, e_d(s)$ are called the curvature vectors
and can be determined by a Gram-Schmidt construction.
Assume the density is $\mathbf{C}^{d+3}$. We can construct a Frenet
frame for each point on the filaments. Let $e_1(s), \cdots, e_d(s)$ be
the Frenet frame of $\phi(s)$ such that
\begin{align*}
e_1(s) &= \frac{\phi'(s)}{|\phi'(s)|}\\
e_j(s) &= \frac{\tilde{e}_j(s)}{|\tilde{e}_j(s)|}\\
\tilde{e}_j(s) &= \phi^{(j)}(s) - \sum_{i=1}^{j-1} \langle\phi^{(j)}(s), e_i(s)\rangle e_i(s), \quad j=2,\cdots, d,
\end{align*}
where $\phi^{(j)}(s)$ is the $j$th derivative of $\phi(s)$ and $\langle a,b\rangle$ is the inner product of the vectors $a$ and $b$.
An important fact is that the basis element $e_j(s)$ is
$\mathbf{C}^{d+3-j}$ for $j=1,2,\cdots, d$. Frenet frames are widely used in
dynamical systems because they provide a unique and continuous frame to
describe trajectories.
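A small numerical sketch of this Gram--Schmidt construction, with the derivative vectors supplied at a single parameter value (the example curve is ours):
\begin{verbatim}
import numpy as np

def frenet_frame(derivs):
    # Gram-Schmidt on [phi'(s), phi''(s), ..., phi^{(d)}(s)]
    basis = []
    for v in derivs:
        w = v - sum(np.dot(v, e) * e for e in basis)  # remove projections
        basis.append(w / np.linalg.norm(w))
    return basis  # e_1(s), ..., e_d(s)

# example: the planar circle phi(s) = (cos s, sin s) at s = 0
print(frenet_frame([np.array([0.0, 1.0]), np.array([-1.0, 0.0])]))
# e_1 = (0, 1) is the tangent, e_2 = (-1, 0) points toward the center
\end{verbatim}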
\subsection{Normal space and distance measure}
The \emph{reach} of $R$, denoted by $\kappa(R)$, is the largest real number $r$
such that each
$x\in \{y:\ d(y,R) \leq r\}$ has a unique projection onto $R$~\cite{Federer1959}.
We define the normal space $L(s)$ of $\phi(s)$ by
\begin{align}
L(s)= \Bigl\{\sum_{i=2}^d \alpha_i e_i(s)\in \R^d: \alpha_2^2+\cdots +\alpha_d^2 \leq \kappa(R)^2\Bigr\}.
\end{align}
Note that since the second derivative of $\phi(s)$ exists and is finite, the reach is bounded away from zero.
\begin{figure}
\center
\includegraphics[scale =0.4]{figure3}
\caption{An example of the normal space $L(s)$ along a ridge in three dimensions.}
\label{FF2}
\end{figure}
Finally, define the Hausdorff distance between two subsets of $\R^d$
by
\begin{equation}
d_H(A,B) = \inf\{\epsilon:\; A \subset B \oplus \epsilon \mathand B \subset A \oplus \epsilon\},
\end{equation}
where $A \oplus \epsilon = \bigcup_{x\in A} B(x,\epsilon)$ and $B(x,\epsilon)=\{y:\; \norm{x-y} \le \epsilon\}$.
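For finite point sets -- which is how filaments are represented numerically -- the Hausdorff distance can be computed directly from the pairwise distances. A brute-force sketch:
\begin{verbatim}
import numpy as np

def hausdorff(A, B):
    # d_H(A, B) for finite point sets stored as the rows of A and B
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    return max(D.min(axis=1).max(), D.min(axis=0).max())

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.5], [1.0, 0.0]])
print(hausdorff(A, B))  # 0.5
\end{verbatim}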
\subsection{Local uncertainty}
Let the estimated filament be the ridge of KDE.
We assume the following:
\begin{itemize}
\item[(K1)] The kernel $K$ is $\mathbf{C}^{d+3}$.
\item[(K2)] The kernel $K$ satisfies condition $K_1$ on page 5 of~\cite{Gine2002}.
\item[(P1)] The true density $p$ is in $\mathbf{C}^{d+3}$.
\item[(P2)] The ridges of $p$ have positive reach.
\item[(P3)] The ridges of $p$ are closed; see, for example, Figure~\ref{Fig:EX}(b).
\end{itemize}
(K1) and (K2) are very mild assumptions on the kernel function. For
instance, Gaussian kernels satisfy both. (P1--P3) are
assumptions on the true density. (P1) is a smoothness condition.
(P2) is a smoothness assumption on the ridge.
(P3) is included to
avoid boundary bias when estimating the filament near endpoints.
Now we introduce some norms and semi-norms
characterizing the smoothness of the density $p$.
A vector $\alpha = (\alpha_1,\ldots,\alpha_d)$
of non-negative integers is called a multi-index
with $|\alpha| = \alpha_1 + \alpha_2 + \cdots + \alpha_d$
and corresponding derivative operator
$$
D^\alpha = \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}} \cdots \frac{\partial^{\alpha_d}}{\partial x_d^{\alpha_d}},
$$
where $D^\alpha f$ is often written as $f^{(\alpha)}$.
For $j = 0,\ldots, 4$, define
\begin{align}
\norm{p}_{\infty}^{(j)} = \underset{\alpha:\; |\alpha| = j}{\max} \underset{x\in\R^d}{\sup} |p^{(\alpha)}(x)|.
\end{align}
When $j = 0$, we have the infinity norm of $p$; for $j > 0$, these are semi-norms.
We also define
\begin{align}
\norm{p}^*_{\infty, k} = \underset{j=0,\cdots,k}{\max} \norm{p}^{(j)}_{\infty}.
\end{align}
It is easy to verify that this is a norm. Next we recall a theorem
from~\cite{Genovese2012a} which establishes the link between the Hausdorff distance
of $R$ and $\hat{R}_n$ and a norm on the difference of the densities.
\begin{thm}[Theorem 6 in~\cite{Genovese2012a}] \label{LU0}
Under the conditions in~\cite{Genovese2012a}, when $||p-\hat{p}_n||^*_{\infty, 3}$ is sufficiently small, we have
\begin{align*}
d_H(R,\hat{R}_n) = O_P(||p-\hat{p}_n||^*_{\infty, 2}).
\end{align*}
\end{thm}
This theorem tells us that we have convergence in Hausdorff distance for estimated filaments.
\begin{lem}[Local parametrization] \label{LU1}
For the estimated filament $\hat{R}_n$,
define $\hat{\phi}_n(s) = L(s)\cap \hat{R}_n$
and $\Delta_n = d_H(\hat{R}_n, R)$.
Assume (K1), (K2), (P1), (P2).
If $||p-\hat{p}_n||^*_{\infty, 4}\stackrel{P}{\to} 0$,
then, when $\Delta_n$ is sufficiently small, \vspace{-3ex}
\begin{enumerate}
\item $\hat{\phi}_n(s)$ is a singleton for all $s$ except in a set $S_n$ containing the boundaries
with $\Length(S_n)\leq O(\Delta_n)$.
\item $\frac{d(x,\hat{R}_n) - |\phi(s)-\hat{\phi}_n(s)|}{|\phi(s)-\hat{\phi}_n(s)|} = o_P(1)$
for $x$ not at the boundary of filaments.
\item If in addition (P3) holds, then $S_n = \emptyset$.
\end{enumerate}
\vspace*{-4ex}
\end{lem}
Notice that a sufficient condition for Lemma~\ref{LU1} is $\frac{nh^{d+8}}{\log n}\rightarrow \infty$ by
Lemma~\ref{LUlem2}.
\begin{figure}
\center
\includegraphics[scale =0.4]{figure4}
\caption{An example for $\hat{\phi}_n(s)$.}
\label{FF3}
\end{figure}
Claim 1 follows because
the Hausdorff distance is less than $\min\{\frac{\kappa(R)}{2},
\frac{\kappa(\hat{R}_n)}{2}\}$. This holds
since, by Theorem~\ref{LU0}, the Hausdorff distance is controlled by
$||p-\hat{p}_n||^*_{\infty, 2}$, and we have a stronger convergence
assumption. The only exception is points near the boundaries of $R$,
since $\hat{R}_n$ can be shorter than $R$ in this case. But this can only
occur on a set of length at most of the order of the Hausdorff distance.
Claim 2 follows from the fact that the normal spaces of $\phi(s)$ and $\hat{\phi}_n(s)$
are asymptotically the same.
If we assume (P3), then $R$ has no boundary, so that $S_n$ is an empty set.
Note that Claim 2 validates the approximation of $d(x,\hat{R}_n)$
by $|\phi(s)-\hat{\phi}_n(s)|$, so the limiting behavior of the local uncertainty $d(x,\hat{R}_n)$
is the same as that of $|\phi(s)-\hat{\phi}_n(s)|$. In the following, we study the limiting distribution of $|\phi(s)-\hat{\phi}_n(s)|$.
We define the \emph{subspace derivative} by $\nabla_{L} = L^T\nabla$,
which in turn gives the \emph{subspace gradient}
\begin{align*}
g(x;L) = \nabla_{L} p(x)
\end{align*}
and the \emph{subspace Hessian}
\begin{align*}
H(x;L) = \nabla_L \nabla_L p(x).
\end{align*}
Then we have the following theorem on local uncertainty,
where $X_n\stackrel{d}{\to} Y$ denotes convergence in distribution.
\begin{thm}[Local uncertainty theorem] \label{LU2}
Assume (K1),(K2),(P1),(P2). If $\frac{nh^{d+8}}{\log n}\rightarrow \infty, nh^{d+10}\rightarrow 0$, then
\begin{align*}
\sqrt{nh^{d+2}}&([\phi(s)-\hat{\phi}_n(s)]-L(s)\mu(s) h^2) \overset{d}{\rightarrow} L(s) A(s)
\end{align*}
where
\begin{align*}
A(s) &\overset{d}{=} N(0, \Sigma(s))\in\R^{d-1}\\
\mu(s) & = c(K) H(\phi(s);L(s))^{-1} \nabla_{L(s)} (\nabla \bullet \nabla) p(\phi(s))\\
\Sigma(s)&= H(\phi(s);L(s))^{-1} \nabla_{L(s)} K (\nabla_{L(s)} K)^T H(\phi(s);L(s))^{-1} p(\phi(s))
\end{align*}
for all $\phi(s)\in R\backslash S_n$ with $\Length(S_n)\leq O(d_H(R,\hat{R}_n))$.
\end{thm}
Theorem \ref{LU2} describes the asymptotic behavior of $\phi(s)-\hat{\phi}_n(s)$, which is
asymptotically equivalent to the local uncertainty. $L(s)\mu(s)h^2$ is the bias component
and $L(s)A(s)$ is the stochastic variation component, in which the parameter
$\Sigma(s)$ controls the amount of variation.
The parameters $\mu(s)$ and $\Sigma(s)$ link
the geometry of the local density function to the local uncertainty.
\textbf{Remarks:} \vspace{-3ex}
\begin{itemize}
\item Note that $\frac{nh^{d+8}}{\log n}\rightarrow \infty$ is
a sufficient condition for uniform convergence of the derivatives up
to fourth order. This uniform convergence, together with
(P2) and Theorem~\ref{S1}, ensures that the reach of
$\hat{R}_n$ converges to the reach (condition number) of $R$.
\end{itemize}
By Theorem \ref{LU2} and Claim 2 in Lemma \ref{LU1},
we know the asymptotic distribution of the local
uncertainty $d(x,\hat{R}_n)$. So we have
the following theorem on the local uncertainty measure.
\begin{thm}
\label{LU4}
Define the local uncertainty measure by
\begin{align*}
\rho^2_n(\phi(s)) = \E(d(\phi(s), \hat{R}_n)^2),
\end{align*}
where $\phi(s)$ ranges over all points in $R$.
Assume that (K1), (K2), (P1), and (P2) hold.
If $\frac{nh^{d+8}}{\log n}\rightarrow \infty, nh^{d+10}\rightarrow 0$ then
\begin{align*}
\rho^2_n(\phi(s)) = \mu(s)^T\mu(s) h^4 + &\frac{1}{nh^{d+2}} {\rm Trace}(\Sigma(s)^2) + \\
&o(h^4) + o(\frac{1}{nh^{d+2}}),
\end{align*}
for all $\phi(s)\in R\backslash S_n$ with $\Length(S_n)\leq O(d_H(R,\hat{R}_n))$.
\end{thm}
This theorem is a direct application of Theorem~\ref{LU2}; it
gives the convergence rate of the local uncertainty measure.
If we assume (P3), then Theorems~\ref{LU2} and~\ref{LU4} can be applied to all points on the filaments.
\subsection{Bootstrapping Result}
For the bootstrapping result, we assume (P3) for convenience. Note that
if we do not assume (P3), the result still holds for points not close
to terminals. Let $q_m$ be a sequence of densities satisfying (P1).
We want to study the local uncertainty of the associated filaments, so we
work with a random sample generated from $q_m$ and use it
to estimate the filaments of $q_m$. Define
$\psi_m(s)$ and $L_m^*(s)$ as a parametrization of the filaments of $q_m$ and the
associated normal space, respectively. Then we have the following
convergence theorem for a sequence of densities converging to $p$.
\begin{figure}
\center
\includegraphics[scale =0.4]{figure5}
\caption{An example for $\xi_m(s)$ along with $\phi, \psi_m$.}
\label{SBplot1}
\end{figure}
\begin{thm} \label{SB2}
Assume that (P1--3) hold.
Let $q_m$ be a sequence of probability densities
that satisfy (P1), (P2), and $\norm{p - q_m}^*_{\infty,3} \to 0$ as $m\to\infty$.
If $d_H(R(q_m), R(p))$ is sufficiently small,
we can find a bijection $\xi_m: [0,1]\mapsto [0,1]$ such that \vspace{-3ex}
\begin{enumerate}
\item $|\psi_m(\xi_m(s))- \phi(s)| \to 0$.
\item $\left|\frac{<\phi'(s),\psi'_m(\xi_m(s))>}{|\psi'_m(\xi_m(s))| |\phi'(s)|} \right| \to 1$.
\item $\underset{s\in [0,1]}{\sup} |\mu(s;q_m)-\mu(s;p)|\to 0$.
\item $\underset{s\in [0,1]}{\sup} |\Sigma(s;q_m)-\Sigma(s;p)|\to 0$.
\end{enumerate}
In particular, if we use $\hat{p}_n=q_n$ with $\frac{nh^{d+8}}{\log n}\rightarrow \infty, nh^{d+10}\rightarrow 0$,
then the above result holds with high probability.
\end{thm}
Note that the local uncertainty measure has unknown support and unknown parameters, as given in Theorem \ref{LU4}. Claim 1 shows convergence of the support, while Claims 3 and 4 prove consistency of the parameters controlling the uncertainty. The theorem thus states that if a sequence of densities converges to a limiting density, then the corresponding local uncertainties converge as well.
\textbf{Remarks:} \vspace{-3ex}
\begin{itemize}
\item Notice that $\psi_m(\xi_m(s))$ need not be the same as
$L(s)\cap R(q_m)$. The latter one lives in the normal space of
$\phi(s)$ but the former need only be a continuously
bijective mapping. The projection that maps $s$ to the point
$L(s)\cap R(q_m)$ is one choice of $\xi_m$.
\item The last result follows immediately from Lemma~\ref{LUlem2} once we
pick $\frac{nh^{d+8}}{\log n}\rightarrow \infty,
nh^{d+10}\rightarrow 0$. This choice of bandwidth ensures
uniform convergence in probability of the derivatives up to fourth order, which
is sufficient for the condition.
\end{itemize}
\section{Examples}
We apply our methods to two datasets, one from astronomy
and one from seismology.
In both cases, we use an isotropic Gaussian kernel for the KDE
and threshold using $\tau = 0.1$ unless stated otherwise.
We use a $50\times50$ uniform grid over each sample as initial points
in the ascent step for running SCMS.
We compare the result from bootstrapping and smooth
bootstrapping based on 100
bootstrap samples to estimate uncertainty.
\begin{figure}
\centering
\subfigure[Bootstrapping]
{
\includegraphics[width =2.2 in, height = 2.2in]{figure6a}
}
\subfigure[Smooth bootstrapping]
{
\includegraphics[width =2.2 in, height = 2.2in]{figure6b}
}
\caption{Local uncertainty measures and pointwise confidence sets for SDSS data. (a): Bootstrapping result. (b): Smooth bootstrapping result. We display local uncertainty measures based on color (red: high uncertainty) and 90\% pointwise confidence sets.}
\label{Fig:R1}
\end{figure}
\textit{Astronomy Data.}
The data come from the \textit{Sloan Digital Sky Survey (SDSS) Data Release (DR) 9}.
\footnote{The SDSS dataset http://www.sdss3.org/dr9/}
In this dataset, each point is a galaxy and is characterized by three features (\textit{z, ra, dec}).
\textit{z} is the redshift value, a measurement of the distance from that galaxy to us.
\textit{ra} is right ascension, the longitude on the sky.
\textit{dec} is declination, the latitude on the sky.
We restrict ourselves to \textit{z=0.045$\sim$0.050}, which is a slice of data on the
\textit{z} coordinate that consists of $2,532$ galaxies.
We selected values in
\textit{(ra, dec)=(140 $\sim$ 170, 0 $\sim$ 30)}.
The bandwidth $h$ is 2.41.
Figure~\ref{Fig:R1} displays the local uncertainty measures with
pointwise confidence sets. The red color indicates higher local
uncertainty while the blue color stands for lower
uncertainty. Bootstrapping shows very small local uncertainty and
very narrow pointwise confidence sets. Smooth bootstrapping yields
looser confidence sets but shows a clear pattern of local
uncertainty which can be explained by our theorems.
From Figure~\ref{Fig:R1}, we identify four cases associated with high
local uncertainty: high curvature of the filament, flat density near
filaments, terminals (boundaries) of filaments, and intersections of filaments. For
points near curved filaments, we see the uncertainty increase in
every case. This can be explained by Theorem~\ref{LU2}: the curvature
is related to the third derivative of the density via the definition of
ridges, and by Theorem \ref{LU2} the bias in filament estimation is
proportional to the third derivative. So estimates of highly curved filaments tend
to have a systematic bias, and our uncertainty measure captures this
bias successfully.
For the case of a flat density, by Theorem \ref{LU2}, both the bias
and the variance of the local uncertainty are proportional to the inverse of
the Hessian. A flat density has a very small Hessian matrix, so the
inverse will be huge; this raises the uncertainty. Though our theorem
cannot be applied to terminals of filaments, we can still explain the
high uncertainty there: points near terminals suffer from boundary bias in
density estimation, which increases the uncertainty.
For regions near intersections, the eigengap
$\beta(x)=\lambda_1(x)-\lambda_2(x)$ approaches $0$, which causes
instability of the ridge since our definition of a ridge requires
$\beta(x)>0$.
All cases with high local uncertainty can be explained by our theoretical result.
So the data analysis is consistent with our theory.
\begin{figure}
\centering
\subfigure[Bootstrapping]
{
\includegraphics[width =2.2 in, height = 2.2in]{figure7a}
}
\subfigure[Smooth bootstrapping]
{
\includegraphics[width =2.2 in, height = 2.2in]{figure7b}
}
\caption{Earthquake data. This is a collection of earthquake
data in longitude $(100\sim160) E$, latitude $(0\sim 60)N$
from 01/01/2013 to 09/30/2013. Total sample size is
$1169$. Blue curves are the estimated filaments; brown dots
are the plate boundaries.}
\label{Fig:R2}
\end{figure}
\textit{Earthquake Data}.
We also apply our technique to data from the U.S.~Geological Survey
\footnote{The USGS dataset http://earthquake.usgs.gov/earthquakes/search/}
that records $1,169$ earthquakes that occurred
in the region between longitude $(100E\sim160E)$ and latitude $(0N\sim 60N)$
between 01/01/2013 and 09/30/2013.
We are particularly interested in detecting
plate boundaries,
which see a high incidence of earthquakes.
We pre-process the data to remove a cluster of earthquakes that are
irrelevant to the plate boundary.
For this data, we only consider those filaments
with density larger than $\tau = 0.02$ of the maximum of the density.
Because the noise level is small, we adjust the KDE bandwidth
to $0.7$ times the Silverman rule ($h = 2.83$).
Figure~\ref{Fig:R2} displays the estimated filaments and $90\%$ pointwise
confidence sets.
The Figure shows the true plate boundaries from the Nuvel data set
\footnote{Nuvel data set http://www.earthbyte.org/}
as brown points.
As can be seen in the Figure, smooth
bootstrapping has better coverage of the plate boundary.
We notice the poor coverage in the bottom part; this is to be expected, since
boundary bias and the lack of data there hamper both the estimation and the
uncertainty measures.
We also identify some parts of the filaments with high local uncertainty,
which can again be explained by Theorem~\ref{LU2}.
The data analysis thus again supports our theoretical results.
In both Figures~\ref{Fig:R1} and~\ref{Fig:R2}, we get a clear picture of the uncertainty assessment
for filament estimation.
For two- or three-dimensional data, we can visualize the estimation uncertainty of
filaments with different colors or confidence regions.
That is, we can display the estimate and its uncertainty in the same plot.
\section{Discussion and Future Work}
In this paper, we define a local uncertainty measure for filament estimation and
study its theoretical properties.
We apply bootstrap resampling
to estimate local uncertainty measures and to construct
confidence sets; we prove that both are consistent, and the
data analysis also supports these results.
Our method provides one way to numerically quantify
the uncertainty for estimating filaments.
We also visualize uncertainty measures with the estimated filaments in the same plot;
this is a simple way to show the estimate and its uncertainty simultaneously.
Our approach places no constraints on
the dimension of the data, so it can be extended to
higher-dimensional data (although the confidence sets will be larger).
Our definition of local uncertainty and our estimation
method can be applied to other geometric estimation algorithms,
which we will investigate in the future.
|
2,877,628,088,416 | arxiv | \section*{Introduction}
The asymptotic performance of error-correcting codes is a classical topic in coding theory, going back to the work by Shannon, where random codes are used to get arbitrarily close to the capacity of a given discrete memoryless channel~\cite{shannon48}.
As in Shannon's proof, most of the literature has focused
on the case where the field size is fixed and the code length goes to infinity.
Recently, increasing attention has been paid to the case where the field size grows and the other code parameters stay fixed, especially in connection with open questions in the theory of rank-metric codes and related questions in semifield theory~\cite{gruica2022rank}. For example, it had been unknown whether there are maximum rank distance codes that are not equivalent to a Gabidulin code, until it was shown that maximum rank distance codes are dense in the set of linear codes (for growing field extension degree of the ambient space), whereas Gabidulin codes are not~\cite{neri2018genericity}. Furthermore, lower bounds (in particular the Gilbert-Varshamov bound) for the minimum distance of a randomly chosen code (attained with high probability) are needed in several applications in code-based cryptography, see e.g. \cite{baldi2021restricted,CVE,Vron2009ImprovedIS}. Such lower bounds can again be derived from density results about the respective code families.
In this paper, we investigate the proportion, or density, of error correcting codes with good distance properties, where we distinguish different degrees of linearity of the codes. We develop a general framework for determining (upper and lower bounds on) densities of codes in discrete
translation-invariant metric spaces, with a focus on the asymptotic behavior of these densities. Then we use this framework to determine the asymptotic densities of linear, sublinear and nonlinear codes over finite fields with respect to various metrics, namely the Hamming metric, the rank metric, and the sum-rank metric.
In particular, we study extremal (or optimal) codes, in the sense that they attain the Singleton-type bound for the respective metric.
Considering codes in $\F_{q^{m}}^{n}$ endowed with the Hamming metric, it is known that if $n < q^{m}+1$ there exist linear maximum distance separable (MDS) codes, i.e., codes that achieve the classical Singleton bound, namely Reed-Solomon codes \cite{reed1960polynomial} and their generalizations. It is also well-known that linear MDS codes are dense as the field size tends to infinity.
Only few results are known about the nonlinear case. For example, in \cite{barg2002random} it was observed that over a binary field the minimum distance of random nonlinear codes is asymptotically worse than in the linear setting; however, this was considered neither with respect to the Singleton bound nor asymptotically as the field size tends to infinity. Later, in \cite{gruica2021typical}, sparsity of nonlinear MDS codes over $\F_q$ was shown.
Furthermore, the question of the existence of sublinear MDS codes has recently received attention, due to their application in quantum coding theory. For example in \cite{ball2020additive} a geometric approach is used to classify all additive codes over $\F_{9}$. Then the MDS conjecture for this class of codes could be verified and hence the quantum MDS conjecture over~$\F_{3}$.
In the rank metric it is also known that linear maximum rank-distance (MRD) codes in $\F_{q^m}^n$ exist if $m\geq n$, namely Gabidulin codes \cite{gabidulin}. Moreover,
several density results of MRD codes are known, depending on whether they are nonlinear, $\F_{q}$-linear or even linear over the field extension~$\F_{q^m}$. In particular, it has been shown in \cite{neri2018genericity} that~$\F_{q^m}$-linear MRD codes are dense as $m \to +\infty$. But if the codes belong to the wider class of~$\F_{q}$-linear MRD codes they are not dense, neither for $q \to +\infty$, nor for $m \to + \infty$ (except for trivial parameter choices). In particular, $\F_{q}$-linear MRD codes are sparse as $q \to +\infty$ \cite{gruica2022common}.
In the sum-rank metric the asymptotic densities for an arbitrary degree of linearity remained mostly unexplored. Recently, in \cite{ott2021bounds}, the Schwartz-Zippel Lemma has been used to show that for $m \rightarrow + \infty$, most $\mathbb{F}_{q^{m}}$-linear codes in $\mathbb{F}_{q^{m}}^{n}$ are maximum sum-rank distance (MSRD) codes.
This contrasting behavior with respect to different metrics and different types of linearity motivated us to study the density of codes in general metric spaces, with various degrees of linearity.
For this, we consider codes in $\F_{q^m}^n$, equipped with an arbitrary translation-invariant metric~$D$.\footnote{The results can easily be extended for more general finite metric spaces, where the size of the balls of radius~$r$ is independent of the center. For simplicity, however, we will restrict ourselves to $\F_{q^m}^n$ and translation-invariant metrics.} For the linearity degree of the code we consider the maximal subfield $\F_{q^{\ell}} \subseteq\F_{q^m}$ over which the codes are linear, for any divisor $\ell$ of $m$.
Then we classify the asymptotic densities of (possibly) nonlinear codes and codes that are linear over the subfield $\F_{q^{\ell}} \subseteq \F_{q^m}$ separately.
We also treat the asymptotic density with respect to the four parameters $q,n,\ell$ and $s := [\F_{q^m} \colon \F_{q^{\ell}}]$ separately.
\paragraph*{Our contribution.} We determine upper and lower bounds on the proportion of codes in~$\F_{q^m}^n$ with a prescribed minimum distance (with respect to an arbitrary translation-invariant distance~$D$) among all codes of a fixed cardinality. For this, we distinguish the case of nonlinear and~$\F_{q^\ell}$-linear codes in~$\F_{q^m}^n$, for any suitable $\ell$. We then analyze the asymptotic behavior of these bounds and show that this highly depends on the volume of the balls with respect to $D$. In particular, we give conditions on the volume with respect to the prescribed cardinality and minimum distance, such that the family of codes is dense or sparse (both in the nonlinear or (sub)linear case). With these results we can straightforwardly see that nonlinear random codes achieve the Gilbert-Varshamov bound with probability going to zero, both for growing length or field size, while (sub)linear codes achieve it with probability going to one, for growing field size or linearity degree (but not for growing length).
In the Hamming metric, we show that independent of the linearity degree, the class of $\F_{q^{\ell}}$-linear MDS codes, as well as the class of nonlinear MDS codes, are both sparse as $n \to +\infty$.
When considering the asymptotic densities with respect to the field size, the MDS codes which are at least linear over a subfield of $\F_{q^m}$ are dense as $q \to +\infty$ (or $\ell \to +\infty$ in the sublinear case). On the other hand, the probability that a nonlinear code is MDS tends to zero for $q \to +\infty$. Finally, we give an upper bound for the asymptotic density of MDS codes with respect to $s$ and hence conclude that MDS codes are not dense as $s \to +\infty$.
In the rank metric we show that nonlinear MRD codes are sparse with respect to both the field size and the code length. Moreover, we recover the previously known results about the sparsity and density of $\F_q$- or $\F_{q^m}$-linear MRD codes. As new contributions we derive more general results for $\F_{q^\ell}$-linear MRD codes with respect to any of the parameters $q,n,\ell, s$. In particular, we derive bounds on $\ell$, for which MRD codes are sparse and for which they are dense, for $q\rightarrow +\infty$. For $\ell \rightarrow +\infty$ we show that MRD codes are always dense. For growing~$n$ or~$s$ we derive upper bounds on the density and show that MRD codes are generally not dense.
In the sum-rank metric we show that nonlinear MSRD codes are sparse with respect to the field size. For sublinear MSRD codes we derive bounds on $\ell$ such that the sets of parameters for which MSRD codes are sparse and for which they are dense as $q$ tends to infinity can be classified.
\vspace{0.3cm}
\textbf{The outline} of this article is as follows: In Section 2 we start by introducing the relevant terminology used throughout the article. We define the density functions associated with families of sublinear as well as (possibly) nonlinear codes. We also clarify the terms sparsity and density with respect to the parameters $q,n,s$ and $\ell$. Then we briefly provide the graph theory tools to obtain the upper and lower bounds on the density of a code family within a larger family. For more details on this and the specific bipartite graphs we refer to \cite{gruica2022common}.
In Section 3 we consider the asymptotic versions of these bounds with respect to the parameters $n,s, \ell$ and $q$. Under certain assumptions we can conclude that the codes are sparse or dense as a given parameter tends to infinity.
In the second part of this article, covering Sections 4 to 6, we apply the developed classifiers from Section 3 to codes with respect to various distance functions. Particularly, we will focus on three major distance functions, namely the Hamming, rank, and sum-rank distance.
\section{Preliminaries} \label{sec:prelim}
\subsection{Finite Metric Spaces and Codes}
Throughout this paper we let $n$ and $m$ be integers with $m,n \ge 1$. Moreover, we let $q$ be a prime power, we denote the finite field of $q$ elements by $\F_q$ and the extension field of $\F_q$ with extension degree $m$ is denoted by $\F_{q^m}$. From now on, unless otherwise stated, we work in the vector space~$\F_{q^m}^n$ and we consider a translation-invariant distance $D: \F_{q^m}^n \times \F_{q^m}^n \rightarrow \R_{\ge 0}$.
\begin{notation} \label{not:landau}
We will repeatedly use the Bachmann-Landau notation (``Big O'', ``Little O'', ``$\sim$'' and~``Omega'') to describe the asymptotic growth rate of functions defined on an infinite set of natural numbers; see for example~\cite{de1981asymptotic}. Since we are often interested in the asymptotics as the field size $q$ tends to infinity, we denote the set of prime powers by $Q$ and we omit ``$q \in Q$'' when writing $q \to +\infty$.
\end{notation}
\begin{definition} \label{def:code}
A subset $\mathcal{C} \subseteq \F_{q^m}^n$ is called a \textbf{code}. If $|\mC| \ge 2$ then the \textbf{minimum distance} of a code $\mC \subseteq \F_{q^m}^n$ with respect to the distance $D$ is
\begin{align*}
D(\mC) := \min\{D(x,y) : x,y \in \mC, \, x \ne y\}.
\end{align*}
For a divisor $\ell$ of $m$, we call an $\F_{q^\ell}$-linear code $\mC \subseteq \F_{q^m}^n$ of cardinality $S$ and minimum distance at least $d$ an $[\F_{q^m}^n,S,\ell,d]^D$-code. If on the other hand $\mC \subseteq \F_{q^m}^n$ is nonlinear, of cardinality $S$ and minimum distance at least $d$, we say $\mC$ is an $[\F_{q^m}^n,S,0,d]^D$-code.
\end{definition}
We will make extensive use of the $q$-ary binomial coefficient, which is defined as
$$\dstirling{a}{b}_{q} := \prod_{i=0}^{b-1}\frac{q^{ a}-q^{ i}}{q^{ b}-q^{ i}}$$
and counts the number of $b$-dimensional subspaces of an $a$-dimensional vector space over $\F_{q}$. Note that the number of $\F_{q^\ell}$-linear codes in $\F_{q^m}^n$ (for a divisor $\ell$ of $m$) of $\F_{q^\ell}$-dimension~$k$ is
\begin{align*}
|\{\mC \subseteq \F_{q^{m}}^n : |\mC|=q^{k \ell}, \, \textnormal{$\mC$ is $\F_{q^{\ell}}$-linear}\}| = \qbin{ns}{k}{q^\ell}.
\end{align*}
This formula will play a crucial role in determining bounds for the proportion of codes with certain distance properties among the set of codes of the same cardinality or dimension.
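A short sketch computing the $q$-ary binomial coefficient from the product formula above; exact rational arithmetic avoids rounding issues.
\begin{verbatim}
from fractions import Fraction

def q_binomial(a, b, q):
    # [a choose b]_q = prod_{i=0}^{b-1} (q^a - q^i) / (q^b - q^i):
    # the number of b-dimensional subspaces of F_q^a
    out = Fraction(1)
    for i in range(b):
        out *= Fraction(q**a - q**i, q**b - q**i)
    return int(out)

print(q_binomial(4, 2, 2))  # 35 two-dimensional subspaces of F_2^4
\end{verbatim}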
\begin{definition}
We let
\begin{equation} \label{eq:density_nonlinear}
\delta_q^D(\F_{q^m}^n,S,0,d) := \frac{|\{\mC \subseteq \F_{q^m}^n : |\mC|=S, \, D(\mC) \ge d\}|}{|\{\mC \subseteq \F_{q^m}^n : |\mC|=S\}|}
\end{equation}
denote the \textbf{density of (possibly) nonlinear codes} of minimum distance $D(\mC)$ at least $d$ within the set of (possibly) nonlinear codes of the same cardinality $S$.
Analogously,
we define
\begin{equation} \label{eq:density_sublinear}
\delta_q^D(\F_{q^m}^n,S,\ell,d) := \frac{|\{\mC \subseteq \F_{q^{m}}^n : |\mC|=S, \, D(\mC) \ge d, \, \textnormal{$\mC$ is $\F_{q^{\ell}}$-linear }\}|}{|\{\mC \subseteq \F_{q^{m}}^n : |\mC|=S, \, \textnormal{$\mC$ is $\F_{q^{\ell}}$-linear}\}|}
\end{equation}
as the \textbf{density of $\F_{q^{\ell}}$-linear codes} of minimum distance $D(\mC)$ at least $d$ within the set of $\smash{\F_{q^{\ell}}}$-linear codes of the same cardinality $S$.
Taking the limit in equation~\eqref{eq:density_nonlinear} or equation~\eqref{eq:density_sublinear} as $q\to +\infty, n\to +\infty,s\to +\infty$ or $\ell \to +\infty$ (and the other three parameters are fixed constants) defines the \textbf{asymptotic density}, as $q,n,s$ or $\ell$ tends to infinity, respectively.
If the limit is $0$ we say that $\F_{q^{\ell}}$-linear codes of minimum distance at least $d$ and cardinality $S$ are \textbf{sparse} as $q,n,s$ or $\ell \to +\infty$. If the limit is~$1$ we say that such codes are \textbf{dense} as $q,n,s$ or $\ell \to +\infty$.
\end{definition}
Since we assume $D$ to be invariant under translation, the ball with center $c \in \F_{q^m}^n$ of radius $0 \le r < \infty$ in $\F_{q^m}^n$, which is the set $\{x \in \F_{q^m}^n : D(x,c) \le r\}$, has the same size for any center $c \in \F_{q^m}^n$. Thus, when we consider the volume of the ball of radius $r$ in $\F_{q^m}^n$, we do not specify the center of the ball. For ease of exposition, we propose the following notation.
\begin{notation} \label{def:ball}
We denote the volume of the ball of radius $0 \le r < + \infty$ in $\F_{q^m}^n$ by
$$\textbf{v}_{q}^{D}(\F_{q^m}^n,r) := |\{x \in \F_{q^m}^n : D(x,0) \le r\}|.$$
\end{notation}
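Since $D$ is arbitrary here, $\textbf{v}_{q}^{D}(\F_{q^m}^n,r)$ has no closed form in general and can, for small spaces, be computed by enumerating the ball. For the Hamming distance -- which we use purely as an illustration at this point -- the familiar closed form gives a quick sketch:
\begin{verbatim}
from math import comb

def hamming_ball_volume(q, n, r):
    # v_q^D(F_q^n, r) for D the Hamming distance:
    # sum_{i=0}^{r} C(n, i) * (q - 1)^i
    return sum(comb(n, i) * (q - 1)**i for i in range(r + 1))

print(hamming_ball_volume(3, 4, 1))  # 1 + 4*2 = 9
\end{verbatim}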
\subsection{Graph Theory Tools}
In this subsection we briefly recall some graph theory results from~\cite[Section 3]{gruica2022common}. As we will show shortly, studying the number of isolated vertices in certain bipartite graphs will give us bounds on the number of codes we are interested in.
\begin{definition}
A (\textbf{directed}) \textbf{bipartite graph} is a 3-tuple $\mB=(\mV,\mW,\mE)$, where $\mV$ and $\mW$ are finite non-empty sets and $\mE \subseteq \mV \times \mW$. The elements of $\mV \cup \mW$ are the \textbf{vertices} of the graph and the tuples given by relation $\mE$ are called the \textbf{edges} of $\mB$. We say that a vertex~$W \in \mW$ is
\textbf{isolated} if there is no $V \in \mV$ such that $(V,W) \in \mE$. We say that the bipartite graph
$\mB$ is
\textbf{left-regular} of \textbf{degree} $\partial \ge 0$ if for all $V \in \mV$
$$|\{W \in \mW : (V,W) \in \mE\}| = \partial.$$
\end{definition}
To derive a lower bound for the number of non-isolated vertices in a left-regular bipartite graph, we introduce the concept of an association. This notion generalizes strong regularity properties on a bipartite graph with respect to certain functions defined on its left-vertices.
\begin{definition} \label{def:assoc}
Let $\mV$ be a finite non-empty set and let $r \ge 0$ be an integer. An \textbf{association} on $\mV$ of \textbf{magnitude} $r$ is a function
$\alpha: \mV \times \mV \to \{0,...,r\}$ satisfying the following:
\begin{itemize}
\item[(i)] $\alpha(V,V)=r$ for all $V\in \mV$;
\item[(ii)] $\alpha(V,V')=\alpha(V',V)$ for all $V,V' \in \mV$.
\end{itemize}
\end{definition}
\begin{definition}
Let $\mB=(\mV,\mW,\mE)$ be a finite bipartite graph and let $\alpha$ be an association on~$\mV$ of magnitude $r$. We say that $\mB$ is \textbf{$\alpha$-regular} if for all $(V,V') \in \mV \times \mV$ the number of vertices $W \in \mW$ with $(V,W) \in \mE$ and
$(V',W) \in \mE$ only depends on $\alpha(V,V')$. If this is the case, we denote this number by~$\mW_\ell(\alpha)$, where $\ell=\alpha(V,V') \in \{0, \dots , r \}$.
\end{definition}
\begin{remark}
Note that an $\alpha$-regular bipartite graph for an association $\alpha$ is necessarily left-regular of degree $\partial=\mW_r(\alpha)$.
\end{remark}
We can now bound the number of non-isolated vertices in a left-regular bipartite graph. With these two bounds we will derive a lower and an upper bound for the density function of codes in $\F_{q^m}^n$ having certain distance properties.
\begin{lemma}[\text{see \cite[Lemma 3.2]{gruica2022common}}] \label{lem:upperbound}
Let $\mB=(\mV,\mW,\mE)$ be a bipartite and left-regular graph of degree $\partial>0$.
Let $\mF \subseteq \mW$ be the collection of non-isolated vertices of $\mW$.
We have
$$|\mF| \le |\mV| \, \partial.$$
\end{lemma}
The following lemma follows by combining the notion of an association and the Cauchy-Schwarz Inequality.
\begin{lemma}[\text{see \cite[Lemma 3.5]{gruica2022common}}] \label{lem:lowerbound}
Let $\mB=(\mV,\mW,\mE)$ be a finite bipartite $\alpha$-regular graph, where $\alpha$ is an association on~$\mV$ of magnitude~$r$. Let $\mF \subseteq \mW$ be the collection of non-isolated vertices of $\mW$. If
$\mW_r(\alpha) >0$, then
$$|\mF| \ge \frac{\mW_r(\alpha)^2 \, |\mV|^2}{\sum_{\ell=0}^r \mW_\ell(\alpha) \, |\alpha^{-1}(\ell)|}.$$
\end{lemma}
\section{Upper and Lower Bounds on the Density of Codes}\label{sec:bounds}
In this section we use the upper bound of Lemma~\ref{lem:upperbound} and the lower bound of Lemma~\ref{lem:lowerbound} to give bounds on the number of codes in the metric space $(\F_{q^m}^n,D)$ that have minimum distance bounded from below by some positive integer $d$. From these bounds we will later obtain a prediction for the asymptotic behavior of the density of these codes.
\subsection{Bounds for Nonlinear Codes}
\label{sec:nonlinear}
We first consider nonlinear codes. Since we do not impose any linearity (nor sublinearity) on our codes in this subsection, we let $m=1$, meaning that our ambient space is $\F_{q}^n$ and as before,~$D$ is a translation-invariant distance on $\F_{q}^n$.
\begin{theorem} \label{thm:nonlinbound}
Let $S \ge 4$ and $1 \le d < +\infty$ be integers. Define the quantities
\begin{align*}
\beta^0 &= \frac{1}{2}q^n (\textbf{v}_q^{D}(\F_{q}^n,d-1)-1) - 2\textbf{v}_q^{D}(\F_{q}^n,d-1) +3, \\
\beta^1 &= 2\textbf{v}_q^{D}(\F_{q}^n,d-1)-4, \\
\Theta &= 1 + \beta^1 \frac{S-2}{q^n-2} \, + \, \beta^0 \frac{(S-2)(S-3)}{\left(q^n-2\right)\left(q^n-3\right)},
\end{align*}
and let $\mF:= \{\mC \subseteq \F_{q}^n : |\mC|=S, \, D(\mC) \le d-1\}$. We have
\begin{align*}
\frac{1}{2\Theta} q^n \left(\textbf{v}_q^{D}(\F_{q}^n,d-1)-1\right) \binom{q^n-2}{S-2}\le |\mF| \le \frac{1}{2} q^n \left( \textbf{v}_q^{D}(\F_{q}^n,d-1)-1\right)\binomi{q^n-2}{S-2}.
\end{align*}
\begin{proof}
We work with the bipartite graph $\mB=(\mV,\mW,\mE)$, where $$\mV=\{\{x,y\} \subseteq \F_{q}^n : x \ne y, \, D(x, y) \le d-1 \},$$ $\mW$ is the collection of codes $\mC \subseteq \F_{q}^n$ with $|\mC| = S$, and
$(\{x,y\},\mC) \in \mE$ if and only if $\{x,y\} \subseteq \mC$.
Note that the set of non-isolated vertices in $\mW$ is exactly $\mF$.
We have \begin{align*}
|\mV| = \frac{1}{2} q^n \left(\textbf{v}_q^{D}(\F_{q}^n,d-1)-1\right), \quad |\mW| = \binomi{q^{n}}{S}.
\end{align*}
Moreover, one easily checks that
\begin{align*}
|\{ \mC \in \mW : (\{x,y\}, \mC) \in \mE\}| = \displaystyle \binom{q^{n}-2}{S-2}.
\end{align*}
Hence $\mB$ is a left-regular graph of degree $$\binomi{q^n-2}{S-2}.$$ Applying Lemma~\ref{lem:upperbound} we obtain the upper bound on $|\mF|$.
To prove the lower bound, we let $\alpha: \mV \times \mV \longrightarrow \{0,1,2\}$
be defined by $$\alpha(\{x,y\},\{z,t\}) := 4-|\{x,y,z,t\}|$$
for all $x,y,z,t \in \F_{q}^n.$
We have
\begin{align*}
|\alpha^{-1}(2)| &= |\mV|, \\
|\alpha^{-1}(1)| &= 2|\mV|(\textbf{v}_q^{D}(\F_{q}^n,d-1)-2), \\
|\alpha^{-1}(0)| &= |\mV|(|\mV|-2\textbf{v}_q^{D}(\F_{q}^n,d-1)+3).
\end{align*}
It is easy to see that $|\alpha^{-1}(2)| = |\mV|$.
The elements of the domain $\alpha^{-1}(1)$ can be obtained by choosing
$\{x,y\} \in \mV$ arbitrary
and then $\{z,t\} \in \mV$ with
either $z=x$ or $z=y$ and $$t \in \{v \in \F_{q}^n : D(v,z) \le d-1\} \backslash \{x,y\}.$$ Therefore $$ |\alpha^{-1}(1)|=2|\mV|(\textbf{v}_q^{D}(\F_{q}^n,d-1)-2).$$
To compute $|\alpha^{-1}(0)|$ we simply note that $$|\mV|^2=|\alpha^{-1}(0)|+|\alpha^{-1}(1)|+|\alpha^{-1}(2)|.$$ Therefore the value of $|\alpha^{-1}(0)|$
follows from the values of $|\alpha^{-1}(1)|$ and $|\alpha^{-1}(2)|$.
Simple counting arguments imply that the bipartite graph~$\mB$ is regular with respect to $\alpha$. Therefore for $(\{x,y\},\{z,t\}) \in \mV \times \mV$ and $\ell=\alpha(\{x,y\},\{z,t\})$ we define
\begin{align*}
\mW_{\ell}(\alpha) := |\{W \in \mW : \{x,y,t,z\} \subseteq W\}| = \binomi{q^{n}-4+\ell}{S-4+\ell}.
\end{align*}
We can now apply Lemma~\ref{lem:lowerbound} obtaining that $|\mF|$ is lower bounded by
\begin{equation*}
\frac{\mW_{2}(\alpha)^2 \, |\mV|^2}{|\alpha^{-1}(2)|\mW_{2}(\alpha) + |\alpha^{-1}(1)|\mW_{1}(\alpha) + |\alpha^{-1}(0)|\mW_{0}(\alpha)}.
\end{equation*}
Finally, plugging in the formulas for $|\alpha^{-1}(0)|$, $|\alpha^{-1}(1)|$ and $|\alpha^{-1}(2)|$, and applying the identity
\begin{align} \label{eq:binomi}
\binom{a}{b} = \frac{a}{b} \binom{a-1}{b-1},
\end{align}
for all $a \ge b \ge 1$ yields the desired result.
\end{proof}
\end{theorem}
As a corollary we get the following bounds on the proportion of codes in $\F_q^{n}$ of minimum distance at least $d$, again using the identity in~\eqref{eq:binomi}.
\begin{corollary} \label{cor:nonlindensity}
Let $S \ge 2$ be an integer and consider $1 \le d < +\infty$. We have
\begin{align*}
1- \frac{(\textbf{v}_q^{D}(\F_q^n,d-1)-1)S(S-1)}{2\left(q^n-1\right)} \le \delta_q^D(\F_q^n,S,0,d) \le 1-\frac{(\textbf{v}_q^{D}(\F_q^n,d-1)-1)S(S-1)}{2\Theta(q^n-1)}
\end{align*}
where $\Theta$ is the same as in Theorem~\ref{thm:nonlinbound}.
\end{corollary}
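As a quick numerical illustration, the lower bound of the corollary is straightforward to evaluate once the ball volume is known; here we plug in the Hamming ball volume from the earlier sketch (the parameter choices are arbitrary):
\begin{verbatim}
def nonlin_density_lower_bound(V, q_pow_n, S):
    # 1 - (V - 1) * S * (S - 1) / (2 * (q^n - 1)),
    # with V = v_q^D(F_q^n, d - 1)
    return 1 - (V - 1) * S * (S - 1) / (2 * (q_pow_n - 1))

V = hamming_ball_volume(16, 4, 1)                # q = 16, n = 4, d = 2
print(nonlin_density_lower_bound(V, 16**4, 16))  # ~0.89
\end{verbatim}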
\subsection{Bounds for (Sub)Linear Codes}
In this subsection, we fix a divisor $\ell$ of $m$ and let
$s= [\mathbb{F}_{q^{m}} : \mathbb{F}_{q^{\ell}}]$.
We will consider $\F_{q^\ell}$-linear codes in the metric space $(\F_{q^m}^n,D)$.
We start by providing the bounds obtained using the tools from graph theory.
\begin{theorem}\label{thm:subavoid}
Let $1 \le k \le ns$ and $1 \le d < +\infty$ be integers. Let
\begin{align*}
\mF := \{\mC \subseteq \F_{q^m}^n : |\mC|=q^{\ell k}, \, D(\mC) \le d-1, \, \textnormal{$\mC$ is $\F_{q^{\ell}}$-linear}\}.
\end{align*}
We have
\begin{align*}
|\mF| &\le \left(\displaystyle\frac{\textbf{v}_q^{D}(\F_{q^m}^n,d-1)-1}{q^{\ell}-1}\right)\dstirling{ns-1}{k-1}_{q^{\ell}}, \\[0.2cm]
|\mF| &\ge \frac{\displaystyle \frac{\textbf{v}_{q}^{D}(\F_{q^m}^n,d-1)-1}{q^{\ell}-1} \dstirling{ns-1}{k-1}_{q^{\ell}}^{2}}{\dstirling{ns-1}{k-1}_{q^{\ell}}+\left(\displaystyle \frac{\textbf{v}_q^{D}(\F_{q^m}^n,d-1)-1}{q^{\ell}-1} -1 \right)\dstirling{ns-2}{k-2}_{q^{\ell}}}.
\end{align*}
\end{theorem}
\begin{proof}
We work with the bipartite graph $$\mB=(\mV,\mW,\mE),$$ where $\mV$ is the set of nonzero elements of $\F_{q^m}^n$ at distance at most $d-1$ from $0$ (up to multiplication by nonzero elements of $\F_{q^\ell}$), $\mW$ is the collection of $\F_{q^\ell}$-linear codes in $\F_{q^m}^n$ of $\F_{q^\ell}$-dimension $k$, and $(M,\mC) \in \mE$ if and only if $M \in \mC$. Hence $\mF$ is the family of non-isolated vertices in $\mW$.
Note that we have
\begin{align*}
|\mV| = \frac{\textbf{v}_q^{D}(\F_{q^m}^n,d-1)-1}{q^{\ell}-1}, \quad |\mW| = \dstirling{ns}{k}_{q^{\ell}}.
\end{align*}
Moreover, for $M \in \mV$ we have
\begin{align*}
|\{ \mC \in \mW : (M, \mC) \in \mE\}|= \dstirling{ns-1}{k-1}_{q^{\ell}}.
\end{align*}
Hence the bipartite graph $\mB$ is left-regular of degree
$$\dstirling{ns-1}{k-1}_{q^{\ell}}.$$
With this, the upper bound in the theorem is an easy consequence of Lemma~\ref{lem:upperbound}.
For the lower bound, consider the association $$\alpha : \mV \times \mV \longrightarrow \{0,1\}, \quad (V,V') \mapsto 2-\dim\langle V,V' \rangle.$$ It is easy to see that
\begin{align*}
|\alpha^{-1}(0)|= |\mV| (|\mV|-1),\quad |\alpha^{-1}(1)|= |\mV|
\end{align*}
and $\mB$ is $\alpha$-regular. Furthermore we have
\begin{align*}
\mW_0(\alpha)= \qbin{ns-2}{k-2}{q^\ell}, \; \mW_1(\alpha)= \qbin{ns-1}{k-1}{q^\ell}
\end{align*}
which combined with Lemma~\ref{lem:lowerbound} directly implies the second bound in the theorem.
\end{proof}
As an immediate consequence of the last theorem we obtain bounds on the density function of $\F_{q^\ell}$-linear codes with minimum distance bounded from below.
\begin{corollary} \label{cor:subbound}
Let $1 \le k \le ns$ and $1 \le d < +\infty$ be integers. We have
\begin{align} \label{eq:upperBoundsublim}
1- \frac{\displaystyle (\textbf{v}_q^{D}(\F_{q^m}^n,d-1)-1)\dstirling{ns-1}{k-1}_{q^{\ell}} }{(q^{\ell}-1)\dstirling{ns}{k}_{q^{\ell}}}
\le
\delta_q^D(\F_{q^m}^n,q^{\ell k},\ell,d) &\le 1- \frac{\displaystyle (\textbf{v}_q^{D}(\F_{q^m}^n,d-1)-1)\dstirling{ns-1}{k-1}_{q^{\ell}} }{\bar\Theta(q^{\ell}-1)\dstirling{ns}{k}_{q^{\ell}}},
\end{align}
where $$\bar\Theta = 1+\dstirling{ns-1}{k-1}_{q^{\ell}}^{-1}\left(\displaystyle \frac{\textbf{v}_q^{D}(\F_{q^m}^n,d-1)-1}{q^{\ell}-1} -1 \right)\dstirling{ns-2}{k-2}_{q^{\ell}}.$$
\end{corollary}
\section{Asymptotic Results}
\label{sec:asy}
This section is devoted to general asymptotic results on the density function of codes in $\F_{q^m}^n$ endowed with the translation-invariant metric $D$. More precisely, we are interested in the following question: What is the probability that a uniformly random code in $\F_{q^m}^n$ of a given cardinality and with the translation-invariant metric $D$ has minimum distance (at least) $d$, as either $q$, $n$, or in the (sub)linear case $s$ or $\ell$ tend to infinity, where we always assume that $m= \ell s$. Our results indicate that the answer to this question highly depends on the volume of the ball in the metric space $(\F_{q^m}^n, D)$. We treat the nonlinear and (sub)linear case separately again, analogously to Section~\ref{sec:bounds}.
\subsection{The Nonlinear Case}\label{sec:nonlinear-asym}
From Corollary~\ref{cor:nonlindensity} we obtain asymptotic bounds on the density (as $q \to +\infty$ and $n \to +\infty$) of (possibly) nonlinear codes with minimum distance bounded from below. We start with the result for increasing field size $q$.
\begin{theorem} \label{thm:nonLinearLimq}
Let $n \ge 3$ and let $1 \le d \le n$ be integers. Consider the sequence of vector spaces~$\smash{(\mathbb{F}_{q}^{n})_{q \in Q}}$ and a sequence of integers $\smash{(S_q)_{q \in Q}}$ with $S_q \ge 4$ for all $q \in Q$.
\begin{itemize}
\item[(i)] We have
\begin{align} \label{eq:asymlowerBoundnonlinear_q}
\max \left\{ \liminf_{q \to +\infty} \left( 1-\frac{\textbf{v}_q^{D}(\F_{q}^n,d-1)S_{q}^{2}}{2q^{n} }\right),0 \right\} \leq \liminf_{q \to +\infty}\delta_q^D(\F_{q}^n,S_{q},0,d).
\end{align}
\item[(ii)]
If $\textbf{v}_q^{D}(\F_{q}^n,d-1) \in \Omega\left(q^{n}S_{q}^{-2}\right) \, \text{as} \, \, q \to +\infty, \, \text{then}$
\begin{align} \label{eq:asymupperBoundnonlinear_q}
\limsup_{q \to +\infty} \delta_q^D(\F_{q}^n,S_{q},0,d)\leq \limsup_{q \to +\infty}\left(\frac{1}{1+\frac{\textbf{v}_q^{D}(\F_q^n,d-1)S_q^2}{2q^{n}}}\right) <1.
\end{align}
\end{itemize}
In particular,
\begin{align*}
\lim_{q \to +\infty}\delta_q^D(\F_{q}^n,S_q,0,d) = \begin{cases}
1 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q}^n,d-1) \in o(q^{n}S_{q}^{-2})$ as $q \to +\infty$,} \\
0 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q}^n,d-1) \in \omega(q^{n}S_{q}^{-2})$ as $q \to +\infty$.}
\end{cases}
\end{align*}
\end{theorem}
\begin{proof}
The first statement of the theorem is an easy consequence of Corollary~\ref{cor:nonlindensity} and the fact that we always have $\delta_q^D(\F_q^n, S_q, 0, d) \ge 0$.
For the second statement, as in Theorem~\ref{thm:nonlinbound} we let
\begin{align*}
\beta^0_q &= \frac{1}{2}q^n (\textbf{v}_q^{D}(\F_q^n,d-1)-1) - 2\textbf{v}_q^{D}(\F_q^n,d-1) +3, \\
\beta^1_q &= 2\textbf{v}_q^{D}(\F_q^n,d-1)-4, \\
\Theta_q &= 1 + \beta^1_q \frac{S_q-2}{q^n-2} \, + \, \beta^0_q \frac{(S_q-2)(S_q-3)}{\left(q^n-2\right)\left(q^n-3\right)}.
\end{align*}
It is easy to check that we have
\begin{align*}
\Theta_q \sim 1+ \frac{\textbf{v}_q^{D}(\F_q^n,d-1) S_q^2}{2q^n} \quad \textnormal{ as $q \to +\infty$.}
\end{align*}
Therefore,
\begin{align}
\limsup_{q \to +\infty}\left(1- \frac{(\textbf{v}_q^{D}(\F_q^n,d-1)-1)S_q(S_q-1)}{2\Theta_q(q^n-1)}\right) &= \limsup_{q \to +\infty}\left(1- \frac{\textbf{v}_q^{D}(\F_q^n,d-1)S_q^2}{2q^n+ \textbf{v}_q^{D}(\F_q^n,d-1)S_q^2}\right). \label{eq:nonlin3}
\end{align}
In order to prove that the asymptotic upper bound in the theorem is smaller than 1, note that $\textbf{v}_q^{D}(\F_{q}^n,d-1) \in \Omega\left(q^{n}S_{q}^{-2}\right) \, \text{as} \, \, q \to +\infty$ means that
\begin{align*}
\liminf_{q \to +\infty} \frac{\textbf{v}_q^{D}(\F_q^n,d-1)S_q^2}{q^n} > 0.
\end{align*}
In particular, rewriting~\eqref{eq:nonlin3} gives
\begin{align*}
\limsup_{q \to +\infty} \left(1-\frac{ \textbf{v}_q^{D}(\F_q^n,d-1)S_q^2}{2q^n+\textbf{v}_q^{D}(\F_q^n,d-1)S_q^2}\right) = \limsup_{q \to +\infty} \left(\frac{1}{1+\frac{\textbf{v}_q^{D}(\F_q^n,d-1)S_q^2}{2q^{n}}}\right)< 1,
\end{align*}
concluding the proof of the theorem.
\end{proof}
We now establish the analogous result to Theorem~\ref{thm:nonLinearLimq} but for increasing vector length~$n$.
The proof of the following statement is an easy alteration of the proof of Theorem~\ref{thm:nonLinearLimq} and hence we omit it.
\begin{theorem} \label{thm:nonLinearLimn}
Let $q \in Q$ be a prime power. Consider the sequence of vector spaces $(\mathbb{F}_{q}^{n})_{n \geq 1}$, a sequence of integers $(S_n)_{n \geq 1}$ with $S_n \ge 4$ for all $n \geq 1$, and an integer $d$ with $1 \le d < +\infty$.
\begin{itemize}
\item[(i)] We have
\begin{align} \label{eq:asymlowerBoundnonlinear_n}
\max \left\{ \liminf_{n \to +\infty} \left( 1-\frac{\textbf{v}_q^{D}(\F_{q}^n,d-1)S_{n}^{2}}{2q^{n} }\right),0 \right\} \leq \liminf_{n \to +\infty}\delta_q^D(\F_{q}^n,S_{n},0,d).
\end{align}
\item[(ii)]
If $\textbf{v}_q^{D}(\F_{q}^n,d-1) \in \Omega\left(q^{n}S_{n}^{-2}\right) \, \text{as} \, \, n \to +\infty, \, \text{then}$
\begin{align} \label{eq:asymupperBoundnonlinear_n}
\limsup_{n\to +\infty} \delta_q^D(\F_{q}^n,S_{n},0,d)\leq \limsup_{n \to +\infty}\left(\frac{1}{1+\frac{\textbf{v}_q^{D}(\F_q^n,d-1)S_n^2}{2q^{n}}}\right) <1.
\end{align}
\end{itemize}
In particular,
\begin{align*}
\lim_{n \to +\infty}\delta_q^D(\F_{q}^n,S_n,0,d) = \begin{cases}
1 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q}^n,d-1) \in o(q^{n}S_{n}^{-2})$ as $n \to +\infty$,} \\
0 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q}^n,d-1) \in \omega(q^{n}S_{n}^{-2})$ as $n \to +\infty$.}
\end{cases}
\end{align*}
\end{theorem}
As we applied the bounds given in Corollary~\ref{cor:nonlindensity} in a very general way, without having any knowledge about $\smash{\textbf{v}_{q}^{D}(\F_{q}^n,r)}$, the requirements on the asymptotic behavior on the cardinalities in Theorem~\ref{thm:nonLinearLimq} and Theorem~\ref{thm:nonLinearLimn} for certain codes to be dense or sparse are the same, even though we send different parameters to infinity. This is something that we will also observe when looking at (sub)linear codes in the next subsection.
\begin{remark} \label{rem:gilbvarsh}
In~\cite{gilbert1952comparison,varshamov1957estimate} Gilbert and Varshamov gave a lower bound that shows the existence of codes of sufficiently large cardinality and minimum distance. More precisely, it says that there exists a code $\mC\subseteq \F_{q}^n$ of minimum distance $d$ with
$$|\mC| \geq \frac{q^{n}}{\textbf{v}_q^{D}(\mathbb{F}_{q}^n,d-1)}.$$
It is an immediate consequence of Theorem~\ref{thm:nonLinearLimq} and Theorem~\ref{thm:nonLinearLimn} that, while such codes exist, the probability that a uniformly random (possibly nonlinear) code, whose cardinality is close to the Gilbert-Varshamov bound, has minimum distance at least $d$, goes to 0 both as $q \to +\infty$ and $n \to +\infty$.
\end{remark}
\subsection{The (Sub)Linear Case} \label{sec:(Sub)linear_asym}
From Corollary~\ref{cor:subbound} we obtain results for the asymptotic density of (sub)linear codes with minimum distance bounded from below as one of the parameters $q,n,s$ or $\ell$ tends to infinity and the other three parameters are treated as constants.
We will repeatedly use some asymptotic estimates of the $q$-binomial coefficient. One of these estimates involves the quantity
\begin{align*}
\pi(q) := \prod_{i=1}^{\infty} \left( \frac{q^{i}}{q^{i}-1} \right).
\end{align*}
Note that the infinite product $\pi(q)$ is closely linked to the Euler function $\phi$; see e.g.~\cite[Section 14]{apostol2013introduction}. The Euler function $\phi:(-1,1) \to \R$ is defined as
\begin{equation*}
\phi : x \mapsto \prod_{i=1}^{\infty}(1-x^i).
\end{equation*}
We have $\pi(q)=1/\phi(1/q)$ for all $q \in Q$.
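Numerically, $\pi(q)$ is easy to approximate, since its factors tend to $1$ geometrically; a truncated product suffices. A minimal sketch:
\begin{verbatim}
def pi_q(q, terms=60):
    # truncated product of q^i / (q^i - 1) for i = 1, ..., terms;
    # equals 1 / phi(1/q) in the limit
    out = 1.0
    for i in range(1, terms + 1):
        out *= q**i / (q**i - 1.0)
    return out

print(pi_q(2))  # ~3.4627
print(pi_q(4))  # ~1.4524
\end{verbatim}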
We will also need the following asymptotic estimates.
\begin{lemma} \label{lem:asymptotics}
Let $a \ge b \ge 0$ be integers and let $k : \mathbb{N} \rightarrow \mathbb{N}$ and $\tilde{k} : \mathbb{N} \rightarrow \mathbb{N}$ be linear functions such that $k(n) \geq \tilde{k}(n)$ for all $n \in \mathbb{N}$. We have the following estimates.
\begin{enumerate}[label=(\alph*)]\setlength\itemsep{0cm}
\item
\label{lem:asymptoticss}
$ {\dstirling{k(n)}{\tilde{k}(n)}_{q^{\ell}}} \sim q^{\ell \tilde{k}(n)(k(n)-\tilde{k}(n))}\pi(q^{\ell})$ \quad \textnormal{as $n \to +\infty$.}
\item \label{lem:asymptoticsq}
$ {\dstirling{a}{b}_{q^{\ell}}} \sim q^{\ell b(a-b)}$ \quad \textnormal{both as $q \to +\infty$ and $\ell \to +\infty$.}
\end{enumerate}
\end{lemma}
Note that similar asymptotic estimates as in Lemma~\ref{lem:asymptotics} have been proved in~\cite[Section 6]{gruica2022common}, and thus we omit the proofs.
\begin{theorem} \label{thm:sublimq}
Let $n \ge 3$ and $\ell,s \ge 1$ be integers and consider the sequence of vector spaces~$\smash{(\F_{q^{m}}^n)_{q \in Q}}$. We fix $1 \le k \le ns$ and we let $1 \le d < +\infty$.
\begin{itemize}
\item[(i)] We have
\begin{align} \label{eq:asymlowerBound_q}
\max\left\{ \liminf_{q \to +\infty} \left( 1-\frac{\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)}{q^{\ell(ns+1-k)}} \right),0 \right\} \leq \liminf_{q \to +\infty}\delta_q^D(\F_{q^{m}}^n,q^{\ell k},\ell,d).
\end{align}
\item[(ii)]
If $\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1) \in \Omega\left(q^{\ell(ns+1-k)}\right) \, \text{as} \, \, q \to +\infty, \, \text{then}$
\begin{align} \label{eq:asymupperBound_q}
\limsup_{q \to +\infty} \delta_q^D(\F_{q^{m}}^n,q^{\ell k},\ell,d)\leq \limsup_{q \to +\infty} \left( \frac{1}{1 + {\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)}{q^{-\ell(ns+1-k)}}}\right) <1.
\end{align}
\end{itemize}
In particular,
\begin{align*}
\lim_{q \to +\infty}\delta_q^D(\F_{q^{m}}^n,q^{\ell k},\ell,d) = \begin{cases}
1 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1) \in o(q^{\ell(ns+1-k)})$ as $q \to +\infty$,} \\
0 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1) \in \omega(q^{\ell(ns+1-k)})$ as $q \to +\infty$.}
\end{cases}
\end{align*}
\end{theorem}
\begin{proof}
From Lemma~\ref{lem:asymptotics} we obtain
\begin{align} \label{eq:asymlower_q}
\frac{(\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)-1)\dstirling{ns-1}{k-1}_{q^{\ell}}}{(q^{\ell}-1)\dstirling{ns}{k}_{q^{\ell}}} \sim \frac{\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)}{q^{\ell(ns+1-k)}} \quad \text{as} \, \, q \to +\infty.
\end{align}
Now (i) is an easy consequence of Corollary~\ref{cor:subbound} and the fact that $\delta_q^D(\F_{q^{m}}^n,q^{\ell k},\ell,d) \ge 0$.
For the second statement we consider the upper bound of Corollary~\ref{cor:subbound}. Together with Lemma~\ref{lem:asymptotics} we have
\begin{align*}
\left( \displaystyle \frac{\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)-1}{q^{\ell}-1} \right) \dstirling{ns-1}{k-1}_{q^{\ell}}^{2} \sim \textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)q^{2\ell(k-1)(ns-k)-\ell}
\end{align*}
and
\begin{align*}
\dstirling{ns}{k}_{q^{\ell}}\left( \dstirling{ns-1}{k-1}_{q^{\ell}}+\left(\displaystyle \frac{\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)-1}{q^{\ell}-1} -1 \right)\dstirling{ns-2}{k-2}_{q^{\ell}}\right) \\
\sim
q^{\ell(2k-2)(ns-k)-\ell}(q^{\ell(ns-k+1)} + \textbf{v}_q^{D}(\F_{q^{m}}^n,d-1) )
\end{align*}
as $q \to +\infty$. Hence we obtain
\begin{align*}
\frac{\left( \displaystyle \frac{\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)-1}{q^{\ell}-1} \right) \dstirling{ns-1}{k-1}_{q^{\ell}}^{2}}{\dstirling{ns}{k}_{q^{\ell}}\left( \dstirling{ns-1}{k-1}_{q^{\ell}}+\left( \displaystyle\frac{\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)-1}{q^{\ell}-1} -1 \right)\dstirling{ns-2}{k-2}_{q^{\ell}}\right)}
\sim \frac{\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)}{q^{\ell(ns-k+1)} + \textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)},
\end{align*}
as $q \to +\infty$.
Now, taking the limit superior as $q \to +\infty$ in the bound of~\eqref{eq:upperBoundsublim} yields
\begin{align*}
\limsup_{q \to +\infty}\delta_q^D(\F_{q^m}^n,q^{\ell k},\ell,d) &\le \limsup_{q \to +\infty} \left(1- \frac{\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)}{q^{\ell(ns-k+1)} + \textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)} \right).
\end{align*}
Since $q^{\ell k} \in \Omega\left(\frac{q^{\ell(ns+1)}}{\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)}\right) \, \text{as} \, \, q \to +\infty$, we have
\begin{align*}
\limsup_{q \to +\infty} \left(1- \frac{\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)}{q^{\ell(ns-k+1)} + \textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)} \right) = \limsup_{q\to +\infty}\frac{1}{1+\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)q^{-\ell(ns+1-k)}} <1 ,
\end{align*}
yielding the statement in~\eqref{eq:asymupperBound_q}.
\end{proof}
Note that the asymptotic estimates for the $q$-binomial coefficient given in Lemma~\ref{lem:asymptotics} are the same for $q \to +\infty$ and for $\ell \to +\infty$.
Hence we can give an analogous result to Theorem~\ref{thm:sublimq} but with the degree of linearity $\ell$ tending to infinity.
Further note that the parameters of the $q$-binomial coefficients involved in the upper and lower bounds on $\delta_q^D(\F_{q^{m}}^n,q^{\ell k},\ell,d)$ (see Corollary~\ref{cor:subbound}) can be written as linear functions in either $n$ or $s$, such that the assumptions of
Lemma~\ref{lem:asymptotics} are fulfilled. This then gives us two asymptotic results, one for $n \to +\infty$ and one for $s \to +\infty$. Since the proofs of Theorem~\ref{thm:sublimell}, Theorem~\ref{thm:sublimn} and Theorem~\ref{thm:sublims} follow the same arguments as the proof of Theorem~\ref{thm:sublimq}, we omit them.
\begin{theorem}\label{thm:sublimell}
Let $n \ge 3$, $s \ge 1$ be integers and let $q \in Q$. Consider the sequence of vector spaces $(\F_{q^{\ell s}}^n)_{\ell \geq 1}$. We fix $1 \le k \le ns$ and we let $1 \le d < +\infty $.
\begin{itemize}
\item[(i)] We have
\begin{align*}
\max\left\{ \liminf_{\ell \to +\infty} \left( 1-\frac{\textbf{v}_q^{D}(\F_{q^{\ell s}}^n,d-1)}{q^{\ell(ns+1-k)}} \right),0 \right\} \leq \liminf_{\ell \to +\infty}\delta_q^D(\F_{q^{\ell s}}^n,q^{\ell k},\ell,d).
\end{align*}
\item[(ii)]
If $\textbf{v}_q^{D}(\F_{q^{\ell s}}^n,d-1) \in \Omega\left(q^{\ell(ns+1-k)}\right) \, \text{as} \, \, \ell \to +\infty, \, \text{then}$
\begin{align*}
\limsup_{\ell \to +\infty} \delta_q^D(\F_{q^{\ell s}}^n,q^{\ell k},\ell,d)\leq \limsup_{\ell \to +\infty}\left(\frac{1}{1 + \textbf{v}_q^{D}(\F_{q^{\ell s}}^n,d-1)q^{-\ell(ns+1-k)}}\right) <1.
\end{align*}
\end{itemize}
In particular,
\begin{align*}
\lim_{\ell \to +\infty}\delta_q^D(\F_{q^{\ell s}}^n,q^{\ell k},\ell,d) = \begin{cases}
1 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q^{\ell s}}^n,d-1) \in o(q^{\ell(ns+1-k)})$ as $\ell \to +\infty$,} \\
0 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q^{\ell s}}^n,d-1) \in \omega(q^{\ell(ns+1-k)})$ as $\ell \to +\infty$.}
\end{cases}
\end{align*}
\end{theorem}
In the next theorem we consider the asymptotic density as the code length $n$ tends to infinity and the other parameters are treated as constants.
\begin{theorem}\label{thm:sublimn}
Let $s, \ell \ge 1$ and $1 \le d < +\infty $ be integers and let $q \in Q$. Consider the sequence of vector spaces $(\F_{q^{m}}^n)_{n \geq 1}$ and let $k : \mathbb{N} \rightarrow \mathbb{N}$ be a linear function such that $(k(n))_{n \ge 1}$ describes a sequence of integers where $1 \le k(n) \le ns$ for all $n \ge 1$.
\begin{itemize}
\item[(i)] We have
\begin{align*}
\max\left\{ \liminf_{n \to +\infty} \left( 1-\frac{\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)}{q^{\ell(ns+1-k)}} \right),0 \right\} \leq \liminf_{n \to +\infty}\delta_q^D(\F_{q^{m}}^n,q^{\ell k},\ell,d).
\end{align*}
\item[(ii)]
If $\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1) \in \Omega\left(q^{\ell(ns+1-k)}\right) \, \text{as} \, \, n \to +\infty, \, \text{then}$
\begin{align} \label{eq:asymupperBound_n}
\limsup_{n \to +\infty} \delta_q^D(\F_{q^{m}}^n,q^{\ell k},\ell,d)\leq \limsup_{n \to +\infty}\left(\frac{1}{1 + \textbf{v}_q^{D}(\F_{q^{m}}^n,d-1)q^{-\ell(ns+1-k)}}\right) <1.
\end{align}
\end{itemize}
In particular,
\begin{align*}
\lim_{n \to +\infty}\delta_q^D(\F_{q^{m}}^n,q^{\ell k},\ell,d) = \begin{cases}
1 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1) \in o(q^{\ell(ns+1-k)})$ as $n \to +\infty$,} \\
0 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q^{m}}^n,d-1) \in \omega(q^{\ell(ns+1-k)})$ as $n \to +\infty$.}
\end{cases}
\end{align*}
\end{theorem}
In the last theorem concerning the asymptotic density of $\F_{q^{\ell}}$-linear codes we let the degree $[\F_{q^{m}}\colon \F_{q^{\ell}}]$, denoted by $s$, go to infinity.
\begin{theorem}\label{thm:sublims}
Let $n \geq 3,$ $\ell \ge 1$ and $1 \le d < +\infty$ be integers and let $q \in Q$. Consider the sequence of vector spaces $(\F_{q^{\ell s}}^n)_{s \geq 1}$ and let $k : \mathbb{N} \rightarrow \mathbb{N}$ be a linear function such that $(k(s))_{s \ge 1}$ describes a sequence of integers with $1 \le k(s) \le ns$ for all $s \ge 1$.
\begin{itemize}
\item[(i)] We have
\begin{align*}
\max\left\{ \liminf_{s \to +\infty} \left( 1-\frac{\textbf{v}_q^{D}(\F_{q^{\ell s}}^n,d-1)}{q^{\ell(ns+1-k)}} \right),0 \right\} \leq \liminf_{s \to +\infty}\delta_q^D(\F_{q^{\ell s}}^n,q^{\ell k},\ell,d).
\end{align*}
\item[(ii)]
If $\textbf{v}_q^{D}(\F_{q^{\ell s}}^n,d-1) \in \Omega\left(q^{\ell(ns+1-k)}\right) \, \text{as} \, \, s \to +\infty, \, \text{then}$
\begin{align*}
\limsup_{s \to +\infty} \delta_q^D(\F_{q^{\ell s}}^n,q^{\ell k},\ell,d)\leq \limsup_{s \to +\infty}\left(\frac{1}{1 + \textbf{v}_q^{D}(\F_{q^{\ell s}}^n,d-1)q^{-\ell(ns+1-k)}}\right) <1.
\end{align*}
\end{itemize}
In particular,
\begin{align*}
\lim_{s \to +\infty}\delta_q^D(\F_{q^{\ell s}}^n,q^{\ell k},\ell,d) = \begin{cases}
1 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q^{\ell s}}^n,d-1) \in o(q^{\ell(ns+1-k)})$ as $s \to +\infty$,} \\
0 \quad &\textnormal{ if $\textbf{v}_q^{D}(\F_{q^{\ell s}}^n,d-1) \in \omega(q^{\ell(ns+1-k)})$ as $s \to +\infty$.}
\end{cases}
\end{align*}
\end{theorem}
\begin{remark} \label{rem:gilbvarsh2}
Since the term ${q^{\ell ns}}/{q^{\ell(ns+1)}}$ tends to zero both as $q \to +\infty$ and as $\ell \to +\infty$, Theorem~\ref{thm:sublimq} and Theorem~\ref{thm:sublimell} imply that a uniformly random $\F_{q^{\ell}}$-linear code in $\F_{q^m}^n$ of minimum distance at least $d$, for some integer $1 \le d < +\infty$, will attain the Gilbert-Varshamov bound with high probability for large $q$ and also for large linearity degree $\ell$.
However, when either $n$ or $s$ tend to infinity (and the other parameters are treated as constants), then it can easily be seen that the probability that $\F_{q^\ell}$-linear codes attain the Gilbert-Varshamov bound is upper bounded by $q^{\ell}/(q^{\ell}+1)$.
\end{remark}
\begin{theorem}
Let $(\F_{q^m}^n,D)$ be a metric space where $D$ is a translation-invariant metric on $\F_{q^m}^n$ and let $\ell$ be a divisor of $m$. The probability that a uniformly random $\F_{q^\ell}$-linear code in $(\F_{q^m}^n,D)$ satisfies the Gilbert-Varshamov bound tends to $1$ as $q \to +\infty$.
\end{theorem}
\section{Application I: Codes in the Hamming Metric} \label{sec:Hamming}
In this section we determine the asymptotic density of codes in $\F_{q^{m}}^{n}$, equipped with the Hamming metric. In particular, we study the probability that a uniformly random code in $\F_{q^{m}}^{n}$ of a given cardinality achieves the Singleton bound with equality as one of the four parameters $q,n,s$ or $\ell$ tends to infinity (where $m=s\ell$) and the other three parameters are treated as constants.
We start by introducing the needed preliminaries.
\begin{definition}
Let $x \in \F_{q^m}^n$. The \textbf{Hamming weight} of $x$ is $\wH(x)$ where $\wH$ is the function defined as
\begin{equation*}
\wH : \F_{q^m}^n \longrightarrow \mathbb{N}, \hspace{0.5em} x \mapsto |\{ i \in [n] : x_{i} \neq 0 \}|.
\end{equation*}
Let $x,y \in \F_{q^m}^n$, then the \textbf{Hamming distance} between $x$ and $y$ is $\dH(x,y) := \wH(x-y)$.
\end{definition}
Throughout this section, we are working in the metric space $(\F_{q^m}^n,\dH)$. As in Section~\ref{sec:prelim}, we denote nonlinear codes of cardinality $S$ in $\F_{q^m}^n$ and of minimum Hamming distance at least $d$ as $[\F_{q^m}^n,S,0,d]^{H}$-codes. Similarly, $\F_{q^\ell}$-linear codes of dimension $k$ and minimum Hamming distance at least $d$ are called $[\F_{q^m}^n,q^{\ell k},\ell,d]^{H}$-codes.
Recall that it is a desirable property for a code to have large cardinality and large minimum distance at the same time. The trade-off between the cardinality and the minimum distance of a Hamming-metric code is captured by the following famous result.
\begin{theorem}[Singleton bound; \text{see \cite[Theorem 5.4]{delsarte1978bilinear}}] \label{thm:singletonlikeHamming}
Let $0\leq \ell \leq m$ and let $\mC$ be a $[\F_{q^m}^n,q^{\ell k},\ell,d]^{H}$-code. We have $q^{\ell k} \le q^{m(n-d+1)}.$
\end{theorem}
A code in $(\F_{q^m}^n, \dH)$ is called an \textbf{MDS} (\textbf{maximum distance separable}) \textbf{code} if its cardinality meets the Singleton bound with equality.
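A classical family of examples is given by Reed--Solomon codes: for $n \le q^{m}$, an $\F_{q^m}$-linear Reed--Solomon code of dimension $k$ over $\F_{q^m}$ has minimum Hamming distance $d=n-k+1$ and thus cardinality $q^{mk}=q^{m(n-d+1)}$, i.e., it is MDS.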
To estimate the asymptotic density of MDS codes we need the size of the Hamming-metric ball and its estimates. It is well-known and easy to see that the volume of the Hamming-metric ball of radius $r$ is
\begin{equation} \label{eq:sizeHammingVolume}
\textbf{v}_{q}^{\textnormal{H}}(\F_{q^m}^n,r):= \sum_{i=0}^r\binom{n}{i}(q^{m}-1)^{i},
\end{equation}
for any $0 \le r < \infty$.
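For example, for $m=1$, $q=2$, $n=3$ and $r=1$, equation~\eqref{eq:sizeHammingVolume} gives $\textbf{v}_{2}^{\textnormal{H}}(\F_{2}^3,1)=\binom{3}{0}+\binom{3}{1}(2-1)=4$: the ball consists of its center together with the three vectors at Hamming distance one from it.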
The following lemma states the asymptotic estimates of the Hamming-metric ball as $q \to +\infty$, as $n\to +\infty$ and as $m\to +\infty$. It can easily be derived from equation~\eqref{eq:sizeHammingVolume}.
\begin{lemma} \label{lem:hamballasy}
The following estimates hold.
\begin{itemize}
\item[(i)] Let $0 \le r \le n$. We have $\textbf{v}_{q}^{\textnormal{H}}(\F_{q^m}^n,r) \sim \binom{n}{r}q^{rm}$ as $q \to +\infty$.
\item[(ii)] Let $0 \le r \le n$. We have $\textbf{v}_{q}^{\textnormal{H}}(\F_{q^m}^n,r) \sim \binom{n}{r}q^{rm}$ as $m \to +\infty$.
\item[(iii)] Let $0 \le r < +\infty$. We have $\textbf{v}_{q}^{\textnormal{H}}(\F_{q^m}^n,r) \sim \binom{n}{r}(q^{m}-1)^r$ as $n \to +\infty$.
\end{itemize}
\end{lemma}
\subsection{Nonlinear MDS Codes}
In this short subsection we determine the density of nonlinear MDS codes in $\F_q^n$ (i.e., we set $m=1$, as before) as either $q$ or $n$ tends to infinity. We summarize the obtained results in the following theorem.
\begin{theorem} \label{thm:nonlinearnHamming}
Let $d \ge 2$ be an integer.
\begin{itemize}
\item[(i)]
Let $n \ge 2$ be an integer. Then we have
$$\lim_{q \to +\infty} \delta_q^{H}(\F_{q}^n,q^{n-d+1},0,d)= 0.$$
\item[(ii)]
We have
$$\lim_{n \to +\infty} \delta_q^{H}(\F_{q}^n,q^{n-d+1},0,d)= 0.$$
\end{itemize}
In particular, nonlinear MDS codes in $\F_q^n$ are sparse both as $q \to +\infty$ and as $n \to +\infty$.
\end{theorem}
\begin{proof}
From the estimates given in Lemma~\ref{lem:hamballasy} we get
\begin{align*}
\lim_{q \to +\infty}\frac{q^{n}}{\textbf{v}_{q}^{\textnormal{H}}(\F_{q}^{n},d-1)q^{2(n-d+1)}} = \lim_{q \to +\infty}\frac{1}{\binom{n}{d-1}q^{n-d+1}} =0
\end{align*}
which, by Theorem~\ref{thm:nonLinearLimq}, gives the first result.
Similarly, we have
\begin{align*}
\lim_{n \to +\infty}\frac{q^{n}}{\textbf{v}_{q}^{\textnormal{H}}(\F_{q}^{n},d-1)q^{2(n-d+1)}} = \lim_{n \to +\infty}\frac{1}{\binom{n}{d-1}q^{n}}\left( \frac{q^{2}}{q-1}\right)^{d-1} =0
\end{align*}
which, together with Theorem~\ref{thm:nonLinearLimn}, implies the second result.
\end{proof}
\subsection{(Sub)linear MDS Codes}
Since we are now in the (sub)linear case, we fix a divisor $\ell$ of $m$ such that $\F_{q^{\ell}}$ is a subfield of $\F_{q^{m}}$ and $s:= [\F_{q^{m}} \colon \F_{q^{\ell}}]$. We then apply the results of Section~\ref{sec:(Sub)linear_asym} to derive density results of $[\F_{q^m}^n,q^{m(n-d+1)},\ell,d]^{H}$-codes when one of the four parameters $q, \ell, n$ or $s$ tends to infinity.
First we consider the asymptotic density of $\F_{q^{\ell}}$-linear MDS codes as the field size $q$ tends to infinity.
\begin{theorem} \label{thm:sublinHammingq}
Let $n, \ell, s,d \geq 1$ be integers such that $1 \leq d \leq n$.
Then we have
\begin{align*}
\lim_{q \to +\infty} \delta_q^{H}(\F_{q^{m}}^n,q^{m(n-d+1)},\ell,d)= 1,
\end{align*}
that is, MDS codes in $\F_{q^m}^n$ are dense as $q \to +\infty.$
\end{theorem}
\begin{proof}
From the estimate of the volume given in Lemma~\ref{lem:hamballasy} we have
\begin{align} \label{eq:sublinHamming}
\lim_{q \to +\infty} \frac{q^{m(n-d+1)} \, \textbf{v}_{q}^{\textnormal{H}}(\F_{q^{m}}^n,d-1)}{q^{\ell(ns+1)}} = \lim_{q \to +\infty} \frac{\binom{n}{d-1}}{q^{\ell}}=0
\end{align}
which by Theorem~\ref{thm:sublimq} gives the statement of the theorem.
\end{proof}
Analogously, by the asymptotic formulas given in Theorem~\ref{thm:sublimell}, Theorem~\ref{thm:sublinHammingq} reads as follows for increasing field extension degree $\ell$.
\begin{theorem} \label{thm:sublinHammingell}
Let $q \in Q$ be a prime power and suppose that $n \geq 3, s,d \geq 1$ are integers such that $1 \leq d \leq n$.
Then
\begin{align*}
\lim_{\ell \to +\infty} \delta_q^{H}(\F_{q^{\ell s}}^n,q^{\ell s(n-d+1)},\ell,d)= 1.
\end{align*}
\end{theorem}
If we let the code length $n$ tend to infinity, we obtain the following sparsity result. We state this result for completeness, even though it is well known that MDS codes do not exist at all for large $n$; see Remark \ref{rem:MDSconj}.
\begin{theorem} \label{thm:sublinHammingn}
Let $q \in Q$ be a prime power, let $ \ell, s \geq 1$ be integers and fix $d \geq 2$. We have
\begin{align*}
\lim_{n \to +\infty} \delta_q^{H}(\F_{q^{m}}^n,q^{m(n-d+1)},\ell,d)= 0.
\end{align*}
\end{theorem}
\begin{proof}
By Lemma~\ref{lem:hamballasy} we have
\begin{align*}
\textbf{v}_{q}^{\textnormal{H}}(\F_{q^{m}}^n,d-1) \sim \binom{n}{d-1}(q^{m}-1)^{d-1} \quad \textnormal{as} \, n \to + \infty.
\end{align*}
One easily checks that $\textbf{v}_{q}^{\textnormal{H}}(\F_{q^{m}}^n,d-1) \in \omega(q^{\ell (s(d-1)+ 1)}).$ This combined with Theorem~\ref{thm:sublimn} gives the statement of the theorem.
\end{proof}
The last parameter we consider is the degree $s$ of the field extension $\F_{q^{m}}/ \F_{q^{\ell}}$. For this parameter we will only upper bound the asymptotic density $\lim_{s \to +\infty} \delta_q^{H}(\F_{q^{\ell s}}^n,q^{\ell s(n-d+1)},\ell,d)$.
\begin{theorem} \label{thm:sublinHammings}
Let $q \in Q$ be a prime power, let $n \geq 3$ and $\ell \geq 1$ be integers and let $2 \leq d \leq n$.
Then
\begin{align*}
\limsup_{s \to +\infty} \delta_q^{H}(\F_{q^{\ell s}}^n,q^{\ell s(n-d+1)},\ell,d)\leq \frac{1}{1+ \binom{n}{d-1}q^{-\ell}} <1.
\end{align*}
\end{theorem}
\begin{proof}
From the estimate of $\textbf{v}_{q}^{\textnormal{H}}(\F_{q^{\ell s}}^n,d-1)$ given in Lemma~\ref{lem:hamballasy} we get
\begin{align} \label{eq:sublinHammings}
\lim_{s \to +\infty} \frac{q^{\ell s(n-d+1)} \cdot \textbf{v}_{q}^{\textnormal{H}}(\F_{q^{\ell s}}^n,d-1)}{q^{\ell(ns+1)}} = \frac{\binom{n}{d-1}}{q^{\ell}}.
\end{align}
Hence $\textbf{v}_{q}^{\textnormal{H}}(\F_{q^{\ell s}}^n,d-1) \in \Omega(q^{\ell (s(d-1)+ 1)})$ as $s\to + \infty$ and applying Theorem~\ref{thm:sublims} gives
\begin{align} \label{eq:upperBoundHamming}
\limsup_{s \to +\infty} \delta_q^{H}(\F_{q^{\ell s}}^n,q^{\ell s(n-d+1)},\ell,d) \leq \limsup_{s \to +\infty}\left( \frac{1}{1+\textbf{v}_{q}^{\textnormal{H}}(\F_{q^{\ell s}}^n,d-1)q^{-\ell(s(d-1)+1)}}\right).
\end{align}
Now the desired upper bound follows from \eqref{eq:sublinHammings} and \eqref{eq:upperBoundHamming}.
\end{proof}
\begin{remark}\label{rem:MDSconj}
Note that fixing the linearity degree of the code and studying the asymptotic density of MDS codes for the parameters $q,n, \ell$ and $s$ leads to different results. For growing field size, that is $q\to + \infty$ (or $\ell \to + \infty$), $[\F_{q^{m}}^n,q^{m(n-d+1)},\ell,d]^{H}$-codes are dense. On the contrary, if we let $n$ grow, then MDS codes are sparse. This is in line with the MDS conjecture, which implies that no non-trivial $[\F_{q^m}^{n}, q^{m(n-d+1)}, \ell, d]^{H}$-codes exist if $n>q^m+2$.
Considering the asymptotic density for $s \to +\infty$, one can see that $[\F_{q^{m}}^n,q^{m(n-d+1)},\ell,d]^{H}$-codes are not dense, but the question whether MDS codes are sparse or not for large $s$ remains open.
\end{remark}
\section{Application II: Codes in the Rank Metric} \label{sec:Rank}
In this section we apply the results of Sections~\ref{sec:bounds} and~\ref{sec:asy} to codes in the rank metric. We start by quickly recalling the needed preliminaries on rank-metric codes.
\begin{definition}
Let $x \in \F_{q^m}^n$. The \textbf{rank weight} of $x$ is defined as the dimension of the $\mathbb{F}_{q}$-span of its entries. More formally, for $x \in \F_{q^m}^n$ we define $\wrk(x)$ as
\begin{equation*}
\wrk : \F_{q^m}^n \longrightarrow \mathbb{N}, \hspace{0.5em} x \mapsto \dim_{\mathbb{F}_{q}}\langle x_{1}, \dots , x_{n} \rangle.
\end{equation*}
The \textbf{rank distance} between $x,y \in \F_{q^m}^n$ is then defined as $\drk(x,y):=\wrk(x-y)$.
\end{definition}
One can show that $\drk$ is a translation-invariant metric on $\F_{q^m}^n$ and throughout this section we always work with the metric space $(\F_{q^m}^n, \drk)$. At times we will take advantage of the following observation.
\begin{remark} \label{rem:matrixiso}
When we equip the matrix space $\F_q^{m\times n}$ with the metric $\tilde{D}^\textnormal{rk}(X,Y):=\rk(X-Y)$, for all $X,Y \in \F_q^{m\times n}$, then
a suitable vector space isomorphism from $(\F_{q^m}^n, \drk)$ to $(\F_q^{m\times n}, \tilde{D}^\textnormal{rk})$ is also an isometry; see e.g.~\cite{gorla2018codes}.
However, the linearity degree of a code is generally not preserved. In particular, the image of an $\F_{q^\ell}$-linear code in $\F_{q^m}^{n}$ is ``only'' $\F_q$-linear in $\F_q^{m\times n}$, for any $1\leq \ell \leq m$. Nonetheless, for $\ell\in \{0,1\}$, the set of $[\F_{q^m}^n, S, \ell, d]^\rk$-codes is in one-to-one-correspondence with the set of $[\F_{q^n}^m, S, \ell, d]^\rk$-codes.\footnote{If $\ell$ is also a divisor of $n$, then there exist $\F_{q^\ell}$-isomorphisms from $\F_{q^m}^n$ to $\F_{q^\ell}^{s\times n}\cong \F_{q^\ell}^{sn}$, and from $\F_{q^\ell}^{sn}\cong \F_{q^\ell}^{m\times (n/\ell)}$ to $ \F_{q^n}^{m}$. Defining the $\F_q$-rank metric in a suitable manner on $\F_{q^\ell}^{sn}$, we get that these isomorphisms are isometries again. Then the set of $[\F_{q^m}^n, S, \ell, d]^\rk$-codes is in one-to-one-correspondence with the set of $[\F_{q^n}^m, S, \ell, d]^\rk$-codes, also for larger $\ell$.}
\end{remark}
As usual, we denote a nonlinear code in $(\F_{q^m}^n, \drk)$ of cardinality $S$ and minimum distance at least $d$ as a $[\F_{q^m}^n,S,0,d]^\textnormal{rk}$-code. If the code is additionally $\F_{q^\ell}$-linear, then we say it is a $[\F_{q^m}^n,S,\ell,d]^\textnormal{rk}$-code.
A rank-metric code cannot have both large dimension and large minimum distance. The following result by Delsarte shows the relation between these two quantities.
\begin{theorem}[Singleton-like bound; \text{see \cite[Theorem 5.4]{delsarte1978bilinear}}] \label{thm:singletonlike}
Let $\mC \subseteq \F_{q^m}^n$ be a rank-metric code. We have $|\mC| \le q^{\max\{n,m\}(\min\{n,m\}-\drk(\mC)+1)}.$
\end{theorem}
We call a rank-metric code meeting the bound in Theorem~\ref{thm:singletonlike} with equality an \textbf{MRD} (\textbf{maximum rank distance}) code.
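The classical examples are Gabidulin codes: for $n \le m$, an $\F_{q^m}$-linear Gabidulin code of dimension $k$ over $\F_{q^m}$ has minimum rank distance $d=n-k+1$ and thus cardinality $q^{m(n-d+1)}$, i.e., it is MRD; see e.g.~\cite{gabidulin}.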
\begin{remark} \label{rem:quasiMRD}
Let $\ell$ be a divisor of $m$ and let $s=m/\ell$. The dimension over $\F_q$ of any $\F_{q^\ell}$-linear rank-metric code $\mC \subseteq \F_{q^{\ell s}}^n$ has to be divisible by $\ell$. In particular, if the dimension of $\mC$ is $k$ over $\F_{q^\ell}$ where $0 \le k \le ns$ and $d=\drk(\mC)$, then from the bound in Theorem \ref{thm:singletonlike}, it follows that
\begin{align} \label{eq:largestk}
\ell k \le \max\{n,\ell s\}(\min\{n,\ell s\}-d+1).
\end{align}
If $\ell s \le n$, then the largest integer $k$ that satisfies \eqref{eq:largestk} is
\begin{align} \label{eq:largestk2}
k^{*}=\left\lfloor \frac{n(\ell s -d+1)}{\ell}\right\rfloor.
\end{align}
Clearly, codes attaining the largest possible dimension $k^*$ in~\eqref{eq:largestk2} are not necessarily MRD (this depends on whether the fraction on the RHS of \eqref{eq:largestk2}, evaluated at the parameters of the considered code, is an integer). A code that attains the dimension $k^{*}$ but is not MRD is called a \textbf{quasi-MRD code}.
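For a concrete instance, let $\ell=3$, $s=1$, $n=4$ and $d=2$, so that $\ell s=3 \le n$. Then \eqref{eq:largestk2} gives $k^{*}=\lfloor 4 \cdot 2/3 \rfloor =2$, whereas an MRD code would require $\ell k = \max\{n,\ell s\}(\min\{n,\ell s\}-d+1)=8$, which is not a multiple of $\ell=3$; hence every code of dimension $k^{*}$ and minimum distance $d$ is quasi-MRD for these parameters.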
\end{remark}
In order to make use of the results in Sections~\ref{sec:bounds} and~\ref{sec:asy} we need the volume of the ball in the rank
metric and its asymptotic estimates. The volume of the rank-metric ball of radius $r$ is given by
\begin{equation} \label{eq:rkballasy}
\bbqrk{\F_{q^m}^n,r}:= \sum_{i=0}^r \qbin{n}{i}{q} \prod_{j=0}^{i-1}(q^{m}-q^{j})
\end{equation}
for any $0 \le r \le \min\{n,m\}$; see for example \cite{gabidulin}.
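For instance, for $q=2$, $m=n=2$ and $r=1$, formula~\eqref{eq:rkballasy} gives $\bbqrk{\F_{4}^{2},1}=1+\qbin{2}{1}{2}\,(2^{2}-1)=1+3\cdot 3=10$: the ball around $0$ contains the zero vector together with the nine vectors of rank one, whose two entries span a common one-dimensional $\F_{2}$-subspace of $\F_{4}$.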
As we are mainly interested in asymptotic results we need the following lemma, which states the well-known asymptotic estimates of the rank-metric ball as $q \to +\infty$, as $m \to +\infty$ and as~$n \to +\infty$.
\begin{lemma} \label{lem:rkballasy}
The following estimates hold.
\begin{itemize}
\item[(i)] Let $0 \le r \le \min\{n,m\}$. We have $\bbqrk{\F_{q^m}^n,r} \sim q^{r(m+n-r)}$ as $q \to +\infty$.
\item[(ii)] Let $0 \le r \le n$. We have $\bbqrk{\F_{q^m}^n,r} \sim
\qbin{n}{r}{q}q^{rm}$ as $m \to +\infty$.
\item[(iii)] Let $0 \le r \le m$. We have $\bbqrk{\F_{q^m}^n,r} \sim \qbin{m}{r}{q}q^{rn}$ as $n \to +\infty$.
\end{itemize}
\end{lemma}
\subsection{Nonlinear MRD Codes}
In this short subsection we investigate the density of (possibly nonlinear) MRD codes both as $q \to +\infty$ and $n \to +\infty$.
Note that by Remark~\ref{rem:matrixiso} the asymptotic densities of nonlinear MRD codes in $\F_{q^m}^n$ for $n \to +\infty$ and for $m \to +\infty$ coincide (up to transposition), and thus we will only treat the case where $n \to +\infty$ in this subsection.
In the following theorem we provide asymptotic results on the density function of nonlinear MRD codes both as $q \to +\infty$ and $n \to +\infty$.
\begin{theorem} \label{thm:nonlinmrdqn}
Let $d \ge 2$ be an integer.
\begin{itemize}
\item[(i)] Let $m \ge 2$ and $n \ge 2$ be integers. We have
\begin{align*}
\lim_{q \to +\infty}\delta_q^{\textnormal{rk}} (\F_{q^m}^n, q^{\max\{n,m\}(\min\{n,m\}-d+1)},0,d) =0.
\end{align*}
\item[(ii)] Let $m \ge 2$ be an integer and suppose that $d \le m$. We have
\begin{align*}
\lim_{n \to +\infty}\delta_q^{\textnormal{rk}} (\F_{q^m}^n, q^{n(m-d+1)},0,d) =0.
\end{align*}
\end{itemize}
Thus, nonlinear MRD codes are sparse both as $q \to +\infty$ and as $n \to +\infty$.
\end{theorem}
\begin{proof}
From the estimate given in Lemma~\ref{lem:rkballasy} we get \begin{align*}
\lim_{q \to +\infty} \frac{q^{mn}}{\bbqrk{\F_{q^m}^n,d-1}q^{2\max\{n,m\}(\min\{n,m\}-d+1)}} = \lim_{q \to +\infty} \frac{1}{q^{(d-1)(m+n-2\max\{n,m\}-d+1)+mn}} = 0
\end{align*}
which by Theorem~\ref{thm:nonLinearLimq} proves the first result.
Similarly, we have
\begin{align*}
\lim_{n \to +\infty} \frac{q^{mn}}{\bbqrk{\F_{q^m}^n,d-1}q^{2n(m-d+1)}} = \lim_{n \to +\infty} \frac{1}{\qbin{m}{d-1}{q}q^{n(m-d+1)}} =0
\end{align*}
where we used the asymptotic estimate for $n \to +\infty$ in Lemma~\ref{lem:rkballasy}. The second statement in the theorem is then a consequence of Theorem~\ref{thm:nonLinearLimn}.
\end{proof}
\subsection{(Sub)linear MRD Codes}
The problem of determining whether ($\F_q$-linear and $\F_{q^m}$-linear) MRD codes are dense or sparse has been studied before, and we start this subsection by revisiting the results and approaches that have been developed so far (see e.g.~\cite{antrobus2019maximal,byrne2020partition,gluesing2020sparseness,gruica2022common,gruica2022rank,neri2018genericity}), one of which is the application of the results presented in Section~\ref{sec:bounds}. We give a short recap of some of the results obtained with the other approaches in the language of this paper.
In~\cite{gruica2022common} it was shown that $[\F_{q^m}^n,q^{m(n-d+1)},1,d]$-MRD codes are sparse as $q \to +\infty$ unless $d=1$ or $n=d=2$. Together with a result from~\cite{antrobus2019maximal}, where the exact asymptotic density for the case $n=d=2$ was computed using the theory of spectrum-free matrices, the density question for $\F_q$-linear MRD codes as $q\to +\infty$ is fully solved. Moreover, in~\cite{gluesing2020sparseness}, an exact asymptotic estimate for the density function of $[\F_{q^m}^n,q^{m(n-d+1)},1,d]$-MRD codes as $q \to +\infty$ was provided for the case $m=n=d=3$. This result was later generalized in~\cite{gruica2022rank} to any $m=n=d$ that is prime.
Both the approach of~\cite{gluesing2020sparseness} and the one of~\cite{gruica2022rank} are based on the connection between full-rank MRD codes and semifields.
Finally, using the Schwartz-Zippel Lemma, in~\cite{neri2018genericity} it was shown that $\F_{q^m}$-linear MRD codes are dense as $m \to +\infty$. For $\F_q$-linear MRD codes it has only been shown that they are \emph{not} dense as $m \to +\infty$ (see~\cite{antrobus2019maximal,byrne2020partition,gruica2022common}). Whether they are sparse or not is to this day still an open question.
As the density of MRD codes has already been investigated rather thoroughly, in this subsection we will just close the gap by discussing the remaining asymptotic density results. We start by looking at the sublinear case for the field size $q$ tending to infinity. As explained in Remark~\ref{rem:quasiMRD}, when $n > \ell s$, then the codes of maximum possible dimension are not necessarily MRD codes. For this reason, the following theorem is split into two parts.
\begin{theorem} \label{thm:sublinmrdq}
Let $n$, $\ell$ and $s$ be positive integers and let $2 \le d \le n$.
\begin{itemize}
\item[(i)] If $\ell s \ge n$ then we have
\begin{align*}
\lim_{q \to +\infty} \delta_q^\textnormal{rk}(\F_{q^{\ell s}}^n,q^{\ell s(n-d+1)},\ell,d) =
\begin{cases}
1 \quad &\textnormal{ if $\ell > (d-1)(n-d+1)$,} \\
0 \quad &\textnormal{ if $\ell < (d-1)(n-d+1)$.}
\end{cases}
\end{align*}
Moreover, if $\ell = (d-1)(n-d+1)$ then $\lim_{q \to +\infty} \delta_q^\textnormal{rk}(\F_{q^{\ell s}}^n,q^{\ell s(n-d+1)},\ell,d) \le 1/2$.
\item[(ii)] If $n > \ell s$ then we have
\begin{align*}
\lim_{q \to +\infty} \delta_q^\textnormal{rk}(\F_{q^{\ell s}}^n,q^{\ell k},\ell,d) =
\begin{cases}
1 \quad &\textnormal{ if $\ell > (d-1)(\ell s -d+1)+r$,} \\
0 \quad &\textnormal{ if $\ell < (d-1)(\ell s -d+1)+r$.}
\end{cases}
\end{align*}
where $k:=\lfloor n(\ell s-d+1)/\ell \rfloor$ and $r:=n(d-1)-\ell \lceil n(d-1)/\ell \rceil $.
Moreover, if $\ell = (d-1)(\ell s -d+1)+r$ then $\lim_{q \to +\infty} \delta_q^\textnormal{rk}(\F_{q^{\ell s}}^n,q^{\ell k},\ell,d) \le 1/2$.
\end{itemize}
\end{theorem}
\begin{proof}
For the first part of the theorem, note that we have
\begin{align*}
\lim_{q \to +\infty} \frac{\bbqrk{\F_{q^{\ell s}}^n,d-1}q^{\ell s (n-d+1)}}{q^{\ell(ns+1)}}= \lim_{q \to +\infty} q^{(d-1)(n-d+1)-\ell} = 0
\end{align*}
if and only if $\ell > (d-1)(n-d+1)$.
On the other hand,
\begin{align*}
\lim_{q \to +\infty} \frac{q^{\ell(ns+1)}}{\bbqrk{\F_{q^{\ell s}}^n,d-1}q^{\ell s (n-d+1)}}= \lim_{q \to +\infty} \frac{1}{q^{(d-1)(n-d+1)-\ell}} = 0
\end{align*}
if and only if $\ell < (d-1)(n-d+1)$.
Finally, the case $\ell = (d-1)(n-d+1)$ is another easy application of Theorem~\ref{thm:sublimq}.
To prove the second part of the theorem, we proceed analogously to the arguments for the first part, where we additionally use the fact that $k = \lfloor{ n(\ell s-d+1)}/{\ell}\rfloor = ns-\lceil n(d-1)/\ell \rceil$.
\end{proof}
We now turn to the question of whether $\mathbb{F}_{q^{\ell}}$-linear MRD codes in $\F_{q^{\ell s}}^n$ are dense or sparse as $\ell$, $s$ and~$n$ tend to infinity.
Recall that in~\cite{neri2018genericity} it was shown that $\F_{q^m}$-linear MRD codes are dense as $m \to +\infty$. The following result generalizes this fact, and shows that also $\F_{q^\ell}$-linear MRD codes in $\F_{q^{\ell s }}^n$ are dense as $\ell \to +\infty$.
\begin{theorem}
Let $q \in Q$ be a prime power and suppose that $n \geq 3$ and $s,d \geq 1$ are integers with $1 \leq d \leq n$.
Then
$$\lim_{\ell \to +\infty} \delta_q^\textnormal{rk}(\F_{q^{\ell s}}^n,q^{ \ell s(n-d+1)},\ell,d)= 1,$$
i.e., $\F_{q^\ell}$-linear MRD codes in $\F_{q^{\ell s }}^n$ are dense as $\ell \to +\infty$.
\end{theorem}
\begin{proof}
By~\eqref{eq:rkballasy} we have
\begin{align*}
\bbqrk{\F_{q^{\ell s}}^n,d-1} \sim \qbin{n}{d-1}{q}q^{(d-1)\ell s} \quad \textnormal{ as $\ell \to +\infty$.}
\end{align*}
Therefore $q^{\ell s(n-d+1)} \in o\left({q^{\ell (ns+1)}}/{\bbqrk{\F_{q^{\ell s}}^n,d-1}}\right)$ as $\ell \to +\infty$, and Theorem~\ref{thm:sublimell} concludes the proof.
\end{proof}
\begin{theorem} \label{thm:mrdlims}
Let $q \in Q$ be a prime power and suppose that $n \geq 3$ and $\ell,d \geq 1$ are integers with $2 \leq d \leq n$.
Then
$$\limsup_{s \to +\infty} \delta_q^\textnormal{rk}(\F_{q^{\ell s}}^n,q^{\ell s(n-d+1)},\ell,d) \le \frac{q^\ell}{q^\ell+\qbin{n}{d-1}{q}} < 1.$$
\end{theorem}
\begin{proof}
We will apply Theorem~\ref{thm:sublims} for proving the statement. First note that by~\eqref{eq:rkballasy} we have
\begin{align*}
\bbqrk{\F_{q^{\ell s}}^n,d-1} \sim \qbin{n}{d-1}{q}q^{(d-1)\ell s} \quad \textnormal{ as $s \to +\infty$.}
\end{align*}
In particular, we have
\begin{align*}
1- \frac{\bbqrk{\F_{q^{\ell s}}^n,d-1}}{q^{\ell(ns-s(n-d+1)+1)} + \bbqrk{\F_{q^{\ell s}}^n,d-1}} = \frac{q^{\ell(s(d-1)+1)}}{q^{\ell(s(d-1)+1)} + \bbqrk{\F_{q^{\ell s}}^n,d-1}} \sim \frac{q^\ell}{q^\ell+\qbin{n}{d-1}{q}}
\end{align*}
as $s \to +\infty$, which proves the theorem.
\end{proof}
We now investigate the density of codes in $\F_{q^{\ell s}}^n$ as their vector length $n$ tends to infinity. Recall again that by Remark \ref{rem:quasiMRD}, in this setting, codes of the largest possible dimension are not necessarily MRD.
\begin{theorem} \label{thm:mrdlimn}
Let $q \in Q$ be a prime power, let $\ell,s \geq 1$ be integers and fix $2 \le d \le \ell s$.
We have
$$\limsup_{n \to +\infty} \delta_q^\textnormal{rk}(\F_{q^{\ell s}}^n,q^{\ell k(n)},\ell,d) \le \frac{1}{1+\qbin{m}{d-1}{q}q^{-2\ell}} < 1,$$
where $m=\ell s$ and $k(n):=\lfloor{ n(\ell s-d+1)}/{\ell}\rfloor$ for all $n \ge 2$.
\end{theorem}
\begin{proof}
Applying Theorem \ref{thm:sublimn} and the asymptotic estimate as $n \to +\infty$ we get
\begin{align} \label{eq:samesame}
\limsup_{n \to +\infty} \delta_q^\textnormal{rk}(\F_{q^{\ell s}}^n,q^{\ell k(n)},\ell,d) \le \limsup_{n \to +\infty} \frac{1}{1+\qbin{m}{d-1}{q}q^{n(d-1)-\ell \lceil n(d-1)/\ell \rceil -\ell}},
\end{align}
where we used that $k(n) = \lfloor{ n(\ell s-d+1)}/{\ell}\rfloor = ns-\lceil n(d-1)/\ell \rceil$. Since $0 \le \ell \lceil n(d-1)/\ell \rceil -n(d-1) \le \ell$ we infer the statement in the theorem.
\end{proof}
Recall that by Remark~\ref{rem:matrixiso} there is a one-to-one correspondence between $\F_q$-linear MRD codes in $\F_{q^m}^n$, and $\F_q$-linear MRD codes in $\F_{q^n}^m$. Therefore, for $\ell=1$, the upper bound for the density as $m \to +\infty$ in Theorem~\ref{thm:mrdlims} and the upper bound for the density as $n \to +\infty$ in~\eqref{eq:samesame} are the same (up to exchanging $n$ and $m$).\footnote{Analogously for larger $\ell$ if $\ell$ divides $n$.}
\section{Application III: Codes in the Sum-Rank Metric}\label{sec:SumRank}
In the scope of this paper, we only investigate sum-rank-metric codes $\mC$ in $\F_{q^m}^n$ that consist of block vectors $[x_1 | \dots | x_t]^\top \in \F_{q^m}^n$ where all blocks have equal size such that $x_i \in \F_{q^m}^{\eta}$ for all $i \in [t]$ and $n=\eta t$.
We define the sum-rank weight and sum-rank distance as follows.
\begin{definition}\label{def:sumrk}
Let $x = [x_1 | \dots | x_t]^\top \in \F_{q^m}^n$. The ($t$-)\textbf{sum-rank weight} of $x$ is $ \omega^{sr,t}(x)$ where~$ \omega^{sr,t}$ is the function defined as
\begin{equation*}
\omega^{sr,t} : \F_{q^m}^n \longrightarrow \mathbb{N}, \hspace{0.5em} x \mapsto \sum_{i=1}^{t}\omega^{rk}(x_{i}),
\end{equation*}
where $\wrk(x_{i}) := \dim_{\F_{q}}\langle x_{i,1}, \dots ,x_{i,\eta} \rangle $ denotes the dimension of the $\F_{q}$-span of the entries of $x_{i}$.
For vectors $x,y \in \F_{q^m}^n$ the ($t$-)\textbf{sum-rank distance} is $\dsr(x,y):=\omega^{sr,t}(x-y)$.
\end{definition}
Note that we clearly have that $\wrk(x_{i}) \leq \min\{m,\eta\}$ for $x_i \in \F_{q^m}^{\eta}$.
As usual, if $\mC \subseteq \F_{q^m}^n$ is an $\F_{q^{\ell}}$-linear subspace of dimension $k$ and minimum distance at least $d$ we say that $\mC$ is a $[\F_{q^m}^n,q^{\ell k},\ell,d]^{sr,t}$-code, and if it is (possibly) nonlinear and has cardinality $S$, we say it is a $[\F_{q^m}^n,S,0,d]^{sr,t}$-code.
\begin{remark} \label{rem:specialsumrkinst}
Note that if we set $\eta=1$ in Definition~\ref{def:sumrk}, then the sum-rank weight of a vector $x \in \F_{q^m}^n$ is the number of nonzero entries of $x$, and thus it reduces to the Hamming metric. On the other hand, if $t=1$ then it is easy to see that the sum-rank weight of a vector is the standard rank weight. Therefore, codes in the Hamming metric and codes in the rank metric can be seen as special instances of codes in the sum-rank metric.
\end{remark}
\begin{theorem}[Singleton-type bound; \text{see \cite[Theorem 3.2]{byrne2021Fundamental}}] \label{thm:sumrksingl}
Let $\mC \subseteq \F_{q^m}^n$ be a code in the sum-rank metric. Then the cardinality of $\mC$ is upper bounded by
\begin{equation} \label{eq:Singletonlike_sr}
|\mC| \leq q^{\max\{m, \eta \}(t\min\{m,\eta \}-d+1)}.
\end{equation}
\end{theorem}
A sum-rank-metric code is called \textbf{MSRD} (\textbf{maximum sum-rank distance}) code if its cardinality meets the Singleton-type bound in Theorem~\ref{thm:sumrksingl} with equality.
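Note that for $m \ge \eta$ we have $\max\{m,\eta\}=m$ and $t\min\{m,\eta\}=t\eta=n$, so that the bound in \eqref{eq:Singletonlike_sr} takes the form $|\mC| \le q^{m(n-d+1)}$; this is the form in which the bound will appear in most of the statements below.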
\begin{remark} \label{rem:quasiMSRD}
Let $\ell$ be a divisor of $m$ and let $s=[\F_{q^{m}}\colon \F_{q^{\ell}}]$. The $\F_q$-dimension of any $\F_{q^\ell}$-linear sum-rank-metric code $\mC \subseteq \F_{q^{\ell s}}^n$ has to be divisible by $\ell$. In particular, if $\dim_{\F_{q^{\ell}}}(\mC)=k$, where $0 \le k \le ns$, then from the bound in equation \eqref{eq:Singletonlike_sr}, we have
\begin{align} \label{eq:largestk_sr}
\ell k \le \max\{\ell s, \eta \}(t\min\{\ell s ,\eta \}-d+1),
\end{align}
where $d$ denotes the minimum sum-rank distance of $\mC$. If $\ell s \le \eta$, then the largest integer $k$ that satisfies \eqref{eq:largestk_sr} is
\begin{align*}
k^{*}:=\left\lfloor \frac{\eta(\ell s t -d+1)}{\ell}\right\rfloor.
\end{align*}
Clearly, a code $\mC$ with $\dim_{\F_{q^{\ell}}}(\mC)= k^{*}$ is not necessarily MSRD. A code $\mC$ with $\dim_{\F_{q^{\ell}}}(\mC)= k^{*}$ that does not attain the Singleton bound with equality is called a \textbf{quasi-MSRD code}.
\end{remark}
In what follows, we denote by $U_{r}$ the set of all ordered decompositions of a number $r$ into exactly $t$ nonnegative parts, each bounded by $\min\{m,\eta\}$. This set will be needed for expressing the volume of the sum-rank-metric ball.
\begin{notation}
Let $n \geq 3$ and $0 \le r \le t \min\{m,\eta\}$. Suppose that $t \geq 1$ is an integer and let $n = \eta t$. We define
\begin{equation*}
U_{r}:= \left\{u=(u_{1}, \dots, u_{t}) \in \mathbb{N}_{0}^{t} \bigm|\, \sum_{i=1}^{t}u_{i}=r, u_{i} \leq \min\{m,\eta\} \textnormal{ for all } i \in [t] \right\}.
\end{equation*}
\end{notation}
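For example, for $t=2$ and $\min\{m,\eta\}=2$ we have $U_{3}=\{(1,2),(2,1)\}$ and $U_{4}=\{(2,2)\}$; note that the elements of $U_{r}$ are ordered tuples, so $(1,2)$ and $(2,1)$ are counted separately.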
The following lemma gives a closed formula for the volume of the sum-rank-metric ball of a given radius.
\begin{lemma}\cite{byrne2021Fundamental}
Let $0 \leq r \le t\min\{m,\eta\}$. We have
\begin{equation*}
\bbqsr{\F_{q^m}^n, r} := \displaystyle\sum_{h=0}^r \displaystyle\sum_{u \in U_{h}} \displaystyle\prod_{i=1}^t\dstirling{\eta}{u_{i}}_{q} \prod_{j=0}^{u_{i}-1}(q^{m}-q^{j}).
\end{equation*}
\end{lemma}
The following lemma states the asymptotic estimates of the volume of the sum-rank-metric ball as $q$ goes to infinity (and the other parameters are treated as constants).
\begin{lemma} \label{lem:srkballasy}
Let $t$ be a divisor of $n$, let $\eta = \frac{n}{t}$ and let $0 \le r \le t\min\{m,\eta\}$. We have
\begin{align*}
\bbqsr{\F_{q^m}^n, r} \sim \binom{t}{\tilde{z}} q^{\frac{\tilde{z}^2}{t}-\tilde{z}+r(m+\eta -\frac{r}{t})}
\quad \textnormal{ as $q \to +\infty$,}
\end{align*}
where $\tilde{z} \in \{0,\dots,t-1\}$ satisfies $\tilde{z} \equiv r \pmod t$.
\end{lemma}
\begin{proof}
Note that for any $u = (u_1,\dots,u_t) \in U_{h}$ we have
\begin{align*}
\prod_{i=1}^t \qbin{\eta}{u_i}{q} \prod_{j=0}^{u_i-1} (q^m-q^j) \sim q^{\sum_{i=1}^t u_i(m+\eta-u_i)} = q^{h(\eta +m)-\sum_{i=1}^t u_{i}^{2}} \quad \textnormal{ as } q \to +\infty.
\end{align*}
Let $z \in \{0,\dots,t-1\}$ with $z \equiv h \pmod t$. Using Lagrange multipliers one sees that the maximum of the real-valued function $f:\R^t \longrightarrow \R, \, f : (x_1, \dots, x_t) \mapsto -\sum_{i=1}^t x_{i}^{2}$ in the region of $\R^t$ constrained by
\begin{align*}
\sum_{i=1}^t x_i = h, \quad 0 \le x_i \le \eta \textnormal{ for all } i \in \{1, \dots, t\}
\end{align*}
is attained for $x^{*} \in \R^t$ with $x^{*}_{1} = \dots = x^{*}_{t}= \frac{h}{t}$. Note that $f$ is concave in all $t$ variables, hence the integer tuple $u^{*} \in U_{h}$ at which $f$ reaches its maximum over $U_{h}$ has exactly $z$ entries equal to $\lfloor \frac{h}{t} \rfloor +1$ and the remaining $t-z$ entries equal to $\lfloor \frac{h}{t} \rfloor$.
This gives
\begin{align*}
\max_{u \in U_{h}}\left( -\sum_{i=1}^t u_{i}^{2} \right) &= -z(\left\lfloor \frac{h}{t} \right\rfloor+1)^{2}-(t-z)\left\lfloor \frac{h}{t} \right\rfloor^{2}\\
&=- 2\left\lfloor \frac{h}{t} \right\rfloor z- \left\lfloor \frac{h}{t} \right\rfloor^{2}t-z \\
&= -\left\lfloor \frac{h}{t} \right\rfloor(z+h)-z\\
&= -\frac{(h-z)(h+z)}{t}-z = -\frac{h^{2}-z^{2}}{t}-z.
\end{align*}
Hence
\begin{align*}
\displaystyle\sum_{u \in U_{h}} \displaystyle\prod_{i=1}^t\dstirling{\eta}{u_{i}}_{q} \prod_{j=0}^{u_{i}-1}(q^{m}-q^{j}) \sim \binom{t}{z}q^{h(m + \eta)-z-h^{2}/t + z^{2}/t} \quad \textnormal{ as $q \to +\infty$.}
\end{align*}
Write $p(h) = h(mt+n-h)$ as a polynomial in $h$ with roots at $h=0$, $h=mt+n$ and a maximum at $h^{*}= \frac{mt+n}{2}$. Note that $p$ is monotonically increasing on $\left[0, \frac{mt+n}{2}\right)$. For $h \le r \le t\min\{\eta,m\}$ we have
$$h \le r \le \frac{mt+n}{2},$$
from which we can conclude that
$$ \bbqsr{\F_{q^m}^n, r} \sim \binom{t}{\tilde{z}} q^{\frac{\tilde{z}^{2}}{t}-\tilde{z}+r(m+\eta -\frac{r}{t}) },$$
where $\tilde{z} \equiv r \pmod t$.
\end{proof}
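As a quick sanity check of Lemma~\ref{lem:srkballasy}, take $t=2$ and $r=3$, so that $\tilde{z}=1$: the two dominant tuples $(1,2),(2,1) \in U_{3}$ each contribute $q^{3(m+\eta)-5}$, in accordance with $\binom{2}{1}q^{\frac{1}{2}-1+3(m+\eta-\frac{3}{2})}=2\,q^{3(m+\eta)-5}$.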
As we have seen in Remark \ref{rem:specialsumrkinst}, both the Hamming and the rank metric are special instances of the sum-rank metric. Analyzing the asymptotic densities of (sub)linear codes in these two metrics, we can infer that their behavior differs, especially when looking at $q \to +\infty$. This interesting fact motivates us to generalize and to analyze the asymptotic density as $q$ tends to infinity in the sum-rank metric.
\subsection{Nonlinear MSRD Codes}
We determine the sparsity of (possibly) nonlinear MSRD codes as $q \to +\infty$.
\begin{theorem} \label{thm:nonlinearnSr}
Let $d \geq 2$ and $m,\eta \geq 1$ be integers, let $t$ be the number of blocks in the sum-rank metric and let $n=\eta t$.
Then we have
$$\lim_{q \to +\infty} \delta_q^{\textnormal{sr,t}}(\F_{q^{m}}^n,q^{\max\{m,\eta\}(t\min\{m,\eta\}-d+1)},0,d)= 0,$$
i.e., nonlinear MSRD codes are sparse as $q \to +\infty$.
\end{theorem}
\begin{proof}
By Lemma \ref{lem:srkballasy} we have
\begin{align*}
0 \leq \lim_{q \to +\infty} \frac{q^{mn}}{\textbf{v}_{q}^{\textnormal{sr,t}}(\F_{q^m}^n,d-1)q^{2\max\{m,\eta\}(t\min\{m,\eta\}-d+1)}}\leq \lim_{q \to +\infty} \frac{1}{q^{(d-1)(3\max\{m,\eta\})}}=0
\end{align*}
and Theorem~\ref{thm:nonLinearLimq} proves the statement.
\end{proof}
\subsection{(Sub)linear MSRD Codes}
Note that if $\eta > \ell s$, then the codes of maximum possible dimension are not necessarily MSRD codes (see also Remark \ref{rem:quasiMSRD}). Due to this reason we consider the two cases $\ell s < \eta$ and $\eta \le \ell s$ separately.
\begin{theorem} \label{thm:asympsrq}
Let $\eta, \ell , s, t ,d$ be positive integers such that $1<t<n$ and $2 \le d \le t\min\{m,\eta\}$, where $m = \ell s$ and $n = \eta t$. Let $\tilde{z} \in \{0,\dots,t-1\}$ with $\tilde{z} \equiv d-1 \pmod t$ and define $\theta := (d-1)\left(\min\{m, \eta \} - \frac{d-1}{t} \right) +\frac{\tilde{z}^{2}}{t}-\tilde{z}$.
\begin{itemize}
\item[(i)]
If $m \geq \eta$ then we have
\begin{align*}
\lim_{q \to +\infty} \delta_q^{sr,t}(\F_{q^{m}}^n,q^{m(n-d+1)},\ell,d) = \begin{cases}
1 \quad &\textnormal{if } \theta < \ell, \\
0 \quad &\textnormal{if } \theta > \ell.
\end{cases}
\end{align*}
Moreover if $\theta = \ell$ then $ \lim_{q \to +\infty} \delta_q^{sr,t}(\F_{q^{m}}^n,q^{m(n-d+1)},\ell,d) \leq \frac{1}{1+\binom{t}{\tilde{z}}}.$
\item[(ii)] If $\eta > m$ then we have
\begin{align*}
\lim_{q \to +\infty} \delta_q^{sr,t}(\F_{q^{m}}^n,q^{\ell k},\ell,d) = \begin{cases}
1 \quad &\textnormal{if } \theta -r < \ell, \\
0 \quad &\textnormal{if } \theta -r > \ell.
\end{cases}
\end{align*}
Moreover, if $\theta -r = \ell$ then
$\lim_{q \to +\infty} \delta_q^{sr,t}(\F_{q^{m}}^n,q^{\ell k},\ell,d) \leq \frac{1}{1+\binom{t}{\tilde{z}}}$ ,
where $k= \left\lfloor \frac{\eta(m t -d+1)}{\ell} \right\rfloor$ and $r = \ell \left( \left\lceil \frac{\eta(d-1)}{\ell} \right\rceil - \frac{\eta(d-1)}{\ell}\right)$.
\end{itemize}
\end{theorem}
\begin{proof}
By Lemma \ref{lem:srkballasy} we have for the first part of the theorem
\begin{align}\label{eq:asymsrgeneral}
\lim_{q \to +\infty} \frac{ \bbqsr{\F_{q^{m}}^n, d-1}q^{m(\eta t -d+1)}}{q^{\ell(ns+1)}} = \lim_{q \to + \infty}\frac{\binom{t}{\tilde{z}}q^{(d-1)(\eta -\frac{d-1}{t})+\frac{\tilde{z}^{2}}{t}-\tilde{z}}}{q^{\ell}} =0
\end{align}
if and only if $\ell > \theta$. On the other hand
\begin{align}
\lim_{q \to +\infty} \frac{q^{\ell(ns+1)}}{\bbqsr{\F_{q^{m}}^n, d-1}q^{m(\eta t -d+1)}} = \lim_{q \to + \infty}\frac{1}{\binom{t}{\tilde{z}}q^{(d-1)(\eta -\frac{d-1}{t})+\frac{\tilde{z}^{2}}{t}-\tilde{z}-\ell}} =0,
\end{align}
if and only if $\ell < \theta$. Finally, for $\ell = \theta$ we obtain
\begin{align}
\lim_{q \to +\infty} \frac{ \bbqsr{\F_{q^{m}}^n, d-1}q^{m(\eta t -d+1)}}{q^{\ell(ns+1)}} = \binom{t}{\tilde{z}}
\end{align}
and the third case is another easy consequence of Theorem~\ref{thm:sublimq}.
To prove the second part of the theorem, we proceed analogously to the arguments for the first part, where we additionally use the fact that $k = \lfloor{ \eta(\ell s t-d+1)}/{\ell}\rfloor = ns-\lceil \eta(d-1)/\ell \rceil$.
\end{proof}
As the density of $\F_{q}$-linear codes in the Hamming metric and in the rank metric has been studied rather thoroughly, we conclude this section by taking a closer look at their counterparts in the sum-rank metric.
\begin{theorem} \label{thm: Fqlinearsr}
Let $m \ge \eta$ and $2 \le d \le n$ be integers. We have \begin{align*}
\lim_{q \to +\infty}\delta_q^{sr,t}(\F_{q^{m}}^n,q^{m( n-d+1)},1,d) = \begin{cases}
1 \quad &\textnormal{ if $(d-1)(n-d+1) < t$,} \\
0 \quad &\textnormal{ if $(d-1)(n-d+1) > t+t^2/4$.}
\end{cases}
\end{align*}
\end{theorem}
\begin{proof}
Note that, using elementary calculus, one can show that $$-t/4 \le \frac{\tilde{z}^{2}}{t}-\tilde{z} \le 0,$$
where $\tilde{z} \equiv d-1 \pmod t$. Combining this with Theorem~\ref{thm:asympsrq} where we set $\ell=1$ and use the fact that $m \ge \eta$ concludes the proof.
\end{proof}
From the assumption $2 \le d \le n$ in Theorem \ref{thm: Fqlinearsr} we get
$$(d-1)(n-d+1) \in \left\{ (n-1), \dots , \left\lfloor \frac{n^{2}}{4}\right\rfloor \right\}.$$
This gives a further characterization (see Corollary \ref{cor:boundsForeta} and Figure \ref{figure:srbounds} below) of sum-rank-metric codes, from which we can extract the values of the block length $\eta$ for which we observe dense or sparse behavior as $q \to +\infty$.
\begin{corollary} \label{cor:boundsForeta}
Let $m \ge \eta$ and $2 \le d \le n$ be integers. We have \begin{align*}
\lim_{q \to +\infty}\delta_q^{sr,t}(\F_{q^{m}}^n,q^{m( n-d+1)},1,d) = \begin{cases}
1 \quad &\textnormal{ if $\eta < \frac{2}{\sqrt{t}}$}\\
0 \quad &\textnormal{ if $\eta > \frac{(t+2)^{2}}{4t}$.}
\end{cases}
\end{align*}
\end{corollary}
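For example, for $t=4$ the corollary yields density only for $\eta < 1$, which is impossible, and sparsity for $\eta > 36/16=2.25$, that is, for all $\eta \ge 3$; the cases $\eta \in \{1,2\}$ are not classified by the corollary, in accordance with Figure~\ref{figure:srbounds}.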
\begin{figure}[hbt!]
\centering
\begin{tikzpicture}
\begin{axis}[ axis lines=middle, xmin=1,xmax=10,
ymin=1,ymax=4,
width=\textwidth,
height=0.4\textwidth,
legend style={at={(1,0.2)},anchor=south
east},
xlabel=$t$, ylabel=$\eta (t)$,restrict y to domain=0:100, ]
\addplot[color=blue, domain=1:10,samples=301, unbounded coords=discard] {(2/sqrt(x))};
\addlegendentry{$\frac{2}{\sqrt{t}}$}
\addplot[color=red, domain=1:10,samples=301, unbounded coords=discard] {((x+2)^2/(4*x))};
\addlegendentry{$\frac{(t+2)^{2}}{4t}$}
\addplot[color=green, only marks,
mark=halfcircle*,
mark size=0.8pt] coordinates {
(1,1)
(2,1)
(3,1)};
\addplot[color=green, only marks,
mark=halfcircle*,
mark size=0.8pt] coordinates {
(1,3)
(1,4)
(2,3)
(2,4)
(3,3)
(3,4)
(4,3)
(4,4)
(5,3)
(5,4)
(6,3)
(6,4)
(7,3)
(7,4)
(8,4)
(9,4)
(10,4)
};
\addplot[color=black, only marks,
mark=halfcircle*,
mark size=0.8pt] coordinates {
(4,1)
(5,1)
(6,1)
(7,1)
(8,1)
(9,1)
(10,1)
(1,2)
(2,2)
(3,2)
(4,2)
(5,2)
(6,2)
(7,2)
(8,2)
(8,3)
(9,3)
(10,3)
(9,2)
(10,2)
};
\end{axis}
\end{tikzpicture}
\caption{Block size $\eta$ depending on $t$ to fully characterize the asymptotic density of $\F_{q}$-linear MSRD codes as $q \to +\infty$. The green dots above the red line represent sets of parameters for which MSRD codes are sparse in $\F_{q^{m}}^{\eta t}$, whereas those below the blue line indicate a dense behavior. The black dots represent sets of parameters for which the asymptotic behavior of the density is unknown or, as in the case of $t=1$ and $\eta =2$, cannot be classified as asymptotically dense or sparse.}
\label{figure:srbounds}
\end{figure}
\begin{remark}\label{rem:comparison}
Recall from Remark~\ref{rem:specialsumrkinst} that codes in the Hamming metric and codes in the rank metric are subfamilies of sum-rank-metric codes. It is interesting to observe that these two classes of codes behave very differently with respect to density considerations as $q \to +\infty$. More explicitly, in Section~\ref{sec:Hamming} we saw that $\F_q$-linear MDS codes are dense as $q \to +\infty$. This is in stark contrast with the behavior of $\F_q$-linear MRD codes: they are sparse as $q \to +\infty$ (see Section~\ref{sec:Rank}). Even though there does not seem to be an obvious way to fully characterize for which values of $\eta, t$ and $d$ we have sparsity or density (or non-density) of the corresponding codes, experimental results strongly indicate that, in general, sum-rank-metric codes behave similarly to standard rank-metric codes, with an exception when $\eta=1$ (which is the case of Hamming-metric codes). Therefore MDS codes behave rather \emph{atypically} with respect to density considerations. We collect some examples of the asymptotic behavior of the density of sum-rank-metric codes in Table~\ref{table_sumrk}.
\begin{table}[h!]
\centering
\renewcommand\arraystretch{1.2}
\begin{tabular}{|c|c|c|c|c|c|}
\hline
\;$\eta$ \;&\; $t$ \;& \; $d$ \; & dense & \emph{not} dense & sparse \\\noalign{\global\arrayrulewidth 1.8pt}
\hline
\noalign{\global\arrayrulewidth0.4pt}
1 & 10 & 5 & $\checkmark$ & & \\
\hline
$2$ & $\ge 1$ & 2 & & $\checkmark$ & \\
\hline
$\ge 2$ & 10 & 5 & & & $\checkmark$ \\
\hline
$3$ & $\ge 1$ & 3 & & & $\checkmark$ \\
\hline
\end{tabular}
\caption{Examples illustrating the asymptotic behavior of the density of $\F_q$-linear sum-rank-metric codes as $q \to +\infty$.}
\label{table_sumrk}
\end{table}
\end{remark}
\FloatBarrier
\subsection*{Conclusion and Outlook}
We developed a general framework for determining densities of codes over finite fields, equipped with an arbitrary translation-invariant metric, with any linearity degree. This unifies several previous results (which were mostly derived with different mathematical tools) on densities of linear and nonlinear codes in the Hamming, rank and sum-rank metric. Moreover, we established new results for codes in these three metrics, answering many of the open questions in this field.
One general observation is that nonlinear codes achieving the Gilbert-Varshamov bound in any metric, or the Singleton-type bound in the sum-rank metric (including the Hamming and rank metric as special cases), are always sparse, both for the field size and for the length going to infinity. However, in the sublinear case, the metric, the linearity degree and the parameter going to infinity make a difference. In some cases we get sparsity, in some density, and in some cases we can only derive an upper bound on the asymptotic density, showing that the code family is not dense (but we do not know if it is sparse).
As a particular case of interest we studied the asymptotic behavior of MSRD codes for $q\rightarrow +\infty$, since MSRD codes generalize both the Hamming and the rank metric. We derived bounds on the block length $\eta$ in terms of the number of blocks $t$ for $\F_{q}$-linear MSRD codes to be dense or sparse. As explained in Remark \ref{rem:comparison}, the only case where MSRD codes are dense is when they are MDS codes. In all other cases we either get sparsity or an upper bound below $1$ for the density.
There are still many open questions for future work. E.g., in all cases where we obtained upper bounds below $1$ for the density, it is not clear if these bounds are sharp, if there are sharper bounds, or if these code families are even sparse. Furthermore, we only presented the asymptotic densities of MSRD codes as the field size $q$ tends to infinity. With similar techniques one can derive results for the other parameters going to infinity; however, it was out of the scope of this paper to include them here, in particular since they become more involved due to the vast number of variable parameters. Moreover, we restricted ourselves to MSRD codes where each block has the same length, and it remains an open problem to study the densities of such codes with different block lengths. Lastly, we plan to extend our framework to other metric spaces and to study the densities of codes over finite rings, equipped with the Hamming or the Lee metric.
\bigskip
\bibliographystyle{amsplain}
Positron emission tomography (PET) has developed into a standard imaging tool in medicine. The required radionuclides are often produced with the aid of cyclotrons. Depending on the emitter to be produced, different nuclides are bombarded with protons or deuterons that trigger nuclear reactions. In addition to the desired nuclide, neutron and gamma radiation is also produced during these nuclear reactions. These are the main source of the dose rate on the outside of the protection buildings and determine the shielding design. In addition, neutron radiation leads to the activation of the construction and building materials, which can be important for the decommissioning of the facilities. Therefore, the correct determination of the neutron and gamma source terms is the essential prerequisite for a correct shielding calculation and thus for a sufficient protection of the employees.
Several approaches can be taken to obtain the source term needed for the shielding calculations. In one approach, the source term spectrum is determined using nuclear model programs such as ALICE-91~\cite{alice91}, and subsequently, the corresponding transport calculation (shielding calculation) is done using the obtained spectrum. The determination of the absolute number of emitted neutrons is then carried out on the basis of tabulated activities assuming a full absorption of the beam. For many reactions these are available in tabular form for different proton energies and a standard current~\cite{NDS-NEA}. This approach is applied for the ${}^{18}$F production in the source term quoted in the supporting documentation for an ACSI TR-FLEX cyclotron~\cite{Sherbrooke}, assuming that the neutron source term originates exclusively from the desired reaction. In~\cite{Sheu} the energy and angular distributions of the neutron source term were taken from the double-differential data of the related reaction ${}^{14}$N(p,n)${}^{14}$O, while the absolute number of emitted neutrons was again based on the production rate of ${}^{18}$F. Often the source term can be traced back to confidential information from the manufacturer of the cyclotron with little information on how it was obtained~\cite{Bosko1,Bosko2,Facure}.
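Schematically, the normalization used in this first approach can be summarized as follows: since each ${}^{18}$O(p,n)${}^{18}$F reaction emits one neutron, the neutron emission rate $\dot N_{\mathrm{n}}$ is identified with the production rate of ${}^{18}$F, which for full beam absorption equals the saturation activity,
\begin{equation*}
\dot N_{\mathrm{n}} \approx A_{\mathrm{sat}} = Y_{\mathrm{sat}}\, I,
\end{equation*}
where $Y_{\mathrm{sat}}$ denotes the tabulated saturation yield per unit beam current and $I$ the proton current; contributions of all other neutron-producing channels are neglected in this estimate.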
\newline
A different approach, related to the large progress of radiation transport and reaction codes in recent years, consists in the direct calculation of the full neutron and gamma source terms, including all contributing reaction channels, with the radiation transport codes themselves, often accompanied by comparisons of the simulated results with experimental measurements. Examples for this approach can be found in~\cite{Cruzate, Benavente, Infantino}.
For the shielding calculations for the new cyclotron with proton beam energies of 24 MeV and 28 MeV at the HZDR~\cite{JRP36}, this second approach was used. Different nuclear models are integrated into the program MCNP6~\cite{mcnp6} to calculate the generation of neutrons in the target. Likewise, the source term can also be determined with the help of corresponding cross section tables. Both possibilities were used. The source terms calculated in this approach show a large difference with respect to the values that were obtained using the approach mentioned above on the basis of tabulated activities. A calculation with the FLUKA~\cite{fluka,fluka2} program gave results similar to those of MCNP6. These results have already been published and discussed in~\cite{JRP36}. \newline
To validate the results from independent radiation transport codes, in addition to source term calculations with MCNP6 and FLUKA for HZDR's 18 MeV cyclotron\footnote{Cyclone 18/9 model by IBA}, this work gives experimental results for neutron fluence measurements using activation sample monitors. For these measurements, existing experience from the field of reactor dosimetry was applied.
\section{Determination of the neutron source term}
\label{Sec:SourceTerm}
To calculate the neutron source term, simulation models were created with both MCNP6 and FLUKA, consisting of a cylinder with a radius of 0.55 cm and a length of 4 cm, filled with water enriched to 97\% in \textsuperscript{18}O.
These dimensions correspond to typical target bodies used at the IBA cyclotron.
The protons' direction is along the cylinder axis, hitting the target on one of the circular base surfaces. The precise shape of the proton beam is not known; therefore, two beam profiles were simulated. In the first case the proton beam was modeled as an infinitesimally small pointlike beam, and in the second case a circular surface beam with a Gaussian distribution with a standard deviation of 0.125 cm, cut off at the target radius, was chosen. The two approaches gave identical results (see also~\cite{JRP36}). In the following we will use the results obtained with a pointlike proton beam in the simulations. The emitted neutron spectrum is determined on the surface of a surrounding sphere with a radius of 10 m, large enough to minimize geometrical effects due to the target shape.
The generation of neutrons in the target was carried out using nuclear physics models of reaction cross sections. In MCNP a cascade exciton model (CEM)~\cite{cem} was used, while FLUKA uses a pre-equilibrium cascade model (PEANUT)~\cite{peanut} for the nuclear interactions. In addition, MCNP6 calculations were also carried out with evaluated nuclear data of the (p,n) reaction. \newline
Since \textsuperscript{18}O data are not included in the standard library of MCNP6, they were generated using the NJOY~\cite{njoy} program and imported into MCNP6. The required reaction cross sections were read from the nuclear data library TENDL, which is based on the nuclear model code TALYS~\cite{talys}. This possibility to use externally generated cross sections does not exist for FLUKA. Since FLUKA does not include neutron cross sections for \textsuperscript{18}O, cross section data for \textsuperscript{16}O was used instead for the interactions of neutrons in the water. At thermal energies, the \textsuperscript{16}O total neutron cross section is at most 25\% higher than the one for \textsuperscript{18}O. In both cases, the cross sections are dominated by the ones for elastic scattering of neutrons. Since the main source of neutrons is the dominating (p,n) reaction at the \textsuperscript{18}O, the influence of the secondary (n,g), (n,n') and (n,2n) reactions is very small. A further difference between the calculations lies in the cross section data libraries used: the data library ENDF/B-VI.8~\cite{bvi8} was used for the interactions of neutrons with energies below 20 MeV in FLUKA and ENDF/B-VII.1~\cite{bvii1} in MCNP6.
\begin{figure}[t!]
\centering
\begin{subfigure}{0.48\linewidth}
\begin{center}
\includegraphics[trim=25 10 0 30 , clip, width=0.99\linewidth]{figures/ProFlu_18MeV.pdf}
\end{center}
\end{subfigure}
\hspace{0.2cm}
\begin{subfigure}{0.48\linewidth}
\begin{center}
\includegraphics[trim=25 10 0 30 , clip, width=0.99\linewidth]{figures/NeuFlu_18MeV.pdf}
\end{center}
\end{subfigure}
\caption{Proton and neutron fluences per primary proton obtained with FLUKA for 18 MeV protons hitting the \textsuperscript{18}O-enriched water target.}\label{Fig:Fluence18MeV}
\end{figure}
In fig.~\ref{Fig:Fluence18MeV}, both the proton and neutron fluences per primary proton are depicted as obtained with the FLUKA simulation code. The protons penetrate only about 0.5 cm into the water target before they are stopped. The water target absorbs almost all protons in the forward direction, leaving only the backscattered ones to the left of the target. Neutrons are produced along the trajectory of the proton beam in the water.
Fig.~\ref{Fig:nRate} shows the differential neutron rate recorded across the surrounding sphere for 1 $\upmu$A of proton beam current obtained with MCNP6 (version 6.1.1) and FLUKA (version 2011.2x). Integrating the spectrum over energy, we find a total neutron yield of $3.21\times10^{10}$ n/s for 1 $\upmu$A of proton current for the FLUKA calculation, and $2.99\times10^{10}$ n/s for 1 $\upmu$A of proton current for MCNP6. The higher yield obtained with FLUKA with respect to the MCNP6 calculation has already been observed for 24 and 28 MeV protons in~\cite{JRP36}, and is attributed to differences of the underlying nuclear physics models. The values are about a factor 3 higher than the value of 1.115$\times$10\textsuperscript{10} n/s for 1 $\upmu$A of proton current obtained from~\cite{IAEA} for the ${}^{18}$O(p,n)${}^{18}$F channel. We attribute the difference to additional neutron-producing reaction channels opening at 18 MeV proton energy, as suggested in~\cite{Carroll}. It should be noted however that measurements of the neutron yield rate reported in~\cite{Mendez} and~\cite{Hagiwara} give results which are close to the value of 1.115$\times$10\textsuperscript{10} n/s for 1 $\upmu$A\footnote{It was confirmed by the authors of~\cite{Hagiwara} that the values quoted in their document need to be corrected by a factor of 10 and the resulting neutron production yield should therefore read (1.55$\times$10\textsuperscript{10}$\pm$1.03$\times$10\textsuperscript{9}) n/s for 1 $\upmu$A of beam current. We thank M. Hagiwara for this information.}.
\begin{figure}[t!]
\centering
\includegraphics[trim=10 100 10 70 , clip, width=0.99\linewidth]{figures/n-quelle_18MeV_neu.pdf}
\caption{Differential neutron rate for an 18 MeV proton beam with 1 $\upmu$A hitting the water target. The spectra are available from~\cite{data_Fig2}.}
\label{Fig:nRate}
\end{figure}
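As a simple illustration of the integration used to obtain these total yields, the following schematic Python snippet sums a binned differential rate over energy (the binning and the dummy values are placeholders and do not correspond to the actual data format of~\cite{data_Fig2}):
\begin{verbatim}
import numpy as np

# Placeholder binning and dummy spectrum: in practice the bin edges
# E_edges (in MeV) and the differential rate dN/dE (in n/s/MeV) would
# be taken from the published spectra.
E_edges = np.logspace(-9, np.log10(18.0), 201)  # assumed bin edges in MeV
dN_dE = np.ones(E_edges.size - 1)               # dummy values in n/s/MeV

# Total neutron rate = sum over bins of (differential rate x bin width)
total_rate = np.sum(dN_dE * np.diff(E_edges))
print(f"total neutron yield: {total_rate:.3e} n/s")
\end{verbatim}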
\section{Experimental validation of the radiation field around the target}
To validate the calculation of the source terms in sec.~\ref{Sec:SourceTerm}, activation monitor foils were placed on top of the irradiation target during a routine run for \textsuperscript{18}F production. After irradiation, the activation of the foils was measured and compared to predictions from the radiation transport and reaction codes MCNP6 (version 6.1.1) and FLUKA. For these activation studies, a special developer version of FLUKA~\cite{FLUKA2017} was used which includes updated information on branching ratios to meta-stable states using the JEFF-3.1A activation library~\cite{JEFF31A} that is not yet available in the official FLUKA version~\cite{FLUKA2011}.
\subsection{Experimental setup to measure the radiation field with sample activation}
Figures~\ref{Fig:Samples_a} and~\ref{Fig:Samples_b} show the individual activation monitor samples as well as the sample packages and their position on the irradiation container at the cyclotron. The samples consist of different metal foils made of pure metals or alloys. Table~\ref{Tab:1} shows the monitor samples and the reactions under study with the generated nuclides, the reaction thresholds and their half-lives. The selected metals are standard monitor materials which are used for neutron flux and fluence measurements at fission reactors, for power determinations as well as for the validation of results in reactor dosimetry~\cite{dosimetry}. As can be seen from table~\ref{Tab:1}, several of the materials have reactions starting at different threshold energies. This makes it possible to study different energy regions in the spectrum. The monitor packages were positioned directly on top of the irradiation target in order to achieve a high neutron flux and thus high reaction rates. The irradiation took place during a regular ${}^{18}$F production run. The energy of the protons was 18 MeV with an average beam current of 25 $\upmu$A, and the irradiation lasted for 50 minutes.
\begin{figure}[th!]
\begin{subfigure}{0.48\linewidth}
\begin{tikzpicture}[>=stealth]
\node[anchor=south west, inner sep=0] (image) at (0,0){
\includegraphics[trim=0 240 0 0 , clip, width=0.95\linewidth]{figures/samples.pdf}};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\draw (0.1,0.58) node [anchor=north west,very thick] {\bf \sffamily Tin};
\draw (0.08,0.45) node [anchor=north west,very thick] {\tiny \bf \sffamily 100.\% Sn};
\draw[black,very thick] (0.34,0.335) -- (0.22,0.45);
\draw (0.4,0.225) node [anchor=north,very thick] { \bf \sffamily Indium};
\draw (0.4,0.12) node [anchor=north,very thick] {\tiny \bf \sffamily 99.9\% In};
\draw[black,very thick] (0.42,0.18) -- (0.47,0.26);
\draw (0.65,0.225) node [anchor=north,very thick] { \bf \sffamily Zinc};
\draw (0.65,0.12) node [anchor=north,very thick] {\tiny \bf \sffamily 100.\% Zn};
\draw[black,very thick] (0.57,0.24) -- (0.6,0.17);
\draw (0.75,0.56) node [very thick] {\bf \sffamily Multi-component};
\draw (0.87,0.52) node [anchor=north,very thick] {\tiny \bf \sffamily 81.63\% Ni};
\draw (0.87,0.45) node [anchor=north,very thick] {\tiny \bf \sffamily 15.16\% Mo};
\draw (0.87,0.38) node [anchor=north,very thick] {\tiny \bf \sffamily 2.62\% W};
\draw (0.87,0.31) node [anchor=north,very thick] {\tiny \bf \sffamily 0.26\% Mn};
\draw (0.87,0.24) node [anchor=north,very thick] {\tiny \bf \sffamily 0.31\% Au};
\draw[black,very thick] (0.75,0.52) -- (0.684,0.4);
\end{scope}
\end{tikzpicture}
\caption{}\label{Fig:Samples_a}
\end{subfigure}
\begin{subfigure}{0.48\linewidth}
\centering
\includegraphics[trim=0 0 200 0 , clip, width=0.95\linewidth]{figures/160929_1.pdf}
\caption{}\label{Fig:Samples_b}
\end{subfigure}
\caption{(a) Examples for activation foil samples used in the experiment. (b) The two stacks of activation sample foils in a plastic bag placed on top of the target flange.}\label{Fig:Samples}
\end{figure}
\begin{table}[t!]
\caption{Composition of the monitor samples with the studied activation reactions and corresponding half-lives.}
\centering
\begin{tabular}{|c|c|c|c|r|}
\hline
Monitor Sample & Mass fraction & Reactions & Threshold & Half-life\\
\hline
Multi-component & 81.63\% Ni& \textsuperscript{58}Ni(n,np)\textsuperscript{57}Co & 8 MeV & 271.74d\\
& & \textsuperscript{58}Ni(n,p)\textsuperscript{58}Co & 0.4 MeV & 70.86d\\
& 15.16\% Mo & \textsuperscript{98}Mo(n,g)\textsuperscript{99}Mo & therm. & 66.0h\\
& & \textsuperscript{100}Mo(n,2n)\textsuperscript{99}Mo & 8 MeV & 66.0h \\
& 2.62\% W & \textsuperscript{186}W(n,g)\textsuperscript{187}W & therm. & 23.72h\\
& 0.26\% Mn & \textsuperscript{55}Mn(n,g)\textsuperscript{56}Mn & therm. & 2.58h\\
& 0.31\% Au & \textsuperscript{197}Au(n,g)\textsuperscript{198}Au & therm. (4eV) & 2.69d\\
\hline
Zinc & 100.00\% Zn& \textsuperscript{64}Zn(n,p)\textsuperscript{64}Cu &0.08 MeV & 12.70h\\
& & \textsuperscript{64}Zn(n,g)\textsuperscript{65}Zn & therm. & 244d \\
& & \textsuperscript{68}Zn(n,g)\textsuperscript{69m}Zn & therm. & 13.76h \\
\hline
Indium & 99.9\% In& \textsuperscript{113}In(n,g)\textsuperscript{114m}In & therm. & 49.5d \\
& & \textsuperscript{115}In(n,2n)\textsuperscript{114m}In & 9 MeV & 49.5d\\
& & \textsuperscript{115}In(n,n')\textsuperscript{115m}In & 0.3 MeV & 4.5h \\
\hline
Tin & 100.00\% Sn& \textsuperscript{116}Sn(n,g)\textsuperscript{117m}Sn & therm. & 13.60d\\
& & \textsuperscript{117}Sn(n,n')\textsuperscript{117m}Sn & 0.15 MeV & 13.60d\\
& & \textsuperscript{118}Sn(n,2n)\textsuperscript{117m}Sn & 9 MeV & 13.60d\\
\hline
\end{tabular}
\label{Tab:1}
\end{table}
Two packages with the same stacks of activation monitors were irradiated simultaneously. This allowed for independent measurements of the sample activation by two different laboratories. The activation measurements were carried out at the ``Department of Environmental Monitoring'' and at the ``Laboratory for Environment and Radionuclide Analysis'' of the ``VKTA - Strahlenschutz, Analytik \& Entsorgung Rossendorf e. V.''\footnote{VKTA - Radiation Protection, Analytics \& Disposal Inc., \url{www.vkta.de}}. The activity was determined by gamma spectrometry using high-purity germanium detectors. Typically, the detectors\footnote{Broad Energy HPGe Detector BE5030P by Mirion Technologies, Inc.} have about 44\% efficiency relative to a $3\,"\times 3\,"$ NaI(Tl) detector, a resolution of 1.78~keV at 1332~keV and are calibrated weekly against a certified standard.
In order to detect nuclides with a relatively short half-life, some of the activation monitors were already examined about one hour after the end of irradiation. For some nuclides with longer half-lives, the measurements were repeated at longer cooling times and the activity value at the end of irradiation was extrapolated back using the known half-life values. However, it was found in previous studies that these extrapolations were not always reliable, especially when there is a delayed production of a nuclide from an excited state. Therefore, both labs were asked to provide the measured values at the time of the measurement and also the values extrapolated back to the time at end of irradiation (EOI). The uncertainties quoted by both laboratories were in the range between 5\% and 25\%, depending on the reaction channel.
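For a nuclide whose activity is not fed by a longer-lived parent or isomeric state, this back-extrapolation is the simple decay correction
\[
A(\mathrm{EOI}) = A(t_{\mathrm{meas}})\, e^{\lambda t_{\mathrm{meas}}},
\]
with $\lambda$ the decay constant of the nuclide. Delayed feeding, e.g.\ from a meta-stable state, breaks this relation, which is why the values at the time of measurement were requested in addition.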
\subsection{Calculation of sample activation}
\label{CalcSampAct}
Fig.~\ref{Fig:Geometry} shows the geometrical models of the irradiation target chamber created with FLUKA and MCNP6. The construction of the irradiation target was reproduced in great detail. While it is known which materials the components are made of, precise information on densities and composition was not always available, so standard specifications had to be used. Composition and densities of the materials were implemented identically in the FLUKA and MCNP6 simulations. The beam tube adapter flange and the surrounding environment, like walls, were not implemented in the simulations, on the assumption that the influence of backscattered neutrons on the activation of the monitors is negligible.
\begin{figure}[th!]
\centering
\begin{tikzpicture}
\node[anchor=south west, inner sep=0] (image) at (0,0) {
\begin{subfigure}{0.35\linewidth}
\begin{center}
\includegraphics[trim=77.7 0 0 0 , clip, width=1.40\linewidth]{figures/FLUKA_model.pdf}
\caption{FLUKA geometry model}
\end{center}
\end{subfigure}
\hspace{2.5cm}
\begin{subfigure}{0.38\linewidth}
\begin{center}
\includegraphics[trim=0 0 0 0 , clip, width=1.4\linewidth]{figures/MCNP_model.pdf}
\caption{MCNP6 geometry model}
\end{center}
\end{subfigure}
};
\begin{scope}[x={(image.south east)},y={(image.north west)}]
\draw[->,black,ultra thick] (-0.075,0.545) -- (0.005,0.545);
\draw (-0.035,0.54) node [anchor=north] {{\bf \sffamily proton}};
\draw (-0.035,0.48) node [anchor=north] {{\bf \sffamily beam}};
\draw (0.47,1.) node [anchor=south] (actsamp) {{\bf \sffamily activation samples}};
\draw (0.49,.9) node [anchor=south] (brass) {{\bf \sffamily brass body}};
\draw (0.48,.8) node [anchor=south] (cool) {{\bf \sffamily cooling water}};
\draw (0.48,.7) node [anchor=south] (targ) {{\bf \sffamily target (H${}_{\bm{\mathsf 2}}{}^{\bm{\mathsf 18}}$O)}};
\draw (0.49,.3) node [anchor=north] (niob) {{\bf \sffamily niobium}};
\draw (0.49,.2) node [anchor=north] (steel) {{\bf \sffamily stainless steel}};
\draw[-{>[scale=.8]}, black, thick] (actsamp.west) -- (0.165, 0.94);
\draw[-{>[scale=.8]}, black, thick] (actsamp.east) -- (0.75, 0.9);
\draw[-{>[scale=.8]}, black, thick] (brass.west) -- (0.2, 0.80);
\draw[-{>[scale=.8]}, black, thick] (brass.east) -- (0.65, 0.85);
\draw[-{>[scale=.8]}, black, thick] (cool.west) -- (0.1515, 0.62);
\draw[-{>[scale=.8]}, black, thick] (cool.east) -- (0.72, 0.638);
\draw[-{>[scale=.8]}, black, thick] (targ.west) -- (0.13, 0.545);
\draw[-{>[scale=.8]}, black, thick] (targ.east) -- (0.7, 0.545);
\draw[-{>[scale=.8]}, black, thick] (niob.west) -- (0.14, 0.48);
\draw[-{>[scale=.8]}, black, thick] (niob.east) -- (0.67, 0.455);
\draw[-{>[scale=.8]}, black, thick] (steel.west) -- (0.12, 0.33);
\draw[-{>[scale=.8]}, black, thick] (steel.east) -- (0.685, 0.33);
\draw[->,black,ultra thick] (0.52,0.545) -- (0.59,0.545);
\draw (0.55,0.54) node [anchor=north] {{\bf \sffamily proton}};
\draw (0.55,0.48) node [anchor=north] {{\bf \sffamily beam}};
\end{scope}
\end{tikzpicture}
\caption{Target geometries for the two simulation codes.}\label{Fig:Geometry}
\end{figure}
The stacks of foil samples were included in the simulations at their corresponding positions during the irradiation. The material densities of the samples were measured for each sample before irradiation, and the average value of the samples of the same type was used in the simulation. The source term in the simulations consisted of a point-like proton beam with 18 MeV kinetic energy.
Fig.~\ref{Fig:NeuFlu} shows the neutron fluence on a central vertical section plane as obtained from the output of a FLUKA simulation. One can clearly see the proton stopping peak immediately after the protons enter the \textsuperscript{18}O-enriched water and the almost isotropic emission of the neutrons through the target geometry.
\begin{figure}[th!]
\centering
\includegraphics[trim=10 20 10 50 , clip, width=0.85\linewidth]{figures/NeutronFluence.pdf}
\caption{Neutron fluence in neutrons/cm${}^2$/primary proton around the target geometry (evaluated with the FLUKA Monte Carlo code).}\label{Fig:NeuFlu}
\end{figure}
Fig.~\ref{Fig:neufluSamp} shows the spectra of neutron flux entering the different samples as calculated by MCNP6 and FLUKA. Similar to fig.~\ref{Fig:nRate} in sec.~\ref{Sec:SourceTerm}, at energies below 1 MeV, the FLUKA values are higher than the MCNP6 values. The strong resonance (at about 1.45 eV) of the absorption cross section for \textsuperscript{115}In with almost 30000 barn is visible in the spectra, except for tin, because the tin sample was placed below the indium sample.
\begin{figure}[th!]
\centering
\includegraphics[trim=50 60 50 60 , clip, width=0.85\linewidth]{figures/neutron_spectra_18MeV_cyclotron.pdf}
\caption{Differential neutron flux per primary proton entering the monitor samples, evaluated with FLUKA and MCNP6.}\label{Fig:neufluSamp}
\end{figure}
Given the neutron flux rate, the activity $A_i(t_{\mathrm{meas}})$ for each produced nuclide at a time $t_{\mathrm{meas}}$ after irradiation in an energy bin $i$ can be determined using the relation
\begin{equation}
A_i(t_{\mathrm{meas}}) = \varrho \cdot V \cdot \sigma_i \cdot \dot{\Phi}_i \cdot \left(1 - e^{-\lambda t_{\mathrm{irr}}} \right) \cdot e^{-\lambda t_{\mathrm{meas}}}.
\label{Eq:1}
\end{equation}
In eq.~\ref{Eq:1}, $\varrho$ is the density of nuclei in the sample (in nuclei/(barn$\cdot$cm)), $V$ is the sample volume in cm\textsuperscript{3}, $\sigma_i$ is the corresponding reaction cross section in barn for energy bin $i$, $\dot{\Phi}_i$ is the corresponding neutron flux rate obtained from the simulation in neutrons/cm\textsuperscript{2}/s, $t_{\mathrm{irr}}$ is the irradiation time in seconds and $\lambda$ is the decay constant of the reaction product (in 1/s). The total activity is then the sum of the $A_i$ over all energy bins $i$.
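As an illustration of eq.~\ref{Eq:1}, the per-bin summation can be sketched in a few lines of Python (a schematic snippet, not part of the actual analysis chain; the variable names and array layout are placeholders):
\begin{verbatim}
import numpy as np

def activity(rho, V, lam, sigma, phi_dot, t_irr, t_meas):
    """Total activity in Bq at a time t_meas after EOI, cf. eq. (1).

    rho     : density of nuclei in the sample [nuclei/(barn*cm)]
    V       : sample volume [cm^3]
    lam     : decay constant of the reaction product [1/s]
    sigma   : per-bin reaction cross sections [barn] (array)
    phi_dot : per-bin neutron flux rates [n/cm^2/s] (array)
    t_irr   : irradiation time [s]
    t_meas  : cooling time after end of irradiation [s]
    """
    # saturation reaction rate per energy bin
    rate_i = rho * V * sigma * phi_dot
    # build-up during irradiation and decay until the measurement
    A_i = rate_i * (1.0 - np.exp(-lam * t_irr)) * np.exp(-lam * t_meas)
    # total activity: sum over all energy bins i
    return A_i.sum()
\end{verbatim}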
Given an irradiation time profile, FLUKA conveniently gives the resulting nuclide activities in Bq/cm\textsuperscript{3} for selected geometry regions at desired times directly in a tabular output. The required cross section data is hard-coded into the FLUKA program and cannot be changed by the user. For MCNP6, eq.~\ref{Eq:1} needs to be applied externally to the simulated neutron flux rates. In this case, the required cross section data had to be generated with the NJOY program. This procedure has the advantage that the neutron flux can be folded with cross sections from different nuclear data libraries. This makes it possible to estimate systematic uncertainties coming from differences between the available cross section data sets. If more than one reaction channel contributed to a measured final state isotope, the resulting activities were added to obtain the final result.
\subsection{Discussion of the results}
In \cref{Tab:2,Tab:3,Tab:4,Tab:5} the results for the measured and simulated activities for the different monitors at the corresponding time of measurement are presented. Measurements obtained by the ``Department of Environmental Monitoring'' are reported as ``Analysis A'' and the ones by the ``Laboratory for Environment and Radionuclide Analysis'' are reported as ``Analysis B''. We have only kept results for reactions for which both laboratories reported a significant value and for which the statistical uncertainty of the simulations was 15\% or better. This excludes, e.g., the reaction \textsuperscript{55}Mn(n,g)\textsuperscript{56}Mn in table~\ref{Tab:1}, for which only Analysis A gave a measured result. In total, 11 measurements for different nuclides remain. Numbers in parentheses correspond to uncertainties on the last digits. The MCNP6 results in the tables use cross section data from the JEFF3.1A libraries for the indium, zinc and tin monitors, while for the multi-component monitor ENDF/B-VII.1 data were used (except for the \textsuperscript{58}Ni(n,p)\textsuperscript{58}Co reaction, for which also the JEFF3.1A libraries were used).
\begin{table}[t!]
\caption{Measured and calculated activities for the multi-component monitor. The measurements for analysis A were done at $t_{\mathrm{meas}}=$6h17m after EOI, measurements for analysis B were done at $t_{\mathrm{meas}}=$30h13m after EOI. The uncertainty of the FLUKA values corresponds to the statistical uncertainty. A starred value (*) indicates a C/E ratio which is outside the interval [0.6; 1.4].}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
{\bf Multi-comp.} & \multicolumn{2}{|c|}{Measured activity } & \multicolumn{2}{|c|}{Simulated activity } & \multicolumn{3}{|c|}{Comparison }\\
{\bf monitor} & \multicolumn{2}{|c|}{[Bq]} & \multicolumn{2}{|c|}{[Bq]} &\multicolumn{3}{|c|}{C/E} \\
\hline
\hline
Reaction &Analysis & Result & MCNP6 & FLUKA & MCNP6 & FLUKA & C\textsubscript{M}/C\textsubscript{F} \\
\hline
\textsuperscript{58}Ni(n,np)\textsuperscript{57}Co & A & 4.2(5) & 0.58 & 0.74(3) & 0.14* & 0.18* &0.78 \\
& B & 4.7(5) & 0.58 & 0.74(3) & 0.12* & 0.16* & 0.78\\
\hline
\textsuperscript{58}Ni(n,p)\textsuperscript{58}Co & A & 330(20) & 274 & 255(1) & 0.83 & 0.77 &1.07 \\
& B & 455(45) & 378 & 351(2) & 0.83 & 0.77 & 1.08 \\
\hline
\textsuperscript{99}Mo prod.& A & 120(8) & 72 & 118(3) & 0.60 & 0.98 & 0.61 \\
& B & 117(12) & 56 & 91.6(2) & 0.48* & 0.78 & 0.61\\
\hline
\textsuperscript{186}W(n,g)\textsuperscript{187}W & A & 200(12) & 133 & 239(8) & 0.67 & 1.20 &0.56* \\
& B & 106(11) & 67 & 120(4) & 0.63 & 1.13 & 0.56*\\
\hline
\textsuperscript{197}Au(n,g)\textsuperscript{198}Au & A & 68(4) & 41 & 69(4) & 0.60 & 1.02 & 0.59* \\
& B & 63(6) & 31.4 & 53(3) & 0.50* & 0.72 & 0.59*\\
\hline
\end{tabular}
\label{Tab:2}
\end{table}
\begin{table}[t!]
\caption{Measured and calculated activities for the indium monitor. The measurements for analysis A were done at $t_{\mathrm{meas}}=$7h47m after EOI, measurements for analysis B were done at $t_{\mathrm{meas}}=$27h32m after EOI. The uncertainty of the FLUKA values corresponds to the statistical uncertainty. A starred value (*) indicates a C/E ratio which is outside the interval [0.6; 1.4].}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
{\bf Indium} & \multicolumn{2}{|c|}{Measured activity } & \multicolumn{2}{|c|}{Simulated activity } & \multicolumn{3}{|c|}{Comparison }\\
& \multicolumn{2}{|c|}{[Bq]} & \multicolumn{2}{|c|}{[Bq]} &\multicolumn{3}{|c|}{C/E} \\
\hline
Reaction & Analysis & Result & MCNP6 & FLUKA & MCNP6 & FLUKA & C\textsubscript{M}/C\textsubscript{F} \\
\hline
\hline
\textsuperscript{114m}In prod. & A & 29(7) & 7 & 9.9(1) & 0.24* & 0.34* & 0.71 \\
& B & 25(3) & 6.9 & 9.7(1) & 0.28* & 0.39* & 0.71\\
\hline
\textsuperscript{115}In(n,n')\textsuperscript{115m}In & A & 9300(465) & 9145 & 9855(41) & 0.98 & 1.06 &0.93 \\
& B & 474(47) & 434 & 466(2) & 0.92 & 0.98 & 0.93\\
\hline
\end{tabular}
\label{Tab:3}
\end{table}
\begin{table}[t!]
\caption{Measured and calculated activities for the tin monitor. The measurements for analysis A were done at $t_{\mathrm{meas}}=$26h43m after EOI, measurements for analysis B were done at $t_{\mathrm{meas}}=$52h35m after EOI. The uncertainty of the FLUKA values corresponds to the statistical uncertainty. A starred value (*) indicates a C/E ratio which is outside the interval [0.6; 1.4].}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
{\bf Tin} & \multicolumn{2}{|c|}{Measured activity } & \multicolumn{2}{|c|}{Simulated activity } & \multicolumn{3}{|c|}{Comparison }\\
& \multicolumn{2}{|c|}{[Bq]} & \multicolumn{2}{|c|}{[Bq]} &\multicolumn{3}{|c|}{C/E} \\
\hline
Reaction & Analysis & Result & MCNP6 & FLUKA & MCNP6 & FLUKA & C\textsubscript{M}/C\textsubscript{F} \\
\hline
\hline
\textsuperscript{117m}Sn prod. & A & 84(5) & 55.4 & 66.9(9) & 0.66 & 0.80 &0.83 \\
& B & 94(9) & 52.5 & 63.5(9) & 0.56* & 0.68 & 0.83\\
\hline
\end{tabular}
\label{Tab:4}
\end{table}
\begin{table}[t!]
\caption{Measured and calculated activities for the zinc monitor. The measurements for analysis A were done at $t_{\mathrm{meas}}=$24h56m after EOI, measurements for analysis B were done at $t_{\mathrm{meas}}=$49h27m after EOI. The uncertainty of the FLUKA values corresponds to the statistical uncertainty. A starred value (*) indicates a C/E ratio which is outside the interval [0.6; 1.4].}
\centering
\footnotesize
\begin{tabular}{|c|c|c|c|c|c|c|c|}
\hline
{\bf Zinc} & \multicolumn{2}{|c|}{Measured activity } & \multicolumn{2}{|c|}{Simulated activity } & \multicolumn{3}{|c|}{Comparison }\\
& \multicolumn{2}{|c|}{[Bq]} & \multicolumn{2}{|c|}{[Bq]} &\multicolumn{3}{|c|}{C/E} \\
\hline
Reaction & Analysis & Result & MCNP6 & FLUKA & MCNP6 & FLUKA & C\textsubscript{M}/C\textsubscript{F} \\
\hline
\hline
\textsuperscript{64}Zn(n,p)\textsuperscript{64}Cu & A & 1200(60) & 891 & 830(8) & 0.74 & 0.69 & 1.07 \\
& B & 242(48) & 234 & 218(2) & 0.97 & 0.90 & 1.07\\
\hline
\textsuperscript{64}Zn(n,g)\textsuperscript{65}Zn & A & 3.2(7) & 1.7 & 2.26(2) & 0.53* & 0.71 & 0.75 \\
& B & 2.8(4) & 1.7 & 2.25(2) & 0.61 & 0.80 & 0.76\\
\hline
\textsuperscript{68}Zn(n,g)\textsuperscript{69m}Zn & A & 15(1) & 6.3 & 8.3(4) & 0.42* & 0.55* & 0.76 \\
& B & 6.3(6) & 1.8 & 2.41(12) & 0.28* & 0.38* & 0.75\\
\hline
\end{tabular}
\label{Tab:5}
\end{table}
\Cref{Tab:2} gives the results for the multi-component monitor. The two analyses A and B were done with a time difference of 24 hours in the two different labs. Except for the \textsuperscript{58}Ni(n,np)\textsuperscript{57}Co reaction, the FLUKA code reproduces the experimental results reasonably well, with calculation-over-experiment (C/E) values between 0.77 and 1.20. MCNP6 gives results which are lower than the FLUKA ones by 20 to 40\%, except for the \textsuperscript{58}Co production, for which the MCNP6 results give slightly better C/E ratios than the FLUKA results. The lower values obtained with the MCNP6 code with respect to the FLUKA results are consistent with the results on the source term in sec.~\ref{Sec:SourceTerm}, in which the MCNP6 results below 1 MeV are 40 to 50\% lower than the ones obtained with FLUKA.
Both simulation codes consistently predict activities which are too low by factors 5 to 8 for the \textsuperscript{58}Ni(n,np)\textsuperscript{57}Co channel, while the experiments agree within errors (the long half-life of \textsuperscript{57}Co of 270 days allows the measurements to be compared directly). An interesting fact is that the activity due to \textsuperscript{58}Co is larger for analysis B. This is due to the delayed production of \textsuperscript{58}Co through the meta-stable state \textsuperscript{58m}Co, which decays with a half-life of 9 hours to \textsuperscript{58}Co. This is taken care of in the simulations, but not in the extrapolation of the measurements back to the end of irradiation. For this reason it was decided to compare the simulations with the experimental results at the time of measurement.
For the results of the indium monitor in~\cref{Tab:3}, we find that both codes predict a factor 3 to 4 less activity for the production of \textsuperscript{114m}In, but are remarkably close to the measurements for the channel \textsuperscript{115}In(n,n')\textsuperscript{115m}In.
The results for the production of \textsuperscript{117m}Sn with the tin monitor are given in~\cref{Tab:4}. The results with the FLUKA code give C/E ratios of 0.80 and 0.68, with the MCNP6 results being 17\% lower in both cases.
\Cref{Tab:5} gives the results for the zinc monitor. For the channel \textsuperscript{64}Zn(n,p)\textsuperscript{64}Cu, MCNP6 gives results which have a C/E ratio of 0.74 for analysis A and 0.97 for analysis B, with FLUKA results being consistently 7\% lower. The activity due to the reaction \textsuperscript{64}Zn(n,g)\textsuperscript{65}Zn is simulated by the FLUKA code with C/E ratios of 0.71 for analysis A and 0.80 for analysis B, while for the channel \textsuperscript{68}Zn(n,g)\textsuperscript{69m}Zn FLUKA predicts values significantly lower than the experimental results, namely a C/E ratio of 0.55 for analysis A and 0.38 for analysis B. For both reactions, MCNP6 predictions are about 25\% lower than the FLUKA predictions.
In summary, for most of the reactions, the FLUKA code gives results with C/E ratios between 0.68 and 1.20, with the MCNP6 calculations in general giving results which are 10 to 40\% lower (with the exception of the \textsuperscript{64}Zn(n,p)\textsuperscript{64}Cu and the \textsuperscript{58}Ni(n,p)\textsuperscript{58}Co reactions, for which MCNP6 results are 7 to 8\% higher than the FLUKA results).
Both codes consistently give lower results for the production of \textsuperscript{114m}In and the channel \textsuperscript{68}Zn(n,g)\textsuperscript{69m}Zn, with C/E ratios between 0.34 and 0.55 for FLUKA results and 0.24 and 0.42 for MCNP6 results. An especially large deviation between simulation and measured values is found for the reaction \textsuperscript{58}Ni(n,np)\textsuperscript{57}Co channel, with C/E ratios 0.16 and 0.18 for the FLUKA results and 0.12 and 0.14 for results obtained with MCNP6. Again, for these three channels, the MCNP6 results are lower than the FLUKA results by 20 to 30\%.
\subsection{Uncertainties in the calculations}
To estimate the uncertainties on the calculations, we need to address the different terms in \cref{Eq:1}. The primary sources of uncertainties in the simulations are the neutron flux rate at the sample position and the reaction cross sections. The neutron flux rate depends on the calculation of the source term (and therefore the underlying model for proton-induced neutron production in the water target), the proton beam current and, to some extent, the modeling of the target geometry. As mentioned in \cref{CalcSampAct}, the geometry of the system has been implemented with great care. The two geometric models differ only in minor details (see fig.~\ref{Fig:Geometry}) and contain identical material compositions and densities. We therefore consider systematic effects from the geometry negligible when comparing the two simulations, and they are also thought to have a minor effect when comparing simulation results to the measurement, even given the fact that the influence of possible backscattering of neutrons from surrounding walls was not considered. The neutron flux rate scales linearly with the proton beam current, and a deviation of the current from the nominal value of 25 $\upmu$A will reflect on the neutron flux rate. However, the beam parameters can be determined very well and their uncertainties are very small. The dominant effect comes from the uncertainty of the source term: as can be seen from~\cref{Fig:nRate}, it can reach up to 50\% at energies below 1 MeV, and it is certainly the reason why the MCNP6 results for most channels are significantly lower than the ones from FLUKA. A hint at the size of this (energy dependent) uncertainty is given by the C\textsubscript{M}/C\textsubscript{F} values in \cref{Tab:2,Tab:3,Tab:4,Tab:5}. As can be seen, it can reach up to 40\% and more in some cases.
The data libraries used in the evaluations are fundamental for the determination of the reaction rates. Different evaluated nuclear data libraries are available for the majority of nuclides, e.g. at the NEA~\cite{NEA}. For some nuclides, only data calculated from theoretical models is available. Using MCNP6's capability to include different cross section data sets, we have studied the effects of different cross sections on the simulation calculations. The cross section data was generated with the use of the NJOY program. Among the libraries tested were ENDF/B-VII, JEFF3.1A, JENDL33, EAF2010 and ROSFOND2010. If branching fractions for isomeric states were given, these were considered. While for some of the considered reactions (like \textsuperscript{115}In(n,n')\textsuperscript{115m}In, \textsuperscript{117m}Sn production and \textsuperscript{197}Au(n,g)\textsuperscript{198}Au) the different libraries gave consistent results, in some cases discrepancies on the order of 30 to 40\% could be observed (\textsuperscript{114m}In production and the \textsuperscript{68}Zn(n,g)\textsuperscript{69m}Zn reaction). As an example, in~\cref{Fig:7} the evaluated cross section for the \textsuperscript{68}Zn(n,g)\textsuperscript{69}Zn/\textsuperscript{69m}Zn reaction is shown as obtained from two different data libraries (JEFF3.1A and ENDF/B-VII). It can be seen that while at thermal energies the cross sections are in good agreement, at high energies and around the first large resonance the ENDF/B-VII data gives higher values, resulting in an activation result which is about 30\% higher than the one calculated with the JEFF3.1A data. For the remaining reactions, the differences in the activation results due to the use of different cross section data sets are on the order of 10\%.
\begin{figure}[th!]
\centering
\includegraphics[trim=95 100 55 95 ,clip, width=0.8\linewidth]{figures/Zn_58_ng_cross.pdf}
\caption{Evaluated cross sections for the reaction \textsuperscript{68}Zn(n,g)\textsuperscript{69m}Zn as obtained with the JEFF3.1A and the ENDF/B-VII data libraries.}
\label{Fig:7}
\end{figure}
Another point is the influence of the energy group structure. While the FLUKA code uses a fixed 260 energy group structure, one can choose a larger number of groups for the cross section data sets generated with NJOY for MCNP6. For the reaction \textsuperscript{58}Ni(n,p)\textsuperscript{58}Co, activities were calculated based on ENDF/B-VII.1 data with both a 260 and a 640 energy group structure. One finds an effect of 10\% towards a better agreement with the experiments when using the high-resolution structure. This points to a general problem in the calculation of group-wise cross sections: these may be underestimated in some cases, especially for threshold reactions. Generally, the flux decreases in the high energy region while the cross sections increase very strongly. Therefore it is very important to resolve the upper energy range well.
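In schematic form, the group-wise cross section for a group $g$ is the flux-weighted average
\[
\sigma_g = \frac{\int_{g} \sigma(E)\,\phi(E)\,dE}{\int_{g} \phi(E)\,dE},
\]
so a coarse group structure in a region where the weighting flux $\phi(E)$ falls steeply while $\sigma(E)$ rises strongly can underestimate the effective cross section of a threshold reaction.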
The volumes of the monitors in the two sample stacks are not fully identical; the simulation results were calculated using the average volume of the two monitors of the same material. While this effect is at maximum 1\% for the indium, zinc and multi-component monitors, it reaches about 5.5\% for the tin monitors (and therefore for the simulation results on the \textsuperscript{117m}Sn production). Of course, this is negligible compared to the potentially large uncertainties on the neutron flux rate and the cross section spectra.
After the irradiation, we were notified by the operator of the cyclotron that the irradiation had to be stopped for about 10 minutes due to a vacuum problem. Once the problem was fixed, irradiation resumed to complete the 50 minutes of irradiation time. Since the half-lives of the isotopes produced in the reactions in~\cref{Tab:1} are quite long, we do not expect a large effect due to this interruption. An additional simulation with the FLUKA package using an irradiation time profile of 30 minutes of beam, followed by 10 minutes without beam and finally an additional 20 minutes of beam indeed gave no significant differences, within the statistical uncertainties, with respect to the calculation with a full uninterrupted beam for 50 minutes.
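Schematically, for a constant reaction rate $R$ during two beam-on intervals $t_1$ and $t_3$ separated by a pause $t_2$, the activity at the end of irradiation is
\[
A_{\mathrm{EOI}} = R\left[\left(1-e^{-\lambda t_1}\right)e^{-\lambda (t_2+t_3)} + \left(1-e^{-\lambda t_3}\right)\right],
\]
which for decay constants that are small compared to the inverse of the interruption time, as is the case for the nuclides in table~\ref{Tab:1}, is very close to the result for an uninterrupted irradiation of duration $t_1+t_3$.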
Finally, the values for the material densities $\varrho$ and the decay constants $\lambda$ were taken from the literature or are included in the simulation codes, and the corresponding uncertainties are considered negligible for the present study.
\section{Conclusion and Outlook}
Inspired by the calculations of a shielding assessment for a new cyclotron bunker, investigations on the neutron source term for the \textsuperscript{18}F production at an IBA Cyclone 18/9 cyclotron were carried out using the Monte Carlo transport and reaction codes MCNP6 and FLUKA. It was found that below 1 MeV, the MCNP6 code gave a differential neutron rate which is smaller than the one by FLUKA by up to 50\%. The total neutron production yield for both codes was about 3 times larger than the value obtained from~\cite{IAEA} for the exclusive \textsuperscript{18}F production channel. To validate the results of the Monte Carlo codes, a more realistic model of the target geometry of the Cyclone 18/9 cyclotron was created with the two Monte Carlo codes and used to calculate the activation of small monitor sample foils made of different metals and alloys during a typical run of \textsuperscript{18}F production. These results were then compared to the actual activation of the sample foils after an \textsuperscript{18}F run, which was obtained using gamma spectroscopy with HPGe detectors at two independent laboratories. In total, 11 reactions were investigated, with C/E ratios between 0.6 and 1.4 for most cases. As a general trend, results calculated using the MCNP6 code were 20 to 40\% lower than the ones obtained with FLUKA. This may be (partially) explained by the fact that the source term obtained with MCNP6 is lower than the one from FLUKA below 1 MeV. For three reactions, the Monte Carlo simulations consistently gave much lower results than the measured data (C/E values as low as 0.12 were observed). This was the case for the \textsuperscript{58}Ni(n,np)\textsuperscript{57}Co reaction, the production of \textsuperscript{114m}In and the reaction \textsuperscript{68}Zn(n,g)\textsuperscript{69m}Zn. For these three reactions, the discussed uncertainties cannot accommodate the discrepancies, and it is most likely that the underlying cross section data for these reactions is responsible for the results; this work may eventually help to improve the cross section database in the Monte Carlo programs in the future.
Despite the uncertainties of the measurements, the obtained results show that the calculation of the neutron source terms with the help of the methods and models which are implemented in radiation transport and reaction codes like MCNP6 and FLUKA should work better for a proton beam of 18 MeV than a calculation based solely on the \textsuperscript{18}F yield. This is consistent with observations in~\cite{Carroll}, which reports significantly higher neutron yields for proton energies above 12 MeV for evaluations using a full ALICE-91 calculation with respect to evaluations with tabulated yield values for the \textsuperscript{18}O(p,n)\textsuperscript{18}F reaction only. However, our observations seem to contradict the experimental results in~\cite{Mendez,Hagiwara}, which find reasonable agreement with the yield of
1.115$\times$10\textsuperscript{10} n/s for 1 $\upmu$A of proton current at 18 MeV energy obtained from~\cite{IAEA} for the ${}^{18}$O(p,n)${}^{18}$F channel.
In order to consolidate our results, further experiments are planned at the new cyclotron of the HZDR. In these experiments, both the target material and the proton energy will be varied. The aim is to provide validated absolute neutron fluence spectra for shielding calculations at medical cyclotrons.
\section*{Acknowledgement}
The authors would like to thank S. Bartel and M. K\"ohler of VKTA for providing the analysis of the sample monitors. We would also like to thank S. Preusche and F. Hobitz of HZDR for assistance and support with the sample irradiation at the cyclotron.
\section*{References}
\subsubsection{Searches for Jet Quenching in \mbox{$pA$}\xspace\ collisions}
Given the recent wealth of data showing evidence for
collective behavior in
\pPb\ and \pp\ collisions (for a recent review see Ref.~\cite{Nagle:2018nvi}),
it is natural to search for
jet quenching effects in these systems. As of this writing
no effects of jet quenching have been observed in \pPb\ or \pp\ collisions.
Here we discuss several searches for jet quenching in \mbox{$pA$}\xspace\ collisions.
The nuclear modification factor \RpA\ has been measured both in
\mbox{$d$$+$Au}\xspace\ collisions at RHIC and \pPb\ collisions at the LHC
for both jets~\cite{Adare:2015gla,Adam:2015hoa,ATLAS:2014cpa} and
charged particles~\cite{Adams:2003im,Adler:2003ii,Khachatryan:2016odn}.
No evidence for jet quenching was found in these measurements.
However, the precision of these measurements is limited by the
normalization uncertainties associated with the nuclear modification
factor coming from the luminosity and \TAA\ determination (along
with other sources).
In order to be sensitive to potentially smaller jet quenching
effects, measurements of self-normalized observables (e.g. normalized per-jet, or per-trigger
particle) have been done.
ALICE measured the charged-particle jets opposite to a high-transverse momentum
trigger hadron~\cite{Acharya:2017okq} and reported their per-trigger normalized yield over a broad kinematic range.
The advantage of the per-trigger-particle normalisation is that no \TAA\ scaling of the reference is needed, and thus no Glauber modelling or interpretation of the event activity (EA) in terms of geometry is required.
Events are classified according to the hits in a forward scintillator in the Pb-going direction or the hits in a zero-degree neutron detector, also in the Pb-going direction.
Figure~\ref{fig:pPbjqALICE} shows the ratio of the observable in the two EA classes. The ratio is consistent with no energy loss. The red line indicates a limit, at $90\%$ confidence level, on the average $p_{T}$ shift of 0.4 GeV/c, which is an estimate of the maximum energy that is transported outside the jet cone.
\begin{figure}[h!]
\centering
\includegraphics[width=0.45\textwidth]{plots/2018-May-20-ppb5_RCP_limit90_AKT04_V0A_W1.pdf}
\caption{Ratio of recoil jet $p_{\rm{T}}$ distributions in \pPb\ events with high and low event activity measured in the forward detectors. From Ref.~\cite{Acharya:2017okq}.}
\label{fig:pPbjqALICE}
\end{figure}
ATLAS measured fragmentation functions in \pPb\ collisions~\cite{Aaboud:2017tke}.
As shown in Figure~\ref{fig:pPbjq}, in contrast to \mbox{Pb$+$Pb}\xspace\ collisions, no significant deviation was found between the \pPb\ fragmentation functions and the \pp\ ones for the soft particles in the jet. The central values of the \pPb\ fragmentation functions show an excess of less than 10\% relative to the \pp\ ones for charged particles between approximately 1 and 5~GeV, but this is within the size of the systematic uncertainties.
Also shown in Figure~\ref{fig:pPbjq} is the measurement from CMS of
the dijet asymmetry in \pPb\ collisions selected on the forward
energy in the Pb-going direction~\cite{Chatrchyan:2014hqa}.
They evaluated the mean of this distribution
and found that quantity to be independent of the forward energy to within their
uncertainties.
\begin{figure}[h!]
\centering
\includegraphics[width=0.59\textwidth]{plots/atlas_pPb_FF.pdf}
\includegraphics[width=0.4\textwidth]{plots/CMS_pPb_xJ.pdf}
\caption{(left) Ratios of the fragmentation functions in \pPb\ collisions
to those in \pp\ collisions for various \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ selections as a
function of charged particle \mbox{${p_T}$}\xspace. (right) Mean value
of the ratio of the subleading to leading \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$} (\mbox{$x_{J}$})
as a function of the transverse energy in the Pb-going direction. Figures
are from Ref.~\cite{Aaboud:2017tke} (left) and Ref.~\cite{Chatrchyan:2014hqa} (right). }
\label{fig:pPbjq}
\end{figure}
There is ongoing interest in measurements which might be more sensitive to any jet quenching signal, but the existing measurements clearly show that any jet quenching which might exist in \pPb\ collisions is much smaller than that in heavy ion collisions.
\subsubsection{Light ion collisions}
In order to map the transition
between large systems (e.g. central heavy ion collisions) with large
energy loss and small \mbox{$pA$}\xspace\ systems without observed energy
loss, there is a lot of interest in having small, symmetric
collision systems with which to potentially observe the turn-off of jet
quenching.
Some data from collision systems smaller than \mbox{Pb$+$Pb}\xspace\ or
\auau\ does exist. Most recently, the LHC delivered
\xexe\ collisions in 2017. Those results showed that the
value of \RAA\ for charged particles depends primarily
on the size of the collision system (as measured by the charged particle
multiplicity)~\cite{Acharya:2018eaq}.
However, the utility of the \xexe\ measurements to answer this question
is limited by the
fact that \xexe\ collisions are much closer in charged-particle
multiplicity and \Npart\ to \mbox{Pb$+$Pb}\xspace\ collisions than they are to \pPb\
collisions.
In light of this there remains a great interest in colliding a
much smaller collision system, with an \Npart\ close to that
of \pPb\ collisions but with a larger geometrical transverse overlap that increases the in-medium path length and thus, potentially, the quenching.
The preferred collision system
is \OO~\cite{Huss:2020dwe,Brewer:2021kiv}. The experimental projections for the nuclear modification factor of charged hadrons measured in a short \OO\ run of $0.5$ nb$^{-1}$~\cite{ALICE-PUBLIC-2021-004} were compared to theoretical expectations for jet quenching~\cite{Huss:2020dwe,Huss:2020whe}. The comparison indicates that a partonic energy loss signal might be observed at transverse momenta of approximately 20~GeV.
In the spring of 2021
RHIC ran \OO\ collisions for the STAR experiment.
The data from that run have not yet been analyzed but will provide the first look at this important question.
\section{Introduction}
\input{intro}
\section{A brief summary of the theoretical advances in jet quenching}
\input{theory}
\section{Jet Measurements in Heavy Ion Collisions}
\subsection{Jet Reconstruction}
\input{techniques}
\subsection{Jet tools}
\label{subsect:jettools}
\input{tools}
\subsection{Jet Observables}
\input{observables}
\section{What have we learned from jet measurements at the LHC and RHIC?}
We have organised available heavy-ion jet data around three physics
questions. First, what are the mechanisms responsible for the transport of energy from high-energy to low-energy modes within the QGP? Second, can we observe jets
probing free quarks and gluons within the QGP?
And finally, what is the critical size for the QGP formation?
The first class of measurements constrains the mechanisms of jet-medium interactions since it comprises a vast set of observables that are differential in jet size, flavour, shape, substructure and in-medium path length. We reconstruct jets after they have interacted with the QGP and we measure the properties of a specific selection of jets, those which have survived. We aim to learn about the flavour hierarchy in energy loss, about the role of the medium response, or about the interplay between energy loss and color coherence. These aspects of jet quenching are interconnected, and measurements attempt to isolate the effects, for instance, by separating large-angle and small-angle components or by selecting jets with a hard 2-prong substructure.
The second class of observables comprises searches for large momentum transfer interactions in the medium as evidence for point-like scatterers within the QGP fluid. The third class of measurements comprises searches for jet quenching signatures in small systems like \pPb\
collisions that display signatures of collective effects.
\subsection{Jet energy transport within the QGP}
\input{transport}
\subsection{Effective Degrees of Freedom of the QGP}
\input{dof}
\subsection{Critical Size for QGP Formation}
\input{critical_size}
\section{Conclusions and Outlook}
\input{conclusions}
\section{Acknowledgements}
\input{thanks}
\section{References}
\bibliographystyle{unsrturl}
\subsubsection{How opaque is the QGP to jet propagation?}
Inclusive jets dominantly originate from light quarks and gluons. The high rates of these
jets allow for measurements over a very large kinematic range. Figure~\ref{fig:incjetsRAA}
shows recent ALICE and ATLAS results of the
jet nuclear modification factor, \mbox{$R_{\mathrm{AA}}$}, measured in 0--10\%
central collisions from 60~GeV to 1~TeV
for \mbox{$R=0.4$}\ jets~\cite{Acharya:2019jyg,Aaboud:2018twu} for 5.02~TeV \mbox{Pb$+$Pb}\xspace\
collisions at the LHC.
\mbox{$R_{\mathrm{AA}}$}\ remains below unity over the entire measured
kinematic range.
At RHIC, the jet \mbox{$R_{\mathrm{CP}}$}\ was measured~\cite{Adam:2020wen} and found to be consistent with values from Ref.~\cite{Abelev:2013kqa} in 2.76~TeV \mbox{Pb$+$Pb}\xspace\ collisions,
see Figure~\ref{fig:starrcp}. Additionally, \RAA\ has
been measured for neutral pions~\cite{PHENIX:2012jha}.
\begin{figure}
\centering
\includegraphics[width=0.44\textwidth]{plots/alice_r4_raa.pdf}
\includegraphics[width=0.54\textwidth]{plots/RAA_ATLAS.pdf}
\caption{Jet \mbox{$R_{\mathrm{AA}}$}\ as a function of \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ in \mbox{Pb$+$Pb}\xspace\ collisions. Figures are from Refs.~\cite{Acharya:2019jyg} (left) and \cite{Aaboud:2018twu} (right).}
\label{fig:incjetsRAA}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.95\textwidth]{plots/star_rcp_jet.pdf}
\caption{\mbox{$R_{\mathrm{CP}}$}\ for jets and charged particles at RHIC and the LHC
(as indicated on the plot)
for \mbox{$R=0.2$}\ (left) and \mbox{$R=0.3$}\ jets (right). From Ref.~\cite{Adam:2020wen}.}
\label{fig:starrcp}
\end{figure}
In order to extract energy loss values from these measurements, it is necessary to have a
model. Jet quenching models are reviewed in
Refs.~\cite{Qin:2015srf,Blaizot:2015lma,Cao:2020wlm}.
A great deal of theoretical work has gone into the development of these
models over many years. However, constraining models with data has been a
challenge.
The use of Bayesian techniques to extract jet
quenching parameters is a recent but rapidly evolving field.
This was first used in heavy ion collisions
to constrain the equation of state~\cite{Pratt:2015zsa} and is now used to extract
estimates for the QGP bulk properties including shear and
bulk viscosity (see recent examples in
Refs.~\cite{Bernhard:2019bmu, JETSCAPE:2020mzn, Nijs:2020roc}).
From the LHC data, Ref.~\cite{He:2018gks} calculated that
jets have lost an average of 10--50 GeV for \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ between 100 and 900~GeV.
These energy loss values provide additional information beyond the \RAA\ values themselves, but they are
not direct properties of the QGP. The extraction of \qhat\ from energy
loss measurements was recently performed in Refs.~\cite{Ke:2020clc,Cao:2021keo} using
the LIDO and JETSCAPE software, respectively.
Both of these papers constrain the models to experimental data by evolving a jet
quenching calculation through a 2+1D hydrodynamic evolution (using event-averaged
initial conditions). At high temperature, the
two \qhat\ extractions agree and constrain $\qhat / T^3$ to be approximately 1--5
over the range 300~$< T <$~500~MeV
but the result from LIDO increases
sharply to 10--15 for $T<$~300~MeV, while the result from JETSCAPE remains constant
in that same range (both of these extractions are for $p=$~100~GeV).
The extractions use different energy loss models
and a different selection of experimental data and it is not clear
which (or both) aspect of the models leads to the low-$T$ difference in \qhat.
Both extractions include data from RHIC, but LIDO
includes the STAR jet \mbox{$R_{\mathrm{CP}}$}\ result~\cite{Adam:2020wen} and the PHENIX \mbox{$\pi^{0}$}\
result~\cite{Adare:2012wg} while JETSCAPE only includes the PHENIX \mbox{$\pi^{0}$}\ result.
JETSCAPE has broken down the constraints on \qhat\ from RHIC and the
LHC data separately and shown that there is essentially no constraining power in the
RHIC data in their model due to the limited kinematic range of the measurement.
The limited kinematic range and
statistical precision of the available RHIC data mean that the extractions are dominated
by the LHC data at 5.02~TeV. This should
change with data from the sPHENIX experiment~\cite{Adare:2015kwa}.
The ability to extract \qhat\ from the data via Bayesian analysis is a substantial step
forward in jet physics in heavy ion collisions. The current analyses represent a proof of concept of the Bayesian techniques and can be improved in several ways. On the one hand, the next generation of analyses will include more differential jet observables that place stronger constraints on the models than single hadron or fixed-$R$ inclusive jet suppression. On the other hand, a wider set of model calculations and approximations should be included in the analysis.
Other aspects, like going beyond event-averaged geometry to become sensitive to geometrical fluctuations in energy loss, are also to be addressed.
For the rest of this section, we discuss other measurements which can provide
more experimental information about the details of energy transport in the QGP.
\subsubsection{How does the amount of lost energy depend on path length?}
A fundamental question is how energy loss of jets depends on the
path length the jet travels through the QGP.
We cannot know the specific path length traveled by the jet because of:
\begin{itemize}
\item event-by-event variation of the QGP shape and size
\item the unknown position of the hard scattering process within the QGP
\item the random propagation direction of the jet within the QGP
\end{itemize}
Because of these factors, there can be a large variation between the path lengths encountered
even by jets produced within the same hard scattering. This variation,
along with the steeply falling jet cross section with transverse momentum, leads to a selection bias toward jets which have lost little energy and thus likely
also travelled through a smaller than average amount of QGP. This
is called the \textit{surface bias}~\cite{Renk:2012ve}.
In addition to path-length
variation there can also be fluctuations in the energy loss
process~\cite{Milhano:2015mng}.
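The interplay of a steeply falling spectrum with path-length-dependent energy loss can be illustrated with a deliberately oversimplified toy model; the snippet below (a sketch, with arbitrary illustrative choices for the geometry, the spectral index and the quenching strength, and not any published calculation) samples production points uniformly in a disk, propagates each jet in a random transverse direction and subtracts an energy loss proportional to $L^2$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
R, n, kappa = 5.0, 6.0, 0.5      # disk radius [fm], spectral index,
                                 # toy loss coefficient [GeV/fm^2]
N = 100000
# production points distributed uniformly over a disk of radius R
r = R * np.sqrt(rng.random(N))
phi = 2.0 * np.pi * rng.random(N)
x, y = r * np.cos(phi), r * np.sin(phi)
# random propagation direction in the transverse plane
psi = 2.0 * np.pi * rng.random(N)
ux, uy = np.cos(psi), np.sin(psi)
# path length from (x, y) along (ux, uy) to the edge of the disk
b = x * ux + y * uy
L = -b + np.sqrt(b**2 + R**2 - r**2)
# steeply falling spectrum ~ pT^(-n) above 50 GeV, toy quenching ~ L^2
pt = 50.0 * rng.random(N) ** (-1.0 / (n - 1.0))
sel = (pt - kappa * L**2) > 100.0   # "measured" jets above 100 GeV
print(f"<L> of all jets:      {L.mean():.2f} fm")
print(f"<L> of selected jets: {L[sel].mean():.2f} fm")
\end{verbatim}
In such a toy setup the selected sample has a visibly shorter average path length than the full sample, which is the essence of the surface bias.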
In order to isolate effects which are sensitive to path-length
variations, jet observables which are
differential in the QGP geometry can be measured.
Additionally, model calculations must incorporate
realistic event-by-event geometry into calculations in order to make
meaningful comparisons to data. In this section, we will discuss
the physics processes thought to govern this question and the available measurements.
We will finish with some open questions.
In the perturbative description of energy loss, the spectrum of the emitted gluons is
expected to be $dI/d\omega \propto 1/\omega$ if the interactions with the medium are incoherent.
However, the Landau–Pomeranchuk–Migdal (LPM)
effect in the QGP~\cite{Baier:1996kr,Wang:1994fx} leads to $dI/d\omega \propto 1/\omega^{3/2}$ for
$\omega < \omega_{c}$ and this leads to a quadratic dependence of the energy loss on the
in-medium path length, $L$, $\Delta E_{loss} \propto L^{2}$.
In a nonperturbative strong coupling model $\Delta E_{loss} \propto L^{3}$ is
expected~\cite{Chesler:2008uy}.
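Schematically, in this perturbative picture the average radiative energy loss grows with the medium length as
\[
\langle \Delta E \rangle \propto \alpha_s\, \omega_c \sim \alpha_s\, \hat{q}\, L^2 ,
\]
where $\omega_c = \hat{q} L^2/2$ is the characteristic gluon energy and $\hat{q}$ the jet quenching parameter, while the strongly coupled estimate of Ref.~\cite{Chesler:2008uy} leads to the $L^3$ scaling quoted above.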
$\Delta E_{loss}(L)$ itself is not directly measurable. Instead, the key element in
this study has been to measure the azimuthal anisotropy, \mbox{$v_{n}$}, of jets and high-\mbox{${p_T}$}\xspace\ particles.
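For reference, the harmonics $v_n$ are defined through the azimuthal distribution of the particle or jet yield relative to the $n$-th order event plane $\Psi_n$,
\[
\frac{dN}{d\phi} \propto 1 + 2\sum_{n} v_n \cos\left[n\left(\phi - \Psi_n\right)\right],
\]
so a positive \mbox{$v_{\mathrm{2}}$}\ corresponds to an enhanced yield in the event-plane direction, where the average in-medium path length is shortest.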
Measurements from RHIC using hadrons showed a larger $v_2$ than expected
from pQCD-based energy loss calculations~\cite{Adare:2010sp}. This
was taken as possible evidence for strong coupling energy loss with a stronger
dependence on $L$ than expected from pQCD. However,
conclusions made from these measurements were shown to be limited
by the use of non-fluctuating geometry; the addition
of geometrical fluctuations increased the value of \mbox{$v_{\mathrm{2}}$}\
expected from pQCD-based theoretical calculations~\cite{Noronha-Hostler:2016eow}.
At the LHC, measurements of jet~\cite{Aad:2013sla,Adam:2015mda,ATLAS:2020qxc} and
high-\mbox{${p_T}$}\xspace\ charged particle~\cite{Sirunyan:2017pan} \mbox{$v_{\mathrm{2}}$}\ have been performed;
a compilation of the measurements for mid-central \mbox{Pb$+$Pb}\xspace\ collisions is shown in
Figure~\ref{fig:v2}. The \mbox{$v_{\mathrm{2}}$}\ value varies from approximately 5\% for 20~GeV charged particles
to about 2\% for 200~GeV jets. The \mbox{$v_{\mathrm{2}}$}\ values as a function of centrality follow
the geometrical expectations; a smaller \mbox{$v_{\mathrm{2}}$}\ value is seen in central collisions
than in mid-central and peripheral collisions~\cite{ATLAS:2020qxc}. In order to further
constrain the path-length dependence of energy loss, measurements of \mbox{$v_{\mathrm{3}}$}\ and
\mbox{$v_{\mathrm{4}}$}\ have been made for jets~\cite{ATLAS:2020qxc} and high-\mbox{${p_T}$}\xspace\ charged
particles~\cite{Sirunyan:2017pan}; above 20~GeV, there is no evidence for non-zero
\mbox{$v_{\mathrm{3}}$}\ or \mbox{$v_{\mathrm{4}}$}\ in any collision system. These higher-order
harmonics should introduce a smaller
path length difference between in-plane and out-of-plane directions than \mbox{$v_{\mathrm{2}}$}\
and so it is important to improve the precision of these measurements
in order to experimentally constrain the path-length dependence of energy loss.
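As an illustration of how such a harmonic is extracted, the sketch below implements the simplest event-plane estimator, $v_{2}=\langle\cos 2(\phi-\Psi_{2})\rangle$. It assumes a perfectly known event-plane angle, i.e., it omits the event-plane resolution correction that any real measurement must apply, and the input values are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
v2_true, psi2 = 0.04, 0.3      # assumed input anisotropy and plane angle

# accept-reject sampling of jet azimuths from 1 + 2 v2 cos(2(phi - Psi2))
phi = rng.uniform(0, 2 * np.pi, 500000)
w = (1 + 2 * v2_true * np.cos(2 * (phi - psi2))) / (1 + 2 * v2_true)
phi = phi[rng.random(phi.size) < w]

v2_est = np.cos(2 * (phi - psi2)).mean()
print(f"extracted v2 = {v2_est:.4f}")   # close to 0.04 up to statistics
\end{verbatim}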
\begin{figure}[ht]
\centering
\includegraphics[width=0.93\textwidth]{plots/jetvn_all.pdf}
\caption{A compilation of jet~\cite{Adam:2015mda,ATLAS:2020qxc} and
charged-particle~\cite{Sirunyan:2017pan} \mbox{$v_{\mathrm{2}}$}\
measurements. Figure is from Ref.~\cite{ATLAS:2020qxc}.}
\label{fig:v2}
\end{figure}
Interestingly, a non-zero \mbox{$v_{\mathrm{2}}$}\ has been measured for high-\mbox{${p_T}$}\xspace\ charged
particles in \pPb\ collisions~\cite{Aad:2019ajj}. The measured \mbox{$v_{\mathrm{2}}$}\ is approximately
2\% for 20--50~GeV particles in central \pPb\ collisions. In contrast to \mbox{Pb$+$Pb}\xspace\ collisions,
the \mbox{$v_{\mathrm{2}}$}\ in \pPb\ collisions is not accompanied by a large energy loss; in \pPb\
collisions \RpPb\ is consistent with unity~\cite{Adam:2015hoa,Khachatryan:2016odn}.
If this \mbox{$v_{\mathrm{2}}$}\ arises from path-length
dependent energy loss, the absolute size of the energy loss
would have to be sufficiently small to accommodate the
\RpPb\ results. Thus far, there is no understanding of
whether the observed \mbox{$v_{\mathrm{2}}$}\ can be attributed to energy loss or
if some other source is required to explain the data.
If the \pPb\ \mbox{$v_{\mathrm{2}}$}\ is due to some mechanism other
than path-length-dependent energy loss, then the impact on the commonly accepted understanding of these measurements in heavy-ion
collisions needs to be assessed.
Dijet measurements probe the path-length
dependence of energy loss through the geometry
in a different way than single-jet
measurements do.
The first LHC results showed a significant depletion of balanced dijets
in \mbox{Pb$+$Pb}\xspace\ collisions~\cite{Aad:2010bu}.
The qualitative explanation for this is that one jet loses more energy than
the other, either through an asymmetry in the path length or through fluctuations in
the energy loss.
Current measurements show the same decrease in the fraction of balanced
jet pairs in central \mbox{Pb$+$Pb}\xspace\ collisions compared to \pp\ collisions,
up to leading jets of at least 400~GeV~\cite{ATLAS:2020jyp}
(see Figure~\ref{fig:xJ_ATLAS}).
For leading jet \mbox{${p_T}$}\xspace of 158--178~GeV the \mbox{$x_{J}$}\ distribution in central
\mbox{Pb$+$Pb}\xspace\ collisions is consistent with no \mbox{$x_{J}$}\ dependence over the range of 0.5~$< \mbox{$x_{J}$} <$~1.0.
This is a very broad distribution and suggests that there is a very wide
variation in the magnitude of energy loss experienced by the subleading jet in
these collisions.
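For reference, the observable itself is elementary to compute once jet pairs are selected; the sketch below uses hypothetical jet momenta and ignores the back-to-back azimuthal requirement and the detector effects that a real analysis must handle.
\begin{verbatim}
import numpy as np

def x_j(pt_lead, pt_sublead):
    """Dijet momentum balance x_J = pT,sublead / pT,lead."""
    return pt_sublead / pt_lead

# hypothetical (leading, subleading) jet pT pairs in GeV
pairs = np.array([[165.0, 150.0], [170.0, 60.0], [160.0, 140.0]])
print(x_j(pairs[:, 0], pairs[:, 1]))   # roughly [0.91, 0.35, 0.88]
\end{verbatim}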
At 2.76~TeV, the first unfolded dijet
measurements in \mbox{Pb$+$Pb}\xspace\ collisions also
showed the imbalanced pairs expected from energy loss, as well as an apparent peak in
the \mbox{$x_{J}$}\ distribution at approximately 0.5~\cite{Aaboud:2017eww}. Figure~\ref{fig:xJ_ATLAS}
shows the \mbox{$x_{J}$}\ distributions (after the unfolding) in central \mbox{Pb$+$Pb}\xspace\ collisions
for jets from 100~GeV to over 200~GeV. The peak structure is clear for the lowest
\mbox{${p_T}$}\xspace\ jets and becomes insignificant for $\mbox{${p_T}$}\xspace >$~126~GeV. The origin of this structure
is not known. New measurements at 5.02~TeV have been unable to reach
low enough in jet \mbox{${p_T}$}\xspace\ to confirm this structure~\cite{ATLAS:2020jyp}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.50\textwidth]{plots/xJ_ATLAS502.pdf}
\includegraphics[width=0.45\textwidth]{plots/xJ_ATLAS276.pdf}
\caption{Dijet momentum imbalance, \mbox{$x_{J}$}, at 5.02~TeV (left) and 2.76~TeV (right)
for 0--10\% central \mbox{Pb$+$Pb}\xspace\ collisions and \pp\ collisions. Figures are from Ref.~\cite{ATLAS:2020jyp} (left) and Ref.~\cite{Aaboud:2017eww} (right).}
\label{fig:xJ_ATLAS}
\end{figure}
Both the dijet imbalance and the jet azimuthal anisotropies
should be especially sensitive to the effects of fluctuations in the initial geometry and fluctuations in the energy loss process.
Because of this, it is important to constrain these quantities simultaneously in experiment
and to compare them with theoretical calculations.
\subsubsection{How does jet quenching depend on the characteristics of the jets?}
In the previous sections, jets were discussed as monolithic objects.
Here, we discuss measurements of jet properties performed in order to probe medium modifications of the internal jet radiation pattern. Such modifications can provide information on the microscopic details of the jet-QGP interactions.
The jet radiation pattern is explored via measurements of the jet shapes including fragmentation functions.
We also discuss varying the \textit{partonic flavour} of jets between quarks, gluons
and heavy quarks, to test the flavour and mass dependence of jet-medium interactions. Finally, we discuss measurements of the hard jet substructure which aim at probing the building blocks of the parton shower in medium.
{\bf Does jet quenching depend on the jet shape, or are harder, narrower jets quenched less?}
Differences in the parton shower evolution
are expected to lead to different energy loss, so it is reasonable to ask if
the structure of the jets which survive is modified from jets in \pp\ collisions.
This question is intrinsically related to the quark/gluon differences discussed in the next subsection
because gluon jets on average have a broader and softer fragmentation than
quark jets.
The most comprehensive measurement
of jet fragmentation in heavy-ion collisions is in Ref.~\cite{Aaboud:2018hpb}.
Figure~\ref{fig:FF_PbPb_z} shows the fragmentation functions in central \mbox{Pb$+$Pb}\xspace\ events
for \mbox{$R=0.4$}\ jets divided by the same quantity in \pp\ collisions for three jet
\mbox{${p_T}$}\xspace\ selections. The momentum fraction of the jet carried by the charged particle,
\mbox{$z$}, is determined with respect to the \textit{observed} jet energy (as opposed
to the original, pre-quenching, parton energy).
At high-\mbox{$z$}\ the ratios of the fragmentation functions are consistent
for all three jet \mbox{${p_T}$}\xspace\ selections and there is an excess of high-\mbox{$z$}\ particles in \mbox{Pb$+$Pb}\xspace\
collisions. This excess can be explained as the result of a selection bias: the measured jets with high-$z$ hadrons are jets with a harder fragmentation that have
been quenched less.
Since quark jets have harder fragmentation on average than gluon jets, this
could also be understood as
evidence for a stronger energy loss for gluon jets than quark
jets~\cite{Spousta:2015fca}, resulting in an enhanced quark fraction at a given \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}.
\begin{figure}
\centering
\includegraphics[width=0.55\textwidth]{plots/FF_z.pdf}
\caption{Ratios of the fragmentation functions in
central \mbox{Pb$+$Pb}\xspace\ collisions to those in \pp\ collisions
for three different \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ selections as a function of \mbox{$z$}. Figure is from Ref.~\cite{Aaboud:2018hpb}.}
\label{fig:FF_PbPb_z}
\end{figure}
In order to look at the angular distribution of energy in jets, jet angularities and other jet shapes have been measured~\cite{Chatrchyan:2013kwa,Acharya:2018uvf}. Ref.~\cite{Chatrchyan:2013kwa} reports
the distribution of calorimeter energy inside the jets.
Ref.~\cite{Acharya:2018uvf} is based on unfolded
\mbox{$R=0.2$}\ track-based jets, includes constituents with $\mbox{$p_{\mathrm{T}}^{\mathrm{part}}$} >$~0.15 GeV, and measures
both the angularity (girth), $g$, and the momentum dispersion, $p_{T}D$. These observables correspond to $\lambda_{1,1}$ and $\lambda_{0,2}$ in Equation \ref{equation:angularities}, respectively.
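A minimal sketch of these two definitions, applied to a hypothetical list of track-jet constituents (the momenta and angles below are invented for illustration):
\begin{verbatim}
import numpy as np

def girth_and_ptd(pt, dr):
    """Girth g = sum_i z_i * DeltaR_i with z_i = pT_i / sum_j pT_j,
    and pTD = sqrt(sum_i pT_i^2) / sum_i pT_i."""
    z = pt / pt.sum()
    return np.sum(z * dr), np.sqrt(np.sum(pt**2)) / pt.sum()

# hypothetical R = 0.2 track-jet constituents: pT in GeV, DeltaR to axis
pt = np.array([40.0, 25.0, 10.0, 3.0, 0.5])
dr = np.array([0.02, 0.05, 0.10, 0.15, 0.18])
print(girth_and_ptd(pt, dr))
\end{verbatim}
A single hard core gives a small $g$ and a $p_{T}D$ close to one, while many soft, wide-angle constituents push $g$ up and $p_{T}D$ down.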
The small cone size of these jets emphasizes the core and
minimizes the effect of any medium response. The distributions of
these quantities in \mbox{Pb$+$Pb}\xspace\ collisions are shown in Figure~\ref{fig:alice_shapes} and indicate that the measured quenched jets are narrower and have a harder fragmentation than the \mbox{\sc Pythia}\xspace~\cite{Sjostrand:2014zea}
simulation of jets in the vacuum.
This can, yet again,
be interpreted as a selection bias by which broad jets with a softer fragmentation are more quenched and are filtered out from the selected reconstructed jet $p_{\rm{T}}$ bin.
Interestingly, like in the fragmentation function measurements from ATLAS above, a harder and narrower fragmentation
is consistent with a more quark-like fragmentation and the results agree well with \mbox{\sc Pythia}\xspace quark distributions as shown in~\cite{Acharya:2018uvf}.
It is, however, worth noting that the measurement of the jet charge \cite{CMS:2020plq} does not indicate a change of the quark and gluon fractions in \mbox{Pb$+$Pb}\xspace\ relative to \pp\ collisions.
\begin{figure}[ht]
\centering
\includegraphics[width=0.85\textwidth]{plots/alice_shapes.pdf}
\caption{Jet girth and momentum dispersion in central \mbox{Pb$+$Pb}\xspace\ collisions compared to a vacuum simulation. Figure is from Ref.~\cite{Acharya:2018uvf}.}
\label{fig:alice_shapes}
\end{figure}
{\bf Does jet quenching depend on quark flavour and mass?}
At leading order in vacuum QCD, differences between quark and gluon fragmentation are dictated by color factors: the splitting rate is
enhanced by the color-factor ratio $C_A/C_F = 2.25$
for gluons relative to quarks, leading to broader and softer parton showers. The
larger splitting rate leads to an expectation of
more interactions between gluon jets and the QGP.
Inclusive jets are a mixture of quark and gluon jets. The mixture is governed by
the parton distribution functions (PDFs) of the colliding
nucleons. At low $x$, gluons dominate, while toward the valence region
there is a greater fraction of quarks.
An attempt to measure the quark and gluon fractions in
jets in \mbox{Pb$+$Pb}\xspace\ collisions has not found any
significant difference from that measured in \pp\ collisions~\cite{Sirunyan:2020qvi},
but such measurements have substantial systematic uncertainties and model dependence.
Other techniques have been
used to attempt to enhance the quark-jet fraction and to look at
the effect on the jet quenching.
One technique to enhance the quark-jet sample is to measure
the rapidity dependence of jet observables. At forward rapidities,
the fraction of quark-initiated jets is enhanced because the initiating
partons carry higher $x$ than at mid-rapidity.
Alternatively, one can consider jets recoiling from isolated photons or Z bosons. \mbox{\sc Pythia}\xspace simulations \cite{CMS:2021vsp} indicate quark fractions nearly a factor of three higher in Z-jet events than in central dijet events for $R=0.4$ jets of $p_{\rm{T}}<200$ GeV at 13 TeV.
Lastly, heavy-flavour jet tagging makes it possible to study the effect of a large
quark mass on jet quenching.
\begin{itemize}
\item \underline{Rapidity dependence of energy loss}
Figure~\ref{fig:incjetsRAA_rap}
shows the first evidence for
a rapidity dependence of \mbox{$R_{\mathrm{AA}}$}~\cite{Aaboud:2018twu}.
There are two competing effects that could be expected.
First, the gluon-jet fraction in the inclusive jet sample decreases with increasing rapidity at
fixed jet transverse momentum, \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ (see, for example, Ref.~\cite{Spousta:2015fca}, where PYTHIA8 calculations show that the quark fraction almost doubles at forward rapidities $1.2 < |y| < 2.1$ compared to $|y| < 0.3$ for jets of $p_{\rm{T}}=100$ GeV).
As quarks are
expected to lose less energy than gluons in the QGP, the
value of \mbox{$R_{\mathrm{AA}}$}\ would
be expected to increase as $|\mbox{$y^{\mathrm{jet}}$}|$ (and thus
the fraction of quark jets in the inclusive jet sample at a fixed \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$})
increases. Second, the \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ spectra become steeper with
increasing $|\mbox{$y^{\mathrm{jet}}$}|$ (see, for example, Ref.~\cite{Aaboud:2017dvo}); this would cause a reduction in the \mbox{$R_{\mathrm{AA}}$}\
value for the same energy loss. In Figure~\ref{fig:incjetsRAA_rap}, \mbox{$R_{\mathrm{AA}}$}\ is shown to decrease with increasing
$|\mbox{$y^{\mathrm{jet}}$}|$ for jets with $\mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$} >$~300~GeV,
suggesting that the second effect dominates for these jets.
\begin{figure}
\centering
\includegraphics*[width=0.54\textwidth]{plots/ydep_raa.pdf}
\caption{
\mbox{$R_{\mathrm{AA}}$}\ as a function of the jet rapidity
normalized by $\mbox{$R_{\mathrm{AA}}$}(|y| < 0.3)$
for four \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ selections. Figure is from Ref.~\cite{Aaboud:2018twu}.}
\label{fig:incjetsRAA_rap}
\end{figure}
\item \underline{Photon-tagged jet observables}
Jets opposite in azimuth from a high momentum photon can also provide
an enhancement of quark-jets over inclusive jets because these photon-jet
pairs are primarily produced via $g+ q \to \gamma + q$ scattering.
Additionally,
the photon does not lose energy in the QGP via the strong interaction and therefore
provides information about the initial hard scattering momentum transfer.
However, these jets have a different geometrical bias than inclusive jets;
since the photon does not lose energy, the geometrical bias toward jets
produced near the surface is removed for photon-jet measurements.
Previous measurements have shown that the \mbox{${p_T}$}\xspace\ of the jet relative to that of the photon is reduced
in heavy-ion
collisions relative to \pp\ collisions~\cite{Chatrchyan:2012gt}.
Measurements allow the study of the photon-jet
momentum balance as a function of the photon \mbox{${p_T}$}\xspace~\cite{Sirunyan:2017qhf,Aaboud:2018anc}. The ATLAS
results are unfolded and
are shown in Figure~\ref{fig:photonjets} for 100--158~GeV photons in 0--10\% central collisions.
Going from peripheral to central collisions, the fraction of balanced photon-jet
pairs (those with $\mbox{$x_{J\gamma}$} \approx 1$) decreases and the fraction of pairs in which the photon
has more \mbox{${p_T}$}\xspace\ than the jet increases. This is
qualitatively as expected from jet quenching, but because both the
geometrical bias and the observable differ from those of inclusive jets,
it is not possible to say without a model whether these quark-enhanced jets
have lost less energy, as would be expected. The most probable value
of \mbox{$x_{J\gamma}$}\ in the most central collisions is about 0.3, indicating that
many jets have lost a large fraction of their transverse momentum. However, it is
interesting that even in the most central collisions, a substantial
fraction of jets remain nearly
balanced, indicating that they have not lost a large amount of energy.
Measurements with $Z$-bosons as the tag have also been done~\cite{CMS:2017eqd};
the message is similar to that of the photon-tagged measurements but the
statistical precision is worse.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\linewidth]{plots/gammajet_ATLAS.pdf}\\
\caption{ Ratio of the jet transverse momentum to the
photon \mbox{${p_T}$}\xspace, $x_{J\gamma}$, in \pp\ and \mbox{Pb$+$Pb}\xspace\ collisions~\cite{Aaboud:2018anc}. }
\label{fig:photonjets}
\end{figure}
First measurements of
the fragmentation of the jets opposite a photon have been performed~\cite{Sirunyan:2018qec,Aaboud:2019oac}.
As discussed above,
these fragmentation functions differ from inclusive fragmentation functions in a few ways.
First, the
jets are possibly quenched more due to geometrical bias from the photon selection.
Second, the jets are at lower \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ than in the inclusive jet fragmentation function measurements
because the photon tag, while limiting the statistics, provides a cleaner identification
of jets at lower \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ than in the inclusive case.
Finally, these jets have a much higher fraction of quark jets than the inclusive jet sample
due to the leading-order dominance of the $q + g \to q + \gamma$ process in these events.
Measurements of photon-hadron correlations had been made at RHIC~\cite{Adare:2012qi,Abelev:2009gu,Ge:2017irb},
but
only recently were measurements made of the hadrons in reconstructed jets back-to-back with
a photon in \mbox{Pb$+$Pb}\xspace\ collisions~\cite{Sirunyan:2018qec,Aaboud:2019oac}.
Figure~\ref{fig:photonjetFF} shows the ratio of fragmentation functions
in \mbox{Pb$+$Pb}\xspace\ collisions compared to \pp\ collisions for both jets opposite a photon
and inclusive jets. A stronger deviation of this ratio from unity is seen in
central collisions than in peripheral collisions for both jet selections. Interestingly,
when comparing the central data directly to the peripheral data, the centrality
dependence is significantly larger in the photon-tagged jets than in the inclusive jets.
It is not known if this is caused by the lower
\mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ range for the photon-tagged fragmentation functions or the different
geometrical biases of the two samples, but being able to measure the fragmentation of
photon-tagged jets at the same \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ as inclusive jets would be an obvious
way to constrain the source of this difference.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\linewidth]{plots/photonjetFF.pdf}
\caption{The ratio of the fragmentation function as a function of
charged particle \mbox{${p_T}$}\xspace\ in central \mbox{Pb$+$Pb}\xspace\ collisions to \pp\ collisions
for jets opposite a photon
(squares) and inclusive jet fragmentation functions~\cite{Aaboud:2017bzv}
(circles). Figure is from Ref.~\cite{Aaboud:2019oac}.}
\label{fig:photonjetFF}
\end{figure}
\item \underline{Heavy Flavour}
In vacuum, besides the aforementioned differences between the radiation pattern of jets initiated by light quarks and gluons, dictated at LO by the color factors, the quark mass plays a role. In QCD (and in gauge theories in general), radiation off a massive quark $Q$ is suppressed in a cone of angle $\theta_{C}=m_{Q}/E_{Q}$. This is the
so-called dead-cone effect \cite{Dokshitzer:1991fd} that causes heavy quarks to radiate less than light quarks (a numerical illustration of the dead-cone angle is given after this list). In heavy-ion collisions, medium-induced radiation off heavy quarks is expected to fill the dead-cone region, but is predicted to be suppressed for high-energy radiation as compared to light quarks \cite{Armesto:2003jh}, resulting in a quark-mass dependence of the energy loss.
The measurement of the energy loss of heavy-flavour jets
is very challenging. The overall rate of these jets is very low (a few percent of the
inclusive jet cross section) and identifying them relies on measurements sensitive to the decay of the $B$ or $D$ hadron carrying the quark
of interest inside the jet.
CMS has made a measurement of the \mbox{$b$}-jet\xspace\ \RAA\ in 2.76 TeV
collisions~\cite{Chatrchyan:2013exa} and found consistent \RAA\ values
between inclusive and
\mbox{$b$}-jets\xspace. Additionally, they measured the momentum imbalance of back-to-back \mbox{$b$}-jets\xspace\
and found it to be comparable to that measured for inclusive jets in 5.02~TeV \mbox{Pb$+$Pb}\xspace\ collisions~\cite{Sirunyan:2018jju}, see Figure~\ref{fig:bdijet}. Both
measurements have sizeable uncertainties and
\mbox{$b$}-jet\xspace\ measurements will be an important part of the LHC physics program in Run 3 and beyond.
\begin{figure}[ht]
\centering
\includegraphics[width=0.95\linewidth]{plots/CMS-HIN-16-005_Figure_006.pdf}
\caption{The momentum imbalance in inclusive dijets and b-dijets as a function of collision centrality in \mbox{Pb$+$Pb}\xspace\ collisions compared to \pp\ collisions. Figure is from Ref.~\cite{Sirunyan:2018jju}.}
\label{fig:bdijet}
\end{figure}
The application of substructure techniques to heavy-flavour jets in \pp\ collisions has recently led to the first direct observation of the dead cone in QCD~\cite{ALICE:2021aqk}. The exploration of such techniques in heavy-ion collisions is yet to happen.
Substructure of double HF-tagged jets (discussed in Ref.~\cite{Ilten:2017rbd} in the context of disentangling heavy-flavour processes in \pp\ collisions) might make it possible to identify the c$\overline{\rm{c}}$ or b$\overline{\rm{b}}$ antenna without the ambiguities of the inclusive SD analysis, which suffers from strong contamination by combinatorial prongs.
Heavy quark measurements are expected to be substantially improved in the near
future with higher luminosity and detector upgrades
at the LHC Runs 3 and 4~\cite{Citron:2018lsq} and the ability
to tag \mbox{$b$}-jets\xspace\ at sPHENIX at RHIC~\cite{PHENIX:2015siv}.
\end{itemize}
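As referenced in the heavy-flavour item above, the size of the dead cone is straightforward to quantify. The sketch below evaluates $\theta_{C}=m_{Q}/E_{Q}$ for a few energies; the quark masses are indicative values only, not tied to any particular scheme.
\begin{verbatim}
# dead-cone angle theta_c = m_Q / E_Q for charm and beauty quarks
for name, m in [("charm", 1.27), ("beauty", 4.18)]:   # masses in GeV
    for E in (10.0, 50.0, 100.0):                     # quark energy in GeV
        print(f"{name:6s} E = {E:5.1f} GeV -> theta_c = {m/E:.3f} rad")
\end{verbatim}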
{\bf Does jet quenching depend on the hard substructure?}
As described in Section~\ref{subsect:jettools}, the grooming procedure stops when the SD condition is met. The corresponding momentum fraction $z_{12}$ and angular separation $\Delta R_{12}$ are called the groomed momentum balance and the groomed jet radius and are denoted by $z_{g}$ and $R_{g}$, respectively.
In vacuum, $z_{g}$ is connected to the Altarelli-Parisi splitting function and displays
a universal $1/z$ behavior~\cite{Larkoski:2017bvj}. In \mbox{Pb$+$Pb}\xspace\ collisions,
the interpretation of the observable is more difficult, among other reasons because medium-induced radiation is expected to violate angular ordering~\cite{Mehtar-Tani:2011hma}
while the CA reclustering forces angular ordering on the jet constituents.
Several different mechanisms can contribute to the modification of $z_{g}$
and $R_{g}$ in heavy ion collisions. If medium-induced radiation is hard enough, it can increase the number of prongs that pass the SD cut. On the other hand, jet prongs and constituents lose energy in the medium, which can
reduce the number of subjet prongs passing the SD cut, $n_{SD}$.
In addition, the amount of jet energy loss is dictated by color coherence: jets with a resolved substructure will lose more energy because they contain more prongs that interact with the medium incoherently.
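To fix the notation, the sketch below applies the SD condition $z>z_{\rm cut}(\Delta R/R_{0})^{\beta}$ to a declustering sequence along the harder branch. In a real analysis the sequence would come from a jet tool such as fastjet; the numbers used here are invented.
\begin{verbatim}
def soft_drop(splittings, zcut=0.1, beta=0.0, R0=0.4):
    """Given the CA declustering sequence of the harder branch as
    (z, DeltaR) pairs, return (z_g, R_g) of the first splitting that
    passes the SD condition, and n_SD, the number of splittings along
    the branch that pass it (the Les Houches multiplicity)."""
    zg, rg, n_sd = None, None, 0
    for z, dR in splittings:
        if z > zcut * (dR / R0) ** beta:
            n_sd += 1
            if zg is None:       # grooming stops here: defines z_g, R_g
                zg, rg = z, dR
    return zg, rg, n_sd

# hypothetical sequence, ordered from wide to narrow angles
print(soft_drop([(0.02, 0.35), (0.25, 0.12), (0.40, 0.03)]))
# -> (0.25, 0.12, 2): the soft wide-angle splitting is groomed away
\end{verbatim}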
The $z_{g}$ distribution in heavy-ion collisions
was first measured by CMS \cite{Sirunyan:2017bsd}, then by STAR~\cite{Adam:2015doa} and ALICE~\cite{Acharya:2019djg}.
The CMS and ALICE measurements are shown in Figure~\ref{fig:zg01}.
\begin{figure}[ht!]
\centering
\includegraphics[width=0.8\linewidth]{plots/CMS-HIN-16-006_Figure_004.pdf}
\includegraphics[width=0.8\linewidth]{plots/Overlayed4x2-100601.pdf}
\caption{Upper plot: CMS self-normalized results for the momentum balance $z_{g}$ in different jet momentum bins.
Figure is from Ref.~\cite{Sirunyan:2017bsd}. Lower plot: ALICE $z_{g}$ results for jets in a fixed momentum interval of $80<p_{T,jet}^{ch}<120$ GeV and as a function of the groomed splitting angle $R_{g}$ ($\Delta R^{rec}$).
Figure is from Ref.~\cite{Acharya:2019djg}.}
\label{fig:zg01}
\end{figure}
The main feature of the ALICE track-based measurement was a suppression of the $z_{g}$ distribution with increasing $R_{g}$ and a hint of an enhancement at small angles. CMS did not perform a scan on the splitting opening angle (the default value is $R_{g}>0.1$) but did examine the jet $p_{T}$ dependence of the modification.
Neither of these measurements was fully corrected to particle level. Rather, the \pp\ reference was modified to account for the strong combinatorial background at the level of subjet prongs, which dominates the low-$z_{g}$ region. ALICE also reported the measurement of the Les Houches multiplicity $n_{SD}$, which gives the number of prongs within the jet that pass the SD cut. The $n_{SD}$ distribution is shifted to lower values in \mbox{Pb$+$Pb}\xspace\ relative to the vacuum calculation, as expected if energy loss of the prongs reduces the number of subleading prongs passing the SD cut.
The next generation of groomed observables by ALICE was fully corrected and, for the sake of unfolding stability, measured with a different selection of smaller jet $R$ and tighter SD grooming cuts ($z_{\rm cut}=0.2$ and $z_{\rm cut}=0.4$) \cite{ALICE:2021obz}. The results are shown in Figure \ref{fig:zg02}. A similar message is distilled: small-angle splittings are enhanced while large-angle splittings are suppressed, and the $z_{g}$, when integrated over all angles, shows no modification.
The data were compared to a set of models, including JetMed (denoted as Caucal et al.), the Hybrid model (denoted as Pablos et al.) and JETSCAPE~\cite{Kauder:2018cdt}.
The narrowing of $\theta_{g}$ is observed in these three different models, which might seem surprising given the different nature of the implemented medium effects. It is therefore worth asking what the most relevant common feature of these models is; one answer is the dominance of vacuum physics at the early, high-energy stages of the shower~\cite{Du:2021pqa,Caucal:2018dla}. This brings in a key element for the interpretation: large $\theta_g$ biases toward more activity in the early vacuum shower. Since vacuum structures with more prongs lead to more quenched jets, the shape of $\theta_{g}$ is the consequence of a selection bias; high-$\theta_{g}$ jets are more quenched and migrate to lower jet $p_{T}$ bins. Other models in the plot, like the one denoted by Yuan et al., indicate that flavour-dependent energy loss can also play a role.
\begin{figure}[h!]
\centering
\includegraphics[width=0.5\linewidth]{plots/h_theta_g_0-10_R02_zcut02_Theory.pdf}
\caption{Normalized groomed jet radius $\theta_{g}$ in central collisions and small-R jets measured by ALICE~\cite{ALICE:2021obz},
compared to the same observable measured in \pp\ collisions and to state-of-the-art model and theory calculations.}
\label{fig:zg02}
\end{figure}
Another substructure observable of interest is the $N$-subjettiness, denoted by $\tau_{N}$, which quantifies the degree to which a jet has an $N$- (or fewer-) pronged substructure~\cite{Thaler:2010tr}.
The ratio $\tau_{2}/\tau_{1}$ is used to tag boosted hadronically-decaying objects such as W bosons and top quarks, which are typically two-pronged, as compared to QCD jets, which are mostly one-pronged. ALICE measured $\tau_{2}/\tau_{1}$~\cite{Acharya:2021ibn} using several declustering metrics, including exclusive $k_{T}$ and CA+SD. The results do not reveal a significant change in the prong structure of the jets relative to \mbox{\sc Pythia}\xspace, which describes the observable well in \pp\ collisions.
\begin{figure}[h]
\centering
\includegraphics[width=0.45\linewidth]{plots/fig_06b.pdf}
\includegraphics[width=0.45\linewidth]{plots/fig_07b.pdf}
\caption{Left: Fully corrected $k_{T}$ distance measured for $R=1$ trimmed jets in the range $251<p_{T,jet}<316$ GeV for different centrality classes. Right: Nuclear modification factor as function of the $k_{T}$ distance. Figures are
from Ref.~\cite{ATLAS:2019rmd}.}
\label{fig:ktscale}
\end{figure}
ATLAS performed the first fully corrected measurement of the $k_{T}$ distance of large-$R$ jets in heavy-ion collisions~\cite{ATLAS:2019rmd}.
First, \mbox{$R=0.2$}\ calorimeter jets were reconstructed via the usual
procedure. Then jets with $\mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$} > $~35~GeV
were used as constituents for anti-$k_{T}$
jets clustered with $R = 1$.
Their constituents were reclustered with the $k_{T}$ algorithm~\cite{Dokshitzer:1997in,Ellis:1993tq} and then the last clustering step was unwound to register the $k_{\rm{T}}$ scale or distance, defined as
\begin{equation}
\sqrt{d_{12}}=\min(p_{T,1},p_{T,2})\,\Delta R_{12},
\end{equation}
where the indices $1$ and $2$ refer to the two prongs that were clustered last.
Large $\sqrt{d_{12}}$ selects jets with distinct hard prongs separated at large angles.
If an $R=1$ jet consists of only a single sub-jet (SSJ), $\sqrt{d_{12}}$ is not defined.
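A one-line illustration of the splitting scale defined above, with hypothetical prong kinematics:
\begin{verbatim}
def sqrt_d12(pt1, pt2, dR12):
    """kT splitting scale of the final clustering step,
    sqrt(d12) = min(pT1, pT2) * DeltaR12, in GeV."""
    return min(pt1, pt2) * dR12

print(sqrt_d12(260.0, 45.0, 0.6))   # hard two-prong split -> 27 GeV
print(sqrt_d12(280.0, 2.0, 0.9))    # soft second prong    -> 1.8 GeV
\end{verbatim}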
Figure~\ref{fig:ktscale} shows the $k_{T}$ distance distribution for different centralities and indicates that the majority of the jets consist of a
single sub-jet. Two-prong configurations are suppressed by more than 2 orders of magnitude.
The plot on the right shows that the nuclear modification factor
is qualitatively different between those jets which have a single sub-jet
and those which have more than one. Those jets with a single sub-jet are suppressed
approximately 50\% less in central collisions than those jets which have multiple
sub-jets.
In parallel to the writing of this review, other observables are being explored. An example is the subjet energy fraction, which considers the fraction of energy carried by the leading subjet within the signal jet. Another example is the transverse momentum $k_{\rm{T}}$ of the splitting found with dynamical grooming \cite{Mehtar-Tani:2019rrk}, $k_{\rm{T,dyn}}$, which selects the hardest splitting within the CA-ordered jet tree.
All the discussed jet shape and jet substructure observables must be correlated to some degree, by construction. For illustration, in Fig.~\ref{fig:Correlations} we show the linear correlation coefficients for PYTHIA8 \cite{Sjostrand:2007gs} jets reconstructed with $R=0.4$ and \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ $>100$ GeV. We observe that the $k_{\rm{T}}$ distance is strongly correlated with the girth and with $k_{\rm{T,dyn}}$, and strongly anti-correlated with the leading subjet fraction. The $n_{\rm{SD}}$, which is a measure of the intrajet multiplicity, is naturally anti-correlated with the $p_{\rm{TD}}$, which is related to the dispersion in momentum of the jet constituents.
The $z_{g}$ measures a momentum balance while $R_{g}$ is an angle, and the two are not correlated. We also note the strong correlation of the girth with $R_{g}$, $k_{\rm{T,dyn}}$, and the $k_{\rm{T}}$ distance.
Finding a set of minimally correlated observables can be useful for performing systematic comparisons to models and calculations. An example of such a procedure in \pp\ collisions is the recent extraction of $\alpha_{S}$ using jet substructure in $t\bar{t}$ events by CMS, where $R_{g}$, $z_{g}$, $\epsilon$ and the jet multiplicity were identified as a set of minimally correlated variables among more than $30$ substructure observables \cite{CMS:2018ypj}.
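The mechanics of such a correlation study are straightforward; the sketch below computes a linear (Pearson) correlation matrix with numpy, using random placeholder values where a real study would use per-jet observables from a simulated sample such as the PYTHIA8 one above.
\begin{verbatim}
import numpy as np

# columns: per-jet observables, e.g. [girth, pTD, n_SD, z_g, R_g];
# random placeholders stand in for values from a simulated jet sample
obs = np.random.default_rng(3).random((1000, 5))
corr = np.corrcoef(obs, rowvar=False)   # 5 x 5 Pearson matrix
print(np.round(corr, 2))
\end{verbatim}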
The selection bias was discussed in the context of the $R_{g}$ but applies to most of the discussed observables. In order to mitigate this selection bias, and to increase the weight of quenched jets in the measured samples, different strategies are envisaged. An obvious one considers the substructure of jets recoiling from Z or $\gamma$ bosons. Other interesting approaches based on machine learning (ML) have been proposed~\cite{Du:2021pqa}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.7\linewidth]{plots/CorrelationMatrix_Full_Tagged.pdf}
\caption{Linear Correlation matrix of the different jet shapes and substructure observables discussed in this chapter.}
\label{fig:Correlations}
\end{figure}
\subsubsection{What happens to the energy lost from jets in the QGP?}
The main physics aim here is to understand the process by which
energy lost by the jet is incorporated into the QGP. There are
two reasons why this is important:
\begin{itemize}
\item this provides access to the hydrodynamization process
\item the energy from the medium response is correlated with the
jet and affects other observables which are used to quantify the strength
of energy loss~\cite{He:2018xjv,Pablos:2019ngg}.
\end{itemize}
Three kinds of observables have been used to search for this effect:
\begin{itemize}
\item cone size dependence of jet \RAA
\item correlations between jets and tracks
\item fragmentation functions and jet shapes
\end{itemize}
In order to capture the full dynamics of jet quenching, large-$R$ jets and access to their internal structure are desired. Cross sections and ratios of cross sections for different $R$ are IRC-safe observables that can be calculated analytically and pose strong constraints on the theory.
The heavy-ion underlying event creates combinatorial or fake jets that prevent unfolding, and only at very high jet $p_{\rm{T}}$ is the measurement of inclusive large-$R$ jets feasible. Below 100 GeV, the different collaborations have measured
jet cross sections and their ratios for different $R$ up to $R=0.5$~\cite{ATLAS:2012tjt,ALICE:2015mdb}.
In the energy range from a few hundred GeV up to 1 TeV, CMS has reported the first measurement of jet nuclear modification factors for jets with radii from $R=0.2$ up to $R=1$, for different centrality classes.
In central collisions, and up to $R=0.4$ (where there are still sufficient data points to observe a trend), the nuclear modification factor increases with jet $p_{\rm{T}}$, in agreement with the ATLAS result for $R=0.4$ jets~\cite{ATLAS:2018gwx}. Above 500~GeV, where a full scan of the $R$ dependence is possible, the data are consistent with no dependence of the \RAA\ on jet $R$. The comparison of the data to models and calculations reveals significant tensions in the simultaneous description of the nuclear modification factor and its $R$ dependence. In Fig.~\ref{fig:CMSR} an example of comparisons to analytical calculations is shown.
What is common to many of the models compared to the CMS data is the strong role of the medium response, which gives a larger contribution at large $R$. In models like the Hybrid model, the $R$ dependence of $R_{\rm{AA}}$ can be explained as the result of the balance between a stronger suppression for broader jets and the ability to include more
medium response inside the cone.
The high transverse momentum of the $R=1.0$ jets in the CMS measurement could limit the
effect of medium response. Measurements with a larger kinematic
range will allow for better discrimination between models.
Another measurement that emphasizes the role of the medium response at large $R$ is the jet mass \cite{ALICE:2017nij} for $R=0.4$ jets. No modifications in \mbox{Pb$+$Pb}\xspace\ collisions relative to \pPb\ collisions
were observed, possibly due to a balance of energy-loss and medium-response effects~\cite{KunnawalkamElayavalli:2017hxo}.
\begin{figure}[h!]
\centering
\includegraphics[width=0.9\textwidth]{plots/CMS_raa_ratio_1.pdf}
\caption{Ratio of the $R_{\rm{AA}}$ for different jet $R$ to the $R_{\rm{AA}}$ for $R=0.2$, in different jet $p_{\rm{T}}$ intervals, compared to several calculations. Figure is from Ref.~\cite{CMS:2021vui}.}
\label{fig:CMSR}
\end{figure}
In order to look for the medium response, measurements
of low-momentum tracks around jets in heavy-ion collisions have been
performed.
There has been interest in measuring fragmentation
functions as a function of \mbox{$p_{\mathrm{T}}^{\mathrm{part}}$}, the
transverse momentum of the particle in the jet. This
is motivated by the search for an
absolute scale in the modification of the fragmentation.
When the jet fragmentation functions are plotted as a function
of \mbox{$p_{\mathrm{T}}^{\mathrm{part}}$}, the low-\mbox{$p_{\mathrm{T}}^{\mathrm{part}}$}\ ($\mbox{$p_{\mathrm{T}}^{\mathrm{part}}$}\ <$~4~GeV) parts of these ratios are approximately equal
for the three jet \mbox{${p_T}$}\xspace\ selections.
The low-\mbox{$p_{\mathrm{T}}^{\mathrm{part}}$}\ excess is thought to be due to the response of the
medium to the passage of the jet. The approximate scaling and the extent in \mbox{$p_{\mathrm{T}}^{\mathrm{part}}$}\
of the excess would then be sensitive to some scale in the QGP associated with this
response.
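A schematic of the underlying observable, $D(p_{\rm T})=(1/N_{\rm jet})\,dn_{\rm ch}/dp_{\rm T}$, with invented track samples standing in for the measured ones:
\begin{verbatim}
import numpy as np

def d_pt(track_pts, n_jets, bins):
    """Per-jet charged-particle yield D(pT) = (1/N_jet) dn/dpT."""
    counts, edges = np.histogram(track_pts, bins=bins)
    return counts / (n_jets * np.diff(edges))

bins = np.array([1.0, 2.0, 4.0, 8.0, 16.0, 32.0])        # GeV
pbpb = np.random.default_rng(4).exponential(3.0, 5000) + 1.0
pp   = np.random.default_rng(5).exponential(2.5, 5000) + 1.0
ratio = d_pt(pbpb, 400, bins) / d_pt(pp, 500, bins)      # Pb+Pb / pp
print(np.round(ratio, 2))
\end{verbatim}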
\begin{figure}[h!]
\centering
\includegraphics[width=0.43\textwidth]{plots/FF_pt.pdf}
\includegraphics[width=0.55\textwidth]{plots/track_jetatlas_pt.pdf}
\caption{Left: Ratios of the fragmentation functions in
central \mbox{Pb$+$Pb}\xspace\ collisions to those in \pp\ collisions
for three different \mbox{$p_{\mathrm{T}}^{\mathrm{jet}}$}\ selections as a function of charged-particle \mbox{${p_T}$}\xspace.
Right: The same quantity as in the left plot, only also differential in the distance, $r$,
from the jet axis (different sets of points).
Figures are from Ref.~\cite{Aaboud:2018hpb} (left) and Ref.~\cite{Aad:2019igg}
(right).}
\label{fig:FF_PbPb}
\end{figure}
In order to study both the angular and longitudinal directions at once,
both CMS and ATLAS have measured two-dimensional fragmentation
functions~\cite{Khachatryan:2016erx,Khachatryan:2016tfj,Sirunyan:2018jqr,Aad:2019igg,Sirunyan:2021jty}.
In the longitudinal fragmentation functions we noted that the low-\mbox{$p_{\mathrm{T}}^{\mathrm{part}}$}\
excess was for particles below approximately 4 GeV. The two-dimensional
fragmentation functions in Refs.~\cite{Khachatryan:2016erx,Sirunyan:2018jqr,Aad:2019igg}
provide support for that approximate scale both in 2.76 and 5.02 TeV \mbox{Pb$+$Pb}\xspace\ collisions
at the LHC. Figure~\ref{fig:FF_PbPb} shows the ratio of the two-dimensional
fragmentation function in central \mbox{Pb$+$Pb}\xspace\ collisions to that in \pp\ collisions
as a function of \mbox{$p_{\mathrm{T}}^{\mathrm{part}}$}\ for different values of distance $r$ to the jet axis.
The magnitude of the modifications changes as a function of $r$, but the location
in \mbox{$p_{\mathrm{T}}^{\mathrm{part}}$}\ of the transition from suppression to enhancement is at approximately
4 GeV for all $r$ values.
This same 4~GeV scale is also seen in measurements of $Z$-hadron
correlations by ATLAS~\cite{ATLAS:2020wmg}.
\section{Introduction}
Observer-independence of the speed of light
postulated by Einstein's Theory of Special Relativity
implies that the speed of light for a given observer
is independent of frequency,
direction of propagation,
and polarization.
Conversely,
if Lorentz symmetry is broken,
any of these may no longer hold,
resulting in a vacuum
that effectively acts like an anisotropic, birefringent, and/or dispersive medium.
Theories of quantum gravity suggest
that Lorentz invariance may be violated at the Planck scale,
but effects must be strongly suppressed at attainable energies.
Photons propagating over astrophysical distances
enable some of the strongest tests
as tiny effects will accumulate during propagation of the photons.
The Standard-Model Extension\cite{sme,sme_electro_2009} (SME)
is an effective-field-theory approach
to describe low-energy effects of a more fundamental high-energy theory.
It allows a categorization of effects,
described in essence by a set of coefficients
that are characterized in part by the mass dimension $d$
of the corresponding operator.
The SME photon dispersion relation can be written as
\begin{equation}
E = \left(1 - \varsigma^0 \pm \sqrt{(\varsigma^1)^2 + (\varsigma^2)^2 +
(\varsigma^3)^2}\right) p,
\end{equation}
where the $\varsigma^i$ can be written
as an expansion in $d$ and spherical harmonics.
This results in $(d-1)^2$ independent coefficients $k_{(V)jm}^{(d)}$ for odd $d$
describing anisotropy, dispersion, and birefringence;
for even $d$,
$2(d-1)^2-8$ independent birefringent coefficients
$k_{(E)jm}^{(d)}$ and $k_{(B)jm}^{(d)}$
as well as $(d-1)^2$ nonbirefringent coefficients
$c_{(I)jm}^{(d)}$ emerge.
Effects of mass dimension $d$
are typically assumed to be suppressed by $M_\text{Planck}^{d-4}$.
The individual coefficients
$k_{(E)jm}^{(4)}$, $k_{(B)jm}^{(4)}$, $k_{(V)jm}^{(5)}$, and $c_{(I)jm}^{(6)}$
have been constrained
using astrophysical polarization\cite{kislat_krawczynski_2017,kislat_2018}
and time-of-flight measurements,\cite{kislat_krawczynski_2015}
and the $k_{(V)jm}^{(3)}$ have been constrained
using CMB measurements.\cite{cmb}
At $d > 6$,
linear combinations of coefficients have been constrained.\cite{datatables}
Here,
we will review some recent results
obtained from polarization measurements of astrophysical objects.
\section{Birefringence}
The polarization of an electromagnetic wave
can be described by the Stokes vector $\bvec{s} = (q, u, v)^T$,
where $q$ and $u$ describe linear polarization
and $v$ describes circular polarization.
Vacuum birefringence results in a change of the Stokes vector
during the propagation of a photon given by\cite{sme_electro_2009}
\begin{equation}\label{eq:dsdt}
\frac{d\bvec{s}}{dt} = 2E\bvec{\varsigma} \times \bvec{s},
\end{equation}
which represents a rotation of $\bvec{s}$ along a cone
around the birefringence axis
$\bvec{\varsigma} = (\varsigma^1, \varsigma^2, \varsigma^3)^T$.
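A compact numerical check of this rotation; units are chosen so that $2E|\bvec{\varsigma}|T$ is the accumulated rotation angle, and the coefficient values are arbitrary illustrations rather than physical inputs.
\begin{verbatim}
import numpy as np

def propagate_stokes(s, sigma, E, T, steps=20000):
    """Euler integration of ds/dt = 2 E (sigma x s)."""
    dt = T / steps
    for _ in range(steps):
        s = s + 2.0 * E * np.cross(sigma, s) * dt
        s = s / np.linalg.norm(s)   # remove Euler-step norm drift
    return s

s0 = np.array([1.0, 0.0, 0.0])       # purely linear polarization, q = 1
ax = np.array([0.0, 0.0, 1.0e-3])    # odd-d case: axis along v
print(propagate_stokes(s0, ax, E=1.0, T=500.0))
# -> approximately (cos 1, sin 1, 0): the PA rotates, v stays zero
\end{verbatim}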
Most astrophysical emission mechanisms
do not result in a strong circular polarization,
but linear polarization can be fairly large
reaching multiples of ten percent.
In general,
Eq.~\eqref{eq:dsdt} results in a change of the linear polarization angle (PA)
as the photon propagates,
as well as linear polarization partially turning into circular polarization.
However,
SME operators of odd and even $d$
result in distinctly different signatures.
If only odd-$d$ coefficients are nonzero,
$\varsigma^1 = \varsigma^2 = 0$
and $\bvec{\varsigma}$ is oriented along the circular-polarization axis $\bvec{v}$.
In this case,
linear polarization remains linear,
but the PA makes full \ang{180} rotations
as $\bvec{s}$ will rotate in the $q$--$u$ plane.
Any circular polarization would remain unaffected.
If only even-$d$ coefficients are nonzero,
$\varsigma^3 = 0$ and
$\bvec\varsigma$ is in the $q$--$u$ plane.
Then,
a Stokes vector with an initial $v = 0$
will in general rotate out of this plane
acquiring a circular polarization,
while at the same time the PA will undergo swings
with a magnitude depending on
the angle between $\bvec{s}$ and $\bvec\varsigma$.
However,
in this case linear polarization
with a polarization angle of
$\text{PA}_0 = \frac{1}{2}\arctan\left(-\varsigma^2/\varsigma^1\right)$
will remain unaffected.
For all $d \geq 4$,
Eq.~\eqref{eq:dsdt} results in an energy-dependent rate of change of $\bvec{s}$.
Hence,
birefringent Lorentz-violating effects
can be constrained by measuring the
(linear) polarization of astrophysical objects as a function of energy.
Spectropolarimetric observations are ideally suited for this purpose
as the change of PA with energy is measured directly
resulting in the strongest constraints.
Constraints can also be derived from broadband measurements
where polarization is integrated
over the bandwidth of the instrument.\cite{kostelecky_mewes_2013}
In this case,
the rotation of the PA
results in a net reduction of the observed polarization fraction (PF).
Given a set of SME coefficients,
the maximum observable PF,
$\text{PF}_\text{max}$,
can be calculated
assuming the emission at the source is \SI{100}{\percent} polarized.
Any observed $\text{PF} > \text{PF}_\text{max}$
would then rule out this set of SME coefficients.
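The following toy calculation illustrates the band-integration effect for a \SI{100}{\percent} linearly polarized source whose PA rotates linearly across the band; the rotation rates and the band are arbitrary choices, not derived from actual SME coefficients.
\begin{verbatim}
import numpy as np

def pf_max(E, rate, E0):
    """Band-averaged polarization fraction for PA(E) = rate*(E - E0),
    assuming a fully linearly polarized source."""
    pa = rate * (E - E0)
    return np.hypot(np.cos(2 * pa).mean(), np.sin(2 * pa).mean())

E = np.linspace(2.0, 8.0, 400)       # an x-ray-like band in keV
for rate in (0.0, 0.2, 0.5):         # assumed PA rotation in rad/keV
    print(rate, round(pf_max(E, rate, E0=2.0), 3))
# -> 1.0, then ~0.78, then ~0.05: faster rotation washes out the PF
\end{verbatim}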
Constraints obtained in this way
tend to be significantly weaker than those obtained from spectropolarimetry.
Observations of a single source
can be used to constrain linear combinations of SME coefficients.
Typically,
limits are obtained by restricting the analysis to a particular mass dimension $d$.
In the even-$d$ case,
additional assumptions must be made
due to the existence of the linearly polarized eigenmode of propagation.
Observations of multiple objects
can be combined to break all of these degeneracies
and to obtain constraints on individual coefficients,
even in the even-$d$ case.
The strongest individual constraints
result from hard x-ray polarization measurements of gamma-ray bursts (GRBs)
profiting from the high photon energies and high redshifts.
Typical constraints derived from these measurements
are on the order of \SI{1e-34}{\per\giga\electronvolt} or better
in the $d=5$ case.\cite{kostelecky_mewes_2013,grbs}
However,
the number of bursts whose polarization has been measured
is limited at this point
and in particular the early results suffer from large systematic uncertainties.
At least for $d < 6$,
very strong constraints can also be obtained
from optical polarization measurements of distant objects,
such as active galactic nuclei and GRB afterglows.
Systematic uncertainties of these measurements are smaller,
and a much larger number of results is already available.
In two recent papers,\cite{kislat_krawczynski_2017,kislat_2018}
we have combined more than 60 optical polarization measurements,
both spectropolarimetric and broadband integrated measurements,
to constrain individually all birefringent coefficients
of mass dimensions $d=4$ and $d=5$.
Constraints on the dimensionless coefficients
$k_{(E)jm}^{(4)}$ and $k_{(B)jm}^{(4)}$
are \num{<3e-34},
and the $k_{(V)jm}^{(5)}$ are constrained to \SI{<7e-26}{\per\giga\electronvolt}.
\section{Summary and outlook}
Astrophysical polarization measurements are an extremely powerful tool
to constrain Lorentz-invariance violation in the photon sector
due to the extremely long baselines.
In the future,
new instruments with significantly improved high-energy polarization sensitivity
will become available.\cite{new_missions}
The IXPE satellite scheduled for launch in early 2021
will measure x-ray polarization
in the \mbox{\SIrange{2}{8}{\kilo\electronvolt}} range.
Compton telescopes,
such as the balloon-borne COSI or the proposed AMEGO mission concept,
are sensitive to polarization in the
\mbox{\SI{500}{\kilo\electronvolt} -- \SI{5}{\mega\electronvolt}} energy range.
A slightly lower energy range is covered by the proposed GRB polarimeter LEAP.
AdEPT is a concept for a gamma-ray pair-production telescope
that would allow
measuring the polarization of gamma-rays of tens of \si{\mega\electronvolt}.
The ability to measure polarization at high energies
is crucial in particular for constraining coefficients of $d \geq 6$.
\section{Introduction}
The $\sim$5800\,K temperature of the solar photosphere naturally decreases outwards. Beyond a height of $\sim$500\,km, i.e., in the chromosphere, the beginning of a mysterious behaviour appears. The atmospheric temperature, instead of decreasing, starts rising again, up to 1-2\,MK, and this all of a sudden within only $\sim$100\,km. How this temperature behaviour can happen steadily, and all over the entire Sun, is dubbed the solar corona heating problem, {\it one of the most perplexing and unresolved problems in astrophysics to date} \cite{Ant10}. The phenomenology of the solar atmosphere makes this mystery even more enigmatic. For example, the unnaturally high-temperature upper solar atmosphere becomes even hotter above non-flaring magnetized locations, like the puzzling dark sunspots, reaching about 5-10\,MK \cite{Ziou09}, while the underlying surface of the sunspots gets relatively cooler, reaching occasionally $\sim$3000\,K, instead of the ambient $\sim$5800\,K. The temperature difference between two neighbouring solar layers, i.e., that of the photosphere and that of the corona, with the chromosphere sandwiched in between, widens. How can this additional and intriguing behaviour of the magnetized Sun fit conventional thinking? Obviously, the solar magnetic field, is the ingredient adding \textit{somehow} to the solar corona mystery. This finding is a second fingerprint of the corona's mystery, with the first being the formation of the surprisingly strong temperature inversion across the so-called transition region. Furthermore, following conventional reasoning, we still do not know how magnetic energy is converted into thermal energy of the corona \cite{War10}.
The solar corona mystery is not an isolated one, but rather ubiquitous throughout the solar-type stars in the Universe. Astonishingly, the Sun's radiation spectrum deviates strongly from that of a black body, and this reflects the whole mystery. For comparison, an almost perfect black body spectrum is exhibited by the CMB radiation of the infant Universe (3000\,K). Therefore, the question arises as to why the Sun behaves only partly as a perfect black body and how it manages to keep its tiny outer atmosphere, packed so close to its surface, at such an unnaturally high temperature. Note that the Sun is permeated spatiotemporally with unpredictably varying magnetic fields, which is the cause of many puzzling solar phenomena, while the early Universe had vanishingly small magnetic fields. This difference is essential from the axion point of view, since the axion-photon oscillation probability, like most solar phenomena, shows a striking B$^2$-dependence. One should bear in mind that no stellar theory expects a Sun-like star to emit any measurable quantity of X-rays, as we have witnessed for decades with the Sun.
To the best of our knowledge, in recent times, no other solar problem has defied explanation for so long; compare, e.g., the solution of the solar neutrino deficit problem. It is logical to conclude that the mysterious coronal behaviour must be the manifestation of hidden new (solar) physics. Other solar phenomena associated with the mysterious 11-year clock, like flares, coronal mass ejections, sunspots, spicules, etc., also raise serious questions about their (not much less) mysterious origin, thus further suggesting a (common?) exotic solution.
\section{Solar/Stellar axion manifestation}
How can the solar behaviour be related to exotica like axions? The production of axions inside the Sun's core was widely accepted soon after their theoretical invention, constraining also their coupling strength to matter following the non-observation of additional star ageing effects \cite{Ziou09}. An extra energy escape from the hot stellar core into space would have made stars appear older than they actually are. In fact, stellar evolution arguments constrain the allowed escaping solar energy into axions to the \textperthousand-level. Nevertheless, this is still a quite large percentage compared to the solar observations under consideration as being due to or triggered by new exotica. For example, the unexpected quiet Sun X-ray emission makes up only $\sim\!10^{-7}$ of the Sun's total energy output, while present X-ray missions detect solar fluxes at the level of $\sim\!10^{-14}$. This demonstrates the enormous potential solar observations have to unravel new physics, with the axion scenario being the most inspiring one, since (most) puzzling solar phenomena correlate with the magnetic field. Therefore, axion involvement in stars can have far-reaching consequences, even if it causes only faint emission of radiation, since this leaves no signs of premature ageing.
Encouraged by the Sun's groundbreaking impact in the past in nuclear and astroparticle physics, it was natural to be attracted by the Sun's puzzling and inspiring behaviour, which depicts axion involvement. While we refer only to solar axions, the cause of the multifaceted and unpredictable Sun is not necessarily due to one single process by one single particle's involvement, though in certain cases exotic scenarios remain the only choice. Observations at extreme conditions, like the non-flaring quiet Sun during solar minimum or an isolated active/flaring solar region, could favour the showing-up of one exotic component against other(s), if any.
We refer throughout this work to axions, but we consider them as being representative of any other exotica dubbed WISPs (Weakly Interacting Slim Particles), which can couple similarly to ordinary matter, e.g., intriguing scalar particles like the chameleons, which are potential candidates for the cosmic dark energy. While the QCD-inspired axion implies a particle with one rest mass and one coupling constant, WISPs do not have to follow this constraint, e.g. massive solar axions of the Kaluza--Klein type \cite{DiL00}.
\section{Solar axion signatures}
For the last 15 years, the search for axions in solar X-rays has mostly been oriented towards a very light axion (rest mass$\,\ll\!10^{-4}$\,eV/c$^2$) \cite{Ziou09}. But, if its mass is (far) above this range, any search should fail, and this is the case so far. Therefore, this work addresses a much higher axion rest mass range ($\sim$20\,meV/c$^2$), albeit not intentionally but being observationally driven: if axions play a certain role in the Sun's workings, then some strange phenomena should show up, at least occasionally, i.e., with known physics being unable to provide an explanation. In fact, this is what happens so strikingly, since the Sun is full of mysteries.
We focus here on derived atypical solar axion signatures related to magnetic fields. For example, for the solar corona mystery, massive solar axions of the Kaluza--Klein type have been suggested as the potential source of the steady solar X-ray emission component \cite{DiL00}. The very thin solar corona is almost as hot ($\sim$1-10\,MK) as the hot solar core ($\sim$16\,MK), and therefore it requires an energy input, which has been elusive and has kept the corona mystery alive for several decades, even though there is no lack of proposed models. Note that the corona density changes dynamically, e.g., by a factor of up to $\sim$10-100 \cite{Asc04}. Then, within the axion scenario, the observed solar corona reflects the balance between the inwards-directed radiation pressure from the spontaneous decay of massive axions of the Kaluza--Klein type, or of any other massive, radiatively-decaying WISPs, and the outwards-directed radiation pressure exerted by out-streaming axions that are magnetically converted to X-rays. To show the dynamical character of the Sun, it is worth mentioning, for example, the mysterious solar spicules, which cover about 1\% of the Sun's surface; their plasma density reaches values of $\sim\! 10^{11}$/cm$^3$, which are also of potential interest for axion-photon oscillations.
Figure \ref{figure1} shows the directly measured ``excess'' X-rays from the quietest to the flaring Sun (see Figure 10 in \cite{Ziou09}). Here we update the first intensity calculations presented in \cite{Ziou09}. Thus, a magnetic-field-related X-ray emission, be it transient be it steady, can be, in principle, axion in origin. The maximum conversion rate of out-streaming axions near the magnetized solar surface was estimated to be $P_{{\rm a}\to\gamma}\!\approx\!10^{-12}$ (see section 3 in \cite{Ziou09}). For comparison, assuming even that the entire quiet Sun soft X-ray luminosity measured recently by the SPhinX detector (L$_x\!\approx\!10^{21}$\,erg/s) is due to converted axions, this requires an even smaller conversion rate ($10^{-13}$). Furthermore, since only $\sim$1\% of the complex magnetism of the quiet Sun is seen \cite{Bue04}, this leaves room for much larger conversion efficiencies ($\sim10^{-8}$), i.e. an X-ray brightness of 10$^3$\,erg/cm$^2$/sec can still be axion related, which is not small. In addition, the fading solar magnetic field during the 2009 solar minimum was correlated with a 100 times weaker soft X-ray emission than during the previous solar minimum measured by the SPhinX mission \cite{Syl10}. But, the only $\sim$25\% decrease of the solar magnetic field cannot justify a 100-fold X-ray luminosity decrease, following a B$^2$-dependence. But if, for example, the conversion occurs deeper inside the photosphere, some X-rays are absorbed, or, in any case they become more red-shifted and might evade observation. These measurements show that the calculated maximum axion conversion in \cite{Ziou09} was (very) conservative. Moreover, there is room for still larger conversion, which could account also for larger X-ray surface brightness from flares: the rarity of such events may eventually reflect the not so easily achievable `fine tuning' of magnetic field and plasma density. While the aim of this work is not to explain all solar X-ray phenomena exclusively by axions, this might be the case to a larger extent than anticipated so far (given the mentioned uncertainties).
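For orientation, the order of magnitude of the quoted conversion rates can be reproduced with the textbook coherent axion--photon conversion probability, $P_{{\rm a}\to\gamma}=(g_{{\rm a}\gamma}BL/2)^{2}\,\mathrm{sinc}^{2}(qL/2)$ with $q=|m_{\rm a}^{2}-\omega_{\rm pl}^{2}|/2E$. All parameter values in the sketch below are illustrative assumptions, with the plasma frequency tuned to the axion mass (the resonant case):
\begin{verbatim}
import numpy as np

def p_conv(g_GeVinv, B_T, L_m, m_a_eV, w_pl_eV, E_eV):
    """Coherent axion-photon conversion probability in natural units;
    conversions used: 1 T = 195.35 eV^2, 1 m = 5.068e6 eV^-1."""
    g = g_GeVinv * 1e-9               # eV^-1
    B = B_T * 195.35                  # eV^2
    L = L_m * 5.068e6                 # eV^-1
    q = abs(m_a_eV**2 - w_pl_eV**2) / (2.0 * E_eV)
    return (g * B * L / 2.0)**2 * np.sinc(q * L / (2.0 * np.pi))**2

# illustrative only: g = 1e-10 GeV^-1, B = 0.1 T over 300 km,
# m_a = 17 meV, omega_pl = m_a (resonance), E = 4 keV
print(p_conv(1e-10, 0.1, 3e5, 0.017, 0.017, 4000.0))   # ~2e-12
\end{verbatim}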
In addition, Figure \ref{figure2} shows the B$^2$ dependence of the deficit IR emission above the magnetized sunspots. If this is due to the disappearance of photons into axions, it implies also an overlooked strong solar axion source at low energies with far reaching implications in solar axion research. Finally, Figure \ref{figure3} explains how one may make visible new signatures, hidden in the solar irradiance spectrum, using the normalised residuals from a pure black body distribution.
\begin{figure}[b]
\centerline{\includegraphics[width=0.85\textwidth]{./Zioutas.Konstantin.fig1.eps}}
\caption{Reproduced spectra from directly measured solar X-rays from: a) the main phase of a large flare with T$\sim$20\,MK \cite{Bat09} (green dashed), b) a flare with RESIK and RHESSI (red dots), c) preflaring periods after having subtracted the main X-ray flare component from the original spectra \cite{Bat09} (purple histogram), d) non-flaring active regions with T$\approx$6\,MK, i.e. sunspots (blue dots), e) non-flaring quiet Sun, with T$\approx2.7\,$MK, at solar minimum with SPhinX (blue dashed line) \cite{Syl10}; this is also supported by the recent findings that in the Quiet Sun regions stronger magnetic fields occur in deeper layers than in the ARs \cite{Ziou09}, implying more down-comptonization and giving rise to a larger slope. The initial broad solar axion spectrum is also shown shadowed (pink dashed line). Two GEANT4 simulated spectra following multiple Compton scattering from a depth of $\sim$350\,km and $\sim$400\,km are also shown for comparison (thin histograms), where the estimated plasma frequency, i.e., also the axion rest mass, is $\sim$17\,meV/c$^2$. The uncertainty is a factor of $\sim$2, since the density changes by factor of $\sim$4 between the $\sim$300\,km and $\sim$1000\,km depth. Note the strong deviations of the indirectly derived spectra (grey lines) in the past \cite{per00} of the non-flaring quiet Sun at solar minimum and that of the active Sun at solar maximum below $\sim1\,$keV (SPhinX measurements). The spectra are not to scale.}
\label{figure1}
\end{figure}
\begin{figure}[hbt*]
\centerline{\includegraphics[width=0.5\textwidth]{./Zioutas.Konstantin.fig2.eps}}
\caption{The observed infrared (IR) intensity in the darkest position of sunspot cores (i.e., umbrae) is plotted vs. \textbf{B}$^2$, as derived from a total of 1392 sunspots \cite{Liv09,LivPP}. The measurements were performed from 1992 to 2009. The red line shows the \textbf{B}$^2$-dependence as a guide; it is not a fit to the IR intensity loss data. For example, an intensity loss of 0.4 means that the number of IR photons is reduced by 40\% (with zero being the reference quiet Sun value) \cite{LivPP}. (Courtesy W. Livingston, NOAO/NSO, Tucson, Arizona.)}
\label{figure2}
\end{figure}
\begin{figure}[hbt*]
\begin{minipage}{0.49\textwidth}
\centerline{\includegraphics[width=0.99\textwidth]
{./Zioutas.Konstantin.fig3a.eps}}
\end{minipage}
\begin{minipage}{0.49\textwidth}
\centerline{\includegraphics[width=0.99\textwidth]
{./Zioutas.Konstantin.fig3b.eps}}
\end{minipage}
\caption{(\textbf{Left}) The approximated solar radiation spectrum during solar minimum (red line) and the black body spectrum for T=5800\,K (blue line). (\textbf{Right}) The normalized relative difference between the two spectra can be used to unravel non-thermal contributions, whatever their origin. The residuals below $\sim$100\,nm correspond to the hot corona excess. The residuals at $\sim$10$^6$\,nm might be a contamination by the CMB radiation, though the peak appears too broad towards shorter wavelengths to be explained exclusively by the CMB (either direct or reflected from the Sun). The excess around $\sim$2000--3000\,nm is real, but its origin is not yet identified. (Courtesy Marlene DiMarco/ UCAR Office of Education and Outreach/2009.)}
\label{figure3}
\end{figure}
\section{Conclusions}
We present observational evidence in favour of the solar axion scenario. Both massive and light axions are required in order to explain the celebrated solar coronal heating mystery and the unexpected (transient) X-ray activity. The suggested axion scenario does not exclude the involvement of other WISPs, or a synergy with conventional phenomena; for example, the solar chameleon is a potential candidate. This work is observationally driven. The accumulating axion signatures, when considered coherently together, increase their combined significance in favour of solar axions being at the origin of often mysterious solar behaviour, although each finding reflects an axion signature in its own right. Above all, we keep in mind that such a large amount of X-rays is in any case not expected to be emitted by a cool star like the Sun, and this is what triggered this work.
\section{Introduction}
We consider the complexity of three related and fundamental problems: computing the convolution of two vectors, multiplying two integers, and computing the Hamming distance between two strings. We study these problems in an online or streaming context and provide matching upper and lower bounds in the cell-probe model.
Lower bounds in the cell-probe model also hold for the popular word-RAM model, in which many of today's algorithms are described.
The importance of these problems is hard to overstate. The integer multiplication and convolution problems have played a central role in modern algorithms design and theory. The question of how to compute the Hamming distance efficiently has a rich literature, spanning many of the most important fields in computer science. Within the theory community, communication complexity based lower bounds and streaming model upper bounds for the Hamming distance problem have been the subject of particularly intense study~\cite{CDIM:03,Woodruff:04,HSZZ:06,JKS:08,BCRT:10,Chakrabarti:11}. This previous work has however almost exclusively focussed on providing resource bounds either in terms of space or bits of communication rather than time complexity.
We begin by introducing the problems and stating our results. In the following problem definitions and throughout, we write $[q]$ to denote the set $\{0,\dots,q-1\}$, where $q$ is a positive integer and a parameter of the problem.
\begin{problem}[\textbf{Online convolution}]
For a fixed vector $F\in[q]^n$ of length $n$, we consider a stream in which numbers from $[q]$ arrive one at a time. For each arriving number, before the next number arrives, we output the inner product (modulo~$q$) of $F$ and the vector that consists of the most recent $n$ numbers of the stream.
\end{problem}
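To make the setting concrete, the following is a minimal Python sketch of the problem itself; it is a naive baseline taking $O(n)$ time per arrival, not an algorithm related to any bound in this paper, and it assumes for illustration that the stream is initially all zeros.
\begin{verbatim}
from collections import deque

def online_convolution(F, q):
    """Fixed F in [q]^n; returns an update function that, for each
    arriving x in [q], outputs the inner product (mod q) of F with
    the most recent n stream values."""
    n = len(F)
    window = deque([0] * n, maxlen=n)  # assumed all-zero initial stream
    def update(x):
        window.append(x)               # evict the oldest value, append x
        return sum(f * s for f, s in zip(F, window)) % q
    return update

# Example: upd = online_convolution([1, 0, 2], q=5); upd(3) returns 1.
\end{verbatim}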
\begin{thm}[\textbf{Online convolution}]
\label{thm:conv}
In the cell-probe model with $w$ bits per cell, for any positive integers~$q$ and~$n$, and any randomised algorithm solving the online convolution problem, there exist instances such that the expected amortised time per arriving value is $\Omega{\left(\frac{\delta}{w}\log n\right)}$, where $\delta=\lceil \log_{2}{q} \rceil$.
\end{thm}
\begin{problem}[\textbf{Online multiplication}]
Given two numbers $F,X\in [q^n]$, where $q$ is the base and $n$ is the number of digits per number, we want to output the $n$ least significant digits of the product of $F$ and $X$, in base $q$.
We must do this under the constraint that only $F$ is known in advance and the digits of $X$ arrive one at a time, starting from the lower-order end.
When the $i$-th digit of $X$ arrives, and before the $(i+1)$-th digit arrives, we output the $i$-th digit of the product.
\end{problem}
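Purely as an illustration of the problem statement, here is a naive Python sketch; the observation in the comment (that higher-order terms cannot affect the current output digit) is elementary and unrelated to the lower bound techniques of this paper.
\begin{verbatim}
def online_multiplication(F_digits, q):
    """F_digits[i] is the i-th least significant digit (base q) of the
    fixed operand F. After the t-th digit of X arrives, output the
    t-th digit (base q) of the product F*X."""
    X = []
    def update(x_t):
        X.append(x_t)
        t = len(X) - 1
        # Terms with i + j > t contribute multiples of q^(t+1) to the
        # product, so they cannot affect the t-th output digit.
        partial = sum(F_digits[i] * X[j] * q**(i + j)
                      for i in range(min(t, len(F_digits) - 1) + 1)
                      for j in range(t + 1 - i))
        return (partial // q**t) % q
    return update

# Example (q=10): with F = 123, i.e. F_digits = [3, 2, 1], feeding the
# digits of X = 456 gives outputs 8, 8, 0 -- the low digits of 56088.
\end{verbatim}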
\begin{thm}[\textbf{Online multiplication}]
\label{thm:mult}
In the cell-probe model with $w$ bits per cell, for any positive integers~$q$ and~$n$, and any randomised algorithm solving the online multiplication problem in base~$q$, there exist instances such that outputting the $n$ least significant digits of the product takes $\Omega{\left(\frac{\delta}{w}n\log n\right)}$ expected time, where $\delta=\lceil \log_2q \rceil$.
\end{thm}
\begin{problem}[\textbf{Online Hamming distance}]
For a fixed string $F$ of length $n$, we consider a stream in which symbols from the alphabet $[q]$ arrive one at a time. For each arriving symbol, before the next symbol arrives, we output the Hamming distance between $F$ and the last $n$ symbols of the stream.
\end{problem}
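Analogously to the convolution sketch above, a naive Python baseline for this problem (again assuming, for illustration only, an initially all-zero stream) reads as follows.
\begin{verbatim}
from collections import deque

def online_hamming(F, q):
    """Fixed string F over [q]; each arriving symbol yields the Hamming
    distance between F and the last n symbols of the stream."""
    n = len(F)
    window = deque([0] * n, maxlen=n)  # assumed all-zero initial stream
    def update(x):
        window.append(x)
        return sum(1 for f, s in zip(F, window) if f != s)
    return update
\end{verbatim}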
\begin{thm}[\textbf{Online Hamming distance}]
\label{thm:ham}
In the cell-probe model with $w$ bits per cell, for any positive integers~$q$ and~$n$, and any randomised algorithm solving the online Hamming distance problem, there exist instances such that the expected amortised time per arriving value is $\Omega\left(\frac{\delta}{w}\log n\right)$, where $\delta=\lceil \min\{\log_2q,\log_2n\} \rceil$.
\end{thm}
Our Hamming distance lower bound also implies a matching lower bound for any problem that Hamming distance can be reduced to. The most straightforward of these is online $L_1$ distance computation, where the task is to output the $L_1$ distance between a fixed vector of integers and the last $n$ numbers in the stream.
A suitable reduction was shown in~\cite{LP:2008}. The expected amortised cell probe complexity for the online $L_1$ distance problem is therefore also $\Omega{\left(\frac{\delta}{w}\log n\right)}$ per new output.
One of our main technical innovations is to extend recently developed methods designed to give lower bounds on dynamic data structures to the seemingly distinct field of online algorithms. Where $\delta = w$, for example, we have $\Omega(\log{n})$ lower bounds for all three problems. In particular for online multiplication and convolution, these lower bounds match the currently best known offline upper bounds in the RAM model. As we discuss in Section~\ref{sec:previous}, this may be the highest lower bound that can be formally proved for all the problems we consider without a further significant theoretical breakthrough.
In order to prove our lower bounds we show the existence of probability distributions on the inputs for which we can prove lower bounds on the expected running time of any deterministic algorithm. By Yao's minimax principle~\cite{Yao1977:Minimax} this immediately implies that for every (randomised) algorithm there is a worst-case input such that the (expected) running time is equally high. Therefore our lower bounds hold equally for randomised algorithms as for deterministic ones.
The lower bounds we give are also tight within the cell-probe model. This can be seen by application of reductions described in~\cite{FS:1973, CEPP:2011}. It was shown there that any offline algorithm for convolution~\cite{CEPP:2011} or multiplication~\cite{FS:1973} can be converted to an online one with at most an $O(\log{n})$ factor overhead. For details of these reductions we refer the reader to the original papers. In our case, the same approach also allows us to directly convert any cell-probe algorithm from an offline to online setting. An offline cell-probe algorithm for convolution, multiplication or Hamming distance could first read the whole input, then compute the answers and finally output them. This takes $O{(\frac{\delta}{w} n)}$ cell probes. We can therefore derive online cell-probe algorithms which take only $O{(\frac{\delta}{w}n\log n)}$ probes over $n$ inputs, hence $O{(\frac{\delta}{w}\log n)}$ (amortised) probes per output. This upper bound matches the new lower bounds we give.
We summarise this in the following corollary.
\begin{cor}\label{cor:final}
The expected amortised cell-probe complexity of the online convolution, multiplication, Hamming distance and $L_1$-distance problems is $\Theta(\frac{\delta}{w}\log n)$ per arriving value.
\end{cor}
One consequence of our results is the first strict separation between the complexity of exact and inexact pattern matching. Online exact matching can be solved in constant time~\cite{Galil:1981} per new input symbol and our new lower bound proves for the first time that this is not possible for Hamming distance.
Another consequence of our results is a new separation between the time complexity of online exact matching and any convolution-based online pattern matching algorithm. Convolution has played a particularly important role in the field of combinatorial pattern matching where many of the fastest algorithms rely crucially for their speed on the use of fast Fourier transforms (FFTs) to perform repeated convolutions. These methods have also been extended to allow searching for patterns in rapidly processed data streams~\cite{CEPP:2011,CS:2011}.
\subsection{Previous results and upper bounds in the RAM model} \label{sec:previous}
Almost all previous algorithmic work for exact Hamming distance computation has considered the problem in an offline setting. Given a pattern~$P$ and a text~$T$ of length $m$ and $n$ respectively, the best current deterministic upper bound for offline Hamming distance computation is an $O(n\sqrt{m\log{m}})$ time algorithm based on convolutions~\cite{Abrahamson:1987, Kosaraju:1987}. In \cite{Karloff:1993} a randomised algorithm was given that approximates the Hamming distances within a factor of $(1+\epsilon)$ in $O((n/{\epsilon}^2)\log^2{n})$ time; this was subsequently improved in~\cite{Indyk:1998} to $O((n/{{\epsilon}^3}) \log{n})$ time. Particular interest has also been paid to a bounded version of this problem called the $k$-mismatch problem. Here a bound $k$ is given and we need only report the Hamming distance if it is less than or equal to $k$. In \cite{LV:1986a}, an $O(nk)$ algorithm was given that is not convolution based and uses $O(1)$ time lowest common ancestor (LCA) operations on the suffix tree of $P$ and $T$. This was then improved to $O(n\sqrt{k\log{k}})$ time by a method that combines LCA queries, filtering and convolutions~\cite{ALP:2004}.
The best time complexity lower bounds for online multiplication of two $n$-bit numbers were given in 1974 by Paterson, Fischer and Meyer. They presented an $\Omega(\log{n})$ lower bound for multitape Turing machines~\cite{PFM:1974} and also gave an $\Omega(\log{n}/\log{\log n})$ lower bound for the \emph{bounded activity machine} (BAM). The BAM, which is a strict generalisation of the Turing machine model but which has nonetheless largely fallen out of favour, attempts to capture the idea that future states can only depend on a limited part of the current configuration. To the authors' knowledge, there has been no progress on cell-probe lower bounds for online multiplication, convolution or Hamming distance previous to the work we present here.
There have however been attempts to provide offline lower bounds for the related problem of computing the FFT. In~\cite{Morgenstern:1973} Morgenstern gave an $\Omega(n \log{n})$ lower bound conditional on the assumption that the underlying field of the transform is the complex numbers and that the modulus of any complex numbers involved in the computation is at most one. Papadimitriou gave the same $\Omega(n \log{n})$ lower bound for FFTs of length a power of two, this time excluding certain classes of algorithms including those that rely on linear mathematical relations among the roots of unity~\cite{Papadimitriou:1979}. This work had the advantage of giving a conditional lower bound for FFTs over more general algebras than was previously possible, including for example finite fields. In 1986, Pan~\cite{Pan:1986} showed that another class of algorithms having a so-called synchronous structure must require $\Omega(n \log{n})$ time for the computation of both the FFT and convolution.
The fastest known algorithms for both offline integer multiplication and convolution in the word-RAM model require $O(n\log{n})$ time by a well known application of a constant number of FFTs. As a consequence our online lower bounds for these two problems match the best known time upper bounds for the offline problem. As we discussed above, our lower bounds for all three problems are also tight within the cell-probe model for the online problems.
The question now naturally arises as to whether one can find higher lower bounds in the RAM model. This appears as an interesting question as there remains a gap between the best known time upper bounds provided by existing algorithms and the lower bounds that we give within the cell-probe model. However, as we mention above, any offline algorithm for convolution, Hamming distance or multiplication can be converted to an online one with at most an $O(\log{n})$ factor overhead~\cite{FS:1973,CEPP:2011}.
As a consequence, a higher lower bound than $\Omega(\log{n})$ for any of these problems would immediately imply a superlinear lower bound for the offline version of the corresponding problem. This would be a truly remarkable breakthrough in the field of computational complexity as no such offline lower bound is known even for the canonical NP-complete problem SAT.
Our only alternative route to find tight time bounds would be to find better upper bounds for the online problems. For the case of online multiplication at least, where the fastest online RAM algorithm takes $O(\log^2{n})$ time per arriving pair of digits, this has been an open problem since at least 1973 and has so far resisted our best attempts.
On the other hand, for online Hamming distance, while our lower bound is tight within the model, it is still distant from the time complexity of the fastest known RAM algorithms. The best known online complexity is $O(\sqrt{n\log{n}})$ time per arriving symbol~\cite{CEPP:2011}. An improvement of the upper bound for Hamming distance computation to meet our new lower bound would also have significant implications. A reduction that is now regarded as folklore tells us that any $O(f(n))$~time algorithm for computing the Hamming distance between a pattern and all substrings of a text, assuming a pattern of length~$n$ and a text of length~$2n$, implies an $O(f(n^2))$~time algorithm for multiplying binary $(n\!\times\! n)$-matrices over the integers. Therefore an $O(\log{n})$ time online Hamming distance algorithm would imply an $O(n\log{n})$ offline Hamming distance algorithm, which would in turn imply an $O(n^2\log{n})$ time algorithm for binary matrix multiplication. Although such a result would arguably be less shocking than a proof of a superlinear offline lower bound for Hamming distance computation, it would nonetheless be a significant breakthrough in the complexity of a classic and much studied problem.
\subsection{The cell-probe model}
Our bounds hold in the \emph{cell-probe model} which is a particularly strong computational model that was introduced originally by Minsky and Papert~\cite{MP:1969} in a different context and then subsequently by Fredman~\cite{Fredman:1978} and Yao~\cite{Yao1981:Tables}.
In the cell-probe model there is a separation between the computing unit and the memory, which is external and consists of a set of cells of $w$ bits each. The computing unit cannot remember any information between operations. Computation is free and the cost is measured only in the number of cell reads or writes (cell~probes). This general view makes the model very strong, subsuming for instance the popular word-RAM model. In the word-RAM model certain operations on words, such as addition, subtraction and possibly multiplication take constant time (see for example~\cite{Hagerup:1998} for a detailed introduction). Here a word corresponds to a cell. As is typical, we will require that the cell size $w$ is at least of order $\log n$ bits. This allows each cell, or a constant number of cells, to hold the address of any location in memory.
The generality of the cell-probe model makes it particularly attractive for establishing lower bounds for dynamic data structure problems and many such results have been given in the past couple of decades. The approaches taken had historically been based only on communication complexity arguments and the chronogram technique of Fredman and Saks~\cite{FS1989:chronogram}.
However in 2004, a breakthrough led by P{\v a}tra{\c s}cu\xspace and Demaine gave us the tools to seal the gaps for several data structure problems~\cite{PD2006:Low-Bounds} as well as giving the first $\Omega(\log{n})$ lower bounds. The new technique is based on information-theoretic arguments that we also deploy here. P{\v a}tra{\c s}cu\xspace and Demaine also presented ideas which allowed them to express more refined lower bounds such as trade-offs between updates and queries of dynamic data structures. For a list of data structure problems and their lower bounds using these and related techniques, see for example~\cite{Pat2008:Thesis}. More recently, a new lower bound of $\Omega\left((\log{n}/\log{\log{n}})^2\right)$ was given by Green Larsen for the cell-probe complexity of performing queries in the dynamic range counting problem~\cite{Larsen:2012}.
This result holds under the natural assumptions of $\Theta(\log{n})$~size words and polylogarithmic time updates and is another exciting breakthrough in the field of cell-probe complexity.
\subsection{Technical contributions}
We use one of the most important techniques for proving data structure lower bounds called the \emph{information transfer method} of P{\v a}tra{\c s}cu\xspace and Demaine~\cite{PD2004:Partial-sums,PD2006:Low-Bounds}.
For a pair of adjacent intervals of arriving values in the stream, the information transfer is the set
of memory cells that are written during the first interval and read in
the next interval. These cells must contain \emph{all} the
information from
the updates during the first interval that the algorithm needs in order
to produce correct outputs in the next interval. If
one can prove that this quantity is large for many pairs of intervals
then the desired lower bounds follow. To do this we relate the size
of the information transfer to the conditional entropy of the outputs
in the relevant interval. The main task of proving lower bounds
reduces to that of devising a hard input distribution for which outputs
have high entropy conditioned on selected previous values of the input.
Although the use of information transfer to provide time lower bounds for data structure problems is not new, applying the method to our new online setting has required a number of new insights and technical innovations. At the simplest level, where a standard data structure problem has a number of different possible queries, in our setting there is only one query which is to return the latest result as soon as a new symbol arrives. As a result we provide a complete description of the information transfer method in a form which is relevant to this different setting. At a more detailed mathematical level, perhaps the most surprising innovation we present is a new relationship between the Hamming distance, vector sums and constant weight binary cyclic codes.
For the three problems we consider, our key innovation is the design of a fixed vector or string $F$
which together with some random distribution over possible input streams provide a lower bound for the information transfer between successive intervals.
For the convolution and multiplication problems we show that a randomly picked $F$ has a good chance of being suitable for proving the lower bounds. We also give an explicit description of a particular $F$ for which the lower bounds are obtained when the values of the input stream are drawn independently and uniformly at random. The vector $F$ is easy to describe and naturally yields large conditional entropy of the outputs for intervals of power-of-two lengths.
The results of the convolution and multiplication problems can be seen as a first step towards the lower bound for the Hamming distance problem.
Here the string $F$ is derived by a sequence of transformations. These start with binary cyclic codes and go via binary vectors with many
distinct sums and an intermediate string to finally arrive at $F$ itself.
The use of such a purposefully designed input departs from the closely related work of the convolution and multiplication lower bounds and also from much of the lower bound literature where simple uniform distributions over the whole input space often suffice.
The central fact that enabled a lower bound to be proven for the online convolution problem is that the inner product between a vector and successive suffixes of the stream reveals a lot of information about the history of the stream. Establishing a similar result for the online Hamming distance problem appears, however, to be considerably more challenging for a number of reasons.
The first and most obvious is that the amount of information one gains by comparing whether two, potentially large, symbols are equal is at most one bit, as opposed to $O(\log{n})$ bits for multiplication.
The second is that the particularly simple worst-case vector $F$ of the convolution problem greatly eased the resulting analysis.
We have not been able to find such a simple fixed string for the Hamming distance problem and our proof of the existence of a hard instance is non-constructive and involves a number of new insights, combining ideas from coding theory and additive combinatorics.
When computing the Hamming distance there is a balance between the number of symbols being used and the length of the strings. For large alphabets and short strings, one would expect a typical outputted Hamming distance to be close to the length of the string on random inputs and therefore to provide very little information.
This suggests that the length of the strings must be sufficiently long in relation to the alphabet size to ensure that the entropy of the outputs is large, as required by the information transfer method. On a closer look, it is not immediately obvious that large entropy can be obtained unless the fixed string $F$ is \emph{exponentially} larger than the alphabet size. This potentially poses another problem for the information transfer method, namely that a word size $w$ of order $\log n$ would be much larger than $\delta$ (the number of bits needed to represent a symbol), making a $\log n$ lower bound impossible to achieve.
Our main technical contribution is to show that fixed strings of length only polynomial in the size of the alphabet exist which provide outputs of sufficiently high entropy. Such strings, when combined with a suitable input distribution maximising the number of distinct Hamming distance output sequences, give us the overall lower bound.
We design a fixed string $F$ with this desirable property in such a way that there is a one-to-one mapping between many of the different possible input streams and the outputted Hamming distances. This in turn implies large entropy.
The construction of $F$ is non-trivial and we break it into smaller building blocks, reducing our problem to a purely combinatorial question relating to vector sums.
That is, given a relatively small set $V$ of vectors of length $m$, how many distinct vector sums can be obtained by choosing $m$ vectors from $V$ and adding them. We show that even if we are restricted to picking vectors only from subsets of $V$, there exists a $V$ such that the number of distinct vector sums is $m^{\Omega(m)}$. We believe this result is interesting in its own right. Our proof for the combinatorial problem is non-constructive and probabilistic, using constant weight cyclic binary codes to prove that there is a positive probability of the existence of a set $V$ with the desired property.
\subsection{Organisation}
In Section~\ref{sec:preliminaries} we introduce notation and describe the setup for proving the lower bounds.
In Section~\ref{sec:proofs} we prove the lower bounds for all three problems that we consider. The proofs hinge on a set of lemmas that will be proved separately in subsequent sections.
In Section~\ref{sec:conv} we deal with the lemmas related to the convolution problem, and
in Section~\ref{sec:mult} we deal with the lemmas related to the multiplication problem.
Finally, in Sections~\ref{sec:ham} to~\ref{sec:proofvecsum} we prove the lemma related to the Hamming distance problem.
\section{Basic setup for the lower bounds}\label{sec:preliminaries}
In this section we introduce notation and concepts that are used heavily
in the lower bound proofs.
For an array, vector or string $A$ of length $n$ and $i,j\in[n]$, we write $A[i]$
to denote the value at position~$i$, and where $j\geq i$, $A[i,j]$
denotes the $(j-i+1)$-length subarray of $A$ starting at position
$i$. All logarithms are in base two.
We first introduce a unifying framework for the problems we consider.
\subsection{The framework}
There is a \emph{fixed~array}
$F$ and an array $S$ which is referred
to as the \emph{stream}.
Both $F$ and $S$ are of length $n$ and over the set $[q]$ of integers,
and we let $\delta=\lfloor \log q \rfloor$; up to rounding, $\delta$ is the number of bits needed to encode a value from $[q]$.
The value $q$, or alternatively $\delta$, is a parameter of the problem.
The problem is to maintain $S$ subject to an update operation $\ensuremath{\textsc{update}}(x)$ which takes a symbol $x\in [q]$, modifies~$S$ by appending $x$ to the right of the rightmost symbol $S[n-1]$ and removing the leftmost symbol $S[0]$, and then outputs the value of a function of $F$ and the updated $S$.
In the \emph{convolution} problem the output is the inner product of $F$ and $S$, that is $\sum_{i\in[n]}(F[i]\cdot S[i])$, and in
the \emph{Hamming~distance} problem the output is the number of positions $i\in[n]$ such that $F[i]\neq S[i]$.
We let $U\in [q]^n$ denote the \emph{update array} which describes a sequence of $n$ $\ensuremath{\textsc{update}}$ operations. That is, for each $t\in[n]$, the operation $\ensuremath{\textsc{update}}(U[t])$ is performed.
We will usually refer to $t$ as the \emph{arrival} of the value $U[t]$.
Observe that just after the arrival $t$, the values $U[t+1,n-1]$ are still not known to the algorithm. Finally, we let the $n$-length array $A$ denote the outputs such that for $t\in [n]$, $A[t]$ is the output of $\ensuremath{\textsc{update}}(U[t])$.
In the \emph{multiplication}~problem we let $F$ denote one of the two operands to be multiplied, hence $F$ is fixed and known in advance by the algorithm.
Specifically we let $F[i]$ denote the $i$-th least significant digit. We let $U$ be the unknown operand so that $U[t]$ is its $t$-th least significant digit. Prior to the arrival of the first digit $U[0]$, the stream $S$ contains only zeros. The output $A[t]$ is the $t$-th digit in the product of $F$ and $S$, which is a function of $F$ and $U[0,t]$ as required.
\subsection{Hard distributions}
Our lower bounds hold for any randomised algorithm on its worst case
input. This will be achieved by applying \emph{Yao's
minimax principle}~\cite{Yao1977:Minimax}. That is, we develop
lower bounds that
hold for any deterministic~algorithm on some random~input. The basic approach is as follows: we devise a fixed array $F$ and describe a probability
distribution for $n$ new values arriving in the stream~$S$.
We then obtain a lower bound on the expected
running time for any deterministic algorithm over these arrivals.
Due to the minimax principle, the
same lower bound must then hold for any randomised~algorithm on its own
worst~case input. The amortised bound is obtained by dividing by $n$.
From this point onwards we consider an arbitrary deterministic algorithm
running with some fixed array $F$ on a random input of $n$ values. The algorithm may depend on~$F$. We refer to the choice of $F$ and the distribution on $U$ as a \emph{hard distribution}, since it is used to show a lower bound.
\subsection{Information transfer} \label{sec:more-notation}
The \emph{information transfer tree}, denoted $\ensuremath{\mathcal{T}}$, is a balanced
binary tree over $n$ leaves. To avoid technicalities we assume that
$n$ is a power of two.
The leaves
of $\ensuremath{\mathcal{T}}$, from left to right, represent the arrivals $t$ from $0$
to $n-1$.
For a node $v$ of $\ensuremath{\mathcal{T}}$, we let $\ell_{v}$
denote the number of leaves in the subtree rooted at $v$.
An internal node $v$ is associated with three arrivals,
$t_{0}$, $t_{1}$ and $t_{2}$. Here $t_{0}$ is the arrival represented
by the leftmost leaf in the subtree rooted at $v$, similarly $t_{2}=t_{0}+\ell_{v}-1$ corresponds to the rightmost such leaf, and $t_{1}=t_{0}+\ell_{v}/2-1$ lies in the middle. That is, the intervals $[t_{0},t_{1}]$
and $[t_{1}+1,t_{2}]$ span the left and right subtrees of $v$, respectively.
For example, in Figure~\ref{fig:tree},\inserttreefigure the node labelled $v$ is associated with the intervals $[16,23]$ and $[24,31]$.
We define the subarray $\ensuremath{U_v}=U[t_0,t_1]$ to represent the $\ell_{v}/2$ values
arriving in the stream during the arrival interval $[t_{0},t_{1}]$, and we define the subarray $\ensuremath{A_v}=A[t_{1}+1,t_{2}]$ to represent the $\ell_{v}/2$
outputs during the arrival interval $[t_{1}+1,t_{2}]$.
We define $\ensuremath{\widetilde{U}_v}$ to be the concatenation of $U[0,(t_0-1)]$
and $U[(t_2+1),(n-1)]$. That is, $\ensuremath{\widetilde{U}_v}$ contains all symbols
of $U$ except for those in $\ensuremath{U_v}$.
When $\ensuremath{\widetilde{U}_v}$ is fixed to some constant $\ensuremath{\widetilde{u}_v}$ and $\ensuremath{U_v}$ is
random, we write $H(\ensuremath{A_v}\mid\ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v})$ to denote the conditional
entropy of $\ensuremath{A_v}$ under the fixed~$\ensuremath{\widetilde{U}_v}$.
We define the \emph{information transfer} of a node $v$ of $\ensuremath{\mathcal{T}}$,
denoted $\ensuremath{\mathcal{I}_v}$, to be the set of memory cells $c$ such that $c$
is probed during the interval $[t_{0},t_{1}]$ and also probed in $[t_{1}+1,t_{2}]$.
The cells in the information transfer $\ensuremath{\mathcal{I}_v}$ therefore contain
all the information about the values in $\ensuremath{U_v}$ that the algorithm
uses in order to correctly produce the outputs $\ensuremath{A_v}$.
By adding up the sizes of the information transfers $\ensuremath{\mathcal{I}_v}$ over the internal nodes $v$
of $\ensuremath{\mathcal{T}}$ we get a lower bound on the number of cell probes, that is a lower bound on the total running time of the algorithm.
To see this it is important to observe
that a particular cell probe is counted only once.
Suppose that the cell $c\in\ensuremath{\mathcal{I}_v}$ for some node $v$. Let $p$ be the first probe of $c$ in the arrival interval $[t_{1}+1,t_{2}]$. By including the cell $c\in\ensuremath{\mathcal{I}_v}$ in the cell-probe count we are in fact counting the probe $p$. Now observe that $p$ cannot be counted in the information transfer $\ensuremath{\mathcal{I}}_{v'}$ of any node $v'$ that is a proper descendant or ancestor of $v$.
Since the concept of the size of the information transfer is central to the lower bound proofs, we define as a shorthand $\ensuremath{I_v}=|\ensuremath{\mathcal{I}_v}|$ to denote the size of the information transfer.
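For concreteness, the following Python fragment (an illustration of the definitions above, not part of any proof) enumerates the arrival intervals $(t_0,t_1,t_2)$ associated with the internal nodes of $\ensuremath{\mathcal{T}}$, level by level.
\begin{verbatim}
def node_intervals(n):
    """For n a power of two, yield (t0, t1, t2) for every internal
    node of the information transfer tree, from the root downwards."""
    size = n                          # size = l_v, the number of leaves
    while size >= 2:
        for t0 in range(0, n, size):
            t1 = t0 + size // 2 - 1   # left subtree spans [t0, t1]
            t2 = t0 + size - 1        # right subtree spans [t1+1, t2]
            yield (t0, t1, t2)
        size //= 2

# For n = 8: the root gives (0, 3, 7); its children (0, 1, 3), (4, 5, 7).
\end{verbatim}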
\begin{defn}
[\textbf{Large expected information transfer}]\label{def:large-it}
A node $v$ of $\ensuremath{\mathcal{T}}$ has \emph{large information transfer\xspace} if
\[
\expected{\ensuremath{I_v}} ~\geq~ \frac{k\cdot\delta\cdot\ell_v}{w},
\]
where $k$ is a constant that depends on the problem and input distribution.
\end{defn}
The aim is to show that a substantial proportion of nodes of $\ensuremath{\mathcal{T}}$ have large information transfer\xspace.
\section{Overall proofs of the lower bounds}\label{sec:proofs}
In this section we give the overall proofs for our lower bound
results.
Let $v$ be any node of $\ensuremath{\mathcal{T}}$. Suppose that $\ensuremath{\widetilde{U}_v}$ is fixed but the
symbols in $\ensuremath{U_v}$ are randomly drawn in accordance with the distribution
on $U$, conditioned on the fixed value of $\ensuremath{\widetilde{U}_v}$. This induces
a distribution on the outputs $\ensuremath{A_v}$. If the entropy of $\ensuremath{A_v}$ is
large, conditioned on the fixed $\ensuremath{\widetilde{U}_v}$, then any algorithm must
probe many cells in order to correctly produce the outputs $\ensuremath{A_v}$, as it is
only through the information transfer $\ensuremath{\mathcal{I}_v}$ that the algorithm
can know anything about $\ensuremath{U_v}$. We will soon make this claim more
precise.
\subsection{Upper bound on the entropy}
Towards showing that high conditional entropy $H(\ensuremath{A_v}\mid\ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v})$ implies large information transfer we use the information transfer $\ensuremath{\mathcal{I}_v}$ to describe an encoding of the outputs $\ensuremath{A_v}$. The following lemma gives a direct relationship between the size of the information transfer $\ensuremath{\mathcal{I}_v}$ and the entropy.
The lemma was originally stated in~\cite{PD2006:Low-Bounds} but for completeness we restate it here in our notation and provide a full proof.
\begin{lem}[P{\v a}tra{\c s}cu\xspace and Demaine~\cite{PD2006:Low-Bounds}]\label{lem:H-upper-old}
Under the assumption that the address of any cell can be specified in $w$ bits, for any node $v$ of the information transfer tree $\ensuremath{\mathcal{T}}$, the entropy
$$
H(\ensuremath{A_v}\mid\ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v})~\leq~w + 2w\cdot \expected{\ensuremath{I_v} \mid \ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v}}.
$$
\end{lem}
\begin{proof}
The expected length of any encoding of $\ensuremath{A_v}$, conditioned on $\ensuremath{\widetilde{U}_v}$, is an upper bound on the conditional entropy of $\ensuremath{A_v}$.
We use the information transfer $\ensuremath{\mathcal{I}_v}$ as an encoding in the following way. For every cell $c\in\ensuremath{\mathcal{I}_v}$ we store the address of $c$, which takes at most $w$ bits under the assumption that a cell can hold the address of any cell in memory.
We also store the contents of $c$, which takes $w$ bits.
In total this requires $2w\cdot \ensuremath{I_v}$ bits.
We will use the algorithm, which is fixed, and the fixed values of $\ensuremath{\widetilde{U}_v}$ as part of the decoder to obtain $\ensuremath{A_v}$ from the encoding. Since the encoding is of variable length we also store the size of the information transfer, which requires at most $w$ additional bits.
In order to prove that the described encoding of $\ensuremath{A_v}$ is valid we now describe how to decode it.
First we simulate the algorithm on the fixed input $\ensuremath{\widetilde{U}_v}$ from the first arrival of $U[0]$ until just before the first value in $\ensuremath{U_v}$ arrives.
We then skip over all inputs in $\ensuremath{U_v}$
and resume simulating the algorithm from the beginning of the interval
where $\ensuremath{A_v}$ is outputted until the last value in $\ensuremath{A_v}$ has been obtained.
For every cell being read, we check whether it is contained in the information transfer $\ensuremath{\mathcal{I}_v}$ by looking up its address in the encoding.
If it is in the information transfer, its contents are fetched from the encoding. If not, its contents are available from simulating the algorithm on the fixed inputs.
Observe that it suffices to store the contents of each cell in the information transfer as of the first time it is probed in $[t_{1}+1,t_{2}]$, since the decoder remembers every cell it has already accessed.
\end{proof}
\subsection{Lower bounds on the entropy}
Lemma~\ref{lem:H-upper-old} above provides a direct way to obtain a lower bound on the expected size of the information transfer if given a lower bound on the conditional entropy $H(\ensuremath{A_v}\mid\ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v})$.
To show that a node has large information transfer\xspace we introduce the following definition.
\begin{defn}
[\textbf{High-entropy node}]\label{def:high-node}A node $v$ in
$\ensuremath{\mathcal{T}}$ is a \emph{high-entropy~node} if there is a positive constant
$k$ such that for \emph{any} fixed $\ensuremath{\widetilde{u}_v}$,
$$
H(\ensuremath{A_v}\mid\ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v})\,\geq\, k\cdot\delta\cdot\ell_{v}.
$$
\end{defn}
To put this bound in perspective, note that the maximum conditional
entropy of $\ensuremath{A_v}$ is bounded by the entropy of $\ensuremath{U_v}$, which is at
most $\delta\cdot(\ell_{v}/2)$ and obtained when the values of $\ensuremath{U_v}$
are independent and uniformly drawn from $[q]$. Thus, the
conditional entropy associated with a high-entropy node is the highest
possible up to some constant factor.
Establishing high-entropy nodes is the main contribution of this paper and the results are given in the following lemmas.
\begin{lem}
\label{lem:conv-rand-H-lower}
For the convolution problem,
suppose that $U$ is chosen uniformly at random from $[q]^n$, where $q$ is a prime. For any $v \in \ensuremath{\mathcal{T}}$, at least a $(1-\frac{1}{q})$-fraction of all $F \in [q]^n$ have the property that $v$ is a high-entropy node.
\end{lem}
The proof of the above lemma is given in Section~\ref{sec:conv} and relies on properties of Toeplitz matrices over a finite field of $q$ elements. The proof does not give explicit descriptions of fixed arrays $F$ for which nodes are high-entropy nodes.
In the proof of the next lemma however, we show that there exists a particular array $F$ for which high-entropy nodes are obtained. This $F$ is a 0/1-array and is easy to describe: zeroes everywhere except for at power-of-two positions from the right hand end. The proof is given in Section~\ref{sec:conv}.
\begin{lem}
\label{lem:conv-H-lower}
For the convolution problem there exists a fixed array $F\in [q]^n$ such that when $U$ is chosen uniformly at random from $[q]^n$, all $v \in \ensuremath{\mathcal{T}}$ are high-entropy nodes.
\end{lem}
Before we give the lemmas concerning online multiplication, recall that in this problem there is a fixed operand $F$ multiplied with an operand $U$ for which digits arrive one at a time.
\begin{lem}
\label{lem:mult-rand-H-lower}
For the online multiplication problem, suppose that the operand $U$ is chosen uniformly at random from $[q^n]$. For any $v \in \ensuremath{\mathcal{T}}$, at least half of all operands $F\in [q^n]$ have the property that $v$ is a high-entropy node.
\end{lem}
The proof of Lemma~\ref{lem:mult-rand-H-lower} is given in Section~\ref{sec:mult}. Similarly to the convolution problem we also give an explicit description of a number $F$ for which high-entropy nodes are obtained. This number resembles the fixed array that we described above for the convolution problem.
The proof of the next lemma is also given in Section~\ref{sec:mult}.
\begin{lem}
\label{lem:mult-H-lower}
For the online multiplication problem there exists a fixed operand $F\in [q^n]$ such that when $U$ is chosen uniformly at random from $[q^n]$, all $v \in \ensuremath{\mathcal{T}}$ are high-entropy nodes.
\end{lem}
Finally, for the Hamming distance problem we show that there exists an $F$ and distribution for $U$ such that sufficiently many nodes are high-entropy nodes. The proof of the next lemma is rather involved and is given over the Sections~\ref{sec:ham} to~\ref{sec:proofvecsum}.
\begin{lem}
\label{lem:ham-H-lower}
For the Hamming distance problem there exists a hard distribution with a fixed $F$ and random $U$ such that any node $v\in\ensuremath{\mathcal{T}}$ for which $\ell_v\geq \sqrt{n}$ is a high-entropy node.
\end{lem}
In the proof of Lemma~\ref{lem:ham-H-lower} we demonstrate that there exists a very specific set of strings such that when $F$ is drawn randomly from this set, there is a non-zero probability of picking an $F$ for which many nodes are high-entropy nodes.
Unlike the convolution and multiplication problems, the distribution for $U$ is not uniform over all strings in $[q]^n$.
\subsection{Lower bounds on the information transfer}
In the previous section we gave a series of lemmas saying that for all three problems we consider, there are instances for which many nodes of $\ensuremath{\mathcal{T}}$ are high-entropy nodes. In this section we combine these results with the entropy upper bound of Lemma~\ref{lem:H-upper-old} to show that many nodes have large information transfer\xspace.
The following lemmas match the lemmas of the previous section.
We start with the convolution problem.
\begin{lem}
\label{lem:conv-random-it}
For the convolution problem where both $F$ and $U$ are chosen uniformly at random from $[q]^n$, and $q$ is a prime, every $v\in\ensuremath{\mathcal{T}}$ has large information transfer\xspace.
\end{lem}
\begin{proof}
By combining Lemmas~\ref{lem:H-upper-old} and~\ref{lem:conv-rand-H-lower} we have that for any $v\in\ensuremath{\mathcal{T}}$ under fixed $\ensuremath{\widetilde{U}_v}$, at least a $(1-\frac{1}{q})$-fraction, hence at least half, of all $F \in [q]^n$ imply that $v$ is a high-entropy node, that is,
\[
k\cdot \delta \cdot\ell_v
~\leq~
w + 2w\cdot \expected{\ensuremath{I_v} \mid \ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v}},
\]
where $k$ is the constant from Definition~\ref{def:high-node} of a high-entropy node.
Rearranging terms gives
\[
\expected{\ensuremath{I_v} \mid \ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v}}
~\geq~
\frac{k\cdot\delta \cdot\ell_v}{2w} - \frac12.
\]
We remove the conditioning by taking expectation over $\ensuremath{\widetilde{U}_v}$ under a random $U$.
Since $\ensuremath{I_v}\geq 0$ always, when $F$ is also chosen uniformly at random from $[q]^n$ we therefore have
\[
\expected{\ensuremath{I_v}}
~\geq~
\frac{k\cdot\delta \cdot\ell_v}{4w} - \frac14,
\]
hence $v$ has large information transfer\xspace.
\end{proof}
Similarly to Lemma~\ref{lem:conv-random-it}, we combine Lemmas~\ref{lem:H-upper-old} and~\ref{lem:conv-H-lower} to obtain the following property for the case where $F$ is a fixed string and not randomly chosen.
\begin{lem}
\label{lem:conv-it}
For the convolution problem there exists a hard distribution where $F$ is fixed and $U$ is chosen uniformly at random from $[q]^n$, such that every $v\in\ensuremath{\mathcal{T}}$ has large information transfer\xspace.
\end{lem}
\begin{proof}
Similarly to the proof of Lemma~\ref{lem:conv-random-it} we combine Lemmas~\ref{lem:H-upper-old} and~\ref{lem:conv-H-lower} to obtain, for all $v\in\ensuremath{\mathcal{T}}$ under fixed $\ensuremath{\widetilde{U}_v}$,
\[
\expected{\ensuremath{I_v} \mid \ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v}}
~\geq~
\frac{k\cdot\delta \cdot\ell_v}{2w} - \frac12,
\]
where $k$ is the constant from Definition~\ref{def:high-node} of a high-entropy node.
The conditioning is removed by taking expectation over $\ensuremath{\widetilde{U}_v}$ under a random $U$.
\end{proof}
The proofs of the following two lemmas, in which we establish large information transfer for the multiplication problem, are similar to the proofs of the previous two lemmas, except that here we combine Lemma~\ref{lem:H-upper-old} with Lemmas~\ref{lem:mult-rand-H-lower} and~\ref{lem:mult-H-lower}, respectively.
\begin{lem}
\label{lem:mult-random-it}
For the online multiplication problem where both operands are chosen uniformly at random from $[q^n]$, every $v\in\ensuremath{\mathcal{T}}$ has large information transfer\xspace.
\end{lem}
\begin{lem}
\label{lem:mult-it}
For the online multiplication problem there exists a fixed operand in $[q^n]$ such that when the other operand is chosen uniformly at random from $[q^n]$, every $v\in\ensuremath{\mathcal{T}}$ has large information transfer\xspace.
\end{lem}
Finally, large information transfer is also established for the Hamming distance problem. The proof of the next lemma is identical to the proof of Lemma~\ref{lem:conv-it}, except that we
combine Lemma~\ref{lem:H-upper-old} with Lemma~\ref{lem:ham-H-lower} instead, and
restrict the nodes $v$ to those for which $\ell_v\geq \sqrt{n}$.
\begin{lem}
\label{lem:ham-it}
There exists a hard distribution for the Hamming distance problem such that every $v\in\ensuremath{\mathcal{T}}$ for which $\ell_v\geq \sqrt{n}$ has large information transfer\xspace.
\end{lem}
\subsection{Obtaining the cell-probe lower bounds}
Now that we have established large information transfer\xspace for sufficiently many nodes of~$\ensuremath{\mathcal{T}}$ we are ready to prove the lower bounds of Theorems~\ref{thm:conv}, \ref{thm:mult} and~\ref{thm:ham}.
For both the convolution and multiplication problems, large information transfer\xspace has been established for every node $v$ of $\ensuremath{\mathcal{T}}$, whereas for the Hamming distance problem, large information transfer\xspace has only been established where $\ell_v\geq \sqrt{n}$.
In order to unify the presentation of the proofs we restrict the summation of $\ensuremath{I_v}$ to nodes for which $\ell_v\geq \sqrt{n}$.
Let $V$ denote this set of nodes.
We have
\begin{equation}
\label{eq:total-it}
\mathbb{E}\left[\sum_{v\in\ensuremath{\mathcal{T}}} \ensuremath{I_v}\right]
~\geq~
\mathbb{E}\left[\sum_{v\in V} \ensuremath{I_v}\right]
~=~
\sum_{v\in V} \mathbb{E}[\ensuremath{I_v}]
~\geq~
\sum_{v\in V} \frac{k\cdot\delta\cdot\ell_v}{w}
~=~
\frac{k'\cdot\delta\cdot n\cdot \log n}{w},
\end{equation}
where $k$ is the constant from Definition~\ref{def:large-it} of large information transfer\xspace and $k'$ is a new suitable constant.
The first equality follows by linearity of expectation and the second inequality follows by Lemmas~\ref{lem:conv-random-it} to~\ref{lem:ham-it}, respectively.
The last equality follows from the fact that
\[
\sum_{\substack{v\in \ensuremath{\mathcal{T}}\\ \ell_v \geq \sqrt{n}}} \ell_v ~\in~ \Theta(n\log n),
\]
which holds because the subtree sizes of the nodes at each level of $\ensuremath{\mathcal{T}}$ sum to exactly $n$, and roughly half of the $\log n$ levels consist of nodes $v$ with $\ell_v\geq\sqrt{n}$.
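As a quick numeric sanity check of this counting step (an illustrative aside, not part of the argument), one can sum the subtree sizes directly in Python:
\begin{verbatim}
import math

def total_subtree_size(n):
    """Sum of l_v over the internal nodes v with l_v >= sqrt(n)."""
    return sum(size * (n // size)     # n/size nodes have l_v = size
               for size in (2**j for j in range(1, n.bit_length()))
               if size >= math.isqrt(n))

# total_subtree_size(2**10) == 6 * 1024: each of the roughly (log n)/2
# qualifying levels contributes exactly n.
\end{verbatim}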
Since the running time is bounded by the number of cell probes we have from Equation~(\ref{eq:total-it}) that the expected running time for any deterministic algorithm solving the convolution, multiplication or Hamming distance problem, respectively, on $n$ random inputs is
\[
\Omega\left(\frac{\delta\cdot n\cdot \log n}{w}\right).
\]
By Yao's minimax principle, as discussed in Section~\ref{sec:preliminaries}, this implies that any randomised algorithm on its worst case input has the same lower bound on its expected running time.
The amortised time per arriving value is obtained by dividing the running time by $n$.
This concludes the proofs of Theorems~\ref{thm:conv}, \ref{thm:mult} and~\ref{thm:ham}.
\section{Hard distributions for the convolution problem} \label{sec:conv}
In this section we prove Lemmas~\ref{lem:conv-rand-H-lower} and~\ref{lem:conv-H-lower}, that is we show that there are instances to the convolution problem such that the conditional entropy of the outputs $\ensuremath{A_v}$ is large, where all inputs but $\ensuremath{U_v}$ are fixed.
We begin by proving Lemma~\ref{lem:conv-rand-H-lower} because the proof is straightforward and the description of the hard distribution is simple: pick the inputs $U$ uniformly at random from $[q]^n$. As to the choice of $F$ we only argue that a large fraction of all $n$-length arrays have the desired entropy lower bound. In Section~\ref{sec:conv-fixed-F} we will specify a particular $F$ with this property, which will lead to a proof of Lemma~\ref{lem:conv-H-lower}.
\subsection{Entropy lower bound over all arrays $F$} \label{sec:conv-random-F}
We now prove Lemma~\ref{lem:conv-rand-H-lower}.
Let $v$ be any internal node of $\ensuremath{\mathcal{T}}$ and let $t_v\in[n]$ denote the arrival time of $\ensuremath{U_v}[0]$.
Let $\ell=\ell_v/2$.
For $i\in [\ell]$, the $i$-th output in $\ensuremath{A_v}$ can be broken into two sums $\ensuremath{\mathcal{A}}_i$ and $\widetilde{\ensuremath{\mathcal{A}}}_i$, such that $\ensuremath{A_v}[i] = \ensuremath{\mathcal{A}}_i + \widetilde{\ensuremath{\mathcal{A}}}_i$, where
\begin{equation*}
\ensuremath{\mathcal{A}}_i = \sum_{j\in[\ell]} \big(F[n-1-(\ell+i)+j]\cdot U_v[j]\big)
\end{equation*}
is the contribution from the alignment of $F$ with $U_v$, and $\widetilde{\ensuremath{\mathcal{A}}}_i$ is the contribution from the alignments that do not include $U_v$. Hence $\widetilde{\ensuremath{\mathcal{A}}}_i$ is constant under fixed $\ensuremath{\widetilde{U}_v}$.
We define $\ensuremath{M_{F,\ell}}$ to be the $\ell$$\times$$\ell$ matrix with entries $\ensuremath{M_{F,\ell}}(i,j)= F[n-1-(\ell+i)+j]$. That is,
\begin{equation*}
\ensuremath{M_{F,\ell}} =
\begin{pmatrix}
F[n-\ell-1] & F[n-\ell+0] & F[n-\ell+1] & \cdots & F[n-2] \\
F[n-\ell-2] & F[n-\ell-1] & F[n-\ell+0] & \cdots & F[n-3] \\
F[n-\ell-3] & F[n-\ell-2] & F[n-\ell-1] & \cdots & F[n-4] \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
F[n-2\ell] & F[n-2\ell+1] & F[n-2\ell+2] & \cdots & F[n-\ell-1]
\end{pmatrix}.
\end{equation*}
Observe that $\ensuremath{M_{F,\ell}}$ is a \emph{Toeplitz} matrix (or ``upside down'' \emph{Hankel} matrix) since it is constant on each descending diagonal from left to right.
It follows that
\begin{equation}
\label{eq:matrix}
\ensuremath{M_{F,\ell}}\times
\begin{pmatrix}
U_v[0] \\
U_v[1] \\
\vdots \\
U_v[\ell-1]
\end{pmatrix}
=
\begin{pmatrix}
\ensuremath{\mathcal{A}}_0 \\
\ensuremath{\mathcal{A}}_1 \\
\vdots \\
\ensuremath{\mathcal{A}}_{\ell-1}
\end{pmatrix}
\end{equation}
which describes a system of linear equations. Since outputs are given modulo $q$, where $q$ is assumed to be a prime, we operate in the finite field $\mathbb{Z}/q\mathbb{Z}$.
It has been shown in~\cite{KL1996:Toeplitz} that for any $\ell$, out of all the $\ell$$\times$$\ell$ Toeplitz matrices over a finite field of $q$ elements, a fraction of exactly $(1-1/q)$ is non-singular.
This fact was actually already established in~\cite{Day1960:Matrices} almost 40 years earlier but incidentally reproved in~\cite{KL1996:Toeplitz}.
Thus,
a $(1-1/q)$-fraction of all $F$ has the property that all the $\ell$ inputs in $\ensuremath{U_v}$ can be uniquely determined from the outputs in $\ensuremath{A_v}$.
Since the induced distribution for $\ensuremath{U_v}$ under any fixed $\ensuremath{\widetilde{U}_v}$ is the uniform distribution on $[q]^\ell$, the conditional entropy
\[
H(\ensuremath{A_v}\mid\ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v})
~=~
\ell \cdot \log_2 q
~\geq~
\frac{\delta \cdot \ell_v}{2},
\]
where $\delta=\lfloor\log_2 q\rfloor$.
This concludes the proof of Lemma~\ref{lem:conv-rand-H-lower}.
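To illustrate the argument numerically, the following hypothetical Python fragment (assuming the SymPy library) builds $\ensuremath{M_{F,\ell}}$ and, when the matrix is non-singular over $\mathbb{Z}/q\mathbb{Z}$, recovers $\ensuremath{U_v}$ by solving the Toeplitz system in Equation~(\ref{eq:matrix}), given the values $\ensuremath{\mathcal{A}}_0,\dots,\ensuremath{\mathcal{A}}_{\ell-1}$, i.e.\ the outputs with the fixed contributions $\widetilde{\ensuremath{\mathcal{A}}}_i$ subtracted off.
\begin{verbatim}
from sympy import Matrix

def recover_Uv(F, A, q):
    """F: the fixed array (length n); A: the l values A_0..A_{l-1}
    mod q; q: a prime. Solves the Toeplitz system above."""
    ell, n = len(A), len(F)
    M = Matrix(ell, ell, lambda i, j: F[n - 1 - (ell + i) + j] % q)
    Uv = M.inv_mod(q) * Matrix(A)     # raises if M is singular mod q
    return Uv.applyfunc(lambda x: x % q)
\end{verbatim}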
\subsection{Entropy lower bound with a fixed array $F$} \label{sec:conv-fixed-F}
We now prove Lemma~\ref{lem:conv-H-lower} by demonstrating that it is possible to design a fixed array $F$ such that for all nodes $v\in\ensuremath{\mathcal{T}}$, a large portion of the values in $\ensuremath{U_v}$ can be uniquely determined from the outputs $\ensuremath{A_v}$. Since $U$ is drawn uniformly at random from $[q]^n$, this implies large entropy of the outputs $\ensuremath{A_v}$.
The fixed array $F$ that we consider consists of stretches of~0s interspersed with~1s. The distance between two consecutive 1s is an increasing power of two, ensuring that for half of the alignments of $F$ and $S$ in the arrival interval where $\ensuremath{A_v}$ is outputted, exactly one element of $U_v$ is aligned with a 1 in $F$ and all other elements with 0s, so that only this element contributes to the outputted inner product of $F$ and $S$. We define $K_n\in[2]^n$ such that
\begin{equation*}
K_n[0],K_n[1],\dots,K_n[n-1]\;=\;\dots000000000{\bf 1}000000000000000{\bf 1}0000000{\bf 1}000{\bf 1}0{\bf 11}0,
\end{equation*}
where commas between elements on the right hand side have been omitted, or formally,
\begin{equation*}
K_n[i] =
\begin{cases}
1, &\text{if $n-1-i$ is a power of two;}\\
0, &\text{otherwise.}
\end{cases}
\end{equation*}
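A short Python fragment (illustrative only) generating $K_n$ and checking the tail of the picture above:
\begin{verbatim}
def K(n):
    """K_n[i] = 1 if and only if n - 1 - i is a power of two."""
    is_pow2 = lambda x: x > 0 and x & (x - 1) == 0
    return [1 if is_pow2(n - 1 - i) else 0 for i in range(n)]

# The last entries match the picture: K(32)[-6:] == [0, 1, 0, 1, 1, 0].
\end{verbatim}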
The hard distribution for Lemma~\ref{lem:conv-H-lower} is $F=K_n$ and the inputs $U$ drawn uniformly at random from $[q]^n$.
Let $v$ be any node of $\ensuremath{\mathcal{T}}$ and consider Figure~\ref{fig:conv} which illustrates three alignments of $F$ and $S$, denoted \alignment{1}, \alignment{2} and~\alignment{3}, respectively.
\begin{figure*}[t]
\centering
\insertdiagram{convolution-sliding}
\caption{\label{fig:conv}Three alignments of $F=K_n$ and the stream $S$:
\alignment{1}~the last value of $\ensuremath{U_v}$ has just arrived, \alignment{2}~half of the outputs in $\ensuremath{A_v}$ have been outputted, and \alignment{3}~all outputs in $\ensuremath{A_v}$ have been outputted.}
\end{figure*}
At alignment~\alignment{1}, the last value of $\ensuremath{U_v}$ has just arrived in the stream. At alignment~\alignment{2}, half of the outputs in $\ensuremath{A_v}$ have been outputted.
At alignment~\alignment{3}, all outputs in $\ensuremath{A_v}$ have been outputted.
The key observation is that for each alignment between~\alignment{2} and~\alignment{3}, exactly one input $x$ of $\ensuremath{U_v}$ is aligned with a~1 in~$F$, hence $x$ can be uniquely determined from the corresponding output. Thus, over all outputs $\ensuremath{A_v}$, a total of $\ell_v/4$ values of $\ensuremath{U_v}$ can be determined, implying that the entropy of $\ensuremath{A_v}$ must be at least $\delta\cdot\ell_v/4$, where $\delta=\lfloor\log_2 q\rfloor$.
We now formalise this reasoning.
Using the definition of $\ell=\ell_v/2$ and the matrix $\ensuremath{M_{F,\ell}}$ above, recall that entry
$\ensuremath{M_{F,\ell}}(i,j)= F[n-1-(\ell+i)+j]$.
Thus, $\ensuremath{M_{F,\ell}}(i,j)=1$ if and only if
\[
n-1-\big(n-1-(\ell+i)+j\big)~=~\ell+i-j
\]
is a power of two.
For row $i$ and column $j$, the quantity $\ell+i-j$ lies in the range $[i+1,\,\ell+i]$. Since $\ell$ is a power of two and $i\in \{\ell/2,\dots,\ell-1\}$, the only power of two in this range is $\ell$ itself, so each such row contains at most one entry with the value~1. More precisely, for $i\in \{\ell/2,\dots,\ell-1\}$,
\begin{equation*}
\ensuremath{M_{F,\ell}}(i,j) =
\begin{cases}
1 &\textup{if $j=i$,}\\
0 &\textup{otherwise.}
\end{cases}
\end{equation*}
From the system of linear equations in Equation~(\ref{eq:matrix}) it follows that for $i\in \{\ell/2,\dots,\ell-1\}$,
$\ensuremath{\mathcal{A}}_i = \ensuremath{U_v}[i]$.
Since the induced distribution for $\ensuremath{U_v}$ under any fixed $\ensuremath{\widetilde{U}_v}$ is the uniform distribution on $[q]^\ell$, the conditional entropy
\[
H(\ensuremath{A_v}\mid\ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v})
~\geq~
\frac{\ell}{2} \cdot \log_2 q
~\geq~
\frac{\delta \cdot \ell_v}{4},
\]
where $\delta=\lfloor\log_2 q\rfloor$.
This concludes the proof of Lemma~\ref{lem:conv-H-lower}.
\section{Hard distributions for the multiplication problem} \label{sec:mult}
In this section we prove Lemmas~\ref{lem:mult-rand-H-lower} and~\ref{lem:mult-H-lower}, that is we show that there are instances of the online multiplication problem such that the conditional entropy of the outputs $\ensuremath{A_v}$ is large, where all inputs but $\ensuremath{U_v}$ are fixed.
For the purposes of proving a lower bound we assume that all digits of the operand $F$ are available at any time whereas the digits of the operand $U$ arrive one at a time.
Figure~\ref{fig:mult} illustrates $U\times F$, where $U[0]$ and $F[0]$ are the least significant digits and the product $A$ is capped at $n$ digits.
\begin{figure*}[t]
\centering
\insertdiagram{multiplication}
\caption{\label{fig:mult}An illustration of $A=U\times F$. Digits of $U$ arrive one at a time, where $U[0]$ is the low-order digit that arrives first.}
\end{figure*}
The following property of multiplying binary numbers was established by Paterson, Fischer and Meyer~\cite{PFM:1974}. The lemma is stated in our notation, but the translation from the original notation of~\cite{PFM:1974} is straightforward.
\begin{lem}[Corollary of Lemma~5 in~\cite{PFM:1974}]
\label{lem:PFMlemma5}
Suppose $q=2$. Let $v$ be any node of $\ensuremath{\mathcal{T}}$ and fix the digits of $\ensuremath{\widetilde{U}_v}$ arbitrarily.
At least half of all $F[0,\ell_v-1]\in [q]^{\ell_v}$ (first $\ell_v$ digits of $F$) have the property that
any value of $A_v$ can arise from at most four distinct~$\ensuremath{U_v}$.
\end{lem}
Although Lemma~\ref{lem:PFMlemma5} applies only to binary numbers, it naturally scales to any $q$ that is a power of two. To see this, observe that the property holds for any $v$, and a sequence of digits in base $q$ is after all just a bit sequence.
\begin{cor}
Lemma~\ref{lem:PFMlemma5} holds for any $q$ that is a power of two.
\end{cor}
We use the above corollary to prove Lemma~\ref{lem:mult-rand-H-lower}.
Let $v$ be any node of $\ensuremath{\mathcal{T}}$.
At least half of all $F\in [q^n]$ have the property that $\ensuremath{U_v}$ can be determined up to a set of four possible values given the outputs in $\ensuremath{A_v}$.
Since the induced distribution for $\ensuremath{U_v}$ under any fixed $\ensuremath{\widetilde{U}_v}$ is the uniform distribution on $[q]^\ell$ (the digits of $\ensuremath{U_v}$), the conditional entropy
\[
H(\ensuremath{A_v}\mid\ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v})
~\geq~
\log_2 \left(\frac{q^{\ell_v / 2}}{4}\right)
~\geq~
\frac{\delta \cdot \ell_v}{2} - 2,
\]
where $\delta=\log_2 q$.
This concludes the proof of Lemma~\ref{lem:mult-rand-H-lower}.
In order to prove Lemma~\ref{lem:mult-H-lower} we specify a fixed $F$ which together with the uniform distribution for $U$ gives the desired entropy lower bound.
Similarly to the array $K_n$ from Section~\ref{sec:conv-fixed-F} we define $K_{q,n}$ to be the largest number in $[q^n]$ such that the $i$-th bit in the binary expansion of $K_{q,n}$ is $1$ if and only if $i$ is a power of two (starting with $i=0$ at the lower-order end). Thus, the binary expansion of $K_{q,n}$ is the reverse of $K_{n\log_2 q}$.
For example, suppose that $q=16$ (i.e.~hex) and $n=8$. Then $K_{16,8}=10116$ in base 16, or 65,814 in decimal, since the binary expansion of $K_{16,8}$ is
\[
\underbrace{0000}_{0}
\underbrace{0000}_{0}
\underbrace{0000}_{0}
\underbrace{0001}_{1}
\underbrace{0000}_{0}
\underbrace{0001}_{1}
\underbrace{0001}_{1}
\underbrace{0110}_{6}.
\]
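As a sanity check of this definition, the following Python sketch (ours, for illustration only) builds $K_{q,n}$ by setting the power-of-two bit positions and confirms the example above.
\begin{verbatim}
def K_qn(q, n):
    # q is a power of two; the binary expansion of K_{q,n} has
    # n*log2(q) bits, with a 1 exactly at power-of-two bit positions
    nbits = n * (q.bit_length() - 1)
    value, p = 0, 1
    while p < nbits:
        value |= 1 << p
        p *= 2
    return value

assert K_qn(16, 8) == 0x10116 == 65814
\end{verbatim}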
Paterson, Fischer and Meyer~\cite{PFM:1974} also studied the multiplication of binary numbers where one operand is fixed. The following property was given in~\cite{PFM:1974}, here translated into our notation.
\begin{lem}[Lemma~1 of~\cite{PFM:1974}]
\label{lem:PFMlemma1}
Suppose $q=2$ and $F=K_{q,n}$. Let $v$ be any node of $\ensuremath{\mathcal{T}}$ and fix the digits of $\ensuremath{\widetilde{U}_v}$ arbitrarily. Any value of $A_v$ can arise from at most two distinct~$\ensuremath{U_v}$.
\end{lem}
Similarly to Lemma~\ref{lem:PFMlemma5} and from our definition of $K_{q,n}$, the above lemma scales to any $q$ that is a power of two.
\begin{cor}
Lemma~\ref{lem:PFMlemma1} holds for any $q$ that is a power of two.
\end{cor}
We use the above corollary to prove Lemma~\ref{lem:mult-H-lower} where $F=K_{q,n}$.
Let $v$ be any node of $\ensuremath{\mathcal{T}}$.
The value of $\ensuremath{U_v}$ can be determined up to a set of two possible values given the outputs in $\ensuremath{A_v}$.
Since the induced distribution for $\ensuremath{U_v}$ under any fixed $\ensuremath{\widetilde{U}_v}$ is the uniform distribution on $[q]^\ell$ (the digits of $\ensuremath{U_v}$), the conditional entropy
\[
H(\ensuremath{A_v}\mid\ensuremath{\widetilde{U}_v}=\ensuremath{\widetilde{u}_v})
~\geq~
\log_2 \left(\frac{q^{\ell_v / 2}}{2}\right)
~\geq~
\frac{\delta \cdot \ell_v}{2} - 1,
\]
where $\delta=\log_2 q$.
This concludes the proof of Lemma~\ref{lem:mult-H-lower}.
\section{Hard distribution for the Hamming distance problem} \label{sec:ham}
In this section we prove Lemma~\ref{lem:ham-H-lower}, that is, we show that there are instances of the Hamming distance problem such that the conditional entropy of the outputs $\ensuremath{A_v}$ is large, where all inputs but $\ensuremath{U_v}$ are fixed.
We will show this property for nodes in the upper part of the tree~$\ensuremath{\mathcal{T}}$, namely nodes $v$ such that the number of leaves $\ell_v$ is greater than some constant times~$\sqrt{n}$.
Unlike the hard distributions we gave for the convolution and multiplication problems, we will not give an explicit description of the array $F$ for which the Hamming distance lower bound holds. We only show the existence of such an $F$.
Further, for both the convolution and multiplication problems we showed that the lower bound was obtained for a majority of all $F$, where $U$ was chosen uniformly at random from $[q]^n$.
For the Hamming distance problem we will instead show that there exists an $F$ and some particular subset of $[q]^n$ such that when $U$ is drawn uniformly at random from this subset, we obtain the desired lower bound.
\subsection{Terminology, choice of $q$ and rounding issues} \label{sec:rounding}
We will refer to the input arrays, including $F$ and $U$, as \emph{strings}, and the set $[q]$ as the \emph{alphabet}. The values of the alphabet are referred to as \emph{symbols}.
Unlike the convolution and multiplication problems, for the Hamming distance problem there is no benefit in having an alphabet size greater than $n$, the length of $F$.
Our hard distribution is constructed such that with an alphabet of size $q$, $n$ has to be roughly $q^3$, or more.
So from now on we assume that~$n\geq q^3$.
Observe that whenever $n$ is polynomial in $q$, the number of bits needed to represent a symbol is $\delta\in\Theta(\log n)$.
We will introduce two special symbols denoted $\ensuremath{\star}$ and $\ensuremath{\diamond}$. It will be tidy to keep them separate throughout the presentation. Once we start digging into the details we will see that for a given $q$, the number of distinct symbols that we actually use in the hard instance is only $q-\sqrt{q}+2$, including the two special symbols. The alphabet $[q]$ is therefore large enough to accommodate every symbol that we use.
We will often treat various roots of integers as integers. For example, we may say that some string of length $q^{3/2}$ is the concatenation of $q$ smaller strings, each of length $q^{1/2}$. This is of course only possible whenever these numbers are integers, which is not necessarily the case for arbitrary $q$. One could overcome this problem by adjusting the values with appropriate floors and ceilings, as well as introducing padding symbols where necessary, but this would without doubt clutter the presentation. We have decided to keep it simple by treating any root of any integer as an integer, and assuming that everything adds up nicely. This is only to keep the presentation clean and it should be obvious from the context that this has no impact on the asymptotic behaviour.
\subsection{The overall structure of the fixed string $F$}
Recall the definition of the array $K_n\in\{0,1\}^n $ from Section~\ref{sec:conv-fixed-F} which consists of 0s everywhere except for at power-of-two positions from the right-hand end.
A hard distribution for the convolution problem was given by setting $F$ to $K_n$ and choosing $U$ uniformly at random from $[q]^n$.
Recall Figure~\ref{fig:conv} which illustrates why we chose this hard distribution: for each output in the second half of $A_v$,
that is between the alignments marked~\alignment{2} and~\alignment{3} in the figure,
exactly one input of $U_v$ is aligned with a 1 in $F$ and all other inputs of $U_v$ are aligned with 0. Thus, the second half of $\ensuremath{U_v}$ can be uniquely determined from the outputs $\ensuremath{A_v}$.
To show a lower bound for the Hamming distance problem we devise a string $F$ that resembles $K_n$.
First we introduce an auxiliary string $R$ of length $q^{3/2}$. We will use $r=q^{3/2}$ as a shorthand for $|R|$. Recall that $n\geq q^3$.
We will give the details of $R$ later but will highlight an important property of it below.
We obtain $F$ from $K_n$ by first replacing each 0 by a symbol that we denote~$\ensuremath{\star}$. The symbol~$\ensuremath{\star}$ will never occur in the stream, hence will always generate a mismatch.
We then replace every $r$-length substring starting at a~1 with a copy of $R$. Any~1 that is closer than $r$ positions from the right-hand end of $F$ is replaced by a $\ensuremath{\star}$-symbol instead.
Figure~\ref{fig:fstring} illustrates $F$.
\begin{figure}[t]
\centering
\insertdiagram{string_F}
\caption{\label{fig:fstring}The string $F$ has a copy of $R$ starting at each position $n-1-i$ where $i\geq |R|$ is a power of two. All other positions have the symbol~$\ensuremath{\star}$ which only occurs in $F$ and not in the stream.}
\end{figure}
\subsection{Properties of the string $R$ and Hamming arrays}
The string $R$ will play the same role as the value~1 in $K_n$ did for the convolution problem, namely it will allow us to uniquely determine symbols from $U$.
To see how, we first introduce the notion of a Hamming array, illustrated in Figure~\ref{fig:hamarray}.
\begin{figure*}[t]
\centering
\insertdiagram{hamarray}
\caption{\label{fig:hamarray}$\ensuremath{\textup{HamArray}}(R,U')$ contains the Hamming distances between $R$ and every $r$-length substring of $U'$ as $R$ slides along $U'$.}
\end{figure*}
For a string $U'$ of length $2r$,
we write $\ensuremath{\textup{HamArray}}(R,U')$ to denote the $(r+1)$-length array such that for $i\in[r+1]$, $\ensuremath{\textup{HamArray}}(R,U')[i]$ is the Hamming distance between $R$ and $U'[i,i+r-1]$. That is, $\ensuremath{\textup{HamArray}}(R,U')$ contains the Hamming distances between $R$ and every $r$-length substring of $U'$.
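In code the definition reads as follows; this brute-force Python sketch (ours, for illustration only) computes $\ensuremath{\textup{HamArray}}(R,U')$ for any pair of sequences of lengths $r$ and $2r$.
\begin{verbatim}
def ham(a, b):
    # Hamming distance between two equal-length sequences
    return sum(x != y for x, y in zip(a, b))

def ham_array(R, Uprime):
    r = len(R)
    assert len(Uprime) == 2 * r
    return [ham(R, Uprime[i:i + r]) for i in range(r + 1)]
\end{verbatim}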
To see the resemblance with a 1 in $K_n$, we give the following lemma. The proof is non-trivial and deferred to Section~\ref{sec:R}. A high-level explanation of the lemma is given immediately after its statement.
\begin{lem}
\label{lem:combinatorial}
There exists a constant $k>0$ such that for any $r$ there is an $r$-length string $R \in [r^{2/3}]^r$ such that
\[
\Big|\Set{\ensuremath{\textup{HamArray}}(R,U') \;\mid\; \textup{$U' \in [r^{2/3}]^{2r}$}}\Big| \,\geq\, r^{kr}.
\]
\end{lem}
First recall that $q=r^{2/3}$, hence both $R$ and $U'$ of the lemma are over an alphabet of $q$ symbols.
The lemma says that there is a string $R$ such that over all possible $U'$ of length $2|R|$, one can obtain $q^{\Theta(r)}$ distinct Hamming arrays. Since there are only $q^{2r}$ possible values of $U'$, this means that a non-negligible fraction of all $U'$ can be put in one-to-one correspondence with Hamming arrays.
Thus, as symbols in $\ensuremath{U_v}$ slide past an $R$ in a similar fashion to symbols in $\ensuremath{U_v}$ sliding past a 1 in $K_n$ in the hard distribution for the convolution problem, we can infer a substantial portion of the symbols of $\ensuremath{U_v}$ from the outputs $\ensuremath{A_v}$, hence obtain large entropy.
We formalise this in the next section and explain how the lower bound is obtained.
\subsection{The hard distribution and obtaining the lower bound}
Relying on Lemma~\ref{lem:combinatorial} above we will now describe a hard distribution for the Hamming distance problem and use it to prove Lemma~\ref{lem:ham-H-lower}.
Given a string $R \in [q]^r$, we let
\[
\ensuremath{{\mathcal{U}}}_R \subseteq [q]^{2r}
\]
be any largest set of $2r$-length strings such that for any two distinct strings $U'_1, U'_2\in \ensuremath{{\mathcal{U}}}_R$,
\[
\ensuremath{\textup{HamArray}}(R,U'_1)\,\neq\, \ensuremath{\textup{HamArray}}(R,U'_2).
\]
To uniquely specify a string in $\ensuremath{{\mathcal{U}}}_R$ we need $\log_2|\ensuremath{{\mathcal{U}}}_R|$ bits.
By Lemma~\ref{lem:combinatorial} we have that there exists an $R$ such that $\log_2|\ensuremath{{\mathcal{U}}}_R|\in \Theta(r\log q)$ since $q=r^{2/3}$.
For the hard distribution we use $F$ from above with an $R$ that has the properties of Lemma~\ref{lem:combinatorial}. The input $U$ is given by concatenating $n/(2r)$ strings drawn independently and uniformly at random from $\ensuremath{{\mathcal{U}}}_R$.
Similarly to Figure~\ref{fig:conv} we can now illustrate how strings from $\ensuremath{{\mathcal{U}}}_R$ slide past $R$ during the second half of the outputs in $\ensuremath{A_v}$, where $v$ is any node of $\ensuremath{\mathcal{T}}$ such that $\ell_v\geq \sqrt{n} \geq r$. Recall that we have assumed that $n\geq q^3=r^2$.
In Figure~\ref{fig:doubling} we have illustrated $U_v$ as the concatenation of random strings $U'_1,\dots,U'_m$ drawn from $\ensuremath{{\mathcal{U}}}_R$, where $m=\ell_v/(4r)$.
\begin{figure*}[t]
\centering
\insertdiagram{doubling_slide}
\caption{\label{fig:doubling}
Three alignments of $F$ and the stream $S$:
\alignment{1}~the last value of $\ensuremath{U_v}$ has just arrived, \alignment{2}~half of the outputs in $\ensuremath{A_v}$ have been outputted, and \alignment{3}~all outputs in $\ensuremath{A_v}$ have been outputted. The string $\ensuremath{U_v}$ is here the concatenation of $U'_1,\dots,U'_m\in\ensuremath{{\mathcal{U}}}_R$, where $m=\ell_v/(4r)$.}
\end{figure*}
Between alignments~\alignment{2} and~\alignment{3} in the figure, the second half of the substrings $U'_i$ of $\ensuremath{U_v}$ slide in turn past $R$, and from the outputs in $\ensuremath{A_v}$ we can infer $\ensuremath{\textup{HamArray}}(R,U'_i)$ for each such $U'_i$. By construction of $\ensuremath{{\mathcal{U}}}_R$ this allows us to uniquely determine the strings $U'_i$.
Thus, over all outputs $\ensuremath{A_v}$, a total of $m/2$ (give or take a constant number to compensate for border cases) substrings $U'_i$ of $\ensuremath{U_v}$ can be determined, implying by Lemma~\ref{lem:combinatorial} that the entropy of $\ensuremath{A_v}$ is
$\Omega((m/2)\cdot r\log q)=\Omega(\ell_v \cdot\delta)$,
where $\delta=\lfloor\log_2 q\rfloor$.
This concludes the proof of Lemma~\ref{lem:ham-H-lower}.
\section{A string with many~different~Hamming~arrays}
In this section we prove Lemma~\ref{lem:combinatorial},
that is, we show that there exists a string $R$ which gives many different Hamming arrays.
This is arguably the most technically detailed part of our lower bound proofs.
To recap, we claim that for any $r$ there exists a string $R \in [r^{2/3}]^r$ which, over all strings $U'\in[r^{2/3}]^{2r}$, permits at least $r^{kr}$ distinct Hamming arrays, where $k$ is a constant.
Next we describe the overall structure of an $R$ with this property.
\subsection{The structure of $R$}
To shorten notation it will be convenient to introduce the variable $\mu$ as a shorthand for $r^{1/3}$. Hence $R$ has length $r=\mu^3$ and $q=\mu^2$. The string $R$ is constructed by concatenating $\mu^2$ substrings, each of length $\mu$.
For $i\in[\mu^2]$ we let $\ensuremath{\rho}_i$ denote the $i$-th substring of $R$, that is
\[
R\,=\,\ensuremath{\rho}_0\,\ensuremath{\rho}_1\cdots\ensuremath{\rho}_{(\ensuremath{\mu}^2-1)}.
\]
Each substring $\ensuremath{\rho}_i$ can only contain symbols from the set $\{\ensuremath{\star},i\}$, where $\ensuremath{\star}$ is the special symbol that will not occur in the stream.
Figure~\ref{fig:R} illustrates an example of $R$.
\begin{figure*}[t]
\centering
\insertdiagram{string_R}
\caption{\label{fig:R}An example of the string $R$ of length $r=\mu^3$, which is the concatenation of the $\mu^2$ strings $\ensuremath{\rho}_0,\dots,\ensuremath{\rho}_{\mu^2-1}$, where $\ensuremath{\rho}_i\in \{\ensuremath{\star},i\}^\mu$.}
\end{figure*}
Doing the maths correctly, the total number of distinct symbols in $R$ could reach $\mu^2+1=r^{2/3}+1=q+1$. As pointed out in Section~\ref{sec:rounding} we do indeed introduce two additional symbols, of which one is $\ensuremath{\star}$, however, to keep notation clutter-free we abuse the notion of $q$ by giving it a slack that should obviously be adjusted by some constant where appropriate.
The purpose of the substrings $\ensuremath{\rho}_i$ is to support a reduction from vector addition to Hamming arrays that we explain next.
\subsection{Vector sums and Hamming arrays} \label{sec:vecsums-hamarrays}
The $\mu$-length substring $\ensuremath{\rho}_i$ of $R$ corresponds to a 0/1-vector $v_i\in\{0,1\}^\mu$ such that the $j$-th component of $v_i$ is 0 if and only if the $j$-th symbol of $\ensuremath{\rho}_i$ is $\ensuremath{\star}$. For example, $\ensuremath{\rho}_2={2}{\star}{\star}{2}{2}$ from Figure~\ref{fig:R} corresponds to the vector $v_2=(1,0,0,1,1)$.
To explain the idea of how vector addition can be carried out by using the concept of a Hamming array of $R$ and some string $U'\in[\mu^2]^{2r}$, consider Figure~\ref{fig:blocks} as an illustrative example.
\begin{figure*}[t]
\centering
\insertdiagram{pattern-blocks}
\caption{\label{fig:blocks}Setting symbols of $U'$ renders a large set of possible Hamming distance outputs.}
\end{figure*}
Here the string $U'$ contains the other special symbol that we introduce, denoted~$\ensuremath{\diamond}$. This symbol does not occur in $R$, hence will always mismatch.
In the figure we see that all positions of $U'$ have the symbol $\ensuremath{\diamond}$, except for three positions where the symbols are 0,~5 and~7, respectively. The positions holding these symbols are chosen such that in the first alignment between $R$ and $U'$, marked~\alignment{1}, the symbols 0,~5 and~7 sit immediately after $\ensuremath{\rho}_0$, $\ensuremath{\rho}_5$ and $\ensuremath{\rho}_7$ in $R$, respectively.
As $R$ slides $\mu$ steps to the right towards the alignment marked~\alignment{2}, the symbols 0, 5 and~7 of $U'$ will generate matches whenever they are aligned with their corresponding symbols in $R$.
Thus, for $i\in\{1,\dots,\mu\}$,
\[
\ensuremath{\textup{HamArray}}(R,U')[i] ~=~ r - (v_0 + v_5 + v_7)[i],
\]
where $(v_0 + v_5 + v_7)[i]$ is the $i$-th component of the sum of the vectors $v_0$, $v_5$ and~$v_7$.
In other words, from $\ensuremath{\textup{HamArray}}(R,U')[1,\mu]$ we can uniquely determine the sum $v_0+v_5+v_7$.
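The window identity is easy to check mechanically. The following Python sketch (ours, not part of the proof) rebuilds the toy construction above with random blocks $\ensuremath{\rho}_i$; note that under the indexing convention fixed in the code, window $t$ reads off the block components in reverse order (index $\mu-t$), which matches the figure up to the direction of the slide.
\begin{verbatim}
import random
random.seed(0)
STAR, DIAMOND = -1, -2        # stand-ins for the two special symbols
mu = 4                        # toy size, so r = mu**3 = 64
rho = [[random.choice([STAR, i]) for _ in range(mu)]
       for i in range(mu * mu)]
R = [s for block in rho for s in block]
r = len(R)
vec = [[0 if s == STAR else 1 for s in block] for block in rho]
U = [DIAMOND] * (2 * r)
chosen = [0, 5, 7]
for i in chosen:              # symbol i sits right after rho_i
    U[(i + 1) * mu] = i
for t in range(1, mu + 1):    # the next mu alignments
    hd = sum(x != y for x, y in zip(R, U[t:t + r]))
    assert hd == r - sum(vec[i][mu - t] for i in chosen)
\end{verbatim}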
The idea above can be repeated by populating $U'$ with more symbols from $[\mu^2]$. As an example we have added the symbols 1 and 2, and another copy of~5 to $U'$, which is the string denoted $U''$ in the figure.
As $R$ slides another $\mu$ steps to the right, $\ensuremath{\textup{HamArray}}(R,U'')[\mu+1,2\mu]$ uniquely specifies the sum $v_1+v_2+v_5$.
Observe that as we populate $U'$ with symbols, positions get \emph{blocked}. For example, we cannot obtain the sum $v_1+v_2+v_4$ from $\ensuremath{\textup{HamArray}}(R,U'')[\mu+1,2\mu]$ since the position where the~4 has to be set is already occupied by a~5.
Observe however that setting symbols of $U'$ as above generates matches only in the intended $\mu$-length window of the Hamming array. Thus, we have full control of which vector sums we want to compute, under the constraint that positions get blocked, limiting the choice of vectors.
The conclusion this far is that vector sums have a direct correspondence with the Hamming array. Next we take the ideas from above further and show that if there exists a pool of $\mu^2$ vectors such that many different vector sums can be obtained when adding $\mu$ vectors from the pool, then the number of distinct $\ensuremath{\textup{HamArray}}(R,U')$ one can obtain is large. This would prove Lemma~\ref{lem:combinatorial}.
\subsection{The string $R$ and the proof of Lemma~\ref{lem:combinatorial}} \label{sec:R}
Before we state the next lemma we need to define what we mean by sub-multiset of a multiset~$X$. We consider an arbitrary ordering of the elements of $X$ and refer to $X[i]$ as the $i$-th element of $X$.
We use the term \emph{sub-multiset} of $X$ to denote any multiset obtained from $X$ by removing zero or more elements. We will use the notation $\sqsubseteq$ to denote the sub-multiset relation so that we have, for example, $\{1,1,4,5,5\} \sqsubseteq \{1,1,1,4,4,5,5,7,8\}$.
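In code, the relation $\sqsubseteq$ is just a comparison of multiplicities; the following Python sketch (ours) implements it with counters.
\begin{verbatim}
from collections import Counter

def is_submultiset(X, Y):
    # X is a sub-multiset of Y: no element occurs more often in X
    cX, cY = Counter(X), Counter(Y)
    return all(cX[e] <= cY[e] for e in cX)

assert is_submultiset([1, 1, 4, 5, 5], [1, 1, 1, 4, 4, 5, 5, 7, 8])
assert not is_submultiset([1, 1, 1, 1], [1, 1, 1, 4, 4, 5, 5, 7, 8])
\end{verbatim}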
\begin{lem}
\label{lem:vecsum}
For any $\ensuremath{\mu}>40$ such that $\ensuremath{\mu}-1$ is a prime, there exists a multiset $V$ of vectors from
$\{0,1\}^\ensuremath{\mu}$ such that $|V|=\ensuremath{\mu}(\ensuremath{\mu}-1)$ and for any sub-multiset $V'\sqsubseteq V$ of size at least
$(63/64)|V|$,
%
\begin{align*}
\left|\set{w_1 + \cdots + w_\ensuremath{\mu} \,|\, \{w_1,\dots,w_\ensuremath{\mu}\} \sqsubseteq V'}\right| ~\geq~ \ensuremath{\mu}^{(\ensuremath{\mu}/10)} .
\end{align*}
\end{lem}
The lemma is proved in Section~\ref{sec:proofvecsum} and we will now use it to construct an $R$ that proves Lemma~\ref{lem:combinatorial}. The introduction of a sub-multiset $V'$ in the lemma above is to reflect the fact that positions of $U'$ get blocked as we populate it with symbols. We will see next that at any step, a fraction of at most $1/64$ of the $\mu(\mu-1)$ vectors of $V$ are blocked.
Suppose that $V=\{v_0,\dots,v_{\mu(\mu-1)-1}\}$ is a multiset of $\mu$-length vectors over $\{0,1\}$ with the properties of Lemma~\ref{lem:vecsum}. That is, we assume that $\mu>40$ and $\mu-1$ is a prime. Again as discussed in Section~\ref{sec:rounding}, we can always tweak relevant values in order to meet these criteria.
The string $R$ is simply chosen such that for $i\in[\mu(\mu-1)]$, the substring $\ensuremath{\rho}_i$ corresponds to the vector $v_i$ of $V$. For $i\in\{\mu(\mu-1),\dots,(\mu^2-1)\}$, the substring $\ensuremath{\rho}_i=\ensuremath{\star}^\mu$ as we will ignore these substrings anyway.
In order to show that this $R$ proves Lemma~\ref{lem:combinatorial} we will populate a $2r$-length string $U'$ with symbols and show how $\mu$-length subarrays of $\ensuremath{\textup{HamArray}}(R,U')$ correspond to vector sums of $\mu$ vectors chosen arbitrarily from a sub-multiset of $V$. The string $U'$ is obtained as follows:
\begin{enumerate}
\item Set all $2\mu^3$ positions of $U'$ to the symbol $\ensuremath{\diamond}$.
\item Align $R$ with the left half of $U'$ as illustrated in Figure~\ref{fig:hamarray}.
\item Let $V'\sqsubseteq V$ be the sub-multiset of vectors that are not blocked. (Initially this means that $V'=V$ but as we return to this step, $V'$ shrinks.)
\item Choose any sub-multiset $\{w_1,\dots,w_\mu\} \sqsubseteq V'$ and set their corresponding positions in $U'$ accordingly.
\item Slide $R$ by $\mu$ steps along $U'$. Over these alignments, $\ensuremath{\textup{HamArray}}(R,U')$ uniquely specifies the vector sum $w_1+\cdots +w_\mu$.
\item[~] Steps 3--5 are referred to as a \emph{round}.
\item Repeat from Step~3 for a total of $(\mu-1)/64$ rounds. Observe that a total of $\mu(\mu-1)/64=(1/64)|V|$ vectors get blocked, hence $|V'|$ is always at least $(63/64)|V|$.
\item Slide $R$ by \emph{one single step} along $U'$. This will offset all previously blocked vectors and allow us to start over again at Step~3 as if no vectors are blocked.
This is repeated until this step is reached for the $\mu$-th time. At that point the offsetting of blocked vectors has cycled and previously set positions of $U'$ are yet again blocking.
\end{enumerate}
Populating $U'$ according to the procedure above means that $R$ is shifted by a total of
\[
\mu\cdot (\mu-1)/64 \cdot \mu + (\mu-1) ~=~ \mu^3/64 - \mu^2/64 +\mu-1 ~<~ r
\]
steps. Over these steps we have by Lemma~\ref{lem:vecsum} that for each $\mu$-length subarray of $\ensuremath{\textup{HamArray}}(R,U')$ that corresponds to a vector sum, there is a choice of at least $\mu^{(\mu/10)}$ distinct values.
Thus, when $\mu>40$, the number of distinct $\ensuremath{\textup{HamArray}}(R,U')$ is at least
\[
\left(\mu^{(\mu/10)}\right)^{ \mu(\mu-1)/64 }
~=~
\mu^{(\mu^3-\mu^2)/640}
~\geq~
\mu^{(\mu^3/656)}
~=~
\left(r^{(1/3)}\right)^{(r/656)}
~=~
r^{kr},
\]
where $k=1/1968$.
This concludes the proof of Lemma~\ref{lem:combinatorial}.
\section{Vector sets with many distinct sums} \label{sec:proofvecsum}
In this section, we prove Lemma~\ref{lem:vecsum}.
We first rephrase the lemma slightly by introducing some notation.
For any multiset $V'$ of vectors from $\{0,1\}^\ensuremath{\mu}$, we define
\[
\ensuremath{\textup{Sum}}(V') = \set{w_1 + \cdots + w_\ensuremath{\mu} \,|\, \{w_1,\dots,w_\ensuremath{\mu}\} \sqsubseteq V'}
\]
to be the set of distinct vector sums one can obtain by summing the vectors of $\mu$-sized sub-multisets of $V'$. Addition is element-wise and over the integers.
Lemma~\ref{lem:vecsum} says that there exists a multiset $V$ of vectors from $\{0,1\}^\ensuremath{\mu}$ such that $|V|=\ensuremath{\mu}(\ensuremath{\mu}-1)$ and for any sub-multiset $V'\sqsubseteq V$ of size at least $(63/64)|V|$, we have that $|\ensuremath{\textup{Sum}}(V')| \geq \ensuremath{\mu}^{(\ensuremath{\mu}/10)}$.
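For toy sizes, $\ensuremath{\textup{Sum}}(V')$ can be computed by brute force; the following Python sketch (ours; the lemma itself concerns $\mu>40$, far beyond the reach of enumeration) makes the definition concrete.
\begin{verbatim}
from itertools import combinations
import random

def Sum(Vp, mu):
    # distinct sums over all mu-sized sub-multisets of the multiset
    # Vp, given as a list of 0/1 tuples (repeated entries allowed)
    sums = set()
    for idx in combinations(range(len(Vp)), mu):
        sums.add(tuple(map(sum, zip(*(Vp[j] for j in idx)))))
    return sums

random.seed(1)
mu = 5                       # toy size only
V = [tuple(random.getrandbits(1) for _ in range(mu))
     for _ in range(mu * (mu - 1))]
print(len(Sum(V, mu)))       # number of distinct mu-fold sums
\end{verbatim}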
Our approach will be an application of the probabilistic method. Specifically, we will show that when the vectors of $V$ are sampled uniformly at random, the expected value
\[
\expect{|\ensuremath{\textup{Sum}}(V)|}\,\geq\, \frac12 (\ensuremath{\mu}-1)^{(\ensuremath{\mu}/9)}.
\]
Thus, there must exist a $V$ such that $|\ensuremath{\textup{Sum}}(V)|\geq (\ensuremath{\mu}-1)^{(\ensuremath{\mu}/9)}/2$.
Given such a $V$, we then show that for every sub-multiset $V'\sqsubseteq V$ such that $|V'| \geq (63/64)|V|$,
$|\ensuremath{\textup{Sum}}(V')|\geq \ensuremath{\mu}^{(\ensuremath{\mu}/10)}$.
\subsection{Vectors and codes}
We now describe a connection between vectors and codes. We will require the following lemma from the field of Coding Theory.
The lemma is tailored for our needs and is a special case of ``Construction~II'' in~\cite{AGM:1992}.
For our purposes, a binary constant-weight cyclic code can be seen simply as a set of bit-strings (codewords) with two additional properties: the first is that all codewords have constant Hamming weight $\ensuremath{\mu}$, i.e. they have exactly $\ensuremath{\mu}$ 1s, and the second property is that any cyclic shift of a codeword is also a codeword.
\begin{lem}[\cite{AGM:1992}]
\label{lem:cyclic-code}
For any $\ensuremath{\mu}\geq 4$ such that $\ensuremath{\mu}-1$ is a prime and any odd $\gamma\in[\ensuremath{\mu}]$, there is a binary constant-weight cyclic code with $(\ensuremath{\mu}-1)^{\gamma}$ codewords of length $\ensuremath{\mu}(\ensuremath{\mu}-1)$ and Hamming weight $\ensuremath{\mu}$ such that any two codewords have Hamming distance at least $2(\ensuremath{\mu}-\gamma)$.
\end{lem}
Let $\ensuremath{\widetilde{C}}$ be the binary code that contains \emph{all} codewords of length $\mu(\mu-1)$ with Hamming weight $\ensuremath{\mu}$.
We can think of a codeword of $\ensuremath{\widetilde{C}}$ as representing a $\mu$-sized sub-multiset $X\sqsubseteq V$ such that the $i$-th vector of $V$ (under any enumeration of the elements of $V$) is in $X$ if and only if position $i$ of the codeword is~$1$.
That is, $\ensuremath{\widetilde{C}}$ represents all possible sub-multisets of $V$ of size $\mu$.
To shorten notation, we refer to $\ensuremath{\widetilde{c}}\in\ensuremath{\widetilde{C}}$ as both a codeword and a sub-multiset of $\mu$ vectors from $V$.
Suppose that $\mu\geq 4$ and $\mu-1$ is a prime.
We let $C\subseteq \ensuremath{\widetilde{C}}$ be a cyclic code of size $(\ensuremath{\mu}-1)^\gamma$,
where $\gamma$ is any odd integer in the interval $[\ensuremath{\mu}/9,\ensuremath{\mu}/8]$,
such that the Hamming distance between any two codewords in $C$ is at least $7\mu/4$.
The existence of such a $C$ is guaranteed by Lemma~\ref{lem:cyclic-code} since
$2(\mu-\mu/8)=7\mu/4$.
Observe that every codeword of $C$ has Hamming weight $\ensuremath{\mu}$.
For $c\in C$ we define the \emph{ball}
\[
\ball{c}=\set{\ensuremath{\widetilde{c}} ~\mid~ \text{$\ensuremath{\widetilde{c}}\in\ensuremath{\widetilde{C}}$ and
Hamming distance between $c$ and $\ensuremath{\widetilde{c}}$ is at most $\ensuremath{\mu}/16$}}
\]
to be the set of bit strings in $\ensuremath{\widetilde{C}}$ at Hamming distance at most $\ensuremath{\mu}/16$ from~$c$.
Hence the $|C|$ balls are all pairwise disjoint: the Hamming distance between any two codewords in $C$ is at least $7\mu/4$, which exceeds twice the ball radius $\ensuremath{\mu}/16$.
We have that for any $c\in C$, using the fact $\binom a b \leq (ae/b)^b$,
\begin{equation*}
\label{eq:ball-size}
\big|\ball{c}\big| \,\leq\, \binom \ensuremath{\mu} {\ensuremath{\mu}/16}\cdot \binom{|V|} {\ensuremath{\mu}/16}
\,\leq\, \left(\frac{\ensuremath{\mu} e\cdot|V|e}{(\ensuremath{\mu}/16)^2}\right)^{\ensuremath{\mu}/16}
\,\leq\, \left(\frac{\ensuremath{\mu}}{16}\right)^{\ensuremath{\mu}/16} .
\end{equation*}
For $\ensuremath{\widetilde{c}}\in \ensuremath{\widetilde{C}}$ we write $\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}})$ to denote the vector in $[\ensuremath{\mu}+1]^\ensuremath{\mu}$ obtained by
adding the $\ensuremath{\mu}$ vectors in the vector set $\ensuremath{\widetilde{c}}$, that is, $\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}})$ is the vector sum of the vectors represented by $\ensuremath{\widetilde{c}}$.
Towards proving Lemma~\ref{lem:vecsum} we will show that when the vectors of $V$ are chosen uniformly at random, we expect more than half of all $|C|$ balls to have the property that for every $\ensuremath{\widetilde{c}}$ in the ball, $\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}})$ can only be obtained by summing vectors from that ball.
\subsection{Choosing the vectors in $V$}
So far we have not discussed the choice of vectors in $V$.
We consider the case where the vectors are chosen independently and uniformly at random from $\{0,1\}^\ensuremath{\mu}$.
We will first show that
\[
\expect{|\ensuremath{\textup{Sum}}(V)|}\,\geq\, \frac12 (\ensuremath{\mu}-1)^{(\ensuremath{\mu}/9)},
\]
then we will fix $V$ and show that it has the property of Lemma~\ref{lem:vecsum}.
For any $\ensuremath{\widetilde{c}}_1\in\ball{c_1}$ and $\ensuremath{\widetilde{c}}_2\in\ball{c_2}$, where $c_1,c_2\in C$ are distinct, we now analyse the probability that
$\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_1)=\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_2)$. From the definitions above it follows that $\ensuremath{\widetilde{c}}_1$ and $\ensuremath{\widetilde{c}}_2$ must differ
on at least $7\ensuremath{\mu}/4-2(\ensuremath{\mu}/16)\geq \ensuremath{\mu}$ positions, implying that the two vector sets $\ensuremath{\widetilde{c}}_1$ and
$\ensuremath{\widetilde{c}}_2$ have at most $\ensuremath{\mu}/2$ vectors in common, thus at least $\ensuremath{\mu}/2$ of the vectors in $\ensuremath{\widetilde{c}}_1$ are not in $\ensuremath{\widetilde{c}}_2$.
Let $w_1,\dots,w_{(\ensuremath{\mu}/2)}$ denote an arbitrary choice of $\mu/2$ of those vectors.
For $i\in[\mu]$ we can write the $i$-th component of $\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_1)$ as
\[
\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_1)[i] ~=~ w_1[i]+\cdots +w_{(\ensuremath{\mu}/2)}[i] + x[i],
\]
where the vector $x$ does not depend on $w_1,\dots,w_{(\ensuremath{\mu}/2)}$.
In order to have $\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_1)=\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_2)$ we must have
\[
w_1[i]+\cdots +w_{(\ensuremath{\mu}/2)}[i] ~=~ \ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_2)[i] - x[i]
\]
for each $i\in [\ensuremath{\mu}]$.
Since the vectors are picked independently and uniformly at random from $\{0,1\}^\mu$,
the most likely value of $w_1[i]+\cdots +w_{(\ensuremath{\mu}/2)}[i]$ is $\ensuremath{\mu}/4$.
The probability that this sum equals $\mu/4$ is
\[
\ensuremath{\textup{Pr}}\left(w_1[i]+\cdots +w_{(\ensuremath{\mu}/2)}[i] = \frac{\ensuremath{\mu}}{4}\right)
~=~
\binom{\ensuremath{\mu}/2}{\ensuremath{\mu}/4 }\cdot \left(\frac12\right)^{\ensuremath{\mu}/2}
~\leq~
\left(\frac \ensuremath{\mu} 2\right)^{-1/2},
\]
where the inequality follows from the fact that for any even $a$, $\binom a {a/2}\leq 2^a/\sqrt{a}$.
Thus, the probability that $\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_1)=\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_2)$, that is
$\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_1)[i]=\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_2)[i]$ for all $i\in [\mu]$, is
\begin{equation}
\label{eq:sum-equal}
\ensuremath{\textup{Pr}}\big(\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_1)=\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_2)\big)
\, \leq \,
\left(\left(\frac{\ensuremath{\mu}}{2}\right)^{-1/2}\right)^\mu
\, \leq \,
\left(\frac{\ensuremath{\mu}}{2}\right)^{-\ensuremath{\mu}/2}.
\end{equation}
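The elementary binomial bound used in the last step is easy to spot-check numerically; the following Python sketch (ours) does so for a few even values of $a$.
\begin{verbatim}
from math import comb, sqrt

for a in (2, 4, 8, 16, 32, 64, 128):
    assert comb(a, a // 2) <= 2 ** a / sqrt(a)
\end{verbatim}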
For two distinct $c_1,c_2\in C$, we define the indicator random variable
\begin{equation*}
I(c_1,c_2) =
\begin{cases}
0 & \textup{if $\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_1)=\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_2)$ for some $\ensuremath{\widetilde{c}}_1\in\ball{c_1}$ and $\ensuremath{\widetilde{c}}_2\in\ball{c_2}$,}\\
1 & \textup{otherwise.}
\end{cases}
\end{equation*}
Taking the union bound over all $\ensuremath{\widetilde{c}}_1\in\ball{c_1}$ and
$\ensuremath{\widetilde{c}}_2\in\ball{c_2}$, and using the probability bound in Equation~(\ref{eq:sum-equal}), we have
\begin{align}
\label{eq:union-inner}
\ensuremath{\textup{Pr}} \big( I(c_1, c_2) = 0 \big)
\, &\leq \, \big|\ball{c_1}\big|\cdot \big|\ball{c_2}\big| \cdot \left(\frac{\ensuremath{\mu}}{2}\right)^{-\ensuremath{\mu}/2} \\
\, &\leq \, \left(\frac{\ensuremath{\mu}}{16}\right)^{2(\ensuremath{\mu}/16)} \left(\frac{\ensuremath{\mu}}{2}\right)^{-\ensuremath{\mu}/2} \notag \\
&\leq \, \left(\frac{1}{\ensuremath{\mu}^3}\right)^{\ensuremath{\mu}/8}. \notag
\end{align}
For any $c_1\in C$, we now define the indicator random variable
\begin{equation*}
I'(c_1) =
\begin{cases}
0 & \textup{if $I(c_1,c_2)=0$ for some $c_2\in C\setminus \{c_1\}$,}\\
1 & \textup{otherwise.}
\end{cases}
\end{equation*}
That is, $I'(c_1)=1$ if and only if
$\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_1)\neq\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_2)$
for every $\ensuremath{\widetilde{c}}_1\in\ball{c_1}$ and every $\ensuremath{\widetilde{c}}_2$ from another ball.
In other words, the sums of codewords in $\ball{c_1}$ are unique for this ball.
We say that $\ball{c_1}$ is \emph{good} if and only if $I'(c_1)=1$.
It is, however, possible that $\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_1)=\ensuremath{\textup{sum}}(\ensuremath{\widetilde{c}}_2)$ when $\ensuremath{\widetilde{c}}_2$ is from the same ball as $\ensuremath{\widetilde{c}}_1$.
Taking the union bound over all $c_2\in C$, and using Equation~(\ref{eq:union-inner}) and the fact that $|C|\leq \ensuremath{\mu}^{(\ensuremath{\mu}/8)}$, we have
\begin{align*}
\ensuremath{\textup{Pr}} \big( I'(c_1) = 0 \big) \, &\leq \!\!\! \sum_{c_2 \in C\setminus \{c_1\}} \!\!\!\! \ensuremath{\textup{Pr}} \big( I(c_1, c_2) = 0 \big)\\
\, &\leq \, |C| \left(\frac{1}{\ensuremath{\mu}^3}\right)^{\ensuremath{\mu}/8}
\, \leq \, \ensuremath{\mu}^{(\ensuremath{\mu}/8)} \left(\frac{1}{\ensuremath{\mu}^3}\right)^{\ensuremath{\mu}/8}
\, \leq \, \frac{1}{2} \,. \notag
\end{align*}
By linearity of expectation we have that the expected number of good balls is
\[
\expect{\sum_{c\in C}I'(c)} ~\geq~ \frac{|C|}2.
\]
The conclusion is that there is a multiset $V$ of vectors for
which at least $|C|/2$ balls are good, hence
\[
|\ensuremath{\textup{Sum}}(V)|\,\geq\, \frac12 (\ensuremath{\mu}-1)^{(\ensuremath{\mu}/9)}
\]
since $|C|\geq (\ensuremath{\mu}-1)^{(\ensuremath{\mu}/9)}$.
\subsection{Many distinct sums for subsets of $V$}
Suppose now that $V$ is a multiset such that the number of good balls is at least $|C|/2$, hence $|\ensuremath{\textup{Sum}}(V)|\geq (\ensuremath{\mu}-1)^{(\ensuremath{\mu}/9)}/2$. From the conclusion above we know that such a set must exist.
It remains to show that for any sub-multiset $V' \sqsubseteq V$ of size $(63/64)|V|$, $|\ensuremath{\textup{Sum}}(V')|$ is also large.
Over all codewords in $C$, seen as bit strings, the total number of 1s is $|C|\ensuremath{\mu}$. Since $C$ is cyclic, the number of codewords in $C$ that have a 1 in position $i\in [|V|]$ is the same as the number of codewords that have a 1 in position $j$, for any $j\in [|V|]$.
Thus, for each one of the $|V|$ positions there are exactly $|C|\ensuremath{\mu}/|V|$ codewords in $C$ with a 1 in that position.
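This double-counting argument uses only the cyclicity of $C$; the following Python sketch (ours) illustrates it on a toy cyclic code consisting of all shifts of a single weight-3 word.
\begin{verbatim}
base = (1, 1, 0, 1, 0, 0)                    # weight 3, length 6
C = {base[s:] + base[:s] for s in range(6)}  # all cyclic shifts
for j in range(6):
    assert sum(c[j] for c in C) == len(C) * 3 // 6  # |C|*weight/length
\end{verbatim}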
Let $V'\sqsubseteq V$ be of size $(63/64)|V|$. Let $J$ be the set of $|V|/64$ positions that correspond to the vectors of $V$ that are not in $V'$.
We will now modify the codewords of $C$ as follows.
For each $j\in J$ and codeword $c\in C$ we
set $c[j]$ to 0.
The total number of 1s across all codewords in $C$ is therefore reduced from $|C|\mu$ by exactly
\[
\frac{|V|}{64} \cdot
\frac{|C|\ensuremath{\mu}}{|V|} = \frac{|C|\ensuremath{\mu}}{64}.
\]
The number of codewords of $C$ that have lost $\ensuremath{\mu}/16$ or more 1s is
therefore at most
\[
\frac{|C|\ensuremath{\mu}}{64}\,\Big/\,\frac{\ensuremath{\mu}}{16}=\frac{|C|}4.
\]
Let $C'\subseteq C$ be the set of codewords $c$
that have lost less than $\ensuremath{\mu}/16$ 1s and for which $\ball{c}$ is good. Since there are at least $|C|/2$
good balls, $|C'|\geq |C|/4$.
Let the code $C''$ be obtained from $C'$ by replacing, for each codeword in $C'$, every removed 1 with a 1 at some other arbitrary position that is not in $J$.
Thus, every $c''\in C''$ has Hamming weight $\ensuremath{\mu}$ and belongs to the good ball $\ball{c'}$, where $c''$ was obtained from $c'\in C'$.
Hence $|C''|=|C'|\geq |C|/4$.
Every codeword of $C''$, seen as a sub-multiset of $V$, only contains vectors from the sub-multiset $V'$.
From the definition of a good ball we have that at least $|C|/4$ distinct vector sums can be
obtained by adding $\ensuremath{\mu}$ vectors from $V'$.
Thus,
\[
|\ensuremath{\textup{Sum}}(V')|
\,\geq\,
\frac14 (\ensuremath{\mu}-1)^{(\ensuremath{\mu}/9)}
\,\geq\,
\ensuremath{\mu}^{(\ensuremath{\mu}/10)}
\]
when $\ensuremath{\mu}>40$. This completes the proof of Lemma~\ref{lem:vecsum}.
\section*{Acknowledgements}
RC would like to thank Elad Verbin, Kasper Green Larsen, Qin Zhang and the members of CTIC for helpful and insightful discussions about lower bounds during a visit to Aarhus University. We thank Kasper Green Larsen in particular for pointing out that the cell-probe lower bounds we give are in fact tight. Some of the work on this paper has been carried out during RC's visit at the University of Washington.
\printbibliography
\end{document}
|
2,877,628,088,422 | arxiv | \section{\label{sec:intro}Introduction}
This paper presents the recently developed framework for the outage probability and \emph{transmission capacity} \cite{WebYan2005} in a one-hop wireless {\em ad hoc} network. The transmission capacity is defined as the number of successful transmissions taking place in the network per unit area, subject to a constraint on outage probability. In addition to being of general interest, the advantage of transmission capacity -- relative to, say, the transport capacity or average sum throughput -- lies largely in that it can be exactly derived in some important cases, and tightly bounded in many others, as we shall show. From the expressions and approach given in this paper the exact dependence between system performance (transmission capacity, outage probability) and the possible design choices and network parameters is laid bare. In contrast to the proposed framework, nearly all other work on {\em ad hoc} networks must resort to scaling laws or numerical simulations, in which case intuition and/or precision is usually lost.
The first goal of this paper is to concisely summarize the new analytical tools (largely drawn from the field of stochastic geometry \cite{StoKen1996,Kin1993}) that have been developed over numerous papers by the authors and others. Because these techniques have been developed somewhat independently depending on the problem of interest, the system model in \S\ref{sec:sysmodel} applied to the baseline model of pathloss attenuation without fading in \S\ref{sec:model} will help newcomers to the area understand the various approaches in context.
The second goal is to show how this framework can be used to give crisp insights into wireless network design problems. In the past few years, the transmission capacity approach has been applied to various design problems by a growing group of researchers (see \cite{WebYan2005,StaPro2007b, BloJin2009,HuaLau2009,YinGao2009, LouMcK2009}). Although transmission capacity was originally developed to analyze spread spectrum in ad hoc networks, it has proven to be a metric with considerable breadth of application. Since decentralized wireless networks are generally very difficult to characterize, the intuitive and simple-to-compute qualities of transmission capacity have made it a popular choice for a large number of possible systems, including: $i)$ direct-sequence and frequency-hopping spread spectrum \cite{WebYan2005,AndWeb2007,StaPro2007b}, $ii)$ interference cancellation \cite{WebAnd2007a,BloJin2009}, $iii)$ spectrum sharing in unlicensed, overlaid, and cognitive radio networks \cite{JinAnd2008,HuaLau2009,YinGao2009,ChaAnd2009b}, $iv)$ scheduling \cite{WebAnd2007a} and power control \cite{WebAnd2007b,JinWeb2008}, $v)$ and the use of multiple antennas (which had resisted characterization by other methods) \cite{HunAnd2008a,LouMcK2009, HuaAndSub,StaPro2007a,KouAnd2009a,JinAnd2009,HunAnd2008b,VazHea10}. Other researchers have also further studied the basic tradeoffs between outage probability, data rate, and transmission capacity for general networks \cite{Hae2009}.
We selectively discuss some of these applications. \S\ref{sec:Design2} addresses networks with fading channels, with a focus on Rayleigh (\S\ref{ssec:Rayleigh}) and Nakagami (\S\ref{ssec:Nakagami}) fading, scheduling (\S\ref{ssec:scheduling}), and power control (\S\ref{ssec:FPC}). \S\ref{sec:MIMO} addresses the use of multiple antennas, with discussions of diversity (\S\ref{ssec:diversity}), spatial interference cancellation (\S\ref{ssec:cancel}), and spatial multiplexing (\S\ref{ssec:mux}).
The third goal of the paper is to stimulate new efforts to further the tools presented here, both in making them more general and in applying them to new problems. We readily concede that the presented model has some nontrivial shortcomings at present, and we identify those as well as possible avenues forward in \S\ref{sec:future}.
\section{\label{sec:sysmodel}System model}
We introduce the system model in \S\ref{ssec:2a}, discuss relevant mathematical background in \S\ref{ssec:2b}, and elaborate on the connection with transport capacity in \S\ref{ssec:2c}.
\subsection{\label{ssec:2a}Mathematical model and assumptions}
We consider an {\em ad hoc} wireless network consisting of a large (infinite) number of nodes spread over a large (infinite) area. The network is {\em uncoordinated}, meaning transmitters do not coordinate with each other in making transmission decisions. That is, nodes employ Aloha \cite{Abr1970} ({\em i.e.}, in each slot, each node independently decides whether to transmit or to listen) as the medium access control (MAC) protocol. We view the network at a snapshot in time, where the locations of the transmitting nodes at that snapshot are assumed to form a stationary Poisson point process (PPP) on the plane of intensity $\lambda$, denoted $\Pi(\lambda) = \{X_i\}$, where each $X_i \in \mathbb{R}^2$ is the location of interfering transmitter $i$. The PPP assumption for node locations is valid when the uncoordinated transmitting nodes are independently and uniformly distributed over the network arena, which is often reasonable for networks with indiscriminate node placement or substantial mobility. If intelligent transmission scheduling is performed, the resulting transmitter locations will most certainly not form a PPP, so this paper's analytical framework is primarily applicable to uncoordinated transmitters. Although suboptimal, such a model may be reasonable in cases where the overhead associated with scheduling is prohibitively high, for example due to highly mobile nodes, bursty traffic, or rigid delay constraints. We also note that this framework has been extended to CSMA, and the gains are not that large over Aloha \cite{BacBla2006,HasAnd2007}. Viewing the network at a single snapshot in time restricts our focus to characterizing the performance of one-hop transmissions with specified destinations. That is, our attention is on (uncoordinated) MAC layer performance, but our model neither addresses nor precludes any multi-hop routing scheme. These model limitations are further discussed in \S\ref{sec:future}.
Each transmitter is assumed to have an assigned receiver at a fixed distance $r$ (meters) away. This assumption may be easily relaxed ({\em e.g.}, see \cite{WebAnd2007b} and \cite{JinWeb2008}) but at the cost of complicating the derived expressions without providing additional insight. The set of receivers is disjoint with the set of transmitters. Because the network is infinitely large and spatially homogeneous, the statistics of $\Pi(\lambda)$ are unaffected by the addition of a placed transmitter and receiver pair, and, more importantly, this pair is ``typical'' in that the performance experienced at the reference pair characterizes the node-average performance in the network (Slivnyak's Theorem \cite{StoKen1996}). Without loss in generality we place the reference receiver at the origin ($o$), and the reference transmitter is located $r$ meters away. See Fig.~\ref{fig:a}. Note that the locations of the other receivers are not important because the reference receiver's performance only depends upon the positions of the transmitters.
Each transmitter is usually assumed to employ unit transmission power (except when we discuss power control in \S\ref{ssec:FPC}). The channel strength is assumed to be solely determined by pathloss and fading, {\em i.e.}, the received power at distance $d$ is $H d^{-\alpha}$, where $\alpha > 2$ is the pathloss exponent and $H$ is the fading coefficient. All fading coefficients are assumed to be independent and identically distributed (iid). This simplified model has been shown to capture the key distance dependency in {\em ad hoc} networks, and minor alterations to it such as adding an attenuation constant or forcing the received power to be less than one increase the analytical complexity with little apparent benefit \cite{InaChi2009}. We study networks without fading ($H=1$) in \S\ref{sec:model} then with fading in \S\ref{sec:Design2}.
We treat interference as noise, assume that the ambient/thermal noise is negligible, and assert transmission success to be determined by the signal to interference plus noise ratio (SINR) lying above a specified threshold $\beta$. The assumption of negligible thermal noise may be easily relaxed ({\em e.g.}, see \cite{WebAnd2007b} and \cite{JinWeb2008}) but at the cost of complicating the derived expressions without providing additional insight. The {\em outage probability} (OP), denoted by $q$, is the probability that the signal to interference ratio (SIR) at the reference receiver is below a specified threshold $\beta$ required for successful reception:
\begin{equation}
\label{eqn:1}
q(\lambda) \equiv \mathbb{P}({\rm SIR} < \beta) = \mathbb{P} \left( \frac{S r^{-\alpha}}{\sum_{i \in \Pi(\lambda)} I_i |X_i|^{-\alpha}} < \beta \right) = \mathbb{P} \left( Y > \frac{1}{\beta} \right),
\end{equation}
where $Y \equiv \frac{1}{S r^{-\alpha}} \sum_{i \in \Pi(\lambda)} I_i |X_i|^{-\alpha}$ is defined as the aggregate interference power seen at the reference receiver at the origin, normalized by the signal power $S r^{-\alpha}$. The last expression in (\ref{eqn:1}) highlights the fact that, conditioned on $S$, the OP is the tail probability of the aggregate interference level expressed as a shot noise process.
The randomness is in the interferer locations, $\{X_i\}$, and the fading coefficients, $S$ and $\{I_i\}$. The OP is a function of $\alpha,\beta,\lambda,r$ and the fading statistics. Note that $q$ is continuous monotone increasing in $\lambda$ and is onto $[0,1]$. Our primary performance metric is the {\em transmission capacity} (TC) which takes a target OP $\epsilon$ as a parameter:
\begin{equation}
\label{eqn:2}
c(\epsilon) \equiv q^{-1}(\epsilon)(1-\epsilon), ~~ \epsilon \in (0,1).
\end{equation}
It is the spatial intensity of attempted transmissions $q^{-1}(\epsilon)$ associated with OP $\epsilon$, thinned by the probability of success, $1-\epsilon$. The quantity $\epsilon$ is a network-wide quality of service measure, ensuring a typical attempted transmission will succeed with probability $1-\epsilon$. The transmission capacity has units of number of transmission attempts per unit area, {\em i.e.}, it is a measure of spatial intensity of transmissions. Note that the OP $q(\lambda)$ is defined for an arbitrary transmission intensity $\lambda$, and $c(\epsilon)$ is simply that value of $\lambda$ such that $q(c(\epsilon)/(1-\epsilon)) = \epsilon$. The definition of TC is motivated by several factors: $i)$ fixing the OP at $q =\epsilon$ is a useful and simple, albeit coarse, characterization of network performance, $ii)$ the TC is tractable and can be computed, or at least bounded, for many useful network design questions. A summary of the mathematical notation employed in this paper is given in Table~\ref{tab:1}.
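For concreteness, $q(\lambda)$ in (\ref{eqn:1}) is straightforward to estimate by Monte Carlo simulation. The following Python sketch (ours, not drawn from the literature surveyed here) treats the no-fading case ($S = I_i = 1$) and samples the PPP on a large but finite disk, which truncates the far-field interference and therefore slightly underestimates the OP; the parameter values are arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def outage_mc(lam, r, alpha, beta, trials=10000, radius=200.0):
    # interferers form a PPP of intensity lam, truncated to a disk
    # centered at the reference receiver at the origin
    signal = r ** (-alpha)
    outages = 0
    for _ in range(trials):
        k = rng.poisson(lam * np.pi * radius ** 2)
        d = radius * np.sqrt(rng.random(k))  # uniform points in disk
        outages += signal < beta * np.sum(d ** (-alpha))
    return outages / trials

print(outage_mc(lam=0.001, r=10.0, alpha=4.0, beta=3.0))
\end{verbatim}
For $\alpha=4$ such an estimate can be checked against the exact expression derived in \S\ref{ssec:3a}.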
\subsection{\label{ssec:2b}Mathematical background}
The key underlying mathematical concept is the shot-noise process first developed in 1918 \cite{Sch1918},
\begin{equation}
\label{eqn:d} Y(t) = \sum_{j = -\infty}^{\infty} h(t-t_j),
\end{equation}
where $\{t_j\}$ is a stationary Poisson point process (PPP) on $\mathbb{R}$ and $h(t)$ is a (linear, time-invariant) impulse response function \cite{Ric1944,Gub1996}. Here $Y(t)$ is the superposition of all signals, appropriately attenuated to time $t$. If we instead interpret $\{t_j\}$ as locations on the plane, $t$ as the location of a reference receiver, $h(t)$ as a channel attenuation function, and $t-t_j$ as the distance from $t_j$ to $t$, then $Y(t)$ may be interpreted as the cumulative interference power seen at $t$. A power-law impulse response, $h(t) = Kt^{-\alpha}$ \cite{LowTei1990}, makes the process $\{Y(t)\}$ L\'{e}vy stable \cite{ShaNik1993}.
The use of spatial models in wireless communications dates back to the late 1970's \cite{KleSil1978,MusWas1978}. There was in fact quite extensive work on the model in which nodes are located according to a 2-D PPP, Aloha is used, a routing protocol determines the node for which each transmitted packet is intended for, and the received SINR and specifics of the communication protocol determine conditions for transmission success; see \cite{KleSil1987} for an overview of early results. The aggregate interference process in an {\em ad hoc} network was first recognized as L\'{e}vy stable in \cite{Sou1990,SouSil1990,Sou1992}, and its characteristic function was studied in \cite{IloHat1998}. A series of papers by Baccelli {\em et al.}
demonstrated the power of stochastic geometry for modeling a wide range of problems within wireless communications, as
summarized in \cite{BacBlaNOW,FraMee2007}.
We note that there have been several very helpful tutorials on applying stochastic geometry to wireless networks developed in the last year,
including the comprehensive two-volume monograph by Baccelli \cite{BacBlaNOW}, a monograph by Ganti and Haenggi that has many of the available results on non-homogeneous Poisson node distributions \cite{HaeGanNOW}, a summary tutorial article for a JSAC special issue on the topic \cite{HaeAnd2009}, and a tutorial by Win {\em et al. } on characterizing interference in Poisson fields \cite{WinPin2009}. We refer readers to those references (and \cite{Kin1993,StoKen1996}) for background.
\subsection{\label{ssec:2c}Relationship to transport capacity}
The general subject of the paper is the analysis of capacity and outage probability of wireless {\em ad hoc} networks. Ideally, one could determine the capacity region of an {\em ad hoc} network, which would be the set of maximum rates that could be achieved simultaneously between all possible pairs in the network, and hence is $n(n-1)$ dimensional for $n$ (full-duplex) users. Even if this was obtainable -- which it has not been despite considerable efforts \cite{TouGol2003} -- it would still likely not capture some key aspects of an {\em ad hoc} network, which call for information to be moved over space. Gupta and Kumar pioneered an important line of work on \emph{transport capacity} in \cite{GupKum2000}, which measures the end-to-end sum throughput of the network multiplied by the end-to-end distance. Representative publications include \cite{XueKum2006,XieKum2004,JovVis2004,LevTel2005,FraDou2007}. A key feature of all these works is that it is not possible to compute the exact transport capacity in terms of the system parameters, and although bounds and closed-form expressions are available in some cases, the best-known results are stated in the form of \emph{scaling laws} that quantify how the volume of the capacity region grows with the number of nodes in the network. The most accepted conclusion is that the capacity grows sublinearly as $\Theta(\sqrt{n})$, which can be achieved with multi-hop transmission and treating multi-user interference as noise, as proven in several different ways \cite{GupKum2000,LevTel2005,TouGol2004} including recently using Maxwell's equations \cite{FraMig2009}. Generous assumptions on mobility \cite{GroTse2002}, bandwidth \cite{NegRaj2004}, or cooperation \cite{OzgLev2007} result in more optimistic scaling laws.
The transport capacity, $C_T(n)$, is defined as the maximum distance-weighted sum rate of communication over all pairs of $n$ nodes \cite{GupKum2000}. In an extensive network, where the density of nodes per unit area is constant, the transport capacity has been shown to grow as $C_T(n) = \Theta(n)$ as $n \to \infty$, with units of bit-meters per second \cite{XueKum2006}. Roughly speaking, there can be $\Theta(n)$ simultaneous nearest-neighbor transmissions in the network, and the distance and the rate of communication between nearest-neighbors are both $\Theta(1)$, yielding $C_T(n) = \Theta(n)$.
Comparison of transport capacity and TC is facilitated by normalizing the transport capacity by the network area, $A(n) = \Theta(n)$, giving $C_T(n)/A(n) = \Theta(1)$ in units of bit-meters per second per unit area. Within the TC framework, assuming communication at the Shannon rate of $\log_2(1 + \beta)$, this metric also is $\Theta(1)$ and is precisely $c(\epsilon)\log_2(1+\beta)r$. Thus, transmission and transport capacity are consistent in the scaling sense. Furthermore, by abstracting out the end-to-end and multihop aspect of the network, the transmission capacity framework allows for a detailed study of the critical constant term; this is generally very difficult to do if using transport capacity. Transport capacity and TC are complementary metrics: transport capacity gives order optimal throughput, optimized over all MAC and routing techniques, while TC gives detailed performance and design insights for the lower layers of the network.
\section{\label{sec:model} Baseline model: Path loss only}
In this section, a baseline model is presented where the only randomness is in the position of the nodes, {\em i.e.}, there is no fading ($S = 1$ and $I_i = 1$ for each $i$ in (\ref{eqn:1})). Upper and lower bounds are given on outage probability and transmission capacity, emphasizing the impact that \emph{dominant} (strong) interferers have on the sum of the interference. The impact of fading is addressed in \S\ref{sec:Design2}.
\subsection{\label{ssec:3a} Exact results}
The points of the 2-D PPP of intensity $\lambda$, i.e., $\Pi(\lambda) = \{X_i\} \subset \mathbb{R}^2$, may be mapped to a 1-D PPP of unit intensity using Corollary 2 in \cite{Hae2005}. In particular, $\pi \lambda |X_i|^2 \sim T_i$, where $|X_i|^2$ is the squared distance from the origin of the $i^{th}$ nearest transmitter, and $T_i$ is the distance from the origin of the $i^{th}$ nearest point in a unit intensity 1-D PPP. Applying this to the normalized interference power $Y$ in (\ref{eqn:1}) gives:
\begin{equation}
Y = r^{\alpha} \sum_{i \in \Pi(\lambda)} |X_i|^{-\alpha} = (\pi \lambda)^{\frac{\alpha}{2}} r^{\alpha} \sum_{i \in \Pi(\lambda)} (\pi \lambda |X_i|^2)^{-\frac{\alpha}{2}} = (\pi r^2 \lambda)^{\frac{\alpha}{2}} \sum_{i \in \Pi_1(1)} T_i^{-\frac{\alpha}{2}},
\end{equation}
where the notation $\Pi_1(1)$ indicates a 1-D PPP of intensity $1$. The corresponding OP in (\ref{eqn:1}) becomes
\begin{equation}
q(\lambda) = \mathbb{P} \left( (\pi r^2 \lambda)^{\frac{\alpha}{2}} \sum_{i \in \Pi_1(1)} T_i^{-\frac{\alpha}{2}} > \frac{1}{\beta} \right) = \mathbb{P} \left( Z_{\alpha} > \frac{1}{ (\pi r^2 \lambda)^{\frac{\alpha}{2}} \beta} \right) = \bar{F}_{Z_{\alpha}} \left( \left( (\pi r^2 \lambda)^{\frac{\alpha}{2}} \beta \right)^{-1} \right),
\end{equation}
where $Z_{\alpha} \equiv \sum_{i \in \Pi_1(1)} T_i^{-\frac{\alpha}{2}}$ is a random variable whose distribution depends only on $\alpha$,
and $\bar{F}_{Z_{\alpha}} (\cdot)$ is the complementary cumulative distribution function (CCDF) of $Z_{\alpha}$.
Using $\bar{F}_{Z_{\alpha}}^{-1}(\cdot)$ to denote the inverse, and solving $\bar{F}_{Z_{\alpha}} \left( \left( (\pi r^2 \lambda)^{\frac{\alpha}{2}} \beta \right)^{-1} \right) = \epsilon$ for $\lambda$ allows the TC to be written as:
\begin{equation}
\label{eq-transcap_implicit}
c(\epsilon) = \frac{ \left(\bar{F}_{Z_{\alpha}}^{-1}(\epsilon)\right)^{-\frac{2}{\alpha}} (1 - \epsilon) }{\pi r^2 \beta^{\frac{2}{\alpha}}}.
\end{equation}
These transformations highlight that the essential difficulty in computing the OP and the TC lies in computing the distribution of the stable rv $Z_{\alpha}$.
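The distribution of $Z_{\alpha}$ is, however, straightforward to estimate by Monte Carlo, since the points $T_i$ of a unit-rate 1-D PPP are simply cumulative sums of iid unit-mean exponential gaps. The following Python sketch (the sample size, the truncation to \texttt{n\_pts} points, and the parameter values are illustrative numerical assumptions, not part of the model) estimates the CCDF of $Z_{\alpha}$ empirically and evaluates the OP through the expression above; for $\alpha=4$ its output can be cross-checked against the closed form given next.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_Z(alpha, n_samples=20_000, n_pts=200):
    # T_i = cumulative sums of iid Exp(1) gaps; truncating after n_pts
    # points is a numerical assumption (the tail is negligible for
    # alpha comfortably above 2).
    T = np.cumsum(rng.exponential(1.0, size=(n_samples, n_pts)), axis=1)
    return np.sum(T ** (-alpha / 2.0), axis=1)

def outage(lam, r, beta, alpha, Z):
    # q(lambda) = P( Z_alpha > ((pi r^2 lambda)^{alpha/2} beta)^{-1} )
    thr = ((np.pi * r**2 * lam) ** (alpha / 2.0) * beta) ** (-1.0)
    return np.mean(Z > thr)

Z4 = sample_Z(alpha=4.0)
print(outage(lam=0.01, r=1.0, beta=1.0, alpha=4.0, Z=Z4))
\end{verbatim}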
In fact, the only $\alpha > 2$ for which $Z_{\alpha}$ has a distribution expressible in closed form is $\alpha=4$, for which it follows the inverse Gaussian distribution. Important early results for this special case are due to Sousa and Silvester \cite{SouSil1990} (Eqn. (21)). In particular, they give an exact expression for the OP in terms of the CDF of the standard normal rv, $Q(z) = \mathbb{P}(Z \leq z)$, for $Z \sim N(0,1)$:
\begin{equation}
\label{eqn:11}
q(\lambda) = 2 Q \left(\sqrt{\pi/2} \lambda \pi r^2 \sqrt{\beta} \right) - 1.
\end{equation}
The corresponding {\em exact} expression for the TC is:
\begin{equation}
\label{eqn:12}
c(\epsilon) = \frac{\sqrt{2/\pi}(1-\epsilon) Q^{-1}\left((1+\epsilon)/2 \right)}{\pi r^2 \sqrt{\beta}} .
\end{equation}
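As a quick numerical sanity check of (\ref{eqn:11})--(\ref{eqn:12}), recall that $Q$ here denotes the standard normal CDF, so \texttt{norm.cdf} and \texttt{norm.ppf} from \texttt{scipy} apply directly; the sketch below (illustrative parameter values) verifies that inverting (\ref{eqn:11}) at $\epsilon$ reproduces (\ref{eqn:12}).
\begin{verbatim}
import numpy as np
from scipy.stats import norm

def q_exact(lam, r, beta):
    # Eqn. (11); Q is the standard normal CDF in this paper's notation.
    arg = np.sqrt(np.pi / 2.0) * lam * np.pi * r**2 * np.sqrt(beta)
    return 2.0 * norm.cdf(arg) - 1.0

def c_exact(eps, r, beta):
    # Eqn. (12).
    return (np.sqrt(2.0 / np.pi) * (1.0 - eps) * norm.ppf((1.0 + eps) / 2.0)
            / (np.pi * r**2 * np.sqrt(beta)))

eps, r, beta = 0.05, 1.0, 1.0
lam_eps = (norm.ppf((1 + eps) / 2)
           / (np.sqrt(np.pi / 2) * np.pi * r**2 * np.sqrt(beta)))
print(q_exact(lam_eps, r, beta))                  # returns eps = 0.05
print(c_exact(eps, r, beta), (1 - eps) * lam_eps)  # identical values
\end{verbatim}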
An additional exact result is given for the case of Rayleigh fading in \S\ref{ssec:fading}. The general unavailability of closed form expressions for the distribution of $Z_{\alpha}$ motivates the search for lower and upper bounds, which we discuss next.
\subsection{\label{ssec:LB} Lower outage bound: dominant nodes}
A lower bound on the probability of outage is obtained by partitioning the set of interferers $\Pi$ into dominating and non-dominating nodes. A node $i$ is dominating if its interference contribution alone is sufficient to cause outage at the receiver. We call dominating nodes near (n) nodes and non-dominating nodes far (f) because dominating nodes must be within some distance of the origin, and non-dominating nodes must be far from the origin. The dominating nodes may be defined geometrically as the interferers located inside a disk centered at the origin of radius $\beta^{\frac{1}{\alpha}}r$:
\begin{equation}
\label{eqn:3}
\Pi^{\rm n}(\lambda) \equiv \left\{X_i : \frac{r^{-\alpha}}{|X_i|^{-\alpha}} < \beta \right\} ~ = ~ \left\{X_i : |X_i| < \beta^{\frac{1}{\alpha}}r \right\} = \Pi(\lambda) \cap b \left(o,\beta^{\frac{1}{\alpha}}r \right).
\end{equation}
Here $b(o,d) = \{ x \in \mathbb{R}^2 : \|x\| \leq d\}$ denotes the ball centered at the origin $o$ of radius $d$. The aggregate interference, normalized by the received signal power $r^{-\alpha}$, may be split into aggregate dominant and aggregate non-dominant interference:
\begin{equation}
\label{eqn:4}
Y \equiv \frac{1}{r^{-\alpha}} \sum_{i \in \Pi(\lambda)} |X_i|^{-\alpha}, ~~ Y^{\rm n} \equiv \frac{1}{r^{-\alpha}} \sum_{i \in \Pi^{\rm n}(\lambda)} |X_i|^{-\alpha}, ~~ Y^{\rm f} \equiv \frac{1}{r^{-\alpha}} \sum_{i \not\in \Pi^{\rm n}(\lambda)} |X_i|^{-\alpha},
\end{equation}
where $Y = Y^{\rm n} + Y^{\rm f}$. The lower bound is obtained by ignoring the non-dominant interference:
\begin{equation}
\label{eqn:5}
q(\lambda) = \mathbb{P}\left( Y^{\rm n} + Y^{\rm f} > \frac{1}{\beta} \right) > \mathbb{P}\left( Y^{\rm n} > \frac{1}{\beta} \right) \equiv q^l(\lambda).
\end{equation}
Note that, by construction, the event $\{Y^{\rm n} > \frac{1}{\beta}\}$ is the same as the event $\{\Pi^{\rm n}(\lambda) \neq \emptyset\}$, which is simply the complement of a void probability for a Poisson process:
\begin{equation}
\label{eqn:6}
q^l(\lambda) = 1 - \mathbb{P}(\Pi^{\rm n}(\lambda) = \emptyset) = 1 - {\rm e}^{- \lambda \left| b \left(o,\beta^{\frac{1}{\alpha}}r \right) \right|} = 1 - {\rm e}^{-\lambda \pi r^2 \beta^{\frac{2}{\alpha}}}.
\end{equation}
By solving $q^l(\lambda) = \epsilon$ for $\lambda$ we get an upper bound on $q^{-1}(\epsilon)$, which yields a TC upper bound:
\begin{equation}
\label{eqn:7}
c^u(\epsilon) = \frac{(1-\epsilon) \log(1-\epsilon)^{-1}}{\pi r^2 \beta^{\frac{2}{\alpha}}} = \frac{1}{\pi \left( \frac{ r \beta^{\frac{1}{\alpha}} }{\sqrt{\epsilon} } \right)^2} + O(\epsilon^2) \mbox{ as } \epsilon \to 0.
\end{equation}
The right hand side is obtained by observing that the first order Taylor series expansion of $(1-\epsilon)\log(1-\epsilon)^{-1}$ around $\epsilon = 0$ equals $\epsilon + O(\epsilon^2)$, where $O(\cdot)$ is the standard ``big-oh'' notation \cite{GraKnu1994}. Neglecting the $O(\epsilon^2)$ term gives an error $\epsilon - (1-\epsilon)\log(1-\epsilon)^{-1} \approx 0.005$ for $\epsilon = 0.1$. The right hand side may be interpreted as a disk packing statement. In particular, the maximum number of transmissions per square meter for fixed $\alpha,\beta,\epsilon,r$ is found by packing disks of radius $R(\alpha,\beta,\epsilon,r) \equiv \frac{r \beta^{\frac{1}{\alpha}}}{\sqrt{\epsilon}}$, each disk with a single transmitter at the center. This radius clarifies the dependence of the supportable density of transmissions on these four key model parameters.
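These bounds reduce to one-line formulas; a small sketch (illustrative parameter values) makes the disk-packing interpretation concrete by comparing (\ref{eqn:7}) to one transmitter per disk of radius $R(\alpha,\beta,\epsilon,r)$.
\begin{verbatim}
import numpy as np

def q_lower(lam, r, beta, alpha):
    # OP lower bound (6): complement of a Poisson void probability.
    return 1.0 - np.exp(-lam * np.pi * r**2 * beta ** (2.0 / alpha))

def c_upper(eps, r, beta, alpha):
    # TC upper bound (7), obtained by inverting q_lower at eps.
    return ((1.0 - eps) * np.log(1.0 / (1.0 - eps))
            / (np.pi * r**2 * beta ** (2.0 / alpha)))

def packing_radius(eps, r, beta, alpha):
    return r * beta ** (1.0 / alpha) / np.sqrt(eps)

eps, r, beta, alpha = 0.1, 1.0, 1.0, 4.0
print(c_upper(eps, r, beta, alpha))
# Small-eps disk packing: one transmitter per disk of radius R.
print(1.0 / (np.pi * packing_radius(eps, r, beta, alpha) ** 2))
\end{verbatim}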
\subsection{\label{ssec:UB1} Upper outage bounds: Markov, Chebychev, and Chernoff bounds}
We may decompose the outage event in \eqref{eqn:5} as:
\begin{equation}
\label{eq:UB0}
q(\lambda) = \mathbb{P} \left( \left\{ Y^{\rm n} > \frac{1}{\beta} \right\} \cup \left\{ Y^{\rm f} > \frac{1}{\beta} \right\} \cup \left\{ Y^{\rm n} \leq \frac{1}{\beta}, Y^{\rm f} \leq \frac{1}{\beta}, Y^{\rm n}+Y^{\rm f}>\frac{1}{\beta} \right\} \right).
\end{equation}
In words: the event $\{Y^{\rm n}+Y^{\rm f} > 1/\beta\}$ means that either $Y^{\rm n}$ or $Y^{\rm f}$ individually exceeds $1/\beta$, or they are both below $1/\beta$ but their sum exceeds $1/\beta$. By construction, however, the event $\{Y^{\rm n} \leq 1/\beta\}$ is the same as the event $\{Y^{\rm n} = 0\}$, which means the third event in (\ref{eq:UB0}) is null. The probability of the remaining first two events may be written as:
\begin{equation}
\label{eq:UB1}
q(\lambda) = \mathbb{P}\left( Y^{\rm n} > \frac{1}{\beta} \right) + \mathbb{P}\left( Y^{\rm f} > \frac{1}{\beta} \right) - \mathbb{P}\left( Y^{\rm n} > \frac{1}{\beta} \right) \mathbb{P}\left( Y^{\rm f} > \frac{1}{\beta} \right) = q^l(\lambda)+(1-q^l(\lambda)) \mathbb{P}\left( Y^{\rm f} > \frac{1}{\beta} \right),
\end{equation}
where we have exploited the independence of $Y^{\rm n},Y^{\rm f}$ and applied the definition of $q^l(\lambda)$ in (\ref{eqn:5}). Substituting (\ref{eqn:6}) for $q^l(\lambda)$ into (\ref{eq:UB1}), we obtain an upper bound on $q(\lambda)$
by an upper bound on $\mathbb{P}\left(Y^{\rm f}>1/\beta\right)$. We presently give three such bounds, using the Markov and Chebychev inequalities and the Chernoff bound. Although the details of the analysis below differ for each of the three bounds, the general technique is the same: upper bound $\mathbb{P}\left(Y^{\rm f}>1/\beta\right)$ using the inequality, substitute into (\ref{eq:UB1}), and then seek a simple expression that upper bounds the result.
The Markov inequality \cite{MitUpf2005} gives $\mathbb{P}(Y^{\rm f} > 1/\beta) \leq \beta \mathbb{E}[Y^{\rm f}]$. Campbell's Theorem \cite{StoKen1996} states that if $\{ X_i \}$ are points drawn from a PPP of possibly varying intensity $\lambda(x)$ then
\begin{equation}
\mathbb{E} \left[\sum_{i \in \Pi} f(X_i) \right] = \int_{\mathbb{R}^2} f(x) \lambda(x) \, {\rm d}x.
\end{equation}
Applying this to find $\mathbb{E}[Y^{\rm f}]$ is straightforward after a change of variable to polar coordinates:
\begin{equation}
\label{eq:markov}
\mathbb{E}[Y^{\rm f}] = \mathbb{E} \left[ \frac{1}{r^{-\alpha}} \sum_{i \in \Pi \cap \bar{b}(0,s)} |X_i|^{-\alpha} \right] = r^{\alpha} \int_s^{\infty} t^{-\alpha} \lambda 2\pi t {\rm d} t = \frac{2\pi r^2 \beta^{\frac{2}{\alpha}-1}}{\alpha-2} \lambda \equiv \mu \lambda,
\end{equation}
where $s = \beta^{\frac{1}{\alpha}}r$. Multiplying (\ref{eq:markov}) by $\beta$ and combining with \eqref{eq:UB1}, an upper bound on outage is
\begin{equation}
\label{eq:UB2}
q(\lambda) \leq q^{u,{\rm Markov}}(\lambda) = \left(1 - {\rm e}^{-\lambda \pi r^2 \beta^{\frac{2}{\alpha}}} \right) + {\rm e}^{-\lambda \pi r^2 \beta^{\frac{2}{\alpha}}} \frac{2\pi r^2 \beta^{\frac{2}{\alpha}}}{\alpha - 2} \lambda .
\end{equation}
Using the bounds $1-{\rm e}^{-A} \leq A$ and ${\rm e}^{-A} \leq 1$ for $A > 0$ and simplifying gives a ``relaxed Markov'' upper bound:
\begin{equation}
\label{eq:UB3}
q^{u,{\rm Markov}}(\lambda) \leq \pi r^2 \beta^{\frac{2}{\alpha}} \lambda + \frac{2 \pi r^2 \beta^{\frac{2}{\alpha}}}{\alpha - 2} \lambda = \frac{\alpha}{\alpha-2} \pi r^2 \beta^{\frac{2}{\alpha}} \lambda .
\end{equation}
Setting \eqref{eq:UB3} equal to $\epsilon$ and solving for $\lambda$ gives a relaxed Markov lower bound on the TC:
\begin{equation}
\label{eq:UB4}
c^{l,{\rm Markov }}(\epsilon) = \frac{\alpha -2}{\alpha} \frac{\epsilon}{\pi r^2 \beta^{\frac{2}{\alpha}}} + O(\epsilon^2) \mbox{ as } \epsilon \to 0,
\end{equation}
which is clearly smaller than the TC upper bound of \eqref{eqn:7} by a factor $(\alpha-2)/\alpha$. The right hand side is obtained by observing that the first order Taylor series expansion of $\epsilon (1-\epsilon)$ around $\epsilon = 0$ equals $\epsilon + O(\epsilon^2)$. Neglecting the $O(\epsilon^2)$ term gives an error $\epsilon - \epsilon (1-\epsilon) = \epsilon^2 = 0.01$ for $\epsilon = 0.1$.
Campbell's Theorem also gives the variance of the far-field aggregate interference:
\begin{equation}
\label{eqn:varint}
\mathrm{Var}(Y^{\rm f}) = \mathbb{E} \left[\frac{1}{r^{-2\alpha}} \sum_{i \in \Pi \cap \bar{b}(0,s)} \left(|X_i|^{-\alpha} \right)^2 \right] = \lambda r^{2\alpha} \int_s^{\infty} t^{-2\alpha} 2\pi t {\rm d} t =
\frac{\pi r^2 \beta^{\frac{2}{\alpha}-2}}{\alpha-1} \lambda \equiv \sigma^2 \lambda
\end{equation}
Using (\ref{eq:markov}), (\ref{eqn:varint}), and Chebychev's inequality \cite{MitUpf2005}, the far-field aggregate interference satisfies (assuming $\mathbb{E}[Y^{\rm f}] < \frac{1}{\beta}$):
\begin{equation}
\label{eqn:Chebfar}
\mathbb{P} \left(Y^{\rm f} > \frac{1}{\beta} \right)
\leq \mathbb{P} \left(\left| Y^{\rm f} - \mathbb{E}[Y^{\rm f}] \right| > \frac{1}{\beta} - \mathbb{E}[Y^{\rm f}] \right)
\leq \frac{\sigma^2 \lambda}{\left( \frac{1}{\beta}- \mu \lambda \right)^2}
\end{equation}
Substituting (\ref{eqn:Chebfar}) into (\ref{eq:UB1}) and using the bounds $1-{\rm e}^{-A} \leq A$ and ${\rm e}^{-A} \leq 1$ for $A > 0$ and simplifying gives a ``relaxed Chebychev'' upper bound:
\begin{equation}
q^{u,{\rm Chebychev}}(\lambda) \leq \pi r^2 \beta^{\frac{2}{\alpha}} \lambda + \frac{\frac{\pi r^2 \beta^{\frac{2}{\alpha}-2}}{\alpha-1} \lambda}
{\left( \frac{1}{\beta}- \frac{2\pi r^2 \beta^{\frac{2}{\alpha}-1}}{\alpha-2} \lambda \right)^2}.
\end{equation}
This expression may be set equal to $\epsilon$ and solved for $\lambda$ --- numerically, or by clearing the denominator and finding the roots of the resulting polynomial --- giving the relaxed Chebychev lower bound on the TC.
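A sketch of the numerical route follows, using a bracketed scalar root finder; the bracketing interval is an assumption chosen to keep $\mathbb{E}[Y^{\rm f}] < 1/\beta$, and the parameter values are illustrative.
\begin{verbatim}
import numpy as np
from scipy.optimize import brentq

r, beta, alpha, eps = 1.0, 1.0, 4.0, 0.1
mu = 2 * np.pi * r**2 * beta ** (2/alpha - 1) / (alpha - 2)   # (eq:markov)
sig2 = np.pi * r**2 * beta ** (2/alpha - 2) / (alpha - 1)     # (eqn:varint)

def q_cheb(lam):
    # Relaxed Chebychev OP upper bound.
    return (np.pi * r**2 * beta ** (2/alpha) * lam
            + sig2 * lam / (1.0 / beta - mu * lam) ** 2)

lam_max = 0.999 / (beta * mu)          # keep E[Y^f] strictly below 1/beta
lam_eps = brentq(lambda l: q_cheb(l) - eps, 1e-12, lam_max)
print(lam_eps, (1 - eps) * lam_eps)    # density and TC lower bound
\end{verbatim}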
The Chernoff bound \cite{MitUpf2005} may be used to obtain an upper bound on the OP:
\begin{equation}
\label{eqn:9}
\mathbb{P} \left(Y^{\rm f} > \frac{1}{\beta} \right) \leq \inf_{\theta \geq 0} \mathbb{E} \left[{\rm e}^{\theta Y^{\rm f}} \right] {\rm e}^{-\theta \frac{1}{\beta}} = \exp \left\{- \sup_{\theta \geq 0} \left(\theta \frac{1}{\beta} - 2 \pi \lambda \int_{\beta^{\frac{1}{\alpha}}r}^{\infty} \left({\rm e}^{\theta r^{\alpha} x^{-\alpha}} - 1 \right) x {\rm d} x \right) \right\}.
\end{equation}
This expression may be obtained by computing the moment generating function of $Y^{\rm f}$ restricted to $b(o,v)$ and then letting $v \to \infty$, as in \cite{SouSil1990,WebAnd2007a}. The final upper bound on OP is then:
\begin{equation}
\label{eqn:10}
q^{u,{\rm Chernoff}}(\lambda) \equiv 1 - \left(1 - \exp \left\{- \sup_{\theta \geq 0} \left(\frac{\theta}{\beta} - 2 \pi \lambda \int_{\beta^{\frac{1}{\alpha}}r}^{\infty} \left({\rm e}^{\theta r^{\alpha} x^{-\alpha}} - 1 \right) x {\rm d} x \right) \right\} \right) {\rm e}^{-\lambda \pi r^2 \beta^{\frac{2}{\alpha}}}.
\end{equation}
Although the Chernoff OP upper bound is in some cases tighter than its Markov or Chebychev counterparts, it depends upon $\lambda$ in a complicated way which precludes a closed-form expression for the corresponding lower bound on the TC. In this case, numerical inversion techniques must be applied. Sample lower and upper bounds and exact expressions for both OP and TC are shown in Fig.~\ref{fig:b}.
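A sketch of such a numerical evaluation is given below: the inner integral in (\ref{eqn:10}) is computed by quadrature and the supremum over $\theta$ by a bounded scalar minimizer (the $\theta$ search interval and the parameter values are assumptions made for illustration).
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import minimize_scalar

r, beta, alpha, lam = 1.0, 1.0, 4.0, 0.05
s = beta ** (1/alpha) * r

def exponent(theta):
    integrand = lambda x: (np.exp(theta * r**alpha * x**(-alpha)) - 1.0) * x
    I, _ = quad(integrand, s, np.inf)
    return theta / beta - 2 * np.pi * lam * I

res = minimize_scalar(lambda t: -exponent(t), bounds=(0.0, 20.0),
                      method="bounded")
p_far = np.exp(-max(exponent(res.x), 0.0))   # bound on P(Y^f > 1/beta)
void = np.exp(-lam * np.pi * r**2 * beta ** (2/alpha))
print(1.0 - void, 1.0 - (1.0 - p_far) * void)  # q^l and q^{u,Chernoff}
\end{verbatim}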
\subsection{\label{ssec:subexp} Tightness of the lower bound: sub-exponential distributions}
Comparing the lower outage bound (\ref{eqn:6}) with the upper outage bound (\ref{eqn:10}), and glancing at Fig.~\ref{fig:b}, it is apparent that the (simple) lower outage bound is much tighter than the (complicated) upper bound. One explanation for this comes from the fact that the random interference contribution of each node obeys a {\em subexponential distribution} \cite{GolKlu1997}. Consider $n$ points distributed independently and uniformly over a disk of radius $d$ centered at the origin, denoted $\{X_1,\ldots,X_n\}$. It is straightforward to establish the CCDF of the individual interference rvs, $V = |X|^{-\alpha}$, to be $\bar{F}_V(v) = \left(v^{\frac{1}{\alpha}} d \right)^{-2}$ for $v \geq d^{-\alpha}$. A sufficient condition for a distribution to be subexponential is that $\lim \sup_{v \to \infty} v h_V(v) < \infty$ where $h_V(v) \equiv \frac{{\rm d}}{{\rm d} v}\left(-\log \bar{F}_V(v) \right)$ is the {\em hazard rate} function. In our case, we find $v h_V(v) = \frac{2}{\alpha}$, ensuring $\bar{F}_V$ is subexponential. A defining characteristic of subexponential distributions is the fact that sums of iid rvs $\{V_1,\ldots,V_n\}$ typically achieve large values $v$ by having one or more large summands (as opposed to a large number of moderate sized summands) \cite{GolKlu1997}:
\begin{equation}
\lim_{v \to \infty} \frac{\mathbb{P}(V_1+\cdots+V_n > v)}{\mathbb{P}(\max\{V_1,\ldots,V_n\} > v)} = 1, ~ n \geq 2.
\end{equation}
Because the interference contributions from each node are subexponential, it follows that the probability of an outage event $\{V_1+\cdots+V_n > v\}$ (for large $v$) approximately equals the probability of there being one or more dominant nodes with $V_i > v$. Replacing $\sum_{i \in \Pi(\lambda)} |X_i|^{-\alpha}$ in (\ref{eqn:1}) with $\sum_{i=1}^n |X_i|^{-\alpha}$ gives $v = r^{-\alpha} \frac{1}{\beta}$. Thus $v$ is large if {\em either} $\beta$ is small (receiver can decode small SIR) or $r$ is small (Tx and Rx are close together). For small $v$ (meaning {\em both} $\beta$ and $r$ are large), outage occurs more easily, and in particular, outage may occur due to the aggregate interference being large, even though there may not be any dominant nodes. This argument holds for fixed $d$ and $n$, but gives intuition as to why the dominant interference lower bound is tight.
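This max-sum characterization is easy to observe empirically; the sketch below (the values of $n$, $d$, the thresholds, and the trial count are illustrative) draws points uniformly on a disk and compares the tail probabilities of the sum and of the maximum of the interference contributions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
alpha, d, n, trials = 4.0, 1.0, 5, 1_000_000

# V = |X|^{-alpha} for X uniform on a disk of radius d; the radius is
# d times the square root of a uniform variate.
V = (d * np.sqrt(rng.uniform(size=(trials, n)))) ** (-alpha)

for v in [1e2, 1e4, 1e6]:
    p_sum = np.mean(V.sum(axis=1) > v)
    p_max = np.mean(V.max(axis=1) > v)
    print(v, p_sum / p_max)   # the ratio approaches 1 as v grows
\end{verbatim}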
\subsection{\label{ssec:optimization} Optimization of SINR Threshold and Outage Constraint}
The SINR threshold $\beta$ and the outage constraint $\epsilon$, which are treated as constants in the TC framework, are generally under the control of the system designer and should be chosen reasonably. A meaningful objective is maximization of the area spectral efficiency $c(\epsilon) \log_2(1 + \beta)$, i.e., the product of successful density and spectral efficiency. Using (\ref{eq-transcap_implicit}), the joint maximization over $(\beta,\epsilon)$ can be written as:
\begin{align}
\max_{\beta, \epsilon} ~ c(\epsilon) \log_2(1 + \beta) &= \max_{\beta, \epsilon} ~
\frac{ \left(\bar{F}_{Z_{\alpha}}^{-1}(\epsilon)\right)^{-\frac{2}{\alpha}} (1 - \epsilon) }{\pi r^2 \beta^{\frac{2}{\alpha}}} \log_2(1 + \beta).
\end{align}
This clearly allows for separate maximizations of $\beta$ and $\epsilon$:
\begin{align}
\beta^\star = \textrm{arg} \max_\beta ~ \frac{\log_2(1 + \beta)}{\beta^{\frac{2}{\alpha}}}, ~~
\epsilon^\star = \textrm{arg} \max_\epsilon ~ \left(\bar{F}_{Z_{\alpha}}^{-1}(\epsilon) \right)^{-\frac{2}{\alpha}} (1 - \epsilon),
\end{align}
where the optimizers $\beta^\star$ and $\epsilon^\star$ depend only on the path-loss coefficient $\alpha$. In \cite[Section IV]{JinAnd2008}, where a related but slightly different problem is studied, a closed-form solution for $\beta^\star$ was found:
\begin{equation}
\beta^\star = \mathrm{e}^{\frac{\alpha}{2} + \mathcal{W} \left( -
\frac{\alpha}{2} e^{-\frac{\alpha}{2}} \right)} - 1
\end{equation}
where $\mathcal{W}(z)$ is the principal branch of the Lambert $\mathcal{W}$ function.
$\bar{F}_{Z_{\alpha}}(\cdot)$ is not known in closed form, and thus $\epsilon^\star$ must be determined numerically. In Fig.~\ref{fig:opt}, $\beta^\star$ and $\epsilon^\star$ are plotted versus $\alpha$, and both are seen to be increasing in $\alpha$. $\beta^\star$ is consistent with normal operating spectral efficiencies, while $\epsilon^\star$ shows that the optimal $\epsilon$ that maximizes the TC may be unacceptably large. Although such a large outage provides a large area spectral efficiency, it also translates directly to long transmission delays and energy inefficiency. This analysis highlights a key drawback in unrestricted (spatial) throughput maximization: the max-throughput operating point may have an unacceptably high associated OP. The TC framework captures this tradeoff by definition: it gives the maximum spatial throughput subject to a specified OP constraint.
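For reference, $\beta^\star$ is directly computable with the principal-branch Lambert $\mathcal{W}$ routine available in \texttt{scipy}; the following sketch evaluates it, together with the corresponding spectral efficiency, over a few path-loss exponents.
\begin{verbatim}
import numpy as np
from scipy.special import lambertw

def beta_star(alpha):
    a = alpha / 2.0
    # Principal branch (k = 0); the argument -a e^{-a} lies in [-1/e, 0).
    return np.exp(a + lambertw(-a * np.exp(-a), k=0).real) - 1.0

for alpha in [2.5, 3.0, 4.0, 5.0]:
    b = beta_star(alpha)
    print(alpha, b, np.log2(1 + b))  # threshold and spectral efficiency
\end{verbatim}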
\section{\label{sec:Design2} Transmission Capacity in Fading Channels}
We now evolve the discussion to consider channels that also have a random fluctuation about the path loss, commonly known as fading or shadowing. The SIR in (\ref{eqn:1}) models the scenarios discussed in this section where random variable $S$ represents the desired signal fade and $I_i$ the fading coefficient from the $i$-th interferer. We assume $S$ is drawn according to some distribution $F_S$ and each $I_i$ according to $F_I$ with $S, I_1, I_2, \ldots$ independent. Independent fading is assumed for tractability; computing the OP and TC in correlated fading will be more difficult.
We first develop a framework for analyzing OP and TC with an arbitrary random channel, and then show exact results on OP and TC for Rayleigh and Nakagami fading. It is initially surprising that exact results on OP and TC can be computed with certain types of fading, but not without fading; recall that in the previous section we had to be content with upper and lower bounds. Although unmitigated fading reduces TC, it raises the possibility of opportunistic scheduling and transmit power control, which are discussed in \S \ref{ssec:scheduling} and \S \ref{ssec:FPC}.
\subsection{\label{ssec:fading} General Fading}
With general fading values as in (\ref{eqn:1}), the set of dominant interferers in (\ref{eqn:3}) becomes
\begin{equation}
\Pi^{\rm n}(\lambda) = \left\{ i : \frac{S r^{-\alpha}}{I_i |X_i|^{-\alpha}} < \beta \right\}.
\end{equation}
Computation of the probability of a dominant interferer ($\mathbb{P}(\Pi^{\rm n}(\lambda) \neq \emptyset)$) yields the following lower bound to OP \cite{WebAnd2007b}:
\begin{equation}
\label{eq-fading_general}
q^l(\lambda) = 1 - \mathbb{E} \left[ \exp \left\{- \lambda \pi r^2 \beta^{\frac{2}{\alpha}} \mathbb{E}[I^{\frac{2}{\alpha}}] S^{-\frac{2}{\alpha}} \right\} \right],
\end{equation}
where the outer expectation is with respect to $S$. This expression is similar to the LB in \eqref{eqn:6}, but the expectation in front of the exponential makes inverting this expression for $\lambda$ infeasible. Applying Jensen's inequality to $q^l(\lambda)$ yields the following \emph{approximations}:
\begin{eqnarray}
\label{eq-op_approx_fading}
q(\lambda) &\approx& 1 - \exp \left\{- \lambda \pi r^2 \beta^{\frac{2}{\alpha}} \mathbb{E}[I^{\frac{2}{\alpha}}] \mathbb{E} [S^{-\frac{2}{\alpha}} ] \right\} \\
\label{eq-tc_approx_fading}
c(\epsilon) &\approx& \frac{ -(1-\epsilon) \log (1-\epsilon)}{\pi r^2 \beta^{\frac{2}{\alpha}} \mathbb{E}[I^{\frac{2}{\alpha}}] \mathbb{E} [S^{-\frac{2}{\alpha}} ] }.
\end{eqnarray}
These quantities are approximations because Jensen's inequality yields inequality in the wrong direction. However, numerical results show that this approximation is reasonably accurate for small values of $\epsilon$ \cite{WebAnd2007b}. It is possible to extend the upper bounds from \S \ref{ssec:UB1} to fading \cite{WebAnd2007b}, but we focus exclusively on the above lower bound and approximation because they are more accurate.
If we assume that the signal and interference coefficients follow the same distribution $F_H$, which is reasonable in most communication environments, the expressions in (\ref{eq-fading_general})-(\ref{eq-tc_approx_fading}) particularize to:
\begin{eqnarray}
\label{eq-outage_lower_fading}
q^l(\lambda) &=& 1 - \mathbb{E}_{H} \left[ \exp \left\{- \lambda \pi r^2 \beta^{\frac{2}{\alpha}} \mathbb{E}[H^{\frac{2}{\alpha}}] H^{-\frac{2}{\alpha}} \right\} \right] \\
q(\lambda) &\approx& 1 - \exp \left\{- \lambda \pi r^2 \beta^{\frac{2}{\alpha}} \mathbb{E}[H^{\frac{2}{\alpha}}] \mathbb{E} [H^{-\frac{2}{\alpha}} ] \right\} \\
c(\epsilon) &\approx& \frac{ (1-\epsilon) \log (1-\epsilon)^{-1}}{\pi r^2 \beta^{\frac{2}{\alpha}} \mathbb{E}[H^{\frac{2}{\alpha}}] \mathbb{E} [H^{-\frac{2}{\alpha}} ] }.
\label{eq-tc_approx_fading2}
\end{eqnarray}
Comparing the TC approximation in (\ref{eq-tc_approx_fading2}) to the TC upper bound in (\ref{eqn:7}) we see that the effect of fading is captured by the term $ \left( \mathbb{E}[H^{\frac{2}{\alpha}}] \mathbb{E} [H^{-\frac{2}{\alpha}} ] \right)^{-1}$. By Jensen's inequality, this quantity is less than one (with equality only if $H$ is deterministic) and thus fading has an overall negative effect relative to pure pathloss attenuation. Furthermore, note that the TC approximation in (\ref{eq-tc_approx_fading2}) is equal to the exact TC in (\ref{eqn:ac}) for Rayleigh fading derived in the next section. For the particular case of Rayleigh fading with $\alpha = 4$, the approximate ratio (\ref{eqn:7}) over (\ref{eqn:ac}) equals $\frac{\pi}{2} \approx 1.5708$, while the exact ratio ((\ref{eqn:12}) over (\ref{eqn:ac})) is $\frac{\pi}{2} \frac{Q^{-1}((1+\epsilon)/2)}{\log(1-\epsilon)^{-1}}$, which rapidly approaches $\frac{\pi}{2}$ as $\epsilon \to 0$. Thus, adding Rayleigh fading to a network with $\alpha=4$ reduces the TC by a factor of $\frac{\pi}{2}$, i.e., by about $36\%$.
\subsection{\label{ssec:Rayleigh}Rayleigh Fading}
The case of Rayleigh fading, where each $H_{ij}$ is exponentially distributed (unit mean), is appealing not only for its practical importance but also because it is one of the few cases for which the OP and TC can be computed in closed form. The following argument was made precise by Baccelli {\em et al.} \cite{BacBla2006}, but can be traced to \cite{Lin1992,ZorPup1995}. Define the aggregate interference seen at the origin as $Z = \sum_{i \in \Pi(\lambda)} H_{i0} |X_i|^{-\alpha}$, and denote the Laplace transform of $Z$ by $\mathcal{L}_Z(s) = \mathbb{E}\left[{\rm e}^{-s Z} \right]$. Then the success probability under Rayleigh fading is the Laplace transform of $Z$ evaluated at $s = \beta r^{\alpha}$:
\begin{equation}
\left. \mathbb{P}({\rm SIR} > \beta) = \mathbb{P}(H_{00} > \beta r^{\alpha} Z) = \int_0^{\infty} {\rm e}^{-\beta r^{\alpha} z} f_Z(z) {\rm d} z = \mathbb{E}\left[{\rm e}^{-s Z} \right] \right|_{s = \beta r^{\alpha}}.
\end{equation}
This transform can be computed explicitly, yielding an exact OP expression ((3.4) in \cite{BacBla2006}):
\begin{equation}
\label{eqn:ab}
q(\lambda) = 1 - \exp \left\{ - \lambda \pi r^2 \beta^{\frac{2}{\alpha}} \frac{2\pi}{\alpha} \csc \left( \frac{2\pi}{\alpha} \right) \right\},
\end{equation}
where $\csc$ denotes the cosecant. The corresponding exact TC expression is
\begin{equation}
\label{eqn:ac}
c(\epsilon) = \frac{(1-\epsilon)\log(1-\epsilon)^{-1}}{\pi r^2 \beta^{\frac{2}{\alpha}} \frac{2\pi}{\alpha} \csc \left( \frac{2\pi}{\alpha} \right) }.
\end{equation}
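Because (\ref{eqn:ab}) is exact, it provides a useful end-to-end check of a network simulator. The sketch below compares an empirical OP under Rayleigh fading against the closed form; the simulation disk radius \texttt{R\_sim} and the trial count are truncation assumptions made for illustration.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
lam, r, beta, alpha = 0.05, 1.0, 1.0, 4.0
R_sim, trials = 50.0, 20_000

fails = 0
for _ in range(trials):
    n = rng.poisson(lam * np.pi * R_sim**2)       # interferer count
    radii = R_sim * np.sqrt(rng.uniform(size=n))  # uniform on the disk
    Z = np.sum(rng.exponential(1.0, size=n) * radii ** (-alpha))
    S = rng.exponential(1.0)                      # signal fade
    fails += (S * r ** (-alpha) < beta * Z)

q_mc = fails / trials
q_cf = 1 - np.exp(-lam * np.pi * r**2 * beta ** (2/alpha)
                  * (2 * np.pi / alpha) / np.sin(2 * np.pi / alpha))
print(q_mc, q_cf)   # the two values should nearly agree
\end{verbatim}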
\subsection{\label{ssec:Nakagami}Nakagami Fading}
Under Nakagami-$m$ fading the signal power $S$ has the gamma density
\begin{equation}
f_S(x) = \left( \frac{m}{\mathbb{E}[S]}\right)^m \frac{x^{m-1}}{\Gamma(m)} \exp \left( -\frac{mx}{\mathbb{E}[S]} \right), ~ m \geq 0.5.
\end{equation}
This distribution is quite general in that Rayleigh fading corresponds to $m = 1$ and path loss only corresponds to $m \to \infty$. Because the distribution is also of exponential form, OP and TC can be computed exactly in a manner similar to Rayleigh fading, resulting in a transmission capacity of \cite{HunAnd2008a}
\begin{equation}
\label{eq:nak}
c(\epsilon)=\frac{K_{\alpha,m}(1-\epsilon)\log(1-\epsilon)^{-1}}{C_{\alpha,m}\beta^{\frac{2}{\alpha}}r^2}, ~~{\rm where}
\end{equation}
\begin{eqnarray}
K_{\alpha,m} &=& \left[1+\sum_{k=0}^{m-2}\frac{1}{(k+1)!} \prod_{l=0}^{k}(l-2/\alpha)\right]^{-1},\\
C_{\alpha,m} &=& \frac{2\pi}{\alpha}\sum^{m-1}_{k=0}{\binom{m}{k}} B\left(\frac{2}{\alpha}+k,\,m-\left(\frac{2}{\alpha}+k\right)\right),
\end{eqnarray}
and $B(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}$ is the Beta function. Although this expression is clearly more complex than \eqref{eqn:ac}, it does describe nearly any fading environment. Interestingly, if $m \to \infty$, i.e. for path loss only, \eqref{eq:nak} converges to the upper bound of \eqref{eqn:7}.
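The constants $K_{\alpha,m}$ and $C_{\alpha,m}$ are simple finite sums; the sketch below evaluates them for integer $m$ (an assumption made so the sums are directly applicable), and the $m=1$ case recovers the Rayleigh coefficient of (\ref{eqn:ac}) up to the factor $\pi$ absorbed into $C_{\alpha,m}$.
\begin{verbatim}
import numpy as np
from math import factorial
from scipy.special import beta as B, comb   # B(a, b) and binomials

def K(alpha, m):
    tot = 1.0
    for k in range(m - 1):                  # k = 0, ..., m-2
        prod = np.prod([l - 2.0 / alpha for l in range(k + 1)])
        tot += prod / factorial(k + 1)
    return 1.0 / tot

def C(alpha, m):
    s = sum(comb(m, k) * B(2/alpha + k, m - (2/alpha + k))
            for k in range(m))
    return 2 * np.pi / alpha * s

alpha = 4.0
for m in [1, 2, 4, 8]:
    print(m, K(alpha, m), C(alpha, m))
# m = 1: K = 1 and C = pi * (2 pi / alpha) * csc(2 pi / alpha).
\end{verbatim}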
\subsection{\label{ssec:scheduling} Threshold scheduling}
Fading can potentially be exploited if only users experiencing good fading conditions transmit. This can be done through a simple {\em threshold scheduling} rule where each transmitter elects to transmit only if the signal fading coefficient $H_{00}$ is larger than a threshold $t$, as in \cite{WebAnd2007b}. Threshold scheduling is an example of opportunistic scheduling. The spatial intensity of attempted transmissions for threshold $t$ is $\mu(t) \equiv \lambda \mathbb{P}(H_{00}> t) = \lambda \bar{F}_H(t)$, {\em i.e.}, the original intensity $\lambda$ thinned by the probability of being above the threshold. Because the threshold is on the received signal strength rather than the SIR, the decision depends only on local fading and does not affect the interference. Therefore, the outage probability with threshold $t$ is:
\begin{equation}
\label{eq-thresh}
q(\nu,t) = \mathbb{P} \left( \left. \frac{H_{00} r^{-\alpha}}{\sum_{i \in \Pi(\nu)} H_{i0} |X_i|^{-\alpha}} < \beta ~ \right| ~ H_{00} \geq t \right),
\end{equation}
where the $\{H_{ij}\}$ are drawn iid according to $F_H$. The density of active transmissions is kept equal to $\nu$, independent of the value of $t$, by choosing $\lambda = \frac{\nu}{\mathbb{P}(H_{00}> t)}$. Thus, the only change brought about is that the signal fading coefficient follows the distribution $F_{H|H \geq t}$ instead of $F_H$. As a result, the OP in (\ref{eq-thresh}) is \textit{decreasing} in $t$ and thus TC \textit{increases} with $t$.\footnote{An outage is declared only if a transmitter actually attempts transmission and fails; not meeting the threshold is not considered an outage because it is essentially the same as not electing to transmit in pure Aloha.} The transmission capacity approximation is given by:
\begin{eqnarray}
c(\epsilon) \approx \frac{ (1-\epsilon) \log (1-\epsilon)^{-1}}{\pi r^2 \beta^{\frac{2}{\alpha}} \mathbb{E}[H^{\frac{2}{\alpha}}] \mathbb{E} [H^{-\frac{2}{\alpha}} | H \geq t ] }.
\end{eqnarray}
Comparing this with (\ref{eq-tc_approx_fading2}), the (approximate) ratio of TC with threshold scheduling to that without it is $\frac{\mathbb{E} [H^{-\frac{2}{\alpha}}] }{ \mathbb{E} [H^{-\frac{2}{\alpha}} | H \geq t ]}$. Because bad signal fades are eliminated, the gains from threshold scheduling can be very substantial: for example, in Rayleigh fading a very reasonable threshold of $t=1$ (i.e., $0$ dB) increases TC by a factor of $4.7$, $3.3$, and $2.25$ for $\alpha=2.5, 3,$ and $4$,
respectively.
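These gains follow from the ratio $\mathbb{E}[H^{-\frac{2}{\alpha}}] / \mathbb{E}[H^{-\frac{2}{\alpha}} \,|\, H \geq t]$, which for unit-mean exponential $H$ (Rayleigh power) reduces to incomplete gamma functions. The sketch below evaluates it; since the factors quoted above come from the analysis in \cite{WebAnd2007b}, this approximate ratio need not reproduce them exactly, but it is of the same magnitude.
\begin{verbatim}
import numpy as np
from scipy.special import gamma, gammaincc

def gain(alpha, t):
    a = 1.0 - 2.0 / alpha               # E[H^{-2/alpha}] = Gamma(a)
    # E[H^{-2/alpha} | H >= t] via the upper incomplete gamma function.
    cond = gamma(a) * gammaincc(a, t) * np.exp(t)
    return gamma(a) / cond

for alpha in [2.5, 3.0, 4.0]:
    print(alpha, gain(alpha, t=1.0))
\end{verbatim}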
\subsection{\label{ssec:FPC} Power control}
While threshold scheduling attempts to completely avoid bad fades, an alternative strategy is to transmit regardless of the fading conditions and adjust transmit power to compensate for fading. In \cite{JinWeb2008} a \textit{fractional power control} policy in which each transmitter \emph{partially} compensates for the signal fading coefficient is proposed. In particular, transmit power is chosen proportional to the fading coefficient raised to the exponent $-\gamma$ where $\gamma \in [0,1]$:
\begin{equation}
P_i^{\rm tx,fpc} = \frac{\rho}{\mathbb{E}[H_{ii}^{-\gamma}]} H_{ii}^{-\gamma} ~~~~ P_i^{\rm rx,fpc} = \frac{\rho}{\mathbb{E}[H_{ii}^{-\gamma}]} H_{ii}^{1-\gamma} r^{-\alpha}.
\end{equation}
Note that $\gamma=0$ corresponds to constant power while $\gamma=1$ corresponds to full channel inversion. The resulting SIR is ${\rm SIR} = H_{00}^{1-\gamma} r^{-\alpha}/ \sum_{i \in \Pi(\lambda)} \left( H_{ii}^{-\gamma} H_{i0} \right) |X_i|^{-\alpha}$.
With channel inversion ($\gamma=1$) there is no signal fading ($S=1$) and each interference coefficient is distributed as $\frac{1}{H_{ii}}$, and thus based on (\ref{eq-fading_general}) we get the following OP lower bound:
\begin{equation}
q^{l,{\rm ci}}(\lambda) = 1 - \exp \left\{- \lambda \pi r^2 \beta^{\frac{2}{\alpha}} \mathbb{E}[H^{\frac{2}{\alpha}}] \mathbb{E}[H^{-\frac{2}{\alpha}}] \right\}.
\label{eq-ci_lower}
\end{equation}
(There is no outer expectation because the signal fading coefficient is deterministic.) By Jensen's inequality, this quantity is larger than the OP lower bound for constant power given (\ref{eq-outage_lower_fading}), and thus the lower bounds indicate that inversion degrades performance. For Rayleigh fading this ordering is precise: the OP lower bound with channel inversion in (\ref{eq-ci_lower}) is equal to the actual OP with constant power given in (\ref{eqn:ac}), and thus constant power is strictly superior to inversion in Rayleigh fading.
Although inversion worsens performance, partial compensation for fading can be beneficial. If we consider general $\gamma$ and substitute the appropriate distributions for $S$ and $I$ in (\ref{eq-tc_approx_fading}), we get:
\begin{equation}
\label{eq-tc_approx_fpc}
c^{{\rm fpc}}(\epsilon,\gamma) \approx \frac{(1-\epsilon)\log(1-\epsilon)^{-1}}{\pi r^2 \beta^{\frac{2}{\alpha}}\mathbb{E}\left[H^{\frac{2}{\alpha}} \right] \mathbb{E}\left[H^{-\gamma\frac{2}{\alpha}} \right] \mathbb{E} \left[ H^{-(1-\gamma) \frac{2}{\alpha}} \right]}.
\end{equation}
This approximation is maximized by minimizing $\mathbb{E}\left[H^{-\gamma\frac{2}{\alpha}} \right] \mathbb{E} \left[ H^{-(1-\gamma) \frac{2}{\alpha}} \right]$ over $\gamma \in [0,1]$, and an application of H\"{o}lder's inequality yields $\gamma^* = 1/2$. Although this only ensures that $\gamma=1/2$ is optimal for the TC approximation, results in \cite{JinWeb2008} confirm that $\gamma=1/2$ is also near-optimal for a wide range of reasonable parameter values.\footnote{An important exception to this is for large values of $\epsilon$, {\em i.e.}, dense networks, in which case the optimum tends towards constant power ($\gamma=0$).} Using $\gamma \gg \frac{1}{2}$ over-compensates for signal fading and leads to interference levels that are too high, while $\gamma \ll \frac{1}{2}$ leads to small interference levels but an under-compensation for signal fading. The benefit of FPC is substantial for small values of $\epsilon$ and $\alpha$. In Rayleigh fading, FPC increases TC by a factor of $2.1$ and $1.2$ for $\alpha=2.5$ and $\alpha=4$, respectively, for small $\epsilon$.
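The optimality of $\gamma = 1/2$ is easy to confirm numerically for Rayleigh fading, where $\mathbb{E}[H^{-s}] = \Gamma(1-s)$ for unit-mean exponential $H$; the sketch below also reports the approximate TC gain of FPC over constant power.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.special import gamma

alpha = 4.0
s = 2.0 / alpha

# Denominator term of (eq-tc_approx_fpc) as a function of gamma.
g = lambda c: gamma(1 - c * s) * gamma(1 - (1 - c) * s)
res = minimize_scalar(g, bounds=(0.0, 1.0), method="bounded")
print(res.x)              # close to 0.5
print(g(0.0) / g(0.5))    # approximate TC gain of FPC, about 1.2 here
\end{verbatim}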
\section{\label{sec:MIMO} Multiple antennas}
The amplitude and phase of fading channels vary quite rapidly over space, with an approximate decorrelation distance of half a wavelength ($6$ cm at $2.5$ GHz). This allows multiple suitably-spaced antennas to be deployed at both the transmitter and receiver to generate $N_t N_r$ Tx-Rx antenna pairs, where $N_t$ and $N_r$ are the number of transmit and receive antennas. Considerable work has been done on multi-antenna systems (MIMO) in the past decade, well summarized by \cite{DigAlD2004,PauGor2003}, and such systems are now quite well understood and are central to all emerging high-data rate broadband wireless standards. However, much less is known regarding the use of antennas in \textit{ad hoc} networks. In addition to providing diversity and spatial multiplexing benefits, multiple antennas also provide the ability to perform interference cancellation. Recent analysis of MIMO systems using the TC framework allows us to evaluate these different antenna techniques, and provides a very optimistic picture of the benefit of MIMO in ad hoc networks.
\subsection{\label{ssec:diversity} Diversity}
Broadly defined, diversity techniques use TX and RX antennas to mitigate fading and increase the received SNR. With maximum-ratio combining/transmission (MRC \& MRT), the transmitter and receiver apply weighting vectors at the antenna arrays based only on the Tx-Rx channel matrix. If the TX and RX weight vectors are denoted by ${\bf t}_0$ and ${\bf r}_0$, respectively, and ${\bf H}_i$ denotes the $N_r \times N_t$ channel matrix from the $i$-th transmitter, then the SIR equation (\ref{eqn:1}) becomes:
\begin{equation}
{\rm SIR} = \frac{|{\bf r}_0^{\dagger} {\bf H}_0 {\bf t}_0|^2 r^{-\alpha}}{ \sum_{i \in \Pi(\lambda)} |{\bf r}_0^{\dagger} {\bf H}_i {\bf t}_i|^2 |X_i|^{-\alpha}}.
\end{equation}
Choosing the TX and RX weights as the right/left singular vectors of the largest singular value of ${\bf H}_0$ results in the signal coefficient being equal to the square of this singular value, and thus boosts signal power by a factor between $\max\{ N_t,N_r \}$ and $N_t N_r$. With an appropriate application of (\ref{eq-tc_approx_fading}), this implies that the TC scales as \cite{HunAnd2008a}:
\begin{eqnarray}
O(\max\{N_t,N_r\}^{\frac{2}{\alpha}}) \leq c(\epsilon) \leq O((N_t N_r)^{\frac{2}{\alpha}}) \mbox{ as } N_t,N_r \to \infty.
\end{eqnarray}
The upper bound is tight for channels with high spatial correlation, while the lower bound is tight for i.i.d. Rayleigh fading. Note that $N_t=1, N_r > 1$ and $N_t > 1, N_r=1$ correspond to maximum-ratio combining (MRC) and maximum-ratio transmission (MRT), respectively.
Orthogonal space-time block coding (OSTBC) is another diversity technique. OSTBC, which intuitively corresponds to repeating each information symbol from different antennas at different times, does not change the transmitted symbol rate but significantly increases received signal power.\footnote{For some combinations of $N_t$ and $N_r$ OSTBCs either lose orthogonality, or reduce the data rate slightly. The results here make the optimistic assumption of rate 1 orthogonal STBCs for general $N_t, N_r$.} However, interference power is also boosted and as a result OSTBCs increase the TC scaling only as $c(\epsilon) = O(N_r^{\frac{2}{\alpha}})$ \cite{HunAnd2008a}. OSTBCs have very little effect on TC -- the scaling gain is due to MRC at the receiver, independent of the code.
\subsection{\label{ssec:cancel} Spatial Interference Cancellation}
If the receiver also has knowledge of the interferer channels, the $N_r$-dimensional RX weight vector can be used to cancel interference. In the single-transmit, multi-receive antenna setting with spatially uncorrelated Rayleigh fading, choosing the RX weight vector orthogonal to the vector channels of the strongest $N_r - 1$ interferers (i.e., ${\bf r}_0 \perp {\bf H}_1, \ldots, {\bf H}_{N_r - 1}$) results in $O( {N_r}^{1 - \frac{2}{\alpha}} )$ TC scaling \cite{HuaAndSub}. An even larger TC increase is obtained if the RX vector is designed to cancel interference and reap diversity. In particular, using about half the RX degrees of freedom for cancellation and the remainder for diversity (i.e., choosing ${\bf r}_0$ as the projection of vector ${\bf H}_0$ on the nullspace of ${\bf H}_1, \ldots, {\bf H}_{N_r/2}$ ) leads to $O(N_r)$ TC scaling \cite{JinAnd2009}.\footnote{Both of these scaling results are obtained using the OP upper bounding techniques described in \S \ref{ssec:UB1}.} In fact, the SIR is maximized, and thus the benefits of interference cancellation and diversity are optimally balanced, if the RX vector is chosen according to the MMSE criterion: ${\bf r}_0 = \left( \sum_{i \in \Pi(\lambda)} |X_i|^{-\alpha} {\bf H}_i {\bf H}_i^{\dagger} \right)^{-1} {\bf H}_0$. The MMSE filter is generally quite difficult to deal with analytically, although large-system results are derived using random matrix theory in \cite{GovBil2007}.
In Fig. \ref{fig:diversity} the TC of diversity (beamforming and OSTBC) and interference cancellation are plotted versus the number of antennas ($N$) for $\alpha=4$ and $\beta=1$. All of the techniques except OSTBC provide significant gains, but the combination of interference cancellation and diversity clearly provides the largest TC, as predicted by the TC scaling results.
\subsection{\label{ssec:mux} Spatial Multiplexing}
The most aggressive use of the antennas is to use them to form up to $L \leq \min \{N_t, N_r \}$ parallel spatial channels. If the transmitter has knowledge of the channel matrix ${\bf H}_0$, this corresponds to beamforming along the eigenmodes of the channel. The achieved SINR for each spatial channel depends on the eigenvalues of the channel matrix as well as the interference power, so some channels are much better than others. When subject to an SINR target and an outage constraint, it is preferable to transmit only a small number of streams ($L \ll N$) unless the network is very sparse. This is illustrated in Fig. \ref{fig:MIMO} where the optimized number of spatial streams
(as determined in \cite{HunAnd2008b}) is plotted versus the interferer density and this quantity is seen to decrease from $N$ to $1$ with
the density. Ideally, the number of spatial channels can be adapted dynamically based on the channel and interference strengths to maximize the quantity $L c(\epsilon,L)$, which is the area spectral efficiency (ASE) shown in Fig. \ref{fig:MIMO}, and has a unique maximum \cite{HunAnd2008b}. Here $c(\epsilon, L)$ is the TC with target OP $\epsilon$ when $L$ antennas are employed. If each TX wishes to communicate with multiple receivers, \textit{multi-user} MIMO techniques can be used to send separate data streams to each receiver. In the situation where each transmitter and receiver has $N$ antennas, the TC has been shown to increase \textit{super-linearly} with $N$ when dirty paper coding, the optimal multi-user MIMO technique, is used \cite{KouAnd2009a}.
If the transmitter does not know channel matrix ${\bf H}_0$, spatial multiplexing is generally performed by transmitting independent data streams from each transmit antenna. The OP and TC for low-complexity (and sub-optimal) MRC and zero-forcing receivers are known \cite{LouMcK2009}, but many important questions remain unanswered on this topic, e.g., performance with optimal MIMO receivers.
\section{\label{sec:future} Current Limitations and Future Directions}
Although the results presented in this paper have illustrated the value of the transmission capacity framework, they have also failed to capture two important aspects of {\em ad hoc} networks. The first is that they are for a snapshot, or single-hop, of the network. This may be acceptable for unlicensed spectrum analysis or other decentralized networks, but {\em ad hoc} networks must route traffic from source to destination, often over multiple hops through intermediate nodes. A network with higher single-hop TC should be able to achieve higher end-to-end capacity than a network with smaller TC because more simultaneous transmissions are possible. However, important issues such as desired hop length, number of hops, multi-hop routes, and end-to-end delay are not presently addressed. In addition, noise should not be neglected since a principal function of multihop is to increase the SNR for each hop. Some work that attempts to use the results of this paper (or similar results) to address multihop includes \cite{BacBla2006}, where a metric called \emph{expected forward progress} is introduced and used to find the optimum split between transmitters and receivers (potential relays) in terms of the Aloha contention probability. Recently, \cite{StaRos2009} has developed a multihop model and found an end-to-end delay-optimizing strategy in a Poisson field of interference (without noise), while \cite{AndWeb09} finds the end-to-end transmission capacity in closed form (i.e., transport capacity) with noise under a few restrictive assumptions like equi-distant relays and independent retransmissions. Clearly, this is a line of work that should be pursued and improved upon in the coming years.
The second lacking aspect of the current results is that they rely on a homogeneous Poisson distribution of nodes for tractability, which accurately models only uncoordinated transmissions ({\em e.g.}, Aloha). A well known alternative is to schedule simultaneous transmissions with the objective of controlling interference levels. Local scheduling mechanisms generally space out simultaneous transmissions, thereby significantly changing the interference distribution, while idealized centralized scheduling can eliminate outages altogether and determine the optimal set of transmitters in each slot ({\em e.g.}, max-weight scheduling within the backpressure paradigm \cite{GeoNee2006}). Preliminary work in this direction includes computing the outage probability and transmission capacity under non-Poisson point processes \cite{HaeGanNOW,GanHae09,YanDeV2007}. Although scheduling mechanisms provide obvious gains, these come at the cost of overhead ({\em e.g.}, control messages). Thus, a general open question is understanding the tradeoff between the benefits and overhead costs of different scheduling/routing mechanisms (Aloha is a particular point on this tradeoff curve), and determining the appropriate techniques for different network settings. Furthermore, a fundamental property that applies even to scheduled systems is that transmissions occupy space whenever interference is treated as noise; the transmission capacity provides a clean characterization of this space, and thus many of the insights apply, in principle, to scheduled systems as well.
As is true of any complicated research topic, discussion of a particular model or framework exposes tension between analytical tractability and accuracy/generality. The transmission capacity framework clearly leans towards simplicity and tractability, but nonetheless provides valuable design insight and a launching point for more refined, less tractable network analyses.
\section{Acknowledgements}
The authors appreciate the technical contributions of G. de Veciana, A. Hasan, A. Hunter, X. Yang, and K. Huang, which led to some of the results summarized here. They also are grateful for feedback from M. Haenggi.
\bibliographystyle{IEEEtran}
\section{Introduction}
Random ferns is a machine learning algorithm first proposed by Özuysal et al.\ \cite{Ozuysal2007} for patch matching, as a faster and simpler alternative to the Random forest algorithm.
Since its introduction, it has been used in numerous computer vision applications, like image recognition \cite{Bosch2007}, action recognition \cite{Oshin2009} or augmented reality \cite{Wagner2010}.
However, it has not gathered attention outside this field; this work aims to bring the algorithm to a much wider spectrum of applications by adapting it to use general information systems composed of continuous and categorical attributes.
Moreover, by further exploiting Random ferns' ensemble structure, I have modified the algorithm so that it can produce an out-of-bag (OOB) error approximation and an attribute importance measure.
Those modifications are described in Section~\ref{sec:algo}, along with a brief derivation of the Random ferns algorithm.
I have implemented this version of the Random ferns algorithm as an \lang{R} \cite{R} package \pkg{rFerns}; Section~\ref{sec:bench} contains benchmarks of \pkg{rFerns} accuracy and the quality of error approximation and importance measure in several well known machine learning problems.
Those results and the computational performance of the algorithm are compared with the Random forest implementation contained in the \pkg{randomForest} package \cite{Liaw2002}.
The conclusions are given in Section~\ref{sec:conc}.
\section{\label{sec:algo} Algorithm}
For simplicity, let's consider a classification problem based on an information system $(X_{i,j},Y_i)$ with $M$ binary attributes $X_{\cdot,j}$ and $N$ objects equally distributed over $C$ disjoint classes.
The generic \textit{Maximum a Posteriori} (MAP) Bayes classifier classifies the sample $X_{i,\cdot}$ as
\begin{equation}
Y^{p}_i=\arg\max_y P(Y_i=y|X_{i,1},X_{i,2},\ldots,X_{i,M});
\end{equation}
because of the Bayes theorem and the assumed equal class priors, this is equal to
\begin{equation}
Y^{p}_i=\arg\max_y P(X_{i,1},X_{i,2},\ldots,X_{i,M}|Y_i=y).
\end{equation}
This formula is exact; however, it is not practically usable due to a combinatorial explosion of the number of possible $X_{i,\cdot}$ value combinations.
The simplest solution to this problem is to assume complete independence of the attributes, which brings us to the Naïve Bayes classification where
\begin{equation}
Y^{p}_i=\arg\max_y \prod_j P(X_{i,j}|Y_i=y).
\end{equation}
The original Random ferns classifier \cite{Ozuysal2007} was an in-between solution, defining a series of $K$ subsets of $D$ features ($\vec{j}_k$, $k=1,\ldots,K$) treated using a corresponding series of simple exact classifiers (ferns), whose predictions are assumed independent and thus combined in a naïve way, i.e.
\begin{equation}
Y^{p}_i=\arg\max_y \prod_k P(X_{i,\vec{j}_k}|Y_i=y);
\end{equation}
this way one can still represent some non-linear interactions in the system, possibly achieving better accuracy than in the purely naïve case.
On the other hand, a classifier defined this way is still very simple and computationally manageable for a range of $D$ values.
$\vec{j}_k$ are usually generated randomly, to remove the impact of the arbitrary order of attributes in the information system; thus $K$ is considered a parameter of the training procedure.
In the original implementation of Random ferns, the probabilities $P_{i,\vec{j}_k}(y):=P(X_{i,\vec{j}_k}|Y_i=y)$ are estimated on a training system $(X^t_{i,j},Y^t_i)$ by counting the frequency of each class in each subspace of attribute space defined by $\vec{j}_k$ with a Dirichlet prior, so
\begin{equation}
P_{i,\vec{j}_k}(y)=\frac{1}{\#L_{i,\vec{j}_k}+C}\left(1+\#\left\{m\in L_{i,\vec{j}_k}:Y^t_m=y\right\} \right),
\end{equation}
where $\#$ denotes set size and
\begin{equation}
L_{i,\vec{j}_k}=\left\{l\in [1;N^t]:X^t_{l,\vec{j}_k}=X_{i,\vec{j}_k}\right\}
\label{eq:classLeaf}
\end{equation}
is the set of objects in the same leaf of fern $k$ as object $i$.
In order to prevent numerical problems, instead of $P_{i,\vec{j}_k}(y)$ one can work on scores $S_{i,\vec{j}_k}(y)$ defined as
\begin{equation}
S_{i,\vec{j}_k}(y):=\log P_{i,\vec{j}_k}(y)+\log C.
\end{equation}
This representation is monotonic, so the MAP rule persists as a selection of the class with the maximal sum of scores,
\begin{equation}
Y^{p}_i=\arg\max_y \prod_k P_{i,\vec{j}_k}(y)=\arg\max_y \sum_k S_{i,\vec{j}_k}(y);
\end{equation}
moreover, a fern which knows nothing about some object gives it a score vector of $0$s instead of $C^{-1}$s, which greatly reduces the magnitude of the final votes.
Furthermore, by using additive scores we can move from the Bayesian legacy of Random ferns to an ensemble-of-classifiers language --- in this view we have a random subspace \cite{Ho1998} battery of classifiers combined by mean voting.
Going further in that direction, we can replace the random subspace ensemble with bagging, i.e.\ by building each fern on a set of objects built by sampling with replacement $N^t$ objects from the original training system, thus changing Equation~\ref{eq:classLeaf} into
\begin{equation}
L_{i,\vec{j}_k}=\left\{l\in B_k:X^t_{l,\vec{j}_k}=X_{i,\vec{j}_k}\right\},
\end{equation}
where $B_k$ is the set of indexes of objects sampled for training, called \textit{bag}.
This way, each fern has a set of on average $0.368N^t$ objects which were not used to build it; it is usually called the \textit{out-of-bag} (OOB) subset and will be denoted here as $B_k^\ast$.
This setup allows one to carry over the out-of-bag error approximation and the permutational importance measure known from Random forest.
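To make the construction above concrete, below is a minimal Python sketch of a bagged Random ferns classifier on binary attributes; it is only an illustration of the formulas above (not the \pkg{rFerns} implementation), it omits the class balancing discussed later, and the toy data and constants are assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def train(X, Y, K=50, D=5, C=None):
    N, M = X.shape
    C = C or int(Y.max()) + 1
    ferns = []
    for _ in range(K):
        feats = rng.choice(M, size=D, replace=False)    # j_k
        bag = rng.integers(N, size=N)                   # bag B_k
        leaves = X[bag][:, feats] @ (1 << np.arange(D)) # leaf indices
        counts = np.zeros((2**D, C))
        np.add.at(counts, (leaves, Y[bag]), 1)
        # Dirichlet-smoothed leaf scores: log P + log C.
        scores = (np.log((1 + counts)
                         / (C + counts.sum(axis=1, keepdims=True)))
                  + np.log(C))
        ferns.append((feats, scores))
    return ferns

def predict(ferns, X):
    total = 0.0
    for feats, scores in ferns:
        leaves = X[:, feats] @ (1 << np.arange(len(feats)))
        total = total + scores[leaves]
    return total.argmax(axis=1)          # MAP rule: maximal score sum

# Toy usage: the class is the parity of the first two attributes.
X = rng.integers(0, 2, size=(300, 10))
Y = (X[:, 0] ^ X[:, 1]).astype(int)
print(np.mean(predict(train(X, Y), X) == Y))
\end{verbatim}
On such toy parity data, the score sums of the ferns that happen to contain both relevant attributes dominate the vote, which illustrates how ferns capture non-linear interactions that a purely naïve classifier misses.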
\subsection{Generalisation beyond binary attributes}
Both continuous and categorical attributes can be exactly represented as a series of binary attributes, thus any information system containing those type of features is equivalent to a larger binary information system.
We can avoid explicitly building it before training by generating the effective binary attributes on the fly.
Namely, when a continuous attribute $X_{\cdot,i}$ is selected for the creation of a fern level, a random threshold $\xi$ is generated and this attribute is treated as a binary $(X_{\cdot,i}<\xi)$.
Correspondingly, for a categorical attribute $X_{\cdot,j}$ a random subset of levels $\Xi$ is generated and this attribute is treated as $(X_{\cdot,j}\in\Xi)$.
In my implementation, I have used a heuristic to select $\xi$ as the mean of two randomly selected values of $X_{\cdot,i}$; $\Xi$ is drawn uniformly from all possible subsets except the empty set and the set containing all levels.
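A sketch of these two heuristics in isolation follows (Python; the function names are mine, introduced only for illustration).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def binarize_continuous(x):
    xi = rng.choice(x, size=2).mean()      # threshold: mean of two values
    return x < xi

def binarize_categorical(x, levels):
    while True:
        mask = rng.integers(0, 2, size=len(levels)).astype(bool)
        if mask.any() and not mask.all():  # exclude empty and full subsets
            break
    Xi = np.asarray(levels)[mask]
    return np.isin(x, Xi)

print(binarize_continuous(rng.normal(size=10)))
print(binarize_categorical(rng.choice(["a", "b", "c"], size=10),
                           ["a", "b", "c"]))
\end{verbatim}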
Obviously, completely random generation of splits is usually less effective in terms of the accuracy of a final classifier than selecting thresholds in a greedy way via some optimisation procedure (for instance based on information gain or Gini impurity).
Neither is it a significantly more computationally effective approach, because such optimisation may be implemented with complexity even linear in the number of objects.
However, this way the classifier can escape certain overfitting scenarios and unveil more subtle interactions.
This and the more even usage of attributes may be beneficial both for the robustness of the model and the accuracy of the importance measure it provides.
Because in this generalisation the scores depend on the $\xi$ and $\Xi$ values, from now on I will denote them as $S_{i,F_k}$, where $F_k=(\vec{j}_k,\vec{\xi}_k,\vec{\Xi}_k)$.
\subsection{Error estimate}
While each fern was built using only its in-bag objects, one can use the OOB subset to reliably test its classification performance.
Thus one can calculate an error approximation of the whole ensemble by comparing true classes of objects $Y_i$ with corresponding OOB predictions $Y^\ast_i$ defined as
\begin{equation}
Y^\ast_i=\arg\max_y \sum_{k:i\in B^\ast_k} S_{i,F_k}(y).
\end{equation}
Note that for small values of $K$ some objects may appear in every bag, and thus get an undefined OOB prediction.
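A self-contained sketch of the OOB machinery follows: during training, each fern adds its leaf scores only to the objects outside its bag, and the accumulated scores yield the predictions $Y^\ast_i$ (toy data and constants are illustrative assumptions).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 10))
Y = (X[:, 0] ^ X[:, 1]).astype(int)
N, M = X.shape
K, D, C = 100, 5, 2

oob_scores = np.zeros((N, C))
for _ in range(K):
    feats = rng.choice(M, size=D, replace=False)
    bag = rng.integers(N, size=N)
    oob = np.setdiff1d(np.arange(N), bag)          # B_k^* (out-of-bag)
    leaves = X[:, feats] @ (1 << np.arange(D))
    counts = np.zeros((2**D, C))
    np.add.at(counts, (leaves[bag], Y[bag]), 1)
    scores = (np.log((1 + counts)
                     / (C + counts.sum(axis=1, keepdims=True)))
              + np.log(C))
    oob_scores[oob] += scores[leaves[oob]]

print(np.mean(oob_scores.argmax(axis=1) != Y))     # OOB error estimate
\end{verbatim}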
\subsection{Importance measure}
The core idea behind the permutational importance score is to measure how the performance of the classification decreases due to a permutation of values within an investigated attribute.
Instead of a simple approach of measuring the misclassification rate, I have postulated a measure that will provide information even in cases when the object is misclassified by the unperturbed classifier, namely the difference in the OOB score of the correct class of an object.
Precisely, the importance of attribute $a$ equals
\begin{equation}
I_a=\left\langle \left\langle S_{i,F_k}(Y_i)- S^p_{i,F_k}(Y_i)\right\rangle_{i\in B^\ast_k} \right\rangle_{ k:a\in\vec{j}_k},
\end{equation}
where $S^p_{i,F_k}$ is $S_{i,F_k}$ calculated on a permuted $X$ in which values of attribute $a$ have been shuffled.
One should also note that the fully stochastic nature of selecting attributes for building individual ferns guarantees that the attribute space is evenly sampled and that all, even marginally relevant, attributes are included in the model for a large enough number of ferns.
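The following standalone sketch implements the measure directly from this definition: for every fern using attribute $a$, the OOB score of the true class is compared before and after shuffling that attribute (again only an illustration of the formula, not the \pkg{rFerns} code; shuffling is done within the OOB subset).
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(300, 10))
Y = (X[:, 0] ^ X[:, 1]).astype(int)
N, M = X.shape
K, D, C = 200, 5, 2
powers = 1 << np.arange(D)

imp, used = np.zeros(M), np.zeros(M)
for _ in range(K):
    feats = rng.choice(M, size=D, replace=False)
    bag = rng.integers(N, size=N)
    oob = np.setdiff1d(np.arange(N), bag)
    counts = np.zeros((2**D, C))
    np.add.at(counts, (X[bag][:, feats] @ powers, Y[bag]), 1)
    scores = (np.log((1 + counts)
                     / (C + counts.sum(axis=1, keepdims=True)))
              + np.log(C))
    base = scores[X[oob][:, feats] @ powers, Y[oob]]
    for a in feats:                        # attributes used by this fern
        Xp = X[oob][:, feats].copy()
        col = np.where(feats == a)[0][0]
        Xp[:, col] = rng.permutation(Xp[:, col])   # shuffle attribute a
        perm = scores[Xp @ powers, Y[oob]]
        imp[a] += np.mean(base - perm)
        used[a] += 1

print(imp / np.maximum(used, 1))   # attributes 0 and 1 should stand out
\end{verbatim}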
\subsection{Unbalanced classes case}
When the distribution of the classes in the training decision vector is not uniform, the classifier structure gets biased in recognition of the most represented classes.
While this phenomenon may lead to a better accuracy in sharply unbalanced cases, it is usually undesired because it strips the model of information about smaller classes.
Thus, \pkg{rFerns} internally balances the classes' impacts by dividing the counts of objects of a certain class in a current leaf by the fraction of objects of that class in the bag of the current fern --- this is roughly equivalent to oversampling underrepresented classes so that the amounts of objects of each class are equal within the bag.
One should note that the knowledge of the class distribution prior $P(y)$ for the test data (for instance the assumption that it will be the same as in training data) can still be used to improve the prediction accuracy.
To this end, one must request the prediction code to return the raw scores and correct them by adding the logarithm of the respective priors, i.e.
\begin{equation}
S^{c}_{i,F_k}(y)=S_{i,F_k}(y)+\log P(y),
\end{equation}
and use so corrected scores in the MAP rule.
\section{\label{sec:bench} Benchmarks}
I have tested \pkg{rFerns} on 7 classification problems from the \lang{R}'s \pkg{mlbench} \cite{Leish2010} package, namely \textit{DNA} (\ti{dna}), \textit{Ionosphere} (\ti{ion}), \textit{Pima Indian Diabetes} (\ti{pim}), \textit{Satellite} (\ti{sat}), \textit{Sonar} (\ti{son}), \textit{Vehicle} (\ti{veh}) and \textit{Vowel} (\ti{vow}).
\subsection{Accuracy}
\begin{table}[h]
\begin{center}
\input{accTable.tex}
\caption{\label{tab:acc} OOB and cross-validation error of the Random ferns classifier for $5000$ ferns of a depth equal to $5$, $10$ and optimal over $[1;15]$, $D_b$. Those results are compared to the accuracy of a Random forest classifier composed of $5000$ trees. Prediction errors are given as a mean and standard deviation over 10 repetitions of training for OOB and 10 iterations for cross-validation.}
\end{center}
\end{table}
For each of the testing sets, I have built $10$ Random ferns models for each depth in the range $[1;15]$, with the number of ferns equal to $5000$, and collected the OOB error approximations.
Next, I have used those results to find the optimal depth for each set ($D_b$) --- for simplicity, I selected the value for which the mean OOB error over all iterations was minimal.
Finally, I have verified the error approximation by running 10-fold stochastic cross-validation.
Namely, the set was randomly split into test and training subsets, composed respectively of $10\%$ and $90\%$ of objects; the classifier was then trained on the training subset and its performance was assessed using the test set.
This procedure was repeated ten times.
As a comparison, I have also built and cross-validated $10$ Random forest models with $5000$ trees.
The ensemble size was selected so that both algorithms would manage to converge for all problems.
The results of those tests are collected in Table~\ref{tab:acc}.
One can see that, as in the case of Random forest, the OOB error approximation is a good estimate of the final classifier error.
It also serves well as an optimisation target for the fern depth selection --- only in the case of the Sonar data did the naïve selection of the depth giving minimal OOB error lead to a suboptimal final classifier; however, one should note that the minimum was not significant in this case.
Based on the OOB approximations, forest outperforms ferns in all but one case; yet the results of cross-validation show that those differences are in practice masked by the natural variability of both classifiers.
Only in the case of the Satellite data does Random forest clearly give an almost two times smaller error.
\subsection{Importance}
To test the importance measure, I have used two sets for which the importance of attributes should follow a certain pattern.
\begin{figure}[ht]
\centering
\includegraphics[width=\textwidth]{impDNA}
\caption{\label{fig:impDNA} Attribute importance for \ti{DNA} data, generated by Random ferns (\textit{top}) and, for comparison, by Random forest (\textit{bottom}). Note that the importance peaks around 90th attribute, corresponding to an actual splicing site.}
\end{figure}
\begin{figure}[p]
\centering
\includegraphics[width=\textwidth]{impSON}
\caption{\label{fig:impSon} Importance measure for \ti{Sonar} data, generated by Random ferns (\textit{top right}) and, for comparison, by Random forest (\textit{bottom right}). Those two measures are compared on a scatterplot (\textit{left}).}
\end{figure}
\begin{table}[p]
\begin{center}
\input{timTable.tex}
\caption{\label{tab:tim} Training times of the \pkg{rFerns} and \pkg{randomForest} models made for $5000$ base classifiers, with and without importance calculation. Times are given as a mean over 10 repetitions.}
\end{center}
\end{table}
Each object in the \ti{DNA} set \cite{Noordewier1991} represents a 60-residue DNA sequence, encoded so that each consecutive triple of attributes corresponds to one residue.
Some of the sequences contain a boundary between exon and intron (or intron and exon\footnote{The direction of a DNA sequence is significant, so those are separate classes.}) and the objective is to recognise and classify those sequences.
All sequences were aligned in a way that the boundary always lies between $30$th and $31$st residue; while the biological process of recognition is local, the most important attributes should be those in vicinity of the boundary.
Objects in the \ti{Sonar} set \cite{Gorman1988} correspond to echoes of a sonar signal bounced off either a rock or a metal cylinder (a model of a mine).
They are represented as power spectra, thus consecutive attributes correspond to consecutive frequency intervals.
This way, one may expect that there exist frequency bands in which the echoes significantly differ between classes --- and this would manifest itself as a set of peaks in the importance measure vector.
For both of these sets, I have calculated the importance measure using $1000$ ferns of depth $10$.
As a baseline, I have also calculated the permutational importance measure using the Random forest algorithm with $1000$ trees.
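In code, these two importance runs take roughly the following form; the \texttt{importance} argument and the \texttt{\$importance} element of the \pkg{rFerns} model object are again assumptions about the package interface, while the \pkg{randomForest} calls are standard.
\begin{verbatim}
## Importance from 1000 ferns of depth 10 (assumed interface).
impFerns <- rFerns(X, Y, ferns = 1000, depth = 10,
                   importance = TRUE)$importance

## Permutational importance from 1000 trees; type = 1
## selects the mean decrease of accuracy measure.
impRF <- importance(randomForest(X, Y, ntree = 1000,
                                 importance = TRUE), type = 1)
\end{verbatim}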
The results are presented in Figures~\ref{fig:impDNA} and~\ref{fig:impSon}.
The importance measures obtained in both cases are consistent with the expectations based on the sets' structures --- for DNA, one can notice a maximum around attributes $90$--$96$, corresponding to the actual splicing site.
For Sonar, the importance scores reveal a band structure which likely corresponds to the actual frequency intervals in which the echoes differ between rock and metal.
Both results are also qualitatively in agreement with those obtained from the Random forest models.
The quantitative differences come from the completely different formulations of both measures and possibly from the higher sensitivity of ferns resulting from their fully stochastic construction.
\subsection{Computational performance}
In order to compare the training times of the \pkg{rFerns} and \pkg{randomForest} codes, I have trained both models on all $7$ benchmark sets for $5000$ ferns/trees and, in the case of ferns, for depths $10$ and $D_b$.
Then I have repeated this procedure, this time making both algorithms calculate the importance measure during training.
I have repeated both tests $10$ times to stabilise the results and collected the mean execution times; the results are shown in Table~\ref{tab:tim}.
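For completeness, a single repetition of such a timing run can be expressed with the standard \texttt{system.time} function, using the same (assumed) model calls as above.
\begin{verbatim}
## Wall-clock training times, without and with the
## importance calculation (one repetition each).
system.time(rFerns(X, Y, ferns = 5000, depth = 10))
system.time(rFerns(X, Y, ferns = 5000, depth = 10,
                   importance = TRUE))
system.time(randomForest(X, Y, ntree = 5000))
system.time(randomForest(X, Y, ntree = 5000,
                         importance = TRUE))
\end{verbatim}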
The results show that the usage of \pkg{rFerns} may result in significant speedups in certain applications; the best speedups are achieved for sets with a larger number of objects, which is caused by the fact that the Random ferns' training time scales linearly with the number of objects $N$, while that of Random forest scales as $N\log N$.
The importance measure can also be calculated significantly faster by \pkg{rFerns} than by \pkg{randomForest}, and the gain increases with the size of the set.
\pkg{rFerns} is least effective for sets which require large fern depths --- in the case of the Vowel and Vehicle sets it was even slower than Random forest.
However, one should note that while the complexity of Random ferns scales as $2^D$, its accuracy usually decreases much less dramatically when $D$ is decreased from its optimal value --- this way one may expect an effective trade-off between speed and accuracy.
\section{\label{sec:conc} Conclusions}
In this paper, I have presented \pkg{rFerns}, a general-purpose implementation of the Random ferns classifier.
Slight modifications of the original algorithm allowed me to additionally implement OOB error approximation and permutational attribute importance measure.
Benchmarks showed that such an algorithm can achieve accuracies comparable to those of the Random forest algorithm while usually being much faster, especially for large datasets.
The importance measure proposed in this paper can also be calculated very quickly, and it proved to behave in the desired way and to agree with the results of Random forest; however, an in-depth assessment of its quality and usability for feature selection and similar problems requires further research.
\section*{Acknowledgements}
This work has been financed by the National Science Centre, grant 2011/01/N/ST6/07035. Computations were performed at ICM, grant G48-6.
In this note we study the Bass $\Nil$-groups (see \cite[Chap.~XII]{bass1})
$$\NK_n(R G) = \ker(K_n(R G[t]) \to K_n(R G)),$$
where $R$ is an associative ring with unit, $G$ is a finite group, $n \in \mathbf Z$, and the augmentation map sends $t\mapsto 0$. Alternately, the isomorphism
$$\NK_n(RG) \cong \widetilde K_{n-1}(\NIL(RG))$$
identifies the Bass $\Nil$-groups with the $K$-theory of the
category $\NIL(RG)$ of nilpotent endomorphisms $(Q, f)$ on finitely-generated projective $RG$-modules \cite[Chap.~XII]{bass1}, \cite[Theorem 2]{grayson2}. Farrell \cite{farrell1} proved that $\NK_1(RG)$ is not finitely-generated as an abelian group whenever it is non-zero, and the corresponding result holds for $\NK_n(RG)$, $n\in \mathbf Z$ (see \cite[4.1]{weibel1}), so some organizing principle is needed to better understand the structure of the $\Nil$-groups. Our approach is via induction theory.
The functors $\NK_n$ are Mackey functors on the subgroups of $G$, and we ask to what extent they can be computed from the $\Nil$-groups of proper subgroups of $G$.
The Bass-Heller-Swan formula \cite[Chap.~XII, \S 7]{bass1}, \cite[p.~236]{grayson1} relates the Bass $\Nil$-groups with the $K$-theory of the infinite group $G \times \mathbf Z$.
There are two (split) surjective maps
$$N_{\pm} \colon K_n(R[G \times \mathbf Z]) \to \NK_n(R G)$$
which form part of the Bass-Heller-Swan direct sum decomposition
$$ K_n(R[G \times \mathbf Z])=K_n(R G)\oplus K_{n-1}(R G)\oplus \NK_n(R G)\oplus \NK_n(R G).$$
Notice that both $K_n(R[(-)\times \mathbf Z])$ and $\NK_n(R[-])$ are Mackey functors on the subgroups of $G$ (see Farrell-Hsiang \cite[\S 2]{farrell-hsiang2} for this observation about infinite groups).
We observe that the maps $N_{\pm}$ are actually natural transformations of Mackey functors (see Section \ref{seven}). It follows from Dress induction \cite{dress2} that the functors $\NK_n(RG)$ and the maps $N_{\pm}$ are computable from the hyperelementary family (see Section \ref{four}, and Harmon \cite[Cor.~4]{harmon1} for the case $n=1$).
We will show how the results of Farrell \cite{farrell1} and techniques of Farrell-Hsiang \cite{farrell-hsiang3} lead to a better generation statement for the Bass $\Nil$-groups.
\smallskip
We need some notation to state the main result.
For each prime $p$, we denote by $\mathfrak P_p(G)$ the set of finite $p$-subgroups of $G$, and by $\mathfrak E_p(G)$ the set of $p$-elementary subgroups of $G$. Recall that a $p$-elementary group has the form $E=C \times P$, where $P$ is a finite $p$-group, and $C$ is a finite cyclic group of order prime to $p$. For each element $g\in C$, we let
$$I(g) = \{ k \in \mathbb N \, | \, \text{\ a\ prime\ } q \text{\ divides\ } k \Rightarrow q \text{\ divides\ } |g|\}$$
where $|g|$ denotes the order of $g$. For each $P \in \mathfrak P_p(G)$, let
$$C_G^\perp(P) = \{ g\in G \, | \, gx=xg, \forall x \in P,\text{\ and\ } p\nmid |g|\}$$
and for each $g\in C_G^\perp(P)$ we define a functor
$$\phi(P,g)\colon \NIL(RP) \to \NIL(RG)$$
by sending a nilpotent $RP$-endomorphism $f\colon Q\to Q$ of a finitely-generated projective $RP$-module $Q$ to the nilpotent $RG$-endomorphism
$$RG\otimes_{RP} Q \to RG\otimes_{RP} Q, \qquad x\otimes q \mapsto xg\otimes f(q)\ .$$
Note that this $RG$-endomorphism is well-defined since $g \in C_G^\perp(P)$.
The functor $\phi(P,g)$ induces a homomorphism $$\phi(P,g)\colon \NK_n(RP) \to \NK_n(RG)$$ for each $n\in \mathbf Z$.
For each $p$-subgroup $P$ in $G$, define a homomorphism
$$\Phi_P\colon \NK_n(RP) \to \NK_n(RG)$$
by the formula
$$\Phi_P = \sum_{{g\in C_G^\perp(P), \ k \in I(g)}} {\hskip -6pt}V_k \circ \phi(P,g),$$
where
$$V_k \colon \NK_n(RG) \to \NK_n(RG)$$
denotes the Verschiebung homomorphism, $k \geq 1$, recalled in more detail in Section \ref{two}.
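As a concrete illustration of these definitions (immediate from the above): if $|g| = 6$, then $I(g) = \{1, 2, 3, 4, 6, 8, 9, 12, \dots\}$ consists of all integers of the form $2^a3^b$ with $a, b \geq 0$, whereas $I(1) = \{1\}$; since $V_1 = \operatorname{id}$, the $g = 1$ term of $\Phi_P$ is just the ordinary induction map $\phi(P,1)$.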
\begin{thma}
Let $R$ be an associative ring with unit, and $G$ be a finite group. For each prime $p$, the map
$$\Phi = (\Phi_P)\colon \bigoplus_{P \in \mathfrak P_p(G)} \NK_n(RP)_{(p)} \to \NK_n(RG)_{(p)}$$
is surjective for all $n\in \mathbf Z$, after localizing at $p$.
\end{thma}
For every $g\in C_G^\perp(P)$, the homomorphism $\phi(P,g)$ factorizes as
$$\NK_n(RP) \xrightarrow{\phi(P,g)} \NK_n(R[C \times P]) \xrightarrow{i_*} \NK_n(RG)$$
where $C = \langle g \rangle$ and $i\colon C\times P \to G$ is the inclusion map. Since the Verschiebung homomorphisms are natural with respect to the maps induced by group homomorphisms, we obtain:
\begin{corb}
The sum of the induction maps
$$\bigoplus_{E\in \mathfrak E_p(G)} \NK_n(RE)_{(p)} \to \NK_n(RG)_{(p)}$$
from $p$-elementary subgroups is surjective, for all $n\in \mathbf Z$.
\end{corb}
Note that Theorem A does not show that $\NK_n(RG)$ is generated by induction from $p$-groups, because the maps $\phi(P,g)$ for $g\neq 1$ are not the usual maps induced by inclusion $P \subset G$.
\smallskip
The Bass $\Nil$-groups are non-finitely generated torsion groups (if non-zero) so they remain difficult to calculate explicitly, but we have some new qualitative results. For example, Theorem A shows that the order of every element of $\NK_n(R G)$ is some power of $m=|G|$, whenever $\NK_n(R) =0$ (since its $q$-localization is zero for all $q\nmid m$).
For $R = \mathbf Z$ and some related rings, this is a result of Weibel~\cite[(6.5), p.~490]{weibel1}.
In particular, we know that every element of $\NK _n(\mathbf Z P)$ has $p$-primary order, for every finite $p$-group $P$.
If $R$ is a regular (noetherian) ring (e.g.~$R=\mathbf Z$), then $\NK_n(R) = 0$ for all $n\in \mathbf Z$.
Note that an exponent that holds uniformly for all elements in $\NK_n(RP)$, over all $p$-groups of $G$, will be an exponent for $\NK_n(RG)$. As a special case, we have
$\NK _n(\mathbf Z[\mathbf Z/p])=0$ for $n \le 1$, for $p$ a prime (see
Bass-Murthy \cite{bass-murthy1}), so Theorem A
implies:
\begin{corollary} \label{cor:vanishing_of_NK_n(ZG)_for_n_le_1}
Let $G$ be a finite group and let $p$ be a prime. Suppose that $p^2$ does not divide the
order of $G$. Then
$$\NK _n(\mathbf Z G)_{(p)} = 0$$
for $n \le 1$.
\end{corollary}
As an application, we get a new proof of the fact that $\NK _n(\mathbf Z G) = 0$, for $n \le 1$, if the order of $G$ is square-free (see Harmon \cite{harmon1}).
We also get from Theorem A an improved estimate on the exponent of $\NK _0(\mathbf Z G)$, using a result of Connolly-da Silva \cite{connolly-dasilva1}. If $n$ is a positive integer, and $n_q=q^k$ is its $q$-primary part, then let $c_q(n) = q^l$, where $l$ is the least integer with $l \geq \log_q(kn)$.
According to \cite{connolly-dasilva1}, the exponent of $\NK_0(\mathbf Z G)$ divides
$$c(n) = \prod_{q\mid n} c_q(n), \quad \text{where\ } n = |G|,$$
but according to Theorem A, the exponent of $\NK_0(\mathbf Z G)$ divides
$$d(n) = \prod_{q\mid n} c(n_q)\ .$$
For example, $c(60) = 1296000$, but $d(60) = 120$.
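For the reader who wishes to check this arithmetic: taking in each factor $c_q(n) = q^l$ the least admissible exponent $l = \lceil \log_q(kn) \rceil$ and writing $60 = 2^2\cdot 3\cdot 5$, we get
$$c(60) = 2^{\lceil \log_2 120 \rceil}\cdot 3^{\lceil \log_3 60 \rceil}\cdot 5^{\lceil \log_5 60 \rceil} = 2^7\cdot 3^4\cdot 5^3 = 1296000,$$
while
$$d(60) = c(4)\,c(3)\,c(5) = 2^{\lceil \log_2 8 \rceil}\cdot 3^{\lceil \log_3 3 \rceil}\cdot 5^{\lceil \log_5 5 \rceil} = 2^3\cdot 3\cdot 5 = 120.$$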
\medskip
\noindent
{\textbf{Acknowledgement}}: This paper is dedicated to Tom Farrell and Lowell Jones, whose work in geometric topology has been a constant inspiration. We are also indebted to Frank Quinn, who suggested that the Farrell-Hsiang induction techniques should be re-examined (see \cite{quinn1}).
\section{Bass $\Nil$-groups}\label{two}
Standard constructions in algebraic $K$-theory for exact categories or Waldhausen
categories yield only $K$-groups in degrees $n \ge 0$ (see Quillen \cite{quillen1},
Waldhausen~\cite{waldhausen1}). One approach on the categorial level to negative
$K$-groups has been developed by Pedersen-Weibel (see \cite{pedersen-weibel2}, \cite[\S 2.1]{hp2004}). Another
ring theoretic approach is given as follows (see
Bartels-L\"uck~\cite[Section~9]{bartels-lueck1}, Wagoner~\cite{wagoner1}). The
{\em cone ring} $\Lambda \bbZ$ of $\bbZ$ is the ring of column and row finite $\mathbb N \times
\mathbb N$-matrices over $\bbZ$, i.e., matrices such that every column and every row contains
only finitely many non-zero entries. The {\em suspension ring} $\Sigma \bbZ$ is the
quotient of $\Lambda \bbZ$ by the ideal of finite matrices. For an associative (but not necessarily commutative) ring $A$ with
unit, we define $\Lambda A = \Lambda \bbZ \otimes_{\bbZ} A$ and $\Sigma A = \Sigma
\bbZ \otimes_{\bbZ} A$. Obviously $\Lambda$ and $\Sigma$ are functors from the category
of rings to itself. There are identifications, natural in $A$,
\eqncount
\begin{eqnarray}
K_{n-1}(A) & = & K_n(\Sigma A) \label{K_n-1(A)_is_K_n(SigmaA)}
\\
\eqncount
\NK_{n-1}(A)& = & \NK_n(\Sigma A) \label{NK_n-1(A)_is_NK_n(SigmaA)}
\end{eqnarray}
for all $n \in \bbZ$. In our applications, usually $A = RG$ where $R$ is a ring with unit and $G$ is a finite group.
Using these identifications it is clear how to extend the definitions of certain maps between
$K$-groups given by exact functors to negative degrees. Moreover, we will
explain constructions and proofs of the commutativity of certain diagrams only for
$n \ge 1$, and will not explicitly mention that these carry over to all $n \in \bbZ$,
because of the identifications~\eqref{K_n-1(A)_is_K_n(SigmaA)} and
\eqref{NK_n-1(A)_is_NK_n(SigmaA)} and the obvious identification $\Sigma(RG) = (\Sigma
R)G$, or because of Pedersen-Weibel~\cite{pedersen-weibel2}.
We have a direct sum decomposition
\eqncount
\begin{eqnarray}K_n(A[t]) & = & K_n(A) \oplus \NK_n(A)
\label{K_n(A[t])_is_NK_n(A)_oplus_K_N(A)}
\end{eqnarray}
which is natural in $A$, using the inclusion $A \to A[t]$ and the ring map $A[t] \to A$ defined by $t\mapsto 0$.
Let $\FP(A)$ be the exact category of finitely generated projective $A$-modules, and let
$\NIL(A)$ be the exact category of nilpotent endomorphism of finitely generated projective
$A$-modules. The functor $\FP(A) \to \NIL(A)$ sending $Q\mapsto (Q,0)$ and the
functor $\NIL(A) \to \FP(A)$ sending $(Q,f)\mapsto Q$ are exact functors. They
yield a split injection on the $K$-groups $K_n(A):= K_n(\FP(A)) \to K_n(\NIL(A))$ for $n
\in \bbZ$. Denote by $\widetilde{K}_n(\NIL(A))$ the cokernel for $n \in \bbZ$. There is
an identification (see Grayson~\cite[Theorem~2]{grayson2})
\eqncount
\begin{eqnarray}
\widetilde{K}_{n-1}(\NIL(A)) & = & \NK_{n}(A),
\label{K_n(Nil(A))_is_NK_nplus1(A)}
\end{eqnarray}
for $n \in \bbZ$,
essentially given by the passage from a nilpotent $A$-endomorphism $(Q,f)$ to the $A\bbZ$-automorphism
$$A\bbZ \otimes_A Q \to A\bbZ \otimes_A Q, \quad u \otimes q \mapsto u \otimes q - ut
\otimes f(q),$$
for $t \in \bbZ$ a fixed generator.
The Bass $\Nil$-groups appear in the Bass-Heller-Swan decomposition for $n \in \bbZ$ (see \cite[Chapter~XII]{bass1}, \cite{bass-heller-swan1}, \cite[p.~236]{grayson1}, \cite[p.~38]{quillen1}, and \cite[Theorem~10.1]{swan3}
for the original sources, or the expositions in \cite[Theorems~3.3.3 and 5.3.30]{rosenberg1}, \cite[Theorem~9.8]{srinivas1}
).
\eqncount
\begin{eqnarray}
B \colon K_n(A) \oplus K_{n-1}(A) \oplus \NK_n(A) \oplus \NK_n(A)
\xrightarrow{\cong} K_n(A\bbZ).
\label{Bass-Heller-Swan_decomposition}
\end{eqnarray}
The isomorphism $B$ is natural in $A$ and comes from the localization sequence
\eqncount
\begin{multline}
0 \to K_n(A) \xrightarrow{K_n(i) \oplus -K_n(i)} K_n(A[t]) \oplus K_n(A[t])
\xrightarrow{K_n(j_+) + K_n(j_-)} K_n(A\bbZ)\\
\xrightarrow{\partial_n} K_{n-1}(A) \to 0
\label{localization_sequence}
\end{multline}
where $i \colon A \to A[t]$ is the obvious inclusion and the inclusion $j_{\pm} \colon
A[t] \to A\bbZ$ sends $t$ to $t^{\pm 1}$ if we write $A\bbZ = A[t,t^{-1}]$, the splitting
of $\partial_n$
\eqncount
\begin{eqnarray}
s_n \colon K_{n-1}(A) \to K_n(A\bbZ)
\label{splitting_s_n}
\end{eqnarray}
which is given by the natural pairing
\begin{eqnarray*}
K_{n-1}(A) \otimes K_1(\bbZ[t,t^{-1}]) \to K_n(A \otimes_{\bbZ}\bbZ[t,t^{-1}]) = K_n(A\bbZ)
\end{eqnarray*}
evaluated at the class of unit $t \in \bbZ[t,t^{-1}]$ in $K_1(\bbZ[t,t^{-1}])$, and the
canonical splitting~\eqref{K_n(A[t])_is_NK_n(A)_oplus_K_N(A)}.
Let $B$ be the direct sum of $K_n(j_+\circ i)$, $s_n$, and the restrictions of the maps $ K_n(j_+)$ and $K_n(j_{-})$ to $\NK_n(A)$.
In particular we get two homomorphisms, both natural in $A$, from the Bass-Heller-Swan
decomposition~\eqref{Bass-Heller-Swan_decomposition}\begin{eqnarray*}
i_n \colon \NK_n(A) & \to & K_n(A\bbZ)\\
r_n \colon K_n(A\bbZ) & \to & \NK_n(A),
\end{eqnarray*}
by focusing on the first copy of
$\NK_n(A)$,
such that $r_n \circ i_n$ is the identity on $\NK_n(A)$.
Let $\sigma_k \colon \bbZ \to
\bbZ$ be the injection given by $t\mapsto t^k$. We may consider the ring $A[t]$ as an $A[t]-A[t]$ bimodule with standard left action, and right action $a(t)\cdot b(t) = a(t)b(t^k)$ induced by $\sigma_k$.
This map induces an induction functor
$$\ind_k\colon \FP(A[t]) \to \FP(A[t])$$ defined by
$P \mapsto A[t]\otimes_{\sigma_k} P$. There is also a restriction functor $$\res_k\colon \FP(A[t]) \to \FP(A[t])$$ defined by
equipping $P$ with the new $A[t]$-module structure
$a(t)\cdot p = a(t^k)p$, for all $a(t) \in A[t]$ and all $p \in P$.
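For orientation, note what $\res_k$ does to the rank-one free module: with the new action $a(t)\cdot p = a(t^k)p$ one has the decomposition
$$\res_k A[t] \;=\; \bigoplus_{i=0}^{k-1} t^i\,A[t^k] \;\cong\; A[t]^{\oplus k}$$
with basis $1, t, \dots, t^{k-1}$; in particular $\res_k$ indeed takes $\FP(A[t])$ to $\FP(A[t])$.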
The
induction and
restriction functors yield two homomorphisms
\begin{eqnarray*}
\ind_{k} \colon K_n(A\bbZ) & \to & K_n(A\bbZ)
\\
\res_{k } \colon K_n(A\bbZ) & \to & K_n(A\bbZ)\ .
\end{eqnarray*}
See \cite{stienstra1} or \cite[p.~27]{quillen1} for more details.
There are also \emph{Verschiebung} and \emph{Frobenius} homomorphisms
\eqncount
\begin{eqnarray}
V_k, F_k \colon \NK_n(A) & \to & \NK_n(A)
\label{F_k_and_V_k}
\end{eqnarray}
induced on the $\Nil$-groups (and related to $\ind_k$ and $\res_k$ respectively).
The Frobenius homomorphism is induced by the functor
$\NIL(A) \to \NIL(A)$ sending
$$(f \colon Q \to Q) \mapsto (f^k \colon Q \to Q),$$
while the Verschiebung homomorphism is induced by the functor
$$\bigoplus_{i=1}^k Q \to \bigoplus_{i=1}^k Q,\quad (q_1, q_2, \ldots q_k) \mapsto
(f(q_k),q_1, q_2,\ldots, q_{k-1})\ .$$
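A standard reformulation, recorded here for later use: in block matrix form the Verschiebung functor sends $(Q,f)$ to $\bigl(Q^{\oplus k}, v_k(f)\bigr)$ with
$$v_k(f) \;=\; \left(\begin{array}{ccccc}
0 & 0 & \cdots & 0 & f\\
1 & 0 & \cdots & 0 & 0\\
0 & 1 & \cdots & 0 & 0\\
\vdots & & \ddots & & \vdots\\
0 & 0 & \cdots & 1 & 0
\end{array}\right),$$
and since $v_k(f)^k = f \oplus \cdots \oplus f$, the composite $F_k \circ V_k$ is induced by $(Q,f) \mapsto (Q,f)^{\oplus k}$; this recovers the relation $F_k \circ V_k = k\cdot \operatorname{id}$ of Stienstra~\cite[Theorem~2.12]{stienstra1}, which is used again in the examples below.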
The next result is proven by
Stienstra~\cite[Theorem~4.7]{stienstra1}). (Notice that
Stienstra considers only commutative rings $A$, but his argument
goes through in our case since the set of polynomials $T$ we
consider is $\{t^n \, | \, n \in \bbZ, n \ge 0\}$ and each
polynomial in $T$ is central in $A[t]$ with respect to the
multiplicative structure.)
\begin{lemma}\label{lem:ind/res_and_V/F}
The following diagrams commute for all $n \in \bbZ$ and $k\in \bbZ, k \ge 1$
$$\xymatrix{\NK_n(A)\ar[r]^{i_n}\ar[d]^{F_k}&{K_n(A\bbZ)}\ar[d]^{\res_{k}}\\
{\NK_n(A)}\ar[r]^{i_n}&{K_n(A\bbZ)}}$$
and
$$\xymatrix{\NK_n(A)\ar[r]^{i_n}\ar[d]^{V_k}&{K_n(A\bbZ)}\ar[d]^{\ind_k}\\
{\NK_n(A)}\ar[r]^{i_n}&{K_n(A\bbZ)}}$$
\end{lemma}
The next result is well-known for $n=1$ (see Farrell \cite[Lemma~3]{farrell1}). The general case is discussed by Weibel \cite[p.~479]{weibel1}, Stienstra \cite[p.~90]{stienstra1}, and Grunewald \cite[Prop.~4.6]{grunewald1}.
\begin{lemma} \label{lem:Frobenius_finally_vanishes}
For every $n \in \bbZ$ and each $x \in \NK_n(A)$, there exists a positive integer $M(x)$ such
that $ \res_{m} \circ\, i_n(x) = 0$ for $m \ge M(x)$.
\end{lemma}
\begin{proof}
The Frobenius homomorphism $F_m \colon \NK_n(A) \to \NK_n(A)$ is induced by the functor
sending $(Q,f)$ in $\NIL(A)$ to $(Q,f^m)$. For a given object $(Q,f)$ in $\NIL(A)$ there
exists a positive integer $M(f)$ with $(Q,f^m) = (Q,0)$ for $m \ge M(f)$. This implies
by a filtration argument
(see \cite[p.~90]{stienstra1} or
\cite[Prop.~4.6]{grunewald1}), that
for $x \in\NK_n(A)$ there exists a positive integer $M(x)$ with $F_m(x) = 0$ for $m \ge
M(x)$.
Now the claim follows from Lemma~\ref{lem:ind/res_and_V/F}.
\end{proof}
\section{Subgroups of $G \times \mathbf Z$}\label{three}
A finite group $G$ is called $p$-hyperelementary if it is isomorphic to an extension
$$1\to C \to G \to P \to 1$$
where $P$ is a $p$-group, and $C$ is a cyclic group of order prime to $p$. Such an extension is a semi-direct product, and hence determined by the action map $\alpha\colon P \to \aut(C)$ defined by conjugation. The group $G$ is $p$-elementary precisely when $\alpha$ is the trivial map, or in other words, when there exists a retraction $G \to C$.
Notice that for a cyclic group $C=\cy{q^k}$, where $q\neq p$ is a prime, $\aut(C) = \cy{q^{k-1}(q-1)}$ if $q$ is odd, or
$\aut(C) = \cy{2^{k-2}} \times \cy{2}$, $k\geq 2$, if $q=2$.
In either case, $\aut(C)_{(p)} \cong \aut(Q)_{(p)}$ by projection to any non-trivial
quotient group $C \to Q$.
\begin{lemma}
\label{lem_hyper_elem_versus_elem}
Let $p$ be a prime, and let $G$ be a finite $p$-hyper\-elemen\-ta\-ry group. Suppose
that for every prime $q\neq p$ which divides the order of $G$, there
exists an epimorphism $f_q \colon G \to Q_q$ onto a non-trivial cyclic group $Q_q$ of
$q$-power order. Then $G$ is $p$-elementary.
\end{lemma}
\begin{proof}
Let $Q$ be the product of the groups $Q_q$ over all primes $q\neq p$ which divide the order of $G$. Let $f\colon G \to Q$ be the product of the given epimorphisms.
Since every subgroup in $G$ of order prime to $p$ is characteristic, we have a diagram
$$\xymatrix@R-3pt{1 \ar[r] & C \ar[r]\ar[d] & G \ar[r]\ar[d] & P \ar@{=}[d]\ar[r]& 1\ \hphantom{.}\cr
1 \ar[r] & Q \ar[r] & \bar{G} \ar[r]\ & P \ar[r]& 1\ .
}$$
But the epimorphism $f\colon G \to Q$ induces a retraction $\bar{G} \to Q$ of the lower sequence, hence its action map $\bar{\alpha}\colon P \to \aut(Q)$ is trivial. As remarked above, this implies that $\alpha$ is also trivial and hence $G$ is $p$-elementary.
\end{proof}
We now combine this result with the techniques of \cite{farrell-hsiang3}. Given positive integers $m$, $n$ and a prime $p$, we choose
an integer $N=N(m,n,p)$ satisfying the following conditions:
\begin{enumerate}\addtolength{\itemsep}{0.2\baselineskip}
\item $p\nmid N$, but $q\mid N$ if and only if $q\mid n$ for all primes $q\neq p$.
\item $k \geq \log_q(mn)$ (i.e.~$q^k \geq mn$) for each full prime power $q^k\|N$.
\end{enumerate}
The Farrell-Hsiang technique is to compute $K$-theory via $p$-hyperelementary subgroups $H \subset G \times \cy N$, and their inverse images $\Gamma_H = \pr^{-1}(H) \subset G \times \mathbf Z$ via the second factor projection map $\pr\colon G\times \mathbf Z \to G \times \cy N$.
\begin{lemma} \label{lem:deep_and_p-torsion}
Let $G$ be a finite group and let $M$ be a positive integer. Let $p$ be a prime
dividing the order $|G|$ of $G$, and choose an integer $N= N(M, |G|,p)$. For every $p$-hyperelementary subgroup $H \subset G \times \cy N$, one of the following holds:
\begin{enumerate}
\item the inverse image $\Gamma_H \subset G \times m\cdot\mathbf Z$, for some $m \geq M$, or
\item \label{lem:deep_and_p-torsion:p-elementary} the group $H$ is $p$-elementary.
\end{enumerate}
In the second case, we have the following additional properties:
\begin{enumerate}\renewcommand{\theenumi}{\alph{enumi}}
\renewcommand{\labelenumi}{(\theenumi)}
\item \label{lem:deep_and_p-torsion:identification_of_H} There exists a finite $p$-group
$P$ and isomorphism
$$\alpha \colon P \times \bbZ \xrightarrow{\ \cong\ } \Gamma_H.$$
\item \label{lem:deep_and_p-torsion:commutative_square} There exists a positive integer
$k$, a positive integer $\ell$ with $(\ell,p) =1$, an element $u \in \cy \ell$ and an injective
group homomorphism
$$j \colon P \times \cy \ell \to G$$
such that the following diagram commutes
$$\xymatrix{\hsp{xxxx}{P \times \bbZ}\hsp{xxxx} \ar[r]^(0.6){\alpha}\ar[d]_{\operatorname{id}_P \times \beta}&\ {\Gamma_H}\ar[d]^{i} \\
{P \times
\cy \ell \times \bbZ}\ar[r]^(0.6){j \times k \cdot \operatorname{id}_{\bbZ}}&\ {G \times \bbZ}}$$
where $i \colon
\Gamma_{H} \to G \times \bbZ$ is the inclusion and $\beta \colon \bbZ \to \cy \ell
\times \bbZ$ sends $n$ to $(nu,n)$.
\end{enumerate}
\end{lemma}
\begin{proof} In the proof we will write elements in $\mathbf Z/N$ additively and elements in $G$ multiplicatively. Let $H \subset G \times \cy N$ be a $p$-hyperelementary subgroup, and suppose that $\Gamma_H$ is \emph{not} contained in $G \times m\cdot \mathbf Z$ for any $m\geq M$.
We have a pull-back diagram
$$\xymatrix@R-2pt{&\ G''_\hsp{x}\ar@{=}[r]\ar@{>->}[d]&\ G''_\hsp{x}\ar@{>->}[d]\\
\cy{N''}\ \ar@{=}[d]\ar@{>->}[r]&H \ar@{->>}[r]\ar@{->>}[d]& G'\ar@{->>}[d]\\
\cy{N''}\ \ar@{>->}[r]&\cy{N'} \ar@{->>}[r]&\cy{\ell}
}$$
where $G'\subset G$ and $\cy{N'}$ are the images of $H\subset G \times \cy N$ under the first and second factor projection, respectively.
Notice that $\cy{\ell}$ is the common quotient group of $G'$ and $\cy{N'}$. In terms of this data, $H \subseteq G'\times \cy{N'}$ and hence the pre-image $\Gamma_H \subseteq G' \times m\cdot \mathbf Z\subseteq G\times m\cdot \mathbf Z$, where $m=N/N'$.
We now show that $G''$ is a $p$-group. Suppose, if possible, that some other prime $q\neq p$ divides $|G''|$. Since $H$ is $p$-hyperelementary the Sylow $q$-subgroup of $H$ is cyclic. However $G''\times \cy{N''} \subseteq H$, so $q\nmid N''$. But $N' = N''\cdot \ell$, hence this implies
that the $q$-primary part $N'_q = \ell_q \leq |G'|\leq |G|$. Now
$$m = N/N' \geq q^k/N'_q \geq q^k/|G| \geq M$$
by definition of $N = N(M, |G|, p)$. This would imply that $\Gamma_H \subset G \times m\cdot \mathbf Z$ for some $m\geq M$, contrary to our assumption. Hence $P:=G''$ is a $p$-group, or more precisely the $p$-Sylow subgroup of $G'$ since $p\nmid \ell$.
Alternative (ii) is an immediate consequence. If $q\neq p$ is a prime dividing $|H|$, then $q\mid N'$ since $G''$ is a $p$-group. Hence $H$ admits an epimorphism onto a non-trivial finite cyclic $q$-group. By Lemma \ref{lem_hyper_elem_versus_elem}, this implies that $H$ is $p$-elementary.
Note that there is an isomorphism $$j'=(id_P\times s)\colon P \times \cy \ell \xrightarrow{\cong} G'$$ defined by the inclusion $id_P\colon P\subset G'$ and a splitting $s\colon \cy \ell \to G'$ of the projection $G' \to \cy \ell$.
Next we consider assertion (a). A similar pull-back diagram exists for the subgroup $\Gamma_H \subset G \times \mathbf Z$. We obtain a pull-back diagram of exact
sequences
$$\xymatrix{ 1\ar[r]&P \ar[r]\ar@{=}[d]&\Gamma_H\ar[r]\ar[d]& \mathbf Z\ar[r]\ar[d]& 1\\
1\ar[r]&P \ar[r]&G'\ar[r]& \cy \ell\ar[r]\ar@/_/[l]_s& 1}$$
since $P = \Gamma_H \cap (G \times 0)$, and $\pr_{\mathbf Z}(\Gamma_H) = k\cdot \mathbf Z$ for some positive integer $k$. This exact sequence splits, since it is the pull-back of the lower split sequence: we can choose the element $(g_0, k) \in \Gamma_H \subseteq G'\times \mathbf Z$ which projects to a generator of $\pr_{\mathbf Z}(\Gamma_H) $, by taking $g_0=s(u)$ where $u \in \cy \ell$ is a generator. The isomorphism $\alpha\colon P \times \mathbf Z \xrightarrow{\approx} \Gamma_H $ is defined by
$\alpha(g, n) = (gg_0^n, n)$ for $g\in P$ and $n \in \mathbf Z$.
Assertion (b) follows by composing the splitting
$ (id_P\times s)\colon P \times \cy \ell \cong G'$ with the inclusion $G'\subseteq G$ to obtain an injection $j\colon P \times \cy \ell\to G$. By the definition of $g_0$, the composite $(j\times k\cdot id_\mathbf Z)\circ (id_P \times \beta) = i \circ \alpha$, where $i\colon \Gamma_H \to G\times \mathbf Z$ is the inclusion.
\end{proof}
\section{The Proof of Theorem A}\label{four}
We will need some standard results from induction theory for Mackey
functors over finite groups, due to Dress (see \cite {dress1}, \cite{dress2}), as well as a refinement called the Burnside quotient Green ring associated to a Mackey functor (see \cite[\S 1]{h2006} for a description of this construction, and \cite{htw2007} for the detailed account).
For any homomorphism $\pr\colon \Gamma \to G$ from an infinite discrete group to a finite group $G$, the functor
$$\mathcal M(H): = K_n(R\Gamma_H),$$
where $\Gamma_H =\pr^{-1}(H)$, is a Mackey functor defined on subgroups $H \subseteq G$. The required restriction maps exist because the index $[\Gamma_H: \Gamma_K]$ is finite for any pair of subgroups $K\subset H$ in $G$. This point of view is due to
Farrell and Hsiang \cite[\S 2]{farrell-hsiang2}. The Swan ring $SW(G,\mathbf Z)$ acts as a Green ring on $\mathcal M$, and it is a fundamental fact of Dress induction theory that the Swan ring is computable from the family $\mathcal H$ of hyperelementary subgroups of $G$. More precisely, the localized Green ring $SW(G,\mathbf Z)_{(p)}$ is computable from the family $\mathcal H_p$ of $p$-hyperelementary subgroups of $G$, for every prime $p$. It follows from Dress induction that the Mackey functor $\mathcal M(G)_{(p)}$ is also $p$-hyperelementary computable. We need a refinement of this result.
\begin{theorem}[{\cite[Theorem 1.8]{h2006}}] \label{thm: computable}
Suppose that $\mathcal G$ is a Green ring which
acts on a Mackey functor $\mathcal M$. If $\mathcal G\otimes \mathbf Z_{(p)}$ is $\mathcal H$-computable, then every
$x\in \mathcal M(G)\otimes \mathbf Z_{(p)}$ can be written as
$$x = \sum_{H \in \mathcal H_p} a_H \Ind_H^G(\Res_G^H(x))$$
for some coefficients $a_H \in \mathbf Z_{(p)}$.
\end{theorem}
We fix a prime $p$. For each element $x\in \NK_n(RG)$, let $M= M(x)$ as in Lemma \ref{lem:Frobenius_finally_vanishes} applied to the ring $A=RG$. Then $$\res_{m} \colon
K_n(R[G\times \bbZ]) \to K_n(R[G\times \bbZ])$$ sends $i_n(x)$ to
zero for $m \ge M(x)$.
Now let $N= N(M, |G|, p)$, as defined in Section \ref{three}, and consider $\mathcal M(H) = K_n(R\Gamma_H)$ as a Mackey functor on the subgroups $H \subseteq G \times \cy N$, via the projection $\pr\colon G\times \mathbf Z \to G \times \cy N$.
Let $\mathcal H_p(x)$ denote the set of $p$-hyperelementary subgroups $H
\subseteq G \times\cy N$, such that $\Gamma_H$ is \emph{not} contained in
$G\times m\cdot \mathbf Z$, for any $m\geq M(x)$.
By the formula of Theorem \ref{thm: computable}, applied to $y = i_n(x)$, we see that $x$ lies in the image of the composite map
\eqncount
\begin{eqnarray}
\bigoplus_{H \in \mathcal H_p(x)} K_n(R\Gamma_H)_{(p)} & \xrightarrow{i_*} &
K_n(R[G\times \bbZ])_{(p)} \xrightarrow{r_n} \NK_n(RG)_{(p)}.
\label{x_lies_in_calh_p(M(x)}
\end{eqnarray}
We conclude from
Lemma~\ref{lem:deep_and_p-torsion}~\eqref{lem:deep_and_p-torsion:commutative_square}
(using that notation) that the composite
\eqncount
\begin{multline}K_n(R[P \times \bbZ]) \xrightarrow{\alpha_*} K_n(R\Gamma_H)
\xrightarrow{i_*} K_n(R[G \times \bbZ]) \xrightarrow{r_n} \NK_n(RG)
\label{comp(1)}
\end{multline}
agrees with the composite
\eqncount
\begin{multline}
K_n(R[P \times \bbZ]) \xrightarrow{(\operatorname{id}_P \times \beta)_*} K_n(R[P
\times \cy \ell\times \bbZ]) \xrightarrow{(j \times \operatorname{id}_{\bbZ})_*}
K_n(R[G\times \bbZ])
\\
\xrightarrow{(\operatorname{id}_G \times k \cdot\operatorname{id}_{\bbZ})_*} K_n(R[G \times
\bbZ]) \xrightarrow{r_n} \NK_n(RG).
\label{comp(2)}
\end{multline}
Recall that $\beta\colon \bbZ \to \cy \ell\times \bbZ$ sends $n$ to $(nu,n)$ for some generator $u \in \cy \ell$. Let
$ \NIL(RP) \to \NIL(R[P \times \cy{\ell}])$ be the functor which sends a nilpotent $RP$-endomorphism $f \colon Q \to Q$ of a finitely generated projective $RP$-module $Q$
to the nilpotent $R[P \times \cy{\ell}]$-endomorphism
$$
R[P \times \cy l] \otimes_{RP} Q \to R[P \times \cy{\ell}] \otimes_{RP} Q,
\quad x \otimes q \mapsto xu \otimes f(q).$$
Let $\phi\colon
\NK_n(RP) \to \NK_n(R[P \times \cy{\ell}])$ denote the induced homomorphism.
\begin{lemma} \label{lem:three_commutative_diagrams}
\mbox{}
\renewcommand{\theenumi}{\alph{enumi}}
\renewcommand{\labelenumi}{(\theenumi)}
\begin{enumerate}
\item \label{lem:three_commutative_diagrams:(1)}
The following diagram commutes
$$\xymatrix@!C=10em{K_n(R[P \times \bbZ]) \ar[r]^-{(\operatorname{id}_P \times \beta)_*}
\ar[d]^-{r_n} & K_n(R[P \times \cy{\ell} \times \bbZ]) \ar[d]^-{r_n}
\\
\NK_n(RP) \ar[r]^-{\phi} & \NK_n(R[P \times \cy{\ell}])\ .}$$
\item \label{lem:three_commutative_diagrams:(2)}
The following diagram commutes
$$\xymatrix@!C=10em{K_n(R[P \times \cy{\ell}\times \bbZ])
\ar[r]^-{(j\times \operatorname{id}_{\bbZ})_*} \ar[d]^-{r_n} &
K_n(R[G\times \bbZ]) \ar[d]^-{r_n}
\\
\NK_n(R[P \times \cy{\ell}]) \ar[r]^-{j_*}& \NK_n(RG)\ .}$$
\item \label{lem:three_commutative_diagrams:(3)}
The following diagram commutes
$$\xymatrix@!C=8em{{K_n(R[G\times \bbZ])}
\ar[r]^{\ind_k}\ar[d]^{r_n}&{K_n(R[G \times \bbZ])}\ar[d]^ {r_n}\\
{\NK_n(RG)}\ar[r]^{V_k}&{\NK_n(RG)}\ .}$$
\end{enumerate}
\end{lemma}
\begin{proof}
\eqref{lem:three_commutative_diagrams:(1)}
The tensor product $\otimes_\mathbf Z$ induces a pairing
\eqncount
\begin{eqnarray}
\label{naturality_of_pairing}
&\mu_{R, \Gamma}\colon K_{n-1}(R) \otimes_{\bbZ} K_1(\bbZ \Gamma) \to K_n(R\Gamma)&
\end{eqnarray}
for every group $\Gamma$, which is natural in $R$ and $\Gamma$.
It suffices to prove that the following diagram is commutative for every ring $R$
(since we can replace $R$ by $RP$). Let $A = R[\cy \ell]$ for short.
$$\xymatrix@R30mm
{K_n(R) \oplus K_{n-1}(R) \oplus \NK_n(R) \oplus \NK_n(R)
\ar[d]_-{\left(\begin{array}{cccc}
i_{1,1} & 0 & 0 & 0
\\
i_{1,2} & i_{2,2} & 0 & 0
\\
0 & 0 & \phi & 0
\\
0 & 0 & 0 & \phi
\end{array}\right)}
\ar[r]^-B_-{\cong} & K_n(R\bbZ) \ar[d]^{\beta_*}
\\
K_n(A) \oplus K_{n-1}(A) \oplus \NK_n(A) \oplus \NK_n(A)
\ar[r]^-B_-{\cong} & K_n(A\bbZ)}$$
Here the vertical arrows are the
isomorphisms given by the Bass-Heller-Swan
decomposition~\eqref{Bass-Heller-Swan_decomposition}, the homomorphisms $i_{1,1}$ and $i_{2,2}$ are induced
by the inclusion $R \to R[\cy \ell]$ and the homomorphism $i_{1,2}$ comes from the pairing
\eqncount
\begin{eqnarray}
&\mu_{R, \cy \ell}\colon K_{n-1}(R) \otimes_{\bbZ} K_1(\bbZ [\cy \ell]) \to K_n(R[\cy \ell])&
\end{eqnarray}
evaluated at the class of the unit $u \in \bbZ[\cy \ell]$ in $K_1(\bbZ[\cy \ell])$ and
the obvious change of rings homomorphism $K_1(R[\cy \ell]) \to K_1(R[\cy \ell \times \bbZ])$.
In order to show commutativity it suffices to prove it after restricting to
each of the four summands in the left upper corner.
This is obvious for $K_n(R)$ since induction with respect to group homomorphisms is
functorial.
For $K_{n-1}(R)$ this follows from the naturality of the pairing
(\ref{naturality_of_pairing})
in $R$ and the group $\cy \ell$ and the equality
$$K_1(\beta)(t) = K_1(R[j_{\bbZ}])(t) + K_1(R[j_{\cy \ell}])(u)$$
where $j_{\bbZ} \colon
\bbZ \to \cy \ell \times \bbZ$ and $j_{\cy \ell} \colon \cy \ell \to \cy \ell \times \bbZ$ are the
obvious inclusions.
The commutativity when restricted to the two Bass $\Nil$-groups follows from a result of
Stienstra~\cite[Theorem~4.12 on page~78]{stienstra1}.
\\[1mm]
\eqref{lem:three_commutative_diagrams:(2)} This follows from the naturality in $R$ of
$r_n$.
\\[1mm]
\eqref{lem:three_commutative_diagrams:(3)} It suffices to show that the following diagram
commutes (since we can replace $R$ by $RG$)
$$\xymatrix@R30mm {K_n(R) \oplus K_{n-1}(R) \oplus \NK_n(R) \oplus \NK_n(R)
\ar[d]_-{\left(\begin{array}{cccc} \operatorname{id} & 0 & 0 & 0
\\
0 & k\cdot \operatorname{id} & 0 & 0
\\
0 & 0 & V_k & 0
\\
0 & 0 & 0 & V_k
\end{array}\right)}
\ar[r]^-B_-{\cong} & K_n(R\bbZ) \ar[d]^{\ind_k}
\\
K_n(R) \oplus K_{n-1}(R) \oplus \NK_n(R) \oplus \NK_n(R) \ar[r]^-B_-{\cong} &
K_n(R\bbZ)}$$
where the vertical arrows are the isomorphisms given by the
Bass-Heller-Swan decomposition~\eqref{Bass-Heller-Swan_decomposition}.
In order to show commutativity it suffices to prove it after restricting to
each of the four summands in the left upper corner.
This is obvious for $K_n(R)$ since induction with respect to group homomorphisms is
functorial.
Next we inspect $K_{n-1}(R)$. The following diagram commutes
$$\xymatrix{{K_{n-1}(R)
\otimes_{\bbZ} K_1(\bbZ[\bbZ])}\ar[r]\ar[d]^{\operatorname{id} \otimes \ind_{k}}&{K_n(R\bbZ)}\ar[d]^ {\ind_{k}}\\
{K_{n-1}(R) \otimes_{\bbZ}
K_1(\bbZ[\bbZ])}\ar[r]&{K_n(R\bbZ)}}$$
where the horizontal pairings are given by $\mu_{R, \mathbf Z}$ from (\ref{naturality_of_pairing}). Since in $K_1(\mathbf Z[\mathbf Z])$, $k$ times the class $[t]$ of the unit $t$ is the class $[t^k]=\ind_k([t])$, the claim follows for $K_{n-1}(R)$.
The commutativity when restricted to the copies of $\NK_n(R)$ follows from
Lemma~\ref{lem:ind/res_and_V/F}. This finishes the proof of
Lemma~\ref{lem:three_commutative_diagrams}.
\end{proof}
Lemma~\ref{lem:three_commutative_diagrams} implies that the composite~\eqref{comp(2)}
and hence the composite~\eqref{comp(1)} agree with the composite
\begin{multline*}
K_n(R[P \times \bbZ]) \xrightarrow{r_n} \NK_n(RP) \xrightarrow{\phi} \NK_n(R[P \times \cy \ell])
\\
\xrightarrow{\ j_*\ } \NK_n(RG) \xrightarrow{V_k} \NK_n(RG).
\label{com(3)}
\end{multline*}
Since we have already shown that the element $x \in \NK_n(RG)_{(p)}$ lies in the image
of \eqref{x_lies_in_calh_p(M(x)}, we conclude that $x$ lies in the image of the map
$$
\Phi = (\Phi_P)\colon
\bigoplus_{P \in \mathfrak P_p(G)}
\NK_n(RP)_{(p)} \to \NK_n(RG)_{(p)}
$$
except that the integers $k$ appearing above need not yet satisfy the restriction $k \in I(g)$ imposed in the definition of $\Phi_P$; we now remove this caveat.
Consider $k \ge 1$, $P \in \mathfrak P_p(G)$ and $g
\in C^{\perp}_G(P)$. We write $k = k_0k_1$ for $k_1 \in I(g)$ and
$(k_0,|g|) = 1$. We have $V_k = V_{k_1} \circ V_{k_0}$ (see
Stienstra~\cite[Theorem~2.12]{stienstra1}). Since $(k_0,|g|) =
1$, we can find an integer $l_0$ such that $(l_0,|g|) = 1$ and
$(g^{l_0})^{k_0} = g$. We conclude from
Stienstra~\cite[page~67]{stienstra1}
$$V_{k_0} \circ \phi(P,g) = V_{k_0} \circ \phi(P,(g^{l_0})^{k_0})
= \phi(P,g^{l_0}) \circ V_{k_0}.$$
Hence the image of $V_k \circ \phi(P,g)$ is contained in the image of
$V_{k_1} \circ \phi(P,g^{l_0})$ and $g^{l_0} \in C^{\perp}_G(P)$.
This finishes the proof of Theorem A.
\qed
\section{Examples}
We briefly discuss some examples. As usual, $p$ is a prime and $G$ is a finite group. The first example shows that Theorem A gives some information about $p$-elementary groups.
\begin{example} \label{exa:p-elementary}
Let $P$ be a finite $p$-group and let $\ell \ge 1$ be an integer with $(\ell,p) = 1$.
Then Theorem A says that $\NK_n(R[P \times\cy \ell])_{(p)}$ is generated by the images of the maps
$$V_k \circ \phi(P,g) \colon \NK_n(RP)_{(p)} \to \NK_n(R[P \times\cy \ell])_{(p)}$$
for all $g \in \cy \ell$, and all $k \in I(g)$.
Since the composite $F_k \circ V_k=k \cdot \operatorname{id}$ for $k\geq 1$
(see~\cite[Theorem~2.12]{stienstra1}) and $(k,p) = 1$, the map
$$V_k \colon \NK_n(RG)_{(p)} \to \NK_n(RG)_{(p)}$$ is injective for all $k \in I(g)$. For $g=1$, $\phi(P, 1)$ is the map induced by the first factor inclusion $P \to P \times\cy \ell$, and this is a split injection.
In addition, the composition of $\phi(P,g)$ with the functor induced by the projection $P\times \cy \ell \to P$ is the identity on $\NIL(RP)$, for any $g\in \cy \ell$.
Therefore, all the maps $V_k \circ \phi(P,g)$ are split injective.
It would be interesting to understand better the images of these maps as $k$ and $g$ vary. For example, what is the image of $\Phi_{\{1\}}$ where we take $P = \{1 \}$~?
\qed
\end{example}
In some situations the Verschiebung homomorphisms and
the homomorphisms $\phi(P,g)$ for $g \not=1$ do not occur.
\begin{example} \label{ex:no_phi(P,g)_and_V_k}
Suppose that $R$ is a regular ring.
We consider the special situation where the $p$-Sylow subgroup $G_p$ is
a normal subgroup of $G$, and furthermore where $C_G(P) \subseteq P$ holds for
every non-trivial subgroup $P \subseteq G_p$. For $P \neq \{ 1\}$, we have $C^\perp_G(P) = \{1\}$ and the homomorphism
$\Phi_P = \phi(P, 1)$, which is the ordinary induction map. We can ignore the map $\Phi_{\{1\}}$ since $\NK_n(R) = 0$ by assumption.
Therefore, the (surjective) image of $\Phi$ in Theorem A is just the image of the induction map
$$ \NK_n(RG_p)_{(p)} \to \NK_n(RG)_{(p)}\ .$$
Note that $\NK_n(RG_p)$ is $p$-local, and we can divide out the conjugation action on $ \NK_n(RG_p)$ because inner automorphisms act as the identity
on $\NK_n(RG)$. However, $G/G_p$ is a
finite group of order prime to $p$, so that
$$H_0(G/G_p; \NK_n(RG_p)) = H^0(G/G_p; \NK_n(RG_p)) = \NK_n(RG_p)^{G/G_p}\ .$$
Hence the induction map on this fixed submodule
$$\lambda_n\colon \NK_n(RG_p)^{G/G_p} \to \NK_n(RG)_{(p)}$$
is surjective.
An easy application of the double coset formula shows that the composition of $\lambda_n$
with the restriction map $\res_{G}^{G_p} \colon \NK_n(RG)_{(p)} \to \NK_n(RG_p)_{(p)}$ is
given by $|G/G_p|$-times the inclusion $\NK_n(RG_p)^{G/G_p} \to \NK_n(RG_p)$. Since
$(|G/G_p|,p) = 1$ this composition, and hence $\lambda_n$, are both injective. We conclude that
$\lambda_n$ is an isomorphism.
Concrete examples are provided by semi-direct products $G = P \rtimes C$, where $P$ is a cyclic $p$-group, $C$ has order prime to $p$, and the action map $\alpha\colon C \to \aut(P)$ is injective. If we assume, in addition, that the order of $C$ is square-free, then
$$\NK_n(\mathbf Z P)^{C} \xrightarrow{\cong} \NK_n(\mathbf Z[P \rtimes C])$$
for all $n\leq 1$ (this dimension restriction, and setting $R=\mathbf Z$, are only needed to apply Bass-Murthy \cite{bass-murthy1} in order to eliminate possible torsion in the $\Nil$-group of orders prime to $p$).
\qed
\end{example}
\section{$NK_n(A)$ as a Mackey functor}\label{seven}
Let $G$ be a finite group.
We want to show that the natural maps
$$i_n \colon \NK_n(RG) \to K_n(R[G\times \bbZ])$$ and
$$r_n \colon K_n(R[G\times \bbZ]) \to \NK_n(RG)$$ in the
Bass-Heller-Swan isomorphism
are maps of Mackey functors (defined on subgroups of $G$).
Hence $\NK_n(RG)$ is a direct summand of $K_n(R[G \times \bbZ])$
as a Mackey functor. Since $RG$ is a finitely generated free $RH$-module, for any subgroup $H\subset G$, it is enough to apply the following lemma to $A = RH$ and $B= RG$.
\begin{lemma} \label{lem:Bass_Heller_Swan_and_Mackey}
Let $i \colon A \to B$ be an inclusion of rings. Then the following diagram commutes
$$\xymatrix{K_n(A) \oplus K_{n-1}(A) \oplus \NK_n(A) \oplus \NK_n(A)
\ar[r]^-{\cong} \ar[d]^-{i_* \oplus i_* \oplus i_* \oplus i_*}
&
K_n(A\bbZ) \ar[d]^-{i[\bbZ]_*}
\\
K_n(B) \oplus K_{n-1}(B) \oplus \NK_n(B) \oplus \NK_n(B)
\ar[r]^-{\cong}
&
K_n(B\bbZ)
}$$
where the vertical maps are given by induction, and
the horizontal maps are the Bass-Heller-Swan
isomorphisms.
If $B$ is finitely generated and projective, considered as an $A$-module, then
$$\xymatrix{K_n(B) \oplus K_{n-1}(B) \oplus \NK_n(B) \oplus \NK_n(B)
\ar[r]^-{\cong} \ar[d]^-{i^* \oplus i^* \oplus i^* \oplus i^*}
&
K_n(B\bbZ) \ar[d]^-{i[\bbZ]^*}
\\
K_n(A) \oplus K_{n-1}(A) \oplus \NK_n(A) \oplus \NK_n(A)
\ar[r]^-{\cong}
&
K_n(A\bbZ)
}$$
where the vertical maps are given by restriction, and
the horizontal maps are the Bass-Heller-Swan
isomorphisms.
\end{lemma}
\begin{proof}
One has to show the commutativity of the diagram when restricted
to each of the four summands in the left upper corner. In each case
these maps are induced by functors, and one shows that the two corresponding composites
of functors are naturally equivalent. Hence the two composites induce the same map on $K$-theory.
As an illustration we do this in two cases.
Consider the third summand $\NK_n(A)$ in the first diagram. The Bass-Heller-Swan
isomorphism restricted to it is given by the restriction of the map
$(j_+)_* \colon K_n(A[t]) \to K_n(A[t,t^{-1}]) = K_n(A\bbZ)$ induced
by the obvious inclusion $j_+ \colon A[t] \to A[t,t^{-1}]$ restricted to
$\NK_n(A) = \ker \left(\epsilon_* \colon K_n(A[t]) \to K_n(A)\right)$,
where $\epsilon \colon A[t] \to A$ is given by $t = 0$.
Since all these maps come from induction with
ring homomorphisms, the following two diagrams commute
$$\xymatrix{K_n(A[t]) \ar[r]^-{\epsilon_*} \ar[d]^-{i[t]_*} & K_n(A) \ar[d]^-{i_*}
\\
K_n(B[t]) \ar[r]^-{\epsilon_*} & K_n(B)
}
$$
and
$$\xymatrix{K_n(A[t]) \ar[r]^-{(j_+)_*} \ar[d]^-{i[t]_*} &
K_n(A[t,t^{-1}]) \ar[d]^-{i[t,t^{-1}]_*}
\\
K_n(B[t]) \ar[r]^-{(j_+)_*} &
K_n(B[t,t^{-1}])
}
$$
and the claim follows.
Consider the second summand $K_{n-1}(B)$ in the second diagram.
The restriction of the Bass-Heller-Swan isomorphism to $K_{n-1}(B)$
is given by evaluating the pairing~\eqref{naturality_of_pairing} for $\Gamma = \bbZ$
for the unit $t \in \bbZ[\bbZ]$. Hence it suffices
to show that the following diagram commutes, where the horizontal maps
are given by the pairing~\eqref{naturality_of_pairing} for $\Gamma = \bbZ$ and the vertical
maps come from restriction
$$
\xymatrix{K_{n-1}(B) \otimes K_1(\bbZ[\bbZ]) \ar[r] \ar[d]_{i^* \otimes \operatorname{id}}
& K_n(B\bbZ) \ar[d]_{i[\bbZ]^*}
\\
K_{n-1}(A) \otimes K_1(\bbZ[\bbZ]) \ar[r]
& K_n(A\bbZ)
}$$
This follows from the fact that for a finitely generated projective $B$-module
$P$ and a finitely generated projective $\bbZ[\bbZ]$-module $Q$ there
is a natural isomorphism of $A\bbZ$-modules
$$(\res_A P)\otimes_{\bbZ} Q \xrightarrow{\cong}
\res_{A\bbZ} (P \otimes_{\bbZ} Q),
\quad p \otimes q \mapsto p \otimes q. \qedhere$$
\end{proof}
\begin{corollary}
Let $G$ be a finite group, and $R$ be a ring. Then, for any subgroup $H\subset G$, the induction
maps $\ind_H^G\colon \NK_n(RH) \to \NK_n(RG)$ and the restriction maps $\res^H_G\colon \NK_n(RG) \to \NK_n(RH)$ commute with the Verschiebung and Frobenius homomorphisms $V_k$, $F_k$, for $k \geq 1$.
\end{corollary}
\begin{proof} We combine the results of Lemma \ref{lem:Bass_Heller_Swan_and_Mackey} with Stienstra's Lemma \ref{lem:ind/res_and_V/F} (note that these two diagrams also commute with $i_n$ replaced by $r_n$).
\end{proof}
The formation of planets involves a huge growth of solids starting from sub-$\mu$m grains and proceeding all the way up to terrestrial, ice and giant planets, with sizes up to hundreds of thousands of kilometers. Nearly all of this solid growth occurs in disks surrounding young pre-Main Sequence (PMS) stars. This whole process involves a variety of different physical mechanisms, the importance of which strongly depends on the level of aerodynamical coupling between the solids of different sizes and the gas in the disk \citep[for a review see][]{Chiang:2010}.
The first stages of planet formation comprise the growth of dust grains into $\sim$ km-sized bodies called ``planetesimals''. At the typical gas densities of young circumstellar disks, grains with sizes of the order of $\sim 1 - 100~\mu$m or smaller are very well coupled to the gas in the disk and basically follow the dynamics of gas. These small grains collide at relative velocities which are determined by the equations of Brownian motion, and turn out to be much lower than 1 m/s. At these collision velocities small grains stick very efficiently, with binding forces of electromagnetic nature, i.e. van der Waals forces.
The situation changes drastically when solids with sizes of the order of $\sim 1-10$ mm (pebbles) to $\sim 1$ m (rocks) are formed in the disk.
These solids are much less efficient in sticking than smaller particles, making fragmentation a more likely outcome of the collisions between these bodies.
Since the surface-to-mass ratio decreases for solids of larger sizes, these solids start to be less coupled to the gas, and their dynamics changes accordingly. Because of the radial pressure gradient in the disk, gas rotates around the central star with a sub-keplerian speed. Pebbles and rocks do not feel any pressure force; they rotate at keplerian speed and are therefore \textit{faster} than the gas at the same location in the disk. These solids perceive the gas as a headwind which makes them lose angular momentum and migrate radially toward the central star \citep[][]{Weidenschilling:1977}.
These two effects, fragmentation and fast radial migration, are the main obstacles to our understanding of planetesimal formation. As planetesimals are considered the building blocks for larger bodies, understanding the behavior of these solids in real disks is critical for the whole process of planet formation.
The presence of mm/cm-sized pebbles in real disks can be inferred observationally by measuring the spectral index $\alpha$ of the sub-mm/mm spectral energy distribution (SED), $F_{\rm{sub-mm}} \propto \nu^{\alpha}$. For optically thin dust emission, the spectral index of the SED is directly related to the spectral index $\beta$ of the dust emissivity coefficient, $\kappa_{\nu} \propto \nu^{\beta}$, where the relation is as simple as $\beta = \alpha - 2$ if dust emission is in the Rayleigh-Jeans regime at the frequencies of interest.
While grains with sizes $< 10 - 100~\mu$m are characterized by $\beta \approx 1.5 - 2.0$, pebbles with sizes of the order of 1~mm and larger have $\beta < 1$ \citep[e.g.][]{Natta:2007}. Therefore, the analysis of the sub-mm SED provides a strong tool to investigate mm/cm-sized pebbles in young disks and can be used to test models of solids evolution \citep{Birnstiel:2010}.
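To make the measurement explicit: given two fluxes $F_{\nu_1}$ and $F_{\nu_2}$ at frequencies $\nu_1 > \nu_2$, the spectral index is simply
$$\alpha = \frac{\log\left(F_{\nu_1}/F_{\nu_2}\right)}{\log\left(\nu_1/\nu_2\right)},$$
so that, in the optically thin Rayleigh-Jeans limit, a measured $\alpha \approx 2.3$ would naively correspond to $\beta = \alpha - 2 \approx 0.3$, already indicative of mm-sized grains. The detailed disk models used in the following relax these two assumptions, which is why the $\beta$ values they return can differ somewhat from this simple estimate.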
Here we present the first results from an observational program using the ALMA and CARMA sub-mm/mm interferometers aimed at characterizing the dust properties and disk structure of young disks around brown dwarfs (BDs) and very low mass stars (VLMs). Under the different physical conditions expected for these disks, compared with disks around more massive young Solar-like stars, both the fragmentation and rapid radial drift problems for the growth of solids are amplified \citep[see][]{Pinilla:2013}. For this reason, disks around VLMs and BDs are critical test beds for models of dust evolution in proto-planetary disks.
\begin{figure}[t!]
\includegraphics[clip=true,scale=0.35,trim=0 0 0 70]{co_mom0.eps}
\vspace{-1.3cm}
\caption{\footnotesize
ALMA moment 0 map of $\rho$-Oph 102 in CO($J = 3 - 2$). White contour lines are drawn at 2$\sigma$, 4$\sigma$, 6$\sigma$, where $\sigma = 45$ mJy/beam$\cdot$km/s is the measured rms noise on the map. The filled magenta ellipse in the lower left corner indicates the size of the synthesized beam, i.e. FWHM $= 0.57'' \times 0.46''$, P.A. $=99^{\rm{o}}$ \citep[from][]{Ricci:2012b}.
}
\label{co}
\end{figure}
\section{ALMA and CARMA observations}
\subsection{$\rho-$Oph 102}
The first object that was observed under our ALMA program is $\rho-$Oph 102. This is a M5.5-M6 spectral type object with an estimated mass of $60~M_{\rm{Jup}}$ \citep{Natta:2002} in the Ophiuchus star forming region (SFR; age of $\sim 1$ Myr).
$\rho-$Oph 102 is known to be surrounded by a disk of dust from infrared observations, to have a significant mass accretion rate and to drive a wind and molecular outflow \citep[][]{Bontemps:2001,Natta:2002,Natta:2004,Whelan:2005,Phan-Bao:2008}.
We observed $\rho-$Oph 102 using ALMA Early Science in Cycle~0 at about 0.89~mm (Band~7) and 3.2~mm (Band 3). The ALMA Band~7 observations provided the first clear detection of molecular CO gas in a BD disk (Fig.~\ref{co}). By assuming optically thick gas emission and a temperature lower than $\sim 30 - 40$ K, a reasonable limit for a disk heated by the radiation from a BD with the properties of $\rho-$Oph 102, we obtained a lower limit of $\approx 15$ AU for the outer radius of the disk. Future ALMA observations with better sensitivity and angular resolution will allow us to constrain the radial structure of this BD disk.
The continuum dust emission was clearly detected at both ALMA bands. We measured a spectral index of $\alpha_{\rm{0.89-3.2mm}} = 2.29 \pm 0.16$. Models of disks heated by the radiation from the central sub-stellar object show that the results of ALMA observations are best reproduced by emission from dust with a spectral index of the dust emissivity coefficient $\beta \approx 0.4 - 0.6$ \citep[for more details on the disk modeling, see][]{Ricci:2012b}. This implies that the dust emitting at the ALMA wavelengths has grown to sizes of at least $\sim 1$~mm, similarly to what is typically found in disks around young PMS stars.
\begin{figure}[t!]
\includegraphics[clip=true,scale=0.37,trim=0 0 0 150]{BD.eps}
\vspace{-2cm}
\caption{\footnotesize CARMA continuum map at 1.3 mm of the disk surrounding the young brown dwarf 2M0444$+$2512. Contours are drawn at $-3, 3, 6, 9, 12, 15\sigma$, where $1\sigma = 0.3$ mJy/beam is the rms noise. The yellow ellipse in the bottom left corner shows the synthesized beam, with FWHM $= 0.36'' \times 0.44''$, PA $= 80^{\rm{o}}$, obtained with natural weighting.
However, the longest projected baselines of our observations probe angular scales as small as 0.16$''$ \citep[from][]{Ricci:2013}.
}
\label{2M0444_carma}
\end{figure}
\subsection{2M0444+2512}
2M0444$+$2512, hereafter 2M0444, is a young, $\sim 1$ Myr old, brown dwarf with a M7.25 spectral type in the Taurus star forming region. As in the case of $\rho-$Oph 102, 2M0444, with an estimated mass of about 50~$M_{\rm{Jup}}$ \citep{Luhman:2004}, is known to host a dusty disk, as inferred from its IR to sub-mm excess \citep{Knapp:2004, Scholz:2006, Guieu:2007, Bouy:2008}.
The disk around 2M0444 is brighter than the $\rho-$Oph 102 disk by a factor of $\sim 3$ in the sub-mm continuum, and we could detect its dust continuum emission at 1.3~mm using the CARMA interferometer \citep[][]{Ricci:2013}. With an angular resolution of 0.16$''$ corresponding to a physical scale of 22~AU (11~AU in radius) at the Taurus distance of 140 pc, the CARMA observations resolve the dust thermal emission from the disk (Fig. ~\ref{2M0444_carma}). By analyzing the interferometric visibilities using models of BD disks we obtained the first direct constraints at sub-mm wavelengths for the radial distribution of dust in a BD disk.
We measured a sub-mm spectral index of $2.30 \pm 0.25$ by combining the CARMA flux at 1.3~mm with the flux measurements by \citet{Mohanty:2013} and \citet{Bouy:2008} at 0.87~mm and 3.5~mm, respectively. This is similar to the value measured for the $\rho-$Oph 102 disk, and also for the disks surrounding more massive PMS stars (see Fig. \ref{flux_alpha}). Modeling of the visibilities rules out the possibility of a small ($R_{\rm{out}} < 10$ AU) optically thick disk. A dust emissivity spectral index $\beta = 0.50 \pm 0.25$ is needed to explain the sub-mm observations, thus finding evidence for grain growth to mm-sized pebbles also in the case of the 2M0444 BD disk.
\begin{figure*}[t!]
\includegraphics[clip=true,scale=0.77,trim=0 0 0 0]{flux_alpha.eps}
\caption{\footnotesize Flux at 1 mm vs spectral index between 1 and 3 mm for disks around single PMS stars and brown dwarfs. Different colors and symbols refer to different stellar/sub-stellar spectral types and regions as indicated in the plot. Data for Taurus disks are from \citet{Ricci:2010a,Ricci:2012a}, for Ophiuchus disks from \citet{Ricci:2010b}, for 2M0444 from \citet{Bouy:2008}, \citet{Mohanty:2013} and \citet{Ricci:2013}, and for $\rho$-Oph 102 from \citet{Ricci:2012b}. Note that for some disks the value of the 1 mm flux density has been derived by interpolating between nearby wavelengths. The typical uncertainties of the data are shown in the lower left corner \citep[from][]{Ricci:2012b}.
}
\label{flux_alpha}
\end{figure*}
\section{Discussion and Conclusions}
We have presented the first results of a combined ALMA and CARMA observational program aimed at investigating the properties and radial distribution of dust and gas in disks around VLMs and BDs. These include evidence for the presence of mm/cm-sized pebbles in the outer regions of two BD disks, $\rho-$Oph 102 and 2M0444, the detection of CO molecular gas in the $\rho-$Oph 102 disk, and direct constraints on the radial distribution of dust grains in the 2M0444 disk.
Our ALMA Cycle 0 project contains two more disks in Taurus, other than 2M0444 which we also observed with CARMA. These ALMA data show values of the spectral index $\alpha$ which are very similar to the ones measured for $\rho-$Oph 102 and 2M0444, showing that mm-sized pebbles are regularly found in the outer regions of BDs/VLMs disks. In the case of 2M0444, CO($J=2-1$) emission is detected at several frequency channels, and this is consistent with gas emission from a disk in keplerian rotation (Ricci et al., in prep.).
The observations presented here show that solid growth to mm-sized pebbles and the retention of these particles in the outer regions are as efficient in disks around VLMs and BDs as in disks around more massive PMS stars. At first glance, these results challenge models of dust evolution in disks, as both these processes should be less efficient at the physical conditions of disks with relatively low mass/density around BDs and VLMs \citep[][]{Birnstiel:2010}.
Based on the results of our ALMA and CARMA observations, \citet{Pinilla:2013} have recently investigated the evolution of solids in disks around BDs and VLMs from a theoretical perspective. They calculated the time evolution of solids starting from sub-$\mu$m sizes and accounting for coagulation/fragmentation and radial migration in gas-rich disks with properties similar to the disks discussed here. They used these models to predict sub-mm fluxes for their simulated disks. They found that, in order to reproduce the observational data, stringent requirements have to be invoked for their disk models: relatively strong inhomogeneities in the pressure field of the gas in the disk to slow down the radial migration of mm-sized pebbles, small disk outer radii ($R_{\rm{out}} \approx 15 - 30$ AU), a moderate turbulent strength ($\alpha_{\rm{turb}} < 10^{-3}$), and average fragmentation velocities for ices of about 10~m/s.
Interestingly, recent ALMA observations of the young transitional disk IRS 48 have revealed a very pronounced azimuthal asymmetry in the distribution of mm-sized pebbles, with these solids strongly concentrated in a peanut-shaped region of the disk \citep{vanderMarel:2013}. These observations have been interpreted as evidence for the trapping mechanism of mm-sized particles invoked by the \citet{Pinilla:2013} models to explain the sub-mm observations of young disks \citep[see also][]{Armitage:2013}.
Future ALMA observations of disks around BDs and VLMs with better sensitivity and angular resolution will further test the models of the early evolution of solids in disks and shed more light on our understanding of the process of planet formation.
\begin{acknowledgements}
We would like to thank Antonio Magazzu, Eduardo Martin, and all other SOC and LOC members for the organization of this stimulating conference held at a fantastic venue. This work makes use of the following ALMA data: ADS/JAO.ALMA$\#$2011.0.00259.S. ALMA is a partnership of ESO (representing its member states), NSF (USA) and NINS (Japan), together with NRC (Canada) and NSC and ASIAA (Taiwan), in cooperation with the Republic of Chile. The Joint ALMA Observatory is operated by ESO, AUI/NRAO and NAOJ. The National Radio Astronomy Observatory is a facility of the National Science Foundation operated under cooperative agreement by Associated Universities, Inc. Support for CARMA construction was derived from the Gordon and Betty Moore Foundation, Kenneth T. and Eileen L. Norris Foundation, James S. McDonnell Foundation, Associates of the California Institute of Technology, University of Chicago, states of California, Illinois, Maryland, and NSF. Ongoing CARMA development and operations are supported by NSF under a cooperative agreement, and by the CARMA partner universities. We acknowledge support from the Owens Valley Radio Observatory, which is supported by the NSF grant AST-1140063.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
Interference experiments with heavy molecules and nanoclusters amount to tests of the quantum superposition principle at unprecedented scales \cite{haslinger2013universal,colloquium,romeroisart2011,stefanjames} and may serve to measure molecular properties with high accuracy \cite{conformationspra,sandraabsoprtion,hackermuller2007optical}. While it is an open question whether the superposition principle is valid at all scales \cite{bassi,macroscopicity,arndt2014testing}, it is without doubt that the ro-vibrational degrees of freedom of increasingly large, non-spherical objects will at some point influence the interference signal.
Far-field matter-wave experiments with heavy particles are challenging due to the small de Broglie wavelength, but near-field interferometry proved to be a powerful tool in the quest for high-mass interference \cite{kdtlinature,kdtlinjp,haslinger2013universal,otimanjp,colloquium}. Near-field techniques are based on the Talbot effect, the self-imaging of the grating's intensity pattern at certain distances downstream. Since near-field interference effects are highly sensitive to even tiny phase modifications, it is natural to expect that the influence of the rotational state is most pronounced in the near field. In fact, signatures of the vibrational molecular dynamics have been observed in near-field experiments in \cite{conformationspra}.
In order to extend the established theory of matter-wave interferometry of spherical particles to non-spherical objects with orientational degrees of freedom, we draw on the results obtained for the deflection of rotating molecules \cite{averbukh2010,averbukh2011a,averbukh2011b,seideman05,stapelfeldt03,denschlag14}. There, it is a central result that the deflection angle of a rapidly rotating molecule is determined by its rotational state when entering the deflection field. In most cases of interest, the rotation of the molecule is initially thermally distributed, in particular in the absence of a pre-aligning pulse. The distribution of the rotational state then translates directly into a range of deflection angles, each observed with the thermal probability of the respective state. The distribution describing the probability of a particular deflection angle is thus a central element in the theory of molecular deflection and it will turn out to be similarly relevant for the theory of
matter-wave interferometry of rotating molecules.
This article presents a full quantum theory of the influence of rotations in matter-wave interferometry, and we illustrate this theory by calculating the near-field interference pattern of symmetric top molecules in the Kapitza-Dirac-Talbot-Lau interferometer (KDTLI). We emphasize that the general formalism presented here is not restricted to the orientation state, but can be applied to other internal degrees of freedom as well.
The article is structured as follows: In Sec. \ref{sec:grattrafo} we derive the quantum mechanical grating transformations for the cases of a rotationally free and a rotationally diabatic transit through the grating as well as their classical analogues. In Sec. \ref{sec:example} we specify the grating transformation for the diffraction of symmetric top molecules from a standing light wave in order to study the near-field interference pattern in the Kapitza-Dirac-Talbot-Lau interferometer (KDTLI), and we identify a signature of the rotational dynamics. We conclude in Sec. \ref{sec:conc}. The appendix provides a derivation of the distribution of classical deflection angles of rotating symmetric molecules, as required in Sec. \ref{sec:example}.
\section{The Grating Transformation} \label{sec:grattrafo}
It is the aim of this section to determine the effect of the grating on the translational and rotational dynamics of the molecule. To this end we consider a molecule of mass $M$ that traverses with constant velocity $v_z$ a diffraction grating located at $z = 0$ and with the grating axis along the $x$-direction. The assumption that $v_z$ remains constant throughout the diffraction process is well justified because in the experimental realization the kinetic energy of the longitudinal motion exceeds the average interaction potential and the kinetic energy of the transverse motion by orders of magnitude \cite{stefanbeyond}. In addition, the restriction to a single velocity $v_z$ is not a limitation because a finite longitudinal coherence can always be incorporated by averaging over the distribution of $v_z$ in the end \cite{kdtlinjp}.
The orientation-dependent interaction between the molecule and the grating is described by the grating potential $V(x,z,\Omega)$, where $\Omega$ denotes the set of orientational degrees of freedom (DOFs) of the molecule, such as the Euler angles, $z = v_z t$ is the center-of-mass (CM) position of the molecule in the flight direction at time $t$ and $x$ is the CM position in the grating direction. Since the extension of the particle beam in $y$-direction is usually small compared to the extension of the grating, we can neglect the $y$-dependence of the potential $V(x,v_zt,\Omega)$.
In what follows, we distinguish between the rotationally \emph{free} transit, where the (expectation value of the) rotational period $\tau_{\rm rot}$ of the molecule is constant during the passage and much smaller than the transit time $\tau_{\rm CM}$ through the grating, and the rotationally \emph{diabatic} scenario, in which the characteristic time $\tau_{\rm CM}$ is much smaller than the rotational period $\tau_{\rm rot}$. While the rotationally free transit is realized, for instance, in near-field matter-wave interferometry with laser gratings \cite{kdtlinjp}, the diabatic transit can occur in far-field experiments with very thin material gratings \cite{naturephthalo}.
\subsection{Quantum Mechanical Description} \label{sec:quth}
The central tool in the theory of matter-wave interferometry of spherically symmetric particles is the grating transformation operator $\hat t$, that maps the incoming transverse state $\rho$ onto the outgoing transverse state $\rho' = \hat t\rho \hat t^\dagger$ with $\matel{x}{\rho'}{x'} = t(x) t^*(x') \matel{x}{\rho}{x'}$ \cite{kdtlinjp,otimanjp,stefanbeyond}. In the case of extended, non-spherical molecules, the grating transformation operator $\hat t$ must be adapted in order to account for the effect of the orientational DOFs. Let us start by deriving this operator from the time-dependent Schr\"{o}dinger equation.
The total Hamiltonian $\hat H$ contains, in addition to the grating potential $V(x,z,\Omega)$, the CM kinetic energy $\hat H_{\rm CM}$ in transverse direction $x$, as well as the free rotational Hamiltonian $\hat H_{\rm rot}$, whose form is determined by the symmetry of the molecule \cite{sakurai}. In what follows we keep the discussion general and denote by $\ket{\ell m}$ the eigenfunctions of the rotational Hamiltonian with energy $\varepsilon_{\ell}$, where $\ell$ labels the energy levels and $m$ labels the degenerate states for each $\ell$. The symmetry of the rotor is arbitrary and so is the degeneracy $\nu_\ell$ for each $\ell$.
For the sake of a more compact notation we introduce for each $\ell$ the tuple ${\underline \phi}_\ell(\Omega)$ containing all the states with energy $\varepsilon_\ell$, i.e. $({\underline \phi_{\ell}})_m = \braket{\Omega}{\ell m}$. The length of the tuple ${\underline \phi}_\ell(\Omega)$ is thus equal to $\nu_\ell$. With these tuples the total wave function $\Psi(x,\Omega, t) = \braket{x,\Omega}{\Psi(t)}$ can be expanded as
\begin{equation} \label{eq:exp}
\Psi(x,\Omega,t) = \sum_\ell e^{-i \varepsilon_\ell t/ \hbar} {\underline \chi}_\ell(x,t)\cdot{\underline \phi}_\ell(\Omega),
\end{equation}
where ${\underline \chi}_\ell(x,t)$ are the tuples of expansion coefficients $({\underline \chi}_\ell)_m = \chi_{\ell m}$. Again, the length of ${\underline \chi}_\ell(x,t)$ is $\nu_\ell$. Inserting the expansion \eqref{eq:exp} into the time dependent Schr\"{o}dinger equation with the total Hamiltonian $\hat H$ gives the coupled equations
\begin{eqnarray} \label{eq:preeffschr}
i \hbar \partial_t {\underline \chi}_\ell(x,t) & = & \hat H_{\rm CM} {\underline \chi}_\ell(x,t) \notag \\
&& + \sum_{\ell'}e^{- i \Delta_{\ell' \ell} t / \hbar} {\underline {\underline V}}_{\ell \ell'}\left ( x, v_z t \right ) {\underline \chi}_{\ell'}(x,t).
\end{eqnarray}
Here, $\Delta_{\ell'\ell} = \varepsilon_{\ell'} - \varepsilon_\ell$ is the rotational energy-level spacing and we defined the grating potential matrix $({\underline {\underline V}}_{\ell \ell'})_{mm'}(x,v_z t) = \matel{\ell m}{V(x,v_z t,\hat \Omega)}{\ell' m'}$ with dimension $\nu_\ell \times \nu_{\ell '}$. In addition, we denote the initial conditions to the Schr\"{o}dinger equation \eqref{eq:preeffschr} by variables without the time argument, such as ${\underline \chi}_{\ell}(x)$.
Equation \eqref{eq:preeffschr} describes the coupled time evolution of the expansion coefficients $\chi_{\ell m}(x,t)$ due to the effectively time-dependent interaction between the molecule and the grating. The exact grating transformation $\hat t$ is given by the unitary time evolution of the system \eqref{eq:preeffschr}; however, in practice a semiclassical approach is sufficient due to the small de Broglie wavelength in matter-wave experiments with heavy molecules \cite{stefanbeyond}. In most cases it is even sufficient to determine the grating transformation $\hat t$ in the eikonal approximation, which can be regarded as the high energy limit of the semiclassical propagator \cite{stefanbeyond}. Physically speaking, the eikonal approximation treats the particle trajectories appearing in the semiclassical propagator as straight lines \cite{sakurai}. We now specify $\hat t$ explicitly for the rotationally free and for the rotationally diabatic transit through the grating.
\subsubsection*{Free Rotor Transit}
In a rotationally free transit the molecule rotates rapidly during the passage through the grating, $\tau_{\rm rot} \ll \tau_{\rm CM}$, and the rotational energy clearly exceeds the average potential. Then the transverse CM wavefunctions ${\underline \chi}_{\ell}(x,t)$ are nearly constant during the transit and one can neglect the non-resonant terms in the Schr\"{o}dinger equation \eqref{eq:preeffschr} (rotating wave approximation),
\begin{equation} \label{eq:effschr}
i \hbar \partial_t {\underline \chi}_\ell(x,t) = \left [ \hat H_{\rm CM}+ {\underline{ \underline V}}_{\ell \ell}\left ( x, v_z t \right ) \right ] {\underline \chi}_{\ell}(x,t).
\end{equation}
The interaction potential is effectively diagonal in the angular momentum quantum numbers $\ell$ due to the fast molecular rotations. The corresponding expansion coefficients ${\underline \chi}_\ell(x,t)$ for different energies $\varepsilon_\ell$ are mutually independent; however, in general the entries within each ${\underline \chi}_\ell(x,t)$ are coupled via Eq. \eqref{eq:effschr}.
In the eikonal approximation \cite{sakurai,kdtlinjp,stefanbeyond} the scattered state ${\underline \chi}'_{\ell}(x)$ behind the grating is according to the Schr\"{o}dinger equation \eqref{eq:effschr} related to the impinging state ${\underline \chi}_\ell(x)$ by ${\underline \chi}'_{\ell}(x) = {\underline {\underline T}}_{\ell}(x){\underline \chi}_{\ell}(x)$ where the grating transformation matrix is given by
\begin{equation} \label{eq:transmat}
{\underline {\underline T}}_\ell(x) = \exp \left [ - \frac{i}{\hbar v_z} \int_{-\infty}^{\infty} \mathrm{d}{z}~ {\underline {\underline V}}_{\ell \ell}(x,z) \right ].
\end{equation}
It is a square matrix of dimension $\nu_\ell$. The corresponding grating transformation operator $\hat t$ can be expressed in terms of the matrix elements $t_\ell^{mm'} = ({\underline {\underline T}}_\ell)_{mm'}$ of the grating transformation matrix \eqref{eq:transmat} as
\begin{equation} \label{eq:grattrafo}
\hat t = \sum_\ell \sum_{mm'} \left ( t_\ell^{mm'}(\hat x) \otimes \ketbra{\ell m}{\ell m'} \right ) \vert t(\hat x, \hat \Omega) \vert .
\end{equation}
Here, we included the aperture function $\vert t(x,\Omega) \vert \in \{0,1 \}$ describing the grating structure. Pure phase-gratings are characterized by $\vert t(x, \Omega) \vert = 1$ while for ideal interaction-free gratings the eikonal phase vanishes \cite{stefanbeyond}.
The grating transformation \eqref{eq:grattrafo} is valid for any interaction potential as long as the rotating wave approximation is justified. This is the case in most matter-wave experiments with large particles, such as near-field interference experiments with laser gratings \cite{kdtlinature,kdtlinjp,otimanjp} or far-field experiments with thick gratings \cite{colloquium}. In Sec. \ref{sec:example} we will specify the grating transformation \eqref{eq:grattrafo} for laser gratings. In addition, we note that in many practical cases the initial rotational state is a thermal mixture of angular momentum states and, hence, the grating transformation \eqref{eq:grattrafo} can be regarded as the thermal average of angular momentum dependent grating transformations $t_\ell^{mm'}(\hat x)$.
\subsubsection*{Diabatic Transit}
In a rotationally diabatic grating passage the interaction time $\tau_{\rm CM}$ is much smaller than the rotational period $\tau_{\rm rot}$ and, in a classical picture, the orientation $\Omega$ remains constant during the transit. In this case the quantum mechanical grating transformation becomes diagonal in the orientational coordinates $\Omega$. In order to see this we note that for short transit times the rotating phases $\Delta_{\ell \ell'} t / \hbar \simeq 2 \pi t / \tau_{\rm rot}$ in the Schr\"{o}dinger equation \eqref{eq:preeffschr} can be neglected. This yields the coupled equations
\begin{equation} \label{eq:effschr2}
i \hbar \partial_t {\underline \chi}_\ell(x,t) = \hat H_{\rm CM} {\underline \chi}_\ell(x,t) + \sum_{\ell'}{\underline {\underline V}}_{\ell \ell'}\left ( x, v_z t \right ) {\underline \chi}_{\ell'}(x,t),
\end{equation}
which depend on the orientational DOFs only in parametric fashion, since these equations are independent of the rotational energy levels $\varepsilon_\ell$. Defining the wavefunction,
\begin{equation}
\Phi(x,\Omega,t) = \sum_{\ell} {\underline \chi}_\ell(x,t) \cdot {\underline \phi}_\ell(\Omega),
\end{equation}
allows us to rewrite the coupled Eqs. \eqref{eq:effschr2} in the form
\begin{equation}
i \hbar \partial_t \Phi(x, \Omega,t) = \left [ \hat H_{\rm CM} + V \left(x, v_z t,\Omega \right ) \right ] \Phi(x,\Omega,t),
\end{equation}
which can now be solved in the eikonal approximation. The diabatic grating transformation operator $\hat t$, mapping the initial state $\Phi(x,\Omega)$ onto the outgoing state $\Phi'(x,\Omega)$, can now be written in the eikonal approximation as
\begin{equation} \label{eq:grattrafo2}
\hat t = \vert t(\hat x, \hat \Omega) \vert \exp \left [ - \frac{i}{\hbar v_z} \int_{-\infty}^\infty \mathrm{d}{z}~ V(\hat x, z, \hat \Omega) \right ].
\end{equation}
As anticipated, the diabatic grating transformation \eqref{eq:grattrafo2} is diagonal in the orientational DOFs $\Omega$. This is in line with the classical picture that the orientation is conserved during the diabatic passage through the grating, and is in contrast to the free rotor case \eqref{eq:grattrafo}, which is diagonal in the angular momentum quantum numbers $\ell$. The diabatic transformation \eqref{eq:grattrafo2} can be appropriate to describe the transit through ultra-thin material gratings, e.g. made of graphene.
\subsection{Classical Grating Transformation}
In order to identify genuine quantum effects in near-field matter-wave interferometry, it is necessary to compare the quantum interference signal to the classically expected shadow pattern of the grating \cite{kdtlinjp}. This moir\'{e}-type signal can in principle be obtained by solving Hamilton's equations of motion for a rotating molecule in the grating potential $V(x,v_z t,\Omega)$. However, the problem is significantly simplified by the classical analogue of the eikonal approximation.
The classical state of the rotating molecule approaching the grating is described by its phase space distribution function $f(x,p_x,\Omega,p_\Omega)$, where $p_\Omega$ denotes the vector of conjugate momenta to the angles $\Omega$. We seek the classical grating transformation that maps the incoming state $f(x,p_x,\Omega,p_\Omega)$ onto the outgoing state $f'(x,p_x,\Omega,p_\Omega)$ \cite{kdtlinjp,klausdecoherence1}, i.e. the classical analogue of the grating transformation operator $\hat t$ mapping $\rho$ onto $\rho'$. We discuss the free rotor scenario first.
\subsubsection*{Free Rotor Transit}
For a rapidly rotating molecule, the CM motion is determined by the grating potential averaged over a rotational period \cite{goldstein}. The resulting eikonal momentum kick $\Delta p_x$ experienced by the molecule while passing through the grating reads as
\begin{eqnarray} \label{eq:deltap}
\Delta p_x(x,\Omega,p_\Omega) & = & - \frac{1}{\tau_{\rm rot} v_z} \int_0^{\tau_{\rm rot}} \mathrm{d}{t'} ~\int_{- \infty}^\infty \mathrm{d}{z} \notag \\
&& \times \partial_x V \left [ x,z, \Omega(t') \right].
\end{eqnarray}
The transferred momentum \eqref{eq:deltap} is a function of both the transverse CM coordinate $x$ and of the initial orientation state $(\Omega,p_\Omega)$, that determines the rotational dynamics. In addition, it is reasonable to neglect the influence of the grating on the rotational dynamics, since the rotational energy is much higher than the average interaction potential. The free rotor approximation is in most practical cases well justified due to the high rotational temperature of the molecules in the experiments \cite{colloquium}.
With the help of the eikonal momentum kick \eqref{eq:deltap}, we can express the outgoing distribution $f'(x, p_x,\Omega,p_\Omega)$ as a CM momentum convolution of the impinging distribution $f(x, p_x, \Omega, p_\Omega)$,
\begin{eqnarray} \label{eq:psconv}
f'(x,p_x,\Omega,p_\Omega) & = & \int \mathrm{d}{p_x'}~ T_{\rm cl}(x,p_x - p_x',\Omega, p_\Omega) \notag \\
&& \times f(x,p_x',\Omega, p_\Omega),
\end{eqnarray}
with the grating transformation function
\begin{eqnarray}
T_{\rm cl} \left ( x, p_x, \Omega,p_\Omega \right ) = \vert t(x, \Omega) \vert \delta \left [p_x - \Delta p_x (x, \Omega, p_\Omega ) \right ].
\end{eqnarray}
The function $T_{\rm cl}(x,p_x,\Omega,p_\Omega)$ is the classical analogue of the quantum mechanical grating transformation operator $\hat t$. We remark that the transformation \eqref{eq:psconv} conserves the angular momenta $p_\Omega$, as in the quantum mechanical case \eqref{eq:transmat}. We will now specify the grating transformation function for the rotationally diabatic case.
\subsubsection*{Diabatic Transit}
In the case of a rotationally diabatic transit, the rotational period is much longer than the transit time and thus the orientation $\Omega$ of the molecule can be considered as being constant during the passage through the grating. The resulting CM momentum kick \eqref{eq:deltap} is
\begin{equation} \label{eq:deltap2}
\Delta p_x(x,\Omega) = - \frac{1}{v_z} \int_{- \infty}^\infty \mathrm{d}{z}~ \partial_x V \left ( x,z, \Omega \right),
\end{equation}
which is a function of the transverse CM position $x$ and of the orientation $\Omega$. In a similar fashion, we obtain the eikonal angular momentum kick $\Delta p_\Omega$,
\begin{equation}
\Delta p_\Omega = - \frac{1}{v_z} \int_{- \infty}^\infty \mathrm{d}{z}~ \partial_\Omega V \left ( x,z, \Omega \right),
\end{equation}
which is also a function of $x$ and $\Omega$. The resulting grating transformation is now given by a CM momentum and an angular momentum convolution analogous to Eq. \eqref{eq:psconv} with the grating transformation function $T_{\rm cl}(x,p_x,\Omega,p_\Omega)$,
\begin{eqnarray} \label{eq:tclas2}
T_{\rm cl} \left ( x, p_x, \Omega, p_\Omega \right ) & = & \vert t(x, \Omega) \vert \delta \left [p_x - \Delta p_x(x, \Omega ) \right ] \notag \\
&& \times \delta \left [p_\Omega - \Delta p_\Omega(x, \Omega ) \right ].
\end{eqnarray}
Having derived the quantum and classical grating transformations, we can next apply them to the molecular diffraction from standing wave laser gratings.
\section{Interference of Symmetric Top Molecules in the KDTLI} \label{sec:example}
Here, we first discuss the diffraction of rapidly rotating symmetric top molecules from laser gratings in order to illustrate the previously derived grating transformation. The obtained transformation operator is then used to determine the quantum fringe visibility as well as the classical shadow pattern in the Kapitza-Dirac-Talbot-Lau interferometer (KDTLI) \cite{kdtlinature,kdtlinjp,colloquium}. The KDTLI is a near-field interferometer consisting of three gratings that all share the same grating period $d$. The first and third gratings are material masks, while the central one is a standing light wave. The transverse coherence of the incoming particle beam is prepared by the first material grating at distance $L$ in front of the standing wave. Diffraction occurs at the central grating and the signal is detected with the help of the third one at distance $L$ further downstream. The KDTLI is currently the workhorse for high-mass interference experiments in Vienna \cite{colloquium}.
For the theoretical description of rotating molecules in the KDTLI it is a reasonable approximation to neglect the influence of the first and the third grating on the orientational DOFs because the diffraction relevant for interference takes place only at the central grating. The total transverse state operator $\rho$ behind the third grating can be obtained by successively applying the grating transformations of the three gratings as well as the intermediate unitary evolutions \cite{stefanbeyond,kdtlinjp}. Finally, the orientational DOFs are traced out in order to obtain the interference pattern on the screen.
\subsection{Standing Wave Grating Transformation}
We now evaluate the grating transformation $\hat t$ for symmetric top molecules traversing a standing-wave laser grating. This operator will then be used to calculate the near-field interference pattern of symmetric molecules in the KDTLI, but it can also be applied to other situations. We consider a polarizable, symmetric molecule with moments of inertia $I = I_1 = I_2$ and $I_3$ ($I / I_3 \geq 1$ for prolate particles and $1/2 < I / I_3 < 1$ for oblate discs), that is diffracted from a Gaussian standing laser wave of wavelength $\lambda$. The laser wave is linearly polarized in the direction ${\bf n}$ and acts as a pure phase grating, $\vert t(x,\Omega) \vert = 1$. The intensity of the Gaussian standing laser beam averaged over one optical cycle is given by \cite{kdtlinjp}
\begin{equation} \label{eq:int}
I(x,z) = \frac{8 P}{\pi w_y w_z} \exp \left ( - \frac{2 z^2}{w_z^2} \right ) \sin^2 \left ( \pi \frac{x}{d} \right ),
\end{equation}
where $d = \lambda / 2$ is the grating period, $P$ the laser power and $w_z$ is the beam waist in $z$-direction. Since the extension of the incoming particle beam in $y$-direction is usually small compared to the beam waist in $y$-direction \cite{kdtlinature} it is natural to neglect the $y$-dependence of the intensity \eqref{eq:int}.
Denoting by $\alpha_\|$ and $\alpha_\bot$ the two independent components of the polarizability tensor of the particle (along its symmetry axis and perpendicular to it, respectively), the grating potential can be expressed as \cite{kdtlinjp,averbukh2010,averbukh2011a,averbukh2011b} \footnote{We assume here that the extension of the molecule is much less than the period of the standing wave; otherwise the use of a polarizability tensor would be invalid.}
\begin{eqnarray} \label{eq:pot1}
V \left ( x, z, \theta \right ) & = & -\frac{4 P}{\pi \varepsilon_0 c w_z w_y} \exp \left ( - \frac{2 z^2}{w_z^2} \right ) \notag \\
&& \times \left ( \alpha_\| - \Delta \alpha \sin^2 \theta \right )\sin^2 \left ( \pi \frac{x}{d} \right ) .
\end{eqnarray}
Here $\Delta \alpha = \alpha_\| - \alpha_\bot$ is the polarizability anisotropy of the molecule ($\Delta \alpha > 0$ for prolate particles) and $\theta$ is the nutation angle with respect to the field polarization ${\bf n}$. The grating transformation can be safely determined in the free rotor approximation, Eq. \eqref{eq:grattrafo}, since the laser beam waist along the flight direction is approximately $w_z = 20$ $\mu$m \cite{kdtlinature,kdtlinjp}. For an exemplary molecule (diazobenzene with mass $M \simeq 1030$ amu and length $L_{\rm mol} \simeq 3.5$ nm) with velocity $v_z = 100$ m s$^{-1}$ and rotational temperature $T = 600$ K, we thus have $\tau_{\rm rot} / \tau_{\rm CM} \sim 10^{-4}$.
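This estimate is easily reproduced numerically. The following Python sketch uses only the parameters quoted above; the thin-rod moment of inertia $I = M L_{\rm mol}^2 / 12$ and the thermal angular frequency $\omega = \sqrt{k_{\rm B} T / I}$ are rough assumptions made solely for this order-of-magnitude check:
\begin{verbatim}
import numpy as np

amu, kB = 1.6605e-27, 1.3807e-23      # SI constants
M    = 1030 * amu                     # molecular mass
Lmol = 3.5e-9                         # molecular length, m
vz, T, wz = 100.0, 600.0, 20e-6       # velocity, rot. temperature, waist

I = M * Lmol**2 / 12                  # thin-rod moment of inertia
                                      # (assumption for this estimate)
tau_CM  = wz / vz                     # transit time ~ 2e-7 s
tau_rot = 2*np.pi / np.sqrt(kB*T/I)   # thermal rotation period ~ 9e-11 s

print(tau_rot / tau_CM)               # ~ 5e-4, of order 1e-4 as quoted
\end{verbatim}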
In order to calculate the grating transformation matrix \eqref{eq:transmat} we need to specify the grating potential matrices, whose particular form depends on the symmetry of the rotor. Here we restrict our discussion to symmetric top molecules for reasons of simplicity. The eigenstates $\ket{\ell m k}$ of the symmetric rotor (with classical Hamilton function \eqref{eq:hamrot}, see appendix) are labeled by the three quantum numbers $\ell \in {\mathbb N}_0$, $m \in \{-\ell,\ldots, \ell \}$ and $k \in \{ -\ell, \dots, \ell \}$ with eigenenergies $\varepsilon_{\ell k}$ \cite{edmonds,brink}. Denoting by $\varphi \in [0,2 \pi)$, $\theta \in [0,\pi)$ and $\psi \in [0,2 \pi)$ the three Euler angles with respect to the field polarization ${\bf n}$ ($z$-$y'$-$z''$ convention), the configuration space representation of the states $\ket{\ell m k}$ can be given explicitly in terms of the Wigner $D$-matrices. In particular, the eigenfunctions are $\braket{\Omega}{\ell m k} = \sqrt{2 \ell +1}/ (2 \pi \sqrt{2} ) D^{\ell *}_{m k}(\Omega)
$ \cite{edmonds,brink}, where $D^\ell_{mk}(\Omega)$ is related to the (small) Wigner $d$-matrix by $D_{mk}^\ell(\Omega) = e^{-im\varphi} d_{mk}^\ell(\theta) e^{-i k \psi}$. We remark that the (small) Wigner $d$-matrices $d_{mk}^\ell(\theta)$ are real due to the employed definition of the Euler angles \cite{sakurai}. The basis kets $\ket{\ell m k}$ are complete and orthonormal with respect to the infinitesimal volume element $\mathrm{d}{\Omega} = \mathrm{d}{\varphi}~ \mathrm{d}{\theta}~ \mathrm{d}{\psi}~ \sin \theta$.
The interaction potential \eqref{eq:pot1} is a function of the nutation angle $\theta$ only and thus the quantum numbers $m$ and $k$ are conserved. The resulting grating potential matrix is diagonal in all three quantum numbers $\ell$, $m$, $k$, and the diagonal elements can be given with the help of the expectation values $Q_{\ell m k} := \matel{\ell m k}{\sin^2 \hat \theta}{\ell m k}$. Expressing these expectation values in terms of Wigner $D$-matrices \cite{seideman05} and using the properties of the Wigner $3$-j symbol \cite{edmonds,brink} yields the algebraic form
\begin{eqnarray} \label{eq:qlmk}
Q_{\ell m k} & = & \frac{1}{2} + \frac{1}{2} \frac{(2m)^2 + (2 k )^2 - 1}{(2 \ell -1 )(2 \ell + 3)} \notag \\
&& - \frac{3}{2} \frac{(2 m k)^2}{\ell ( \ell + 1) ( 2 \ell -1 )(2 \ell +3)}.
\end{eqnarray}
We remark that in the limit of a linear rotor, $I / I_3 \to \infty$ and thus $k = 0$, the well-known expectation value \cite{averbukh2010}
\begin{equation}\label{eq:qlm}
Q_{\ell m 0} = \frac{1}{2} + \frac{1}{2} \frac{(2 m)^2 - 1}{(2 \ell - 1)(2 \ell + 3)},
\end{equation}
of the linear rigid rotor is recovered.
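As a consistency check, Eq. \eqref{eq:qlmk} can be verified against Eq. \eqref{eq:qlm} for $k = 0$ and against the isotropy sum rule $\sum_{m,k} Q_{\ell m k} / (2\ell+1)^2 = 2/3$, the isotropic mean of $\sin^2 \theta$. A small Python sketch with exact rational arithmetic reads:
\begin{verbatim}
from fractions import Fraction

def Q(l, m, k):
    # Eq. (qlmk) in exact arithmetic; the last term is absent
    # for l = 0, where m = k = 0
    q = Fraction(1, 2) + Fraction((2*m)**2 + (2*k)**2 - 1,
                                  2*(2*l - 1)*(2*l + 3))
    if l > 0:
        q -= Fraction(3*(2*m*k)**2,
                      2*l*(l + 1)*(2*l - 1)*(2*l + 3))
    return q

for l in range(1, 7):
    # k = 0 reproduces the linear-rotor expression, Eq. (qlm)
    for m in range(-l, l + 1):
        assert Q(l, m, 0) == Fraction(1, 2) + \
            Fraction((2*m)**2 - 1, 2*(2*l - 1)*(2*l + 3))
    # average over the (2l+1)^2 degenerate states equals 2/3
    avg = sum(Q(l, m, k) for m in range(-l, l + 1)
              for k in range(-l, l + 1)) / (2*l + 1)**2
    assert avg == Fraction(2, 3)
\end{verbatim}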
We are now ready to identify the grating transformation operator for symmetric top molecules traversing a standing wave laser grating by inserting the expectation value \eqref{eq:qlmk} into the grating transformation matrix \eqref{eq:transmat}. The resulting operator \eqref{eq:grattrafo} is
\begin{eqnarray} \label{eq:transmatlaser}
\hat t & = & \sum_{\ell = 0}^\infty \sum_{m, k = - \ell}^\ell \exp \left [ i \phi_0 \left ( 1 - \frac{\Delta \alpha}{\alpha_\|} Q_{\ell mk} \right ) \sin^2 \left ( \pi \frac{\hat x}{d} \right ) \right ] \notag \\
&& \otimes \ketbra{\ell m k}{\ell m k},
\end{eqnarray}
where $\phi_0 = 4 \alpha_\| P / \varepsilon_0 c\hbar w_y v_z \sqrt{2 \pi}$ is the eikonal phase \cite{kdtlinjp} defined with the polarizability $\alpha_\|$ along the symmetry axis of the molecule. It is important to note that the eikonal phase imprinted on a particle during the grating passage depends on all of its angular momentum quantum numbers $\ell$, $m$ and $k$. The final signal, obtained by a trace over the orientational DOFs, is thus an average over signals from different grating transformations \eqref{eq:transmatlaser}, each weighted with the probability of the corresponding angular momentum state $\ket{\ell m k}$. This is consistent with the fact that the classical deflection angle of molecules traversing an electrostatic field depends on the angular momentum of the deflected particle \cite{averbukh2010,averbukh2011a,averbukh2011b}. For nearly isotropic particles, $\Delta \alpha / \alpha_\| \ll 1$, the transformations \eqref{eq:transmatlaser} are all equal and the average over angular momentum states can be neglected.
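Since only a Gaussian integral is involved, the eikonal phase in Eq. \eqref{eq:transmatlaser} can be checked symbolically. In the sketch below (a minimal sympy verification), $\sin^2 \theta$ in the potential \eqref{eq:pot1} is simply replaced by a constant $Q$, standing for its rotational expectation value:
\begin{verbatim}
import sympy as sp

z, wz, wy, P, e0, c, hb, vz, x, d = sp.symbols(
    'z w_z w_y P epsilon_0 c hbar v_z x d', positive=True)
ap, da, Q = sp.symbols('alpha_par Delta_alpha Q', positive=True)

V = -4*P/(sp.pi*e0*c*wz*wy) * sp.exp(-2*z**2/wz**2) \
    * (ap - da*Q) * sp.sin(sp.pi*x/d)**2           # Eq. (pot1)

phase = sp.simplify(-sp.integrate(V, (z, -sp.oo, sp.oo))/(hb*vz))

phi0 = 4*ap*P/(sp.sqrt(2*sp.pi)*e0*c*hb*wy*vz)     # eikonal phase
expected = phi0*(1 - da/ap*Q)*sp.sin(sp.pi*x/d)**2
assert sp.simplify(phase - expected) == 0
\end{verbatim}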
\subsection{Quantum and Classical Fringe Visibility}
Having specified the grating transformation \eqref{eq:transmatlaser}, the quantum fringe visibility $\mathcal V$ of symmetric top molecules in the KDTLI can be calculated by applying the transformation \eqref{eq:grattrafo} with \eqref{eq:transmatlaser} to the quantum phase space formalism presented in \cite{kdtlinjp}. The common period of all three gratings is denoted by $d$ and the de Broglie wavelength of the incoming molecule by $\lambda_{\rm dB} = h / M v_z$. The Talbot length, the characteristic length scale in near-field interferometry \cite{colloquium}, is $L_{\rm T} = d^2 / \lambda_{\rm dB}$. A straightforward calculation yields the sinusoidal quantum fringe visibility
\begin{eqnarray} \label{eq:vis}
{\mathcal V} & = & 2 \, {\rm sinc}^2 ( \pi f) \sum_{\ell = 0}^\infty \sum_{m,k = - \ell}^\ell p_{\ell m k} \notag \\
&& \times J_2 \left [\phi_0 \left ( 1 - \frac{\Delta \alpha}{\alpha_\|} Q_{\ell m k} \right ) \sin \left ( \pi \frac{L}{L_{\rm T}} \right ) \right ],
\end{eqnarray}
where $f$ is the opening fraction of the first and the third grating, $J_2( \cdot )$ is the second order Bessel function of the first kind, $L$ is the distance between the gratings and $p_{\ell m k}$ is the statistical weight of the angular momentum state $\ket{\ell m k}$. The interference contrast can be regarded as the average of point particle visibilities with $(\ell,m,k)$-dependent eikonal phases and weights $p_{\ell m k}$.
Since the molecules are emitted from a thermal source into vacuum, the rotational DOFs follow a thermal distribution, $p_{\ell m k} \sim \exp ( - \varepsilon_{\ell k} / k_{\rm B} T )$, at a very high temperature, $k_{\rm B} T \gg \hbar^2 / I$. Then the sum over angular momenta in Eq. \eqref{eq:vis} can be replaced by the integral over the corresponding classical distribution \cite{brink} and Eq. \eqref{eq:vis} is further simplified. In particular, we denote by $p_{\rm th}(q)$ the probability density of the variable $q = Q(E_{\rm rot}, p_\varphi, p_\psi)$, where $Q(E_{\rm rot},p_\varphi,p_\psi)$ is the classical free temporal mean value of $\sin^2 \theta(t)$ depending on the conserved rotational energy $E_{\rm rot}$ and on the canonical momenta $p_\varphi$ and $p_\psi$ of $\varphi$ and $\psi$ rotations, respectively. A simple expression for $Q(E_{\rm rot},p_\varphi,p_\psi)$, as well as for the thermal distribution $p_{\rm th}(q)$, is derived in the appendix. This probability density is depicted in Fig.~\ref{fig:rotdist} and reads
\begin{eqnarray} \label{eq:pth}
p_{\rm th}(q) & = & \sqrt{\frac{I}{3 I_3}} \int_{\zeta(q)} \mathrm{d}{u}~ \left [1 - \left (1 - \frac{I}{I_3} \right) u^2 \right ]^{- 3 / 2} \notag \\
&& \times \left [\left (u^2 - \frac{1}{3} \right ) \left ( u^2 + 1 - 2q \right ) \right ]^{-1/2},
\end{eqnarray}
where the integral must be taken over the union of two intervals, $\zeta(q) = \zeta_1(q) \cup \zeta_2(q)$. The first interval is $\zeta_1(q) = [0,\sqrt{\mathrm{min}[A(q)]}]$, where $A(q) = \{ 1/3, 2 q - 1, 1 - q\}$; this contribution to the distribution \eqref{eq:pth} vanishes for $q \leq 1/2$. The second interval is $\zeta_2(q) = [\sqrt{\mathrm{max}[A(q)]},1]$. The distribution \eqref{eq:pth} depends only on the ratio $I / I_3$ and is independent of the thermal energy $k_{\rm B} T$.
\begin{figure}
\centering
\includegraphics[width = 80mm]{rotdist2-eps-converted-to.pdf}
\caption{(Color online) The thermal distribution $p_{\rm th}(q)$, Eq. \eqref{eq:pth}, of the temporal average $q$ of $\sin^2 \theta(t)$ for the free symmetric rotor with different moments of inertia, $I / I_3 = 1/2$ (solid line), $I / I_3 = 10$ (dashed line) and $I / I_3 \to \infty$ (dot-dashed line).}\label{fig:rotdist}
\end{figure}
In Fig.~\ref{fig:rotdist} we show the distribution \eqref{eq:pth} of the symmetric rotor for the oblate limit ($I / I_3 = 1/2$), a prolate particle ($I / I_3 = 10$) and the linear rotor ($I / I_3 \to \infty$). Similar figures were obtained numerically in \cite{averbukh2011a}. For finite $I / I_3$ the probability density \eqref{eq:pth} is discontinuous at $q = 1/2$, which follows from the definition of the set $\zeta(q)$, and it diverges at $q = 2/3$, as can be observed directly from Eq. \eqref{eq:pth}. In the limit of the linear rotor, $I / I_3 \to \infty$, the established \cite{averbukh2010} form $p_{\rm th}(q) = 1 / \sqrt{2 q - 1}$ is recovered.
Using the probability density \eqref{eq:pth} the quantum fringe visibility $\mathcal V$ takes on its final form
\begin{eqnarray} \label{eq:vis2}
{\mathcal V} & = & 2 \, {\rm sinc}^2 ( \pi f) \int_0^1 \mathrm{d}{q}~ p_{\rm th}(q) \notag \\
&& \times J_2 \left [\phi_0 \left ( 1 - \frac{\Delta \alpha}{\alpha_\|} q \right ) \sin \left ( \pi \frac{L}{L_{\rm T}} \right ) \right ] .
\end{eqnarray}
In order to identify genuine quantum interference effects we must compare the visibility \eqref{eq:vis2} to the visibility $\mathcal{V}_{\rm cl}$ of the classical shadow pattern \cite{kdtlinjp}, which is most conveniently calculated with the help of the phase space grating transformations \eqref{eq:psconv}. The classical momentum kick \eqref{eq:deltap} transferred to the molecule by the grating potential \eqref{eq:pot1} is
\begin{eqnarray}
\Delta p_x(x,E_{\rm rot}, p_\varphi, p_\psi) &=& \frac{\pi \hbar \phi_0}{d}\left [ 1 - \frac{\Delta \alpha}{\alpha_\|} Q(E_{\rm rot},p_\varphi,p_\psi) \right ] \notag \\
&& \times \sin \left ( 2 \pi \frac{x}{d} \right ).
\end{eqnarray}
Following the treatment in \cite{kdtlinjp} one obtains the classical fringe visibility
\begin{eqnarray} \label{eq:nucl}
\mathcal{V}_{\rm cl} & = & 2 \, \mathrm{sinc}^2 ( \pi f) \int_{0}^1 \mathrm{d}{q}~ p_{\rm th}(q) \notag \\
& & \times J_2 \left [\left ( 1 - \frac{\Delta \alpha}{\alpha_\|} q \right ) \frac{\phi_0 \pi L}{L_{\rm T}} \right ],
\end{eqnarray}
where we assumed the orientational DOFs to be thermally distributed.
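For the linear-rotor limit these two expressions reduce to one-dimensional integrals which are easily evaluated. A minimal numerical sketch is given below; it uses the substitution $u = \sqrt{2q - 1}$, for which $p_{\rm th}(q)\,\mathrm{d}q = \mathrm{d}u$, to remove the integrable singularity at $q = 1/2$, and the value $\phi_0 = 3$ is an arbitrary illustrative choice (in the experiment $\phi_0$ is proportional to the laser power $P$):
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import jv

f, phi0, r, LoLT = 0.42, 3.0, 0.9, 0.5    # r = Delta_alpha / alpha_par

def visibility(phase_of_q):
    # thermal average with p_th(q) = 1/sqrt(2q-1); u = sqrt(2q-1)
    val, _ = quad(lambda u: jv(2, phase_of_q((1 + u*u)/2)), 0.0, 1.0)
    # np.sinc(f) = sin(pi f)/(pi f), i.e. the sinc(pi f) of the text
    return 2 * np.sinc(f)**2 * val

Vq  = visibility(lambda q: phi0*(1 - r*q)*np.sin(np.pi*LoLT))  # quantum
Vcl = visibility(lambda q: phi0*(1 - r*q)*np.pi*LoLT)          # classical
print(Vq, Vcl)
\end{verbatim}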
The classical visibility \eqref{eq:nucl} decays as $\sqrt{L_{\rm T} / L}$ with increasing grating separation $L$ (and decreasing particle velocity $v_z$) while the quantum visibility \eqref{eq:vis2} is periodic in $L / L_{\rm T}$ \cite{kdtlinature,kdtlinjp}. This can be used to discriminate between genuine quantum behavior and classical shadow effects \cite{kdtlinature} as illustrated in Fig.~\ref{fig:vis3} for the linear rotor $I / I_3 \to \infty$. In Figs.~\ref{fig:vis3} and \ref{fig:vis1} we consider an exemplary molecule ($M = 1030$ amu, $L_{\rm mol} = 3.5$ nm, $\overline \alpha = 4 \pi \varepsilon_0 \times 50$ \AA$^3$, $v_z = 100$ m s$^{-1}$, $I / I_3 \to \infty$ \cite{kdtlinature}) traversing the KDTLI ($d = 266$ nm, $L / L_T = 0.5$, $f = 0.42$, $w_z = 20$ $\mu$m \cite{kdtlinature,kdtlinjp}) for three different relative anisotropies $\Delta \alpha / \alpha_\|$.
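For reference, the de Broglie wavelength and Talbot length implied by these parameters are obtained immediately:
\begin{verbatim}
h, amu = 6.626e-34, 1.6605e-27
M, vz, d = 1030*amu, 100.0, 266e-9     # values quoted above

lam_dB = h/(M*vz)                      # ~ 3.9e-12 m
L_T = d**2/lam_dB                      # ~ 1.8e-2 m
print(lam_dB, L_T)                     # L/L_T = 0.5 gives L ~ 9 mm
\end{verbatim}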
An experimentally observable signature of the orientational DOFs can be found in the absolute value of the quantum fringe visibility \eqref{eq:vis2} as a function of the laser power $P$. This is illustrated in Fig.~\ref{fig:vis1}. While the values of subsequent maxima in the visibility are strictly decreasing for spherical particles, this is not the case for non-spherical molecules. For large relative anisotropies $\Delta \alpha / \alpha_\|$ the thermal average over angular momentum states leads to the appearance of an additional side peak between the major recurrences. This is a signature of the orientational DOFs of the molecule traversing the grating.
In Fig.~\ref{fig:vis5} we show the absolute value of the quantum fringe visibility \eqref{eq:vis2} as a function of laser power for three differently shaped prolate molecules. All other parameters are as for the linear molecules of Fig.~\ref{fig:vis1} ($\Delta \alpha / \alpha_\| = 0.9$). While the visibility coincides with that of the linear molecule for large ratios $I / I_3$, it approaches the behavior of a spherically symmetric object for $I / I_3 \to 1$. The signature of the orientational DOFs discussed above is most pronounced for linear molecules but can be observed for all prolate molecules.
\begin{figure}
\centering
\includegraphics[width = 80mm]{vis4a-eps-converted-to.pdf}
\caption{(Color online) Quantum (upper curves) and classical (lower curves) absolute sinusoidal fringe visibility as a function of relative separation $L / L_T$ for a linear molecule in the KDTLI for three different values of $\Delta \alpha / \alpha_\|$.}\label{fig:vis3}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 80mm]{vis2a-eps-converted-to.pdf}
\caption{(Color online) Absolute value of the sinusoidal quantum fringe visibility \eqref{eq:vis} as a function of laser power for a linear rigid molecule in the KDTLI for three different values of $\Delta \alpha / \alpha_\|$.}\label{fig:vis1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width = 80mm]{vis5-eps-converted-to.pdf}
\caption{(Color online) Absolute value of the sinusoidal quantum fringe visibility \eqref{eq:vis2} as a function of laser power for the prolate symmetric top in the KDTLI for different values of $I / I_3$ ($\Delta \alpha / \alpha_\| = 0.9$).}\label{fig:vis5}
\end{figure}
\section{Conclusion} \label{sec:conc}
We extended the theory of matter-wave interferometry to large, non-spherical particles by accounting for the influence of the rotational dynamics. In particular, we derived the grating transformation operator for the rotationally free and for the rotationally diabatic transit. This operator describes the modification of the transverse quantum state due to the orientation-dependent interaction with the grating. In addition, the classical shadow pattern was derived in order to provide the tools required for the identification of genuine quantum effects in near-field matter-wave interferometry.
If the molecule rotates rapidly with high energy, the transit is rotationally free and the grating transformation depends only on the angular momentum of the impinging particle. On the other hand, if the transit time is much shorter than the average rotational period, the grating transformation depends on the orientation of the particle.
We worked out the grating transformation for symmetric top molecules traversing a standing-wave laser grating, and we showed how it enters the description of symmetric top particles in the Kapitza-Dirac-Talbot-Lau interferometer as performed at the University of Vienna \cite{kdtlinature}. In these experiments the typical transit time exceeds the rotational period by orders of magnitude, making the transit rotationally free. A signature of the rotational dynamics was pointed out in the predicted quantum fringe visibility as a function of laser power. We also derived a closed-form expression for the distribution of deflection angles in classical deflection experiments with symmetric top molecules, as required in this context.
\section{Acknowledgments}
We acknowledge support by the European Commission within NANOQUESTFIT (No. 304886).
\section{Introduction.}
The aim of this article is to complete the results of [M.00] and [B.08] and to show that they imply a rather general existence theorem for meromorphic quotients of quasi-proper meromorphic equivalence relations. We also try to shed more light on the "topological" condition which is needed in order to have such a meromorphic quotient. I hope that these stronger results and this new presentation of the topological condition will help potential users.\\
Of course the results here are in some sense a wide generalization of Henri Cartan's classical quotient theorem [C.60], but without requiring compactness of the equivalence classes and in a more "geometrical" spirit which allows, for instance, to always obtain a meromorphic quotient in the proper case. Remember that in order to have a functorial (holomorphic) quotient as a complex space, H. Cartan gave a necessary and sufficient condition which is not automatically satisfied.\\
The method we use is deeply related to the study of geometric f-flattening of a quasi-proper map, which is a generalization, to the case of non compact fibers, of the geometric flattening theorem of [B.78] for proper holomorphic maps. We can only prove the existence of a meromorphic quotient under a strong quasi-properness assumption, and in this case the corresponding meromorphic quotient map admits a geometric f-flattening. It is easy to see that there exist examples where a quasi-proper meromorphic quotient exists although the quotient map does not admit a geometric f-flattening. But this phenomenon is related to the fact that quasi-properness is a notion which is too weak for non-equidimensional maps : \\
the fact that all irreducible components of a big fiber meet a compact set does not imply that the generic nearby fibers have no irreducible components which escape to infinity (see [M.00] or [B.08] for the phenomenon of "escape at infinity"). \\
The topological condition we add guarantees that this pathology does not happen and in fact gives a notion of "strong quasi-properness" which is equivalent to the (local) existence of a geometric f-flattening. This also corresponds to the fact that the generic fibers of the map may be completed into an f-meromorphic family of cycles. Like quasi-properness, this notion is local on the target space, but, in contrast with the quasi-proper case, it is stable under proper modifications of the image.\\
Another key point for proving this rather general existence theorem is the reparametrisation theorem of D. Mathieu [M.00], which is a consequence of his generalization of N. Kuhlmann's semi-proper direct image theorem [K.64] and [K.66] to the case where the target space is an open set of a Banach space. We show in the appendix how it in fact implies a semi-proper direct image theorem\footnote{but of course the functor "f-analytic families of \ $n-$cycles in \ $M$" is not representable in the category of reduced complex spaces.} with values in \ $\mathcal{C}_n^f(M)$, the space of finite type closed \ $n-$cycles in a complex space \ $M$.\\
As an application of these ideas, we give a Stein's factorization theorem for a strongly quasi-proper holomorphic map.\\
To conclude this introduction let me say that I consider this work as a tribute to professors R. Remmert and H. Grauert : the direct image theorem of R. Remmert [R.57] is the basic idea used here to produce meromorphic quotients, and I think that the present work is also a far-reaching conclusion to this problem, which was initiated by H. Grauert [G.83] and [G.86] and his student B. Siebert [S.93] and [S.94].\\
During the final drafting of this article the first discussions about the article [B.Mg.10] were going on and several interesting points were clarified. So I want to thank J\'on Magn\'usson for his help.
\section{Quasi-proper geometric flattening.}
\subsection{Geometric flattening.}
Let \ $f : M \to S$ \ be a holomorphic map between two reduced and irreducible complex spaces and put \ $n : = \dim M - \dim S$.\\
Recall that such a map is called {\bf geometrically flat} when there exists an analytic family \ $(X_s)_{s \in S}$ \ of \ $n-$cycles in \ $M$ \ parametrized by \ $S$ \ such that for any \ $s \in S$ \ the support \ $\vert X_s\vert$ \ of the cycle \ $X_s$ \ is the set theoretic fiber \ $f^{-1}(s)$ \ and such that, for general \ $s$, the cycle \ $X_s$ \ is reduced (i.e. equal to its support, so each irreducible component is of multiplicity 1 in this cycle). Notice that such a map \ $f$ \ is open and that, in the case where \ $S$ \ is normal, the map \ $f$ \ is geometrically flat if and only if it is \ $n-$equidimensional (see [B.75] or [BOOK]).\\
For a geometrically flat \ $f$ \ we have a classifying map for the analytic family \ $(X_s)_{s \in S}$ \ which is a map \ $ \varphi : S \to \mathcal{C}_n^{loc}(M)$ \ given by \ $\varphi(s) : = X_s$.
\begin{defn}\label{geom.flattning}
Given an arbitrary holomorphic map \ $f : M \to S$ \ between two reduced and irreducible complex spaces, a {\bf geometric flattening} for \ $f$ \ is a proper holomorphic modification \ $\tau : \tilde{S} \to S$ \ of \ $S$ \ such that the strict transform\footnote{In general \ $\tilde{M}$ \ is, by definition, the union of irreducible components of \ $\tilde{S}\times_S M$ \ which dominate an irreducible component of \ $S$. When \ $M$ \ is irreducible and \ $f$ \ surjective \ $\tilde{M}$ \ is the irreducible component of \ $\tilde{S}\times_S M$ \ which surjects on \ $S$.} \\ $\tilde{f} : \tilde{M} \to \tilde{S} $ \ is a geometrically flat holomorphic map.
\end{defn}
As the holomorphic map \ $\pi : \tilde{M} \to M$ \ is proper we have (see [B.75] or [BOOK]) a direct image map \ $\pi_* : \mathcal{C}_n^{loc}(\tilde{M}) \to \mathcal{C}_n^{loc}(M)$ \ which is holomorphic in the sense that it preserves the analyticity of families of \ $n-$cycles. So when we have a geometric flattening for \ $f$ \ the classifying map \ $\tilde{\varphi} : \tilde{S} \to \mathcal{C}_n^{loc}(\tilde{M})$ \ composed with the direct image \ $\pi_*$ \ gives a map \ $\pi_*\circ \tilde{\varphi} : \tilde{S} \to \mathcal{C}_n^{loc}(M)$ \ which will classify the "fibers" of \ $f$. The existence of a geometric flattening for the map \ $f$ \ may be considered as the meromorphy of the classifying map of the analytic family of generic fibers of \ $f$ \ along the center \ $\Sigma$ \ of the proper holomorphic modification \ $\tau$ \ (see the definition \ref{f-mero.} below).\\
\subsection{Quasi-proper geometric flattening.}
In what follows we shall consider mainly finite type \ $n-$cycles, that is to say \ $n-$cycles having finitely many irreducible components. Recall that the classical corresponding relative notion is given by the following definition
\begin{defn}\label{Quasi-proper}
Let \ $f : M \to S$ \ be a holomorphic map between two reduced complex spaces. We say that \ $f$ \ is {\bf quasi-proper} if for any point \ $s_0 \in S$ \ there exists an open neighbourhood \ $S'$ \ of \ $s_0$ \ in \ $S$ \ and a compact set \ $K$ \ in \ $M$ \ such that for any \ $s \in S'$ \ any irreducible component \ $\Gamma$ \ of \ $f^{-1}(s)$ \ meets \ $K$.
\end{defn}
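\parag{Example} A simple illustration may be useful here : the first projection \ $pr_1 : \mathbb{C}^2 \to \mathbb{C}$ \ is quasi-proper but not proper. Indeed, for \ $s_0 \in \mathbb{C}$ \ take for \ $S'$ \ the open unit disc centered at \ $s_0$ \ and for \ $K$ \ the product of its closure with \ $\{0\}$ ; for any \ $s \in S'$ \ the fiber \ $\{s\} \times \mathbb{C}$ \ is irreducible and meets \ $K$ \ at the point \ $(s,0)$, although no fiber is compact.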
Of course the fibers of a quasi-proper map are finite type cycles, and they satisfy a "uniform" local condition on the finiteness of the irreducible components of the fibers. The notion of a quasi-proper map is not topological. Nevertheless, it may be defined for a continuous map \ $f : M \to S$ \ where \ $M$ is a complex space and \ $S$ \ a topological space, provided we know that every fiber of \ $f$ \ is an analytic subset of \ $M$. This is, for instance, the case for the projection on \ $S$ \ of the (set theoretic) graph \ $\vert G \vert \subset S \times M$ \ of a continuous family of \ $n-$cycles \ $(X_s)_{s \in S}$ \ of \ $M$ \ parametrized by a Hausdorff topological space \ $S$.\\
The following notion is purely topological.
\begin{defn}\label{semi-proper}
Let \ $f : M \to S$ \ be a continuous map between two Hausdorff topological spaces. We say that \ $f$ \ is {\bf semi-proper} if for any point \ $s_0 \in S$ \ there exists an open neighbourhood \ $S'$ \ of \ $s_0$ \ in \ $S$ \ and a compact set \ $K$ \ in \ $M$ \ such that \ $f(M)\cap S' = f(K) \cap S'$.
\end{defn}
Of course a quasi-proper map is always semi-proper. Kuhlmann's theorem (see [K.64] and [K.66]) generalizes Remmert's direct image theorem [R.57] to the semi-proper case :
\begin{thm}[Kuhlmann]
Let \ $f : M \to S$ \ be a holomorphic map between two reduced complex spaces. Assume that \ $f$ \ is semi-proper. Then \ $f(M)$ \ is a (closed) analytic subset of \ $S$.
\end{thm}
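\parag{Example} Semi-properness cannot be omitted in this statement : for the inclusion \ $f : \mathbb{C}^{*} \hookrightarrow \mathbb{C}$ \ the image \ $\mathbb{C}^{*}$ \ is not an analytic subset of \ $\mathbb{C}$. And indeed \ $f$ \ is not semi-proper at \ $s_0 = 0$ : any compact set \ $K \subset \mathbb{C}^{*}$ \ avoids a small punctured disc around the origin, so \ $f(K) \cap S'$ \ cannot co{\"i}ncide with \ $f(M) \cap S' = S' \setminus \{0\}$ \ for any open neighbourhood \ $S'$ \ of \ $0$.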
Recall now that we introduced in [B.08], for any given complex space \ $M$, the topological space \ $\mathcal{C}_n^f(M)$ \ of finite type \ $n-$cycles with a topology finer than the topology induced by the obvious inclusion \ $\mathcal{C}_n^f(M) \subset \mathcal{C}_n^{loc}(M)$, where \ $\mathcal{C}_n^{loc}(M)$ \ is the (topological) space of closed $n-$cycles in \ $M$. We also defined
the notion of an f-analytic family of (finite type) \ $n-$cycles in \ $M$. Let \ $S$ \ be a reduced complex space. The topology on \ $\mathcal{C}_n^f(M)$ \ is defined in order that the f-analytic families of finite type \ $n-$cycles in \ $M$ \ are exactly the analytic families \ $(X_s)_{s \in S}$ \ of \ $n-$cycles in \ $M$ \ that satisfy the following condition :
\begin{itemize}
\item The projection of the support \ $\vert G\vert \subset S \times M$ \ of the graph \ $G$ \ of the family is quasi-proper on \ $S$.
\end{itemize}
It of course implies that for each \ $s \in S$ \ the \ $n-$cycle \ $X_s$ \ is of finite type (i.e. has finitely many irreducible components). \\
This is a purely topological requirement on the family, corresponding to the continuity of the family for the finer topology defined on \ $\mathcal{C}_n^{f}(M)$.\\
For a f-analytic family the classifying map \ $\varphi : S \to \mathcal{C}_n^{loc}(M)$ \ factors through a continuous classifying map \ $\varphi^f : S \to \mathcal{C}_n^f(M)$ \ and the (continuous) inclusion
$$ i : \mathcal{C}_n^f(M) \to \mathcal{C}_n^{loc}(M) .$$
\begin{defn}\label{geometrically f-flat}
Let \ $M$ \ and \ $S$ \ be two reduced complex spaces and let \ $f : M \to S$ \ be a quasi-proper holomorphic map which is \ $n-$equidimensional. We shall say that \ $f$ \ is { \bf geometrically f-flat} when there exists an f-analytic family \ $(X_s)_{s \in S}$ \ of \ $n-$cycles in \ $M$ parametrised by \ $S$ \ such that we have for each \ $s \in S$ \ the equality \ $\vert X_s\vert = f^{-1}(s)$ \ and such that for generic \ $s \in S$ \ the cycle \ $X_s$ \ is reduced (so each irreducible component is of multiplicity 1 in \ $X_s$).
\end{defn}
It is easy to see, using the local compactness of \ $S$, \ that a geometrically flat map \ $f : M \to S$ \ is geometrically f-flat if and only if \ $f$ \ is quasi-proper.\\
For a geometrically f-flat map \ $f$ \ we have an "holomorphic" classifying map
$$ \varphi^f : S \to \mathcal{C}_n^f(M) $$
associated to the f-analytic family of "fibers" of \ $f$.
\bigskip
Let us now consider a quasi-proper surjective holomorphic map \ $f : M \to S$ \ between two reduced and irreducible complex spaces, and let \ $n : = \dim M - \dim S$. As the set of points \ $x$ \ in \ $M$ \ where the fiber of \ $f$ \ through \ $x$ \ has dimension \ $> n$ \ is a closed analytic subset of \ $M$ \ which is \ $f-$saturated, in the sense that, in each fiber of \ $f$, this subset is a union of irreducible components of the fiber, its image is an analytic subset \ $\Sigma$ \ of \ $S$ \ which has no interior point in \ $S$. This is a consequence of Kuhlmann's theorem, because the restriction of a quasi-proper map \ $f$ \ to an \ $f-$saturated analytic subset is again quasi-proper.\\
Without loss of generality, we may assume that the non normal points in \ $S$ \ are in \ $\Sigma$. And now the restriction of \ $f$ \ to \ $M \setminus f^{-1}(\Sigma)$ \ is quasi-proper and equidimensional on the normal complex space \ $S \setminus \Sigma$. So we have a f-analytic family of \ $n-$cycles \ $(X_s)_{s \in S \setminus \Sigma}$ \ associated to the fibers of this map and a corresponding classifying map
$$ \varphi^f : S\setminus \Sigma \to \mathcal{C}_n^f(M). $$
Recall that a subset \ $Q$ \ in \ $\mathcal{C}_n^f(M)$ \ is compact if and only if it is compact in \ $\mathcal{C}_n^{loc}(M)$ \ and the topologies induced on it by \ $\mathcal{C}_n^{loc}(M)$ \ and \ $\mathcal{C}_n^f(M)$ \ co{\"i}ncide. In general the uniform boundedness of the volume in each compact set of \ $M$, which is equivalent, thanks to Bishop's theorem [Bi.64], to the relative compactness in \ $\mathcal{C}_n^{loc}(M)$ \ (see [BOOK] ch.IV), will be easy to check. But to check whether the topologies induced by \ $\mathcal{C}_n^f(M)$ \ and \ $\mathcal{C}_n^{loc}(M)$ \ co{\"i}ncide (which is equivalent to the "non escape at infinity") is a sticky point in the sequel. For a precise description of the topologies on \ $\mathcal{C}_n^{loc}(M)$ \ and \ $\mathcal{C}_n^f(M)$ \ and their comparison see [B.08].\\
The following lemma gives a precise characterization of a compact subset in \ $\mathcal{C}_n^f(M)$ \ without any reference to the "escape at infinity".
\begin{lemma}\label{compact in f-}
A closed subset \ $\mathcal{B}$ \ in \ $\mathcal{C}_n^f(M)$ \ is compact in \ $\mathcal{C}_n^f(M)$ \ if and only if it is a compact subset of \ $\mathcal{C}_n^{loc}(M)$ \ and there exists a compact set \ $K \subset M$ \ such that any irreducible component of any cycle in \ $\mathcal{B}$ \ meets \ $K$.
\end{lemma}
\parag{Proof}
A compact subset in \ $\mathcal{C}_n^f(M)$ \ is clearly compact in \ $\mathcal{C}_n^{loc}(M)$ \ because the inclusion map is continuous. For any point \ $X \in \mathcal{B}$ \ let \ $W_X$ \ be a relatively compact open set in \ $M$ \ which meets all irreducible components of \ $X$. Then \ $\Omega(W_X)$, the set of cycles in \ $\mathcal{C}_n^f(M)$ \ such that each irreducible component meets \ $W_X$, is an open neighbourhood of \ $X$ \ in \ $\mathcal{C}_n^f(M)$. Choosing a finite sub-covering \ $(\Omega(W_{X_i}))_{i \in I}$ \ of the covering of \ $\mathcal{B}$ \ by the open sets \ $(\Omega(W_X))_{X \in \mathcal{B}}$ \ gives a relatively compact open set \ $W : = \cup_{i \in I} \ W_{X_i}$ \ such that any irreducible component of any \ $Y \in \mathcal{B}$ \ meets \ $W$. \\
Conversely, if a closed subset \ $\mathcal{B}$ \ in \ $\mathcal{C}_n^f(M)$ \ is compact in \ $\mathcal{C}_n^{loc}(M)$ \ and if any irreducible component of any cycle in \ $\mathcal{B}$ \ meets a compact \ $K$ \ in \ $M$, then, to show compactness in \ $\mathcal{C}_n^f(M)$, take any sequence \ $(X_{\nu})_{\nu \in \mathbb{N}}$ \ in \ $\mathcal{B}$. Up to passing to a subsequence, we may assume that \ $(X_{\nu})_{\nu \in \mathbb{N}}$ \ converges to a cycle \ $X \in \mathcal{B}$ \ in the topology of \ $\mathcal{C}_n^{loc}(M)$. We want to show that the convergence takes place in the sense of the topology of \ $\mathcal{C}_n^f(M)$.\\
If \ $X$ \ is the empty \ $n-$cycle, cover \ $K$ \ with finitely many \ $n-$scales (which are always adapted to \ $\emptyset$). Then for \ $\nu \gg 1$ \ the degree of \ $X_{\nu}$ \ in each of these scales has to be \ $0$ \ and so \ $\vert X_{\nu}\vert \cap K = \emptyset$. As every irreducible component of \ $X_{\nu}$ \ meets \ $K$, the only possibility is that \ $X_{\nu} = X = \emptyset$ \ for \ $\nu \gg 1$.\\
When \ $X$ \ is not the empty \ $n-$cycle, we have to prove that if every irreducible component of \ $X$ \ meets an open set \ $W$, then for \ $\nu \gg 1$ \ every irreducible component of \ $X_{\nu}$ \ also meets \ $W$. But if, for infinitely many \ $\nu$, there exists an irreducible component \ $C_{\nu}$ \ of \ $X_{\nu}$ \ which does not meet \ $W$, choose a point \ $x_{\nu} \in K \cap C_{\nu}$. The points \ $x_{\nu}$ \ are then in \ $K \setminus W$ \ which is compact. So, up to passing to a subsequence, we may assume that the sequence \ $(C_{\nu})_{\nu \in \mathbb{N}}$ \ converges to a cycle \ $Y$ \ in the sense of the topology of \ $\mathcal{C}_n^{loc}(M)$ \ and that the sequence \ $(x_{\nu})_{\nu \in \mathbb{N}}$ \ converges to \ $x \in K \setminus W$. But then we have \ $\vert Y \vert \subset \vert X\vert $ \ and \ $x \in \vert Y\vert$. So \ $Y$ \ is not the empty cycle and, since \ $\vert X\vert$ \ has pure dimension \ $n$, each irreducible component of \ $Y$ \ is an irreducible component of \ $\vert X\vert$ \ and hence meets \ $W$. So for \ $\nu$ \ large enough \ $C_{\nu}$ \ meets \ $W$. Contradiction. As \ $\mathcal{B}$ \ is closed in \ $\mathcal{C}_n^f(M)$ \ we conclude that the sequence \ $(X_{\nu})_{\nu \in \mathbb{N}}$ \ converges to \ $X \in \mathcal{B}$ \ in the topology of \ $\mathcal{C}_n^f(M)$.$\hfill \blacksquare$\\
\parag{Remark} If \ $\mathcal{B}$ \ is a compact set in \ $\mathcal{C}_n^f(M)$ \ then the subset $$\widehat{\mathcal{B}} : = \{ X \in \mathcal{C}_n^f(M) \ / \ \exists Y \in \mathcal{B} \quad X \leq Y \}$$
is also compact. The compactness of \ $\widehat{\mathcal{B}}$ \ in \ $\mathcal{C}_n^{loc}(M)$ \ is obvious; the same compact set \ $K \subset M$ \ which meets all irreducible components of the cycles in \ $\mathcal{B}$ \ also meets each irreducible component of any cycle in \ $\widehat{\mathcal{B}}$; finally, this subset is closed in \ $\mathcal{C}_n^f(M)$ \ thanks to the lemma \ref{limite inegalite}.\\
Note that \ $\emptyset$ \ is always in \ $\widehat{\mathcal{B}}$ \ but that \ $\widehat{\mathcal{B}} \setminus \{\emptyset\}$ \ is closed in \ $\widehat{\mathcal{B}}$, as \ $\{\emptyset\}$ \ is open in \ $\mathcal{C}_n^f(M)$; so \ $\widehat{\mathcal{B}} \setminus \{\emptyset\}$ \ is also a compact set.\\
Now comes the main difference between considering arbitrary closed \ $n-$cycles and considering closed \ $n-$cycles of finite type. To make this difference transparent, let me use the following definition:
\begin{defn}\label{properly extendable map}
Let \ $S$ \ be an irreducible complex space and \ $\Sigma \subset S$ \ be a closed analytic subset with no interior point in \ $S$. Let \ $\varphi : S \setminus \Sigma \to Z$ \ be a continuous map to a Hausdorff topological space \ $Z$. We say that the map \ $\varphi$ \ is {\bf properly extendable along} \ $\Sigma$ \ if there exists a Hausdorff topological space \ $Y$, a continuous map \ $\sigma : Y \to S$ \ and a continuous map
$$ \psi : Y \to Z$$
such that the map \ $\sigma$ \ is a { \bf proper topological modification along \ $\Sigma$} \ and such that \ $\psi$ \ extends \ $\varphi$. Topological modification means that \ $\sigma$ \ is continuous and proper, that the set \ $\sigma^{-1}(\Sigma)$ \ has no interior point in \ $Y$ \ and that the restriction
$$ \sigma : Y \setminus \sigma^{-1}(\Sigma) \to S \setminus \Sigma$$
is a homeomorphism. The fact that \ $\psi$ \ extends \ $\varphi$ \ means that on \ $S \setminus \Sigma$ \ we have \ $\varphi = \psi\circ \sigma^{-1} $.
\end{defn}
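\parag{Example} Let us illustrate this definition with the simplest (in fact holomorphic) instance, which is only meant as an illustration and is not used later in this form: take \ $S = \C^2$, \ $\Sigma : = \{0\}$ \ and let \ $\sigma : Y \to \C^2$ \ be the blow-up of the origin. Then \ $\sigma$ \ is continuous and proper, \ $\sigma^{-1}(0) \simeq \mathbb{P}_1(\C)$ \ has no interior point in \ $Y$, and \ $\sigma$ \ induces a homeomorphism (in fact a biholomorphism) from \ $Y \setminus \sigma^{-1}(0)$ \ onto \ $\C^2 \setminus \{0\}$; so \ $\sigma$ \ is a proper topological modification along \ $\Sigma$. Note that the definition above only asks \ $Y$ \ to be a Hausdorff topological space.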
Now we have the following key theorem:
\begin{thm}\label{flattening 1} {\bf [Geometric f-flattening; first version.]}
Let \ $M$ \ and \ $S$ \ be two reduced and irreducible complex spaces. Let \ $f : M \to S$ \ be a holomorphic map and assume that \ $\Sigma \subset S$ \ is a (closed) analytic subset in \ $S$ \ with no interior point, containing the non normal points of \ $S$ \ and such that the restriction of \ $f$ \ to \ $M \setminus f^{-1}(\Sigma)$ \ is \ $n-$equidimensional, where \ $n : = \dim M - \dim S$.
\begin{enumerate}[i)]
\item The map \ $\varphi : S \setminus \Sigma \to \mathcal{C}_n^{loc}(M) $ \ classifying the fibers of \ $f$ \ over \ $S \setminus \Sigma$ \ is always properly extendable along \ $\Sigma$.\\
\item Assume that \ $f : M \to S$ \ is quasi-proper, and let \ $\varphi^f : S \setminus \Sigma \to \mathcal{C}_n^f(M)$ \ be the classifying map of the fibers of \ $f$ \ over \ $S \setminus \Sigma$. \\
If \ $\varphi^f$ \ is properly extendable along \ $\Sigma$ \ then there exists a proper holomorphic modification \ $\tau :\tilde{S} \to S$ \ and an f-analytic family of finite type \ $n-$cycles in \ $M$ \ which extends \ $\varphi^f$ \ to \ $\tilde{S}$.\\
\end{enumerate}
\end{thm}
So in the case of arbitrary closed \ $n-$cycles, the continuous extension of the classifying map \ $\varphi$ \ to a proper topological modification of \ $S$ \ along \ $\Sigma$ \ is always possible, but this does not allow us to obtain a holomorphic extension on a proper holomorphic modification of \ $S$ along \ $\Sigma$. \\
In the case of finite type \ $n-$cycles the quasi-properness assumption on \ $f$ \ does not automatically imply a topological extension for the f-classifying map \ $\varphi^f$. But when this topological extension exists, the f-classifying map can be holomorphically extended to a suitable proper holomorphic modification of \ $S$ \ along \ $\Sigma$.
\parag{Proof of the theorem} Let us begin with case i): define \ $\Gamma \subset S \times \mathcal{C}_n^{loc}(M)$ \ as the closure of the graph of \ $\varphi$. Then \ $\Gamma$ \ is \ $S-$proper: this is an easy consequence of the characterisation of compact sets in \ $\mathcal{C}_n^{loc}(M)$ \ via Bishop's theorem (see [Bi.64] and [BOOK] ch.IV), and of the result of [B.78] on the local boundedness of the volume of the generic fibers of a holomorphic map; so we want to prove two facts\footnote{Recall that we don't know that \ $\Gamma$ \ is locally compact; so \ $p$ \ proper means that \ $p$ \ is closed with compact fibers.}
\begin{enumerate}[1)]
\item The projection \ $p : \Gamma \to S$ \ is a closed map;
\item its fibers are compact subsets of \ $\mathcal{C}_n^{loc}(M)$.
\end{enumerate}
Let \ $F$ \ be a closed set in \ $\Gamma$, and assume that the sequence \ $(s_{\nu})_{\nu \in \mathbb{N}}$ \ in \ $p(F)$ \ converges to \ $\sigma \in S$. Let \ $(s_{\nu}, C_{\nu}) \in F$ \ for \ $\nu \in \mathbb{N}$. Now fix a compact neighbourhood \ $L$ \ of \ $\sigma$ \ in \ $S$; then using [B.78], for each compact set \ $K \subset M$ \ and any fixed continuous hermitian metric \ $h$ \ on \ $M$ \ we may find a constant \ $\gamma(K,h)$ \ such that for any \ $s \in L \setminus \Sigma$ \ we have \ $vol_h(K \cap \varphi(s)) \leq \gamma(K,h)$. This inequality extends by continuity to the closure of \ $\varphi(L \setminus \Sigma)$ \ in \ $\mathcal{C}_n^{loc}(M)$, so to \ $p^{-1}(L)$. This implies, via Bishop's theorem, that \ $p^{-1}(L)$ \ is a compact subset of \ $\Gamma$. This allows us, up to passing to a subsequence, to assume that the sequence \ $(C_{\nu})_{\nu \in \mathbb{N}}$ \ converges\footnote{\ $C$ \ may be the empty \ $n-$cycle.} to \ $C \in \mathcal{C}_n^{loc}(M)$. Then \ $(\sigma, C)$ \ lies in \ $F$ \ and 1) is proved. But 2) is already a consequence of the compactness of \ $p^{-1}(L)$, so \ $p : \Gamma \to S$ \ is proper.\\
To prove that \ $p : \Gamma \to S$ \ is a topological modification of \ $S$ \ along \ $\Sigma$, it is now enough to prove that \ $p^{-1}(\Sigma)$ \ has no interior point in \ $\Gamma$. But this is obvious from the density of the graph of \ $\varphi$ \ in \ $\Gamma$. To conclude case i), notice that the projection \ $q : \Gamma \to \mathcal{C}_n^{loc}(M)$ \ is continuous and extends \ $\varphi$.\\
We shall try to explain in the comment following theorem \ref{f-mero. quotient} why, in this case, the locally compact subset \ $\Gamma$, which is proper on \ $S$, may not be, in general, a finite dimensional complex space.\\
To prove ii) we first notice that the assumption that the map \ $\varphi^f$ \ is properly extendable along \ $\Sigma$ \ is equivalent to the fact that \ $\Gamma^f$, the closure of the graph of \ $\varphi^f$ \ in \ $S \times \mathcal{C}_n^f(M)$, is proper on \ $S$. This is proved in the following lemma.
\begin{lemma}
The map \ $\varphi^f$ \ is properly extendable along \ $\Sigma$ \ if and only if the closure \ $\Gamma^f$ \ in \ $S \times \mathcal{C}_n^f(M)$ \ of the graph of \ $\varphi^f$ \ is\ $S-$proper.
\end{lemma}
\parag{Proof} Of course the properness of \ $\Gamma^f$ \ over \ $S$ \ gives a topological modification of \ $S$ \ along \ $\Sigma$, with a continuous extension of \ $\varphi^f$ \ given by the projection of \ $\Gamma^f$ \ to \ $\mathcal{C}_n^f(M)$.
Conversely, if we have a proper topological modification \ $\tau : Y \to S$ \ and a continuous map \ $\psi : Y \to \mathcal{C}_n^f(M)$ \ extending \ $\varphi^f$, let \ $\tilde{\Gamma}$ \ be the graph of \ $\psi$. Then \ $(\tau\times Id)(\tilde{\Gamma})$ \ is obviously proper on \ $S$ \ and contained in \ $\Gamma^f$, the closure of the graph of \ $\varphi^f$. But the continuity of \ $\psi$ \ implies that \ $(\tau\times Id)(\tilde{\Gamma}) = \Gamma^f$, as \ $\tau^{-1}(\Sigma)$ \ has no interior point in \ $Y$. $\hfill \blacksquare$
\bigskip
So to finish the proof of ii), it is enough to endow the locally compact topological space \ $\Gamma^f$ \ with a natural structure of a weakly normal complex space such that its projection on \ $S$ \ becomes holomorphic. This is done in [M.00] by induction on the dimension of the "big" fibers of \ $M$ \ over \ $S$. $\hfill \blacksquare$
\parag{Remarks}
The key points of the proof of [M.00] are the following facts:
\begin{itemize}
\item The properness condition on \ $\Gamma^f$ \ is local along \ $\Sigma$ \ and invariant under local (proper) modifications of \ $S$.
\item The quasi-properness of \ $f : M \to S$ \ is preserved by local proper modifications along \ $\Sigma$ \ because of the assumption on \ $\Gamma^f$. This may fail in the presence of escape at infinity for a limit of generic fibers, that is, without the properness of \ $\Gamma^f$ \ on \ $S$. For the convenience of the reader, we give in the next proposition a proof of this fact, as it is a key point in the induction proving the geometric f-flattening theorem.
\item Because of the quasi-properness of \ $f$ \ (which is preserved along the induction), to decrease the dimension of the biggest fibers, a local blow-up of an analytic subset of \ $\Sigma$ \ is enough, as all irreducible components of all nearby fibers meet a given compact set. Of course, with infinitely many components of dimension \ $> n$ \ in a fiber, this argument would not work.
\item Kuhlmann's theorem is generalized in [M.00] to the case of a semi-proper holomorphic map with values in an open set of a Banach space. We make explicit in the appendix, for the convenience of the reader, how this generalization is used to obtain a semi-proper direct image theorem with values in \ $\mathcal{C}_n^f(M)$, in a sense which is made precise there. This formulation of the semi-proper direct image theorem with values in \ $\mathcal{C}_n^f(M)$ \ is interesting in itself. Of course an easy corollary is the "universal reparametrization theorem" of [M.00]. \\
\end{itemize}
\begin{prop}[Stability of strong quasi-properness by modification.]
Let \ $f : M \to S$ \ be a quasi-proper holomorphic map between two reduced irreducible complex spaces. Let \ $\Sigma \subset S$ \ be a closed analytic subset with no interior point such that the restriction of \ $f$ \ to \ $M \setminus f^{-1}(\Sigma)$ \ is geometrically f-flat. Let \ $\varphi^f : S \setminus \Sigma \to \mathcal{C}_n^f(M)$ \ be the classifying map of the f-analytic family of \ $n-$cycles in \ $M$ \ associated to the fibers of this restriction of \ $f$. Assume that \ $\Gamma^f$, the closure in \ $S \times \mathcal{C}_n^f(M)$ \ of the graph of \ $\varphi^f$, is \ $S-$proper. Let \ $\tau : \tilde{S} \to S$ \ be a proper modification of \ $S$ \ with center contained in \ $\Sigma$, let \ $\tilde{M}$ \ be the strict transform of \ $M$ \ by \ $\tau$ \ and let \ $\tilde{f} : \tilde{M} \to \tilde{S}$ \ be the induced projection.\\
Then \ $\tilde{f}$ \ is quasi-proper and if \ $\tilde{\Sigma} : = \tau^{-1}(\Sigma)$, the closure \ $\tilde{\Gamma}^f$ \ in \ $\tilde{S} \times \mathcal{C}_n^f(\tilde{M})$ \ of the graph of the classifying map \ $\tilde{\varphi}^f : \tilde{S} \setminus \tilde{\Sigma} \to \mathcal{C}_n^f(\tilde{M})$ \ is proper on \ $\tilde{S}$.
\end{prop}
The condition that the center of \ $\tau$ \ is contained in \ $\Sigma$ \ does not reduce the generality of the statement, because geometric f-flatness is invariant under pull-back.
\parag{Proof} Recall first that the strict transform \ $\tilde{M}$ \ of \ $M$ \ by \ $\tau$ \ is the irreducible component of the fiber product \ $\tilde{S}\times_S M$ \ which dominates \ $M$. As \ $\tau$ \ is proper, the projection \ $\pi : \tilde{M} \to M$ \ is also proper and it is a proper modification along \ $f^{-1}(\Sigma)$. To show that \ $\tilde{f} : \tilde{M} \to \tilde{S}$ \ is quasi-proper, take \ $\tilde{s}_0 \in \tilde{S}$ \ and let \ $s_0 = \tau(\tilde{s}_0)$. The quasi-properness of \ $f$ \ gives an open set \ $S'$ \ in \ $S$ \ containing \ $s_0$ \ and a compact set \ $K$ \ in \ $M$ \ such that any irreducible component of \ $f^{-1}(s)$ \ for \ $s \in S'$ \ meets \ $K$. Let us show that any irreducible component of \ $\tilde{f}^{-1}(\tilde{s})$ \ for \ $\tilde{s} \in \tilde{S}' : = \tau^{-1}(S')$ \ meets the compact set \ $\tilde{K} : = (\tau^{-1}(f(K))\times K) \cap \tilde{M}$.\\
Let \ $C$ \ be an irreducible component of \ $\tilde{f}^{-1}(\tilde{s})$ \ for some \ $\tilde{s} \in \tilde{S}'$, and choose a smooth point \ $(\tilde{s},x)$ \ of this fiber belonging to \ $C$. Let \ $(s_{\nu}, x_{\nu})_{\nu \in \mathbb{N}}$ \ be a sequence of points in \ $M \setminus f^{-1}(\Sigma)$ \ converging to \ $(\tilde{s},x)$ \ in \ $\tilde{M}$. Up to choosing a subsequence, we may assume that the fibers \ $f^{-1}(s_{\nu})$ \ converge to an \ $n-$cycle \ $C_0$ \ in \ $\mathcal{C}_n^f(M)$, thanks to the properness of \ $\Gamma^f$ \ on \ $S$. Now we have the inclusion \ $\{\tilde{s}\}\times \vert C_0\vert \subset \tilde{f}^{-1}(\tilde{s})$, and \ $x$ \ belongs to some irreducible component \ $C_1$ \ of \ $C_0$. Then we have \ $\{\tilde{s}\}\times C_1 \subset C$, because \ $C$ \ is the only irreducible component of \ $\tilde{f}^{-1}(\tilde{s})$ \ containing the smooth point \ $(\tilde{s},x)$.\\
But now, we know that \ $C_1$ \ meets \ $K$. So \ $C$ \ meets \ $\tilde{K}$.\\
The proof that \ $\tilde{\Gamma}^f$ \ is proper over \ $\tilde{S}$ \ is easy: the generic fibers of \ $\tilde{f}$ \ correspond to the generic fibers of \ $f$ \ via the direct image map \ $\pi_*$. So we have a commutative diagram of continuous maps
$$ \xymatrix{\tilde{\Gamma}^f \ar[d]^{\tilde{p}_1} \ar[r]^{\pi_*} & \Gamma^f \ar[d]^{p_1} \\
\tilde{S} \ar[r]^{\tau} & S } $$
with \ $\tau$, $p_1$ \ and \ $\pi_*$ \ proper. So \ $\tilde{p}_1$ \ is proper. $\hfill \blacksquare$
\parag{Example} Let \ $f : M \to S$ \ be a quasi-proper $n-$equidimensional map between two reduced and irreducible complex spaces, where \ $n : = \dim M - \dim S$. Assume \ $S$ \ is normal so that \ $f$ \ is a geometrically f-flat map. Assume that at the generic points of a closed irreducible curve \ $C \subset S$ \ the fibers of \ $f$ \ have two distinct reduced and irreducible components. Denote them by \ $X_c$ \ and \ $Y_c$ \ for \ $c \in C$. Fix a generic point \ $c_0 \in C$ \ such that \ $X_c$ \ converges to \ $X_{c_0}$ \ when \ $c \to c_0$.\\
Define now \ $M' : = M \setminus X_{c_0}$. Then the induced map \ $f' : M' \to S$ \ is still geometrically flat (but not geometrically f-flat, as we shall see), because we keep the equidimensionality for \ $f'$ \ and the normality of \ $S$. For \ $s \in S$ \ the fiber of \ $f'$ \ has only finitely many irreducible components, so the classifying map \ $\varphi : S \to \mathcal{C}_n^{loc}(M')$ \ for the fibers of the map \ $f'$ \ factors set-theoretically through the inclusion \ $\mathcal{C}_n^f(M') \hookrightarrow \mathcal{C}_n^{loc}(M')$. Let \ $\varphi' : S \to \mathcal{C}_n^f(M')$ \ be the corresponding map. \\
Now we shall show that the closure of the graph of \ $\varphi'$ \ in \ $S \times \mathcal{C}_n^f(M')$ \ is not proper on \ $S$. Assume the contrary. Then there exists an open set \ $S' \subset S$ \ containing \ $c_0$ \ and a compact set \ $K$ \ in \ $M'$ \ such that for any \ $c \in S', c \not= c_0$, we have \ $X_c \cap K \not= \emptyset$. Let \ $(c_{\nu})_{\nu \geq 1}$ \ be a sequence in \ $C \setminus \{c_0\}$ \ converging to \ $c_0$. Then for \ $\nu \gg 1$ \ we have \ $c_{\nu} \in S'$ \ and so \ $X_{c_{\nu}}$ \ meets \ $K$. So, up to passing to a subsequence, we may assume that we have a sequence \ $(x_{\nu})_{\nu \in \mathbb{N}}$ \ in \ $K$ \ converging to a point \ $x \in K$ \ and such that \ $x_{\nu} \in X_{c_{\nu}}$. But as the cycles \ $X_{c_{\nu}}$ \ converge to the empty \ $n-$cycle in \ $M'$, we obtain a contradiction.\\
Of course this shows that \ $\varphi'$ \ is not continuous. But remark that the map \ $\varphi : S \to \mathcal{C}_n^{loc}(M')$ \ is continuous (it classifies an analytic family of cycles), so its graph is closed in \ $S \times \mathcal{C}_n^{loc}(M')$ \ and its projection on \ $S$ \ is a homeomorphism (so it is proper). \\
It is easy to see that for any local proper holomorphic modification \ $\tau : \tilde{S}' \to S'$, where \ $S'$ \ is an open neighbourhood of \ $c_0$, the strict transform of \ $M'$ \ cannot have a geometrically f-flat projection onto \ $\tilde{S}'$. \\
A concrete example is given as follows: let \ $M : = \{(t,x) \in \C^2 \ / \ x^2 = t \}$, \ $S : = \C$ \ and \ $f(t,x) = t$. Then \ $f' : M' : = M \setminus \{(1,1)\} \to S$ \ is an example of a geometrically flat map for which the classifying maps
$$ \varphi : S \to \mathcal{C}_0^{loc}(M')\quad {\rm and} \quad \varphi' : S \to \mathcal{C}_0^{f}(M')$$
have closed graphs which are respectively proper and not proper on \ $S$. Of course \ $\varphi'$ \ is not continuous at \ $t = 1$ \ and the map \ $f'$ \ is not quasi-proper.
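To spell out the escape at infinity in this concrete example: for \ $t \not= 1$ \ close to \ $1$, the fiber of \ $f'$ \ is the \ $0-$cycle \ $\{(t,x)\} + \{(t,-x)\}$ \ where \ $x^2 = t$ \ and \ $x$ \ is close to \ $1$. When \ $t \to 1$, the point \ $(t,x)$ \ tends to the deleted point \ $(1,1)$ \ and so leaves every compact subset of \ $M'$. Hence in \ $\mathcal{C}_0^{loc}(M')$ \ the fibers converge to \ $\{(1,-1)\} = f'^{-1}(1)$, while no compact subset of \ $M'$ \ can meet both points of all nearby fibers; so there is no convergence in \ $\mathcal{C}_0^{f}(M')$.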
\subsection{Geometric f-flattening.}
As has become apparent, the notion of quasi-properness is not strong enough in the presence of big fibers for a holomorphic map \ $f : M \to S$ \ between reduced irreducible complex spaces: the irreducible components of a big fiber may meet a compact set, but limits of generic fibers may have an irreducible component escaping at infinity inside these big fibers.\\
These considerations lead to the following definition.
\begin{defn}\label{strongly quasi-proper}
Let \ $f : M \to S$ \ be a quasi-proper holomorphic map between two reduced and irreducible complex spaces. We shall say that \ $f$ \ admits {\bf local geometric f-flattenings} if for each point in \ $S$ \ there exist an open neighbourhood \ $S'$ \ and a proper modification \ $\tau' : \tilde{S}' \to S'$ \ such that the strict transform \ $\tilde{M}'$ \ of \ $M' = f^{-1}(S')$ \ has a geometrically f-flat projection \ $\tilde{f}' : \tilde{M}' \to \tilde{S}'$. So the following diagram commutes
$$\xymatrix{\tilde{M}' \ar[d]^{\tilde{f}'} \ar[r]^{\pi'} & M' \ar[d]^{f'} \\ \tilde{S}' \ar[r]^{\tau'} & S' } $$
and \ $\pi'$ \ is a proper modification.
In this situation we shall say that such a quasi-proper map \ $f$ \ is {\bf strongly quasi-proper}.
\end{defn}
From the previous theorem, a quasi-proper map \ $f$ \ admits local geometric f-flattenings if and only if the classifying map \ $\varphi^f$ \ is properly extendable along \ $\Sigma$; and in this case there exists a global geometric f-flattening for \ $f$.\\
Coming back to the point of view of families of finite type \ $n-$cycles, this gives a good definition of f-meromorphic families:
\begin{defn}\label{f-mero.}
Let \ $S$ \ be a reduced and irreducible complex space and let \ $\Sigma \subset S$ \ be a closed analytic subset with no interior point in \ $S$. Let \ $M$ \ be a reduced complex space.
We shall say that an f-analytic family \ $(X_s)_{s \in S \setminus \Sigma}$ \ of \ $n-$cycles of \ $M$ \ is \ {\bf f-meromorphic along \ $\Sigma$} \ if there exist, for each point in \ $\Sigma$, an open neighbourhood \ $S'$, a proper modification \ $\tau : \tilde{S}' \to S'$ \ and an f-analytic family \ $(X_{\tilde{s}})_{\tilde{s} \in \tilde{S}'}$ \ extending the restriction of the given family to \ $S' \setminus \Sigma$.
\end{defn}
Then we have the following necessary and sufficient condition for f-meromorphy along \ $\Sigma$.
\begin{prop}\label{f-mero}
A necessary and sufficient condition for an f-analytic family \ $(X_s)_{s \in S \setminus \Sigma}$ \ of \ $n-$cycles of \ $M$ \ to be f-meromorphic along \ $\Sigma$ \ is that there exists a closed analytic subset \ $\tilde{G} \subset S \times M$ \ with the following properties :
\begin{enumerate}[i)]
\item \ $\tilde{G} \cap \big((S \setminus \Sigma)\times M\big) = \vert G\vert $, \ where \ $G$ \ is the graph of the given family.
\item The projection \ $ \tilde{p} : \tilde{G} \to S$ \ is strongly quasi-proper.
\end{enumerate}
\end{prop}
\parag{Proof} The condition is clearly necessary, because the image of the graph of the "extended" family under the proper map \ $\tau\times id_M : \tilde{S} \times M \to S \times M$ \ gives such a \ $\tilde{G}$.\\
The condition is sufficient, because the existence of \ $\tilde{G}$ \ shows that the closure of the graph of the classifying map of the given family is proper on \ $S$. Then we may apply the f-flattening theorem \ref{flattening 1} and use a normal \ $\tilde{S}$ \ to conclude the f-analyticity of the extended family. $\hfill \blacksquare$\\
So the most significant part of the f-flattening theorem \ref{flattening 1} may be reformulated in the following way:
\begin{thm}[f-flattening, second version]
Let \ $M$ \ and \ $S$ \ be reduced irreducible complex spaces and \ $(X_s)_{s \in S \setminus \Sigma}$ \ be an f-analytic family of cycles in \ $M$ \ parametrized by \ $S \setminus \Sigma$, where \ $\Sigma$ \ is a closed analytic subset with no interior point in \ $S$. Assume that the support of the graph of this family is the restriction to \ $(S \setminus \Sigma) \times M$ \ of an irreducible analytic subset \ $X$ \ of \ $S \times M$ \ which is quasi-proper over \ $S$. Then the following conditions are equivalent:
\begin{enumerate}
\item The quasi-proper map \ $p : X \to S$ \ admits local f-flattenings along \ $\Sigma$;
\item The quasi-proper map \ $p : X \to S$ \ admits a global f-flattening along \ $\Sigma$;
\item The family \ $(X_s)_{s \in S \setminus \Sigma}$ \ of \ $n-$cycles of \ $M$ \ is f-meromorphic along \ $\Sigma$.
\item The classifying map \ $\varphi^f : S \setminus \Sigma \to \mathcal{C}_n^f(M)$ \ of the family \ $(X_s)_{s \in S \setminus \Sigma}$ \ is properly extendable along \ $\Sigma$.
\item There exists a proper (holomorphic) modification \ $\tau : \tilde{S} \to S$ \ with center contained in \ $\Sigma$ \ and an f-analytic family \ $(X_{\tilde{s}})_{\tilde{s} \in \tilde{S}}$ \ of \ $n-$cycles in \ $M$ \ extending the given f-analytic family on \ $S \setminus \Sigma$.
\end{enumerate}
\end{thm}
Now, using the reparametrization theorem for a global f-flattening of the map \ $f$, we find a {\bf canonical modification} \ $\tau : \tilde{S} \to S$, where \ $\tilde{S}$ \ is simply the closure of the graph of the map \ $\varphi^f$ \ in \ $S \times \mathcal{C}_n^f(M)$ \ endowed with a structure of a weakly normal complex space, thanks to the semi-proper direct image theorem of D. Mathieu (used in a Banach analytic setting, see [M.00] or the appendix).
\begin{defn}
The geometrically f-flat map \ $\tilde{f} : \tilde{M} \to \tilde{S}$ \ obtained in this way, where \ $\tilde{M}$ \ is the strict transform by \ $\tau$ \ of \ $M$, will be called the {\bf canonical f-flattening} of \ $f$.
\end{defn}
We leave it to the reader to state the obvious universal property of the canonical f-flattening.
\begin{prop}\label{big fiber}
Let \ $f : M \to S$ \ be a strongly quasi-proper holomorphic map between reduced and irreducible complex spaces. Let \ $\tau : \tilde{S} \to S$ \ be a proper modification of \ $S$ \ such that the strict transform \ $\tilde{f} : \tilde{M} \to \tilde{S}$ \ is geometrically f-flat. Then for any \ $s \in S$ \ we have
$$f^{-1}(s) = \bigcup_{\tau(\tilde{s}) = s} \ p\big(\tilde{f}^{-1}(\tilde{s})\big) ,$$
where \ $p : \tilde{M} \to M$ \ is the projection.
\end{prop}
This proposition shows that for a strongly quasi-proper map the limits of generic fibers, in the sense of the topology of \ $\mathcal{C}_n^f(M)$, fill up the big fibers.
\parag{Proof} Let \ $\vert \tilde{G}\vert$ \ be the graph of the f-analytic family \ $(X_{\tilde{s}})_{\tilde{s} \in \tilde{S}}$ \ extending the analytic family of fibers of \ $f$ \ over \ $S \setminus \Sigma$. Then the holomorphic map \ $\tau\times id_M : \vert \tilde{G}\vert \to S \times M$ \ is proper, so for each \ $s \in S$ \ the image of \ $\tilde{f}^{-1}(\tau^{-1}(s))$, which is equal to \ $ \cup_{\tau(\tilde{s}) = s} \ p\big(\tilde{f}^{-1}(\tilde{s})\big)$, is a closed analytic subset in \ $f^{-1}(s)$. \\
If \ $x \in f^{-1}(s)$ \ is not in this subset, then \ $(s,x)$ \ does not belong to the image of \ $\vert \tilde{G}\vert$ \ in \ $S\times M$, which is closed and contains the graph of the family \ $(X_{s'})_{s' \in S\setminus \Sigma}$. So \ $(s,x)$ \ has an open neighbourhood \ $S'\times V$ \ in \ $S \times M$ \ which does not meet this graph; that is, \ $V\cap f^{-1}(S')$ \ does not meet \ $X_{s'}$ \ for any \ $s' \in S'\setminus \Sigma$. But then the non empty open set \ $V \cap f^{-1}(S')$ \ in \ $M$ \ is contained in \ $f^{-1}(\Sigma)$, which has no interior point since \ $M$ \ is irreducible and \ $f$ \ is surjective. This is impossible. $\hfill \blacksquare$
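\parag{Example} A simple (proper, hence strongly quasi-proper) illustration with \ $n = 0$: let \ $f : M \to \C^2$ \ be the blow-up of the origin. The generic fibers are single points and the big fiber \ $f^{-1}(0)$ \ is the exceptional curve \ $E \simeq \mathbb{P}_1(\C)$. When \ $s \to 0$ \ along a line of direction \ $\delta$ \ through the origin, the fibers \ $f^{-1}(s)$ \ converge in \ $\mathcal{C}_0^f(M)$ \ to the point of \ $E$ \ corresponding to \ $\delta$; letting \ $\delta$ \ vary, these limits of generic fibers fill up the big fiber \ $E$, as the proposition predicts.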
\parag{Remark} Let \ $S$ \ be a compact topological space and let \ $(X_s)_{s \in S}$ \ be a continuous family of \ $n-$cycles in a reduced complex space \ $M$. Then the projection \ $p : \vert G\vert \to M$ \ of the set-theoretic graph of the family is proper. Part i) of the first version of the flattening theorem \ref{flattening 1} always gives a proper topological modification; so we may conclude that the union of the limits of generic fibers, in the sense of the topology of \ $\mathcal{C}_n^{loc}(M)$, in a big fiber is a closed set which is a union of closed analytic \ $n-$dimensional subsets. Now the same argument as in the proof above shows that, assuming again that \ $M$ \ is irreducible, the big fibers are filled up by limits, in the topology of \ $\mathcal{C}_n^{loc}(M)$, of generic fibers.\\
But notice that we have more limits for this topology than for the topology of \ $\mathcal{C}_n^f(M)$. And the assertion would not be true with the topology of \ $\mathcal{C}_n^f(M)$, even if we assume \ $f$ \ quasi-proper, because in this case the quasi-properness of \ $f$ \ is not enough, in general, to show that the closure of the graph of the family of generic fibers is proper on \ $S$.
\section{Quasi-proper meromorphic quotients.}
\subsection{Quasi-proper analytic equivalence relations.}
Let now \ $X$ \ be an irreducible complex space and \ $R \subset X \times X$ \ an analytic subset which is the graph of an equivalence relation denoted by \ $\mathcal{R}$.
\begin{defn}
We shall say that \ $\mathcal{R}$ \ is a {\bf quasi-proper (resp. strongly quasi-proper) analytic equivalence relation} on \ $X$ \ if the following conditions hold
\begin{enumerate}[i)]
\item \ $R$ \ has a finite number of irreducible components \ $(R_i)_{i \in I}$.
\item The projection \ $\pi_i : R_i \to X$ \ (on the first factor) is quasi-proper for each \ $ i \in I$.
\item All irreducible components of \ $R$ \ which surject onto \ $X$ \ have the same dimension \ $\dim X + n$ \ (and, in the strongly quasi-proper case, are strongly quasi-proper on \ $X$).
\end{enumerate}
\end{defn}
In this situation the irreducible components of \ $R$ \ which do not surject onto \ $X$ \ will be forgotten; we shall denote by \ $R_s$ \ their union and by \ $R_b$ \ the union of the other components. Using Kuhlmann's direct image theorem gives the following facts:
\begin{enumerate}
\item The projection on \ $X$ \ of \ $R_s$ \ is an analytic set with empty interior.
\item The set of points \ $x$ \ in \ $X$ \ where the fiber \ $\pi^{-1}(x)$ \ of the first projection \ $\pi : R_b \to X$ \ has dimension \ $> n$ \ is an analytic set in \ $X$ \ with no interior point.
\end{enumerate}
\parag{Notation}
We shall denote by \ $\Sigma$ \ the analytic subset in \ $X$ \ which is the union of the non normal points in \ $X$, of the image of \ $R_s$ \ in \ $X$ \ and of the set of points \ $x$ \ in \ $X$ \ where the dimension of \ $\pi^{-1}(x)$ \ is \ $> n$. It has no interior point in \ $X$, and the holomorphic map
$$ \pi : R \setminus \pi^{-1}(\Sigma) \to X \setminus \Sigma $$
is a quasi-proper surjective \ $n-$equidimensional holomorphic map onto a normal complex space. So we have an f-analytic family of \ $n-$cycles in \ $X$ \ associated to its fibers, and hence a classifying map
$$ \varphi^f : X \setminus \Sigma \to \mathcal{C}_n^f(X) $$
for this family.
\begin{defn}\label{mero. quotient}
In the situation described above we shall say that a meromorphic map
$$q : X \dashrightarrow Q $$
is a { \bf quasi-proper meromorphic quotient} (resp. {\bf strongly quasi-proper meromorphic quotient}) for the equivalence relation \ $\mathcal{R}$ \ when there exists a proper modification \ $\tau : \tilde{X} \to X$ \ with center contained in \ $\Sigma$ \ and a holomorphic map \ $\tilde{q} : \tilde{X} \to Q$ \ inducing the meromorphic map \ $q$ \ (note that this implies that \ $q$ \ is holomorphic on \ $X \setminus \Sigma$) such that the following conditions are satisfied:
\begin{enumerate}[i)]
\item For \ $x, y \in X \setminus \Sigma$ \ we have \ $x \mathcal{R} y$ \ if and only if \ $q(x) = q(y)$.
\item The map \ $\tilde{q}$ \ is quasi-proper (resp. strongly quasi-proper) and surjective.
\end{enumerate}
When \ $R$ \ is strongly quasi-proper on \ $X$ \ we shall say that the meromorphic map \ $q : X \dashrightarrow Q$ \ is the { \bf universal meromorphic quotient} when the corresponding holomorphic map \ $\tilde{q} : \tilde{X} \to Q$ \ is the universal reparametrization of the family of fibers of the canonical f-flattening of the strongly quasi-proper map \ $p_1 : R_b \to X$.
\end{defn}
\parag{Remarks}
\begin{enumerate}
\item It is easy to see that if a strongly quasi-proper quotient exists, then the universal meromorphic quotient also exists and they are meromorphically equivalent.
\item For a given quasi-proper meromorphic quotient \ $q : X \dashrightarrow Q$, the fact that after some proper modification the map \ $\tilde{q}$ \ becomes strongly quasi-proper is independent of the choice of the modification \ $\tau : \tilde{X} \to X$ \ on which \ $q$ \ extends holomorphically, because this property only depends on the classifying map \ $\varphi^f : X \setminus \Sigma \to \mathcal{C}_n^f(X)$ \ of the generic fibers of \ $q$.\\
\end{enumerate}
\subsection{Existence of a meromorphic quotient.}
Our main result is the following theorem.
\begin{thm}\label{f-mero. quotient}
Let \ $X$ \ be a reduced irreducible complex space and \ $\mathcal{R}$ \ be a strongly quasi-proper analytic equivalence relation, that is, satisfying the following condition:
\begin{itemize}
\item Each irreducible component of the graph \ $R$ \ of \ $\mathcal{R}$ \ which surjects onto \ $X$ \ is strongly quasi-proper on \ $X$ \ (or admits a geometric f-flattening on \ $X$). $\hfill (@) $
\end{itemize}
Then \ $\mathcal{R}$ \ admits a universal strongly quasi-proper meromorphic quotient. The quotient space \ $Q$ \ is the weak normalisation of the image in \ $\mathcal{C}_n^f(X)$ \ of the closure in \ $X \times \mathcal{C}_n^f(X)$ \ of the graph of the map \ $\varphi^f : X \setminus \Sigma \to \mathcal{C}_n^f(X)$ \ classifying the f-analytic family whose graph is \ $R_b \cap \big((X \setminus \Sigma)\times X\big)$.
\end{thm}
\parag{Comment} With our method we cannot prove the existence of a quasi-proper meromorphic quotient in the case of a quasi-proper equivalence relation \ $\mathcal{R}$ \ which does not satisfy the condition \ $(@)$. This comes from the fact that the generalization given in [M.00] of Kuhlmann's theorem may be used only in the case of the classifying map of an f-analytic family; in the case of a semi-proper "holomorphic" map of a finite dimensional complex space with values in \ $\mathcal{C}_n^f(X)$ \ we can use Banach analytic sets to determine (locally) finite type cycles (see the appendix); but this does not seem possible in the analogous case with values in \ $\mathcal{C}_n^{loc}(X)$: for the topology of \ $\mathcal{C}_n^{loc}(X)$, cycles near a given cycle cannot be determined by a finite number of (adapted) scales.
\parag{Proof} Consider the modification \ $\tau : \tilde{X} \to X$ \ given by the f-flattening theorem applied to the quasi-proper map \ $R_b \to X$, which admits a geometric f-flattening by assumption. Now the strict transform \ $\tilde{R}$ \ of \ $R_b$ \ via \ $\tau$ \ has a geometrically f-flat projection onto \ $\tilde{X}$. So we get a classifying map
$$ \tilde{\varphi}^f : \tilde{X} \to \mathcal{C}_n^f(X).$$
Now the point is to prove that this map is semi-proper.\\
Let \ $C_0 \in \mathcal{C}_n^f(X)$; by definition of the topology of this space, for each open set \ $W \subset\subset X$ \ such that each irreducible component of \ $C_0$ \ meets \ $W$, the set \ $\Omega(W)$ \ of all \ $C$ \ with this property is an open set in \ $\mathcal{C}_n^f(X)$. Let us show that we have \ $\tilde{\varphi}^f(\tilde{X}) \cap \Omega(W) = \tilde{\varphi}^f(\tilde{K}) \cap \Omega(W)$, where \ $\tilde{K} : = \tau^{-1}(\bar W)$ \ is compact since \ $\tau$ \ is proper.\\
Let \ $C = \tilde{\varphi}^f(y)$ \ be in \ $\Omega(W)$ \ for some \ $y \in \tilde{X}$. Then \ $y$ \ is the limit of a sequence of points \ $(x_{\nu})_{\nu\in \mathbb{N}}$ \ in \ $\tilde{X} \setminus \tau^{-1}(\Sigma) \simeq X \setminus \Sigma$. So \ $\tilde{\varphi}^f(x_{\nu})$ \ converges to \ $C$. For \ $\nu$ \ large enough \ $\tilde{\varphi}^f(x_{\nu})$ \ lies in \ $\Omega(W)$. This means that the equivalence class of \ $x_{\nu}$, which is \ $\vert \varphi^f(x_{\nu})\vert$, meets \ $W$. Then, for \ $\nu$ \ large enough, there exists a point \ $x'_{\nu}\in W$ \ such that \ $\tilde{\varphi}^f(x'_{\nu}) = \tilde{\varphi}^f(x_{\nu})$. Consider now the sequence \ $(x'_{\nu})_{\nu \geq \nu_0}$ \ as a sequence in \ $\tilde{K} = \tau^{-1}(\bar W)$. We may, up to passing to a subsequence, assume that it converges to a point \ $z \in \tilde{K}$. By continuity of \ $\tilde{\varphi}^f$ \ we shall have \ $\tilde{\varphi}^f(z) = \tilde{\varphi}^f(y)$, and by construction \ $z$ \ is in \ $\tilde{K}$. \\
So the semi-properness of \ $\tilde{\varphi}^f$ \ is proved, and the "reparametrization theorem" of [M.00] (see also [B.08] or the Appendix) may be applied to \ $\tilde{\varphi}^f$. It then gives a weakly normal f-meromorphic quotient for \ $\mathcal{R}$ \ in the sense of the definition \ref{mero. quotient}, because our proof now applies to the map \ $\tilde{q} : \tilde{X} \to Q$, where \ $Q$ \ is the weak normalisation of the image of \ $\tilde{\varphi}^f$; it gives that \ $\tilde{q}$ \ is in fact geometrically f-flat. $\hfill \blacksquare$
\subsection{Extension to meromorphic equivalence relations.}
Let me give two definitions in order to formulate a simple corollary of the meromorphic quotient theorem.
\begin{defn}\label{mero. equiv.}
Let \ $X$ \ be a reduced and irreducible complex space. A {\bf quasi-proper meromorphic equivalence relation} \ $\mathcal{R}^m$ \ on \ $X$ \ will be a closed analytic subset \ $R^m \subset X \times X$ \ with finitely many irreducible components, such that there exists a closed analytic subset \ $Y \subset X$ \ with no interior point in \ $X$ \ satisfying the following conditions:
\begin{enumerate}[i)]
\item \ $R : = R^m \cap\big[ (X\setminus Y)\times(X \setminus Y)\big] $ \ is the graph of an equivalence relation \ $\mathcal{R}$ \ on \ $X \setminus Y$.
\item We have \ $\bar R = R^m$ \ in \ $X \times X$.
\item Each irreducible component of \ $R^m$ \ is quasi-proper on \ $X$ \ via the first projection.
\end{enumerate}
We shall say that \ $\mathcal{R}^m$ \ is {\bf strongly quasi-proper} when there exists an integer \ $n$ \ such that the first projection of each irreducible component of \ $R$ \ which surjects onto \ $X$ \ is strongly quasi-proper with generic fiber of pure dimension \ $n$.
\end{defn}
\parag{Remark} If, for a meromorphic equivalence relation on \ $X$, the subset \ $R^m$ \ is proper on \ $X$ \ via the first projection, we shall say that the meromorphic equivalence relation \ $\mathcal{R}^m$ \ is proper. Of course, in this case, the geometric flattening theorem for compact cycles (see [B.78]) implies that \ $\mathcal{R}^m$ \ is strongly quasi-proper as soon as \ $R$ \ is generically equidimensional on \ $X$.
\bigskip
\begin{defn}\label{mero.quot.}
In the situation of the previous definition, we shall say that \ $\mathcal{R}^m$ \ has a {\bf quasi-proper (resp. strongly quasi-proper, resp. proper) meromorphic quotient} when there exists a meromorphic surjective map \ $ q : X \dashrightarrow Q$ \ whose graph is quasi-proper (resp. strongly quasi-proper, resp. proper) on \ $Q$ \ such that there exists a dense open set \ $\Omega$ \ of \ $X \setminus Y$, on which \ $q$ \ is holomorphic and such that for \ $x,x'$ \ in \ $\Omega$ \ we have \ $x\mathcal{R} x' $ \ if and only if \ $q(x) = q(x')$.
\end{defn}
\begin{thm}
Let \ $X$ \ be a reduced irreducible complex space and \ $\mathcal{R}^m$ \ be a strongly quasi-proper (resp. proper) meromorphic equivalence relation on \ $X$. Then \ $\mathcal{R}^m$ \ admits a strongly quasi-proper (resp. proper) meromorphic quotient.
\end{thm}
\parag{Proof} Define \ $R_b$ \ as the union of the components of \ $R^m$ \ which dominate \ $X$ \ and \ $R_s$ \ as the union of the other components. Let \ $Z$ \ be the projection in \ $X$ \ of \ $R_s$. It is a closed analytic subset with no interior point. We choose a closed analytic subset \ $\Sigma \subset X$ \ with no interior point containing \ $Z$, the non normal points of \ $X$, the set of points \ $x \in X$ \ where the fiber at \ $x$ \ of the projection \ $R_b \to X$ \ has dimension \ $> n$, and also \ $Y$. Then we have a classifying map
$$ \varphi^f : X \setminus \Sigma \to \mathcal{C}_n^f(X) $$
and the closure \ $\Gamma^f$ \ of its graph in \ $X \times \mathcal{C}_n^f(X)$ \ is proper on \ $X$, by the strong quasi-properness assumption on the projection \ $R_b \to X$. As in the proof of theorem \ref{f-mero. quotient}, we obtain a meromorphic map \ $q : X \dashrightarrow Q$ \ such that, for \ $x, x' \not\in \Sigma$, \ $q(x) = q(x')$ \ if and only if \ $\varphi^f(x) = \varphi^f(x')$. But now for \ $x \not\in \Sigma$ \ the fiber \ $\vert \varphi^f(x)\vert$ \ may not be equal to the equivalence class of \ $x$ \ in \ $X \setminus Y$, which is equal to \ $\vert \varphi^f(x)\vert \cap (X \setminus Y)$. But for \ $x,x' \not\in \Sigma$ \ the equality \ $q(x) = q(x')$ \ implies either \ $x \mathcal{R} x'$, when the two fibers of \ $R$ \ over \ $x, x'$ \ are equal when cut with \ $X \setminus Y$, or that at least one of these fibers has an irreducible component contained in \ $Y$. So it is enough to delete from \ $X \setminus \Sigma$ \ the set of points \ $x$ \ for which this last possibility happens; this is a closed analytic subset thanks to the lemma \ref{facile}; then we define the dense open set \ $\Omega \subset X \setminus \Sigma$ \ as its complement. $\hfill \blacksquare$
\begin{lemma}\label{facile}
Let \ $(X_s)_{s \in S}$ \ be an f-analytic family of cycles in \ $M$ \ and \ $Y \subset M$ \ a closed analytic subset. The set of points \ $s \in S$ \ such that \ $X_s$ \ has an irreducible component contained in \ $Y$ \ is a closed analytic subset in \ $S$.
\end{lemma}
\parag{Proof} This set is clearly closed because a limit of cycles contained in \ $Y$ \ is contained in \ $Y$. The problem is then local on \ $S$. But, thanks to the quasi-properness on \ $S$ \ of the graph of the family, it is enough to consider finitely many scales; this reduces the question to the analogous lemma for multiform graphs, that is, to the case where \ $S = H(\bar U, \Sym^k(B))$ \ and \ $Y \subset U\times B = M$. This case is elementary. $\hfill \blacksquare$\\
\parag{Remark} In the case of a proper meromorphic equivalence relation which has pure dimensional generic fibers, we obtain that there always exists a universal proper meromorphic quotient. Compare with [C.60].
\section{Stein factorization.}
The aim of this section is to apply the previous methods to build a Stein factorization for strongly quasi-proper maps. We shall begin with the geometrically f-flat case.
\subsection{The geometrically f-flat case.}
This paragraph is devoted to proving the following result:
\begin{thm}\label{Stein f-flat}
Let \ $f : M \to S$ \ be a geometrically f-flat holomorphic surjective map between two reduced complex spaces. Assume that \ $M$ \ is normal and denote \ $n : = \dim M - \dim S$. Then there exist a geometrically f-flat holomorphic map \ $g : M \to T$ \ onto a weakly normal complex space \ $T$ \ and a proper and finite surjective map \ $p : T \to S$ \ such that \ $f = p\circ g$ \ and such that the generic fiber of \ $g$ \ is irreducible.
\end{thm}
We shall begin with some lemmas.
\begin{lemma}\label{irred.}
Let \ $M$ \ be a reduced complex space. The subset \ $Irr_n(M)$ \ of cycles in \ $\mathcal{C}_n^f(M)$ \ which are reduced (all multiplicities are \ $1$) and irreducible is an open set in \ $\mathcal{C}_n^f(M)$.
\end{lemma}
Notice that this lemma is false in \ $\mathcal{C}_n^{loc}(M)$.
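For instance, in \ $\mathcal{C}_0^{loc}(\C)$ \ the non irreducible cycles \ $\{0\} + \{\nu\}$ \ converge, when \ $\nu \to +\infty$, to the irreducible cycle \ $\{0\}$, since the second point eventually leaves every compact subset of \ $\C$; so no neighbourhood of \ $\{0\}$ \ in \ $\mathcal{C}_0^{loc}(\C)$ \ is contained in \ $Irr_0(\C)$.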
\parag{Proof} Let \ $C_0 \in \mathcal{C}_n^f(M)$ \ be reduced and irreducible. Choose a smooth point \ $x_0$ \ in \ $C_0$ \ and an \ $n-$scale \ $E : = (U,B,j)$ \ on \ $M$ \ adapted to \ $C_0$, whose center \ $C(E) : = j^{-1}(U \times B)$ \ contains \ $x_0$ \ and such that \ $deg_E(C_0) = 1$. Now consider the open set \ $\Omega_1(E) \cap \Omega(C(E))$ \ in \ $\mathcal{C}_n^f(M)$, where \ $\Omega_1(E)$ \ is the set of cycles to which \ $E$ \ is adapted and which have degree \ $1$ \ in \ $E$, and where \ $\Omega(W)$ \ is the set of cycles such that each irreducible component meets \ $W$. This open set contains \ $C_0$. Now each cycle \ $C$ \ in this open set is reduced and irreducible: each irreducible component of \ $C$ \ meets \ $C(E)$ \ and so contributes at least \ $1$ \ to \ $deg_E(C) = 1$. $\hfill \blacksquare$
\parag{Notation} For a reduced complex space \ $M$ \ denote by \ $red\mathcal{C}_n^f(M) \subset \mathcal{C}_n^f(M)$ \ the subset of reduced cycles of \ $M$.\\
\begin{cor}\label{reduced}
Let \ $M$ \ be a reduced complex space. The subset \ $red\mathcal{C}_n^f(M)$ \ of reduced cycles in \ $\mathcal{C}_n^f(M)$ \ is open.
\end{cor}
\parag{Proof} Let \ $C_0 \in red\mathcal{C}_n^f(M)$. Denote by \ $J$ \ the finite set of irreducible components of \ $C_0$. Choose for each irreducible component \ $C_0^j, j \in J$, of \ $C_0$ \ a smooth point \ $x^j$ \ of \ $\vert C_0\vert$ \ which belongs to \ $C_0^j$. Choose then, for each \ $j \in J$, an \ $n-$scale \ $E_j$ \ adapted to \ $C_0$ \ such that \ $x^j \in C(E_j)$ \ and \ $deg_{E_j}(C_0) = 1$. Then any cycle \ $C$ \ in the open set
$$ \bigcap_{j \in J} \ \Omega_1(E_j) \bigcap \Omega(\cup_{j \in J} C(E_j)) $$
is reduced. $\hfill \blacksquare$\\
\begin{lemma}\label{non reduced}
Let \ $M$ \ be a reduced complex space. The subset \ $\Sigma$ \ of cycles in \ $\mathcal{C}_n^f(M)$ \ which have a non reduced irreducible component (so at least one multiplicity is \ $\geq 2$) is a closed analytic subset in the following sense : for any f-analytic family \ $(X_s)_{s \in S}$ \ parametrized by a reduced complex space \ $S$, the pull-back \ $\varphi^{-1}(\Sigma)$ \ is a closed analytic subset of \ $S$, where \ $\varphi$ \ is the classifying map \ $\varphi : S \to \mathcal{C}_n^f(M)$ \ of the f-analytic family.
\end{lemma}
Notice again that this lemma is false in \ $\mathcal{C}_n^{loc}(M)$.
\parag{Proof} As the complement of \ $\Sigma$ \ is open, thanks to corollary \ref{reduced}, \ $\Sigma$ \ is closed.\\
We shall now give "local holomorphic equations" for \ $\Sigma$ \ in \ $\mathcal{C}_n^f(M)$. Let \ $C_0 \in \Sigma$. Choose for each irreducible component \ $C_0^j, j \in J$, of \ $\vert C_0\vert$ \ a smooth point \ $x_j$ \ of \ $\vert C_0\vert$ \ in \ $C_0^j$. Let \ $E_j : = (U_j,B_j, i_j)$ \ be an \ $n-$scale adapted to \ $C_0$ \ whose center \ $C(E_j) : = i_j^{-1}(U_j\times B_j)$ \ contains \ $x_j$. Put \ $W : = \cup_{j \in J} \ C(E_j)$. Let \ $k_j : = deg_{E_j}(C_0)$ \ and define the open set \ $\mathcal{V}$ \ in \ $\mathcal{C}_n^f(M)$ \ as
$$ \mathcal{V} : = \big(\bigcap_{j\in J} \ \Omega_{k_j}(E_j)\big) \bigcap \Omega(W).$$
Now consider the map
$$ \alpha : \mathcal{V} \to \prod_{j \in J} \ H(\bar U_j, \Sym^{k_j}(B_j)) $$
which associates to \ $C \in \mathcal{V}$ \ the collection of multiform graphs that \ $C$ \ defines in the adapted scales \ $(E_j)_{j \in J}$. Then \ $\alpha$ \ is continuous and injective. Continuity is obvious from the definition of the topology of \ $\mathcal{C}_n^f(M)$. Injectivity comes from the fact that for \ $C \in \mathcal{V}$ \ any irreducible component of \ $C$ \ meets \ $W$. So, if \ $\alpha(C) = \alpha(C')$ \ for \ $C, C' \in \mathcal{V}$, we have \ $C \cap W = C'\cap W$ \ and hence \ $C = C'$. Define \ $J' : = \{ j \in J \ / \ k_j\geq 2\}$, and for each \ $j \in J'$ \ let \ $Z_j \subset H(\bar U_j, \Sym^{k_j}(B_j))$ \ be the subset of \ $X \in H(\bar U_j, \Sym^{k_j}(B_j))$ \ such that at least one irreducible component of \ $X \cap (U_j\times B_j)$ \ is not reduced. Then \ $Z_j$ \ is a closed Banach analytic subset of \ $H(\bar U_j, \Sym^{k_j}(B_j))$ \ because it is the zero fiber of the holomorphic map given by the discriminant
$$ \Delta : H(\bar U_j, \Sym^{k_j}(B_j)) \to H(\bar U_j, S^{k_j.(k_j-1)}(\C^p)). $$
Now it is clear that the subset \ $\Sigma \cap \mathcal{V}$ \ is the pull-back by \ $\alpha$ \ of the closed Banach analytic subset
$$\mathcal{Z} : = \bigcup_{j \in J'} \ \big(Z_j\times\prod_{i \in J\setminus\{j\}} H(\bar U_i, \Sym^{k_i}(B_i))\big) .$$
If \ $\varphi : S' \to \mathcal{V}$ \ is the classifying map of an f-analytic family of \ $n-$cycles in \ $M$ \ parametrized by a reduced complex space \ $S'$, the map \ $\alpha\circ \varphi$ \ is holomorphic and so the set \ $\varphi^{-1}(\Sigma) = (\alpha\circ\varphi)^{-1}(\mathcal{Z})$ \ is a closed analytic subset in \ $S'$. $\hfill \blacksquare$
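\parag{Example} To make the discriminant argument explicit in the simplest case \ $k_j = 2$ \ and \ $p = 1$: a multiform graph \ $X \in H(\bar U_j, \Sym^2(B_j))$ \ with \ $B_j \subset \C$ \ is given by an equation \ $x^2 - s_1(t)x + s_2(t) = 0$ \ with \ $s_1, s_2$ \ holomorphic near \ $\bar U_j$, and its discriminant is \ $s_1^2 - 4.s_2$. For \ $U_j$ \ connected, this discriminant vanishes identically exactly when \ $X = 2.\{x = s_1(t)\big/2\}$, that is, when \ $X$ \ is not reduced.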
\begin{lemma}\label{universal irred}
Let \ $M$ \ be a reduced complex space and \ $n$ \ an integer. Define the set \ $\Phi : = \{ (X,Y) \in \mathcal{C}_n^f(M)\times \mathcal{C}_n^f(M) \ / \ X \leq Y \}$. Then \ $\Phi$ \ is closed and its second projection on \ $\mathcal{C}_n^f(M)$ \ is proper, finite and surjective.
\end{lemma}
\parag{Proof} The closedness of \ $\Phi$ \ is a consequence of the next lemma. The fact that the second projection is surjective with finite fibers is obvious. So it is enough to prove that it is a closed map. But this is an easy consequence of the description of compact sets in \ $\mathcal{C}_n^f(M)$ \ combined with the next lemma. $\hfill \blacksquare$
\begin{lemma}\label{limite inegalite}
Let \ $(X_{\nu})_{\nu\in \mathbb{N}}$ \ and \ $(Y_{\nu})_{\nu\in \mathbb{N}}$ \ be two converging sequences in \ $\mathcal{C}_n^f(M)$. If we have \ $X_{\nu} \leq Y_{\nu}$ \ for each \ $\nu \gg 1$ \ then the limits \ $X$ \ and \ $Y$ \ satisfy \ $X \leq Y$.
\end{lemma}
\parag{Proof} The inclusion \ $\vert X\vert \subset \vert Y\vert $ \ is clear. Let \ $C$ \ be an irreducible component of \ $Y$. Let \ $y$ \ be a smooth point of \ $\vert Y\vert$ \ in \ $C$, and let \ $E$ \ be an \ $n-$scale on \ $M$ \ whose center contains \ $y$, meets no other irreducible component of \ $Y$, and such that \ $deg_E(C) = 1$. Let \ $k : = deg_E(Y)$; so \ $k$ \ is the multiplicity of \ $C$ \ in \ $Y$. For each \ $\nu \gg 1$ \ the scale \ $E$ \ is adapted to \ $X_{\nu}$ \ and \ $Y_{\nu}$ \ and we have \ $deg_E(X_{\nu}) \leq deg_E(Y_{\nu}) = k$. So we have \ $deg_E(X) \leq k$. Then the irreducible component \ $C$ \ of \ $Y$ \ has multiplicity at most \ $k$ \ in \ $X$. $\hfill \blacksquare$\\
Observe that this lemma is also true for converging sequences in \ $\mathcal{C}_n^{loc}(M)$.\\
\begin{lemma}\label{obvious 1}
Let \ $f : M \to S$ \ be a geometrically f-flat map and let \ $N \subset M$ \ be a closed analytic subset such that \ $N$ \ has empty interior in each fiber of \ $f$. Then the restriction \ $f' : M \setminus N \to S$ \ is again a geometrically f-flat map.
\end{lemma}
\parag{Proof} If \ $(X_s)_{s \in S}$ \ is the f-analytic family of fibers of \ $f$, then \ $(X_s\setminus N)_{s \in S}$ \ is an analytic family of cycles in \ $M \setminus N$. The only remaining point is to show that \ $f'$ \ is quasi-proper. For any compact set \ $K$ \ in \ $S$ \ there exists a relatively compact open set \ $W_K$ \ in \ $M$ \ such that any irreducible component of any \ $X_s$ \ for \ $s \in K$ \ meets \ $W_K$. Let \ $L : = \bar W_K\cap N$, and choose a basis of compact neighbourhoods \ $(\Lambda_{\nu})_{\nu \in \mathbb{N}}$ \ of \ $L$. If, for infinitely many \ $\nu$, there exists an irreducible component \ $C_{\nu}$ \ of a fiber \ $f^{-1}(s_{\nu})$ \ with \ $s_{\nu} \in K$ \ such that \ $C_{\nu}\setminus N$ \ does not meet \ $\Lambda_{\nu}$, then, up to passing to a subsequence, we may assume that \ $s_{\nu}$ \ converges to a point \ $s \in K$ \ and that \ $C_{\nu}$ \ converges in \ $\mathcal{C}_n^f(M)$ \ to a non empty cycle \ $C$ \ such that \ $\vert C\vert \subset f^{-1}(s) \cap N$. This contradicts our assumption that \ $N$ \ has empty interior in each fiber of \ $f$. $\hfill \blacksquare$
\parag{Proof of theorem \ref{Stein f-flat}} Define \ $\Sigma \subset S$ \ as the union of the non normal points in \ $S$ \ and of the set of points in \ $S$ \ where the fiber \ $f^{-1}(s)$ \ is not reduced (it is a closed analytic subset of \ $S$ \ thanks to lemma \ref{non reduced}). Let \ $N$ \ be the subset of points \ $x$ \ in \ $M$ \ which have multiplicity \ $\geq 2$ \ in the cycle \ $X_{f(x)}$. From ch.IV of [BOOK], we know that \ $N$ \ is a closed analytic subset in \ $M$. Let \ $M' : = M \setminus (N \cup f^{-1}(\Sigma))$. Then each point \ $x$ \ in \ $M'$ \ is a smooth point of the reduced cycle \ $X_{f(x)} \cap M'$. Denote by \ $C_x$, for \ $x \in M'$, the irreducible component of \ $f^{-1}(f(x))$ \ containing \ $x$. It is unique, by definition of \ $M'$, and we have an f-analytic family of cycles in \ $M$, $(C_x)_{x \in M'}$, parametrized by \ $M'$: analyticity is clear from the criterion of analytic extension for analytic families of cycles, because \ $N$ \ has empty interior in each irreducible component of any \ $X_s$ \ with \ $s \not\in \Sigma$ \ (see [BOOK]); quasi-properness on \ $M'$ \ comes from the quasi-properness of \ $f'$ \ proved in lemma \ref{obvious 1}. Let
$$ g_0 : M' \to \mathcal{C}^f_n(M)$$
be the classifying map of this family. Then the map \ $(f', g_0)$ \ takes values in the set
$$ \hat{S} : = \{ (s, C) \in S\times \mathcal{C}_n^f(M) \ / \ C \leq X_s \} $$
and more precisely in \ $p^{-1}_1(S \setminus \Sigma)$, where \ $p_1 : \hat{S} \to S$ \ is the first projection. Let \ $\Gamma$ \ be the image \ $(f', g_0)(M')$ \ and \ $\bar \Gamma$ \ the closure of this image in \ $\hat{S}$. Note that we know, thanks to lemma \ref{universal irred}, that \ $\hat{S}$ \ is closed in \ $S \times \mathcal{C}_n^f(M)$. But as \ $f'$ \ is quasi-proper, the map \ $(f', g_0)$ \ is semi-proper, and using D. Mathieu's semi-proper direct image theorem [M.00] (see also the appendix) we obtain that \ $\Gamma$ \ has a natural structure of a weakly normal complex space; then \ $p_1 : \Gamma \to S \setminus \Sigma$ \ is a branched covering. But using again the lemma \ref{universal irred} we obtain that \ $p_1 : \bar\Gamma \to S$ \ is also a branched covering. And now [G.R.58] (or [D.90]) gives a natural structure of a weakly normal complex space on \ $\bar\Gamma$.\\
The holomorphic map \ $g' : M' \to \Gamma \subset \bar\Gamma$ \ induced by \ $(f', g_0)$ \ is locally bounded along the analytic subset \ $f^{-1}(\Sigma)\cup N$, in the sense that, locally over \ $S$, we can embed \ $\bar\Gamma$ \ in \ $S \times B$ \ where \ $B$ \ is a polydisc in an affine space; so we have, by normality of \ $M$, a holomorphic extension \ $g : M \to \bar\Gamma$.\\
It is now an exercise to check that \ $g$ \ is geometrically f-flat with irreducible generic fibers, and satisfies \ $f = p_1\circ g$. So we conclude by putting \ $T : = \bar\Gamma$ \ and \ $p : = p_1$. $\hfill \blacksquare$
\subsection{The strongly quasi-proper case.}
Now we shall give the Stein factorization theorem for a strongly quasi-proper surjective holomorphic map.\\
The main problem comes from the hypothesis of normality for \ $M$ \ in our theorem \ref{Stein f-flat}. Notice first that this assumption was essential to extend to \ $M$ \ the map \ $g$ \ defined on \ $M'$.\\
The problem comes from the following fact:
\begin{itemize}
\item Let \ $\nu : \tilde{M} \to M$ \ be the normalisation of a reduced irreducible complex space \ $M$. Let \ $Z \subset M$ \ be an irreducible analytic subset. Then \ $\nu^{-1}(Z)$ \ may have infinitely many irreducible components in \ $\tilde{M}$. This is shown by the next example.
\end{itemize}
\parag{Example} Let \ $\tilde{M} : = \C^2$ \ and consider on \ $\tilde{M}$ \ the equivalence relation which identifies \ $(n,z)$ \ and \ $(-n,z)$ \ for any \ $n \in \mathbb{N}^*$ \ and any \ $z \in \C$. The quotient is a reduced and irreducible complex space of dimension 2 with normal crossing singularities along the curves \ $\Delta_n : = q(D_n)$ \ where \ $q : \tilde{M} \to M$ \ is the quotient map and \ $D_n : = \{(z_1, z_2) \in \C^2\ / \ z_1 = n \}$ \ for \ $n \in \mathbb{N}^*$.\\
Now let \ $Z_0 : = \{(z_1, z_2) \in \C^2 \ / \ z_1 = z_2\}$. Then \ $Z : = q(Z_0)$ \ is an irreducible curve in \ $M$ \ and \ $q^{-1}(Z) = Z_0 \cup\big(\cup_{n \in \mathbb{N}^*} \ \{(-n, n)\}\big)$. So the pull-back of the irreducible curve \ $Z$ \ in \ $M$ \ has infinitely many irreducible components in \ $\tilde{M}$. Of course \ $q$ \ is the normalisation map for \ $M$.\\
The situation is not too bad, thanks to the following simple observation.
\begin{lemma}\label{finitude grosses}
Let \ $\pi : M \to N$ \ be a proper finite and surjective holomorphic map between irreducible complex spaces. Let \ $Z \subset N$ \ be an irreducible analytic subset of dimension \ $d$. Then for any compact set \ $K \subset N$ \ meeting \ $Z$, the compact set \ $\pi^{-1}(K)$ \ meets any irreducible component of \ $\pi^{-1}(Z)$ \ which is of dimension \ $d$.
\end{lemma}
\parag{Proof} It is enough to remark that any irreducible component \ $C$ \ of \ $\pi^{-1}(Z)$ \ which is of dimension \ $d$ \ satisfies \ $\pi(C) = Z$, because of the irreducibility of \ $Z$ \ and the fact that \ $\pi(C)$ \ is an analytic subset of dimension \ $d$. $\hfill \blacksquare$\\
In such a situation the union of the irreducible components of \ $\pi^{-1}(Z)$ \ of dimension \ $d$ \ is a \ $d-$cycle of finite type in \ $M$. So we obtain in this way (taking \ $d = n$) a map
$$ \pi^* : \mathcal{C}_n^f(N) \to \mathcal{C}_n^f(M) .$$
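For instance, in the example above (with \ $\pi = q$ \ and \ $d = 1$), the cycle \ $q^{*}(Z)$ \ has support \ $Z_0$: the infinitely many \ $0-$dimensional irreducible components of \ $q^{-1}(Z)$ \ are discarded and only the \ $1-$dimensional component \ $Z_0$ \ survives.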
The next result gives sufficient conditions on an f-analytic family of \ $n-$cycles in \ $N$ \ in order that the corresponding family of \ $n-$cycles in \ $M$ \ be again f-analytic.
\begin{prop}\label{pull-back}
Let \ $\pi : M \to N$ \ be a proper finite and surjective map between irreducible complex spaces. Let \ $R \subset N$ \ be the ramification set of \ $\pi$, that is, the minimal closed analytic subset such that \ $N \setminus R$ \ is normal and \ $\pi$ \ induces a \ $k-$sheeted (unbranched) covering \ $M \setminus \pi^{-1}(R) \to N \setminus R$. Let \ $S$ \ be a normal complex space and \ $(X_s)_{s \in S}$ \ be an f-analytic family of \ $n-$cycles in \ $N$. Assume that the generic cycle is reduced and that the ramification set \ $R \subset N$ \ has empty interior in the generic cycle \ $X_s$. Then there exists a unique f-analytic family \ $(Y_s)_{s\in S}$ \ of \ $n-$cycles in \ $M$ \ such that we have \ $\pi_*(Y_s) = k.X_s$ \ for each \ $s \in S$.
\end{prop}
Observe that, as \ $Y_s$ \ has pure dimension \ $n$, it cannot contain any irreducible component of \ $\pi^{-1}(X_s)$ \ of dimension \ $< n$.
\parag{Proof} Let \ $\sigma : = \dim S$ \ and let \ $G \subset S \times N$ \ be the graph of the f-analytic family. Define \ $\tilde{G} \subset S \times M$ \ as the union of the irreducible components of \ $(id_S\times \pi)^{-1}(G)$ \ which have dimension \ $\sigma + n$. The main point is to show that \ $\tilde{G}$ \ is quasi-proper on \ $S$. As \ $G$ \ has only finitely many irreducible components and they are all of dimension \ $\sigma + n$, it is enough to prove this in the case where \ $G$ \ is irreducible\footnote{Note that quasi-properness of \ $G$ \ implies quasi-properness of its irreducible components because we are in the strongly quasi-proper case. See [B. Mg. 10].}. Now apply the previous lemma to the map \ $id_S \times \pi$ \ and to \ $G$. It follows that \ $\tilde{G}$ \ has only finitely many irreducible components. Let \ $\Gamma$ \ be such a component. We want to prove that \ $\Gamma \to S$ \ is quasi-proper. But if \ $K$ \ is a compact set in \ $S$, there exists a compact set \ $L$ \ in \ $N$ \ such that any irreducible component of any cycle \ $X_s$ \ with \ $s \in K$ \ meets \ $L$. Then, again by the lemma, any irreducible component of a fiber\footnote{Note that \ $\Gamma$ \ has only pure dimensional fibers over \ $S$.} of \ $\Gamma$ \ over a point in \ $K$ \ has to meet the compact set \ $\Gamma \cap (K \times \pi^{-1}(L))$. To conclude, it is enough to recall that \ $\tilde{G}$ \ is the graph of a unique f-analytic family of \ $n-$cycles in \ $M$, because \ $S$ \ is normal. The fact that \ $\pi_*(Y_s) = k.X_s$ \ holds for generic \ $s$ \ is then obvious, and the analyticity of the direct image family implies the equality for any \ $s \in S$. $\hfill \blacksquare$\\
The following corollary is an easy consequence of the proposition applied to the normalization map of an irreducible complex space.
\begin{cor}\label{utile}
Let \ $f : M \to S$ \ be a quasi-proper surjective and equidimensional holomorphic map from an irreducible complex space \ $M$ \ onto a normal complex space \ $S$. Then there exists a unique geometrically f-flat map \ $\tilde{f} : \tilde{M} \to S$ \ such that \ $\tilde{f} = f \circ \nu $, where \ $\nu : \tilde{M} \to M$ \ is the normalization map of \ $M$.
\end{cor}
Notice that \ $f$ \ is geometrically f-flat under the hypothesis of the corollary. Observe also that the direct images via \ $\nu$ \ of the fibers of \ $\tilde{f}$ \ are the fibers of \ $f$, as \ $\nu$ \ has generic degree \ $1$.
\begin{thm}\label{Stein red.}
Let \ $f : M \to S$ \ be a strongly quasi-proper surjective holomorphic map between two reduced complex spaces. Let \ $n : = \dim M - \dim S$ \ and assume \ $M$ \ normal. Then there exist normal complex spaces \ $\tilde{\tilde{M}}$ \ and \ $\tilde{\tilde{S}}$, holomorphic maps \
$ g : \tilde{\tilde{M}}\rightarrow \tilde{\tilde{S}}, \quad \tilde{\tilde{q}} : \tilde{\tilde{S}} \to S, \quad \tilde{\tilde{\tau}} : \tilde{\tilde{M}} \to M$ \ with the following properties :
\begin{enumerate}[i)]
\item The holomorphic map \ $g$ \ is geometrically f-flat with irreducible generic fibers\footnote{In the sense that there exists an open dense set where each fiber is irreducible (and reduced).}.
\item The map \ $\tilde{\tilde{\tau}}$ \ is a proper modification.
\item The map \ $\tilde{\tilde{q}}$ \ is proper, generically finite and surjective.
\item We have \ $f\circ \tilde{\tilde{\tau}} = \tilde{\tilde{q}}\circ g$.
\end{enumerate}
\end{thm}
\parag{Proof} Consider first a modification \ $\tau : \tilde{S} \to S$ \ with \ $\tilde{S}$ \ normal, such that the strict transform \ $\tilde{f} : \tilde{M} \to \tilde{S} $ \ is geometrically f-flat. Then let \ $\nu : \tilde{\tilde{M}} \to \tilde{M}$ \ be the normalization, and apply Corollary \ref{utile} and Theorem \ref{Stein f-flat} to obtain the following commutative diagram
$$ \xymatrix{ \tilde{\tilde{M}} \ar[d]^g \ar[dr]^{\tilde{\tilde{f}}} \ar[r]^{\nu} & \tilde{M} \ar[d]^{\tilde{f}} \ar[r]^{\tilde{\tau}} & M \ar[d]^f \\
\tilde{\tilde{S}} \ar[r]^{\tilde{q}} & \tilde{S} \ar[r]^{\tau} & S} $$
where \ $g$ \ is geometrically f-flat with irreducible generic fiber, $\tilde{q}$ \ is proper finite and surjective, $\nu$ \ is the normalization map of \ $\tilde{M}$ \ and \ $\tilde{\tau}$ \ and \ $\tau$ \ are proper modifications. To conclude define \ $\tilde{\tilde{\tau}} : = \tilde{\tau}\circ \nu$ \ and \ $\tilde{\tilde{q}} : = \tau\circ \tilde{q}$. $\hfill \blacksquare$\\
\section{Appendix.}
The aim of this appendix is to show how the generalization by D. Mathieu [M.00] of Kuhlmann's semi-proper direct image theorem to the case where the target space is a Banach analytic set has the following consequence :
\begin{thm}\label{semi-proper d.i. f-cycles}
Let \ $S$ \ and \ $M$ \ be reduced complex spaces, and denote by \\
$\varphi : S \to \mathcal{C}_n^f(M)$ \ the classifying map of a f-analytic family of \ $n-$cycles in \ $M$. Assume that \ $\varphi$ \ is semi-proper. Then \ $\varphi(S)$ \ has a natural structure of weakly normal complex space such that the tautological family of \ $n-$cycles parametrized by \ $\varphi(S)$ \ is an f-analytic family of cycles in \ $M$.
\end{thm}
Of course this result has an easy corollary, which is the following property of "universal reparametrization" (see the universal reparametrization theorem of [M.00]).
\begin{cor}\label{reparametrization}
In the situation of the previous theorem, consider an f-analytic family \ $(Y_t)_{t \in T}$ \ of \ $n-$cycles in \ $M$ \ parametrized by a weakly normal complex space \ $T$ \ such that for any \ $t \in T$ \ there exists an \ $s \in S$ \ such that \ $Y_t = X_s$. Then there exists a unique holomorphic map
$$ \gamma : T \to \varphi(S) $$
such that we have \ $Y_t = X_{\gamma(t)}$ \ for all \ $t \in T$.
\end{cor}
\parag{Proof of the theorem} We know that \ $\varphi(S)$ \ is closed and locally compact. Let \ $s_0 \in S$ \ and choose relatively compact open sets \ $W' \subset\subset W \subset\subset M$ \ meeting all irreducible components of \ $C_0 : = \varphi(s_0)$. Cover \ $\bar W$ \ by a finite set of adapted scales \ $E_i : = (U_i, B_i,j_i), i \in I$, and let \ $k_i : = \deg_{E_i}(C_0)$. Define the open neighbourhood
$$ \mathcal{V} : = \Omega(W') \bigcap \big(\cap_{i \in I} \Omega_{k_i}(E_i)\big)$$
and consider the maps
$$\varphi' : \varphi^{-1}(\mathcal{V}) \to \mathcal{V}, \quad \alpha : \mathcal{V} \to \prod_{i \in I} \ H(\bar U_i, \Sym^{k_i}(B_i)), \quad \psi : = \alpha\circ\varphi' .$$
Then \ $\varphi'$ \ is semi-proper as we know that \ $\varphi(S)$ \ is locally compact. Writing \ $S' : = \varphi^{-1}(\mathcal{V})$, it is now easy to check that \ $\varphi(S')$ \ is homeomorphic to \ $\psi(S')$ \ by construction, using the characterization of compact sets in \ $\mathcal{C}_n^f(M)$. Now the semi-proper direct image theorem of D. Mathieu [M.00] gives a natural weakly normal complex structure on \ $\psi(S')$. $\hfill \blacksquare$\\
\parag{Remark} Using the proof above it is easy to show that, if we have an f-analytic family \ $(X_s)_{s \in S}$ \ with classifying map \ $\varphi : S \to \mathcal{C}_n^f(M)$ \ and a holomorphic map \ $g : S \to T$ \ between reduced complex spaces, such that \ $\varphi\times g$ \ is semi-proper, then there exists a natural structure of weakly normal complex space on \ $(\varphi\times g)(S)$ \ such that the projection on \ $T$ \ is holomorphic and the projection on \ $\mathcal{C}_n^f(M)$ \ is the classifying map of an f-analytic family of cycles in \ $M$.\\
\parag{References}
\begin{enumerate}
\item{[B.75]} Barlet, D. \textit{Espace analytique r\'eduit $\cdots$} S\'eminaire F.Norguet 1974-1975, L.N. 482 (1975), p.1-158.
\item{[B.78]} Barlet, D. \textit{ Majoration du volume $\cdots$ } S\'eminaire Lelong-Skoda 1978-1979, L.N. 822 (1980), p.1-17.
\item{[B.08]} Barlet, D. \textit{Reparam\'etrisation universelle de familles f-analytiques de cycles et f-aplatissement g\'eom\'etrique} Comment. Math. Helv. 83 (2008),\\ p. 869-888.
\item{[BOOK]} Barlet, D. and Magnusson, J. \textit{Book to appear}.
\item{[B.Mg.10]} Barlet, D. and Magnusson, J. article in preparation.
\item{[Bi.64]} Bishop, E. \textit{Conditions for the analyticity of certain sets} Mich. Math. J. 11 (1964), p. 289-304.
\item{[C.60]} Cartan, H. \textit{Quotients of complex analytic spaces} International Colloquium on Function Theory, Tata Institute (1960), p.1-15.
\item{[D.90]} Dethloff, G. \textit{A new proof of a theorem of Grauert and Remmert by\ $ L_2$ \ methods}, Math. Ann. 286 (1990) p.129-142.
\item{[G.83]} Grauert, H. \textit{Set theoretic complex equivalence relations} Math. Ann. 265 (1983), p.137-148.
\item{[G.86]} Grauert, H. \textit{On meromorphic equivalence relations} Proc. Conf. Complex Analysis, Notre-Dame (1984) Aspects Math. E9 (1986), p.115-147.
\item{[G.R.58]} Grauert, H. and Remmert, R. \textit{Komplexe R{\"a}ume}, Math. Ann. 136 (1958) p. 245-318.
\item{[K.64]} Kuhlmann, N. \textit{{\"U}ber holomorphe Abbildungen komplexer R{\"a}ume} Archiv der Math. 15 (1964), p.81-90.
\item{[K.66]} Kuhlmann, N. \textit{Bemerkungen {\"u}ber holomorphe Abbildungen komplexer R{\"a}ume} Wiss. Abh. Arbeitsgemeinschaft Nordrhein-Westfalen 33, Festschr. Ged{\"a}chtnisfeier K. Weierstrass (1966), p.495-522.
\item{[M.00]} Mathieu, D. \textit{Universal reparametrization of a family of cycles : a new approach to meromorphic equivalence relations}, Ann. Inst. Fourier (Grenoble) t. 50, fasc.4 (2000) p.1155-1189.
\item{[R.57]} Remmert, R. \textit{Holomorphe und meromorphe Abbildungen komplexer R{\"a}ume}, Math. Ann. 133 (1957), p.328-370.
\item{[S.93]} Siebert, B. \textit{Fiber cycles of holomorphic maps I. Local flattening} Math. Ann. 296 (1993), p.269-283.
\item{[Si.94]} Siebert, B. \textit{Fiber cycle space and canonical flattening II} Math. Ann. 300 (1994), p. 243-271.
\end{enumerate}
\end{document}
\newpage
\bigskip
\begin{thm}
Let \ $M$ \ be an irreducible complex space and let \ $\tau : \tilde{M} \to M$ \ be a proper (holomorphic) modification of \ $M$ \ along a (closed) analytic subset \ $\Sigma$ \ of \ $M$, where we assume that \ $\tilde{M}$ and \ $M \setminus \Sigma$ \ are normal complex spaces\footnote{This normality assumption is not a restriction as we always may enlarge \ $\Sigma$ \ in order to contain the non normal locus of \ $M$, and then replace \ $\tilde{M}$ \ by its normalization.}. Let \ $\tilde{R} \subset \tilde{M}\times M$ \ be an irreducible\footnote{We may assume only that \ $\tilde{R}$ \ is of pure dimension with finitely many irreducible components such that each component satisfies the conditions required in the irreducible case.} analytic subset such that its projection \ $\tilde{\pi} $ \ on \ $\tilde{M}$ \ is quasi-proper, surjective and \ $n-$equidimensional.\\
Then there exists a holomorphic map \ $q : \tilde{M} \to S$ \ onto a weakly normal complex space \ $S$ \ which is quasi-proper and satisfies the following properties :
\begin{itemize}
\item We have \ $q(x) = q(y)$ \ if and only if \ $\tilde{\pi}^{-1}(x) = \tilde{\pi}^{-1}(y)$ \ as \ $n-$cycles in \ $M$.
\item The holomorphic map \ $q : \tilde{M} \to S$ \ admits a geometric f-flattening.
\end{itemize}
\end{thm}
\parag{Remark} With our method we cannot prove the existence of a meromorphic quotient without the f-flattening hypothesis on \ $\tilde{R}$ \ (see the discussion in the next subsection).\\
Remark that our hypothesis on \ $\tilde{R}$ \ implies that there exists an f-analytic family of \ $n-$cycles in \ $M$ \ parametrized by \ $\tilde{M}$ \ taking the value \ $\vert \tilde{\pi}^{-1}(x)\vert$ \ for generic \ $x \in \tilde{M}$. This fact is a consequence of the normality of \ $\tilde{M}$ \ and the quasi-properness of \ $\tilde{R}$ \ over \ $\tilde{M}$ (see [RUTAG] and [BOOK]). \\
Remark also that, if \ $\mathcal{R}$ \ is an analytic equivalence relation such that its graph \ $R$ \ is quasi-proper on \ $M$, the existence of a modification \ $\tau : \tilde{M} \to M$ \ with such an \ $\tilde{R}$ \ holds if and only if the projection on \ $M$ \ of the graph \ $R \subset M \times M$ \ of \ $\mathcal{R}$ \ admits a quasi-proper f-flattening.\\
The proof is a consequence of the reparametrization theorem for an f-analytic family of cycles when the classifying map is semi-proper; see [M.00] and [RUTAG]. Let me sketch here why the classifying map \ $\varphi^f : \tilde{M} \to \mathcal{C}_n^f(M)$ \ is semi-proper.\\
Let \ $C_0 \in \mathcal{C}_n^f(M)$; by definition of the topology of this space, there exists an open set \ $W \subset\subset M$ \ such that any irreducible component of \ $C_0$ \ meets \ $W$, and the set \ $\Omega(W)$ \ of all \ $C$ \ with this property is an open set in \ $\mathcal{C}_n^f(M)$. Let \ $K : = \tau^{-1}(\bar W)$. Let us show that we have \ $\varphi^f(\tilde{M}) \cap \Omega(W) = \varphi^f(K) \cap \Omega(W)$.\\
Let \ $C = \varphi^f(y)$ \ be in \ $\Omega(W)$ \ for some \ $y \in \tilde{M}$. Then \ $\tau(y)$ \ is the limit of a sequence of points \ $(x_{\nu})_{\nu\in \mathbb{N}}$ \ in \ $M \setminus \Sigma$. So \ $\varphi^f(x_{\nu})$ \ converges to \ $C$. For \ $\nu$ \ large enough \ $\varphi^f(x_{\nu}) $ \ lies in \ $\Omega(W)$. This means that the equivalence class of \ $x_{\nu}$, which is \ $\vert \varphi^f(x_{\nu})\vert$, meets \ $W$\footnote{It is not empty as it contains \ $x_{\nu}$.}. Then, for \ $\nu$ \ large enough, there exists a point \ $x'_{\nu}\in W$ \ such that \ $\varphi^f(x'_{\nu}) = \varphi^f(x_{\nu})$. Consider now the sequence \ $(x'_{\nu})_{\nu \geq \nu_0}$ \ as a sequence in \ $\tau^{-1}(\bar W)$. We may, after passing to a subsequence, assume that it converges to a point \ $z \in \tau^{-1}(\bar W)$. By continuity of \ $\varphi^f$ \ we shall have \ $\varphi^f(z) = \varphi^f(y)$ \ and, by construction, \ $z$ \ is in \ $ K$. So the semi-properness of \ $\varphi^f$ \ is proved, and the "reparametrization theorem" of [M.00] (see also [RUTAG]) may be applied to \ $\varphi^f$. Then it gives an f-meromorphic quotient for \ $\mathcal{R}$ \ in the sense of the definition below, because our proof, applied now to the map \ $q : \tilde{M} \to Q$, where \ $Q$ \ is the weak normalization of the image of \ $\varphi^f$, shows that \ $q$ \ is in fact quasi-proper and geometrically f-flat. $\hfill \blacksquare$
\bigskip
\subsection{Quasi-proper quotients.}
Now it is clear that we shall call a quasi-proper meromorphic quotient for a quasi-proper analytic equivalence relation \ $\mathcal{R}$ \ on an irreducible complex space \ $X$ \ a meromorphic map
$$ q : X \dashrightarrow S $$
onto a weakly normal complex space which is surjective, quasi-proper, holomorphic on \ $X \setminus \Sigma$ \ and such that for any \ $(x,y) \in (X\setminus \Sigma)^2$ \ we have the equivalence between
$$ x \,\mathcal{R}\, y \quad {\rm and} \quad q(x) = q(y).$$
Remark that in the quotient theorem we obtain that the graph of the meromorphic map \ $q$ \ is geometrically f-flat. It is easy to see that, in fact, if there exists a meromorphic quotient with this property, then the condition that the projection on \ $X$ \ of the graph of \ $\mathcal{R}$ \ admits a quasi-proper geometric f-flattening is necessary. So the only possible improvement of this theorem would be to find a quasi-proper meromorphic quotient without this condition on \ $R$. But remark that we would then obtain a quasi-proper meromorphic quotient map which would not admit a quasi-proper geometric flattening.\\
I have presently no idea how to find such a quotient because, in some sense, it means to be able to prove a reparametrization theorem of the following type :\\
Assume we have an "holomorphic map" \ $\varphi : S \to \mathcal{C}_n^f(M)$ \ classifying an f-holomorphic family of \ $n-$cycles in \ $M$. Now assume that \ $j\circ \varphi$ \ is semi-proper, where \ $j : \mathcal{C}_n^f(M) \hookrightarrow \mathcal{C}_n^{loc}(M)$ \ is the obvious map. Then the image of \ $\varphi$ \ has a natural structure of a weakly normal complex space of finite dimension.\\
The probleme is that the semi-proper direct image theorem of D. Mathieu works with values in a Banach analytic set, and that means that we have to control the cycles with a finite number of scales, at least locally. But the a sequence in \ $\mathcal{C}_n^f(M)$ \ may converges to a point in \ $\mathcal{C}_n^f(M)$ \ in the sens of \ $\mathcal{C}_n^{loc}(M)$ \ but not in the sens of \ $\mathcal{C}_n^f(M)$ \ because of the escape at infinity of some component !\\
Probably the good notion of quasi-properness is not the naive (usual) one, which is too weak for the big fibers, but the following, which is stronger:\\
Let \ $\pi : G \to S$ \ be a holomorphic map between irreducible complex spaces, and let \ $n : = \dim G - \dim S $. Then \ $\pi$ \ is {\bf strongly quasi-proper} if for each \ $s \in S$ \ there exists an open set \ $S'$ \ containing \ $s$ \ and a compact set \ $K$ \ in \ $G$ \ such that for any \ $y \in \mathcal{C}^f_n(G)$ \ which is a limit of \ $n-$dimensional generic fibers over \ $S'$, each irreducible component of \ $y$ \ meets \ $K$.\\
Remark that this definition is local on \ $S$, stable under proper modifications of the base, and implies that such a map admits a quasi-proper geometric flattening.\\
Remark also that for an equidimensional map, quasi-proper and strongly quasi-proper coincide.
\newpage
\section{Topological modifications.}
Let \ $S$ \ be a reduced complex space and let \ $\Sigma $ \ be a closed analytic subset of \ $S$ \ with empty interior. Let \ $\tilde{S}$ \ be a Hausdorff topological space and let \ $\tau : \tilde{S} \to S$ \ be a continuous map.
\begin{defn}
We shall say that \ $\tau : \tilde{S} \to S$ \ is a {\bf topological modification of \ $S$ \ along \ $\Sigma$} \ if the following conditions are fulfilled :
\begin{enumerate}[i)]
\item The map \ $\tau$ \ is proper and surjective.
\item The subset \ $\tilde{\Sigma} : = \tau^{-1}(\Sigma)$ \ has empty interior in \ $\tilde{S}$.
\item The restriction of \ $\tau$ \ induces a homeomorphism from \ $ \tilde{S} \setminus \tilde{\Sigma}$ \ onto \ $S \setminus \Sigma$.
\end{enumerate}
\end{defn}
\begin{defn}
We keep the notation above, and let \ $T$ \ be a Hausdorff topological space.
We shall say that a continuous map \ $\varphi : S \setminus \Sigma \to T$ \ is {\bf topologically complete} along \ $\Sigma$ \ if there exist a topological modification \ $\tau : \tilde{S} \to S$ \ along \ $\Sigma$ \ and a continuous map \ $\tilde{\varphi} : \tilde{S} \to T$ \ whose restriction to \ $\tilde{S} \setminus \tilde{\Sigma}$ \ coincides with \ $\varphi\circ\tau^{-1}$.
\end{defn}
\begin{lemma}
A necessary and sufficient condition for a continuous map \ $\varphi : S \setminus \Sigma \to T$ \ to be topologically complete along \ $\Sigma$ \ is that the closure in \ $S \times T$ \ of the graph \ $\Gamma$ \ of \ $\varphi$ \ be proper over \ $S$.
\end{lemma}
\parag{Proof} Let us begin by pointing out that if \ $\tau : \tilde{S} \to S$ \ is proper, then \ $\tilde{S}$ \ is locally compact, since \ $S$ \ is locally compact.\\
We first prove the direct implication. Let \ $\tilde{\Gamma} \subset \tilde{S} \times T$ \ be the graph of \ $\tilde{\varphi}$. Its image \ $\Theta \subset S \times T$ \ under \ $\tau \times Id_T$ \ is a closed subset\footnote{$\tau\times Id_T$ \ is proper.} of \ $S \times T$ \ which is proper over \ $S$, since the preimage \ $\tilde{K}$ \ in \ $\tilde{S}$ \ of a compact set \ $K$ \ of \ $S$ \ under \ $\tau$ \ is compact and the preimage of \ $K$ \ in \ $\Theta$ \ is contained in the compact set \ $K\times \tilde{\varphi}(\tilde{K})$. On the other hand it is clear that the restriction of \ $\Theta$ \ above \ $S \setminus \Sigma$ \ is \ $\Gamma$. Moreover, as \ $\tilde{\Sigma}$ \ has empty interior in \ $\tilde{S}$, \ $\Gamma$ \ is dense in \ $\Theta$. Hence \ $\Theta$ \ equals \ $\bar \Gamma$ \ in \ $S \times T$, and it is proper over \ $S$.\\
Conversely, if \ $\bar \Gamma$ \ is proper over \ $S$, one sees immediately that one may take \ $\tilde{S} : = \bar \Gamma$ \ and for \ $\tau$ \ the projection onto \ $S$. One obtains a topological modification of \ $S$ \ along \ $\Sigma$ \ on which \ $\varphi$ \ admits a continuous extension: namely the projection of \ $\bar \Gamma$ \ onto \ $T$. $\hfill \blacksquare$\\
Note that all of this is completely devoid of interest in the case where \ $T$ \ is compact!\\
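As a toy illustration of the lemma (an added example with hypothetical data): take \ $S = \mathbb{C}$, $\Sigma = \{0\}$ \ and \ $\varphi(s) = 1/s$. For \ $T = \mathbb{C}$ \ the closure of the graph in \ $S \times T$ \ is the hyperbola
$$ \bar \Gamma = \{(s,t) \in \mathbb{C}^2 \ / \ st = 1 \}, $$
which is not proper over \ $S$ \ (its part over the compact disc \ $\{\vert s \vert \leq 1\}$ \ is unbounded), so \ $\varphi$ \ is not topologically complete along \ $\Sigma$. For \ $T = \mathbb{P}^1$ \ properness is automatic and \ $\tilde{S} : = \bar \Gamma$ \ gives the continuous extension, with \ $\tilde{\varphi} = \infty$ \ over \ $s = 0$.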
\parag{Two examples.}
\begin{enumerate}
\item Assume \ $S$ \ irreducible. Let \ $M$ \ be a reduced complex space and let \\
$\varphi : S \setminus \Sigma \to \mathcal{C}_n^{loc}(M) $ \ be a holomorphic map\footnote{That is, classifying an analytic family of \ $n-$cycles in \ $M$.}. Assume that there exists a (closed) analytic subset \ $G \subset S \times M$ \ such that \ $G \cap ((S\setminus \Sigma)\times M)$ \ coincides with the (set-theoretic) graph of the analytic family of \ $n-$cycles in \ $M$ \ parametrized by \ $S \setminus \Sigma$. Then \ $\varphi$ \ is topologically complete along \ $\Sigma$.\\
Indeed, it suffices to show that, near a point \ $s_0 \in \Sigma$, the cycles \ $X_s : = \varphi(s)$ \ for \ $s \not\in \Sigma$ \ stay in a compact subset of \ $\mathcal{C}_n^{loc}(M)$. But this follows from the theorem bounding the volume of the generic fibers, applied to the projection of \ $G$ \ onto \ $S$, together with the characterization of the compact subsets of \ $\mathcal{C}_n^{loc}(M)$ \ via Bishop's theorem.
\item We consider the same situation as before, but we now assume that the map \ $\varphi : S \setminus \Sigma \to \mathcal{C}_n^f(M)$ \ classifies an f-analytic family of cycles (of finite type) in \ $M$ \ and that \ $G$ \ is quasi-proper over \ $S$. Then the properness over \ $S$ \ of the closure of the graph of \ $\varphi$ \ in \ $S \times \mathcal{C}_n^f(M)$ \ is no longer automatic, since the compact subsets of \ $\mathcal{C}_n^{f}(M)$ \ are characterized by being bounded and presenting no escape to infinity. This means that such a subset must be closed in \ $\mathcal{C}_n^{loc}(M)$. The problem is then that a closed subset of \ $\mathcal{C}_n^{f}(M)$ \ is not necessarily closed in \ $\mathcal{C}_n^{loc}(M)$, and that if one takes the closure in \ $\mathcal{C}_n^{loc}(M)$, one may leave \ $\mathcal{C}_n^{f}(M)$.\\
In this case the existence of a proper topological modification of \ $S$ \ along \ $\Sigma$ \ on which the f-analytic family extends continuously (to an f-continuous family, that is, \ $\tilde{\varphi}$ \ is continuous with values in \ $ \mathcal{C}_n^{f}(M)$) \ is not guaranteed by the existence of a quasi-proper \ $G$. This is the additional condition of the geometric f-flattening theorem of [RUTAG] (see also [M.00]), in a form equivalent to the one stated in {\it loc.cit.} thanks to the lemma above.
\end{enumerate}
\newpage
\section{Of no interest?}
Passing to the inductive limit when \ $M$ \ is a neighbourhood of the compact set \ $\bar M'$ \ is probably the only thing that may be of interest in what follows ...
\begin{defn} Let \ $M$ \ be a reduced complex space and \ $M ' \subset\subset M$ \ a relatively compact open set in \ $M$. Let us denote by \ $c(M')$ \ the following condition on a closed cycle \ $X$ \ of finite type and pure dimension \ $n$ \ in \ $M$ :
\begin{itemize}
\item For each irreducible component \ $\Gamma$ \ of \ $\vert X\vert$ \ in \ $M$, \ $\Gamma \cap M'$ \ is a non-empty subset of \ $M'$. $\hfill c(M')$
\end{itemize}
and define
$$ \mathcal{C}_n(M,M') : = \{ X \in \mathcal{C}_n^f(M) \ / \ X \ {\rm satisfies} \ c(M') \} .$$
We endow this set with the topology induced by \ $\mathcal{C}_n^f(M)$.
\end{defn}
\parag{Remark} In practice we shall assume that \ $\bar M'$ \ is a compact real semi-analytic subset of \ $M$ \ in order to know that \ $\Gamma \cap M'$ \ has only finitely many irreducible components.\\
\begin{prop}\label{debut}
We have the following properties :
\begin{enumerate}[i)]
\item \ $ \mathcal{C}_n(M,M') $ \ is open in \ $\mathcal{C}_n^f(M)$.
\item The topologies induced by \ $\mathcal{C}_n^{loc}(M)$ \ and \ $\mathcal{C}_n^f(M)$ \ coincide on \ $ \mathcal{C}_n(M,M') $.
\item A set \ $B \subset \mathcal{C}_n(M,M') $ \ is compact in \ $ \mathcal{C}_n(M,M') $ \ if and only if it is a closed set in \ $\mathcal{C}_n^f(M)$ \ which is a bounded set \ in \ $\mathcal{C}_n^{loc}(M)$.
\item Let \ $G \subset \mathcal{C}_n(M,M') \times M$ \ be the graph of the universal family on \ $\mathcal{C}_n(M,M')$. Then \ $G$ \ is quasi-proper on \ $\mathcal{C}_n(M,M')$.
\item Let \ $\Omega(\partial M')$ \ be the open set in \ $\mathcal{C}^f_n(M)$ \ of cycles \ $X$ \ satisfying \ $\vert X\vert \cap \partial M' = \emptyset$. Then \ $ \mathcal{C}_n(M,M') \cap \Omega(\partial M')$ \ is canonically homeomorphic to the open set \ $\mathcal{C}_n(M')$ \ in \ $\mathcal{C}_n(M)$.
\end{enumerate}
\end{prop}
\parag{Proof} Points i) and ii) are obvious from the definition of the topology of \ $\mathcal{C}_n^f(M)$.\\
Point iii) comes from the characterization of compact sets in \ $\mathcal{C}_n^f(M)$ \ given in [RUTAG] : it is enough to show that the topology induced on \ $B$ \ by \ $\mathcal{C}_n^{loc}(M)$ \ is the topology induced by \ $\mathcal{C}_n^f(M)$. As this property is true for the space \ $\mathcal{C}_n(M,M')$ \ itself, the conclusion follows.\\
Remark that in order to decide whether a subset \ $B$ \ is relatively compact in \ $\mathcal{C}_n(M,M')$ \ we first have to check that its closure in \ $\mathcal{C}_n^f(M)$ \ is contained in \ $\mathcal{C}_n(M,M')$. This is of course equivalent to the fact that its closure in \ $\mathcal{C}_n(M,M')$ \ is a closed subset of \ $\mathcal{C}_n^f(M)$. That means that, going to the limit \ $X_{\nu} \to X$, no irreducible component of \ $X_{\nu}$ \ will break up, adding a new component which does not meet \ $M'$.\\
It is precisely this phenomenon that we want to study more closely here, in order to avoid it under suitable sufficient hypotheses.\\
Point iv) is tautological in the sense that the compact set \ $\bar M'$, by definition, meets all fibers of \ $G$.\\
Point v) is just the remark that if an irreducible component \ $\Gamma$ \ does not meet \ $\partial M'$ \ but meets \ $M'$, then it has to be contained in \ $M'$, and so it is compact. $\hfill \blacksquare$
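As a trivial illustration of the condition \ $c(M')$ \ (a toy example added here): for \ $M = \mathbb{C}^2$ \ and \ $M' = \{ \Vert z \Vert < 1 \}$, the line \ $\{z_2 = 0\}$ \ satisfies \ $c(M')$ \ and so defines a point of \ $\mathcal{C}_1(M,M')$, while the parallel line \ $\{z_2 = 2\}$ \ does not, since it does not meet \ $M'$.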
\newpage
\end{document}
\section{Introduction}
Over the last years we have witnessed a technological trend towards
miniaturization of electronic circuits. This tendency has been
accompanied by a growing research activity focused on achieving a
better understanding of the mechanisms for heat dissipation and energy
flow in mesoscopic systems. However, the motivation for the research
in this area is not only technological, because the very fundamental
concepts of standard statistical mechanics and thermodynamics are put
into test when studying these systems, even more when the process
under consideration corresponds to an out-of-equilibrium situation.
Several efforts have been made towards the extension of standard
thermodynamical concepts to the out-of-equilibrium evolution of
different systems. Some well-known examples are the aging regime of
glassy systems, sheared glasses, granular materials and
colloids.\cite{fdr,cukupel,letoandco} A very successful achievement in
the characterization of such nonequilibrium states has been the
identification of an {\em effective temperature}, i.e., a parameter
with the same properties of the temperature of a system at equilibrium
that is useful to describe the evolution of nonequilibrium
systems. For instance, for glassy systems the definition of effective
temperature was introduced\cite{fdr} by means of a generalization of
the equilibrium fluctuation-dissipation relations (FDR) and the
physical meaning of this concept was supported by showing that such a
temperature would coincide with the one measured by a
thermometer.\cite{cukupel} The definition of an effective temperature
from a FDR was introduced for quantum glassy systems in
Ref. \onlinecite{letogus} and later analyzed in electronic
systems\cite{lilileto}. More recently, these temperatures were studied
in an Ising chain after a sudden quench.\cite{foini}
The physics of the mesoscopic scale is ruled by the quantum coherence
of the particle propagation. This gives rise to non-trivial interference
mechanisms and surprising effects. Well-known examples are the
violation of the Fourier's Law in low-dimensional phononic
systems\cite{dhar} as well as the $2k_F$ oscillations of the local
voltage and the negative electrical resistance.\cite{fourpoint}
In the past few years there have been many experimental attempts to
locally characterize the heat flow in non-equilibrium systems. For
example, Pothier and coworkers\cite{pothier} measured the local energy
distribution function in metallic diffusive wires in a stationary
out-of-equilibrium situation. More recently, Altimiras and
coworkers\cite{altimiras} measured the electron energy distribution in
an integer quantum Hall regime with one of the edge channels driven
out-of-equilibrium. Chiral heat transport has been investigated in the
quantum Hall regime using micron-scale thermometers\cite{granger} and
later explained with the introduction of a local temperature along the
edge.\cite{fradkin} The idea of defining a non-equilibrium local
temperature has been useful to study out-of-equilibrium transport in
other mesoscopic systems. For example, thermoelectric transport has
been studied with the aid of the local temperature determined by an
ideal thermometer. \cite{casati-sanchez} Also the concept of effective
temperature has been useful to study heat exchange between a
nanojunction and its environment, which can act as a freezing agent,
\cite{Cht} and to study mesoscopic superconductors.\cite{Cht2} Another
example is the prediction that a superconducting wire can remain in
superconducting state even in contact with a bath that greatly exceeds
the critical temperature if the effective local temperature is
maintained below the critical value. \cite{dubi} A local temperature
has also been defined to characterize the heat transport in molecular
devices.\cite{galp} Additional studies have been reviewed in
Ref. \onlinecite{dubi-colloquium}.
In a previous work \cite{cal} we defined {\em local} and {\em
effective} temperatures in electronic quantum systems driven out of
equilibrium by external ac potentials. Examples of such systems are
quantum dots with ac voltages acting at their walls (quantum
pumps)\cite{pump} and quantum capacitors.\cite{qcapexp} In that work
we presented two concepts, which are the {\em local} and the {\em
effective} temperatures. The local temperature was introduced
following a procedure inspired by a work of Engquist and Anderson.
\cite{engq-an} The idea is to include a thermometer in the
microscopic description of the system. On the other hand the effective
temperature is defined from a local FDR involving single-particle
Green's functions. We showed that for low driving frequencies both
ways of defining the temperature coincide. In a more recent
work\cite{cal2} we slightly generalized the definition of the
thermometer to consider the possibility of simultaneously sensing the
local temperature and the local chemical potential of the sample. We
showed that the new local temperature determined by this new
definition coincides with the previous one. Even more, we showed that
such a parameter verified the thermodynamical properties of a
temperature, meaning that its gradient signals the direction for heat
flow at the contacts.
The aim of this work is to analyze the role of effective temperatures
within the context of a FDR for current-current correlation
functions. The motivation is twofold. On one hand we are interested in
testing the robustness of the definition of an effective temperature
from a FDR, at the level of a correlation function different from the
one we have considered in our previous work. On the other hand,
current-current correlation functions are particularly appealing
quantities since they are related to noise, which can be
experimentally measured and contains valuable information on the nature
of the elementary particles that take part in the transport
process. The zero-frequency noise is usually used to characterize the
correlations between particles in mesoscopic
systems.\cite{SamButtiker} Additionally in quantum pumps, noise is
related to the possibility of having quantized pumping\cite{kamenev}
and it contains information that cannot be extracted from the
time-averaged current.\cite{ButtikerNoise0} Current correlations in
mesoscopic coherent conductors were first discussed by M. B\"uttiker
in Ref. \onlinecite{buttiker90} and since then an extensive
theoretical literature on noise in mesoscopic systems analyzed within
the scattering matrix formalism has been
developed. \cite{reviews,ButtikerNoise0,ButtikerNoise,hanggi} We use
here another approach, which is based on the Keldysh formalism. For
non-interacting systems both treatments were proved to coincide at the
level of the description of the current for dc\cite{fisher-lee} and
ac-driven systems.\cite{liliflo} In the present work we show that this
is also the case for the correlations of the current fluctuations. The main
goal of this work is to show that the effective temperature obtained
from a fluctuation-dissipation relation for current-current
correlation functions coincides with the local temperature defined
using a thermometer and thus verifies the same thermodynamical
properties of the latter.
This paper is organized as follows. In Sec. \ref{model}, we present
the model and summarize the theoretical treatment. In
Sec. \ref{temperatures} we review three definitions of temperature
addressed in recent works\cite{cal,cal2}. In Sec. \ref{correlations}
we derive general expressions for current-current correlation
functions and an explicit expression for the zero-frequency noise
within the Keldysh Green's functions formalism. In Sec. \ref{results} we
present numerical results for a particular system. Section
\ref{conclusions} is devoted to discussion and conclusions. We give
some details of the calculation in the Appendix.
\section{Model and theoretical treatment}\label{model}
In Fig. \ref{setup} we display the same setup as in
Refs. \onlinecite{cal} and \onlinecite{cal2} representing a quantum
driven system, with the Hamiltonian $H_{sys}(t)$, connected to a
probe characterized by $H_P$. The total system is then described by
\begin{equation}
H(t) = H_{sys}(t) + H_{cP} + H_P,
\end{equation}
with $H_{cP}$ implementing the local coupling between the system and
the probe. The Hamiltonian corresponding to the driven system can in
turn be written as
\begin{equation}
H_{sys}(t) = H_L + H_{cL} + H_C(t) + H_{cR} + H_R,
\end{equation}
where $H_C(t)$, $H_L$ and $H_R$ stand for the Hamiltonians of the
central part and left and right reservoirs, coupled among
themselves via the Hamiltonians $H_{cL}$ and $H_{cR}$.
The Hamiltonian describing the central system ($C$) contains the ac
fields and can be written as $H_C(t) = H_0 + H_V(t)$. We assume that
$H_0$ is a Hamiltonian for non-interacting electrons while $H_V(t)$ is
harmonically time dependent with a fundamental driving frequency
$\Omega_0$. We leave further details of the model undetermined as much
of the coming discussion is model independent.
All three reservoirs (left, right and the probe) are modeled by
systems of non-interacting electrons with many degrees of freedom,
i.e., $H_\alpha = \sum_{k\alpha} \varepsilon_{k\alpha}
c^\dagger_{k\alpha} c_{k\alpha}$, where $\alpha=L,R,P$. The
corresponding contacts are described by $H_{c\alpha} = w_{c\alpha}
\sum_{k\alpha}(c^\dagger_{k\alpha} c_{l\alpha} + c^\dagger_{l\alpha}
c_{k\alpha})$, where $l\alpha$ denotes the coordinate of $C$ where the
reservoir $\alpha$ is connected. As in previous works,
\cite{cal,cal2,cal3,fourpoint,fourpointfed} we consider non-invasive
probe and we treat $w_{cP}$ at the lowest order of perturbation theory
when necessary.
\begin{figure}
\centering
\includegraphics[width=80mm,angle=0,clip]{figure0.eps} {\small {} }
\caption{{ Scheme of the setup. The central device is a wire with two
barriers of height $E_B$ connected by its ends to two reservoirs
($L$ and $R$). The third reservoir ($P$) represents the probe,
which consists of a macroscopic system weakly coupled to a given
point of the central device. In this setup, transport is induced
by two oscillating ac fields (both with the same amplitude and
frequency but with a phase lag) applied at the points where the
barriers are located. The Left and Right regions depicted in this
scheme are related to the heat current that flows into the
respective reservoirs.\cite{cal2,cal3}}}
\label{setup}
\end{figure}
We will analyze the out-of-equilibrium dynamics of this system within
the Schwinger-Keldysh Green's functions formalism. Within this
formalism, the usual time-ordering operator of equilibrium theory is
replaced by a contour-ordering operator, which orders time labels
according to their position on the Keldysh contour. The single-particle
propagator reads
\begin{equation} \label{green-Def}
i G_{j,j'}(t,t') = \langle T_\mathcal{C} [ c_j (t)
c^{\dagger}_{j'}(t') ] \rangle.
\end{equation}
The contour-ordered Green's function contains four different functions
depending on where the times $t$ and $t'$ lie on the Keldysh
contour.\cite{keldysh} It is easy to see that they are not all
independent. We then consider the {\em lesser}, {\em greater} and {\em
retarded} Green's functions,
\begin{eqnarray}
i G^{<}_{j,j'}(t,t') &=& - \langle c^{\dagger}_{j'}(t') c_j(t) \rangle,
\nonumber \\
i G^{>}_{j,j'}(t,t') &=& \langle c_j(t) c^{\dagger}_{j'}(t') \rangle,
\nonumber \\
i G^R_{j,j'}(t,t')&=& \Theta(t-t') \langle \left[
c_j(t),c^{\dagger}_{j'}(t') \right]_+
\rangle ,
\label{green}
\end{eqnarray}
where $[,]_+$ denotes the anticommutator of the fermionic operators,
$\langle ...\rangle$ is the quantum statistical average and the
indices $j,j'$ denote spatial coordinates of the system. These Green's
functions can be evaluated after solving the Dyson equations.
In this work we will focus on current-current correlation
functions. The current in reservoir $\alpha$ at time $t$ is defined by
the operator\cite{liliflo}
\begin{equation} \label{current-op}
\hat{J}_\alpha(t) = i w_{c \alpha} \sum_{k \alpha} \left( \hat{c}_{k
\alpha}^\dagger (t) \hat{c}_{l \alpha}(t) - \hat{c}_{l
\alpha}^\dagger(t) \hat{c}_{k \alpha} (t) \right),
\end{equation}
which obeys bosonic commutation rules. The ensuing connected
contour-ordered propagator reads in this case
\begin{equation} \label{cc-Def}
i C_{\alpha \beta} (t,t') = \langle T_\mathcal{C} [ \hat{J}_\alpha (t)
\hat{J}_\beta (t') ] \rangle - \langle \hat{J}_\alpha(t) \rangle
\langle \hat{J}_\beta(t') \rangle,
\end{equation}
while the {\em lesser}, {\em greater} and {\em retarded} Green's
functions are
\begin{eqnarray} \label{cc-Def2}
i C^<_{\alpha \beta} (t,t') & = & \langle \hat{J}_\beta (t')
\hat{J}_\alpha (t) \rangle - \langle \hat{J}_\alpha(t) \rangle
\langle \hat{J}_\beta(t') \rangle,
\nonumber \\
i C^>_{\alpha \beta} (t,t') & = & \langle \hat{J}_\alpha (t)
\hat{J}_\beta (t') \rangle - \langle \hat{J}_\alpha(t) \rangle
\langle \hat{J}_\beta(t') \rangle,
\nonumber \\
i C_{\alpha \beta}^R(t,t') & = & \Theta(t-t') \langle \left[
\hat{J}_\alpha (t), \hat{J}_\beta (t') \right]_- \rangle.
\end{eqnarray}
where $[,]_-$ denotes the commutator of the currents.
For the case of harmonic driving it is convenient to use the
Floquet-Fourier representation of the Green's functions:
\cite{liliflo}
\begin{equation}\label{floquet}
A_{j,j'}(t, t-\tau) = \sum_{k=-\infty}^{\infty}
\int_{-\infty}^{\infty} \frac{d\omega}{2 \pi}
e^{-i (k \Omega_0 t + \omega \tau)} A_{j,j'}(k,\omega).
\end{equation}
where $A$ stands for single-particle (\ref{green-Def}) or
current-current (\ref{cc-Def}) propagators.
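For concreteness, the following minimal numerical sketch (not part of the formalism; the test function, grids and parameter values are hypothetical) illustrates how the Floquet-Fourier components of Eq. (\ref{floquet}) can be extracted from a sampled two-time function: one averages over the absolute time $t$ with the phase $e^{i k \Omega_0 t}$ and Fourier transforms over the relative time $\tau$.
\begin{verbatim}
# Sketch: extracting A(k, w) from samples of A(t, t - tau),
# assuming periodicity in t with period 2*pi/Omega0.
import numpy as np

Omega0 = 0.5
T_per = 2 * np.pi / Omega0
t = np.linspace(0.0, T_per, 64, endpoint=False)
tau = np.linspace(-200.0, 200.0, 4096, endpoint=False)
dtau = tau[1] - tau[0]

# Test function with a known k = 1 component: a Lorentzian line
# centered at w0, carried by the first harmonic of the driving.
w0, eta = 0.3, 0.05
A = (np.exp(-1j * Omega0 * t)[:, None]
     * np.exp(-1j * w0 * tau - eta * np.abs(tau))[None, :])

def floquet_component(A, k, w):
    # Undo the k-th harmonic, average over one period in t,
    # then Fourier transform over tau.
    avg_t = (np.exp(1j * k * Omega0 * t)[:, None] * A).mean(axis=0)
    return np.sum(np.exp(1j * w * tau) * avg_t) * dtau

print(abs(floquet_component(A, 1, w0)))  # ~ 2/eta = 40 (peak value)
print(abs(floquet_component(A, 0, w0)))  # ~ 0 (no dc harmonic here)
\end{verbatim}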
In general the {\em Keldysh} and {\em retarded} Green's functions, can
be expressed in terms of the {\em lesser} and {\em greater} Green's
functions via
\begin{eqnarray}
A_{j,j'}^K(t,t') & = & A_{j,j'}^>(t,t') +
A_{j,j'}^<(t,t'),
\nonumber \\
A_{j,j'}^R(t,t') & = & \Theta(t-t') \left[
A_{j,j'}^>(t,t') - A_{j,j'}^<(t,t')\right] .
\end{eqnarray}
From the definition given in Eq. (\ref{floquet}) it is straightforward
to see that the Floquet-Fourier components of these functions can be
written as
\begin{eqnarray} \label{greenFF}
A_{j,j'}^K(k,\omega) & = & A_{j,j'}^>(k,\omega) +
A_{j,j'}^<(k,\omega),
\nonumber \\
A_{j,j'}^R(k,\omega) & = & i \int_{-\infty}^\infty \frac{d\omega'}{2\pi}
\frac{A_{j,j'}^>(k,\omega') -
A_{j,j'}^<(k,\omega')}{\omega
- \omega' + i 0^+}.
\nonumber \\
\end{eqnarray}
\section{Defining the temperature}\label{temperatures}
\subsection{Local temperature determined by a probe}
In Ref. \onlinecite{cal} we defined the local temperature ($T_{lP}$)
of the site $lP$ of the system as the value of the temperature of the
probe such that the time-averaged heat exchange between the central
system and the probe vanishes.
It can be shown\cite{liliheatpump} that, given $H_C(t)$ without
many-body interactions, the dc component of the heat current flowing
from the central system to the thermometer can be expressed as ($\hbar
= k_B = e = 1$)
\begin{eqnarray}
& &J_P^Q = \sum_{\alpha=L,R,P} \sum_{k=-\infty}^{\infty}
\int_{-\infty}^{\infty} \frac{d\omega}{2 \pi}
\Big\{ [f_\alpha(\omega)-f_P(\omega_k)] \nonumber \\
& & \times (\omega_k - \mu) \Gamma_P(\omega_k) \Gamma_\alpha(\omega)
\left| G^R_{lP,l\alpha}(k,\omega)\right|^2 \Big\}, \label{jq}
\end{eqnarray}
where $\omega_k=\omega+k\Omega_0$, while $\Gamma_{\alpha}(\omega) = -2
\pi |w_{\alpha}|^2 \sum_{k \alpha} \delta(\omega-\varepsilon_{k
\alpha})$ are the spectral functions characterizing
the reservoirs ($\alpha = L, R, P$), and $f_\alpha(\omega)=
1/[e^{\beta_{\alpha}(\omega -\mu_{\alpha})}+1]$ is the Fermi
function, which depends on $T_{\alpha}=1/\beta_{\alpha}$ and
$\mu_\alpha $ respectively the temperature and the chemical potential
of the reservoir $\alpha$.
Thus, the local temperature $T_{lP}$ corresponds to the solution of
the equation
\begin{equation} \label{Tlocal}
J_P^Q(T_{lP}) = 0.
\end{equation}
In general, Eq. (\ref{Tlocal}) must be solved numerically, but under
certain conditions, an analytical expression can be found. In
particular, for the low temperature weak-driving adiabatic regime,
which corresponds to small amplitudes and frequencies of the driving
potential, and for $\Omega_0 \ll T$,\cite{cal}
\begin{equation} \label{Tlocal-final}
T_{lP} = T \left[ 1 + \lambda^{(1)}_{lP}(\mu) \Omega_0 \right],
\end{equation}
where
\begin{equation} \label{lambda-n}
\lambda_{l}^{(n)}(\omega)= \frac{1}{ \sum_{k=-1}^{1}
\varphi_{l}(k,\omega)}
\sum_{k=-1}^{1} (k)^{n+2} \frac{d^n[ \varphi_{l}(k,\omega)]}{d
\omega^n} ,
\end{equation}
\begin{equation} \label{varphi}
\varphi_{l}(k,\omega) = \sum_{\alpha= L,R}
\left|G^R_{l, l \alpha}(k,\omega)\right|^2 \Gamma_{\alpha}(\omega).
\end{equation}
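Away from this adiabatic limit, Eq. (\ref{Tlocal}) is solved by standard one-dimensional root finding. The following minimal sketch illustrates only this step; the model heat current is a hypothetical stand-in for the full expression of Eq. (\ref{jq}):
\begin{verbatim}
# Sketch: solving J_P^Q(T_lP) = 0 for the local temperature.
from scipy.optimize import brentq

T_res = 0.025      # temperature of the L/R reservoirs
dT_drive = 0.004   # hypothetical driving-induced heating scale

def JQ_probe(T_P):
    # Toy model: the heat flow vanishes when the probe matches
    # the (slightly heated) local electron distribution.
    return (T_res + dT_drive)**2 - T_P**2

# Bracket the root around the reservoir temperature and solve.
T_local = brentq(JQ_probe, 0.5 * T_res, 5.0 * T_res)
print(T_local)     # -> T_res + dT_drive = 0.029
\end{verbatim}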
An alternative definition of local temperature was discussed in
Ref. \onlinecite{cal2}, where the fact that the heat current is
related to the charge current was taken into account. Then, the local
temperature ($T^*_{lP}$) and the local chemical potential
($\mu^*_{lP}$) were defined from the condition of simultaneously
vanishing of the time-averaged charge and heat currents between the
probe and the system. That is
\begin{equation}
\left\{ \begin{array}{rcl}
J^Q_P (T_{lP}^*,\mu_{lP}^*) & = & 0, \\\nonumber
J^e_P (T_{lP}^*,\mu_{lP}^*) & = & 0,
\end{array} \right.
\label{Tlocal2}
\end{equation}
where (see Refs. \onlinecite{liliflo,fourpointfed})
\begin{eqnarray}
& &J_P^e = \sum_{\alpha=L,R,P} \sum_{k=-\infty}^{\infty}
\int_{-\infty}^{\infty} \frac{d\omega}{2 \pi}
\Big\{ [f_\alpha(\omega)-f_P(\omega_k)] \nonumber \\
& & \times
\Gamma_P(\omega_k) \Gamma_\alpha(\omega) \left|
G^R_{lP,l\alpha}(k,\omega)\right|^2 \Big\}, \label{je}
\end{eqnarray}
is the dc component of the charge current flowing through the contact
between the system and the probe.
The simultaneous equations given in Eq. (\ref{Tlocal2}) can be solved
numerically for any situation, but an analytical expression can be
found within the low temperature weak-driving adiabatic
regime, when $\Omega_0 \ll T$, and leads to $T^*_{lP} = T_{lP}$,
given in Eq. (\ref{Tlocal-final}).
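The corresponding numerical step for Eq. (\ref{Tlocal2}) is a two-dimensional root search. A minimal sketch follows; the linear toy currents below are hypothetical placeholders for Eqs. (\ref{jq}) and (\ref{je}):
\begin{verbatim}
# Sketch: solving J_P^Q = J_P^e = 0 simultaneously for (T*, mu*).
from scipy.optimize import fsolve

T_res, mu_res = 0.025, 0.2
dT, dmu = 0.004, 0.003   # hypothetical local shifts due to driving

def currents(x):
    T_P, mu_P = x
    JQ = (T_res + dT - T_P) + 0.1 * (mu_res + dmu - mu_P)
    Je = (mu_res + dmu - mu_P) + 0.1 * (T_res + dT - T_P)
    return [JQ, Je]

T_star, mu_star = fsolve(currents, x0=[T_res, mu_res])
print(T_star, mu_star)   # -> 0.029, 0.203
\end{verbatim}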
\subsection{Effective temperature from a single-particle
fluctuation-dissipation relation (FDR)}
For systems in equilibrium, the fluctuation-dissipation theorem
establishes a relation between the Keldysh (correlation) and the
retarded Green's functions. In Ref. \onlinecite{cal}, we defined a
local FDR involving single-particle Green's functions from which an
effective temperature for the site $l$ ($T_{l}^{eff} =
1/\beta_{l}^{eff}$) can be extracted,
\begin{eqnarray} \label{Teff}
iG^K_{l,l}(0,\omega)-iG^K_{l,l}(0,\mu) & = &
\tanh \left[ \frac{\beta^{eff}_{l} (\omega-\mu)}{2} \right]
\overline{\varphi}_{l}(\omega), \label{fdr}
\nonumber \\
\end{eqnarray}
with $\overline{\varphi}_{l}( \omega)=-2 \, \mbox{Im}
[G_{l,l}^R(0,\omega)]= \sum_k \varphi_l(k,\omega_{-k})$. In general,
Eq. (\ref{Teff}) defines an effective temperature that might depend on
$\omega$, so the limit $\omega \rightarrow \mu$ is taken. An extra
term is added to the lhs of Eq. (\ref{Teff}) because the rhs is always
zero at $\omega = \mu$ but $G^K_{l,l}(0,\mu)$ is not necessarily
zero in an arbitrary out-of-equilibrium situation.
Within the low temperature weak-driving adiabatic regime, when
$\Omega_0 \ll T$, we showed\cite{cal} that $T^{eff}_{lP} =
T_{lP}$. Then, the conclusion of our previous investigations is that
for the weak driving adiabatic regime the effective temperature
defined from a single-particle FDR coincides with that determined by a
thermometer.
\section{Current-current correlation functions and Effective
Temperature}
\label{correlations}
\subsection{A non-equilibrium fluctuation-dissipation
relation} \label{Teff2-sec}
We analyze the role of effective temperatures ($T^{eff*}$) from a FDR
in the framework of two-particle correlation functions. As {\em a
priori} they are not necessarily the same as the effective
temperatures defined above we use an asterisk to refer to them. We
will focus on current-current correlation functions since they are
more easily accessible from an experimental point of view. We are
particularly interested in a local relation, that is both currents
evaluated at the same point.
As in the case of the single-particle FDR we focus on the dc
components of the correlation functions to define the effective
temperature. This corresponds to assuming that an equilibrium-like FDR
holds for the $k=0$ Floquet component with $\beta^{eff*}$ playing the
role of the inverse of temperature,
\begin{equation} \label{fdr2}
C_{\alpha \alpha}^R(0,\omega) = i \int_{-\infty}^\infty
\frac{d\omega'}{2\pi} \frac{C_{\alpha \alpha}^K(0,\omega')}{\omega -
\omega' + i 0^+} \tanh
\left[\frac{\beta^{eff*}_{l\alpha}\omega'}{2}\right].
\end{equation}
An equivalent expression for the FDR given in Eq. (\ref{fdr2}) is
obtained by considering the imaginary part, which leads to
\begin{equation} \label{Teff2}
i C_{\alpha \alpha}^K(0,\omega) =
\coth \left[ \frac{\beta^{eff*}_{l\alpha} \omega}{2} \right]
\overline{\varphi}^*_\alpha(\omega),
\end{equation}
where $\overline{\varphi}^*_\alpha(\omega) = - 2 \, \mbox{Im} \left[
C_{\alpha \alpha}^R(0,\omega) \right]$. (Notice that the real part is
simply derived by means of Kramers-Kronig relations.) As in the case
of the single-particle FDR in Eq. (\ref{Teff}), Eq. (\ref{Teff2})
defines an effective temperature that might depend on $\omega$, so the
limit $\omega \rightarrow 0$ is taken.
It is important to notice the similarity of this expression with the
one shown in Eq. (\ref{Teff}) for single-particle Green's functions
(fermionic operators). In this case the hyperbolic tangent is replaced
by a hyperbolic cotangent due to the bosonic statistics of the current
operators.
\subsection{Current-current correlation and noise}
Although we are more interested in the case of local current
correlations, let us start by considering the more general case of
correlation at different points. If we consider two reservoirs
($\alpha$ and $\beta$) and two times (an absolute time $t$ and a
relative time $\tau$) we can define the correlation function of
currents as
\begin{equation} \label{corr-cc}
P_{\alpha \beta}(t,t-\tau) = \frac{1}{2} \langle \Delta \hat{J}_\alpha(t)
\Delta \hat{J}_\beta(t-\tau) + \Delta \hat{J}_\beta(t-\tau) \Delta
\hat{J}_\alpha(t) \rangle,
\end{equation}
where $\Delta \hat{J}_\alpha(t) = \hat{J}_\alpha(t) - \langle
\hat{J}_\alpha(t) \rangle$.
With the definition of the contour-ordered current-current correlation
function given in Eq. (\ref{cc-Def}), the correlation function of
currents given in Eq. (\ref{corr-cc}) can be expressed as
\begin{eqnarray} \label{corr-cc2}
P_{\alpha \beta} (t,t-\tau) & = & \frac{i}{2} \left( C_{\alpha
\beta}^>(t,t-\tau)
+ C_{\alpha \beta}^< (t,t-\tau) \right) \nonumber \\
& = & \frac{i}{2} C_{\alpha \beta}^{K} (t,t-\tau).
\end{eqnarray}
If instead of a symmetrized current-current correlation we are
interested in a nonsymmetrized one,
\begin{equation}
P^{ns}_{\alpha \beta}(t,t-\tau) = \langle \Delta \hat{J}_\alpha(t)
\Delta \hat{J}_\beta(t-\tau) \rangle,
\end{equation}
the correlation becomes
\begin{equation}
P^{ns}_{\alpha \beta} (t,t-\tau) = i C_{\beta \alpha}^< (t-\tau,t).
\end{equation}
In this work we will give results for the symmetrized correlation only
but it is straightforward to obtain the results for the nonsymmetrized
one.
Since experimentally the noise spectrum is averaged over the absolute
time $t$, the relevant quantity here is
\begin{equation}
\mathcal{P}_{\alpha \beta}(\omega) = 2 \int d\tau \langle P_{\alpha
\beta}(t,t-\tau) \rangle_t e^{i \omega \tau}
\end{equation}
where $\langle \ldots \rangle_t$ denotes the time average. From the
definition of the Floquet-Fourier components given in
Eq. (\ref{floquet}) it is easy to see that
\begin{equation}
\mathcal{P}_{\alpha \beta}(\omega) = i C_{\alpha \beta}^K(0,\omega).
\end{equation}
Hence, the only relevant Floquet-Fourier component is the one with
$k=0$.
As we are considering non-interacting electrons the
contour-ordered propagator given in Eq. (\ref{cc-Def}) can be
exactly evaluated in terms of single-particle propagators
(\ref{green-Def}). Using Wick's theorem (see the Appendix for the
details), this contour-ordered function can be written as
\begin{eqnarray} \label{C-calc1}
i C_{\alpha \beta} (t,t') & = & - w_{c \alpha} w_{c \beta} \sum_{k
\alpha, k \beta}
\left\{ G_{l\beta,k\alpha}(t',t) G_{l\alpha,k\beta}(t,t') \right.
\nonumber \\ & &
- G_{k\beta,k\alpha}(t',t) G_{l\alpha,l\beta}(t,t')
\nonumber \\ & &
- G_{l\beta,l\alpha}(t',t) G_{k\alpha,k\beta}(t,t')
\nonumber \\ & &
\left. + G_{k\beta,l\alpha}(t',t) G_{k\alpha,l\beta}(t,t') \right\} .
\end{eqnarray}
In the Appendix we show the detailed calculation leading from this
expression to the Floquet-Fourier components $C^\gtrless_{\alpha
\beta}(0,\omega)$. Here we only reproduce the results for two cases
of particular interest.
The first case is the zero-frequency limit of $C^K_{\alpha
\beta}(0,\omega)$, which reads
\begin{equation}
P_{\alpha \beta} \equiv \frac{i}{2} C^K_{\alpha \beta}(k=0,\omega=0) =
\delta_{\alpha \beta} P_\alpha + P_{\alpha \ne \beta},
\end{equation}
where
\begin{eqnarray}
P_\alpha & = & \int_{-\infty}^\infty \frac{d\omega'}{2\pi}
\Gamma_\alpha(\omega') \sum_{k=-\infty}^\infty \sum_\gamma
\Gamma_\gamma(\omega'_k) f_{\alpha \gamma}(\omega',\omega'_k)
\nonumber \\ &&
\times |G^R_{l\alpha,l\gamma}(-k,\omega'_k)|^2,
\nonumber \\
P_{\alpha \ne \beta} & = & - \frac{1}{2} \int_{-\infty}^\infty
\frac{d\omega'}{2\pi}
\Gamma_{\alpha}(\omega') \sum_{k=-\infty}^\infty
\Gamma_\beta(\omega'_k) \Big\{
f_{\alpha \beta}(\omega',\omega'_k)
\nonumber \\ &&
\times \mbox{Re} \Big[ G^R_{l\beta,l\alpha}(k,\omega')
G^R_{l\alpha,l\beta}(-k,\omega'_k) \Big]
\nonumber \\ &&
- 2 \sum_{k=-\infty}^\infty \sum_\gamma \Gamma_\gamma(\omega'_{k'})
f_{\alpha \gamma} (\omega',\omega'_{k'}) \mbox{Im} \Big[
G^R_{l\beta,l\alpha}(k,\omega')
\nonumber \\ &&
\times G^R_{l\alpha,l\gamma}(-k',\omega'_{k'})
G^R_{l\beta,l\gamma}(k-k',\omega'_{k'})^* \Big]
\nonumber \\ &&
+ G^>_{l\beta,l\alpha}(k,\omega')G^<_{l\beta,l\alpha}(k,\omega')^* \Big\}
\nonumber \\ &&
+ \Big\{ \mbox{same with } \alpha \leftrightarrow \beta \Big\},
\end{eqnarray}
being
\begin{equation}
f_{\alpha \beta}(\omega,\omega') = f_\alpha(\omega)
(1 - f_\beta(\omega')) + f_\beta(\omega') (1 - f_\alpha(\omega)).
\end{equation}
It is important to notice that this is a general result for
multiterminal quantum driven systems and the sum over $\gamma$ extends
over all reservoirs connected to the central system. For this work we
chose a two-terminal system, but this result is completely general as
no assumption concerning the reservoirs was made in the calculation.
At this point it is interesting to compare with previous results
obtained within the scattering matrix formalism (see
Ref. \onlinecite{ButtikerNoise}). In order to do so, we need to assume
that all reservoirs are at equal temperature and chemical potential
(unbiased pump). We split the zero-frequency noise into two
contributions
\begin{equation}
P_{\alpha \beta} \equiv P_{\alpha \beta}^{(th)} + P_ {\alpha \beta}^{(sh)},
\end{equation}
where
\begin{eqnarray} \label{zero-freq-noise}
P_{\alpha \beta}^{(th)} & = & \int_{-\infty}^\infty \frac{d\omega'}{2\pi}
f(\omega') (1 - f(\omega')) \Gamma(\omega') \sum_{k=-\infty}^\infty
\Gamma(\omega'_k)
\nonumber \\ &&
\times \bigg\{ \delta_{\alpha \beta} \sum_\gamma \Big(
|G^R_{l\alpha,l\gamma}(k,\omega')|^2 +
|G^R_{l\alpha,l\gamma}(-k,\omega'_k)|^2
\Big)
\nonumber \\ &&
- |G^R_{l\alpha,l\beta}(k,\omega')|^2 -
|G^R_{l\beta,l\alpha}(k,\omega')|^2 \bigg\},
\nonumber \\
P^{(sh)}_{\alpha \beta} & = & \int_{-\infty}^\infty \frac{d\omega'}{2\pi}
\Gamma(\omega') \sum_{k=-\infty}^\infty \Gamma(\omega'_k) \bigg\{
\delta_{\alpha \beta}
\left( f(\omega') - f(\omega'_k) \right)^2
\nonumber \\ &&
\times |G^R_{l\alpha,l\gamma}(k,\omega')|^2 - f(\omega')^2 \Big(
|G^R_{l\alpha,l\beta}(k,\omega')|^2
\nonumber \\ &&
+ |G^R_{l\beta,l\alpha}(k,\omega')|^2 \Big) + 2 f(\omega')f(\omega'_k)
\mbox{Re} \Big[ G^R_{l\beta,l\alpha}(k,\omega')
\nonumber \\ &&
\times G^R_{l\alpha,l\beta}(-k,\omega'_k) \Big] - 2 \mbox{Re} \Big[ \Big(
f(\omega') G^R_{l\beta,l\alpha}(k,\omega')
\nonumber \\ &&
- f(\omega'_k) G^R_{l\alpha,l\beta}(-k,\omega'_k)^* \Big)
G^<_{l\beta,l\alpha}(k,\omega')^* \Big]
\nonumber \\ &&
- |G^<_{l\beta,l\alpha}(k,\omega')|^2 \bigg\}.
\end{eqnarray}
The first term $P^{(th)}_{\alpha \beta}$ is the Nyquist-Johnson noise
while $P^{(sh)}_{\alpha \beta}$ is the shot noise. Using the relation
between the Floquet S-matrix and Green's functions \cite{liliflo}
\begin{eqnarray}
S_{F,\alpha \beta}(\omega_m,\omega_n) & = & \delta_{\alpha \beta}
\delta_{n,m} - i \sqrt{\Gamma(\omega_m)\Gamma(\omega_n)}
\nonumber \\ &&
\times G^R_{l\alpha,l\beta}(m-n,\omega_n),
\end{eqnarray}
it is easy to show that the result given in
Eq. (\ref{zero-freq-noise}) coincides with the one obtained using the
Floquet S-matrix formalism in Ref. \onlinecite{ButtikerNoise}.
The other case of interest is the one in which $\alpha=\beta=P$,
i.e., we concentrate on the current fluctuations of the probe. Using the
fact that the probe is noninvasive, we only keep terms to the lowest
order in the coupling $w_{cP}$ between the system and the
thermometer,
\begin{eqnarray} \label{ccK}
i C_{PP}^K(0,\omega) & = & \Gamma_P \int_{-\infty}^\infty
\frac{d\omega'}{2\pi} \sum_{k=-\infty}^\infty\sum_{\gamma=L,R}
\Gamma_\gamma(\omega') \Big\{ f_\gamma(\omega')
\nonumber \\
& & \times \big[2 - f_P(\omega'_{k} + \omega) - f_P(\omega'_{k} -
\omega) \big]
\nonumber \\
&& + \big[f_P(\omega'_{k} + \omega) + f_P(\omega'_{k} - \omega)\big]
\nonumber \\
&& \times (1 - f_\gamma(\omega')) \Big\} |
G_{lP,l\gamma}^R(k,\omega') |^2.
\end{eqnarray}
On the other hand we need $\overline{\varphi}^*_P(\omega)$, which
is
\begin{eqnarray} \label{ccR}
\overline{\varphi}^*_P(\omega)
& = & \Gamma_P \int_{-\infty}^\infty \frac{d\omega'}{2\pi}
\sum_{k=-\infty}^\infty \sum_{\gamma=L,R}
\Gamma_\gamma(\omega')
\nonumber \\
& & \times \big[f_P(\omega'_{k} - \omega) - f_P(\omega'_{k} +
\omega) \big]
\nonumber \\
& & \times | G_{lP,l\gamma}^R(k,\omega') |^2.
\end{eqnarray}
The functions entering Eqs. (\ref{ccK}) and (\ref{ccR}) are the ones
involved in the definition of effective temperature given in
Eq. (\ref{Teff2}).
\section{Results}\label{results}
In this section we present results for a central device consisting of
non-interacting electrons in a one-dimensional lattice:
\begin{equation}
H_0= -w \sum_{l,l^{\prime}} (c^\dagger_{l} c_{l^{\prime}} + H.c.),
\end{equation}
where $w$ denotes a hopping matrix element between neighboring
positions $l,l^{\prime}$ on the lattice. The driving term is chosen as
\begin{equation} \label{hv}
H_V(t)= \sum_{j=1}^2 e V_j(t) c^\dagger_{lj} c_{lj} ,
\end{equation}
with $V_j(t)= E_B + V_0 \cos( \Omega_0 t + \delta_j)$, $lj$ being the
positions where two oscillating fields with frequency $\Omega_0$ and
phase-lag $\delta$ are applied.
This defines a simple model for a quantum pump where two ac gate
voltages are applied at the walls of a quantum
dot. \cite{liliflo,adia,pump}
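A minimal sketch of this model (parameter values hypothetical, chosen only for illustration) builds the instantaneous single-particle Hamiltonian matrix $H_0 + H_V(t)$ of the driven chain, with the gates of Eq. (\ref{hv}) placed at the two ends:
\begin{verbatim}
# Sketch: H_0 + H_V(t) for an N-site chain with two ac gates.
import numpy as np

N, w = 20, 1.0                  # number of sites and hopping
E_B, V0, Omega0 = 0.2, 0.05, 0.01
l1, l2 = 0, N - 1               # positions of the two barriers
d1, d2 = 0.0, np.pi / 2         # phase lag delta = pi/2

def H_of_t(t):
    H = np.zeros((N, N))
    i = np.arange(N - 1)
    H[i, i + 1] = H[i + 1, i] = -w                  # hopping H_0
    H[l1, l1] = E_B + V0 * np.cos(Omega0 * t + d1)  # gate 1
    H[l2, l2] = E_B + V0 * np.cos(Omega0 * t + d2)  # gate 2
    return H

# Instantaneous spectrum at t = 0, e.g. to locate resonances.
print(np.linalg.eigvalsh(H_of_t(0.0))[:4])
\end{verbatim}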
\subsection{Equivalence between effective and local temperature at
weak driving}
As in Refs. \onlinecite{cal} and \onlinecite{cal2} we are interested
in the weak driving regime, which corresponds to a situation where the
ac voltage amplitudes are lower than the kinetic energy of the
electrons in the structure and the driving frequency is much smaller
than the inverse of the dwell time of these electrons. We have shown
that in this regime the local temperature defined from
Eq. (\ref{Tlocal}), with the chemical potential of the probe fixed, is
identical to the local temperature defined from Eq. (\ref{Tlocal2}),
where the chemical potential of the probe has to be determined in
order to satisfy both equations, and it is also identical to an
effective temperature defined from a local fluctuation-dissipation
relation of single-particle Green's functions (see Eq. (\ref{Teff})).
We now turn our attention to the effective temperature $T^{eff*}$
defined in Eq. (\ref{Teff2}), involving current-current correlation
functions. The correlation functions given in Eqs. (\ref{ccK}) and
(\ref{ccR}) depend on the temperature $T_P$ and the chemical potential
$\mu_P$ of the probe via the Fermi function $f_P$. Thus, the effective
temperature $T^{eff*}$, so calculated, also depends on $T_P$ and
$\mu_P$. There are many possible reasonable choices for the latter
quantities. In this subsection we will concentrate on only one choice
and leave for the next subsection the analysis of other
possibilities. We choose $\mu_P$ equal to the chemical potential $\mu$
of the reservoirs and $T_P$ equal to the local temperature $T_{lP}$,
i.e. the one for which the heat flow between the system and the probe
vanishes.
In Fig. \ref{tanh} we show a typical plot for $iC_{PP}^K(0,\omega)$,
$\overline{\varphi}_P^*(\omega)$ and their ratio as a function of
$\omega$. According to the definition of effective temperature given
in Eq. (\ref{Teff2}), the derivative of this ratio at $\omega = 0$
corresponds to $\beta^{eff*}/2$. This derivative is calculated
numerically. In the same figure we plot $\tanh\left[
\beta^{eff*} \omega/2 \right]$ and we see that the quotient
$\overline{\varphi}_P^*(\omega)/iC_{PP}^K(0,\omega)$ is well fitted by
a FDR-type relation for a reasonably large frequency interval.
\begin{figure}
\centering
\includegraphics[width=80mm,angle=0,clip]{tanh2.eps} {\small {} }
\caption{{ (Color online) Current-current correlation functions
$\overline{\varphi}_P^*(\omega)$ (dotted red),
$iC^K_{PP}(0,\omega)$ (dashed blue), their quotient (green
diamonds), and $\tanh [\beta^{eff*} \omega/2]$ (solid black) as a
function of $\omega$. The reservoirs have chemical potential
$\mu=0.2$ and temperature $T=0.025$. The driving frequency is
$\Omega_0=0.01$, the amplitude is $V_0=0.05$ and $E_B = 0.2$.}}
\label{tanh}
\end{figure}
In Fig. \ref{Omega0} we show the behavior of the effective temperature
$T^{eff*}$ and the local temperature $T_{lP}$, calculated for the site
connected to the left reservoir, as a function of the driving
frequency $\Omega_0$ for two different temperatures of the
reservoirs. This analysis can be done for any site of the central
system, but we chose this particular site because its local temperature
determines the heat current that flows into the left
reservoir.\cite{cal2} Results for any other site of the central system
are similar. In Fig. \ref{Omega0} the upper panel corresponds to
$T=0.016$, while the lower corresponds to $T=0.005$. We see that both
ways of defining the temperature coincide at low frequencies. This
supports the idea that, for a given temperature $T$ of the reservoirs,
$T^{eff*}$ is a {\em bona fide} temperature within the low driving
regime. As we can see from Fig. \ref{Omega0}, the higher the
temperature $T$ of the reservoirs, the broader the region of low
driving frequency $\Omega_0$ in which the two definitions of the
temperature agree.
\begin{figure}
\centering
\includegraphics[width=80mm,angle=0,clip]{Omega0.eps} {\small {} }
\caption{{ (Color online) Local temperature $T_{lP}$ (dashed black)
and effective temperature $T^{eff*}_{lP}$ (solid red) for the site
$lP = lL$ (i.e. the site connected to the left reservoir) as a
function of driving frequency $\Omega_0$. The reservoirs have
chemical potential $\mu=0.2$. The upper panel corresponds to
$T=0.016$, while the lower panel corresponds to $T=0.005$.}}
\label{Omega0}
\end{figure}
As we mentioned earlier, the definition given in Eq. (\ref{Teff2}) can
be used to calculate the effective temperature in each site of the
central system. In Fig. \ref{sitios} we show the comparison between
the local temperature $T_{lP}$ and the effective temperature
$T^{eff*}$ all along the sample. The values of $T_{lP}$ and $T^{eff*}$
are plotted for each point of the linear chain, for $T_L = T_R = T =
0.02$, $\mu_L = \mu_R = \mu = 0.2$ and a particular low value of
$\Omega_0 = 0.001$. We can see that there is a good agreement between
the two temperatures along the whole structure and an almost perfect
agreement within the ``Left'' and ``Right'' regions (defined in
Fig. \ref{setup}), which are the ones from where we can determine the
heat flow between the system and each one of the reservoirs (see
Refs. \onlinecite{cal2},\onlinecite{cal3}). It is also important to
notice the existence of $2k_F$ Friedel-like oscillations, $k_F$ being
the Fermi vector of the electrons leaving the reservoirs. These
oscillations are an indication of quantum interference. They were
previously reported for exactly the same setup we study in this work
\cite{cal,cal2} and also predicted in other mesoscopic systems under a
stationary driving. \cite{dubi-diventra}
\begin{figure}
\centering
\includegraphics[width=80mm,angle=0,clip]{sites.eps} {\small {} }
\caption{{
(Color online) Local temperature $T_{lP}$ (black diamonds) and
effective temperature $T_{lP}^{eff*}$ (red circles) along a
one-dimensional model of $N=30$ sites with two ac fields operating
with a phase lag of $\delta=\pi/2$ at the positions indicated by
dotted lines. The system is in contact with reservoirs with
chemical potentials $\mu=0.2$ and temperature $T=0.02$. The
driving frequency is $\Omega_0=0.001$, the amplitude is $V_0
=0.05$ and $E_B = 0.2$. }}
\label{sitios}
\end{figure}
\subsection{Different choices of $T_P$ and $\mu_P$}
The effective temperature $T^{eff*}$ depends on the values of $T_P$
and $\mu_P$ (respectively the temperature and chemical potential of
the probe). The choice analyzed in the previous section was $\mu_P$
equal to the chemical potential $\mu$ of the reservoirs and $T_P$
equal to the local temperature $T_{lP}$. We will call this choice Case
I. Another suitable choice (Case II) could be to choose $\mu_P = \mu$,
as in the previous case but $T_P$ such that $T^{eff*} = T_P$. A third
choice (Case III) could be to choose $T_P$ such that $T^{eff*} = T_P$
but at the same time $\mu_P = \mu_{lP}$ (the local voltage) in order
to have a vanishing charge current between the system and the probe at
that temperature. In this work we will only deal with these three
possibilities.
In Fig. \ref{comp} we show the three different effective temperatures
corresponding to the above mentioned cases together with the local
temperature as a function of the driving frequency $\Omega_0$ for a
given temperature of the reservoirs. As we can see, all three cases
give a good estimate of the local temperature in the regime of
interest (i.e. low driving frequencies). This behavior supports the
robustness of the definition of the local temperature from a FDR.
\begin{figure}
\centering
\includegraphics[width=80mm,angle=0,clip]{comp2.eps} {\small {} }
\caption{{ (Color online) Local temperature $T_{lP}$ (dashed black)
and effective temperature $T^{eff*}_{lP}$ for Case I (solid red),
Case II (dotted blue) and Case III (dashed and dotted orange) for
the site $lP = lL$ as a function of driving frequency
$\Omega_0$. The reservoirs have chemical potential $\mu=0.2$ and
temperature $T=0.01$.}}
\label{comp}
\end{figure}
\section{Summary and Conclusions}\label{conclusions}
In this work we have calculated the current-current correlation
functions for quantum driven systems and found an explicit expression
for the zero-frequency noise within the Schwinger-Keldysh Green's
functions formalism. In the particular case of multiterminal unbiased
quantum driven systems our result is in agreement with previous
results obtained within the scattering matrix
approach.\cite{ButtikerNoise} For non-interacting systems both
descriptions agree, while the Green's function formalism has the advantage of
providing a systematic framework for the study of interacting systems.
We have also defined an effective temperature from a local
fluctuation-dissipation relation for current-current correlation
functions and showed that for low frequencies it coincides with the
local temperature defined with a thermometer and from a FDR at the
level of single-particle propagators. This result opens the
possibility of using current-current correlations in real experiments
in order to define the local temperature of a driven sample.
\begin{acknowledgments}
We thank M. B\"uttiker and L. Cugliandolo for valuable discussions. We
acknowledge support from CONICET, ANCyT, UBACYT, Argentina and
J. S. Guggenheim Memorial Foundation (LA).
\end{acknowledgments}
\usepackage{amssymb}
\usepackage{amsthm}
\usepackage{amsmath,amssymb,amsopn,amsfonts,mathrsfs,amsbsy,amscd}
\usepackage{longtable}
\usepackage{caption}
\usepackage{multirow}
\usepackage{lineno}
\usepackage{systeme}
\setlength{\parindent}{0pt}
\newcommand{\al}{\alpha}
\newcommand{\om}{\omega}
\newcommand{\G}{{\mathfrak{g}}}
\renewcommand{\h}{{\mathfrak{h}}}
\newtheorem{Def}{Definition}[section]
\newtheorem{theo}{Theorem}[section]
\newtheorem{pr}{Proposition}[section]
\newtheorem{Le}{Lemma}[section]
\newtheorem{co}{Corollary}[section]
\newtheorem{exem}{Example}
\newtheorem{exems}{Examples}
\newtheorem{remark}{Remark}
\newenvironment{eqgauche}%
{%
\stepcounter{equation}
\begin{equation*}}{%
\leqno(\arabic{equation})
\end{equation*}
}
\title{On cosymplectic Lie Algebras}
\author{S. El bourkadi and M. W. Mansouri\\Universit\'e Ibn Tofail\\ Facult\'e des Sciences. Laboratoire L.A.G.A\\ K\'enitra-Maroc\\e-mail: [email protected]\\
[email protected]}
\begin{document}
\maketitle
\begin{abstract}
We give some properties of cosymplectic Lie algebras and show, in particular, that they support a left-symmetric product. We also give some constructions of cosymplectic Lie algebras, as well as a classification of three- and five-dimensional cosymplectic Lie algebras.
\end{abstract}
Key words:
Cosymplectic structures, Left-symmetric product, Double extensions.\\
AMS Subject Class (2010): 53D15, 22E25.
\section{Introduction}
Cosymplectic manifolds were introduced by Libermann in 1958. She defined them as follows: an {\it almost cosymplectic structure} on a manifold $M$ of odd dimension $2n+1$ is a pair $(\alpha,\omega)$, where $\alpha$ is a $1$-form and $\omega$ is a $2$-form such that
$\alpha\wedge\omega^n$ is a volume form on $M$. The structure is
said to be cosymplectic if $\alpha$ and $\omega$ are closed. Any almost cosymplectic structure $(\alpha,\omega)$ uniquely determines a smooth vector field
$\xi$ on $M$, called the {\it Reeb vector field} of the almost cosymplectic manifold $(M,\alpha,\omega)$, which is completely characterized by the following conditions
\begin{equation} \label{1}
\alpha(\xi)=1\quad\mbox{and}\quad \iota_\xi\omega=0,
\end{equation}
where $\iota$ denotes
the interior product. Consider the vector bundle morphism $\varPhi : {\mathfrak X}(M) \longrightarrow\Omega^1(M)$
defined by
\begin{equation} \label{2}
\varPhi(X)=\iota_X\omega+\alpha(X)\alpha.
\end{equation}
The condition that $\alpha\wedge\omega^n$ is a volume form is equivalent to the condition that $\varPhi$ is a vector
bundle isomorphism. In this case the Reeb vector field is given by $\xi= \varPhi^{-1}(\alpha)$.
For more details on cosymplectic geometry, we refer the reader to the survey article \cite{C-N-Y} and the references therein.
In this paper, we are interested in Lie groups admitting a cosymplectic structure which is invariant under left translations (left invariant).
Let $G$ be a real Lie group of dimension $2n+1$ and ${\mathfrak{g}}$ the corresponding Lie algebra. If $G$ is endowed with a left invariant differential $1$-form $\alpha^+$ and a $2$-form $\omega^+$ such that $(\alpha^+,\omega^+)$ is a cosymplectic structure, we will say that $(G,\alpha^+,\omega^+)$ is a {\it cosymplectic Lie group} and that
$({\mathfrak{g}},\alpha,\omega)$ is a {\it cosymplectic Lie algebra}, where $\alpha=\alpha^+(e)$ and $\omega=\omega^+(e)$, with $e$ the unit element of $G$. The triple $({\mathfrak{g}},\alpha,\omega)$ being a cosymplectic Lie algebra is equivalent to the following conditions:
\begin{enumerate}
\item $\alpha([x,y])=0$, \qquad\qquad $\forall x, y\in{\mathfrak{g}}$.
\item $\omega([x,y],z)+\omega([y,z],x)+\omega([z,x],y)=0$, \qquad $\forall x, y, z\in{\mathfrak{g}}$.
\item $\alpha\wedge\omega^n\not=0$.
\end{enumerate}
The Reeb vector field is the unique left invariant vector field $\xi^+$ satisfying $\alpha(\xi)=1$ and $i_\xi\omega=0$, where $\xi=\xi^+(e)$ is called the Reeb vector of $({\mathfrak{g}},\alpha,\omega)$. Note that, by condition 1, a Lie algebra with $[{\mathfrak{g}},{\mathfrak{g}}]={\mathfrak{g}}$ (in particular, a semi-simple Lie algebra) cannot support a cosymplectic structure. See \cite{C-P} for the study of cosymplectic Lie algebras and a characterization.
Recall that a finite-dimensional algebra $({\mathfrak{g}},.)$ is called {\it left-symmetric} if it satisfies the identity
\begin{equation*}
ass(x,y,z)=ass(y,x,z)\qquad \forall x,y,z \in{\mathfrak{g}},
\end{equation*}
where $ass(x,y,z)$ denotes the associator $ass(x,y,z)=(x.y).z-x.(y.z)$. In this case, the commutator $[x,y]= x.y-y.x$
defines a bracket that makes ${\mathfrak{g}}$ a Lie algebra.
Clearly, any {\it associative algebra} product (i.e. $ass(x,y,z)=0$, $\forall x,y,z\in{\mathfrak{g}}$) is a left-symmetric product.
A {\it symplectic Lie algebra} $({\mathfrak{g}},\omega)$ is a real Lie algebra with a skew-symmetric
non-degenerate bilinear form $\omega$ such that for any $x$, $y$, $z\in{\mathfrak{g}}$,
\begin{equation}\label{3}
\oint\omega([x,y],z)=0,
\end{equation}
that is to say, $\omega$ is a non-degenerate $2$-cocycle for the scalar cohomology of ${\mathfrak{g}}$, where $\oint$ denotes summation over cyclic permutations.
It is known that (see \cite{C} and \cite{M-R}) the product given by
\begin{equation}\label{4}
\omega(x*y,z)=-\omega(y,[x,z]),\qquad\forall x,y,z\in{\mathfrak{g}},
\end{equation}
induces a left-symmetric algebra structure on ${\mathfrak{g}}$ that satisfies $x*y-y*x=[x,y]$; we say that this left-symmetric product is {\it associated with the symplectic Lie algebra} $({\mathfrak{g}},\omega)$.
Geometrically, this is equivalent to the existence on a symplectic Lie group of a {\it left-invariant affine structure} (a left-invariant linear connection with zero torsion and zero curvature).
The paper is organized as follows. In Section $2$, we show that any cosymplectic Lie algebra supports a left-symmetric product and we give some properties. In Section $3$, we give some procedures to construct cosymplectic Lie algebras. In particular, we suggest two different constructions of the cosymplectic double extension. In the last section we give some results in low dimension, including a classification of three- and five-dimensional cosymplectic Lie algebras.
\textit{Notations}: For $\{e_i\}_{1\leq i\leq n}$ a basis of ${\mathfrak{g}}$, we denote by $\{e^i\}_{1\leq i\leq n}$ the
dual basis of ${\mathfrak{g}}^\ast$ and by $e^{ij}$ the 2-form $e^i\wedge e^j\in\wedge^2{\mathfrak{g}}^*$. We write $\langle e \rangle:= \mathrm{span}\{e\}$ for the one-dimensional trivial Lie algebra.
The software Maple 18$^\circledR$ has been used to check all needed calculations.
\section{Left-symmetric product associated with the cosymplectic Lie algebra}
In the following, $({\mathfrak{g}},\alpha,\omega)$ is a real cosymplectic Lie algebra of dimension $2n+1$ with the Reeb vector $\xi$. Therefore, we have an isomorphism
\begin{equation}\label{5}
\begin{array}{rcl}
\varPhi : {\mathfrak{g}}& \longrightarrow & {\mathfrak{g}}^* \\
x & \longmapsto & i_x\omega+\alpha(x)\alpha. \\
\end{array}
\end{equation}
Throughout the remainder of this paper we set ${\mathfrak{h}}= \ker\alpha$ and $\omega_{{\mathfrak{h}}}=\omega_{|{\mathfrak{h}}\times{\mathfrak{h}}}$.
\begin{Le}
Let $({\mathfrak{g}},\alpha,\omega)$ be a cosymplectic Lie algebra with the Reeb vector $\xi$.
Then ${\mathfrak{h}}$ is an ideal of ${\mathfrak{g}}$ and $ ({\mathfrak{h}},\omega_{{\mathfrak{h}}}) $ is a symplectic Lie algebra.
\end{Le}
\begin{proof} Let $x\in{\mathfrak{h}}$ and $y\in{\mathfrak{g}}$. We have
\[\alpha([x,y])=-{\mathrm{d}}\alpha(x,y)=0,\]
so ${\mathfrak{h}}$ is an ideal of ${\mathfrak{g}}$. We now prove that $ ({\mathfrak{h}},\omega_{{\mathfrak{h}}}) $ is a symplectic Lie algebra. It is clear that $\omega_{\mathfrak{h}}$ is a 2-cocycle. Consider a basis
$\{\xi,e_1,...,e_{2n}\}$ of ${\mathfrak{g}}$, where $\{e_1,...,e_{2n}\}$ is a basis of ${\mathfrak{h}}$. We have
\[0\not=\alpha\wedge\omega^n(\xi,e_1,...,e_{2n})=\omega_{{\mathfrak{h}}}^n(e_1,...,e_{2n}).\]
Then $ ({\mathfrak{h}},\omega_{\mathfrak{h}}) $ is a symplectic Lie algebra.
\end{proof}
Denote by $*$ the left-symmetric product associated with the symplectic Lie algebra $({\mathfrak{h}},\omega_{{\mathfrak{h}}})$ and by $ass^*$ its associator (i.e. $ass^*(x,y,z)=(x*y)*z-x*(y*z)$, $\forall x,y,z\in{\mathfrak{h}}$).
As in the symplectic framework, the non-degeneracy of $\varPhi$ defines a product on ${\mathfrak{g}}$ by
\begin{equation} \label{6}
\varPhi(x.y)(z)=- \varPhi(y)([x,z])\qquad x,y,z\in{\mathfrak{g}}.
\end{equation}
\begin{pr}\label{pr2.1}
The product defined by $(\ref{6})$ is characterized by
\begin{enumerate}
\item For $x,\,y\in{\mathfrak{h}}$, we have
\[x.y=x*y+\omega(x,{\mathrm{ad}}_{\xi}y)\xi\]
\item For $ x\in{\mathfrak{g}}$, we have
\[\xi.x={\mathrm{ad}}_\xi x\quad\mbox{and}\quad x.\xi=0.\]
\end{enumerate}
\end{pr}
\begin{proof}
\begin{enumerate}
\item For all $x,y\in{\mathfrak{h}}$ and $z\in{\mathfrak{g}}$ the relation $(\ref{6})$ becomes
\begin{align*}
\omega(x.y,z)+\alpha(x.y)\alpha(z)&=-\omega(y,[x,z])-\alpha(y)\alpha([x,z])\qquad\\
&=-\omega(y,[x,z]).
\end{align*}
If $z\in{\mathfrak{h}}$, we have $\omega_{|{\mathfrak{h}}}(x.y,z)=-\omega_{|{\mathfrak{h}}}(y,[x,z])$
and if $z=\xi$, we have
\begin{align*}
\alpha(x.y)&=-\omega(y,[x,\xi])\\
&=\omega(x,[\xi,y]).
\end{align*}
This shows 1.
\item For $x=\xi$ and $y\in{\mathfrak{h}}$, the relation $(\ref{6})$ becomes
\begin{align*}
\omega(\xi.y,z)+\alpha(\xi.y)\alpha(z)&=-\omega(y,[\xi,z]),
\end{align*}
On one hand, for $z=\xi$ we obtain $\alpha(\xi.y)=0,$ so $\xi.y\in{\mathfrak{h}}$; on the other hand, for $z\in{\mathfrak{h}}$, we have
\begin{align*}
\omega_{{\mathfrak{h}}}(\xi.y,z)&=-\omega_{{\mathfrak{h}}}(y,[\xi,z])\\
&=\omega_{{\mathfrak{h}}}([\xi,y],z),
\end{align*}
hence, $\xi.x={\mathrm{ad}}_\xi x$, $\forall x\in{\mathfrak{g}}$.
Finally, for $y=\xi$ we also have
\begin{align*}
\omega(x.\xi,z)+\alpha(x.\xi)\alpha(z)&=0,
\end{align*}
then $x.\xi=0$, $\forall x\in{\mathfrak{g}}$.
\end{enumerate}
\end{proof}
The following lemma shows that ${\mathrm{ad}}_{\xi}$ is a derivation relatively to the left-symmetric product associated with the symplectic Lie algebra $({\mathfrak{h}},\omega_{{\mathfrak{h}}})$.
\begin{Le}\label{l2}
For all $x$, $y\in{\mathfrak{h}}$ we have
\begin{equation*}
{\mathrm{ad}}_{\xi}(x*y)={\mathrm{ad}}_{\xi}x*y+x*{\mathrm{ad}}_{\xi}y
\end{equation*}
\end{Le}
\begin{proof}
Note first that ${\mathrm{ad}}_\xi x\in{\mathfrak{h}}$ for all $x\in{\mathfrak{h}}$. For all $x$, $y\in{\mathfrak{h}}$ we have
(using $(\ref{3})$, $(\ref{4})$ and the Jacobi identity):
\begin{align*}
\omega_{{\mathfrak{h}}}({\mathrm{ad}}_{\xi}(x*y)-{\mathrm{ad}}_{\xi}x*y,z)
&=\omega([\xi,x*y],z)+\omega(y,[[\xi,x],z])\\
&=-\omega([z,\xi],x*y)+\omega(y,[[\xi,x],z])\\
&=\omega([x,[z,\xi]],y)-\omega([[\xi,x],z],y)\\
&=\omega([[x,z],\xi],y)\\
&=-\omega([\xi,y],[x,z])\\
&=\omega_{{\mathfrak{h}}}(x*{\mathrm{ad}}_{\xi}y,z).
\end{align*}
\end{proof}
The following theorem shows in particular, that a cosymplectic Lie algebra supports a left-symmetric product structure.
\begin{theo}
The product defined by
\begin{equation}
\varPhi(x.y)(z)=- \varPhi(y)([x,z])\qquad x,y,z\in{\mathfrak{g}},
\end{equation}
is a left-symmetric product in ${\mathfrak{g}}$.
\end{theo}
\begin{proof}
On one hand, for all $x$, $y$ and $z\in{\mathfrak{h}}$, we have
\begin{align*}
ass(x,y,z)&=(x.y).z-x.(y.z)\\
&=(x*y+\omega(x,[\xi,y])\xi).z-x.(y*z+\omega(y,[\xi,z])\xi)\\
&=(x*y).z+\omega(x,[\xi,y])\xi.z-x.(y*z)\\
&=(x*y)*z+\omega(x*y,[\xi,z])\xi+\omega(x,[\xi,y])\xi.z-x*(y*z)-\omega(x,[\xi,y*z])\xi\\
&=(x*y)*z-x*(y*z)+\omega(x,[\xi,y])\xi.z+A(x,y)\xi.
\end{align*}
where $A(x,y)=\omega(x*y,[\xi,z])-\omega(x,[\xi,y*z])$ (for fixed $z$). It is clear that $(x*y)*z-x*(y*z)=(y*x)*z-y*(x*z)$; using $(\ref{3})$ and the fact that $\omega(\xi,.)=0$, we get
\[\omega(x,[\xi,y])=\omega(y,[\xi,x]).\]
We also have
\begin{align*}
A(x,y)-A(y,x)&=\omega([x,y],[\xi,z])-\omega(x,[\xi,y*z])+\omega(y,[\xi,x*z])\\
&=-\omega(z,[[x,y],\xi])+\omega(y*z,[x,\xi])-\omega(x*z,[y,\xi])\\
&=\omega(z,[\xi,[x,y]])+\omega(z,[y,[\xi,x]])+\omega(z,[x,[y,\xi]])\\
&=0.
\end{align*}
It follows that, $ass(x,y,z)=ass(y,x,z)$ for all $x$,$y$,$z\in{\mathfrak{h}}$.
On the other hand, for all $x$ and $y\in{\mathfrak{h}}$, we have
\begin{align*}
ass(\xi,x,y)-ass(x,\xi,y)&=(\xi.x).y-\xi.(x.y)-(x.\xi).y+x.(\xi.y)\\
&=({\mathrm{ad}}_{\xi}x)*y+\omega([\xi,x],[\xi,y])\xi-{\mathrm{ad}}_{\xi}(x.y)+x*{\mathrm{ad}}_{\xi}y+\omega(x,[\xi,[\xi,y]])\xi.
\end{align*}
Then $ass(\xi,x,y)-ass(x,\xi,y)=0$ is equivalent to
\[
\left\{
\begin{array}{l}
{\mathrm{ad}}_{\xi}(x*y)={\mathrm{ad}}_{\xi}x*y+x*{\mathrm{ad}}_{\xi}y\\
\omega([\xi,x],[\xi,y])=-\omega(x,[\xi,[\xi,y]]).
\end{array}
\right.
\]
The first equation is ensured by Lemma $\ref{l2}$, while the second follows from the closedness of $\omega$ and from the fact that $\omega(\xi,.)=0$.
\end{proof}
It is known (see, for example, \cite{M}, Proposition 1-1) that in a Lie group an affine connection is bi-invariant if and only if its associated left-symmetric product is associative. We get the following result.
\begin{co}\label{co 2.1}
Let $(G,\alpha^+,\omega^+)$ be a cosymplectic Lie group with the Reeb vector $\xi^+$. The associated affine structure is bi-invariant if and only if the following conditions are satisfied
\begin{enumerate}
\item $ass^*(x,y,z)=\omega({\mathrm{ad}}_\xi y,x)\,{\mathrm{ad}}_\xi z$.
\item $x*{\mathrm{ad}}_\xi y=0$.
\item $({\mathrm{ad}}_\xi x)*y=({\mathrm{ad}}_\xi y)*x$.
\item ${\mathrm{ad}}^2_\xi x=0$.
\end{enumerate}
for all $x,y,z\in{\mathfrak{h}}$.
\end{co}
\section{Cosymplectic double extensions}
We suggest two different constructions of the cosymplectic double extension. In the first construction, we combine two characterizations of cosymplectic Lie algebras. The second construction is inspired by the notion of symplectic double extension.
\textbf{Double extensions of Lie algebras}: Let $(\overline\G,\overline{[\,,\,]})$ be a Lie algebra and let $\theta\in Z^2(\overline\G)$ be a $2$-cocycle. We will denote by $(\overline\G_{(\theta,e)},\overline{[\,,\,]}_\theta)$
the central extension of $\overline\G$ by the 2-cocycle $\theta$, i.e.,
\begin{eqnarray*}
\overline\G_{(\theta,e)}=(\overline\G\oplus \langle e\rangle, [.,.]_\theta)\quad\mbox{with}\quad
[x,y]_\theta &=& \overline{[x,y]}+\theta(x,y)e\quad x,y\in\overline\G.
\end{eqnarray*}
Let \[{\mathfrak{g}}=\langle d\rangle\oplus\overline\G\oplus \langle e\rangle\]
be the direct sum of $\overline\G$ with the one-dimensional vector spaces $\langle e\rangle$ and $\langle d\rangle$.
Define an alternating bilinear product $[.,.] : {\mathfrak{g}}\times{\mathfrak{g}}\longrightarrow{\mathfrak{g}}$ by
\begin{align*}
[x,y] &= \overline{[x,y]}+\theta(x,y)e\quad\qquad x, y\in\overline\G \\
{[d,x]} &= \varphi(x)+\lambda(x)e \qquad\qquad x\in\overline\G\\
{[d,e]} &=v+te.
\end{align*}
where $D=(\varphi,\lambda,v,t)\in End(\overline\G)\times\overline\G^*\times\overline\G\times\mathbb{R}$.
Write
\begin{align*}
\partial\varphi(x,y)&=\varphi(\overline{[x,y]})-\overline{[\varphi(x), y]}-\overline{[x,\varphi(y)]}\\ \theta_\varphi(x,y)&=\theta(\varphi(x),y)+\theta(x,\varphi(y)).
\end{align*}
We have the following result
\begin{pr}\label{pr 3.1}
The alternating product $[\,,\,]$ above, defines a Lie algebra $({\mathfrak{g}},[\,,\,])$ if and only if
\begin{enumerate}
\item $\partial\varphi=\theta v$
\item $t\theta-\theta_\varphi={\mathrm{d}}\lambda$
\item $ v\in Z(\overline\G) \cap\ker(\theta)$.
\end{enumerate}
where $\ker(\theta)=\{x\in\overline\G\; |\;\theta(x,y) = 0,\quad \forall y \in\overline\G\}$.
\end{pr}
\begin{proof}
On one hand, for $x$, $y\in\overline\G$ we have
\begin{align*}
\oint[[d,x],y] &=[[d,x],y]+[[x,y],d]+[[y,d],x] \\
&=[\varphi(x)+\lambda(x)e,y]+[\overline{[x,y]}+\theta(x,y)e,d]-[\varphi(y)+\lambda(y)e,x] \\
&=\overline{[\varphi(x),y]}+\theta(\varphi(x),y)e- \varphi(\overline{[x,y]})-\lambda(\overline{[x,y]})e-\theta(x,y)v\\
&-t\theta(x,y)e-\overline{[\varphi(y),x]}-\theta(\varphi(y),x)e\\
&=\big(\partial\varphi(x,y)-\theta(x,y)v\big)+\big(\theta_\varphi(x,y)-t\theta(x,y)-\lambda(\overline{[x,y]})\big)e,
\end{align*}
hence conditions $1.$ and $2.$ On the other hand, for $x\in\overline\G$ we have:
\begin{align*}
\oint[[x,e],d]
&=[[x,e],d]+[[e,d],x]+[[d,x],e]\\
&=0+[v+te,x]+0\\
&=\overline{[v,x]}+\theta(v,x)e,
\end{align*}
hence $v\in Z(\overline\G)\cap\ker(\theta)$. The other possible Jacobi identities are immediately checked.
\end{proof}
If the conditions of Proposition $\ref{pr 3.1}$ hold, we call the Lie algebra $\overline\G(D,\theta)=({\mathfrak{g}},[\,,\,])$ the {\it double extension} of $(\overline\G,\overline{[\,,\,]})$ by $(D,\theta)$, with $D=(\varphi,\lambda,v,t)\in End(\overline\G)\times\overline\G^*\times\overline\G\times\mathbb{R}$.
\begin{remark}
\begin{enumerate}
\item The Lie algebra $\overline\G(D,\theta)$ is a semi-direct product $\langle d\rangle\ltimes(\overline\G\oplus \langle e\rangle)$ of the abelian Lie algebra $\langle d\rangle$ with central extension $\overline\G\oplus \langle e\rangle$ relatively to the derivation $D\in Der( \overline\G\oplus \langle e\rangle)$ given by
\begin{align*}
D(x) &= \varphi(x)+\lambda(x)e, \qquad x\in\overline\G\\
D(e) &= v+te.
\end{align*}
\item $\varphi$ is a derivation of the Lie algebra $\overline\G$ if and only if $v=0$ or $\theta=0$. In particular, this holds when either $Z(\overline\G)=\{0\}$ or $\ker(\theta)=\{0\}$.
\end{enumerate}
\end{remark}
\subsection{The first construction}
We start with two characterizations of cosymplectic Lie algebras. The first, given in \cite{L-S}, characterizes cosymplectic manifolds; we state here its analogue for cosymplectic Lie algebras.
\begin{pr}\cite{L-S}\label{pr 3.2}
Let $\overline\G$ be a Lie algebra and $\overline\al$, $\overline\om$ two forms on $\overline\G$ with degrees $1$ and $2$ respectively. Consider on the direct sum ${\mathfrak{g}}=\overline\G\oplus\langle e \rangle $ the 2-form $\omega=\overline\om+\overline\al\wedge e^*$. Then $(\overline\G,\overline\al,\overline\om)$ is a cosymplectic Lie algebra if and only if $({\mathfrak{g}},\omega)$ is a symplectic Lie algebra.
\end{pr}
\begin{remark}
The Lie algebra ${\mathfrak{g}}=\overline\G\oplus\langle e \rangle $ is a central extension of $\overline\G$ by $\theta=0$.
\end{remark}
The second characterization is the following (see \cite{B-F-M} and \cite{C-P}).
\begin{pr}\label{pr 3.3}
There exists a one-to-one correspondence between $(2n+1)$-dimensional
cosymplectic Lie algebras $({\mathfrak{g}},\alpha,\omega)$ and $(2n)$-dimensional symplectic Lie algebras $({\mathfrak{h}}=\ker \alpha,\omega_{{\mathfrak{h}}})$ together with a derivation $D\in Der({\mathfrak{h}})$ such that $D$ satisfies
\begin{equation}\label{ist}
\omega_{{\mathfrak{h}}}(Dx,y)=-\omega_{{\mathfrak{h}}}(x,Dy)\qquad x,y\in{\mathfrak{h}}.
\end{equation}
\end{pr}
An endomorphism satisfying $(\ref{ist})$ is called an infinitesimal symplectic transformation (for short, an i.s.t.).
By combining these two characterizations, we propose the following constructions of cosymplectic Lie algebras.
Let $(\overline\G,\overline\al,\overline\om)$ be a cosymplectic Lie algebra with the Reeb vector $\overline\xi$; recall that $\overline\G=\overline\h\oplus\langle\overline\xi\rangle$ with $\overline\h=\ker(\overline\al)$.
Let $(\overline\G\oplus\langle e \rangle, {\omega}=\overline\om+\overline\al\wedge e^*)$ be the symplectic Lie algebra associated with $(\overline\G,\overline\al,\overline\om)$ (see Proposition $\ref{pr 3.2}$). A derivation $D \in Der(\overline\G\oplus\langle e \rangle)$ consists of a $4$-tuple $(\varphi,\lambda,v,t)\in End(\overline\G)\times\overline\G^*\times\overline\G\times\mathbb{R}$ such that
\begin{align*}
D(x) &= \varphi(x)+\lambda(x)e, \qquad x\in\overline\G\\
D(e) &= v+te.
\end{align*}
\begin{Le}\label{L3.1}
The derivation $D \in Der({\mathfrak{g}})$ is an i.s.t. if and only if the following four conditions are satisfied
\begin{enumerate}
\item[(i)] $\varphi$ is an i.s.t in $\overline\h$,
\item[(ii)] $\lambda(x)= \overline\om(x,\varphi(\overline\xi))$,\qquad for all $x\in\overline\h$,
\item[(iii)] $\overline\om(v,x)=\overline\al\circ\varphi(x)$,\qquad for all $x\in\overline\h$,
\item[(iv)] $t=-\overline\al\circ\varphi(\overline\xi)$.
\end{enumerate}
\end{Le}
\begin{proof}
For $x$, $y\in\overline\G$ we have
\begin{align}\label{8}
\omega(Dx,y)+\omega(x,Dy)&= \overline\om(\varphi(x),y)+\overline\om(x,\varphi(y))+\lambda(x)\omega(e,y)+\lambda(y)\omega(x,e).
\end{align}
If $x\in\overline\h$ and $y\in\overline\h$, then $(\ref{8})$ and the fact that $D$ is an i.s.t. yield $\overline\om(\varphi(x),y)=-\overline\om(x,\varphi(y))$.
If $x\in\overline\h$ and $y=\overline\xi$, then applying $(\ref{8})$ once again, the fact that $D$ is an i.s.t. gives $\lambda(x)= \overline\om(x,\varphi(\overline\xi))$.
Now for $x\in\overline\h$ and $y=e$, we have
\begin{align*}
\omega(Dx,e)+\omega(x,De)&= \omega(\varphi(x)+\lambda(x)e,e)+\omega(x,v+te)\\
&= \omega(\varphi(x),e)+\overline\om(x,v),
\end{align*}
since $D$ is an i.s.t, it follows that $\overline\om(x,v)=\overline\al\circ\varphi(x)$.
Finally for $x=\overline\xi$ and $y=e$, we have
\begin{align*}
\omega(D\overline\xi,e)+\omega(\overline\xi,De) &=\omega(\varphi(\overline\xi)+\lambda(\overline\xi)e,e)+\omega(\overline\xi,v+te)\\
&=\omega(\varphi(\overline\xi),e)+t,
\end{align*}
so $t=-\overline\al\circ\varphi(\overline\xi)$. This completes the proof.
\end{proof}
Let $D=(\varphi,\lambda,v,t)$ satisfy the conditions of Lemma \ref{L3.1}.
Denote by $*$ the left-symmetric product associated with the symplectic Lie algebra $(\overline\h,\overline\om)$ and let $\overline{R}_x$ denote right multiplication by the element $x$, that is, $\overline{R}_xy=y*x$. We obtain the following result.
\begin{theo}
Let $(\overline\G,\overline\al,\overline\om)$ be a cosymplectic Lie algebra with the Reeb vector $\overline\xi$. Let ${\mathfrak{g}}=\langle d \rangle\oplus\overline\G\oplus\langle e \rangle$ be a double extension of $\overline\G$ by $(D,\theta=0)$, where $D=(\varphi,\lambda,v,t)$ satisfies the hypotheses of Lemma \ref{L3.1}. Then
\[{\omega}= \overline\om+\overline\al\wedge e^*,\qquad
{\alpha}= d^*,\]
defines a cosymplectic structure in ${\mathfrak{g}}$ if and only if
\begin{enumerate}
\item $\varphi\in Der(\overline\G)$,
\item $v\in Z(\overline\G)$
\item $\overline{R}_{\varphi(\overline\xi)}=0$ and $[\varphi(\overline\xi),\overline\xi]=0$.
\end{enumerate}
Also, the Reeb vector is $d$.
\end{theo}
\begin{proof} With $\theta=0$, Proposition \ref{pr 3.1} gives $\varphi\in Der(\overline\G)$, $v\in Z(\overline\G)$ and ${\mathrm{d}}\lambda=0$.
\begin{eqnarray*}
{\mathrm{d}}\lambda=0 &\Leftrightarrow& \left \{
\begin{array}{rll}
\overline\om([x,y],\varphi(\overline\xi))&=0&\forall x,y\in\overline\h \qquad(^*)\\
\overline\om([\overline\xi,x],\varphi(\overline\xi))&=0&\forall x\in\overline\h.\qquad(^{**})
\end{array}
\right.
\end{eqnarray*}
Condition $(^*)$ is equivalent to $\overline\om(x,y*\varphi(\overline\xi))=0$ for all $x,y\in\overline\h$, that is, $\overline{R}_{\varphi(\overline\xi)}=0$.
Since $\overline\om$ is a 2-cocycle, $(^{**})$ becomes
\[ \overline\om([\varphi(\overline\xi),\overline\xi],x)=0,\qquad\forall x\in\overline\h,\]
and, as $\overline\om$ is a non-degenerate 2-form on $\overline\h$, this yields $[\varphi(\overline\xi),\overline\xi]=0$. To complete the proof, it remains to verify that $\alpha\wedge\omega^{n+1} \neq0$, which is equivalent to proving that $\varPhi : {\mathfrak{g}}\longrightarrow{\mathfrak{g}}^*$ is an isomorphism. Indeed, for $x\in\langle d \rangle\oplus\overline\G\oplus\langle e \rangle$ we can write $x=x_1d+\overline{x}+x_2\overline\xi+x_3e$ with $\overline{x}\in\overline\h$ and $x_1,\,x_2,\,x_3\in\mathbb{R}$. A direct calculation gives us
\[\left\{
\begin{array}{ll}
\varPhi(x,\overline{y})&=\overline\om_{\overline\h}(\overline{x},\overline{y})\qquad \forall \overline{y}\in\overline\h\\
\varPhi(x,\overline\xi)&=-x_3\\
\varPhi(x,e)&=x_2\\
\varPhi(x,d)&=x_1.
\end{array}
\right.\] This means that if $\varPhi(x,.)=0$,
then $x=0$.
\end{proof}
\subsection{The second construction}
Now let $(\overline\G,\overline\al,\overline\om)$ be a cosymplectic Lie algebra with Lie bracket $\overline{[.,.]}$, and let
\[{\mathfrak{g}}=\langle d\rangle\oplus\overline\G\oplus \langle e\rangle\]
be the double extension of $(\overline\G,\overline{[\,,\,]})$ by $(D,\theta)$. We define a 2-form $\omega$ and a 1-form $\alpha$ on the vector space ${\mathfrak{g}}$ by requiring that
\[\omega=\overline{\omega}+d^*\wedge e^*\]
and
\[\alpha(x)=\overline{\alpha}(x),\;\;\forall x\in\overline\G,\quad
\alpha(d)\in \mathbb{R},\quad\alpha(e)\in \mathbb{R}.\]
On one hand, if we require that $\alpha$ be a $1$-cocycle (i.e. $\alpha([\,,\,])=0$), we get
\[
\left\{
\begin{array}{l}
\alpha([x,y])=\theta(x,y)\alpha(e)=0\\
\alpha([d,x])=\overline\al(\varphi(x))+\lambda(x)\alpha(e)=0\\
\alpha([d,e])=\alpha(v)+t\alpha(e)=0.
\end{array}
\right.\qquad (\dagger)
\]
On the other hand, requiring $\omega$ to be a $2$-cocycle gives, for all $x$, $y\in\overline\G$,
\[
\left\{
\begin{array}{ll}
\oint\omega([x,y],d)&=\omega(\overline{[x,y]},d)+\theta(x,y)\omega(e,d)+\overline\om(\varphi(x),y)+\lambda(x)\omega(e,y)-\overline\om(\varphi(y),x)-\lambda(y)\omega(e,x)\\
&=-\theta(x,y)+\overline\om(\varphi(x),y)+\overline\om(x,\varphi(y))=0\\
\oint\omega([x,e],d)&=\overline\om(v,x)=0.
\end{array}
\right.\quad (\ddagger)
\]
We distinguish two cases.
\textbf{The first case}: $\alpha(e)=0$. We use $(\dagger)$ and $(\ddagger)$ to obtain
\begin{equation*}
\overline\al\circ\varphi=0\quad\mbox{and}\quad\alpha(v)=0,
\end{equation*}
\begin{equation*}
\theta=\overline\om_\varphi\quad\mbox{and}\quad v\in\ker(\overline\om).
\end{equation*}
Note that $\alpha(v)=0$ and $v\in\ker(\overline\om)$ together imply that $v=0$.
Write $\overline\om_{\varphi,\varphi}=\theta_\varphi$ (recall that $\theta=\overline\om_\varphi$ here). Combining with Proposition $\ref{pr 3.1}$, we obtain the following result.
\begin{theo}
Let $(\overline\G,\overline\al,\overline\om)$ be a cosymplectic Lie algebra with the Reeb vector $\overline\xi$. Let ${\mathfrak{g}}=\langle d\rangle\oplus\overline\G\oplus \langle e\rangle$ be a double extension of $\overline\G$ by $(D,\overline\om_\varphi)$ with $D=(\varphi,\lambda,0,t)\in Der(\overline\G)\times\overline\G^*\times\overline\G\times\mathbb{R}$. Then
\[{\omega}= \overline\om+d^*\wedge e^*\]
\[\alpha(x)=\overline{\alpha}(x),\;\;\forall x\in\overline\G,\quad
\alpha(d)\in \mathbb{R},\]
defines a cosymplectic structure in ${\mathfrak{g}}$ if and only if
\begin{enumerate}
\item $\alpha(e)=0$, \item $\overline\al\circ\varphi=0$,
\item $t\overline\om_\varphi-\overline\om_{\varphi,\varphi}={\mathrm{d}}\lambda$.
\end{enumerate}
Also, the Reeb vector is $\overline\xi$.
\end{theo}
\begin{proof}
It remains to verify that $\varPhi : {\mathfrak{g}}\longrightarrow{\mathfrak{g}}^*$ is an isomorphism. For $x\in\langle d \rangle\oplus\overline\G\oplus\langle e \rangle$ we can write $x=x_1d+\overline{x}+x_2\overline\xi+x_3e$ with $\overline{x}\in\overline\h$ and $x_1,\,x_2,\,x_3\in\mathbb{R}$. A direct calculation gives us
\[\left\{
\begin{array}{ll}
\varPhi(x,\overline{y})&=\overline\om_{\overline\h}(\overline{x},\overline{y})\qquad \forall \overline{y}\in\overline\h\\
\varPhi(x,e)&=x_1\\
\varPhi(x,\overline\xi)&=x_1\alpha(d)+x_2\\
\varPhi(x,d)&=-x_3+x_1\alpha^2(d)+x_2\alpha(d).
\end{array}
\right.\]
This means that if $\varPhi(x,.)=0$, then $x=0$.
\end{proof}
\textbf{The second case}: $\alpha(e)\not=0$. To simplify, we can always take $\alpha(e)=-1$. We use $(\dagger)$ and $(\ddagger)$ to obtain
\begin{equation*}
\overline\om_\varphi=\theta=0\quad\mbox{and}\quad \varphi\in Der(\overline\G)
\end{equation*}
\begin{equation*}
\lambda=\overline\al\circ\varphi\quad\mbox{and}\quad t=\overline\al(v)
\end{equation*}
\begin{equation*}
v\in\ker(\overline\om).
\end{equation*}
Combining with Proposition \ref{pr 3.1}, we obtain the following result.
\begin{theo}
Let $(\overline\G,\overline\al,\overline\om)$ be a cosymplectic Lie algebra with the Reeb vector $\overline\xi$. Let ${\mathfrak{g}}=\langle d \rangle\oplus\overline\G\oplus\langle e \rangle$ be a double extension of $\overline\G$ by $(D,\theta=0)$, where $D=(\varphi,\overline\al\circ\varphi,v,\overline\al(v))\in Der({\overline\G})\times{\overline\G}^*\times{\overline\G}\times\mathbb{R}$. Then
\[{\omega}= \overline\om+d^*\wedge e^*\]
\[\alpha(x)=\overline{\alpha}(x),\;\;\forall x\in\overline\G,\quad
\alpha(d)\in \mathbb{R},\quad\alpha(e)=-1,\]
defines a cosymplectic structure in ${\mathfrak{g}}$ if and only if
\begin{enumerate}
\item $\overline\om_\varphi=0$,
\item $\overline\al\circ\varphi([x,y])=0$,\quad $\forall x,y\in\overline\G$.
\item $ v\in Z(\overline\G)\cap \ker(\overline\om)$.
\end{enumerate}
Also, the Reeb vector is $\overline\xi$.
\end{theo}
\begin{proof}
It remains to verify that $\varPhi : {\mathfrak{g}}\longrightarrow{\mathfrak{g}}^*$ is an isomorphism. For $x\in\langle d \rangle\oplus\overline\G\oplus\langle e \rangle$ we can write $x=x_1d+\overline{x}+x_2\overline\xi+x_3e$ with $\overline{x}\in\overline\h$ and $x_1,\,x_2,\,x_3\in\mathbb{R}$. A direct calculation gives us
\[\left\{
\begin{array}{ll}
\varPhi(x,\overline{y})&=\overline\om_{\overline\h}(\overline{x},\overline{y})\qquad \forall \overline{y}\in\overline\h\\
\varPhi(x,\overline\xi)&=x_1\alpha(d)+x_2-x_3\\
\varPhi(x,e)&=x_1-x_1\alpha(d)-x_2+x_3\\
\varPhi(x,d)&=-x_3+x_1\alpha^2(d)+x_2\alpha(d)-x_3\alpha(d),
\end{array}
\right.\]
which means that if $\varPhi(x,.)=0$, then $x=0$.
\end{proof}
\begin{exems}
In order to illustrate and compare these constructions, we end this section with examples of constructions of five-dimensional cosymplectic Lie algebras from the same three-dimensional cosymplectic Lie algebra.
We consider the following three-dimensional cosymplectic Lie algebra:
\[\overline\G : [e_1,e_2]=e_1\quad\mbox{and}\quad\overline\al=e^3,\quad\overline\om=e^{12}.\]
Let ${\mathfrak{g}}=\langle e_4 \rangle\oplus\overline\G\oplus\langle e_5 \rangle$ be a double extension of $(\overline\G,\overline{[\,,\,]})$ by $(D,\theta)$, with $D=(\varphi,\lambda,v,t)$.
It is straightforward to check that the derivations on the Lie algebra $\overline\G$ take the form
\[\varphi=\begin{pmatrix}
a&b&0\\0&0&0\\0&c&f
\end{pmatrix},\quad a,b,c,f\in\mathbb{R}.\]
Set $v=ze_3\in Z(\overline\G)$ and $\lambda=\lambda_1e^1+\lambda_2e^2+\lambda_3e^3\in\overline\G^*$,\quad $z,\lambda_i\in\mathbb{R}$.
After standard calculations, we obtain the following.
\begin{enumerate}
\item[]\textit{The first construction}, gives us the following cosymplectic Lie algebra
\[{\mathfrak{g}}_1 : \begin{array}{ll}
[e_1,e_2]=e_1, &[e_4,e_3]=fe_3+\lambda_3e_5\\
{[e_4,e_2]=be_1}, &[e_4,e_5]=ze_3-fe_5
\end{array}\]
\[\alpha=e^4,\quad\omega=e^{12}+e^{35},\quad\xi=e_4.\]
\item[]\textit{The second construction}:
\[{\mathfrak{g}}_2 : \begin{array}{ll}
[e_1,e_2]=e_1+ae_5, &[e_4,e_2]=be_1+\lambda_2e_5\\
{[e_4,e_1]=ae_1+(a^2-ta)e_5}, &[e_4,e_3]=\lambda_3e_5\\
{[e_4,e_5]=te_5}
\end{array}\]
\[\alpha=e^3+xe^4,\quad\omega=e^{12}+e^{45},\quad\xi=e_3,\quad x\in\mathbb{R}.\]
\item[]\textit{The third construction}:
\[{\mathfrak{g}}_3 : \begin{array}{ll}
[e_1,e_2]=e_1, &[e_4,e_3]=f(e_3+e_5)\\
{[e_4,e_2]=be_1+c(e_3+e_5)}, &[e_4,e_5]=t(e_3+e_5)
\end{array}\]
\[\alpha=e^3+xe^4-e^5,\quad\omega=e^{12}+e^{45},\quad\xi=e_3,\quad x\in\mathbb{R}.\]
\end{enumerate}
It is clear that for a suitable choice of the constants $(b,f,\lambda_3,z)$ the derived Lie algebra $D({\mathfrak{g}}_1)$ is three-dimensional, while $\dim(D({\mathfrak{g}}_2))$ and $\dim(D({\mathfrak{g}}_3))$ are less than or equal to two. Therefore the cosymplectic Lie algebra ${\mathfrak{g}}_1$ never arises from the other two constructions.
\end{exems}
\section{Classification in low dimensions}
\subsection{Three-dimensional cosymplectic Lie algebras}
Let ${\mathfrak{g}}$ be a three-dimensional Lie algebra. Denote by $\alpha=a_1e^1+a_2e^2+a_3e^3$ a one-form and by $\omega=a_{12}e^{12}+a_{13}e^{13}+a_{23}e^{23}$ a two-form on ${\mathfrak{g}}$. On each three-dimensional Lie algebra we compute the one- and two-cocycle conditions for $\alpha$ and $\omega$, respectively. Next, we calculate the rank of $\varPhi$; if $\varPhi$ has maximal rank, then ${\mathfrak{g}}$ supports a cosymplectic structure.
\begin{pr}
Let $({\mathfrak{g}},\alpha,\omega)$ be a three-dimensional cosymplectic real Lie algebra. Then
${\mathfrak{g}}$ is isomorphic to one of the following Lie algebras equipped with a cosymplectic structure:
\begin{enumerate}
\item[${\mathfrak{g}}_{2.1}\oplus {\mathfrak{g}}_1:$] $[e_1,e_2]=e_1$ (decomposable solvable, Bianchi III)
\begin{align*}
\alpha&=a_2e^2+a_3e^3\\
\omega&=a_{12}e^{12}+a_{23}e^{23} \qquad a_3a_{12}\not=0.
\end{align*}
\item[${\mathfrak{g}}_{3.1}:$] $[e_2,e_3]=e_1$ (Weyl algebra, nilpotent, Bianchi II)
\begin{align*}
\alpha&=a_2e^2+a_3e^3\\
\omega&=a_{12}e^{12}+a_{13}e^{13}+a_{23}e^{23} \qquad a_3a_{12}-a_2a_{13}\not=0.
\end{align*}
\item[${\mathfrak{g}}_{3.4}^{-1}:$] $[e_1,e_3]=e_1$ and $[e_2,e_3]=-e_2$ (solvable, Bianchi VI, $a=-1$)
\begin{align*}
\alpha&=a_3e^3\\
\omega&=a_{12}e^{12}+a_{13}e^{13}+a_{23}e^{23} \qquad a_3a_{12}\not=0.
\end{align*}
\item[${\mathfrak{g}}_{3.5}^0:$] $[e_1,e_3]=-e_2$ and $[e_2,e_3]=e_1$ (solvable, Bianchi VII, $\beta=0$)
\begin{align*}
\alpha&=a_3e^3\\
\omega&=a_{12}e^{12}+a_{13}e^{13}+a_{23}e^{23} \qquad a_3a_{12}\not=0.
\end{align*}
\end{enumerate}
\end{pr}
Two cosymplectic Lie algebras $({\mathfrak{g}}_1,\alpha_1, \omega_1)$ and $({\mathfrak{g}}_2,\alpha_2, \omega_2)$ are said to be isomorphic if there exists a Lie algebra isomorphism $L : {\mathfrak{g}}_1\longrightarrow{\mathfrak{g}}_2$ such that
\[L^*(\alpha_2)=\alpha_1\quad\mbox{and}\quad L^*(\omega_2)=\omega_1.\]
The following proposition gives a complete classification, up to automorphism, of cosymplectic structures on three-dimensional Lie algebras.
\begin{pr}\label{pr 4.2}
Let $({\mathfrak{g}},\alpha,\omega)$ be a three-dimensional cosymplectic real Lie algebra. Then
$({\mathfrak{g}},\alpha,\omega)$ is isomorphic to one of the following cosymplectic Lie algebras:
\begin{enumerate}
\item[${\mathfrak{g}}_{2.1}\oplus {\mathfrak{g}}_1:$] $(\alpha,\omega)=( e^3,e^{12})$
\item[] $(\alpha,\omega)=(\lambda e^3,e^{12}+e^{23})$,\quad $\lambda\in\mathbb{R}-\{0\}$
\item[${\mathfrak{g}}_{3.1}:$] $(\alpha,\omega)=(\lambda e^2,e^{13})$,\quad$\lambda\in\mathbb{R}-\{0\}$
\item[${\mathfrak{g}}_{3.4}^{-1}:$] $(\alpha,\omega)=(\lambda e^3,e^{12})$,\quad $\lambda>0$
\item[${\mathfrak{g}}_{3.5}^0:$] $(\alpha,\omega)=(\lambda e^3,e^{12})$,\quad $\lambda>0$
\end{enumerate}
\end{pr}
\begin{proof}
We proceed as follows. We let the automorphisms of ${\mathfrak{g}}$ act on $\omega$ to find the simplest possible form $\omega_0$, then we seek all the automorphisms which transform $\omega$ into $\omega_0$, and finally we let these automorphisms act on $\alpha$ in order to simplify it. These calculations are made using the computation software Maple 18$^\circledR$.
We will give the proof for the Lie algebra ${\mathfrak{g}}_{3.4}^{-1}$: $[e_1,e_3]=e_1$, $[e_2,e_3]=-e_2$, since all cases are treated in the same way. The cosymplectic structures in ${\mathfrak{g}}_{3.4}^{-1}$ are given by the following family
\begin{align*}
\alpha&=a_3e^3\\
\omega&=a_{12}e^{12}+a_{13}e^{13}+a_{23}e^{23} \qquad a_3a_{12}\not=0.
\end{align*}
In this case the automorphisms are given by
\[T_1=\begin{pmatrix}
t_{1,1}&0&t_{1,3}\\
0&t_{2,2}&t_{2,3}\\
0&0&1\end{pmatrix}\quad\mbox{and}\quad T_2=\begin{pmatrix}
0&t_{1,2}&t_{1,3}\\
t_{2,1}&0&t_{2,3}\\
0&0&-1\end{pmatrix}.
\]
The automorphism $L=\begin{pmatrix}
1&0&\frac {a_{2,3}}{a_{1,2}}\\ 0&\frac{1}{a_{1,2}}&-\frac{a_{1,3}}{a_{1,2}}\\
0&0&1
\end{pmatrix}$ satisfies $L^*(\omega)=e^{12}$;
all the automorphisms that satisfy $L^*(\omega)=e^{12}$ are
\[L_1=\begin{pmatrix} t_{{1,1}}&0&{\frac {a_{{2,3}}}{a_{{1,2}}}}\\
0&{\frac{1}{t_{{1,1}}a_{{1,2}}}}&-{\frac{a_{{1,3}}}{a_{{1,2}}}}\\ 0&0&1\end{pmatrix}\quad\mbox{and}\quad L_2=\begin{pmatrix}
0&t_{{1,2}}&-{\frac {a_{{2,3}}}{a_{{1,2}}}
}\\-{\frac {1}{t_{{1,2}}a_{{1,2}}}}&0&{\frac {a_{1,3}}{a_{1,2}}}\\ 0&0&-1
\end{pmatrix}.\]
A direct calculation gives us $L_1^*(\alpha)=\alpha$ and $L_2^*(\alpha)=-\alpha$. So, we can take $\alpha=\lambda e^3$ with $\lambda>0$.
\end{proof}
By using Proposition \ref{pr2.1}, a direct calculation gives the following corollary.
\begin{co}
The left-symmetric product associated with the three-dimensional cosymplectic Lie algebras is given by
\begin{enumerate}
\item[${\mathfrak{g}}_{2.1}\oplus {\mathfrak{g}}_1:$] $e_1.e_2=e_1$,\,$e_2.e_2=e_2$.
\item[${\mathfrak{g}}_{3.1}:$] $e_2.e_2=\frac{1}{\lambda^2}e_3$,\, $e_3.e_2=-e_1$.
\item[${\mathfrak{g}}_{3.4}^{-1}:$] $e_1.e_2=e_2.e_1=\frac{1}{\lambda^2} e_3$,\; $e_3.e_1=-e_1$,\, $e_3.e_2=e_2$.
\item[${\mathfrak{g}}_{3.5}^0:$] $e_1.e_1=\frac{1}{\lambda^2}e_3$,\, $e_2.e_2=\frac{1}{\lambda^2}e_3$,\, $e_3.e_1=e_2$,\, $e_3.e_2=-e_1$.
\end{enumerate}
\end{co}
\subsection{Five-dimensional cosymplectic Lie algebras }
In \cite{C-P} the authors give a classification of five-dimensional cosymplectic Lie algebras, starting from four-dimensional symplectic Lie algebras endowed with a derivation. The disadvantage of this method is that the correspondence with the five-dimensional Lie algebras well known in the literature is lost. We propose here a direct method (case by case): we search among the $40$ five-dimensional Lie algebras listed in \cite{P-W} for those which are cosymplectic, and we give their structures.
\begin{pr}
Let $({\mathfrak{g}},\alpha,\omega)$ be a five-dimensional cosymplectic real Lie algebra. Then
$({\mathfrak{g}},\alpha,\omega)$ is isomorphic to one of the following cosymplectic Lie algebras:
\begin{enumerate}
\item[$A_{5,1}:$] $[e_3,e_5] = e_1$, $[e_4,e_5]=e_2$ (nilpotent)
\begin{align*}
\alpha&=a_3e^3+a_4e^4+a_5e^5\\
\omega&=a_{13}e^{13}+a_{23}e^{14}+a_{15}e^{15}+a_{23}e^{23}+a_{24}e^{24}+a_{25}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\
&with\qquad a_3a_{15}a_{24}-a_3a_{23}a_{25}+a_4a_{13}a_{25}-a_4a_{15}a_{23}-a_5a_{13}a_{24}+a_5a_{23}^{2}\not=0.
\end{align*}
\item[$A_{5,2}:$] $[e_2,e_5] = e_1$, $[e_3,e_5]=e_2$, $[e_4,e_5] = e_3$ (nilpotent)
\begin{align*}
\alpha&=a_4e^4+a_5e^5\\
\omega&=a_{23}e^{14}+a_{15}e^{15}-a_{23}e^{23}+a_{25}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\
&with\qquad \qquad a_{23}(a_4a_{15}+a_5a_{23})\not=0.
\end{align*}
\item[$A_{5,5}:$] $[e_2,e_5] = e_1$, $[e_3,e_4]=e_1$, $[e_3,e_5] = e_2$ (nilpotent)
\begin{align*}
\alpha&=a_3e^3+a_4e^4+a_5e^5\\
\omega&=a_{24}e^{15}+a_{23}e^{23}+a_{24}e^{24}+a_{25}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\
&with\qquad a_{24}(a_3a_{25}-a_4a_{23}) \not=0.
\end{align*}
\item[$A_{5,6}:$] $[e_2,e_5] = e_1$, $[e_3,e_5]=e_2$, $[e_3,e_4]=e_1$, $[e_4,e_5] = e_3$ (nilpotent)
\begin{align*}
\alpha&=a_4e^4+a_5e^5\\
\omega&=a_{23}e^{14}+a_{24}e^{15}-a_{23}e^{23}+a_{24}e^{24}+a_{25}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_{23}(a_4a_{24}+a_5a_{23})\not=0.
\end{align*}
\item[$A^{a,-a,-1}_{5,7}:$] $[e_1, e_5] = e_1$, $[e_2,e_5]=ae_2$, $[e_3, e_5] =-ae_3$, $[e_4, e_5]=-e_4$,\quad $a\not\in\{-1,0,1\}$
\begin{align*}
\alpha&=a_5e^5\\
\omega&=a_{14}e^{14}+a_{15}e^{15}+a_{23}e^{23}+a_{24}e^{24}+a_{25}e^{25}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5a_{14}a_{23}\not=0.
\end{align*}
\item[$A^{1,-1,-1}_{5,7}:$] $[e_1, e_5] = e_1$, $[e_2,e_5]=e_2$, $[e_3, e_5] =-e_3$, $[e_4, e_5]=-e_4$,
\begin{align*}
\alpha&=a_5e^5\\
\omega&=a_{13}e^{13}+a_{14}e^{14}+a_{15}e^{15}+a_{23}e^{23}+a_{24}e^{24}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5(a_{24}a_{13}-a_{14}a_{23})\not=0.
\end{align*}
\item[$A^{-1}_{5,8}:$]$[e_2, e_5] = e_1$,$[e_3, e_5] =e_3$, $[e_4, e_5]=-e_4$,
\begin{align*}
\alpha&=a_2e^2+a_5e^5\\
\omega&=a_{12}e^{12}+a_{15}e^{15}+a_{25}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_{34}(a_5a_{12}-a_2a_{15}) \not=0.
\end{align*}
\item[$A^{-1,0,q}_{5,13}:$]$[e_1, e_5] = e_1$, $[e_2,e_5]=-e_2$, $[e_3, e_5] =-qe_4$, $[e_4, e_5]=qe_3$,
\begin{align*}
\alpha&=a_5e^5\\
\omega&=a_{12}e^{12}+a_{15}e^{15}+a_{25}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5a_{12}a_{34} \not=0.
\end{align*}
\item[$A^{0}_{5,14}:$]$[e_2, e_5] = e_1$, $[e_3, e_5] =-e_4$, $[e_4, e_5]=e_3$,
\begin{align*}
\alpha&=a_2e^2+a_5e^5\\
\omega&=a_{12}e^{12}+a_{15}e^{15}+a_{25}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_{34}(a_5a_{12}-a_2a_{15}) \not=0.
\end{align*}
\item[$A^{-1}_{5,15}:$]$[e_1, e_5] = e_1,[e_2, e_5] = e_1+e_2$, $[e_3, e_5] =-e_3$, $[e_4, e_5]=e_3-e_4$,
\begin{align*}
\alpha&=a_5e^5\\
\omega&=-a_{23}e^{14}+a_{15}e^{15}+a_{23}e^{23}+a_{24}e^{24}+a_{25}e^{25}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5a_{23} \not=0
\end{align*}
\item[$A^{1,p,-p}_{5,17}:$] $[e_1, e_5] = pe_1-e_2$, $[e_2,e_5]=e_1+pe_2$, $[e_3, e_5] =-pe_3-e_4$, $[e_4, e_5]=e_3-pe_4$,$\quad p \not=0$.
\begin{align*}
\alpha&=a_5e^5\\
\omega&=a_{24}e^{13}-a_{23}e^{14}+a_{15}e^{15}+a_{23}e^{23}+a_{24}e^{24}+a_{25}e^{25}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5(a^{2}_{23}+a^{2}_{24})\not=0.
\end{align*}
\item[$A^{1,0,0}_{5,17}:$] $[e_1, e_5] = -e_2$, $[e_2,e_5]=e_1$, $[e_3, e_5] =-e_4$, $[e_4, e_5]=e_3$
\begin{align*}
\alpha&=a_5e^5\\
\omega&=a_{12}e^{12}+a_{24}e^{13}-a_{23}e^{14}+a_{15}e^{15}+a_{23}e^{23}+a_{24}e^{24}+a_{25}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5(a_{12}a_{34}-a^{2}_{23}-a^{2}_{24})\not=0.
\end{align*}
\item[$A^{-1,p,-p}_{5,17}:$] $[e_1, e_5] = pe_1-e_2$, $[e_2,e_5]=e_1+pe_2$, $[e_3, e_5] =-pe_3+e_4$, $[e_4, e_5]=-e_3-pe_4$,$\quad p \not=0$.
\begin{align*}
\alpha&=a_5e^5\\
\omega&=-a_{24}e^{13}+a_{23}e^{14}+a_{15}e^{15}+a_{23}e^{23}+a_{24}e^{24}+a_{25}e^{25}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5(a^{2}_{23}+a^{2}_{24})\not=0.
\end{align*}
\item[$A^{-1,0,0}_{5,17}:$] $[e_1, e_5] = -e_2$, $[e_2,e_5]=e_1$, $[e_3, e_5] =e_4$, $[e_4, e_5]=-e_3$
\begin{align*}
\alpha&=a_5e^5\\
\omega&=a_{12}e^{12}-a_{24}e^{13}+a_{23}e^{14}+a_{15}e^{15}+a_{23}e^{23}+a_{24}e^{24}+a_{25}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5(a_{12}a_{34}+a^{2}_{23}+a^{2}_{24})\not=0.
\end{align*}
\item[$A^{0}_{5,18}:$] $[e_1, e_5] = -e_2$, $[e_2,e_5]=e_1$, $[e_3, e_5] =e_1-e_4$, $[e_4, e_5]=e_2+e_3$
\begin{align*}
\alpha&=a_5e^5\\
\omega&=a_{24}e^{13}+a_{15}e^{15}+a_{24}e^{24}+a_{25}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5a_{24}\not=0.
\end{align*}
\item[$A^{1,-1}_{5,19}:$] $[e_1, e_5] = e_1$, $[e_2, e_3] = e_1$, $[e_2,e_5]=e_2$, $[e_4, e_5]=-e_4$
\begin{align*}
\alpha&=a_3e^3+a_5e^5\\
\omega&=a_{23}e^{15}+a_{23}e^{23}+a_{24}e^{24}+a_{25}e^{25}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_3a_{23}a_{24}\not=0.
\end{align*}
\item[$A^{\frac{1}{2},-1}_{5,19}:$] $[e_1, e_5] = \frac{1}{2}e_1$, $[e_2, e_3] = e_1$, $[e_2,e_5]=e_2$, $[e_3, e_5] =-\frac{1}{2}e_3$, $[e_4, e_5]=-e_4$
\begin{align*}
\alpha&=a_5e^5\\
\omega&=a_{13}e^{13}+a_{15}e^{15}+2a_{15}e^{23}+a_{24}e^{24}+a_{25}e^{25}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5a_{13}a_{24}\not=0.
\end{align*}
\item[$A^{-1,2}_{5,19}:$] $[e_1, e_5] = -e_1$, $[e_2, e_3] = e_1$, $[e_2,e_5]=e_2$, $[e_3, e_5] =-2e_3$, $[e_4, e_5]=2e_4$
\begin{align*}
\alpha&=a_5e^5\\
\omega&=a_{12}e^{12}-a_{23}e^{15}+a_{23}e^{23}+a_{25}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5a_{12}a_{34}\not=0.
\end{align*}
\item[$A^1_{5,30}:$]$[e_2, e_4] = e_1$,$[e_3, e_4] = e_2$,$[e_1, e_5] = 2e_1$,$[e_2,e_5]=e_2$, $[e_4, e_5]=e_4$
\begin{align*}
\alpha&=a_3e^3+a_5e^5\\
\omega&=2a_{24}e^{15}+a_{24}e^{24}+a_{34}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_3a_{24}\not=0.
\end{align*}
\item[$A^{0,-1}_{5,33}:$] $[e_1, e_4] = e_1$, $[e_2,e_5]=e_2$, $[e_3, e_4] =-e_3$
\begin{align*}
\alpha&=a_4e^4+a_5e^5\\
\omega&=a_{13}e^{13}+a_{14}e^{14}+a_{25}e^{25}+a_{34}e^{34}+a_{45}e^{45}\\ &with \qquad a_4a_{13}a_{25}\not=0.
\end{align*}
\item[$A^{-1,0}_{5,33}:$] $[e_1, e_4] = e_1$, $[e_2,e_5]=e_2$, $[e_3, e_5] =-e_3$
\begin{align*}
\alpha&=a_4e^4+a_5e^5\\
\omega&=a_{14}e^{14}+a_{23}e^{23}+a_{25}e^{25}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5a_{14}a_{23}\not=0.
\end{align*}
\item[$A_{5,36}:$] $[e_1,e_4]=e_1$, $[e_2, e_3] = e_1$, $[e_2, e_4] =e_2$, $[e_2, e_5] =-e_2$,$[e_3, e_5] =e_3$
\begin{align*}
\alpha&=a_4e^4+a_5e^5\\
\omega&=a_{23}e^{14}+a_{23}e^{23}-a_{25}e^{24}+a_{25}e^{25}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5a_{23}\not=0.
\end{align*}
\item[$A_{5,37}:$] $[e_1,e_4]=2e_1$, $[e_2, e_3] = e_1$, $[e_2, e_4] =e_2$,$[e_3, e_4] =e_3$, $[e_2, e_5] =-e_3$,$[e_3, e_5] =e_2$
\begin{align*}
\alpha&=a_4e^4+a_5e^5\\
\omega&=2a_{23}e^{14}+a_{23}e^{23}+a_{35}e^{24}-a_{34}e^{25}+a_{34}e^{34}+a_{35}e^{35}+a_{45}e^{45}\\ &with \qquad a_5a_{23}\not=0.
\end{align*}
\end{enumerate}
\end{pr}
Unlike the three-dimensional Heisenberg algebra, the five-dimensional Heisenberg algebra does not support a cosymplectic structure. We have the following more general result.
\begin{pr}
Let $\mathrm{H}_{2n+1}$ be the $(2n+1)$-dimensional Heisenberg Lie algebra generated by elements $\{e_i,f_i,z\}_{\{1\leq i\leq n\}}$, with the relations $[e_i,f_i]=z$. Then $\mathrm{H}_{2n+1}$ supports a cosymplectic structure if and only if $n=1$.
\end{pr}
\begin{proof} Proposition \ref{pr 4.2} shows that $\mathrm{H}_3={\mathfrak{g}}_{3.1}$ admits cosymplectic structures. Let $n \geq2$, and let $\alpha\in\mathrm{H}^*_{2n+1}$, $\omega\in\wedge^2\mathrm{H}^*_{2n+1}$ be a $1$-cocycle and a $2$-cocycle, respectively. On one hand, by using the Maurer-Cartan equations of $\mathrm{H}_{2n+1}$ we get $\alpha(z)=0$; on the other hand, for all $e_i$, $e_j$, $f_j$,\, $j\not=i$, the $2$-cocycle identity and the vanishing of $[e_i,e_j]$ and $[f_j,e_i]$ give
\begin{align*}
\omega(e_i,z)&=\omega(e_i,[e_j,f_j])\\
&= \omega([e_i,e_j],f_j)+\omega([f_j,e_i],e_j)=0;
\end{align*}
in the same way, we find $\omega(f_i,z)=0$ for all $1\leq i\leq n$. It follows that
$\varPhi(e_i,z)=\varPhi(f_i,z)=0$ for all $1\leq i\leq n$; since also $\varPhi(z,z)=0$, the form $\varPhi$ is degenerate and hence cannot be an isomorphism.
\end{proof}
Denote by ${\mathrm{aff}}(2,\mathbb{R})$ the affine Lie algebra generated by the elements $\{e_1,...,e_6\}$, with the relations:
\[\begin{array}{lllll}
[e_1,e_3]=-e_1,&[e_2,e_4]=-e_1,&[e_3,e_4]=e_4,&[e_4,e_5]=e_3-e_6,&[e_5,e_6]=-e_5\\
{[e_1,e_5]=-e_2},&[e_2,e_6]=-e_2,&[e_3,e_5]=-e_5,&[e_4,e_6]=e_4.&
\end{array}\]
\begin{pr}
The only non-solvable cosymplectic Lie algebra of dimension less than or equal to seven is isomorphic to
\[({\mathrm{aff}}(2,\mathbb{R})\ltimes\langle e_7 \rangle,\alpha,\omega),\]
where the Lie brackets are given by those of ${\mathrm{aff}}(2,\mathbb{R})$, to which we add the new entries $[e_7,e_6]=\lambda e_2$ and $[e_7,e_4]= \lambda e_1$, $\lambda\in\mathbb{R}$, and the cosymplectic structure is $\alpha=e^7$, $\omega=e^{15}+e^{26}+e^{34}+e^{46}$.
\end{pr}
\begin{proof}
Suppose that $({\mathfrak{g}},\alpha,\omega)$ is a five-dimensional cosymplectic non-solvable Lie algebra. It is well known (see \cite{M-R}) that any four-dimensional symplectic Lie algebra is solvable; by Proposition \ref{pr 3.3}, we deduce that ${\mathfrak{g}}$ contains a solvable ideal of codimension $1$, hence ${\mathfrak{g}}$ is solvable, which contradicts the assumption.
Now let $({\mathfrak{g}},\alpha,\omega)$ be a seven-dimensional cosymplectic non-solvable Lie algebra; by Proposition \ref{pr 3.3}, $({\mathfrak{h}},\omega_{\mathfrak{h}})$ is a six-dimensional non-solvable symplectic Lie algebra. On the one hand, it is known (see \cite{B-M}) that the only non-solvable six-dimensional symplectic Lie algebra is (up to isomorphism) $({\mathrm{aff}}(2,\mathbb{R}),\omega_0)$ with
\[\omega_0=e^{15}+e^{26}+e^{34}+e^{46}.\]
On the other hand, in ${\mathrm{aff}}(2,\mathbb{R})$ any derivation $D$ is inner (see \cite{D-M}), so there exists $x\in{\mathfrak{h}}$ such that $D={\mathrm{ad}}_x$. The $2$-cocycle condition on $\omega_0$ gives
\[\omega_0({\mathrm{ad}}_xy,z)+\omega_0(y,{\mathrm{ad}}_xz)=\omega_0(x,[y,z]),\]
and since $D$ is an i.s.t., it follows that $\omega_0(x,[y,z])=0$ for all $y,z\in{\mathfrak{h}}$; by direct computation we find that $x=\lambda e_2$, $\lambda\in\mathbb{R}$. Proposition \ref{pr 3.3} then completes the proof.
\end{proof}
\begin{remark}
A five-dimensional cosymplectic Lie algebra is necessarily solvable.
\end{remark}
\section{INTRODUCTION}
The Orion giant molecular cloud complex
is located at a distance of $\sim$420 pc from the Sun
and is one of the nearest active star-forming regions
\citep{Menten07, Kim08}.
The molecular complex
largely consists of the Orion A and B clouds.
Numerous observations have been carried out
to investigate the star-formation activities in the Orion molecular clouds.
For example, there are surveys of dense cores
in molecular lines and dust continuum emission
\citep{Lada91, Tatematsu98, Lis98}
and surveys of outflows in the molecular hydrogen and CO lines
\citep{Davis09, Takahashi08}.
Recently, thousands of young stellar objects (YSOs) were identified
in the Orion molecular clouds based on infrared observations
with the {\it Spitzer Space Telescope},
and nearly five hundred objects among them
are likely protostars \citep{Megeath12}.
To study $\sim$300 of the {\it Spitzer}-identified Orion protostars,
the {\it Herschel} Orion Protostar Survey (HOPS) project was conducted
with the {\it Herschel Space Observatory}
\citep{Fischer10,Stutz13}.
Maser emission is an important signpost of star-formation regions
in the early stages of evolution.
Observations of masers allow detailed studies
of the small-scale environments of deeply embedded YSOs.
Many H$_{2}$O\ maser observations of YSOs have shown
that H$_{2}$O\ masers are usually distributed
very close ($\lesssim$ 1000 AU) to the central objects,
are highly variable in both intensity and velocity
with time scales from hours to years,
and trace molecular outflows and protostellar disks
\citep{Genzel77,Elitzur89,Comoretto90,
Torrelles98,Seth02,Furuya05,Goddi05,Felli07,Caswell10}.
Several CH$_{3}$OH\ maser lines also have been detected
toward star-forming regions \citep{Valtts95,Kurtz04,Kalenskii10}.
The methanol maser lines are divided
into class I (36, 44, 84, and 95 GHz lines etc.)
and class II (6.7, 12, and 157 GHz lines etc.) \citep{Menten91}.
CH$_{3}$OH\ class I masers are usually offset by 0.1--1 pc
from star-formation phenomena
(such as hot molecular cores, ultracompact H {\small II} regions, and other maser sources)
and well-correlated with molecular outflows
\citep{Plambeck90, Cragg92, Kurtz04, Cyganowski09}.
In general, CH$_{3}$OH\ class I lines show little flux variability
\citep{Kurtz04,Kalenskii10}.
To investigate the star-formation activities
in the early stages,
we carried out a maser survey of {\it Spitzer}-identified protostars
distributed over the Orion molecular cloud complex.
Out of the protostars listed in the HOPS catalogue,
we selected protostars showing line wings
in the CO $J = 2 \rightarrow 1$ line spectra
obtained with the Seoul Radio Astronomy Observatory 6 m telescope.
All HOPS sources were observed in CO with a 48\arcsec\ beam
down to an rms noise of 0.15 K or smaller.
The source selection was not affected by previously known masers.
In this paper, we present the results of the survey
in the H$_{2}$O\ and CH$_{3}$OH\ maser lines
with the Korean Very Long Baseline Interferometry Network (KVN) antennas.
In Section 2, we describe the KVN observations.
In Section 3, we present the results of the survey.
In Section 4, we describe the detected sources in detail.
In Section 5, we discuss the implications of the survey.
A summary is given in Section 6.
\input{table1}
\section{OBSERVATIONS}
Ninety-nine protostars in the Orion molecular cloud complex were observed
using the KVN 21 m radio antennas in the single-dish telescope mode
during the 2010 and 2011--2012 observing seasons.
The observations were carried out
with the KVN Yonsei telescope at Seoul,
the KVN Ulsan telescope at Ulsan,
and the KVN Tamna telescope at Seogwipo, Korea.
The KVN telescopes are equipped with multi-frequency receiving systems
simultaneously operating at 22, 44, 86, and 129 GHz bands \citep{Han13}.
Telescope pointing was checked by observing Orion IRc2 \citep{Baudry95}
in the SiO $v=1$ $J=1\rightarrow0$ maser line at 43 GHz.
The pointing observations were performed about once every two hours.
The rms pointing accuracy was better than $\sim$5\arcsec.
The alignments among the beams of different frequency bands are
better than $\sim$3\arcsec\ \citep{Han13}.
The target lines were
the H$_{2}$O\ $6_{16}\rightarrow5_{23}$ (22.23508 GHz) line
and the CH$_{3}$OH\ $7_{0} \rightarrow 6_{1}$ $A^{+}$,
$8_{0}\rightarrow 7_{1}$ $A^{+}$, and $6_{-1}\rightarrow5_{0}$ $E$
(44.06943, 95.169516, and 132.890800 GHz, respectively) lines.
The 4096-channel digital spectrometers were used as back ends.
A bandwidth of 32 MHz was selected,
which provides velocity resolutions of 0.105, 0.053, 0.0246, and 0.0176 km s$^{-1}$\
for the target lines.
For the 22 GHz H$_{2}$O\ and 44 GHz CH$_{3}$OH\ line spectra,
the Hanning smoothing was applied once and twice, respectively,
resulting in a velocity-channel width of 0.21 km s$^{-1}$\ in both lines.
For the 95 and 133 GHz CH$_{3}$OH\ line spectra,
the Hanning smoothing was applied three times,
which gives velocity-channel widths of 0.19 and 0.14 km s$^{-1}$, respectively.
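These channel widths follow directly from the spectrometer setup; as a consistency check, the raw channel spacing is $32\ \mathrm{MHz}/4096 = 7.8125$ kHz, so that
\[
\Delta v = \frac{c}{\nu}\times\frac{32\ \mathrm{MHz}}{4096},
\]
which gives 0.105 km s$^{-1}$\ at 22.235 GHz and 0.053 km s$^{-1}$\ at 44.069 GHz, and each application of the Hanning smoothing doubles the effective channel width, reproducing the smoothed values quoted above.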
For each observing season,
the data were calibrated using the standard efficiencies of KVN
listed in Table \ref{table1}
\citep{Lee11,Choi12} (http://kvn-web.kasi.re.kr).
The data were processed with the GILDAS/CLASS software
from Institut de RadioAstronomie Millim\'etrique
(http://www.iram.fr/IRAMFR/GILDAS).
In the 2010 season,
only the 22 GHz and 44 GHz band receivers were available,
and the H$_{2}$O\ line and the CH$_{3}$OH\ 44 GHz line were observed.
Integrations toward each target source were carried out
until the averaged spectra reached an rms noise level of $\sim$0.5 Jy or lower.
System temperatures were
in the range of 70--190 K at 22 GHz and 140--270 K at 44 GHz.
Tables 2--4 list the target sources, coordinates, detectability,
and noise rms levels of the resulting spectra.
Table 2 lists the sources detected in either line.
When the emission from a single source is detected
toward several nearby target positions,
Table 2 lists only the position of the strongest signal.
Tables 3 and 4 list the sources undetected in either line.
The H$_{2}$O\ spectra of the sources marked by ``S'' in Tables 2 and 3
were affected by the Orion KL masers,
and it was difficult to separate the emission of the target region, if any,
from that of the Orion KL region (see below).
\input{table2}
\input{table3}
\input{table4}
In 2011 November--December,
all the target sources were observed again
in the H$_{2}$O\ line and the CH$_{3}$OH\ 44 GHz line.
Though the integrations were shorter in duration than those in the previous season,
close attention was paid to the effect of the Orion KL masers.
Observations toward the Orion KL region were made several times a day
so that the line profiles could be compared on any observing day.
The coordinates of the Orion KL masers
used for these monitoring observations are
(05$^{\rm h}$35$^{\rm m}$14\fs12, $-$05\arcdeg22$'$36\farcs4).
While the H$_{2}$O\ spectra of most of the ``S'' sources
showed only the signal from the Orion KL region,
one source (HOPS 96) showed a positive signal from the target source.
In 2012 January--May, the 86 and 129 GHz band receivers became available,
and we focused on the sources detected in the previous observing runs.
Observations were made in the CH$_{3}$OH\ 95 and 133 GHz lines
toward the sources detected in the 44 GHz line.
The areas around the detected sources were mapped
in the H$_{2}$O\ line and the CH$_{3}$OH\ 95 GHz line
to determine the source positions accurately
and to identify the YSOs associated with the emission sources.
The maps were made with grid spacings smaller than a half beam (FWHM/2).
\subsection{Water Masers of the Orion KL Region}
As the KVN telescopes were designed mainly for interferometry,
the power levels of their beam side lobes are relatively high
\citep{Lee11}.
The Orion KL region contains a dense cluster of YSOs
generating bright H$_{2}$O\ masers \citep{Gaume98}.
Our results show that the spectra toward the target sources
within $\sim$0.5\arcdeg\ from Orion KL
are contaminated by the emission from the Orion KL masers
coming through the side lobes.
The extent of the effect, however, varies
with the intensity of the Orion KL masers and the complex pattern
of the side lobes.
To verify if there is maser emission coming from a target source,
the spectra of the target source and Orion KL should be compared.
In the 2010 observing season,
the spectra of Orion KL were obtained sparsely,
and the comparison spectra are not available for every observing day,
which makes the comparison somewhat ambiguous.
Figure \ref{fig1}(a) shows an example.
HOPS 350 is located $\sim$0.25\arcdeg\ away from Orion KL.
All the velocity components in the spectrum of HOPS 350
can be seen in the spectrum of Orion KL obtained 9 days later.
However, the line profile (intensity ratios among the velocity components)
changed substantially,
and it is not clear
if any of the velocity components (e.g., the 6 km s$^{-1}$\ component)
contains emission from HOPS 350.
Such comparisons suggest
that the Orion KL masers should have been monitored daily.
\input{fig1.tex}
In the 2011--2012 season,
while the target sources were observed again,
the spectra of Orion KL were obtained every $\sim$2 hours.
Figure \ref{fig1}(b) shows the spectra of the H$_{2}$O\ maser lines
toward HOPS 350 and Orion KL, observed on the same day.
The comparison shows
that all the velocity components came from the Orion KL masers.
The line profiles of the two spectra are slightly different
because the Orion KL masers are distributed
over an extended ($\sim$30$''$) region \citep{Gaume98}.
This contamination from Orion KL masers hinders
the detection of weak ($\lesssim$10 Jy) maser emission
from the affected target sources.
It is possible, however, to detect a maser towards these targets
if the intensity is very strong
or if the line velocity is
far from the velocity interval crowded with the Orion KL masers.
\input{table5}
\section{RESULTS}
\subsection{H$_{2}$O\ Masers}
The H$_{2}$O\ maser line was detected toward four target sources:
HOPS 96, 167, 182, and 361.
All the H$_{2}$O\ masers showed large variations in flux and velocity
over the observing runs from 2010 to 2012.
Table \ref{table5} lists the properties of the detected H$_{2}$O\ masers:
peak velocity, integrated line flux, line FWHM, and peak intensity
from Gaussian fits.
While the HOPS 96 maser was detected only once,
the others were detected multiple times.
Even for the multiply detected target sources,
the line velocities of detected spectral features
changed significantly from one observing run to the next,
except for the steady $\sim$6.6 km s$^{-1}$\ component of HOPS 182.
This variability suggests
that the typical lifetime of each velocity component
is about a month or shorter.
The areas around the detected target sources were mapped
to identify the YSOs responsible for the excitation of the masers.
The HOPS 96 field was not mapped
because the maser detected in 2011 November
had already disappeared by the time we tried to map it in 2012 January.
Each map was fitted with a Gaussian intensity profile
having the same FWHM as the main beam.
The intensity distribution of each mapping field and each velocity component
is consistent with what is expected from a point-like source
(convolved with the beam).
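As an illustration of this fitting procedure, the following is a minimal sketch (not the actual reduction pipeline; the grid parameters, noise level, and source offsets below are placeholders) of fitting a small single-dish map with a two-dimensional Gaussian whose FWHM is fixed to that of the main beam, so that only the amplitude and the center offsets are free:
\begin{verbatim}
import numpy as np
from scipy.optimize import curve_fit

FWHM = 29.0                          # main-beam FWHM in arcsec (95 GHz)
SIGMA = FWHM / (2.0 * np.sqrt(2.0 * np.log(2.0)))

def beam_response(offsets, amp, x0, y0):
    # point source at (x0, y0) convolved with the Gaussian main beam
    x, y = offsets
    return amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2.0 * SIGMA**2))

# 5 x 5 grid with 7.5'' spacing; offsets in arcsec from the pointing center
x, y = np.meshgrid(np.arange(-15.0, 16.0, 7.5),
                   np.arange(-15.0, 16.0, 7.5))

# simulated integrated intensities; in practice these come from the spectra
data = beam_response((x.ravel(), y.ravel()), 10.0, 10.0, 8.0)
data += np.random.normal(0.0, 0.3, data.size)

popt, _ = curve_fit(beam_response, (x.ravel(), y.ravel()), data,
                    p0=(data.max(), 0.0, 0.0))
print("best-fit offset: (%+.1f'', %+.1f'')" % (popt[1], popt[2]))
\end{verbatim}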
Table \ref{table6} lists the source positions determined by the mapping.
Two H$_{2}$O\ maser sources (KLC 2/3) were identified in the HOPS 167 field.
For the HOPS 361 field,
all the velocity components (at the mapping epoch)
seem to come from a single source
(or a single region much smaller than the beam size), KLC 6.
\input{table6}
\subsection{CH$_{3}$OH\ Lines}
The CH$_{3}$OH\ 44 GHz line was detected toward four target sources:
HOPS 64, 182, 361, and 362.
Follow-up observations toward these sources showed
that the 95 and 133 GHz lines are also detectable.
For each source, the three CH$_{3}$OH\ lines
have similar velocities and profiles,
which suggests that they have the same origin.
The 95 GHz line maps of HOPS 64/182 show
that these emission sources are compact
(much smaller than the $\sim$29$''$ beam size).
The maps of HOPS 361/362 show
that these sources are extended (20$''$--50$''$).
Table \ref{table6} lists the peak positions,
and Table \ref{table7} lists the properties of the CH$_{3}$OH\ lines
at those positions.
\input{table7}
In contrast with the H$_{2}$O\ masers,
the CH$_{3}$OH\ lines did not show any significant variability.
The flux variability, if any, is smaller
than the calibration uncertainty of the telescopes ($\sim$10\%).
The peak velocities of the CH$_{3}$OH\ lines
are always close (within $\sim$1 km s$^{-1}$)
to the systemic velocity of the ambient cloud.
Unlike with the H$_{2}$O\ maser line,
the detection of CH$_{3}$OH\ class I maser lines
does not necessarily mean that the detected flux is amplified emission.
They can be either maser or thermal emission, or a mixture of both.
The CH$_{3}$OH\ lines of KLC 1 are particularly strong and narrow,
and they are most likely masers.
The KLC 5 lines are probably partial masers
because the lines are narrow and the source size is small.
The KLC 7 and 8 lines are probably thermal
because the lines are relatively wide and the sources are extended.
\section{NOTES ON THE DETECTED SOURCES}
In this section, we describe the detected sources in detail
and present the spectra and maps of the H$_{2}$O\ and CH$_{3}$OH\ lines.
The maps show the source positions determined by mapping observations,
as well as the positions of known YSOs
superposed on the color-composite infrared images
of the Wide-Field Infrared Survey Explorer (WISE) \citep{Wright10}.
\subsection{KLC 1 in the HOPS 64 Field (OMC 2)}
The CH$_{3}$OH\ class I maser lines at 44, 95, and 133 GHz
were detected toward KLC 1 (Figure \ref{fig2}).
All the three lines are quite strong.
Their velocities are close (within 0.3 km s$^{-1}$)
to the systemic velocity of the ambient dense molecular cloud
measured in the CS $J = 2 \rightarrow 1$ line \citep{Tatematsu98}.
The source position was determined using a 95 GHz line map
that is a regular-grid map of 5 $\times$ 5 points
with a spacing of 7\farcs5.
Fitting the map with a Gaussian intensity profile suggests
that the source is compact (much smaller than the beam size).
The best-fit source position of KLC 1
is (10\arcsec, 8\arcsec) with respect to HOPS 64 (Figure \ref{fig3}).
KLC 1 is located between the FIR 3 cluster (FIR 3, IRS 4N/S, and VLA 11)
and the FIR 4 cluster (FIR 4, HOPS 64, and VLA 12),
and it is difficult to specify the YSO
responsible for the excitation of the maser.
\input{fig2.tex}
\input{fig3.tex}
KLC 1 positionally coincides with the mid-IR source MIR 23 within 4$''$
\citep{Nielbock03}.
There is little known about MIR 23,
and its relation with the maser activities is not clear.
Interferometric observations in the 44 and 95 GHz lines
were presented by \cite{Slysh09},
which show several maser spots distributed along a line.
Their maser spot A coincides with KLC 1 within 1$''$,
while other spots are also within the uncertainty circle of KLC 1.
\cite{Slysh09} suggested that the maser is associated with IRS 4S.
The linear distribution of the spots, however,
is neither pointing toward nor perpendicular to IRS 4S.
The line of spots rather points toward the FIR 4 cluster.
Further observations are needed to understand the KLC 1 maser
and its relation with the YSOs around it.
\input{fig4.tex}
The H$_{2}$O\ line spectra toward HOPS 64 were affected by the Orion KL masers.
Figure \ref{fig4} shows an example.
All the velocity components in the HOPS 64 spectrum
may be attributed to the Orion KL masers,
and no H$_{2}$O\ maser associated specifically with HOPS 64 was detected in this survey.
Detections of an H$_{2}$O\ maser in this region, however,
were reported previously \citep{Morris76,Genzel79}.
Interferometric observations showed
that the H$_{2}$O\ maser is associated with FIR 3 \citep{Tofani95}.
The H$_{2}$O\ maser was probably inactive in the 2010--2012 period.
\subsection{HOPS 96 (OMC 3)}
\label{sec:hops096}
There are at least three YSOs in the HOPS 96 field:
HOPS 96 in the SIMBA a condensation
and MIR 1/2 in the SIMBA c condensation \citep{Nielbock03}.
These condensations show molecular outflows
traced by the CO $J = 3 \rightarrow 2$ line \citep{Takahashi08}.
\input{fig5.tex}
Though the H$_{2}$O\ spectra toward HOPS 96 are
occasionally contaminated by the Orion KL masers,
an emission component was marginally detected in the uncontaminated velocity space
(Figure \ref{fig5}).
This velocity component detected at $-$14 km s$^{-1}$\ in 2011 November
may be coming from a maser source in the HOPS 96 field.
The component, however,
disappeared in all subsequent observing runs,
and there is no map to determine its position accurately.
The maser source can be anywhere in the region shown in Figure \ref{fig6}
and may be excited by one of the three YSOs listed above.
\input{fig6.tex}
\subsection{KLC 2/3 in the HOPS 167 Field (HH 1)}
HOPS 167 is located in the HH 1--2 region that is an extensively
studied star-forming location
\citep{Pravdo85,Reipurth93,Choi97,Rodriguez00,Kwon10,Fischer10}.
There are two class 0 protostars (VLA 1/3) though the classification
is somewhat uncertain \citep{Chini01}. \cite{Fischer10} reported that both
VLA 1 (HOPS 203) and VLA 3 (HOPS 168) are in an active state of
mass infall and accretion. They are probably the youngest and most
luminous protostars in this region.
The HOPS 167 region displayed interesting H$_{2}$O\ maser activities.
The maser emission was relatively strong (30--80 Jy) in 2010--2011,
became weak (2--13 Jy) in 2012 January,
and disappeared in 2012 May (Figure \ref{fig7}).
\input{fig7.tex}
Mapping observations were carried out three times.
In 2011 November,
the HOPS 167 region was mapped with a grid spacing of 65\arcsec.
The best-fit source position of KLC 2
is ($-$18$''$, 37$''$) with respect to HOPS 167 (Figure \ref{fig8}).
On 2012 January 22, the region was mapped again with a 43$''$ spacing.
The position of the 13 km s$^{-1}$\ component
turned out to have an offset from KLC 2 by about a half beam,
and there was no emission from KLC 2.
The best-fit position of this new source, KLC 3,
is (40$''$, 0$''$) with respect to HOPS 167.
The separation between KLC 2 and 3 is $\sim$69$''$.
On 2012 January 29--30,
the region was mapped again with a 43$''$ spacing.
The position of the 0 km s$^{-1}$\ component
was consistent with KLC 2, though the detection was marginal,
and emission from KLC 3 had disappeared.
In short, two H$_{2}$O\ maser sources were detected in the HOPS 167 field,
and both of them displayed a rapid variability.
\input{fig8.tex}
KLC 2 coincides within 5$''$
with the deeply embedded protostar VLA 3 \citep{Rodriguez00}.
The KLC 2 (VLA 3) maser was reported previously
and known to vary rapidly \citep{Lo75,Ho82,Haschick83}.
VLA 3 drives a molecular outflow that is separate from the HH 1--2 outflow
\citep{Choi97,Moro-Martin99}.
KLC 3 is located $\sim$8$''$ northwest of VLA 1 (HOPS 203),
which corresponds to the brightest knot of the HH 1 jet
(knot G in Figure 3 of \cite{Reipurth93}).
Considering the 5$''$ pointing uncertainty of the telescope,
KLC 3 may be associated with either the HH 1 jet or VLA 1.
In any case, VLA 1 seems to be the YSO
responsible for the excitation of the KLC 3 maser.
Detection of H$_{2}$O\ maser emission near VLA 1
has not been reported previously.
\subsection{KLC 4/5 in the HOPS 182 Field (L1641N)}
The central region of the L1641N cluster is crowded
with many YSOs and multiple outflows \citep{Stanke07,Galfalk08},
and it is difficult to study the nature of each individual object.
MM1 (HOPS 182) is a low-mass or intermediate-mass protostar
deeply embedded in an envelope of $\sim$1.6 $M_{\odot}$\ \citep{Chen95,Stanke07}.
MM3 is a ``protrusion'' of MM1
in the 1.3 mm continuum image of \cite{Stanke07}.
It is probably a deeply embedded YSO, but its nature is poorly known.
\input{fig9.tex}
\input{fig10.tex}
The H$_{2}$O\ line and the three CH$_{3}$OH\ lines
were detected toward the L1641N region (Figures \ref{fig9}--\ref{fig10}).
The H$_{2}$O\ maser spectrum
displayed five velocity components in 2010 April:
two near the systemic velocity
and three at relatively high velocities
(one highly blueshifted and two highly redshifted).
In the subsequent observing runs,
the high-velocity components disappeared
while one of the low-velocity components became much brighter than before.
Mapping observations in the H$_{2}$O\ line were carried out
on 2012 January 11 with a grid spacing of 16$''$.
The best-fit position of the source (KLC 4)
is $\sim$6$''$ west of L1641N MM1 (Figure \ref{fig11}).
We presume that KLC 4 is associated with MM1/3.
The H$_{2}$O\ maser reported by \cite{Xiang95} is displaced
by $\sim$40$''$ to the east with respect to KLC 4.
Considering their position uncertainty of $\lesssim$ 22$''$,
the displacement is significant.
It is not clear whether the maser of \cite{Xiang95} is excited by MM1/3
or by a different YSO in the L1641N cluster.
\input{fig11.tex}
The CH$_{3}$OH\ 44 GHz line shows a double-peak profile
while the 95 GHz line shows a single peak (Figure \ref{fig10}).
It is not clear whether the 44 GHz spectrum represents
two velocity components of maser emission
or a single component of thermal emission with a self-absorption feature.
We favor, however, the interpretation that the 44 GHz line consists of two narrow features.
The 95 GHz line shows a redshifted line wing.
The CH$_{3}$OH\ source position was determined by mapping the region
in the 95 GHz line with a grid spacing of 11$''$.
The best-fit source position of KLC 5
is ($-$16$''$, $-$10$''$) with respect to MM1 (Figure \ref{fig11}).
KLC 5 coincides within $\sim$3$''$
with the compact emission feature ``southwest shock''
traced by the CH$_{3}$OH\ $8 \rightarrow 7$ $E$ lines,
which is at the tip of the redshifted jet probably driven by MM3
\citep{Stanke07}.
\subsection{KLC 6/7 in the HOPS 361 Field (NGC 2071)}
Previous observations of the NGC 2071 region
showed many YSOs and various star formation activities
such as OH and H$_{2}$O\ masers \citep{Pankonin77,Genzel79,Tofani95},
infrared sources \citep{Persson81,Walther93},
compact radio sources \citep{Snell86,Torrelles98},
and molecular outflows \citep{Snell84,Scoville86,Choi93,Stojimirovi08}.
IRS 1 is the most luminous object in this region
and is an intermediate-mass class I protostar \citep{Skinner09}.
The distributions of radio continuum and H$_{2}$O\ maser emission suggest
that IRS 1 drives a jet in the east-west direction
\citep{Tofani95,Torrelles98,Trinidad09,Carrasco12}.
IRS 3 is a deeply embedded protostar,
but its nature is less certain \citep{Skinner09}.
IRS 3 drives a large-scale outflow/jet in the northeast-southwest direction,
and its H$_{2}$O\ masers seem to trace a rotating disk
\citep{Torrelles98,Eisloffel00,Trinidad09}.
The H$_{2}$O\ line and the three CH$_{3}$OH\ lines
were detected toward the NGC 2071 region (Figures \ref{fig12}--\ref{fig13}).
The H$_{2}$O\ maser spectrum displayed a strong variability
in flux and velocity over the four observing runs.
The velocity components were distributed
in a relatively wide range, from $-$6 to 16 km s$^{-1}$.
\input{fig12.tex}
\input{fig13.tex}
\input{fig14.tex}
Mapping observations in the H$_{2}$O\ line were carried out
on 2011 November 25 with a grid spacing of 65$''$.
The source positions of all the velocity components detected on this day
agree with each other within the uncertainties.
The best-fit position of the source (KLC 6)
is $\sim$9$''$ west of IRS 1 (Figure \ref{fig14}).
All the four radio sources in the NGC 2071 region (IRS 1--3 and VLA 1)
display H$_{2}$O\ maser activities \citep{Genzel79,Torrelles98,Trinidad09},
but there is no previous report of maser at the position of KLC 6.
Interestingly, KLC 6 is at the intersection
of the IRS 1 western outflow and the IRS 3 southwestern outflow,
and it is difficult to point out the object
responsible for the excitation of the KLC 6 maser.
As the position difference between KLC 6 and the previously known
masers is less than twice the rms pointing uncertainty, it is possible
that KLC 6 may not be a new maser.
The CH$_{3}$OH\ lines of KLC 7 (Figure \ref{fig13})
have a peak velocity close to the systemic velocity of the cloud
and show a width of $\sim$3.2 km s$^{-1}$,
larger than the line widths of KLC 1/5 by a factor of $\sim$4.
There is a possibility that the CH$_{3}$OH\
spectra of KLC 7 may have double peaks. In this case, the line may
consist of a weak maser overlaid upon thermal emission.
The NGC 2071 region was mapped in the 95 GHz line
with a grid spacing of 10$''$.
The 95 GHz map shows that the source is extended
over a region of $\sim$50$''$ (Figure \ref{fig14}).
By contrast, \cite{Haschick89} found
that the CH$_{3}$OH\ 36 GHz maser line
has velocities redshifted by $\sim$5.5 km s$^{-1}$,
and \cite{Liechti96} reported
that the 36 GHz maser source is located $\sim$20$''$ south of IRS 1--3.
The large line width, extended source size,
and dissimilarity to the 36 GHz maser suggest
that the CH$_{3}$OH\ 44/95/133 GHz line emission of KLC 7 may be thermal.
The CH$_{3}$OH\ lines probably trace the dense molecular gas and outflows
in and around the NGC 2071 cluster \citep{Garay00}.
\subsection{KLC 8 in the HOPS 362 Field (V380 Ori NE)}
V380 Ori NE (HOPS 362) is a deeply embedded protostar \citep{Davis00}.
\cite{Zavagno97} classified it as a class I YSO,
but \cite{Stanke03} considered it class 0.
\cite{Stanke03} suggested that V380 Ori NE drives a molecular jet.
The bipolar outflow of V380 Ori NE
shows an interesting point-symmetric morphology in that
the flow direction changes by 20\arcdeg\ when imaged on a larger scale
\citep{Davis00,Stanke03}.
\cite{Davis00} suggested that the jet is
either deflected by the ambient cloud
or driven by a precessing object.
The three CH$_{3}$OH\ lines
were detected toward the V380 Ori NE region (Figure \ref{fig15}).
The CH$_{3}$OH\ lines of KLC 8
have a peak velocity redshifted by $\sim$1.2 km s$^{-1}$\
with respect to the systemic velocity of the cloud,
a line width of $\sim$4 km s$^{-1}$, and prominent redshifted line wings.
The V380 Ori NE region was mapped in the 95 GHz line
with a grid spacing of 10$''$.
The peak position is
near the H$_2$ line emission knots S$_1$--S$_3$ (Figure \ref{fig16}),
which is associated with the redshifted outflow lobe R$_1$
seen in the CO $J$ = 4 $\rightarrow$ 3 line \citep{Davis00}.
The 95 GHz map shows that the source is
marginally resolved in the north-south direction
and unresolved in the east-west direction (Figure \ref{fig16}).
The map does not quite cover the northern (blueshifted) outflow area
but shows a hint of CH$_{3}$OH\ emission in that direction.
The large line width, extended source size, and the elongation suggest
that the CH$_{3}$OH\ line emission of KLC 8 may be thermal
and related to the southern outflow.
\input{fig15.tex}
\input{fig16.tex}
While the detection of CH$_{3}$OH\ emission
shows the presence of shocked gas near the bend of the outflow lobe,
it does not necessarily support or refute the deflection or precession
models \citep{Davis00}.
In the deflection scenario,
the CH$_{3}$OH\ emission may trace the dense gas deflecting the jet.
In the precession scenario,
it may trace the ambient cloud shocked freshly by the jet
carving a cavity in a new direction.
It is interesting to note
that the CH$_{3}$OH\ emission shows a good positional agreement
with the 4.6 \micron\ emission (Figure \ref{fig16}).
The CH$_{3}$OH\ emission in other regions (OMC 2, L1641N, and NGC 2071)
also shows a similar trend (Figures \ref{fig3}, \ref{fig11}, and \ref{fig14}).
\section{DISCUSSION}
Most of the stars in the Orion molecular cloud complex form in clusters.
This clustering should be taken into account
in the interpretation of the survey.
The targets were HOPS protostars showing CO line wings.
The CO observations, however, were made with a large beam,
and the detected CO line wings are not necessarily produced
by the outflows of the target protostars.
For example, the CO wings of HOPS 361
are most likely coming from the outflows driven by NGC 2071 IRS 1/3.
Therefore, the survey targets are not necessarily low-mass protostars
driving molecular outflows,
and they may rather be considered as typical star-forming regions
around low-mass protostars forming in clusters.
The clustering makes the interpretation of the single-dish survey complicated
because the main beam area can contain multiple protostars.
The consequence of clustering is obvious in the survey results.
Many of the detected masers are excited
not by the target HOPS protostars but by other YSOs in the survey fields.
For example, the exciting sources of KLC 2/3/6 are
HH 1--2 VLA 3, HH 1--2 VLA 1, and NGC 2071 IRS 1/3, respectively.
There are also some ambiguous cases.
Therefore, careful mapping is essential in single-dish surveys of masers.
While the effect of clustering in each region is difficult to predict,
we may assume that the degree of clustering
(i.e., the number of protostars in a main-beam area) in the survey regions
is more or less similar within an order of magnitude.
The reason is
that all the target sources in the survey are at similar distances
and that excessively complicated regions, such as the Orion KL region,
were excluded.
\subsection{H$_{2}$O\ Masers}
The nominal detection rate of the single-pointing H$_{2}$O\ maser survey
is 4\% (4/99).
The Orion KL masers hindered the detection of maser from nearby sources,
and the detection rate considering only the unaffected targets
is 5\% (3/60).
Since the mapping observations revealed
two maser sources in the HOPS 167 field,
the detection rate of the full survey is 7\% (4/60).
This value is somewhat smaller than the detection rate of 20--70\%
from previous H$_{2}$O\ maser surveys of YSOs in various stages of evolution
\citep{Churchwell90,Furuya01,Sridharan02,Wilking94,Bae11}.
The relatively low rate from this survey is not surprising
because the targets are low-mass protostars.
Previous surveys are usually biased toward well-known (luminous) YSOs,
massive star-forming regions, or known maser sources.
For example, \cite{Furuya01} reported a detection rate of
40\% for Class 0, 4\% for Class I, and 0\% for Class II low-mass YSOs.
Their sample, however, consists of protostars
known before the {\it Spitzer} data became available.
The results of this work are probably more representative
of typical low-mass star-forming regions.
In the regions described in Section 4,
a typical number of known protostars in a KVN main beam area is $\sim$4.
Then the detection rate,
defined as the number of detected masers per protostar in the covered region,
would be $\sim$1.7\%.
Another way to consider the clustering is to focus on the original target
sources only, not considering the off-center sources.
HOPS 182 (L1641N MM1/3) is the only detection among them,
and the detection rate would be 1.7\% (1/60).
Either way, the per-protostar detection rate is $\sim$1.7\%.
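Explicitly, with four maser sources detected in the 60 unaffected fields and $\sim$4 protostars per main-beam area, the rate is $4/(60\times4) = 1/60 \approx 1.7\%$, the same value obtained by counting only the original targets ($1/60$).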
Therefore, an H$_{2}$O\ maser is a rare phenomenon in low-mass star formation.
Another factor complicating the interpretation is the time variability.
Some of the masers were detectable only at a certain epoch.
The maser in the HOPS 96 field was detected once in four observing runs.
KLC 2 was detected $\sim$4 times in five runs,
and KLC 3 was detected $\sim$2 times in five runs.
KLC 4/6 were detectable in all four runs.
Then the detection probability of these sources during a given run
is $\sim$70\%.
The detection rate (detections per protostar at a given epoch) of the survey
would be $\sim$1.2\%.
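For reference, the per-run detection fractions listed above are roughly $1/4$ (HOPS 96), $4/5$ (KLC 2), $2/5$ (KLC 3), and $4/4$ (KLC 4/6), which average to $\sim$0.7; multiplying the per-protostar rate by this factor gives $0.017\times0.7 \approx 1.2\%$.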
The H$_{2}$O\ maser is a good signpost of star formation activities,
especially shocked regions around the base of jets and on the disks.
Once detected, the H$_{2}$O\ maser is a useful tool
providing information on the mass accretion and ejection processes
on a small scale.
The rarity of H$_{2}$O\ masers, however, is one of the limitations of this tool
because $\sim$98\% of the protostars
would not exhibit detectable H$_{2}$O\ maser emission.
In other words, a non-detection of H$_{2}$O\ maser emission toward a known protostar
would not provide strong constraints on the nature of the object.
One of the important properties of masers is
that the radiation can be anisotropic.
Since the H$_{2}$O\ masers in star-forming regions
are produced in structures that are far from spherical,
the detection of masers can depend
on the geometry of the emission/amplification region
\citep{Elitzur92}.
Therefore, the small detection rate above
means that H$_{2}$O\ masers are rarely detectable,
but does not necessarily imply that they are rarely occurring.
\subsection{CH$_{3}$OH\ Lines}
The survey resulted in the detection of four CH$_{3}$OH\ sources.
For a given source,
the three CH$_{3}$OH\ lines used in the survey have similar spectral profiles,
which strongly suggests that they have the same origin.
Judging from several parameters measured
and from information in the literature,
KLC 1 in the OMC 2 region is clearly a maser source,
KLC 5 in L1641N is an ambiguous case,
and KLC 7/8 are thermal emission sources.
It is possible, however, that the
detected emission can be a mixture of maser and thermal components
to a certain degree \citep{Kalenskii10}.
The detection rate of CH$_{3}$OH\ class I masers is even smaller than
that of the H$_{2}$O\ masers. Since there are only one or two masers detected,
the nominal detection rate of the survey in the 44 GHz line is
1--2\%. The detection rates of CH$_{3}$OH\ masers toward high-mass
protostellar candidates range from 30\% to 50\%
\citep{Fontani10,Haschick90}. It is clear that low-mass protostars
have a much smaller detection rate for CH$_{3}$OH\ class I masers.
Unlike H$_{2}$O\ masers, CH$_{3}$OH\ class I masers are often
detected far from protostars, which may be one of the reasons for
the low detection rate. Since the number of detections is so small,
it is difficult to make a further analysis. The effect of clustering
for the CH$_{3}$OH\ line is expected to be smaller than that for the
H$_{2}$O\ line because the beam size of the 44 GHz line is a factor of two
smaller. KLC 1, however, is in a relatively crowded region (OMC 2
FIR 3/4) and probably not excited by the original target (HOPS 64).
Therefore, the per-protostar detection rate may be much smaller
than 1\%, and the CH$_{3}$OH\ class I maser is an even rarer phenomenon,
at least for low-mass protostars.
\section{SUMMARY}
Ninety-nine protostars in the Orion molecular cloud complex
were observed in the H$_{2}$O\ maser line at 22 GHz
and the CH$_{3}$OH\ class I maser lines at 44, 95, and 133 GHz
with the KVN antennas in its single-dish telescope mode.
The target sources are protostars
identified using the infrared observations
with the {\it Spitzer Space Telescope}.
The survey areas may be considered
as typical regions around low-mass protostars
at the same distance and in similar environments.
The main results are summarized as follows:
1. The H$_{2}$O\ maser line was detected toward four target sources
(HOPS 96, 167, 182, and 361).
The H$_{2}$O\ masers showed significant variability
in intensity and velocity on monthly timescales.
2. Regions around detected H$_{2}$O\ masers were mapped
to identify the YSOs responsible for exciting the masers.
KLC 2/3 in the HOPS 167 field may be excited
by HH 1--2 VLA 3 and VLA 1, respectively.
The VLA 1 H$_{2}$O\ maser is a new detection.
KLC 4 in the HOPS 182 field may be excited by L1641N MM1/3.
KLC 6 in the HOPS 361 field may be excited by NGC 2071 IRS 1/3.
The maser in the HOPS 96 field may be excited by one of the YSOs
in the OMC 3 SIMBA condensations,
which is also a new detection.
3. The detection rate of H$_{2}$O\ masers,
defined as the number of detections per survey field,
is 5--7\%.
This value is lower than those of previous surveys
probably because the targets in this survey are low-mass protostars.
The detection rate, defined as detections per protostar,
is $\sim$2\%.
This small rate suggests
that H$_{2}$O\ masers of low-mass protostars
are a rarely detectable phenomenon.
4. The CH$_{3}$OH\ 44, 95, and 133 GHz lines were detected
toward four target sources (HOPS 64, 182, 361, and 362).
The CH$_{3}$OH\ lines did not show significant variability
and have peak velocities within $\sim$1 km s$^{-1}$\
relative to the systemic velocities of the ambient dense clouds.
The line width is a parameter
useful for distinguishing maser and thermal emission,
and the maser-thermal boundary is at $\sim$2 km s$^{-1}$.
5. Mapping observations in the 95 GHz line show
that the detected CH$_{3}$OH\ sources are related to molecular outflows.
KLC 1 in the HOPS 64 field is most likely a maser source.
Its exciting source may be one of the protostars
in the OMC 2 FIR 3/4 clusters.
KLC 5 in the HOPS 182 field is probably a maser source,
but interferometric observations are needed to verify its nature.
It appears related to the jet driven by L1641N MM1/3.
KLC 7/8 are probably thermal emission sources.
KLC 7 in the HOPS 361 field is
related with the dense cloud core and outflows
in and around the NGC 2071 IRS 1--3 cluster.
KLC 8 in the HOPS 362 field is related
with the southern molecular outflow of V380 Ori NE.
6. The per-field detection rate of CH$_{3}$OH\ class I masers is 1--2\%.
The per-protostar detection rate may be much smaller than 1\%.
This small rate suggests
that CH$_{3}$OH\ class I masers associated with low-mass protostars
are an extremely rare phenomenon.
\acknowledgments
We thank S. T. Megeath for a helpful discussion and
the KVN staff for their support.
J.-E.L. was supported by the Basic Science Research Program
through the National Research Foundation of Korea (NRF)
funded by the Ministry of Education of the Korean government
(grant number NRF-2012R1A1A2044689) and the 2013 Sabbatical Leave
Program of Kyung Hee University (KHU-20131724).
M.C. was supported by the Core Research Program of NRF
funded by the Ministry of Science, ICT and Future Planning
of the Korean government (grant number NRF-2011-0015816).
This work was also supported
by the Korea Astronomy and Space Science Institute (KASI) grant
funded by the Korean government.
\section{Introduction}
A Riemannian manifold $(M, g)$ is called an \emph{Einstein manifold} if its Ricci curvature satisfies $\mathrm{Ric} = \lambda g$ for some constant $\lambda \in \mathbb{R}$. In this paper we are interested in non-compact homogeneous Einstein manifolds; there are also many existence and nonexistence results in the compact case, see for example \cite{WangZiller}. By the classical Bonnet-Myers Theorem, an Einstein manifold with $\lambda>0$ is compact. If $\lambda=0$, a homogeneous Ricci flat space is necessarily flat, see \cite{AlekKim}. So for non-compact homogeneous Einstein manifolds one can assume that $\lambda< 0$.
All known examples of non-compact, nonflat homogeneous Einstein manifolds are isometric to Einstein solvmanifolds. A solvmanifold $(G, g)$ is a simply-connected solvable Lie group $G$ endowed with a left invariant metric $g$. It has been conjectured by D. V. Alekseevskii that any noncompact, nonflat, homogeneous Einstein space $M$ has maximal compact isotropy subgroups, see \cite{Besse} and \cite{Alekseevskii}. If $G$ is a linear group that acts transitively on $M$, this implies that $M$ is a solvmanifold or is diffeomorphic to a Euclidean space, see \cite[Section 2]{Heber}. Einstein solvmanifolds have been intensively investigated in \cite{Heber} and \cite{LauretStandard}.
A natural generalization of an Einstein manifold is a Ricci soliton, i.e., a metric that satisfies the equation
\begin{equation}\label{eqn:RSLieX}
\mathrm{Ric} = \lambda g + \frac{1}{2} \mathscr{L}_X g
\end{equation}
where $X \in \mathfrak{X}(M)$ is a smooth vector field and $\mathscr{L}_X$ is the Lie derivative. A trivial example of a Ricci soliton is an Einstein metric with $X$ a Killing vector field. A Ricci soliton is called non-trivial if $X$ is not a Killing vector field. Under the Ricci flow, a Ricci soliton metric evolves via diffeomorphism and scaling. Besides the important role in the singularity analysis of Ricci flows, the geometry of Ricci solitons shares some common features with Einstein manifolds.
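Concretely (a standard fact, stated under the convention $\partial_t g = -2\,\mathrm{Ric}$ used below), a metric satisfying equation (\ref{eqn:RSLieX}) gives rise to the Ricci flow solution
\[
g(t) = (1-2\lambda t)\,\varphi_t^{*}g,
\]
where $\varphi_t$ is the flow of the time-dependent vector field $-\frac{1}{1-2\lambda t}X$; the metric indeed evolves purely by scaling and diffeomorphisms.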
A Ricci soliton is called a gradient Ricci soliton if $X$ is a gradient vector field. From the work of Ivey, Naber, Perelman and Petersen-Wylie, a nontrivial, nonflat homogeneous Ricci soliton must be noncompact, expanding ($\lambda<0$) and of non-gradient type. In fact, all known examples are isometric to a left-invariant metric $g$ on a simply connected solvable Lie group $G$, which when identified with an inner product on the Lie algebra $\mathfrak{g}$ of $G$ satisfies
\begin{equation}
\mathrm{Ric} = \lambda I + D \label{eqn:AlgSol}
\end{equation}
for some $\lambda \in \mathbb{R}$ and $D\in \mathrm{Der}(\mathfrak{g})$ a symmetric derivation. On the other hand, any left invariant metric which satisfies the above equation is automatically a Ricci soliton, and the diffeomorphisms in its soliton structure are induced by the one-parameter group of Lie algebra automorphisms $e^{tD}$ of $\mathfrak{g}$. A generalized version of the Alekseevskii conjecture claims that these exhaust all examples of nontrivial, nonflat homogeneous Ricci solitons.
Recall that for a Lie group $H$, a Riemannian manifold $(N,g)$ is called $H$-homogeneous if $H$ acts transitively and by isometries on $N$. The concept of a semi-algebraic Ricci soliton was introduced recently by M. Jablonski in \cite{Jablonski} via Ricci flows on homogeneous spaces. Roughly speaking, a Ricci soliton on an $H$-homogeneous space is \emph{semi-algebraic with respect to $H$} if the Ricci flow with initial metric $g$ flows by scaling and automorphisms which preserve the $H$-homogeneous structure (see Definition \ref{def:Jabl} below). Jablonski proves several interesting results, including that every Ricci soliton on a homogeneous space is semi-algebraic with respect to its isometry group. On the other hand, when $H$ is a proper subgroup of the isometry group, an $H$-homogeneous Ricci soliton may not be semi-algebraic with respect to $H$ (see Example 1.3 of \cite{Jablonski}).
In this paper we introduce the notion of a \emph{normal} semi-algebraic Ricci soliton. Roughly speaking, a semi-algebraic soliton on an $H$-homogeneous space is normal if a certain derivation $D$ on the Lie algebra commutes with its adjoint when projected to the tangent space; see Section 2.1 for details. It is an easy consequence of the definition that all algebraic Ricci solitons are normal. Our first result shows that being a normal semi-algebraic Ricci soliton is the condition which allows us to construct a one-dimensional extension which is Einstein.
\begin{thm}\label{thm:Einstein1extsoliton}
A non-flat, non-trivial normal semi-algebraic Ricci soliton on a homogeneous space admits an Einstein one-dimensional extension.
\end{thm}
\begin{rem}
This theorem extends part of the work of J. Lauret on nilpotent groups which states that an algebraic Ricci soliton on a nilpotent group admits an Einstein one-dimensional extension \cite{LauretNil}. His argument relies on the special curvature properties of nilpotent Lie groups. It is unknown whether there are semi-algebraic Ricci solitons which are not algebraic.
\end{rem}
\begin{rem}
Our construction shows that if there is a normal semi-algebraic or algebraic Ricci soliton that is not isometric to a simply-connected solvable Lie group, then there is also a homogeneous Einstein manifold which is not isometric to an Einstein solvmanifold, see Theorems \ref{thm:EinsteinExtDnormal}, \ref{thm:LieStruc} and their remarks. This would give a counter-example to the Alekseevskii conjecture. This result was also obtained in the case of algebraic Ricci solitons in a recent preprint of R. Lafuente and J. Lauret in \cite{LafuenteLauret} using different methods.
\end{rem}
Our next result gives another connection between semi-algebraic Ricci solitons and homogeneous Einstein manifolds. This new connection is obtained by studying a special construction of Einstein metrics as warped product metrics. For constants $\lambda \in \mathbb{R}$ and $m\ne 0$ the space of all solutions to the \emph{$(\lambda, n+m)$-Einstein equation} on a Riemannian manifold $(M^n, g)$ is the following function space
\begin{equation}\label{eqn:Wspacewpe}
W(M, g) = W_{\lambda, n+m}(M, g) = \set{w\in C^{\infty}(M): \mathrm{Hess} w = \frac{w}{m}\left(\mathrm{Ric} - \lambda g\right)}
\end{equation}
A nonzero constant function is in $W(M, g)$ if and only if $(M, g)$ is a $\lambda$-Einstein manifold. When $m\geq 2$ is a positive integer, then $W(M, g)$ contains a positive function $w$ if and only if the product $E = M \times F^m$ with metric $g_E = g + w^2 g_F$ is a $\lambda$-Einstein manifold where the fiber $(F, g_F)$ is an appropriate space form. We call such a manifold $(M, g)$ a \emph{$(\lambda,n+m)$-Einstein manifold}. In this case, we also require $w = 0$ on $\partial M$ if it is non-empty. A $(\lambda, n+m)$-Einstein manifold is called \emph{non-trivial} if the warping function is not a constant.
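This characterization comes from the standard warped product curvature formulas (a sketch; see, e.g., \cite{Besse} for the general statement): for $g_E = g + w^2 g_F$ with $\dim F = m$,
\begin{eqnarray*}
\mathrm{Ric}_E(X,Y) &=& \mathrm{Ric}(X,Y) - \frac{m}{w}\,\mathrm{Hess}\, w(X,Y),\\
\mathrm{Ric}_E(U,V) &=& \mathrm{Ric}_F(U,V) - \left(w\Delta w + (m-1)|\nabla w|^2\right)g_F(U,V),
\end{eqnarray*}
for $X,Y$ tangent to $M$ and $U,V$ tangent to $F$. Imposing $\mathrm{Ric}_E = \lambda g_E$ in the $M$-directions yields exactly the equation defining $W(M,g)$, while the fiber equation determines the Einstein constant of the space form $(F, g_F)$.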
\begin{thm}\label{thm:WPE1extensionintro}
Let $m > 0$ be an integer and $\lambda < 0$ be a constant. A non-flat, non-trivial normal semi-algebraic Ricci soliton admits a homogeneous $(\lambda, n+m)$-Einstein one-dimensional extension.
\end{thm}
Theorems \ref{thm:Einstein1extsoliton} and \ref{thm:WPE1extensionintro} imply the following
\begin{cor}\label{cor:SARiccisiltonDnormal}
Let $m \geq 0$ be an integer. A non-flat, non-trivial normal semi-algebraic Ricci soliton on a homogeneous space $N^{n-1}$ admits a homogeneous Einstein extension $E^{n+m}$.
\end{cor}
\begin{rem}
Another interesting consequence of our construction is that every normal semi-algebraic Ricci soliton can be isometrically embedded into a homogeneous Einstein manifold with an arbitrary codimension.
\end{rem}
\begin{rem}
One of the main tools in the proof of Theorems \ref{thm:Einstein1extsoliton} and \ref{thm:WPE1extensionintro} is a simple construction, called a \emph{one-dimensional extension} of a homogeneous space, see the definition in section 2. It is a natural generalization of the semi-direct product of a Lie group with the real line $\mathbb{R}$, i.e., an abelian extension with the real line. The Ricci curvature of the extension enjoys very nice properties when the original homogeneous space has a normal semi-algebraic Ricci soliton structure, see Lemma \ref{lem:RicciGKHK}.
\end{rem}
We also prove that a converse of both Theorems \ref{thm:Einstein1extsoliton} and \ref{thm:WPE1extensionintro} holds, i.e., if the one-dimensional extension of a semi-algebraic Ricci soliton by $D$ is Einstein or $(\lambda, n+m)$-Einstein, then $D$ is normal. See Theorems \ref{thm:EinsteinExtDnormal} and \ref{thm:WPEextDnormal}. In the case of $(\lambda, n+m)$-Einstein metrics we also prove the following characterization of the spaces in Theorem \ref{thm:WPE1extensionintro}.
\begin{thm} \label{thm:WPEradial}
Let $(M, g)$ be a non-trivial homogeneous $(\lambda, n+m)$-Einstein metric. Then $M$ is a one-dimensional extension of a normal semi-algebraic Ricci soliton if and only if $\nabla_{\nabla w} \mathrm{Ric}=0$.
\end{thm}
Combining this result with Theorem \ref{thm:Einstein1extsoliton} gives us the following corollary.
\begin{cor} \label{cor:WPEradial}
A homogeneous warped product Einstein metric $(E, g_E)$ of the form $g_E = g_M + w^2 g_F$ with $\nabla _{\nabla w} \mathrm{Ric} =0$ on $M$ is diffeomorphic to a product of Einstein metrics.
\end{cor}
\begin{rem}
We do not know of examples of homogeneous $(\lambda, n+m)$-Einstein metrics which do not satisfy $\nabla _{\nabla w} \mathrm{Ric} =0$. In Theorem \ref{thm:structureWPEextension} we also give a structure theorem for the case where $\nabla _{\nabla w} \mathrm{Ric} \neq 0$. $M$ must still be a one-dimensional extension of a space $N$, but $N$ satisfies a slightly different equation than the semi-algebraic soliton equation. We do not know if there are examples that satisfy this equation but are not Ricci solitons.
\end{rem}
\smallskip
The paper is organized as follows. In section 2 we review the definition of semi-algebraic Ricci solitons, define one-dimensional extensions of homogeneous spaces, and recall some useful facts of $(\lambda, n+m)$-Einstein manifolds. In section 3 we study two special cases of one-dimensional extensions: when the extension is Einstein and when the extension is $(\lambda, n+m)$-Einstein. The study of the first case gives a proof of Theorem \ref{thm:Einstein1extsoliton}. In section 4 we apply the results on $(\lambda, n+m)$-Einstein metrics with symmetries in our earlier paper \cite{HPWuniqueness} to study homogeneous $(\lambda, n+m)$-Einstein manifolds. In section 5 we characterize the structure of homogeneous $(\lambda, n+m)$-Einstein manifolds and prove Theorems \ref{thm:WPE1extensionintro} and \ref{thm:WPEradial}. In section 6 we specialize our study of general homogeneous spaces to Lie groups with left invariant metrics. In the appendix, we also give an alternative approach to semi-algebraic Ricci solitons in terms of algebras of vector fields and propose a definition of semi-algebraic Ricci solitons on non-homogeneous spaces.
\smallskip
\textbf{Acknowledgements.} Part of the work was done when the first author was at Lehigh University, and he is very grateful to the institution for its hospitality.
\medskip
\section{Preliminaries}
This section is separated into three subsections. In the first subsection we recall the definiton of semi-algebraic Ricci solitons. In the second subsection we consider the useful construction of a \emph{one-dimensional extension} of a homogeneous space and study how its curvature relates to those on the original manifold. In the third subsection we collect a few relevant facts about $(\lambda, n+m)$-Einstein manifolds from \cite{HPWLcf,HPWuniqueness}.
\subsection{Semi-algebraic Ricci solitons}
We recall the definition of a homogeneous semi-algebraic soliton given in \cite{Jablonski}. First we fix some notation. Let $H$ be a Lie group and $(M = H/K, g)$ be an $H$-homogeneous space, where $K$ is the isotropy subgroup at a fixed point $x\in M$, and let $\mathfrak{h}, \mathfrak{k}$ be the Lie algebras of $H$ and $K$ respectively. Let $\Phi_t \in Aut(H)$ be a family of automorphisms of $H$ such that $\Phi_t(K) = K$; then $\Phi_t$ gives rise to a well defined diffeomorphism $\phi_t$ of $H/K$ defined by
\[ \phi_t(hK) = \Phi_t(h) K \qquad h \in H. \]
\begin{definition} \label{def:Jabl} \cite[Definition 1.4]{Jablonski}
$(H/K,g)$ is a semi-algebraic Ricci soliton with respect to $H$ if there exists a family of automorphisms $\Phi_t \in Aut(H)$ such that $\Phi_t(K) = K$ and
\[ g_t = c(t) \phi^*_t(g) \]
is a solution to the Ricci flow
\[ \frac{\partial}{\partial t} g = - 2 \mathrm{Ric}_g \] on $H/K$ with $g_0=g$.
\end{definition}
Fix an $\mathrm{Ad}(K)$-invariant decomposition $\mathfrak{h} = \mathfrak{p} \oplus \mathfrak{k}$ and let $\mathrm{pr}: \mathfrak{h} \rightarrow \mathfrak{p}$ be the orthogonal projection. $\mathfrak{p}$ is then naturally identified with $T_xM$. Jablonski also proves the following proposition about semi-algebraic solitons.
\begin{prop} \label{prop:Jabl} \cite[Proposition 2.3]{Jablonski}
If $(H/K,g)$ is a semi-algebraic Ricci soliton with respect to a Lie Algebra $H$ then there exists a derivation $D \in \mathrm{Der}(\mathfrak{h})$ such that
\[
\mathrm{Ric} = \lambda I + \frac{1}{2} \left(\mathrm{pr}\circ D + (\mathrm{pr}\circ D)^*\right).
\]
Here $^* $ denotes the adjoint with respect to the metric $g$ on $\mathfrak{h}$. Moreover, we may assume that $D|_{\mathfrak{k}} = 0$.
\end{prop}
The condition which will become important in constructing extensions is that the derivation $D$ be a normal operator, at least when projected to $\mathfrak{p}$. In particular, we give the following definition.
\begin{definition} A semi-algebraic Ricci soliton is \emph{normal} if the map $\mathrm{pr} \circ D \circ \mathrm{pr} : \mathfrak{h} \rightarrow \mathfrak{h}$ is a normal operator. \end{definition}
\begin{rem} A Ricci soliton on an $H$-homogeneous space is called \emph{algebraic} if
\[ \mathrm{Ric} = \lambda I + \mathrm{pr} \circ D \]
for some $D \in \mathrm{Der}(\mathfrak{h})$. Since the Ricci tensor is a symmetric operator, algebraic solitons are always normal.
\end{rem}
Recall that, if we consider the symmetric and anti-symmetric parts of $D$
\begin{eqnarray}
S &=& \frac{1}{2}(\mathrm{pr} \circ D \circ \mathrm{pr} + (\mathrm{pr} \circ D \circ \mathrm{pr})^*) \label{eqn:symmD}\\
A &=& \frac{1}{2}(\mathrm{pr} \circ D \circ \mathrm{pr} - (\mathrm{pr} \circ D \circ \mathrm{pr})^*), \label{eqn:antisymmD}
\end{eqnarray}
the operator $\mathrm{pr} \circ D \circ \mathrm{pr}$ will be normal if and only if $S$ and $A$ commute, $[S,A]=0$. We will find $[S,A]$ to be an important term in the calculation of Ricci curvatures of extensions in the next subsection.
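Indeed, writing $D' = \mathrm{pr} \circ D \circ \mathrm{pr} = S + A$, so that $(D')^{*} = S - A$, a one-line computation gives
\[
D'(D')^{*} - (D')^{*}D' = (S+A)(S-A) - (S-A)(S+A) = 2[A,S],
\]
so $D'$ is normal precisely when $[S,A]=0$.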
\subsection{One-dimensional extension of homogeneous spaces}
We recall some general facts about extensions of Lie groups and Lie algebras, i.e., semi-direct products. Let $H$ be a Lie group and let $(N,h)$ be an $H$-homogeneous space. By passing to its universal cover if necessary, we may assume that $H$ is simply-connected. We use the same notation for $K$, $\mathfrak{h}$, $\mathfrak{k}$, $\mathfrak{p}$, $\mathrm{pr}$ as in the previous subsection.
To construct an extension we fix a constant $\alpha\in \mathbb{R}$ and a derivation $D \in \mathrm{Der}(\mathfrak{h})$ of the Lie algebra which preserves $\mathfrak{k}$, and consider the new Lie algebra
\[ \mathfrak{g} = \mathfrak{h} \oplus \mathbb{R} \xi \]
on which the Lie bracket operation is given by
\[ \mathrm{ad}_{\xi}(X) = \alpha D(X), \qquad \text{for all } X \in \mathfrak{h}.\]
Let $G$ be the simply-connected Lie group with Lie algebra $\mathfrak{g}$ that contains $H$ as a subgroup. Since $\mathrm{ad}_{\xi}(X) \in \mathfrak{h}$ for any $X\in \mathfrak{h}$, $H$ is a codimension one normal subgroup of $G$ and $G$ is a semi-direct product $G= H \ltimes \mathbb{R}$. Given the $\mathrm{Ad}(K)$-invariant decomposition $\mathfrak{h} = \mathfrak{p} \oplus \mathfrak{k}$, we have the corresponding $\mathrm{Ad}(K)$-invariant decomposition $\mathfrak{g} = \mathfrak{q} \oplus \mathfrak{k}$, where $\mathfrak{q} = \mathfrak{p} \oplus \mathbb{R} \xi$, and we identify $G$-invariant metrics with the restriction of $\mathrm{Ad}(K)$-invariant inner products on $\mathfrak{g}$ to $\mathfrak{q}$.
This extension of Lie groups defines a natural extension of homogeneous spaces.
\begin{definition}
Let $(N, h)$ be an $H$-homogeneous space. For a constant $\alpha \in \mathbb{R}$ and a derivation $D\in \mathrm{Der}(\mathfrak{h})$, the \emph{one-dimensional extension} of $(N,h)$ is a $G$-homogeneous space $(M,g)$ with $M = G/K$ and
\begin{eqnarray*}
g |_{\mathfrak{p}} &=& h, \\
g(\xi, X) &=& 0 \qquad \text{for all } X \in \mathfrak{p}, \\
g( \xi, \xi) &=& 1,
\end{eqnarray*}
where $G = H\ltimes \mathbb{R}$ is the semi-direct product of $H$ and $\mathbb{R}$ by $D$ and $\alpha$.
\end{definition}
\begin{rem}
Note that for the commutator series we have
\[
\mathfrak{h}^1 = [\mathfrak{h}, \mathfrak{h}] \subset \mathfrak{g}^1 = [\mathfrak{g}, \mathfrak{g}] \subset \mathfrak{h}.
\]
It follows that $\mathfrak{h}$ is a solvable Lie algebra if and only if $\mathfrak{g}$ is solvable.
\end{rem}
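A standard example to keep in mind: taking $H = \mathbb{R}^{n-1}$ abelian with the flat metric, $D = \mathrm{Id}$, and $\alpha = 1$, the one-dimensional extension is the solvable Lie group with $\mathrm{ad}_{\xi} = \mathrm{Id}$ on $\mathbb{R}^{n-1}$, and its left invariant metric is isometric to the real hyperbolic space of constant curvature $-1$.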
In the following we compute the curvatures of an extension and relate them back to the curvatures of $(N,h)$, the derivation $D$, and the constant $\alpha$. To do so we consider the codimension one submanifold $H/K \subset M = G/K$ which is the $H$ orbit at $x \in M$. The vector $\xi$ is a unit normal vector to $H/K$ at $x$. Using left translations of $H$ from $x$ we obtain a unit normal vector field to $H/K$ which is also denoted by $\xi$. The second fundamental form of $N \subset M$ is then given by
\[
\mathrm{II}_x (X, Y) = g(T(X), Y) = g(\nabla^M_X \xi, Y)
\]
where $T: T_x N \rightarrow T_x N$ is the shape operator. Recall that $S$ and $A$ denote the symmetric and anti-symmetric parts of $D$, restricted to $\mathfrak{p} \simeq T_xM$, as in (\ref{eqn:symmD}) and (\ref{eqn:antisymmD}):
\begin{eqnarray*}
S &=& \frac{1}{2}(\mathrm{pr} \circ D + \mathrm{pr} \circ D^*) \\
A &=& \frac{1}{2}(\mathrm{pr} \circ D - \mathrm{pr} \circ D^*).
\end{eqnarray*}
The next proposition relates the tensors $S$ and $A$ to the shape operator $T$.
\begin{prop} \label{prop:1extensionST}
Let $(M,g)$ be the one-dimensional extension of $(N,h)$ with
\[ \mathrm{ad}_{\xi}(X) = \alpha D(X) \qquad \text{for all } X \in \mathfrak{h}.\]
Then
\begin{eqnarray*}
T &=& -\alpha S \\
\nabla_{\xi} T &=& \alpha^2 [S,A].
\end{eqnarray*}
\end{prop}
\begin{proof}
We start with the calculation on the Lie group $G$, endowed with a metric so that the quotient map $G \rightarrow G/K$ is a Riemannian submersion. Then we have
\begin{eqnarray*}
\alpha D(X) = \mathrm{ad}_{\xi}(X) = - \nabla^G_X \xi + \nabla^G_{\xi} X.
\end{eqnarray*}
Since $\nabla^G_{\cdot} \xi$ is the shape operator of $H \subset G$ which is symmetric and $\nabla^G_{\xi} \cdot$ is skew-symmetric, we have
\begin{eqnarray*}
\nabla^G_{\xi} X &=& \frac{\alpha}{2} \left( D - D^* \right) (X)\\
\nabla^G_{X} \xi &=& - \frac{\alpha}{2} \left( D + D^* \right) (X).
\end{eqnarray*}
Now choosing $X$ in $\mathfrak{p}$ is equivalent to $X$ being a basic horizontal field of the Riemannian submersion $G \rightarrow G/K$, see \cite[Chapter 3]{CheegerEbin}. The unit vector $\xi$ is also basic and horizontal. Therefore, we have
\begin{eqnarray*}
\nabla_{\xi} X &=& \mathrm{pr}\left( \nabla^G_{\xi} X\right) = \alpha A(X) \\
T(X) &=& \nabla_{X} \xi = \mathrm{pr}\left( \nabla^G_{X} \xi \right)= - \alpha S(X).
\end{eqnarray*}
This also gives us
\begin{eqnarray*}
(\nabla_{\xi} T)(X) &=& \nabla_{\xi}(T(X)) - T\left( \nabla_{\xi} X \right) \\
&=& \left( (\alpha A )\circ (- \alpha S)  + (\alpha S) \circ(\alpha A) \right)(X)\\
&=& \alpha^2 [S,A](X)
\end{eqnarray*}
which finishes the proof.
\end{proof}
\begin{rem} While the tensor $S$ is defined on a chosen tangent space $T_xN$, the formula $T=-\alpha S$ defines $S$ geometrically over the entire manifold $M$ and allows us to define covariant derivatives and the divergence of the tensor $S$. Note that, since the unit normal vector $\xi$ is invariant under the group $H$, so are $T$ and thus $S$.
\end{rem}
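One immediate consequence of the proposition is worth recording (it is not stated explicitly above): if $D$ is symmetric, then $A = 0$ and the formulas reduce to
\[
T = -\alpha S, \qquad \nabla_\xi T = 0.
\]
In particular, in the flat example with $D = \mathrm{Id}$ the orbits of $H$ are totally umbilic with $T = -\alpha\, \mathrm{Id}$.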
Combining these formulas with the Gauss, Codazzi, and radial curvature equations, see \cite{PetersenGTM}, we have the following formulas for the Ricci curvatures.
\begin{lem} \label{lem:RicciGKHK}
Let $(N,h)$ be an $H$-homogeneous space, $D$ a derivation of $\mathfrak{h}$, and let $(M,g)$ be the one-dimensional extension with
\[
\mathrm{ad}_{\xi}(X) = \alpha D(X) \qquad \text{for all }X \in \mathfrak{h}.
\]
Then the Ricci tensor of $(M,g)$ is given by
\begin{eqnarray}
\mathrm{Ric}\left(\xi,\xi\right) & = & - \alpha^2 \mathrm{tr}(S^2) \notag\\
\mathrm{Ric}\left(X,\xi\right) & = & - \alpha \mathrm{div}(S) \label{eqn:RicS} \\
\mathrm{Ric}\left(X,X\right) & = & \mathrm{Ric}^{N}(X,X)-\left(\alpha^2 \mathrm{tr}S\right)h\left(S(X),X\right) - \alpha^2 h([S,A](X), X) \notag
\end{eqnarray}
\end{lem}
\begin{rem}
When $D$ is normal, we have $[S, A] = 0$. Thus the Ricci curvatures of $M$ do not depend on the skew-symmetric part $A$ and have a very simple form in terms of $S$ and Ricci curvatures of $N$.
\end{rem}
\begin{proof}
The radial, Gauss, and Codazzi equations tell us the curvatures on $(M,g)$ have the following forms
\begin{eqnarray*}
R(X,\xi)\xi & = & -\left(\nabla_{\xi}T\right)\left(X\right)- T^{2}\left(X\right)\\
R(X,Y,Z,W) & = & R^{N}(X,Y,Z,W)-g(T(Y),Z)g\left(T(X),W\right)+g(T(X),Z)g\left(T(Y),W\right)\\
R(X,Y,Z,\xi) & = & -g\left(\left(\nabla_{X}T\right)\left(Y\right)-\left(\nabla_{Y}T\right)\left(X\right),Z\right).
\end{eqnarray*}
Let $\set{X_i}_{i=1}^{n-1}$ be an orthonormal basis of $\mathfrak{p}$, then the Ricci tensor satisfies
\begin{eqnarray*}
\mathrm{Ric}\left(\xi,\xi\right) & = & \sum R(X_{i},\xi,\xi,X_{i})\\
& = & -D_{\xi}(\mathrm{tr}T)-\mathrm{tr}\left(T^{2}\right) \\
\mathrm{Ric}\left(X,\xi\right) & = & \sum R(X_{i}, X, \xi, X_{i}) = -\sum g\left(\left(\nabla_{X}T\right)\left(X_{i}\right)-\left(\nabla_{X_{i}}T\right)\left(X\right),X_{i}\right)\\
& = & -D_{X}\left(\mathrm{tr}T\right)+\mathrm{div}T(X)
\end{eqnarray*}
and
\begin{eqnarray*}
\mathrm{Ric}\left(X,X\right) & = & \sum R(X_{i},X,X,X_{i})\\
& = & R(\xi,X,X,\xi)+\sum_{i<n}R(X_{i},X,X,X_{i})\\
& = & -g\left(\left(\nabla_{\xi}T\right)(X),X\right)-g\left(T^{2}(X),X\right) \\
& & +\mathrm{Ric}^{N}(X,X)-\sum_{i<n}g(T(X_{i}),X_{i})\,g\left(T(X),X\right)+\sum_{i<n}g(T(X),X_{i})\,g\left(T(X_{i}),X\right)\\
& = & \mathrm{Ric}^{N}(X,X)-\left(\mathrm{tr}T\right)g\left(T(X),X\right) - g\left(\left(\nabla_{\xi}T\right)(X),X\right).
\end{eqnarray*}
Note that $T$ is invariant under the isometries and so $\mathrm{tr}(T)$ is constant. Substituting in $T=-\alpha S$ and $\nabla_\xi T = \alpha^2 [S,A]$ from Proposition \ref{prop:1extensionST} gives us
\begin{eqnarray}
\mathrm{Ric}\left(\xi,\xi\right) & = &- \alpha^2 \mathrm{tr}(S^2) \notag\\
\mathrm{Ric}\left(X,\xi\right) & = & -\alpha \mathrm{div}(S) \notag \\
\mathrm{Ric}\left(X,X\right) & = & \mathrm{Ric}^{N}(X,X)-\left(\alpha^2 \mathrm{tr}S\right)h\left(S(X),X\right) - \alpha^2 h([S,A](X), X) \notag
\end{eqnarray}
which finishes the proof.
\end{proof}
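As a consistency check of the lemma (continuing the flat example $\mathfrak{h} = \mathbb{R}^{n-1}$, $D = \mathrm{Id}$ from above), we have $S = \mathrm{Id}$, $A = 0$, $\mathrm{div}(S) = 0$ and $\mathrm{Ric}^N = 0$, so the formulas give
\[
\mathrm{Ric}(\xi,\xi) = -(n-1)\alpha^2, \qquad \mathrm{Ric}(X,\xi) = 0, \qquad \mathrm{Ric}(X,X) = -(n-1)\alpha^2\, h(X,X),
\]
i.e., the extension is Einstein with Einstein constant $-(n-1)\alpha^2$, as expected for the hyperbolic space of curvature $-\alpha^2$.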
\subsection{$(\lambda, n+m)$-Einstein manifold with symmetries}
Recall that the space $W = W_{\lambda, n+m}(M, g)$ consists of the solutions to the following $(\lambda, n+m)$-Einstein equation
\begin{equation}
\mathrm{Hess} w = \frac{w}{m}\left(\mathrm{Ric} - \lambda g\right).
\end{equation}
The space $W$ is clearly a vector space. Moreover, there is an associated quadratic form
\begin{equation}\label{eqn:muw}
\mu(w) = w \Delta w + (m-1)\abs{\nabla w}^2 + \lambda w^2.
\end{equation}
In the case when $(M, g)$ is a $(\lambda, n+m)$-Einstein manifold with the warping function $w$, $\mu(w)$ is the Ricci curvature of $(F, g_F)$ and the warped product metric $g + w^2 g_F$ is Einstein with Einstein constant $\lambda$. In a series of papers \cite{HPWLcf,HPWconstantscal,HPWuniqueness} the geometry of $(\lambda, n+m)$-Einstein manifolds has been intensively studied for its connection to comparison geometry of $m$-Bakry--\'Emery tensors, gradient Ricci solitons, etc. In particular, in \cite{HPWconstantscal} we showed that there are non-trivial $(\lambda, 4+m)$-Einstein metrics on certain 4-dimensional solvable Lie groups with left invariant metrics. This gave the first examples of non-trivial homogeneous $(\lambda, n+m)$-Einstein manifolds.
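To illustrate these definitions in the simplest possible case (an elementary computation, not one of the examples mentioned above): take $(M, g) = (\mathbb{R}, dt^2)$ and $w = e^{Lt}$. Then $\mathrm{Hess}\, w = L^2 w\, dt^2$ and $\mathrm{Ric} = 0$, so $w \in W_{\lambda, 1+m}(\mathbb{R}, dt^2)$ exactly when $\lambda = -mL^2$, and
\[
\mu(w) = L^2 w^2 + (m-1)L^2 w^2 - mL^2 w^2 = 0.
\]
Accordingly $(F, g_F) = (\mathbb{R}^m, g_{\mathbb{R}^m})$, and the warped product $\mathbb{R} \times_w \mathbb{R}^m$ is the hyperbolic space $H^{m+1}$, which is Einstein with Einstein constant $\lambda = -mL^2$.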
By studying the space $W$ with the quadratic form $\mu$ we obtained in \cite{HPWuniqueness} a few uniqueness theorems. A main result from \cite{HPWuniqueness} is the following structure theorem when $\dim W \geq 2$.
\begin{thm}[Theorem 2.6 in \cite{HPWuniqueness}]\label{thm:wpedecomposition}
Let $(M^n, g)$ be a complete simply-connected Riemannian manifold with $\dim W_{\lambda, n+m}(M,g) = k+1$, then
\[
M = B^b \times_u F^k
\]
where
\begin{enumerate}
\item $B$ is a manifold, possibly with boundary, and $u$ is a nonnegative function in $W_{\lambda, b+(k+m)}(B, g_B)$ with $u^{-1}(0) = \partial B$,
\item the function $u$ spans $W_{\lambda, b+(k+m)}(B, g_B)$, and
\item the fiber $F^k$ is a space form with $\dim W_{\mu_B(u), k+m}(F,g_F) = k+1$, where $\mu_B$ denotes the quadratic form on $W_{\lambda, b+(k+m)}(B, g_B)$.
\end{enumerate}
Moreover,
\begin{enumerate} \setcounter{enumi}{3}
\item $W_{\lambda, n+m}(M,g) = \{ uv : v \in W_{\mu_B(u), k+m}(F, g_F) \}$.
\end{enumerate}
\end{thm}
The above theorem motivated the following
\begin{definition}
Let $(B^b, g_B)$ be a Riemannian manifold possibly with boundary and let $u$ be a nonnegative function on $B$ with $u^{-1}(0) = \partial B$. Then $(B, g_B, u)$ is called a \emph{$(\lambda,k+m)$-base manifold} if $\left(W_{\lambda, b+(k+m)}(B, g_B)\right)_D = \text{span}\set{u}$, where $W_D$ denotes the solutions satisfying Dirichlet boundary conditions. It is called an \emph{irreducible base manifold} if $W_{\lambda, b+(k+m)}(B, g_B)= \text{span}\set{u}$ with no boundary conditions imposed.
\end{definition}
We showed that the converse of the above theorem also holds.
\begin{thm}[Theorem 2.10 in \cite{HPWuniqueness}] \label{thm:ExistenceMBF}
Given an irreducible $(\lambda,k+m)$-base manifold $(B, g_B, u)$ there is a complete metric of the form
\[ M = B^b \times_u F^k \]
such that $\dim W_{\lambda, (b+k) +m} (M,g_M) = k+1$.
\end{thm}
\begin{rem} If $\partial B = \emptyset$, $\mu_B(u)>0$, and $k=1$ there are two such metrics corresponding to the choice $F= \mathbb{R}$ or $F = \mathbb{S}^1$. Otherwise, the warped product over $B$ with $\dim W_{\lambda, (b+k) +m} (M,g_M) = k+1$ is unique. In this case, we call $M$ the \emph{ $k$-dimensional elementary warped product extension} of $(B,g_B, u)$.
\end{rem}
In Section 5 of \cite{HPWuniqueness} we established the following relationship between isometries on $M$ and $B$. Note that any isometry $\sigma$ of $M$ has a natural action on $W(M, g)$ sending $w$ to $w\circ \sigma^{-1}$.
\begin{thm}[Theorem 5.7 in \cite{HPWuniqueness}]\label{thm:isometryMBF}
Let $M$ be a simply connected $(\lambda, n+m)$-Einstein manifold with $\dim W_{\lambda, n+m}(M)=k+1>1$. Then the isometry group of $M$ consists of maps $h:M \rightarrow M$ of the form
\[
h = h_1 \times h_2 \quad \text{with} \quad h_1: B \rightarrow B, \quad h_2: F \rightarrow F,
\]
where $h_1 \in \mathrm{Iso}(B, g_B)$ and
\begin{enumerate}
\item If $\mu(u) \neq 0$ then $h_2 \in \mathrm{Iso}(F, g_F)$.
\item If $\mu(u) = 0$ then $h_2$ is a $C$-homothety of $\mathbb{R}^k$ where $C = C_{h_1}$ is the constant so that $u \circ h_1^{-1} = C_{h_1} u$. Namely,
\[
h_2(v) = b + C A(v) \quad \text{with} \quad b \in \mathbb{R}^k \text{ and } A \in \mathrm{O}(\mathbb{R}^k).
\]
\end{enumerate}
\end{thm}
\begin{rem}
Note that the isometry group $\mathrm{Iso}(M,g)$ contains the following subgroup
\[
\mathrm{Iso}(B,g_B)_u \times \mathrm{Iso}(\mathbb{R}^k),
\]
where $\mathrm{Iso}(B,g_B)_u = \set{\sigma \in \mathrm{Iso}(B,g_B) : u = u\circ \sigma^{-1}}$. This subgroup has codimension one unless $u$ is a constant function.
\end{rem}
\medskip
\section{One-dimensional extension of homogeneous space}
In this section, using Lemma \ref{lem:RicciGKHK}, we obtain the conditions under which a homogeneous space admits an Einstein one-dimensional extension, see Theorem \ref{thm:EinsteinConstruction}, and then prove Theorem \ref{thm:Einstein1extsoliton} in the introduction, see Theorem \ref{thm:EinsteinExtDnormal}. In the second part, we show that such a space also admits a one-dimensional extension which is $(\lambda, n+m)$-Einstein with $\lambda < 0$, see Theorem \ref{thm:WPE1extension}.
\subsection{Einstein one-dimensional extension}
First we have
\begin{thm} \label{thm:EinsteinConstruction}
Suppose $(N,h)$ is an $H$-homogeneous space and $D$ is a derivation of $\mathfrak{h}$ such that the following conditions hold
\begin{enumerate}
\item $ \mathrm{Ric}^N = \lambda I + S + \dfrac{1}{\mathrm{tr}(S)} [S,A] $
\item $\mathrm{div}(S) = 0$
\item $\mathrm{tr}(S^2) = -\lambda \mathrm{tr}(S)$
\end{enumerate}
for some constant $\lambda <0$, then the one-dimensional extension of $(N,h)$ with $D$ and $\alpha^2 = 1/\mathrm{tr}(S)$ is $\lambda$-Einstein.
\end{thm}
\begin{proof}
From Lemma \ref{lem:RicciGKHK} we see that $\mathrm{div}(S) = 0$ is equivalent to $\mathrm{Ric}\left(X,\xi\right) = 0$. The condition $\mathrm{tr}(S^2) = -\lambda \mathrm{tr}(S)$ then tells us that
\[ \mathrm{Ric}\left(\xi,\xi\right) = \lambda \alpha^2 \mathrm{tr}(S). \]
So if we choose $\alpha$ so that $\alpha^2 = \dfrac{1}{\mathrm{tr}(S)}$, then $\mathrm{Ric}(\xi, \xi)=\lambda$. Then we have
\begin{eqnarray*}
\mathrm{Ric}\left(X,X\right) & = & \mathrm{Ric}^{N}(X,X)-h\left(S(X),X\right) - \frac{1}{\mathrm{tr}(S)} h([S,A](X), X) \\
&=& \lambda g(X,X)
\end{eqnarray*}
from condition (1). This shows that the one-dimensional extension $(M,g)$ of $(N, h)$ is Einstein with Einstein constant $\lambda$.
\end{proof}
In the case when $(N, h)$ is a semi-algebraic Ricci soliton the theorem above gives us Theorem \ref{thm:Einstein1extsoliton} in the introduction.
\begin{thm}\label{thm:EinsteinExtDnormal}
A non-flat, non-trivial semi-algebraic Ricci soliton on a homogeneous space
admits an Einstein one-dimensional extension with
\[
\mathrm{ad}_{\xi}(X) = \alpha D(X), \qquad \text{for all }X \in \mathfrak{h}
\]
if and only if $D$ is normal.
\end{thm}
\begin{proof}
Let $(N^{n-1},h)$ be a semi-algebraic Ricci soliton
\[
\mathrm{Ric} = \lambda I +S
\]
for some constant $\lambda$. By exponentiating, we can identify $S$ with the Lie derivative $\frac{1}{2} \mathscr{L}_Y h $ for some vector field $Y \in \mathfrak{X}(N)$ and get a Ricci soliton structure on $N$. This gives us
\[
\mathrm{div}( \mathscr{L}_Y h) = 2 \mathrm{div}(\mathrm{Ric}) = d \mathrm{scal} = 0
\]
and so $\mathrm{div}(S) = 0$. Recall the formula for the Laplacian of the scalar curvature of a Ricci soliton
\[
\Delta(\mathrm{scal}) - D_Y \mathrm{scal} = \lambda \mathrm{scal} - |\mathrm{Ric}|^2.
\]
Since the scalar curvature is constant, we have
\[
\abs{\mathrm{Ric}}^2 = \lambda \mathrm{scal}
\]
which is equivalent to $\mathrm{tr}(S^2) = -\lambda \mathrm{tr}(S)$.
If $D$ is normal, then $[S,A]=0$ and then from the previous Theorem \ref{thm:EinsteinConstruction} there is a one-dimensional extension of $(N, h)$ which is Einstein.
On the other hand, if we have a non-trivial semi-algebraic Ricci soliton and a one-dimensional extension which is $\lambda$-Einstein, then the equation $\mathrm{tr}(S^2) = -\lambda \mathrm{tr}(S)$ and Ricci curvature $\mathrm{Ric}^M(\xi, \xi) = -\alpha^2 \mathrm{tr}(S^2) = \lambda$ imply that $\alpha^2 \mathrm{tr}(S) = 1$. Plugging this into the equation for $\mathrm{Ric}^M(X,X)$ in Lemma \ref{lem:RicciGKHK} then implies that $[S,A]=0$, i.e., $D$ is normal.
To finish the proof we show that if the extension has Einstein constant $c \neq \lambda$ then $N$ is flat. Now we must choose $\alpha$ so that $\alpha^2 = \dfrac{c}{\lambda \mathrm{tr}(S)}$, then the third equation in Lemma \ref{lem:RicciGKHK} tells us that on $N$ we have
\begin{eqnarray*}
\alpha^2 [S,A] + \left(\frac{c}{\lambda} - 1\right) S + (c - \lambda) I = 0.
\end{eqnarray*}
Tracing this equation yields
\begin{eqnarray*}
\left(\frac{c}{\lambda} - 1\right) \mathrm{tr}(S) + (c - \lambda) (n-1) = 0.
\end{eqnarray*}
Since $c \neq \lambda$ this is equivalent to
\begin{eqnarray*}
\mathrm{tr}(S) + \lambda(n-1) = 0.
\end{eqnarray*}
But this implies that $\mathrm{scal}=0$, which implies $|\mathrm{Ric}|^2 =0$ on $N$. So $N$ is flat, since a homogeneous Ricci-flat space is flat, see for example \cite[Theorem 7.61]{Besse}.
\end{proof}
\smallskip
\subsection{$(\lambda, n+m)$-Einstein one-dimensional extension}
We consider the same set-up as in the previous subsection. Namely $(N^{n-1}, h)$ is an $H$-homogeneous space and $(M^n,g)$ is a one-dimensional extension of $N$ by a derivation $D\in \mathrm{Der}(\mathfrak{h})$ and a constant $\alpha$. We let $r$ be the signed distance function on $M$ to the hypersurface $N$. For reasons made clear below, we let
\[ w(r) = e^{L r} \]
for some constant $L$ which is also determined later.
\begin{thm}\label{thm:WPE1extension}
Let $m>0$ be an integer. Suppose $(N^{n-1},h)$ is an $H$-homogeneous space with a derivation $D \in \mathrm{Der}(\mathfrak{h})$ which satisfies the following conditions
\begin{enumerate}
\item $\mathrm{Ric}^N = \lambda I + S + \dfrac{1}{\mathrm{tr}(S)-\lambda m} [S,A] $
\item $\mathrm{div}(S) = 0$
\item $\mathrm{tr}(S^2) = -\lambda \mathrm{tr}(S)$
\end{enumerate}
for some constant $\lambda <0$. Then the one-dimensional extension of $(N, h)$ with $D$ and $\alpha^2 = 1/(\mathrm{tr}(S) - \lambda m)$ is a $(\lambda, n+m)$-Einstein manifold with warping function
\[
w(r) = e^{Lr} \quad \text{where} \quad L = \lambda \alpha.
\]
\end{thm}
\begin{proof}
Setting $w=e^{Lr}$ we have
\begin{eqnarray*}
\mathrm{Hess} w = L^2 w dr\otimes dr + L w g(T(\cdot),\cdot)
\end{eqnarray*}
where $dr\otimes dr(X, Y) = X(r) Y(r)$ and $T(X) = \nabla_X \nabla r$ for any $X, Y \in T_xM$. Note that $\xi = \nabla r$. Applying the assumption on $N$ along with Lemma \ref{lem:RicciGKHK} we obtain the following equations
\begin{eqnarray*}
\left( \mathrm{Ric} - \frac{m}{w} \mathrm{Hess} w\right)(\xi, \xi) &=& \lambda \alpha^2 \mathrm{tr}(S) - mL^2 \\
\left( \mathrm{Ric} - \frac{m}{w} \mathrm{Hess} w\right)( X, X) &=& \lambda h(X,X)+(1 -\alpha^{2}\left(\mathrm{tr}S\right) + mL \alpha )h\left(S(X),X\right) \\
&& + \left(\frac{1}{\mathrm{tr}(S)-\lambda m}- \alpha^2\right) h([S,A](X), X)
\end{eqnarray*}
We wish to show that there is a choice of $\alpha$ and $L$ such that $ \mathrm{Ric} - \dfrac{m}{w} \mathrm{Hess} w = \lambda g$. From the first equation above we can see that a necessary condition is that
\[
\alpha^2 \mathrm{tr}(S) = 1 + \frac{m}{\lambda}L^2.
\]
Plugging this condition into the trace of the second equation above, we obtain
\[
mL\left(\alpha - \frac{L}{\lambda}\right) = 0
\]
which indicates we should either choose $L=0$ or $L= \lambda \alpha$. The case $L=0$ corresponds to the Einstein extension discussed in Theorem \ref{thm:EinsteinConstruction}. In the other case plugging $L= \lambda \alpha$ back into the first equation gives
\[
\alpha^2 = \frac{1}{\mathrm{tr}(S) - \lambda m}.
\]
This choice of $\alpha$ also makes the $[S,A]$ term vanish. Note that since $\mathrm{tr}(S)>0$ and $\lambda<0$ the quantity on the right is positive, so there is a solution to this equation.
\end{proof}
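The theorem can be checked explicitly in a degenerate case (flat $N$, which is excluded in the soliton statements below but satisfies the hypotheses here): take $N = \mathbb{R}^{n-1}$ flat with $D = \mathrm{Id}$, so that $S = \mathrm{Id}$ and $A = 0$. Conditions (1)--(3) hold with $\lambda = -1$, and the theorem yields $\alpha^2 = 1/(n-1+m)$ and $w = e^{\lambda\alpha r} = e^{-\alpha r}$. The extension is the hyperbolic space $H^n$ of curvature $-\alpha^2$, and the associated Einstein warped product $H^n \times_w \mathbb{R}^m$ is the hyperbolic space $H^{n+m}$ with Einstein constant $\lambda = -1$.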
We note that, in the special case where $N$ is a normal semi-algebraic Ricci soliton this gives us the following.
\begin{cor} \label{cor:SolitonQuasiEinstein}
Let $(N,h)$ be a non-flat $H$-homogeneous normal semi-algebraic Ricci soliton. For every $m>0$ there is a one-dimensional extension $(M, g_m)$ of $(N,h)$ such that $(M,g_m)$ is a $(\lambda, n+m)$-Einstein manifold.
\end{cor}
\begin{proof}
Let $(N^{n-1}, h)$ be a semi-algebraic Ricci soliton
\[
\mathrm{Ric} = \lambda I + S
\]
for some constant $\lambda < 0$. As in the proof of Theorem \ref{thm:EinsteinExtDnormal}, we have
\[
\mathrm{div} S = 0,\quad \mathrm{tr}(S^2) = - \lambda \mathrm{tr}(S) \quad \text{and}\quad \abs{\mathrm{Ric}}^2 = \lambda \mathrm{scal}.
\]
If $D$ is normal, then $[S,A] = 0$ and from Theorem \ref{thm:WPE1extension} there is a one-dimensional extension of $(N, h)$ which is $(\lambda, n+m)$-Einstein.
\end{proof}
\begin{rem}
The examples in Theorems \ref{thm:EinsteinConstruction} and \ref{thm:WPE1extension} illustrate that there can be different base metrics on a given topological manifold which produce different homogeneous warped product Einstein metrics. More precisely, let $(N^{n-1},h)$ be a semi-algebraic Ricci soliton with a normal derivation $D$ and $M^n = N \times \mathbb{R}$. Then on the product manifold
\[
E^{n+m} = M \times \mathbb{R}^m
\]
there are two homogeneous, warped product Einstein metrics. One is the product metric $g_0 + g_{H^m}$ where $g_0$ is the Einstein metric on $M$ constructed in Theorem \ref{thm:EinsteinConstruction} and $g_{H^m}$ is the metric of the $m$-dimensional hyperbolic space with Ricci curvature $\lambda$. The other is the metric $g_m + e^{Lr} g_{\mathbb{R}^m}$ where $g_m$ is the metric constructed in Theorem \ref{thm:WPE1extension}, and $g_{\mathbb{R}^m}$ is the Euclidean metric. We will verify that the metric $g_m + e^{Lr} g_{\mathbb{R}^m}$ is homogeneous in Theorem \ref{thm:homogeneousE}.
\end{rem}
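In the degenerate flat case this dichotomy is completely explicit (an elementary illustration): for $N = \mathbb{R}^{n-1}$ with $D = \mathrm{Id}$ and $\lambda = -1$, the metric $g_0$ is hyperbolic of curvature $-1/(n-1)$, the metric $g_m$ is hyperbolic of curvature $-1/(n+m-1)$, and the two Einstein metrics on $E = \mathbb{R}^{n+m}$ are the product $H^n \times H^m$, with each factor normalized to have Ricci curvature $-1$, and the hyperbolic space $H^{n+m}$ with Ricci curvature $-1$.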
\begin{rem}
The family of metrics $g_m$ only differ by the choice of the structure constant given by $\alpha$. As $m \rightarrow \infty$, both $\alpha$ and $L$ approach $0$ and thus the metrics $g_m$ on $M$ are converging to the product metric $g = h+ dt^2$ on $N \times \mathbb{R}$. Moreover, for any $X\in TN$, we have
\begin{eqnarray*}
\frac{m}{w} \mathrm{Hess} w(X, X) &=& - mL\alpha\, h(S(X),X)\\
&=& -\lambda m \alpha^2 h(S(X),X)\\
&=& - \frac{\lambda m}{\mathrm{tr}(S) - \lambda m} h(S(X), X).
\end{eqnarray*}
So the symmetric 2-tensor $ \dfrac{m}{w} \mathrm{Hess} w$ is converging to $S=\frac{1}{2} \mathscr{L}_Y h$ on $N$. On the other hand the quantity $\dfrac{m}{w} \mathrm{Hess} w(\xi, \xi) = mL^2$ converges to $-\lambda > 0$ as $m\rightarrow \infty$, so the limit of $\dfrac{m}{w} \mathrm{Hess} w$ does not vanish in the $\xi$ direction.
Also note that the corresponding measures
\[ w^m dvol_{g_m} \]
do not converge as $dvol_{g_m} \rightarrow dvol_g$ but $w^m = e^{L mr}$ and
\[
\abs{Lm}= \dfrac{-\lambda m}{\left(\mathrm{tr}(S) - \lambda m\right)^{1/2}} \rightarrow \infty.
\]
\end{rem}
\medskip
\section{Homogeneous $(\lambda, n+m)$-Einstein manifolds}
In the previous section we discussed how homogeneous $(\lambda, n+m)$-Einstein metrics can be constructed via one-dimensional extensions. A natural question is whether every non-trivial $(\lambda, n+m)$-Einstein metric is a one-dimensional extension of some space. We will show this is true in the next section. In this section we prepare for the proof of this result by applying Theorems \ref{thm:wpedecomposition} and \ref{thm:ExistenceMBF} along with the results from \cite[Section 5]{HPWuniqueness} to homogeneous manifolds. We show that the base manifold of a homogeneous $(\lambda, n+m)$-Einstein manifold is either $\lambda$-Einstein or has a special form. See Lemma \ref{lem:basehomogeneous}. We also show that from a homogeneous $(\lambda, n+m)$-Einstein manifold $M$, one can find a homogeneous $\lambda$-Einstein manifold as the total space of a warped product over $M$, see Theorem \ref{thm:homogeneousE}.
First we consider base manifolds. Let $B$ be a base manifold with $\partial B = \emptyset$ and let $G$ be a transitive group of isometries acting on $B$. Fix a point $x \in B$ and let $G_x$ be the isotropy group at $x$, i.e.,
\[
G_x = \{ \sigma \in G : \sigma(x) = x \}.
\]
Let $H$ be the subgroup which fixes $u \in W_{\lambda, b+(k+m)}(B,g_B)$
\begin{eqnarray*}
H &=& \{ \sigma \in G : u \circ \sigma^{-1} = u \} \\
&=& G \cap \mathrm{Iso}(B, g_B)_u.
\end{eqnarray*}
Note that since $H$ is the kernel of the group homomorphism from $G$ to $\mathbb{R}$, see \cite[Definition 5.3]{HPWuniqueness}, $H$ is a normal subgroup. Also note that from \cite[Proposition 5.4]{HPWuniqueness} $H$ contains $G_x$ and so we have
\[
H/G_x \subset G/G_x .
\]
\begin{lem} \label{lem:basehomogeneous}
Let $(B, g_B)$ be a base manifold which is homogeneous and has $\partial B = \emptyset$. Then either
\begin{enumerate}
\item $(B, g_B) $ is $\lambda$-Einstein, or
\item $\mu(u) =0$, $B$ is noncompact, the action of $H$ on $B$ is cohomogeneity one with
\[
r: B \rightarrow B /H = \mathbb{R}
\]
where $H$ acts transitively on the level sets of $u$, $r$ is a smooth distance function and $u$ is of the form
\[
u = Ae^{L r}
\]
for some constants $A$ and $L$.
\end{enumerate}
\end{lem}
\begin{rem}
In other words we either have in case (1) that $ H/G_{x} = B $ and $u$ is constant, or in case (2) we have
\begin{eqnarray*}
G/G_x &=& \mathbb{R} \times H/G_x \\
g_B &=& dr^2 + g_r
\end{eqnarray*}
where $g_r$ is a family of $H$-homogeneous metrics on $H/G_x$.
\end{rem}
\begin{proof}
For $\sigma \in H$, $u \circ \sigma^{-1} = u $. So if $H$ acts transitively on $B$, then $u$ is constant and $B$ is $\lambda$-Einstein.
Otherwise, suppose that $u$ is not constant. Then $\mathrm{Iso}(B, g_B)_u$ must be a codimension one subgroup of $\mathrm{Iso}(B, g_B)$ and thus $H \subset G$ also has codimension one. So $H$ acts on $B$ by cohomogeneity one. In this case, from \cite[Proposition 5.4]{HPWuniqueness} we know that $B$ is noncompact and $\mu(u)=0$. Let $r$ be the quotient map
\[
r: B \rightarrow B/H.
\]
Since $u$ is preserved by $H$, $u$ can be written as a function of $r$, $u=u(r)$. The fact that $D\sigma_p(\nabla u) = C \nabla u |_{\sigma(p)}$ for any $\sigma \in G$ shows that if $u$ has a critical point, then $u$ is constant. This shows that $B/ H$ must be all of $\mathbb{R}$.
Let $\gamma$ be a unit speed integral curve of $\nabla r$ with $\gamma(0) = x$. Define $h_s$ to be a one-parameter subgroup of isometries taking $\gamma(0)$ to $\gamma(s)$. The differential of $h_s$ gives a Killing vector field $X^* = \nabla r + Y$ where $Y$ is tangent to the level surfaces of $u$. From \cite[Proposition 5.2]{HPWuniqueness} we have
\begin{eqnarray*}
L u = D_{X^*} u = D_{\nabla r} u
\end{eqnarray*}
for some constant $L$. Integrating this implies that $u = Ae^{Lr}$.
Finally, the fact that $\lambda<0$ follows from \cite[Proposition 4.5]{HPWuniqueness}, since $u$ is a positive function in $W_{\lambda, b+(k+m)} (B)$ and $B$ has constant scalar curvature.
\end{proof}
\begin{rem}
When $k+m>1$ we can also see that the constant $L$ is determined by the scalar curvature of $B$. To see this we compute
\begin{eqnarray*}
\mu_B(u) &=& (m+k-1) |\nabla u|^2 + \frac{u^2}{m+k}( \mathrm{scal}^B - (b-m-k) \lambda) \\
&=& \left( (m+k-1) L^2 + \frac{ \mathrm{scal}^B - (b-m-k)\lambda}{m+k} \right) e^{2Lr}.
\end{eqnarray*}
Since $\mu_B(u) =0$ we obtain
\begin{equation}\label{eqn:L}
L^2 = -\frac{\mathrm{scal}^B - (b-m-k)\lambda}{(m+k)(m+k-1)}.
\end{equation}
The only case where $L$ is not determined by the above formula is when $k=0$ and $m=1$, in which case $\mu_B$ is always zero.
\end{rem}
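For $m+k>1$, formula (\ref{eqn:L}) can be checked on the simplest base (an elementary computation): take $B = \mathbb{R}$, so that $b = 1$ and $\mathrm{scal}^B = 0$, with $u = e^{Lt} \in W_{\lambda, 1+(k+m)}(\mathbb{R}, dt^2)$. The defining equation $\mathrm{Hess}\, u = \frac{u}{k+m}\left(\mathrm{Ric} - \lambda g\right)$ gives $L^2 = -\lambda/(m+k)$ directly, while (\ref{eqn:L}) gives
\[
L^2 = -\frac{0 - (1-m-k)\lambda}{(m+k)(m+k-1)} = -\frac{\lambda}{m+k},
\]
in agreement.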
\begin{rem}
A different proof of this lemma can be established using the results in \cite{HPWconstantscal} more heavily. In \cite{HPWconstantscal} we also produced examples showing case (2) is possible. We generalize that construction in the next section.
\end{rem}
Next we consider the warped product $M = B \times_u F$ where $M$ is homogeneous and $\dim W_{\lambda, n+m}(M) = k+1 > 1$. Then Theorem \ref{thm:isometryMBF} gives us the following proposition.
\begin{prop}\label{prop:homogeneousB}
Let $(M,g)$ be a simply connected Riemannian manifold with $\dim W_{\lambda, n+m}(M) = k+1>1$. Then $M$ is homogeneous if and only if its base manifold $B$ has no boundary and is homogeneous.
\end{prop}
\begin{proof}
From the proof of Theorem \ref{thm:isometryMBF} in \cite{HPWuniqueness} we know that isometries of $M$ preserve the singular set where all functions in $W(M, g)$ vanish, showing that if $M$ is homogeneous then the singular set is empty. It follows that $\partial B = \emptyset$. Now Theorem \ref{thm:isometryMBF} shows that the isometry group of $M$ acts transitively on $M$ if and only if the isometry group of $B$ acts transitively on $B$.
\end{proof}
The quadratic form $\mu$ on $W(M, g)$ is either positive definite, positive semi-definite with nullity one, or non-degenerate with index one. The manifold $(M, g)$ is said to be \emph{elliptic}, \emph{parabolic} or \emph{hyperbolic} respectively, in each of the three cases, see \cite[Section 2]{HPWuniqueness}. As a simple consequence of the previous lemma and proposition, we note the following
\begin{cor}
Let $M$ be a simply connected homogeneous manifold with $W_{\lambda, n+m}(M) \neq \{ 0\}$. If either $\lambda \geq 0$, or $M$ is elliptic or hyperbolic, then $M$ is isometric to the Riemannian product $B \times F$.
\end{cor}
As another application, we prove that if the base of a warped product Einstein metric is homogeneous, then there is a warped product metric with the same base which is homogeneous and Einstein.
\begin{thm}\label{thm:homogeneousE}
Let $m>0$ be an integer and let $M$ be a simply connected homogeneous manifold with $W_{\lambda, n+m}(M,g) \neq \{0\}$. If there exists a positive function $w \in W_{\lambda, n+m}(M,g)$ and $F^m$ is the simply connected space form with Ricci curvature $\mu(w)$, then the warped product metric
\[ E = M \times_w F \]
is both $\lambda$-Einstein and homogeneous.
\end{thm}
\begin{proof}
The proof breaks into various cases.
We first assume that $M$ is a base manifold. By Lemma \ref{lem:basehomogeneous} there are two cases. First, if $M$ is $\lambda$-Einstein then clearly taking $w=c$ a constant and $F$ a space form with Ricci curvature $\lambda c^2$ will make $E$ a homogeneous $\lambda$-Einstein metric. On the other hand, if $M$ is a base manifold and is not $\lambda$-Einstein, then we know that $w=Ae^{Lr}$ and $\mu(w) = 0$. In particular, $F = \mathbb{R}^m$. We also have that, if $\sigma_1 \in \mathrm{Iso}(M,g_M)$ then $w \circ \sigma_1 = C w$ for some constant $C = C_{\sigma_1}$. By Theorem \ref{thm:isometryMBF}, or \cite[Lemma 5.6]{HPWuniqueness}, the product map $\sigma_1 \times \sigma_2 $ is an isometry of $(E,g_{E})$ where $\sigma_2$ is a $C$-homothety of $\mathbb{R}^m$. This gives us a transitive group of isometries acting on $E$.
Next we assume that $M$ is not a base manifold. If $M$ is a space of constant curvature then the theorem is true by the special form of the warping functions, see \cite[Example 2.1]{HPWuniqueness}. Note that the sphere does not have a positive function in $W_{\lambda, n+m}(M,g)$. Otherwise, from Lemma \ref{lem:basehomogeneous} again we also have two different cases. In the first case, we have $M=B\times \widetilde{F}$ where $B$ is $\lambda$-Einstein and $\widetilde{F}$ is a space form. We know that if $\lambda>0$, there are no positive functions in $W_{\lambda, n+m}(M)$. When $\lambda \leq 0$, we have that $w = Av$ where $v$ is a positive function in $W_{ \lambda, k+m}(\widetilde{F})$. Take another space form $F$ such that $\widetilde{F}\times_{v} F$ is a homogeneous $\lambda$-Einstein manifold. Then $E$ is a product of $\lambda$-Einstein manifolds $B$ and $\widetilde{F}\times_{v} F$ which are both homogeneous. Finally, in the second case we have
\[ g_M = g_B + e^{Lr} g_{\mathbb{R}^k}\]
where $B$ is a base manifold. Since $w>0$ on $M$ this tells us that $w=A e^{Lr}$ for some constant $A$. Then we can write
\begin{eqnarray*}
g_{E} &=& g_B + e^{Lr}( g_{\mathbb{R}^k} + A g_{\mathbb{R}^m} ) \\
&=& g_B + e^{Lr} g_{\mathbb{R}^{k+m}}.
\end{eqnarray*}
Since $B$ is a base manifold, we can now apply the base manifold case that we already discussed to this metric to show that $(E, g_{E})$ is homogeneous. This finishes the proof.
\end{proof}
\begin{rem}\label{rem:homogeneous}
Note that in the last case of the previous Theorem \ref{thm:homogeneousE}, when
\begin{eqnarray*}
g_M = g_B + e^{Lr} g_{\mathbb{R}^k} & \text{ and } & w(r) = A e^{Lr}
\end{eqnarray*}
we still have the property that $w \circ \sigma^{-1} = C_{\sigma} w$ for any $\sigma \in \mathrm{Iso}(M,g)$, even though $M$ is not a base manifold. Writing $M = G/G_x$, this allows us to conclude, as we did at the beginning of this section in the base manifold case, that
\[
G/G_{x} = \mathbb{R} \times H/G_{x}
\]
with
\[
g_M = dr^2 + g_r \quad \text{and} \quad w = Ae^{L r}
\]
where $g_r$ is a family of $H$-homogeneous metrics on $H/G_{x}$.
\end{rem}
\medskip
\section{Non-trivial Homogeneous $(\lambda, n+m)$-Einstein manifolds are one-dimensional extensions}
In this section we characterize homogeneous $(\lambda, n+m)$-Einstein manifolds using one-dimensional extensions, see Theorem \ref{thm:structureWPEextension}. From this structure theorem we prove Theorem \ref{thm:WPE1extensionintro}, see Theorem \ref{thm:WPEextDnormal}, and Corollary \ref{cor:SARiccisiltonDnormal}.
\begin{thm}\label{thm:structureWPEextension}
Let $(M^n,g)$ be a homogeneous $(\lambda, n+m)$-Einstein space which is not Einstein. Then $(M,g)$ is the one-dimensional extension of a homogeneous space $(N^{n-1},h)$ with a derivation $D$ and $\alpha^2 = \dfrac{1}{\mathrm{tr}(S) - \lambda m}$ satisfying the following conditions.
\begin{enumerate}
\item $\mathrm{Ric}^N = \lambda I + S + \dfrac{1}{\mathrm{tr}(S)-\lambda m} [S,A] $
\item $\mathrm{div}(S) = 0$
\item $\mathrm{tr}(S^2) = - \lambda \mathrm{tr}(S)$,
\end{enumerate}
where $\lambda<0$. Moreover the warping function $w \in C^\infty(M)$ is given by $w(r) = e^{L r}$ where $L = \lambda \alpha$ and $r$ is the signed distance function to $N$.
\end{thm}
\begin{proof}
Let $G$ be a connected Lie group that acts transitively on $M$ by isometries. Fix a point $x \in M$ and let $K = G_x$ be the isotropy subgroup at $x$. The Lie algebras of $G$ and $K$ are denoted by $\mathfrak{g}$ and $\mathfrak{k}$ respectively. Let $\mathfrak{q}$ be the orthogonal complement of $\mathfrak{k}$ in $\mathfrak{g}$. The tangent space $T_x M$ is identified with $\mathfrak{q}$. In the following we separate our argument into two different cases.
\smallskip
\textsc{Case I}. We assume that $\dim W(M, g) =1$, i.e., $(M,g)$ is a base manifold. From Lemma \ref{lem:basehomogeneous} we may assume that $w(r) = e^{Lr}$ for some constant $L$. From the proof of Lemma \ref{lem:basehomogeneous} there is a codimension one normal subgroup $H\subset G$ containing $K$ that acts transitively on the hypersurface $N^{n-1} = H.x$. Let $\mathfrak{h}$ be the Lie algebra of $H$ and let $\xi \in \mathfrak{g}$ correspond to $\nabla r$ at the point $x$. Let $\mathfrak{p}\subset \mathfrak{q}$ be the orthogonal complement of $\xi$; then we have
\[
\mathfrak{g} = \mathbb{R}\xi \oplus \mathfrak{p} \oplus \mathfrak{k}, \qquad \mathfrak{h} = \mathfrak{p}\oplus \mathfrak{k}.
\]
Since $H \subset G$ is a normal subgroup, we have $[\xi, X]\in \mathfrak{h}$ for any $X\in \mathfrak{h}$. It follows that $[\xi, \cdot]$ defines a derivation on $\mathfrak{h}$ and thus $M$ is a one-dimensional extension of $N$. Following the construction in Theorem \ref{thm:WPE1extension} we define
\begin{equation}\label{eqnDXadxi}
D(X) = \frac{1}{\alpha} \mathrm{ad}_\xi (X) \quad \text{for all } X \in \mathfrak{h}
\end{equation}
where $\alpha = L/ \lambda$. In the following we show that $(N, h)$ with the derivation $D$ satisfies the properties stated in the theorem.
From Proposition \ref{prop:1extensionST}, the shape operator of $N \subset M$ at $x$ is given by $T = - \alpha S$ where $S$ is the symmetrization of $D$ in equation (\ref{eqn:symmD}).
\begin{claim}
We have $\mathrm{tr}(S^2) = - \lambda \mathrm{tr}(S)$.
\end{claim}
In fact, since $w = e^{Lr}$, for any $X\in T_x N$ we have
\[
\mathrm{Hess} w(X, X) = L w h(T(X), X).
\]
From the equation $\mathrm{Ric}^M - \frac{m}{w}\mathrm{Hess} w = \lambda g$ we have
\[
T(X) = \frac{1}{mL}\left(\mathrm{Ric}^M(X) - \lambda X\right).
\]
It follows that
\begin{eqnarray*}
\left(\nabla_\xi T\right)(X) & = & \frac{1}{m L}\left(\nabla_\xi \mathrm{Ric}^M\right)(X) = \frac{1}{m L}\left(\nabla_{\frac{\nabla w}{L w}}\mathrm{Ric}^M \right)(X) \\
& = & \frac{1}{m L^2 w}\left(\nabla_{\nabla w}\mathrm{Ric}^M\right)(X).
\end{eqnarray*}
On the other hand, from \cite[Proposition 3.7]{HPWconstantscal} we have
\[
\left(\nabla_{\nabla w}\mathrm{Ric}^M\right)(X, Y) = \frac{w}{m}g\left(\left(\mathrm{Ric}^M - \rho I\right)(X), \left(\mathrm{Ric}^M - \lambda I\right)(Y)\right) + \frac{m}{w}Q(\nabla w, X, Y, \nabla w),
\]
where $Q$ is the $(0,4)$-tensor defined at the beginning in \cite[Section 3]{HPWconstantscal} and
\[
\rho = \frac{(n-1)\lambda - \mathrm{scal}}{m-1}.
\]
In terms of $T$, the above equation can be written as
\[
m L \left(\nabla_{\nabla w} T\right)(X, Y) = \frac{w}{m}g\left(mL T(X) + (\lambda -\rho) X, mL T(Y)\right) + \frac{m}{w}Q(\nabla w, X, Y, \nabla w).
\]
Let $\set{X_i}_{i=1}^{n-1}$ be an orthonormal basis of $T_x N$ and then we have
\[
mL\sum_{i=1}^{n-1}\left(\nabla_{\nabla w} T\right)(X_i, X_i) = w m L^2 \mathrm{tr}(T^2) + w L (\lambda-\rho)\mathrm{tr}(T).
\]
The trace of the term that involves $Q$ vanishes by using equation (3.1) and Proposition 3.3 in \cite{HPWconstantscal}. From Proposition \ref{prop:1extensionST}, since $\nabla_\xi T = \alpha^2 [S,A]$, the left hand side of the equation above is zero and we have
\[
m L \mathrm{tr}(T^2) + (\lambda -\rho)\mathrm{tr}(T) = 0.
\]
Note that $k = 0$ and $b = n$ in equation (\ref{eqn:L}) and we have
\[
L^2 = - \frac{\mathrm{scal} - (n-m)\lambda}{m(m-1)} = \frac{\rho - \lambda}{m}.
\]
It follows that $\mathrm{tr} (T^2) = L \mathrm{tr}(T)$. Then the claim follows by $T = - \alpha S$ and $L =\lambda \alpha$.
From the formulas of Ricci curvature in Lemma \ref{lem:RicciGKHK} we have
\begin{eqnarray*}
\left(\mathrm{Ric}- \frac{m}{w}\mathrm{Hess} w\right)(\xi, \xi) & = & - \alpha^2 \mathrm{tr}(S^2) - mL^2 \\
\left(\mathrm{Ric}- \frac{m}{w}\mathrm{Hess} w\right)(\xi, X) & = & - \alpha \mathrm{div}(S) \\
\left(\mathrm{Ric}- \frac{m}{w}\mathrm{Hess} w\right)(X, X) & = & \mathrm{Ric}^N(X, X) - \left(\alpha^2 \mathrm{tr}(S) - mL \alpha\right)h(S(X), X) \\
& & - \alpha^2 h\left([S,A](X),X \right).
\end{eqnarray*}
The second equation above implies that
\[
\mathrm{div}(S) = 0.
\]
The first equation shows that
\[
- \alpha^2 \mathrm{tr}(S^2) - mL^2 = \lambda
\]
i.e.,
\begin{eqnarray*}
\lambda = \lambda \alpha^2 \mathrm{tr}(S) - mL^2 = \lambda \alpha^2 \mathrm{tr}(S) - m \lambda^2 \alpha^2
\end{eqnarray*}
and so we have
\[
\alpha^2(\mathrm{tr}(S) - m\lambda) = 1.
\]
Plugging it into the third equation above shows that
\begin{eqnarray*}
\lambda I & = & \mathrm{Ric}^N - \left( 1+ m\lambda\alpha^2 - m \lambda \alpha^2\right)S - \alpha^2 [S,A] \\
& = & \mathrm{Ric}^N - S - \frac{1}{\mathrm{tr}(S) - m\lambda}[S,A].
\end{eqnarray*}
It follows that
\[
\mathrm{Ric}^N = \lambda I + S + \frac{1}{\mathrm{tr}(S) - m\lambda}[S, A]
\]
which finishes the proof in this case.
\smallskip
\textsc{Case II.} We assume that $\dim W(M, g)=k+1$ with $k\geq 1$. From Lemma \ref{lem:basehomogeneous} and Proposition \ref{prop:homogeneousB} we know that $M = B^b \times_u \mathbb{R}^k$ and $B$ is a homogeneous base manifold and $u \in W_{\lambda, b+(k+m)}(B,g_B)$ is positive everywhere with $\mu_B(u) = 0$. From the proof of Theorem \ref{thm:homogeneousE} we may assume that $w = u = e^{Lr}$ for some constant $L$. Let $B = G_1/ K_1$ and $K_1 \subset H_1 =G_1 \cap \mathrm{Iso}(B, g_B)_u$. From Theorem \ref{thm:isometryMBF} the group $G = G_1 \ltimes \mathbb{R}^k$ acts transitively on $M$ via isometries, where $\mathbb{R}^k$ acts by the $C$-translations in the Euclidean group, i.e., with $A = \mathrm{Id}$ in Theorem \ref{thm:isometryMBF}. The isotropy subgroup is given by $K = K_1 \times \set{0}$. Applying the argument in the previous case to $(B, g_B)$ yields that $B$ is a one-dimensional extension of $N_1^{b-1}= H_1/ K_1$ with a derivation $D \in \mathrm{Der}(\mathfrak{h}_1)$ and a constant $\alpha$ such that the following equations hold.
\begin{eqnarray}
\mathrm{tr}_{\mathfrak{h}_1}(S^2) & = & -\lambda \mathrm{tr}_{\mathfrak{h}_1}S \label{eqn:trS2S} \\
\mathrm{div}_{N_1} S & = & 0 \label{eqn:divS} \\
\mathrm{Ric}^{N_1} & = & \lambda I + S + \frac{1}{\mathrm{tr}_{\mathfrak{h}_1}S - (k+m)\lambda}[S, A]. \label{eqn:RicN1SA}
\end{eqnarray}
Let $H = H_1 \times \mathbb{R}^k$, so that $N = H/K = N_1\times \mathbb{R}^k$, equipped with the product metric, is the level set $\set{w = 1}$ in $M$. It follows that $\mathfrak{h} = \mathfrak{h}_1 \oplus \mathbb{R}^k$. We extend $D$ to $\mathfrak{h}$ by letting $D(U) = - \lambda U$ for any $U \in \mathbb{R}^k$ and the Lie bracket is extended by $[\xi, U] = - L U$ and $[X, U] = 0$ for any $X \in \mathfrak{h}_1$. It is easy to verify that $D$ is a derivation of $\mathfrak{h}$. Moreover we have $\mathrm{tr} S = \mathrm{tr}_{\mathfrak{h}_1} S - k \lambda$ on $\mathfrak{h}$ and so equation (\ref{eqn:RicN1SA}) can be written as
\[
\mathrm{Ric}^N|_{N_1} = \lambda I + S + \frac{1}{\mathrm{tr}(S) - m \lambda}[S,A].
\]
Since $S = - \lambda I$ and $A = 0$ on $\mathbb{R}^k$, the right hand side of the above equation vanishes on the $\mathbb{R}^k$ factor, which agrees with $\mathrm{Ric}^N$ there. This shows property (1) in the theorem. Property (3) follows from equation (\ref{eqn:trS2S}) and the extension of $D$ to $\mathfrak{h}$. Finally property (2) follows from equation (\ref{eqn:divS}), the product structure of $N$ and the fact that $S = - \lambda I$ on the $\mathbb{R}^k$ factor.
\end{proof}
We can now prove the converse statement of Corollary \ref{cor:SolitonQuasiEinstein}.
\begin{thm}\label{thm:WPEextDnormal}
A non-flat, non-trivial semi-algebraic Ricci soliton on a homogeneous space admits a $(\lambda,n+m)$-Einstein one-dimensional extension with
\[
\mathrm{ad}_\xi (X) = \alpha D(X), \qquad \text{for all } X \in \mathfrak{h}
\]
if and only if $D$ is normal.
\end{thm}
\begin{proof}
Let $(N^{n-1}, h)$ be a semi-algebraic Ricci soliton
\[
\mathrm{Ric} = \lambda I + S
\]
for some constant $\lambda < 0$. As in the proof of Theorem \ref{thm:EinsteinExtDnormal}, we have
\[
\mathrm{div} S = 0,\quad \mathrm{tr}(S^2) = - \lambda \mathrm{tr}(S) \quad \text{and}\quad \abs{\mathrm{Ric}}^2 = \lambda \mathrm{scal}.
\]
If $D$ is normal, then $[S,A] = 0$ and from Theorem \ref{thm:WPE1extension} there is a one-dimensional extension of $(N, h)$ which is $(\lambda, n+m)$-Einstein.
On the other hand, if $(N, h)$ admits a one-dimensional extension which is $(\lambda, n+m)$-Einstein, then from Theorem \ref{thm:structureWPEextension} on $N$ we have
\[
\mathrm{Ric} = \lambda I + S + \frac{1}{\mathrm{tr}(S) - \lambda m}[S, A].
\]
Comparing with the semi-algebraic Ricci soliton equation yields
\[
[S, A] = 0,
\]
i.e., $D$ is normal.
To finish the proof we show that if the extension is $(c, n+m)$-Einstein with $c\ne \lambda$, then $N$ is flat. From Theorem \ref{thm:structureWPEextension} again we have $\mathrm{tr}(S^2) = - c \mathrm{tr} (S)$ and thus we have $(c-\lambda)\mathrm{tr}(S) = 0$. It follows that $\mathrm{tr}(S) = 0$ and then $\mathrm{tr}(S^2) = 0$, i.e., $S = 0$ which shows that $N$ is a trivial semi-algebraic Ricci soliton, a contradiction.
\end{proof}
As a consequence of Theorem \ref{thm:structureWPEextension} we also have the following formula for the covariant derivative of the Ricci tensor in the direction of $\nabla w$.
\begin{prop} \label{prop:RadialRic} If $(M^n,g)$ is a homogeneous $(\lambda, n+m)$-Einstein metric, then
\[
\nabla_{\nabla w} \mathrm{Ric} = (m w L^2 \alpha^2)[S,A].
\]
\end{prop}
\begin{proof}
By Theorem \ref{thm:structureWPEextension} $M$ is a one-dimensional extension of a homogeneous space $N$ and
\[ w= e^{Lr}, \]
where $r$ is the distance to $N$. This tells us that
\[ \frac{1}{w} \mathrm{Hess} w = L^2 dr \otimes dr + L g(T(\cdot), \cdot). \]
Taking the covariant derivative of both sides of the equation above gives us,
\begin{eqnarray*}
&\nabla_{\nabla w} \left( \dfrac{1}{w}\mathrm{Hess} w \right) = L^2 e^{Lr} g\left( \left(\nabla_{\nabla r} T\right) (\cdot), \cdot \right), &
\end{eqnarray*}
where we have used that $\nabla_{\nabla r} \left( dr \otimes dr \right) = 0$, which follows from a simple calculation since $r$ is a distance function.
Then by differentiating the $(\lambda, n+m)$-Einstein equation we have that
\begin{eqnarray*}
\nabla_{\nabla w} \left(\mathrm{Ric} \right) &=& m \nabla_{\nabla w} \left( \frac{1}{w}\mathrm{Hess} {w} \right) \\
&=& mL^2w \left(\nabla_{\nabla r} T\right)\\
&=& mL^2w \alpha^2 [S,A].
\end{eqnarray*}
In the last line we have used Proposition \ref{prop:1extensionST}.
\end{proof}
This result, when combined with our earlier results, gives us the following results mentioned in the introduction.
\begin{proof} [Proof of Theorem \ref{thm:WPEradial}]
From Proposition \ref{prop:RadialRic}, $\nabla_{\nabla w} \mathrm{Ric}=0$ if and only if $[A,S]=0$. When $[A,S]=0$, part (1) of Theorem \ref{thm:structureWPEextension} shows that $M$ is the one-dimensional extension of a normal semi-algebraic Ricci soliton.
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor:WPEradial}]
If $w$ is constant, then the statement is trivial. If $w$ is non-constant, then we have that
\[ g_E = g_M + w^2 g_{\mathbb{R}^m}\]
where $w>0$. Moreover, $g_M$ is a one-dimensional extension of $(N,g_N)$, a normal semi-algebraic Ricci soliton.
By Theorem \ref{thm:EinsteinConstruction}, $N$ also admits a different one-dimensional extension which is $\lambda$-Einstein. The underlying manifold of this Einstein metric is also $M= N \times \mathbb{R}$. Denote this metric by $\widetilde{g}_M$. Then the metric
\[ \widetilde{g}_E = \widetilde{g}_M + g_{H^m} \]
is a homogeneous $\lambda$-Einstein metric on $E$, where $g_{H^m}$ denotes the metric of the $m$-dimensional hyperbolic space with Ricci curvature $\lambda$.
\end{proof}
We end this section with the proof of Corollary \ref{cor:SARiccisiltonDnormal} in Introduction.
\begin{proof}[Proof of Corollary \ref{cor:SARiccisiltonDnormal}]
The case when $m=0$ follows from Theorem \ref{thm:EinsteinExtDnormal}. When $m\geq 1$, from Theorem \ref{thm:WPEextDnormal} we know that $N^{n-1}$ admits a homogeneous one-dimensional extension $M^n$ which is $(\lambda, n+m)$-Einstein. From Theorem \ref{thm:homogeneousE} the warped product of $M$ with a space form fiber $F^{m}$ is both homogeneous and $\lambda$-Einstein.
\end{proof}
\medskip
\section{Left invariant metrics on Lie groups and algebraic Ricci solitons}
In this section we specialize to Lie groups with left invariant metrics. In the first subsection we discuss general results about how $W(G,g)$ interacts with the Lie group structure of $G$. In the second subsection we consider simply connected solvable Lie groups with left invariant metrics and give a classification of such groups that have $W(G,g) \neq \{ 0 \}$.
\subsection{Left invariant metrics on Lie groups with $\mathbf{W \neq 0}$.} Let $(G, g)$ be an $n$-dimensional Lie group with left invariant metric such that $W_{\lambda, n+m}(G, g) \ne \set{0}$. Combining Lemma \ref{lem:basehomogeneous} and Proposition \ref{prop:homogeneousB}, we have two cases. Either
\begin{enumerate}
\item $(G,g)$ is isometric to a Riemannian product $ B \times F^k$ where $B$ is a Lie group with left invariant $\lambda$-Einstein metric and $F$ is a simply connected space form, or
\item
\[ g = g_B + e^{Lr} g_{\mathbb{R}^k} \]
where $L$ is a constant, $B$ is a base manifold, $r : B \rightarrow \mathbb{R}$ is a smooth distance function and
\[ w = e^{Lr} \in W_{\lambda, n+m}(G,g)\]
where $\lambda<0$.
\end{enumerate}
Note that, in either case, $k$ could be zero. In case (1) we call the metric $(G,g)$ \emph{rigid} and, in case (2) we call the metric \emph{non-rigid}.
In the non-rigid case we can always re-parametrize the distance function $r$ so that $r(e)=0$, where $e$ is the identity element of $G$. We call such a re-parametrized distance function \emph{normalized}. We will usually assume the distance function is normalized below.
The study of $W(G,g)$ in the rigid case reduces to studying the solutions on space forms discussed in \cite[Section 2]{HPWuniqueness} as $W$ will consist of the pullbacks of functions $v \in W_{\lambda, k+m}(F, g_F)$. In the second case we have the following more interesting interaction between properties of the function $u$ and the Lie group structure.
\begin{prop} \label{prop:H}
Let $(G,g)$ be a non-rigid Lie group with left invariant metric and let $w \in W_{\lambda, n+m}(G,g)$ where
\[ w = e^{Lr}, \]
$L$ is a constant, and $r$ is a normalized distance function. Then $r$ is the signed distance to a codimension one normal subgroup $H$ and the vector field $\xi = \nabla r $ is a left invariant vector field. In particular,
\begin{equation}
\label{eqn:xiH} [\xi, \mathfrak{h}] \subset \mathfrak{h}
\end{equation}
where $\mathfrak{h}$ is the Lie algebra of $H$.
\end{prop}
\begin{proof}
Let $H$ be the level hypersurface $w=1$. Since $w(e)=1$, the elements of $H$ are the elements whose left translation preserves $w$. In particular, $H$ is a codimension one normal subgroup in $G$. Thus we obtain a Riemannian submersion $G\rightarrow G/H$
which is also a Lie group homomorphism. This shows that left invariant
vector fields on $G/H$ lift to left invariant vector fields on $G$
that are perpendicular to $H$. As $G/H=\mathbb{R}$ it follows that
$\nabla r$ is a left invariant vector field on $G.$
For the last part note that
\[ g([\xi, X], \xi) = -g (\nabla_{\nabla r} \nabla r, X) = 0 \]
for any $X \perp \xi$.
\end{proof}
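A minimal illustration of the proposition (the hyperbolic case, with $\lambda$ normalized appropriately): for the solvable model of $H^n$, with $\mathfrak{g} = \mathbb{R}\xi \oplus \mathbb{R}^{n-1}$ and $[\xi, X] = \alpha X$, the level set $\set{w = 1}$ of $w = e^{Lr}$ is the nilpotent subgroup $H = \mathbb{R}^{n-1}$, whose orbits are horospheres, $\xi = \nabla r$ is left invariant, and (\ref{eqn:xiH}) holds with $[\xi, \mathfrak{h}] = \mathfrak{h}$.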
In the spirit of Theorem \ref{thm:homogeneousE}, we now address the question of whether it is always possible to build a left invariant Einstein metric from a Lie group with left invariant metric $(G,g)$ with $W(G,g) \neq 0$.
In the rigid case we can see quickly that this is always true. Given $v \in W_{\lambda, k+m}(F, g_F)$, let $\widetilde{F}$ be the $m$-dimensional simply connected space form with Ricci curvature $\mu_{F}(v)$. Define
\begin{eqnarray*}
E &=& G \times_{v} \widetilde{F}.
\end{eqnarray*}
Then we have
\begin{eqnarray*}
E &=& B \times (F \times_{v} \widetilde{F}) \\
&=& B \times \hat{F}
\end{eqnarray*}
where $\hat{F}$ is a simply connected space form. Clearly, $E$ is naturally a Lie group with the product structure coming from $B$ and $\hat{F}$, and the metric, being a product of left invariant metrics on the factors, is also left invariant.
In the non-rigid case we also have the following
\begin{thm} \label{thm:LieStruc}
Suppose that $(G,g)$ is a non-rigid Lie group with left invariant metric such that
\[
w = e^{Lr} \in W_{\lambda, n+m}(G,g),
\]
where $L$ is a constant, and $r$ is a normalized distance function. Let $E = G\times_{w}\mathbb{R}^m$ with metric $g_E = g + w^2 g_0$ where $(\mathbb{R}^m, g_0)$ is Euclidean space. Then $E$ is a Lie group and its Lie algebra $\mathfrak{e}$ is the abelian extension of the Lie algebra $\mathfrak{g}$ of $G$ by $\mathfrak{a} = \mathbb{R}^m$ with
\[
[\xi, U] = - L U, \quad [X_i, U] = 0 \text{ for any } U \in \mathfrak{a} \text{ and }i = 1, \ldots, n-1,
\]
where $\set{\xi}\cup \set{X_i}_{i=1}^{n-1}$ is an orthonormal basis of the metric Lie algebra $\mathfrak{g}$.
\end{thm}
\begin{proof}
From Remark \ref{rem:homogeneous}, in the non-rigid case we know that, for any $x \in G$ there is a constant $C_x$ such that $w \circ L_{x^{-1}} = C_x w$, where $L_{x^{-1}}$ denotes left multiplication by $x^{-1}$ on $G$. The constant $C_x$ induces an automorphism $\tau(x, \cdot)$ of $(\mathbb{R}^m, +)$ in the following way
\[
\tau(x, \cdot) : \mathbb{R}^m \rightarrow \mathbb{R}^m, \quad a \mapsto C_x a.
\]
For a fixed $x \in G$, the differential $\bar{\tau}(x)$ of $\tau(x)$ is a Lie algebra isomorphism of the abelian Lie algebra $\mathfrak{a}$ which is the tangent space of the Lie group $(\mathbb{R}^m, +)$ at the origin. In particular $\mathfrak{a}$ is isomorphic to $(\mathbb{R}^m, +)$ and we have
\[
\bar{\tau}(x) : \mathfrak{a} \rightarrow \mathfrak{a}, \quad U \mapsto C_x U.
\]
From the formula $w(x) = e^{L r(x)}$, we also have
\[
C_x = \frac{w(e)}{w(x)} = \frac{1}{e^{L r(x)}} = e^{- L r(x)}.
\]
On the total space $E$, a Lie group structure is given by the semidirect product $G\times_{w}\mathbb{R}^m $ which is the Lie group with $G \times \mathbb{R}^m$ as its underlying manifold and with multiplication and inversion given by
\begin{eqnarray*}
(x,a) \cdot (y, b) & = & (x\cdot y, C_{y^{-1}}a + b) \\
(x,a)^{-1} & = & (x^{-1}, - C_x a).
\end{eqnarray*}
For the semi-direct product of two general Lie groups, see \cite[Section I.15]{Knapp} for example.
The map $\bar{\tau}$ is a smooth homomorphism of $G$ into $\mathrm{Aut}(\mathfrak{a})$, the automorphisms of $\mathfrak{a}$. The differential $D \bar{\tau}$ is a homomorphism of the Lie algebra $\mathfrak{g}$ of $G$ into $\mathrm{Der}(\mathfrak{a})$, the derivations of $\mathfrak{a}$. The Lie algebra $\mathfrak{e}$ of $E$ is given by the semi-direct product $\mathfrak{g}\oplus_{D\bar{\tau}}\mathfrak{a}$, i.e., the Lie brackets of $\mathfrak{g}$ and $\mathfrak{a}$ are preserved in $\mathfrak{e}$ and, for any $X \in \mathfrak{g}$, $U \in \mathfrak{a}$ we have
\[
[X, U] = \left(D{\bar{\tau}}(X)\right)(U).
\]
In the following we compute the map $D\bar{\tau}$.
Let $\set{X_i}_{i=0}^{n-1}$ be an orthonormal basis of $\mathfrak{g}$ with $X_0 = \xi = \nabla r|_{e}$. For $t \in \mathbb{R}$ let $x(t) = \exp(tX_i)$. If $i \geq 1$, then $x(t) \in H$ and it follows that $r(x(t)) = 0$. Thus $C_{x(t)} = 1$ and $\bar{\tau}(x(t))$ is the identity map for $t \in \mathbb{R}$. So its differential is zero, i.e., $[X_i, U] = 0$. Now we are left with $D\bar{\tau}(X_0)$. In this case we have $r(x(t)) = t$ and then
\begin{eqnarray*}
\left(D\bar{\tau}(X_0)\right)(U) = \frac{\mathrm{d}}{\mathrm{d} t}\Big|_{t=0}\left(e^{-L t}U\right) = -L U,
\end{eqnarray*}
which shows that $[\xi, U] = -L U$.
\end{proof}
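To illustrate the extension in a degenerate but explicit case: let $G$ be the solvable model of the hyperbolic space $H^n$, with $\mathfrak{g} = \mathbb{R}\xi \oplus \mathbb{R}^{n-1}$, $[\xi, X] = \alpha X$, $\alpha^2 = 1/(n-1+m)$, and $w = e^{Lr}$ with $L = -\alpha$. Then
\[
\mathfrak{e} = \mathbb{R}\xi \oplus \mathbb{R}^{n-1} \oplus \mathbb{R}^m, \qquad [\xi, X] = \alpha X, \quad [\xi, U] = -LU = \alpha U,
\]
so $\mathrm{ad}_\xi = \alpha\, \mathrm{Id}$ on $\mathbb{R}^{n-1} \oplus \mathbb{R}^m$, and $E$ is the standard solvable model of $H^{n+m}$, which is Einstein with constant $\lambda = -(n+m-1)\alpha^2 = -1$.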
\begin{rem}\label{rem:solvablility}
From the Lie algebra structure of $\mathfrak{e}$ in Theorem \ref{thm:LieStruc}, we have
\[
\mathfrak{e}^1 = [\mathfrak{e}, \mathfrak{e}] = [\mathfrak{g}, \mathfrak{g}] \oplus \mathbb{R}^m = \mathfrak{g}^1 \oplus \mathbb{R}^m,
\]
and
\[
\mathfrak{e}^2 = [\mathfrak{e}^1, \mathfrak{e}^1] = [\mathfrak{g}^1, \mathfrak{g}^1] = \mathfrak{g}^2
\]
as $\mathfrak{g}^1\subset \mathfrak{h}$ by Proposition \ref{prop:H}. On the other hand from Proposition \ref{prop:H} again we have the following relation in the commutator Lie algebras
\[
\mathfrak{h}^{i+1} \subset \mathfrak{g}^{i+1} \subset \mathfrak{h}^{i}
\]
for $i \geq 0$. It follows that $\mathfrak{e}$ is a solvable Lie algebra if and only if $\mathfrak{h}$ is solvable.
\end{rem}
\smallskip
\subsection{Non-rigid solvable Lie groups with $\mathbf{W \neq 0}$. }
From Theorems \ref{thm:WPE1extension} and \ref{thm:WPEextDnormal}, a one-dimensional extension of an algebraic Ricci soliton $(H, h)$ carries a non-rigid $(\lambda, n+m)$-Einstein metric. We show a converse of this construction on solvmanifolds, i.e., any non-rigid $(\lambda, n+m)$-Einstein solvmanifold can be obtained in this way.
A solvmanifold with $W_{\lambda, n+m}(G,g) \neq \{0\}$ which is rigid is a product of a $\lambda$-Einstein solvmanifold and a space form. Moreover, $W_{\lambda, n+m}(G,g)$ consists of functions which are pullbacks of solutions on the space form factor. Thus, the study of these spaces reduces to studying left invariant Einstein metrics on simply connected solvable Lie groups. There is a rich structure to these spaces, see \cite{Heber}, \cite{LauretStandard} and the references therein.
In the non-rigid case, the group $G$ shall be identified with its metric Lie algebra $(\mathfrak{g}, \langle\cdot, \cdot\rangle)$ where $\mathfrak{g}$ is the Lie algebra of $G$ and $\langle\cdot, \cdot\rangle$ denotes the inner product on $\mathfrak{g}$ which determines the metric. We consider the orthogonal decomposition
\begin{equation*}
\mathfrak{g} = \mathfrak{a}\oplus \mathfrak{n},
\end{equation*}
where $\mathfrak{n}$ is the nilradical of $\mathfrak{g}$, i.e., the maximal nilpotent ideal. Assuming that $(G,g)$ is non-rigid, by Proposition \ref{prop:H}, the zero set $H$ of a normalized distance function is a codimension one normal subgroup. Let $h$ be the induced metric on $H$. Then $(H,h)$ is also a solvmanifold since, by equation (\ref{eqn:xiH}), $\xi \in \mathfrak{a}$. The Lie algebra of $H$, $\mathfrak{h}$, then has the following decomposition
\[
\mathfrak{h} = \mathfrak{a}' \oplus \mathfrak{n}
\]
where $\mathfrak{a}'$ is the orthogonal complement of $\mathbb{R} \xi \subset \mathfrak{a}$.
\begin{thm}\label{thm:Hsolvsoliton}
Suppose that $(G, g)$ is a solvmanifold with $W_{\lambda, n+m}(G, g) \neq \{ 0 \}$ which is non-rigid. Let $H$ be the zero set of a normalized distance function with induced metric $h$. Then $(H, h)$ is an algebraic Ricci soliton with $\mathrm{Ric}^{H} = \lambda I + S$ where $S \in \mathrm{Der}(\mathfrak{h})$ is symmetric and $\mathfrak{h}$ is the Lie algebra of $H$.
\end{thm}
\begin{rem} Recall that, under the hypothesis, $\lambda$ must be negative. \end{rem}
\begin{proof}
From Proposition \ref{prop:H} and Theorem \ref{thm:structureWPEextension} we know that $H$ is a codimension one normal subgroup of $G$ and its Ricci curvature satisfies the following equation
\[
\mathrm{Ric}^H = \lambda I + S + \frac{1}{\mathrm{tr}(S) - \lambda m}[S, A],
\]
where the derivation $D$ is given in the proof of Theorem \ref{thm:structureWPEextension}. To show that $(H, h)$ is an algebraic Ricci soliton, it is sufficient to show that $S$, the symmetrization of $D$, is also a derivation, and $[S, A] = 0$. Since $D$ is given by $\dfrac{1}{\alpha} \mathrm{ad}_\xi$ in Case I of Theorem \ref{thm:structureWPEextension}, and by $\dfrac{1}{\alpha}\mathrm{ad}_\xi$ on $\mathfrak{h}_1$ and $- \lambda I$ on $\mathbb{R}^k$ in Case II of Theorem \ref{thm:structureWPEextension}, we only have to show that $(\mathrm{ad}_{\xi})^*$ is a derivation and $\mathrm{ad}_\xi$ is a normal operator. We prove this by considering the Einstein solvmanifold $(E, g_E)$ in Theorem \ref{thm:LieStruc}.
Recall that the derivation $\mathrm{ad}_\xi$ is extended to $\mathfrak{e}= \mathbb{R}\xi \oplus \mathfrak{h} \oplus \mathbb{R}^m$ by $\mathrm{ad}_\xi (U) = - L U$ for any $U \in \mathbb{R}^m$. The property that it is a normal operator and its adjoint is a derivation of $\mathfrak{h}$ holds if and only if its extension has the same property on $\mathfrak{e}$. In Theorem \ref{thm:LieStruc}, we have $[\mathfrak{e}, \mathfrak{e}] = \mathfrak{n} \oplus \mathbb{R}^m$ and its orthogonal complement is $\mathfrak{a}$ which is abelian. It follows that $(E, g_E)$ is of standard type. Since $\xi \in \mathfrak{a}$, $\mathrm{ad}_\xi$ is a normal operator by Theorem B, or Theorem 4.10 in \cite{Heber}. In \cite[Lemma 4.7]{LauretSol}, it is shown that this is equivalent to $(\mathrm{ad}_\xi)^*$ being a derivation. This finishes the proof.
\end{proof}
Finally, using the structure results of algebraic Ricci soliton on solvmanifolds in \cite{LauretSol}, we have the following characterization of non-rigid solvmanifolds.
\renewcommand{\theenumi}{\roman{enumi}}
\begin{thm}\label{thm:wpesolvmanifold}
Let $(G, g)$ be a solvmanifold with metric Lie algebra $(\mathfrak{g}, \scp{\cdot, \cdot})$ and consider orthogonal decompositions of the form $\mathfrak{g} = \mathfrak{a} \oplus \mathfrak{n}$ and $\mathfrak{a} = \mathbb{R} \xi \oplus \mathfrak{a}'$, where $\mathfrak{n}$ is the nilradical of $\mathfrak{g}$ and $r$ is a signed distance function with $\nabla r = \xi$. Then $(G, g)$ is a non-rigid space with $e^{L r} \in W_{\lambda, n+m}(G,g)$ for some constants $L$ and $m$ if and only if the following conditions hold:
\begin{enumerate}
\item $(\mathfrak{n}, \scp{\cdot, \cdot}|_{\mathfrak{n}\times \mathfrak{n}})$ is a nilsoliton with Ricci operator $\mathrm{Ric}_1 = \lambda I + D_1$, for some $D_1 \in \mathrm{Der}(\mathfrak{n})$,
\item $[\mathfrak{a}, \mathfrak{a}] = 0$,
\item $(\mathrm{ad}_A)^* \in \mathrm{Der}(\mathfrak{g})$ (or equivalently, $[\mathrm{ad}_A, (\mathrm{ad}_A)^*] = 0$) for all $A \in \mathfrak{a}$,
\item $\scp{A,A} = - \frac{1}{\lambda} \mathrm{tr} S(\mathrm{ad}_A)^2$ for all $A\in \mathfrak{a}'$,
\item $\mathrm{tr} S(\mathrm{ad}_{\xi})^2 = -\lambda - m L^2$.
\end{enumerate}
\end{thm}
\renewcommand{\theenumi}{\arabic{enumi}}
\begin{proof}
From Theorems \ref{thm:WPE1extension}, \ref{thm:WPEextDnormal} and \ref{thm:Hsolvsoliton}, $(G, g)$ is a non-rigid space with $e^{Lr} \in W_{\lambda, n+m}(G,g)$ if and only if $(H, h)$ is an algebraic Ricci soliton, i.e., $\mathrm{Ric}^{H} = \lambda I + D$ for some $D \in \mathrm{Der}(\mathfrak{h})$, and $S(\mathrm{ad}_{\xi}) = \alpha D$ for some $\alpha \in \mathbb{R}$. From \cite[Theorem 4.8]{LauretSol}, the structure results of algebraic Ricci solitons on solvmanifolds, we have conditions (i), (ii), (iii) and (iv) for any $\mathfrak{h}$. In (iii), the fact that $\left(\mathrm{ad}_{\xi}\right)^* \in \mathrm{Der}(\mathfrak{g})$ follows from the fact that $S\left(\mathrm{ad}_{\xi}\right)$ is a derivation. The last condition (v) follows from the facts that $\mathrm{Ric}(\xi, \xi) = -(\lambda + mL^2)$ and $\mathrm{Ric}(\xi, \xi) = - \mathrm{tr} S(\mathrm{ad}_{\xi})^2$; this condition is equivalent to the existence of $\alpha$ such that $S\left(\mathrm{ad}_{\xi}\right) = \alpha D$.
\end{proof}
\medskip
The family of {\em episturmian words} is an interesting natural generalization of the well-known
{\em Sturmian words} (a particular class of binary infinite words) to an arbitrary finite alphabet, introduced by Droubay, Justin, and Pirillo \cite{xDjJgP01epis} (also see \cite{aG05powe,jJgP02epis,jJgP04epis,jJlV00retu} for instance). Episturmian words share many properties with Sturmian words and include the well-known \emph{Arnoux-Rauzy sequences}, the study of which began in \cite{pAgR91repr} (also see \cite{jJgP02onac,rRlZ00agen} for example).
In this paper, we characterize by {\em lexicographic order} all {\em finite} Sturmian and episturmian words, i.e., all (finite) factors of such infinite words. Consequently, we obtain a characterization of episturmian words in a {\em wide sense} (episturmian and {\em episkew} infinite words). That is, we characterize the set of all infinite words whose factors are (finite) episturmian. Similarly, we characterize by lexicographic order all {\em balanced} infinite words over a $2$-letter alphabet; in other words, all Sturmian and {\em skew} infinite words, the factors of which are (finite) Sturmian.
To any infinite word $\mathbf{t}$ we can associate two infinite words $\min(\mathbf{t})$ and $\max(\mathbf{t})$ such that any prefix of $\min(\mathbf{t})$ (resp.~$\max(\mathbf{t})$) is the \emph{lexicographically} smallest (resp.~greatest) amongst the factors of $\mathbf{t}$ of the same length (see \cite{gP05ineq} or Section \ref{S:words}). Our main results in this paper extend recent work by Pirillo \cite{gP05ineq,gP05mors}, Justin and Pirillo \cite{jJgP02onac}, and Glen \cite{aG06acha}. In the first of these papers, Pirillo proved that, for infinite words $\mathbf{s}$ on a 2-letter alphabet $\{a,b\}$ with $a<b$, the inequality $a\mathbf{s} \leq \min(\mathbf{s}) \leq \max(\mathbf{s}) \leq b\mathbf{s}$ characterizes {\em standard Sturmian words} (both aperiodic and periodic). Similarly, an infinite word $\mathbf{s}$ on a finite alphabet $\mathcal{A}$ is {\em standard episturmian} if and only if, for any letter $a \in \mathcal{A}$ and lexicographic order $<$ satisfying $a = \min(\mathcal{A})$, we have
\begin{equation} \label{eq:gP05ineq}
a\mathbf{s} \leq \min(\mathbf{s}).
\end{equation}
Moreover, $\mathbf{s}$ is a {\em strict} standard episturmian word (i.e., a {\em standard} Arnoux-Rauzy sequence \cite{pAgR91repr,rRlZ00agen}) if and only if \eqref{eq:gP05ineq} holds with equality \cite{jJgP02onac}. In a similar spirit, Pirillo \cite{gP05mors} very recently defined \emph{fine words} over two letters; that is, an infinite word $\mathbf{t}$ over a 2-letter alphabet $\{a,b\}$ ($a < b$) is said to be \emph{fine} if $(\min(\mathbf{t}), \max(\mathbf{t})) = (a\mathbf{s}, b\mathbf{s})$ for some infinite word $\mathbf{s}$. These words are characterized in \cite{gP05mors} by showing that fine words on $\{a,b\}$ are exactly the {\em aperiodic Sturmian} and {\em skew} infinite words.
Glen \cite{aG06acha} recently extended Pirillo's definition of fine words to an arbitrary finite alphabet; that is, an infinite word $\mathbf{t}$ is \emph{fine} if there exists an infinite word $\mathbf{s}$ such that $\min(\mathbf{t}) = a\mathbf{s}$ for any letter $a \in$ Alph$(\mathbf{t})$ and lexicographic order $<$ satisfying $a = \min(\mbox{Alph}(\mathbf{t}))$. (Here, Alph$(\mathbf{t})$ denotes the \emph{alphabet} of $\mathbf{t}$, i.e., the set of distinct letters occurring in $\mathbf{t}$.) These generalized fine words are characterized in \cite{aG06acha}; specifically, it is shown that an infinite word $\mathbf{t}$ is fine if and only if $\mathbf{t}$ is either a {\em strict} episturmian word, or a {\em strict episkew word} (i.e., a particular kind of non-recurrent infinite word, all of whose factors are episturmian). Here, we prove further that an infinite word $\mathbf{t}$ is episturmian in the {\em wide sense} (episturmian or episkew) if and only if there exists an infinite word $\mathbf{u}$ such that $a\mathbf{u} \leq \min(\mathbf{t})$ for any letter $a\in \mathcal{A}$ and lexicographic order $<$ satisfying $a = \min(\mathcal{A})$. This result follows easily from our characterization of finite episturmian words in Section 4.
This paper is organized as follows. Section 2 contains all of the necessary terminology and notation concerning words, morphisms, and Sturmian and episturmian words. In Section 3, we give a number of equivalent definitions of {\em episkew words}, and recall the aforementioned characterizations of {\em fine words}. Then, in Section 4, we prove a new characterization of finite episturmian words, from which a new characterization of finite Sturmian words is an easy consequence. Lastly, in Section 5, we obtain characterizations of episturmian words in the wide sense and balanced binary infinite words, which follow from the main results in Sections 3 and 4.
\section{Preliminaries}
\subsection{Words and morphisms} \label{S:words}
Let $\mathcal{A}$ denote a finite alphabet. A (finite) \emph{word} is an
element of the \emph{free monoid} $\mathcal{A}^*$ generated by $\mathcal{A}$, in
the sense of concatenation. The identity $\varepsilon$ of $\mathcal{A}^*$ is
called the \emph{empty word}, and the \emph{free semigroup},
denoted by $\mathcal{A}^+$, is defined by $\mathcal{A}^+ :=
\mathcal{A}^*\setminus\{\varepsilon\}$. An \emph{infinite word} (or simply
\emph{sequence}) $\mathbf{x}$ is a sequence indexed by $\mathbb{N}$ with values in $\mathcal{A}$,
i.e., $\mathbf{x} = x_0x_1x_2\cdots$, where each $x_i \in \mathcal{A}$. The set of
all infinite words over $\mathcal{A}$ is denoted by $\mathcal{A}^\omega$, and we define $\mathcal{A}^\infty := \mathcal{A}^* \cup \mathcal{A}^\omega$.
If $w = x_{1}x_{2}\cdots x_{m} \in \mathcal{A}^+$, each
$x_{i} \in \mathcal{A}$, the \emph{length} of $w$ is $|w| = m$ and we denote by
$|w|_a$ the number of occurrences of a letter $a$ in $w$. (Note that
$|\varepsilon| = 0$.) The \emph{reversal} $\widetilde{w}$ of $w$ is given by $\widetilde{w} = x_{m}x_{m-1}\cdots x_{1}$, and if $w = \widetilde{w}$, then $w$ is called a \emph{palindrome}.
An infinite word $\mathbf{x} = x_0x_1x_2\cdots$, each $x_i \in \mathcal{A}$, is said to be \emph{periodic} (resp.~\emph{ultimately periodic}) with period $p$ if $p$ is the smallest positive integer such that $x_i = x_{i+p}$ for all $i \in \mathbb{N}$ (resp.~for all $i \geq m$ for some $m\in \mathbb{N}$). If $u$, $v \in \mathcal{A}^+$, then $v^\omega$ (resp.~$uv^\omega$) denotes the periodic (resp.~ultimately periodic) infinite word $vvv\cdots$ (resp.~$uvvv\cdots$) having $|v|$ as a period.
A finite word $w$ is a \emph{factor} of $z \in \mathcal{A}^\infty$ if $z = uwv$ for some $u \in \mathcal{A}^*$, $v \in \mathcal{A}^\infty$. Further, $w$ is called a \emph{prefix} (resp.~\emph{suffix}) of $z$ if $u = \varepsilon$ (resp.~$v = \varepsilon$).
An infinite word $\mathbf{x} \in \mathcal{A}^\omega$ is called a \emph{suffix} of $\mathbf{z} \in \mathcal{A}^\omega$ if there exists a word $w \in \mathcal{A}^*$ such that $\mathbf{z} = w\mathbf{x}$. A factor $w$ of a word $z \in \mathcal{A}^\infty$ is \emph{right} (resp.~\emph{left}) \emph{special} if $wa$, $wb$ (resp.~$aw$, $bw$) are factors of $z$
for some letters $a$, $b \in \mathcal{A}$, $a \ne b$.
For any word $w \in \mathcal{A}^\infty$, $F(w)$ denotes the set of all its
factors, and $F_n(w)$ denotes the set of all factors of
$w$ of length $n \in \mathbb{N}$, i.e., $F_n(w) := F(w)
\cap \mathcal{A}^n$ (where $|w|\geq n$ for $w$ finite). Moreover, the \emph{alphabet} of $w$ is Alph$(w)
:= F(w) \cap \mathcal{A}$ and, if $w$ is infinite, we denote by Ult$(w)$ the set of
all letters occurring infinitely often in $w$. Two infinite words $\mathbf{x}$, $\mathbf{y} \in \mathcal{A}^\omega$ are said to be \emph{equivalent} if $F(\mathbf{y}) = F(\mathbf{x})$, i.e., if $\mathbf{x}$ and $\mathbf{y}$ have the same set of factors. A factor of an infinite word $\mathbf{x}$ is \emph{recurrent} in $\mathbf{x}$ if it occurs infinitely many times in $\mathbf{x}$, and $\mathbf{x}$ itself is said to be \emph{recurrent} if all of its factors are recurrent in it.
Suppose the alphabet $\mathcal{A}$ is totally ordered by the relation $<$. Then we
can totally order $\mathcal{A}^+$ by the \emph{lexicographic order} $<$,
defined as follows. Given two words $u$, $v \in \mathcal{A}^+$, we have $u
< v$ if and only if either $u$ is a proper prefix of $v$ or $u =
xau^\prime$ and $v = xbv^\prime$, for some $x$, $u^\prime$,
$v^\prime \in \mathcal{A}^*$ and letters $a$, $b$ with $a < b$. This is
the usual alphabetic ordering in a dictionary, and we say that $u$
is \emph{lexicographically less} than $v$. This notion
naturally extends to $\mathcal{A}^\omega$, as follows. Let $\mathbf{u} =
u_0u_1u_2\cdots$ and $\mathbf{v} = v_0v_1v_2\cdots$, where $u_j$, $v_j
\in \mathcal{A}$. We define $\mathbf{u} < \mathbf{v}$ if there exists an index $i\geq0$
such that $u_j = v_j$ for all $j=0,\ldots, i-1$ and $u_{i} < v_{i}$.
Naturally, $\leq$ will mean $<$ or $=$.
Let $w \in \mathcal{A}^\infty$ and let $k$ be a positive integer. We denote by $\min(w | k)$ (resp.~$\max(w | k)$) the lexicographically smallest (resp.~greatest) factor of $w$ of length $k$ for the given order (where $|w|\geq k$ for $w$ finite). If $w$ is infinite, then it is clear that $\min(w | k)$ and $\max(w | k)$ are prefixes of the respective words $\min(w | k+1)$ and $\max(w | k+1)$. So we can define, by taking limits, the following two infinite words (see \cite{gP05ineq})
\[
\min(w) = \underset{k\rightarrow\infty} {\lim}\min(w | k) \quad \mbox{and} \quad
\max(w) = \underset{k\rightarrow\infty}{\lim}\max(w | k).
\]
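\noindent\textit{Illustration.} For a finite word these quantities can be computed directly. The following Python sketch is purely illustrative and ours (the paper itself contains no code); the helper names and the encoding of an order as a string listing the alphabet from smallest to greatest letter are our own conventions.
\begin{verbatim}
def factors(w, k):
    # All factors of w of length k.
    return {w[i:i + k] for i in range(len(w) - k + 1)}

def min_factor(w, k, order):
    # min(w|k): lexicographically smallest factor of length k;
    # 'order' lists the alphabet from smallest to greatest.
    rank = {c: i for i, c in enumerate(order)}
    return min(factors(w, k), key=lambda f: [rank[c] for c in f])

def max_factor(w, k, order):
    # max(w|k): lexicographically greatest factor of length k.
    rank = {c: i for i, c in enumerate(order)}
    return max(factors(w, k), key=lambda f: [rank[c] for c in f])
\end{verbatim}
For an infinite word and each fixed $k$, applying these functions to longer and longer prefixes eventually stabilizes on $\min(w | k)$ and $\max(w | k)$, since an infinite word has only finitely many factors of each length.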
The \emph{inverse} of $w \in \mathcal{A}^*$, written $w^{-1}$, is
defined by $ww^{-1} = w^{-1}w = \varepsilon$. It must be emphasized that
this is merely formal notation, i.e., for $u, v, w \in \mathcal{A}^*$, the
words $u^{-1}w$ and $wv^{-1}$ are defined only if $u$ (resp.~$v$)
is a prefix (resp.~suffix) of $w$.
A \emph{morphism on} $\mathcal{A}$ is a map $\psi: \mathcal{A}^* \rightarrow
\mathcal{A}^*$ such that $\psi(uv) = \psi(u)\psi(v)$ for all $u, v \in
\mathcal{A}^*$. It is uniquely determined by its image on the alphabet
$\mathcal{A}$. The action of morphisms on $\mathcal{A}^*$ naturally extends to infinite words; that is, if
$\mathbf{x} = x_0x_1x_2 \cdots \in \mathcal{A}^\omega$, then $\psi(\mathbf{x}) = \psi(x_0)\psi(x_1)\psi(x_2)\cdots$.
In what follows, we shall assume that $\mathcal{A}$ contains two or more letters.
\subsection{Sturmian words} \label{SS:Sturmian}
Sturmian words admit several equivalent definitions and have numerous characterizations; for instance, they can be characterized by their palindrome or return word structure
\cite{xDgP99pali,jJlV00retu}. A particularly useful definition of Sturmian words is the following.
\newpage
\begin{defn}
An infinite word $\mathbf{s}$ over $\{a,b\}$ is \textbf{\em Sturmian} if there exist real numbers $\alpha$, $\rho \in [0,1]$ such that $\mathbf{s}$ is equal to one of the following two infinite words:
\[
s_{\alpha,\rho}, ~s_{\alpha,\rho}^{\prime}: \mathbb{N} \rightarrow \{a,b\}
\]
defined by
\[
\begin{matrix}
&s_{\alpha,\rho}(n) = \begin{cases}
a &\mbox{if} ~\lfloor(n+1)\alpha + \rho\rfloor -
\lfloor n\alpha + \rho\rfloor = 0, \\
b &\mbox{otherwise};
\end{cases} \\
&\qquad \\
&s_{\alpha,\rho}^\prime(n) = \begin{cases}
a &\mbox{if} ~\lceil(n+1)\alpha + \rho\rceil -
\lceil n\alpha + \rho\rceil = 0, \\
b &\mbox{otherwise}.
\end{cases}
\end{matrix} \qquad (n \geq 0)
\]
Moreover, $\mathbf{s}$ is said to be \textbf{\em standard Sturmian} if $\rho = \alpha$.
\end{defn}
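\noindent\textit{Illustration.} The defining formulas are directly executable. The following Python sketch (ours, not part of the original exposition) generates the first $n$ letters of $s_{\alpha,\rho}$ or $s'_{\alpha,\rho}$; for irrational $\alpha$ the floating-point arithmetic is only approximate, so the output should be trusted only for moderate $n$.
\begin{verbatim}
from math import floor, ceil

def sturmian_prefix(alpha, rho, n, lower=True):
    # First n letters of s_{alpha,rho} (lower=True) or of
    # s'_{alpha,rho} (lower=False), over the alphabet {a, b}.
    f = floor if lower else ceil
    return ''.join(
        'a' if f((i + 1)*alpha + rho) - f(i*alpha + rho) == 0
        else 'b'
        for i in range(n))
\end{verbatim}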
\begin{remq} A Sturmian word of {\em slope} $\alpha$ is:
\begin{itemize}
\item \emph{aperiodic} (i.e., not ultimately periodic) if $\alpha$ is irrational;
\item \emph{periodic} if $\alpha$ is rational.
\end{itemize}
Nowadays, for most authors, only the aperiodic Sturmian words are considered to be `Sturmian'. In several of our previous papers (see \cite{aG06acha,jJgP97deci,jJgP04epis,gP99anew,gP05mors} for instance), we have referred to aperiodic Sturmian words as `proper Sturmian' to highlight that such words correspond to what is now the most common sense of `Sturmian'. In the present paper, the term `Sturmian' will refer to both aperiodic and periodic Sturmian words.
\end{remq}
\begin{defn} \label{D:balance}
A finite or infinite word $w$ over $\{a,b\}$ is said to be \textbf{\em balanced} if, for any factors $u$, $v$ of $w$ with $|u| = |v|$, we have $||u|_{b} - |v|_{b}| \leq 1$ (or equivalently $||u|_{a} - |v|_{a}| \leq 1$).
\end{defn}
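\noindent\textit{Illustration.} Balance is easy to test naively; here is a Python sketch of ours (the function name is our own, and the quadratic-to-cubic running time is adequate only for short words):
\begin{verbatim}
def is_balanced(w):
    # ||u|_b - |v|_b| <= 1 for all equal-length factors u, v of w.
    for k in range(1, len(w) + 1):
        counts = {w[i:i + k].count('b')
                  for i in range(len(w) - k + 1)}
        if max(counts) - min(counts) > 1:
            return False
    return True
\end{verbatim}
By the equivalence recalled below, \texttt{is\_balanced} also decides whether a finite word over $\{a,b\}$ is finite Sturmian.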
In the pioneering paper \cite{gHmM40symb}, balanced infinite words over a $2$-letter alphabet are called `Sturmian trajectories' and belong to three classes: aperiodic Sturmian; periodic Sturmian; and non-recurrent infinite words that are ultimately periodic (but not periodic), called {\em skew words}. That is, the family of balanced infinite words consists of the (recurrent) Sturmian words and the (non-recurrent) skew infinite words, all of whose factors are balanced.
It is important to note that a finite word is \emph{finite Sturmian} (i.e., a factor of some Sturmian word) if and only if it is balanced \cite{jBpS02stur}. Accordingly, balanced infinite words are precisely the infinite words whose factors are finite Sturmian. In Section \ref{S:infinite}, we will generalize this concept by showing that the set of all infinite words whose factors are {\em finite episturmian} consists of the (recurrent) episturmian words and the (non-recurrent) {\em episkew} infinite words (see Propositions \ref{P:skew} and \ref{P:wide}, to follow).
For a comprehensive introduction to Sturmian words, see for instance \cite{jAjS03auto,jBpS02stur,nP02subs} and references therein. Also see \cite{aHrT00char,gP05mors} for further work on skew words.
\subsection{Episturmian words} \label{S:episturmian}
For episturmian words and morphisms\footnotemark[1] we use the same terminology and notation as in \cite{xDjJgP01epis,jJgP02epis,jJgP04epis}. \footnotetext[1]{In \cite{jJgP02epis}, Section 5.1 is incorrect and should be ignored.}
An infinite word $\mathbf{t} \in \mathcal{A}^\omega$ is \emph{episturmian} if $F(\mathbf{t})$ is closed under
reversal and $\mathbf{t}$ has at most one right (or equivalently left) special factor of each length. Moreover, an episturmian word is \emph{standard} if all of its left special factors are prefixes of
it. Sturmian words are exactly the episturmian words over a 2-letter alphabet.
\noindent\textit{Note.} Episturmian words are recurrent \cite{xDjJgP01epis}.
Standard episturmian words are characterized in \cite{xDjJgP01epis} using the concept of the \emph{palindromic right-closure} $w^{(+)}$ of a finite word $w$, which is the (unique) shortest palindrome having $w$ as a prefix (see \cite{aD97stur}).
Specifically, an infinite word $\mathbf{t} \in \mathcal{A}^\omega$ is standard episturmian if and only if there exists an infinite word $\Delta(\mathbf{t}) = x_1x_2x_3\ldots$, each $x_i
\in \mathcal{A}$, called the \emph{directive word} of $\mathbf{t}$, such that the infinite sequence of palindromic prefixes $u_1 =
\varepsilon$, $u_2$, $u_3$, $\ldots$ of $\mathbf{t}$ (which exists by results in \cite{xDjJgP01epis}) is given by
\begin{equation} \label{eq:02.09.04}
u_{n+1} = (u_nx_n)^{(+)}, \quad
n \in \mathbb{N}^+.
\end{equation}
\textit{Note.} An equivalent way of constructing the sequence $(u_n)_{n\geq 1}$ is via the `hat operation' \cite[Section III]{rRlZ00agen}.
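\noindent\textit{Illustration.} The palindromic right-closure and the iteration \eqref{eq:02.09.04} are straightforward to implement. The following Python sketch is ours (the function names are not from the literature); it computes the palindromic prefix obtained after consuming a given finite prefix of the directive word.
\begin{verbatim}
def pal_closure(w):
    # w^(+): the shortest palindrome having w as a prefix.
    for i in range(len(w)):
        if w[i:] == w[i:][::-1]:   # longest palindromic suffix
            return w + w[:i][::-1]
    return w                       # w is the empty word

def epistandard_prefix(directive):
    # u_{n+1}, obtained from u_1 = '' by u_{i+1} = (u_i x_i)^(+),
    # where directive = x_1 x_2 ... x_n.
    u = ''
    for x in directive:
        u = pal_closure(u + x)
    return u
\end{verbatim}
For instance, \texttt{epistandard\_prefix('abc')} returns \texttt{abacaba}; iterating with the directive word $(abc)^\omega$ produces longer and longer prefixes of the Tribonacci word.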
Let $a \in \mathcal{A}$ and denote by $\psi_a$ the morphism on $\mathcal{A}$ defined by
\[
\psi_a : \left\{\begin{array}{lll}
a &\mapsto &a \\
x &\mapsto &ax \quad \mbox{for all $x \in \mathcal{A}\setminus\{a\}$}.
\end{array} \right.
\]
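\noindent\textit{Illustration.} In Python (our sketch; the name \texttt{psi} is ours):
\begin{verbatim}
def psi(a, w):
    # The pure epistandard morphism psi_a:
    # a -> a, and x -> ax for every letter x != a.
    return ''.join(c if c == a else a + c for c in w)
\end{verbatim}
For example, \texttt{psi('a', 'bc')} returns \texttt{abac}, and the letter \texttt{a} is separating for the image, in accordance with the remark below.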
Together with the permutations of the alphabet, all of the morphisms $\psi_a$ generate by composition the monoid of {\em epistandard morphisms} (`epistandard' is an elegant shortcut for `standard episturmian' due to Richomme \cite{gR03conj}). The submonoid generated by the $\psi_a$ only is the monoid of {\em pure epistandard morphisms}, which includes the {\em identity morphism} Id$_{\mathcal{A}} =$ Id, and consists of all the \emph{pure standard (Sturmian) morphisms} when $|\mathcal{A}|=2$.
\begin{remq} \label{R:separating} If $\mathbf{x} = \psi_a(\mathbf{y})$ or $\mathbf{x} = a^{-1}\psi_a(\mathbf{y})$ for some $\mathbf{y} \in \mathcal{A}^\omega$ and $a \in \mathcal{A}$, then the letter $a$ is said to be \emph{separating for $\mathbf{x}$} and its factors; that is, any factor of $\mathbf{x}$ of length $2$ contains the letter $a$.
\end{remq}
Another useful characterization of standard episturmian words is the following (see \cite{jJgP02epis}). An infinite word $\mathbf{t} \in \mathcal{A}^\omega$ is standard episturmian with directive word $\Delta(\mathbf{t}) = x_1x_2x_3\cdots$ ($x_i \in \mathcal{A}$) if and only if there exists an infinite sequence of infinite words
$\mathbf{t}^{(0)} = \mathbf{t}$, $\mathbf{t}^{(1)}$, $\mathbf{t}^{(2)}$, $\ldots$ such that $\mathbf{t}^{(i-1)} = \psi_{x_i}(\mathbf{t}^{(i)})$ for all $i \in \mathbb{N}^+$. Moreover, each $\mathbf{t}^{(i)}$ is a standard episturmian word with directive word
$\Delta(\mathbf{t}^{(i)}) = x_{i+1}x_{i+2}x_{i+3}\cdots$, the \emph{$i$-th shift} of $\Delta(\mathbf{t})$.
To the prefixes of the directive word $\Delta(\mathbf{t}) = x_1x_2\cdots$, we associate the morphisms
\[
\mu_0 := \mbox{Id}, \quad \mu_n := \psi_{x_1}\psi_{x_2}\cdots\psi_{x_n}, \quad n \in \mathbb{N}^+,
\]
and define the words
\[
h_n := \mu_n(x_{n+1}), \quad n \in \mathbb{N},
\]
which are clearly prefixes of $\mathbf{t}$. For the palindromic prefixes $(u_i)_{i\geq 1}$ given by \eqref{eq:02.09.04}, we have the following useful formula \cite{jJgP02epis}
\[
u_{n+1} = h_{n-1}u_{n};
\]
whence, for $n > 1$ and $0 < p < n$,
\begin{equation} \label{eq:u_n}
u_n = h_{n-2}h_{n-3}\cdots h_1h_0 = h_{n-2}h_{n-3}\cdots h_{p-1}u_p.
\end{equation}
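\noindent\textit{Illustration.} These relations are easy to check mechanically. Continuing the Python sketches above (with \texttt{pal\_closure} and \texttt{psi} as defined there; the sample directive word below is our own choice):
\begin{verbatim}
def mu(directive, w):
    # mu_n = psi_{x_1} psi_{x_2} ... psi_{x_n} applied to w;
    # the innermost morphism psi_{x_n} acts first.
    for x in reversed(directive):
        w = psi(x, w)
    return w

directive = 'abcab'                   # x_1 x_2 ... x_5 (sample)
u = [None, '']                        # u[n] holds u_n; u_1 = epsilon
for x in directive:
    u.append(pal_closure(u[-1] + x))  # u_{n+1} = (u_n x_n)^(+)
h = [mu(directive[:n], directive[n])  # h_n = mu_n(x_{n+1})
     for n in range(len(directive))]
for n in range(1, len(directive) + 1):
    assert u[n + 1] == h[n - 1] + u[n]
\end{verbatim}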
\noindent\textit{Note.}
Evidently, if a standard episturmian word $\mathbf{t}$ begins with the letter $x \in \mathcal{A}$, then $x$ is separating for $\mathbf{t}$ (see \cite[Lemma 4]{xDjJgP01epis}).
\subsubsection{Strict episturmian words}
A standard episturmian word $\mathbf{t}\in \mathcal{A}^\omega$, or any equivalent (episturmian) word,
is said to be \emph{$\mathcal{B}$-strict} (or $k$-\emph{strict} if $|\mathcal{B}|=k$, or {\em strict} if $\mathcal{B}$ is understood) if
Alph$(\Delta(\mathbf{t})) =$ Ult$(\Delta(\mathbf{t})) = \mathcal{B} \subseteq \mathcal{A}$. In particular, a standard episturmian word
over $\mathcal{A}$ is $\mathcal{A}$-strict if every letter in $\mathcal{A}$ occurs infinitely many times in its directive word.
The $k$-strict episturmian words have complexity $(k - 1)n+1$ for
each $n \in \mathbb{N}$; such words are exactly the $k$-letter Arnoux-Rauzy
sequences. In particular, the $2$-strict episturmian words correspond to the aperiodic Sturmian words. The strict standard episturmian words are precisely the standard (or characteristic) Arnoux-Rauzy sequences.
\section{Episkew words} \label{S:skew}
Recall that a finite word $w$ is said to be {\em finite Sturmian} (resp.~\emph{finite episturmian}) if $w$ is a factor of some infinite Sturmian (resp.~episturmian) word. When considering factors of infinite episturmian words, it suffices to consider only the strict standard ones (i.e., characteristic Arnoux-Rauzy sequences). Indeed, for any factor $u$ of an episturmian word, there exists a strict standard episturmian word also having $u$ as a factor.
Thus, finite episturmian words are exactly the {\em finite Arnoux-Rauzy words} considered by Mignosi and Zamboni \cite{fMlZ02onth}.
In this section, we define {\em episkew words}, which were alluded to (but not explicated) in the recent paper \cite{aG06acha}. The following proposition gives a number of equivalent definitions of such infinite words.
\noindent\textbf{Notation:} Denote by $\mathbf{v}_p$ the prefix of length $p$ of a given infinite word $\mathbf{v}$.
\begin{prop} \label{P:skew} An infinite word $\mathbf{t}$ with {\em Alph}$(\mathbf{t}) = \mathcal{A}$ is \textbf{\em episkew} if equivalently:
\begin{enumerate}[{\em (i)}]
\item $\mathbf{t}$ is non-recurrent and all of its factors are (finite) episturmian;
\item there exists an infinite sequence $(\mathbf{t}^{(i)})_{i\geq0}$ of non-recurrent infinite words and a directive word $x_1x_2x_3\cdots$ $(x_i \in \mathcal{A})$ such that $\mathbf{t}^{(0)} = \mathbf{t}$ and $\mathbf{t}'^{(i-1)} = \psi_{x_i}(\mathbf{t}^{(i)})$ for all $i \in \mathbb{N}^+$, where $\mathbf{t}'^{(i-1)} = \mathbf{t}^{(i-1)}$ if $\mathbf{t}^{(i-1)}$ begins with $x_i$ and $\mathbf{t}'^{(i-1)} = x_i\mathbf{t}^{(i-1)}$ otherwise;
\item there exists a letter $x \in \mathcal{A}$ and a standard episturmian word $\mathbf{s}$ on $\mathcal{A}\setminus\{x\}$ such that $\mathbf{t} = v \mu(\mathbf{s})$, where $\mu$ is a pure epistandard morphism on $\mathcal{A}$ and $v$ is a non-empty suffix of $\mu(\widetilde{\mathbf{s}_p}x)$ for some $p \in \mathbb{N}$.
\end{enumerate}
Moreover, $\mathbf{t}$ is said to be a \textbf{\em strict episkew word} if $\mathbf{s}$ is strict on $\mathcal{A}\setminus\{x\}$, i.e., if each letter in $\mathcal{A}\setminus\{x\}$ occurs infinitely often in the directive word $x_1x_2x_3\cdots$.
\end{prop}
\vspace{-15pt}\begin{pf}
\noindent (i) $\Rightarrow$ (ii): Since all of the factors of $\mathbf{t}$ are finite episturmian, there exists a letter, $x_1$ say, that is separating for $\mathbf{t}$. If $\mathbf{t}$ does not begin with $x_1$, consider $\mathbf{t}' = x_1\mathbf{t}$; otherwise consider $\mathbf{t}' = \mathbf{t}$. Then, $\mathbf{t}' = \psi_{x_1}(\mathbf{t}^{(1)})$ for some $\mathbf{t}^{(1)} \in \mathcal{A}^\omega$. Continuing in this way, we obtain infinite words $\mathbf{t}'^{(1)}$, $\mathbf{t}^{(2)}$, $\mathbf{t}'^{(2)}$, $\mathbf{t}^{(3)}$, $\ldots$ with $\mathbf{t}'^{(i-1)}$ as in the statement. Clearly, if some $\mathbf{t}^{(i)}$ is recurrent then $\mathbf{t}$ is also recurrent, in which case $\mathbf{t}$ is episturmian by \cite[Theorem 3.10]{jJgP02epis}. Thus all of the $\mathbf{t}^{(i)}$ are non-recurrent.
\medskip
\noindent (ii) $\Rightarrow$ (iii): We proceed by induction on $| \mathcal{A}|$. The starting point of the induction (i.e., $|\mathcal{A}|=2$) will be considered later.
Let $\Delta := x_1x_2x_3 \cdots$. If $\mathcal{A}= \textrm{Ult}(\Delta)$ then any letter in $\mathcal{A}$ is separating for infinitely many $\mathbf{t}^{(i)}$, thus is recurrent in all $\mathbf{t}^{(i)}$. Consider any factor $w$ of $\mathbf{t}$. As $ |\textrm{Ult}(\Delta )| >1$, we easily see that $w$ is a factor of $\psi_{x_1}\psi_{x_2}\cdots\psi_{x_q}(x)$ for some $q$ and letter $x$. Hence $w$ is recurrent in $\mathbf{t}$ and it follows that $\mathbf{t}$ itself is recurrent; a contradiction. Thus, there exists a letter $x$ in $\mathcal{A}$ and some minimal $n$ such that $x$ is not recurrent in $\mathbf{t}^{(n)}$. Two cases are possible: \smallskip
\noindent {\em Case $1$}: $x$ does not occur in $\mathbf{t}^{(n)}$. Then $ |\textrm{Alph}(\mathbf{t}^{(n)})| < |\mathcal{A}|$; whence, by induction, $\mathbf{t}^{(n)}$ has the desired form and clearly $\mathbf{t}$ also has the desired form. More precisely, if we let $\mathcal{B} := \mathcal{A}\setminus\{x\}$, then $\mathbf{t}^{(n)} = \hat v \lambda(\mathbf{s})$ where $\mathbf{s}$ is a standard episturmian word on $\mathcal{B}\setminus\{y\}$ for some letter $y \ne x$, $\lambda$ is a pure epistandard morphism on $\mathcal{B}$, and $\hat v$ is a non-empty suffix of $\lambda(\widetilde{ \mathbf{s} _q}y)$ for some $q \in \mathbb{N}$. It easily follows that $\mathbf{t} = v\mu(\mathbf{s})$ where $\mathbf{s}$ is a standard episturmian word on $\mathcal{A}\setminus\{y\}$, $\mu$ is a pure epistandard morphism on $\mathcal{A}$, and $v$ is a non-empty suffix of $\mu(\widetilde{\mathbf{s}_p}y)$ for some $p \in \mathbb{N}$. \smallskip
\noindent {\em Case $2$}: $x$ occurs in $\mathbf{t}^{(n)}$. We now show that $x$ occurs exactly once in $\mathbf{t}^{(n)}$.
Suppose on the contrary that $x$ occurs at least twice in $\mathbf{t}^{(n)}$. Then,
since $x_{n+1}$ is separating for $\mathbf{t}^{(n)}$, we have $x w^{(n)}x \in F(\mathbf{t}^{(n)})$ for some non-empty word $w^{(n)}$ for which $x_{n+1}$ is separating, and the first and last letters of $w^{(n)}$ are $x_{n+1}$ (that is,
$w^{(n)}x = \psi_{x_{n+1}}(w^{(n+1)}x)$,
where $w^{(n+1)} = \psi_{x_{n+1}}^{-1}(w^{(n)}x_{n+1}^{-1})$).
Using the fact that $ |w^{(n)}x|= 2 |w^{(n+1)}x| - |w^{(n+1)}x|_{x_{n+1}}$, we see that $|w^{(n+1)}| < |w^{(n)}|$. So, continuing the above procedure, we obtain infinite words $\mathbf{t}^{(n+1)}$, $\mathbf{t}^{(n+2)}$, $\ldots$~ containing similar shorter factors $xw^{(n+1)}x$, $x w^{(n+2)}x$, $\ldots$ until we reach $\mathbf{t}^{(q)}$, which contains $xx$. But this is impossible because the letter $x_{q+1} \ne x$ is separating for $\mathbf{t}^{(q)}$. Therefore $\mathbf{t}^{(n)}$ contains only one occurrence of $x$ and we have
$$\mathbf{t}^{(n)} = ux\mathbf{s}^{(n)} \quad \mbox{for some $u \in (\mathcal{A}\setminus\{x\})^*$ and $\mathbf{s}^{(n)} \in (\mathcal{A}\setminus\{x\})^\omega$}.$$
Now, as $x$ is never separating for $\mathbf{t}^{(j)}$, $j \geq n$, we can write $\mathbf{t}^{(n+j)} = u^{(j)}x\mathbf{s}^{(n+j)}$ for some $u^{(j)}$, $\mathbf{s}^{(n+j)}$, and we have $ \mathbf{s}^{(n+j-1)} = \psi_{x_{n+j}}(\mathbf{s}^{(n+j)})$, $j>0$. It follows by the Preliminaries (Section \ref{S:episturmian}) that $\mathbf{s}^{(n)}$ is a (recurrent) standard episturmian word.
Now we study the factor $u$ preceding $x$ in $\mathbf{t}^{(n)}$. Let $u^\prime = x_{n+1}u$ if $u$ does not begin with $x_{n+1}$; otherwise let $u^\prime = u$. Then $u^\prime x$ is a prefix of ${\mathbf{t}'}^{(n)}$. Moreover, since $x_{n+1}$ is separating for $u^\prime x$, we have $u^\prime x = \psi_{x_{n+1}}(u^{(1)}x)$ where $u^{(1)} = \psi_{x_{n+1}}^{-1}(u^\prime x_{n+1}^{-1})$. Hence $\mathbf{t}^{(n+1)} = u^{(1)}x \mathbf{s}^{(n+1)}$, where $x_{n+2}$ is separating for $u^{(1)}x $ (if $u^{(1)} \ne \varepsilon$). Continuing in this way, we arrive at the infinite word $\mathbf{t}^{(q)} = x \mathbf{s}^{(q)}$ for some $q \geq n$, where $\mathbf{s}^{(q)}$ is a standard episturmian word on $\mathcal{A}\setminus\{x \}$.
Reversing the procedure, we find that
\[
\mathbf{t}^{(n)} = w\mathbf{s}^{(n)} \quad \mbox{where $w = ux$ is a non-empty suffix of $\psi_{x_{n+1}}\cdots \psi_{x_q}(x)$}.
\]
Suppose $(u_i)_{i\geq1}$ is the sequence of palindromic prefixes of
\[
\mathbf{s} = \psi_{x_1}\cdots \psi_{x_n} (\mathbf{s}^{(n)}) = \mu_n(\mathbf{s}^{(n)}),
\]
and the words $(h_i)_{i\geq0}$ are the prefixes $(\mu_{i}(x_{i+1}))_{i\geq 0}$ of $\mathbf{s}$. Then, letting $u_i^{(n)}$, $h_i^{(n)}$, and $\mu_i^{(n)}$ denote the analogous elements for $\mathbf{s}^{(n)}$, we have
$$\mu_0^{(n)} = \mbox{Id}, \quad \mu^{(n)}_{i} = \psi_{x_{n+1}}\psi_{x_{n+2}}\cdots \psi_{x_{n+i}} = \mu_n^{-1}\mu_{n+i}$$ and
\[
h_0^{(n)} = x_{n+1}, \quad h_i^{(n)} = \mu_{i}^{(n)}(x_{n+1+i}) \quad \mbox{for $i=1$, $2$, $\ldots$~.}
\]
Now, if $u\ne \varepsilon$, then $q \geq n+1$, and we have
\begin{align*}
\psi_{x_{n+1}}\cdots \psi_{x_q}(x) = \mu_{q-n}^{(n)}(x)
&= \mu_{q-n-1}^{(n)}\psi_{x_q}(x) \notag\\
&= \mu_{q-n-1}^{(n)}(x_qx) \notag\\
&= h_{q-n-1}^{(n)}\mu_{q-n-1}^{(n)}(x) \notag\\
&\qquad \vdots \notag\\
&=h_{q-n-1}^{(n)}\cdots h_{1}^{(n)}\mu_0^{(n)}(x_{n+1}x) \notag \\
&=h_{q-n-1}^{(n)}\cdots h_1^{(n)}h_0^{(n)}x
=u_{q-n+1}^{(n)}x \quad \mbox{(by \eqref{eq:u_n}).}
\end{align*}
Therefore, $w = ux$ where $u$ is a (possibly empty) suffix of the palindromic prefix $u_{q-n+1}^{(n)}$ of $\mathbf{s}^{(n)}$. That is, $u$ is the reversal of some prefix of $\mathbf{s}^{(n)}$; in particular
$$u = \widetilde{\mathbf{s}}_p^{(n)} \quad \mbox{for some $p \in \mathbb{N}$},$$
and hence
\[
\mathbf{t}^{(n)} = \widetilde{\mathbf{s}}_p^{(n)}x\mathbf{s}^{(n)}.
\]
So, passing back from $\mathbf{t}^{(n)}$ to $\mathbf{t}$, we find that
\[
\mathbf{t} = v\mu_n(\mathbf{s}^{(n)}) = v\mathbf{s} \quad \mbox{where $v$ is a non-empty suffix of $\mu_n(\widetilde{\mathbf{s}}_p^{(n)}x)$. } \]
It remains to treat the case $| \mathcal{A}| = 2$. Reasoning as previously, we see that for some $n$, $\mathbf{t}^{(n)} = y^p x y^\omega$ where $x \neq y \in \mathcal{A}$; whence the desired form for $\mathbf{t}$.
\medskip
\noindent (iii) $\Rightarrow$ (i): It suffices to show that the factors of $\widetilde{\mathbf{s}_p}x\mathbf{s}$ are (finite) episturmian. This is trivial for factors not containing the letter $x$. Suppose $w$ is a factor containing $x$. Then $w$ is a factor of $u_r x u_r$ where $u_r$ is a long enough palindromic prefix of $\mathbf{s}$. Thus it remains to show that $u_r x u_r$ is episturmian and this is true because it is $(u_rx)^{(+)}$, which is a palindromic prefix of some standard episturmian word.
\qed\end{pf}
\vspace{-10pt}
\begin{remq}
Episkew words on a $2$-letter alphabet are precisely the skew words, defined in Section \ref{SS:Sturmian}.
\end{remq}
\subsection{Fine words}
\begin{defn} An {\bf acceptable pair} is a pair $(a, <)$ where $a$ is a letter and $<$ is a lexicographic order on $\mathcal{A}^+$ such that $a = \min(\mathcal{A}).$
\end{defn}
\begin{defn} \cite{aG06acha}
An infinite word $\mathbf{t}$ on $\mathcal{A}$ is said to be {\bf fine} if there exists an infinite word $\mathbf{s}$ such that $\min(\mathbf{t}) = a\mathbf{s}$ for any acceptable pair $(a,<)$.
\end{defn}
\noindent\textit{Note.}
Since there are only two lexicographic orders on words over a 2-letter alphabet, a fine word $\mathbf{t}$ over $\{a,b\}$ ($a< b$) satisfies $(\min(\mathbf{t}), \max(\mathbf{t})) = (a\mathbf{s}, b\mathbf{s})$ for some infinite word $\mathbf{s}$.
Pirillo \cite{gP05mors} characterized fine words over a 2-letter alphabet. Specifically:
\begin{prop} \label{P:gP05mors} Let $\mathbf{t}$ be an infinite word over $\{a,b\}$. The following properties are equivalent:
\begin{enumerate}[{\em (i)}]
\item $\mathbf{t}$ is fine,
\item either $\mathbf{t}$ is aperiodic Sturmian, or $\mathbf{t} = v\mu(x)^\omega$ where $\mu$ is a pure standard Sturmian morphism on $\{a,b\}$, and $v$ is a non-empty suffix of $\mu(x^py)$ for some $p \in \mathbb{N}$ and $x$, $y \in \{a,b\}$ $(x\ne y)$. \qed
\end{enumerate}
\end{prop}
In other words, a fine word over two letters is either an aperiodic Sturmian word or an ultimately periodic (but not periodic) infinite word, all of whose factors are Sturmian, i.e., a \emph{skew word} (see Section \ref{SS:Sturmian}). Recently, Glen \cite{aG06acha} generalized this result to infinite words over two or more letters; that is, an infinite word $\mathbf{t}$ is fine if and only if $\mathbf{t}$ is either a strict episturmian word or a strict episkew word.
\section{A characterization of finite episturmian words}
Let $w \in \mathcal{A}^\infty$ and let $k$ be a positive integer. Recall that $\min(w | k)$ (resp.~$\max(w | k)$) denotes the lexicographically smallest (resp.~greatest) factor of $w$ of length $k$ for the given order (where $|w|\geq k$ for $w$ finite).
\begin{defn} For a finite word $w \in \mathcal{A}^+$ and a given order, $\min(w)$ will denote $\min(w | k)$ where $k$ is maximal such that all $\min (w | j)$, $j= 1,2, \dots, k$, are prefixes of $\min (w | k)$. In the case $\mathcal{A}= \{a,b\}$, $\max(w)$ is defined similarly.
\end{defn}
\begin{example} \label{Ex:min} Suppose $w = baabacababac$. Then, for the orders $b<a<c$ and $b<c<a$ on the $3$-letter alphabet $\{a,b,c\}$: \vspace{-10pt}
\begin{eqnarray*}
\min(w|1) &=& b \\
\min(w|2) &=& ba \\
\min(w|3) &=& bab \\
\min(w|4) &=& baba \\
\min(w|5) &=& babac ~=~ \min(w)
\end{eqnarray*}
\end{example} \vspace{-5pt}
Notice that, in the above example, $\min(w)$ is a suffix of $w$; in fact, this interesting property is true in general, as shown below.
\begin{prop} \label{prop:suffix}
For any finite word $w$ and a given order, $\min(w)$ is a suffix of $w$. Moreover, $\min(w)$ is unioccurrent (i.e., has only one occurrence) in $w$.
\end{prop}
\vspace{-15pt}\begin{pf} If $\min(w)$ ($= \min(w | k)$, say) has an occurrence in $w$ that is not a suffix of $w$, then some factor of $w$ of length $k+1$ begins with $\min(w | k)$; since no factor of length $k+1$ can begin with a lexicographically smaller word of length $k$, this forces $\min(w | k+1) = \min(w | k)x$ for some letter $x$, contradicting the maximality of $k$. Hence $\min(w)$ occurs just once in $w$, as a suffix.
\qed\end{pf}
\vspace{-10pt}
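\noindent\textit{Illustration.} The definition yields a direct algorithm. In Python (our sketch; \texttt{min\_word} is our own name, and an order is again encoded as a string listing the alphabet from smallest to greatest):
\begin{verbatim}
def min_word(w, order):
    # min(w): the longest min(w|k) of which all the shorter
    # minima are prefixes.
    rank = {c: i for i, c in enumerate(order)}
    key = lambda f: [rank[c] for c in f]
    prev = ''
    for k in range(1, len(w) + 1):
        cur = min((w[i:i + k] for i in range(len(w) - k + 1)),
                  key=key)
        if not cur.startswith(prev):
            return prev
        prev = cur
    return prev
\end{verbatim}
On Example \ref{Ex:min}, \texttt{min\_word('baabacababac', 'bac')} returns \texttt{babac}, which is indeed a unioccurrent suffix of $w$.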
\noindent\textbf{Notation:} From now on, it will be convenient to denote by $v_p$ the prefix of length $p$ of a given finite {\em or} infinite word $v$ (where $|v| \geq p$ for $v$ finite).
In this section, we shall prove the following characterization of finite episturmian words.
\begin{thm}\label{t3} A finite word $w$ on $\mathcal{A}$ is episturmian if and only if there exists a finite word $u$ such that, for any acceptable pair $(a, <)$, we have
\begin{equation}au_{|m|-1} \le m \label{e2} \end{equation} where $m= \min(w)$ for the considered order.
\end{thm}
The following two lemmas are needed for the proof of Theorem \ref{t3}.
\begin{lem} \label{lem:separating}
If $w$ and $u$ satisfy inequality \eqref{e2} for all acceptable pairs $(a,<)$ and $|\mbox{{\em Alph}}(w)| >1$, then $u$ is non-empty and its first letter is separating for $w$.
\end{lem}
\vspace{-15pt}\begin{pf}
Let $a \neq b \in \textrm{Alph}(w)$ and let $(a,<)$, $(b,<')$ be two acceptable pairs. As the corresponding two $\min(w)$'s are suffixes of $w$ (by Proposition \ref{prop:suffix}), they have different lengths; whence $|u| > 0$.
Now we show that the first letter $u_1$ of $u$ is separating for $w$. Indeed, if this is not true, then there exist letters $z$, $z' \in \mathcal{A}\setminus\{u_1\}$ (possibly equal) such that $zz' \in F(w)$. But $\min(\mathcal{A}) = z \leq z' < u_1$ for some acceptable pair $(z,<)$, in which case $zz' < zu_1$, contradicting the fact that $zu_1 \leq m_2$.
\qed\end{pf}
\begin{lem} \label{lem:psi_z}
Consider $w$, $w'$ $ \in \mathcal{A}^*$ and some letter $z \in \mathcal{A}$. For any given order $<$ on $\mathcal{A}$:
\begin{enumerate}[{\em (i)}]
\item if $w$ does not end with $z$ and $w= \psi_z(w')$, then
\[
\min(w) = \begin{cases}
\psi_z(\min(w')) &\mbox{if $\min(w)$ begins with $z$}, \\
z^{-1}\psi_z(\min(w')) &\mbox{otherwise};
\end{cases}
\]
\item if $w$ ends with $z$ and $w= \psi_z(w')z$, then
\[
\min(w) = \begin{cases}
\psi_z(\min(w'))z &\mbox{if $\min(w)$ begins with $z$}, \\
z^{-1}\psi_z(\min(w'))z &\mbox{otherwise}.
\end{cases}
\]
\end{enumerate}
\end{lem}
\vspace{-15pt}\begin{pf}
We denote by $m$, $m'$ the respective words $\min (w)$, $\min (w')$.
Consider first the simplest case: $w$ does not end with $z$, $m$ begins with $z$. Thus $w= \psi_z (w')$ for some word $w'$ that does not end with $z$. Write $e:= \psi_z (m')$. We have to show that $e=m$.
Let $k$ be maximal such that $e_i=\min(w | i)$ for $i=1, \dots, k$. Suppose $k<|e|$. Then there exist $x,y \in \mathcal{A}$, $x>y$, such that $e_{k+1} = e_k x$ and $e_k y \in
F(w)$. Thus, as $z$ is separating for $w$, $e_k = e_{k-1}z$, with $e_{k-1}=\psi_z (m'_q)$ for some $q$. Since $m$ begins with $z$, $\min(\textrm{Alph}(w')) = z$ and we have $e_{k+1} = \psi_z(m'_q x)= \psi_z (m'_{q+1})$. Also, if $y \neq z$ then $e_k y= \psi_z (m'_q y)$ with $m'_q y \in F(w')$. If $y=z$, then as $w$ does not end with $z$, $e_k yd = e_{k-1} zyd $ is a factor of $w$ for some letter $d$; whence again $m'_q y \in F(w')$. As $x>y$, this contradicts $m'_{q+1}= \min(w' | q+1)$.
Thus $k=|e|$. It suffices now to show that no $ex$, $x \in \mathcal{A}$, occurs in $w$. Otherwise $ex \in F(w)$. As $m'$ does not end with $z$, also $e$ does not end with $z$, thus $x=z$. So, as $w$ does not end with $z$, $ezy=exy$ occurs in $w$ for some letter $y$, whence $\psi_z (m' y) \in F(w')$ contradicting the unioccurrence of $m'$ in $w'$.
Now we pass to the most complicated case: $w$ ends with $z$, $m$ does not begin with $z$, $w= \psi_z(w')z$. Letting $e:= z^{-1} \psi_z(m')z$, we need to show that $e=m$. Let $k$ be maximal such that $e_i=\min(w | i)$ for $i=1, \dots, k$. Suppose $k<|e|$. Then, there exist $f \in F(w)$ and $x,y \in \mathcal{A}$ with $x>y$, such that $e_{k+1} = e_k x$ and $f=e_k y$. As $w$ begins with $z$, clearly $z e_{k+1}, zf \in F(w)$. Also $e_k$ ends with $z$, hence
$ze_{k+1} = \psi_z (m'_q)zx $ and $zf=\psi_z(m'_q)zy$ for some $q <|m'|$. We distinguish three cases: $x,y \neq z$; $x=z$; $y=z$.
The first case leads to $ze_{k+1} = \psi_z (m'_qx)$ and $zf=\psi_z(m'_qy)$; whence $m'_{q+1} >m'_q y$, contradicting the definition of $m'$. For the case $x=z, y \neq z$, let $m' = m'_q u$, $u \in \mathcal{A}^*$, and recall that $ze= \psi_z(m')z$. We get $ze= \psi_z (m'_q) \psi_z (u)z$, thus $\psi_z (u)z$ begins with $zz$, and so $u$ begins with $z$. Hence $m'_{q+1}= m'_qz=m'_qx$, leading to a contradiction as above. The third case is similar.
Thus $k= |e|$ and it remains to show that no $ex$, $x$ a letter, occurs in $w$. Consider for instance the case $x=z$. Indeed $ez \in F(w)$ implies $z^{-1}\psi_z (m')zz \in F(\psi_z (w')z)$, so
$\psi_z (m')z \in F(\psi_z (w'))$, whence $m'd \in F(w')$ for some letter $d$; a contradiction.
The other two cases in the lemma have similar proofs.
\qed\end{pf}
\vspace{-5pt}
\begin{example} Let us illustrate the most complicated case when $w$ ends with $z$ and $m$ does not begin with $z$. Let $w' = aa$, $z=b$, $w = babab = \psi_b (w')b $. Then $m'=aa$ and $m= abab= b^{-1}\psi_b (m')b$.
\end{example}
\vspace{-15pt}\begin{pf*}{Proof of Theorem \ref{t3}} ONLY IF part: $w$ is finite episturmian, so is a factor of some standard episturmian word $\mathbf{s}$. By \cite[Proposition 3.2]{gP05ineq} or \cite[Theorem 0.1]{jJgP02onac}, $a\mathbf{s}\leq \min(\mathbf{s})$ for any acceptable pair $(a,<)$. Thus, $m = \min(w)$ trivially satisfies
\[
a\mathbf{s}_{|m|-1} \leq m;
\]
that is, with $r$ large enough and $u=\mathbf{s}_r$, inequality \eqref{e2} is satisfied for any acceptable pair $(a,<)$, as required. \smallskip
\noindent IF part: Remark first that if \eqref{e2} is satisfied for some $u$ then it also holds for any $uv$, $v \in \mathcal{A}^*$. Also, if $a \not \in \textrm{Alph} (w)$ then \eqref{e2} is trivially satisfied, allowing us to limit our attention to acceptable pairs $(a,<)$ with $a \in \textrm{Alph} (w)$.
Let $x:= u_1$, the first letter of $u$. The proof will proceed by induction on $\ell= |w|$. If $w$ is a letter, then $w$ is clearly finite episturmian, i.e., the initial case $|w| = 1$ is trivially true.
We now distinguish two cases according to whether or not $w$ begins with $x$.
\noindent \emph{Case $1$}: $w$ begins with $x$. Suppose for instance $w$ does not end with $x$ (the other case is similar). Then, by Lemma \ref{lem:separating}, $w = \psi_x (w')$ for some word $w'$ that does not end with $x$. Further, it follows from Lemma \ref{lem:psi_z} that, for any acceptable pair $(a,<)$, $\min(w) = \psi_x(\min(w'))$ if $x=a$ (resp.~$\min(w) =x^{-1} \psi_x (\min(w'))$ if $x \ne a$). For short, let $m$, $m'$ denote the respective words $\min(w)$, $\min(w')$. The induction step will consist in constructing some word $u'$ such that inequality \eqref{e2} holds for $w'$, $u'$.
For any acceptable pair $\pi =(a,<)$ with $a \in \textrm{Alph} (w)$, let $h= h(\pi)$ be maximal such that $au_h$ is a prefix of $m$, and let $H$ be the largest $h(\pi)$ for all such pairs $\pi$. As $u_H \in F(w)$ and begins with $x$, we have $u_H=\psi_x (v)$ for some word $v$.
Now consider an acceptable pair $\pi =(a,<)$ as above with $h<H$. If $au_h=m$ then we see that $av_q =m'$ for some $q$. Otherwise there exist letters $y < z$ such that $au_{h+1}=au_h y$ and $m_{h+2} =au_h z$; whence easily $av_{q+1}=av_qy$ and $m'_{q+2}= av_qz$, and thus $av_{|m'|-1} <m'$.
Now, for any pair $(a,<)$ such that $h=H$ we have either $au_H =m$ or $au_{H+1}=au_Hy < m_{H+2} = m_{H+1}z$, for some letters $y<z$; whence $av=m'$ or $avy < m'$.
Consequently we can take either $u'=v$ or $u'=vy$. This is the induction step. Clearly $|w'|=\ell' <\ell= |w|$ unless $|\textrm{Alph}(w)|=1$, a trivial case.
\noindent\emph{Case $2$}: $w$ does not begin with $x$. In this case, we have $w = x^{-1}\psi_x(w')$ for some word $w'$ that does not begin with $x$. Consider $W=xw = \psi_x(w')$. Then, for any acceptable pair $(a,<)$ with $a \neq x$, we easily see that $\min(W) = \min (w)$. The same holds if $a=x$ and $aa$ occurs in $w$ because in this case $\min(W)$ begins with $aa$ and $W$ begins with $ay$ for some $y \neq x$; thus $\min(W) \in F(w)$. If $x=a$ and $xx \not \in F(w)$, then the letter $x$ does not occur in $w'$, so inequality \eqref{e2} is trivially satisfied for $w'$ (as stated previously). Thus we can use $W=xw$ instead of $w$ for performing the induction step as in Case 1, ignoring acceptable pairs of the form $(x,<)$. However, as $|W| =|w| +1$, it is possible that $|w'|=|w|$; this trivial case corresponds to words $w'$ of the form $yx^p$ for some letter $y \ne x$ and $p \in \mathbb{N}$.
\qed\end{pf*}
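\noindent\textit{Illustration.} The induction in the IF part rests on two computable steps: finding a separating letter and inverting $\psi_x$. Here is a Python sketch of ours (the names and the parsing convention are our own; when $w$ ends with the separating letter, the $\psi_x$-factorization is not unique, and \texttt{desub} simply returns one valid pre-image):
\begin{verbatim}
def find_separating(w):
    # A letter occurring in every length-2 factor of w, if any.
    for a in sorted(set(w)):
        if all(a in w[i:i + 2] for i in range(len(w) - 1)):
            return a
    return None

def desub(a, w):
    # One word w' with psi_a(w') = w, assuming a separates w;
    # w is prepended with a if needed (as in Case 2 above).
    if not w.startswith(a):
        w = a + w
    out, i = [], 0
    while i < len(w):              # each block is 'a' or 'a' + c
        if i + 1 < len(w) and w[i + 1] != a:
            out.append(w[i + 1]); i += 2
        else:
            out.append(a); i += 1
    return ''.join(out)
\end{verbatim}
For instance, \texttt{desub('b', 'babab')} returns \texttt{aab}, corresponding to the factorization $babab = \psi_b(aab)$, whereas the example after Lemma \ref{lem:psi_z} used the equally valid factorization $\psi_b(aa)b$.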
\begin{example} Recall the finite word $w= baabacababac$ from Example \ref{Ex:min}.
For the different orders on $\{a,b,c\}$, we have
\begin{itemize}
\item $a<b<c$ or $a<c<b$: $\min(w) = aabacababac$,
\item $b<a<c$ or $b<c<a$: $\min(w) = babac$,
\item $c<a<b$ or $c<b<a$: $\min(w) = cababac$.
\end{itemize}
It can be verified that a finite word $u$ satisfying \eqref{e2} must begin with $aba$ and one possibility is $u=abacaaaaaa$; thus $w$ is a finite episturmian word.
\end{example}
\noindent\textit{Note.} In the above example, any two acceptable pairs involving the same letter give the same $\min(w)$, which is not the case in general.
A corollary of Theorem \ref{t3} is the following new characterization of finite Sturmian words (i.e., finite balanced words).
\begin{cor}\label{cor2} A finite word $w$ on $\mathcal{A}=\{a,b\}$, $a<b$, is not Sturmian (in other words, not balanced) if and only if there exists a finite word $u$ such that $aua$ is a prefix of $\min(w)$ and $bub$ is a prefix of $\max(w)$.
\qed
\end{cor}
\begin{example}
For $w = ababaabaabab$, $\min(w)=aabaabab$ and $\max(w)=babaabaabab$. The longest common prefix of $a^{-1}\min(w)$ and $b^{-1}\max(w)$ is $abaaba$, which is followed by $b$ in $\min(w)$ and $a$ in $\max(w)$. Thus $w$ is Sturmian. However, if we take $w= aabababaabaab$ for instance, then $w$ is not Sturmian since $\min(w)= auab$ and $\max(w)= bubaabaab$ where $u = aba$.
\end{example}
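\noindent\textit{Illustration.} The corollary translates into a short test. The Python sketch below is ours; it reuses \texttt{min\_word} from the previous section, noting that the maximum for $a<b$ is the minimum for the reversed order $b<a$. Only the longest common prefix of $a^{-1}\min(w)$ and $b^{-1}\max(w)$ can serve as $u$, since a shorter $u$ would have to be followed by the same letter in both words.
\begin{verbatim}
def sturmian_witness(w):
    # For w over {a, b}: a word u with aua a prefix of min(w)
    # and bub a prefix of max(w), if one exists; None otherwise
    # (i.e., None exactly when w is finite Sturmian).
    mn, mx = min_word(w, 'ab'), min_word(w, 'ba')
    s, t = mn[1:], mx[1:]          # drop the leading a, resp. b
    j = 0
    while j < min(len(s), len(t)) and s[j] == t[j]:
        j += 1                     # s[:j] is the common prefix
    if j < min(len(s), len(t)) and s[j] == 'a' and t[j] == 'b':
        return s[:j]               # the witness u
    return None
\end{verbatim}
In agreement with the example above, \texttt{sturmian\_witness('ababaabaabab')} returns \texttt{None}, while \texttt{sturmian\_witness('aabababaabaab')} returns \texttt{aba}.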
\begin{remq}
An unrelated connection between finite balanced words (i.e., finite Sturmian words) and lexicographic ordering was recently studied by Jenkinson and Zamboni \cite{oJlZ04char}, who presented three new characterizations of `cyclically' balanced finite words via orderings. Their characterizations are based on the ordering of a {\em shift orbit}, either lexicographically or with respect to the $1$-\emph{norm}, which counts the number of occurrences of the symbol $1$ in a given finite word over $\{0,1\}$.
\end{remq}
\section{A characterization of infinite episturmian words in a wide sense} \label{S:infinite}
In this last section, we characterize by lexicographic order the set of all infinite words whose factors are (finite) episturmian. Such infinite words are exactly the episturmian and episkew words, as shown in Proposition \ref{P:wide} below.
\begin{defn} An infinite word is said to be \textbf{\em episturmian in the wide sense} if all of its factors are (finite) episturmian.
\end{defn}
We have the following easy result:
\begin{prop} \label{P:wide} An infinite word is episturmian in the wide sense if and only if it is episturmian or episkew.
\end{prop}
\vspace{-15pt}\begin{pf}
Let $\mathbf{t}$ be an infinite word. First suppose that $\mathbf{t}$ is episturmian in the wide sense. Clearly, if $\mathbf{t}$ is recurrent, then $\mathbf{t}$ is episturmian ({\it cf.} proof of (i) $\Rightarrow$ (ii) in Proposition \ref{P:skew}). On the other hand, if $\mathbf{t}$ is non-recurrent, then $\mathbf{t}$ is episkew, by Proposition \ref{P:skew}.
Conversely, if $\mathbf{t}$ is episturmian or episkew, then all of its factors are (finite) episturmian, and hence $\mathbf{t}$ is episturmian in the wide sense.
\qed\end{pf}
\begin{remq}
Recall that in the $2$-letter case the balanced infinite words (all of whose factors are finite Sturmian) are precisely the Sturmian and skew infinite words. As such, `episturmian words in the wide sense' can be viewed as a natural generalization of balanced infinite words to an arbitrary finite alphabet.
\end{remq}
As a consequence of Theorem \ref{t3}, we obtain the following characterization of episturmian words in the wide sense (episturmian and episkew words).
\begin{cor}\label{cor1*} An infinite word $\mathbf{t}$ on $\mathcal{A}$ is episturmian in the wide sense if and only if there exists an infinite word $\mathbf{u}$ such that
\begin{equation}a\mathbf{u} \le \min(\mathbf{t}) \label{e1*}\end{equation}
for any acceptable pair $(a, <)$.
\end{cor}
\vspace{-15pt}\begin{pf} IF part: Inequality \eqref{e1*} holds. So, for any factor $w$ of $\mathbf{t}$ and any acceptable pair $(a,<)$, we have
\[
a\mathbf{u}_{|m|-1} \leq m \quad \mbox{where $m = \min(w)$}.
\]
Therefore, by Theorem \ref{t3}, $w$ is a finite episturmian word; whence $\mathbf{t}$ is episturmian in the wide sense since any factor of $\mathbf{t}$ is (finite) episturmian.
\smallskip
\noindent ONLY IF part: $\mathbf{t}$ is episturmian in the wide sense, so all of its factors are (finite) episturmian; in particular, any prefix $\mathbf{t}_q$ of $\mathbf{t}$ is finite episturmian. Therefore, by Theorem \ref{t3}, there exists a finite word, say $u(q)$, such that, for any acceptable pair $(a,<)$, we have
\[
au(q)_{|m(q)|-1} \leq \min(\mathbf{t}_q) \quad \mbox{where $m(q) = \min(\mathbf{t}_q)$}.
\]
On the other hand, for any $k \in \mathbb{N}$ there exists a number $r(k) \in \mathbb{N}$ such that, for any $q \geq r(k)$, $\mathbf{t}_q$ contains all the $\min(\mathbf{t} | k)$ as factors for all acceptable pairs $(a, <)$. It follows then that $\min(\mathbf{t} | k)$ is a prefix of $\min(\mathbf{t}_q)$; in particular $|\min(\mathbf{t}_q)| \ge k$, and hence $|u(q)| \geq k-1$. Thus, the $|u(q)|$ are unbounded.
Let us denote by $\mathbf{u}$ a limit point of the $u(q)$. Then, for any $n$, infinitely many $u(q)$ have $\mathbf{u}_n$ as a prefix.
Now, for any given $k\in \mathbb{N} ^+$ and acceptable pair $(a,<)$, there exists a $q$ (as above) such that
\[
a\mathbf{u} _{k-1} = au(q)_{k-1}\le \min(\mathbf{t}_q)_k = \min(\mathbf{t} | k).
\]
Thus $a \mathbf{u} \le \min (\mathbf{t})$.
\qed\end{pf}
In the 2-letter case, we have the following characterization of balanced infinite words; in other words, all Sturmian and skew infinite words.
\begin{cor}\label{cor1} An infinite word $\mathbf{t}$ on $\{a,b\}$, $a<b$, is balanced (i.e., Sturmian or skew) if and only if there exists an infinite word $\mathbf{u}$ such that
\begin{equation*}a\mathbf{u} \le \min(\mathbf{t}) \le \max(\mathbf{t}) \le b\mathbf{u}. \label{e1} \qed\end{equation*}
\end{cor}
\begin{remq} A variation of the above result appears, under a different guise, in a paper by S.~Gan \cite[Lemma 4.4]{sG01stur}.
\end{remq}
In the geostatistical literature,
likelihood-based methods for parameter estimation
make explicit assumptions about the
statistical distribution of the spatial variable in question,
and thus contrast with traditional methods
such as those based on fitting a model curve to an empirical variogram
\citep{Kovitz:2004:SSC, Emery:2007:RFS}.
The distributional assumptions are manifested, for example,
in ``model-based geostatistics''
\citep{Diggle:2007:MBG}
and ``trans-Gaussian'' models
\citep{Christensen:2001:APV}.
First employed apparently in the early 1980s,
likelihood-based methods have by now established themselves
as a standard approach \citep[see][for a review]{Zimmerman:2010:LBM}.
Given the distributional assumptions,
one may obtain maximum likelihood (ML) estimates of the parameters,
such as parameters in the spatial covariance function.
However,
estimating an ``optimal'' value for the parameters of the covariance function
via ML methods is flawed because,
according to \citet{Warnes:1987:PLE},
the profile likelihood can be multimodal
and is often nearly flat in the neighborhood of its mode.
These problems suggest that
ML estimates may be difficult to find and
such estimates, if found, may not be truly ``representative'' or ``optimal''.
\citet{Handcock:1993:BAK} further emphasize that
``plug-in'' measures of prediction uncertainty based on ML parameter
estimates tend to be overly optimistic.
They argue that a Bayesian approach mitigates this problem.
Indeed,
parallel to the growth of Bayesian statistics in general,
Bayesian geostatistics (or Bayesian kriging) has enjoyed steady growth
since the 1980s
\citep{%
Kitanidis:1986:PUE,
Handcock:1993:BAK,
deOliveira:1997:BPT,
Ecker:1999:BMI,
Berger:2001:OBA,
Diggle:2002:BIG,
Banerjee:2004:HMA,
Palacios:2006:NGB,
Cowles:2009:RMP}.
In this study,
we take a typical geostatistical formulation
and present an algorithm
for deriving a numerical
representation of the posterior distribution of the parameters.
The parameter vector, $\theta$,
consists of standard elements such as
trend coefficients $\trend$,
variance $\sd^2$,
scale $\scale$,
smoothness $\kappa$, etc.
It also accommodates geometric anisotropy.
The prior specification combines
standard non-informative priors (for $\trend$ and $\sd^2$)
and informative priors (for $\scale$ and $\kappa$).
The algorithm uses a normal mixture to approximate the posterior density;
the approximation is updated iteratively to approach the true
posterior distribution.
The proposed algorithm avoids two empirical treatments
that have been used in the literature,
namely, imposing bounds on an unbounded parameter
and discretizing a continuous parameter.
One parameter that has received such treatments
is the scale parameter, $\scale$
\citetext{%
\citealp{deOliveira:1997:BPT};
\citealp[sec.~7.2]{Diggle:2007:MBG};
\citealp{Cowles:2009:RMP}}.
These treatments have conceptual as well as practical complications.
Conceptually,
one needs to take great care to defend one's subjectively chosen bounds
for an unbounded parameter by showing that the posterior inference
is not sensitive to these bounds
\citep{Berger:2001:OBA}.
Practically,
discretizing a continuous parameter renders the computational cost
directly proportional to the resolution of the discretization.
If discretization and artificial bounds are applied simultaneously
to one parameter, say the scale $\scale$,
the desire for wide bounds and dense discretization
places a great burden on computation.
Furthermore,
the discretization approach is not scalable,
in the sense that the computational cost grows exponentially
as the number of parameters being discretized increases.
The central step of the algorithm,
multivariate kernel density estimation,
is a standard task that still poses unresolved difficulties
\citep{Silverman:1986:DES, Scott:1992:MDE, Wand:1995:KS}.
Our algorithm makes particular efforts to determine two tuning
parameters,
one concerning localization and the other concerning bandwidth,
by a likelihood criterion.
Another technical difficulty arises from high skewness of
importance weights, especially in early iterations.
This problem is alleviated by a ``flattening'' transform.
This study intends to provide a generic, or routine,
procedure for the posterior inference of Bayesian kriging parameters.
As already mentioned,
the procedure specifies a prior with limited user intervention,
improves an initial, rough approximation iteratively,
and avoids some ad hoc treatments that have been used in the literature.
Moreover,
the procedure is readily applicable to other model formulations
so long as all the model parameters are defined on
$(-\infty, \infty)$ or can be transformed to be so.
In this sense,
application of the algorithm extends beyond geostatistics.
The article is organized as follows.
The geostatistical parameterization for a spatial variable
is described in Section~\ref{sec:parameterization},
with details on the Mat{\'e}rn correlation function,
geometric anisotropy,
the likelihood function,
and specification of the prior.
The technical core of this article,
the iterative algorithm for posterior inference,
is the subject of Section~\ref{sec:algorithm}.
The proposed method is capable of dealing with a type of
``change-of-support'' problems; this is briefly discussed in
Section~\ref{sec:change-of-support}.
In Section~\ref{sec:examples},
we apply the geostatistical model and the algorithm
in two examples, one using synthetic data and the other using historical
data.
Section~\ref{sec:concl} concludes the article with a summary.
\section{Parameterization}
\label{sec:parameterization}
Let $\Y(\x)$ be a spatial random variable,
where
$\x \in \Omega \subset \mathcal{R}^d$ is location
in $d$-dimensional space (hence $\x$ is a $d$-vector, $d=1,2,3$).
We adopt the following mixed-effects model for $\Y(\x)$:
\begin{equation}\label{eq:Y-mixed-model}
\Y(\x)
= \transpose{\mu(\x)}\trend
+ \sqrt{1 - \nugget} \,\sd \,\varrho(\x)
+ \sqrt{\nugget}\sd \,\epsilon(\x)
,
\end{equation}
where
$\mu(\x)$ is a $p$-vector of deterministic covariates
(e.g.\@ polynomials of the spatial coordinates);
$\trend$ is a $p$-vector of trend coefficients;
$\varrho(\x)$ is a zero-mean, unit-variance, stationary Gaussian
process;
$\epsilon(\x)$ is an iid standard normal white-noise process;
$\sd > 0$ is the standard deviation of $\Y(\x)$;
and
$0 \le \nugget < 1$ is a nugget parameter.
The process $\varrho(\x)$ is characterized by a
correlation function parameterized by a $q$-vector $\phi$.
In addition, we assume $\varrho(\x)$ and $\epsilon(\x)$ are independent
of each other.
We denote the full parameter vector by
$\theta = (\trend, \nugget, \sd^2, \phi)$.
Modeling efforts are typically concentrated on inferencing and
interpreting
the nugget parameter $\nugget$,
the variance $\sd^2$,
and
the correlation parameter(s) $\phi$.
The content of $\phi$ depends on the specific correlation
function employed;
our choice here is the Mat\'ern correlation function,
to be described below.
In addition,
the formulation above allows for geometric anisotropy,
parameters of which are also contained in $\phi$.
\subsection{Correlation function}
We assume spatial stationarity for $\Y$,
hence the correlation between $\Y(\x_1)$ and $\Y(\x_2)$
is a function of
scaled distance, denoted by
$\ell(\x_1,\x_2) = |\x_1 - \x_2| / \scale$,
where $\scale$ is the ``scale'' parameter.
The marginal distribution of $\Y(\x)$
in the formulation~(\ref{eq:Y-mixed-model})
is
$N\bigl(\transpose{\mu(\x)}\!\trend,\, \sd^2\bigr)$.
Of the total variance $\sd^2$,
$\nugget \sd^2$ is contributed by the white noise component
$\sqrt{\nugget}\sd\, \epsilon(\x)$,
whereas the spatially correlated component $\sqrt{1-\nugget}\sd
\varrho(\x)$ contributes a variance of $(1-\nugget)\sd^2$.
In other words,
the so-called ``nugget effect'' accounts for a fraction $\nugget$
of the total variance.
The covariance between
$\Y(\x_1)$ and $\Y(\x_2)$ is
\begin{equation}\label{eq:cov}
\begin{split}
\cov\bigl(\Y(\x_1), \Y(\x_2)\bigr)
&= \sd^2 \corr\bigl(\Y(\x_1), \Y(\x_2)\bigr)
\\
&= \sd^2 \Bigl(
(1 - \nugget) \rho\bigl(\ell(\x_1,\x_2)\bigr)
+ \nugget I(\x_1 = \x_2)
\Bigr)
,
\end{split}
\end{equation}
where $I$ is the identity function,
assuming value 1 if $\x_1$ coincides with $\x_2$ and 0 otherwise,
and $\rho$ is taken to be the Mat\'ern correlation function:
\begin{equation}\label{eq:matern-corr}
\rho(\ell; \smoothness)
= \frac{1}{2^{\smoothness - 1} \Gamma(\smoothness)}
\,
\ell^{\smoothness}
\,
\mathcal{K}_{\smoothness}(\ell)
,\quad
\smoothness > 0
,
\end{equation}
where $\Gamma$ is the gamma function,
$\mathcal{K}_{\smoothness}$ is the modified Bessel function
of the third kind of order $\smoothness$
\citep[secs~9.6 and 10.2]{Abramowitz:1965:HMF};
and
$\smoothness$
is the smoothness parameter
\citetext{%
\citealp[p.~31]{Stein:1999:ISD};
\citealp[p.~51]{Diggle:2007:MBG}}.
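As a concrete illustration,
the correlation~(\ref{eq:matern-corr}) can be evaluated with standard
scientific software.
The following Python sketch, using SciPy's modified Bessel function
\texttt{kv}, is meant for illustration only;
it handles the removable singularity at $\ell = 0$ explicitly.
\begin{verbatim}
import numpy as np
from scipy.special import kv, gamma  # kv: modified Bessel function K_nu

def matern_corr(ell, kappa):
    """Matern correlation rho(ell; kappa) for scaled distances ell >= 0."""
    ell = np.atleast_1d(np.asarray(ell, dtype=float))
    rho = np.ones_like(ell)              # rho -> 1 as ell -> 0
    pos = ell > 0
    rho[pos] = (ell[pos] ** kappa * kv(kappa, ell[pos])
                / (2.0 ** (kappa - 1.0) * gamma(kappa)))
    return rho
\end{verbatim}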
In summary,
in a 1-D model or 2- and 3-D isotropic models,
the correlation parameter $\phi$ contains two elements,
$\scale$ and $\smoothness$.
In an anisotropic model,
$\phi$ contains additional elements that characterize anisotropy, as we
discuss next.
\subsection{Geometric anisotropy}
When $d > 1$, our formulation allows for geometric anisotropy.
\citet{Zimmerman:1993:ALA} distinguishes three forms of geometric
anisotropy: anisotropy in sill,
in range (the ``scale'' parameter here), and in nugget.
Range anisotropy is the most commonly discussed and is also the
anisotropy considered here.
When such anisotropy is present,
the scaled distance, $\ell$, is calculated in a transformed coordinate
system, which is obtained by
rotating the ``natural'' axes to the major and minor
directions of anisotropy,
and using different scales ($\scale$'s) along different axes.
Specifically,
in 2-D we need one angle ($\angle$) and two scales
($\scale_1$, $\scale_2$) to describe such anisotropy,
whereas in 3-D we need three angles
($\angle_1$, $\angle_2$, $\angle_3$) and three scales
($\scale_1$, $\scale_2$, $\scale_3$).
The rotational angles determine a transformation matrix, $\anis$
\citep[see, \eg][p.~62]{Wackernagel:2003:MG}.
With the matrix $\anis$ and directional scales $\scale_1,\dotsc,\scale_d$, define
\[
\mat{B}(\scale, \anis)
= \anis
[\diag(\scale_1^{-2},\dotsc,\scale_d^{-2})]
\transpose{\anis}
,
\]
where
$\diag(\scale_1^{-2},\dotsc,\scale_d^{-2})$ denotes the diagonal matrix
with $\scale_1^{-2},\dotsc,\scale_d^{-2}$ on the main diagonal.
The scaled distance is then defined by
\begin{equation}\label{eq:ell}
\ell(\x_1, \x_2)
= \sqrt{
\transpose{[\x_1 - \x_2]}
\mat{B} \,
[\x_1 - \x_2]}
.
\end{equation}
Note that $\x_1 - \x_2$ is a column vector of length $d$.
This $\ell$ is used in~(\ref{eq:matern-corr}) to calculate
$\rho$.
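For concreteness,
the 2-D case of~(\ref{eq:ell}) may be computed as in the Python sketch
below;
the rotation-matrix convention shown is one common choice and is stated
here as an assumption.
\begin{verbatim}
import numpy as np

def scaled_distance_2d(x1, x2, angle, scales):
    """Anisotropic scaled distance ell(x1, x2) in 2-D.

    angle:  rotation to the principal axes of anisotropy (radians)
    scales: (scale_1, scale_2) along the rotated axes
    """
    c, s = np.cos(angle), np.sin(angle)
    T = np.array([[c, s], [-s, c]])   # assumed rotation convention
    B = T @ np.diag([scales[0] ** -2, scales[1] ** -2]) @ T.T
    d = np.asarray(x1, dtype=float) - np.asarray(x2, dtype=float)
    return float(np.sqrt(d @ B @ d))
\end{verbatim}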
In a study of geometric anisotropy in 2-D,
\citet{Ecker:1999:BMI} treat the matrix $\mat{B}$ as parameter and derive
$\anis$ and $\scale$ from $\mat{B}$.
They discuss, in a Bayesian context,
how to specify a prior for
$\mat{B}$ using the Wishart distribution.
The parameterization we adopt goes in the opposite direction.
With our parameterization,
priors for the angle(s) $\angle$
and the scales $\scale$ are specified.
This parameterization has some advantages in terms of
interpretation and intuition.
These two parameterizations contain the same number of unknowns.
\citet{Hristopulos:2002:NAC}
studies more general forms of anisotropy that are not considered here.
In summary,
in a 2-D anisotropic model, the correlation parameter
$\phi$ consists of elements $\smoothness$, $\angle$,
and $\scale_1$, $\scale_2$;
in a 3-D anisotropic model, $\phi$ consists of
$\smoothness$, $\angle_1$, $\angle_2$, $\angle_3$,
and $\scale_1$, $\scale_2$, $\scale_3$.
\subsection{Likelihood}
The parameter vector is denoted by
$\theta = (\trend, \nugget, \sd^2, \phi)$.
The content of the correlation parameter $\phi$
depends on the spatial dimension $d$ and whether anisotropy is
considered, as discussed above.
Suppose we have measurements of $\Y$,
denoted by $\vy$,
at $n$ locations $\vx$.
The likelihood function of $\theta$ with respect to $\vy$ is
equal to
\begin{equation}\label{eq:likelihood}
p(\vy \given \theta)
= (2\pi \sd^2)^{-n/2} \,
|\mR|^{-1/2}
\exp\Bigl( -\frac{1}{2\sd^2}
\transpose{\bigl(\vy - \mX \trend\bigr)}
\negthinspace
\mR^{-1}
\bigl(\vy - \mX\trend\bigr)
\Bigr)
,
\end{equation}
where
$\mX$ is the ``design matrix'' of covariates corresponding to $\vx$,
each row being $\mu(\x)$ for a single location $\x$;
$\mR$ is the correlation matrix between the locations $\vx$,
calculated using the relations
(\ref{eq:cov}) and~(\ref{eq:matern-corr}).
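For numerical stability,
the logarithm of~(\ref{eq:likelihood}) is best evaluated via a Cholesky
factorization of $\mR$;
a minimal Python sketch, assuming $\mR$ has already been assembled from
(\ref{eq:cov}) and~(\ref{eq:matern-corr}), reads as follows.
\begin{verbatim}
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def log_likelihood(y, X, beta, sigma2, R):
    """log p(y | theta) for the Gaussian likelihood above."""
    n = len(y)
    resid = y - X @ beta
    c, low = cho_factor(R, lower=True)            # R = L L'
    quad = resid @ cho_solve((c, low), resid)     # resid' R^{-1} resid
    logdet_R = 2.0 * np.sum(np.log(np.diag(c)))
    return -0.5 * (n * np.log(2.0 * np.pi * sigma2)
                   + logdet_R + quad / sigma2)
\end{verbatim}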
In applications,
the parameter vector $\theta$ could be simplified depending on the actual
situation of the spatial variable and emphasis of the investigation.
For example, one might choose to fix the smoothness $\smoothness$
at a certain value, say 0.5 or 1.5.
(Then, $\smoothness$ would not appear in $\theta$.)
For another example,
one might decide, based on background knowledge,
not to consider anisotropy,
hence $\scale$ would be a scalar,
and $\phi$ would not contain $\angle$.
When anisotropy is considered,
the coordinate rotation affects location-aware calculations
including the trend function (via $\mu(\x)$ and $\trend$) and
the correlation.
We choose to define the trend function in terms of the original coordinates.
The angles ($\angle$'s) define the rotations;
the scales ($\scale$'s) are applied along the axes after the rotation.
The parameters $\smoothness$, $\nugget$, and $\sd^2$ do not have directional
components because we consider range anisotropy only.
\subsection{Specification of prior}
\label{sec:prior}
We specify a prior with independent components:
\begin{equation}\label{eq:prior}
\pi(\trend, \sd^2, \nugget, \scale, \smoothness, \angle)
=
\bigl(\sd^2\bigr)^{-1}
\operatorname{beta}(\nugget; \cdot)\,
\operatorname{gamma}(\scale; \cdot) \,
\operatorname{gamma}(\smoothness; \cdot)\,
\operatorname{unif}(\angle; 0, \pi/2)
.
\end{equation}
(Remember that $\angle$ appears in anisotropic models only.
In anisotropic models, $\scale$ contains, and $\angle$ may contain,
more than one element.)
This specifies a flat prior on $(-\infty, \infty)$ for the trend coefficients
$\trend$ and a conventional noninformative prior for the variance
$\sd^2$.
It is sensible to use a diffuse prior for $\scale$,
whereas subjective information (or preference) about
$\smoothness$ and $\nugget$ may be injected into their priors.
Relevant discussions can be found
in
\citet{Berger:2001:OBA},
\citet[sec.~5.1.1]{Banerjee:2004:HMA},
\citet[p.~50]{Gelman:2004:BDA},
and
\citet{Gelman:2006:PDV}.
Additional parameters need to be chosen
for the beta and gamma distributions.
Some details are listed below.
However, bear in mind that these particularities are empirical and
subject to adjustment.
\begin{itemize}
\item
$\operatorname{beta}(\nugget;\cdot)$:
taken to be
$\operatorname{beta}(\nugget; 1, 5)
\propto (1 - \nugget)^4 I(0 \le \nugget \le 1)$.
This prior places more weight on small nugget values.
\item
$\operatorname{gamma}(\scale; \cdot)$:
taken to be
$\operatorname{gamma}(\scale; 1, L/(2\log 2))$,
where $L$ is the size of the model domain.
This is the exponential distribution with median $L/2$.
\item
$\operatorname{gamma}(\smoothness;\cdot)$:
parameters of this gamma distribution are chosen such that
its mode is 1.5 and its variance is 4.0.
\end{itemize}
\section{Inference of the posterior}
\label{sec:algorithm}
We estimate the posterior distribution of the parameter vector,
$\theta$, by normal mixtures in an iterative procedure.
The versatility of normal mixtures in approximating complex densities is
documented by \citet{Marron:1992:EMI}.
The normal kernel
implies that all components of $\theta$ must be defined
on $(-\infty, \infty)$.
This requires that the support of each component of $\theta$
is one of
$(-\infty, \infty)$, $(c, \infty)$, $(-\infty, c)$,
and $(c_1, c_2)$, where $c$, $c_1$, and $c_2$ are constants.
Parameters defined on half-bounded intervals
(such as $\sd^2$, $\scale$, and $\smoothness$)
may be log-transformed.
Parameters defined on bounded intervals
(such as $\nugget$ and $\angle$)
may be logit-transformed.
In fact,
the algorithm below is applicable to any model formulation
as long as all the parameters are defined on $(-\infty, \infty)$,
either directly or after transformation.
The prior given by~(\ref{eq:prior}) is for the parameters
on their natural scale.
The prior for the transformed parameters
are determined by (\ref{eq:prior}) and the transformations.
To avoid clutter in notation, we shall still use $\theta$
to denote the parameter vector (now transformed)
and refer to (\ref{eq:prior}) for its prior,
although in reality the prior of the transformed parameter
is a modified form of (\ref{eq:prior}).
The algorithm actually derives posterior distribution of these
\emph{transformed} parameters.
Distribution of the parameters on their natural scale
can be studied based on back-transformed samples
from the derived posterior distribution.
\subsection{Algorithm}
\label{sec:algor}
We begin with an initial approximation to the posterior,
denoted by $f^{(0)}(\theta)$,
which is taken to be a sufficiently diffuse
multivariate normal distribution.
In the $k$th iteration, the current approximation $f^{(k-1)}$
is updated to $f^{(k)}$ in three steps as follows.
\begin{enumerate}
\item
Take a random sample,
$\{\theta_1,\dotsc,\theta_n\}$, from $f^{(k-1)}$.
\item
For $i=1,\dotsc,n$,
compute the non-normalized posterior density
$s_i = \pi(\theta_i)\, p(\vy \given \theta_i)$
and the proposal density
$t_i = f^{(k-1)}(\theta_i)$;
let
$w_i = \frac{s_i / t_i}{\sum_{j=1}^n s_j/t_j}$
be the importance weight of $\theta_i$.
\item
Update the approximate posterior from $f^{(k-1)}$ to
\begin{equation}\label{eq:mixture-normal}
f^{(k)}(\theta)
\approx
\sum_{i=1}^n w_i\, \gauss(\theta;\, \theta_i, V_i).
\end{equation}
This is a mixture of $n$ normal densities (denoted by $\gauss$),
each with mean $\theta_i$ and covariance matrix $V_i$.
Computation of $V_i$ is detailed in
Section~\ref{sec:localization}.
\end{enumerate}
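To make the three steps concrete,
a minimal Python sketch of a single iteration is given below.
Here \texttt{log\_post} denotes the non-normalized log posterior
$\log\pi(\theta) + \log p(\vy\given\theta)$,
the mixture is represented by component means, covariances, and weights,
and \texttt{kernel\_cov} stands in for the covariance computation of
Section~\ref{sec:localization};
all names are illustrative rather than part of the formal algorithm.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def mixture_sample(means, covs, weights, n, rng):
    idx = rng.choice(len(weights), size=n, p=weights)
    return np.array([rng.multivariate_normal(means[i], covs[i])
                     for i in idx])

def mixture_logpdf(t, means, covs, weights):
    dens = sum(w * multivariate_normal.pdf(t, mean=m, cov=C)
               for w, m, C in zip(weights, means, covs))
    return np.log(dens)

def update_once(means, covs, weights, log_post, n, rng):
    theta = mixture_sample(means, covs, weights, n, rng)       # step 1
    log_w = np.array([log_post(t)
                      - mixture_logpdf(t, means, covs, weights)
                      for t in theta])                         # step 2
    w = np.exp(log_w - log_w.max())
    w /= w.sum()
    V = [kernel_cov(theta, w, i)                               # step 3;
         for i in range(n)]           # kernel_cov: placeholder, see below
    return theta, V, w                # components of the new mixture
\end{verbatim}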
This algorithm does not require one to be able to draw a random sample
from the prior of $\theta$.
Instead, a convenient initial approximation,
$f^{(0)}$,
starts the procedure.
One only needs to be able to \emph{calculate} the prior density for any
particular value of $\theta$.
This provides great flexibility in choosing
the prior $\pi(\theta)$ and the initial approximation $f^{(0)}$.
Sampling from a normal mixture distribution is easy.
Note that $f^{(k)}$ is a normal mixture, just like $f^{(k-1)}$,
and is ready to be updated in the next iteration.
Alternatively,
one may terminate the iteration
according to an empirical stopping criterion,
and take $f^{(k)}$ as the final approximate posterior distribution
of the parameter $\theta$.
Some properties of $f^{(k)}$
may be examined semi-analytically.
Often, one is more interested in the unknown field $\Y$ or a function
thereof than in the parameter $\theta$ itself.
Properties of $\Y$ or a function thereof
may be investigated via sampling $\theta$ from $f^{(k)}(\theta)$
and simulating (i.e.\@ sampling) $\fy$ (realization of the field)
according to~(\ref{eq:likelihood}).
\subsection{Algorithm detail: computation of $V_i$}
\label{sec:localization}
Step~3 of the algorithm
entails kernel density estimation,
which is a standard but unsettled task
\citep{Silverman:1986:DES, Scott:1992:MDE, Wand:1995:KS}.
The covariance matrix $V_i$ may be expressed as
$h_i \mSigma_i$,
where
$\mSigma_i$ is the empirical weighted covariance matrix
of the sample points ($\theta$'s) in a certain neighborhood of
$\theta_i$,
and $h_i$ is a ``bandwidth'' parameter.
While $\mSigma_i$ specifies the shape of the kernel centered at
$\theta_i$,
the bandwidth $h_i$ further adjusts the spread of this kernel.
Several factors complicate the computation of $V_i$.
First,
the distribution of the importance weights, $w_i$,
can be highly skewed,
especially in early iterations
when the proposal distribution tends to be very different
from the true posterior distribution.
In not-so-rare pathological cases,
a few sample points (or a single sample point) carry a dominant fraction
of the total weight, making the other sample points negligible.
When this happens,
$\mSigma_i$ may contain variance entries that are nearly 0.
To mitigate this problem,
we use a ``flattened'' version of weights in the computation of $\mSigma_i$.
Let
\begin{equation}\label{eq:flatten-by-entropy}
v_i = \frac{w_i ^ \gamma}{\sum_{j=1}^n w_j^\gamma},
\quad\text{where}\quad
\gamma
= -\frac{1}{\log n} \sum_{i=1}^n w_i \log w_i.
\end{equation}
The exponent $\gamma$ is the ``entropy'' of $\{w_i\}$,
a measure of the uniformity of the weights
\citep[see][]{West:1993:APD}.
If $\{w_i\}$ are all equal, then $\gamma = 1$.
At the other extreme,
if one $w_i$ is 1 and all the other weights are 0,
then $\gamma = 0$.
Note that the weights
$\{v_i\}$ are used in calculating the empirical weighted covariance
matrix $\mSigma_i$;
they do not replace the weights
$\{w_i\}$ in~(\ref{eq:mixture-normal}).
As the algorithm proceeds in iterations,
the importance weights $\{w_i\}$ become more uniform,
hence $\gamma$ becomes closer to 1,
and the adjustment to $\{w_i\}$ by the above ``flattening'' becomes minor.
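In code, the flattening~(\ref{eq:flatten-by-entropy}) amounts to a few
lines; the convention $0 \log 0 = 0$ is assumed for zero weights.
\begin{verbatim}
import numpy as np

def flatten_weights(w):
    """Entropy-flattened weights v from normalized weights w."""
    w = np.asarray(w, dtype=float)
    nz = w > 0                         # convention: 0 * log 0 = 0
    entropy = -np.sum(w[nz] * np.log(w[nz])) / np.log(len(w))
    v = w ** entropy
    return v / v.sum()
\end{verbatim}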
The second complication is in the ``localization'' of $\mSigma_i$,
that is,
we choose to define $\mSigma_i$ as the empirical weighted covariance matrix
of sample points in a ``neighborhood'' of $\theta_i$,
say sample points that take up a fraction $r_i$,
$0 < r_i \le 1$, of the entire sample.
Such localization is important if
the target distribution (i.e.\@ the posterior) is severely multi-modal
\citep{Givens:1996:LAI}.
However, there is no guidance on the determination of the fraction $r_i$.
The third complication is due to the bandwidth $h_i$.
A number of adaptive procedures have been proposed for choosing $h$
\citep{Jones:1990:VKD, Hall:1991:ODB, Sheather:1991:RDB, Terrell:1992:VKD,
Givens:1996:LAI, Sain:2002:MLA}.
However, most of the literature in kernel density estimation
is developed based on a \emph{random} sample,
whereas what we have here is a \emph{weighted} one.
In addition,
localization, as parameterized by $r_i$,
has received less attention in the literature.
Traditional rule-of-thumb choices for the bandwidth parameter
\citep{Jones:1996:BSB}
do not apply directly to a localized algorithm,
because the rules are based on analysis of global estimators.
A localized algorithm encounters other difficulties,
such as edge effects,
that do not arise in a global analysis.
To simplify the matter,
we use a common bandwidth parameter, denoted by $h$ (where $h > 0$),
and a common localization parameter, denoted by $r$ (where $0 < r \le
1$),
for all the mixture components.
Sensible choices
of these two tuning parameters depend on the sample size $n$,
the dimensionality of $\theta$,
and characteristics of the target distribution $\pi(\theta)p(\vy\given
\theta)$.
We determine their values by a maximum likelihood cross-validation criterion:
\begin{equation}\label{eq:ML-kernel}
(r, h)
= \argmax_{r,h}
J(r, h;\, \theta_1,\dotsc,\theta_n)
,
\end{equation}
where
\begin{equation}\label{eq:obj-kernel}
J(r, h;\, \theta_1,\dotsc,\theta_n)
= \sum_{i=1}^n
w_i
\log\biggl(
\frac{1}{1 - w_i}
\sum_{j\ne i}
w_j\,
\gauss\bigl(\theta_i;\; \theta_j, h \mSigma_j\bigr)
\biggr)
.
\end{equation}
Note that
$\mSigma_j$ is a function of
$r$, $\{\theta_i\}$, and $\{v_i\}$.
In words,
$J$ is the usual log likelihood of $r$ and $h$ with respect to
the weighted sample $\{\theta_i\}$,
except that the density at $\theta_i$ is calculated by the
mixture density \emph{leaving out} the mixture component centered at
$\theta_i$.
Leaving out the offending mixture component is key.
Otherwise, maximizing $J$ would drive $h$ to be arbitrarily small.
The idea above is quite general,
and may be extended to determine other tuning parameters.
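For a fixed localization fraction $r$
(so that the matrices $\mSigma_j$ are known),
the criterion~(\ref{eq:obj-kernel}) can be evaluated as in the following
Python sketch.
\begin{verbatim}
import numpy as np
from scipy.stats import multivariate_normal

def leave_one_out_J(theta, w, Sigmas, h):
    """Cross-validation criterion J for given localized Sigmas and h."""
    n = len(w)
    dens = np.empty((n, n))
    for j in range(n):        # density of the j-th kernel at all points
        dens[:, j] = multivariate_normal.pdf(theta, mean=theta[j],
                                             cov=h * Sigmas[j])
    J = 0.0
    for i in range(n):        # leave out the component centered at theta_i
        mix = (w @ dens[i] - w[i] * dens[i, i]) / (1.0 - w[i])
        J += w[i] * np.log(mix)
    return J
\end{verbatim}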
To reduce the computational cost of this optimization,
a combination of discrete search for $r$
and continuous search for $h$
is performed as sketched below.
\begin{enumerate}
\item
Let $J^* \assign -\infty$,
that is, assign value $-\infty$ to the variable $J^*$.
\item
Let $r \assign 1$.
Let
$\Theta_1 \assign \dotsb \assign \Theta_n
\assign \{\theta_1,\dotsc,\theta_n\}$.
Compute the empirical weighted covariance matrix of the entire sample
$\{\theta_1,\dotsc,\theta_n\}$,
using the flattened weights $\{v_i\}$
instead of the original weights $\{w_i\}$.
Denote the result by $\mSigma$,
and let $\mSigma_1 \assign \dotsb \assign \mSigma_n \assign \mSigma$.
\item\label{it:optim-h}
Find $h$ that maximizes~(\ref{eq:obj-kernel}).
Denote the maximizing $h$ by $h_*$
and the achieved maximum $J$ by $J_*$.
(This is a univariate optimization problem.
Since we have values of $\mSigma_1,\dotsc,\mSigma_n$,
the parameter $r$ does not appear in the calculation
of~(\ref{eq:obj-kernel}).)
If $J_* > J^*$, then
let
$J^* \assign J_*$,
$r^* \assign r$,
$h^* \assign h_*$,
and
$\mSigma_i^* \assign \mSigma_i$
for $i=1,\dotsc,n$.
\item\label{it:halve-r}
If $r$ is below a pre-specified threshold fraction,
say $\frac{1}{8}$,
go to step~\ref{it:conclude}.
Otherwise,
let $r \assign r/2$
and go to step~\ref{it:localize}.
\item\label{it:localize}
For $i=1,\dotsc,n$,
\begin{enumerate}
\item
Within the set $\Theta_i$,
identify the $\lceil r n\rceil$ closest neighbors of
$\theta_i$ in the Mahalanobis sense, as measured by
the covariance matrix $\mSigma_i$.
Update $\Theta_i$ to be the set that contains
these newly identified neighbors.
\item
Compute the empirical weighted covariance matrix,
$\mSigma_i$, based on the sample $\Theta_i$
and the corresponding relative weights $\{v_i\}$.
\end{enumerate}
Go to step~\ref{it:optim-h}.
\item\label{it:conclude}
Adopt
$r^*$ and $h^*$ as the final values for $r$ and $h$,
respectively,
and let the $V_i$ in~(\ref{eq:mixture-normal}) be
$h^* \mSigma_i^*$ for $i=1,\dotsc,n$.
This concludes the search for optimum values of
$r$ and $h$.
\end{enumerate}
The ideas of normal mixture and iterative updating
are used by \citet{West:1993:APD}.
The procedure in \citet{West:1993:APD}
sets the bandwidth parameter following empirical rules
and does not consider localization.
\section{Provisions for the ``change of support'' problem}
\label{sec:change-of-support}
The data $\vy$ in~(\ref{eq:likelihood}) are the values of $\Y$
at individual locations $\vx$.
This formulation can be easily generalized to use data
that are \emph{linear} functions of $\Y$.
Let the data vector, denoted by $\vz$,
be expressed as
\[
\vz = \mH \vy,
\]
where
$\vy$ is an $n$-vector of $\Y$ at the locations
$\vx$
and
$\mH$ is an $m\times n$ matrix of rank $m$, where $m \le n$.
Correspondingly,
the likelihood~(\ref{eq:likelihood}) is replaced by
\begin{equation}\label{eq:likelihood-linear}
p(\vz \given \theta)
= (2\pi \sd^2)^{-m/2} \,
|\mH\mR\transpose{\mH}|^{-1/2}
\exp\Bigl( -\frac{1}{2\sd^2}
\transpose{\bigl(\vz - \mH \mX \trend\bigr)}
[\mH\mR\transpose{\mH}]^{-1}
\bigl(\vz - \mH \mX \trend\bigr)
\Bigr)
,
\end{equation}
where $\mX$ is the design matrix for the locations $\vx$,
and $\mR$ is the correlation matrix between the locations $\vx$.
The algorithm described in Section~\ref{sec:algorithm}
works for this form of data and likelihood without modification.
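For instance,
if each datum is the average of $\Y$ over a group of basic cells,
the rows of $\mH$ are simple averaging weights;
the construction below is a minimal illustration.
\begin{verbatim}
import numpy as np

def averaging_matrix(groups, n):
    """H (m x n) whose i-th row averages Y over the cells in groups[i]."""
    H = np.zeros((len(groups), n))
    for i, cells in enumerate(groups):
        H[i, list(cells)] = 1.0 / len(cells)
    return H

# e.g. the mean of cells {0, 1, 2} and the point value of cell 5:
H = averaging_matrix([[0, 1, 2], [5]], n=10)
\end{verbatim}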
In applications,
measurements of $\Y$ often have areal or volume (or, in 1-D, interval)
support rather than point support.
Consider studies in which the model domain is discretized into a numerical grid.
Let us call a cell of this grid the ``basic'' spatial unit (or support)
in the model.
It may occur that a data value is some function
(e.g.\@ the average) of $Y$ in more than one basic spatial unit.
Specifically,
we may distinguish
``point data'', which have the basic support in the model,
and
``linear data'', which have aggregated support and can be expressed as
linear functions (such as simple averages) of point data.
Increasingly in environmental studies,
available data include a mix of
satellite- and ground-based measurements that span a hierarchy of spatial
supports.
This entails the so-called ``change of support'' problems
\citep{Young:2007:LSD},
a typical example being ``downscaling'' problems.
The method described above is able to use data on a variety of supports
as long as they are all \emph{linear} functions of $\Y$ on the basic
support.
Besides arising naturally from data sources on disparate
scales,
linear data can also be useful by methodological design.
For example,
\citet{Zhang:2011:AAI} proposes an inverse algorithm in which
a key model device is linear functions of the random field.
\section{Examples}
\label{sec:examples}
We illustrate the algorithm with two examples.
The first example uses a satellite elevation dataset to demonstrate
the algorithm's performance in a 2-D, anisotropic setting.
Because the true field is known in this case,
model performance can be assessed by comparing conditional simulations
with the true field.
The second example uses a historical dataset of soil moisture.
In this realistic setting, the true field is unknown,
and the sampling locations of the measurements are not ideal as far as
interpolation is concerned
(as is a common situation with historical data).
However, the point of the examples is not to interpolate,
but to illustrate how the algorithm
works with available data to approximate the posterior distribution of the
model parameters in an iterative procedure.
\subsection{Example 1}
\label{sec:example1}
We extracted satellite topography data from the National Elevation Dataset
(NED) on the web site of the National Map Seamless Server operated by
the U.S.\@ Geological Survey, EROS Data Center, Sioux Falls, South Dakota.
The particular dataset we used covers a region in the Appalachian mountains
on a $37 \times 23$ grid.
The elevation map is shown in Figure~\ref{fig:appal-datamap},
which also marks 20 randomly selected locations that provided synthetic
measurements.
We modeled the elevation, denoted by $\Y(\x)$, by the geostatistical
formulation described in Section~\ref{sec:parameterization}
with a linear trend function and geometric anisotropy.
The smoothness $\smoothness$ was fixed at 1.5.
This formulation has eight parameters:
$\trend_0$, $\trend_X$, $\trend_Y$, $\angle$,
$\scale_X$, $\scale_Y$, $\sd^2$, and $\nugget$.
As mentioned in Section~\ref{sec:algorithm},
the algorithm works with a transformed parameter vector:
$\theta = \bigl(\trend_0, \trend_X, \trend_Y,
\log \frac{\angle}{\frac{\pi}{2} - \angle},
\log \scale_X, \log \scale_Y, \log \sd^2,
\log \frac{\nugget}{1 - \nugget}\bigr)$.
It is straightforward to study the parameters
in their natural units by back-transforming samples
from the estimated posterior distribution of $\theta$.
The initial approximation $f^{(0)}(\theta)$ was taken
to be the product of eight independent and fairly diffuse
normal distributions,
one for each component of $\theta$.
This initial distribution was updated eight times
in the iterative algorithm.
During the iterations,
the approximate posterior, $f^{(k)}$,
converged to the true posterior, $\pi(\theta)\, p(\vy \given \theta)$
(up to a normalizing factor).
The convergence was examined via two measures
that indicate the ``closeness'' or ``distance'' of two
distributions.
The first measure is the entropy of importance sampling weights.
Consider the sample $\{\theta_i\}_{i=1}^n$ obtained in step~1
(see Section~\ref{sec:algor})
of the $k$-th iteration, which is a random sample
from the density $f^{(k-1)}(\theta)$.
The entropy of the importance weights
$\{w_i\}_{i=1}^n$, obtained in step~2 of the algorithm,
is the $\gamma$ defined in~(\ref{eq:flatten-by-entropy}).
The entropy can be used as an indicator of how close the
estimated distribution, $f^{(k-1)}(\theta)$,
is to the true posterior distribution.
An entropy value close to 1 indicates good approximation
\citep{West:1993:APD, Liu:1998:RCS}.
The second measure is the $L_1$ distance
between the estimated posterior,
say $f(\theta)$,
and the true posterior,
$c g(\theta) = c \pi(\theta) p(\vy \given \theta)$,
where $c$ is an unknown normalizing constant.
The $L_1$ distance is defined as
$L_1 = \int_{\Theta} \lvert f(\theta) - c g(\theta)\rvert \diff \theta$,
hence
\[
\begin{split}
L_1
&= \int_{\Theta}
\biggl\lvert
1 -
\frac{g(\theta)}{f(\theta) \int_{\Theta} g(\theta) \diff\theta}
\biggr\rvert
f(\theta) \diff \theta
\\
&= \int_{\Theta}
\biggl\lvert
1 -
\frac{\frac{g(\theta)}{f(\theta)}}
{\int_{\Theta} \frac{g(\theta)}{f(\theta)} f(\theta) \diff\theta}
\biggr\rvert
f(\theta) \diff \theta
\\
&= E_{f(\theta)} \biggl\lvert
1 -
\frac{g(\theta)/f(\theta)}
{E_{f(\theta)} \bigl[g(\theta)/f(\theta)\bigr]}
\biggr\rvert
,
\end{split}
\]
where the two expectations are with respect to the distribution
$f(\theta)$.
Therefore,
a Monte Carlo estimate of this distance is
\[
d_{L_1}
= \frac{1}{n} \sum_{i=1}^n |1 - nw_i|
,
\]
making use of the fact that
$\frac{g(\theta_i)}{f(\theta_i)}$, divided by its sample mean,
equals $n w_i$.
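Both convergence diagnostics are simple functions of the normalized
importance weights, as the following sketch shows.
\begin{verbatim}
import numpy as np

def diagnostics(w):
    """Entropy gamma and L1 distance estimate d_L1 from weights w."""
    w = np.asarray(w, dtype=float)
    n = len(w)
    nz = w > 0
    entropy = -np.sum(w[nz] * np.log(w[nz])) / np.log(n)
    d_L1 = np.mean(np.abs(1.0 - n * w))
    return entropy, d_L1
\end{verbatim}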
The values of $\gamma$ and $d_{L_1}$ in each iteration
are listed in Table~\ref{tab:appal-converge},
which shows definite trends of increase in $\gamma$
and decrease in $d_{L_1}$.
\begin{table}
\caption{Convergence of the approximate posterior to the true posterior
in iterations of Example~1,
as indicated by functions of the importance sampling weights.}
\label{tab:appal-converge}
\centering
\begin{tabular}{l|cccccccc}
\br
iteration ($k$) & 1 & 2 & 3 & 4 & 5 & 6 & 7 & 8\\ \hline
sample size ($n$) & 3000 & 2800 & 2620 & 2458 & 2312 & 2181 & 2063 & 1957\\
entropy ($\gamma$) & 0.12 & 0.35 & 0.37 & 0.57 & 0.80 & 0.88 & 0.91 & 0.92\\
$L_1$ distance ($d_{L_1}$) & 1.99 & 1.92 & 1.86 & 1.68 & 1.22 &
1.05 & 0.88 & 0.84\\
\br
\end{tabular}
\end{table}
\begin{table*}
\caption{Empirical mean and standard deviation of 1000 samples
of the parameters (back-transformed to their natural units)
drawn from the final posterior approximation in Example~1.}
\label{tab:appal-mean}
\centering
\begin{tabular}{l|cccccccc}
\br
parameter & $\trend_0$ & $\trend_X$ & $\trend_Y$
& $\scale_X$ & $\scale_Y$ & $\angle$
& $\sd^2$ & $\nugget$ \\ \hline
sample mean & 941 & -9.01 & 0.81
& 36.1 & 2.90 & 0.32
& 4.95e04 & 0.06 \\
sample s.d. & 168 & 4.67 & 11.6
& 30.7 & 2.15 & 0.08
& 7.12e04 & 0.09\\
\br
\end{tabular}
\end{table*}
In total we obtained nine approximate posterior distributions,
including the initial approximation and the eight subsequent updates.
From each of these approximations
we drew 1000 samples of the parameter vector $\theta$.
The marginal distribution of each component of $\theta$
is plotted in Figure~\ref{fig:appal-geost-marginal}.
It can be seen that the posterior approximations stabilized
after six or seven iterations,
consistent with the quantitative indicators in
Table~\ref{tab:appal-converge}.
The stabilized approximate distributions of the
scale, variance, and nugget parameters are more spread out
than those of the trend and rotation parameters.
This is related to the interactions between the former group of
parameters, which cause identifiability
difficulties \citep[see][]{Zhang:2004:IEA}.
To provide some idea of the posterior mean and uncertainty
of the individual parameter components,
Table~\ref{tab:appal-mean} lists
the empirical means and standard deviations of the eight parameter
components calculated using the 1000 samples from
$f^{(8)}(\theta)$, the final approximation.
The empirical posterior mean of the rotation, $\angle$,
is 0.32, i.e.\@ 18 degrees.
The empirical posterior mean of the scale along the rotated horizontal
axis ($\scale_X$) is 36.1, in contrast to 2.90 along the
vertical axis.
The angle and scales are in keeping with what we observe
in the synthetic true field.
We note that,
with the 20 irregularly located measurements in this example,
these anisotropy parameters would be difficult to
estimate by methods based on curve-fitting for empirical variograms
\citep[see, e.g.][]{Paleologos:2011:SAF}.
In geostatistical analysis,
one of the usual goals is to provide an interpolated map of the spatial field
accompanied by a measure of uncertainty.
We conducted 100 conditional simulations based on the final
approximation to the posterior distribution of the parameter.
The point-wise median of the simulations is shown
in Figure~\ref{fig:appal-medianmap}.
A comparison of Figure~\ref{fig:appal-medianmap} with
Figure~\ref{fig:appal-datamap} confirms that the model and algorithm
have captured main features of the true spatial field.
The point-wise standard deviation of the simulations
is shown in Figure~\ref{fig:appal-uncertaintymap}.
The level of uncertainty in the simulated fields is largely uniform
throughout the model domain, because the locations of the observations are
reasonably balanced in the model domain.
\subsection{Example 2}
\label{sec:example2}
In the second example,
we used a hydrologic dataset provided by
the Southeast Watershed Research Laboratory (SEWRL)
of the U.S.\@ Department of Agriculture.
The dataset contains long-term records of a number of hydrologic variables
for the Little River Experimental Watershed in south-central Georgia,
United States.
This research program as well as its data products are described in
\citet{Bosch:2007:LRE}.
For illustration,
we focused on a time snapshot
(specifically, at 18:00 on 1 August 2007)
of soil moisture represented
by measurements at 29 irregularly-located gauges.
The measurements are shown in
Figure~\ref{fig:soil-datamap}.
Details about the regional geography and the moisture data
can be found in
\citet{Bosch:2007:LRE} and
\citet{Bosch:2007:PSM}, respectively.
Because soil moisture is a percentage,
we took its logit transform as the spatial variable $Y$, that is,
$Y = \log \frac{\text{moisture}}{0.5 - \text{moisture}}$.
(The upper bound was taken to be 0.5 because it was noticed that
the largest observed value is 0.34.)
Such a transformation is needed because $Y$ must be defined on
$(-\infty,\infty)$ in order to be modeled as a Gaussian variable.
This treatment is in line with generalized linear models
in model-based geostatistics \citep{Diggle:2007:MBG};
but see \citet{Michalak:2005:MIN} for an alternative approach.
With the parameterization described in Section~\ref{sec:parameterization},
we took the trend model $\transpose{\mu(x)}\trend$
to be a constant $\trend$,
fixed the smoothness parameter $\smoothness$ at 0.5
(i.e.\@ used an exponential correlation function),
and did not consider anisotropy.
This left us with four parameters:
$\trend$, $\sd^2$, $\scale$, and $\nugget$.
The algorithm worked with a transformed parameter vector:
$\theta = \bigl(\trend, \log \sd^2, \log \scale,
\log \frac{\nugget}{1 - \nugget}\bigr)$.
Construction of the prior and the initial approximate posterior
distributions followed procedures similar to those in the first example.
The initial distribution was updated five times.
The convergence of the approximate posterior distributions to the truth
was examined via the two measures
$\gamma$ and $d_{L_1}$ as listed in
Table~\ref{tab:soil-converge}.
The values listed
show definite trends of increase in $\gamma$
and decrease in $d_{L_1}$.
As $\gamma$ and $d_{L_1}$ approached their respective limits, 1 and 0,
their values began to ``level off''.
\begin{table}
\caption{Convergence of the approximate posterior to the true posterior
in iterations of Example~2,
as indicated by functions of the importance sampling weights.}
\label{tab:soil-converge}
\centering
\begin{tabular}{l|ccccc}
\br
iteration ($k$) & 1 & 2 & 3 & 4 & 5 \\ \hline
sample size ($n$) & 2000 & 1880 & 1772 & 1675 & 1587 \\
entropy ($\gamma$) & 0.47 & 0.92 & 0.98 & 0.98 & 0.99 \\
$L_1$ distance ($d_{L_1}$) & 1.82 & 0.87 & 0.40 & 0.36 & 0.31 \\
\br
\end{tabular}
\end{table}
\begin{table}
\caption{Empirical mean and standard deviation of 1000 samples
of the parameters (back-transformed to their natural units)
drawn from the final posterior approximation in Example~2.}
\label{tab:soil-mean}
\centering
\begin{tabular}{l|cccc}
\br
parameter & $\trend$ & $\scale$ & $\sd^2$ & $\nugget$ \\ \hline
sample mean & -0.78 & 30897 & 0.52 & 0.21\\
sample s.d. & 0.39 & 56140 & 0.41 & 0.16\\
\br
\end{tabular}
\end{table}
Table~\ref{tab:soil-mean} lists
the empirical means and standard deviations of the four parameter
components obtained using 1000 samples from the final posterior
approximation.
A major purpose of a geostatistical analysis
like the current one is to generate an ensemble of soil moisture maps
to be used in subsequent studies that require
the value of soil moisture in the entire domain.
We divided the model domain into
41 (east-west) by 74 (north-south) grid cells,
each of size $1106\unit{m} \times 1106\unit{m}$,
and conducted 100 conditional simulations based on the approximate posterior
distribution obtained in the final iteration of the algorithm.
Each simulation was back-transformed to give a moisture map
with values in $(0,0.5)$,
in contrast to the variable $\Y \in (-\infty, \infty)$ in the model.
The point-wise median of the simulations is shown
in Figure~\ref{fig:soil-medianmap}, which confirms
a high-moisture area in the upper-central section of the model domain.
More scientific insights are expected if one
examines the simulated soil moisture maps
in the context of other hydrologic variables as well as the geography.
\section{Conclusion}
\label{sec:concl}
We have described a Bayesian geostatistical framework
and proposed an iterative algorithm for deriving
the posterior distribution of the parameter vector.
The contribution of this study is to provide a
general inference procedure that avoids some difficult elements
utilized in practice, including
(1) fitting a variogram curve;
(2) imposing bounds on an unbounded parameter;
(3) discretizing a continuous parameter.
Moreover,
the procedure can be applied to other model formulations
as long as the model parameters, or transforms thereof,
are defined on $(-\infty, \infty)$.
Common transformations that achieve this goal include
the logarithmic and logistic transformations,
as exemplified in Section~\ref{sec:examples}.
The algorithm centers on normal kernel density estimation.
Particular efforts are made to determine the localization and bandwidth
parameters in a systematic fashion.
Difficulties caused by highly skewed importance sampling weights
are alleviated by ``flattening'' the weights.
The method was demonstrated by two examples using synthetic and historical data.
In both examples,
we examined convergence of the approximate posterior distributions,
as well as features of the marginal posterior distributions.
The estimated posterior distributions served as a basis for
conditional simulations of the spatial field.
\bigskip
\noindent
\textbf{Acknowledgement:}\ \
{\small%
The author's Senior Visiting Scholarship at Tsinghua University was funded by
the Excellent State Key Lab Fund no.\@ 50823005,
National Natural Science Foundation of China,
and
the R\&D Special Fund for Public Welfare Industry no.\@ 201001080,
Chinese Ministry of Water Resources.
}
\section{Introduction}
\input{contents/intro}
\pagestyle{plain}
\section{Related Work}
\input{contents/related_work}
\section{Model}
\input{contents/method}
\section{Experiments}
\label{sec:exp}
\input{contents/experiments}
\section{Conclusion}
\input{contents/conclusion}
\bibliographystyle{ieee_fullname}
\subsection{Overview}
Our network model, illustrated in \cref{fig:arch_overview}, follows the established design of a fully convolutional segmentation network with a \texttt{softmax} output and skip connections \cite{long2015fully}.
This allows for a straightforward extension of any segmentation-based network architecture and for exploiting pre-trained classification models for parameter pre-conditioning.
Inference requires only a single forward pass, analogous to fully supervised segmentation networks.
By contrast, however, our model learns segmentation in a self-supervised fashion from image labels alone.
\begin{figure}[t]%
\def\linewidth{\linewidth}
\input{figures/overview/overview.pdf_tex}
\caption{\textbf{Architecture overview.} Our model shares the design of a segmentation network, but additionally makes use of \emph{normalised Global Weighted Pooling} (nGWP, \cref{sec:mask_size}) and \emph{Pixel-Adaptive Mask Refinement} (PAMR, \cref{sec:mask_refinement}) to enable self-supervised learning for segmentation from image labels.}%
\label{fig:arch_overview}%
\vspace{-0.5em}
\end{figure}
We propose three novel components relevant to our task:
\emph{(i)} a new class aggregation function, \emph{(ii)} a local mask refinement module, and \emph{(iii)} a stochastic gate.
The purpose of the class aggregation function is to leverage segmentation masks for classification decisions, \ie to provide \emph{semantic fidelity} as defined earlier.
To this end, we develop a \emph{normalised Global Weighted Pooling} (nGWP) that utilises pixel-level confidence predictions for relative weighting of the corresponding classification scores.
Additionally, we incorporate a \emph{focal mask penalty} into the classification scores to encourage \emph{completeness}.
We discuss these components in more detail in \cref{sec:mask_size}.
Next, in order to comply with \emph{local consistency}, we propose \emph{Pixel-Adaptive Mask Refinement} (PAMR), which revises the coarse mask predictions \wrt appearance cues.
The updated masks are further used as pseudo ground truth for segmentation, trained jointly along with the classification objective, as we explain in \cref{sec:mask_refinement}.
The refined masks produced by PAMR may still contain inaccuracies \wrt the ground truth, and self-supervised learning may further compound these errors via overfitting.
To alleviate these effects, we devise a \emph{Stochastic Gate} (SG) that combines a deep feature representation susceptible to this phenomenon with more robust, but less expressive shallow features in a stochastic way.
\cref{sec:stochasitc_gate} provides further detail.
\subsection{Classification scores}
\label{sec:mask_size}
\paragraph{CAMs.}
It is instructive to briefly review how the class score is normally computed with Global Average Pooling (GAP), since this analysis builds the premise for our aggregation mapping.
Let $x_{k,:,:}$ denote one of $K$ feature channels of size $h \times w$ preceding GAP, and $a_{c,:}$ be the parameter vector for class $c$ in the fully connected prediction layer.
The class score for class $c$ is then obtained as
\begin{equation}
\begin{aligned}
y^{\text{GAP}}_c = \frac{1}{h w} \sum_{k=1}^K a_{c,k} \sum_{i,j} x_{k,i,j}.
\end{aligned}
\label{eq:gap_score}
\end{equation}
Next, we can compute the Class Activation Mapping (CAM) \cite{ZhouKLOT16} for class $c$ as
\begin{equation}
\begin{aligned}
m^{\text{CAM}}_{c,:,:} = \max\bigg(0, \sum_{k=1}^K a_{c,k} x_{k,:,:} \bigg).
\end{aligned}
\label{eq:cam}
\end{equation}
\cref{fig:GAP} illustrates this process, which we refer to as CAM-GAP.
From \cref{eq:gap_score} we observe that it encourages all pixels in the feature map to identify with the target class.
This may disadvantage small segments and increase the reliance of the classifier on the context, which can be undesirable due to a loss in mask precision.
Also, as becomes evident from \cref{eq:cam}, there are two more issues if we were to adopt CAM-GAP to provide segment cues for learning.
First, the mask value is not bounded from above, yet in segmentation we seek a normalised representation (\eg $\in (0, 1)$) that can be interpreted as a confidence by downstream applications.
Second, GAP does not encode the notion of pixel-level competition from the underlying segmentation task where each pixel can assume only one class label (\ie there is no \texttt{softmax} or a related component).
We thus argue that CAM-GAP is ill-suited for the segmentation task.
\begin{figure}[t]
\subcaptionbox{\label{fig:GAP}}{
\def\linewidth{\linewidth}
\input{figures/aggregation/aggregation_v4_gap.pdf_tex}
}\vspace{1.0em}
\subcaptionbox{\label{fig:GWP}}{
\def\linewidth{\linewidth}
\input{figures/aggregation/aggregation_v4_gwp.pdf_tex}
}
\caption{\textbf{The original GAP-CAM architecture} \subref{fig:GAP} and our \textbf{proposed modification, nGWP} \subref{fig:GWP}.
Our analysis of CAMs prompts us to devise an alternative aggregation mapping of class scores, nGWP, which allows re-using the original classification loss, yet enables joint training for segmentation with substantial improvements in mask quality.
}
\vspace{-0.5em}
\end{figure}
\myparagraph{Going beyond CAMs.}
To address this, we propose a novel scheme of score aggregation, see \cref{fig:GWP} for an overview, which allows for seamless integration into existing backbones, yet does not inherit the shortcomings of CAM-GAP.
Note that the following discussion is orthogonal to the loss function applied on the final classification scores, which we keep from our baseline model.
Given features $x_{:,:,:}$, we first predict classification scores $y_{:,:,:}$ of size $C \times h \times w$ for each pixel.
We then add a background channel (with a constant value) and compute a pixelwise \texttt{softmax} to obtain masks with confidence values $m_{:,:,:}$ -- this is a standard block in segmentation.
To compute a classification score, we propose \emph{normalised Global Weighted Pooling} (nGWP), defined as
\begin{equation}
\begin{aligned}
y_c^\text{nGWP} = \frac{\sum_{i,j} m_{c,i,j} y_{c,i,j}}{\epsilon + \sum_{i',j'} m_{c,i',j'}}.
\end{aligned}
\label{eq:gwp}
\end{equation}
Here, a small $\epsilon > 0$ tackles the saturation problem often observed in practice (\cf \cref{sec:supp_loss}).
As we observe from \cref{eq:gwp}, nGWP is invariant to the mask size.
This may bring advantages for small segments, but lead to inferior recall compared to the more aggressive GAP aggregation.
To promote \emph{completeness}, we encourage increased mask size for positive classes with a penalty term:
\begin{equation}
\begin{aligned}
y_c^\text{size} = \log\Big(\lambda + \frac{1}{hw} \sum_{i,j} m_{c,i,j}\Big).
\end{aligned}
\label{eq:pen_simple}
\end{equation}
The magnitude of this penalty is controlled by a small $\lambda > 0$.
The logarithmic scale ensures that we incur a large negative value of the penalty only when the mask is near zero.
Since we decouple the influence of the class scores (captured by \cref{eq:gwp}) from that of the mask size (through \cref{eq:pen_simple}), we can apply difficulty-aware loss functions.
We generalise the penalty term in \cref{eq:pen_simple} to the focal loss \cite{LinGGHD17}, used in our final model:
\begin{equation}
\begin{aligned}
y_c^\text{size-focal} = (1-\bar{m}_c)^p \log(\lambda + \bar{m}_c), \:\; \bar{m}_c = \tfrac{1}{hw} \sum_{i,j} m_{c,i,j}.
\end{aligned}
\label{eq:pen_focal}
\end{equation}
Note that as the mask size approaches zero, $\bar{m}_c \rightarrow 0$, the penalty retains its original form, \ie \cref{eq:pen_simple}.
However, if the mask is non-zero, $p > 0$ discounts the further increase in mask size to focus on the failure cases of near-zero masks.
We compute our final classification scores as $y_c \equiv y_c^\text{nGWP} + y_c^\text{size-focal}$ and adopt the multi-label soft-margin loss function \cite{paszke2017automatic} used in previous work \cite{AhnK18,WeiXSJFH18} as the classification loss,
\begin{equation}
\begin{aligned}
\mathcal{L}_\text{cls}(\mathbf{y}, \mathbf{z}) = -\frac{1}{C} \sum^C_{c=1} & z_c \log{\bigg(\frac{1}{1 + e^{-y_c}} \bigg)} + \\
& + (1 - z_c) \log \bigg( \frac{e^{-y_c}}{1 + e^{-y_c}} \bigg),
\end{aligned}
\label{eq:cls_loss}
\end{equation}
where $\mathbf{z}$ is a binary vector of ground-truth labels.
The loss encourages $y_c < 0$ for negative classes (\ie when $z_c = 0$) and $y_c > 0$ for positive classes (\ie when $z_c = 1$).
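A PyTorch-style sketch of this aggregation
(a simplified illustration, not the reference implementation;
the hyper-parameter values are placeholders) reads:
\begin{verbatim}
import torch

def classification_scores(y, m, eps=1e-3, lam=0.01, p=3.0):
    """nGWP plus focal size penalty.

    y: (B, C, H, W) per-pixel class scores (background excluded)
    m: (B, C+1, H, W) softmax masks; channel 0 is background
    """
    m_fg = m[:, 1:]                       # drop the background channel
    y_ngwp = (m_fg * y).sum(dim=(2, 3)) / (eps + m_fg.sum(dim=(2, 3)))
    m_bar = m_fg.mean(dim=(2, 3))         # average mask size per class
    y_focal = (1.0 - m_bar) ** p * torch.log(lam + m_bar)
    return y_ngwp + y_focal               # (B, C) scores for the loss
\end{verbatim}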
\subsection{Pixel-adaptive mask refinement}
\label{sec:mask_refinement}
While our classification loss accounts for \emph{semantic fidelity} and \emph{completeness},
the task of local mask refinement is to fulfil \emph{local consistency}: nearby regions sharing the same appearance should be assigned to the same class.
The mapping formalising this idea takes the pixel-level mask predictions $m_{:,:,:} \in (0, 1)^{(C + 1)\times h \times w}$ (note $+1$ for the background class) and considers the image $I$ to produce refined masks $m_{:,:,:}^\ast$.
Such a mapping has to be efficient, since we will use it to produce self-supervision for segmentation trained concurrently with the classification objective.
Therefore, a naive choice of GrabCut \cite{rother2004grabcut} or dense CRFs \cite{KrahenbuhlK11} would slow down the training process.
\begin{figure}[t]%
\def\linewidth{\linewidth}
\input{figures/refinement/refine_v3.pdf_tex}
\caption{\textbf{Concept illustration of Pixel-Adaptive Mask Refinement (PAMR).} For each pixel, we compute an affinity kernel measuring its proximity to its neighbours in RGB space. We iteratively apply this kernel to the semantic masks via an adaptive convolution to obtain refined labels.}%
\label{fig:pixel_adaptive}%
\vspace{-0.5em}
\end{figure}
Instead, our implementation derives from the Pixel-Adaptive Convolution (PAC) \cite{su2019pixel}.
The idea, illustrated in \cref{fig:pixel_adaptive}, is to iteratively update pixel label $m_{:,i,j}$ using a convex combination of the labels of its neighbours $\mathcal{N}(i,j)$, \ie at the $t^\text{th}$ iteration we have
\begin{equation}
\begin{aligned}
m^t_{:,i,j} = \sum_{(l,n) \in \mathcal{N}(i,j)} \alpha_{i,j,l,n} \cdot m^{t-1}_{:,l,n},
\end{aligned}
\label{eq:refine_label}
\end{equation}
where the pixel-level affinity $\alpha_{i,j,l,n}$ is a function of the image $I$.
To compute $\alpha$, we use a kernel function on the pixel intensities $I$,
\begin{equation}
\begin{aligned}
k(I_{i,j}, I_{l,n}) = -\tfrac{ \lvert I_{i,j} - I_{l,n} \rvert }{\sigma_{i,j}^2},
\end{aligned}
\end{equation}
where we define $\sigma_{i,j}$ as the standard deviation of the image intensity computed \emph{locally} for the affinity kernel.
We apply a \texttt{softmax} to obtain the final affinity $\alpha_{i,j,l,n}$ for each neighbour $(l,n)$ of $(i,j)$, \ie
$\alpha_{i,j,l,n} = e^{\bar{k}(I_{i,j}, I_{l,n})} / \sum_{(q,r) \in \mathcal{N}(i,j)} e^{\bar{k}(I_{i,j}, I_{q,r})} $,
where $\bar{k}$ is the average affinity value across the RGB channels.
This local refinement, termed \emph{Pixel-Adaptive Mask Refinement} (PAMR), is implemented as a parameter-free recurrent module, which iteratively updates the labels following \cref{eq:refine_label}.
Clearly, the number of required iterations depends on the size and shape of the affinity kernel (\eg $3\times 3$ in \cref{fig:pixel_adaptive}).
In practice, we combine multiple $3 \times 3$-kernels with varying dilation rates.
We study the choice of the affinity structure in more detail in our ablation study (\cf \cref{sec:ablation}).
Note that since we do not back-propagate through PAMR, it is always in ``evaluation'' mode, hence memory-efficient.
In practice, one iteration adds less than $1\%$ of the baseline's GPU footprint, and we empirically found 10 refinement steps to provide a good trade-off between efficiency and the accuracy boost delivered by PAMR.
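A simplified single-dilation variant of one iteration of~(\ref{eq:refine_label}) can be realised with \texttt{unfold}, as sketched below; the pixel itself is included in its $3 \times 3$ neighbourhood here, which is an implementation assumption.
\begin{verbatim}
import torch
import torch.nn.functional as F

def pamr_step(m, img, eps=1e-8):
    """One PAMR iteration with a 3x3 affinity kernel (single dilation).

    m:   (B, C+1, H, W) current masks
    img: (B, 3, H, W) input image
    """
    B, _, H, W = img.shape
    patches = F.unfold(img, 3, padding=1).view(B, 3, 9, H * W)
    center = img.view(B, 3, 1, H * W)
    std = patches.std(dim=2, keepdim=True) + eps     # local sigma
    k = -(patches - center).abs() / std ** 2
    alpha = torch.softmax(k.mean(dim=1), dim=1)      # average over RGB
    mp = F.unfold(m, 3, padding=1).view(B, m.shape[1], 9, H * W)
    return (mp * alpha.unsqueeze(1)).sum(dim=2).view_as(m)
\end{verbatim}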
\myparagraph{Self-supervised segmentation loss.}
We generate pseudo ground-truth masks from PAMR by considering pixels with confidence $>60\%$ of the maximum value ($>70\%$ for the background class).
Conflicting pixels and pixels with low confidence are ignored by the loss function.
We fully discard images for which some of the ground-truth classes do not give rise to any confident pseudo ground-truth pixels.
Following the fully supervised case \cite{deeplabv3plus2018}, we use pixelwise cross-entropy, but balance the loss distribution across the classes, \ie the loss for each individual class is normalised \wrt the number of corresponding pixels contained in the pseudo ground truth.
The intermediate results for segmentation self-supervision at training time are illustrated in \cref{fig:intermediate}.
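The thresholding itself can be expressed compactly; the following sketch (with illustrative tensor conventions, not the reference implementation) marks conflicting and low-confidence pixels with an ignore label.
\begin{verbatim}
import torch

def pseudo_ground_truth(m, labels, ignore_index=255):
    """Confident pseudo labels from refined masks.

    m:      (B, C+1, H, W) refined masks, channel 0 = background
    labels: (B, C) binary image-level labels (float)
    """
    keep = torch.cat([torch.ones_like(labels[:, :1]), labels], dim=1)
    m = m * keep[:, :, None, None]        # zero out absent classes
    peak = m.flatten(2).max(dim=2).values[:, :, None, None]
    frac = torch.tensor([0.7] + [0.6] * labels.shape[1],
                        device=m.device)  # background vs foreground
    conf = m > frac[None, :, None, None] * peak
    gt = conf.float().argmax(dim=1)
    gt[conf.sum(dim=1) != 1] = ignore_index  # conflicts / low confidence
    return gt
\end{verbatim}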
\begin{figure}[t]
\captionsetup[subfigure]{labelformat=empty}
\centering
\begin{subfigure}{0.19\linewidth}
\caption{Input}
\includegraphics[width=\linewidth]{./figures/pseudo-masks/airplane/input.png}\end{subfigure}
\begin{subfigure}{0.19\linewidth}
\caption{Ground Truth}
\includegraphics[width=\linewidth]{./figures/pseudo-masks/airplane/gt.png}\end{subfigure}
\begin{subfigure}{0.19\linewidth}
\caption{Prediction}
\includegraphics[width=\linewidth]{./figures/pseudo-masks/airplane/pred.png}\end{subfigure}
\begin{subfigure}{0.19\linewidth}
\caption{Refinement}
\includegraphics[width=\linewidth]{./figures/pseudo-masks/airplane/refined.png}\end{subfigure}
\begin{subfigure}{0.19\linewidth}
\caption{Pseudo GT}
\includegraphics[width=\linewidth]{./figures/pseudo-masks/airplane/pseudo.png}\end{subfigure}
\vspace{1mm}
\begin{subfigure}{0.19\linewidth}\includegraphics[width=\linewidth]{./figures/pseudo-masks/boat_people/input.png}\end{subfigure}
\begin{subfigure}{0.19\linewidth}\includegraphics[width=\linewidth]{./figures/pseudo-masks/boat_people/gt.png}\end{subfigure}
\begin{subfigure}{0.19\linewidth}\includegraphics[width=\linewidth]{./figures/pseudo-masks/boat_people/pred.png}\end{subfigure}
\begin{subfigure}{0.19\linewidth}\includegraphics[width=\linewidth]{./figures/pseudo-masks/boat_people/refined.png}\end{subfigure}
\begin{subfigure}{0.19\linewidth}\includegraphics[width=\linewidth]{./figures/pseudo-masks/boat_people/pseudo.png}\end{subfigure}
\caption{\textbf{Intermediate results at training time.} PAMR refines model predictions \emph{(middle)} by accounting for appearance cues in the image. The revised masks \emph{($4^\text{th}$ column)} serve as pseudo ground truth, where we use only high-confidence pixels \emph{(last column)}.}
\label{fig:intermediate}
\vspace{-0.5em}
\end{figure}
\subsection{Stochastic gate}
\label{sec:stochasitc_gate}
In self-supervised learning we expect the model to ``average out'' inaccuracies in the pseudo ground truth, which are irregular in nature, thereby improving the predictions and thus the pseudo supervision.
However, a powerful model may just as well learn to mimic these errors.
Strong evidence from previous work indicates that the large receptive field of the deep features enables the model to learn such complex phenomena in segmentation \cite{deeplabv3plus2018,wu2019wider,zhao2017pyramid}.
To counter the compounding effect of the errors in self-supervision, we propose a type of regularisation, referred to as \emph{Stochastic Gate} (SG).
The underlying idea, shown in \cref{fig:gate}, is to encourage information sharing between the deep features (with a large receptive field) and the ``shallow'' representation of the preceding layers (with a smaller receptive field), by stochastically exchanging these representations, yet maintaining the deep features as the expected value.
Formally, let $x^{(d)}$ and $x^{(s)}$ represent the activation in the deep and shallow feature map, respectively (omitting the tensor subscripts for brevity).
Applying SG for each pixel at training time is reminiscent of Dropout \cite{SrivastavaHKSS14}:
\begin{gather}
\begin{aligned}
x^\text{SG} = (1 - r) \underbrace{\delta [ x^{(d)} - \psi x^{(s)}]}_{x^{(\ast)}} + r x^{(s)}, \:\text{with}\: r \sim \text{Bern}(\psi),
\end{aligned}
\label{eq:bernoulli}
\raisetag{10pt}
\end{gather}
where the \emph{mixing rate} $\psi \in [0, 1]$ regulates the proportion of $x^{(\ast)}$ and $x^{(s)}$ in the output tensor.
The constant $\delta=1 / (1 - \psi)$ ensures that $\mathbb{E}[x^\text{SG}] = x^{(d)}$.
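Indeed, taking the expectation over $r \sim \text{Bern}(\psi)$ in \cref{eq:bernoulli} gives
\begin{equation*}
\mathbb{E}[x^\text{SG}] = (1 - \psi)\,\delta \big[ x^{(d)} - \psi x^{(s)} \big] + \psi x^{(s)} = x^{(d)} - \psi x^{(s)} + \psi x^{(s)} = x^{(d)}.
\end{equation*}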
Since $x^{(\ast)} = (x^{(d)} - \psi x^{(s)})/(1 - \psi)$, the approximation $x^{(\ast)} \approx x^{(s)}$, encouraged by the stochasticity, implies $x^{(d)} \approx \psi x^{(s)} + (1 - \psi) x^{(s)} = x^{(s)}$, \ie feature sharing between the deep and shallow representations.
At inference time, we deterministically combine the two, now complementary, streams:
\begin{equation}
\begin{aligned}
x^\text{SG} = (1 - \psi) x^{(d)} + \psi x^{(s)}.
\end{aligned}
\label{eq:gate_sum}
\end{equation}
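In PyTorch, the gate amounts to a few lines; a sketch under our naming conventions (\texttt{x\_d}, \texttt{x\_s} denote the deep and shallow tensors), implementing \cref{eq:bernoulli} at training and \cref{eq:gate_sum} at inference time:
\begin{verbatim}
import torch

def stochastic_gate(x_d, x_s, psi=0.3, training=True):
    if training:
        # per-pixel Bernoulli mixing of the two streams
        r = torch.bernoulli(torch.full_like(x_d, psi))
        x_ast = (x_d - psi * x_s) / (1.0 - psi)  # delta * [x_d - psi*x_s]
        return (1.0 - r) * x_ast + r * x_s       # E[output] = x_d
    # deterministic combination at inference time
    return (1.0 - psi) * x_d + psi * x_s
\end{verbatim}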
\begin{figure}[t]%
\def\linewidth{1.01\linewidth}
\input{figures/gate/gate_v3.pdf_tex}
\caption{\textbf{Concept illustration of the Stochastic Gate.} All rectangular blocks are tensors of the same size. The baseline model from DeepLabv3+ \cite{deeplabv3plus2018} is shown in red: the output from ASPP is augmented via a skip connection from \texttt{conv3} and the result, $x^{(d)}$, passes directly through the decoder. Shown in blue, our modification (GCI) infuses global cues extracted from the deep features into the shallow features via AdIN \cite{huang2017arbitrary}. The enriched shallow and the deep features are then combined using \cref{eq:bernoulli} at training and \cref{eq:gate_sum} at inference time.}%
\label{fig:gate}%
\vspace{-0.5em}
\end{figure}
Shallow features alone may be too limited in terms of the semantic information they contain.
To enrich their representation, yet preserve their original receptive field, we devise \emph{Global Cue Injection (GCI)} via Adaptive Instance Normalisation (AdIN) \cite{huang2017arbitrary}.
As shown in \cref{fig:gate}, we first apply a $1\!\times\!1$ convolution to the deep feature tensor to double the number of channels.
Then, we extract two vectors with global information (\ie without spatial cues) via Global Max Pooling (GMP).
Let $z^{(a)}$ and $b^{(a)}$ denote the two parts of this representation, shown as the left (unshaded) and right (shaded) halves of the 1D vector after GMP in \cref{fig:gate}; they are shared by every site in a shallow feature channel.
We compute the \emph{augmented} shallow activation $x^{(s)^\ast}$ as
\begin{equation}
\begin{aligned}
x^{(s)^\ast} = \texttt{ReLU} \Big( z^{(a)} \Big(\tfrac{x^{(s)} - \mu(x^{(s)})}{\sigma(x^{(s)})}\Big) + b^{(a)} \Big),
\end{aligned}
\end{equation}
where $\mu(\cdot)$ and $\sigma(\cdot)$ are the mean and the standard deviation of each channel of $x^{(s)}$.
The updated activation, $x^{(s)^\ast}$, goes through a $1\!\times\!1$-convolution and replaces the original $x^{(s)}$ in \cref{eq:bernoulli} and \cref{eq:gate_sum} in the final form of SG.
Following \cite{deeplabv3plus2018}, the output from SG then passes through a 3-layer decoder.
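A compact sketch of GCI (the channel count \texttt{ch} is assumed equal for the deep and shallow maps, consistent with \cref{fig:gate}; layer names and the stability constant are ours):
\begin{verbatim}
import torch
import torch.nn as nn
import torch.nn.functional as F

class GCI(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.to_zb = nn.Conv2d(ch, 2 * ch, 1)  # 1x1 conv doubling channels
        self.out = nn.Conv2d(ch, ch, 1)        # final 1x1 convolution
    def forward(self, x_d, x_s, eps=1e-5):
        g = F.adaptive_max_pool2d(self.to_zb(x_d), 1)  # GMP: (B, 2ch, 1, 1)
        z, b = g.chunk(2, dim=1)                       # global scale and bias
        mu = x_s.mean(dim=(2, 3), keepdim=True)
        sigma = x_s.std(dim=(2, 3), keepdim=True)
        x_ast = F.relu(z * (x_s - mu) / (sigma + eps) + b)  # AdIN modulation
        return self.out(x_ast)   # replaces x_s in the Stochastic Gate
\end{verbatim}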
\subsection{Setup}
\paragraph{Dataset.} Pascal VOC 2012~\cite{EveringhamGWWZ10} is an established benchmark for weakly supervised semantic segmentation and contains 20 object categories.
Following the standard practice~\cite{AhnK18,KolesnikovL16,WeiXSJFH18}, we augment the original VOC training data with an additional image set provided by Hariharan \etal \cite{HariharanABMM11}.
In total, we use \num{10582} images with image-level annotation for training and \num{1449} images for validation.
\myparagraph{Implementation details.}
Our model is implemented in PyTorch \cite{paszke2017automatic}.
We use a \textit{WideResNet-38} backbone network \cite{wu2019wider} provided by \cite{AhnK18} (see \cref{sec:backbones} for experiments with VGG16 \cite{SimonyanZ14a} and ResNet backbones \cite{HeZRS16}).
We further extend this model to \textit{DeepLabv3+} by adding Atrous Spatial Pyramid Pooling (ASPP), a skip connection (with our Stochastic Gate), and the 3-layer decoder \cite{deeplabv3plus2018}.
We train our model for \num{20} epochs with SGD using a weight decay of \num{5e-4}, momentum of \num{0.9}, and a constant learning rate of \num{0.01} for the new (randomly initialised) modules and \num{0.001} for the \textit{WideResNet-38} parameters, initialised from ImageNet \cite{deng2009imagenet} pre-training.
We first train our model for 5 epochs using only the classification loss and switch on the self-supervised segmentation loss for the remaining 15 epochs.
We use inference with multi-scale inputs \cite{chen2016attention} and remove masks for classes with classifier confidence $<0.1$.
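The two learning rates can be realised with SGD parameter groups; a sketch (the placeholder modules stand in for the actual network parts):
\begin{verbatim}
import torch
import torch.nn as nn

# placeholder modules standing in for the actual network parts
backbone, new_modules = nn.Conv2d(3, 64, 3), nn.Conv2d(64, 21, 1)
optimizer = torch.optim.SGD(
    [{"params": new_modules.parameters(), "lr": 0.01},
     {"params": backbone.parameters(),    "lr": 0.001}],
    momentum=0.9, weight_decay=5e-4)
\end{verbatim}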
\myparagraph{Data augmentation.}
Following common practice \cite{AhnK18,WeiXSJFH18}, we use random rescaling (in the $(0.9, 1.0)$ range \wrt the original image area), horizontal flipping, colour jittering, and train our model on random crops of size $321\times321$.
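In torchvision terms, the pipeline roughly corresponds to the following sketch (the jitter strengths are our assumptions, as the text does not specify them):
\begin{verbatim}
import torchvision.transforms as T

train_transform = T.Compose([
    T.RandomResizedCrop(321, scale=(0.9, 1.0)),  # rescale wrt area + crop
    T.RandomHorizontalFlip(),
    T.ColorJitter(brightness=0.3, contrast=0.3,
                  saturation=0.3, hue=0.1),
    T.ToTensor(),
])
\end{verbatim}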
\begin{table}[t]
\centering
\subcaptionbox{IoU (val,\%) \wrt focal mask penalty\label{table:ablation_focal}. We fix $\psi\!=\!0.5$ w/ GCI for SG and use PAMR kernel $[1,2,4,8,12,24]$.}{%
\footnotesize%
\setlength{\tabcolsep}{0.55em}%
\begin{tabularx}{0.47\linewidth}{@{}X@{}S[table-format=2.1]S[table-format=2.1]S[table-format=2.1]@{}}
\toprule
$\downarrow p \hfill/\hfill \lambda\rightarrow$ & {$0.1$} & {$0.01$} & {$0.001$} \\
\midrule
$0$ & 58.8 & 58.9 & 57.4 \\
$3$ & 59.5 & 59.4 & 58.1 \\
$5$ & 60.2 & 59.1 & 57.1 \\
\bottomrule
\end{tabularx}
}\hfill
\subcaptionbox{IoU (val,\%) \wrt Pixel-Adaptive Mask Refinement\label{table:ablation_pamr}. We fix $\psi\!=\!0.5$ w/ GCI for SG and set $p\!=\!3, \lambda\!=\!0.01$.}{%
\footnotesize%
\setlength{\tabcolsep}{0.55em}%
\begin{tabularx}{0.47\linewidth}{@{}cccccX@{}S[table-format=2.1]@{}}
\toprule
1 & 2 & 4 & 8 & 12 & 24 & {IoU} \\
\midrule
\cmark & \cmark & \cmark & \cmark & \cmark & \cmark & \bfseries 59.4 \\
\midrule
\multicolumn{6}{@{}l}{\footnotesize{\textit{(no refinement)}}} & 31.8 \\
\cmark & \cmark & \cmark & \cmark & & & 50.6 \\
\cmark & \cmark & \cmark & \cmark & \cmark & & 55.7 \\
\cmark & 3 & 6 & 9 & \cmark & 16 & 57.9 \\
\cmark & \cmark & \cmark & \cmark & \cmark & 16 & 58.2 \\
\bottomrule
\end{tabularx}
}\\[0.5em]
\subcaptionbox{IoU (val,\%) \wrt the Stochastic Gate\label{table:ablation_gate}. We use $p\!=\!3, \lambda\!=\!0.01$ for the focal penalty and use PAMR kernel $[1,2,4,8,12,24]$.}{%
\footnotesize
\begin{tabularx}{\linewidth}{@{}XS[table-format=2.1]@{\hspace{2em}}S[table-format=2.1]@{}}
\toprule
Config & {IoU} & {IoU \scriptsize{(+ CRF)}} \\
\midrule
$\psi = 0.5$ & 59.4 & 62.2 \\
$\psi = 0.5$ w/o GCI & \bfseries 59.8 & 60.9 \\
\midrule
$\psi = 0.3$ & \bfseries 59.7 & \bfseries 62.7 \\
$\psi = 0.3$ w/o GCI & 57.7 & 60.3 \\
\midrule
w/o SG & 55.6 & 57.5 \\
Deterministic Gate & 57.5 & 57.7 \\
\bottomrule
\end{tabularx}
}
\caption{\textbf{Ablation study on Pascal VOC.} We study the role of \subref{table:ablation_focal} the focal mask penalty, \subref{table:ablation_pamr} the Pixel-Adaptive Mask Refinement, and \subref{table:ablation_gate} the Stochastic Gate.}
\label{table:ablation}
\vspace{-0.5em}
\end{table}
\subsection{Ablation study}
\label{sec:ablation}
\paragraph{Focal mask penalty.}
Following the intuition from \cite{LinGGHD17}, the focal mask penalty emphasises training on the current failure cases, \ie small (large) masks for the classes present (absent) in the image.
Recall from \cref{eq:pen_focal} that $\lambda$ controls the penalty magnitude, while $p$ is the discounting rate for better-off image samples.
We aim to verify whether the ``focal'' aspect of the mask penalty provides advantages over the baseline penalty (\ie $p = 0$).
\cref{table:ablation_focal} summarises the results.
First, we find that the focal version of the mask penalty improves the segmentation quality of the baseline.
This improvement, maximised with $p=5$ and $\lambda=0.1$, is tangible and comes at negligible computational cost.
Second, we observe that increasing $\lambda$ tends to increase the segmentation accuracy.
While changing $\lambda$ from $0.01$ to $0.001$ leads to higher recall on average, it has a detrimental effect on precision.
Lastly, we also find that moderate positive values of $p$ in conjunction with CRF refinement lead to more sizeable gains in mask quality:
with $p=3, \lambda=0.01$ we achieve $62.2\%$ IoU, whereas the highest IoU with $p=0$ is only $60.5\%$ (reached with $\lambda=0.01$).
However, higher values of $p$ do not benefit from CRF processing (\eg $59.8\%$ with $p=5, \lambda=0.1$).
Hence, $p=3$ strikes the best balance between the model accuracy with and without using a CRF.
Note that removing the mask penalty, $y_c^\text{size-focal}$, leads to an expected drop in recall, reaching only $56.6\%$ IoU.
\begin{table}
\footnotesize
\begin{tabularx}{\linewidth}{@{}XS[table-format=2.1]@{\hspace{2em}}S[table-format=2.1]@{}}
\toprule
Method & {IoU (train,\%)} & {IoU (val,\%)} \\
\midrule
CAM \cite{AhnK18} \scriptsize{(Our Baseline)} & 48.0 & 46.8 \\
CAM + RW \cite{AhnK18} & 58.1 & 57.0 \\
CAM + RW + CRF \cite{AhnK18} & 59.7 & {--} \\
CAM + IRN + CRF \cite{AhnCK19} & 66.5 & {--} \\
\toprule
Ours & 64.7 & 63.4 \\
Ours + CRF & \bfseries 66.9 & \bfseries 65.3 \\
\bottomrule
\end{tabularx}
\caption{\textbf{Segmentation quality on Pascal VOC training and validation sets.} Here, we use ground-truth image-level labels to remove masks of any false positive classes predicted by our model.}
\label{table:seg_training}
\vspace{-0.5em}
\end{table}
\myparagraph{Pixel-Adaptive Mask Refinement (PAMR).}
Recall from \cref{sec:mask_refinement} that PAMR aims to improve the quality of the original coarse masks \wrt \emph{local consistency} to provide self-supervision for segmentation.
Here, we verify \textit{(i)} the importance of PAMR by training our model without the refinement; and \textit{(ii)} the choice of the kernel structure, \ie the composition of dilation rates of the $3\times3$-kernels in PAMR.
The results in \cref{table:ablation_pamr} show that PAMR is a crucial component in our self-supervised model, as the segmentation accuracy drops markedly from $59.4 \%$ to $31.8 \%$ without it.
We find further that the size of the kernel also affects the accuracy.
This is expected, since small receptive fields (dilations \texttt{1-2-4-8} in \cref{table:ablation_pamr}) are insufficient to revise the boundaries of the coarse masks that typically exhibit large deviations from the object boundaries.
The results with larger receptive fields of the affinity kernel further support this intuition: increasing the dilation of the largest $3\!\times\!3$-kernel to $24$ attains the best mask quality compared to the smaller affinity kernels.
Furthermore, we observe that varying the kernel \emph{shape} does not have such a drastic effect;
the change from \texttt{1-3-6-9-12-16} to \texttt{1-2-4-8-12-16} only leads to small accuracy changes.
This is desirable in practice as sensitivity to these minor details would imply that our architecture overfits to particularities in the data \cite{torralba2011unbiased}.
\myparagraph{Stochastic Gate (SG).}
The intention of the SG, introduced in \cref{sec:stochasitc_gate}, is to counter overfitting to the errors contained in the pseudo supervision.
Here, there are four baselines we aim to verify: \emph{(i)} disabling SG; \emph{(ii)} combining $x^{(d)}$ and $x^{(s)}$ deterministically (\ie $r\equiv\psi$ in \cref{eq:bernoulli}); \emph{(iii)} the role of the Global Cue Injection (GCI); and \emph{(iv)} the effect of the mixing rate $\psi$.
These results are summarised in \cref{table:ablation_gate}.
Evidently, SG is crucial, since disabling it substantially weakens the mask accuracy (from 59.8\% to 55.6\% IoU).
The \emph{stochastic} nature of SG is also important: simply summing up $x^{(d)}$ and $x^{(s)}$ (we used $r\equiv\psi=0.5$) yields inferior mask IoU (57.5\% \vs 59.8\%).
In our model comparison with both $\psi=0.5$ and $\psi=0.3$, we find that the model with GCI tends to provide superior results.
However, the model without GCI can be as competitive given a particular choice of $\psi$ (\eg, $0.5$).
In this case, the model with GCI usually has higher recall, while the model without it has higher precision.
Since CRFs tend to increase the precision provided sufficient mask support, the model with GCI should therefore profit more from this refinement.
We confirmed this and observed a more sizeable improvement of the model with GCI (59.4 \vs 62.2\% IoU).
Additionally, we found GCI to deliver more stable results for different $\psi$, which can alleviate parameter fine-tuning in practice.
\begin{table}
\footnotesize
\begin{tabularx}{\linewidth}{@{}X@{\hspace{0.5em}}l@{\hspace{0.25em}}c@{\hspace{0.25em}}c@{\hspace{1em}}S[table-format=2.1]@{\hspace{1em}}S[table-format=2.1]@{}}
\toprule
Method & Backbone & Superv. & Dep. & {val} & {test} \\
\midrule
\multicolumn{6}{@{}l}{\scriptsize \textit{Fully supervised}} \\
\midrule
WideResNet38~\cite{wu2019wider} & & $\mathcal{F}$ & & 80.8 & 82.5 \\
DeepLabv3+~\cite{deeplabv3plus2018} & Xception-65 \cite{chollet17} & $\mathcal{F}$ & & {--} & 87.8 \\
\midrule
\multicolumn{6}{@{}l}{\scriptsize \textit{Multi stage + Saliency}} \\
\midrule
STC \cite{WeiLCSCFZY17} & DeepLab \cite{chen2017deeplab} & $\mathcal{S}, \mathcal{D}$ & \cite{JiangWYWZL13} & 49.8 & 51.2 \\
SEC \cite{KolesnikovL16} & VGG-16 & $\mathcal{S}, \mathcal{D}$ & \cite{SimonyanVZ13} & 50.7 & 51.7 \\
Saliency \cite{OhBKAFS17} & VGG-16 & $\mathcal{S}, \mathcal{D}$ & & 55.7 & 56.7 \\
DCSP \cite{ChaudhryDT17} & ResNet-101 & $\mathcal{S}$ & \cite{LiuH16} & 60.8 & 61.9 \\
RDC \cite{WeiXSJFH18} & VGG-16 & $\mathcal{S}$ & \cite{xiao2017self} & 60.4 & 60.8 \\
DSRG \cite{HuangWWLW18} & ResNet-101 & $\mathcal{S}$ & \cite{WangJYCHZ17} & 61.4 & 63.2 \\
FickleNet~\cite{LeeKLLY19} & ResNet-101 & $\mathcal{S}$ & \cite{HuangWWLW18} & 64.9 & 65.3 \\
\multirow{2}{*}{Frame-to-Frame~\cite{Lee_2019_ICCV}} & VGG-16 & \multirow{2}{*}{$\mathcal{S}, \mathcal{D}$} & \multirow{2}{*}{\cite{sun2018pwc,HouCHBTT17,LeeKLLY19}} & 63.9 & 65.0 \\
& ResNet-101 & & & 66.5 & 67.4 \\
\midrule
\multicolumn{6}{@{}l}{\scriptsize \textit{Single stage + Saliency}} \\
\midrule
JointSaliency~\cite{Zeng_2019_ICCV} & \scriptsize{DenseNet-169} \cite{HuangLMW17} & $\mathcal{S}, \mathcal{D}$ & & 63.3 & 64.3 \\
\midrule
\multicolumn{6}{@{}l}{\scriptsize \textit{Multi stage}} \\
\midrule
AffinityNet~\cite{AhnK18} & WideResNet-38 & $\mathcal{I}$ & & 61.7 & 63.7 \\
IRN~\cite{AhnCK19} & ResNet-50 & $\mathcal{I}$ & & 63.5 & 64.8 \\
SSDD~\cite{Shimoda_2019_ICCV} & WideResNet-38 & $\mathcal{I}$ & \cite{AhnK18} & 64.9 & 65.5 \\
\midrule
\multicolumn{6}{@{}l}{\scriptsize \textit{Single stage}} \\
\midrule
TransferNet~\cite{HongOLH16} & VGG-16 & $\mathcal{D}$ & & 52.1 & 51.2 \\
WebCrawl~\cite{HongYKLH17} & VGG-16 & $\mathcal{D}$ & & 58.1 & 58.7 \\
\midrule
EM~\cite{PapandreouCMY15} & VGG-16 & $\mathcal{I}$ & & 38.2 & 39.6 \\
MIL-LSE~\cite{PinheiroC15} & Overfeat \cite{SermanetEZMFL13} & $\mathcal{I}$ & & 42.0 & 40.6 \\
CRF-RNN \cite{RoyT17} & VGG-16 & $\mathcal{I}$ & & 52.8 & 53.7 \\
\midrule
Ours & \multirow{2}{*}{WideResNet-38} & $\mathcal{I}$ & & 59.7 & 60.5 \\
Ours + CRF & & $\mathcal{I}$ & & 62.7 & 64.3 \\
\bottomrule
\end{tabularx}
\caption{\textbf{Mean IoU (\%) on Pascal VOC validation and test.}
For each method we indicate additional cues used for training beyond image-level labels $\mathcal{I}$, such as saliency detection ($\mathcal{S}$), additional data $(\mathcal{D})$, as well as their dependence on other methods (``Dep.'').
}
\label{table:main_result}
\vspace{-0.5em}
\end{table}
\subsection{Comparison to the state of the art}
\label{sec:main_result}
\paragraph{Setup.} Here, our model uses SG with GCI, $\psi=0.3$, the focal penalty with $p=3$ and $\lambda=0.01$, and PAMR with $10$ iterations and a \texttt{1-2-4-8-12-24} affinity kernel.
\begin{figure*}[t]
\centering
\begin{subfigure}[t]{.375\linewidth}
\centering\large
\includegraphics[width=0.98\linewidth]{figures/qualitative/final/train_001.png}\\
\includegraphics[width=0.98\linewidth]{figures/qualitative/final/train_002.png}\\
\includegraphics[width=0.98\linewidth]{figures/qualitative/final/train_003.png}\\
\includegraphics[width=0.98\linewidth]{figures/qualitative/final/train_004.png}
\caption{Train}\label{fig:q_train}
\end{subfigure}%
\begin{subfigure}[t]{.375\linewidth}
\centering\large
\includegraphics[width=0.976\linewidth]{figures/qualitative/final/val_001.png}\\
\includegraphics[width=0.976\linewidth]{figures/qualitative/final/val_002.png}\\
\includegraphics[width=0.976\linewidth]{figures/qualitative/final/val_003.png}\\
\includegraphics[width=0.976\linewidth]{figures/qualitative/final/val_004.png}
\caption{Val}\label{fig:q_val}
\end{subfigure}%
\begin{subfigure}[t]{.25\linewidth}
\centering\large
\includegraphics[width=0.976\linewidth]{figures/qualitative/final/test_001.png}\\
\includegraphics[width=0.976\linewidth]{figures/qualitative/final/test_002.png}\\
\includegraphics[width=0.976\linewidth]{figures/qualitative/final/test_003.png}\\
\includegraphics[width=0.976\linewidth]{figures/qualitative/final/test_004.png}
\caption{Test}\label{fig:q_test}
\end{subfigure}
\caption{\textbf{Qualitative results on PASCAL VOC.} We show example segmentations from our method (\textit{left}), the result of CRF post-processing (\textit{middle}), and the ground truth (\textit{right}). Our method produces masks of high quality under a variety of challenging conditions.}
\label{fig:qualitative}%
\vspace{-0.5em}
\end{figure*}
\myparagraph{Mask quality.}
Recall that the majority of recent work, \eg \cite{HuangWWLW18,LeeKLLY19,Lee_2019_ICCV,WeiXSJFH18}, additionally trains a separate fully supervised segmentation network from the pseudo ground truth.
To evaluate the quality of such pseudo supervision generated by our model, we use image-level ground-truth labels to remove any masks for classes that are not present in the image (for this experiment only).
The results in \cref{table:seg_training} show that using our single-stage mask output as pseudo segmentation labels improves the baseline IoU of CAMs by a staggering $18.9 \%$ IoU ($18.5 \%$ on validation), and even outperforms recent multi-stage methods \cite{AhnK18} by $7.2\%$ IoU and \cite{AhnCK19} by $0.4\%$ IoU.
This is remarkable, since we neither train additional models nor resort to saliency detection.
\myparagraph{Segmentation accuracy.}
\cref{table:main_result} provides a comparative overview \wrt the state of the art.
Since image-level labels are generally not available at test time, we do \emph{not} perform any mask pruning here (unlike \cref{table:seg_training}).
In the setting of image-level supervision, IRN \cite{AhnCK19} and SSDD \cite{Shimoda_2019_ICCV} are the only methods with higher IoU than ours.
Both methods are multi-stage; they are trained in at least \emph{three} stages.
IRN \cite{AhnCK19} trains an additional segmentation network on pseudo labels to eventually outperform our method by only $0.5\%$ IoU.
Recall that SSDD is essentially a post-processing approach: it refines the masks from AffinityNet \cite{AhnK18} (with $63.7\%$ IoU) using an additional network and further employs a cascade of \emph{two} networks to revise the masks.
This strategy improves over our results by only $1.2\%$ IoU, yet at the cost of a considerable increase in model complexity.
Our single-stage method is also competitive with JointSaliency \cite{Zeng_2019_ICCV}, which uses a more powerful backbone \cite{HuangLMW17} and saliency supervision.
The recent Frame-to-Frame system \cite{Lee_2019_ICCV} is also supervised with saliency and is trained on additional 15K images mined from videos, which requires state-of-the-art optical flow \cite{sun2018pwc}.
By contrast, our approach is substantially simpler since we train \emph{one} network in a \emph{single} shot.
Nevertheless, we surpass a number of multi-stage methods that use additional data and saliency supervision \cite{ChaudhryDT17,HuangWWLW18,KolesnikovL16,OhBKAFS17,WeiLCSCFZY17}.
We significantly improve over previous single-stage methods \cite{PapandreouCMY15,PinheiroC15,RoyT17},
as well as outperform the single-stage WebCrawl \cite{HongYKLH17}, which relies on additional training data and needs multiple forward passes through its class-agnostic decoder.
Our model needs neither and infers masks for all classes in one pass.
Note that training a standalone segmentation network on our pseudo labels is a trivial extension, which we omit here in view of our practical goals.
However, we still provide these results in the supplemental material (\cref{sec:pseudo_labels}), in fact achieving state-of-the-art results in a \emph{multi-stage} setup as well.
\myparagraph{Qualitative analysis.}
From the qualitative results in \cref{fig:qualitative}, we observe that our method produces segmentation masks that align well with object boundaries.
Our model exhibits good generalisation to challenging scenes with varying object scales and semantic content.
Common failure modes of our segmentation network are akin to those of fully supervised methods: segmenting fine-grained details (\eg, bicycle wheels), mislabelling under conditions of occlusions (\eg, leg of the cyclist \vs bicycle), and misleading appearance cues (\eg, low contrast, similar texture).
\section{On Training Stages}
\label{sec:stages}
\cref{table:related_work} provides a concise overview of previous work on semantic segmentation from image-level labels.
As an important practical consideration, we highlight the number of stages used by each method.
In this work, we refer to one stage as learning an independent set of model parameters with intermediate results saved \emph{offline} as input to the next stage.
For example, the approach by Ahn~\etal \cite{AhnK18} comprises three stages: \emph{(1)} extracting CAMs as seed masks; \emph{(2)} learning a pixel affinity network to refine these masks; and \emph{(3)} training a segmentation network on the pseudo labels generated by affinity propagation.
Note that three methods \cite{JingCT20,WeiFLCZY17,WeiLCSCFZY17} also use multiple training cycles (given in parentheses) of the same model, which essentially acts as a multiplier \wrt the total number of stages.
Finally, we note if a method relies on saliency detection, extra data, or previous frameworks.
We observe that single-stage methods are predominantly early work, standing in contrast to the more complex recent multi-stage pipelines.
Our approach is single stage and relies neither on previous frameworks, nor on saliency supervision.\footnote{The background cues provided by saliency detection methods give a substantial advantage, since $63\%$ of 10K train images (VOC+SBD) have only one class.}
\begin{table}
\footnotesize
\begin{tabularx}{\linewidth}{@{}Xllc@{}}
\toprule
Method & Extras & \# of Stages & PSR \\
\midrule
MIL-FCN \begin{tiny}\textit{ICLR '15}\end{tiny} \cite{PathakSLD14} & -- & 1 & \xmark \\
MIL-LSE \begin{tiny}\textit{CVPR '15}\end{tiny} \cite{PinheiroC15} & -- & 1 & \xmark \\
EM \begin{tiny}\textit{ICCV '15}\end{tiny} \cite{PapandreouCMY15} & -- & 1 & \xmark \\
TransferNet \begin{tiny}\textit{CVPR '16}\end{tiny} \cite{HongOLH16} & -- & 1 & \xmark \\
SEC \begin{tiny}\textit{ECCV '16}\end{tiny} \cite{KolesnikovL16} & $\mathcal{S}$ \cite{SimonyanVZ13} & 2 & \xmark \\
DCSP \begin{tiny}\textit{ECCV '17}\end{tiny} \cite{ChaudhryDT17} & $\mathcal{S}$ \cite{LiuH16} & 1 & \xmark \\
AdvErasing \begin{tiny}\textit{CVPR '17}\end{tiny} \cite{WeiFLCZY17} & $\mathcal{S}$ \cite{WangJYCHZ17} & 2 $(\times 3)$ & \cmark \\
WebCrawl \begin{tiny}\textit{CVPR '17}\end{tiny} \cite{HongYKLH17} & $\mathcal{D}$ & 1 & \xmark \\
CRF-RNN \begin{tiny}\textit{CVPR '17}\end{tiny} \cite{RoyT17} & -- & 1 & \xmark \\
STC \begin{tiny}\textit{TPAMI '17}\end{tiny} \cite{WeiLCSCFZY17} & $\mathcal{D}$, $\mathcal{S}$ \cite{JiangWYWZL13} & 3 $(\times3)$ & \cmark \\
MCOF \begin{tiny}\textit{CVPR '18}\end{tiny} \cite{wang2018weakly} & $\mathcal{S}$ \cite{wang2018weakly} & 2 & \xmark \\
RDC \begin{tiny}\textit{CVPR '18}\end{tiny} \cite{WeiXSJFH18} & $\mathcal{S}$ \cite{xiao2017self} & 3 & \cmark \\
DSRG \begin{tiny}\textit{CVPR '18}\end{tiny} \cite{HuangWWLW18} & $\mathcal{S}$ \cite{WangJYCHZ17} & 2 & \cmark \\
Guided-Att \begin{tiny}\textit{CVPR '18}\end{tiny} \cite{li2018tell} & SEC \cite{KolesnikovL16} & 1+2 & \xmark \\
SalientInstances \begin{tiny}\textit{ECCV '18}\end{tiny} \cite{fan2018associating} & $\mathcal{S}$ \cite{fan2019s4net} & 2 & \cmark \\
Affinity \begin{tiny}\textit{CVPR '18}\end{tiny} \cite{AhnK18} & -- & 3 & \cmark \\
SeeNet \begin{tiny}\textit{NIPS '18}\end{tiny} \cite{HouJWC18} & $\mathcal{S}$ \cite{HouCHBTT19} & 2 & \cmark \\
FickleNet \begin{tiny}\textit{CVPR '19}\end{tiny} \cite{LeeKLLY19} & $\mathcal{S}$, DSRG \cite{HuangWWLW18} & 1+3 & \cmark \\
JointSaliency \begin{tiny}\textit{ICCV '19}\end{tiny} \cite{Zeng_2019_ICCV} & $\mathcal{S}$ & 1 & \xmark \\
Frame-to-Frame \begin{tiny}\textit{ICCV '19}\end{tiny} \cite{Lee_2019_ICCV} & $\mathcal{D}$, $\mathcal{S}$ \cite{HouCHBTT17} & 2 & \cmark \\
SSDD \begin{tiny}\textit{ICCV '19}\end{tiny} \cite{Shimoda_2019_ICCV} & Affinity \cite{AhnK18} & 2+3 & \cmark \\
IRN \begin{tiny}\textit{CVPR '19}\end{tiny} \cite{AhnCK19} & -- & 3 & \cmark \\
Coarse-to-Fine \begin{tiny}\textit{TIP '20}\end{tiny} \cite{JingCT20} & GrabCut \cite{rother2004grabcut} & 2 $(\times 5)$ & \cmark \\
\midrule
Ours & -- & 1 & \xmark \\
\bottomrule
\end{tabularx}
\caption{\textbf{Summary of related work.} We analyse the related methods \wrt external input (``Extras''), such as saliency detection ($\mathcal{S}$), additional data $(\mathcal{D})$, or their reliance on previous work. We count the number of training stages in the method and note in parentheses if the method uses multiple training cycles of the same models. We also mark methods that additionally train a standalone segmentation network in a (pseudo) fully supervised regime to refine the masks (PSR).}
\label{table:related_work}
\end{table}
\section{Loss Functions}
In this section, we take a detailed look at the employed loss functions.
Since we compute the classification scores differently from previous work, we also provide additional analysis to justify the form of this novel formulation.
\label{sec:supp_loss}
\subsection{Classification loss}
We adopt the multi-label soft-margin loss function from previous work \cite{AhnK18,WeiXSJFH18} as the classification loss, $\mathcal{L}_\text{cls}$.
Given model predictions $\mathbf{y} \in \mathbb{R}^C$ (\cf \cref{eq:gwp} and \cref{eq:pen_focal}; see also below) and a binary vector of ground-truth labels \mbox{$\mathbf{z}\in\{0,1\}^C$}, we compute the multi-label soft-margin loss \cite{paszke2017automatic} as
\begin{multline}
\mathcal{L}_\text{cls}(\mathbf{y}, \mathbf{z}) = -\frac{1}{C} \sum^C_{c=1} \bigg[ z_c \log{\bigg(\frac{1}{1 + e^{-y_c}} \bigg)} + \\
+ (1 - z_c) \log \bigg( \frac{e^{-y_c}}{1 + e^{-y_c}} \bigg) \bigg].
\label{eq:supp_cls_loss}
\end{multline}
As illustrated in \cref{fig:supp_cls_loss}, the loss function encourages \mbox{$y_c < 0$} for negative classes (\ie when $z_c = 0$) and $y_c > 0$ for positive classes (\ie when $z_c = 1$).
This observation will be useful for our discussion below.
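PyTorch provides this loss out of the box; a minimal sketch (the dummy tensors merely stand in for the scores $\mathbf{y}$ and labels $\mathbf{z}$):
\begin{verbatim}
import torch
import torch.nn as nn

criterion = nn.MultiLabelSoftMarginLoss()
y = torch.randn(4, 20)                    # aggregated class scores (B, C)
z = torch.randint(0, 2, (4, 20)).float()  # binary ground-truth labels
loss = criterion(y, z)
\end{verbatim}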
Recall from \cref{eq:gwp} and \cref{eq:pen_focal} that we define our classification score $y_c$ as
\begin{equation}
y_c = y_c^\text{nGWP} + y_c^\text{size-focal}.
\label{eq:supp_yc}
\end{equation}
For convenience, we re-iterate the definition of the two terms, the normalised Global Weighted Pooling
\begin{align}
y_c^\text{nGWP} &= \frac{\sum_{i,j} m_{c,i,j} y_{c,i,j}}{\epsilon + \sum_{i',j'} m_{c,i',j'}}
\label{eq:supp_gwp}\\
\intertext{and the focal penalty}
y_c^\text{size-focal} &= (1 - \bar{m}_c)^p \log(\lambda + \bar{m}_c), \label{eq:supp_fp}\\
\text{with}\qquad \bar{m}_c &= \frac{1}{hw} \sum_{i,j} m_{c,i,j}.
\end{align}
Note that $y_{c,i,j}$ in \cref{eq:supp_gwp} refers to pixel site $(i, j)$ on the score map of class $c$, whereas $y_c$ in \cref{eq:supp_yc} is the \emph{aggregated} score for class $c$.
We compute the class confidence $m_{:,i,j}$ for site $(i, j)$ using a \texttt{softmax} on $y_{:,i,j}$, where we include an additional \emph{background} channel $y_{0,i,j}$.
We fix $y_{0,:,:}\equiv 1$ throughout our experiments.
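A sketch of the score computation (tensor and function names are ours; the background channel is fixed to $1$ as stated above):
\begin{verbatim}
import torch
import torch.nn.functional as F

def class_scores(y_pix, p=3, lam=0.01, eps=1.0):
    # y_pix: (B, C, H, W) per-pixel class scores (no background)
    B, C, H, W = y_pix.shape
    bg = torch.ones(B, 1, H, W, device=y_pix.device)  # fixed bg score
    m = F.softmax(torch.cat([bg, y_pix], 1), 1)[:, 1:]  # confidences
    ngwp = (m * y_pix).sum(dim=(2, 3)) / (eps + m.sum(dim=(2, 3)))
    m_bar = m.mean(dim=(2, 3))                 # average mask size
    focal = (1.0 - m_bar) ** p * torch.log(lam + m_bar)
    return ngwp + focal                        # (B, C)
\end{verbatim}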
\begin{figure}[t]
\input{figures_supp/cls_loss/cls_loss.tex}
\caption{\textbf{Soft-margin classification loss.} The loss encourages $y_c>0$ for positive classes and $y_c<0$ for negative classes. The function exhibits saturation regions: as $y_c \rightarrow \infty$ for positive classes, the associated loss (shown in blue) approaches $0$. Conversely, the loss for negative classes, shown in red, approaches $0$, as $y_c \rightarrow -\infty$.}
\label{fig:supp_cls_loss}
\end{figure}
The use of hyperparameter $\epsilon > 0$ in the definition of \cref{eq:supp_gwp} may look rather arbitrary and redundant at first.
In the analysis below, we show that in fact it serves multiple purposes:
\begin{enumerate}
\item \textbf{Numerical stability}. First, using $\epsilon > 0$ prevents division by zero for saturated scores, \ie where $\sum_{i',j'} m_{c,i',j'} = 0$ for some $c$ in the denominator of \cref{eq:supp_gwp}, which can happen (approximately) in the course of training for negative classes.
Second, $\epsilon > 0$ resolves discontinuity issues.
Observe that
\begin{equation}
\lim_{m_{c,:,:} \rightarrow 0} \:\frac{\sum_{i,j} m_{c,i,j} y_{c,i,j}}{\epsilon + \sum_{i',j'} m_{c,i',j'}} = 0,
\label{eq:supp_limit_zero}
\end{equation}
\ie $y_c^\text{nGWP} \approx 0$ for negative classes and positive $\epsilon$.
However, with $\epsilon=0$ the nGWP term in \cref{eq:supp_gwp} is not continuous at 0, which in practice may result in unstable training when $\sum_{i,j} m_{c,i,j} \approx 0$ for some $c$.
One exception, unlikely to occur in practice, is when $y_{c,i,j} = y_{c,k,l} = d$ for all $(i,j)$ and $(k,l)$.
Then,
\begin{multline}
\lim_{m_{c,:,:} \rightarrow 0} \frac{\sum_{i,j} m_{c,i,j} y_{c,i,j}}{\sum_{i',j'} m_{c,i',j'}} = \\
= \lim_{m_{c,:,:} \rightarrow 0} \frac{d \sum_{i,j} m_{c,i,j}}{\sum_{i',j'} m_{c,i',j'}} = d.
\end{multline}
In the case of more practical relevance, \ie when $y_{c,i,j} \neq y_{c,k,l}$ for some $(i,j) \neq (k,l)$, the limit does not exist, as the following lemma shows.
With some abuse of notation, we here write $m_i$ and $y_i$ to refer to the confidence and the score values of class $c$ at pixel site $i$, respectively.
\paragraph{Lemma 1.} Let $\epsilon = 0$ in \cref{eq:supp_gwp} and suppose there exist $k$ and $l$ such that $y_k \neq y_l$.
Then, the corresponding limit
\begin{equation}
\begin{aligned}
\lim_{m_i \rightarrow 0, \forall i} \frac{\sum_i m_i y_i}{\sum_{i'} m_{i'}}
\end{aligned}
\label{eq:supp_limit_lemma}
\end{equation}
does not exist.
\begin{proof}
Let $m_k(t) = t$ and $m_i(t) = 0$ for $i \neq k$.
Then,
$$
\lim_{m_i \rightarrow 0, \forall i} \frac{\sum_i m_i y_i}{\sum_{i'} m_{i'}} = \lim_{t \rightarrow 0} \frac{t y_k}{t} = y_k.
$$
On the other hand, if we let $m_l(t) = t$ and $m_i(t) = 0$ for $i \neq l$, we obtain
$$
\lim_{m_i \rightarrow 0, \forall i} \frac{\sum_i m_i y_i}{\sum_{i'} m_{i'}} = \lim_{t \rightarrow 0} \frac{t y_l}{t} = y_l.
$$
Since $y_k \neq y_l$ by our assumption, we have now found two paths of the multivariable limit in \cref{eq:supp_limit_lemma}, which evaluate to different values.
Therefore, the limit in \cref{eq:supp_limit_lemma} is not unique, \ie does not exist (\citesupp{stewart12}, Ch.~14.2).
\end{proof}
\item \textbf{Emphasis on the focal penalty}.
For negative classes, it is not sensible to give meaningful relative weightings of the pixels for nGWP in \cref{eq:supp_gwp}; we seek such a relative weighting of different pixels only for positive classes.
At training time we would thus like to minimise the class scores for negative classes \emph{uniformly} for all pixel sites.
Empirically, we observed that with $\epsilon > 0$ the focal penalty term, which encourages this behaviour, contributes more significantly to the score of the negative classes than the nGWP term, which relies on relative pixel weighting.
\begin{table*}
\footnotesize
\begin{tabularx}{\linewidth}{@{}X|S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}|S[table-format=2.1]@{}}
\toprule
Method & {bg} & {aero} & {bike} & {bird} & {boat} & {bot.} & {bus} & {car} & {cat} & {chair} & {cow} & {tab.} & {dog} & {horse} & {mbk} & {per.} & {plant} & {sheep} & {sofa} & {train} & {tv} & {mean} \\
\midrule
\multicolumn{23}{@{}l}{\scriptsize \textit{Multi stage}} \\
\midrule
SEC$^\ast$ \cite{KolesnikovL16} & 82.4 & 62.9 & 26.4 & 61.6 & 27.6 & 38.1 & 66.6 & 62.7 & 75.2 & 22.1 & 53.5 & 28.3 & 65.8 & 57.8 & 62.3 & 52.5 & 32.5 & 62.6 & 32.1 & 45.4 & 45.3 & 50.7 \\
AdvErasing$^\ast$ \cite{WeiFLCZY17} & 83.4 & 71.1 & 30.5 & 72.9 & 41.6 & 55.9 & 63.1 & 60.2 & 74.0 & 18.0 & 66.5 & 32.4 & 71.7 & 56.3 & 64.8 & 52.4 & 37.4 & 69.1 & 31.4 & 58.9 & 43.9 & 55.0 \\
RDC$^\ast$ \cite{WeiXSJFH18} & 89.4 & 85.6 & 34.6 & 75.8 & 61.9 & 65.8 & 67.1 & 73.3 & 80.2 & 15.1 & 69.9 & 8.1 & 75.0 & 68.4 & 70.9 & 71.5 & 32.6 & 74.9 & 24.8 & 73.2 & 50.8 & 60.4 \\
FickleNet$^\ast$ \cite{LeeKLLY19} & 88.1 & 75.0 & 31.3 & 75.7 & 48.8 & 60.1 & 80.0 & 72.7 & 79.6 & 25.7 & 67.3 & 42.2 & 77.1 & 67.5 & 65.4 & 69.2 & 42.2 & 74.1 & 34.2 & 53.7 & 54.7 & 61.2 \\
AffinityNet~\cite{AhnK18} & 88.2 & 68.2 & 30.6 & 81.1 & 49.6 & 61.0 & 77.8 & 66.1 & 75.1 & 29.0 & 66.0 & 40.2 & 80.4 & 62.0 & 70.4 & 73.7 & 42.5 & 70.7 & 42.5 & 68.1 & 51.6 & 61.7 \\
FickleNet$^{\ast,\dagger}$ \cite{LeeKLLY19} & 89.5 & 76.6 & 32.6 & 74.6 & 51.5 & 71.1 & 83.4 & 74.4 & 83.6 & 24.1 & 73.4 & 47.4 & 78.2 & 74.0 & 68.8 & 73.2 & 47.8 & 79.9 & 37.0 & 57.3 & 64.6 & 64.9 \\
SSDD~\cite{Shimoda_2019_ICCV} & 89.0 & 62.5 & 28.9 & 83.7 & 52.9 & 59.5 & 77.6 & 73.7 & 87.0 & 34.0 & 83.7 & 47.6 & 84.1 & 77.0 & 73.9 & 69.6 & 29.8 & 84.0 & 43.2 & 68.0 & 53.4 & 64.9 \\
\midrule
\multicolumn{23}{@{}l}{\scriptsize \textit{Single stage}} \\
\midrule
TransferNet$^\ast$ \cite{HongOLH16} & 85.3 & 68.5 & 26.4 & 69.8 & 36.7 & 49.1 & 68.4 & 55.8 & 77.3 & 6.2 & 75.2 & 14.3 & 69.8 & 71.5 & 61.1 & 31.9 & 25.5 & 74.6 & 33.8 & 49.6 & 43.7 & 52.1 \\
CRF-RNN \cite{RoyT17} & 85.8 & 65.2 & 29.4 & 63.8 & 31.2 & 37.2 & 69.6 & 64.3 & 76.2 & 21.4 & 56.3 & 29.8 & 68.2 & 60.6 & 66.2 & 55.8 & 30.8 & 66.1 & 34.9 & 48.8 & 47.1 & 52.8 \\
WebCrawl$^\ast$ \cite{HongYKLH17} & 87.0 & 69.3 & 32.2 & 70.2 & 31.2 & 58.4& 73.6 & 68.5 & 76.5 & 26.8 & 63.8 & 29.1 & 73.5 & 69.5 & 66.5 & 70.4 & 46.8 & 72.1 & 27.3 & 57.4 & 50.2 & 58.1 \\
\midrule
Ours & 87.0 & 63.4 & 33.1 & 64.5 & 47.4 & 63.2 & 70.2 & 59.2 & 76.9 & 27.3 & 67.1 & 29.8 & 77.0 & 67.2 & 64.0 & 72.4 & 46.5 & 67.6 & 38.1 & 68.2 & 63.6 & 59.7 \\
Ours + CRF & 88.7 & 70.4 & 35.1 & 75.7 & 51.9 & 65.8 & 71.9 & 64.2 & 81.1 & 30.8 & 73.3 & 28.1 & 81.6 & 69.1 & 62.6 & 74.8 & 48.6 & 71.0 & 40.1 & 68.5 & 64.3 & 62.7 \\
\bottomrule
\multicolumn{23}{@{}l}{\scriptsize Methods marked with $(^\ast)$ use saliency detectors or additional data, or both (see Sec. 2). $(^\dagger)$ denotes a ResNet-101 backbone.} \\
\end{tabularx}
\caption{Per-class IoU (\%) comparison on Pascal VOC 2012, validation set.}
\label{table:main_result_val}
\end{table*}
\begin{table*}
\footnotesize
\begin{tabularx}{\linewidth}{@{}X|S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}S[table-format=2.1]@{\hspace{0.5em}}|S[table-format=2.1]@{}}
\toprule
Method & {bg} & {aero} & {bike} & {bird} & {boat} & {bot.} & {bus} & {car} & {cat} & {chair} & {cow} & {tab.} & {dog} & {horse} & {mbk} & {per.} & {plant} & {sheep} & {sofa} & {train} & {tv} & {mean} \\
\midrule
\multicolumn{23}{@{}l}{\scriptsize \textit{Multi stage}} \\
\midrule
SEC$^\ast$ \cite{KolesnikovL16} & 83.5 & 56.4 & 28.5 & 64.1 & 23.6 & 46.5 & 70.6 & 58.5 & 71.3 & 23.2 & 54.0 & 28.0 & 68.1 & 62.1 & 70.0 & 55.0 & 38.4 & 58.0 & 39.9 & 38.4 & 48.3 & 51.7 \\
FickleNet$^\ast$ \cite{LeeKLLY19} & 88.5 & 73.7 & 32.4 & 72.0 & 38.0 & 62.8 & 77.4 & 74.4 & 78.6 & 22.3 & 67.5 & 50.2 & 74.5 & 72.1 & 77.3 & 68.8 & 52.5 & 74.8 & 41.5 & 45.5 & 55.4 & 61.9 \\
AffinityNet~\cite{AhnK18} & 89.1 & 70.6 & 31.6 & 77.2 & 42.2 & 68.9 & 79.1 & 66.5 & 74.9 & 29.6 & 68.7 & 56.1 & 82.1 & 64.8 & 78.6 & 73.5 & 50.8 & 70.7 & 47.7 & 63.9 & 51.1 & 63.7 \\
FickleNet$^{\ast,\dagger}$ \cite{LeeKLLY19} & 89.8 & 78.3 & 34.1 & 73.4 & 41.2 & 67.2 & 81.0 & 77.3 & 81.2 & 29.1 & 72.4 & 47.2 & 76.8 & 76.5 & 76.1 & 72.9 & 56.5 & 82.9 & 43.6 & 48.7 & 64.7 & 65.3 \\
SSDD \cite{Shimoda_2019_ICCV} & 89.5 & 71.8 & 31.4 & 79.3 & 47.3 & 64.2 & 79.9 & 74.6 & 84.9 & 30.8 & 73.5 & 58.2 & 82.7 & 73.4 & 76.4 & 69.9 & 37.4 & 80.5 & 54.5 & 65.7 & 50.3 & 65.5 \\
\midrule
\multicolumn{23}{@{}l}{\scriptsize \textit{Single stage}} \\
\midrule
TransferNet$^\ast$~\cite{HongOLH16} & 85.7 & 70.1 & 27.8 & 73.7 & 37.3 & 44.8 & 71.4 & 53.8 & 73.0 & 6.7 & 62.9 & 12.4 & 68.4 & 73.7 & 65.9 & 27.9 & 23.5 & 72.3 & 38.9 & 45.9 & 39.2 & 51.2 \\
CRF-RNN \cite{RoyT17} & 85.7 & 58.8 & 30.5 & 67.6 & 24.7 & 44.7 & 74.8 & 61.8 & 73.7 & 22.9 & 57.4 & 27.5 & 71.3 & 64.8 & 72.4 & 57.3 & 37.3 & 60.4 & 42.8 & 42.2 & 50.6 & 53.7 \\
WebCrawl$^\ast$ \cite{HongYKLH17} & 87.2 & 63.9 & 32.8 & 72.4 & 26.7 & 64.0 & 72.1 & 70.5 & 77.8 & 23.9 & 63.6 & 32.1 & 77.2 & 75.3 & 76.2 & 71.5 & 45.0 & 68.8 & 35.5 & 46.2 & 49.3 & 58.7 \\
\midrule
Ours & 87.4 & 63.6 & 34.7 & 59.9 & 40.1 & 63.3 & 70.2 & 56.5 & 71.4 & 29.0 & 71.0 & 38.3 & 76.7 & 73.2 & 70.5 & 71.6 & 55.0 & 66.3 & 47.0 & 63.5 & 60.3 & 60.5 \\
Ours + CRF & 89.2 & 73.4 & 37.3 & 68.3 & 45.8 & 68.0 & 72.7 & 64.1 & 74.1 & 32.9 & 74.9 & 39.2 & 81.3 & 74.6 & 72.6 & 75.4 & 58.1 & 71.0 & 48.7 & 67.7 & 60.1 & 64.3 \\
\bottomrule
\multicolumn{23}{@{}l}{\scriptsize Methods marked with $(^\ast)$ use saliency detectors or additional data, or both (see Sec. 2). $(^\dagger)$ denotes a ResNet-101 backbone.} \\
\end{tabularx}
\caption{Per-class IoU (\%) comparison on Pascal VOC 2012, test set.}
\label{table:main_result_test}
\end{table*}
\item \textbf{Negative class debiasing}.
Negative classes dominate in the label set of Pascal VOC, \ie each image sample depicts only a few categories.
The gradient of the loss function from \cref{eq:supp_cls_loss} vanishes for positive classes with $y_{c} \rightarrow \infty$ and negative classes with $y_{c} \rightarrow -\infty$.
However, in our preliminary experiments with GAP-CAM, and later with nGWP with $\epsilon = 0$ in \cref{eq:supp_gwp}, we observed that further iterations continued to decrease the scores for the negative classes in regions in which the loss is near-saturated, while the $y_{c}$ of positive classes increased only marginally.
This may indicate a strong inductive bias towards negative classes, which might be undesirable for real-world deployment.
Assuming $\epsilon > 0$ and $\sum_{i',j'} m_{c,i',j'} \neq 0$, we observe that
\begin{equation}
\begin{aligned}
\frac{\sum_{i,j} m_{c,i,j} y_{c,i,j}}{\epsilon + \sum_{i',j'} m_{c,i',j'}} > \frac{\sum_{i,j} m_{c,i,j} y_{c,i,j}}{\sum_{i',j'} m_{c,i',j'}}
\end{aligned}
\label{eq:supp_ineq}
\end{equation}
when $y_{c,:,:} < 0$, which is the case for saturated negative classes.
Recall further that the use of the constant background score (fixed to $1$) implies that $m_{c,i,j} \rightarrow 0$ as $y_{c,i,j} \rightarrow -\infty$.
Since the RHS is a convex combination of the $y_{c,i,j}$'s, its minimum is $\min(\{y_{c,i,j}\}_{i,j})$.
Therefore, the RHS is unbounded from below as $y_{c,i,j} \rightarrow -\infty$, hence the $y_{c,i,j}$ keep getting pushed down for negative classes.
By contrast, the LHS has a defined limit of $0$ as $m_{c,i,j} \rightarrow 0$ (see \cref{eq:supp_limit_zero}), which is undesirable as the score for a negative class.
This is because a finite (\ie larger) $m_{c,i,j}$ yields a negative, thus smaller value of the classification score, which we are trying to minimise in this case.
Therefore, $\epsilon>0$ will encourage SGD to pull the negative scores away from the saturation areas by pushing the $\sum_{i,j} m_{c,i,j}$ away from zero.
\end{enumerate}
In summary, for negative classes $\epsilon$ improves numerical stability and emphasises the focal penalty while leveraging nGWP to alleviate the negative class bias.
For positive classes, the effect of $\epsilon$ is negligible, since $\epsilon \ll \sum_{i,j} m_{c,i,j}$ in this case.
We set $\epsilon = 1$ in all our experiments.
\subsection{Segmentation loss}
For the segmentation loss \wrt the pseudo ground truth, we use a \textit{weighted} cross-entropy defined for each pixel site $(i, j)$ as
\begin{equation}
\begin{aligned}
\mathcal{L}_{\text{seg},i,j} = - q_c \log m_{c,i,j},
\end{aligned}
\end{equation}
where $c$ is the label in the pseudo ground-truth mask.
The balancing class weight $q_c$ accounts for the fact that the pseudo ground truth may contain a different amount of pixel supervision for each class:
\begin{equation}
\begin{aligned}
q_c = \frac{M_{b,\text{total}} - M_{b,c}}{1 + M_{b,\text{total}}},
\end{aligned}
\end{equation}
where $M_{b,\text{total}}$ and $M_{b,c}$ are the total and class-specific number of pixels for supervision in the pseudo ground-truth mask, respectively;
$b$ indexes the sample in a batch;
and $1$ in the denominator serves for numerical stability.
The aggregated segmentation loss $\mathcal{L}_\text{seg}$ is a weighted mean over the samples in a batch, \ie
\begin{equation}
\begin{aligned}
\mathcal{L}_\text{seg} = \frac{1}{\sum_{b^\prime} M_{b^\prime,\text{total}}} \frac{1}{hw} \sum_b M_{b,\text{total}} \sum_{i,j} \mathcal{L}_{\text{seg},i,j}.
\end{aligned}
\end{equation}
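A sketch of the full segmentation loss (variable names and the numerical-stability constant inside the logarithm are our additions):
\begin{verbatim}
import torch
import torch.nn.functional as F

def seg_loss(m, pseudo, ignore=255):
    # m: (B, C, H, W) confidences; pseudo: (B, H, W) pseudo labels
    B, C, H, W = m.shape
    valid = pseudo != ignore
    labels = torch.where(valid, pseudo, torch.zeros_like(pseudo))
    onehot = F.one_hot(labels, C).permute(0, 3, 1, 2).float()
    onehot = onehot * valid.unsqueeze(1).float()  # drop ignored pixels
    M_c = onehot.sum(dim=(2, 3))                  # (B, C) pixels per class
    M_tot = M_c.sum(dim=1)                        # (B,) pixels per sample
    q = (M_tot[:, None] - M_c) / (1.0 + M_tot[:, None])  # class weights
    pix = -(q[:, :, None, None] * onehot * torch.log(m + 1e-8)).sum(1)
    return (M_tot * pix.sum(dim=(1, 2)) / (H * W)).sum() / M_tot.sum()
\end{verbatim}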
\section{Quantitative Analysis}
\label{sec:supp_qaunt}
We list the per-class segmentation accuracy on the Pascal VOC validation and test sets in \cref{table:main_result_val,table:main_result_test}.
We first observe that none of the previous methods, including the state of the art \cite{AhnK18,LeeKLLY19,Shimoda_2019_ICCV}, outperforms the other pipelines on \emph{all} class categories.
For example, FickleNet \cite{LeeKLLY19}, based on a ResNet-101 backbone, reaches top segmentation accuracy only for classes ``bottle'', ``bus'', ``car'', and ``tv''.
SSDD \cite{Shimoda_2019_ICCV} has the highest mean IoU, but is inferior to other methods on 10 out of 21 classes.
Our single-stage method compares favourably even to the multi-stage approaches that rely on saliency supervision or additional data.
For example, we improve over the more complex AffinityNet~\cite{AhnK18} that trains a deep network to predict pixel-level affinity distance from CAMs and then further trains a segmentation network in a (pseudo) fully supervised regime.
The best single-stage method from previous work, CRF-RNN \cite{RoyT17}, trained using only the image-level annotation we consider in this work, reaches $52.8\%$ and $53.7\%$ IoU on the validation and test sets.
We substantially boost this result, by $9.9\%$ and $10.6\%$ points, respectively, and attain a new overall state-of-the-art mask accuracy on the classes ``bike'', ``person'', and ``plant''.
\section{Ablation Study: PAMR Iterations}
We empirically verify the number of iterations used in PAMR, which we set to $10$ in our main ablation study.
\cref{table:pamr_iter} reports the results with a higher and a lower number of iterations.
We observe that using only a few PAMR iterations decreases the mask quality.
On the other hand, the benefits of PAMR diminish if we increase the number of iterations further.
Using $10$ iterations appears to strike a good balance between the computational expense and the obtained segmentation accuracy.
Additionally, we visualise semantic masks produced at intermediate iterations from our PAMR module in \cref{fig:pamr_iter}.
The initial masks produced by our segmentation model in the early stages of training exhibit coarse boundaries.
PAMR mitigates this shortcoming by virtue of exploiting visual cues with pixel-adaptive convolutions.
Our model then uses the revised masks to generate pseudo ground-truth for self-supervised segmentation.
\begin{table}
\footnotesize
\begin{tabularx}{\linewidth}{@{}XS[table-format=2.1]@{\hspace{2em}}S[table-format=2.1]@{}}
\toprule
Number of iterations & {IoU} & {IoU w/ CRF} \\
\midrule
5 & 59.1 & 59.0 \\
10 & \bfseries 59.4 & \bfseries 62.2 \\
15 & 59.0 & 61.6 \\
\bottomrule
\end{tabularx}
\caption{\textbf{Effect of the iteration number in PAMR.} We train our model with the iteration number in the PAMR module fixed to a pre-defined value. We report the IoU (\%) with and without CRF refinement on Pascal VOC validation.}
\label{table:pamr_iter}
\end{table}
\section{Pseudo Labels}
\label{sec:pseudo_labels}
The last-stage training of a segmentation network is \emph{agnostic} to the process of pseudo-label generation;
it is the quality of the pseudo labels and the ease of obtaining them that matter.
\begin{table}
\footnotesize
\begin{tabularx}{\linewidth}{@{}X@{\hspace{0.5em}}lcS[table-format=2.1]@{\hspace{1em}}S[table-format=2.1]@{}}
\toprule
Method & Backbone & Supervision & {val} & {test} \\
\midrule
\multicolumn{5}{@{}l}{\scriptsize \textit{Fully supervised}} \\
\midrule
WideResNet38~\cite{wu2019wider} & & $\mathcal{F}$ & 80.8 & 82.5 \\
DeepLabv3+~\cite{deeplabv3plus2018} & Xception-65 \cite{chollet17} & $\mathcal{F}$ & {--} & 87.8 \\
\midrule
\multicolumn{5}{@{}l}{\scriptsize \textit{Multi stage + Saliency}} \\
\midrule
FickleNet~\cite{LeeKLLY19} & ResNet-101 & $\mathcal{S}$ & 64.9 & 65.3 \\
\multirow{2}{*}{Frame-to-Frame~\cite{Lee_2019_ICCV}} & VGG-16 & \multirow{2}{*}{$\mathcal{S}, \mathcal{D}$} & 63.9 & 65.0 \\
& ResNet-101 & & 66.5 & 67.4 \\
\midrule
\multicolumn{5}{@{}l}{\scriptsize \textit{Single stage + Saliency}} \\
\midrule
JointSaliency~\cite{Zeng_2019_ICCV} & \scriptsize{DenseNet-169} \cite{HuangLMW17} & $\mathcal{S}, \mathcal{D}$ & 63.3 & 64.3 \\
\midrule
\multicolumn{5}{@{}l}{\scriptsize \textit{Multi stage}} \\
\midrule
AffinityNet~\cite{AhnK18} & WideResNet-38 & $\mathcal{I}$ & 61.7 & 63.7 \\
IRN~\cite{AhnCK19} & ResNet-50 & $\mathcal{I}$ & 63.5 & 64.8 \\
SSDD~\cite{Shimoda_2019_ICCV} & WideResNet-38 & $\mathcal{I}$ & 64.9 & 65.5 \\
\midrule
\multicolumn{5}{@{}l}{\scriptsize \textit{Two stage}} \\
\midrule
Ours + DeepLabv3+ & ResNet-101 & $\mathcal{I}$ & 65.7 & 66.6 \\
Ours + DeepLabv3+ & Xception-65 & $\mathcal{I}$ & 66.8 & 67.3\\
\bottomrule
\end{tabularx}
\caption{\textbf{Mean IoU (\%) on Pascal VOC validation and test sets.}
We train DeepLabv3+ in a fully supervised regime on pseudo ground truth obtained from our method (with CRF refinement).
Under an equivalent level of supervision, our two-stage approach outperforms the previous state of the art, trained in three or more stages,
and performs on par with other multi-stage frameworks relying on additional data and saliency detection \cite{Lee_2019_ICCV}.
}
\label{table:finetuning}
\end{table}
Although we intentionally omitted the common practice of training a standalone network on our pseudo labels,
we show that, in fact, we can achieve state-of-the-art results in a multi-stage setting as well.
We use our pseudo labels on the train split of Pascal VOC (see \cref{table:seg_training}) and train a segmentation model DeepLabv3+~\cite{deeplabv3plus2018} in a fully supervised fashion.
\cref{table:finetuning} summarises the results.
We observe that the resulting simple two-stage pipeline outperforms other multi-stage frameworks under the same image-level supervision.
Remarkably, our method even matches the mask accuracy of Frame-to-Frame~\cite{Lee_2019_ICCV}, which not only utilises saliency detectors, but also relies on additional data (15K extra images) and sophisticated network models (\eg PWC-Net \cite{sun2018pwc}, FickleNet \cite{LeeKLLY19}).
\begin{table}
\small
\begin{tabularx}{\linewidth}{@{}XS[table-format=2.1]@{\hspace{1.5em}}S[table-format=2.1]@{\hspace{1em}}c@{\hspace{1em}}S[table-format=2.1]@{\hspace{1em}}S[table-format=2.1]@{}}
\toprule
& \multicolumn{2}{@{}c@{}}{Baseline (CAM)} & & \multicolumn{2}{@{}c@{}}{Ours} \\ \cmidrule(lr){2-3} \cmidrule{5-6}
Backbone & {w/o CRF} & {+ CRF} & & {w/o CRF} & {+ CRF} \\
\midrule
VGG16 & 41.2 & 38.0 & & 55.9 & 56.6 \\
ResNet-50 & 43.7 & 43.5 & & 60.4 & 64.1 \\
ResNet-101 & 46.2 & 45.2 & & 62.9 & 66.2 \\
WideResNet-38 & 44.9 & 45.2 & & 63.1 & 65.8 \\
\midrule
Mean & 44.0 & 43.0 & & 60.6 & 63.2 \\
\bottomrule
\end{tabularx}
\caption{\textbf{Segmentation quality (IoU, \%) on Pascal VOC validation.} We use ground-truth image-level labels to remove false positive classes from the masks to decouple the segmentation quality from the accuracy of the classifier.}
\label{table:backbones}
\end{table}
\section{Exchanging Backbones}
\label{sec:backbones}
Here we confirm that our segmentation method generalises well to other backbone architectures.
We choose VGG16 \cite{SimonyanZ14a}, ResNet-50 and ResNet-101 \cite{HeZRS16} -- all widely used network architectures -- as a drop-in alternative to WideResNet-38 \cite{wu2019wider}, which we use for all other experiments.
We train these models on $448\times448$ image crops using the same data augmentation as before.
We use multi-scale inference with image sides varying by factors $1.0$, $0.75$, $1.25$, $1.5$.
These scales differ slightly from the ones used in the main experiments; we found them to slightly improve the IoU on average.
We re-evaluate our WideResNet-38 on the same scales to make the results from different backbones compatible.
To measure the segmentation accuracy alone, we report validation IoU on the masks with the false positives removed using ground-truth labels.
\cref{table:backbones} summarises the results.
The results show a clear improvement over the CAM baseline for all backbones: the average improvement without CRF post-processing is $16.6\%$ IoU and $20.2\%$ IoU with CRF refinement.
\begin{figure*}[t]
\captionsetup[subfigure]{labelformat=empty}
\centering
\begin{subfigure}{0.137\linewidth}
\caption{Prediction}
\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_000753/2008_000753_00.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}
\caption{Iteration 1}
\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_000753/2008_000753_01.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}
\caption{Iteration 3}
\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_000753/2008_000753_03.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}
\caption{Iteration 5}
\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_000753/2008_000753_05.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}
\caption{Iteration 7}
\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_000753/2008_000753_07.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}
\caption{Iteration 10}
\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_000753/2008_000753_10.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}
\caption{Ground Truth}
\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_000753/2008_000753_gt.jpg}\end{subfigure}
\vspace{1mm}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_004730/2008_004730_00.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_004730/2008_004730_01.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_004730/2008_004730_03.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_004730/2008_004730_05.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_004730/2008_004730_07.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_004730/2008_004730_10.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_004730/2008_004730_gt.jpg}\end{subfigure}
\vspace{1mm}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_002877/2010_002877_00.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_002877/2010_002877_01.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_002877/2010_002877_03.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_002877/2010_002877_05.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_002877/2010_002877_07.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_002877/2010_002877_10.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_002877/2010_002877_gt.jpg}\end{subfigure}
\vspace{1mm}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2009_004272/2009_004272_00.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2009_004272/2009_004272_01.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2009_004272/2009_004272_03.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2009_004272/2009_004272_05.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2009_004272/2009_004272_07.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2009_004272/2009_004272_10.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2009_004272/2009_004272_gt.jpg}\end{subfigure}
\vspace{1mm}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_008218/2008_008218_00.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_008218/2008_008218_01.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_008218/2008_008218_03.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_008218/2008_008218_05.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_008218/2008_008218_07.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_008218/2008_008218_10.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2008_008218/2008_008218_gt.jpg}\end{subfigure}
\vspace{1mm}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_001652/2010_001652_00.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_001652/2010_001652_01.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_001652/2010_001652_03.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_001652/2010_001652_05.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_001652/2010_001652_07.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_001652/2010_001652_10.jpg}\end{subfigure}
\begin{subfigure}{0.137\linewidth}\includegraphics[width=\linewidth]{./figures_supp/pamr_iter/2010_001652/2010_001652_gt.jpg}\end{subfigure}
\caption{\textbf{Visualisation of PAMR iterations.} The initial model predictions suffer from local inconsistency: mask boundaries do not align with available visual cues. Our PAMR module iteratively revises the masks to alleviate this problem. Our model uses the mask from the last iteration for self-supervision.}
\label{fig:pamr_iter}
\vspace{-0.5em}
\end{figure*}
\section{Introduction}
The first few years of the millennium have seen increased support assembled
for the hierarchical structure formation paradigm (Spergel et~al. 2003).
In this model, small ripples in the density of the primordial universe
are amplified by the force of gravity acting over billions of years.
The most successful model, a universe in which cold dark matter outweighs
baryonic matter and in which the rate of expansion is accelerating due
to a dynamically dominant dark energy component, agrees spectacularly
well with the latest measurements of the primordial spectrum of density
perturbations, as shown in Fig.~\ref{fig:pk}.
Physical models of galaxy formation in a hierarchical universe are also
reaching maturity. The roots of modern-day ``semi-analytical'' galaxy
formation models actually predate the cold dark matter cosmology and
took hold in the 1970s, with the papers by Press \& Schechter (1974)
and White \& Rees (1978). These papers set out the basic ideas which underpin
the approach, namely that galaxies form by the radiative cooling of baryons
inside dark matter haloes which were assembled by a merging process driven
by gravitational instability. The 1990s saw the first detailed calculations
based on these ideas which established the validity
of the approach (White \& Frenk 1991; Kauffmann, White \& Charlot 1993;
Lacey et~al. 1993; Cole et~al. 1994). These models are now firmly
established as a powerful tool which can generate testable predictions,
connecting hierarchical clustering cosmologies to observations of the
galaxy population at different epochs in the history of the universe.
\begin{figure}
{\epsfxsize=10.truecm
\epsfbox[-100 150 590 720]{figs/baugh_f1.ps}}
\caption{
The power spectrum of temperature fluctuations in the cosmic microwave
background radiation (the first year WMAP data; Hinshaw et~al. 2003)
plotted on the same scale as the power spectrum of galaxy clustering
measured from the two-degree field galaxy redshift survey by
Cole et~al. (2005). Figure courtesy of Ariel Sanchez, based on results
from Sanchez et~al. (2006).
}
\label{fig:pk}
\end{figure}
Now that the parameter space which defines the background cosmology is
shrinking (see for example Sanchez et~al. 2006), semi-analytical models
of galaxy formation are entering a new phase. Coupled with the explosion
of observational data available for galaxies at high redshift, the focus
is shifting towards a critical assessment of the physics implemented in
the models. The modular nature of the models and their speed means that
different prescriptions can be tested for a particular physical process.
The ongoing efforts to improve the modelling of the various phenomena
involved in the galaxy formation process are inevitable, given their
complexity and our lack of detailed understanding of the relevant physics.
\begin{figure}[t]
{\epsfxsize=12.truecm
\epsfbox[-50 150 590 690]{figs/baugh_f2.ps}}
\caption{
The star formation histories for four massive
galaxies, as predicted by the {\tt GALFORM} model,
plotted as a function of the age of the model universe.
The total star formation rate (solid line) takes into account
the quiescent star formation in the progenitors of the present
day object (dashed line), and includes starbursts triggered by
galaxy mergers (dotted line). The smooth dotted curves show
simple exponential star formation laws for comparison; these
star formation histories start when the universe was 3\,Gyr old
and have e-fold times of 0.1, 1 and 10 Gyr.
}
\label{fig:sfhist}
\end{figure}
The problem of matching the bright end of the local field galaxy luminosity
function provides a good illustration of this point (Benson et~al. 2003).
The first generation of semi-analytical models had little problem in
matching the exponential break observed in the galaxy luminosity function.
Today, modellers find this a more challenging task for two reasons:
(i) A shift in the favoured cosmological model. Today, the ``standard''
cold dark matter model has a matter density parameter less than one third of
the critical density (Sanchez et~al. 2006).
Structures tend to form earlier in a dark energy dominated universe,
so, coupled with the slightly older age for the universe in this
cosmology, more massive haloes have been able to cool gas for a longer
period than would have been the case in a universe with the critical
density in matter. This leads to more gas cooling in these haloes. This
problem is exacerbated by the tighter observational constraints on the
baryon density, which typically result in higher baryon densities being
input into the models than would have been used in earlier calculations.
(ii) The luminosity function is not being considered in isolation. The
continued development of the models now means that they are able to predict
a much wider range of galaxy properties than was possible in the early days.
This increased sophistication actually makes it more difficult to match
one particular observation because other galaxy properties can be
adversely affected by parameter changes.
The attempt to find a solution to the problem of matching the sharpness
of the observed break of the bright end of the luminosity function
has led to a revision of the treatments of gas cooling and feedback
in massive haloes used in the models
(e.g. Benson et~al. 2003; Bower et~al. 2006; Croton et~al. 2006;
de Lucia et~al. 2006). While some
of our more conservative colleagues view such changes as sufficient
grounds on which to dismiss the semi-analytical approach altogether,
what we are witnessing is simply the application of the scientific method;
when a model prediction is found to be incorrect, this shows that
an ingredient in the scenario is
either modelled incorrectly or is missing altogether. The resolution of
the discrepancy leads to a new model in which our understanding of
galaxy formation has been advanced. Now that the utility of this approach
has gained general acceptance, we should welcome conflict between
observations and theoretical predictions, as this will drive future
progress in the models.
In this article, we deal with another area in which the models have faced
a stern challenge; matching the abundance of high redshift galaxies detected
through their dust emission in the sub-mm. At first sight, the galaxies
seen with the {\tt SCUBA} instrument at 850 microns appeared to be
massive galaxies at high redshift, with star formation rates approaching
$1000\,{\rm M}_{\odot}\,{\rm yr}^{-1}$
(Smail et~al. 1997). Such objects would dominate the star formation in the
early Universe, dwarfing the contribution of galaxies seen in the rest-frame
ultraviolet (Hughes et~al. 1998; Barger et~al. 1998). Our solution to this
problem is controversial, but spawns a number of testable predictions.
On the whole these predictions agree remarkably well with observations,
as we will discuss. In \S~2, we give a brief overview
of our model of galaxy formation. Our treatment of the
impact of dust on the spectral energy distribution
of our model galaxies is a novel aspect of our model and is described
in \S~3. We present the main results of interest to
the participants of this workshop in \S~4.
Further tests of the model are listed in \S~5 along with our conclusions.
\section{The galaxy formation model}
\label{sec:model}
We use the semi-analytical model {\tt GALFORM}; the content
of the model and the philosophy behind it are set out in detail in
Cole et~al. (2000). Important revisions to the basic model are described
in Benson et~al. (2002, 2003). Our solution to the problem of accommodating
the number counts of {\tt SCUBA} sources in the cold dark matter model
is explained in Baugh et~al. (2005).
In summary, the aim of {\tt GALFORM} is to carry out an {\it ab initio}
calculation of the formation and evolution of galaxies, in a background
cosmology in which structures grow hierarchically. The physical ingredients
considered in the model include:
(i) The formation of dark matter haloes through mergers and accretion
of material.
(ii) The collapse of baryons into the gravitational potential wells of
dark matter haloes.
(iii) The radiative cooling of gas that is shock heated during infall
into the dark halo.
(iv) The formation of a rotationally supported disk of cold gas.
(v) The formation of stars from the cold gas.
(vi) The injection of energy into the interstellar medium, through
supernova explosions or the accretion of material onto a supermassive
black hole.
(vii) The chemical evolution of the interstellar medium, stars and
the hot gas.
(viii) The merger of galaxies following the merger of their host dark matter
haloes, due to dynamical friction.
(ix) The formation of spheroids during mergers due to the rearrangement of
pre-existing stars (i.e. the disk and bulge of the progenitor
galaxies) and the formation of stars in a burst.
(x) The construction of a composite stellar population for each galaxy,
yielding a spectral energy distribution, including the effects of
dust extinction, a point to which we shall return in more detail later
on in this section.
Four examples of the star formation histories predicted by {\tt GALFORM}
are shown in Fig.~\ref{fig:sfhist}. The cases shown are massive galaxies
at the present day. The star formation history of a galaxy is constructed
by considering the quiescent star formation in all of its progenitors and
all the bursts of star formation triggered by galaxy mergers. A common
{\it assumption} for the star formation history of a galaxy made in
many other models is that stars form with an exponentially declining rate;
some examples of such histories are marked on each panel with
illustrative e-folding times. The star formation histories {\it predicted}
by {\tt GALFORM} are quite different from the simple exponential form.
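For reference, the simple exponential laws plotted for comparison in Fig.~\ref{fig:sfhist} take the familiar form
\begin{equation}
\psi(t) = \psi_{0}\, \exp\left[-(t - t_{\rm form})/\tau\right], \qquad t \geq t_{\rm form},
\end{equation}
with $t_{\rm form}=3\,$Gyr and e-folding times $\tau = 0.1$, 1 and 10\,Gyr in the figure; the normalisation $\psi_{0}$ is arbitrary here. The {\tt GALFORM} histories, by contrast, are bursty superpositions of many such episodes in different progenitors, and are not well described by any single value of $\tau$.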
\section{The effect of dust on the spectral energy distribution}
\label{sec:dust}
In order to make predictions for the sub-mm emission from galaxies,
we need to take into account the effect of dust on the spectral
energy distribution of galaxies.
Previous work in this area has either employed
template spectral energy distributions based on local galaxies (e.g.
Blain et~al. 1999; Devriendt \& Guiderdoni 2000) or has treated the
temperature of the dust, $T_{\rm d}$, as a free parameter (e.g. Kaviani,
Haehnelt \& Kauffmann 2003). The dust luminosity per unit frequency at
long wavelengths scales as $T^{-5}_{\rm d}$ for a given
bolometric dust luminosity, for a standard assumption about the
emissivity of the dust.
Given this strong dependence of luminosity on $T_{\rm d}$, it would appear
trivial to match the observed sub-mm counts by simply making a modest
tweak to the dust temperature. Unfortunately, such a model is unphysical.
The dust temperature should be set by requiring that the dust grains be
in thermal equilibrium, with a balance between radiative heating
and cooling. With this criterion met, the dust luminosity per unit frequency
depends rather less dramatically upon the bolometric luminosity and the
dust mass; significant changes to these properties are required to change
the dust luminosity (see Baugh et~al. 2005 for a discussion).
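The origin of the $T_{\rm d}^{-5}$ scaling quoted above is worth spelling out explicitly. Assuming, as is standard, that the dust emits as a modified blackbody with emissivity $\propto \nu^{\beta}$, the luminosity per unit frequency is $L_{\nu} \propto M_{\rm d}\, \nu^{\beta} B_{\nu}(T_{\rm d})$, which integrates to a bolometric luminosity $L_{\rm bol} \propto M_{\rm d}\, T_{\rm d}^{4+\beta}$. In the Rayleigh--Jeans regime relevant at sub-mm wavelengths, $B_{\nu} \propto \nu^{2} T_{\rm d}$, so eliminating the dust mass at fixed $L_{\rm bol}$ gives
\begin{equation}
L_{\nu} \propto L_{\rm bol}\, T_{\rm d}^{-(3+\beta)},
\end{equation}
which reproduces the quoted $T_{\rm d}^{-5}$ dependence for the commonly adopted emissivity index $\beta=2$.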
An important feature of our model is a physically consistent treatment of the
extinction of starlight by dust and the reprocessing of this energy
at longer wavelengths. This is achieved by using
the {\tt GRASIL} spectro-photometric model introduced by Silva et~al. (1998).
{\tt GRASIL} computes the emission from both the stars and dust in a galaxy,
based on the star formation and metal enrichment histories predicted by the
semi-analytical model (Granato et~al. 2000). {\tt GRASIL} includes
radiative transfer through a two-phase dust medium, with a diffuse component
and giant molecular clouds, and a distribution of dust grain sizes.
Stars are assumed to form inside the clouds and then gradually migrate out of them. The
output from {\tt GRASIL} is the galaxy SED from the far-UV to the sub-mm.
\section{Results}
\label{sec:res}
\begin{figure}[t]
{\epsfxsize=12.truecm
\epsfbox[-50 150 590 700]{figs/baugh_f3.ps}}
\caption{
The cumulative number counts of galaxies at 850 microns predicted
by the Baugh et~al. (2005) model, compared with a compilation of
observational estimates, as indicated by the legend. The solid curve
shows the total number counts, the dashed curve the contribution from
galaxies which are undergoing a galaxy merger induced starburst and
the dotted curve shows the counts from galaxies that are forming stars
quiescently in galactic discs.
}
\label{fig:s850}
\end{figure}
\begin{figure}
{\epsfxsize=12.truecm
\epsfbox[-50 150 590 700]{figs/baugh_f4.ps}}
{\epsfxsize=12.truecm
\epsfbox[-50 150 590 700]{figs/baugh_f5.ps}}
\caption{
The impact on the predicted number counts of switching off key
ingredients of the model. The fiducial model from Baugh et~al.
(2005) is shown by the solid line; as Fig.~\ref{fig:s850} shows
this model matches the observed counts remarkably well.
In the upper panel, the dashed line shows the predicted counts if
we adopt a standard IMF for star formation in merger induced bursts,
rather than the flat IMF used in the fiducial model.
In the bottom panel, the dashed curve shows how the counts change if
starbursts triggered by minor mergers (i.e. when a gas-rich disk is hit
by a small satellite) are switched off.
}
\label{fig:s850comp}
\end{figure}
\begin{figure}[t]
{\epsfxsize=11.truecm
\epsfbox[-50 150 590 700]{figs/baugh_f6.ps}}
\caption{
The predicted redshift distribution at a series of 850 micron
fluxes (as indicated in the key). The solid lines show the
redshift distribution of all galaxies, the dashed lines show
ongoing bursts and the dotted lines show galaxies which are
forming stars quiescently. The median redshift ($z_{50}$) and the
redshift below which 90\% of galaxies are predicted to lie ($z_{90}$) are
also given on each panel.
}
\label{fig:dndz}
\end{figure}
Previous attempts to match the observed counts of sub-mm sources using the
combined {\tt GALFORM} and {\tt GRASIL} machinery, whilst retaining the
successes of the models at other redshifts, were unsuccessful, failing
to match the counts by over an order of magnitude (see Baugh et~al. 2005).
There are two principal reasons for the increased counts of sub-mm galaxies in
the model introduced by Baugh et~al. (2005), as shown by Fig.~\ref{fig:s850}.
Firstly, more star formation takes place in starbursts than in
earlier models. There are two reasons for this. In the new model,
the timescale for quiescent star formation is independent of redshift,
instead of scaling with the dynamical time of the galaxy as in the fiducial
model of Cole et~al. (2000). High redshift disks consequently have
larger gas fractions than before, resulting in gas rich starbursts
at early epochs. In addition, in the new model, a burst can be triggered
by the accretion of a satellite galaxy which brings in a modest amount of
mass. Such a collision is assumed to leave the stellar disk of the primary
galaxy intact, but induces instabilities in the cold gas present, driving
it to the centre of the galaxy, where it takes part in a burst.
Secondly, we assume that star formation induced by mergers produces stars
with a flat initial mass function (IMF). With a larger proportion of high mass
stars, the total energy radiated in the ultra-violet per unit mass of stars
produced is increased, thus increasing the amount of radiation heating
the dust.
Moreover, the flat IMF produces a higher yield of metals than a
standard, solar neighbourhood IMF, which means more dust.
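The leverage that the IMF slope exerts on both effects can be illustrated with some simple arithmetic. For a single power-law IMF, ${\rm d}N/{\rm d}m \propto m^{-(1+x)}$, the fraction of stellar mass formed in stars above a given mass follows from the first moment of the IMF. The short Python sketch below is purely illustrative: the mass limits of 0.15 and 120\,${\rm M}_{\odot}$ are assumptions for this example rather than the exact {\tt GALFORM} values.
\begin{verbatim}
from scipy.integrate import quad

def mass_fraction_above(m_cut, x, m_lo=0.15, m_hi=120.0):
    # Fraction of stellar mass born in stars with m > m_cut,
    # for an IMF dN/dm proportional to m^-(1+x).
    imf_mass = lambda m: m**(-x)   # m * dN/dm, up to a constant
    num, _ = quad(imf_mass, m_cut, m_hi)
    den, _ = quad(imf_mass, m_lo, m_hi)
    return num / den

for x, label in [(0.0, "flat (burst) IMF"), (1.35, "Salpeter-like IMF")]:
    print(label, mass_fraction_above(8.0, x))
# flat: ~0.94 of the mass is in stars above 8 Msun;
# Salpeter-like: ~0.17. Roughly five times more mass goes into
# the massive stars that dominate the UV output and metal yield.
\end{verbatim}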
The impact of these two ingredients is readily apparent from the
comparisons presented in Fig.~\ref{fig:s850comp}. One of the beauties
of semi-analytical modelling is that certain aspects of the model
can be switched on and off in order to assess their impact on the
predictions. These comparisons show that the assumption of a flat
IMF in starbursts is the main factor responsible for the model
reproducing the observed counts. The model predictions for the
redshift distribution of sub-mm sources are shown in
Fig.~\ref{fig:dndz}. At bright fluxes, the predictions are in
good agreement with the median redshift determined by Chapman et~al. (2003).
\section{Conclusions}
\label{sec:conc}
The assumption of a flat IMF in starbursts is undoubtedly controversial.
It is therefore important to explore the predictions of the model in
detail, to find other evidence in support of this choice. The successes of
our new model include:
\begin{itemize}
\item The reproduction of the properties of the local galaxy population,
such as the optical and near infrared luminosity functions and the
distribution
of disk scalelengths. This is the first hurdle that any realistic model of
galaxy formation should overcome. Not only does this undermine any claims of
chicanery when changing model parameters, it also permits a meaningful
discussion of the descendants of high redshift galaxies.
\item The recovery of the luminosity function of Lyman break galaxies
at $z=3$ and $z=4$, with a realistic degree of dust extinction in the
rest frame UV, computed by tracking the chemical evolution of the model
galaxies and calculating the sizes of the disk and bulge components.
\item Nagashima et~al. (2005a) show that the model with a flat IMF
reproduces the observed abundances of elements in the hot gas in clusters.
\item Nagashima et~al. (2005b) applied the same model to the calculation
of element abundances in elliptical galaxies and again found better
agreement with the model in which starbursts have a flat IMF.
\item Le Delliou et~al. (2005, 2006) computed the abundance of Lyman-alpha
emitters using {\tt GALFORM}. The Baugh et~al. model gives a somewhat better
match to the shape of the observed counts than a model with a standard IMF.
\end{itemize}
Granato et~al. (2004) present an alternative model in which they consider
the evolution of quasars and spheroids. These authors find that they can
explain the number counts of sub-mm galaxies without using a non-standard
IMF, by using different feedback and gas cooling prescriptions from those
employed in the model of Baugh et~al. (2005). While it is not clear that these
recipes would still work in a fully fledged semi-analytical model (Granato et~al. do
not follow galactic disks nor do they consider mergers between galaxies or
between haloes), it will be interesting to see if the new generation of
semi-analytical models with modified cooling and feedback prescriptions in
massive haloes can reproduce the number counts of dusty galaxies with a
standard choice of IMF (Croton et~al. 2006; Bower et~al. 2006).
\acknowledgements CMB and AJB are supported by the Royal Society; this
research was also supported by {\tt PPARC}.
\section{Introduction}
{Saturn's regular moons exhibit a very diverse range of spectral and photometric properties, with Enceladus having an extremely bright and ice-rich surface, while the leading side of Iapetus is exceptionally dark due to being covered with a layer of debris derived from Saturn's irregular satellites \citep[for recent reviews, see][]{Hendrix18, Thomas18, Verbiscer18}. For the mid-sized moons orbiting between Titan and the rings (i.e. Mimas, Enceladus, Tethys, Dione and Rhea), many of the observed spectral and photometric trends have been attributed to interactions with the E ring. This broad and diffuse ring consists of dust-sized, ice-rich particles generated by Enceladus' geological activity that impact the surfaces of different moons at different rates. The brightness and density of the E ring is strongly correlated with the geometric albedos of these satellites at visible \citep{Verbiscer07} and radio wavelengths \citep{Ostro10, LeGall19}, as well as spectral features like the depth of water-ice absorption bands \citep{Filacchione12, Filacchione13}. Furthermore, there are brightness asymmetries between the leading and trailing sides of these moons that can be attributed to differences in the fluxes of E-ring particles \citep{Buratti98, Schenk11, Scipioni13, Scipioni14, Hendrix18, LeGall19}.}
However, there are several smaller moons orbiting within the E ring whose {surface scattering properties} deviate from the trends observed among the larger moons. On the one hand, four extremely small moons (Aegaeon, Anthe, Methone and Pallene) are found within the E-ring's inner flank. Aegaeon is situated between Janus and Mimas in the G ring, while Methone, Anthe and Pallene orbit between Mimas and Enceladus. One might therefore expect these moons to follow the same trend as Janus, Mimas and Enceladus, but in fact they appear to be somewhat darker than this trend would predict \citep{Thomas18, Verbiscer18}. On the other hand, the small moons Telesto and Calypso share Tethys' orbit, while Helene and Polydeuces share Dione's orbit, but they do not all appear to have the same {spectrophotometric properties} as their larger companions \citep{Verbiscer07, Verbiscer18, Filacchione12, Filacchione13}.
A major challenge in interpreting existing brightness estimates of these objects is that many of them are significantly elongated, and so their brightness varies substantially depending on their orientation relative to the observer and the Sun, leading to large scatter in the disk-integrated brightness estimates at any given phase angle. Fortunately, all these objects are spin-locked and so their orientation relative to the Sun and the spacecraft can be securely predicted. Furthermore, {almost all of these moons were observed at high enough resolution to determine their shapes, and Aegaeon, Methone and Pallene in particular are very close to perfect ellipsoids.} It should therefore be possible to model their shape-related brightness variations to a fair degree of accuracy, and thereby derive estimates of their surface brightnesses that can be compared to those of the larger moons.
This paper describes a new investigation of Saturn's moons that uses a photometric model to obtain precise and comparable measurements of the surface brightnesses for both the small and mid-sized moons orbiting interior to Titan. The brightness estimates for the mid-sized moons, along with the smaller moons Janus and Epimetheus, are consistent with previous studies in that they are clearly correlated with the local flux of E-ring particles. However, this work also confirms that many small moons deviate from this basic trend, implying that other processes influence the moons' surface brightnesses. Specifically, we find that (1) Aegaeon is exceptionally dark, (2) Methone and Pallene are darker than one would expect given their locations between Mimas and Enceladus, (3) Calypso and Helene are brighter than their larger companions, and (4) Prometheus and Pandora are brighter than moons orbiting nearby like Atlas, Janus and Epimetheus. Aegaeon, Methone, and Pallene are most likely dark because they occupy belts of high energy proton fluxes. While the mechanism by which this radiation darkens their surfaces is still obscure, we find that the overall brightnesses of these moons are probably determined by the ratio of the energetic proton flux to the E-ring particle flux. On the other hand, the excess brightness of Helene, Calypso, Prometheus and Pandora is most likely due to some localized increase in the particle flux. For Prometheus and Pandora, the additional flux of particles from the F ring is probably responsible for increasing their brightness. However, there is no obvious particle source that would preferentially brighten Calypso and Helene more than their co-orbital companions, so the brightness of these moons either involves a previously-unknown asymmetry in the E-ring particle flux or is a transient phenomenon due to a recent event like an impact.
Section~\ref{data} below describes the Cassini imaging data used in this study, and how it is transformed into estimates of the disk-integrated brightness of Saturn's various moons. Section~\ref{model} then describes the photometric model we use to convert these disk-integrated brightness estimates into estimates of the moons' surface brightness while accounting for the moons' variably elongated shapes. Finally, Section~\ref{trends} describes the trends among these shape-corrected brightness estimates and their implications, while Section~\ref{summary} summarizes our findings.
\section{Observations and Preliminary Data Reduction}
\label{data}
The disk-integrated brightness estimates considered in this study are derived from images obtained by the Narrow Angle Camera (NAC) of the Imaging Science Subsystem (ISS) onboard the Cassini Spacecraft \citep{Porco04}. These images were all calibrated using version 3.9 of the Cisscal package to remove dark current and instrumental electronic noise, apply flat-field corrections, and convert the raw data numbers to values of radiance factors $I/F$, a standard dimensionless measure of reflectance that is unity for an illuminated Lambertian surface viewed at normal incidence and emission angles \citep{Porco04, West10}. Here $I$ is the scattered intensity of light and ${\pi F}$ is the specific solar flux over the camera filter bandpass.
While this analysis focuses on the photometry of the small moons, in order to facilitate comparisons with the mid-sized satellites we will consider data for all the satellites interior to Titan except for Daphnis (whose location within a narrow gap in the main rings would have required specialized algorithms). We searched for images containing all moons obtained by the NAC through its clear filters using the OPUS search tool on the PDS ring-moon system node ({\tt https://pds-rings.seti.org/search}). For the small moons Aegaeon, Anthe, Methone and Pallene, as well as the trojan moons Telesto, Calypso, Helene and Polydeuces, we considered all images where the moon was observed at phase angles below 80$^\circ$ (at higher phase angles the signal-to-noise ratio for these moons was often too poor to be useful), while for Pan, Atlas, Prometheus, Pandora, Janus, Epimetheus, Mimas, Enceladus, Tethys, Dione and Rhea we only considered images where the moon was at phase angles between 20$^\circ$ and 40$^\circ$. This more restricted phase range corresponds to conditions with the best signal-to-noise ratio data for the smaller moons, and reduces the number of images that needed to be analyzed to a manageable level.
Since nearly all of the images of the small moons were unresolved, this analysis will only consider disk-integrated brightness estimates for the moons, which were computed following approaches similar to those described in \citet{Hedman10}. This process begins by selecting a region within each image that contains the entire signal from the moon based on visual inspection. Then, instrumental and ring backgrounds are removed from this region using one of three different procedures depending on the moon:
\begin{itemize}
\item {\bf Aegaeon and Pan } For these moons the dominant backgrounds come from the nearby rings (the A ring for Pan and the G ring for Aegaeon). Each image was therefore geometrically navigated based on the positions of stars in the field of view and the appropriate SPICE kernels \citep{Acton96} {using a variant of the CAVIAR software package {\tt (https://www.imcce.fr/recherche/equipes/pegase/caviar)}.} We then used regions extending 10 pixels on either side of the selected region along the moon's orbit to determine the mean background image brightness as a function of ringplane radius. This profile was then interpolated onto the pixels in the selected region containing the moon and removed from that region.
\item{\bf Atlas, Anthe, Methone, Pallene, Telesto, Calypso, Helene and Polydeuces.} For these moons, instrumental backgrounds dominate, and these are typically a stronger function of row than sample number \citep{West10}. Hence we used regions 10 columns wide on either side of the region containing the moon to define a background brightness level as a function of row number which was then removed from all the pixels in the selected regions.
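(A minimal code sketch of this column-strip procedure is given immediately after this list.)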
\item{\bf Prometheus, Pandora, Janus, Epimetheus, Mimas, Enceladus, Tethys, Dione and Rhea.} These larger moons were often resolved and so the signal-to-noise ratio was much higher than for the smaller moons. Hence we used regions 10 columns wide on either side of the region containing the moon to define a mean background brightness level that was removed from all the pixels in the selected regions.
\end{itemize}
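As an illustration of the column-strip background subtraction described in the second item above, consider the following minimal Python sketch. This is our own illustrative code rather than the production pipeline: it assumes the moon's signal has already been isolated in a sub-array {\tt region} of calibrated $I/F$ values, with flanking strips {\tt left} and {\tt right} each 10 columns wide.
\begin{verbatim}
import numpy as np

def subtract_row_background(region, left, right):
    # All arrays are indexed [row, column]; left and right are
    # the 10-column strips flanking the region containing the moon.
    strips = np.hstack([left, right])           # 20 background columns
    background = np.mean(strips, axis=1)        # mean I/F in each row
    return region - background[:, np.newaxis]   # broadcast over columns
\end{verbatim}
The constant-level subtraction used for the larger moons in the third item corresponds to replacing {\tt background} with the single scalar {\tt np.mean(strips)}.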
After removing the backgrounds, the total brightness of the object in each image was computed and expressed in terms of an effective area, $ A_{\rm eff}$, which is the equivalent area of material with $ I/F = 1$ that would be required to account for the observed brightness:
\begin{equation}
A_{\rm eff} = \sum_{x} \sum_{y} I/F_{x,y}\times \Omega_{pixel}\times D^2
\label{aeff}
\end{equation}
where $x$ and $y$ are the row and column numbers of the pixels in the
selected region, $I/F_{x,y}$ is the (background-subtracted) brightness in the $x,y$ pixel, $\Omega_{pixel} =$ (6 $\mu$rad$)^2$ is the assumed solid angle subtended by a NAC pixel, and $D$ is the distance between the spacecraft and the object during the observation {(designated as ``Range'' in Tables~\ref{aegaeontab}-~\ref{methonecoltab} of Appendix C).} The assumed values for $D$ are derived from the appropriate SPICE kernels \citep{Acton96}. This approach deviates from traditional integrated disk measurement convention in that an effective whole-disk area is measured at each phase angle rather than the magnitude equivalent of an average whole-disk reflectance. Two advantages of this approach for our study are that (1) it requires no \textit{a priori} knowledge of the object's average size and (2) it easily accommodates target blur in which subpixel-sized objects are smeared across several pixels due to spacecraft motion and/or long camera exposures. We also estimate the statistical uncertainty on $A_{\rm eff}$ based on the standard deviation of the brightness levels in the second region after any radial trends have been removed. Note that this procedure underestimates the true uncertainty in the measurements, but is still a useful way to identify images with low signal-to-noise ratios. For the smaller objects, we also computed their mean position in the field of view by computing the coordinates (in pixels) of the streak's center of light $x_c$ and $y_c$:
\begin{equation} x_c = \frac{\sum_x \sum_y x\times I/F_{x,y}}{\sum_x \sum_y I/F_{x,y}}
\end{equation}
\begin{equation} y_c = \frac{\sum_x \sum_y y\times I/F_{x,y}}{\sum_x \sum_y I/F_{x,y}} \end{equation}
These numbers are not used directly for any part of this study. However, they are useful for identifying images where the background levels were not removed properly, since in those images the computed center of light would fall outside the image of the moon.
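A minimal Python sketch of these two measurements (again our own illustrative code rather than the production pipeline) is given below; {\tt region} is a background-subtracted $I/F$ sub-array and {\tt range\_km} is the spacecraft--target distance in kilometers, so that the effective area of Equation~\ref{aeff} comes out in km$^2$.
\begin{verbatim}
import numpy as np

OMEGA_PIXEL = (6.0e-6)**2   # NAC pixel solid angle in steradians

def effective_area_and_center(region, range_km):
    # region is indexed [row, column] = [x, y] in the text's notation
    total = region.sum()
    a_eff = total * OMEGA_PIXEL * range_km**2   # effective area (km^2)
    rows, cols = np.indices(region.shape)
    x_c = (rows * region).sum() / total         # center-of-light row
    y_c = (cols * region).sum() / total         # center-of-light column
    return a_eff, x_c, y_c
\end{verbatim}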
For the larger moons, the signal-to-noise ratio for every image was sufficiently high that all unsaturated images yielded useful estimates of $A_{\rm eff}$. After excluding any images with saturated pixels, we had the following numbers of data points for these moons: 40 for Prometheus, 40 for Pandora, 53 for Janus, 47 for Epimetheus, 76 for Mimas, 82 for Enceladus, 67 for Tethys, 67 for Dione and 74 for Rhea {(see Tables~\ref{prometheustab}-~\ref{rheatab} in Appendix C)}. For the smaller moons, we visually inspected the regions containing the moons and excluded any images where these regions were obviously corrupted by bad pixels or cosmic rays, or where the computed center of light was noticeably displaced from the bright pixels containing the signal from the moon. For Aegaeon, Anthe, Methone and Pallene, we also excluded any images where $A_{\rm eff}$ was negative or more than 10 times the median value among all the images, as well as any images where the brightest pixel in the relevant region was more than 10 times the median pixel brightness times the number of pixels in the region (this removed images contaminated by a bad pixel or cosmic ray that was not obvious upon visual inspection). Finally, we excluded any images of Aegaeon, Anthe, Methone and Pallene with exposures less than 0.5 seconds because these usually had poor signal-to-noise ratios, and any images of Anthe, Methone and Pallene with exposures longer than 2 seconds, where unresolved images could be saturated. Similarly, we excluded images of Polydeuces with exposures longer than 0.68 seconds and any images of Telesto, Calypso and Helene with exposures longer than 0.15 seconds. After these selections, the final numbers of measurements for the small moons were: 168 for Aegaeon, 167 for Anthe, 187 for Methone, 159 for Pallene, 41 for Pan, 71 for Atlas, 142 for Telesto, 157 for Calypso, 122 for Helene and 136 for Polydeuces {(see Tables~\ref{aegaeontab}-~\ref{atlastab} in Appendix C)}. Estimates of $A_{\rm eff}$ for these images, along with relevant geometric parameters like the sub-solar and sub-observer latitudes and longitudes, are provided in {Tables~\ref{aegaeontab}-~\ref{rheatab} in Appendix C.}
\begin{figure*}
\resizebox{6.5in}{!}{\includegraphics{gobj_phaserotplot_mooncomp_080419.pdf}}
\caption{Summary of the brightness measurements for Aegaeon, Anthe, Methone and Pallene. The left-hand plots show the effective areas $A_{\rm eff}$ of the moons as a function of phase angle, with the data points color coded by the quadrant viewed by the spacecraft. The right-hand plots show the fractional residuals of the brightness estimates relative to the quadratic trend shown as a solid line in the left-hand plots. Note that all these objects are consistently brighter when their leading or trailing sides are viewed than when their sub-Saturn or anti-Saturn sides are viewed.}
\label{phaserot}
\end{figure*}
\begin{figure*}
\resizebox{6.5in}{!}{\includegraphics{trojans_phaserotplot_moonmodcomp_lam_xmod_080419.pdf}}
\caption{Summary of the brightness measurements for the trojan moons Telesto, Calypso, Helene and Polydeuces. The left-hand plots show the effective areas $A_{\rm eff}$ of the moons as a function of phase angle, with the data points color coded by the quadrant viewed by the spacecraft. The right-hand plots show the fractional residuals of the brightness estimates relative to the quadratic trend shown as a solid line in the left-hand plots. Note that these objects are consistently brighter when their leading or trailing sides are viewed than they are when their sub-Saturn or anti-Saturn sides are viewed.}
\label{tphaserot}
\end{figure*}
\begin{figure*}
\resizebox{6.5in}{!}{\includegraphics{rings_phaserotplot_moonmodcomp_lam_xmod_080419.pdf}}
\caption{Summary of the brightness measurements for the ring moons. The left-hand plots show the effective areas $A_{\rm eff}$ of the moons as a function of phase angle, with the data points color coded by the quadrant viewed by the spacecraft. The right-hand plots show the fractional residuals of the brightness estimates relative to the lienar trend shown as a solid line in the left-hand plots. Note that these objects are often brighter when their leading or trailing sides are viewed than they are when their sub-Saturn or anti-Saturn sides are viewed.}
\label{rphaserot}
\end{figure*}
\begin{deluxetable*}{lcccccc}
\tablecaption{Satellite shapes from resolved images \citep[and Appendix B]{Thomas10, Thomas13, Thomas19}
\label{shapes}}
\tablehead{
\colhead{Object} & \colhead{$a$ (km)} & \colhead{$b$ (km)} & \colhead{$c$ (km)} & \colhead{$b/a$} & \colhead{$c/b$} & \colhead{$R_m$ (km)} \\}
\startdata
Aegaeon & $0.70\pm0.05$ & $0.25\pm0.06$ & $0.20\pm 0.08$ & $0.36\pm0.09$ & $0.80\pm 0.37$ & $0.33\pm0.06$ \\
Methone & $1.94\pm0.02$ & $1.29\pm0.04$ & $1.21\pm0.02$ & $0.66\pm0.02$ & $0.94\pm0.03$ & $1.45\pm0.03$ \\
Pallene & $2.88\pm0.07$ & $2.08\pm0.07$ & $1.84\pm0.07$ & $0.72\pm0.03$ & $0.89\pm0.04$ & $2.23\pm0.07$ \\ \hline
Mimas & 207.8$\pm$0.5 & 196.7$\pm$0.5 & 190.6$\pm$0.3 & 0.947$\pm$0.003 & 0.969$\pm$0.003 & 198.2$\pm$0.4 \\
Enceladus & 256.6$\pm$0.3 & 251.4$\pm$0.2 & 248.3$\pm$0.2 & 0.980$\pm$0.002 & 0.988$\pm$0.001 & 252.1$\pm$0.2 \\
Tethys & 538.4$\pm$0.3 & 528.3$\pm$1.1 & 526.3$\pm$0.6 & 0.981$\pm$0.002 & 0.996$\pm$0.002 & 531.0$\pm$0.6 \\
Dione & 563.4$\pm$0.6 & 561.3$\pm$0.5 & 559.6$\pm$0.4 & 0.996$\pm$0.001 & 0.997$\pm$0.001 & 561.4$\pm$0.4 \\
Rhea &765.0$\pm$0.7 & 763.1$\pm$0.6 & 762.4$\pm$0.6 & 0.998$\pm$0.001 & 0.999$\pm$0.001 & 763.5$\pm$0.6 \\ \hline
Pan & 17.3$\pm$0.2 & 14.1$\pm$0.2 & 10.5$\pm$0.5 & 0.815$\pm$0.015 & 0.745$\pm$0.037 & 13.7$\pm$0.3 \\
Atlas & 20.4$\pm$0.1 & 17.7$\pm$0.2 & 9.3$\pm$0.3 & 0.868$\pm$0.011 & 0.525$\pm$0.018 & 14.9$\pm$0.2 \\
Prometheus & 68.5$\pm$0.5 & 40.5$\pm$1.4 & 28.1$\pm$0.4 & 0.591$\pm$0.021 & 0.694$\pm$0.026 & 42.8$\pm$0.7 \\
Pandora & 51.5$\pm$0.3 & 39.5$\pm$0.3 & 31.5$\pm$0.2 & 0.767$\pm$0.007 & 0.797$\pm$0.008 & 40.0$\pm$0.3 \\
Epimetheus & 64.8$\pm$0.3 & 58.1$\pm$0.2 & 53.5$\pm$0.2 & 0.897$\pm$0.005 & 0.921$\pm$0.005 & 58.6$\pm$0.3 \\
Janus & 101.7$\pm$0.9 & 92.9$\pm$0.3 & 74.5$\pm$0.3 & 0.913$\pm$0.009 & 0.802$\pm$0.004 & 89.0$\pm$0.5 \\ \hline
Telesto & 16.6$\pm$0.3 & 11.7$\pm$0.3 & 9.6$\pm$0.2 & 0.705$\pm$0.022 & 0.821$\pm$0.027 & 12.3$\pm$0.3\\
Calypso & 14.7$\pm$0.3 & 9.3$\pm$0.9 & 6.4$\pm$0.3 & 0.632$\pm$0.062 & 0.688$\pm$0.073 & 9.5$\pm$0.4\\
Helene & 22.6$\pm$0.2 & 19.6$\pm$0.3 & 13.3$\pm$0.2& 0.867$\pm$0.015 & 0.679$\pm$0.015 & 18.1$\pm$0.2 \\
Polydeuces & 1.75$\pm$0.2 & 1.55$\pm$0.2 & 1.31$\pm$0.2 & 0.89$\pm$0.15 & 0.85$\pm$0.17 & 1.53$\pm$0.2 \\ \hline
\enddata
\end{deluxetable*}
\begin{figure*}
\resizebox{6.5in}{!}{\includegraphics{gobj_phaserotplot_moonmodcomp_area_lam_080419.pdf}}
\caption{Average reflectances of Aegaeon, Methone and Pallene. The left-hand plots show the average reflectances of the moons as a function of phase angle, with the data points color coded by the quadrant viewed by the spacecraft. The right-hand plots show the fractional residuals of the brightness estimates relative to the quadratic trend shown as a solid line in the left-hand plots. While the longitudinal brightness variations are reduced compared to the effective areas shown in Figure~\ref{phaserot}, they are not completely removed, especially at higher phase angles.}
\label{phaserotarea}
\end{figure*}
\begin{figure*}
\resizebox{6.5in}{!}{\includegraphics{trojans_phaserotplot_moonmodcomp_lam_area_080419.pdf}}
\caption{Average reflectances of Telesto, Calypso, Helene and Polydeuces with their nominal shapes. The left-hand plots show the average reflectances of the moons as a function of phase angle, with the data points color coded by the quadrant viewed by the spacecraft. The right-hand plots show the fractional residuals of the average reflectances relative to the quadratic trend shown as a solid line in the left-hand plots. While the longitudinal brightness variations are reduced compared to the effective areas shown in Figure~\ref{tphaserot}, they are not completely removed, especially at higher phase angles.}
\label{tphaserotarea}
\end{figure*}
\begin{figure*}
\resizebox{6.5in}{!}{\includegraphics{rings_phaserotplot_moonmodcomp_lam_area_080419.pdf}}
\caption{Average reflectances of the ring moons with their nominal shapes. The left-hand plots show the average reflectances of the moons as a function of phase angle, with the data points color coded by the quadrant viewed by the spacecraft. The right-hand plots show the fractional residuals of the brightness estimates relative to the linear trend shown as a solid line in the left-hand plots. Again, while the longitudinal brightness variations are slightly reduced compared to the effective areas shown in Figure~\ref{rphaserot}, they are not completely removed.}
\label{rphaserotarea}
\end{figure*}
\clearpage
\section{ A photometric model for elongated bodies}
\label{model}
The challenges associated with photometric analyses of small moons are best seen in the left-hand panels of Figures~\ref{phaserot}-\ref{rphaserot}, which show the effective area estimates for these moons as functions of the observed phase angle. While there is a clear trend of decreasing brightness with increasing phase angle, the data points also show a relatively large amount of scatter around this basic trend. This scatter cannot be entirely attributed to measurement errors because it is strongly correlated with the viewing geometry. In particular, the moons' sub-Saturn and anti-Saturn quadrants are systematically lower in $A_{\rm eff}$ than their leading and trailing quadrants. These variations can be more clearly documented by plotting the fractional residuals from the mean trend (where $A_{\rm eff}$ is assumed to be a linear or quadratic function of phase angle) as functions of the sub-observer longitude. With the exception of Helene, Pan and Atlas, these plots all show clear sinusoidal patterns with maxima around $90^\circ$ and $270^\circ$ and minima around $0^\circ$ and $180^\circ$.
These trends with sub-observer longitude arise because these small moons have ellipsoidal shapes and are tidally locked so that their long axes point towards Saturn. Resolved images of these moons show that they are all significantly elongated (see Table~\ref{shapes}; note that Anthe was never observed with sufficient resolution to determine its shape and size accurately). Furthermore, the relative magnitudes of the longitudinal brightness variations shown in Figures~\ref{phaserot}-\ref{rphaserot} are generally consistent with the relative $a/b$ ratios of these different moons, with Aegaeon being the most elongated object and the one with the largest longitudinal brightness variations, followed by Calypso, Prometheus and Methone.
\subsection{Variations in projected area do not adequately account for the moons' photometric properties}
The simplest way to account for these shape-related brightness variations is to divide the observed $A_{\rm eff}$ by the plane-projected area of the object $A_{\rm phys}$, which for an ellipsoidal object is given by the following formula \citep{Vickers96}:
\begin{equation}
\begin{aligned}
A_{\rm phys}= \pi \left[b^2c^2\cos^2\lambda_O\cos^2\phi_O+ a^2c^2\cos^2\lambda_O\sin^2\phi_O+ a^2b^2\sin^2\lambda_O\right]^{1/2} \\
\end{aligned}
\end{equation}
where $a$, $b$, and $c$ are the dimensions of the ellipsoid, while $\lambda_O$ and $\phi_O$ are the sub-observer latitude and longitude for the object. The resulting ratio then yields the disk-averaged reflectance of the object:
\begin{equation} <I/F> = \frac{A_{\rm eff}}{A_{\rm phys}}. \end{equation}
Tables~\ref{aegaeontab}-~\ref{methonecoltab} include estimates of $A_{\rm phys}$ computed assuming that these bodies have the shape parameters given in Table~\ref{shapes}, and Figures~\ref{phaserotarea}-\ref{rphaserotarea} show the resulting estimates of $<I/F>$ as functions of phase and sub-observer longitude. While the fractional brightness residuals around the mean phase curve are somewhat smaller for $<I/F>$ than they are for $A_{\rm eff}$, the dispersion is still rather large. For Aegaeon, Prometheus and Pandora there are still clear maxima at $90^\circ$ and $270^\circ$, while for Methone, Pallene, Telesto and Calypso the dispersion at larger phase angles has a similar magnitude for the two parameters.
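For reference, the projected-area correction used above is straightforward to evaluate; the following minimal Python sketch (our own illustrative code, with angles in degrees and semi-axes taken from Table~\ref{shapes}) computes $A_{\rm phys}$:
\begin{verbatim}
import numpy as np

def projected_area(a, b, c, lat_obs, lon_obs):
    # Plane-projected area of a triaxial ellipsoid (Vickers 1996),
    # seen from sub-observer latitude/longitude lat_obs, lon_obs.
    la, lo = np.radians(lat_obs), np.radians(lon_obs)
    return np.pi * np.sqrt(b**2 * c**2 * np.cos(la)**2 * np.cos(lo)**2
                           + a**2 * c**2 * np.cos(la)**2 * np.sin(lo)**2
                           + a**2 * b**2 * np.sin(la)**2)
\end{verbatim}
The disk-averaged reflectance is then simply {\tt A\_eff / projected\_area(a, b, c, lat\_obs, lon\_obs)}.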
\subsection{A Generic Ellipsoid Photometric Model (GEPM) }
The major problem with using $<I/F>$ is that this parameter only accounts for the viewing geometry, but not how the moon is illuminated by the Sun, which for very elongated bodies can strongly affect the observed brightness \citep{HV89}. \citet{ML15} provide analytical formulae for elongated bodies assuming the surface follows a Lommel-Seeliger scattering law, which is appropriate for dark surfaces. However, the applicability of such a model for bright objects like many of Saturn's moons is less clear. We therefore use a more generic numerical method to translate whole-disk brightness data into information about the object's global average reflectance behavior, which we call the \textit{Generic Ellipsoid Photometric Model (GEPM)}.
The GEPM predicts an ellipsoidal object's $A_{\rm eff}$ in any given image assuming its surface reflectance $R$ follows a generic scattering law that is a function of the cosines of the incidence and emission angles $\cos i$ and $\cos e$. The predicted effective area of an object whose surface obeys these scattering laws is given by the following integral:
\begin{equation}
A_{\rm pred}=\int_{++} (\cos e) R r_{\rm eff}^2 \cos\lambda d\lambda d\phi
\label{apred1}
\end{equation}
where $\lambda$ and $\phi$ are the latitude and longitude on the object, the $++$ sign indicates that the integral is performed only over the part of the object that is both illuminated and visible (i.e. where $\cos i$ and $\cos e$ are both positive), the factor of $\cos e$ in this integral accounts for variations in the projected size of the area element, and $r_{\rm eff}$ is the effective radius of the object at a given latitude and longitude, which is given by the following expression:
\begin{equation}
r_{\rm eff}^2 =\left[b^2c^2\cos^2\lambda\cos^2\phi+ a^2c^2\cos^2\lambda\sin^2\phi+ a^2b^2\sin^2\lambda\right]^{1/2} .
\end{equation}
The factors of $\cos i$ and $\cos e$ in the above integral are also functions of $\lambda$ and $\phi$ that depend upon the viewing and illumination geometry, as well as the object's shape. To obtain these functions, we first use the sub-observer latitude $\lambda_O$ and longitude $\phi_O$ to define a unit vector pointing from the center of the body towards the spacecraft:
\begin{equation}
\hat{\bf o}=\cos\lambda_O\cos\phi_O\hat{\bf x}+\cos\lambda_O\sin\phi_O\hat{\bf y}+\sin\lambda_O\hat{\bf z}
\end{equation}
where $\hat{\bf x}$ points from the moon towards Saturn, $\hat{\bf y}$ points in the direction of orbital motion and $\hat{\bf z}$ points along the object's rotation axis. Similarly, we use the sub-solar latitude $\lambda_S$ and longitude $\phi_S$ to define a unit vector pointing from the center of the body towards the Sun:
\begin{equation}
\hat{\bf s}=\cos\lambda_S\cos\phi_S\hat{\bf x}+\cos\lambda_S\sin\phi_S\hat{\bf y}+\sin\lambda_S\hat{\bf z}
\end{equation}
Finally, for each latitude $\lambda$ and longitude $\phi$ on the surface, we compute the surface normal for the body, which depends on the shape parameters $a$, $b$ and $c$.
\begin{equation}
\hat{\bf n}=\frac{\frac{\cos\lambda\cos\phi}{a}\hat{\bf x}+\frac{\cos\lambda\sin\phi}{b}\hat{\bf y}+\frac{\sin\lambda}{c}\hat{\bf z}}{\sqrt{\frac{\cos^2\lambda\cos^2\phi}{a^2}+\frac{\cos^2\lambda\sin^2\phi}{b^2}+\frac{\sin^2\lambda}{c^2}}}
\end{equation}
The cosines of the incidence and emission angles are then given by the standard expressions $\cos i =\hat{\bf n}\cdot\hat{\bf s}$ and $\cos e =\hat{\bf n}\cdot\hat{\bf o}$.
The above expressions allow us to evaluate $A_{\rm pred}$ for any given scattering law $R$, {including Lunar-Lambert functions or Akimov functions \citep{McEwen91, Shkuratov11, Schroeder14}. However, for the sake of concreteness, we will here assume that $R$ is given by the Minnaert function} \citep{Minnaert41}:
\begin{equation} R_M = B (\cos i)^{k}(\cos e)^{1-k},
\end{equation}
where the quantities $k$ and $B$ will be assumed to be constants for each image. Note that if we assume that $k=1$ the above expression reduces to the Lambert photometric function:
\begin{equation} R_L = B \cos i.
\end{equation}
For a generic Minnaert function, Equation~\ref{apred1} becomes:
\begin{equation}
A_{\rm pred}=B \int_{++} (\cos i)^k (\cos e)^{2-k} r^2_{\rm eff} \cos\lambda d\lambda d\phi =B a_{\rm pred}
\end{equation}
The factor of $B$ moves outside the integral, and the remaining factor $a_{\rm pred}$ can be numerically evaluated for any specified viewing and illumination geometry, so long as we also assume values for the object's shape parameters $a$, $b$ and $c$, as well as the photometric parameter $k$. A Python function which evaluates $a_{\rm pred}$ given these parameters is provided in Appendix A. From this, we can estimate the ``brightness coefficient" parameter $B$ to be simply the ratio $A_{\rm eff}/a_{\rm pred}$. Note that $B$ is not equal to the mean reflectance $<I/F>$, even for spherical bodies, because it includes corrections for the varying incidence and emission angles across the disk.
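For readers who do not wish to consult Appendix A, the following sketch conveys the essence of the calculation. It is our own minimal reimplementation on a uniform latitude--longitude grid, not the exact Appendix A code, and a finer grid may be needed for very elongated bodies.
\begin{verbatim}
import numpy as np

def a_pred(a, b, c, lat_obs, lon_obs, lat_sun, lon_sun, k=1.0, n=181):
    # Numerically integrate (cos i)^k (cos e)^(2-k) r_eff^2 cos(lat)
    # over the illuminated-and-visible ("++") part of the ellipsoid;
    # clipping the cosines to zero enforces the "++" restriction.
    lat = np.radians(np.linspace(-90.0, 90.0, n))
    lon = np.radians(np.linspace(0.0, 360.0, 2 * n, endpoint=False))
    LAT, LON = np.meshgrid(lat, lon, indexing='ij')
    dlat, dlon = lat[1] - lat[0], lon[1] - lon[0]

    def unit_vector(lat_deg, lon_deg):
        la, lo = np.radians(lat_deg), np.radians(lon_deg)
        return np.array([np.cos(la) * np.cos(lo),
                         np.cos(la) * np.sin(lo),
                         np.sin(la)])

    o_hat = unit_vector(lat_obs, lon_obs)   # toward the observer
    s_hat = unit_vector(lat_sun, lon_sun)   # toward the Sun

    # Outward surface normal of the ellipsoid at each (lat, lon)
    nx = np.cos(LAT) * np.cos(LON) / a
    ny = np.cos(LAT) * np.sin(LON) / b
    nz = np.sin(LAT) / c
    norm = np.sqrt(nx**2 + ny**2 + nz**2)
    cos_e = np.clip((nx * o_hat[0] + ny * o_hat[1] + nz * o_hat[2]) / norm,
                    0.0, None)
    cos_i = np.clip((nx * s_hat[0] + ny * s_hat[1] + nz * s_hat[2]) / norm,
                    0.0, None)

    # r_eff^2 as defined in the text
    reff2 = np.sqrt(b**2 * c**2 * np.cos(LAT)**2 * np.cos(LON)**2
                    + a**2 * c**2 * np.cos(LAT)**2 * np.sin(LON)**2
                    + a**2 * b**2 * np.sin(LAT)**2)

    integrand = cos_i**k * cos_e**(2.0 - k) * reff2 * np.cos(LAT)
    return integrand.sum() * dlat * dlon
\end{verbatim}
The brightness coefficient for a given image is then just {\tt B = A\_eff / a\_pred(...)} evaluated at that image's sub-observer and sub-solar geometry.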
\begin{figure}
\centerline{\resizebox{3in}{!}{\includegraphics{minlamcompplot_110719.pdf}}}
\caption{The fractional difference in the predicted values of $a_{\rm pred}$ between a Minnaert model with {$k=0.5$} and a Lambertian model for three different shapes. Each point corresponds to a different combination of viewing and illumination geometries scattered uniformly over the objects' surfaces.}
\label{minlam}
\end{figure}
Of course, the estimated value of $B$ depends on the assumed values of $a$, $b$, $c$ and $k$, which are typically estimated from resolved images. While only a few images are needed to obtain useful estimates of $a$, $b$ and $c$, the parameter $k$ can depend on the observed phase angle and location on the object, and so $k$ is far harder to estimate reliably for the full range of observation geometries. {Previous studies of resolved Voyager images of the mid-sized moons found that Mimas, Enceladus, Tethys, Dione and Rhea had $k=0.65, 0.78, 0.63, 0.53-0.56$ and 0.52-0.54 respectively at phase angles between 5$^\circ$ and $17^\circ$ \citep{Buratti84}, while a study of resolved Cassini images of Methone found that $k=0.887-0.003\alpha$ ($\alpha$ being the solar phase angle) based on images obtained at phase angles between 45$^\circ$ and $65^\circ$ \citep{Thomas13}. Thus this parameter likely varies among these moons and with observation geometry and terrain for each moon.}
{Figure~\ref{minlam} shows the fractional differences in the predicted values of $a_{\rm pred}$ between a model where $k=0.5$ and a model where $k=1$ (that is, a Lambertian surface). Each data point corresponds to a different combination of sub-observer and sub-solar locations scattered over the entire object. For the spherical object there is a trend where the $k=0.5$ model becomes 15\% brighter than the Lambertian model at high phase angles. Very prolate and oblate objects show a similar overall trend, but with considerably more scatter depending on the exact viewing and illumination geometry. This suggests that one could constrain $k$ by minimizing the $rms$ scatter in the estimated brightness coefficients from unresolved images of elongated bodies at each phase angle. In practice, the $rms$ scatter in the brightness coefficients for the various moons are very weak functions of $k$, most likely because real surface albedo variations and/or instrumental noise dominate the dispersion in the brightness coefficients. We therefore expect that a thorough analysis of resolved images would be needed to constrain $k$ reliably for each of the moons, and such an analysis is well beyond the scope of this paper. }
{Fortunately, it turns out that the results presented below are insensitive to the exact value of $k$. We bracket the range of possible $k$-values by considering cases where $k=0.5, 0.75$ and 1.0 when estimating the brightnesses of the various moons. Estimates of $a_{\rm pred}$ derived from all three cases are provided in Tables~\ref{aegaeontab}-~\ref{methonecoltab}. However, since the brightness differences among these different scenarios are subtle, we will normally only plot brightness coefficients computed assuming the surface follows a Lambertian scattering law (i.e. $k=1$).}
\subsection{Validation of General Ellipsoid Photometric Model with Saturn's small moons}
\begin{figure*}
\centerline{\resizebox{5.5in}{!}{\includegraphics{gobj_arcorbit_plotset_lam_080419.pdf}}}
\caption{Observed and predicted rotation light curves for Aegaeon. Each plot shows data from one particular ARCORBIT observation of Aegaeon, which measured the brightness of Aegaeon as a function of sub-observer longitude. The data points are the measured data, while the green curves show the variations in the projected area of the moon, and the red curves show the predicted variations in $a_{\rm pred}$ for a Lambertian model with Aegaeon's nominal shape {(Minnaert models with $k=0.5$ and $k=0.75$ show nearly the same trends)}. The predicted curves are both scaled to best match the median observed signal. The GEPM model predictions clearly match the data much better than the projected-area curves do.}
\label{lightcurve}
\end{figure*}
\begin{table}
\caption{{Cassini ISS NAC images belonging to the various ARCORBIT sequences}}
\label{arcorbittab}
\resizebox{6.5in}{!}{\begin{tabular}{l l}\hline
Sequence & Images \\ \hline \hline
Rev 177 &
N1735282418
N1735287671
N1735287895
N1735292924
N1735293148
N1735298177
N1735298401
N1735303430
N1735303654 \\ \hline
Rev 180 &
N1738781138
N1738781312
N1738785881
N1738786055
N1738790624
N1738790798
N1738804853
N1738805027
N1738809596
N1738809770 \\ &
N1738814339
N1738814513
N1738842971 \\ \hline
Rev 188 &
N1746467285
N1746467459
N1746472918
N1746473092
N1746484184
N1746484358
N1746489747
N1746489817
N1746489991
N1746495450 \\ &
N1746495624
N1746501083
N1746501257 \\ \hline
Rev 201 &
N1769540993
N1769545670
N1769549999
N1769550173
N1769559005
N1769559179
N1769563508
N1769563682
N1769568011
N1769572514 \\ &
N1769572688
N1769577191
N1769581520
N1769581694
N1769586023
N1769586197 \\ \hline
Rev 206 &
N1783325708
N1783332836
N1783333010
N1783343963
N1783347440
N1783347614
N1783351091
N1783351265
N1783358393
N1783358567 \\ \hline
Rev 210 &
N1796027549
N1796033113
N1796035895
N1796041459
N1796052587
N1796055369
N1796058151
N1796060933
N1796069279
N1796085971 \\ &
N1796088753
N1796091535 \\ \hline
Rev 236 &
N1843274411
N1843274619
N1843274723
N1843274810
N1843274984
N1843275331
N1843275435
N1843276039
N1843276143
N1843276747 \\ &
N1843276851
N1843277559
N1843278163
N1843278267
N1843278739
N1843279092
N1843279196
N1843279800
N1843280612
N1843281924 \\ &
N1843282326
N1843282500
N1843282957
N1843284269
N1843285081
N1843285685
N1843286261
N1843286614
N1843287322
N1843288134 \\ &
N1843289446
N1843289550
N1843289848
N1843290022
N1843290479
N1843291083
N1843291187
N1843291791
N1843291895
N1843292499 \\ &
N1843292603
N1843293311
N1843293609
N1843293783
N1843294136
N1843294240
N1843294844
N1843294948
N1843295552
N1843295656 \\ &
N1843296260
N1843296364
N1843296968
N1843297072
N1843297370
N1843297544
N1843297897
N1843298001
N1843298605
N1843298709 \\ &
N1843299313
N1843299417
N1843300021
N1843300125
N1843300729
N1843301131
N1843303074
N1843303886
N1843304490
N1843304594 \\ &
N1843304892
N1843305066
N1843305419
N1843305523
N1843306127
N1843306231
N1843306835
N1843306939
N1843307543
N1843308251 \\ &
N1843308827
N1843309180
N1843309888
N1843309992
N1843310596
N1843310700
N1843312012
N1843312116
N1843312414
N1843312588 \\ &
N1843312941
N1843313649
N1843313753
N1843315065
N1843315169
N1843316175
N1843316349
N1843316702
N1843320110
N1843321275 \\ &
N1843321879
N1843321983
N1843322587
N1843323399
N1843323864
N1843324006
N1843324110
N1843324422 \\ \hline
Rev 239 &
N1848936800
N1848936974
N1848940466
N1848940640
N1848944132
N1848944306
N1848947798
N1848947972
N1848951464
N1848951638 \\ &
N1848955130
N1848962392
N1848962462
N1848962636
N1848969794
N1848969968 \\
\hline
\end{tabular}}
\end{table}
The utility of this model can be most clearly demonstrated by taking a closer look at the data for Aegaeon. {Aegaeon is not only the most elongated moon, it was also observed in multiple ARCORBIT sequences where the spacecraft repeatedly imaged the moon as it moved around a significant fraction of its orbit, yielding true rotational light curves}. {The images from eight of these sequences are listed in Table~\ref{arcorbittab}, while the values of $A_{\rm eff}$ are shown in Figure~\ref{lightcurve}, plotted as a function of the sub-observer longitude}. All show clear brightness variations that can be attributed to the changing viewing geometry of the moon, with the lowest signals seen when the sub-observer longitude is near 0$^\circ$ or 180$^\circ$, and the highest signals seen when the sub-observer longitude is close to $90^\circ$ or 270$^\circ$. However, it is also clear that the maxima and minima are not precisely aligned with these four longitudes (this is most clearly seen in the Rev 201, 210 and 236 data). This result clearly demonstrates that the projected area is not the only factor determining the moon's apparent brightness. Indeed, the green curves show the variations in the projected area, scaled to match the mean signal in the data. These variations are too subtle to match the observations, and the observed and predicted peaks and troughs do not line up. By contrast, the GEPM predictions are a much better fit to the data in all cases. This demonstrates that accounting for the lighting geometry not only causes the predicted locations of peaks and troughs to shift into alignment with the data, but also increases the predicted fractional brightness variations.
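The geometric origin of these shifts can be illustrated with a simple stand-alone calculation. The Python sketch below integrates $\mu_0\mu\,dA$ over the lit, visible surface of a triaxial ellipsoid (a minimal disk-integrated Lambertian brightness, not our actual GEPM implementation) and compares it with the analytic projected area as the sub-observer longitude varies; the 30$^\circ$ phase angle, the equatorial viewing geometry and the Aegaeon-like aspect ratios $b\simeq c\simeq 0.3a$ are assumed purely for concreteness.
\begin{verbatim}
import numpy as np

def lambert_ellipsoid(axes, to_sun, to_obs, n=150):
    """Disk-integrated Lambertian brightness (integral of mu0*mu dA over
    the lit, visible surface) and projected area of a triaxial ellipsoid
    with semi-axes a, b, c along x, y, z."""
    a, b, c = axes
    theta = (np.arange(n) + 0.5) * np.pi / n
    phi = (np.arange(2 * n) + 0.5) * np.pi / n
    T, P = np.meshgrid(theta, phi, indexing="ij")
    x, y, z = a*np.sin(T)*np.cos(P), b*np.sin(T)*np.sin(P), c*np.cos(T)
    nx, ny, nz = x / a**2, y / b**2, z / c**2       # outward normal direction
    norm = np.sqrt(nx**2 + ny**2 + nz**2)
    dA = a * b * c * norm * np.sin(T) * (np.pi / n)**2   # surface area element
    mu0 = (nx*to_sun[0] + ny*to_sun[1] + nz*to_sun[2]) / norm
    mu = (nx*to_obs[0] + ny*to_obs[1] + nz*to_obs[2]) / norm
    bright = np.sum(np.where((mu0 > 0) & (mu > 0), mu0 * mu, 0.0) * dA)
    u = np.asarray(to_obs)                          # analytic silhouette area
    area = np.pi * np.sqrt((b*c*u[0])**2 + (a*c*u[1])**2 + (a*b*u[2])**2)
    return bright, area

axes = (1.0, 0.3, 0.3)        # Aegaeon-like aspect ratios (b ~ c ~ 0.3a)
alpha = np.radians(30.0)      # phase angle
for lon in range(0, 360, 15): # sub-observer longitude, equatorial viewing
    lam = np.radians(lon)
    to_obs = np.array([np.cos(lam), np.sin(lam), 0.0])
    to_sun = np.array([np.cos(lam + alpha), np.sin(lam + alpha), 0.0])
    b_, A_ = lambert_ellipsoid(axes, to_sun, to_obs)
    print(f"lon = {lon:3d} deg: brightness = {b_:.4f}, area = {A_:.4f}")
\end{verbatim}
In this toy calculation the projected-area extrema fall exactly at longitudes of 0$^\circ$, 90$^\circ$, 180$^\circ$ and 270$^\circ$, while the brightness extrema are displaced from these longitudes by the illumination geometry, which is the behavior seen in Figure~\ref{lightcurve}.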
\begin{figure*}
\resizebox{6.5in}{!}{\includegraphics{gobj_phaserotplot_moonmodcomp_pred_lam_080419.pdf}}
\caption{Estimated brightness coefficients for Lambertian models of Aegaeon, Methone and Pallene with their nominal shapes. The left plots show the brightness coefficients of the moons as a function of phase angle, with the data points color coded by the quadrant viewed by the spacecraft. The right-hand plots show the fractional residuals of the brightness estimates relative to the quadratic trend shown as a solid line in the left-hand plots. The longitude dependent brightness variations are almost completely removed, and those that remain, like Pallene's leading-trailing asymmetry, are probably real surface features.}
\label{phaserotpred}
\end{figure*}
\begin{figure*}
\resizebox{6.5in}{!}{\includegraphics{trojans_phaserotplot_moonmodcomp_lam_080419.pdf}}
\caption{Estimated brightness coefficients for Lambertian models of Telesto, Calypso, Helene and Polydeuces with their nominal shapes. The left plots show the brightness coefficients of the moons as a function of phase angle, with the data points color coded by the quadrant viewed by the spacecraft. The right-hand plots show the fractional residuals of the brightness estimates relative to the quadratic trend shown as a solid line in the left-hand plots. For Telesto, Calypso and Polydeuces, this model reduces the dispersion associated with the viewing geometry. However, for Helene the dispersion of the measurements around the mean trend has actually increased.}
\label{tphaserotmod}
\end{figure*}
\begin{figure*}
\resizebox{6.5in}{!}{\includegraphics{rings_phaserotplot_moonmodcomp_lam_080419.pdf}}
\caption{Estimated brightness coefficients for Lambertian models of the ring moons with their nominal shapes. The left plots show the brightness coefficients of the moons as a function of phase angle, with the data points color coded by the quadrant viewed by the spacecraft. The right-hand plots show the fractional residuals of the brightness estimates relative to the linear trend shown as a solid line in the left-hand plots. This model again reduces the dispersion around the phase trend, although for Atlas the dispersion is rather high compared to the other moons. Also note that for Janus and Epimetheus the leading hemisphere is clearly darker than the trailing hemisphere.}
\label{rphaserotmod}
\end{figure*}
\begin{figure*}
\resizebox{6.5in}{!}{\includegraphics{mids_phaserotplot_moonmodcomp_lam_080419.pdf}}
\caption{Estimated brightness coefficients for Lambertian models of the mid-sized moons with their nominal shapes. The left plots show the brightness coefficients of the moons as a function of phase angle, with the data points color coded by the quadrant viewed by the spacecraft. The right-hand plots show the fractional residuals of the brightness estimates relative to the linear trend shown as a solid line in the left-hand plots. Since all of these moons are close to spherical, the model does not affect the trends with longitude very much. Instead, this model simply enables the data for these moons to be compared with that from smaller moons. Note the longitudinal brightness asymmetries seen in these data are consistent with previous measurements and are mostly due to asymmetries in the E-ring flux.}
\label{mphaserotmod}
\end{figure*}
More generally, Figures~\ref{phaserotpred}-\ref{rphaserotmod} show the estimated brightness coefficients for the small moons derived assuming that they have the nominal shapes given in Table~\ref{shapes} and surfaces that follow a Lambertian scattering law {(assuming instead that the surfaces follow Minnaert scattering laws with $k=0.5$ or 0.75 yields nearly identical results)}. Compared with Figures~\ref{phaserotarea}-\ref{rphaserotarea}, the scatter around the mean trends with phase angle is much tighter for almost all of the moons. The exceptions to this trend are Helene, Polydeuces, Janus and Epimetheus, for which the dispersions in $B$ and $<I/F>$ are comparable. These exceptions are likely because these four moons are the most spherical in shape, so the corrections included in the GEPM are less important. There are some outlying data points for Aegaeon, Atlas and Prometheus, but these can be attributed to the low signals from Aegaeon and misestimated background levels for Atlas and Prometheus.
Besides the reduction in dispersion, the fractional brightness residuals no longer show double-peaked trends with co-rotating longitude. Instead, the most obvious remaining longitudinal trends are that the leading sides of Pallene, Epimetheus and Janus are 10-20\% darker than their trailing sides. Such leading-trailing asymmetries are reminiscent of the longitudinal brightness variations seen in ground-based, Voyager and Cassini images of the mid-sized icy moons \citep{Noland74, Franz75, BV84, Verbiscer89, Buratti90, Verbiscer92, Verbiscer94, Buratti98, Pitman10, Schenk11}. Figure~\ref{mphaserotmod} shows the brightness coefficients for these larger moons derived assuming the same Lambertian model as was used for the smaller moons.\footnote{Due to the nearly spherical shape of these moons, the dispersions around the mean trends in these cases do not change much between $A_{\rm eff}$, $<I/F>$ and $B$.} Consistent with prior work \citep{Buratti98, Schenk11, Hendrix18, Verbiscer18}, we find that Tethys, Dione and Rhea have brighter leading sides, while Mimas has a brighter trailing side. This overall trend is thought to arise because the particles in the E ring preferentially strike the leading sides of moons orbiting exterior to Enceladus and the trailing sides of moons orbiting interior to Enceladus \citep{HB94}. The longitudinal brightness asymmetries observed on Pallene, Janus and Epimetheus are consistent with this pattern, but it is worth noting that Aegaeon, Methone and the trojan moons do not appear to have such asymmetries. The implications of these findings will be discussed further in Section~\ref{trends} below.
\begin{figure*}
\centerline{\resizebox{5.5in}{!}{\includegraphics{gobj_phaserotfitplot2_lam_uni_newdat_052719.pdf}}}
\caption{Photometric analysis of the shapes of Aegaeon, Anthe, Methone and Pallene. Each panel shows the $rms$ dispersion of the residuals between the photometric data and a Lambertian GEPM model with different shape parameters $a/b$ and $b/c$ and a quadratic $B(\alpha)$ function. The red data points are the estimates of these parameters derived from resolved images. For Aegaeon, Methone and Pallene the best-fit photometric models are reasonably consistent with the observed shapes. For Pallene, the best-fit model has a slightly higher $b/a$ than was observed, but this can probably be attributed to the longitudinal brightness variations in this moon. These data also indicate that Methone and Anthe have very similar aspect ratios.}
\label{shapefig}
\end{figure*}
\begin{figure*}
\centerline{\resizebox{5.5in}{!}{\includegraphics{gobj_phaserotfitplot2t_072219.pdf}}}
\caption{Photometric analysis of the shapes of Telesto, Calypso, Helene and Polydeuces. Each panel shows the $rms$ dispersion of the residuals between the photometric data and a Lambertian GEPM model with different shape parameters $a/b$ and $b/c$ and a quadratic $B(\alpha)$ function. The red data points are the estimates of these parameters derived from resolved images. For Calypso and Polydeuces the best-fit photometric models are reasonably consistent with the observed shapes. For Telesto, the best-fit model has a slightly higher $b/a$ than was observed, and for Helene the best-fit model has a $b/a=1$. }
\label{shapefigt}
\end{figure*}
Finally, we may consider what happens if we relax the assumption that the shapes of these moons equal the best-fit values from resolved images. Figures~\ref{shapefig} and~\ref{shapefigt} show the $rms$ residuals as functions of the aspect ratios $b/a$ and $c/b$ for Aegaeon, Anthe, Methone, Pallene and the trojan moons, along with the aspect ratios derived from the resolved images. For the smaller moons shown in Figure~\ref{shapefig}, the two methods for determining the shape agree fairly well. The best-fit solution for Aegaeon falls where $b \simeq c$ and $b \simeq 0.3 a$, consistent with the constraints from the resolved images. For Methone, both methods yield best-fit solutions with $c/b \simeq 0.95$ and $b/a \simeq 0.65$, although the photometry favors a slightly higher value of $b/a$. Similarly, for Pallene we find both methods give $c/b \simeq 0.9$, but the resolved images favor $b/a \simeq 0.7$, while the photometry prefers 0.8. This discrepancy is likely related to the variations in the surface brightness with longitude mentioned above. More sophisticated photometric models could potentially resolve this difference, but even this level of consistency gives us some confidence in this method. Also, the photometric data for Anthe clearly favor a shape with $b/a \simeq 0.7$ and $c/b \simeq 0.95$, which implies that Anthe has a shape similar to Methone.
The shapes of the trojan moons derived from the photometry and resolved images are somewhat more discrepant. For Polydeuces, both methods yield $b/a \simeq 0.8$ and $c/b \simeq 1$ with rather large uncertainties. For Calypso, both methods agree that $b/a\simeq0.6$, but the photometry favors $c/b \simeq 0.9$ while the resolved images prefer $c/b \simeq 0.7$. For Telesto, both methods agree that $c/b\simeq0.8$, but the photometry favors $b/a \simeq 0.8$ while the resolved images prefer $b/a \simeq 0.7$.
The biggest discrepancies are found with Helene, where the photometry prefers a much more spherical shape than found with the resolved images. These differences most likely arise because the shapes of these objects are not simple ellipsoids like the smaller moons appear to be. Again, a more sophisticated photometric model that could accommodate more complex shapes would likely reduce these discrepancies. Even so, the relatively simple GEPM clearly reduces the dispersion in the brightness estimates for Telesto, Polydeuces, and especially Calypso, and so it provides a useful basis for comparing the brightnesses of all these satellites.
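The logic behind this photometric shape constraint can also be demonstrated on synthetic data. The following self-contained sketch (a toy Lambertian model with noiseless simulated observations; the Methone-like input ratios, grid spacings and random geometries are our illustrative choices, and the real analysis also fits a quadratic $B(\alpha)$ function rather than a single scale factor) scans trial aspect ratios, fits an overall scale to each trial shape, and recovers the input shape as the minimum of the fractional $rms$ residuals, mimicking the procedure behind Figures~\ref{shapefig} and~\ref{shapefigt}.
\begin{verbatim}
import numpy as np

def lambert_ellipsoid(axes, to_sun, to_obs, n=80):
    """Toy disk-integrated Lambertian brightness of a triaxial ellipsoid."""
    a, b, c = axes
    theta = (np.arange(n) + 0.5) * np.pi / n
    phi = (np.arange(2 * n) + 0.5) * np.pi / n
    T, P = np.meshgrid(theta, phi, indexing="ij")
    x, y, z = a*np.sin(T)*np.cos(P), b*np.sin(T)*np.sin(P), c*np.cos(T)
    nx, ny, nz = x / a**2, y / b**2, z / c**2
    norm = np.sqrt(nx**2 + ny**2 + nz**2)
    dA = a * b * c * norm * np.sin(T) * (np.pi / n)**2
    mu0 = (nx*to_sun[0] + ny*to_sun[1] + nz*to_sun[2]) / norm
    mu = (nx*to_obs[0] + ny*to_obs[1] + nz*to_obs[2]) / norm
    return np.sum(np.where((mu0 > 0) & (mu > 0), mu0 * mu, 0.0) * dA)

rng = np.random.default_rng(0)
ba_true, cb_true = 0.65, 0.95       # Methone-like ratios, used to make fake data
true_axes = (1.0, ba_true, ba_true * cb_true)
alpha = np.radians(30.0)
geoms = []
for lam in rng.uniform(0.0, 2*np.pi, 40):   # random equatorial geometries
    o = np.array([np.cos(lam), np.sin(lam), 0.0])
    s = np.array([np.cos(lam + alpha), np.sin(lam + alpha), 0.0])
    geoms.append((s, o))
obs = np.array([lambert_ellipsoid(true_axes, s, o) for s, o in geoms])

best = (np.inf, None)
for ba in np.arange(0.40, 1.001, 0.05):     # scan trial aspect ratios
    for cb in np.arange(0.40, 1.001, 0.05):
        mod = np.array([lambert_ellipsoid((1.0, ba, ba*cb), s, o)
                        for s, o in geoms])
        scale = np.sum(obs * mod) / np.sum(mod**2)   # least-squares scale
        rms = np.sqrt(np.mean((obs - scale*mod)**2)) / np.mean(obs)
        if rms < best[0]:
            best = (rms, (round(ba, 2), round(cb, 2)))
print("recovered (b/a, c/b):", best[1])
\end{verbatim}
With noiseless inputs the fractional $rms$ vanishes at the true ratios, so the scan returns $(0.65, 0.95)$; with realistic noise and albedo variations the minimum broadens, which is why some of the contours in Figures~\ref{shapefig} and~\ref{shapefigt} are relatively shallow.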
\clearpage
\begin{table}
\caption{Brightness Coefficients for Saturn's Moons based on data between 20$^\circ$ and 40$^\circ$ phase assuming a Lambertian surface scattering law. The first error bars are statistical, the second are systematic errors due to uncertainties in the object's average size.}
\label{brighttab}
\centerline{\resizebox{6.5in}{!}{\begin{tabular}{lccccc}\hline
$B$ at $\alpha=30^\circ$ & All Data & Sub-Saturn Quadrant & Leading Quadrant & Anti-Saturn Quadrant & Trailing Quadrant \\
$C_B=dB/d\alpha$ (per degree) & & & & & \\ \hline
Pan & 0.6492$\pm$0.0058$\pm$0.0142 & 0.6398$\pm$0.0233$\pm$0.0140 & 0.6314$\pm$0.0056$\pm$0.0138 & 0.6293$\pm$0.0126$\pm$0.0138 & 0.6744$\pm$0.0089$\pm$0.0148 \\
& -0.0071$\pm$0.0011$\pm$0.0002 & -0.0135$\pm$0.0041$\pm$0.0003 & -0.0066$\pm$0.0011$\pm$0.0001 & -0.0056$\pm$0.0022$\pm$0.0001 & -0.0042$\pm$0.0016$\pm$0.0001 \\ \hline
Atlas & 0.6254$\pm$0.0084$\pm$0.0084 & 0.5960$\pm$0.0092$\pm$0.0080 & 0.6475$\pm$0.0100$\pm$0.0087 & 0.6334$\pm$0.0242$\pm$0.0085 & 0.5898$\pm$0.0180$\pm$0.0079 \\
& -0.0055$\pm$0.0014$\pm$0.0001 & -0.0151$\pm$0.0015$\pm$0.0002 & -0.0051$\pm$0.0016$\pm$0.0001 & -0.0070$\pm$0.0046$\pm$0.0001 & 0.0026$\pm$0.0031$\pm$0.0000 \\ \hline
Prometheus & 0.7928$\pm$0.0127$\pm$0.0130 & 0.8153$\pm$0.0184$\pm$0.0133 & 0.7550$\pm$0.0164$\pm$0.0123 & 0.7924$\pm$0.0236$\pm$0.0130 & 0.8129$\pm$0.0420$\pm$0.0133 \\
& -0.0083$\pm$0.0025$\pm$0.0001 & -0.0038$\pm$0.0048$\pm$0.0001 & -0.0084$\pm$0.0027$\pm$0.0001 & -0.0062$\pm$0.0047$\pm$0.0001 & -0.0168$\pm$0.0090$\pm$0.0003 \\ \hline
Pandora & 0.7600$\pm$0.0053$\pm$0.0057 & 0.7638$\pm$0.0320$\pm$0.0057 & 0.7704$\pm$0.0052$\pm$0.0058 & 0.7570$\pm$0.0225$\pm$0.0057 & 0.7550$\pm$0.0052$\pm$0.0057 \\
& -0.0072$\pm$0.0010$\pm$0.0001 & -0.0089$\pm$0.0054$\pm$0.0001 & -0.0069$\pm$0.0009$\pm$0.0001 & -0.0049$\pm$0.0049$\pm$0.0000 & -0.0073$\pm$0.0009$\pm$0.0001 \\ \hline
Janus & 0.5305$\pm$0.0082$\pm$0.0027 & 0.5726$\pm$0.0138$\pm$0.0029 & 0.4654$\pm$0.0055$\pm$0.0024 & 0.4989$\pm$0.0120$\pm$0.0026 & 0.5814$\pm$0.0043$\pm$0.0030 \\
& -0.0075$\pm$0.0015$\pm$0.0000 & -0.0081$\pm$0.0024$\pm$0.0000 & -0.0069$\pm$0.0009$\pm$0.0000 & -0.0057$\pm$0.0020$\pm$0.0000 & -0.0104$\pm$0.0010$\pm$0.0001 \\ \hline
Epimetheus & 0.4628$\pm$0.0060$\pm$0.0026 & 0.4665$\pm$0.0141$\pm$0.0026 & 0.4480$\pm$0.0038$\pm$0.0025 & 0.4422$\pm$0.0080$\pm$0.0025 & 0.5188$\pm$0.0103$\pm$0.0029 \\
& -0.0049$\pm$0.0010$\pm$0.0000 & -0.0041$\pm$0.0024$\pm$0.0000 & -0.0042$\pm$0.0006$\pm$0.0000 & -0.0079$\pm$0.0015$\pm$0.0000 & -0.0076$\pm$0.0019$\pm$0.0000 \\ \hline
Aegaeon & 0.1793$\pm$0.0050$\pm$0.0326 & 0.1648$\pm$0.0084$\pm$0.0300 & 0.1582$\pm$0.0042$\pm$0.0288 & 0.1352$\pm$0.0921$\pm$0.0246 & 0.2437$\pm$0.0145$\pm$0.0443 \\
& -0.0009$\pm$0.0009$\pm$0.0002 & -0.0019$\pm$0.0008$\pm$0.0003 & -0.0034$\pm$0.0006$\pm$0.0006 & -0.0086$\pm$0.0183$\pm$0.0016 & -0.0056$\pm$0.0022$\pm$0.0010 \\ \hline
Mimas & 0.7546$\pm$0.0028$\pm$0.0015 & 0.7533$\pm$0.0040$\pm$0.0015 & 0.7340$\pm$0.0016$\pm$0.0015 & 0.7622$\pm$0.0073$\pm$0.0015 & 0.7862$\pm$0.0034$\pm$0.0016 \\
& -0.0072$\pm$0.0006$\pm$0.0000 & -0.0093$\pm$0.0009$\pm$0.0000 & -0.0081$\pm$0.0004$\pm$0.0000 & -0.0079$\pm$0.0012$\pm$0.0000 & -0.0075$\pm$0.0007$\pm$0.0000 \\ \hline
Methone & 0.5558$\pm$0.0033$\pm$0.0115 & 0.5631$\pm$0.0081$\pm$0.0117 & 0.5529$\pm$0.0042$\pm$0.0114 & 0.5542$\pm$0.0083$\pm$0.0115 & 0.5498$\pm$0.0066$\pm$0.0114 \\
& -0.0058$\pm$0.0006$\pm$0.0001 & -0.0048$\pm$0.0003$\pm$0.0001 & -0.0060$\pm$0.0009$\pm$0.0001 & -0.0073$\pm$0.0015$\pm$0.0002 & -0.0049$\pm$0.0011$\pm$0.0001 \\ \hline
Pallene & 0.5228$\pm$0.0066$\pm$0.0164 & 0.5556$\pm$0.0126$\pm$0.0174 & 0.4660$\pm$0.0057$\pm$0.0146 & 0.5519$\pm$0.0114$\pm$0.0173 & 0.5267$\pm$0.0078$\pm$0.0165 \\
& -0.0071$\pm$0.0011$\pm$0.0002 & -0.0051$\pm$0.0005$\pm$0.0002 & -0.0076$\pm$0.0012$\pm$0.0002 & -0.0085$\pm$0.0021$\pm$0.0003 & -0.0052$\pm$0.0012$\pm$0.0002 \\ \hline
Enceladus & 1.1530$\pm$0.0029$\pm$0.0009 & 1.1451$\pm$0.0034$\pm$0.0009 & 1.1215$\pm$0.0043$\pm$0.0009 & 1.1580$\pm$0.0049$\pm$0.0009 & 1.1711$\pm$0.0034$\pm$0.0009 \\
& -0.0107$\pm$0.0005$\pm$0.0000 & -0.0104$\pm$0.0006$\pm$0.0000 & -0.0095$\pm$0.0010$\pm$0.0000 & -0.0126$\pm$0.0009$\pm$0.0000 & -0.0103$\pm$0.0006$\pm$0.0000 \\ \hline
Tethys & 0.9363$\pm$0.0046$\pm$0.0011 & 0.9408$\pm$0.0050$\pm$0.0011 & 0.9550$\pm$0.0035$\pm$0.0011 & 0.9704$\pm$0.0106$\pm$0.0011 & 0.8903$\pm$0.0067$\pm$0.0010 \\
& -0.0094$\pm$0.0008$\pm$0.0000 & -0.0103$\pm$0.0008$\pm$0.0000 & -0.0087$\pm$0.0007$\pm$0.0000 & -0.0086$\pm$0.0022$\pm$0.0000 & -0.0081$\pm$0.0011$\pm$0.0000 \\ \hline
Telesto & 0.9028$\pm$0.0100$\pm$0.0220 & 0.9365$\pm$0.0148$\pm$0.0228 & 0.8383$\pm$0.0087$\pm$0.0204 & 0.9671$\pm$0.0147$\pm$0.0236 & 0.8494$\pm$0.0216$\pm$0.0207 \\
& -0.0125$\pm$0.0019$\pm$0.0003 & -0.0088$\pm$0.0006$\pm$0.0002 & -0.0084$\pm$0.0014$\pm$0.0002 & -0.0151$\pm$0.0034$\pm$0.0004 & -0.0112$\pm$0.0042$\pm$0.0003 \\ \hline
Calypso & 1.0999$\pm$0.0176$\pm$0.0463 & 1.0628$\pm$0.0224$\pm$0.0448 & 1.1331$\pm$0.0239$\pm$0.0477 & 1.1645$\pm$0.0240$\pm$0.0490 & 1.2138$\pm$0.0712$\pm$0.0511 \\
& -0.0144$\pm$0.0032$\pm$0.0006 & -0.0123$\pm$0.0011$\pm$0.0005 & -0.0107$\pm$0.0045$\pm$0.0004 & -0.0251$\pm$0.0061$\pm$0.0011 & -0.0028$\pm$0.0134$\pm$0.0001 \\ \hline
Dione & 0.6785$\pm$0.0099$\pm$0.0005 & 0.6669$\pm$0.0105$\pm$0.0005 & 0.7634$\pm$0.0094$\pm$0.0005 & 0.7027$\pm$0.0151$\pm$0.0005 & 0.5561$\pm$0.0144$\pm$0.0004 \\
& -0.0088$\pm$0.0017$\pm$0.0000 & -0.0039$\pm$0.0020$\pm$0.0000 & -0.0064$\pm$0.0015$\pm$0.0000 & -0.0062$\pm$0.0033$\pm$0.0000 & -0.0049$\pm$0.0021$\pm$0.0000 \\ \hline
Helene & 1.0251$\pm$0.0148$\pm$0.0113 & 1.0807$\pm$0.0187$\pm$0.0119 & 0.8373$\pm$0.1644$\pm$0.0093 & 1.0544$\pm$0.0308$\pm$0.0117 & 0.9522$\pm$0.0289$\pm$0.0105 \\
& -0.0196$\pm$0.0027$\pm$0.0002 & -0.0086$\pm$0.0007$\pm$0.0001 & -0.0074$\pm$0.0224$\pm$0.0001 & -0.0169$\pm$0.0058$\pm$0.0002 & -0.0001$\pm$0.0064$\pm$0.0000 \\ \hline
Polydeuces & 0.6075$\pm$0.0089$\pm$0.0794 & 0.6034$\pm$0.0120$\pm$0.0789 & 0.6476$\pm$0.0205$\pm$0.0846 & 0.5945$\pm$0.0090$\pm$0.0777 & 0.6266$\pm$0.0121$\pm$0.0819 \\
& -0.0071$\pm$0.0016$\pm$0.0009 & -0.0073$\pm$0.0006$\pm$0.0010 & -0.0072$\pm$0.0030$\pm$0.0009 & -0.0094$\pm$0.0017$\pm$0.0012 & -0.0300$\pm$0.0029$\pm$0.0039 \\ \hline
Rhea & 0.7144$\pm$0.0057$\pm$0.0006 & 0.6967$\pm$0.0066$\pm$0.0005 & 0.7937$\pm$0.0046$\pm$0.0006 & 0.7146$\pm$0.0042$\pm$0.0006 & 0.6321$\pm$0.0031$\pm$0.0005 \\
& -0.0110$\pm$0.0011$\pm$0.0000 & -0.0109$\pm$0.0011$\pm$0.0000 & -0.0093$\pm$0.0009$\pm$0.0000 & -0.0120$\pm$0.0011$\pm$0.0000 & -0.0065$\pm$0.0004$\pm$0.0000 \\ \hline
\end{tabular}}}
\end{table}
\begin{table}
{\caption{Brightness Coefficients for Saturn's Moons based on data between 20$^\circ$ and 40$^\circ$ phase assuming a Minnaert surface scattering law with $k=0.75$. The first error bars are statistical, the second are systematic errors due to uncertainties in the object's average size.}}
\label{brighttab75}
\centerline{\resizebox{6.5in}{!}{\begin{tabular}{lccccc}\hline
$B$ at $\alpha=30^\circ$ & All Data & Sub-Saturn Quadrant & Leading Quadrant & Anti-Saturn Quadrant & Trailing Quadrant \\
$C_B=dB/d\alpha$ (per degree) & & & & & \\ \hline
Pan & 0.6379$\pm$0.0061$\pm$0.0140 & 0.6289$\pm$0.0284$\pm$0.0138 & 0.6190$\pm$0.0060$\pm$0.0136 & 0.6166$\pm$0.0144$\pm$0.0135 & 0.6633$\pm$0.0091$\pm$0.0145 \\
& -0.0080$\pm$0.0011$\pm$0.0002 & -0.0152$\pm$0.0050$\pm$0.0003 & -0.0072$\pm$0.0012$\pm$0.0002 & -0.0063$\pm$0.0025$\pm$0.0001 & -0.0053$\pm$0.0016$\pm$0.0001 \\ \hline
Atlas & 0.6122$\pm$0.0088$\pm$0.0082 & 0.5863$\pm$0.0108$\pm$0.0079 & 0.6347$\pm$0.0110$\pm$0.0085 & 0.6126$\pm$0.0215$\pm$0.0082 & 0.5787$\pm$0.0187$\pm$0.0078 \\
& -0.0064$\pm$0.0015$\pm$0.0001 & -0.0174$\pm$0.0018$\pm$0.0002 & -0.0058$\pm$0.0017$\pm$0.0001 & -0.0074$\pm$0.0041$\pm$0.0001 & 0.0016$\pm$0.0032$\pm$0.0000 \\ \hline
Prometheus & 0.7702$\pm$0.0127$\pm$0.0126 & 0.8051$\pm$0.0208$\pm$0.0132 & 0.7404$\pm$0.0184$\pm$0.0121 & 0.7668$\pm$0.0236$\pm$0.0125 & 0.7848$\pm$0.0398$\pm$0.0128 \\
& -0.0108$\pm$0.0025$\pm$0.0002 & -0.0114$\pm$0.0054$\pm$0.0002 & -0.0089$\pm$0.0031$\pm$0.0001 & -0.0108$\pm$0.0047$\pm$0.0002 & -0.0198$\pm$0.0086$\pm$0.0003 \\ \hline
Pandora & 0.7498$\pm$0.0052$\pm$0.0056 & 0.7639$\pm$0.0299$\pm$0.0057 & 0.7577$\pm$0.0061$\pm$0.0057 & 0.7397$\pm$0.0220$\pm$0.0055 & 0.7481$\pm$0.0059$\pm$0.0056 \\
& -0.0075$\pm$0.0010$\pm$0.0001 & -0.0102$\pm$0.0051$\pm$0.0001 & -0.0073$\pm$0.0011$\pm$0.0001 & -0.0048$\pm$0.0048$\pm$0.0000 & -0.0078$\pm$0.0011$\pm$0.0001 \\ \hline
Janus & 0.5257$\pm$0.0084$\pm$0.0027 & 0.5692$\pm$0.0142$\pm$0.0029 & 0.4595$\pm$0.0054$\pm$0.0024 & 0.4924$\pm$0.0115$\pm$0.0025 & 0.5777$\pm$0.0051$\pm$0.0030 \\
& -0.0077$\pm$0.0015$\pm$0.0000 & -0.0085$\pm$0.0025$\pm$0.0000 & -0.0070$\pm$0.0009$\pm$0.0000 & -0.0058$\pm$0.0019$\pm$0.0000 & -0.0107$\pm$0.0012$\pm$0.0001 \\ \hline
Epimetheus & 0.4588$\pm$0.0060$\pm$0.0026 & 0.4651$\pm$0.0145$\pm$0.0026 & 0.4433$\pm$0.0039$\pm$0.0025 & 0.4375$\pm$0.0078$\pm$0.0025 & 0.5136$\pm$0.0105$\pm$0.0029 \\
& -0.0051$\pm$0.0011$\pm$0.0000 & -0.0043$\pm$0.0024$\pm$0.0000 & -0.0045$\pm$0.0006$\pm$0.0000 & -0.0083$\pm$0.0015$\pm$0.0000 & -0.0078$\pm$0.0019$\pm$0.0000 \\ \hline
Aegaeon & 0.1758$\pm$0.0049$\pm$0.0320 & 0.1620$\pm$0.0079$\pm$0.0295 & 0.1512$\pm$0.0042$\pm$0.0275 & 0.1323$\pm$0.0939$\pm$0.0241 & 0.2386$\pm$0.0138$\pm$0.0434 \\
& -0.0008$\pm$0.0008$\pm$0.0001 & -0.0016$\pm$0.0008$\pm$0.0003 & -0.0037$\pm$0.0006$\pm$0.0007 & -0.0099$\pm$0.0186$\pm$0.0018 & -0.0054$\pm$0.0021$\pm$0.0010 \\ \hline
Mimas & 0.7498$\pm$0.0028$\pm$0.0015 & 0.7506$\pm$0.0038$\pm$0.0015 & 0.7285$\pm$0.0015$\pm$0.0015 & 0.7576$\pm$0.0076$\pm$0.0015 & 0.7796$\pm$0.0037$\pm$0.0016 \\
& -0.0073$\pm$0.0006$\pm$0.0000 & -0.0094$\pm$0.0008$\pm$0.0000 & -0.0082$\pm$0.0004$\pm$0.0000 & -0.0082$\pm$0.0013$\pm$0.0000 & -0.0076$\pm$0.0008$\pm$0.0000 \\ \hline
Methone & 0.5508$\pm$0.0035$\pm$0.0114 & 0.5663$\pm$0.0087$\pm$0.0117 & 0.5488$\pm$0.0043$\pm$0.0114 & 0.5423$\pm$0.0073$\pm$0.0112 & 0.5415$\pm$0.0074$\pm$0.0112 \\
& -0.0070$\pm$0.0007$\pm$0.0001 & -0.0050$\pm$0.0003$\pm$0.0001 & -0.0064$\pm$0.0009$\pm$0.0001 & -0.0090$\pm$0.0014$\pm$0.0002 & -0.0064$\pm$0.0013$\pm$0.0001 \\ \hline
Pallene & 0.5181$\pm$0.0064$\pm$0.0163 & 0.5537$\pm$0.0127$\pm$0.0174 & 0.4622$\pm$0.0050$\pm$0.0145 & 0.5468$\pm$0.0097$\pm$0.0172 & 0.5210$\pm$0.0065$\pm$0.0164 \\
& -0.0077$\pm$0.0011$\pm$0.0002 & -0.0052$\pm$0.0005$\pm$0.0002 & -0.0078$\pm$0.0010$\pm$0.0002 & -0.0095$\pm$0.0018$\pm$0.0003 & -0.0058$\pm$0.0010$\pm$0.0002 \\ \hline
Enceladus & 1.1462$\pm$0.0029$\pm$0.0009 & 1.1391$\pm$0.0035$\pm$0.0009 & 1.1146$\pm$0.0043$\pm$0.0009 & 1.1511$\pm$0.0048$\pm$0.0009 & 1.1639$\pm$0.0035$\pm$0.0009 \\
& -0.0110$\pm$0.0005$\pm$0.0000 & -0.0108$\pm$0.0006$\pm$0.0000 & -0.0099$\pm$0.0010$\pm$0.0000 & -0.0130$\pm$0.0009$\pm$0.0000 & -0.0106$\pm$0.0006$\pm$0.0000 \\ \hline
Tethys & 0.9311$\pm$0.0046$\pm$0.0011 & 0.9360$\pm$0.0049$\pm$0.0011 & 0.9494$\pm$0.0035$\pm$0.0011 & 0.9645$\pm$0.0108$\pm$0.0011 & 0.8849$\pm$0.0065$\pm$0.0010 \\
& -0.0096$\pm$0.0008$\pm$0.0000 & -0.0105$\pm$0.0008$\pm$0.0000 & -0.0090$\pm$0.0007$\pm$0.0000 & -0.0087$\pm$0.0023$\pm$0.0000 & -0.0083$\pm$0.0010$\pm$0.0000 \\ \hline
Telesto & 0.8915$\pm$0.0103$\pm$0.0217 & 0.9298$\pm$0.0150$\pm$0.0227 & 0.8245$\pm$0.0067$\pm$0.0201 & 0.9545$\pm$0.0162$\pm$0.0233 & 0.8341$\pm$0.0185$\pm$0.0203 \\
& -0.0134$\pm$0.0019$\pm$0.0003 & -0.0085$\pm$0.0006$\pm$0.0002 & -0.0094$\pm$0.0011$\pm$0.0002 & -0.0169$\pm$0.0037$\pm$0.0004 & -0.0107$\pm$0.0036$\pm$0.0003 \\ \hline
Calypso & 1.0928$\pm$0.0196$\pm$0.0460 & 1.0672$\pm$0.0257$\pm$0.0449 & 1.1091$\pm$0.0336$\pm$0.0467 & 1.1360$\pm$0.0286$\pm$0.0478 & 1.2102$\pm$0.0790$\pm$0.0510 \\
& -0.0143$\pm$0.0035$\pm$0.0006 & -0.0125$\pm$0.0012$\pm$0.0005 & -0.0126$\pm$0.0064$\pm$0.0005 & -0.0226$\pm$0.0073$\pm$0.0010 & -0.0026$\pm$0.0148$\pm$0.0001 \\ \hline
Dione & 0.6747$\pm$0.0098$\pm$0.0005 & 0.6631$\pm$0.0104$\pm$0.0005 & 0.7591$\pm$0.0093$\pm$0.0005 & 0.6987$\pm$0.0151$\pm$0.0005 & 0.5530$\pm$0.0142$\pm$0.0004 \\
& -0.0090$\pm$0.0017$\pm$0.0000 & -0.0041$\pm$0.0020$\pm$0.0000 & -0.0066$\pm$0.0015$\pm$0.0000 & -0.0063$\pm$0.0032$\pm$0.0000 & -0.0051$\pm$0.0020$\pm$0.0000 \\ \hline
Helene & 1.0163$\pm$0.0150$\pm$0.0112 & 1.0758$\pm$0.0184$\pm$0.0119 & 0.8208$\pm$0.2026$\pm$0.0091 & 1.0479$\pm$0.0289$\pm$0.0116 & 0.9473$\pm$0.0256$\pm$0.0105 \\
& -0.0201$\pm$0.0027$\pm$0.0002 & -0.0088$\pm$0.0007$\pm$0.0001 & -0.0081$\pm$0.0276$\pm$0.0001 & -0.0153$\pm$0.0054$\pm$0.0002 & -0.0012$\pm$0.0057$\pm$0.0000 \\ \hline
Polydeuces & 0.6016$\pm$0.0090$\pm$0.0786 & 0.6016$\pm$0.0121$\pm$0.0786 & 0.6375$\pm$0.0212$\pm$0.0833 & 0.5867$\pm$0.0087$\pm$0.0767 & 0.6201$\pm$0.0090$\pm$0.0811 \\
& -0.0074$\pm$0.0016$\pm$0.0010 & -0.0075$\pm$0.0006$\pm$0.0010 & -0.0071$\pm$0.0031$\pm$0.0009 & -0.0099$\pm$0.0017$\pm$0.0013 & -0.0310$\pm$0.0022$\pm$0.0041 \\ \hline
Rhea & 0.7104$\pm$0.0056$\pm$0.0006 & 0.6928$\pm$0.0066$\pm$0.0005 & 0.7894$\pm$0.0046$\pm$0.0006 & 0.7106$\pm$0.0042$\pm$0.0006 & 0.6286$\pm$0.0031$\pm$0.0005 \\
& -0.0112$\pm$0.0011$\pm$0.0000 & -0.0111$\pm$0.0011$\pm$0.0000 & -0.0094$\pm$0.0008$\pm$0.0000 & -0.0121$\pm$0.0011$\pm$0.0000 & -0.0067$\pm$0.0004$\pm$0.0000 \\ \hline
\end{tabular}}}
\end{table}
\begin{table}
{\caption{Brightness Coefficients for Saturn's Moons based on data between 20$^\circ$ and 40$^\circ$ phase assuming a Minnaert surface scattering law with $k=0.50$. The first error bars are statistical, the second are systematic errors due to uncertainties in the object's average size.}}
\label{brighttab50}
\centerline{\resizebox{6.5in}{!}{\begin{tabular}{lccccc}\hline
$B$ at $\alpha=30^\circ$ & All Data & Sub-Saturn Quadrant & Leading Quadrant & Anti-Saturn Quadrant & Trailing Quadrant \\
$C_B=dB/d\alpha$ (per degree) & & & & & \\ \hline
Pan & 0.6181$\pm$0.0066$\pm$0.0135 & 0.6099$\pm$0.0346$\pm$0.0134 & 0.5976$\pm$0.0068$\pm$0.0131 & 0.5935$\pm$0.0172$\pm$0.0130 & 0.6444$\pm$0.0099$\pm$0.0141 \\
& -0.0093$\pm$0.0012$\pm$0.0002 & -0.0174$\pm$0.0061$\pm$0.0004 & -0.0079$\pm$0.0013$\pm$0.0002 & -0.0072$\pm$0.0030$\pm$0.0002 & -0.0067$\pm$0.0018$\pm$0.0001 \\ \hline
Atlas & 0.5903$\pm$0.0095$\pm$0.0079 & 0.5682$\pm$0.0141$\pm$0.0076 & 0.6134$\pm$0.0124$\pm$0.0082 & 0.5808$\pm$0.0207$\pm$0.0078 & 0.5595$\pm$0.0201$\pm$0.0075 \\
& -0.0075$\pm$0.0016$\pm$0.0001 & -0.0202$\pm$0.0024$\pm$0.0003 & -0.0066$\pm$0.0019$\pm$0.0001 & -0.0080$\pm$0.0039$\pm$0.0001 & 0.0006$\pm$0.0034$\pm$0.0000 \\ \hline
Prometheus & 0.7362$\pm$0.0130$\pm$0.0120 & 0.7796$\pm$0.0255$\pm$0.0128 & 0.7167$\pm$0.0207$\pm$0.0117 & 0.7280$\pm$0.0242$\pm$0.0119 & 0.7486$\pm$0.0375$\pm$0.0122 \\
& -0.0138$\pm$0.0025$\pm$0.0002 & -0.0192$\pm$0.0066$\pm$0.0003 & -0.0096$\pm$0.0035$\pm$0.0002 & -0.0160$\pm$0.0048$\pm$0.0003 & -0.0225$\pm$0.0081$\pm$0.0004 \\ \hline
Pandora & 0.7299$\pm$0.0057$\pm$0.0055 & 0.7540$\pm$0.0330$\pm$0.0057 & 0.7349$\pm$0.0072$\pm$0.0055 & 0.7105$\pm$0.0217$\pm$0.0053 & 0.7329$\pm$0.0074$\pm$0.0055 \\
& -0.0082$\pm$0.0010$\pm$0.0001 & -0.0118$\pm$0.0056$\pm$0.0001 & -0.0082$\pm$0.0013$\pm$0.0001 & -0.0051$\pm$0.0047$\pm$0.0000 & -0.0086$\pm$0.0013$\pm$0.0001 \\ \hline
Janus & 0.5143$\pm$0.0086$\pm$0.0026 & 0.5589$\pm$0.0146$\pm$0.0029 & 0.4478$\pm$0.0055$\pm$0.0023 & 0.4791$\pm$0.0110$\pm$0.0025 & 0.5672$\pm$0.0060$\pm$0.0029 \\
& -0.0082$\pm$0.0015$\pm$0.0000 & -0.0091$\pm$0.0026$\pm$0.0000 & -0.0074$\pm$0.0009$\pm$0.0000 & -0.0061$\pm$0.0018$\pm$0.0000 & -0.0112$\pm$0.0014$\pm$0.0001 \\ \hline
Epimetheus & 0.4494$\pm$0.0061$\pm$0.0025 & 0.4579$\pm$0.0149$\pm$0.0026 & 0.4336$\pm$0.0040$\pm$0.0024 & 0.4272$\pm$0.0075$\pm$0.0024 & 0.5027$\pm$0.0108$\pm$0.0028 \\
& -0.0056$\pm$0.0011$\pm$0.0000 & -0.0047$\pm$0.0025$\pm$0.0000 & -0.0050$\pm$0.0007$\pm$0.0000 & -0.0088$\pm$0.0014$\pm$0.0000 & -0.0082$\pm$0.0020$\pm$0.0000 \\ \hline
Aegaeon & 0.1712$\pm$0.0048$\pm$0.0311 & 0.1578$\pm$0.0077$\pm$0.0287 & 0.1440$\pm$0.0044$\pm$0.0262 & 0.1245$\pm$0.0952$\pm$0.0226 & 0.2322$\pm$0.0132$\pm$0.0422 \\
& -0.0007$\pm$0.0008$\pm$0.0001 & -0.0015$\pm$0.0008$\pm$0.0003 & -0.0040$\pm$0.0007$\pm$0.0007 & -0.0119$\pm$0.0189$\pm$0.0022 & -0.0053$\pm$0.0020$\pm$0.0010 \\ \hline
Mimas & 0.7362$\pm$0.0028$\pm$0.0015 & 0.7391$\pm$0.0036$\pm$0.0015 & 0.7147$\pm$0.0015$\pm$0.0014 & 0.7438$\pm$0.0079$\pm$0.0015 & 0.7640$\pm$0.0041$\pm$0.0015 \\
& -0.0077$\pm$0.0006$\pm$0.0000 & -0.0099$\pm$0.0008$\pm$0.0000 & -0.0085$\pm$0.0004$\pm$0.0000 & -0.0088$\pm$0.0013$\pm$0.0000 & -0.0080$\pm$0.0008$\pm$0.0000 \\ \hline
Methone & 0.5396$\pm$0.0043$\pm$0.0112 & 0.5621$\pm$0.0102$\pm$0.0116 & 0.5395$\pm$0.0056$\pm$0.0112 & 0.5228$\pm$0.0073$\pm$0.0108 & 0.5282$\pm$0.0086$\pm$0.0109 \\
& -0.0085$\pm$0.0008$\pm$0.0002 & -0.0054$\pm$0.0004$\pm$0.0001 & -0.0072$\pm$0.0011$\pm$0.0001 & -0.0109$\pm$0.0013$\pm$0.0002 & -0.0080$\pm$0.0015$\pm$0.0002 \\ \hline
Pallene & 0.5070$\pm$0.0063$\pm$0.0159 & 0.5436$\pm$0.0134$\pm$0.0171 & 0.4538$\pm$0.0049$\pm$0.0142 & 0.5336$\pm$0.0087$\pm$0.0168 & 0.5098$\pm$0.0057$\pm$0.0160 \\
& -0.0086$\pm$0.0011$\pm$0.0003 & -0.0055$\pm$0.0005$\pm$0.0002 & -0.0080$\pm$0.0010$\pm$0.0003 & -0.0107$\pm$0.0016$\pm$0.0003 & -0.0066$\pm$0.0009$\pm$0.0002 \\ \hline
Enceladus & 1.1264$\pm$0.0029$\pm$0.0009 & 1.1200$\pm$0.0036$\pm$0.0009 & 1.0949$\pm$0.0044$\pm$0.0009 & 1.1310$\pm$0.0048$\pm$0.0009 & 1.1433$\pm$0.0036$\pm$0.0009 \\
& -0.0120$\pm$0.0005$\pm$0.0000 & -0.0118$\pm$0.0007$\pm$0.0000 & -0.0107$\pm$0.0010$\pm$0.0000 & -0.0139$\pm$0.0008$\pm$0.0000 & -0.0115$\pm$0.0006$\pm$0.0000 \\ \hline
Tethys & 0.9152$\pm$0.0045$\pm$0.0010 & 0.9205$\pm$0.0048$\pm$0.0010 & 0.9330$\pm$0.0034$\pm$0.0011 & 0.9474$\pm$0.0109$\pm$0.0011 & 0.8694$\pm$0.0063$\pm$0.0010 \\
& -0.0103$\pm$0.0008$\pm$0.0000 & -0.0112$\pm$0.0008$\pm$0.0000 & -0.0097$\pm$0.0006$\pm$0.0000 & -0.0093$\pm$0.0023$\pm$0.0000 & -0.0089$\pm$0.0010$\pm$0.0000 \\ \hline
Telesto & 0.8680$\pm$0.0107$\pm$0.0212 & 0.9083$\pm$0.0154$\pm$0.0222 & 0.8019$\pm$0.0064$\pm$0.0196 & 0.9258$\pm$0.0201$\pm$0.0226 & 0.8096$\pm$0.0149$\pm$0.0197 \\
& -0.0144$\pm$0.0020$\pm$0.0004 & -0.0086$\pm$0.0006$\pm$0.0002 & -0.0106$\pm$0.0010$\pm$0.0003 & -0.0193$\pm$0.0047$\pm$0.0005 & -0.0103$\pm$0.0029$\pm$0.0003 \\ \hline
Calypso & 1.0692$\pm$0.0227$\pm$0.0450 & 1.0550$\pm$0.0296$\pm$0.0444 & 1.0738$\pm$0.0437$\pm$0.0452 & 1.0868$\pm$0.0374$\pm$0.0458 & 1.1901$\pm$0.0861$\pm$0.0501 \\
& -0.0142$\pm$0.0041$\pm$0.0006 & -0.0128$\pm$0.0014$\pm$0.0005 & -0.0150$\pm$0.0083$\pm$0.0006 & -0.0196$\pm$0.0095$\pm$0.0008 & -0.0032$\pm$0.0162$\pm$0.0001 \\ \hline
Dione & 0.6632$\pm$0.0097$\pm$0.0005 & 0.6517$\pm$0.0103$\pm$0.0005 & 0.7462$\pm$0.0092$\pm$0.0005 & 0.6868$\pm$0.0149$\pm$0.0005 & 0.5435$\pm$0.0139$\pm$0.0004 \\
& -0.0094$\pm$0.0017$\pm$0.0000 & -0.0046$\pm$0.0019$\pm$0.0000 & -0.0072$\pm$0.0014$\pm$0.0000 & -0.0069$\pm$0.0032$\pm$0.0000 & -0.0055$\pm$0.0020$\pm$0.0000 \\ \hline
Helene & 0.9935$\pm$0.0151$\pm$0.0110 & 1.0543$\pm$0.0174$\pm$0.0116 & 0.7958$\pm$0.2351$\pm$0.0088 & 1.0264$\pm$0.0275$\pm$0.0113 & 0.9315$\pm$0.0215$\pm$0.0103 \\
& -0.0204$\pm$0.0028$\pm$0.0002 & -0.0094$\pm$0.0007$\pm$0.0001 & -0.0090$\pm$0.0320$\pm$0.0001 & -0.0139$\pm$0.0052$\pm$0.0002 & -0.0029$\pm$0.0048$\pm$0.0000 \\ \hline
Polydeuces & 0.5879$\pm$0.0090$\pm$0.0769 & 0.5925$\pm$0.0123$\pm$0.0774 & 0.6206$\pm$0.0215$\pm$0.0811 & 0.5705$\pm$0.0084$\pm$0.0746 & 0.6065$\pm$0.0060$\pm$0.0793 \\
& -0.0079$\pm$0.0016$\pm$0.0010 & -0.0079$\pm$0.0006$\pm$0.0010 & -0.0073$\pm$0.0032$\pm$0.0010 & -0.0106$\pm$0.0016$\pm$0.0014 & -0.0322$\pm$0.0014$\pm$0.0042 \\ \hline
Rhea & 0.6984$\pm$0.0055$\pm$0.0005 & 0.6811$\pm$0.0065$\pm$0.0005 & 0.7760$\pm$0.0044$\pm$0.0006 & 0.6985$\pm$0.0041$\pm$0.0005 & 0.6179$\pm$0.0031$\pm$0.0005 \\
& -0.0116$\pm$0.0011$\pm$0.0000 & -0.0115$\pm$0.0011$\pm$0.0000 & -0.0100$\pm$0.0008$\pm$0.0000 & -0.0126$\pm$0.0011$\pm$0.0000 & -0.0071$\pm$0.0004$\pm$0.0000 \\ \hline
\end{tabular}}}
\end{table}
\clearpage
\pagebreak
\section{Brightness Trends among Saturn's small moons}
\label{trends}
The GEPM appears to provide reasonably consistent surface brightness estimates for moons with a range of sizes and shapes, so we will now compare these brightness estimates for the different moons and identify trends that might clarify the processes responsible for determining the moons' visible surface brightness. We begin by describing the specific brightness values we will use for this project in Section~\ref{estimates}. Section~\ref{ering} then shows how these brightness parameters for the mid-sized moons correlate with the local flux of E-ring particles, which is consistent with prior work. Next, Section~\ref{radiation} examines why some of the small moons are darker than expected given their locations within the E ring, and shows that high-energy proton radiation probably also influences the moons' surface brightness. Finally, Section~\ref{trojans} discusses the small moons that are brighter than one might expect given their locations.
\subsection{Comparable brightness estimates for Saturn's moons}
\label{estimates}
The calculations described in Section~\ref{model} demonstrate that our model is a sensible way to quantify the surface brightness of elongated moons. However, the absolute values of $B$ derived by this method are not directly comparable to traditional parameters like geometric or {Bond} albedos previously reported in the literature. In principle, the derived values of $B$ as functions of phase angle could be extrapolated to zero phase angle, and then we could translate that value of $B$ to a geometric albedo for an equivalent spherical object. However, in practice performing such an extrapolation would be problematic because the small moons were rarely observed by Cassini at phase angles below 10$^\circ$, so we have no information about the magnitude or shape of the opposition surges for these bodies. Also, since these objects are not spherical, the amount of light scattered will depend on the orientation of the body, which complicates the classical definition of albedo. On the other hand, it is relatively straightforward to estimate $B$ for Saturn's other moons, and so we have chosen that approach for this analysis.
In order to compare the reflectances of these objects, we only consider the data obtained between phase angles of 20$^\circ$ and 40$^\circ$, and fit the brightness coefficients in this phase range to a linear function of phase angle $\alpha$:
\begin{equation} B(\alpha) = B_0 + (\alpha-30^\circ) C_B \end{equation}
Note that $B_0$ is the brightness coefficient at 30$^\circ$ phase, which is in the middle of the range of observed phase angles and so does not depend on any questionable extrapolations. For a spherical object of radius $r$ observed at $0^\circ$ and $30^\circ$ phase, $a_{\rm pred}=2.12 r^2$ and $1.86 r^2$ respectively for a Lambertian scattering law {(for a Minnaert scattering law with $k=0.5$, these numbers become $a_{\rm pred}=2.12 r^2$ and $1.91 r^2$, respectively)}. Hence the average reflectance the object would have at 30$^\circ$ if it were a sphere is $<I/F>_{\rm sphere}=0.59 B_0$ (i.e. $1.86\,B_0/\pi$) for a Lambertian scattering law {(or $<I/F>_{\rm sphere}=0.61 B_0$ for a Minnaert $k=0.5$ scattering law)}. The corresponding geometric albedo of the object (the numerical factor below being $2.12/\pi\simeq0.67$), assuming that the linear phase function could be extrapolated to an {appropriately small} phase angle $\alpha_0$, would be\footnote{{Note that at small phase angles the brightness differences for Minnaert and Lambertian scattering laws are negligible since the incidence and emission angles are nearly identical.}}
\begin{equation}
\mathcal{A}_{\rm sphere}=0.67[B_0 + (\alpha_0-30^\circ) C_B]
\end{equation}
For the mid-sized moons Mimas, Enceladus, Tethys, Dione and Rhea, this expression yields $\mathcal{A}_{\rm sphere}=0.60, 0.92, 0.75, 0.57$ and 0.63, respectively, for $\alpha_0=10^\circ$, which are 0.97, 0.92, 0.90, 0.87 and 1.00 times the central values reported by \citet{Thomas18}, so at this phase angle the calculations are reasonably consistent. However, if we extrapolate to zero degrees phase we obtain $\mathcal{A}_{\rm sphere}= 0.65, 0.99, 0.82, 0.63$ and 0.70, which are 0.68, 0.72, 0.67, 0.63 and 0.74 times the geometric albedos measured by \citet{Verbiscer07}. This discrepancy most likely reflects the fact that this formula does not account for the opposition surges associated with these moons.
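For concreteness, this arithmetic can be reproduced directly from the tabulated coefficients; the snippet below uses the ``All Data'' values of $B_0$ and $C_B$ copied from Table~\ref{brighttab}, with $0.67\simeq2.12/\pi$ as discussed above.
\begin{verbatim}
# (B_0, C_B) at 30 deg phase, "All Data" column of the Lambertian table.
moons = {
    "Mimas":     (0.7546, -0.0072),
    "Enceladus": (1.1530, -0.0107),
    "Tethys":    (0.9363, -0.0094),
    "Dione":     (0.6785, -0.0088),
    "Rhea":      (0.7144, -0.0110),
}

def albedo_sphere(B0, CB, alpha0):
    """Equivalent-sphere albedo from a linear extrapolation of B(alpha);
    the factor 0.67 = 2.12/pi converts a_pred to an albedo."""
    return 0.67 * (B0 + (alpha0 - 30.0) * CB)

for name, (B0, CB) in moons.items():
    print(f"{name:10s} A(10 deg) = {albedo_sphere(B0, CB, 10.0):.2f},"
          f" A(0 deg) = {albedo_sphere(B0, CB, 0.0):.2f}")
\end{verbatim}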
\begin{figure*}
\centerline{\resizebox{5.5in}{!}{\includegraphics{clrmoons_compplot8_111119.pdf}}}
\caption{{Comparisons of the average brightness coefficients of the various moons derived assuming different Minnaert scattering laws (see also Tables~\ref{brighttab}--\ref{brighttab50}). While models assuming lower values of $k$ do yield systematically lower brightness coefficients, the differences between the different models are fairly subtle and do not alter the trends among the different moons.}}
\label{brightcomp}
\end{figure*}
For the sake of simplicity we will just use the parameters $B_0$ in our analyses below. {Tables~\ref{brighttab}--\ref{brighttab50} tabulate $B_0$ and $C_B$ for all of Saturn's moons examined here assuming either a Lambertian scattering law or a Minnaert scattering law with $k=0.75$ or $k=0.50$}. These tables also include statistical errors based on the scatter of the points around the mean trend, and a systematic error which is based on the uncertainty in the mean size of the object. Additional systematic uncertainties associated with the size of the pixel response function are of order 5\% and are not included here because these systematic uncertainties are common to all the brightness estimates and so do not affect the moons' relative brightness.
{Figure~\ref{brightcomp} compares the values of $B_0$ derived assuming the different scattering laws for each of the moons. This plot shows that models assuming lower values of the Minnaert $k$ parameter yield systematically lower estimates of $B_0$, but that these differences are rather subtle and have little effect on the relative brightnesses of the various moons. Hence, for the sake of simplicity we will only consider the estimates of $B_0$ computed assuming a Lambertian scattering law from here on.} Figure~\ref{eringcomp} plots these central brightness values as a function of distance from the center of Saturn, along with estimates of the relative E-ring particle density and fluxes derived from images, as well as the relative energetic proton and electron fluxes (see below).
\begin{figure*}
\centerline{\resizebox{5.5in}{!}{\includegraphics{clrmoons_compplot6_092619.pdf}}}
\caption{Correlations between the moons' brightness and their environment. The top panel shows the brightness coefficients of the moons computed assuming a Lambertian scattering law as a function of distance from the planet's center. Note that the locations of the trojan moons Telesto, Calypso, Polydeuces and Helene are offset slightly for the sake of clarity. The next two panels show the E-ring's relative brightness density and the relative flux of E-ring particles into the moons (i.e. the parameters $F$ and $|\delta F_\lambda|$; see text). The bottom panel shows the flux of high-energy protons and electrons derived by the MIMI LEMMS instrument from one early pass through this entire region.}
\label{eringcomp}
\end{figure*}
\subsection{Correlations between the moons' brightnesses and the E ring particle flux}
\label{ering}
Figure~\ref{eringcomp} shows that the brightness coefficients for the largest moons (Janus, Epimetheus, Mimas, Tethys, Dione and Rhea) follow the same basic trends as previously reported for these moons' geometric albedos \citep{Verbiscer07, Verbiscer18}. The brightness of these larger moons falls off with distance from Enceladus' orbit, which strongly suggests that the E-ring plays an important role in determining these moons' surface brightness. Furthermore, the leading-trailing asymmetries in these moons' surface brightness are also consistent with how E-ring particles are expected to strike the moons \citep{Buratti98, Schenk11}. Most of the visible E ring particles have orbital semi-major axes close to Enceladus and a range of eccentricities \citep{Horanyi92, HB94}. The E-ring particles outside Enceladus' orbit are therefore closer to the apocenter of their eccentric orbits and so are moving slower around the planet than the moons are. Hence Tethys, Dione and Rhea mostly overtake the E-ring particles and the corresponding flux onto these moons is larger on their leading sides. On the other hand, the E-ring particles inside Enceladus' orbit are closer to their orbital pericenters, and so tend to move faster around the planet than the moons. Hence the E-ring particles tend to preferentially strike the trailing sides of Janus, Epimetheus and Mimas. For all of these moons, the side that is preferentially struck by E-ring particles is brighter, as one would expect if the E-ring flux were responsible for brightening the moons.
The connection between the E-ring and the moons' brightness can be made more quantitative by estimating the flux of E-ring particles onto each of the moons. The E ring consists of particles with a wide range of orbital elements, so detailed numerical simulations will likely be needed to accurately compute the fluxes onto each moon. Such simulations are beyond the scope of this report, and so we will here simply approximate the particle flux based on simplified analytical models of the E ring motivated by the available remote-sensing and in-situ observations.
Prior analyses of E-ring images obtained by the Cassini spacecraft provided maps of the local brightness density within this ring as functions of radius and height above Saturn's equatorial plane. These brightness densities are proportional to the local number density of particles times the size-dependent scattering efficiency for those particles, so these maps provide relatively direct estimates of the particle density in the vicinity of the moons. For this particular study, we will focus on the E-ring density profile shown in Figure~\ref{eringcomp}, which is derived from the E130MAP observation made on day 137 of 2006, and is described in detail in \citet{Hedman12}. This observation included a set of wide-angle camera images that provided an extensive and high signal-to-noise edge-on map of the E ring at phase angles around 130$^\circ$ (W1526532467, W1526536067, W1526539667, W1526543267, W1526546867, W1526550467 and W1526554067). These images were assembled into a single mosaic of the edge-on ring, and then onion-peeled to transform the observed integrated brightness map into a map of the local ring brightness density in a vertical cut through the ring \citep{Hedman12}. The profile of the brightness density near Saturn's equatorial plane was extracted from this map as the average brightness in regions between 1000 and 2000 km from Saturn's equator plane after removing a background level based on the average brightness 20,000-30,000 km away from that plane. Note the region used here deliberately excluded the region within 1000 km of the equator plane to minimize contamination from the G ring interior to 175,000 km. The vertical scale height of the E ring is sufficiently large that including data in the 1000-2000 km range should provide a reasonable estimate of the density in the plane containing all these moons.
Consistent with previous studies, the E-ring brightness density is strongly peaked around Enceladus' orbit. However, this profile also differs from the profiles reported elsewhere in the literature. For example, this profile is much more strongly peaked than the profile used by \citet{Verbiscer07}. This is because the \citet{Verbiscer07} profile was of the E-ring's vertically integrated brightness, rather than the brightness density close to the equatorial plane. The latter quantity is more sharply peaked because the E-ring's vertical thickness increases with distance from Enceladus' orbit, and because the ring is warped so that its peak brightness shifts away from Saturn's equator plane far from Enceladus' orbit \citep{Kempf08, Hedman12, Ye16}. Since all the moons orbit close to the planet's equator plane, the profile shown in Figure~\ref{eringcomp} is a better representation of the relative dust density surrounding the moons. On the other hand, the profile shown in Figure~\ref{eringcomp} is also a bit broader than in-situ measurements would predict. These data have generally been fit to models where the particle density falls off with distance from Enceladus' orbit like power laws with very large indices \citep{Kempf08, Ye16}. Assuming that the brightness density is proportional to the particle number density, these models tend to underpredict the signals seen exterior to 5 $R_S$ and interior to 3 $R_S$. These discrepancies arise in part because the in-situ instruments are only sensitive to grains with radii larger than 0.9 microns, while the images are sensitive to somewhat smaller particles that probably have a broader spatial distribution \citep{Horanyi08, Hedman12}.
The profile shown in Figure~\ref{eringcomp} therefore should provide a reasonable estimate of the relative densities of the particles larger than 0.5 $\mu$m across within this ring.
However, for the purposes of understanding how the E-ring affects the moons' surface properties, we are primarily interested in the E-ring particle {flux} into the moons, which depends not only on the particle density, but also on the particles' velocity distribution. Since a detailed investigation of the E-ring's orbital element distribution is beyond the scope of this paper, we will here consider a simple analytical model of the E-ring particles' orbital properties that can provide useful rough estimates of the particle flux into the various moons.
Prior analyses of E-ring images indicate that most of the visible particles in this ring have semi-major axes close to Enceladus' orbit, and that the large size of this ring is because the particles have a wide range of eccentricities and inclinations. Furthermore, the images indicate that the mean eccentricities, inclinations and semi-major axes are strongly correlated with each other \citep{Hedman12}. We therefore posit that the E-ring density distribution seen in Figure~\ref{eringcomp} primarily reflects a distribution of eccentricities $\mathcal{F}(e)$. We also allow the semi-major axes of the particles to vary, but for the sake of simplicity we assume that the semi-major axis of each particle, $a$, is a strict function of $e$: $a=[240{,}000+(e/0.1)\times5000]$ km, which is consistent with the imaging data \citep{Hedman12}. Since we are only concerned with particles near Saturn's equatorial plane, we will not consider the inclination distributions here.
For a given eccentricity distribution $\mathcal{F}(e)$, the particle density as a function of radius $d(r)$ should be given by the following integral:
\begin{equation}
d(r)=\int_{e_{min}}^1 \frac{2}{|v_r| T} \mathcal{F}(e) de,
\end{equation}
where $v_r$ is the radial velocity of the particles and $T$ is their orbital period, and the factor of two arises because particles with a finite eccentricity pass through each radius twice per orbit. The integral's lower limit is $e_{min}=|1-r/a|$ because only particles with eccentricities greater than this can reach the observed $r$.
Similarly, the average flux of particles onto a body on a circular orbit at a given radius is given by the integral:
\begin{equation}
{F}(r)=\int_{e_{min}}^1 \frac{1}{2|v_r| T} \mathcal{F}(e)\sqrt{v_r^2+\delta v_\lambda^2} de,
\end{equation}
where $v_r$ is the particle's radial velocity as defined above, and $\delta v_\lambda$ is the particle's azimuthal velocity measured relative to the local circular orbital velocity. Note that a factor of $1/4$ arises from averaging over the moon's surface. Since the radial motions of particles on any given orbit are symmetric inwards and outwards, in this model there are no differences in the fluxes on the sub-Saturn and anti-Saturn sides of the moon. However, because the particles at a given radius can have different average azimuthal velocities, there can be asymmetries in the fluxes on the moon's leading and trailing sides. These asymmetries can be parametrized by the following integral:
\begin{equation}
\delta F_\lambda(r)=\int_{e_{min}}^1 \frac{1}{2|v_r| T} \mathcal{F}(e)\delta v_\lambda de,
\end{equation}
which is {\bf half} the difference between the fluxes on the leading and trailing points.
In order to evaluate these integrals, recall the standard expressions for $v_r$ and $\delta v_\lambda$ in terms of orbital elements \citep{MD99}:
\begin{equation}
v_r=\frac{na}{\sqrt{1-e^2}}e\sin f
\end{equation}
\begin{equation}
\delta v_\lambda=\frac{na}{\sqrt{1-e^2}}(1+e\cos f)-na\sqrt{a/r}
\end{equation}
where $n=2\pi/T$ is the mean motion of the particle and $f$ is the true anomaly of the particle's orbit. At a given radius $r$ we will be observing particles at different true anomalies given by the standard expression:
\begin{equation}
r=\frac{a(1-e^2)}{1+e\cos f}
\end{equation}
so we can re-express $|v_r|$ and $\delta v_\lambda$ in terms of $r$:
\begin{equation}
|v_r|=\frac{na^2}{r}\sqrt{e^2-(r/a-1)^2},
\end{equation}
\begin{equation}
\delta v_\lambda=\frac{na^2}{r}\left[\sqrt{1-e^2}-\sqrt{r/a}\right].
\label{vlambda}
\end{equation}
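Both of these substitutions are easy to check numerically; the short sketch below (using arbitrary illustrative orbital parameters) compares the radial forms with the original true-anomaly expressions.
\begin{verbatim}
import numpy as np

# Compare v_r and delta-v_lambda written in terms of the true anomaly f
# with the radial forms quoted above, for one illustrative orbit.
n, a, e = 2 * np.pi, 1.0, 0.3          # mean motion, semi-major axis, eccentricity
f = np.linspace(0.1, np.pi - 0.1, 7)   # true anomalies away from the turning points
r = a * (1 - e**2) / (1 + e * np.cos(f))
vr_f = n * a * e * np.sin(f) / np.sqrt(1 - e**2)
vr_r = (n * a**2 / r) * np.sqrt(e**2 - (r / a - 1)**2)
dvl_f = n * a * (1 + e * np.cos(f)) / np.sqrt(1 - e**2) - n * a * np.sqrt(a / r)
dvl_r = (n * a**2 / r) * (np.sqrt(1 - e**2) - np.sqrt(r / a))
print(np.allclose(vr_f, vr_r), np.allclose(dvl_f, dvl_r))   # True True
\end{verbatim}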
Inserting these expressions into the above integrals, we find the density, flux and flux asymmetries can all be expressed as the following integrals over the eccentricity distribution (remember that $a$ is implicitly a function of $e$):
\begin{equation}
d(r)=\int_{e_{min}}^1 \frac{r\mathcal{F}(e)}{\pi a^2\sqrt{e^2-(r/a-1)^2}} de,
\end{equation}
\begin{equation}
F(r)=\int_{e_{min}}^1 \frac{n\mathcal{F}(e)}{4\pi}\sqrt{1+\frac{\left(\sqrt{1-e^2}-\sqrt{r/a}\right)^2}{e^2-(r/a-1)^2}} de,
\end{equation}
\begin{equation}
\delta F_\lambda(r)=\int_{e_{min}}^1 \frac{n\mathcal{F}(e)(\sqrt{1-e^2}-\sqrt{r/a})}{4\pi\sqrt{e^2-(r/a-1)^2}} de.
\end{equation}
It turns out that the observed density distribution shown in Figure~\ref{eringcomp} can be matched reasonably well by assuming the eccentricity distribution is a Lorentzian times a regularization term to avoid singularities at $e=0$:
\begin{equation}
\mathcal{F}(e)=\frac{\mathcal{F}_0}{1+e^2/e_0^2}[1-\exp(-e/e_c)]
\end{equation}
with constants $\mathcal{F}_0$, $e_0$ and $e_c$. The specific model density distribution shown in Figure~\ref{eringcomp} has $e_0=0.17$ and $e_c=0.01$. Note that $e_c$ basically only affects the density levels close to Enceladus' orbit, while $e_0$ determines how quickly the signal falls off away from the E-ring core.
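These integrals are straightforward to evaluate numerically. The stand-alone sketch below (illustrative rather than production code; the grid resolution and the use of simple midpoint sums across the integrable endpoint singularity are our choices) evaluates $d(r)$, $F(r)$ and $\delta F_\lambda(r)$ for this ansatz at the orbital radii of several moons. The outputs carry the undetermined normalization $\mathcal{F}_0$, so only their relative values are meaningful; absolute fluxes follow from scaling the model's peak density to the measured value discussed below.
\begin{verbatim}
import numpy as np

GM = 3.793e16                  # Saturn GM [m^3/s^2]
Rs = 60268e3                   # Saturn radius [m]

# Midpoint grid in eccentricity, the Lorentzian ansatz F(e) (with F_0 = 1),
# and the e-dependent semi-major axis a(e) = 240,000 km + 5,000 km per 0.1 in e.
Ne = 200000
e = (np.arange(Ne) + 0.5) / Ne
de = 1.0 / Ne
Fe = (1.0 / (1.0 + (e / 0.17)**2)) * (1.0 - np.exp(-e / 0.01))
a = 240000e3 + (e / 0.1) * 5000e3
n = np.sqrt(GM / a**3)         # mean motion [rad/s]

def moments(r):
    """Relative d(r), F(r) and dF_lambda(r) by midpoint summation of the
    integrals above; only eccentricities whose orbits reach r contribute."""
    disc = e**2 - (r / a - 1.0)**2
    ok = disc > 0
    s = np.sqrt(disc[ok])
    dv = np.sqrt(1.0 - e[ok]**2) - np.sqrt(r / a[ok])
    d = np.sum(r * Fe[ok] / (np.pi * a[ok]**2 * s)) * de
    F = np.sum(n[ok] * Fe[ok] * np.sqrt(1.0 + dv**2 / disc[ok])) * de / (4*np.pi)
    dFl = np.sum(n[ok] * Fe[ok] * dv / s) * de / (4 * np.pi)
    return d, F, dFl

for name, rmoon in [("Mimas", 3.08), ("Enceladus", 3.95), ("Tethys", 4.89),
                    ("Dione", 6.26), ("Rhea", 8.74)]:    # orbital radii in Rs
    d, F, dFl = moments(rmoon * Rs)
    print(f"{name:10s} d = {d:.3e}  F = {F:.3e}  dF_lambda = {dFl:+.3e}")
\end{verbatim}
Scaling $d(r)$ so that its peak corresponds to the adopted peak number density then converts $F$ and $\delta F_\lambda$ into absolute fluxes like those quoted in Table~\ref{fluxtab}.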
Using this same ansatz for $\mathcal{F}(e)$, we obtain estimates for how both the flux and flux asymmetry vary with radius, which are shown in the third panel of Figure~\ref{eringcomp}. First note that the flux is much less sharply peaked than the density. This occurs because the particles are moving faster relative to the local circular velocity at larger distances from Enceladus' orbit thanks to their higher eccentricities. Also note that the azimuthal component of the flux is larger exterior to Enceladus' orbit than it is interior to that moon's orbit. This asymmetry arises because exterior to Enceladus the particles are all moving slower than the local circular velocity, while interior to Enceladus, the particles are all moving faster than the local circular speed, but the azimuthal components of their velocity can still fall below the local circular velocity depending on their true anomaly.
Finally, we can translate these trends in the relative flux into rough estimates of the absolute flux of E-ring particles using recent in-situ measurements of the particle number density in the E-ring core. In principle, the brightness density seen in images can be translated into estimates of the particle number density, but in practice the relevant conversion factor depends on the size-dependent light scattering efficiency of the particles. By contrast, the peak E-ring particle density has been directly measured to be between 0.02 and 0.2 /m$^3$ with various in-situ measurements \citep{Kempf08, Ye16}. Now, as mentioned above, these densities are for the particles larger than the detection thresholds of these instruments, which is slightly larger than the sizes of the particles seen in remote-sensing images. Hence the number density of visible particles could be somewhat larger than these measurements. Nevertheless, these are still the best estimates of the absolute particle density near the core of the E ring, and they should still provide a useful order-of-magnitude estimate of the visible particles. For the sake of concreteness, we will here assume a peak number density of 0.03 particles/m$^3$, which is the value measured by \citet{Ye16} for particles larger than 1 $\mu$m in radius.
\begin{figure}
\resizebox{6in}{!}{\includegraphics{clrmoons_compplot6x_080419.pdf}}
\caption{Moon brightness coefficients versus computed E-ring particle flux, assuming a peak number density of 0.03/m$^3$, $e_0=0.17$ and $e_c=0.01$. The diamonds are the average values for each moon, while the orange triangles and magenta stars indicate the values for the leading and trailing sides of the mid-sized moons.}
\label{eringflux}
\end{figure}
Figure~\ref{eringflux} and Table~\ref{fluxtab} give the brightness coefficients and estimated E-ring particle fluxes for all the moons known to be embedded in the E ring. For the mid-sized moons and the co-orbitals, there is a very good correlation between the moons' brightness coefficients and the estimated fluxes. This correlation not only applies to the average values for each moon, but also to their leading and trailing sides, where the estimated flux values are taken to be $F\pm\delta F_\lambda/\sqrt{2}$ (the factor of $\sqrt{2}$ accounts for the fact that the observations span a full quadrant of each body). The one mid-sized moon that falls noticeably off the main trend is Rhea. {This could be related to other unusual aspects of Rhea's surface. Rhea's visible spectrum is redder than that of any of the other mid-sized moons \citep{Thomas18}, and its near-infrared spectrum exhibits stronger ice bands than Dione's \citep{Filacchione12, Scipioni14}. Furthermore, radio-wave data indicate that Rhea has a higher albedo at centimeter wavelengths than Dione \citep{Black07, LeGall19}. Various explanations have been put forward for these anomalies, including differences in the moons' regolith structure \citep{Ostro10}, and variations in the energetic particle and/or ring particle flux \citep{Scipioni14, Thomas18}. The latter option is probably the more likely explanation for Rhea's excess brightness at visible wavelengths, since the moon is in a region where both the predicted fluxes of E-ring grains and energetic particles are expected to be very low (see also below). Indeed, our simple E-ring model is probably not completely accurate near Rhea's orbit. Detailed numerical simulations of the E-ring particles reveal that there could be a population of sub-micron particles orbiting in the outskirts of the E ring close to Rhea's orbit \citep{Horanyi08}. This population could potentially increase the particle flux into Rhea, moving that data point closer to the trend defined by Mimas and Dione, but it is not clear whether it could also explain that moon's spectral properties.} In any case, overall these data confirm that the E-ring particle flux is an important factor for determining the surface brightness of the mid-sized and co-orbital satellites.
Turning to the smaller moons, however, the situation is more complicated. Two of the co-orbital moons --Telesto and Polydeuces-- fall close to the same trend as their larger companions. However, the other two co-orbitals --Calypso and Helene-- fall noticeably above this trend. Meanwhile, Aegaeon, Methone and Pallene all fall well below the trend for the mid-sized moons. This implies that something besides the E-ring flux is affecting the brightnesses of these small moons.
\begin{table}
\caption{Particle and radiation fluxes}
\label{fluxtab}
\centerline{\begin{tabular}{| l | c c c | c c c | r |}\hline
Moon & \multicolumn{3}{c|}{$B_0^a$} & \multicolumn{3}{c|}{E-ring flux$^b$} & \multicolumn{1}{c|}{Radiation flux$^c$} \\
& \multicolumn{3}{c|}{ } &\multicolumn{3}{c|}{(part. m$^{-2}$sec$^{-1}$)} & \multicolumn{1}{c|}{(protons cm$^{-2}$sec$^{-1}$)} \\
& Ave. & Lead & Trail & Ave. & Lead & Trail & \multicolumn{1}{c|}{ in range 25-59 MeV} \\ \hline
Janus & 0.53 & 0.47 & 0.58 & 1.21 & 1.01 & 1.41 & 5.4 \\
Epimetheus & 0.46 & 0.45 & 0.52 & 1.21 & 1.01 & 1.41 & 7.4 \\
Aegaeon & 0.18 & 0.16 & 0.24 & 1.60 & 1.32 & 1.88 & 920.4 \\
Mimas & 0.75 & 0.73 & 0.79 & 2.23 & 1.81 & 2.64 & 5.6 \\
Methone & 0.56 & 0.55 & 0.55 & 2.66 & 2.15 & 3.17 & 140.2 \\
Pallene & 0.53 & 0.47 & 0.53 & 3.89 & 3.19 & 4.60 & 162.0 \\
Enceladus & 1.15 & 1.12 & 1.17 & 6.44 & 6.25 & 6.63 & 4.4 \\
Tethys & 0.94 & 0.96 & 0.89 & 3.46 & 4.77 & 2.15 & 3.2 \\
Telesto & 0.90 & 0.84 & 0.84 & 3.46 & 4.77 & 2.15 & 3.2 \\
Calypso & 1.10 & 1.13 & 1.21 & 3.46 &4.77 & 2.15 & 3.2 \\
Dione & 0.68 & 0.76 & 0.56 & 1.53 & 2.33 & 0.72 & 3.1 \\
Helene & 1.02 & 0.84 & 0.95 & 1.53 & 2.33 & 0.72 & 3.1 \\
Polydeuces & 0.61 & 0.65 & 0.63 & 1.53 & 2.33 & 0.72 & 3.1 \\
Rhea$^d$ & 0.71 & 0.79 & 0.63 & 0.46 & 0.76 & 0.15 & 3.1 \\
\hline
\end{tabular}}
$^a$ Brightness coefficient at 30$^\circ$ computed assuming the surface follows a Lambertian scattering law; see Table~\ref{brighttab}.
$^b$ E-ring flux computed assuming $e_0=0.17$, $e_{min}=0.01$ and a peak number density of 0.03 particles per cubic meter. Leading and trailing side values given by $F\pm \delta F_\lambda/\sqrt{2}$.
$^c$ Radiation fluxes computed assuming $\pi$ steradian viewing angle.
$^d$ Radiation fluxes at Rhea assumed to be the same as those for Dione.
\end{table}
\subsection{Darkening Aegaeon, Methone and Pallene with radiation}
\label{radiation}
\begin{figure*}
\resizebox{6.5in}{!}{\includegraphics{mooncompdisp_gamp75_073019.pdf}}
\caption{Resolved images of Saturn's moons at phase angles between 50$^\circ$ and 60$^\circ$, shown with a common stretch with a gamma of 0.75. Letters following the names indicate the hemisphere observed (T=trailing, L=leading, A=anti-Saturn). This gallery clearly demonstrates that Aegaeon is anomalously dark and has a comparable surface brightness to Iapetus' dark side. {Image files used for this mosaic are: N1643265020 = Aegaeon, N1870695654 = Atlas,
N1506184171 = Calypso (T), N1656997840 = Daphnis, N1597675353 = Dione (T), N1606072353 = Dione (L),
N1824725635 = Enceladus (T), N1521539514 = Epimetheus (T), N1687121104 = Helene (L), N1669612923 = Hyperion, N1510339442 = Iapetus (L), N1521539514 = Janus (T), N1627323065 = Janus (A), N1716192290 = Methone (T), N1484509816 = Mimas (L), N1855894795 = Mimas (T), N1867602424 = Pan, N1504612286 = Pandora (T), N1853392689 = Prometheus (T), N1505772509 = Rhea (L), N1591815592 = Rhea (T), N1514163567 = Telesto (T), N1855868106 = Tethys (L), N1563723519 = Tethys (T).}
}
\label{ims}
\end{figure*}
Aegaeon, Methone and Pallene are all considerably darker than one would have predicted given their locations and the observed trend between Janus, Epimetheus, Mimas and Enceladus. Aegaeon in particular is exceptionally dark, with a surface brightness coefficient around 0.2. This conclusion is consistent with the few resolved images of these bodies, which show that only the dark side of Iapetus has a surface brightness as low as Aegaeon's (see Figure~\ref{ims}). In principle, the darkness of these moons could be due to a number of different factors, including their small size and their associations with other dusty rings, but the most likely explanation involves their locations within Saturn's radiation belts.
Aegaeon, Methone and Pallene are much smaller than the mid-sized moons, and so their small sizes could somehow be responsible for their low surface brightness. Indeed, based on preliminary investigations of Aegaeon's low surface brightness, \citet{Hedman10} suggested that a recent collision catastrophically disrupted the body, stripping off its bright outer layers to produce the debris that is now the G ring, and leaving behind a dark remnant. Deeper examination, however, does not support this model. For example, if a collision were sufficient to expose the dark core of a kilometer-scale body, then small impacts should expose dark materials on the other moons, but no such dark patches have been seen in any of the high-resolution images obtained by Cassini. More fundamentally, attributing Aegaeon's extreme darkness to some discrete event in the past is difficult because the E-ring should be able to brighten this moon relatively quickly. Assuming typical particle sizes of around 1 $\mu$m, the above estimates of the E-ring flux imply that Aegaeon would acquire a layer of E-ring particles 10 $\mu$m thick in only 100,000 years. This calculation neglects mixing within Aegaeon's regolith and any material ejected from Aegaeon, but even this basic calculation shows that E-ring particles would cause Aegaeon to brighten on very short timescales, which strongly suggests that whatever process is making Aegaeon dark is actively ongoing.
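As a rough consistency check (a sketch assuming each grain is a 1~$\mu$m-radius ice sphere and unit sticking efficiency; the numbers are illustrative, not fitted), the mean flux of $F \approx 1.6$ m$^{-2}$s$^{-1}$ at Aegaeon from Table~\ref{fluxtab} implies a coating rate of
\begin{equation}
\frac{dh}{dt} \sim F\,\frac{4}{3}\pi a^3 \approx \left(1.6~{\rm m^{-2}\,s^{-1}}\right)\left(4.2\times10^{-18}~{\rm m^{3}}\right) \approx 0.2~{\rm nm/yr},
\end{equation}
which builds up a layer of order 10 $\mu$m over $10^5$ years, consistent with the figure quoted above.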
Another possibility is that Aegaeon, Methone and Pallene are dark because they are embedded not only in the E ring, but also in narrower rings like the G ring. However, since these rings likely consist of debris knocked off the moons' surfaces, this does not necessarily explain why this material would be so dark. Another problem is that Prometheus and Pandora are rather bright, even though these moons are close to the F ring and therefore likely coated in F-ring material (see below). Furthermore, since the rings/arcs associated with Methone and Pallene are over an order of magnitude fainter than the G ring \citep{Hedman18}, it seems unlikely that these tenuous rings could significantly affect the surface properties of those two moons. Thus the G ring and other dusty rings do not appear to provide a natural explanation for the darkness of these small moons.
A more plausible explanation for the low surface brightnesses of Aegaeon, Methone and Pallene is that these moons orbit Saturn within regions that have unique radiation environments. The bottom panel of Figure~\ref{eringcomp} shows profiles of energetic proton and electron fluxes from two of the highest energy channels of Cassini's Low-Energy Magnetospheric Measurement System (LEMMS) instrument \citep{Krimigis04}. Note that while the electron flux has a broad peak centered around the orbit of Aegaeon, the high-energy proton flux is confined to three belts between the main rings and the orbits of Janus, Mimas and Enceladus \citep{Roussos08}. This disjoint distribution of high-energy protons arises because energetic protons circle the planet under the influence of the corotation electric field, but their longitudinal motion is enhanced by gradient-curvature drifts, which act in the same direction as corotation. These particles therefore orbit the planet with a period of one to a few hours. Combined with the alignment of the spin and dipole axes, this drift allows these protons to re-encounter the inner moons frequently, causing a permanent flux decrease (macrosignature) along their orbits for sufficiently energetic protons. It is important to realize that these macrosignatures are not just depressions in the energetic proton density, but are also regions where the {\em flux} of protons is greatly decreased. Proton macrosignatures along the moon orbits are visible in the data at energies of $>$ 300 keV or so, but are only expected to exist at much higher energies for electrons. Since Janus and Mimas are exposed to fluxes of energetic electrons comparable to those striking Aegaeon, Methone and Pallene, the electron flux is not a natural explanation for the smaller moons' low surface brightness. However, Aegaeon, Methone and Pallene are exposed to much higher fluxes of high-energy protons than any of Saturn's other moons, so this is a plausible potential explanation for the darkness of these three moons.
Table~\ref{fluxtab} provides more quantitative estimates of the proton fluxes into the different moons. Here we use the protons observed with the P8 channel on the LEMMS instrument as a proxy for the total flux of high-energy protons \citep{Krimigis04}. In the radiation belts, this channel's sensitivity is mainly to protons of 25.2 to 59 MeV, and the numbers reported in Table~\ref{fluxtab} are for an acceptance solid angle of 1 steradian to facilitate comparisons with the E-ring particle flux. Since Mimas, Janus and Epimetheus occupy narrow gaps in the radiation belts, the fluxes provided in this table are averaged over the moons' true anomalies. Furthermore, for Janus and Epimetheus we also average over the semi-major axis variations associated with these moons' co-orbital interactions. Also note that while estimates of the Galactic Cosmic Ray background have been removed from these data, the fluxes for the moons beyond Enceladus are probably upper limits. Nevertheless, these numbers clearly show that Methone, Pallene and Aegaeon experience exceptionally high proton fluxes.
\begin{figure}
\resizebox{6in}{!}{\includegraphics{clrmoons_compplot6yy_080419.pdf}}
\caption{Moon brightness versus the ratio of the radiation mass flux to the E-ring particle mass flux. The radiation mass flux here is the mass flux of protons with energies between 25 and 59 MeV, while the E-ring mass flux is the mass flux of particles larger than 1 micron in radius. Note that the radiation fluxes for Methone, Pallene and Aegaeon are measured, while for the other moons they may represent upper limits. Still, the overall trend of decreasing brightness with increasing flux ratio is clear.}
\label{radflux}
\end{figure}
If radiation damage from high-energy protons is the dominant darkening process and the E-ring particles are the dominant brightening process, then we would expect the moons' equilibrium surface brightness to depend the ratio of the radiation flux to the E-ring particle flux. To make this ratio properly unitless, we consider the mass flux ratio for high-energy protons and E-ring particles. This quantity is computed by taking the number flux of protons given in Table~\ref{fluxtab} multiplied by the proton mass and dividing that number by the average flux of E-ring particles given in Table~\ref{fluxtab} multiplied by $m_{\rm eff}$, the effective average mass of particles with radii larger than 1 $\mu$m. Specifically, we assume $m_{\rm eff}=4\pi \ln(100)\times 10^{-15}$~kg, which is consistent with a power-law size distribution with a differential index of -4 \citep{Ye14} extending between 1 and 100 microns, and a particle mass density of 1 g/cm$^3$. Figure~\ref{radflux} shows that the moons' brightnesses do indeed systematically decrease as this ratio increases. Furthermore, Methone, Pallene and Aegaeon fall along the same basic trend in this plot as the mid-sized and co-orbital moons. This match provides strong evidence that radiation damage is the dominant process responsible for making Methone, Pallene and Aegaeon dark.
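For reference, this value of $m_{\rm eff}$ follows directly from the assumed size distribution (a short derivation using only the quantities stated above). For a differential distribution $n(a) \propto a^{-4}$ with $a_{\rm min} = 1$ $\mu$m, $a_{\rm max} = 100$ $\mu$m, and mass density $\rho_p = 1$ g/cm$^3$, the mean particle mass is
\begin{equation}
m_{\rm eff} = \frac{\int_{a_{\rm min}}^{a_{\rm max}} \tfrac{4}{3}\pi \rho_p a^{3}\, a^{-4}\, da}{\int_{a_{\rm min}}^{a_{\rm max}} a^{-4}\, da} \approx 4\pi \rho_p\, a_{\rm min}^{3} \ln\!\left(\frac{a_{\rm max}}{a_{\rm min}}\right) = 4\pi \ln(100)\times 10^{-15}~{\rm kg},
\end{equation}
where the normalization integral in the denominator is dominated by its lower limit.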
At the moment, we do not know precisely which components of the radiation flux are responsible for the darkening. The macrosignatures are most prominent in the highest-energy proton channels, which suggests energetic protons are the dominant agent, but other agents (e.g., electrons with even higher energies) may also contribute at some level. If the darkening agent is protons with energies greater than 1 MeV, there are several ways that they could be altering the surface. \citet{Howett11} suggested that energetic electrons ``sinter" or fuse grains in the uppermost layer, which can affect the reflectance properties at a range of wavelengths \citep{Schenk11, Howett18}. Electrons are slowed down mainly by interactions with electrons in materials, and protons are initially slowed in the same way, although as they decelerate they interact increasingly with nuclei. It is therefore possible that very energetic protons (in the first fraction of their mean range) cause the same kinds of changes to the ice as energetic electrons. Furthermore, protons have large gyroradii, so they can often affect the whole body, i.e., in many cases they do not weather the surface differentially. Radiation damage can also change the chemical properties of surface materials, {such as generating complex organics \citep{Hapke86, Poston18} or producing color centers in salts \citep{Hand15, Hibbitts19, Trumbo19}, which could darken the surface and produce distinctive spectral signatures. }
\begin{figure}
\resizebox{6.5in}{!}{\includegraphics{colors_ellipsoids_080419.pdf}}
\caption{Visible spectra of Aegaeon and Methone obtained from resolved images during Cassini's closest encounters with these moons. The only clear spectral feature is the break in slope around 500 nm, which is commonly found in spectra of the icy moons.}
\label{colors}
\end{figure}
\begin{figure}
\resizebox{6.5in}{!}{\includegraphics{methonespecplot2_112119.pdf}}
\caption{Near infrared spectrum of Methone derived from observations from the VIMS instrument. The brightness is shown at an arbitrarily normalized scale. Note the bands at 1.5, 2 and 3.1 microns are typical of water ice, and the overall shape of the spectrum is comparable to that of other small moons.}
\label{metspec}
\end{figure}
During its closest flybys of Aegaeon and Methone, Cassini's NAC obtained images of both moons through multiple filters. We processed these high-resolution color images with the same methods as described above for the mid-sized moons, yielding the brightness coefficients given in {Tables~\ref{aegaeoncoltab} and~\ref{methonecoltab}}. We then interpolated these measurements to a single phase angle to obtain the visible spectra shown in Figure~\ref{colors}. Furthermore, during the close encounter with Methone, the Visual and Infrared Mapping Spectrometer (VIMS) \citep{Brown04} was able to obtain near-infrared spectra of the moon. While VIMS was unable to spatially resolve Methone, it acquired five ``cubes'' with good signal-to-noise (V1716191964, V1716192051, V1716192374, V1716192461 and V1716192872) that could yield decent disk-integrated spectra. We calibrated these cubes with the standard pipelines \citep[][flux calibration RC19]{Brown04,Clark12}, co-added the signals in the pixels containing the moon signal, removed backgrounds based on adjacent pixels, normalized the resulting spectra and averaged them together to produce a single mean spectrum shown in Figure~\ref{metspec} (note that this spectrum is arbitrarily normalized). The near-infrared spectrum of Methone is dominated by water-ice, while the visible spectra of the two moons are generally flat with a spectral break around 0.5 microns. These moons' spectral properties are therefore very similar to those of the rest of Saturn's moons \citep{Filacchione12, Buratti10, Buratti19}. {Furthermore, the depths of the water-ice bands at 1.5 and 2.0 microns (computed following the procedures laid out in \citet{Filacchione12}) are 0.45$\pm$0.04 and 0.60$\pm$0.06, respectively. These values are very similar to those previously found for Janus, Epimetheus, Telesto, Dione and Rhea \citep{Filacchione12}.} Hence there is no evidence that these bodies have surfaces with distinctive spectrally active components, like irradiated salts \citep{Hibbitts19}. Instead, the radiation appears to change the surface brightness relatively uniformly at all wavelengths.
An interesting point of comparison for these spectra is Jupiter's moon Callisto. Callisto and Aegaeon are probably exposed to similar fluxes of high-energy protons \citep[roughly 10/(cm$^2$ s sr keV) around energies of 1 MeV, ][]{Paranicas18}, and both objects are comparably dark (Callisto has a geometric albedo around 0.2 between 0.5 and 2.5 microns \citep{Calvin95}, and extrapolating the available data suggests that Aegaeon's equivalent spherical albedo is around 0.15). Like Aegaeon, Callisto has a visible spectrum that is flat longwards of 0.6 microns and has a downturn at shorter wavelengths \citep{Calvin95, Moore07}. The structure in Callisto's spectrum in the near-infrared primarily consists of comparatively weak water-ice features, along with a weak carbon-dioxide band at 4.2 $\mu$m \citep{Moore07}, which is consistent with the compositional features seen in Methone and Saturn's other moons \citep{Clark08, Hendrix18, Buratti19}. It may therefore be that both Callisto and Aegaeon represent end-members of what ice-rich surfaces look like when exposed to large doses of high-energy radiation. If correct, it could mean that even if we had near-IR spectra of Aegaeon, they would not contain diagnostic spectral features of the chemicals responsible for making its surface dark. Determining whether the observed amount of darkening is consistent with structural changes in the regolith \citep{Howett11, Schenk11} or chemical changes in a more spectrally neutral component \citep[such as the non sulfur-bearing ices in][]{Poston18} will require detailed spectral models that are beyond the scope of this work.
\subsection{The excess brightness of Prometheus, Pandora, Calypso and Helene}
\label{trojans}
After Aegaeon, Methone and Pallene, the next most obvious anomalies in Figure~\ref{eringflux} are Helene and Calypso. Calypso is brighter than both Tethys and Telesto, while Helene is brighter than both Dione and Polydeuces. These findings are actually consistent with prior estimates of these moons' geometric albedos summarized in \citet{Verbiscer07, Verbiscer18}. At the same time, it is worth noting that despite being in a region where the E-ring flux should be extremely low, Pan, Atlas, Prometheus and Pandora are all reasonably bright, and in particular Prometheus and Pandora are brighter than any of the other moons in their vicinity. {The excess brightness of these moons is probably related to distinctive aspects of their near-infrared spectra. For the mid-sized moons, the depths of the moons' water-ice bands are reasonably well correlated with their visible brightness \citep{Filacchione12}. However, Calypso, Prometheus and Pandora all have deeper water-ice bands than Enceladus, while Atlas, Telesto and Helene have water-ice bands intermediate in strength between Tethys and Dione \citep{Buratti10, Filacchione12, Filacchione13}. Note that even though Helene does not appear to have as exceptionally deep water-ice bands as Calypso, both Calypso and Helene have deeper bands than their respective large companions Tethys and Dione. The anomalous spectral and photometric properties of Prometheus, Pandora, Calypso and Helene} could be attributed to a number of different factors, including the moons' size and differences in the local radiation environment. In practice, localized enhancements in the particle flux appear to be the most likely explanation, but the source of the excess particle flux responsible for the high observed brightnesses of Helene and Calypso is far from clear.
In principle, the surface brightness of small moons could be different from the brightness of large moons in the same environment because their reduced surface gravity affects the porosity and structure of their regolith. In fact, one aspect of these moons' surface properties probably can be attributed to their small size: their general lack of leading-trailing brightness asymmetries. In particular, all four of the trojan moons appear to lack the strong leading-trailing brightness asymmetries seen on Tethys and Dione.\footnote{{\citet{Hirata14} also found that there was not a strong leading-trailing asymmetry in the morphology of deposits on Telesto and Calypso. However, they also found that Helene appears to have a higher crater density on its trailing side, perhaps implying a thicker mantling deposit on its leading side.}} For Tethys and Dione, this asymmetry is thought to arise because of asymmetries in the E-ring flux, so it is perhaps surprising that similar asymmetries are not seen in the smaller moons. However, the smaller size and much lower surface gravity of these moons mean that secondary ejecta from the E-ring impacts can be more easily globally dispersed over their surfaces, providing a natural explanation for why these moons have more uniform surface properties.
While detailed modeling of impact debris transport around these small moons is beyond the scope of this paper, we can provide rough order-of-magnitude calculations that are sufficient to demonstrate the feasibility of this idea. Assuming that most of the E-ring particles striking these moons are near the apocenters of their eccentric orbits, we can assume $r \simeq (1+e) a$ in Equation~\ref{vlambda} and estimate the E-ring particles' impact velocities at the orbits of Tethys and Dione to be 1.4 km/s and 3.6 km/s, respectively. Experimental studies of impacts into ice-rich targets by 20-100 $\mu$m glass beads found impact yields at these velocities of order 100 \citep{Koschny01}, and assuming roughly comparable values for micron-sized E-ring grains implies an effective average ejection velocity of at most 14 m/s and 36 m/s, respectively (see the estimate below). These speeds are much less than the escape speeds from Tethys and Dione (which are 390 and 510 m/s respectively), but are comparable to the escape speeds of Telesto, Calypso, and Helene (which are 7, 19 and 9 m/s, respectively). Hence it is reasonable to think that the debris from E-ring particle impacts is more uniformly distributed on the smaller trojans than it is on Tethys and Dione, making their surfaces more uniform in brightness.
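The ejection-speed estimate above follows from momentum conservation (a minimal sketch, assuming the ejecta carry at most the impactor's incident momentum): if an impactor striking at speed $v_{\rm imp}$ excavates a yield $Y$ in ejecta mass per unit impactor mass, the mass-weighted mean ejection speed satisfies
\begin{equation}
\langle v_{\rm ej}\rangle \lesssim \frac{v_{\rm imp}}{Y} \approx \frac{1.4~{\rm km/s}}{100} = 14~{\rm m/s}
\end{equation}
at Tethys' orbit, and similarly $3.6~{\rm km/s}/100 = 36$ m/s at Dione's.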
However, while their size is likely responsible for their lack of leading-trailing brightness asymmetries, it is unlikely that size-related phenomena can explain the overall high brightness of Prometheus, Pandora, Calypso and Helene. The problem with such ideas is that there are no clear trends between size and brightness excess. Telesto is intermediate in size between Helene and Calypso, but is roughly the same brightness as Tethys. Also, Prometheus and Pandora are themselves intermediate in size between Epimetheus and Atlas. This lack of obvious trends with mass or size suggests that the brightness excess of these moons most likely reflects something different about their environments.
Given the previous analyses of the trends among the other moons, the easiest way to increase the brightness of these moons would be either by decreasing the radiation flux or increasing the particle flux. Decreasing the radiation flux is an unlikely explanation because the high-energy proton flux onto many of the moons is already very low (see Table~\ref{fluxtab}). Furthermore, for the radiation flux to be lower at Calypso than at Telesto, or lower at Helene than at Polydeuces, there would need to be a strong asymmetry in the radiation flux on either side of Tethys and Dione. While both Tethys and Dione have been observed to produce localized reductions in plasma known as microsignatures, these do not extend all the way to the trojan moons. Also, given that the plasma is bound to the planet's magnetic field, which rotates faster than the moons orbit around the planet, it is hard to imagine any interaction with charged particles that would similarly affect Calypso (which trails Tethys) and Helene (which leads Dione).
The above considerations leave variations in the particle flux as the best remaining option. For Prometheus and Pandora, this is a perfectly reasonable explanation, since these moons fall within the outskirts of the F ring, which is a natural source of dust flux into these moons. If an excess flux of F-ring particles is indeed responsible for the high brightness of Prometheus and Pandora, this is interesting because it means that the dust impacting the moons does not have to be the relatively fresh ice found in the E ring to produce increases in brightness. In this case, the lower-opacity dusty rings surrounding Pan and Atlas could be keeping those moons somewhat brighter than they would otherwise be given their locations well interior to the E ring.
Excess dust fluxes are also an attractive explanation for the excess brightness of Calypso and Helene because these moons are nearly as bright as Enceladus, an object whose brightness is certainly due to a high particle flux. However, there is no obvious source for the particles that would strike Calypso and Helene more than their co-orbital companions. The brightness differences cannot be easily attributed to differences in the E-ring flux, since Telesto, Calypso and Tethys are at basically the same orbital distance from Saturn, as are Helene, Polydeuces and Dione. Hence the E-ring flux into Telesto, Tethys and Calypso (or Helene, Dione and Polydeuces) should be nearly the same.
In principle, asymmetries in the particle flux could arise from either subtle interactions between the larger moons and the E-ring grains, or more local particle populations sourced from the larger moons. For example, particles on nearly circular orbits launched from Tethys that are carried outward by plasma drag would preferentially fall behind Tethys, and thus potentially be more likely to strike Calypso than Telesto. The problem with such explanations is that for Calypso and Helene to fall along the same trend as the other moons in Figure~\ref{eringflux}, the excess particle flux into these moons needs to be a substantial fraction of the E ring flux, which would imply a large excess density of particles around these moons compared to Tethys, Dione, Telesto or Polydeuces. These particles should produce localized brightness enhancements and/or longitudinal brightness asymmetries in the regions around Tethys' and Dione's orbits, and no such features have yet been seen in the Cassini images of the E ring. In principle, there could be some subtle interactions between E ring particles and the relevant moons that would cause the requisite variations in the particle flux without producing detectable asymmetries in the density of the rings. Fully exploring such possibilities would likely require numerical simulations that are beyond the scope of this work.
Given the above challenges with identifying a suitable dust population around these moons during the Cassini mission, it is worth considering whether Calypso's and Helene's current brightnesses are not their steady-state values, but are instead transient phenomena caused by some event in the relatively recent past. Specifically, we can consider the possibility that a recent impact into Calypso and/or Helene released enough material to affect the global surface properties of both these moons. A thorough quantitative evaluation of such scenarios is beyond the scope of this paper. Instead we will just provide some order-of-magnitude calculations which demonstrate that this is a viable possibility.
The total amount of material needed to affect the overall surface brightness of Calypso and Helene is simply the amount of material needed to coat these moons with a layer thick enough to determine their optical properties at visible wavelengths. Since light can only penetrate a few wavelengths through the surface regolith, we may conservatively estimate that a 10-$\mu$m thick layer of material will be sufficient for these purposes. Given that Calypso and Helene have surface areas of around 400 km$^2$ and 1400 km$^2$ respectively, the total volume of material needed to coat these moons is 4000 m$^3$ and 14,000 m$^3$. Conservatively assuming this is ice-rich material of negligible porosity, these volumes correspond to total debris masses of 4$\times10^{6}$ and 14$\times 10^{6}$ kg, respectively.
The flux of objects large enough to produce these sorts of debris clouds is fairly well constrained thanks to observations of comparably-massive impact-generated debris clouds above Saturn's rings described by \citet{Tiscareno13}. The masses of those clouds are uncertain and depend on the assumed size distribution of the debris, but the feature designated Bx probably had a mass between $2\times10^{5}$ kg and $2\times10^{7}$ kg, which is comparable to that required to brighten Calypso or Helene. The estimated flux of impactors capable of producing this amount of debris is $3\times10^{-20}$/m$^2$/s \citep{Tiscareno13}. Assuming cross-sections consistent with the observed sizes of Calypso and Helene, this means the mean time between impacts should be between 1000 and 10,000 years.
The other relevant timescale in this problem is how long it would take E-ring material to coat these moons and erase any transient signals from an impact. Using the fluxes provided in Table~\ref{fluxtab}, and assuming a typical E-ring particle radius of around 1 $\mu$m, we estimate that E-ring material would accumulate on the surfaces at rates of between 0.2 nm/year for Helene and 0.5 nm/year for Calypso. It would therefore take 20,000 years for Calypso and 50,000 years for Helene to accumulate a 10-micron-thick layer of E-ring particles. These numbers could potentially be reduced by a factor of $\sim$100 if we account for the yield of secondary debris from each E-ring particle impact (see above). The resurfacing time due to E-ring particles is therefore likely comparable to the mean time between impacts capable of covering these entire moons in fresh impact debris. It is therefore not unreasonable that Calypso and Helene both experienced a recent impact that brightened their surfaces, while Telesto and Polydeuces managed to avoid such a recent collision. In principle, some of the material from a collision into Helene or Calypso could have been transported to the other moon, allowing a single impact to brighten both moons, but properly exploring the relative likelihood of such an event will require numerical simulations of the dynamics of the impact debris similar to those done by \citet{Dobrovolskis09}, perhaps including non-gravitational forces.
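The two timescales above can be reproduced with a few lines of code (a minimal sketch; the moon cross-sections are derived from the surface areas quoted above via $\pi R^2 = A/4$, and the 1~$\mu$m grain radius is an assumption of this example, not a measured value):
\begin{verbatim}
import numpy as np

yr = 3.156e7                  # seconds per year
# Surface areas (km^2) and mean E-ring fluxes (m^-2 s^-1) quoted above.
moons = {"Calypso": (400.0, 3.46), "Helene": (1400.0, 1.53)}
impactor_flux = 3e-20         # m^-2 s^-1 (Tiscareno et al. 2013)
a_grain = 1e-6                # assumed typical E-ring grain radius (m)
v_grain = (4.0 / 3.0) * np.pi * a_grain**3

for name, (area_km2, flux) in moons.items():
    cross_section = area_km2 * 1e6 / 4.0   # pi R^2 = (4 pi R^2) / 4
    t_impact = 1.0 / (impactor_flux * cross_section) / yr
    h_rate = flux * v_grain * yr           # E-ring coating rate (m/yr)
    t_coat = 10e-6 / h_rate                # years to build a 10-um layer
    print(f"{name}: impact interval ~ {t_impact:.0f} yr, "
          f"coating ~ {h_rate * 1e9:.2f} nm/yr, "
          f"10-um resurfacing ~ {t_coat:.0f} yr")
\end{verbatim}
This reproduces the $\sim$0.2--0.5 nm/yr coating rates and the $10^3$--$10^4$ yr impact intervals quoted above.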
Note that this explanation for Calypso's and Helene's brightness means that impacts tend to increase the moons' brightness, rather than darken them. This might at first be counterintuitive because cometary debris in the outer solar system is generally assumed to be much darker than Saturn's moons. However, it is important to realize that most of the debris produced by the impacts would come from Helene and Calypso, not the impactor. Indeed, the expected impact yields for such bodies are of order 10,000 \citep{CE98, Tiscareno13}, and so the impactor required to produce $4-14 \times 10^{6}$ kg of debris would be only $4-14 \times 10^{2}$ kg, which corresponds to an ice-rich object with a radius between 0.5 and 0.7 meters. Thus most of the debris falling on the moons would be pure ice, and should be able to brighten the moons' surfaces as the E and F rings apparently do. This bright deposit would also be much thinner than the deposits previously identified in high-resolution images, which affect both the morphology and the color of the surface \citep{Thomas13, Hirata14}. While such a thin deposit would probably not produce obvious morphological structures (even the original crater would be near the resolution limit of the best images), it is not clear whether a fresh global covering of fine debris is consistent with the color variations currently observed on these moons \citep{Thomas13}. Also, it is not obvious whether recent impacts are consistent with near-infrared spectra of these moons, which show that Prometheus, Pandora and Calypso have deeper ice bands than Telesto and Helene \citep{Filacchione12}. Even so, the above considerations indicate that a recent impact is a possible explanation for Calypso's and Helene's high surface brightness and so merits further investigation.
\section{Summary}
\label{summary}
In conclusion, we have used a new photometric model for non-spherical objects to obtain surface brightness estimates that are comparable to those for the mid-sized satellites. Applying this model to Saturn's moons has revealed a number of interesting features and trends, which are summarized below in order of the moons' increasing distance from Saturn.
\begin{itemize}
\item Prometheus and Pandora are brighter than moons orbiting exterior and interior to them, suggesting that dust from the F ring is brightening these moons.
\item Janus and Epimetheus have brighter trailing hemispheres than leading hemispheres, which is consistent with the expected pattern for impacting E-ring particles.
\item Aegaeon, Methone and Pallene are all darker than expected given their location within the E ring. This is most likely due to the high flux of high-energy radiation into these moons.
\item The spectral data for Aegaeon and Methone indicate that whatever material is responsible for making these moons dark reduces their brightness over a broad range of wavelengths and does not have obvious spectral signatures.
\item The photometric data indicate that Anthe probably has a shape similar to that of Methone.
\item Pallene's leading side is slightly darker than its trailing side.
\item Telesto and Polydeuces have surface brightnesses similar to their larger orbital companions (Tethys and Dione, respectively).
\item Calypso is substantially brighter than Tethys and Telesto, while Helene is substantially brighter than Dione and Polydeuces. These phenomena could either be due to an asymmetric flux of E-ring particles, or recent collisions with larger impactors.
\end{itemize}
\section{Acknowledgements}
Authors MMH and PH thank NASA's Cassini Mission Project for support during various stages of this study. MMH acknowledges support from the Cassini Data Analysis Program Grant NNX15AQ67G. PH gratefully acknowledges that photometric modeling enhancements, as well as refinements and testing of computer software that are used and first reported in this paper, were developed under the auspices of NASA Planetary Geology and Geophysics Program grant NNX14AN04G. MMH also wishes to acknowledge that the initial analyses of the small moons' brightnesses were done by M. Rehnberg as part of the REU program, and the calculations regarding the deposition rates on the various small moons were inspired by conversations with S. Morrison, S.G. Zaidi and H. Sharma. We thank the anonymous reviewer for their helpful comments on an earlier draft of this manuscript.
\pagebreak
\section{Introduction}
There has been substantial recent interest in the physics of high-dimensional systems
with applications to ground-state problems, packing problems, number theory, and
phase behaviors of many-particle systems \cite{To10, ToSt03, ZaTo09, Sk06, Pa06, Ro07, Me09, Lu10}.
The problem of identifying the densest sphere packings in high-dimensional Euclidean space
$\mathbb{R}^d$ is an open and fundamental problem in discrete geometry and number theory
with important applications to communications theory \cite{CoSl99}. In particular, Shannon
showed that the optimal method of sending digital signals over noisy channels corresponds to the
densest packing in a high-dimensional space \cite{Sh48}.
Although the densest packings in dimensions
two and three are known to be Bravais lattice packings (the triangular and FCC lattices, respectively
\cite{Ha05}), in sufficiently high dimensions, non-Bravais lattices
almost surely are the densest packings. In addition to
providing putative exponential improvement on Minkowski's lower bound for the maximal
sphere-packing density, Torquato and Stillinger presented strong arguments to suggest that the
densest packings in high dimensions are in fact \emph{disordered} \cite{ToSt06, ToSt10}.
Their methods rely on the so-called \emph{decorrelation principle} for disordered sphere packings,
which states that as the dimension $d$ increases, all \emph{unconstrained} correlations
asymptotically vanish, and any higher-order correlation functions $g_n(\mathbf{r}^n)$
may be expressed in terms of the number density $\rho$ and the pair correlation function
$g_2$ \cite{ToSt06}. Since its introduction, additional work has shown that the decorrelation
principle is remarkably robust, meaning that it
is already manifested in low dimensions and applies also to certain soft-matter systems \cite{Za08} and
quantum many-particle distributions \cite{To08}. Furthermore, detailed numerical studies of saturated
maximally random jammed hard-sphere packings, which are
the most disordered packings of spheres that are
rigorously incompressible and nonshearable, have demonstrated that
unconstrained correlations beyond the hard-sphere diameter asymptotically vanish even in
relatively low dimensions $d = 1$-$6$ \cite{Sk06}. Similar results have also been observed
for the exactly-solvable ``ghost'' random sequential addition (RSA)
process \cite{To06B} along with the usual RSA process \cite{ToUcSt06}. All evidence
to date supports the notion that the decorrelation principle applies fundamentally
to disordered many-particle systems. In this paper,
we provide evidence that the decorrelation principle applies more generally to any periodic
crystal, which has implications for the densest sphere packings
in high dimensions and sheds light on the reasons why
it is robust in low dimensions.
The properties of periodic crystal structures are fundamental to the physical and mathematical
sciences. Experience in two- and three-dimensional systems suggests that crystals are
prototypical ground states of matter \cite{ToSt08}, obtained by slow annealing of an interacting
many-particle system to absolute zero temperature. Unlike disordered states of matter, including
gases and liquids, crystals possess complete long-range correlations and translational symmetry.
As such, periodic crystals can be specified by translational copies of a single fundamental cell
containing one (in the case of a Bravais lattice) or more particles.
Elemental carbon is known to adopt numerous polymorphs of fundamental significance. Its
four-electron valence structure implies that it can bond covalently with itself in a
tetrahedral arrangement to form the diamond crystal. Certain ``superdense''
polymorphs of carbon involving different packings of
carbon tetrahedra have also recently been reported in the
literature \cite{Zh11}. The two-dimensional analog of the diamond crystal is the
so-called honeycomb crystal, in which points are placed at the vertices of hexagons
that tile the plane. This polymorph of carbon is the graphene structure, variations of which have gained
substantial interest as nanomaterials \cite{GeNo07}. Each point in the honeycomb crystal
is coordinated with three other points of the structure, and by placing particles at the midpoints of
each of the ``bonds,'' one obtains the kagom\'e crystal.
The kagom\'e crystal
and its three-dimensional counterpart, the pyrochlore crystal, have
been used in models of spin-frustrated antiferromagnetic materials \cite{So05}.
This type of geometric frustration in so-called ``spin ice''
induces a nonvanishing residual entropy in the ground state,
analogous to the behavior identified in water ice \cite{Pa35}.
Recently, Torquato has reformulated the covering and quantizer problems from
discrete geometry as ground-state problems involving many-body interactions with
one-body, two-body, three-body, and higher-body potentials \cite{To10}.
Formally, the covering problem seeks the point configuration that minimizes
the radius of overlapping spheres circumscribed around each of the points
required to cover $\mathbb{R}^d$ \cite{CoSl99, To10}. The quantizer problem involves finding the
point configuration in $\mathbb{R}^d$ that minimizes a Euclidean ``distance error''
associated with replacing a randomly placed point in $\mathbb{R}^d$
with the nearest point of the point process \cite{CoSl99, To10}.
Closely related
is the so-called number variance problem, which aims to identify the distribution of points
that minimizes fluctuations in the local number density over large length scales \cite{ToSt03,
ZaTo09}. This problem can also be interpreted as the determination of the ground state
for a particular soft, bounded pair interaction, and, for the special case of Bravais lattices,
is equivalent to identifying the minimizer of the so-called Epstein zeta function \cite{Sa06}.
Note that the number variance of a point pattern has been suggested to quantify structural order
over large length scales \cite{ToSt03, ZaTo09}.
Studies of many-body fluids and amorphous packings have attempted to glean new information about
low-dimensional physical properties, including the equation of state,
radius of convergence of the virial series, phase transitions,
and structure, from high-dimensional models. Frisch and Percus have
shown that for repulsive interactions, Mayer cluster expansions of the free energy
are dominated by ring diagrams at each order in particle density $\rho$ \cite{Fr99}.
This result was extended by Zachary, Stillinger, and Torquato to show that the
so-called mean-field approximation for soft, bounded pair interactions becomes exact
in the high-dimensional limit \cite{Za08}. Parisi and Zamponi have utilized
the HNC approximation to the pair correlation function and mean-field theory to
understand hard-sphere glasses and jamming in high dimensions \cite{Pa06}, and Rohrmann
and Santos have generalized results from liquid-state theory to study fluids
of hard spheres in high dimensions \cite{Ro07}. Michels and Trappeniers
\cite{Mi84}, Skoge {\it et al.} \cite{Sk06},
van Meel {\it et al.} \cite{Me09}, and Lue \emph{et al.} \cite{Lu10} have numerically studied the effect
of dimensionality on the disorder-order transition
in equilibrium hard-sphere systems in up to dimension six. Additionally, Doren and Herschbach
have developed a dimensionally-dependent perturbation theory for quantum systems
to draw conclusions about the energy eigenvalues in low dimensions \cite{Do86}.
In this paper, we generalize the kagom\'e and diamond crystals for
high-dimensional Euclidean space $\mathbb{R}^d$. We are motivated by the observation that there
are $d+1$ particles within the fundamental cell of the kagom\'e crystal, a number that grows with the
dimension. The high-dimensional kagom\'e crystal thus possesses a large basis
of particles and approximates the case of a (possibly irregular)
$N$-particle many-particle distribution subject to
periodic boundary conditions for $N$ large. The $d$-dimensional kagom\'e crystal
provides an intriguing structure with which to test the applicability of the decorrelation
principle for periodic point patterns. Since such periodic crystals possess full long-range order,
it is highly nonintuitive that the decorrelation principle should apply, and yet we provide
indirect and direct evidence that it continues to hold in this general setting. Furthermore,
by analyzing the structural properties of the high-dimensional diamond and kagom\'e crystals,
we show that certain ``disordered'' packings can be quantitatively more ordered with respect to
local fluctuations in the number density than periodic crystals, even in relatively low dimensions.
Our results therefore have important implications for the low- and high-dimensional problems outlined
above.
Our major results are summarized as follows:
\begin{enumerate}
\item We develop constructions of high-dimensional generalizations of the kagom\'e and
diamond crystals using the geometry of the fundamental cell for the $A_d$ Bravais lattice
(defined below). Our results suggest a natural method for constructing a large class of
``kagom\'e-like'' crystals in high dimensions.
\item We examine the behavior of structural features of the kagom\'e and diamond crystals,
including the packing densities, coordination numbers, covering radii, and quantizer errors.
In particular, we show that the kagom\'e crystal possesses a lower packing fraction than the
diamond crystal for all $d \geq 4$, a larger covering radius for all $d \geq 2$, and a larger
quantizer error for all $d \geq 3$.
\item We relate these structural features to the distribution of the void space external to the
particles in the fundamental cell via numerical calculation of the void exclusion probability function
$E_V$ (defined below). As the spatial dimension increases, the fundamental cell of the kagom\'e
lattice develops substantially large holes, thereby skewing the bulk of the void-space distribution
such that large holes are less rare than in the uncorrelated Poisson point pattern. The kagom\'e
crystal therefore lies above Zador's upper bound on the minimal quantizer error in sufficiently high
dimensions.
\item We calculate the number variance coefficients governing asymptotic surface-area
fluctuations in the local number density for the kagom\'e and diamond crystals.
The kagom\'e crystal for all $d \geq 3$ possesses a larger number variance coefficient than
a certain correlated disordered packing corresponding to a so-called $g_2$-invariant process
\cite{ToSt06, St05, ToSt02}, providing indirect evidence for a decorrelation principle of high-dimensional
periodic point patterns.
\item We provide direct evidence for a decorrelation principle of periodic structures by
examining a ``smoothed'' pair correlation function for the $d$-dimensional kagom\'e crystal.
Our analysis also applies to Bravais lattices as evidenced by corresponding calculations
for the hypercubic lattice $\mathbb{Z}^d$, establishing the universality of the decorrelation effect.
These results suggest that pair correlations
alone are sufficient to completely characterize a sphere packing for large
dimension $d$ and that the best conjectural lower bound on the maximal density of
sphere packings provided by Torquato and Stillinger \cite{ToSt06} may in fact be optimal in
high dimensions. This statement suggests that the densest sphere packings in high
dimensions are in fact disordered.
\end{enumerate}
\section{Definitions}
\subsection{Crystals and correlation functions}
A $d$-dimensional \emph{Bravais lattice} is a periodic structure defined by integer
linear combinations of a set of basis vectors $\{\mathbf{e}_j\}$ for $\mathbb{R}^d$, i.e.,
\begin{eqnarray}
\mathbf{p} &= \sum_{j=1}^d n_j \mathbf{e}_j \equiv M_{\Lambda}\mathbf{n}
\qquad (n_j \in \mathbb{Z} ~~\forall j)
\end{eqnarray}
for all points $\mathbf{p}$ of the Bravais lattice \cite{CoSl99}, where we have
defined the generator matrix $M_{\Lambda}$ of the Bravais lattice $\Lambda$ with columns given
by the basis vectors. The basis vectors define
a \emph{fundamental cell} for the Bravais lattice containing only one lattice point. This concept
can be naturally generalized to include multiple points within the fundamental cell, defining a
periodic crystal, or
non-Bravais lattice \cite{FN1}.
Specifically, a non-Bravais lattice consists of the union of a Bravais lattice with one or more
translates of itself; it can therefore be defined by specifying the generator matrix $M_{\Lambda}$ for the
Bravais lattice along with a set of translate vectors $\{\boldsymbol\nu_j\}$. Note that the special case of
a single zero translate vector $\mathbf{0}$ defines a Bravais lattice.
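To make this specification concrete, the following minimal sketch (in Python; the particular generator matrix and translate vectors are the standard two-dimensional kagom\'e geometry with unit nearest-neighbor distance) generates the points of a non-Bravais lattice from the pair $(M_{\Lambda}, \{\boldsymbol\nu_j\})$:
\begin{verbatim}
import numpy as np

# Columns of M are the basis vectors of the underlying Bravais lattice
# (here, a triangular lattice); the translate vectors place three
# particles in each fundamental cell, yielding the kagome crystal.
M = np.array([[2.0, 1.0],
              [0.0, np.sqrt(3.0)]])
translates = np.array([[0.0, 0.0],
                       [1.0, 0.0],
                       [0.5, np.sqrt(3.0) / 2.0]])

def crystal_points(M, translates, n_max):
    """All points M n + nu for integer coefficients |n_j| <= n_max."""
    ns = range(-n_max, n_max + 1)
    return np.array([M @ (i, j) + nu
                     for i in ns for j in ns for nu in translates])

pts = crystal_points(M, translates, n_max=2)
print(pts.shape)   # (75, 2): 25 cells times 3 particles per cell
\end{verbatim}
The special case \texttt{translates = [[0, 0]]} recovers the underlying Bravais lattice itself.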
Every Bravais lattice $\Lambda$ possesses a \emph{dual Bravais lattice}
$\Lambda^*$ with lattice points
$\mathbf{q}$ defined by $\mathbf{p}\cdot\mathbf{q} = m \in \mathbb{Z}$ for all $\mathbf{p} \in \Lambda$ \cite{FN2}. The generator
matrix for the dual Bravais lattice is given by \cite{CoSl99}
\begin{eqnarray}
M_{\Lambda^*} &= (M_{\Lambda}^{\mbox{\scriptsize T}})^{-1},
\end{eqnarray}
where $B^{\mbox{\scriptsize T}}$ denotes the transpose of a matrix $B$. A Bravais lattice and its dual obey
the Poisson summation formula \cite{FNPS} for any Schwartz function \cite{ToSt08, CoKuSc09}.
In general, a crystal, containing more than one particle per fundamental cell,
does not possess a dual structure in the same sense as for a Bravais lattice.
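As a quick numerical illustration (a sketch, not a proof), one can verify the defining property of the dual directly from the generator matrices:
\begin{verbatim}
import numpy as np

# Dual of the triangular Bravais lattice: M_dual = (M^T)^{-1}.
M = np.array([[2.0, 1.0], [0.0, np.sqrt(3.0)]])
M_dual = np.linalg.inv(M.T)

p = M @ np.array([3, -1])        # an arbitrary lattice vector
q = M_dual @ np.array([2, 5])    # an arbitrary dual lattice vector
print(p @ q)                     # 1.0: equals n1 . n2, an integer
\end{verbatim}
since $(M_{\Lambda}\mathbf{n}_1)\cdot(M_{\Lambda^*}\mathbf{n}_2) = \mathbf{n}_1^{\mbox{\scriptsize T}} M_{\Lambda}^{\mbox{\scriptsize T}} (M_{\Lambda}^{\mbox{\scriptsize T}})^{-1} \mathbf{n}_2 = \mathbf{n}_1\cdot\mathbf{n}_2 \in \mathbb{Z}$.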
A many-particle distribution is determined by its number density $\rho$, equal to the number
of particles per unit volume, and
the set of \emph{$n$-particle correlation functions} $g_n$, proportional to the probability
density of finding a configuration $\mathbf{r}^n$ of any $n$ particles within the system.
Of particular importance is the pair correlation function
$g_2(r)$, which for an isotropic and statistically homogeneous point pattern is a function only of the
magnitude $r$ of pair separations between particles. For any periodic crystal consisting of
topologically equivalent particles, the angularly-averaged pair correlation function has the form
\begin{eqnarray}
\rho s(r)g_2(r) &= \sum_{k=1}^{+\infty} Z_k \delta(r-r_k),\label{g2periodic}
\end{eqnarray}
where $Z_k$ is the number of points at a radial distance $r_k$ away from a reference particle of the
lattice and $s(r)$ is the surface area of a $d$-dimensional sphere of radius $r$. The
\emph{cumulative coordination number} $Z(R)$, the total number of particles within a radial distance
$R$ from a reference particle, is therefore given by
\begin{eqnarray}
Z(R) &= \rho\int_0^{R} s(r) g_2(r) dr;\label{ZR}
\end{eqnarray}
for a periodic crystal, this identity simplifies to $Z(R) = \sum_{k=1}^K Z_k$, where $K$
denotes the highest index for which $r_K \leq R$.
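For a periodic crystal, the shell populations $\{Z_k\}$ can be tabulated by brute force; the following sketch does so for the two-dimensional kagom\'e crystal at unit bond length (the construction repeats the geometry used above so that the snippet is self-contained):
\begin{verbatim}
import numpy as np

M = np.array([[2.0, 1.0], [0.0, np.sqrt(3.0)]])
basis = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, np.sqrt(3.0) / 2.0]])
ns = range(-8, 9)
pts = np.array([M @ (i, j) + b for i in ns for j in ns for b in basis])

r = np.sort(np.linalg.norm(pts, axis=1))[1:]   # drop the reference point
shells, counts = np.unique(np.round(r, 6), return_counts=True)
for r_k, Z_k in zip(shells[:4], counts[:4]):
    print(f"r_k = {r_k:.4f},  Z_k = {Z_k}")
\end{verbatim}
The first shell gives $Z_1 = 4$ at $r_1 = 1$, consistent with the kagom\'e coordination number quoted in the jamming discussion below.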
\subsection{Hyperuniformity and the number variance problem}
Torquato and Stillinger have characterized fluctuations in the local number density of
a many-particle distribution \cite{ToSt03} and have shown that these fluctuations
behave differently for periodic crystals and uncorrelated systems.
Define the random variable $N(\mathbf{x}_0; R)$ to the be the number of particles
within a spherical observation window of radius $R$ centered at position $\mathbf{x}_0$.
By definition, $\langle N(\mathbf{x}_0; R)\rangle = \rho v(R)$, where $v(R)$ is the volume of a
$d$-dimensional sphere of radius $R$. For a Poisson point pattern in which there are no
correlations between particles, the underlying Poisson counting measure also implies that
\begin{eqnarray}
\sigma^2(R) &= \langle N^2(\mathbf{x}_0; R)\rangle - \langle N(\mathbf{x}_0; R)\rangle^2
= \langle N(\mathbf{x}_0; R)\rangle = \rho v(R),
\end{eqnarray}
meaning that fluctuations in the local number density of the observation window scale with
the window volume.
However, this scaling is not a general feature of all point patterns.
In the general case of correlated point patterns, the local number variance is
given by \cite{ToSt03}
\begin{eqnarray}
\sigma^2(R) &= \rho v(R)\left\{1+\rho\int \left[g_2(\mathbf{r})-1\right]\alpha(r; R) d\mathbf{r}\right\},
\end{eqnarray}
where $\alpha(r; R)$ is the so-called \emph{scaled intersection volume}, defined geometrically
as the volume of the intersection of two $d$-dimensional spheres of radius $R$ with centers separated
by a distance $r$, normalized by the volume $v(R)$ of a sphere. Explicit expressions for
the scaled intersection volume in various dimensions have been given by Torquato and Stillinger
\cite{ToSt03, ToSt06}.
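For reference, the scaled intersection volume takes a simple closed form in low dimensions; for $r \leq 2R$ \cite{ToSt03, ToSt06},
\begin{eqnarray}
\alpha(r; R) &= 1-\frac{r}{2R} \qquad (d = 1),\\
\alpha(r; R) &= 1-\frac{3r}{4R}+\frac{r^3}{16R^3} \qquad (d = 3),
\end{eqnarray}
and $\alpha(r; R) = 0$ for $r > 2R$.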
Introducing a length scale $D$ (e.g., the mean nearest-neighbor distance between points)
and a corresponding reduced density $\eta = \rho v(D/2)$,
one can show that the asymptotic behavior of the number variance for large observation windows is
\cite{ToSt03}
\begin{eqnarray}
\sigma^2(R) &= 2^d \eta\left\{ A \left(R/D\right)^d + B \left(R/D\right)^{d-1} +
o[(R/D)^{d-1}]\right\},
\end{eqnarray}
where $o(x)$ denotes terms of order less than $x$. The coefficients $A$ and $B$
are given by
\begin{eqnarray}
A &= 1+\rho\int [g_2(\mathbf{r})-1] d\mathbf{r} = \lim_{\vert\vert\mathbf{k}\vert\vert\rightarrow 0} S(\mathbf{k})\\
B &= \frac{-\eta \Gamma(1+d/2)}{D v(D/2) \Gamma[(d+1)/2] \Gamma(1/2)}
\int \vert\vert\mathbf{r}\vert\vert [g_2(\mathbf{r})-1] d\mathbf{r}\label{BNV},
\end{eqnarray}
where $S(\mathbf{k}) = 1+\rho\mathfrak{F}\{g_2(\mathbf{r})-1\}(\mathbf{k})$, with $\mathfrak{F}$
denoting the Fourier transform, is the
\emph{structure factor}.
It follows that the number variance of any point pattern for which $A = 0$ grows more
slowly than the volume of the observation window, implying that the point pattern
is \emph{effectively} homogeneous even on local length scales. Such systems are
known as \emph{hyperuniform} \cite{ToSt03} or \emph{superhomogeneous} point patterns \cite{PiGaLa02}.
Examples of hyperuniform point patterns include all Bravais and non-Bravais lattices, quasicrystals
possessing Bragg peaks, and certain disordered point patterns with pair correlations decaying
exponentially fast.
It has also been suggested that the coefficient $B$ quantifies
large-scale order in a hyperuniform point pattern \cite{ToSt03, ZaTo09}. The issue of identifying the
point pattern that minimizes this coefficient defines the so-called number variance problem \cite{To10, ToSt03}.
It has recently been proved that the integer lattice is the unique number variance minimizer in
one dimension among all hyperuniform point patterns \cite{ToSt03}.
Numerical results strongly suggest that the
triangular lattice minimizes the number variance in two dimensions \cite{ToSt03, ZaTo09}.
However, contrary to the
expectation that the densest lattice packing should also minimize the number variance, it has been
shown in three dimensions that the BCC lattice possesses a lower asymptotic number variance
coefficient $B$ than the FCC lattice \cite{ToSt03}.
It is worth mentioning in this regard that the BCC lattice is the
dual of FCC.
\subsection{Jamming in hard-sphere packings}
A \emph{sphere packing} is obtained from a point pattern in $d$-dimensional Euclidean space
by decorating each of the points with a sphere of radius $R_P$ such that no spheres overlap
after the decoration; the parameter $R_P$ is the packing radius. It is an open and nontrivial
problem to quantify the extent of randomness (equivalently, of order) in a sphere packing,
which reflects nontrivial structural information about the system. Research in this area is aimed
at identifying sets of \emph{order metrics} $\psi$ \cite{ToTrDe00} that align with physical intuitions of order,
at least in relatively low dimensions, and are positively correlated. It has recently been proposed that
hyperuniformity is itself a measure of order over large length scales \cite{ToSt03, ZaTo09}.
Torquato and Stillinger have introduced a classification of sphere packings in terms of the extent to
which they are \emph{jammed} \cite{ToSt01, ToSt10}.
In particular, they have provided a mathematically precise
hierarchy of jammed sphere packings, distinguished depending on the nature of their
mechanical stability \cite{ToSt01, ToSt10}:
\begin{enumerate}
\item \emph{Local jamming}: Each particle in the packing is locally trapped by at least
$d+1$ contacting neighbors, not all in the same hemisphere. Locally jammed
particles cannot be translated while fixing the positions of all other particles.
\item \emph{Collective jamming}: Any locally jammed configuration is collectively jammed
if no subset of particles can simultaneously be displaced so that its members move out of contact
with each other and with the remainder set.
\item \emph{Strict jamming}: Any collectively jammed configuration is strictly jammed if it
disallows all uniform volume-nonincreasing strains of the system boundary.
\end{enumerate}
These categories certainly do not include all possible distinctions of jammed configurations, but
they span a reasonable spectrum of possibilities. Importantly, jamming depends explicitly
on the boundary conditions for the packing \cite{ToSt01, ToSt10}.
\emph{Isostatic} packings are jammed packings with the minimal number of contacts $M$ for
a given jamming category under the specified boundary conditions \cite{ToSt10}. Under periodic
boundary conditions, for collective jamming $M = 2N-1$ and $3N-2$ for $d = 2$ and $d = 3$,
respectively, and for strict jamming $M = 2N+1$ and $3N+3$ for $d = 2$ and $d = 3$, respectively
\cite{ToSt10, DoCoStTo07}.
In this case, the relative differences between isostatic collective and strictly jammed packings
diminish for large $N$, and an isostatic packing
in $d$ dimensions has a mean contact number per particle $Z = 2d$,
known as the isostatic condition. Note, however, that
packings for which $Z = 2d$ are not necessarily collectively or strictly jammed;
the two-dimensional square lattice and three-dimensional simple-cubic lattice are
simple counterexamples in dimensions $d = 2$ and $d = 3$, respectively. Another interesting
example is the two-dimensional kagom\'e crystal, which is locally jammed but neither collectively nor
strictly jammed under periodic
boundary conditions and possesses a nearest-neighbor contact number per particle $Z = 4$
\cite{ToSt01, DoCoStTo07}. However, this
structure can be made strictly jammed by ``reinforcing'' it with an extra ``row'' and
``column'' of disks \cite{DoCoStTo07}.
\subsection{The covering problem}
Consider a distribution of particles at unit number density. The \emph{covering radius} for
the point process is defined by decorating each of the particles with a sphere of radius $R$
and identifying the minimal radius $R_C$ necessary to cover the space completely. More precisely,
for any choice of $R$, we can define the volume fraction of space $\phi_P$ occupied by the
spheres; the volume fraction occupied by the void space external to the spheres is then
$\phi_V = 1-\phi_P$. The covering radius $R_C$ is then defined as the
minimal value of $R$ for which $\phi_P = 1$ and $\phi_V = 0$.
The volume fraction $\phi_V$ of the void space external to a set of spheres of radius $R$ is
equivalent to the probability of inserting a ``test'' sphere of radius $R$ into the system and finding it
contained entirely in the void space. This latter quantity is known as the \emph{void exclusion
probability function} $E_V(R)$
and can be expressed in terms of the $n$-particle correlation functions
for the underlying point pattern \cite{ToLuRu90, To02}:
\begin{eqnarray}
E_V(R) &= 1+\sum_{k=1}^{+\infty} \frac{(-\rho)^k}{\Gamma(k+1)} \int
g_k(\mathbf{r}^k) \prod_{j=1}^k \Theta(R-\vert\vert \mathbf{x}-\mathbf{r}_j\vert\vert) d\mathbf{r}_j,
\end{eqnarray}
where $\Theta(x)$ is the Heaviside step function and $\mathbf{x}$ denotes the center of the test sphere. This expression can also be rewritten
for a statistically homogeneous point pattern in terms of
intersection volumes of spheres \cite{To10}:
\begin{eqnarray}
E_V(R) &= 1+\sum_{k=1}^{+\infty} \frac{(-\rho)^k}{\Gamma(k+1)}
\int g_k(\mathbf{r}_{12}, \ldots, \mathbf{r}_{1k})
v_{\mbox{\scriptsize int}}^{(k)} (\mathbf{r}^k; R) d\mathbf{r}^k,\label{Evvint}
\end{eqnarray}
where $v_{\mbox{\scriptsize int}}^{(k)}(\mathbf{r}^k; R)$ is the intersection volume of $k$ spheres of radius $R$
and centers at $\mathbf{r}^k$:
\begin{eqnarray}
v_{\mbox{\scriptsize int}}^{(k)}(\mathbf{r}^k; R) &= \int d\mathbf{x} \prod_{j=1}^k
\Theta(R-\vert\vert\mathbf{x}-\mathbf{r}_j
\vert\vert).
\end{eqnarray}
The expression \eref{Evvint} implies that $E_V(R)$ can be interpreted as a total energy
per particle associated with a many-particle interaction involving one-body, two-body,
three-body, and higher-body potential energies \cite{To10}. For a single realization of
$N$ points in a volume $V \subset \mathbb{R}^d$ \cite{To10}
\begin{eqnarray}
\fl E_V(R) &= 1-\rho v(R) + \frac{1}{V}\sum_{i<j} v_{\mbox{\scriptsize int}}^{(2)}(r_{ij}; R)
-\frac{1}{V} \sum_{i<j<k} v_{\mbox{\scriptsize int}}^{(3)}(r_{ij}, r_{ik}, r_{jk}; R)+\cdots.
\end{eqnarray}
We remark that truncating the series expression \eref{Evvint} at order $k$
provides an upper bound to $E_V$ when $k$ is even and a lower bound when $k$ is odd \cite{To86}.
The covering problem concerns identifying the point pattern with the minimal covering radius
at unit number density.
In particular, one attempts to identify the
point pattern that minimizes the one-dimensional Lebesgue
measure of the interval of compact support $[0, R_C]$ of the
void exclusion probability function $E_V(R)$. A lower bound on the minimal covering radius
can be obtained by truncating the series representation \eref{Evvint} for $E_V$ at first order,
implying at unit number density that
\begin{eqnarray}
E_V(R) &\geq \left[1- v(R)\right]\Theta\left[1-v(R)\right].
\end{eqnarray}
This lower bound has a zero at $R^* = \Gamma^{1/d}(1+d/2)/\sqrt{\pi}$, which increases as
$\sqrt{d}$ for large $d$.
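As a brief numerical aside (ours, not part of the original analysis; standard Python library only),
the following sketch evaluates this zero of the one-point bound for several dimensions and compares
it with the leading large-$d$ behavior $\sqrt{d/(2\pi e)}$ implied by Stirling's formula:
\begin{verbatim}
# Illustrative sketch (ours): the zero R* of the one-point lower bound
# on E_V(R) at unit number density, versus its large-d asymptote.
import math

for d in (1, 2, 3, 4, 5, 10, 24, 100):
    R_star = math.gamma(1 + d / 2) ** (1.0 / d) / math.sqrt(math.pi)
    asym = math.sqrt(d / (2 * math.pi * math.e))
    print(f"d = {d:3d}   R* = {R_star:.4f}   sqrt(d/(2 pi e)) = {asym:.4f}")
\end{verbatim}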
\subsection{The quantizer problem}
A $d$-dimensional quantizer is a device that takes as an input a point at position $\mathbf{x}$ in
$\mathbb{R}^d$ generated from some probability density function $p(\mathbf{x})$ and outputs the
nearest point $\mathbf{r}_j$ of a known point pattern to $\mathbf{x}$ \cite{CoSl99}. The quantizer problem
is then to choose the point pattern to minimize the \emph{scaled dimensionless error}
$\mathcal{G} = \langle R^2\rangle/d$, where $\langle R^2\rangle$ is the second moment of the
nearest-neighbor distribution function for the \emph{void space} external to the particles in the
point process. Specifically, we define the \emph{void nearest-neighbor density function} $H_V(R)$
such that $H_V(R) dR$ is the probability of finding the nearest particle of a point pattern with respect to
an arbitrary point $\mathbf{x}$ of the void space within a radial distance $R+dR$ from $\mathbf{x}$.
The void exclusion probability function $E_V(R)$ is then the complementary cumulative distribution
function associated with $H_V(R)$ \cite{To02}:
\begin{eqnarray}
E_V(R) &= 1-\int_0^R H_V(r) dr.
\end{eqnarray}
Using integration by parts, one can then show that
\begin{eqnarray}
\mathcal{G} &= \frac{1}{d} \int_0^{+\infty} R^2 H_V(R) dR\\
&= \frac{2}{d} \int_{0}^{+\infty} R E_V(R) dR.
\end{eqnarray}
The quantizer error therefore depends sensitively on the shape of the void-space distribution. This
situation is distinct from the covering problem, which is concerned only with the compact support of
$E_V$.
Using upper and lower bounds on $E_V$, Torquato has been able to re-derive Zador's bounds
for the minimum scaled dimensionless error \cite{To10}:
\begin{eqnarray}
\frac{\Gamma^{2/d}(1+d/2)}{\pi(d+2)} &\leq \mathcal{G}_{\mbox{\scriptsize min}}
\leq \frac{\Gamma^{2/d}(1+d/2)\Gamma(1+2/d)}{\pi d}.\label{Zador}
\end{eqnarray}
These bounds converge in asymptotically high dimensions, implying
\begin{eqnarray}
\mathcal{G}_{\mbox{\scriptsize min}} &\rightarrow (2\pi e)^{-1} \qquad (d\rightarrow +\infty).\label{Gasymp}
\end{eqnarray}
This convergence implies that lattices and disordered point patterns are equally good
quantizers in asymptotically high dimensions. Using known results for sphere packings,
Torquato has also presented an improved upper bound to the minimal quantizer error \cite{To10},
which is generally appreciably tighter than Zador's upper bound for low to moderately high
dimensions and converges to the exact asymptotic result \eref{Gasymp} in high dimensions.
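As a numerical illustration (ours; standard library only), the following sketch evaluates
Zador's bounds \eref{Zador} and exhibits their convergence to the common asymptote
$(2\pi e)^{-1} \approx 0.05855$:
\begin{verbatim}
import math

def zador_bounds(d):
    g = math.gamma(1 + d / 2) ** (2.0 / d)
    lower = g / (math.pi * (d + 2))
    upper = g * math.gamma(1 + 2.0 / d) / (math.pi * d)
    return lower, upper

asym = 1.0 / (2 * math.pi * math.e)
for d in (1, 2, 3, 8, 24, 100, 1000):
    lo, hi = zador_bounds(d)
    print(f"d = {d:5d}   {lo:.6f} <= G_min <= {hi:.6f}")
print(f"asymptote (2 pi e)^(-1) = {asym:.6f}")
\end{verbatim}
For $d = 1$ the lower bound returns the exact value $\mathcal{G}_{\mbox{\scriptsize min}} = 1/12$
attained by the integer lattice.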
\subsection{Comparison of the packing, number variance, covering, and quantizer problems}
In his study of the best solutions of the covering and quantizer problems in up to
$24$ dimensions, Torquato \cite{To10} compared these results to the best known solutions
for the sphere packing and number variance problems.
In $\mathbb{R}$ and $\mathbb{R}^2$, it is well-known that the integer lattice $\mathbb{Z}$ and the
triangular lattice, respectively, possess simultaneously the maximal packing density, the minimal
asymptotic number variance, the minimal covering radius, and the
minimal quantizer error \cite{CoSl99, To10}. However, the solutions to these problems
are no longer identical in dimensions as low as three. Although the FCC lattice generates
the densest sphere packing in three dimensions \cite{Ha05}, its \emph{dual} lattice BCC
minimizes the three-dimensional covering radius, quantizer error, and asymptotic
number variance. To understand these differences,
Torquato has shown that while the number variance, covering, and quantizer problems
are described by soft, bounded interactions, the packing problem is described by
a short-ranged pair potential that is zero whenever two spheres do not overlap and
infinite when they do \cite{To10}. Furthermore, although the number variance problem
can be interpreted as the determination of the ground state of a short-ranged soft pair interaction
\cite{ToSt03, To10}, the covering and quantizer problems involve one-body, two-body,
three-body, and higher-order interactions \cite{To10}. Therefore, for $d \geq 4$,
the solutions for each of these problems are not necessarily the same.
One notable exception occurs in $\mathbb{R}^{24}$, where the Leech lattice $\Lambda_{24}$
\cite{CoSl99} likely provides the globally optimal solution for all four problems \cite{To10}.
It is currently unknown whether such globally optimal solutions exist for dimensions other
than $d = 1$, $d = 2$, and $d = 24$. It was shown \cite{To10} that \emph{disordered} saturated
sphere packings provide both good coverings and quantizers in relatively low dimensions
and may even surpass the best known lattice coverings and quantizers in these dimensions.
We shall return to this point in Section IV.
\section{High-dimensional generalizations of the kagom\'e and diamond crystals}
Our constructions of the $d$-dimensional generalizations of the kagom\'e and diamond crystals
will involve an underlying $A_d$ Bravais lattice structure. All angles between the basis vectors for
the $A_d$ lattice are $\pi/3$ radians, implying that $\mathbf{e}_j\cdot \mathbf{e}_k = a^2/2$ for all
$j \neq k$, where $a$ is the magnitude of each basis vector $\mathbf{e}_j$ \cite{FNAd}. It is therefore possible to
identify a coordinate system in which the generator matrix $M_{A_d}$ is triangular. The two-dimensional
$A_2$ lattice is the usual triangular lattice, which is the known densest packing
in $\mathbb{R}^2$. Similarly, the $A_3$ lattice is one representation for the FCC lattice, which
is the densest packing in three dimensions \cite{Ha05}. However, for $d \geq 4$, the
$A_d$ lattice is no longer optimally dense, even among Bravais lattices.
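To make this concrete, the following Python sketch (ours; NumPy assumed, and the function name is
illustrative only) constructs a triangular generator matrix for $A_d$: the Gram matrix of the basis is
$G_{jk} = a^2(1+\delta_{jk})/2$, a Cholesky factorization $G = M_{A_d}M_{A_d}^{T}$ yields the
lower-triangular generator matrix whose rows are the basis vectors $\mathbf{e}_j$, and the choice
$a = \sqrt{2}\,(d+1)^{-1/(2d)}$ normalizes the covolume $\sqrt{\det G} = (a^2/2)^{d/2}\sqrt{d+1}$
to unity:
\begin{verbatim}
# Sketch (ours; NumPy assumed): triangular generator matrix for A_d.
import numpy as np

def A_d_generator(d, unit_density=True):
    a = np.sqrt(2.0) * (d + 1) ** (-0.5 / d) if unit_density else 1.0
    G = (a ** 2 / 2.0) * (np.eye(d) + np.ones((d, d)))  # Gram matrix
    return np.linalg.cholesky(G)   # rows: e_j, |e_j| = a, e_j.e_k = a^2/2

M = A_d_generator(3)
print(np.round(M @ M.T, 6))        # recovers the Gram matrix
print(np.linalg.det(M))            # covolume = 1 at unit number density
\end{verbatim}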
\subsection{The $d$-dimensional diamond crystal}
The fundamental cell for the $A_d$ lattice is a regular rhombotope, the $d$-dimensional
generalization of the two-dimensional rhombus and three-dimensional rhombohedron.
Therefore, the points $\{\mathbf{0}\} \cup \{\mathbf{e}_j\}_{j=1}^d$,
where $\mathbf{e}_j$ denotes a basis vector of the $A_d$ lattice, are situated at the
vertices of a regular $d$-dimensional simplex. The $d$-dimensional diamond crystal
can therefore be obtained by including in the fundamental cell the centroid of this
simplex:
\begin{eqnarray}
\boldsymbol\nu &= \frac{1}{d+1}\sum_{j=1}^d \mathbf{e}_j,\label{translate}
\end{eqnarray}
resulting in a periodic crystal with two points per fundamental cell. By construction,
the number of nearest neighbors to each point in the $d$-dimensional diamond crystal
is $d+1$, corresponding to one neighbor for each vertex of the regular simplex. One can verify
by translation of the fundamental cell that all points of the $d$-dimensional diamond crystal are
topologically equivalent. Note that the two-dimensional diamond crystal is the
usual honeycomb crystal, in which each point lies at the vertex of a regular hexagon.
We mention that our construction of the diamond crystal is distinct for all $d\neq 3$ from the $D_d^+$
structure mentioned by Conway and Sloane \cite{CoSl99}. The $D_d$ lattice is obtained by placing
points using a ``checkerboard'' pattern in $\mathbb{R}^d$ \cite{CoSl99}:
\begin{eqnarray}
D_d &= \left\{(x_1, \ldots, x_d) \in \mathbb{Z}^d : \sum_{j=1}^d x_j = 2m \mbox{ for some } m \in \mathbb{Z}
\right\}.
\end{eqnarray}
The structure $D_d^+$ is then obtained by including the translate vector
$\boldsymbol\nu = (1/2, 1/2, \ldots, 1/2)$
in the fundamental cell. Although in three dimensions the $D_d^+$ structure does provide an
equivalent construction of the diamond crystal, the relationship to our
structure does not hold for any other dimension.
Indeed, $D_d^+$ is a Bravais lattice for all even dimensions, which is not true for our
construction of the $d$-dimensional diamond crystal. For example, in two dimensions, $D_2^+$ is
equivalent to a rectangular lattice with generator matrix
\begin{eqnarray}
M_{D_2^+} = \left(\begin{array}{cc}
a/2 & 0 \\
0 & a
\end{array}\right),
\end{eqnarray}
where $a$ determines the fundamental cell size. Each point in this structure possesses two
nearest neighbors and is therefore distinct from the honeycomb crystal, in which the coordination number
of each particle is three.
\subsection{A $d$-dimensional kagom\'e crystal}
The two-dimensional kagom\'e crystal is obtained by placing points at the midpoints of
each nearest-neighbor bond in the honeycomb crystal, resulting in a non-Bravais lattice
with three particles per fundamental cell. Similarly, the three-dimensional kagom\'e crystal,
also known as the pyrochlore crystal \cite{KaHuUeSc03},
can be constructed by placing points at the midpoints of
each nearest-neighbor bond in the three-dimensional diamond crystal. We therefore generalize the
kagom\'e crystal to higher dimensions using the aforementioned construction of the $d$-dimensional
diamond crystal, placing points at the midpoints of each nearest-neighbor bond. With respect to
the underlying $A_d$ Bravais lattice structure, these points are located at
\begin{eqnarray}
\mathbf{x}_0 &= \boldsymbol\nu/2\\
\mathbf{x}_j &= \boldsymbol\nu + \boldsymbol\eta_j/2 \qquad (j = 1, \ldots, d),
\end{eqnarray}
where
\begin{eqnarray}
\boldsymbol\eta_j &= \mathbf{e}_j-\boldsymbol\nu
\end{eqnarray}
denotes a ``bond vector'' of the $d$-dimensional diamond crystal. By translating the
fundamental cell such that the origin is at $\mathbf{x}_0$, we can also represent
the $d$-dimensional kagom\'e crystal as $A_d \oplus \{\mathbf{v}_j\}$,
where
\begin{eqnarray}
\mathbf{v}_j &= \mathbf{e}_j /2 \qquad (j = 1, \ldots, d).
\end{eqnarray}
The $d$-dimensional kagom\'e crystal therefore has $d+1$ points per fundamental cell,
growing linearly with dimension.
Each point of the $d$-dimensional kagom\'e crystal is at the vertex of a regular simplex
obtained by connecting all nearest-neighbors in the structure, implying that each point
possesses $2d$ nearest neighbors in $d$ Euclidean dimensions \cite{OKe91}.
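The following sketch (ours; NumPy assumed) assembles the fractional-coordinate bases of both
crystals and verifies the quoted nearest-neighbor counts by brute-force enumeration over
neighboring images of the fundamental cell:
\begin{verbatim}
# Sketch (ours; NumPy assumed): bases of the d-dimensional diamond and
# kagome crystals in fractional (A_d) coordinates, with a brute-force
# count of nearest neighbors about the first basis point.
import numpy as np
from itertools import product

def diamond_frac(d):
    return [np.zeros(d), np.full(d, 1.0 / (d + 1))]      # {0, nu}

def kagome_frac(d):
    return [np.zeros(d)] + [0.5 * e for e in np.eye(d)]  # {0, e_j/2}

def nn_count(d, basis, tol=1e-8):
    M = np.linalg.cholesky(0.5 * (np.eye(d) + np.ones((d, d))))  # a = 1
    x0 = basis[0] @ M
    dists = [np.linalg.norm((np.asarray(p, float) + b) @ M - x0)
             for p in product((-1, 0, 1), repeat=d) for b in basis]
    dists = [r for r in dists if r > tol]
    return sum(1 for r in dists if r < min(dists) + tol)

for d in (2, 3, 4, 5):
    print(d, nn_count(d, diamond_frac(d)), nn_count(d, kagome_frac(d)))
# prints d+1 nearest neighbors for the diamond and 2d for the kagome
\end{verbatim}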
We illustrate our constructions of the two-dimensional kagom\'e and
diamond (honeycomb) crystals in Figure \ref{Kag2D}.
\begin{figure}[!t]
\centering
\includegraphics[width=0.325\textwidth]{figure1}
\caption{Portion of the honeycomb crystal (two-dimensional diamond) with the
$A_2$ fundamental cell (rhombus). The points of the honeycomb
crystal are the vertices of the regular hexagons. The kagom\'e crystal (circular points)
is then constructed from the
midpoints of the bonds between nearest-neighbors in the honeycomb crystal. The kagom\'e crystal
consists of vertex-sharing simplices, the centroids of which recover the honeycomb crystal.}\label{Kag2D}
\end{figure}
\subsection{Other high-dimensional kagom\'e crystals}
Our simple construction of the $d$-dimensional kagom\'e crystal suggests that there exists
a large family of ``kagom\'e-like'' crystals obtained by including the midpoints of the basis vectors
for a Bravais lattice within the fundamental cell. A simple example is to include
basis-vector midpoints into the $d$-dimensional integer lattice $\mathbb{Z}^d$. A more interesting example is
obtained in $\mathbb{R}^4$
by including the midpoints of the basis vectors for the $D_4$ lattice, the densest
known packing in four dimensions, into the fundamental cell. The packing density of the
resulting structure is $\phi^{\prime} = 5\pi^2/256 \approx 0.1928$, which should be compared to the
density of the four-dimensional kagom\'e crystal $\phi = \sqrt{5}\pi^2/128 \approx 0.1724$.
Note, however, that this kagom\'e-like structure does not possess the same relationship to the
$d$-dimensional diamond crystal as our construction above, and we therefore focus the remainder of the
discussion on the more natural generalization of the kagom\'e crystal in terms of vertex-sharing simplices
in Euclidean space $\mathbb{R}^d$.
\section{Structural properties of the high-dimensional kagom\'e and diamond crystals}
\subsection{Packing densities and coordination numbers}
The \emph{packing density} associated with a periodic point pattern is the maximal fraction of space
that can be occupied by decorating each of the points with a sphere of radius $R_P$, where
$R_P$ is the \emph{packing radius}, defined as the maximal value of $R$ for which
$E_V(R)$ exactly obtains its one-point lower bound at unit number density $\rho$:
\begin{eqnarray}
R_P &\equiv \underset{\rho = 1}{\mbox{sup}}\left\{R : E_V(R) = 1-v(R)\right\}\label{RP}.
\end{eqnarray}
Note that this definition is consistent with our discussion of the packing radius in Section 2.3, involving
decorating each of the points in a point pattern with a sphere of maximal radius $R_P$ such that none of the
resulting spheres overlap. However, the definition (\ref{RP}) helps to elucidate the remarkable
connections among the packing, covering, quantizer, and number variance problems.
For lattices, this formulation is equivalent to identifying the minimal lattice vector at unit number density,
which can be obtained from the in-radius of the Voronoi cell for a given lattice point \cite{CoSl99}.
Note that we have the following weak upper bound on the packing radius for any
Euclidean dimension $d$:
\begin{eqnarray}
R_P &\leq \Gamma^{1/d}(1+d/2)/\sqrt{\pi} \leq R_C,
\end{eqnarray}
where $R_C$ is the covering radius. Substantially improved upper bounds \cite{CoEl03} and
conjectural lower bounds \cite{ToSt06} have been provided for the packing radii of the
densest sphere packings in any Euclidean dimension $d$.
To calculate the packing density of the $d$-dimensional kagom\'e crystal, we first consider the
$A_d$ Bravais lattice at unit number density, which has a known packing density \cite{CoSl99}
\begin{eqnarray}
\phi_{A_d} &= v\left(R_P^{(A_d)}\right) = \frac{\pi^{d/2}}{2^{d/2}\Gamma(1+d/2)\sqrt{d+1}},
\end{eqnarray}
where $R_P^{(A_d)}$ is the corresponding packing radius:
\begin{eqnarray}
2R_P^{(A_d)} &= \left(\frac{2^{d/2}}{\sqrt{d+1}}\right)^{1/d}.
\end{eqnarray}
By construction, the $d$-dimensional kagom\'e crystal has the same fundamental cell
with $d+1$ particles, and the associated packing radius is
\begin{eqnarray}
R_P^{(\mbox{\scriptsize Kag}_d)} &= \left(2^{d/2}\sqrt{d+1} \right)^{1/d}/4 = (d+1)^{1/d} R_P^{(A_d)}/2.
\end{eqnarray}
Therefore,
\begin{eqnarray}
\phi_{\mbox{\scriptsize Kag}_d} &= v\left(R_P^{(\mbox{\scriptsize Kag}_d)}\right) = (d+1) \phi_{A_d}/2^d\\
&= \frac{\pi^{d/2} \sqrt{d+1}}{2^{3d/2} \Gamma(1+d/2)}\label{kagphi}.
\end{eqnarray}
The packing density of the $d$-dimensional diamond crystal can be calculated similarly.
In particular, the packing radius of the diamond crystal in $d$ Euclidean dimensions is
\begin{eqnarray}
R_P^{(\mbox{\scriptsize Dia}_d)} &=
\vert\vert\boldsymbol\nu\vert\vert/2 = 2^{1/d} \sqrt{\frac{d}{2(d+1)}} R_P^{(A_d)},
\end{eqnarray}
where $\boldsymbol\nu$ is the translate vector \eref{translate} corresponding to the centroid of the
regular simplex formed by the basis vectors for the $A_d$ Bravais lattice. The norm of this
translate vector can be evaluated by induction (taking $a = 1$) using the recursion relation
\begin{eqnarray}
K_d &= \left\vert\left\vert \sum_{j=1}^d \mathbf{e}_j\right\vert\right\vert = \sqrt{K_{d-1}^2 + d}.
\end{eqnarray}
It follows that the packing density of the $d$-dimensional diamond crystal is
\begin{eqnarray}
\phi_{\mbox{\scriptsize Dia}_d} &= 2 \left(\frac{d}{2(d+1)}\right)^{d/2} \phi_{A_d}\\
&= \frac{(\pi d)^{d/2}}{2^{d-1} (d+1)^{(d+1)/2} \Gamma(1+d/2)}.
\end{eqnarray}
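These closed forms are easily checked numerically; the following sketch (ours; standard library
only) tabulates the three packing densities and exhibits the crossover discussed below:
\begin{verbatim}
import math

def phi_Ad(d):
    return (math.pi ** (d / 2)
            / (2 ** (d / 2) * math.gamma(1 + d / 2) * math.sqrt(d + 1)))

def phi_kagome(d):
    return (d + 1) * phi_Ad(d) / 2 ** d

def phi_diamond(d):
    return 2 * (d / (2.0 * (d + 1))) ** (d / 2) * phi_Ad(d)

for d in range(2, 9):
    print(f"d = {d}   A_d {phi_Ad(d):.5f}   Kag {phi_kagome(d):.5f}"
          f"   Dia {phi_diamond(d):.5f}")
# the kagome packing is denser for d = 2, 3; the diamond for all d >= 4
\end{verbatim}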
\begin{figure}[!t]
\centering
\includegraphics[width=0.5\textwidth]{figure2}
\caption{Scaled packing densities $2^d \phi$ of
the $d$-dimensional kagom\'e and diamond crystals.}\label{KDphi}
\end{figure}
Figure \ref{KDphi} compares the packing densities of the $d$-dimensional kagom\'e and
diamond crystals for increasing dimension $d$. It is interesting to note that for $d \leq 3$,
the kagom\'e crystal is a denser packing than the diamond crystal; however, this trend
reverses for all $d \geq 4$. We will argue in the following sections that this behavior is related
to the distribution of the void space external to the particles in the lattices. Specifically, the
$d$-dimensional kagom\'e structure has increasingly large holes within the fundamental cell,
skewing the void exclusion probability function $E_V(R)$ to higher values of $R$. This
behavior implies that the kagom\'e crystal is effectively ``filamentary'' in asymptotically high dimensions,
in the sense that the net of bonds between nearest-neighbors consists of strands that
branch at each point of the crystal but are separated by increasingly large
holes within the fundamental cell.
While this argument should also hold for the $d$-dimensional diamond crystal, the placement of
a particle at the centroid of the regular simplex formed by the basis vectors for the fundamental
cell apparently prevents the holes from growing as rapidly as those in the kagom\'e structure.
We have also determined the coordination numbers for both the $d$-dimensional diamond and
kagom\'e crystals up to at least the first one hundred coordination shells. Such calculations
are helpful in the evaluation of lattice sums for these structures and provide insight into the
coordination structure of the crystals \cite{ToSt03, OKe91}.
Table \ref{coordtable} provides abridged results up to
$d = 5$.
Note that although both structures possess nearest-neighbor coordination numbers growing
linearly with dimension, the kagom\'e crystal in high Euclidean dimensions has a much larger
number of nearest-neighbors than the diamond crystal. This observation implies that the
kagom\'e crystal is a much more ``branched'' structure (in
the sense defined above) than the diamond crystal with highly-coordinated
particles separated over increasingly large length scales by holes in the fundamental cell.
This observation also has implications for the number variance of the kagom\'e structure and
the decorrelation principle for periodic point patterns, which we discuss in further detail
in subsequent sections.
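The type of enumeration involved is illustrated by the following sketch (ours; NumPy assumed),
which rescales distances so that the nearest-neighbor distance is unity; its output for a given
structure therefore agrees with the corresponding column of Table \ref{coordtable} up to an
overall scale factor:
\begin{verbatim}
# Sketch (ours; NumPy assumed): squared shell distances r_k^2 and
# coordination numbers Z_k about the first basis point, with the
# nearest-neighbor distance rescaled to unity.
import numpy as np
from itertools import product

def coordination_shells(d, basis, n_shells=5, reach=3, tol=1e-6):
    M = np.linalg.cholesky(0.5 * (np.eye(d) + np.ones((d, d))))
    x0 = basis[0] @ M
    pts = [(np.asarray(p, float) + b) @ M
           for p in product(range(-reach, reach + 1), repeat=d)
           for b in basis]
    r2 = sorted(float(np.dot(q - x0, q - x0)) for q in pts)
    r2 = [s for s in r2 if s > 1e-12]
    r2 = [s / r2[0] for s in r2]         # unit nearest-neighbor distance
    shells = []
    for s in r2:
        if shells and abs(s - shells[-1][0]) < tol:
            shells[-1][1] += 1
        else:
            shells.append([s, 1])
    return [(round(s, 4), z) for s, z in shells[:n_shells]]

kag3 = [np.zeros(3)] + [0.5 * e for e in np.eye(3)]
print(coordination_shells(3, kag3))
# [(1.0, 6), (3.0, 12), (4.0, 12), (5.0, 12), (7.0, 24)]: the Kag_3
# column of the coordination table, divided by an overall factor of 4
\end{verbatim}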
\begin{table}[!t]
\caption{Coordination numbers for the $d$-dimensional diamond (Dia$_d$) and kagom\'e (Kag$_d$)
crystals. The squared coordination distance $r_k^2$ is given in parentheses, followed
by the number of neighbor particles $Z_k$ at the distance $r_k$. For each structure,
distances are quoted in units chosen such that the squared coordination distances are integers.}\label{coordtable}
\begin{tabular}{c | c c | c c | c c | c c}
\hline\hline
Shell number & Dia$_2$ & Kag$_2$ & Dia$_3$ & Kag$_3$ & Dia$_4$ & Kag$_4$ & Dia$_5$ & Kag$_5$\\
\hline
1 & (1) 3 & (1) 4 & (3) 4 & (4) 6 & (2) 5 & (1) 8 & (5) 6 & (1) 10\\
2 & (3) 6 & (3) 4 & (8) 12 & (12) 12 & (5) 20 & (3) 24 & (12) 30 & (3) 40\\
3 & (4) 3 & (4) 6 & (11) 12 & (16) 12 & (7) 30 & (4) 20 & (17) 60 & (4) 30\\
4 & (7) 6 & (7) 8 & (16) 6 & (20) 12 & (10) 30 & (5) 48 & (24) 90 & (5) 120\\
5 & (9) 6 & (9) 4 & (19) 12 & (28) 24 & (12) 30 & (7) 72 & (29) 90 & (7) 200\\
6 & (12) 6 & (12) 6 & (24) 24 & (32) 6 & (15) 60 & (8) 30 & (36) 140 & (8) 90\\
7 & (13) 6 & (13) 8 & (27) 16 & (36) 18 & (17) 80 & (9) 56 & (41) 240 & (9) 190\\
8 & (16) 3 & (16) 6 & (32) 12 & (44) 12 & (20) 60 & (11) 96 & (48) 270 & (11) 360\\
9 & (19) 6 & (19) 8 & (35) 24 & (48) 24 & (22) 60 & (12) 60 & (53) 210 & (12) 140\\
10 & (21) 12 & (21) 8 & (40) 24 & (52) 36 & (25) 120 & (13) 144 & (60) 360 & (13) 520\\
\hline\hline
\end{tabular}
\end{table}
\subsection{Void exclusion probabilities, covering radii, and quantizer errors}
As previously mentioned, the covering radius $R_C$ and scaled dimensionless quantizer error
$\mathcal{G}$ can be determined from knowledge of the void exclusion probability function $E_V(R)$
of a point pattern, which contains information about the distribution of the void space external to the
particles. This connection to $E_V$ was only recently made explicit by Torquato \cite{To10},
and we have been unable to find studies of this function for any periodic crystal in the literature.
Here, we determine
$E_V$ for the $d$-dimensional kagom\'e and diamond crystals and use our results to
provide estimates for the covering radii and quantizer errors for these systems.
Our calculations involve Monte Carlo sampling of the void space within the fundamental cell for
the underlying $A_d$ Bravais lattice. Periodicity of the point pattern implies that $E_V$ must
have compact support, and it is therefore sufficient only to sample within a single fundamental
cell, subject to periodic boundary conditions, to obtain the full distribution $E_V$. Noting that
any point $\mathbf{r}$ within the fundamental cell can be expressed as an appropriate linear
combination of the Bravais lattice basis vectors:
\begin{eqnarray}
\mathbf{r} &= M_{A_d} \mathbf{x},\label{generator}
\end{eqnarray}
where $\mathbf{x} = (x_j)_{j=1}^d$
with $0 \leq x_j \leq 1$ for all $j$, we can efficiently sample the void space
by placing points randomly and uniformly in the $d$-dimensional unit cube and then mapping those points
to the fundamental cell with the generator matrix $M_{A_d}$ as in \eref{generator}. The void
exclusion probability function is then obtained by measuring nearest-neighbor distances between
the sampling points and the particles of the crystal. Note that this calculation of the
void exclusion probability function is more efficient than direct calculation of the
Voronoi tessellation for the crystals in high dimensions, thereby providing a facile means of
obtaining estimates for $R_C$ and $\mathcal{G}$.
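A minimal implementation of this sampling scheme (ours; NumPy assumed; periodic images are
included at fractional offsets $-1, 0, 1$, which is adequate in the low dimensions considered
here) reads:
\begin{verbatim}
# Sketch (ours): Monte Carlo estimates of R_C and G at unit number
# density from nearest-particle distances of uniformly sampled points.
import numpy as np
from itertools import product

def covering_and_quantizer(d, basis, n_samples=50000, seed=0):
    rng = np.random.default_rng(seed)
    n_b = len(basis)
    M = np.linalg.cholesky(0.5 * (np.eye(d) + np.ones((d, d))))
    M *= (n_b * 2 ** (d / 2) / np.sqrt(d + 1)) ** (1.0 / d)  # rho = 1
    x = rng.random((n_samples, d)) @ M        # uniform in the cell
    r2 = np.full(n_samples, np.inf)
    for p in product((-1, 0, 1), repeat=d):   # cell + neighboring images
        for b in basis:
            q = (np.asarray(p, float) + b) @ M
            r2 = np.minimum(r2, ((x - q) ** 2).sum(axis=1))
    r = np.sqrt(r2)
    return r.max(), (r ** 2).mean() / d       # (R_C estimate, G estimate)

kag2 = [np.zeros(2)] + [0.5 * e for e in np.eye(2)]
print(covering_and_quantizer(2, kag2))
# approx (0.93, 0.096); the R_C estimate approaches the exact Kag_2
# value 0.9306 from below as the number of samples grows
\end{verbatim}
The full distribution $E_V(R)$ follows from the same sampled distances as the fraction of
samples whose nearest-particle distance exceeds $R$.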
\begin{figure}[!t]
\centering
$\begin{array}{c c}
\underset{\mbox{\Large(a)}}{\includegraphics[height=2.5in]{figure3a}} &
\underset{\mbox{\Large(b)}}{\includegraphics[height=2.5in]{figure3b}}\\
\underset{\mbox{\Large(c)}}{\includegraphics[height=2.5in]{figure3c}} &
\underset{\mbox{\Large(d)}}{\includegraphics[height=2.5in]{figure3d}}
\end{array}$
\caption{Void exclusion probability functions $E_V(R)$ for the $A_d$, $d$-dimensional
diamond (Dia$_d$), and $d$-dimensional kagom\'e (Kag$_d$) crystals at unit number
density: (a) $d = 2$; (b) $d = 3$; (c) $d = 4$; (d) $d = 5$. }\label{Evd}
\end{figure}
\begin{table}[!t]
\caption{Estimates of the covering radius $R_C$ and quantizer error $\mathcal{G}$
for the $A_d$, $d$-dimensional diamond Dia$_d$, and $d$-dimensional
kagom\'e Kag$_d$ lattices. Errors for the calculations are $\pm 0.0004$
for the covering radii and $\pm 0.00004$ for the quantizer errors, as
estimated by comparison with exact results for the $A_d$ lattice in two and three
dimensions. The covering radii for the $A_2$, $A_3$, Dia$_2$, and Kag$_2$
lattices are known exactly \cite{CoSl99, To10}, and these exact results are reported here;
also reported are the exact values for the quantizer errors of the $A_2$ and $A_3$
lattices \cite{To10}.}\label{tablecovquant}
\begin{indented}
\item[]\begin{tabular}{c | c | c | c | c | c | c | c | c}
\hline\hline
~ & \multicolumn{2}{c|}{$d = 2$} & \multicolumn{2}{c|}{$d = 3$} & \multicolumn{2}{c|}{$d = 4$} &
\multicolumn{2}{c}{$d = 5$}\\
\cline{2-9}
~ & $R_C$ & $\mathcal{G}$ & $R_C$ & $\mathcal{G}$ & $R_C$ & $\mathcal{G}$ &
$R_C$ & $\mathcal{G}$\\
\hline
$A_d$ & 0.6204 & 0.08018 & 0.7937 & 0.07875 & 0.8816 & 0.07780 & 0.9984 & 0.07769\\
\cline{2-9}
Dia$_d$ & 0.8774 & 0.09627 & 0.8640 & 0.09112 & 1.0472 & 0.08825 & 1.0776 & 0.08649\\
\cline{2-9}
Kag$_d$ & 0.9306 & 0.09615 & 1.0384 & 0.09925 & 1.2048 & 0.09973 & 1.2824 & 0.09939\\
\hline\hline
\end{tabular}
\end{indented}
\end{table}
Our results are shown in Figure \ref{Evd}. Table \ref{tablecovquant} summarizes our
results for the covering radii and quantizer errors of the diamond and kagom\'e crystals.
The $d$-dimensional kagom\'e crystal possesses a relatively large covering radius in
each dimension, implying that the covering of Euclidean space with the kagom\'e crystal involves
much more than pairwise overlap potentials even in two dimensions. This behavior
follows directly from the increasing sizes of holes within the fundamental cell in
high dimensions. Since all of the particles in the kagom\'e crystal are relegated to the
boundary of the fundamental cell, the majority of the space in the fundamental cell is
void space, thereby increasing the value of the covering radius relative to the
$A_d$ Bravais lattice.
However, it is interesting to note that the quantizer error for the
two-dimensional kagom\'e crystal is actually smaller than the associated error for the honeycomb
(two-dimensional diamond) crystal. Indeed, we recall that the kagom\'e crystal generates a denser
sphere packing in two dimensions than the honeycomb crystal, implying by definition that
$E_V(R) = 1-v(R)$ at unit density for a larger range in $R$. The void exclusion probability of the
kagom\'e crystal is therefore relatively ``tight'' compared to the honeycomb crystal in such a way that
the longer tail does not substantially affect the first moment of the distribution. The two-dimensional
kagom\'e crystal therefore provides an interesting example of how increasing the complexity of
a crystal structure can conceivably improve the quantizer error; ``simpler'' structures
are not always better quantizers, even in low dimensions.
This behavior changes drastically in higher dimensions, where the quantizer error for the
$d$-dimensional kagom\'e structure is unusually high relative to the $d$-dimensional diamond
and $A_d$ structures. Indeed, the bulk distribution of the void space for the five-dimensional kagom\'e
crystal is seen to be larger than the corresponding curve for a Poisson-distributed point pattern, consisting of
uncorrelated random points in Euclidean space. This unusual property implies that the
quantizer error for the five-dimensional kagom\'e crystal is larger even than Zador's upper bound
\eref{Zador} for the minimal quantizer error. It is highly counterintuitive that a disordered
point pattern should be a better quantizer than a periodic crystal with relatively low complexity and
in relatively low dimensions. Nevertheless, this observation is consistent with the prevalence of
large void regions in the high-dimensional kagom\'e crystal and supports our description of this system as
being effectively ``filamentary'' in high dimensions \cite{FN4}.
This result also suggests the onset of a decorrelation principle
for the $d$-dimensional kagom\'e crystal, an issue we explore in more detail in Section V.
\begin{figure}[!t]
\centering
\includegraphics[width=0.45\textwidth]{figure4}
\caption{Void exclusion probability functions $E_V(R)$ at unit number density for the five-dimensional
kagom\'e crystal and a disordered, uncorrelated Poisson point process. Also
included is the one-point series lower bound $1-v(R)$.}\label{KP}
\end{figure}
\subsection{Number variance coefficients}
As previously mentioned, the asymptotic scaling of the number variance provides a quantitative metric
for the extent of order within a structure over asymptotically large length scales with respect to
the mean nearest-neighbor separation between points \cite{ToSt03, ZaTo09}. Since we are utilizing the
$d$-dimensional kagom\'e crystal to probe the applicability of the decorrelation principle to
periodic structures, it is of interest to calculate the surface-area coefficient $B$ [cf. \eref{BNV}]
governing surface-area fluctuations in the local number density. Note that periodicity of the
fundamental cell implies the presence of full long-range order in both the $d$-dimensional
kagom\'e and diamond crystals, which is sufficient to induce hyperuniformity.
Unfortunately, this long-range order also implies that the integral \eref{BNV} diverges; however,
Torquato and Stillinger have reformulated this expression using a convergence ``trick'' \cite{StDeSa98}
to ensure a properly convergent expression for periodic crystals \cite{ToSt03}. Specifically, we rewrite
the expression \eref{BNV} for the coefficient $B$ as
\begin{eqnarray}
B &= \lim_{\beta\rightarrow 0^+}\frac{-\rho \kappa(d)}{D} \int \exp(-\beta r^2) r \left[g_2(\mathbf{r})-1\right]
d\mathbf{r},
\end{eqnarray}
where $\kappa(d) = \Gamma(1+d/2)/\{\Gamma[(d+1)/2]\Gamma(1/2)\}$ and $r = \vert\vert\mathbf{r}\vert\vert$.
Expanding this integral implies that
\begin{eqnarray}
B &=\frac{\rho d \pi^{(d-1)/2}}{2 D \beta^{(d+1)/2}} - \frac{\rho \kappa(d)}{D} \int
\exp(-\beta r^2) r g_2(\mathbf{r}) d\mathbf{r} \qquad (\beta\rightarrow 0^+),
\end{eqnarray}
and the remaining integral involving the pair correlation function can be interpreted as the
average pair sum for the pair interaction $v(r) = \exp(-\beta r^2) r$ over the underlying crystal structure,
which is convergent for all $\beta > 0$. Writing the average pair sum explicitly, we find
\begin{eqnarray}
\fl B &= \frac{\rho d \pi^{(d-1)/2}}{2D \beta^{(d+1)/2}} - \frac{\kappa(d)}{ND}
\sideset{}{^\prime}\sum_{j, \ell, \mathbf{p}} \exp\left(-\beta \vert\vert \mathbf{p}+\boldsymbol\nu_j-
\boldsymbol\nu_{\ell}\vert\vert^2\right)
\vert\vert\mathbf{p}+\boldsymbol\nu_j - \boldsymbol\nu_\ell\vert\vert \qquad (\beta \rightarrow 0^+),
\end{eqnarray}
where the prime on the summation means that the vector $\mathbf{p}=\mathbf{0}$ is excluded
when $\boldsymbol\nu_j = \boldsymbol\nu_\ell$.
To remove the dependence of $B$ on the length scale $D$, we
report the scaled coefficient $\eta^{1/d} B$, where
$\eta = \rho v(D/2)$, as has previously been done in the literature \cite{ToSt03, ZaTo09}.
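A direct (if slowly converging) numerical realization of this prescription (ours; NumPy assumed;
the cutoff \texttt{reach} must grow roughly like $1/\sqrt{\beta}$ for the Gaussian-damped sum to be
well converged) is sketched below; the $\beta \rightarrow 0^+$ limit is approached by evaluating at
a sequence of decreasing $\beta$:
\begin{verbatim}
# Sketch (ours): scaled coefficient eta^(1/d) B from the lattice sum.
import numpy as np
from itertools import product
from math import gamma, pi, sqrt

def scaled_B(d, basis, beta, reach):
    n_b = len(basis)
    M = np.linalg.cholesky(0.5 * (np.eye(d) + np.ones((d, d))))
    M *= (n_b * 2 ** (d / 2) / np.sqrt(d + 1)) ** (1.0 / d)   # rho = 1
    S = 0.0
    for p in product(range(-reach, reach + 1), repeat=d):
        for bj in basis:
            for bl in basis:
                v = (np.asarray(p, float) + bj - bl) @ M
                r2 = float(v @ v)
                if r2 > 1e-12:        # primed sum: p = 0, j = l excluded
                    S += np.exp(-beta * r2) * sqrt(r2)
    kappa = gamma(1 + d / 2) / (gamma((d + 1) / 2) * gamma(0.5))
    DB = (d * pi ** ((d - 1) / 2) / (2 * beta ** ((d + 1) / 2))
          - kappa * S / n_b)          # equals B*D at rho = 1
    # eta^(1/d) B with eta = v(D/2); the length scale D cancels
    return sqrt(pi) * DB / (2 * gamma(1 + d / 2) ** (1.0 / d))

kag2 = [np.zeros(2)] + [0.5 * e for e in np.eye(2)]
for beta in (0.2, 0.1, 0.05):
    print(beta, scaled_B(2, kag2, beta, reach=14))
# the values should drift toward the tabulated Kag_2 coefficient
# (about 0.147) as beta decreases
\end{verbatim}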
\begin{table}[!t]
\caption{Number variance coefficients $\eta^{1/d} B$ for the
$A_d$, $d$-dimensional kagom\'e Kag$_d$, and $d$-dimensional diamond
Dia$_d$ crystals. Here, we have taken $\eta$ to be the packing density
of the structure. The last two entries correspond to $g_2$-invariant processes
as discussed in the text. The approximate error for each entry is $\pm 0.00005$
by comparison with previously reported results \cite{ToSt03, ZaTo09}.}\label{Kagnumvar}
\begin{indented}
\item[]\begin{tabular}{c| c c c c c}
\hline\hline
$d$ & $A_d$ & Kag$_d$ & Dia$_d$ & Step-function & Step+delta-function\\
\hline
2 & 0.12709 & 0.14675 & 0.14176 & 0.21221 & 0.15005\\
3 & 0.15569 & 0.20740 & 0.17737 & 0.28125 & 0.19086\\
4 & 0.17734 & 0.27330 & 0.20555 & 0.33953 & 0.22342 \\
5 & 0.19579 & 0.35412 & 0.23144 & 0.39063 & 0.25092\\
\hline
\hline
\end{tabular}
\end{indented}
\end{table}
Table \ref{Kagnumvar} reports our results for the number variance coefficients of the $d$-dimensional
diamond and kagom\'e crystals. It is helpful to compare these results to similar calculations
performed for certain so-called \emph{$g_2$-invariant processes} \cite{ToSt02}. A $g_2$-invariant process
involves constraining a chosen non-negative form for the pair correlation function $g_2$
to remain invariant over a nonvanishing density range while keeping all other relevant
macroscopic variables fixed \cite{ToSt02}. We consider the following two examples of $g_2$-invariant
processes: the so-called ``step-function $g_2$,'' in which the pair correlation function has the
form
\begin{eqnarray}
g_2(r) &= \Theta(r-D)\label{step}
\end{eqnarray}
for some length scale $D$, and the ``step+delta-function $g_2$'', given by
\begin{eqnarray}
g_2(r) &= \Theta(r-D) + \frac{Z}{\rho s(D)} \delta(r-D),\label{stepdelta}
\end{eqnarray}
where $Z$ can be interpreted as an average contact coordination number \cite{ToSt03}. Both of these
processes correspond to \emph{disordered} point patterns that are hyperuniform at the
critical densities
\begin{eqnarray}
\eta_c &= 1/2^d \qquad (\mbox{step-function})\\
\eta_c &= (d+2)/2^{d+1} \qquad (\mbox{step+delta-function})\label{stepdeta}.
\end{eqnarray}
These critical densities follow from the hyperuniformity condition $S(k=0) = 0$, which for the
forms \eref{step} and \eref{stepdelta} requires $2^d \eta_c = 1 + Z$ (with $Z = 0$ for the
step function).
Strong numerical evidence has been presented to suggest that these pair correlation functions
are indeed realizable as point processes at the critical densities \cite{Uc06}.
Torquato and Stillinger have used $g_2$-invariant processes to define an optimization procedure
that places lower bounds on the maximal sphere-packing density in $d$ Euclidean dimensions
\cite{ToSt06, ToSt02}.
Torquato and Stillinger have analytically evaluated the number variance coefficients for these
$g_2$-invariant processes:
\begin{eqnarray}
B_{\mbox{\scriptsize step}} &= \frac{d^2 \Gamma(d/2)}{4\Gamma[(d+3)/2]\Gamma(1/2)}\\
2^d \eta_c B_{\mbox{\scriptsize delta+step}} &=
\frac{d^2 (d+2)\Gamma(d/2)}{16 \Gamma[(d+3)/2]\Gamma(1/2)},
\end{eqnarray}
and these results are included in Table \ref{Kagnumvar}. One notices that for
all $d\geq 3$, the $d$-dimensional kagom\'e crystal possesses a higher number variance coefficient
than the step+delta-function process, suggesting that there exists a disordered configuration
of points in high dimensions that is more ordered over asymptotically large length
scales than this periodic structure. This result is surprising since points in the step+delta-function point
pattern are completely decorrelated from each other beyond the constrained hard-particle diameter.
Furthermore, the average contact coordination number for this process is \cite{ToSt03}
\begin{eqnarray}
Z &= d/2,
\end{eqnarray}
which is for all dimensions $d$
less than the nearest-neighbor coordination number of the $d$-dimensional
kagom\'e crystal, $Z_{\mbox{\scriptsize Kag}_d} = 2d$.
To understand this behavior, we first note that the packing density \eref{stepdeta} is less than
the corresponding density \eref{kagphi} for the $d$-dimensional kagom\'e crystal for all $d \leq 3$;
however, for $d \geq 4$, the step+delta-function process possesses a higher packing density
than the kagom\'e crystal. This observation implies that the local ordering between points induced by the
delta-function contribution to the pair correlation function of the step+delta-function process
is sufficient to regularize the void space in such a way that the packing radius $R_P$
remains relatively high compared to the kagom\'e structure. In particular, the large holes within
the kagom\'e fundamental cell control the structural properties of the point pattern in high
Euclidean dimensions, and it is these holes that increase the asymptotic number variance coefficient
in such a way that the point pattern can no longer be distinguished from correlated but disordered
point patterns. This behavior is in accordance with an effective decorrelation between the
points of the kagom\'e structure over large length scales and supports the presence of a
decorrelation principle for this system.
It is important to note that the increasing nearest-neighbor coordination number of the $d$-dimensional
kagom\'e crystal implies that correlations between nearest-neighbors are \emph{increasing}
with increasing dimension. For this reason, the number variance coefficient governing surface
area fluctuations is always smaller than the corresponding coefficient for the simple step-function
process in any dimension; these \emph{constrained} correlations are never removed by the
dimensionality of the system. However, correlations over several nearest-neighbor distances
apparently diminish in an effective manner, which we make more precise in Section V,
and it is this type of decorrelation that we claim is responsible for unusually large
asymptotic local-number-density fluctuations in the kagom\'e structure. Note also that these
results are consistent with our analysis of the quantizer errors for the $d$-dimensional kagom\'e crystals.
\section{The decorrelation principle for periodic point patterns}
\subsection{Universality of decorrelation in high dimensions}
The decorrelation
principle \cite{ToSt06} states that
unconstrained asymptotic $n$-particle correlations vanish in sufficiently
high dimensions, and all higher-order $(n\geq 3)$ correlation functions
can be expressed in terms of the pair correlation function within some small error.
Although originally stated in the context of hard-sphere packings, certain ``soft''
many-particle distributions, including points interacting in the Gaussian core model \cite{Za08}
and noninteracting spin-polarized ground-state fermions \cite{To08}, are also known to exhibit this effect,
even in relatively low dimensions $d = 1$-$6$.
No rigorous proof for this principle has been found to date, but
it is based on strong theoretical arguments and has been shown to be
remarkably robust in theoretical and numerical studies.
Does the decorrelation principle apply in some generalized sense to periodic crystals?
It is not trivial to extend the decorrelation principle to periodic crystals,
which inherently possess long-range order owing to the
regular arrangement of points within a lattice structure.
This full long-range order induces
deterministic correlations as manifested by Bragg peaks in the power spectrum.
In particular, we recall from \eref{g2periodic} that the angularly-averaged pair correlation function
consists of consecutive Bragg peaks at each coordination shell; it is convenient to
express this relation in terms of the packing fraction $\phi$ and associated packing diameter $D$:
\begin{eqnarray}
g_2(r) &= \sum_{k=1}^{+\infty} \frac{Z_k}{2^d d \phi}\left(\frac{D}{r_k}\right)^{d-1} \delta[(r-r_k)/D].\label{pcf}
\end{eqnarray}
The intensity of each peak in the pair correlation function is therefore determined by the
coordination number $Z_k$ of the $k$-th coordination shell, the packing density $\phi$,
and the distance $r_k$ to the $k$-th coordination shell.
It is interesting to examine the behavior
of the intensity
\begin{eqnarray}
A(d) = Z_1(d)/(2^d d \phi)\label{AD}
\end{eqnarray}
associated with the first peak of the pair correlation function
for the $d$-dimensional kagom\'e and hypercubic $\mathbb{Z}^d$ crystals, shown in Figure \ref{figAd}.
Note that both of these crystals possess a nearest-neighbor contact number $Z_1(d) = 2d$,
equivalent to the isostatic condition \cite{FNiso}.
After an initial drop in relatively low dimensions, the intensity $A(d)$ increases without bound for
both of the periodic systems. Furthermore, the $d$-dimensional kagom\'e crystal possesses a
first-shell intensity that grows much more rapidly with dimension than even the
hypercubic lattice $\mathbb{Z}^d$,
which is a direct consequence of the exponentially diminishing packing density
and the prevalence of large holes in the fundamental cell. In both cases, nearest-neighbor
correlations asymptotically \emph{increase} with dimension $d$, and it is therefore
unclear whether a decorrelation principle should hold for periodic crystals.
This behavior should be contrasted
with corresponding results for the disordered $g_2$-invariant step+delta-function process
\eref{stepdelta}, where the first-peak intensity $A(d) = 1/(d+2)$ diminishes monotonically
with increasing dimension $d$.
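The following sketch (ours; standard library only) evaluates \eref{AD} from the closed-form
packing densities and reproduces the qualitative behavior plotted in Figure \ref{figAd}:
\begin{verbatim}
import math

def phi_kag(d):        # kagome packing density derived above
    return (math.pi ** (d / 2) * math.sqrt(d + 1)
            / (2 ** (1.5 * d) * math.gamma(1 + d / 2)))

def phi_cubic(d):      # Z^d at unit density: phi = v(1/2)
    return math.pi ** (d / 2) / (2 ** d * math.gamma(1 + d / 2))

for d in range(2, 13):  # Z_1 = 2d for both crystals
    A_kag = 2 * d / (2 ** d * d * phi_kag(d))
    A_cub = 2 * d / (2 ** d * d * phi_cubic(d))
    print(f"d = {d:2d}   Kag {A_kag:7.3f}   Z^d {A_cub:7.3f}"
          f"   step+delta {1.0 / (d + 2):.3f}")
\end{verbatim}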
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figure5}
\caption{Intensity $A(d) = Z_1(d)/(2^d d \phi)$ associated with the first $\delta$-function peaks
of the pair correlation functions for the $d$-dimensional kagom\'e and hypercubic
$\mathbb{Z}^d$ crystals. Also shown for comparison is the result for the
$g_2$-invariant step+delta-function process \eref{stepdelta}.}\label{figAd}
\end{figure}
Nevertheless, one can consider a disordered point pattern to be a
realization of a non-Bravais lattice with a large number of particles
randomly distributed in the fundamental cell. This observation suggests
that periodic crystals with an $M$-particle basis
growing with dimension may exhibit the same decorrelation properties as a
disordered many-particle distribution. If this notion is true, then
the effects of decorrelation should then be readily observed
even in relatively low dimensions as with disordered packings
\cite{ToSt06, To06B, ToUcSt06, Za08, To08, Sk06}.
It is therefore intriguing to test the
decorrelation principle for the $d$-dimensional kagom\'e lattice, which, as previously mentioned,
possesses $d+1$ particles per fundamental cell.
The deterministic long-range order of a periodic crystal implies that the decorrelation principle,
if it applies, cannot be directly observed from the pair correlation function \eref{pcf} itself,
but rather from some smoothed form of $g_2$.
Instead, the pair correlation function of a crystal must be interpreted in the sense of
\emph{distributions} \cite{Li01}; it gains physical meaning only when integrated
with an admissible function. Therefore, the appropriate function to consider is the
``smoothed'' pair correlation function
\begin{eqnarray}
g_2^{(a)}(r) &= \sum_{k=1}^{+\infty} \frac{Z_k D}{2^{d} d \phi a \sqrt{\pi}} \left(\frac{D}{r_k}\right)^{d-1}
\exp\left\{-[(r-r_k)/a]^2\right\},\label{g2smooth}
\end{eqnarray}
corresponding to a convolution of \eref{pcf} with a Gaussian kernel \cite{Li01, Bl93}.
Note that $g_2^{(a)}(r) \rightarrow g_2(r)$ for $r \in [0, +\infty)$
(in the sense of distributions) as $a\rightarrow 0$.
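A direct implementation of \eref{g2smooth} is straightforward (ours; NumPy assumed; the
illustrative shell data are the first five Kag$_2$ entries of Table \ref{coordtable}, with $D$
the nearest-neighbor distance):
\begin{verbatim}
# Sketch (ours): Gaussian-broadened pair correlation function.
import numpy as np

def g2_smoothed(r, shells, d, phi, D, a):
    g = np.zeros_like(r)
    pref = D / (2 ** d * d * phi * a * np.sqrt(np.pi))
    for rk, Zk in shells:             # shells: (r_k, Z_k) pairs
        g += Zk * pref * (D / rk) ** (d - 1) * np.exp(-((r - rk) / a) ** 2)
    return g

shells2 = [(np.sqrt(s), Z)            # first five Kag_2 shells
           for s, Z in [(1, 4), (3, 4), (4, 6), (7, 8), (9, 4)]]
r = np.linspace(0.5, 3.5, 601)
g = g2_smoothed(r, shells2, d=2, phi=0.6802, D=1.0, a=0.1)
\end{verbatim}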
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figure6}
\caption{Smoothed pair correlation function for the kagom\'e crystal in dimensions
$d = 2$ and $d = 4$. The smoothing parameter $a = 0.1D$ [cf. \eref{g2smooth}].}\label{Kagg2ref}
\end{figure}
Figure \ref{Kagg2ref} compares the smoothed pair correlation functions for the $d = 2$ and $d = 4$
kagom\'e lattices. Remarkably, asymptotic pair correlations are observed to diminish
even in the relatively low dimensions shown (as with
disordered point patterns \cite{ToSt06, Za08, To08, Sk06, To06B, ToUcSt06}),
implying that the pair correlation function
approaches its asymptotic value of unity in sufficiently high dimensions. Importantly,
this effect at large pair separations is observed for any nonzero choice of the smoothing parameter $a$, with only quantitative differences in the pair correlation function corresponding to the degree of localization
of the $\delta$-function peaks. Our results therefore suggest that the decorrelation principle
applies to the $d$-dimensional kagom\'e crystal in the sense that \emph{any} delocalization
of the local density field is sufficient to cause asymptotic pair correlations to diminish
with respect to increasing dimension. Note that these observations are consistent with
our calculations for the asymptotic number variance coefficient for the kagom\'e crystal,
which is higher than the corresponding result for the disordered step+delta-function process
even in low dimensions.
\begin{figure}[t]
\centering
\includegraphics[width=0.5\textwidth]{figure7}
\caption{Smoothed pair correlation function for the hypercubic Bravais lattice $\mathbb{Z}^d$
in dimensions
$d = 2$, $d = 4$, and $d = 20$. The smoothing parameter $a = 0.1D$.}\label{Zg2}
\end{figure}
Additional calculations suggest that this approach to the decorrelation principle for periodic structures
is applicable even to crystals with $M$-particle bases that do not increase with dimension. Figure
\ref{Zg2} provides calculations of the smoothed pair correlation function for the hypercubic Bravais lattice
$\mathbb{Z}^d$. Like the kagom\'e crystal, decorrelation is readily apparent even in low dimensions,
and upon reaching $d = 20$ the system is essentially completely decorrelated beyond
a few nearest-neighbor distances.
Since any point pattern, disordered or not, can be modeled as a periodic point pattern, potentially
with a large number of points per fundamental cell, these observations support the remarkable statement
that the decorrelation principle is a \emph{universal} feature of high-dimensional
point patterns, including those distributions associated with sphere packings.
In particular, the principle should apply not only to disordered point patterns as originally
discussed by Torquato and Stillinger \cite{ToSt06} but also to periodic crystals.
The smoothing operation that we have introduced for the
pair correlation function allows us to observe the effects of decorrelation in periodic
crystals in relatively low dimensions. In asymptotically high dimensions,
the widths of the Gaussians can be made
arbitrarily small since consecutive coordination shells are tightly
clustered.
Decorrelation is therefore a fundamental feature of the pair correlation function itself
of a high-dimensional periodic point pattern, whether it is a simple Bravais lattice
or a crystal with many points per fundamental cell. This principle supports the claim that
higher-order correlation functions do not provide additional information beyond
that contained in $g_2$, meaning that the pair correlations alone
completely determine the packing
in high dimensions.
\subsection{Implications for the maximal density of sphere packings}
The onset of decorrelation in high dimensions for periodic crystals has important implications for
optimal lower bounds on the maximal sphere-packing density. Minkowski provided
a \emph{nonconstructive} proof that the asymptotic behavior of the maximal density of
sphere packings is bounded from below by \cite{ToSt06, Mi05}
\begin{eqnarray}
\phi_{\mbox{\scriptsize max}} &\gtrsim \frac{1}{2^d} \qquad (d\rightarrow +\infty).
\end{eqnarray}
This scaling is quite distinct from the Kabatiansky-Levenshtein upper bound on the
maximal sphere packing density \cite{KaLe78}
\begin{eqnarray}
\phi_{\mbox{\scriptsize max}} &\leq \frac{1}{2^{0.5990d}} \qquad (d\rightarrow +\infty).\label{KL}
\end{eqnarray}
Utilizing the decorrelation principle for \emph{disordered} sphere packings, Torquato and
Stillinger derived a conjectural lower bound on the maximal sphere-packing density that provides
putative exponential improvement over Minkowski's bound \cite{ToSt06}:
\begin{eqnarray}
\phi_{\mbox{\scriptsize max}} &\gtrsim \frac{d^{1/6}}{2^{0.77865\ldots d}} \qquad (d\rightarrow +\infty).\label{TS}
\end{eqnarray}
This bound was derived using the aforementioned $g_2$-invariant optimization procedure
for a ``test'' pair correlation function that in the high-dimensional limit becomes a step+delta
function. It is a conjectural bound because it has yet to be shown that such a pair correlation
function with packing density \eref{TS} is realizable by a point process, an issue to which we
will return.
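To convey how different these asymptotic scalings are, the following sketch (ours; standard
library only) tabulates them for several dimensions; the comparison is only indicative, since
each expression is a leading-order asymptotic form:
\begin{verbatim}
for d in (24, 48, 96, 192):
    mink = 2.0 ** (-d)                           # Minkowski
    ts = d ** (1.0 / 6) * 2.0 ** (-0.77865 * d)  # Torquato-Stillinger
    kl = 2.0 ** (-0.5990 * d)                    # Kabatiansky-Levenshtein
    print(f"d = {d:3d}   Minkowski {mink:.2e}   TS {ts:.2e}   KL {kl:.2e}")
\end{verbatim}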
The gap between the Kabatiansky-Levenshtein upper bound and the Torquato-Stillinger lower bound
remains relatively large in high dimensions, and it is therefore an open problem to determine
which bound provides the ``correct'' asymptotic scaling.
To gain some insight into this issue, we note that
in sufficiently high
dimensions the distances between subsequent coordination shells become increasingly
small, implying that the smoothing parameter $a$ used to observe the decorrelation effect
in the pair correlation function does not need to be chosen very large. In the asymptotic dimensional
limit, it follows that \emph{any} choice of the smoothing parameter is sufficient to ``collapse''
the pair correlation function onto its asymptotic value of unity with the exception of nearest-neighbor
correlations, which are dominant in high dimensions. We therefore emphasize that
the smoothing operation we have employed in this work is only a convenient tool
that allows us to observe the decorrelation principle in even relatively low dimensions. The decorrelation
principle itself is apparently a \emph{fundamental} and \emph{universal} phenomenon of
any high-dimensional point pattern, ordered or not,
manifested in the pair correlation function since higher-order correlation functions
do not introduce additional information \cite{ToSt06}. In summary,
decorrelation suggests that the pair correlation functions
of general periodic point patterns tend to the step+delta-function form, which is precisely
the same asymptotic form as the test function that Torquato and Stillinger used to obtain
the lower bound \eref{TS} \cite{ToSt06, ScStTo08}.
Appendix A contains an analytical demonstration of the decorrelation
effect for the previously-mentioned $g_2$-invariant step-function and step+delta-function processes.
However, the asymptotic scaling of the packing density for a sphere packing
will depend inherently on the \emph{manner} in which the pair correlation function approaches
this asymptotic form.
Since the dominant correlations in asymptotically high dimensions will be those between nearest
neighbors in a sphere packing, owing to the well-defined exclusion region in the pair correlation function,
the decorrelation principle suggests that all sphere packings in high dimensions possess
pair correlation functions of an effective step+delta-function form. The intensity $A(d)$ of the
associated delta-function peak is given by \eref{AD}. Whether this intensity
increases (as with the $d$-dimensional
kagom\'e and hypercubic $\mathbb{Z}^d$ crystals)
or diminishes [as with the $g_2$-invariant
step+delta-function process at its critical density \eref{stepdeta}]
in asymptotically high dimensions therefore depends on the
relative scalings of $Z(d)$ and $\phi(d)$.
Using the same linear programming techniques introduced in Ref. [12],
Scardicchio, Stillinger, and Torquato have numerically explored the
$Z(d)$-$\phi(d)$ parameter space associated with the step+delta-function process
when hyperuniformity is \emph{not} enforced \emph{a priori}
[as it is at the critical density \eref{stepdeta}] \cite{ScStTo08}. Their results provide
the \emph{same} exponential improvement on Minkowski's lower bound for the
maximal sphere-packing density as the Torquato-Stillinger lower bound \eref{TS}.
Additionally, this scaling is robust in the sense that it is recovered for
test pair correlation functions containing any number of delta-function peaks \cite{ScStTo08}.
This latter observation implies that next-nearest-neighbor correlations, even if they
persist in high dimensions, do not provide additional exponential improvement
over the Torquato-Stillinger lower bound \eref{TS} for the maximal sphere-packing density.
Since the $g_2$-optimization procedure identifies the maximal packing density
obtainable with the step+delta-function form, which is also
attained by high-dimensional periodic sphere packings according to the decorrelation principle,
our results support the remarkable possibility that the Torquato-Stillinger
lower bounds may in fact be \emph{optimal} in asymptotically high dimensions. If confirmed,
this result would imply that the
Kabatiansky-Levenshtein upper bound \eref{KL} provides a suboptimal
high-dimensional estimate.
This conclusion is consistent with similar arguments put forth in
Ref. [51]. If this claim is true, it is interesting to note that the results of
Scardicchio, Stillinger, and Torquato suggest that the intensity of the nearest-neighbor peak in
the pair correlation function of maximally dense sphere packings in high dimensions
should \emph{diminish} with increasing $d$ \cite{ScStTo08}, implying that the full pair correlation function
completely decorrelates to a step-function form. It follows that certain periodic point patterns such as the
$d$-dimensional kagom\'e and hypercubic $\mathbb{Z}^d$ crystals \emph{cannot} be
maximally dense in high dimensions, thereby
providing direct evidence that the manner in which
the pair correlation function approaches the asymptotic
step+delta-function form reflects the high-dimensional asymptotic
scaling of the packing density. Indeed, the maximally dense sphere packings
in high dimensions may therefore be disordered
(i.e., with a pair correlation function decaying to unity sufficiently
fast in the infinite-volume limit \cite{ToSt06}) as first suggested by Torquato and
Stillinger \cite{ToSt06}.
\section{Concluding remarks}
We have provided constructions of the high-dimensional generalizations of
the kagom\'e and diamond crystals. The $d$-dimensional diamond crystal is obtained by
including in the $A_d$ fundamental cell the centroid of the regular simplex formed by the
lattice basis vectors. The $d$-dimensional kagom\'e crystal can then be constructed by
placing points at the midpoints of the ``bonds'' in the diamond crystal. The kagom\'e crystal
possesses a nearest-neighbor contact number $Z = 2d$ in $d$ Euclidean dimensions, which is
equivalent to the isostatic condition for jammed sphere packings.
In two dimensions, the kagom\'e crystal is locally but neither collectively nor strictly jammed \cite{Do04} under periodic boundary conditions;
however, it can be reinforced to obtain the lowest-density strictly jammed
subpacking of the triangular lattice. In three dimensions,
the pyrochlore crystal has clustered equilateral-triangle vacancies. In contrast
to $d = 2$, the $d$-dimensional
kagom\'e crystal is therefore not strictly jammed for any $d\geq 3$ \cite{OKe91}.
Using these constructions, we have derived analytically the packing densities of these structures
and have shown that while the kagom\'e crystal generates a denser sphere packing for $d = 2$
and $d = 3$, the diamond crystal is denser for all $d \geq 4$, at which point the holes
in the kagom\'e lattice become substantially large. These observations are supported by
numerical calculations for the void exclusion probabilities of the kagom\'e and diamond crystals.
Surprisingly, the bulk of the void-space distribution for the kagom\'e lattice in dimensions $d \geq 5$
is larger than the corresponding result for the disordered, fully-uncorrelated Poisson point pattern.
Our results have implications for the quantizer errors and covering radii of these structures in high
dimensions. The diamond crystal provides a thinner covering of space than the kagom\'e crystal
for all $d\geq 2$, even though the kagom\'e crystal is a better quantizer in two dimensions.
However, the large holes in the fundamental cell for the kagom\'e lattice rapidly increase
its quantizer error in high dimensions such that it even exceeds Zador's upper bound
and, therefore, also Torquato's improved upper bound \cite{To10}, in dimensions as low as $d = 5$.
This observation implies that disordered point patterns can be better quantizers than certain
periodic structures even in relatively low dimensions, which is consistent with the
properties of certain disordered point patterns reported in Ref. [1].
We have also calculated the asymptotic surface-area coefficients for the number variance of the
kagom\'e and diamond crystals. Interestingly, the $d$-dimensional kagom\'e lattice possesses a larger
asymptotic number variance coefficient than even the disordered step+delta-function $g_2$-invariant
process for all $d \geq 3$. Since the number variance coefficient provides a quantitative measure
of structural order over large length scales \cite{ToSt03, ZaTo09}, this result counterintuitively
suggests that periodic crystals may possess \emph{less} long-range structural order than
prototypical ``disordered'' point patterns in sufficiently high dimensions, which is consistent with
a generalized decorrelation principle for periodic structures.
By calculating a ``smoothed'' pair correlation function for the $d$-dimensional kagom\'e
crystal, we have provided direct evidence for the decorrelation principle in periodic
point patterns. Indeed,
the decorrelation principle appears to be \emph{universal}, applying also to Bravais lattices
as shown by corresponding calculations for the hypercubic lattice $\mathbb{Z}^d$ in high dimensions.
Our work has important implications for the maximal sphere-packing density in high Euclidean
dimensions. In particular, the suggested universality of the decorrelation principle for both disordered
and periodic sphere packings suggests that the putative exponential improvement
obtained by Torquato and Stillinger \cite{ToSt06} on
Minkowski's lower bound for the maximal packing density is in fact optimal, which is consistent
with previously-reported results in the literature \cite{ScStTo08}. The pair correlation functions of
high-dimensional sphere packings apparently possess a general step+delta-function form, and
optimization of the packing structure through the
$Z$-$\phi$ parameter space \cite{ScStTo08}, where $Z$ is the
mean nearest-neighbor contact number and $\phi$ is the packing density, suggests that
maximally dense packings undergo a complete decorrelation in high Euclidean dimensions.
In particular, the intensity of the nearest-neighbor peak in the pair correlation function diminishes
in high dimensions, which should be contrasted with the corresponding behaviors for the
$d$-dimensional kagom\'e and hypercubic $\mathbb{Z}^d$ crystals. These latter structures
therefore cannot be maximally dense in high dimensions, which is in accordance with the
notion that the densest packings for asymptotically large $d$ are in fact disordered \cite{ToSt06}.
Importantly, this work provides the foundation for a rigorous proof of the
Torquato-Stillinger lower bound on the maximal sphere-packing density and its optimality
in high dimensions.
Future work is also warranted to explore the implications of the decorrelation
principle for the covering, quantizer, and number variance problems.
\ack
This work was supported by the Materials Research Science and Engineering Center
(MRSEC) Program of the National Science Foundation under Grant No.
DMR-0820341.
\section{Introduction}
The world contains structure: objects with dynamics governed by
physical law. Intelligent systems must infer this structure
from noisy, jumbled, and lossy sensory data. Bayesian statistics
provides a natural framework for designing such systems: given a prior distribution
$p(z)$ on underlying descriptions of the world, and a forward model $p(x|z)$
describing the process by which observations are generated,
mathematical probability defines the posterior $p(z|x) \propto p(z)p(x|z)$ over worlds
given observed data. Bayesian generative models have recently shown
exciting results in applications ranging from visual scene
understanding \citep{kulkarni2015picture,eslami2016attend} to
inferring celestial bodies \citep{regier2015celeste}.
In this paper we apply the Bayesian framework directly to a
challenging perceptual task: monitoring seismic events from a network
of spatially distributed sensors. This task is motivated by the Comprehensive Test Ban
Treaty (CTBT), which bans testing of nuclear weapons and provides for
the establishment of an International Monitoring System (IMS) to
detect nuclear explosions, primarily from the seismic signals that
they generate. The inadequacy of existing monitoring systems was cited as a
factor in the US Senate's 1999 decision not to ratify the treaty.
Our system, SIGVISA (Signal-based Vertically Integrated Seismic
Analysis), consists of a generative probability model of seismic
events and signals, with interpretable latent variables for
physically meaningful quantities such as the arrival times and
amplitudes of seismic phases. A previous system, NETVISA \citep{arora2013net}, assumed that
signals had been preprocessed into discrete detections; we extend this
by directly modeling seismic waveforms. This allows our model to
capture rich physical structure such as path-dependent modulation as
well as predictable travel times and attenuations. Inference in our model
recovers a posterior over event histories directly from
waveform traces, combining top-down with bottom-up processing to
produce a joint interpretation of all observed data. In particular,
Bayesian inference provides a principled approach to combining
evidence from phase travel times and waveform correlations, a previously unsolved problem in
seismology.
A full description of our system is given by \citet{moore2016signal};
this paper describes the core model and key points of training and
inference algorithms. Evaluating against existing systems for seismic
monitoring, we show that SIGVISA significantly increases event recall
at the same precision, detecting many additional low-magnitude events
while reducing mean location error by a factor of four. Initial
results indicate we also perform as well or better than existing
systems at detecting events with no nearby historical seismicity.
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{netvisa_sigvisa-crop}
\caption{High level structure of traditional detection-based monitoring (GA),
Bayesian monitoring (NETVISA), and signal-based
Bayesian monitoring (SIGVISA, this work). Compared to
detection-based approaches, inference in a signal-based model
incorporates rich information from seismic waveforms.}
\label{fig:monitoring_comparison}
\end{figure}
\section{Seismology background}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{correlation_dprk_MDJ}
\caption{Aligned seismic waveforms (station MDJ,
amplitude-normalized and filtered to
0.8-4.5Hz) from five events at the North Korean
nuclear test site, showing strong inter-event
correlation. By modeling this repeated structure, new
events at this site can be detected even if only observed
by a single station. }
\label{fig:xcalign}
\end{figure}
\vspace{-.5em}
We model seismic events as point sources, localized in space and time,
that release energy in the form of seismic waves. These include
compression and shear (P and S) waves that
travel through the solid earth, as well as surface waves such as
Love and Rayleigh waves. Waves are further categorized
into {\em phases} according to the path traversed from the source
to a detecting station; for example, we distinguish P
waves propagating directly upwards to the surface (Pg phases)
from the same waves following a path guided along the crust-mantle
boundary (Pn phases), among other options.
Given an event's location, it is possible to predict arrival times for
each phase by considering the length and characteristics of the event--station
path. Seismologists have developed a number of travel
time models, ranging from simple models that condition only on event
depth and event-station distance \citep{kennett1991traveltimes}, to more sophisticated models
that use an event's specific location to provide more
accurate predictions incorporating the local velocity structure
\citep{simmons2012llnl}. Inverting the predictions of a travel-time
model allows events to be located by triangulation given a set of arrival times.
Seismic stations record continuous ground motion along one or more
axes,\footnote{This work considers vertical motion only.} including background noise from natural and human sources, as well as the
signals generated by arriving seismic phases. The detailed
fluctuations in these signals are a function of the source as well as a path-dependent transfer function,
in which seismic energy is modulated and distorted by the geological
characteristics of the event--station path. Since geology does not
change much over time, events with similar
locations and depths tend to generate highly correlated waveforms (\Cref{fig:xcalign}); the lengthscale at which such
correlations are observed depends on the local geology and may range
from hundreds of meters up to tens of kilometers.
\subsection{Seismic monitoring systems}
Traditional architectures for seismic monitoring operate via bottom-up
processing (\Cref{fig:monitoring_comparison}). The waveform at each station is thresholded to
produce a feature representation consisting of discrete ``detections'' of possible phase
arrivals. {\em
Network processing} attempts to associate detections from
all stations into a coherent set of events. Potential complications include
false detections caused by noise, and missed detections from
arrivals for which the station processing failed to trigger.
In addition, the limited information (estimated arrival time, amplitude, and azimuth) in a single detection
means that an event typically requires detections from
at least three stations to be formed.
The NETVISA system \citep{arora2010global,arora2013net} replaces heuristic network
processing with Bayesian inference in a principled model of seismic
events and detections, including probabilities of false and missing
detections. Maximizing posterior probability in this model via
hill-climbing search yields an event bulletin that incorporates all available
data accounting for uncertainty. Because NETVISA separates
inference from the construction of an explicit domain
model, domain experts can improve system
performance simply by refining the model. Compared to
the IMS's previous network processing system (Global
Association, or GA), NETVISA provides significant improvements
in location accuracy and a 60\% reduction in missed events; as of this
writing, it has been proposed by the UN as the new production monitoring
system for the CTBT.
Recently, new monitoring approaches have been proposed using the
principle of {\em waveform matching}, which exploits correlations
between signals from nearby events to detect and
locate new events by matching incoming signals against a library of
historical signals. These promise the ability
to detect events up to an order of magnitude below the threshold of a
detection-based system \citep{gibbons2006detection,schaff2010one}, and
to locate such events even from a single station
\citep{schaff2012seismological}. However, adoption has been hampered by the inability to detect
events in locations with no historical seismicity, a crucial requirement
for nuclear monitoring. In addition, it has not been clear how to quantify
the reliability of events detected by waveform correlation, how to
reconcile correlation evidence from multiple stations,
or how to combine correlation and detection-based
methods in a principled way. Our work resolves these questions by
showing that both triangulation and waveform matching behaviors emerge
naturally during inference in a unified generative model of seismic
signals.
\section{Modeling seismic waveforms}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{world_ev_prior_green}
\caption{Global prior on seismic event locations.}
\label{fig:global_prior}
\end{figure}
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{.45\textwidth}
\includegraphics[width=\textwidth]{model_algebra-crop}
\caption{Parametric form $g(t; \v{\theta})$ modeling the signal
envelope of a single arriving phase. (\cref{eqn:parametric_form})}
\label{fig:parametric_form}
\end{subfigure}
\begin{subfigure}[b]{.45\textwidth}
\includegraphics[width=\textwidth,height=3cm]{model_algebra_modulation_trimmed}
\caption{Signal for an arriving phase, generated by multiplying
the parametric envelope $g$ (shaded) by a zero-mean modulation signal
$m(t; \v{w})$. (\cref{eqn:modulation})}
\label{fig:modulated_signal}
\end{subfigure}
\begin{subfigure}[b]{.45\textwidth}
\includegraphics[width=\textwidth, height=3cm]{model_algebra_combined_trimmed}
\caption{The final generated signal $\v{s}_j$ sums the contributions
of all arriving phases with an autoregressive background
noise process. (\cref{eqn:predicted_signal,eqn:final_signal})}
\label{fig:overlapping_phases}
\end{subfigure}
\caption{Steps of the SIGVISA forward model.}
\label{fig:model_algebra}
\end{figure}
SIGVISA (our current work) extends NETVISA by incorporating
waveforms directly in the generative model, eliminating the need for bottom-up
detection processing. Our model describes a joint distribution
$p(\v{E}, \v{S}) = p(\v{S}|\v{E})p(\v{E})$ on $n_E$ seismic events $\v{E}$ and
signals $\v{S}$ observed across $n_S$ stations. We model event occurrence as a time-homogeneous Poisson process,
\begin{align*}
p(n_E) = \text{Poisson}(\lambda T),\; p(\v{E}) = p(n_E) n_E! \prod_{i=1}^{n_E} p(\v{e}_i),
\end{align*}
so that the number of events generated during a time period of
length $T$ is itself random, and each event is sampled independently from
a prior $p(\v{e}_i)$ over surface location, depth, origin time, and
magnitude. The homogeneous process implies a uniform prior on origin
times, with a labeling symmetry that we correct by multiplying by the
permutation count $n_E!$. Location and depth priors, along with the event
rate $\lambda$, are estimated from historical seismicity as described by
\citet{arora2013net}. The location prior is a mixture of a kernel density
estimate of historical events, and a uniform component to allow
explosions and other events in locations with no previous seismicity (\Cref{fig:global_prior}).
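To make the generative story concrete, a minimal sampling sketch follows; the rate \texttt{lam} and the component distributions used here are simple placeholders for illustration, not the trained priors described above.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_events(lam, T):
    # Poisson number of events over a window of length T seconds
    n_e = rng.poisson(lam * T)
    events = []
    for _ in range(n_e):
        events.append({
            "lon":   rng.uniform(-180.0, 180.0),   # placeholder for KDE+uniform mix
            "lat":   np.degrees(np.arcsin(rng.uniform(-1.0, 1.0))),  # uniform on sphere
            "depth": rng.uniform(0.0, 700.0),      # km; placeholder prior
            "time":  rng.uniform(0.0, T),          # homogeneous => uniform origin time
            "mb":    2.0 + rng.exponential(1.0),   # placeholder magnitude prior
        })
    return events
\end{verbatim}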
We assume that signals at different stations are
conditionally independent, given events, and introduce auxiliary
variables $\v{\theta}$ and $\v{w}$ governing signal generation at each
station, so that the forward model has the form
\begin{equation}
p(\v{S}|\v{E}) = \prod_{j=1}^{n_S} \left(\iint p(\v{s}_j |
\v{\theta}_j, \v{w}_j)p(\v{\theta}_j|\v{E}) p(\v{w}_j| \v{E})
d\v{w}_j d\v{\theta}_j \right).
\end{equation}
The parameters $\v{\theta}$ describe an
envelope shape for each arriving phase, and $\v{w}$ describes {\em
repeatable} modulation processes
that multiply these envelopes to generate observed waveforms (\Cref{fig:model_algebra}). In
particular, we decompose these into independent (conditioned on event
locations) components $\v{\theta}_{i,j,k}$ and $\v{w}_{i,j,k}$
describing the arrival of phase $k$ of event $i$ at station $j$.
The envelope of each phase is modeled by a linear onset
followed by a poly-exponential decay (Figure~\ref{fig:parametric_form}),
\begin{equation}g(t; \theta_{i,j,k}) = \left\{\begin{array}{ll}
0 & \text{ if } t \le \tau\\
\alpha (t-\tau) / \rho & \text{ if } \tau < t \le \tau+\rho\\
\alpha (t-\tau+1)^{-\gamma} e^{-\beta (t-\tau)} &\text{ otherwise}\\
\end{array}\right.\label{eqn:parametric_form}
\end{equation}
with parameters $\theta_{i,j,k} = (\tau, \rho, \alpha, \gamma,
\beta)_{i,j,k}$ consisting of an arrival time $\tau$, rise time $\rho$, amplitude
$\alpha$, and decay rates $\gamma$ and $\beta$ governing respectively
the envelope peak and its coda, or long-run decay. This decay form is inspired by previous work modeling seismic
coda \citep{mayeda2003stable}, while the linear
onset follows that used by
\citet{cua2005creating} for seismic early warning.
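This piecewise form translates directly into code; a minimal sketch evaluating $g(t; \v{\theta})$ on a time grid:
\begin{verbatim}
import numpy as np

def envelope(t, tau, rho, alpha, gamma, beta):
    # g(t; theta): zero before tau, linear onset of length rho,
    # then the decay alpha*(t-tau+1)^(-gamma)*exp(-beta*(t-tau))
    t = np.asarray(t, dtype=float)
    g = np.zeros_like(t)
    onset = (t > tau) & (t <= tau + rho)
    coda = t > tau + rho
    g[onset] = alpha * (t[onset] - tau) / rho
    g[coda] = alpha * (t[coda] - tau + 1.0) ** (-gamma) \
              * np.exp(-beta * (t[coda] - tau))
    return g
\end{verbatim}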
To produce a signal, the envelope is multiplied by a {\em modulation
process} $m$ (Figure~\ref{fig:modulated_signal}), parameterized
by wavelet coefficients $\v{w}_{i,j,k}$ so that
\begin{equation}
m(t; \v{w}_{i,j,k}) = \left\{\begin{array}{ll} (\mathbf{D}\v{w}_{i,j,k})(t) & \text{if } 0 \le t <
20s\\\v{\varepsilon}(t) &
\text{otherwise}\end{array}\right.
\label{eqn:modulation}
\end{equation}
where $\mathbf{D}$ is a discrete wavelet transform matrix, and
$\v{\varepsilon}(t)\sim\mathcal{N}(0, 1)$ is a Gaussian white noise process. We
explicitly represent coefficients describing the first 20
seconds\footnote{This cutoff was chosen to
capture repeatability of the initial arrival period, which is
typically the most clearly observed, while still fitting historical
models in memory.} of each
arrival, and model the modulation as random after that point. We use an order-4 Daubechies wavelet
basis \citep{daubechies1992ten}, so that for 10Hz signals each
$\v{w}_{i,j,k}$ is a vector of 220 coefficients. As described below, we
model $\v{w}$ jointly across events using a Gaussian process, so that
our modulation processes are {\em repeatable}: events in nearby locations
will generate correlated signals (\Cref{fig:wavelet_gps}).
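A sketch of this construction using the PyWavelets package is shown below. The number of coefficients produced this way depends on \texttt{pywt}'s boundary handling, so it approximates but need not equal the 220 coefficients of our implementation.
\begin{verbatim}
import numpy as np
import pywt

rng = np.random.default_rng(0)
SR, T_MOD = 10, 20   # 10 Hz sampling rate; 20 s of explicit coefficients

# Transform a zero signal once, only to obtain the coefficient shapes.
SHAPES = [c.shape for c in pywt.wavedec(np.zeros(SR * T_MOD), "db4")]

def modulation(total_len, rng):
    # m(t; w): inverse db4 transform of Gaussian coefficients on [0, 20s),
    # i.i.d. N(0,1) white noise afterwards
    coeffs = [rng.standard_normal(s) for s in SHAPES]
    head = pywt.waverec(coeffs, "db4")[: SR * T_MOD]
    tail = rng.standard_normal(max(total_len - SR * T_MOD, 0))
    return np.concatenate([head, tail])[:total_len]
\end{verbatim}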
Summing the signals from all arriving phases yields the {\em
predicted signal} $\v{\bar{s}}_j$,
\begin{equation}
\bar{s}_j(t) = \sum_{i, k} g(t; \v{\theta}_{i,j,k})\cdot
m(t-\tau_{i,j,k}; \v{w}_{i,j,k});
\label{eqn:predicted_signal}
\end{equation}
and we generate the observed signal $\v{s_j}$ (\Cref{fig:overlapping_phases})
by adding an order-$R$ autoregressive noise process,
\begin{align}
p(\v{s}_j | \v{\theta}_j, \v{w}_j) &= p_{AR}(\v{s}_j - \v{\bar{s}}_j - \mu_j;
\sigma^2_j, \v{\phi}_j)\label{eqn:final_signal}\\
p_{AR}(\v{z}; \sigma^2, \v{\phi}) &= \prod_{t=1}^T \mathcal{N}\left(z(t);
\sum_{r=1}^R \v{\phi}_r z(t-r), \sigma^2\right).\nonumber
\end{align}
We adapt the noise process online during inference, following station-specific priors on the mean $p(\mu_j)$, variance
$p(\sigma^2_j)$, and autoregressive coefficients $p(\v{\phi}_j)$.
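Combining the pieces, a simplified forward simulation might read as follows; it reuses \texttt{envelope} and \texttt{modulation} from the sketches above and, for brevity, does not shift the modulation to the arrival time as the model's $m(t-\tau_{i,j,k}; \v{w}_{i,j,k})$ prescribes.
\begin{verbatim}
import numpy as np

def ar_noise(T, phi, sigma, rng):
    # z(t) = sum_r phi[r-1] * z(t-r) + N(0, sigma^2)
    phi = np.asarray(phi)
    R = len(phi)
    z = np.zeros(T + R)
    for t in range(R, T + R):
        z[t] = z[t - R:t][::-1] @ phi + sigma * rng.standard_normal()
    return z[R:]

def observed_signal(T, arrivals, mu, sigma, phi, rng):
    # arrivals: list of (tau, rho, alpha, gamma, beta) tuples
    t = np.arange(T) / 10.0               # 10 Hz time axis
    s_bar = np.zeros(T)
    for theta in arrivals:
        s_bar += envelope(t, *theta) * modulation(T, rng)
    return s_bar + mu + ar_noise(T, phi, sigma, rng)
\end{verbatim}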
\subsection{Repeatable signal descriptions}
\begin{figure}
\centering
\begin{subfigure}[b]{0.235\textwidth}
\includegraphics[width=\textwidth]{wavelet_gps_same}
\caption{Co-located sources.}
\end{subfigure}
\begin{subfigure}[b]{0.235\textwidth}
\includegraphics[width=\textwidth]{wavelet_gps_different}
\caption{Adding a distant source.}
\end{subfigure}
\caption{Samples from a GP prior on db4 wavelets.}
\label{fig:wavelet_gps}
\end{figure}
For each station $j$ and phase $k$, we model the signal
descriptions $p(\v{\theta}_{j,k} | \v{E})$ and $p(\v{w}_{j,k} |
\v{E})$ jointly across events as Gaussian process
transformations of the event space $\v{E}$ \citep{rasmussen2006}, so that events in
nearby locations will tend to generate similar envelope shapes and
correlated modulation signals, allowing our system to detect and
locate repeated events even from a weak signal recorded at a single
station. Since the events are unobserved, this is effectively a
GP latent variable model \citep{lawrence2004gaussian}, with the twist that in our model the
outputs $\v{\theta}, \v{w}$ are themselves latent variables, observed
only indirectly through the signal model $p(\v{S} | \v{\theta}, \v{w})$.
We borrow models from geophysics to predict the arrival time and
amplitude of each phase: given the origin time, depth, and
event-station distance, the IASPEI-91 \citep{kennett1991traveltimes} travel-time
model predicts an arrival time $\bar{\tau}$, and the Brune
source model \citep{brune1970tectonic} predicts a source (log) amplitude
$\log \bar{\alpha}$ as a function of event magnitude. We then use
GPs to model deviations from these physics-based predictions; that is,
we take $\bar{\tau}$ and $\log \bar{\alpha}$ as the
mean functions for GPs modeling $\tau$ and $\log
\alpha$ respectively. The remaining shape parameters $\rho, \gamma,
\beta$ (in log space) and the 220 wavelet coefficients in $\v{w}_{i,j,k}$ are
each modeled by independent zero-mean GPs.
\begin{table}
\centering
\begin{tabular}{ll}
\hline
\textbf{Parameter} & \textbf{Features $\phi(\v{e})$} \\\hline
Arrival time ($\tau$) & n/a \\
Amplitude (log $\alpha$) & $\left(1,\Delta,\sin(\frac{\Delta}{15000}),
\cos(\frac{\Delta}{15000})\right)$ \\
Onset (log $\rho$) & (1, mb) \\
Peak decay (log $\gamma$) & (1, mb, $\Delta$) \\
Coda decay (log $\beta$) & (1, mb, $\Delta$) \\
Wavelet coefs ($w$) & n/a \\\hline
\end{tabular}
\caption{Feature representations in terms
of magnitude mb and event--station distance $\Delta$ (km).}
\label{tbl:gp_models}
\end{table}
All of our GPs share a common covariance form,
\[k(\v{e}, \v{e}') = \phi(\v{e})^T \v{B}\phi(\v{e}') + \sigma^2_f
k_\text{Mat\'ern}(d_\ell(\v{e}, \v{e}')) + \sigma^2_n \delta,\]
where the first term models an unknown function that is linear in some
feature representation $\phi$, chosen separately for each parameter
(\Cref{tbl:gp_models}). This represents general regularities such as
distance decay that we expect to hold even in regions with no
observed training data. For efficiency and to avoid degenerate
covariances we represent this component in weight space \citep[section 2.7]{rasmussen2006}; at test time we choose $\v{B}$
to be the posterior covariance given training
data. The second term models an unknown
differentiable function using a stationary Mat\'ern ($\nu=3/2$) kernel \citep[Chapter
4]{rasmussen2006}, with great-circle distance metric controlled by
lengthscale hyperparameters $\ell$; this allows our model to represent
detailed local seismic structure. We also include iid noise
$\sigma^2_n \delta$ to encode unmodeled variation
between events in the same location.
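For illustration, the covariance can be evaluated as in the following sketch, where \texttt{feat} stands in for the feature map $\phi$ and the Earth is idealized as a sphere of radius 6371 km.
\begin{verbatim}
import numpy as np

def great_circle_km(e1, e2):
    # haversine distance between (lon, lat) pairs, in km
    lon1, lat1 = np.radians(e1[0]), np.radians(e1[1])
    lon2, lat2 = np.radians(e2[0]), np.radians(e2[1])
    h = np.sin((lat2 - lat1) / 2) ** 2 \
        + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(h))

def kernel(e1, e2, feat, B, sigma_f2, ell, sigma_n2):
    lin = feat(e1) @ B @ feat(e2)                        # parametric (linear) term
    r = np.sqrt(3.0) * great_circle_km(e1, e2) / ell
    matern = sigma_f2 * (1.0 + r) * np.exp(-r)           # Matern nu = 3/2
    noise = sigma_n2 if np.array_equal(e1, e2) else 0.0  # iid inter-event noise
    return lin + matern + noise
\end{verbatim}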
To enable efficient training and test-time GP predictions,
we partition the training data into spatially local regions using
$k$-means clustering, and factor the nonparametric (Mat\'ern)
component into independent regional models. This allows predictions in
each region to be made efficiently using only a small number of nearby events, avoiding a na\"ive
$O(n^2)$ dependence on the entire training set. Each region is given
separate hyperparameters, allowing our models to adapt to spatially varying seismicity.
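The partitioning step itself can be sketched with any off-the-shelf clustering routine, e.g.\ $k$-means on (lon, lat) coordinates; treating the coordinates as planar is a simplification of this sketch.
\begin{verbatim}
import numpy as np
from sklearn.cluster import KMeans

def partition_events(lonlat, n_regions, seed=0):
    # lonlat: (n_events, 2) array of (lon, lat) training locations.
    # Planar k-means ignores spherical geometry and the dateline,
    # which a production partition should handle.
    km = KMeans(n_clusters=n_regions, n_init=10, random_state=seed)
    return km.fit_predict(lonlat)
\end{verbatim}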
\subsection{Collapsed signal model}
The explicit model described thus far exhibits tight coupling between
envelope parameters and modulation coefficients; a small shift in a
phase's arrival time may significantly change the wavelets needed to
explain an observed signal, causing inference moves that do not
account for this joint structure to fail. Fortunately, it is possible to exactly
marginalize out the coefficients $\v{w}$ so that they do not need to
be represented in inference. This follows from the linear Gaussian
structure of our signal model: GPs induce a Gaussian distribution on
wavelet coefficients, which are observed under linear projection (a
wavelet transform followed by envelope scaling) with autoregressive
background noise. Thus, the collapsed distribution
$p(\v{s}_j | \v{\theta}_j, \v{E}) = \int p(\v{s}_j | \v{\theta}_j, \v{w}_j)
p(\v{w}_j | \v{E}) d\v{w}_j$
is multivariate Gaussian, and in principle can be evaluated directly.
Doing this efficiently in practice requires exploiting graphical model
structure. Specifically, we formulate the signal model at each station
as a linear Gaussian state space model, with a state vector
that tracks the AR noise process as well as the set of
wavelet coefficients that actively contribute to the
signal. Due to the recursive structure of wavelet bases, the number
of such coefficients is only logarithmic at each timestep. We exploit
this structure, the same used
by fast wavelet transforms, to efficiently\footnote{Requiring time linear in the signal length:
$O(T (K \log C + R)^2 )$ for a signal of length $T$ with order-$R$
AR noise and at most $K$ simultaneous phase arrivals, each described
by $C$ wavelet coefficients.} compute coefficient posteriors and
marginal likelihoods by Kalman filtering
\citep{grewal2014kalman}.
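The sketch below illustrates the marginal-likelihood computation for a generic linear-Gaussian model with scalar observations and no separate measurement noise (the noise lives in the state, as in our collapsed model); the matrices \texttt{F}, \texttt{Q}, \texttt{H} are left abstract rather than instantiating the full wavelet-plus-AR state described above.
\begin{verbatim}
import numpy as np

def kalman_loglik(y, F, Q, H, m0, P0):
    # log marginal likelihood of scalar observations y under a
    # linear-Gaussian state-space model, by standard Kalman filtering
    m, P, ll = m0.copy(), P0.copy(), 0.0
    for obs in y:
        m = F @ m                          # predict
        P = F @ P @ F.T + Q
        S = float(H @ P @ H.T)             # innovation variance
        v = obs - float(H @ m)
        ll += -0.5 * (np.log(2 * np.pi * S) + v * v / S)
        K = (P @ H.T).ravel() / S          # Kalman gain
        m = m + K * v                      # update
        P = P - np.outer(K, (H @ P).ravel())
    return ll

# e.g. AR(2) noise: F = [[phi1, phi2], [1, 0]],
#                   Q = diag(sigma^2, 0), H = [[1, 0]]
\end{verbatim}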
We assume for efficiency that test-time signals from different
events are independent given the training data. During training, however, it is necessary to compute
the joint density of signals involving multiple events so that we can
find correct alignments. This requires us to
pass messages $f_j(\v{w}_j) =
p(\v{s}_j | \v{w}_j, \v{\theta}_j)$ from each signal upwards to the GP
prior \citep{koller2009probabilistic}. We compute a diagonal
approximation \[\tilde{f}_j(\v{w}_j) = \frac{1}{Z_j}\prod_{c=1}^{220} \mathcal{N}(\v{w}_{j,c};
\tilde{\nu}_{j,c}, \tilde{\xi}_{j,c})\] by dividing the Kalman
filtering posterior on wavelet coefficients by the (diagonal Gaussian)
prior. The product of these messages with the GP priors $p(\v{w}_c | \v{E})
\sim \mathcal{N}(\v{\mu}_c(\v{E}), \v{K}_c(\v{E}) )$, integrated over coefficients $\v{w}$, gives an approximate
joint density
\begin{equation}
p(\v{S} | \v{\theta}, \v{E}) \approx \left(\prod_{j=1}^N
\frac{1}{Z_j} \right) \prod_c \mathcal{N}\left(\bar{\nu}_c; \v{\mu}_c(\v{E}), \v{K}_c(\v{E}) + \v{\tilde{\xi}}_{c} \right)\label{eqn:efficient_joint_density},
\end{equation}
in which the wavelet GPs are evaluated at the values of (and
with added variance given by) the approximate messages from observed signals. We target this
approximate joint density during training, and condition our test-time
models on the upwards messages generated by training signals.
\section{Training}
We train using a bulletin of historical event locations which
we take as ground truth (relaxing this assumption to incorporate noisy training
locations, or even fully unsupervised training, is important
future work). Given observed events, we use
training signals to estimate the GP models---hyperparameters as well as the
upwards messages from training signals---along with priors
$p(\mu_j)$, $p(\sigma^2_j)$, and $p(\v{\phi}_j)$ on background
noise processes at each station. We use the EM algorithm
\citep{dempster1977maximum} to search for maximum likelihood parameters integrating over
the unobserved noise processes and envelope shapes.
The E step runs MCMC inference (\Cref{sec:inference}) to sample from
the posterior over the latent variables
$\v{\theta}, \v{\mu}, \v{\sigma}^2, \v{\phi}$ under the collapsed
objective described above. We approximate the sampled
posterior on $\v{\theta}$ by univariate Gaussians to compute
approximate upwards messages. These are used in the M step to fit GP hyperparameters via
gradient-based optimization of the marginal likelihood
(\ref{eqn:efficient_joint_density}). The priors on background noise means
$p(\mu_j)$ and coefficients $p(\v{\phi}_j)$ are fit as Gaussian and
multivariate Gaussian respectively. For the noise variance
$\sigma^2_j$ at each station, we fit log-normal, inverse Gamma, and
truncated Gaussian priors and select the most likely; this adapts
for different noise distributions between stations.
\section{Inference}
\label{sec:inference}
We perform inference using reversible jump MCMC \citep{hastie2012model} applied
to the collapsed model. Our algorithm consists of a cyclic sweep of
single-site, random-walk Metropolis-Hastings moves over all currently
instantiated envelope parameters $\v{\theta}$, autoregressive noise
parameters $\mu, \sigma^2, \phi$ at each station, and event
descriptions $(\v{e}_i)_{i=1}^{n_E}$ including surface location,
depth, time, and magnitude. We also include custom
moves that propose swapping the associations of consecutive arrivals,
aligning observed signals with GP predicted signals,
and shifting envelope peak times to match those of the observed signal.
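Each single-site update is an instance of the textbook random-walk step; a minimal sketch, where \texttt{log\_post} is assumed to evaluate the collapsed log density as a function of the one variable being resampled:
\begin{verbatim}
import numpy as np

def mh_update(x, log_post, scale, rng):
    # one random-walk Metropolis-Hastings step for a scalar variable
    x_prop = x + scale * rng.standard_normal()
    if np.log(rng.uniform()) < log_post(x_prop) - log_post(x):
        return x_prop, True
    return x, False
\end{verbatim}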
To improve event mixing, we augment our model to include {\em unassociated
arrivals}: phase arrivals not generated by any particular event,
with envelope parameters and modulation from a fixed Gaussian
prior. Unassociated arrivals are useful in allowing events to be built
and destroyed piecewise, so that we are not required to perfectly
propose an event and all of its phases in a single shot. They can also be
viewed as small events whose locations have been integrated out. Our birth
proposal generates unassociated arrivals with probability proportional
to the signal envelope, so that periods of high signal energy are
quickly explained by unassociated arrivals which may then be
associated into larger events.\footnote{In this sense the unassociated
arrivals play a role similar to detections produced by traditional
station processing. However, they are not generated by bottom-up
preprocessing, but as part of a dynamic inference procedure that may
create and destroy them using information from observed signals as well as top-down
event hypotheses.}
Event birth moves are constructed using two complementary proposals. The first is based on a
Hough transform of unassociated arrivals; it grids the 5D event space
(longitude, latitude, depth, time, magnitude) and scores each bin
using the log likelihood of arrivals greedily associated with an event
in that bin. The second proposal is a mixture of Gaussians centered at
the training events, with weights determined by waveform
correlations against test signals. This allows us to recover weak events
that correlate with training signals, while the Hough proposal can construct events in regions with no previous seismicity.
In proposing a new event we must also propose envelope parameters for
all of its phases. Each phase may be matched to a currently unassociated arrival; where
there are no plausible arrivals to associate, we parent-sample envelope
parameters given the proposed event, then run auxiliary Metropolis-Hastings steps
\citep{storvik2011} to adapt the envelopes to observed signals. The
proposed event, associations, and envelope parameters are jointly accepted or
rejected by a final MH step. Event death moves similarly involve
jointly proposing an event to kill along with a set of phase arrivals
to delete (with the remainder preserved as unassociated).
By chaining birth and death moves we also construct
mode-jumping moves that repropose an existing event, along with split
and merge moves that replace two existing events with a single one, and
vice versa. \citet{moore2016signal} describes our inference moves in
more detail.
\section{Evaluation}
\vspace{-0.5em}
\begin{figure}
\centering
\includegraphics[width=.45\textwidth]{train_stations2}
\caption{Training events (blue dots) from the western US dataset, with region of interest outlined. Triangles indicate IMS stations.}
\label{fig:iscevents}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{test_prec_recall_new}
\caption{Precision-recall performance over the two-week test
period, relative to the reference bulletin.}
\label{fig:test_prec_recall}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{isc_detections_by_mb}
\caption{Event recall by magnitude range. The SIGVISA (top) bulletin is defined to match the
precision of SEL3 (51\%).}
\label{fig:isc_evs_by_mb}
\end{figure}
\begin{figure*}
\centering
\begin{subfigure}[b]{0.30\textwidth}
\stackinset{r}{}{b}{}{\includegraphics[width=0.75in]{visa_map_wells}}
{\includegraphics[width=\textwidth]{visa_map}}
\caption{NETVISA (139 events)}
\label{fig:visa_map}
\end{subfigure}
\begin{subfigure}[b]{0.30\textwidth}
\stackinset{r}{}{b}{}{\includegraphics[width=0.75in]{sigvisa_top_map_wells}}
{\includegraphics[width=\textwidth]{sigvisa_top_map}}
\caption{SIGVISA top events (393 events)}
\label{fig:sigvisa_map}
\end{subfigure}
\begin{subfigure}[b]{0.30\textwidth}
\includegraphics[width=\textwidth]{location_err_violin_test}
\caption{Distribution of location errors.}
\label{fig:test_location_err}
\end{subfigure}
\caption{Inferred events (green), with inset close-up of Wells
aftershocks. Reference events are in blue.}
\label{fig:inferred_map}
\end{figure*}
We consider the task of monitoring seismic events in the western
United States, which contains both significant natural seismicity and
regular mining explosions. We focus in particular on the time period
immediately following the magnitude 6.0 earthquake near Wells, NV, on
February 21, 2008, which generated a large number of aftershocks. We
train on one year of historical data (\Cref{fig:iscevents}), from January to December 2008; to enable SIGVISA to recognize aftershocks using waveform
correlation, we also train on the first six hours following the Wells mainshock. The test period is two weeks long, beginning twelve hours
after the Wells mainshock; the six hours immediately preceding were used as a validation set.
We compare SIGVISA's performance to that of existing systems that also
process data from the International Monitoring System. SEL3 is the
final-stage automated bulletin from the CTBTO's existing system (GA); it is reviewed by a team of human analysts to
produce the Late Event Bulletin (LEB) reported to the member
states. We also compare to the automated NETVISA bulletin \citep{arora2013net}, which
implements detection-based Bayesian monitoring.
We construct a reference bulletin by combining events from regional networks aggregated by the
National Earthquake Information Center (NEIC),
and an analysis of aftershocks from the Wells earthquake based on data
from the transportable US Array and
temporary instruments deployed by the University of Nevada, Reno (UNR)
\citep{smith2011preliminary}. Because the reference bulletin has access to many
sensors not included in the IMS, it is a plausible source of ``ground
truth'' to evaluate the IMS-based systems.
We trained two sets of SIGVISA models, on
broadband (0.8-4.5Hz) signals as well as a higher-frequency band
(2.0-4.5Hz) intended to provide clearer evidence of regional
events, using events observed from the reference bulletin.
To produce a test bulletin, we ran three MCMC chains on
broadband signals and two on high-frequency signals, and merged the
results using a greedy heuristic that iteratively selects the
highest-scoring event from any chain excluding duplicates. Each
individual chain was parallelized by dividing the test period into 168
two-hour blocks and running independent inference on each block of
signals. Overall SIGVISA inference used 840 cores for 48 hours.
We evaluate each system by computing a minimum weight maximum
cardinality matching between the inferred and reference bulletins, in a bipartite
graph with edges weighted by distance and restricted to
events separated by at most $2\deg$ in distance and 50s in
time. Using this matching, we report precision (the percentage of inferred
events that are real), recall (the percentage of real events detected
by each system), and mean location error of matched events. For
NETVISA and SIGVISA, which attach a confidence score to each event, we
report a precision-recall curve parameterized by the confidence
threshold (\Cref{fig:test_prec_recall}).
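The matching itself reduces to a rectangular assignment problem by penalizing forbidden pairs, as in the following sketch; \texttt{dist\_deg} is a placeholder for great-circle distance in degrees.
\begin{verbatim}
import numpy as np
from scipy.optimize import linear_sum_assignment

BIG = 1e9   # penalty: any feasible total cost is far below one BIG entry

def match_bulletins(inferred, reference, dist_deg,
                    max_deg=2.0, max_dt=50.0):
    # min-weight max-cardinality matching via the penalty reduction
    C = np.full((len(inferred), len(reference)), BIG)
    for i, a in enumerate(inferred):
        for j, b in enumerate(reference):
            d = dist_deg(a, b)
            if d <= max_deg and abs(a["time"] - b["time"]) <= max_dt:
                C[i, j] = d
    rows, cols = linear_sum_assignment(C)
    return [(i, j) for i, j in zip(rows, cols) if C[i, j] < BIG]
\end{verbatim}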
Our results show that the merged SIGVISA
bulletin dominates both NETVISA and SEL3. When operating at the same
precision as SEL3 (51\%), SIGVISA achieves recall three times higher
than SEL3 (19.3\% vs 6.4\%), also eclipsing the 7.3\% recall achieved by NETVISA at a slightly higher
precision (54.7\%). The human-reviewed LEB achieves near-perfect precision
but only 9\% recall, confirming that many events recovered by SIGVISA
are not obvious to human analysts. The most sensitive SIGVISA bulletin
recovers a full 33\% of the reference events, at the cost of
many more false events (14\% precision).
Signal-based modeling particularly improves recall for low-magnitude
events (\Cref{fig:isc_evs_by_mb}) for which
bottom-up processing may not register detections. We also observe
improved locations (\Cref{fig:inferred_map})
for clusters such as the Wells aftershock sequence where observed
waveforms can be matched against training data.
\begin{figure}[tb]
\centering
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{pd31_missed}
\caption{Likely mining explosion at Black Thunder Mine. Location 105.21$\deg$ W, 43.75$\deg$ N, depth 1.9km, origin time 17:15:58 UTC, 2008-02-27, mb 2.6, recorded at PDAR (PD31).}
\end{subfigure}
\begin{subfigure}[b]{0.45\textwidth}
\includegraphics[width=\textwidth]{nv01_missed}
\caption{Event near Cloverdale, CA along the Rodgers Creek
fault. Location 122.79$\deg$ W, 38.80$\deg$ N, depth 1.6km,
origin time 05:20:56 UTC, 2008-02-29, mb 2.6, recorded at NVAR
(NV01).}
\end{subfigure}
\caption{Waveform evidence for two events detected by SIGVISA
but not the reference bulletin. Green indicates the model
predicted signal (shaded $\pm 2\sigma$); black is the observed
signal (filtered 0.8-4.5Hz).}
\label{fig:sigvisa_genuine_evs}
\end{figure}
Interestingly, because we do not have access to absolute ground
truth, some events labeled as false in our evaluation may actually be genuine events missed
by the reference bulletin. \Cref{fig:sigvisa_genuine_evs} shows two
candidates for such events, with strong correspondence between the model-predicted and observed
waveforms. The existence of such events provides reason to believe
that SIGVISA's true performance on this dataset is modestly higher than our evaluation
suggests.
\subsection{{\em de novo} events}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{denovo_recall}
\caption{Recall for 24 {\em de novo} events between January and March
2008. }
\label{fig:denovo_results}
\end{figure}
For nuclear monitoring it is particularly important
to detect {\em de novo} events: those with no nearby historical
seismicity. We define ``nearby'' as within 50km. Our two-week test period includes only three
such events, so we broaden the scope to the
three-month period from January 1 through March 31, 2008, which
includes 24 de novo events. We evaluated each system's recall
specifically on this set: of the 24 {\em de novo} reference events, how many were detected?
As shown in \Cref{fig:denovo_results}, SIGVISA's
performance matches or exceeds the other systems. Operating at the
same precision as SEL3, it detects the same number (6/24) of de novo
events. This suggests that SIGVISA's
improved performance on repeated events---including almost
all of the natural seismicity in the western US during our two-week
test period---does not come at a cost for de novo events. To the contrary,
the full high-sensitivity SIGVISA bulletin includes six genuine events
missed by all other IMS-based systems.
\vspace{-1em}
\section{Discussion}
\vspace{-1em}
Our results demonstrate the promise of
Bayesian inference on raw waveforms for monitoring seismic
events. Applying MCMC inference to a generative probability model of repeatable
seismic signals, we recover up to three times as many events as a
detection-based baseline (SEL3) while operating at the same precision,
and reduce mean location errors by a factor of four while greatly increasing
sensitivity to low-magnitude events. Our system maintains effective
performance even for events in regions with no historical seismicity
in the training set.
A major advantage of the generative formulation is that the explicit
model is interpretable by domain experts. We continue to engage with
seismologists on potential model improvements, including tomographic
travel-time models, directional information from seismic arrays
and horizontal ground motion, and explicit modeling of earthquake
versus explosion sources. Additional directions include
more precise investigations of our model's ability to quantify
uncertainty and to estimate its own detection limits as a function of
network coverage and historical seismicity. We also expect to continue scaling to global seismic data,
exploiting parallelism and refining our inference moves and
implementation. More generally, we hope that successful application of complex
Bayesian models will inspire advances in probabilistic
programming systems to make generative modeling accessible to a wider
scientific audience.
\vspace{-0.5em}
\subsubsection*{Acknowledgements}
\vspace{-0.5em}
We are very grateful to Steve Myers (LLNL) and Kevin Mayeda
(Berkeley/AFTAC) for sharing their expertise on seismic modeling and
monitoring and advising on the experimental setup. This work is
supported by the Defense Threat Reduction Agency (DTRA) under grant
\#HDTRA-1111-0026, and experiments by an Azure for Research grant from Microsoft.
\newpage
\bibliographystyle{apa}
\section{Preliminaries and Reductions}
In this chapter, we introduce some first reductions as well as the notation, which will be similar to that in \cite{MAL18}. Throughout, let $\ell \in \{2,3\}$ and $q = p^f$ for a prime $p \neq \ell$ and some $f \geq 1$. As customary, we write $\operatorname{GL}_n(-q)$ for the general unitary group $\ensuremath{\operatorname{GU}_n(q)}$ (and similarly for the special linear group). For $\varepsilon \in \{\pm 1\}$, let $d$ be the order of $\varepsilon q$ modulo $\ell$ (so for $\ell = 2$, we simply have $d = 1$) and $\ell^a$ be the exact power of $\ell$ dividing $(\varepsilon q)^d -1$.
\begin{Remark}
Let $B$ be a $2$-block of $G = \ensuremath{\operatorname{GL}_n(\varepsilon q)}$, where $\varepsilon \in \{\pm 1\}$. The block theoretic invariants of $B$ are the same as for the principal block $B'$ of $C_G(s)$, where $s$ is the semisimple $2'$-element of $G$ corresponding to $B$ by \cite[Prop. 3.4]{BRO86} (see \cite[Cor. 6.4]{GEC91} for the Brauer characters). By \cite[Chapter 1]{FON82}, this is in turn a product of general linear and unitary groups. Since all occurring invariants are multiplicative, it suffices to consider the principal block of $\ensuremath{\operatorname{GL}_n(\varepsilon q)}$ for the purpose of proving our main theorem. For $ \ell = 3$ and $B$ a unipotent block of $FG$, there exists a weight $w > 0$ such that the block theoretic invariants of $B$ are the same as those of the principal block $B'$ of $\operatorname{GL}_{wd}(\varepsilon'q)$ for some $\varepsilon' \in \{\pm 1\}$ (cf. \cite[Thm 1.9]{MIC83}).
\end{Remark}
In the following, we therefore may assume that $B$ is the principal $\ell$-block of $G = \ensuremath{\operatorname{GL}_{wd}(\varepsilon q)}$, where $q$ is not divisible by $\ell$, $\varepsilon \in \{\pm 1\}$ and $w \geq 1$.
\bigskip
Let $s \in \mathbb{Z}_{> 0}$ and $t \in \mathbb{Z}_{\geq 0}$. By $\pi(t)$, we denote the number of \emph{partitions} of $t$ and write $|\lambda| = t$ if $\lambda$ is a partition of $t$. By $k(s,t)$, we denote the number of $s$-\emph{multipartitions} of $t$, that is, the number of tuples $(\mu_1, \ldots, \mu_s)$ of partitions $\mu_1, \ldots, \mu_s$ such that $|\mu_1| + \ldots + |\mu_s| = t$. Furthermore, we define an $\ell$-\emph{decomposition} of $t$ to be a tuple $(t_0, \ldots, t_k)$ of nonnegative integers $t_0, \ldots, t_k$ such that $\sum_{i = 0}^k t_i \ell^i = t$ and $t_k \neq 0$. The set of $\ell$-decompositions of $t$ will be denoted by $W_t$ and its cardinality by $p_\ell(t)$. Finally, an ordered tuple $(t_1, \ldots, t_s)$ of nonnegative integers with $t_1 + \ldots + t_s = t$ is called an $s$-\emph{split} of $t$ (write $(t_1, \ldots, t_s) \vDash t$).
\bigskip
For any natural number $n$ and a prime number $r$, denote by $n_r$ the largest power of $r$ dividing $n$. Let $w = \sum_{i = 0}^v a_i \ell^i$ be the $\ell$-adic decomposition of $w$. We recall the values of some invariants:
\begin{lemma}\label{lem:kBformula}
Let $2^{\tilde{a}} = (q+ \varepsilon)_2$. For the principal $2$-block $B$ of $\ensuremath{\operatorname{GL}_{w}(\varepsilon q)}$, it holds that
\begin{equation}\label{eq:formulakB2}
k(B) = \sum_{ \textbf{w} \in W_w} k(2^a, w_0) \, k(2^{a+\tilde{a}-1}-2^{a-1},w_1) \cdot \prod_{i \geq 2} k(2^{a+\tilde{a}-2}, w_i).
\end{equation}
Let $b = d + \frac{3^a - 1}{d}$ and $b_1 = 2 \cdot \frac{3^{a-1}}{d}$. For the principal $3$-block $B$ of $\ensuremath{\operatorname{GL}_{wd}(q)}$, it holds that
\begin{equation}\label{eq:formulakB3}
k(B) = \sum_{\textbf{w} \in W_w} k(b, w_0) \cdot \prod_{i \geq 1} k(b_1, w_i) =: k(3,a,d,w).
\end{equation}
\end{lemma}
\begin{proof}
Cf. \cite[Prop. 2.39 and Lemma 2.44]{GRU18} for $\ell =2$ and \cite[Prop. 6]{OLS84} for $\ell = 3$.
\end{proof}
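These formulas are readily evaluated by machine. The following Python sketch, offered as an alternative to the GAP computations invoked later in the text, implements \eqref{eq:formulakB3} and reproduces, for $a = d = 1$, the values $k^3(B) = 24$, $k^6(B) = 270$ and $k^9(B) = 2043$ used below.
\begin{verbatim}
from functools import lru_cache

@lru_cache(maxsize=None)
def npart(n, k):
    # number of partitions of n into parts of size at most k
    if n == 0:
        return 1
    if k == 0:
        return 0
    return npart(n, k - 1) + (npart(n - k, k) if n >= k else 0)

def pi(n):                       # the partition function
    return npart(n, n)

@lru_cache(maxsize=None)
def kmp(s, t):
    # number of s-multipartitions of t
    if s == 1:
        return pi(t)
    return sum(pi(j) * kmp(s - 1, t - j) for j in range(t + 1))

def decomps(w, ell=3):
    # all ell-decompositions (t_0, ..., t_k) of w
    if w == 0:
        return [()]
    out = []
    for t0 in range(w % ell, w + 1, ell):
        out += [(t0,) + rest for rest in decomps((w - t0) // ell, ell)]
    return out

def kB3(a, d, w):
    # k(3, a, d, w), the second formula of the lemma
    b, b1 = d + (3 ** a - 1) // d, 2 * 3 ** (a - 1) // d
    total = 0
    for t in decomps(w):
        val = kmp(b, t[0])
        for ti in t[1:]:
            val *= kmp(b1, ti)
        total += val
    return total

# kB3(1, 1, 3) == 24, kB3(1, 1, 6) == 270, kB3(1, 1, 9) == 2043
\end{verbatim}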
Observe that the formulas for $\ell= 2$, $\varepsilon q \equiv 1 \mod 4$ and $\ell = 3$ are similar, so we treat these cases in parallel. Moreover, we write $k^w(B)$ whenever we want to indicate which value of $w$ is under consideration.
\begin{lemma}
The number of characters of height zero in the principal $\ell$-block $B$ is given by
$k_0(B) = 2^{\sum_{i = 0}^v a_i(i+1)}$ for $\ell = 2$ and by $k_0(B) = \prod_{i \geq 0} k(b \cdot 3^i, a_i)$ for $\ell = 3$.
\end{lemma}
\begin{proof}
Cf. \cite[Thm.~2.60]{GRU18} for $\ell = 2$ and \cite[Prop. 2.13]{MIC83} for $\ell = 3$.
\end{proof}
In the following, denote by $\operatorname{SD}_{2^{\tilde{a}+2}} = \langle x,y \mid x^2 = y^{2^{\tilde{a}+1}} = 1, \, xyx = y^{2^{\tilde{a}}-1} \rangle$ the semidihedral group of order $2^{\tilde{a}+2}.$ The defect groups $D$ of the principal $\ell$-block are Sylow $\ell$-subgroups of $G$ whose structure can be described as follows:
\begin{lemma}
\begin{enumerate}
\item Let $\ell = 2$ and $\varepsilon q \equiv 1 \mod 4$, or $\ell = 3$. Then $D \cong \prod_{i = 0}^v D_{i,\ell^a}^{a_i}$, where $D_{i,\ell^a} = C_{\ell^a} \wr C_\ell \wr \ldots \wr C_\ell$ is the iterated wreath product of the cyclic group $C_{\ell^a}$ with $i$ factors of the cyclic group $C_\ell$.
\item If $\ell = 2$ and $\varepsilon q \equiv 3 \mod 4$, then $D \cong \prod_{i = 0}^{v} P_{2^i}^{a_i}$, where $P_1 = C_2$ and for $i \geq 1$, we have $P_{2^i} = \operatorname{SD}_{2^{\tilde{a}+2}}\, \wr \, C_2 \wr \ldots \wr C_2$ with $i-1$ factors of the cyclic group $C_2$.
\end{enumerate}
\end{lemma}
\begin{proof}
See \cite[p.18]{GRU18} for $\ell = 2$ and \cite[Prop. 5.11]{MAL18} for $\ell = 3$.
\end{proof}
\section{General Linear and Unitary Groups}
In this chapter, we prove the inequalities (C1) and (C2) for the general linear and unitary groups $\ensuremath{\operatorname{GL}_n(\varepsilon q)}$, using the notation from the previous chapter. We first assume that $\ell = 3$, or that $\ell = 2$ and $\varepsilon q \equiv 1 \mod 4$; in the latter case this means $a \geq 2$ and $\tilde{a} = 1$. We begin by deriving bounds for the occurring numbers of multipartitions.
\begin{lemma}\label{lem:multipartitionsbasics}
\begin{enumerate}
\item For all $s \geq 3$ and $t \geq 1$, it holds that $k(s,t) \leq s^t$.
\item For all $s\geq 3$ and $t_1, t_2 \geq 1$, it holds that $k(s, t_1 + t_2) \leq k(s,t_1) \cdot k(s,t_2).$ Moreover, for $t \geq 2$, it holds that $k(2,t+1) \leq 2 \cdot k(2,t).$
\end{enumerate}
\end{lemma}
\begin{proof}
Cf. \cite[Lemma 5.5]{MAL18} and \cite[Lemma 2.48]{GRU18} for the second part of (ii).
\end{proof}
\begin{lemma}\label{lem:k3aw2}
For $s \geq 1$, it holds that $k(s,1) = s$, $k(s,2) = \frac{1}{2}s^2 + \frac{3}{2} s$ and $k(s,3) = \frac{1}{6}s^3 + \frac{3}{2}s^2 + \frac{4}{3} s.$
\end{lemma}
\begin{proof}
By \cite[Lemma 1]{OLS84} it holds for all $t \geq 0$ that $k(s,t) = \sum_{(k_1, \ldots, k_s) \Vdash t} \pi(k_1) \cdots \pi(k_s),$ so counting the different $s$-splits of $t \in \{1,2,3\}$ and using $\pi(t) = t$ in this case yields the claim.
\end{proof}
\begin{lemma}\label{lem:multipartitions}\label{lem:k2tschwach}
Let $w \geq 0$.
\begin{enumerate}
\item It holds that $k(2,w) \leq 2^{w + 0.35}.$
\item For $a \geq 3$, it holds that
$k(2^a, w) \leq 2^{\left(a-\frac{4}{3}\right)w + 3}.$ For $a\geq 5$, we have $k(2^a, w) \leq 2^{\left(a-\frac{4}{3}\right)w + 2}.$
\item For $a \geq 2$, it holds that
$k(b, w) \leq 3^{\left(a-\frac{5}{6}\right)w + 2- \log_3(d)}$, where $b$ is defined as in Lemma \ref{lem:kBformula}. For $a \geq 3$ and $w \geq 9$, one can omit the summand 2 in the exponent.
\end{enumerate}
\end{lemma}
\begin{proof}
Using Lemma \ref{lem:multipartitionsbasics}, the first claim follows from $k(2,2) = 5 \leq 2^{2.35}$ by induction. Now consider the second inequality. For $c \geq 3$ and $x \geq 1$, it holds that
\begin{equation}\label{eq:k(cx,w)}
k(cx, w) = \sum_{(i_1, \ldots, i_x) \vDash w} k(c, i_1) \cdots k(c,i_x) \leq \sum_{(i_1, \ldots, i_x) \vDash w} c^w = \binom{x + w-1}{w} c^w
\end{equation}
(cf. \cite[Lemma 5.6]{MAL18}). We apply this estimate with $x = 4$. To this end, we claim that for all $w \geq 0$, it holds that
$$\binom{w+3}{w} \leq 2^{\frac{2}{3}w + 3}.$$
For $w \leq 5$, this can be checked directly. For $w \geq 5$, we obtain by induction
$$\binom{(w+1)+3}{w+1} = \frac{w+4}{w+1} \cdot \binom{w+3}{w} \leq 2^{\frac{2}{3}(w+1) + 3}$$
since $\frac{w+4}{w+1} = 1 + \frac{3}{w + 1} \leq 2^{2/3}$ for $w \geq 5$. Equation \eqref{eq:k(cx,w)} then yields for $a \geq 4$
$$k(2^a, w) \leq \binom{w+3}{w} \cdot 2^{(a-2)w} \leq 2^{(a-2)w + \frac{2}{3}w + 3} = 2^{(a-\frac{4}{3}) w + 3}.$$
For $a = 3$, we check the claim directly for $w \leq 9$ using GAP \cite{GAP4}. For $w \geq 10$, we can use the above proof to show that even $\binom{w+3}{w} \leq 2^{2/3 w + 1.6}$, so with the first part of the lemma we obtain
$$k(8,w) = \sum_{(i_1,\ldots, i_4) \vDash w} k(2, i_1) \cdots k(2, i_4) \leq \sum_{(i_1,\ldots, i_4) \vDash w} 2^{i_1 + 0.35 + \ldots + i_4 + 0.35} = \binom{w+3}{w} \cdot 2^{w+ 1.4} \leq 2^{\frac{5}{3}w + 3}.$$
For the stronger bound for $k(2^a,w)$, we use $x = 8$ instead of $x = 4$. The last part of the lemma can be proven in the same fashion by using $x = 3$ and $b = 3^a$ for $d = 1$ and $b = (3^a + 3)/2 \leq 2 \cdot 3^{a-1}$ for $ d = 2$.
\end{proof}
\begin{lemma}\label{lem:bound3,4}
For $w \geq 1$, it holds that $k(3,w) \leq 3^{\frac{w}{2} + \frac{9}{4}}$ and $k(4,w) \leq 2^{1.2 w + 2} $.
\end{lemma}
\begin{proof}
Using $\pi(n) \leq \frac{e^{c \sqrt{n}}}{n^{3/4}}$ with $c = \pi \sqrt{2/3}$ (cf. \cite[p.114]{AZE09}), we obtain
$\pi(n) \leq 1.4^{n + 1.2}$ for $n \geq 38$. We can check directly that this bound in fact holds for all $n \geq 1$. With this, we have
$$k(3,w) = \sum_{(i_1, i_2, i_3) \vDash w} \pi(i_1) \pi(i_2) \pi(i_3) \leq \binom{w+2}{w} \cdot 1.4^{w + 3.6}.$$
The last term can be bounded by $3^{\frac{w}{2} + \frac{9}{4}}$ for $w \geq 20$. The remaining cases are checked in GAP.
\\
By Lemma \ref{lem:k2tschwach}, we have
$$k(4,w) = \sum_{i_1 + i_2 = w} k(2,i_1) k(2,i_2) \leq \sum_{i_1 + i_2 = w} 2^{i_1 + 0.35} \cdot 2^{i_2 + 0.35} = (w+1) \cdot 2^{w+0.7}.$$
With $w+1 \leq 2^{0.2w +1.3}$ for $w \geq 13$, we obtain the desired bound. The remaining cases can be checked in GAP.
\end{proof}
\begin{lemma}\label{lem:plw}
\begin{enumerate}
\item It holds that $p_3(w) \leq 3^{w/6}$ for all $w \neq 3$.
\item For $w \geq 0$, it holds that $p_2(w) \leq 2^{\frac{w}{3}+1}.$
\end{enumerate}
\end{lemma}
\begin{proof}
By \cite[Lemma 5.2]{MAL17}, it holds that $p_\ell(w) \leq \frac{w}{\ell} \cdot p_\ell(\lfloor w/\ell \rfloor)$ for $\ell \geq 2$ and $w \geq 1$. Checking the claim directly for $w \leq 12$, we can use $w/3 \leq 3^{w/9}$ for $w \geq 8$ to show by induction that
$p_\ell(w) \leq \frac{w}{\ell} \cdot p_\ell(\lfloor w/\ell \rfloor) \leq 3^{w/9} \cdot 3^{w/18} = 3^{w/6}.$
The second statement can be proved in the same fashion.
\end{proof}
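Note that $p_\ell(w)$ equals the number of partitions of $w$ into powers of $\ell$ and hence satisfies the classical recurrence for $\ell$-ary partitions; the following sketch computes these numbers and allows a quick machine check of the stated bounds.
\begin{verbatim}
def p_ell(ell, n):
    # p_ell(0), ..., p_ell(n) via the classical ell-ary partition
    # recurrence: a(m) = a(m-1), plus a(m/ell) when ell divides m
    a = [1] * (n + 1)
    for m in range(1, n + 1):
        a[m] = a[m - 1] + (a[m // ell] if m % ell == 0 else 0)
    return a

# sanity checks of the lemma:
# all(p_ell(3, 60)[w] <= 3 ** (w / 6) for w in range(61) if w != 3)
# all(p_ell(2, 60)[w] <= 2 ** (w / 3 + 1) for w in range(61))
\end{verbatim}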
\begin{lemma}\label{lem:bound1.5} \label{lem:bound23}
\begin{enumerate}
\item For $\ell = 2$ and $a \geq 4$, it holds that $k(B) \leq 2^{(a-1) w + 3/2}.$ For $a \geq 3$, there is the weaker bound $k(B) \leq 2^{(a-1)w +3}.$
\item For $a \geq 2$ and any $3$-decomposition $(w_0, \ldots, w_v)$ of $w$, we have
$$k(b,w_0) \cdot \prod_{i \geq 1} k(b_1, w_i) \leq 3^{\left(a-\frac{5}{6}\right)w + 2 - \log_3(d)},$$
which yields $k(B) \leq p_3(w) \cdot 3^{\left(a-\frac{5}{6}\right)w + 2 - \log_3(d)}.$
\end{enumerate}
\end{lemma}
\begin{proof}
We first prove the second part. By Lemmas \ref{lem:multipartitionsbasics} and \ref{lem:multipartitions}, it holds that
\begin{alignat*}{2}
k(b,w_0) \cdot \prod_{i \geq 1} k(b_1, w_i) &\leq 3^{\left(a-\frac{5}{6} \right)w_0 + 2 - \log_3(d)} \cdot \prod_{i \geq 1} 3^{a w_i}
&&\leq
3^{\left(a-\frac{5}{6} \right)w_0 +2 - \log_3(d)} \cdot 3^{a \frac{w-w_0}{3}} \\
&= 3^{a \frac{w}{3} + \left(\frac{2}{3}a-\frac{5}{6}\right)w_0 + 2 - \log_3(d)}
&&\leq 3^{\left(a-\frac{5}{6} \right) w + 2 - \log_3(d)}.
\end{alignat*}
Now consider the first bound for $\ell = 2$. Let $w \geq 2$ and $a \geq 5$. We use the stronger bound from Lemma \ref{lem:multipartitions}. There is a single binary decomposition of $w$ with $w_0 = w$ and at most two with $w_0 = w-2$. For all others, it holds that $w_0 \leq w-4$, since $w_0$ and $w$ must have the same parity. Analogously to the above, we have
\begin{alignat*}{1}
k(B) &= \sum_{ \textbf{w} \in W_w} k(2^a, w_0) \prod_{i \geq 1} k(2^{a-1},w_i) \leq \sum_{\textbf{w} \in W_w} 2^{\left(a-\frac{4}{3}\right) w_0 +2+ (a-1)\frac{w-w_0}{2}} \\
&\leq p_2(w) \cdot 2^{\left(\frac{a}{2}-\frac{5}{6}\right)(w-4) +2+ \frac{a-1}{2}w} + 2 \cdot 2^{\left(\frac{a}{2}-\frac{5}{6}\right) (w-2) +2+ \frac{a-1}{2}w} + 2^{\left(\frac{a}{2}-\frac{5}{6}\right)w +2+ \frac{a-1}{2}w} \\
&\leq 2^{(a-1)w + \frac{3}{2}} \cdot \left(2^{\frac{29}{6} - 2a}+ 2^{- \frac{w}{3} + \frac{19}{6} - a} + 2^{-\frac{w}{3} +\frac{1}{2}} \right) \leq 2^{(a-1)w + \frac{3}{2}}.
\end{alignat*}
Here, we inserted the estimate $p_2(w) \leq 2^{\frac{w}{3}+1}$ (cf.\ Lemma \ref{lem:plw}) in the second step. In the third one, we used that the term in brackets is smaller than one for $a \geq 6$ and $w \geq 2$ as well as for $a =5$ and $w \geq 3$. The finitely many remaining cases can be checked directly. For $a \in \{3,4\}$, we use the same approach, albeit with the weaker bound of Lemma \ref{lem:multipartitions}, to prove the claim. For $w = 1$, we have $k(B) = k(2^a, 1) = 2^a,$ so the inequality holds.
\end{proof}
We now treat the case of small values of $a$.
\begin{lemma}\label{lem:bound1.65}
Let $w \geq 1$. For $\ell =2$ and $a = 2$, we have $k^w(B) \leq 2^{1.4 w + 1.65}$ and for $\ell = 3$ and $ a = 1$, it holds that
$k^w(B) \leq 3^{\frac{w +7}{2}}.$
\end{lemma}
\begin{proof}
We prove the claim by induction on $w$. Note that for any $\ell$-decomposition $(w_0, w_1, \ldots, w_v)$ of $w$, $\tilde{w} = (w_1, \ldots, w_v)$ is an $\ell$-decomposition of $(w-w_0)/\ell$ and each of them arises in this way. So there is a bijection between $W_w$ and $\bigcup_{j = 0}^{(w-a_0)/\ell} W_{r(j)}$, where $r(j) = (w-(a_0 + \ell j))/\ell$ (note that $w_0$ and $w$ have the same remainder modulo $\ell$). Summing over all possible values of $w_0$, we therefore obtain (setting $k^0(B) := 1$)
\begin{alignat*}{1}
k^w(B)&= \sum_{\textbf{w} \in W_w} k(\ell^a, w_0) \cdot \prod_{i \geq 1} k(\ell^{a-1}, w_i) = \sum_{j = 0}^{\frac{w-a_0}{\ell}} \sum_{\tilde{w} \in W_{r(j)}} k(\ell^a, \ell j+a_0) \cdot \prod_{i \geq 1} k(\ell^{a-1}, w_i) \\
&\leq k(\ell^a, w)+\sum_{j = 0}^{\frac{w-a_0}{\ell}-1} k(\ell^a, \ell j +a_0) \sum_{\tilde{w} \in W_{r(j)}} k(\ell^a, w_1) \cdot \prod_{i \geq 2} k(\ell^{a-1}, w_i) \\
&= \sum_{j = 0}^{(w-a_0)/\ell} k(\ell^a, \ell j +a_0) \cdot k^{r(j)}(B).
\end{alignat*}
By induction, using the geometric series as well as the bound from Lemma \ref{lem:bound3,4}, we obtain for $\ell = 2$
\begin{alignat*}{1}
k(B) &\leq \sum_{j = 0}^{(w-a_0)/2} 2^{1.2 (2j + a_0) +2} \cdot 2^{1.4 \frac{w- (2j + a_0)}{2} + 1.65} \leq 2^{3.65+ 0.7 w + 0.5 a_0} \cdot \left(2^{\frac{w-a_0}{2}+1} -1\right) \\
&\leq 2^{1.2 w + 4.65} \leq 2^{1.4 w + 1.65},
\end{alignat*}
where the last inequality holds for $w \geq 15$. For $\ell = 3$, assume that $k^i(B) \leq 3^i$ for $i \leq \frac{w-a_0}{3}$. With the above and the bound from Lemma \ref{lem:bound3,4}, we have
$$k(B) \leq \sum_{j = 0}^{(w-a_0)/3} 3^{\frac{3j + a_0}{2} + \frac{9}{4}} \cdot 3^{\frac{w-(3j +a_0)}{3}} \leq 3^{\frac{9}{4} + \frac{a_0}{6} + \frac{w}{3}} \cdot \frac{3^{\frac{1}{2}\left(\frac{w-a_0}{3}+1\right)} -1}{\sqrt{3}-1} \leq 3^{\frac{w+7}{2}} \leq 3^w,$$
where the last inequality holds for $w \geq 7$. Checking directly that $k^w(B) \leq 3^w \leq 3^{\frac{w+7}{2}}$ for $w \leq 6$, this shows inductively that $k^w(B) \leq \min\{3^\frac{w+7}{2}, 3^w\}$. The remaining cases can be checked directly.
\end{proof}
We have now assembled the prerequisites to prove the inequalities (C1) and (C2) for the general linear and unitary groups. To this end, note that by the same argument as in the proof of \cite[Prop. 5.11]{MAL18}, we may assume in the following that $w$ is divisible by $\ell$. For the number of characters in $D'$ and $D$, it holds by \cite[Lemma 5.10]{MAL18}
\begin{equation}\label{eq:kD'3}
k(D') = \prod_{i \geq 1} k(D_{i,\ell^a}')^{a_i} \geq \prod_{i \geq 1} \ell^{a(\ell^i-1) - \frac{\ell^i-\ell}{\ell-1} - i +1} = \ell^{\left(a- \frac{1}{\ell - 1}\right) w - \sum a_i \left(a+ i - \frac{2 \ell - 1}{\ell- 1}\right)},
\end{equation} and
\begin{equation}\label{eq:kD3}
k(D) = \prod_{i \geq 1} k(D_{i,\ell^a})^{a_i} \geq \prod_{i \geq 1} \left(\frac{\ell^{a \ell^i}}{\ell^{(\ell^i-1)/(\ell-1)}}\right)^{a_i} = \ell^{\left(a-\frac{1}{\ell-1}\right) w + \frac{1}{\ell - 1} \sum_{i \geq 1} a_i}.
\end{equation}
For small values of $a$, we need to improve this bound:
\begin{lemma}\label{lem:nrcharacters}
For $i \geq 1$, it holds that $$k(D_{i,3}) \geq 3^{\frac{3^{i-1}}{2}} \cdot 3^{\frac{3^i +1}{2}}$$
and
$$k(D_{i,3}') \geq 3^{\frac{3^{i-1}}{2}} \cdot 3^{\frac{3^i +1}{2}-i}.$$
\end{lemma}
\begin{proof}
The proof can be carried out analogously to \cite[Lemma 5.10]{MAL18} by using $k(D_{1,3}) = 17$ as an improved induction start for the first part.
\end{proof}
\begin{theorem}
(C1) and (C2) hold for the principal $3$-block of $\ensuremath{\operatorname{GL}_{wd}(\varepsilon q)}$.
\end{theorem}
\begin{proof}
First consider $\ensuremath{\operatorname{GL}_{wd}(q)}$.
As in \cite[Prop. 5.11]{MAL18}, the number of characters of height zero in $B$ can be bounded from below by
$$k_0(B) = \prod_{i \geq 1} k(b\ell^i,a_i) \geq \prod_{i \geq 1} \left(\frac{b}{3^a}\right)^{a_i} \cdot 3^{\sum_{i \geq 1} a_i (a+i-1)} \geq 3^{\sum_{i \geq 1} a_i (a+i-1 -\log_3(d))}.$$
Moreover, it holds that $l(B) \geq k(d,w) \geq p_\ell(w)$ (cf. \cite[Prop. 5.11]{MAL18}).
First assume $a \geq 2$. For $w \geq 6$, Lemma \ref{lem:bound23} yields
$$k(B) \leq p_3(w) \cdot 3^{\left(a-\frac{5}{6}\right) w
+ 2 - \log_3(d)} \leq 3^{\left(a-\frac{1}{2}\right)w + \left(\frac{3}{2} - \log_3(d)\right) \sum_{i \geq 1} a_i} \leq k_0(B) \cdot k(D')$$ and
$$k(B) \leq p_3(w) \cdot 3^{\left(a-\frac{5}{6}\right) w
+ 2- \log_3(d)} \leq \pi(w) \cdot 3^{\left(a-\frac{1}{2}\right)w + \frac{1}{2} \sum a_i} \leq l(B) \cdot k(D),$$
since $-\frac{w}{3} +2 - \log_3(d) \leq \frac{1}{2} \leq \min\{\left(\frac{3}{2} - \log_3(d)\right), \frac{1}{2}\} \cdot \sum a_i$ for $w \geq 6$. For $w = 3$, the above inequalities remain valid when inserting $p_3(3) = 2$.
For $a = 1$, we use the improved bounds from Lemma \ref{lem:nrcharacters}:
\begin{equation}\label{eq:kD'31}
k(D') = \prod_{i \geq 1} k(D_i')^{a_i} \geq \prod_{i \geq 1} \left(3^{\frac{3^{i-1}}{2}} 3^{\frac{3^i +1}{2}-i}\right)^{a_i}= 3^{\frac{2}{3} w + \sum_{i \geq 1} a_i \left(\frac{1}{2}- i\right)},
\end{equation} and
\begin{equation}\label{eq:kD31}
k(D) = \prod_{i \geq 1} k(D_i)^{a_i} \geq \prod_{i \geq 1} \left(3^{\frac{3^{i-1}}{2}} 3^{\frac{3^i +1}{2}}\right)^{a_i} = 3^{\frac{2}{3}w + \frac{1}{2}\sum_{i \geq 1} a_i}.
\end{equation}
Moreover, note that $b = 3 = 3^a$ for both $d = 1$ and $d = 2$ in this case. With
$$k(3^{i+1}, a_i) = \begin{cases}
3^{i+1} & \text{ if }a_i = 1 \\
\frac{3^{2i+2}}{2} + \frac{3^{i+2}}{2} \geq 3^{2i+1} & \text{ if }a_i = 2,
\end{cases}$$
we obtain
\begin{equation}\label{eq:betterboundk0B}
k_0(B) = \prod_{i \geq 1} k(3^{i+1}, a_i)\geq 3^{\sum_{i \geq 1} a_i i + \sum_{i: a_i \neq 0} 1}.
\end{equation}
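For instance, for $w = 6$ we have $a_1 = 2$, so that $k_0(B) = k(3^2, 2) = 54 \geq 3^{2 \cdot 1 + 1} = 3^3$, in line with the direct check for $w = 6$ below.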
For $w \notin \{3,6,9\}$, we have
$$k(B) \leq 3^{\frac{w+7}{2}} \leq 3^{\sum_{i \geq 1} a_i i + \sum_{a_i \neq 0} 1 } \cdot 3^{\frac{2}{3}w + \sum_{i \geq 1} \left(\frac{1}{2} -i\right) a_i} \leq k_0(B) \cdot k(D').$$ For $w \geq 6$, we furthermore obtain
$$k(B) \leq 3^{\frac{w+7}{2}} \leq \pi(w) \cdot 3^{\frac{2}{3}w + \frac{1}{2} \sum a_i} \leq l(B) \cdot k(D),$$ since then $\pi(w) \geq \pi(6) = 11$. In the remaining cases, we check the inequalities directly:
For $w = 3$, we have $k^3(B) = 24<3^3$ and $k(D') \geq 3^{\frac{3}{2}}$ as well as $k_0(B) = 3^2$. Moreover, $l(B) \geq \pi(3) = 3$ and $k(D) \geq 3^{5/2}$, so also (C2) holds. For $w = 6$, it holds that $k^6(B) = 270<3^6$, $k(D') \geq 3^3$ and $k_0(B) = 54> 3^3$. Finally, in case $w = 9$, we obtain $k^9(B) = 2043<3^7$, $k(D') \geq 3^\frac{9}{2}$ and $k_0(B) = 3^3$.
This finishes the proof for $\ensuremath{\operatorname{GL}_{wd}(q)}$.
\bigskip
Denoting the order of $-q$ modulo 3 by
$d$, the block theoretic invariants of $\operatorname{GU}_{wd}(q)$ are the same as of $\operatorname{GL}_{wd}(q_0)$, where $q_0$
has order $d$ modulo 3 and $3^a$ is the exact power of 3 dividing $q_0^d- 1$ (cf. \cite[Prop. 5.11]{MAL18}), so
the claim follows from the proven inequality for the linear case.
\end{proof}
For $\ell = 2$, the formulas hold for the general linear as well as for the general unitary groups.
\begin{theorem}
(C1) and (C2) hold for the principal $2$-block of $\operatorname{GL}_w(\varepsilon q)$ if $\varepsilon q \equiv 1 \mod 4$.
\end{theorem}
\begin{proof}
For $a \geq 3$, we use the bound from Lemma \ref{lem:bound23} together with Equations \eqref{eq:kD'3} and \eqref{eq:kD3} to obtain
$$k(B) \leq 2^{(a-1)w + 3} \leq 2^{(a-1)w + 3 \sum_{i \geq 1} a_i} \leq k_0(B) \cdot k(D')$$
and, for $w \geq 4$,
$$k(B)\leq 2^{(a-1)w + 3} \leq \pi(w) \cdot 2^{(a-1)w + \sum a_i} \leq l(B) \cdot k(D),$$
since then $\pi(w) \geq \pi(4) > 4$. In case $w = 2$, the claim follows similarly for $a \geq 4$ by using the stronger bound from Lemma \ref{lem:bound23}. For $a = 3$ and $w = 2$, we have $k(B) = 48 \leq 2^{6} \leq l(B) \cdot k(D)$.
\bigskip
For $a = 2$, we can use the improved upper bounds
$$k(D')= \prod_{i \geq 1} k(D_{i,4}')^{a_i} \geq 2^{2 a_1} \cdot \prod_{i \geq 2} \left(2^{1.4 \cdot 2^i - i + 1}\right)^{a_i} \geq 2^{1.4 w + \sum_{i \geq 2} a_i (-i + 1) -0.8 a_1}$$ and
$$k(D) = \prod_{i \geq 1} k(D_{i,4})^{a_i} \geq \prod_{i \geq 1} \left(2^{1.4 \cdot 2^i + 1}\right)^{a_i} = 2^{1.4 w + \sum_{i \geq 1} a_i}.$$
With this and the bound from Lemma \ref{lem:bound1.65}, we obtain
$$k^w(B) \leq 2^{1.4w + 1.65} \leq 2^{1.4w + 3 \sum_{i \geq 2} a_i + 2.2 a_1} \leq k_0(B) \cdot k(D').$$
Here, we used that for $w \geq 4$, there exists an $a_i > 0$ with $i \geq 2$ and that for $w = 2$, we have $a_1 = 1$.
Since $\pi(w) \geq 2$ for $w \geq 2$, we obtain for (C2)
$$k(B) \leq 2^{1.4 w + 1.65} \leq \pi(w) \cdot 2^{1.4w + 1} \leq l(B) \cdot k(D), $$
so the inequalities hold.
\end{proof}
\subsection{\texorpdfstring{The conjecture for $\varepsilon q \equiv 3$ \text{mod 4}}{The conjecture for eq = 3 mod 4}}
We examine the case $\ell = 2$ and $\varepsilon q \equiv 3 \mod 4$ by using a recursion to reduce to the previous case. Here, it holds that $a = 1$ and $\tilde{a} \geq 2$ in Equation \eqref{eq:formulakB2}.
\begin{lemma}\label{lem:Sylowb}
It holds that $k(P_1) = 2$, $k(P_2) = 2^{\tilde{a}} +3$, $k(P_4) = k(2^{\tilde{a}} + 3, 2) = 2^{2\tilde{a} - 1} + 9 \cdot 2^{\tilde{a}-1} + 9$ and
$$k(P_{2^i}) \geq \frac{k(P_4)^{2^{i-2}}}{2^{2^{i-2} - 1}} \geq 2^{(\tilde{a}-1)\cdot 2^{i-1}+1}$$ for $i \geq 2$
as well as $k(P_1') = 1$, $k(P_2') = 2^{\tilde{a}}$ and, for $i \geq 2$,
$$k(P_{2^i}') \geq \frac{k(P_{2^{i-1}})^2}{2^i} \geq 2^{(\tilde{a}-1)\cdot 2^{i-1} -i + 2}.$$
\end{lemma}
\begin{proof}
Note that $P_{2^i}$ lies in $P_{2^{i-1}}^2$ by \cite[Lemma 1.4]{OLS76} with index $2^i$. With this, the proof can be carried out analogously to \cite[Lemma 5.10]{MAL18}. The formula for $k(P_4)$ follows from \cite[Lemma 4.2.9]{JAM84} together with Lemma \ref{lem:k3aw2}.
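For the second inequality in the bound for $k(P_{2^i})$, note that $k(P_4) \geq 2^{2\tilde{a}-1}$, so that
$$\frac{k(P_4)^{2^{i-2}}}{2^{2^{i-2}-1}} \geq 2^{(2\tilde{a}-2) \cdot 2^{i-2}+1} = 2^{(\tilde{a}-1)\cdot 2^{i-1}+1};$$
the second inequality in the bound for $k(P_{2^i}')$ follows analogously.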
\end{proof}
\begin{lemma}\label{lem:recb}
Assume that there exist constants $y, c \geq 0$ such that for every $t \geq 0$, we have
$$\sum_{ \textbf{w} \in W_t} k(2^{\tilde{a}}-1,w_0) \cdot \prod_{i \geq 1} k(2^{\tilde{a}-1},w_i) \leq 2^{(\tilde{a}-y)t + c}.$$
Then
$$k(B) \leq 2^{(\tilde{a}-y) \frac{w-a_0}{2} + a_0+ 0.35+c} \cdot \sum_{j = 0}^{(w-a_0)/2} \left(2^{2+y-\tilde{a}}\right)^j.$$
\end{lemma}
\begin{proof}
For $j \in \{0, \ldots, \frac{w-a_0}{2}\}$, let $r(j) = \frac{w-(2j +a_0)}{2}$. Again, we exploit the correspondence between the set of binary decompositions $W_w$ of $w$ and
$\bigcup_{j = 0}^{(w-a_0)/2} W_{r(j)}$ (see proof of Lemma \ref{lem:bound1.65}). This together with Lemma \ref{lem:multipartitionsbasics} and the assumption yields
\begin{alignat*}{1}
k(B) &= \sum_{j = 0}^{(w-a_0)/2} k(2,2j+a_0) \cdot \sum_{(w_1, \ldots, w_v) \in W_{r(j)}} k(2^{\tilde{a}}-1,w_1) \cdot \prod_{i \geq 2} k(2^{\tilde{a}-1},w_i) \\
&\leq \sum_{j = 0}^{(w-a_0)/2} 2^{2j+a_0 + 0.35} \cdot 2^{(\tilde{a}-y) \frac{w-(2j+a_0)}{2} + c} = 2^{(\tilde{a}-y) \frac{w-a_0}{2} + a_0+ 0.35+c} \cdot \sum_{j = 0}^{(w-a_0)/2} \left(2^{2+y-\tilde{a}}\right)^j. \tag*{\qedhere}
\end{alignat*}
\end{proof}
\begin{Remark}\label{rem:weven}
Since $k(2,2w_0+1) \leq 2 \cdot k(2,2w_0)$ for all $w_0 \geq 0$ and the sets of binary decompositions of $2j+1$ and $2j$ for $j \geq 0$ are in bijective correspondence (see the proof of Lemma \ref{lem:bound1.65}), it follows as above that
\begin{alignat*}{1}
k^{2j+1}(B) &= \sum_{\textbf{w} \in W_{2j+1}} k(2,w_0) k(2^{\tilde{a}}-1,w_1) \prod_{i \geq 2} k(2^{\tilde{a}-1}, w_i) \\
&\leq \sum_{\textbf{w} \in W_{2j}} 2 \cdot k(2,w_0) k(2^{\tilde{a}}-1,w_1) \prod_{i \geq 2} k(2^{\tilde{a}-1}, w_i)= 2 \cdot k^{2j}(B).
\end{alignat*}
\end{Remark}
We can now prove the inequalities of the conjecture. As before, we treat the case $\tilde{a} = 2$ separately.
\begin{lemma}
(C1) and (C2) hold for the principal $2$-block of $\operatorname{GL}_w(\varepsilon q)$ if $\varepsilon q \equiv 3 \mod 4$ and $\tilde{a} \geq 3$.
\end{lemma}
\begin{proof}
We apply Lemma \ref{lem:recb} using the bound from Lemma \ref{lem:bound1.5}. For $\tilde{a} \geq 4$, we have $2^{3-\tilde{a}} < 1$, hence the geometric series yields
$$k(B) \leq 2^{(\tilde{a}-1)\frac{w-a_0}{2} + a_0 + 1.85} \cdot \sum_{j = 0}^\infty (2^{3-\tilde{a}})^j = \frac{2^{(\tilde{a}-1)\frac{w-a_0}{2} + a_0 + 1.85}}{1-2^{3-\tilde{a}}}\leq 2^{(\tilde{a}-1)\frac{w-a_0}{2} + a_0 + 2.85}.$$
The number of conjugacy classes of $D$ is given by
$$k(D) = \prod_{i \geq 0} k(P_{2^i})^{a_i} \geq 2^{a_0} \cdot \prod_{i \geq 1} \left(2^{(\tilde{a}-1)\cdot 2^{i-1} +1 }\right)^{a_i} = 2^{(\tilde{a}-1)\frac{w- a_0}{2} + \sum_{i \geq 0} a_i}.$$
With this, (C2) holds for $w \geq 4$ since then $l(B) \geq \pi(w) \geq 2^{1.85}$. Since the bound for $k(D)$ increases by a factor of 2 when passing from $w = 2j$ to $w = 2j+1$, it remains to consider $w \in \{1,2\}$. For $w = 1$, we have $k(B) = 2 \leq k(D)$. For $w = 2$, it holds that $\pi(2) = 2$ and so $k(B) = 2^{\tilde{a}} + 4 \leq 2^{\tilde{a}+1} \leq l(B) \cdot k(D).$
\bigskip
For the derived subgroup, we have the estimate
$$k(D') = \prod_{i \geq 0} k(P_{2^i}')^{a_i} \geq 2^{\tilde{a} a_1} \cdot \prod_{i \geq 2} \left(2^{(\tilde{a}-1)2^{i-1} -i+ 2}\right)^{a_i} \geq 2^{(\tilde{a}-1)\frac{w-a_0}{2} - \sum_{i \geq 1} (i-2) a_i},$$
so $$k_0(B) \cdot k(D') \geq 2^{(\tilde{a}-1)\frac{w-a_0}{2} + a_0 + 3 \sum_{i \geq 1} a_i},$$
hence (C1) holds since $\sum_{i \geq 1} a_i \geq 1$ for $w \geq 2$ and $k_0(B) \geq 2 = k(B)$ for $w = 1$.
\bigskip
We now consider the case $ \tilde{a}= 3$. There, we use the stronger bounds in Lemma \ref{lem:Sylowb} to obtain
\begin{equation}\label{eq:kDb3}
k(D) = \prod_{i \geq 0} k(P_{2^i})^{a_i} \geq 2^{a_0} \cdot 11^{a_1} \cdot \prod_{i \geq 2} \left(2^{1.3 \cdot 2^i +1}\right)^{a_i} \geq 2^{1.3 w- 0.3 a_0 + 0.85 a_1 + \sum_{i \geq 2} a_i}.
\end{equation}
Furthermore, Lemma \ref{lem:recb} yields
\begin{equation}\label{eq:kB3}
k(B) \leq \left(\frac{w-a_0}{2} + 1\right) \cdot 2^{w + 3.35}.
\end{equation}
For $w \geq 11$, this can be bounded from above by $k^w(B) \leq 2^{1.3 w + 2.7}$,
since $(w-a_0)/2 +1 \leq 2^{0.3 w- 0.65}$ for all $w \geq 11$. With this, (C2) holds for $w \geq 11$ since $\pi(w) \geq \pi(11) \geq 2^{3.35}$. For (C1), Lemma \ref{lem:Sylowb} yields
\begin{equation}\label{eq:kD'b3}
k(D') = \prod_{i \geq 0} k(P_{2^i}')^{a_i} \geq 2^{3 a_1} \cdot 2^{6 a_2} \cdot \prod_{i \geq 3} \left(2^{1.3 \cdot 2^i - (i-2)} \right)^{a_i} = 2^{1.3 (w-a_0) + 0.4 a_1 + 0.8 a_2 - \sum_{i \geq 3} (i-2) a_i}.
\end{equation}
For $w \geq 11$, the claim follows with $\sum_{i \geq 2} a_i \geq 1$ and $a_0 \in \{0,1\}$:
\begin{equation}\label{eq:dobby}
k(B) \leq 2^{1.3 w + 2.7} \leq 2^{1.3 w - 0.3 a_0+ 2.4 a_1 + 0.8 a_2 + 3 \sum_{i \geq 2} a_i } \leq k_0(B) \cdot k(D').
\end{equation}
Using the exact values of the $a_i$ in the estimates of Equations \eqref{eq:kDb3}, \eqref{eq:kB3} and \eqref{eq:dobby}, the claim holds for $w \in \{6,10\}$. For the remaining cases, we note that as before, we gain a factor 2 in Equations \eqref{eq:kDb3} and \eqref{eq:kD'b3} when passing from $w = 2x$ to $w = 2x+1$ for some $x \in \mathbb{Z}_{>0}$. So by Remark \ref{rem:weven} it suffices to consider the case $w = 1$ or $w$ even. We obtain the following values
\begin{center}
\begin{tabular}{ l | l | l | l }
$w$ & $k^w(B)$ & lower bound for $k_0(B) \cdot k(D')$ & lower bound for $l(B) \cdot k(D) $\\ \hline
$1$ & $2$ & $2$& $2$ \\
$2$ & $12$ & $2^5$ & $2^{4.45}$ \\
$4$ & $94$ & $2^9$ & $5 \cdot 2^{6.2}$\\
$8$& $2908$ & $2^{13.4}$& $22 \cdot 2^{11.4}$\\
\end{tabular}
\end{center}
which finishes the proof.
\end{proof}
\begin{lemma}
(C1) and (C2) hold for the principal $2$-block of $\ensuremath{\operatorname{GL}_{w}(\varepsilon q)}$ for $\varepsilon q \equiv 3 \mod 4$ and $\tilde{a} =2$.
\end{lemma}
\begin{proof}
It holds that $k(3,w) \leq 2^{1.2 w + 0.9}$ (for $w \geq 8$, this follows from Lemma \ref{lem:bound3,4} and the remaining cases can be checked directly). With this, we can prove analogously to Lemma \ref{lem:bound1.65} that
$$\sum_{ \textbf{w} \in W_w} k(3,w_0) \cdot \prod_{i \geq 1} k(2, w_i) \leq 2^{1.4 w + 1}.$$ Then Lemma \ref{lem:recb} yields
$$k(B) \leq 2^{1.4 \frac{w-a_0}{2} + a_0 + 1.35} \cdot \sum_{j = 0}^{(w-a_0)/2} 2^{0.6 j} \leq 2^{0.7 w + 0.3 a_0 +1.35} \cdot \frac{2^{0.6 \left(\frac{w-a_0}{2}+1 \right)}-1}{2^{0.6}-1} \leq 2^{w + 2.95}.$$
The number of conjugacy classes of the defect group $D$ is bounded by
\begin{equation}\label{eq:toffi2}
k(D) \geq \prod_{i \geq 0} k(P_{2^i})^{a_i} \geq 2^{a_0} \cdot 7^{a_1} \cdot \prod_{i \geq 2} \left(2^{2^i +1}\right)^{a_i} \geq 2^{w + 0.8 a_1 + \sum_{i \geq 2} a_i}
\end{equation}
and
\begin{equation}\label{eq:toffi3}
k(D') \geq \prod_{i \geq 0} k(P_{2^i}')^{a_i} \geq 2^{2a_1} \cdot 2^{4.45a_2} \cdot \prod_{i \geq 3} \left(2^{2^i -i+2}\right)^{a_i} \geq 2^{w -a_0+0.45a_2 - \sum_{i \geq 3} (i-2) a_i}.
\end{equation}
With $l(B) \geq \pi(w) \geq 2^{1.95}$ and $\sum_{i \geq 2} a_i \geq 1$ for $w \geq 4$, (C2) holds. For $w \geq 4$, we obtain for the first inequality
\begin{equation}\label{eq:toffi}
k(B) \leq 2^{w + 2.95} \leq 2^{w + 2 a_1 + 0.45 a_2 + 3 \sum_{i \geq 2} a_i } \leq k_0(B) \cdot k(D').
\end{equation}
Using the Equations \eqref{eq:toffi2} and \eqref{eq:toffi}, we obtain the following table
\begin{center}
\begin{tabular}{ l | l | l | l }
$w$ & $k^w(B)$ & lower bound for $k_0(B) \cdot k(D')$ & lower bound for $k(D) \cdot l(B) $\\ \hline
$1$ & $2$ & $2$& $2$ \\
$2$ & $8$ & $2^4$ & $2^{3.8}$ \\
$3$ & $16$ & $2^5$ & $3 \cdot 2^{3.8}$\\
\end{tabular}
\end{center}
so the inequalities hold.
\end{proof}
This completes the proof of (C1) and (C2) for the general linear and unitary groups.
\section{Special linear and unitary groups}
In the following, assume $\ell = 3$. We prove the conjecture for the special linear and unitary groups $\ensuremath{\operatorname{SL}_n(\varepsilon q)}$, proceeding similarly to the proof of \cite[Thm.~5.16]{MAL18}. Observe that the proof given therein for the case that $\ell$ does not divide $q-\varepsilon$ is also valid for $\ell = 3$. Therefore, it remains to consider the case $3 \mid (q-\varepsilon)$. There, $\ensuremath{\operatorname{GL}_n(\varepsilon q)}$ has a single unipotent block $\tilde{B}$ (cf. \cite[Thm. 7.A]{FON82}) with defect group $\tilde{D}$, which covers the unique unipotent block $B$ of $\ensuremath{\operatorname{SL}_n(\varepsilon q)}$ (cf. \cite[Thm.]{CAB94}). Let $3^a$ be the exact power of $3$ dividing $q-\varepsilon$ and $3^m := |Z(G)|_3 = \gcd(w, q-\varepsilon)_3 = \min\{w_3, 3^a\}.$
\bigskip
For $Z \leq Z(G)$, let $B_Z$ be the principal block of $G/Z$ with defect group $D_Z$, and, as a special case, let $\bar{B} = B_{Z(G)}$ be the principal block of $\operatorname{PSL}_n(\varepsilon q)$. The bounds given in the proof of \cite[Thm.~5.16]{MAL18} are also valid for $\ell = 3$: it holds that $k(D) \geq k(\tilde{D})/3^a$. For all $Z \leq Z(G)$, we obtain $k(D_Z) \geq k(\bar{D}) \geq k(D)/3^m$ (similarly for the derived subgroups). Moreover, it holds that $k_0(\bar{B}) = k_0(B) \geq k_0(\tilde{B})/3^a$ and $l(\tilde{B}) \geq l(B) = l(B_Z)$ as well as $k(B_Z) \leq k(B).$ In order to prove (C1) and (C2) for the block $B_Z$ for $Z \leq Z(G)$, it is therefore sufficient to prove the following inequalities:
\begin{equation}
k(B) \leq k_0(B) \cdot k(\bar{D}') \tag{C1'}
\end{equation}
and
\begin{equation}
k(B) \leq l(B) \cdot k(\bar{D}) \tag{C2'}.
\end{equation}
If $w$ is not divisible by $3$, then $m = 0$. By \cite[Thm.~5.1]{MAL17}, it holds that $k(B) = k(\tilde{B})/3^a$, so it follows from the proven inequalities for the block $\tilde{B}$ that (C1) and (C2) hold for $B$. We therefore assume that 3 divides $w$.
\begin{lemma}\label{lem:abschaetzungspeciallingroup}
Let $a \geq 2$, let $w \geq 6$ be a positive integer divisible by 3 and let $1 \leq j \leq \min\{a, \log_3 w_3\}$. Then it holds that
\begin{equation}\label{eq:baum}
3j + \frac{aw}{3^j} \leq
\left(a - \frac{5}{6}\right) w.
\end{equation}
\end{lemma}
\begin{proof}
The inequality holds for $w \geq 6$ if $j = 1$ and for $w \geq 9$ if $j = 2$ (note that for $w = 6$, only $j = 1$ is admissible). So assume $j \geq 3$. Using $j \leq a$, the left hand side can be bounded from above by $3a + \frac{aw}{27}$. The resulting inequality
$$3a + \frac{aw}{27} \leq \left(a-\frac{5}{6}\right) w$$
is fulfilled for $w \geq \frac{162 a}{52a - 45},$
i.e., for $w \geq 6$ if $a \geq 2$.
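Explicitly, the inequality above is equivalent to
$$3a \leq w \left(\frac{26a}{27} - \frac{5}{6}\right) = w \cdot \frac{52a-45}{54},$$
and for $a = 2$ the resulting bound evaluates to $\frac{162 \cdot 2}{52 \cdot 2 - 45} = \frac{324}{59} < 6$.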
\end{proof}
The proof of the following lemma is analogous to \cite[Thm.~5.16]{MAL18}.
\begin{lemma}\label{lem:kBsl}
Let $a \geq 2$ and assume that $w \geq 6$ is divisible by 3. It holds that
$$k(B) \leq p_3(w) \cdot 3^{a(w-1) - \frac{5}{6}w + \log_3(19/18) +2} \leq 3^{a(w-1) - \frac{2}{3}w + \log_3(19/18)+2}.$$
\end{lemma}
\begin{proof}
As in the proof of \cite[Thm.~5.16]{MAL18}, we obtain $k(3,3^a,1,x) \leq p_3(x) \cdot 3^{ax}$ for all $x \geq 0$. By \cite[Thm.~5.1]{MAL17}, the number of characters in the block $B$ can thus be bounded by
\begin{equation}\label{eq:kBfuerspeciallineargroup}
k(B) \leq \left(k(3,3^a,1,w) + \sum_{j=1}^{m} p_3\left(\frac{w}{3^j}\right) \cdot 3^{2j + \frac{aw}{3^j}}\right)/3^a,
\end{equation}
where $3^m = \min\{w_3, 3^a\}$ as before. With Lemma \ref{lem:abschaetzungspeciallingroup} and the bound from Lemma \ref{lem:bound23} we obtain
\begin{alignat*}{2}
k(B) &\leq p_3(w) \cdot \left(3^{\left(a-\frac{5}{6}\right)w + 2} + \sum_{j=1}^{m} 3^{2j + \frac{aw}{3^j}}\right)/3^a
&&\leq p_3(w) \cdot 3^{a(w-1) - \frac{5}{6}w +2} \left(1 + \sum_{j \geq 1} 3^{-2-j}\right) \\
&\leq p_3(w) \cdot 3^{a(w-1) - \frac{5}{6}w +2} \left(1 + \frac{1}{18}\right)
&&= p_3(w) \cdot 3^{a(w-1) - \frac{5}{6}w +2+ \log_3(19/18)}.
\end{alignat*}
The second bound then follows with Lemma \ref{lem:plw}.
\end{proof}
\begin{lemma}
The inequalities (C1') and (C2') hold if $a \geq 2$ and $w$ is divisible by 3.
\end{lemma}
\begin{proof}
We first assume $w \geq 6$ and consider (C2'). With the estimates from the beginning of this section and Equation \eqref{eq:kD3} it holds that
\begin{equation}\label{eq:kDbar}
\ensuremath{k(\bar{D})} \geq \frac{k(\tilde{D})}{3^{a+m}} \geq 3^{a(w-1) - \frac{w}{2} - m + \frac{1}{2} \sum_{i \geq 1} a_i}.
\end{equation}
We obtain
\begin{equation}\label{eq:AbschaetzungC2final69}
\frac{\ensuremath{k(\bar{D})} \cdot l(B)}{k(B)} \geq \frac{\pi(w) \cdot 3^{a(w-1) - \frac{w}{2} - m + \frac{1}{2} \sum_{i \geq 1} a_i}}{p_3(w) \cdot 3^{a (w-1) - \frac{5}{6}w + \log_3(19/18) +2}} = \frac{\pi(w)}{p_3(w)} \cdot 3^{\frac{w}{3} - m - \log_3(19/18) + \frac{1}{2} \sum_{i \geq 1} a_i -2}.
\end{equation}
Using $m \leq \log_3(w)$ and $\sum a_i \geq 1$, the above quotient is greater than 1 for $w \geq 12$. This also holds for $w \in \{6,9\}$ when inserting the exact values of $w$, $m$ and $\sum_{i \geq 1} a_i$ and using that $\pi(w)/p_3(w) \geq 3$ in this case.
\bigskip
Now we consider the inequality (C1').
By the estimates from the beginning of this section, Equation \eqref{eq:kD'3} and \cite[Prop. 5.15]{MAL18} we have
\begin{equation}\label{eq:kDbar'}
k(\bar{D}') \geq \frac{k(\tilde{D}')}{3^{m + \delta}} \geq 3^{\left(a-\frac{1}{2}\right)w - \sum_{i \geq 1} a_i\left(a+i-\frac{5}{2}\right)-m - \delta},
\end{equation}
where $\delta = 1$ if $w$ is a power of 3 and $\delta = 0$ otherwise. Moreover, we have
$$k_0(B) \geq \frac{k_0(\tilde{B})}{3^a} \geq 3^{\sum_{i \geq 1} a_i (a+i-1) - a}.$$
With Lemma \ref{lem:kBsl} we obtain for $w \notin \{3,9\}$
$$\frac{k_0(B) \cdot k(\bar{D}')}{k(B)} \geq \frac{3^{\left(a-\frac{1}{2}\right)w + \frac{3}{2} \sum_{i \geq 1} a_i -m-a- \delta}}{3^{a(w-1) - \frac{2}{3} w + \log_3(19/18)+2}} \geq 3^{\frac{w}{6}+ \frac{3}{2} \sum_{i \geq 1} a_i - \log_3(19/18) - m - 2- \delta} > 1.$$
The last inequality holds for $w \geq 15$ not a power of 3 since then $m \leq \log_3(w/2) \leq \frac{w}{6}-1/2 - \log_3(19/18)$ and $\sum a_i \geq 1$. For $w \in \{6,12\}$, we have $m = 1$ and $\sum a_i = 2$, so the above term is greater than 1.
For $w \geq 27$ a power of 3, the inequality holds with $m \leq \log_3(w)$.
\bigskip
Let $w = 9$. For $a \geq 3$, we can use the stronger bound in Lemma \ref{lem:multipartitions} in the proof of Lemma \ref{lem:bound23} to obtain an improved bound $k(3,3^a,1,9) \leq p_3(9) \cdot 3^{9 \cdot \left(a-\frac{5}{6}\right)} \leq 3^{9a-6}.$ With this, the number of characters in $B$ can be bounded by
$$k(B) \leq \frac{3^{9a -6} +2 \cdot 3^{2+3a} + 3^{4+a} }{3^a} \leq 2 \cdot 3^{8a-6}.$$
By \cite[Example 5.14]{MAL17} we have $k_0(B) = 18$ and Equation \eqref{eq:kDbar'} yields $k(\bar{D}') \geq 3^{8a-7}$, so the inequality holds.
For $a = 2$ and $w =9$, it holds that $k(B) \leq 45687$. By the above calculation, we have $k(\bar{D}') \geq 3^9$ and $k_0(B) = 18$, so (C1') holds.
\bigskip
It remains to consider both inequalities for $w = 3$. In this case, we have $m = 1$ and Equations \eqref{eq:kDbar'} and \eqref{eq:kDbar} yield $k(\bar{D}'),\, \ensuremath{k(\bar{D})} \geq 3^{2a-2}$. Furthermore, by \cite[Example 5.14]{MAL18}, it holds that $k_0(B) = 6$ and $l(B) \geq 5$. By Lemma \ref{lem:k3aw2}, we have
$$k(3^a,3)= \frac{1}{6} \cdot 3^{3a} + \frac{3}{2} \cdot 3^{2a} + \frac{4}{3} \cdot 3^a \leq 0.35 \cdot 3^{3a}, $$
hence
\begin{equation*}
k(B) = \frac{k(3^a,3) + 3^{2+a} - 3^{a-1}}{3^a}
\leq 0.35 \cdot 3^{2a} +9 - \frac{1}{3}
\leq 5 \cdot 3^{2a-2},
\end{equation*}
where we used the assumption $a \geq 2$ in the third step.
This yields
$$k(B) \leq 5 \cdot 3^{2a-2} \leq \min\{k_0(B) \cdot k(\bar{D}'),\, l(B) \cdot k(\bar{D})\},$$
so both inequalities hold also in this case.
\end{proof}
\begin{lemma}\label{lem:SlC2a1w3}
The inequalities (C1') and (C2') hold if $a = 1$ and $w \geq 6$ is divisible by 3.
\end{lemma}
\begin{proof}
It holds that $m \leq a = 1$ in this case. First consider (C2'). With the improved bound from Equation \eqref{eq:kD31}, we have
\begin{equation}\label{eq:kDa1wleq10}
\ensuremath{k(\bar{D})} \geq \frac{k(\tilde{D})}{9} \geq 3^{\frac{2}{3} w + \frac{1}{2}\sum_{i \geq 1} a_i -2}.
\end{equation}
By Lemmas \ref{lem:bound1.65} and \ref{lem:plw}, it holds
\begin{equation}\label{eq:kbsla1}
k(B) \leq \frac{k(3,3,1,w) + p_3(w/3) \cdot 3^{2 + \frac{w}{3}}}{3}
\leq 3^{\frac{w}{2} + \frac{5}{2}} + 3^{1 + \frac{w}{2}} \leq 3^{\frac{w}{2}+ 2.67}.
\end{equation}
For $w \geq 15$, the number of irreducible Brauer characters in $B$ can be bounded by $l(B) \geq \pi(w) \geq 3^{4.67}$, so (C2') holds. Using the exact values in Equation \eqref{eq:kDa1wleq10}, the claim also holds for $w \in \{9,12\}.$
For $w = 6$, we have $\ensuremath{l(\bar{B})} \geq k(1,6) = 11$, $\ensuremath{k(\bar{D})} \geq 3^{3}$ as well as $k(B) \leq \left(k(3,3,1,6) +3^4\right) /3 = 117$.
\bigskip
Now consider (C1'). Let $\delta = 1$ if $w$ is a power of 3 and zero otherwise as before. Equation \eqref{eq:kD'31} yields
\begin{equation}\label{eq:kD'}
k(\bar{D}') \geq \frac{k(\tilde{D}')}{3^{1+\delta}} \geq 3^{\frac{2}{3} w+ \sum_{i \geq 1} a_i \left(\frac{1}{2}-i\right)-1 - \delta}.
\end{equation}
With the improved bound from Equation \eqref{eq:betterboundk0B} we obtain
$$\frac{k(\bar{D}') \cdot k_0(B)}{k(B)} \geq \frac{3^{\frac{2}{3} w+ \sum_{i \geq 1} a_i \left(\frac{1}{2}-i\right) -1 - \delta + \sum_{i \geq 1} a_i \cdot i + \sum_{i: a_i \neq 0} 1 -1}}{3^{2.67 + \frac{w}{2}}} = 3^{\frac{w}{6} - 4.67 + \frac{1}{2}\sum_{i \geq 1} a_i + \sum_{i: a_i \neq 0} 1 - \delta}.$$
If $w$ is not a power of 3, we either have $\sum_{i \geq 1} a_i \geq 2$ or $\sum_{i: a_i \neq 0} 1 \geq 2$. With this, the inequality holds for $w \geq 18$. Using the exact values of the $a_i$, the claim follows for $w \in \{12,15\}$. For $w =6$, we have $k_0(B) \geq k_0(\tilde{B})/3 = k(3^3,2)/3= 135$ by Equation \eqref{eq:betterboundk0B} and $k(B) \leq 117.$ For $w \geq 27$ a power of $3$, the above term is greater than 1. For $w = 9$, we have $k(B) \leq 745$ by Equation \eqref{eq:kBfuerspeciallineargroup}. It holds that $|D_{1,3}^3 : D_{2,3}'| = 9$ (cf. \cite[Lemma 5.10]{MAL18}) and we can check directly that $k(D_{1,3}) = 17$, so we have
$$k(\bar{D}') \geq \frac{k(\tilde{D}')}{9} = \frac{k(D_{1,3})^3}{81} \geq 3^{3.5}.$$
With $k_0(B) = 18$ (cf. \cite[Thm.~5.12]{MAL17}), the inequality holds in this case.
\end{proof}
For the remaining case $a = 1$ and $w = 3$, we consider the original inequalities (C1) and (C2).
\begin{lemma}
The inequalities (C1) and (C2) hold for the principal block of $H = G/Z$ if $a =1$ and $w = 3$.
\end{lemma}
\begin{proof}
We use the notation from the beginning of this section. We can check directly that $|\tilde{D}| = 81$, hence $|D| = 27$. Since $|Z(G)|_3 = 3$, a defect group $D_Z$ for $Z \leq Z(G)$ is either isomorphic to $D$ or $|D_Z| = 9$. In the latter case, $D_Z$ is abelian and the claim holds by \cite[Thm.~2.1]{MAL18}. For the first case, since $k_0(B) = k_0(B_Z)$ and $l(B) = l(B_Z)$, it suffices to prove the inequality for $\ensuremath{\operatorname{SL}_{w}(\varepsilon q)}$. We have $|\tilde{D}'| = 9$ and hence $|D'| = 3$, thus $k(D') = 3$. By Example 5.14 in \cite{MAL18}, we have $k_0(B) = 6$ and $k(B) = 16$, so (C1) holds. For (C2), we use Example 5.14 in \cite{MAL18} to obtain $l(B) = 5$ and $k(D) \geq k(\bar{D}) = 9$, since $\bar{D}$ is abelian. With this, the inequality holds.
\end{proof}
This completes the proof of our main theorem.
\bigskip
\textbf{Acknowledgement:} I would like to thank Prof.\ G.\ Malle for supervising my master's thesis as well as for his numerous suggestions and his advice concerning this project.
\nocite{GAP4}
\nocite{BRE19}
\small
\bibliographystyle{plainurl}
\section{\label{sec:level1} Introduction\protect\\ \lowercase{}}
The tunnel current in the Metal-Insulator-Metal (M-I-M) structure has been described theoretically and tested experimentally for many years, starting in 1930 with Ref.~\cite{Fr1930}; see also, e.g., Refs.~\cite{Si}-\cite{Zh}. The theoretical problem of the transmission of relativistic electrons through a potential barrier of controlled height, described by the Dirac equation and known as the Klein paradox, was published in Ref.~\cite{Kl} and resolved many years ago; see the review Ref.~\cite{Ca}. A similar theoretical problem, in close analogy to the Dirac model, but for the transmission coefficient of conduction electrons through a potential barrier whose height depends on the voltage $V_B$ applied to it, was solved in recent years for graphene Ref.~\cite{Ka}, for phosphorus Ref.~\cite{Li2017} and for IV-VI semiconductor compounds Ref.~\cite{Pf}. Both of these phenomena are the basis of the tunneling transistor model proposed here. The considered design, described below, thus differs from existing TFET solutions, which rely on the fact that an increase in $V_G$ increases
the band bending in the source-channel heterojunction and, as a result, the tunnel current $J$ from the valence band to the conduction band in the $p^+-i-n^+$ or $p^{++}-n^--n^+$ configuration, or from the conduction band to the valence band in the $n^+-i-p^+$ or $n^{++}-p^--p^+$ configuration (band-to-band tunneling, BTBT); see, e.g., Refs.~\cite{Sa}-\cite{Co}.
\section{\label{sec:mim} Tunneling Current in the Metal-Insulator-Metal Structure\protect\\ \lowercase{}}
Here we consider the details of the general formula for the dependence of the current $J$ on the applied voltage $V_C$ in the M-I-M structure (see Ref.~\cite{St}), which serves as the source-channel-drain structure in our design. The elements of this structure are selected so that current electrons from the metal A tunnel along the $z$ direction through the forbidden gap of the insulator to the metal B, see Fig. 1. The wave vector $k_z(z)$ of the tunneling electrons has a decisive influence on the magnitude of the current $J$. This vector has an imaginary value and determines how strongly the electron wave function is damped in the insulator region.
\begin{figure}
\includegraphics[scale=0.4,angle=0, bb = 750 80 40 550]{Fig1.pdf}
\caption{Diagram of the Metal-Insulator-Metal structure with applied voltage $V_C$ between the metal A and the metal B, see, e.g., Ref.~\cite{St}. The tunneling current flows along the $z$ direction from the metal A through the forbidden gap of the insulator to the metal B. $V_b^L$ and $V_b^R$ are the band offsets between the conduction band of the insulator and the Fermi energy $E_F$ in the metal A or the metal B, respectively. $E_a$ is the energy of the electron.} \label{fig1th}
\end{figure}
In order to determine the formula for $|k_z(z)|$ we proceed as follows:
the energy of an electron in the insulator, $E(k)$, counted from the bottom of the forbidden gap, is, in the two-band model,
$$
E(k) = E_g-(\Phi(z)-E_a) =
$$
\begin{equation}
=\frac{E_g}{2} \pm
\left[(\frac{E_g}{2})^2+\frac{E_g\hbar^2({k^i_z}^2+k^2_{\perp})}{2m^*_0}\right]^{1/2}\;\;,
\end{equation}
where $E_g$ is the forbidden gap of the insulator, $k^2 = k^2_{\perp} + (k^i_z)^2$, and the wave vectors $k_{\perp}$ and $k_z^i$ have real and imaginary values in $E_g$, respectively. $E_a$ is the energy of the electron in the metal A, $m^*_0$ is the effective electron mass $m_c$ at the conduction-band edge or $m_v$ at the valence-band edge, corresponding to the sign in front of the square root in Eq.~(1), and $V_C$ is the applied voltage. $\Phi(z) = \Phi_B(z)+E_F$ is the energy of the M-I-M barrier potential relative to the metal A conduction band edge, where
\begin{equation}
\Phi_B(z) = V^L_b+(V^R_b-V^L_b-e V_C)z/d\;\;,
\end{equation}
$ V^L_b$ and $ V^R_b$ are metal-insulator barrier energies and $E_F$ is the Fermi energy.
\begin{figure}
\includegraphics[scale=0.35,angle=0, bb = 650 5 50 500]{Fig2.pdf}
\caption{The theoretical energy dispersions $E(k_z^2)$ in the forbidden gap of GaSe for $k_{\perp} = 0$. The blue curve is calculated with the electron mass $m_F(E)$ from the two-band Franz model for $m_C/m_0=0.35$ and $m_V/m_0=0.07$. The red curves are calculated in the two-band model for $m^*_0/m_0=0.35$ or $m^*_0/m_0=0.07$.} \label{fig2th}
\end{figure}
Hence,
\begin{equation}
|k_z(z)| = \left[\left(1-\frac{\Phi(z)-E_a}{E_g}\right)(\Phi(z)-E_a)\frac{2m^*_0}{\hbar^2} + k^2_{\perp}\right]^{1/2}\;\;.
\end{equation}
or
\begin{equation}
|k_z(z)| = \left[\left(1-\frac{E(z)}{E_g}\right)E(z)\frac{2m^*_0}{\hbar^2} + k^2_{\perp}\right]^{1/2}\;\;.
\end{equation}
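Note that for $k_{\perp} = 0$ the damping is strongest in the middle of the forbidden gap: $|k_z(z)|$ attains its maximum value $\sqrt{m^*_0 E_g/2}\,/\hbar$ at $E(z) = E_g/2$ and vanishes at both band edges.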
The next step is a general expression for the elastic tunneling current $J(V)$ from the metal A to the metal B (see, e.g., Ref.~\cite{Ku}),
$$
J(V) = \frac{2eS}{\hbar}\int_0^{\infty}dE_a(f_A(E_a)-f_B(E_a))\cdot
$$
\begin{equation}
\cdot\int_0^{\infty}\frac{d^2k_{\perp}}{(2\pi)^2}\rm{exp}\left[-2\int^d_0|k_z|(k_{\perp}, \Phi(z)-E_a)dz\right]=
\end{equation}
$$
=\frac{2eS}{\hbar}\int_0^{\infty}dE_a(f_A(E_a)-f_B(E_a))\cdot
$$
$$
\cdot\int_0^{\infty}\frac{dk_{\perp}k_{\perp}}{2\pi}\rm{exp}\left[-2\int^d_0|k_z|(k_{\perp}, \Phi(z)-E_a)dz\right]\;\;,
$$
or
$$
J(V)/S\left[\frac{A}{cm^2}\right]=\frac{7.7483}{10^5}\int_0^{\infty}dE_a(f_A(E_a)-f_B(E_a))\cdot
$$
\begin{equation}
\cdot\int_0^{k^{M}_{\perp}} dk_{\perp}k_{\perp}
\rm{exp}\left[-2\int^d_0|k_z(z)|dz\right]
\end{equation}
where $k^{M}_{\perp} = k^F_{\perp}$ for $E_F$ and $k_z = 0$ in the M-I-M system. Further, $d$ is the insulator thickness and $S$ is the area of the interface between the metal and the insulator. $f_A(E_a)$ and $f_B(E_a)$ are the Fermi-Dirac distribution functions for the metal A and the metal B,
$f_A(E_a) = 1/[1+\rm{exp}(E_a-E^F_a)/kT]$ and $f_B(E_a) = 1/[1+\rm{exp}[E_a-(E^F_a-eV_C)]/kT]$.
For very low temperature one has
$$
J(V)[\frac{A}{cm^2}] = \frac{7.7483}{10^5}\cdot
$$
\begin{equation}
\cdot\int_{E_F-eV}^{E_F}dE_a\int_0^{k^{M}_{\perp}} dk_{\perp}k_{\perp} \rm{exp}\left[-2\int^d_0|k_z(z)|dz\right]\;\;,
\end{equation}
One can notice that the formula for $J(V)$ is dominated by the exponential factor, which results from the imaginary value of $k_z$ in the electron wave function within the insulator forbidden gap.
It is also seen that all electrons in the metal A with energy in the range $E_F - (E_F-eV_C)$ contribute to the tunneling current from the metal A to the metal B. Furthermore, the contribution of an electron to the tunneling current is the larger, the smaller its $|k_z|$.
\begin{figure}
\includegraphics[scale=0.35,angle=0, bb = 650 5 90 520]{Fig3.pdf}
\caption{The theoretical energy dispersion $E(k_z^2)$ in the forbidden gap of InAs for $k_{\perp} = 0$, calculated in the two-band model for $E_g = 0.417$ eV and $m^*_0/m_0 = 0.026$.}
\label{fig3th}
\end{figure}
\section{\label{sec:dispersion} Dependence of the current electrons' energy on $k^2_z$ in the channel forbidden gap\protect\\ \lowercase{}}
Henceforth, the term Metal-Insulator-Metal is replaced by the term Source-Channel-Drain: the source and drain are metals or $n$-type semiconductors, and the channel is a wide-gap or narrow-gap semiconductor.
The comparison of experimental data with theoretical calculations of $J(V)$ in the structure under consideration shows that the two-band model is sufficient to describe the dispersion $E(k_z^2)$ of electronic states in the forbidden gap of a channel such as GaSe or InAs, see Refs.~\cite{Ku} and \cite{Pa}. The knowledge of the dependence of $E$ on $k_z^2$ for $k_{\perp} = 0$ is the most important, because the greater $k_{\perp}$ is, the greater $|k_z|$ must be to keep the electron energy unchanged. On the other hand, the tunnel current is determined by the electrons with $|k_z|$ as small as possible, i.e. with the damping of their wave function as small as possible.
If the effective masses of electrons, $m_c$, and
holes, $m_v$, are not equal, the use of the two-band Franz model for band-to-band tunneling, see Refs.~\cite{Fr1952} and \cite{Ta}, allows for a more detailed description of the tunneling process. It means replacing $m^*_0$ in Eq.~(1) by $m_F$, the value of which depends on the energy $E$ of the electron in the band forbidden gap. $m_F$ has the form
\begin{equation}
m_F(E) = \frac{m_c}{(E/E_g)(1-m_c/m_v)+m_c/m_v}\;\;,
\end{equation}
where $E = E_g-\Phi(z)+E_a$, see Fig. 1. It is seen that for $E = 0$ we have $m_F = m_v$, and for $E = E_g$ we have $m_F = m_c$.
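In particular, in the middle of the gap, $E = E_g/2$, the formula above gives $m_F = 2m_cm_v/(m_c+m_v)$; for the GaSe parameters used below ($m_c/m_0 = 0.35$, $m_v/m_0 = 0.07$) this amounts to $m_F \approx 0.12\,m_0$.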
\begin{figure}
\includegraphics[scale=0.35,angle=0, bb = 650 10 60 550]{Fig4.pdf}
\caption{The theoretical tunneling current $J$ versus gate voltage $V_G$ applied to the GaSe barrier. The curves are calculated for different values of the voltage $V_C$ applied to the Au-GaSe-Au structure. For $V_G$ in the range ($V_C-V_b^L$) - (${E_g}/e- V_C-V_b^L$), the electrons tunnel through the whole width of the GaSe forbidden gap, see Fig. 1. Changing the value of $V_G$ shifts GaSe on the energy axis, thereby changing the energy $E(k_z)$ and the wave vector $k_z$ of electrons in the forbidden gap (see Fig. 2) and, as a result, leading to an exponential change in the value of the tunneling current, see Eq. (6). The width of the GaSe part is $d$ = 15 nm and $V^L_b$=$V^R_b$=0.52 eV.} \label{fig4th}
\end{figure}
To calculate the energy dispersion $E(k_z^2)$ in the forbidden gap of GaSe for $k_{\perp} = 0$ we used Eq.~(3) with $m^*_0 = m_F$ and the GaSe parameters (Ref.~\cite{Ku}) $E_g = 2$ eV, $m_C/m_0=0.35$ and $m_V/m_0=0.07$. The curves calculated for $m^*_0 = m_F$, $m^*_0 = m_C$ and $m^*_0 = m_V$ are shown in Fig. 2. From the comparison of the curves it follows that the use of $m_F(E)$ is necessary. The results of similar calculations for InAs are shown in Fig. 3. The InAs parameters are $E_g = 0.417$ eV and $m^*_0/m_0=0.026$.
Figs. 2 and 3 show that a slight change in the energy of an electron in the band gap significantly changes its value of $k_z^2$, i.e. its weight in the formation of the tunnel current.
\begin{figure}
\includegraphics[scale=0.35,angle=0, bb = 700 10 70 500]{Fig5.pdf}
\caption{The theoretical tunneling current $J$ versus gate voltage $V_G$ applied to the InAs barrier. The curves are calculated for different values of the voltage $V_C$ applied to the InAs-InAs-InAs structure. The width of the barrier is $d$ = 50 nm and the electron effective mass is $m_0^*/m_0$ = 0.026. It is seen that a slight change of $V_G$ can change the value of the tunneling current $J$ by a few orders of magnitude.} \label{fig5th}
\end{figure}
\begin{figure}
\includegraphics[scale=0.35,angle=0, bb = 650 10 60 500]{Fig6.pdf}
\caption{The theoretical tunneling current $J$ versus the gate voltage $V_G$ applied to the InAs barrier of width $d$ = 65 nm in the InAs-InAs-InAs structure. A comparison of Figs. 5 and 6 shows the strong dependence of the tunneling current on the barrier width.} \label{fig6th}
\end{figure}
\section{\label{sec:principles} Principles of operation of the proposed tunneling transistor\protect\\ \lowercase{}}
The basis of the proposed transistor is the observation that the current flowing through the Source-Channel-Drain structure biased with the constant voltage $V_C$ can be changed depending on the magnitude of the gate voltage $V_G$ applied to the channel by an electrode separated from the channel by an oxide layer.
In other words, by raising or lowering the potential energy of the channel with respect to the source, we can control $k_z^2$ of the current electrons, and thus the magnitude of the tunneling current, see Figs. 2 and 3. The modified formula for $|k_z(z)|^2$ of an electron in the forbidden gap of the channel with applied voltage $V_G$ thus reads
\begin{equation}
|k_z(z)|^2 = \left(1-\frac{E(z)-eV_G}{E_g}\right)(E(z)-eV_G)\frac{2m^*_0}{\hbar^2} + k^2_{\perp}\;.
\end{equation}
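In particular, for $k_{\perp} = 0$ the damping vanishes when $E(z)-eV_G$ reaches either band edge, i.e. when $eV_G = E(z)$ or $eV_G = E(z)-E_g$, and it is strongest for $eV_G = E(z)-E_g/2$. This is why even a moderate change of $V_G$ can change the tunneling current by several orders of magnitude.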
\begin{figure}
\includegraphics[scale=0.4,angle=0, bb = 750 80 40 550]{Fig7.pdf}
\caption{Diagram of the proposed InAs-InAs-InAs tunneling transistor for the source-drain voltage $V_C$ and for two different values of the gate voltage $V_{G'}$ and $V_{G''}$ applied to the channel. It is seen that the current electrons with energy in the range $E_F$ - ($E_F$ - $eV_C$) tunnel through a barrier with a $V_G$-dependent height,
i.e. through energetically different parts of the forbidden gap of the channel.} \label{fig7th}
\end{figure}
Fig. 4 shows the dependence of the tunneling current $J$ on the voltage $V_G$ applied to the GaSe element with a width of $d$ = 15 nm in the Au-GaSe-Au structure with $V_C$ = 10 mV or 50 mV. A negative or positive value of $V_G$ reduces or increases the virtual shift of $V_b^L$ and $V_b^R$ of the GaSe barrier with respect to the Au source, and thus changes $k_z^2$ of the current electrons. The dependence $J(V_G)$ is also seen in Figs. 5 and 6 for the InAs-InAs-InAs structure with a width of $d$ = 50 nm or 65 nm, respectively, for different values of $V_C$. The electrons tunnel through the forbidden gap along the entire length of the channel for values of $V_G$ within the range $V_C$ - ($E_g/e-V_C$), see Fig. 7. The theoretical curves in the above cases are calculated using Eqs. (7) and (9). The comparison of Figs. 5 and 6 shows a clear dependence of the current $J$ on the length $d$ of the electron tunneling path.
The conclusion that can be drawn from the $J(V_G)$ curves in Figs. 4, 5 and 6 is as follows: the smaller $V_C$, the greater the ratio of the maximum tunnel current $J_{MAX}$ to the minimum tunnel current $J_{MIN}$. The reason is that the smaller $V_C$ and, consequently, $J$, the fewer electrons form the current and the smaller the differences between the $|k_z|^2$ of these electrons. Thus, the highest ratio of $J_{MAX}$ to $J_{MIN}$ for a given width $d$ will occur when the voltage $V_C$ is extremely small, i.e. when the tunneling current is formed exclusively from electrons of the same energy $E$. In this case, it is convenient to calculate the transmission coefficient $T_C$ of electrons tunneling through the forbidden gap of the channel vs. $V_G$ (the procedure is included, e.g., in Ref.~\cite{Pf}). Such a dependence $T_C(V_G)$ for the InAs-InAs-InAs structure is shown in Fig. 8. It can be seen that the ratio of $T_{C,MAX}$ to $T_{C,MIN}$, and therefore of $J_{MAX}$ to $J_{MIN}$, is indeed extremely large.
\begin{figure}
\includegraphics[scale=0.35,angle=0, bb = 650 10 60 560]{Fig8.pdf}
\caption{Transmission probability $T_C$ for electrons with a specific value of $E$ within the forbidden gap of InAs versus the barrier potential $V_G$. The curve is calculated for values of $V_G$ in the range 0 - $E_g/e$.} \label{fig8th}
\end{figure}
\section{\label{sec:summary} Summary\protect\\ \lowercase{}}
The presented theoretical design of a tunneling transistor is a simple extension of the structure used to study the dependence of the tunnel current on the applied voltage. It has no heterojunction, unlike a TFET, and is based on controlling the height of the potential barrier created by the channel biased with the voltage $V_G$. Such a transistor would be extremely convenient in terms of technology. It is enough to apply a constant source-drain voltage $V_C$ of a few mV; then a small change of the gate voltage $V_G$ applied to the channel changes the value of $J$ by a few orders of magnitude.
The maximum computed change in $J$ formed by electrons tunneling along the entire length of the channel is due to a change in
$V_G$ in the range 0 - ($E_g/2e-V_C$). For an InAs structure with $d$ = 65 nm, this means a change in $V_G$ from 0 to about 200 mV and, consequently, a change in $J$ by 9 orders of magnitude at $V_C$ = 20 mV and by more than 6 orders of magnitude at $V_C$ = 50 mV.
The intensity of the tunneling current $J$ can be additionally adjusted by changing the voltage $V_C$ and the width $d$ of the channel.
\section*{References}
\section{Introduction}
A friend knocks on your office door, having made the journey across campus from the sociology department. ``In my fieldwork, I found a network of social links between individuals,'' they say. ``I think there are two demographic groups of equal size, and that people tend to link mostly to others within the same group. But their demographic information is hidden from me. If I show you the links, can you figure out where the two groups are?''
``I know this problem!'' you say. ``We call it \textsc{Min Bisection}. The question is how to divide the vertices into two equal groups, with as few edges as possible crossing from one group to the other. It's NP-hard~\cite{garey-johnson}, so we can't expect to solve it perfectly in polynomial time, but there are lots of good heuristics. Let me take a look at your graph.''
You quickly code up a local search algorithm and find a local optimum. You show it to your friend the sociologist, coloring the nodes red and blue, and putting them on the left and right side of the screen as in the top of Figure~\ref{fig:3reg}. You proudly announce that of the $300$ edges in the graph, only $38$ of them cross between the groups. It seems you have found a convincing division of the vertices (i.e., the people) into two communities.
``That's great!'' your friend says. ``Is that the best solution?''
``Well, I just ran a simple local search,'' you say modestly. ``We could try running it again, with different initial conditions\ldots'' You do so, and find a slightly better partition, with only $36$ edges crossing the cut. ``Even better!'' says your friend.
But when you compare these two partitions (see the bottom of Figure~\ref{fig:3reg}, where we kept the colors the same but moved the vertices) you notice something troubling. There is almost no correlation between them: they disagree about which person belongs in which group about as often as a coin flip. ``Which one is right?'' your friend wants to know.
``Well, this is the better solution, so if you want the one with the smallest number of edges between groups, you should take the second one,'' you say. ``\textsc{Min Bisection} is an optimization problem, inspired by assigning tasks to processors, or designing the layout of a circuit. If you want to minimize the amount of communication or wiring you need, you should take whichever solution has the fewest edges between the groups.''
Your friend frowns. ``But I'm not trying to design something---I'm trying to find out the truth. If there are two ways to fit a model to my data that are almost as good as each other, but that don't agree at all, what am I supposed to think? Besides, my data is uncertain---people might not report all their friendships, or they might be unusually willing to link with the other group. A few small errors could flip which of these two solutions is better!''
At that moment, a physicist strides into the room. ``Hi!'' she says brightly. ``I couldn't help overhearing you. It sounds to me like you have a bumpy energy landscape, with lots of local optima that are far apart from each other. Even if you could find the ground state, it's not representative of the Boltzmann distribution at thermal equilibrium. Besides, if it's hidden behind a free energy barrier, you'll never find it---you'll be stuck in a metastable paramagnetic state. We see this in spin glasses all the time!''
\begin{figure}
\begin{center}
\includegraphics[width=4in]{3reg1.pdf}
\includegraphics[width=4in]{3reg2.pdf}
\end{center}
\caption{Top: a partition of a graph into two equal groups, with $38$ of the $300$ edges crossing the cut. Bottom: another partition of the same graph, with only $36$ edges crossing. We kept the colors the same, and moved the vertices. The two partitions are nearly uncorrelated: about half of each group in the first partition ended up on each side of the second one. This is a sign that we might be overfitting, and in fact these ``communities'' exist only by chance: this is a random 3-regular graph. Calculations from physics~\cite{zdeborova-boettcher} suggest that such graphs typically have bisections with only about $11\%$ of their edges crossing the cut; see also~\cite{dembo-montanari-sen}. For evidence that this kind of overfitting and ambiguity can occur in real-world contexts as well, see~\cite{good-montjoye-clauset}.\label{fig:3reg}}
\end{figure}
You and your friend absorb this barrage of jargon and blink a few times. ``But what should I do?'' asks the sociologist. ``I want to label the individuals according to the two communities, and the only data I have is this graph. How can I find the right way to do that?''
A stately statistician with a shock of gray in his hair walks by. ``Perhaps there are in fact no communities at all!'' he declaims. ``Have you solved the hypothesis testing problem, and decisively rejected the null? For all you know, there is no real structure in your data, and these so-called `solutions' exist purely by chance.'' He points a shaking finger at you. ``I find you guilty of overfitting!'' he thunders.
A combinatorialist peeks around the corner. ``As a matter of fact, even random graphs have surprisingly good bisections!'' she says eagerly. ``Allow me to compute the first and second moments of the number of bisections of a random graph with a fraction $\alpha$ of their edges crossing the cut, and bound the large deviation rate\ldots'' She seizes your chalk and begins filling your blackboard with asymptotic calculations.
At that moment, a flock of machine learning students rappel down the outside of the building, and burst in through your window shouting acronyms. ``We'll use an EM algorithm to find the MLE!'' one says. ``No, the MAP!'' says another. ``Cross-validate!'' says a third. ``Compute the AIC, BIC, AUC, and MDL!'' The physicist explains to anyone who will listen that this is merely a planted Potts model, an even more distinguished statistician starts a fistfight with the first one, and the sociologist flees. Voices rise and cultures clash\ldots
\newpage
\section{The Stochastic Block Model}
There are many ways to think about community detection. A natural one is to optimize some objective function, like \textsc{Min Bisection}. One of the most popular such functions is the modularity~\cite{newman-girvan}, which measures how many edges lie within communities compared to what we would expect if the graph were randomly rewired while preserving the vertex degrees (i.e., if the graph were generated randomly by the configuration model~\cite{bollobas-book}). It performs well on real benchmarks, and has the advantage of not fixing the number of groups in advance. While maximizing it is NP-hard~\cite{brandes-etal}, it is amenable to spectral methods~\cite{Newman06c} and semidefinite and linear programming relaxations~\cite{agarwal-kempe}. However, just as the random graph in Figure~\ref{fig:3reg} has surprisingly good bisections, even random graphs often have high modularity~\cite{guimera-random,zhang-moore-pnas}, forcing us to ask when the resulting communities are statistically significant.\footnote{This is a good time to warn the reader that I will not attempt to survey the vast literature on community detection. One survey from the physics point of view is~\cite{fortunato-survey}; a computer science survey complementary to this one is~\cite{abbe-survey}.}
I will take a different point of view. I will assume that the graph is generated by a probabilistic model which has community structure built into it, and that our goal is to recover this structure from the graph. This breaks with tradition in theoretical computer science. For the most part we are used to thinking about worst-case instances rather than random ones, since we want algorithms that are guaranteed to work on any instance. But why should we expect a community detection algorithm to work, or care about its results, unless there really are communities in the first place? And when Nature adds noise to a data set, isn't it fair to assume that this noise is random, rather than diabolically designed by an adversary?
Without further ado, let us describe the stochastic block model, which was first proposed in the sociology literature~\cite{HLL83}. It was reinvented in physics and mathematics as the ``inhomogeneous random graph''~\cite{Soderberg02,BJR07} and in computer science as the planted partition problem (e.g.~\cite{jerrum-sorkin,condon-karp,mcsherry}). As a model of real networks it is quite naive, but it can be elaborated to include arbitrary degree sequences~\cite{dcsbm}, overlapping communities~\cite{ABFX08,ball-karrer-newman}, and so on.
There are $n$ vertices which belong to $q$ groups. Each vertex $i$ belongs to a group $\sigma_i \in \{1,\ldots,q\}$; we call $\sigma$ the planted or ``ground truth'' assignment. Given $\sigma$, the edges are generated independently, where each pair of vertices is connected with a probability that depends only on their groups:
\begin{equation}
\label{eq:sbm-general}
\Pr[(i,j) \in E] = p_{\sigma_i,\sigma_j} \, ,
\end{equation}
where $p$ is some $q \times q$ matrix. We will focus on the symmetric case where the $\sigma_i$ are chosen a priori independently and uniformly, and where $\Pr[(i,j) \in E]$ depends only on whether $i$ and $j$ are in the same or different groups,
\begin{equation}
\label{eq:symmetric}
\Pr[(i,j) \in E] = \begin{cases}
p_\mathrm{in} & \mbox{if $\sigma_i=\sigma_j$} \\
p_\mathrm{out} & \mbox{if $\sigma_i \ne \sigma_j$} \, .
\end{cases}
\end{equation}
We often assume that $p_\mathrm{in} > p_\mathrm{out}$, i.e., that vertices are more likely to connect to others in the same group, which is called \emph{homophily} or \emph{assortativity}. But the disassortative case $p_\mathrm{in} < p_\mathrm{out}$ is also interesting. In particular, the case $p_\mathrm{in}=0$ corresponds to the planted coloring problem, where edges are only allowed between vertices of different colors.
Intuitively, communities are harder to find when the graph is sparse, since we have less information about each vertex. We will spend most of our time in the regime where
\begin{equation}
\label{eq:sparse}
p_\mathrm{in} = \frac{c_\mathrm{in}}{n} \quad \text{and} \quad p_\mathrm{out} = \frac{c_\mathrm{out}}{n}
\end{equation}
for some constants $c_\mathrm{in}, c_\mathrm{out}$. Since the expected size of each group is $n/q$, each vertex has in expectation $c_\mathrm{in}/q$ neighbors in its own group and $c_\mathrm{out}/q$ in each of the other groups. Thus the expected degree of each vertex is
\begin{equation}
\label{eq:avgdeg}
c = \frac{c_\mathrm{in} + (q-1)c_\mathrm{out}}{q} \, ,
\end{equation}
and the degree distribution is asymptotically Poisson. The case $c_\mathrm{in} = c_\mathrm{out}$ is the classic Erd\H{o}s-R\'enyi\ random graph where every pair of vertices is connected with the same probability $c/n$, which we denote $G(n,p=c/n)$. The constant-degree regime $c=O(1)$ is where many classic phase transitions in random graphs occur, such as the emergence of the giant component~\cite{bollobas-book} or the $k$-core~\cite{pittel-spencer-wormald}, as well as the threshold for $q$-colorability (e.g.~\cite{achlioptas-naor}).
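For concreteness, the generative process takes only a few lines to simulate; here is a minimal sketch in Python (the function name and parameter values are merely illustrative):
\begin{verbatim}
import numpy as np

def sample_sbm(n, q, c_in, c_out, seed=0):
    # Symmetric stochastic block model: uniformly random groups,
    # each pair joined independently with probability c_in/n
    # (same group) or c_out/n (different groups).
    rng = np.random.default_rng(seed)
    sigma = rng.integers(q, size=n)             # planted assignment
    same = sigma[:, None] == sigma[None, :]
    p = np.where(same, c_in / n, c_out / n)
    upper = np.triu(rng.random((n, n)) < p, 1)  # one coin per pair
    return sigma, upper | upper.T               # symmetric adjacency

# The average degree should be close to c = (c_in + (q-1)*c_out)/q = 3:
sigma, A = sample_sbm(n=2000, q=2, c_in=5.0, c_out=1.0)
print(A.sum(axis=1).mean())
\end{verbatim}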
Now, given the graph $G$, can you reconstruct the planted assignment? (Due to symmetry, we only ask you to do this up to a permutation of the $q$ groups.) For that matter, can you confirm that $G$ was generated by this model, and not a simpler one without community structure? This gives rise to several possible tasks, which we would like to solve with high probability---that is, with probability over $G$ that tends to $1$ as $n \to \infty$.
\emph{Exact reconstruction} consists of finding the planted assignment exactly, labeling every vertex correctly up to a permutation of the groups. This is impossible unless the graph is connected, and (in the assortative case) if every vertex has a plurality of its neighbors in its own group. Neither of these local properties hold with high probability unless the average degree grows as $\Omega(\log n)$, suggesting the parametrization $p_\mathrm{in}=c_\mathrm{in} (\log n)/n$ and $p_\mathrm{out}=c_\mathrm{out} (\log n)/n$. When $c_\mathrm{in}$ and $c_\mathrm{out}$ are large enough and different enough, exact reconstruction is in fact possible~\cite{BC09}. Recent work has determined the threshold condition precisely, along with polynomial-time algorithms that achieve it~\cite{abbe-bandeira-hall,abbe-sandon,HajekWuXuSDP14,HajekWuXuSDP15,agarwal-etal,mns-consistency}.
\emph{Reconstruction} (sometimes called weak reconstruction or weak recovery) consists of labeling the vertices with an assignment $\tau$ which is correlated with the planted assignment $\sigma$. There are several reasonable definitions of correlation or accuracy: we will use the maximum, over all $q!$ permutations $\pi$ of the groups, of the fraction of vertices $i$ such that $\tau_i = \pi(\sigma_i)$. We want this fraction to be bounded above $1/q$, i.e., noticeably better than chance.
Finally, \emph{Detection} is the hypothesis testing problem that the statistician wanted us to solve: can we distinguish $G$ from an Erd\H{o}s-R\'enyi\ random graph $G(n,c/n)$ with the same average degree? Intuitively, if we can't even tell if communities exist, we have no hope of reconstructing them.
I will assume throughout that the number of groups $q$ and the parameters $c_\mathrm{in}$ and $c_\mathrm{out}$ are known. There are ways to learn $c_\mathrm{in}$ and $c_\mathrm{out}$ from the graph, at least when detection is possible; choosing $q$ is a classic ``model selection'' problem, and there are deep philosophical debates about how to solve it.
I will focus on reconstruction and detection, both because that's what I know best, and because they have the strongest analogies with physics. In particular, they possess phase transitions in the constant-degree regime. If the community structure is too weak or the graph too sparse, they suddenly become impossible: no algorithm can label the vertices better than chance, or distinguish $G$ from a purely random graph.
For $q \ge 5$ groups, and the disassortative case with $q = 4$, there are two distinct thresholds: a computational one above which efficient algorithms are known to exist, and an information-theoretic one below which the graph simply doesn't contain enough information to solve the problem. In between, there is a regime where detection and weak reconstruction are possible, but believed to be exponentially hard---there is enough information in the graph to succeed, but we conjecture that any algorithm takes exponential time.
In order to locate these phase transitions, we need to explore an analogy between statistical inference and statistical physics.
\section{Bayes and Ising}
It is easy to write down the probability $P(G \mid \sigma)$ that the block model will produce a given graph $G$ assuming a fixed group assignment $\sigma$. Since the edges are generated independently according to~\eqref{eq:sbm-general}, this is just a product:
\[
P(G \mid \sigma) = \prod_{(i,j) \in E} p_{\sigma_i,\sigma_j}
\prod_{(i,j) \notin E} \left(1 - p_{\sigma_i,\sigma_j} \right) \, .
\]
In the symmetric case~\eqref{eq:symmetric}, we can rewrite this as
\begin{equation}
\label{eq:pgsigma}
P(G \mid \sigma) =
p_\mathrm{out}^m (1-p_\mathrm{out})^{{n \choose 2}-m}
\prod_{(i,j) \in E} \left( \frac{p_\mathrm{in}}{p_\mathrm{out}} \right)^{\delta_{\sigma_i,\sigma_j}}
\prod_{(i,j) \notin E} \left( \frac{1-p_\mathrm{in}}{1-p_\mathrm{out}} \right)^{\delta_{\sigma_i,\sigma_j}}
\end{equation}
where $m$ is the total number of edges, and $\delta_{rs} = 1$ if $r=s$ and $0$ otherwise.
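To make this concrete, here is a minimal Python sketch (mine, not taken from the references; all names are illustrative) that evaluates $\log P(G \mid \sigma)$ for a given labeling by looping over all pairs of vertices, exactly as in~\eqref{eq:pgsigma}. For a large sparse graph one would sum over the edges only, with a single correction term for the non-edges, but the quadratic loop keeps the correspondence with the formula transparent.
\begin{verbatim}
import itertools
import numpy as np

def log_likelihood(n, edges, sigma, p_in, p_out):
    """log P(G | sigma) in the symmetric block model."""
    edge_set = {tuple(sorted(e)) for e in edges}
    ll = 0.0
    for i, j in itertools.combinations(range(n), 2):
        p = p_in if sigma[i] == sigma[j] else p_out
        ll += np.log(p) if (i, j) in edge_set else np.log(1.0 - p)
    return ll
\end{verbatim}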
Our goal is to invert the block model, recovering $\sigma$ from $G$. In terms of Bayesian inference, this means computing the \emph{posterior} distribution: the conditional distribution of $\sigma$ given $G$, i.e., given the fact that the block model produced $G$. According to Bayes' rule, this is
\begin{equation}
\label{eq:z}
P(\sigma \mid G) = \frac{1}{Z} \,P(G \mid \sigma) P(\sigma)
\quad \text{where} \quad
Z = P(G) = \sum_{\sigma'} P(G \mid \sigma') P(\sigma') \, .
\end{equation}
The normalization factor $Z$ is the total probability that the block model produces $G$, summed over all $q^n$ group assignments. It is an important quantity in itself, but for now we simply note that it is a function only of $G$ and the parameters $p_\mathrm{in}, p_\mathrm{out}$ of the model, not of $\sigma$. Furthermore, since we assumed that $\sigma$ is uniformly random, the prior probability $P(\sigma)$ is just the constant $q^{-n}$. This leaves us with a simple proportionality
\[
P(\sigma \mid G) \propto P(G \mid \sigma) \, .
\]
We can also remove the constant before the products in~\eqref{eq:pgsigma}, giving
\begin{equation}
\label{eq:posterior}
P(\sigma \mid G) \propto \prod_{(i,j) \in E} \left( \frac{p_\mathrm{in}}{p_\mathrm{out}} \right)^{\delta_{\sigma_i,\sigma_j}}
\prod_{(i,j) \notin E} \left( \frac{1-p_\mathrm{in}}{1-p_\mathrm{out}} \right)^{\delta_{\sigma_i,\sigma_j}} \, .
\end{equation}
In the assortative case $p_\mathrm{in} > p_\mathrm{out}$, this says that each edge $(i,j)$ of $G$ makes it more likely that $i$ and $j$ belong to the same group---since this edge is more likely to exist when that is true, and this edge does indeed exist. Similarly, each non-edge $(i,j)$ makes it more likely that $i$ and $j$ belong to different groups. In the sparse case where $p_\mathrm{in}, p_\mathrm{out} = O(1/n)$ the effect of the non-edges is weak, and we will often ignore them for simplicity, but they prevent too many vertices from being in the same group.
Many would call~\eqref{eq:posterior} a graphical model or Markov random field. Each edge has a weight that depends on the state of its endpoints, and the probability of a state $\sigma$ is proportional to the product of these weights. But it is also familiar to physicists, who like to describe probability distributions in terms of a \emph{Hamiltonian}\footnote{Yes, the same Hamilton the paths are named after.} or energy function $H(\sigma)$.
Physical systems tend to be in low-energy states---rocks fall down---but thermal fluctuations kick them up to higher-energy states some of the time. Boltzmann taught us that at equilibrium, the resulting probability distribution is
\begin{equation}
\label{eq:boltzmann}
P(\sigma) \propto \mathrm{e}^{-H(\sigma) / T} \, ,
\end{equation}
where $T$ is the temperature. When the system is very hot, this distribution becomes uniform; as $T$ approaches absolute zero, it becomes concentrated at the ground states, i.e., the $\sigma$ with the lowest possible energy. Thus we can think of $T$ as the amount of noise.
Physicists use the Hamiltonian to describe interactions between variables. In a block of iron, each atom's magnetic field can be pointed up or down. Neighboring atoms prefer to be aligned, giving them a lower energy if they agree. Given a graph, the energy of a state $\sigma$ is
\begin{equation}
\label{eq:potts}
H(\sigma) = - J \sum_{(i,j) \in E} \delta_{\sigma_i,\sigma_j} \, ,
\end{equation}
for a constant $J$ which measures the strength of the interaction. When $q=2$ this is the Ising model of magnetism, and for $q > 2$ it is called the Potts model. The cases $J > 0$ and $J < 0$ are called \emph{ferromagnetic} and \emph{antiferromagnetic} respectively; iron is a ferromagnet, but there are also antiferromagnetic materials.
As the reader may already have divined, if we make the right choice of $J$ and $T$, the posterior distribution $P(\sigma \mid G)$ in the block model will be exactly the Boltzmann distribution of the Ising model on $G$. If we compare~\eqref{eq:boltzmann} with our posterior distribution~\eqref{eq:posterior} and ignore the non-edges, we see that
\[
\frac{p_\mathrm{in}}{p_\mathrm{out}} = \mathrm{e}^{J/T}
\quad \text{so} \quad
\frac{J}{T} = \log \frac{p_\mathrm{in}}{p_\mathrm{out}} \, ,
\]
so assortativity and disassortativity correspond to ferromagnetic and antiferromagnetic models respectively. (In the assortative case, the non-edges cause a weak global antiferromagnetic interaction, which keeps the groups of roughly equal size.) As the community structure gets stronger, increasing $p_\mathrm{in}/p_\mathrm{out}$, we can think of this either as strengthening the interactions between neighbors, i.e., making $J$ bigger, or reducing the noise, i.e., making $T$ smaller.
Of course, most magnets don't have a ``correct'' state. The block model is a special case: there is a true underlying state $\sigma$, and $G$ is generated in a way that is correlated with $\sigma$. We say that $G$ has $\sigma$ ``planted'' inside it.
We can think of the block model as a noisy communication channel, where Nature is trying to tell us $\sigma$, but can only do so by sending us $G$. The posterior distribution $P(\sigma \mid G)$ is all we can ever learn about $\sigma$ from $G$. Even if we have unlimited computational resources, if this distribution doesn't contain enough information to reconstruct $\sigma$, there is nothing we can do. The question is thus to what extent $P(\sigma \mid G)$ reveals the ground truth.
In particular, suppose we can somehow find the ground state, i.e., the $\hat{\sigma}$ with the lowest energy and therefore the highest probability:
\[
\hat{\sigma}
= \mathrm{argmin}_\sigma \,H(\sigma)
= \mathrm{argmax}_\sigma \,P(\sigma \mid G) \, .
\]
Some people call $\hat{\sigma}$ the maximum likelihood estimate (MLE), or if we take the prior $P(\sigma)$ into account, the maximum a posteriori estimate (MAP). Exact reconstruction is information-theoretically possible if and only if $\hat{\sigma} = \sigma$: that is, if the community structure is strong enough, or the graph is dense enough, that the most likely state is in fact the planted one. As discussed above, this occurs when the average degree is a large enough constant times $\log n$.
But from an inference point of view, this seems somewhat optimistic. Modern data sets are massive, but so is the number of variables we are typically trying to infer. Getting all of them exactly right (in this case, all $n$ of the $\sigma_i$) seems to be asking a lot. It is also strange from a physical point of view: it would be as if all $n=10^{26}$ atoms in a block of iron were aligned. Physical systems are sparse---each atom interacts with just a few neighbors, and thermal noise will always flip a constant fraction of atoms the wrong way. The most we can hope for is that the fraction of atoms pointing up, say, is some constant greater than $1/2$, so that the block of iron is magnetized---analogous to weak reconstruction, where the fraction of nodes labeled correctly is a constant greater than $1/q$. Are there regimes where even this is too much to ask?
\begin{figure}
\begin{center}
\includegraphics[width=0.33\columnwidth,angle=90]{ising-cold-new}
\includegraphics[width=0.33\columnwidth,angle=90]{ising-crit-new}
\includegraphics[width=0.33\columnwidth,angle=90]{ising-hot-new}
\end{center}
\caption{Typical states of the Ising model on a $512 \times 512$ lattice at three different temperatures. When $J/T$ is large enough (left) there are long-range correlations between the vertices; there are islands of the ``wrong'' state, but the fraction in the ``right'' state is bounded above $1/2$. When $J/T$ is too small (right) the system is unmagnetized: correlations between vertices decay exponentially with distance, and in the limit $n \to \infty$ half of the vertices point in each direction.
(From~\cite{moore-mertens-book}.)\label{fig:ising}}
\end{figure}
\section{Temperature and Noise}
Pierre Curie discovered that iron has a phase transition where, at a critical temperature, its magnetization suddenly drops to zero because an equal fraction of atoms point up and down (see Figure~\ref{fig:ising} for the Ising model on the square lattice). When the system is too hot and noisy, or equivalently if the interactions between neighbors are too weak, the correlations between atoms decay exponentially with distance, and the fraction of atoms pointed in each direction tends to $1/2$ as $n \to \infty$. In the same way, in community detection we expect phase transitions where, if the community structure is too weak or the graph too sparse, the fraction of vertices labeled correctly will tend to $1/q$, no better than chance.
\pagebreak
To make this precise, for each vertex $i$ define its \emph{marginal distribution}, i.e., the total probability that it belongs to a particular state $r$ according to the posterior distribution:
\begin{equation}
\label{eq:marginal}
\psi^i_r = P( \sigma_i = r \mid G ) = \sum_{\sigma: \sigma_i = r} P(\sigma \mid G) \, .
\end{equation}
We can think of the magnetization as the average, over all vertices, of the probability that they are labeled correctly. Rather than taking the ground state, another way to estimate $\sigma$ is to assign each vertex to its most likely group,
\[
\sigma^*_i = \mathrm{argmax}_r \,\psi^i_r \, .
\]
This estimator, if we can compute it, maximizes the expected fraction of vertices labeled correctly~\cite{devroye-book,Iba99}.
Thus the best possible algorithm for weak reconstruction is to find the marginals. If they are uniform, so that every vertex is equally likely to belong to every group, the system is unmagnetized, and even weak reconstruction is impossible. (The alert reader will object that, due to the permutation symmetry between the groups, the marginals $\psi^i$ are uniform anyway. You're right, but we can break this symmetry by fixing the labels of a few vertices.)
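For a toy graph we can compute these marginals exactly by brute force, summing over all $q^n$ assignments as in~\eqref{eq:marginal} and conditioning on $\sigma_0 = 0$ to break the permutation symmetry just discussed. Here is a sketch (mine, feasible only for very small $n$):
\begin{verbatim}
import itertools
import numpy as np

def exact_marginals(n, edges, q, p_in, p_out):
    edge_set = {tuple(sorted(e)) for e in edges}
    psi = np.zeros((n, q))
    for sigma in itertools.product(range(q), repeat=n):
        if sigma[0] != 0:            # fix one label to break symmetry
            continue
        w = 1.0                      # P(G | sigma), up to normalization
        for i, j in itertools.combinations(range(n), 2):
            p = p_in if sigma[i] == sigma[j] else p_out
            w *= p if (i, j) in edge_set else 1.0 - p
        for i in range(n):
            psi[i, sigma[i]] += w
    return psi / psi.sum(axis=1, keepdims=True)
\end{verbatim}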
\pagebreak
Let's pause here, and note that we are crossing a cultural divide. The probability $P(G \mid \sigma)$ is a perfectly good objective function, and maximizing it gives the maximum likelihood estimator $\hat{\sigma}$. This is the single most likely community structure the graph could have. From a computer science point of view, solving this optimization problem seems like the right thing to do. What could be wrong with that?
What we are saying is that rather than focusing on the ground state $\hat{\sigma}$, we need to think about the entire ``energy landscape'' of possible community structures. If $\hat{\sigma}$ is only one of many local optima that have nothing in common with each other, it doesn't tell us anything about the ground truth---it is just overfitting, reacting to noise in the data rather than to an underlying pattern, like the bisections in Figure~\ref{fig:3reg}. In a bumpy landscape with many hills, one hill is probably a little higher than the others. If this hill corresponds to the best way to build a rocket, you should use it. But if it corresponds to a hypothesis about a noisy data set, how confident should you be?
Bayesian inference demands that we understand the posterior distribution of our model, not just its maximum, and embrace the fact that we are uncertain about it. The most likely single state in a ferromagnet is when all the atoms are aligned, but this is a bad guide to iron's physical properties. Focusing on the ground state is like pretending the system is at absolute zero, which corresponds to assuming that $G$ is much more structured than it really is. By focusing on the marginals instead, we ask what the likely $\sigma$ agree on.
When data is sparse or uncertain, the average of many likely fits of a model is often better than the single ``best'' fit.
How can we compute these marginals, and thus the ``magnetization'' of the block model? One approach, popular throughout machine learning and computational physics, is Monte Carlo sampling. We can flip the state of one vertex at a time, where the probability of each move depends on the ratio between the new and old values of the posterior~\eqref{eq:posterior}, or equivalently how much this move would raise or lower the energy. After a long enough time, the resulting states are essentially samples from the equilibrium distribution, and taking many such samples gives us good estimates of the marginals. This raises wonderful issues about how long it takes for this Markov chain to reach equilibrium, phase transitions between fast and slow mixing, and so on~\cite{LevinPeresWilmer2006}.
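As an illustration, here is a heat-bath (Gibbs) sketch of such a sampler, in which each step resamples a single vertex from its conditional distribution under the posterior~\eqref{eq:posterior}, non-edges included. (This is my illustrative code, not an algorithm from the references.) Running many sweeps and recording how often each vertex takes each label gives estimates of the marginals.
\begin{verbatim}
import numpy as np

def gibbs_step(adj, sigma, q, p_in, p_out, rng):
    """Resample one vertex from its posterior conditional."""
    n = len(sigma)
    i = rng.integers(n)
    deg = np.zeros(q)    # neighbors of i in each group
    tot = np.zeros(q)    # all other vertices in each group
    for j in range(n):
        if j == i:
            continue
        tot[sigma[j]] += 1
        if adj[i, j]:
            deg[sigma[j]] += 1
    logw = np.empty(q)
    for r in range(q):
        p = np.where(np.arange(q) == r, p_in, p_out)
        logw[r] = np.sum(deg * np.log(p) + (tot - deg) * np.log(1 - p))
    w = np.exp(logw - logw.max())    # normalize in log space
    sigma[i] = rng.choice(q, p=w / w.sum())
\end{verbatim}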
Instead, we will use a method that tries to compute the marginals directly. In addition to being an algorithm that is efficient and accurate under some circumstances, it lets us calculate analytically where phase transitions occur, and gives a hint of why, in some regimes, community detection might be possible but exponentially hard.
\section{The Cavity Method and Belief Propagation}
Each atom in a material is affected by its neighbors, and it affects them in turn. In the Ising model, the probability that an atom points up depends on the probability that each of its neighbors does. If we start with an initial estimate of these probabilities, we can repeatedly update each atom's probability based on those of its neighbors. Hopefully, this process converges to a fixed point, which (also hopefully) gives the correct probabilities at equilibrium.
This idea is called the \emph{cavity method} in physics~\cite{mezard-parisi-virasoro}, where it was invented to solve spin glasses---systems with random mixtures of ferromagnetic and antiferromagnetic interactions, such as the Sherrington-Kirkpatrick model~\cite{sherrington-kirkpatrick}. But it turns out to be deeply related to message-passing algorithms such as belief propagation that were developed in the AI community a few years earlier, including in the Turing Award-winning work of Judea Pearl~\cite{pearl}.
As a warm-up, suppose you are a vertex $i$, and you know the marginals~\eqref{eq:marginal} of your neighbors. Using Bayes' rule as before, the fact that an edge exists between you and each of your neighbors affects your own marginal: for each neighbor $j$ in group $s$, the edge $(i,j)$ multiplies the relative probability that you are in group $r$ by $p_{rs}$. Let's make the wild assumption that your neighbors' labels are independent, i.e., that their joint probability distribution is just the product of their marginals. Then your own marginal is given by
\begin{equation}
\label{eq:naive}
\psi^i_r \propto \prod_{j: (i,j) \in E} \sum_{s=1}^q \psi^j_s \,p_{rs} \, ,
\end{equation}
where $\propto$ hides the normalization we need to make $\sum_{r=1}^q \psi^i_r = 1$. We can iterate this equation, updating the marginal of a randomly chosen vertex at each step, until the marginals converge to a fixed point.\footnote{There could easily be $q!$ different fixed points, corresponding to permutations of the groups. Any one of them will do.}
This approach is quite naive, and it is appropriately called ``naive Bayes.'' While it takes interactions into account in finding the marginals, once they are found it assumes the vertices are independent. To put this differently, it assumes the posterior distribution is a product distribution,
\[
P(\sigma \mid G) \approx \prod_{i=1}^n P(\sigma_i \mid G) = \prod_{i=1}^n \psi^i_{\sigma_i} \, .
\]
\begin{figure}
\begin{center}
\includegraphics[width=1.4in]{bp}
\end{center}
\caption{In belief propagation, each vertex $i$ sends a message $\psi^{i \to j}$ to each of its neighbors $j$, consisting of an estimate of its marginal based on the messages it receives from its other neighbors $k \ne j$.\label{fig:bp}}
\end{figure}
Let's try something slightly less naive. We will assume our neighbors are independent of each other, when conditioned on our own state. Equivalently, we assume our neighbors are correlated only through us. We can model this by having each vertex $i$ send each of its neighbors $j$ a ``message'' $\psi^{i \to j}$, which is an estimate of what $i$'s marginal would be if $j$ were not there---or more precisely, if we did not know whether or not there is an edge between $i$ and~$j$. As shown in Figure~\ref{fig:bp}, $\psi^{i \to j}$ is an estimate of $i$'s marginal based only on $i$'s other neighbors $k \ne j$. The update equation~\eqref{eq:naive} then becomes
\begin{equation}
\label{eq:bp}
\psi^{i \to j}_r \propto \prod_{\substack{k: (i,k) \in E \\ k \ne j}} \sum_{s=1}^q \psi^{k \to i}_s \,p_{rs} \, .
\end{equation}
The non-edges can be treated as a global interaction~\cite{Decelle2011} which we omit here.
Belief propagation consists of initializing the messages randomly and then repeatedly updating them with~\eqref{eq:bp}. We typically do this asynchronously, choosing a vertex uniformly at random and updating its messages to all its neighbors. If all goes well, this procedure converges quickly, and the resulting fixed point gives a good estimate of the marginals. To compute the marginal of each vertex we use all its incoming messages,
\begin{equation}
\label{eq:bp-marginal}
\psi^i_r \propto \prod_{j: (i,j) \in E} \sum_{s=1}^q \psi^{j \to i}_s \,p_{rs} \, .
\end{equation}
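Putting~\eqref{eq:bp} and~\eqref{eq:bp-marginal} together, a compact sketch of the whole procedure looks like this (again my code, not from the references; non-edges are ignored as discussed above, and since the messages are normalized we can use $c_\mathrm{in}, c_\mathrm{out}$ in place of $p_\mathrm{in}, p_\mathrm{out}$):
\begin{verbatim}
import numpy as np

def belief_propagation(neighbors, q, c_in, c_out, iters=50, seed=0):
    """neighbors: dict mapping each vertex to a list of its neighbors."""
    rng = np.random.default_rng(seed)
    p = np.full((q, q), float(c_out))
    np.fill_diagonal(p, c_in)
    # one message per directed edge (i -> j), initialized at random
    msg = {(i, j): rng.dirichlet(np.ones(q))
           for i in neighbors for j in neighbors[i]}
    keys = list(msg)
    for _ in range(iters):
        rng.shuffle(keys)                  # asynchronous update order
        for (i, j) in keys:
            m = np.ones(q)
            for k in neighbors[i]:
                if k != j:
                    m *= p @ msg[(k, i)]   # sum_s p_{rs} psi^{k->i}_s
            msg[(i, j)] = m / m.sum()
    psi = {}            # marginals from all incoming messages
    for i in neighbors:
        m = np.ones(q)
        for k in neighbors[i]:
            m *= p @ msg[(k, i)]
        psi[i] = m / m.sum()
    return psi
\end{verbatim}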
We define the two-point marginal $\psi^{ij}_{rs}$ as the joint distribution of $\sigma_i$ and $\sigma_j$,
\[
\psi^{ij}_{rs} = P( \sigma_i=r \wedge \sigma_j=s \mid G ) \, ,
\]
and for neighboring pairs $i,j$ we estimate this from the messages as
\begin{equation}
\label{eq:bp-marginal2}
\psi^{ij}_{rs} \propto \psi^{i \to j}_r \psi^{j \to i}_s \,p_{rs} \, .
\end{equation}
\pagebreak
Note that the message $i$ sends to $j$ does not depend on the message $j$ sends to $i$. Why is this a good idea? One intuitive reason is that it avoids an ``echo chamber'' where information bounces back and forth between two vertices, being pointlessly amplified. It brings each vertex fresh information from elsewhere in the network, rather than just confirming its own beliefs.\footnote{At this point, I usually make a comment about talk radio and American politics.}
Another reason to like~\eqref{eq:bp} is that it is exact on trees. If the only paths between your neighbors go through you, then the conditional independence assumption is really true.
If $G$ is not a tree, and your neighbors are connected by paths that don't go through you, we might still hope that conditional independence is a good approximation, if these paths are long and correlations decay rapidly with distance. As a result, belief propagation can sometimes be shown to be asymptotically correct in ``locally treelike'' graphs, where most vertices do not lie on any short loops. Standard counting arguments show that sparse graphs generated by the block model, like sparse Erd\H{o}s-R\'enyi\ graphs, are in fact locally treelike with high probability.\footnote{People who deal with real networks know that they are far from locally treelike---they are full of short loops, e.g., because people introduce their friends to each other. In this setting belief propagation is simply wrong, but in practice it is often not far off, and its speed makes it a useful algorithm as long as the number of groups is small.}
We can also derive~\eqref{eq:bp} by approximating the posterior with a distribution that takes correlations between neighbors into account, replacing the product distribution of each neighboring pair with its two-point marginal:
\begin{align}
\label{eq:bethe}
P(\sigma \mid G)
&\approx \prod_{i=1}^n P(\sigma_i \mid G)
\prod_{(i,j) \in E} \frac{P(\sigma_i, \sigma_j \mid G)}{P(\sigma_i \mid G) P(\sigma_j \mid G)} \nonumber \\
&= \prod_{i=1}^n \psi^i_{\sigma_i}
\prod_{(i,j) \in E} \frac{\psi^{ij}_{\sigma_i,\sigma_j}}{\psi^i_{\sigma_i} \psi^j_{\sigma_j}} \, .
\end{align}
Stable fixed points of~\eqref{eq:bp} are local minima, as a function of the messages, of the Kullback-Leibler divergence between the true posterior and~\eqref{eq:bethe}. Equivalently, they are local minima of a quantity called the Bethe free energy~\cite{yedidia-freeman-weiss}. But while~\eqref{eq:bethe} is exact if $G$ is a tree, for general graphs it doesn't even sum to one---in which case approximating the posterior this way is rather fishy.
To my knowledge, belief propagation for the block model first appeared in~\cite{hastings}. The authors of~\cite{Decelle2011,Decelle2011a} used it to derive, analytically but nonrigorously, the location of the detectability transition, and also to conjecture the detectable-but-hard regime where detection and reconstruction are information-theoretically possible but exponentially hard.\footnote{Some earlier physics papers also conjectured a detectability transition~\cite{reichardt-leone} and a hard regime~\cite{hu-ronhovde-nussinov} based on other methods, but did not locate these transitions precisely.} They did this by considering the possible fixed points of belief propagation, whether they are stable or unstable, and how often random initial messages converge to them. This brings us to the next section.
\section{Stability and Non-Backtracking Walks}
In the symmetric case~\eqref{eq:symmetric} of the block model, belief propagation has a trivial fixed point: namely, where all the messages are uniform, $\psi^{i \to j}_r = 1/q$. If it gets stuck there, then belief propagation does no better than chance. A good question, then, is whether this fixed point is stable or unstable. If we perturb it slightly, will belief propagation fly away from it---hopefully toward the truth---or fall back in?
The point at which this fixed point becomes unstable is called, among other things, the Kesten-Stigum threshold. We will see that it occurs when
\begin{equation}
\label{eq:kesten-stigum-cin-cout}
|c_\mathrm{in}-c_\mathrm{out}| > q \sqrt{c} \, ,
\end{equation}
where $c$ is the average degree~\eqref{eq:avgdeg}. It was conjectured in~\cite{Decelle2011,Decelle2011a} that this is the computational threshold: that polynomial-time algorithms for weak reconstruction exist if and only if~\eqref{eq:kesten-stigum-cin-cout} holds. Moreover, they conjectured that belief propagation is optimal, i.e., it achieves the highest possible accuracy.
The scaling of~\eqref{eq:kesten-stigum-cin-cout} is intuitive, since it says that the expected difference in the number of neighbors you have inside and outside your group has to be at least proportional to the $O(\sqrt{c})$ fluctuations we see by chance. What is interesting is that this behavior is sharp, and occurs at a specific constant.
The positive side of this conjecture is now largely proved. For the case $q=2$, efficient algorithms for weak reconstruction when~\eqref{eq:kesten-stigum-cin-cout} holds were given in~\cite{massoulie2014,mns-proof}, and in~\cite{mns-colt} a form of belief propagation was shown to be optimal when $(c_\mathrm{in}-c_\mathrm{out})/\sqrt{c}$ is sufficiently large. For $q > 2$, another variant of belief propagation~\cite{abbe-sandon-more-groups} was shown to achieve weak reconstruction. (Earlier papers~\cite{coja-mossel-vilenchik,coja-adaptive} had shown how to find planted colorings and bisections a constant factor above the threshold.) Thus weak reconstruction is both possible and feasible above the Kesten-Stigum threshold.
In this section we describe how to locate the Kesten-Stigum threshold analytically. Our discussion differs from the chronology in the literature, and has the clarity of hindsight; it will also introduce us to a class of spectral algorithms. First suppose that the messages are almost uniform,
\[
\psi^{i \to j}_r = \frac{1}{q} + \epsilon^{i \to j}_r \, .
\]
If we substitute this in~\eqref{eq:bp} and expand to first order in $\epsilon$, the update equation becomes a linear operator on $\epsilon$, multiplying it by a matrix of derivatives:
\[
\epsilon := M \epsilon
\quad \text{where} \quad
M_{((i,j),r),((k,\ell),s)} = \frac{\partial \psi^{i \to j}_r}{\partial \psi^{k \to \ell}_s} \, .
\]
If $M$ has an eigenvalue whose absolute value is greater than $1$, the uniform fixed point is unstable. If not, it is stable, at least locally.
Determining $M$'s eigenvalues might seem a little daunting, since it is a $2mq$-dimensional matrix. But it can be written quite compactly in terms of two simpler matrices~\cite{coja-mossel-vilenchik,Decelle2011,non-backtracking}. First we consider the effect of a single edge $(k,i)$. If we have no other information, then $i$'s outgoing message to $j$ is the one it receives from $k$, multiplied by a stochastic transition matrix $T$:
\[
\psi^{i \to j} = T \psi^{k \to i}
\quad \text{where} \quad
T_{rs} = \frac{p_{rs}}{\sum_{r'} p_{r's}} \, .
\]
This is again just Bayes' rule, and $T$ is just the matrix $p$ normalized so that its columns sum to $1$. In the symmetric case~\eqref{eq:symmetric} this becomes
\begin{equation}
\label{eq:t-lambda}
T = \frac{1}{qc} \begin{pmatrix}
c_\mathrm{in} &\ldots &c_\mathrm{out} \\
\vdots &\ddots \\
c_\mathrm{out} & & c_\mathrm{in}
\end{pmatrix}
= \lambda \mathbf{1} + (1-\lambda) \frac{J}{q} \, ,
\end{equation}
where $\mathbf{1}$ is the $q$-dimensional identity matrix, $J$ is the $q \times q$ matrix of all $1$s,
and
\begin{equation}
\label{eq:lambda}
\lambda = \frac{c_\mathrm{in}-c_\mathrm{out}}{qc}
\end{equation}
is the second eigenvalue of $T$. Probabilistically, we can interpret~\eqref{eq:t-lambda} as copying $k$'s label to $i$ with probability $\lambda$, and choosing $i$'s label uniformly at random---just as if this edge did not exist---with probability $1-\lambda$.
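As a quick numerical sanity check (illustrative code, nothing more), we can confirm that this $T$ has leading eigenvalue $1$ and second eigenvalue $\lambda$ with multiplicity $q-1$:
\begin{verbatim}
import numpy as np

q, c_in, c_out = 3, 5.0, 1.0
c = (c_in + (q - 1) * c_out) / q
T = (np.full((q, q), c_out) + (c_in - c_out) * np.eye(q)) / (q * c)
print(np.sort(np.linalg.eigvalsh(T))[::-1])
# -> [1.0, 0.5714..., 0.5714...]; indeed (c_in - c_out)/(q*c) = 4/7
\end{verbatim}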
Next, we define the following $2m$-dimensional matrix, whose rows and columns correspond to directed edges:
\[
B_{(i,j),(k,\ell)} = \begin{cases}
1 & \mbox{if $\ell=i$ and $k \ne j$} \\
0 & \mbox{otherwise} \, .
\end{cases}
\]
We call this the non-backtracking matrix; in graph theory it is also called the Hashimoto matrix~\cite{hashimoto1989zeta}. It corresponds to walks on $G$ where we are allowed to move in any direction except the one we just came from. We can move from $k \to i$ to $i \to j$, but not flip the arrow to $i \to k$.
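Constructing $B$ from an edge list takes only a few lines. Here is a dense sketch (mine; fine for small graphs, though for large ones a sparse representation is essential):
\begin{verbatim}
import numpy as np

def non_backtracking(edges):
    """B[(i,j),(k,l)] = 1 iff l == i and k != j."""
    directed = [(i, j) for i, j in edges] + [(j, i) for i, j in edges]
    index = {e: t for t, e in enumerate(directed)}
    B = np.zeros((len(directed), len(directed)))
    for (i, j) in directed:
        for (k, l) in directed:
            if l == i and k != j:
                B[index[(i, j)], index[(k, l)]] = 1.0
    return B, directed
\end{verbatim}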
A few lines of algebra show that
\[
M = B \otimes T \, .
\]
In other words, $M$ is the $2mq$-dimensional matrix formed by replacing each $1$ in $B$ with a copy of $T$. The appearance of $B$ encodes the non-backtracking nature of belief propagation, where $\psi^{i \to j}$ depends only on the incoming messages $\psi^{k \to i}$ for $k \ne j$.
Now, the eigenvalues of the tensor product $B \otimes T$ are the products of eigenvalues of $B$ and $T$, so the question is whether any of these products exceed $1$. However, there are a few eigenvalues that we can ignore. First, being a stochastic matrix, $T$ has an eigenvalue $1$, with the uniform eigenvector $(1,\ldots,1)$. A perturbation in this direction would mean increasing $\psi^{k \to i}_r$ for all $r$. But since the messages are normalized so that $\sum_{r=1}^q \psi^{k \to i}_r = 1$, we must have $\sum_{r=1}^q \epsilon^{i \to j}_r = 0$. Thus we can project the uniform vector away, leaving us just with $T$'s second eigenvalue $\lambda$.
What about $B$? Given a Poisson degree distribution, an edge can move to $c$ new edges in expectation, so $B$'s leading eigenvalue is $c$. However, the corresponding eigenvector $v$ is nonnegative and roughly uniform. Perturbing the messages by $v \otimes w$ for a $q$-dimensional vector $w$ would correspond to changing all the marginals in the same direction $w$, but this would make some groups larger and others smaller. At this point we (rather informally) invoke the non-edges that we've been ignoring up to now, and claim that they counteract this kind of perturbation---since if too many vertices were in the same group, the non-edges within that group would be very unlikely.
It was conjectured in~\cite{non-backtracking}, and proved in~\cite{bordenave-lelarge-massoulie}, that with high probability $B$'s second eigenvalue approaches
\begin{equation}
\label{eq:mu}
\max(\mu,\sqrt{c})
\quad \text{where} \quad
\mu = \frac{c_\mathrm{in}-c_\mathrm{out}}{q} = c\lambda \, .
\end{equation}
To see this intuitively, consider a vector $v$ defined on directed edges $i \to j$ which is $+1$ if $i$ is in the first group, $-1$ if $i$ is in the second group, and $0$ if it is in any of the others. Excluding the vertex it's talking to, each vertex has in expectation $c_\mathrm{in}/q$ incoming messages from its own group and $c_\mathrm{out}/q$ from each other group. If these expectations held exactly, $v$ would be an eigenvector of $B$ with eigenvalue $\mu$. The number of such messages fluctuates from vertex to vertex; but applying $B^\ell$ to $v$ smooths out these fluctuations, counting the total number of messages from each group entering a Poisson random tree of depth $\ell$ at its leaves. Thus $B^\ell v$ converges to an eigenvector with eigenvalue $\mu$ as $\ell$ increases.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{spectrum}
\end{center}
\caption{The spectrum in the complex plane of the non-backtracking matrix of a graph generated by the stochastic block model with $n=4000$, $q=2$, $c_\mathrm{in}=5$, $c_\mathrm{out}=1$, and average degree $c=3$. To the right on the real line we see the leading eigenvalues $c$ and $\mu=(c_\mathrm{in}-c_\mathrm{out})/q=2$. The eigenvector with eigenvalue $\mu$ is correlated with the communities whenever it is outside the bulk, so even ``linearized'' belief propagation---equivalently, spectral clustering with the non-backtracking matrix---will label nodes better than chance.\label{fig:spectrum}}
\end{figure}
Where does the $\sqrt{c}$ in~\eqref{eq:mu} come from? You might enjoy showing that most eigenvalues have absolute value at most $\sqrt{c}$ by considering the trace of $B^\ell (B^T)^\ell$, which corresponds to taking $\ell$ steps forward and $\ell$ steps backward. In fact, in Figure~\ref{fig:spectrum} you can see a ``bulk'' of eigenvalues inside the disk of radius $\sqrt{c}$ in the complex plane. These eigenvectors come from the randomness in the graph, and are uncorrelated with the community structure.
In any case, the relevant eigenvalue of $M = B \otimes T$ is thus $\mu \lambda$, and the uniform fixed point becomes unstable when
\begin{equation}
\label{eq:kesten-stigum-simultaneous}
\mu \lambda > 1 \quad \text{or} \quad \sqrt{c} \lambda > 1 \, ,
\end{equation}
both of which give
\begin{equation}
\label{eq:kesten-stigum}
c \lambda^2 > 1 \, .
\end{equation}
This is the Kesten-Stigum threshold, and applying~\eqref{eq:lambda} gives~\eqref{eq:kesten-stigum-cin-cout}.
If $\mu > \sqrt{c}$, which is also equivalent to~\eqref{eq:kesten-stigum}, then the 2nd through $q$th eigenvectors emerge from the bulk and become correlated with the communities, letting us label vertices according to their participation in these eigenvectors. This suggests that we can skip belief propagation entirely, and use the non-backtracking matrix to perform spectral clustering.
This spectral algorithm was proposed in~\cite{non-backtracking} and proved to work in~\cite{bordenave-lelarge-massoulie}. While it is not quite as accurate as belief propagation, it performs better than chance. (We note that the first algorithms to achieve weak reconstruction above the Kesten-Stigum threshold~\cite{massoulie2014,mns-proof} also worked by counting non-backtracking or self-avoiding walks.) Roughly speaking, if we apply a small random perturbation to the uniform fixed point, the first few steps of belief propagation---powering the matrix of derivatives $M$---already point in a direction correlated with the planted assignment.
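For $q=2$, a bare-bones version of this spectral algorithm fits in a few lines (a sketch under my naming conventions, reusing the $B$-construction sketch above): take the eigenvector of $B$ with the second-largest real eigenvalue, sum its entries over the directed edges leaving each vertex, and label vertices by sign.
\begin{verbatim}
import numpy as np

def nb_spectral_labels(edges, n):
    B, directed = non_backtracking(edges)        # sketch from above
    vals, vecs = np.linalg.eig(B)
    v = vecs[:, np.argsort(-vals.real)[1]].real  # second eigenvector
    score = np.zeros(n)
    for t, (i, j) in enumerate(directed):
        score[i] += v[t]      # aggregate over edges leaving vertex i
    return (score > 0).astype(int)
\end{verbatim}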
In contrast, standard spectral clustering algorithms, using the adjacency matrix or graph Laplacian, fail in the constant-degree regime due to localized eigenvectors around high-degree vertices~\cite{non-backtracking}. On the other hand, if the average degree grows moderately with $n$ so that the classic ``semicircle law'' of random matrix theory takes over, the Kesten-Stigum threshold can also be derived by calculating when a community-correlated eigenvector crosses the boundary of the semicircle~\cite{nadakuditi-newman}.
It may seem to the reader like a coincidence that two events happen simultaneously: $c \lambda^2 = 1$, so that the uniform fixed point becomes unstable, and $\mu = \sqrt{c}$, so that the second eigenvalue emerges from the bulk. In fact, while $\mu$ depends only on the graph, $\lambda$ depends on the parameters we use to run belief propagation. These thresholds coincide when we know the correct parameters of the block model that generated the graph. If we run belief propagation with the wrong parameters, they become separate events, leading to a more complicated set of phase transitions. For instance, if we assume that $\lambda$ is larger than it really is---assuming too much structure in the data---there appears to be a ``spin glass phase'' where the uniform fixed point is unstable but the non-backtracking matrix tells us nothing about the ground truth, and belief propagation fails to converge~\cite{zhang-moore-pnas}.
Finally, another class of algorithms for weak reconstruction, which I will not discuss, use semidefinite relaxations. These succeed almost all the way down to the Kesten-Stigum threshold~\cite{montanari-sen,javanmard-montanari-ricci-tersenghi}: specifically, they succeed at a threshold $c \lambda^2 = f(c)$ where $f(c)$ rapidly approaches $1$ as $c$ increases.
\section{Reconstruction on Trees}
In the previous section, we described efficient algorithms for weak reconstruction above the Kesten-Stigum threshold, proving the positive side of the conjectures made in~\cite{Decelle2011,Decelle2011a}. What about the negative side? For $q=2$, the same papers conjectured that, below the Kesten-Stigum threshold, even weak reconstruction is impossible, and no algorithm can label vertices better than chance. This is a claim about information, not computation---that the block model is simply too noisy, viewed as a communication channel, to get information about the planted assignment from Nature to us. In other words, for $q=2$ the computational and information-theoretic thresholds coincide.
This was proved in~\cite{mossel-neeman-sly-impossible} by connecting the block model to a lovely question that first appeared in genetics and phylogeny. It is called reconstruction on trees, and it is where the Kesten-Stigum threshold was first defined~\cite{KestenStigum66}. We will deal with it heuristically; see~\cite{mossel-peres-survey} for a survey of rigorous results.
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{rabbits}
\end{center}
\caption{The reconstruction problem on trees. With probability $\lambda$, the color is copied from parent to child; with probability $1-\lambda$, the child's color is uniformly random. Many generations later, the majority of the population has the same color as the root with high probability if and only if $c\lambda^2 > 1$. The rabbits in this experiment have $c=2$ children at each step and $\lambda = 0.28$, below the Kesten-Stigum threshold.\label{fig:rabbits}}
\end{figure}
Consider a population of black and white rabbits. They reproduce asexually~\cite{fibonacci}, and each one has $c$ children. With probability $\lambda$, the parent's color is copied to the child; with probability $1-\lambda$, this information is lost and the child's color is uniformly random. If we start with one rabbit, can we tell what color it was by looking at the population many generations later?
After $\ell$ generations, there are $c^\ell$ rabbits. The probability that a given one has an unbroken chain of inheritance, where the color was successfully passed from parent to child down through the generations, is $\lambda^\ell$. Let us call such a descendant ``faithful.'' The expected number of faithful descendants is $c^\ell \lambda^\ell$. If there are enough of them, then we can guess the progenitor's color by taking the majority of the population.
How many faithful descendants do we need? One or two is not enough. Through no fault of their own, each unfaithful descendant has a uniformly random color. If we pretend they are independent, then the black ones will typically outnumber the white ones, or vice versa, by $\Theta(\sqrt{c^\ell})$. The number of faithful descendants must be significantly larger than this imbalance for them to be in the majority with high probability. Thus we need
\begin{equation}
\label{eq:ks-trees}
c^\ell \lambda^\ell \gg \sqrt{c^\ell}
\quad \text{or} \quad
c \lambda^2 > 1 \, ,
\end{equation}
the same threshold we derived for the instability of the uniform fixed point.
On the other hand, if $c \lambda^2 < 1$, then the number of faithful descendants is exponentially smaller than the typical imbalance, and the correlation between the majority and the progenitor's color decays exponentially in $\ell$.
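This heuristic is easy to test numerically. The following sketch (illustrative names; ties are broken in favor of the root's color) simulates the broadcast process on a $c$-ary tree and estimates how often the majority vote recovers the root:
\begin{verbatim}
import numpy as np

def majority_accuracy(c, lam, ell, q=2, trials=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        level = np.zeros(1, dtype=int)        # root has color 0
        for _ in range(ell):
            parents = np.repeat(level, c)     # c children per node
            copied = rng.random(parents.size) < lam
            fresh = rng.integers(q, size=parents.size)
            level = np.where(copied, parents, fresh)
        hits += np.bincount(level, minlength=q).argmax() == 0
    return hits / trials
\end{verbatim}
With $c=2$ and $\lambda=0.28$ as in Figure~\ref{fig:rabbits}, so that $c\lambda^2 < 1$, the accuracy should drift toward $1/2$ as $\ell$ grows; with $\lambda=0.8$, safely above the threshold, it should stay bounded away from $1/2$.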
Of course, it is not obvious that simply taking the majority is the best way to infer the progenitor's color. The optimal strategy, by definition, is Bayesian inference, propagating the probabilities of the colors from the leaves to the root using---you guessed it---belief propagation. But for $q=2$ the condition~\eqref{eq:ks-trees} is known to be both necessary and sufficient~\cite{evans-etal}. It is also the precise threshold for \emph{robust} reconstruction, where the colors of the final population are given to us in a noisy way~\cite{janson-mossel}.
The analogy with the block model should be clear. Above we showed~\eqref{eq:t-lambda} that each edge $(i,j)$ copies $i$'s label to $j$ with probability $\lambda$ and sets $j$'s label uniformly at random with probability $1-\lambda$. Now imagine you want to infer the label of a vertex $i$. Think of its neighbors as its children, their neighbors as grandchildren, and so on. I generously give you the true labels of the vertices $\ell$ steps away---that is, its descendants after $\ell$ generations---and you try to infer the labels in the interior of this ball. Below the tree reconstruction threshold, the amount of information you can gain about $i$ is exponentially small. Thus the tree reconstruction threshold is a lower bound on the information-theoretic threshold for weak reconstruction, and for $q=2$ this matches the Kesten-Stigum threshold~\cite{mossel-neeman-sly-impossible}.
What happens for more than two groups? For $q \ge 4$ the exact threshold for tree reconstruction is not known, but simply taking the plurality of the population is not optimal, and the Kesten-Stigum threshold is not tight~\cite{mossel-beating,mezard-montanari-reconstruction,sly-reconstruction}. For $q=3$ it is believed to be tight, but as yet we have no proof of this. As we will see, for $q \ge 4$ the information-theoretic and computational thresholds in the block model are distinct: in between, reconstruction is information-theoretically possible, but we believe it is computationally hard.
\section{Detection and Contiguity}
We turn now from reconstruction to detection. If $G$ is too sparse or the community structure is too weak, we can't even tell whether $G$ was generated by the block model or the Erd\H{o}s-R\'enyi\ model. What do we mean by this, and how can we prove it?
Let $P(G)$ and $Q(G)$ be the probability of $G$ in the block model and the Erd\H{o}s-R\'enyi\ model $G(n,c/n)$ respectively. These two distributions are not statistically close: that is, $|P-Q|_1 = \sum_G |P(G)-Q(G)|$ is not small. If they were, they would be hard to tell apart even if we had many independent samples of $G$. But we are only given one graph $G$, so we mean that $P$ and $Q$ are close in a weaker sense.
Imagine the following game. I flip a coin, and generate $G$ either from the block model or from the Erd\H{o}s-R\'enyi\ model. I then show you $G$, and challenge you to tell me which model I used. It is easy to see that you can do this with probability greater than $1/2$, simply by counting short loops. For instance, in the limit $n \to \infty$ the expected number of triangles in $G(n,c/n)$ is $c^3/6$, while in the block model it is (exercise!)
\[
\frac{1}{6q^3} \,n^3 \,\mathrm{tr}\ p^3 = \frac{c^3}{6} \left( 1 + (q-1)\lambda^3 \right) \, ,
\]
where $p$ is the $q \times q$ matrix in~\eqref{eq:sbm-general} and $\lambda$ is defined as in~\eqref{eq:lambda}. In both models the number of triangles is asymptotically Poisson, so these distributions overlap, but they are noticeably different for any $\lambda > 0$. A similar calculation~\cite{mossel-neeman-sly-impossible} shows that above the Kesten-Stigum threshold, the number of cycles of length $\ell$ has less and less overlap as $\ell$ increases.
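The reader who wants to see this effect can generate both models and count triangles; here is a brute-force sketch (mine, cubic time, so keep $n$ modest):
\begin{verbatim}
import itertools
import numpy as np

def sample_sbm(n, q, c_in, c_out, rng):
    """One graph from the symmetric block model (c_in = c_out gives ER)."""
    sigma = rng.integers(q, size=n)
    adj = np.zeros((n, n), dtype=bool)
    for i, j in itertools.combinations(range(n), 2):
        p = (c_in if sigma[i] == sigma[j] else c_out) / n
        adj[i, j] = adj[j, i] = rng.random() < p
    return adj

def triangles(adj):
    n = adj.shape[0]
    return sum(adj[i, j] and adj[j, k] and adj[i, k]
               for i, j, k in itertools.combinations(range(n), 3))
\end{verbatim}
Averaged over many samples, the counts should hover near $c^3/6$ for the Erd\H{o}s-R\'enyi\ model and $(c^3/6)(1 + (q-1)\lambda^3)$ for the block model.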
When we say that detection is impossible, we mean that you can't win this game with probability $1-o(1)$. To prove this, we need a concept called \emph{contiguity}. Let $P$ and $Q$ be two distributions, or to be more precise, families of distributions $(P_n), (Q_n)$ on graphs of size $n$. We write $P \trianglelefteq Q$ if, for any event $E$ such that $\lim_{n \to \infty} Q(E) = 0$, we also have $\lim_{n \to \infty} P(E) = 0$. Turning this around, any event $\overline{E}$ that holds with high probability in $Q$ also holds with high probability in $P$. We say that $P$ and $Q$ are mutually contiguous if $P \trianglelefteq Q$ and $Q \trianglelefteq P$.
This idea first appeared in random graph theory, in the proof that random 3-regular graphs have Hamiltonian cycles with high probability. One way to make 3-regular graphs is to start with a random $n$-cycle, add a random perfect matching, and condition on having no multiple edges. These graphs are Hamiltonian by construction---indeed, this is precisely a planted model---and this distribution can be shown to be contiguous to the uniform distribution on 3-regular graphs~\cite{robinson-wormald}.
If we can show that the block model $P$ and the Erd\H{o}s-R\'enyi\ model $Q$ are contiguous, then any algorithm that correctly says ``yes, there are communities'' with high probability in the block model would have to (incorrectly) say that in the Erd\H{o}s-R\'enyi\ model as well. Therefore there is no algorithm which is correct with high probability in both cases.
\pagebreak
This has some interesting scientific consequences. Consider the Ising model, or a model of an epidemic, or any other dynamical process defined in terms of a graph. If $P$ and $Q$ are contiguous, then any quantity produced by this process---the magnetization, the fraction of the population that becomes infected, whatever---must have overlapping distributions when $G$ is generated by $P$ or by $Q$. In particular, if this quantity is tightly concentrated, then it must have the same typical value for both $P$ and $Q$.
How can we prove contiguity? Here we will describe a sufficient condition that implies $P \trianglelefteq Q$. This is already enough to prevent the existence of an algorithm that distinguishes them with high probability, and thus prove a lower bound on the detectability transition.
Our starting point is inspired by statistics. If you have a fancy model $P$ and a null model $Q$, a classic way to tell which one generated the data $G$---and therefore whether the structure $P$ describes is really there---is to compute the likelihood ratio $P(G)/Q(G)$ and reject the null hypothesis if it exceeds a certain threshold. The Neyman-Pearson lemma~\cite{neyman-pearson} shows that this is the best possible test in terms of its statistical power.
If $P$ and $Q$ were nearly disjoint, then $P/Q$ would be either very small or very large, letting us classify $G$ with high probability. Conversely, if $P/Q$ is typically bounded, then $P$ and $Q$ are contiguous. In fact, it's sufficient to bound the expectation of $P/Q$ when $G$ is drawn from $P$, or equivalently its second moment when $G$ is drawn from $Q$:
\begin{equation}
\label{eq:ratio-moments}
\mathbb{E}_P \!\left[ \frac{P}{Q} \right]
= \sum_G P(G) \,\frac{P(G)}{Q(G)}
= \sum_G Q(G) \,\frac{P(G)^2}{Q(G)^2}
= \mathbb{E}_Q \!\left[ \frac{P^2}{Q^2} \right] \, .
\end{equation}
Let $E$ denote the event that $G$ belongs to some set of graphs, and let $1_E$ denote the indicator random variable for this event. If this second moment is bounded by a constant $C$, the Cauchy-Schwarz inequality gives
\begin{align}
P(E) = \mathbb{E}_P [1_E] = \mathbb{E}_Q \!\left[\frac{P}{Q} \,1_E\right]
\le \sqrt{ \mathbb{E}_Q \!\left[ \frac{P^2}{Q^2} \right] \,\mathbb{E}_Q [1_E^2] }
\le \sqrt{ C Q(E) } \, .
\end{align}
Then $Q(E) \to 0$ implies $P(E) \to 0$, and a bounded second moment is enough to imply $P \trianglelefteq Q$. Proving $Q \trianglelefteq P$ takes significantly more work; it uses a refinement of the second moment method pioneered in~\cite{robinson-wormald} that conditions on the number of short cycles. We will not discuss it here.
\pagebreak
If the reader is curious, we can describe how to bound the second moment. Since $P(G)$ is a sum over all possible group assignments~\eqref{eq:z}, expanding the second moment gives a sum over pairs of assignments,
\begin{align*}
\frac{P(G)^2}{Q(G)^2}
&= \frac{1}{q^{2n}} \left( \sum_\sigma \frac{P(G \mid \sigma)}{Q(G)} \right)^2
= \frac{1}{q^{2n}} \sum_{\sigma,\tau} \frac{P(G \mid \sigma) P(G \mid \tau)}{Q(G)^2} \\
&= \frac{1}{q^{2n}} \sum_{\sigma,\tau} \prod_{(i,j)} \begin{cases}
\displaystyle{\frac{p_{\sigma_i,\sigma_j} p_{\tau_i,\tau_j}}{p^2}} & \mbox{if $(i,j) \in E$} \\
\displaystyle{\frac{(1-p_{\sigma_i,\sigma_j})(1-p_{\tau_i,\tau_j})}{(1-p)^2}} & \mbox{if $(i,j) \notin E$} \, .
\end{cases}
\end{align*}
By linearity of expectation, we can move the expectation through this sum. Moreover, since each edge $(i,j)$ exists with probability $p$ in the Erd\H{o}s-R\'enyi\ model, and these events are independent, we can move the expectation into the product. Taking the sparse case $p_{rs} = c_{rs}/n$ and using the approximations $1+x \approx \mathrm{e}^x$ and $1/(1-x) \approx 1+x$, a few lines of algebra give
\begin{align}
\label{eq:second-p}
\mathbb{E}_Q \!\left[ \frac{P^2}{Q^2} \right]
&= \frac{1}{q^{2n}} \sum_{\sigma,\tau}
\prod_{(i,j)} \left( \frac{p_{\sigma_i,\sigma_j} p_{\tau_i,\tau_j}}{p} + \frac{(1-p_{\sigma_i,\sigma_j})(1-p_{\tau_i,\tau_j})}{1-p} \right)
\nonumber \\
&\approx \frac{1}{q^{2n}} \sum_{\sigma,\tau}
\exp\!\left[ \frac{c}{n}
\sum_{(i,j)} \left( \frac{c_{\sigma_i,\sigma_j}}{c} - 1 \right) \left( \frac{c_{\tau_i,\tau_j}}{c} - 1 \right) \right]
\, ,
\end{align}
where the $\approx$ hides a $\Theta(1)$ multiplicative error.
Now, the summand in~\eqref{eq:second-p} is a function only of the number of vertices assigned to each pair of groups by $\sigma$ and $\tau$. Define a $q \times q$ matrix $\alpha$, where $\alpha_{rs}$ is $q$ times the fraction of vertices $i$ such that $\sigma_i=r$ and $\tau_i=s$. If there are $n/q + o(n)$ vertices in each group, $\alpha$ is doubly stochastic. Then in the symmetric case, a few more lines of algebra give
\[
\mathbb{E}_Q \!\left[ \frac{P^2}{Q^2} \right]
= \frac{1}{q^{2n}} \sum_{\sigma,\tau} \exp\!\left[ \frac{c \lambda^2 n}{2} \left( |\alpha|_F^2 - 1 \right) \right] \, ,
\]
where $|\alpha|_F^2 = \mathrm{tr} (\alpha^T \alpha) = \sum_{rs} \alpha_{rs}^2$ denotes the Frobenius norm.
If $\sigma$ and $\tau$ are uncorrelated, then $\alpha_{rs} = 1/q$ for all $r,s$ and $|\alpha|_F^2 = 1$. In that case, the summand is $1$. But when $\sigma$ and $\tau$ are identical, $\alpha=\mathbf{1}$, $|\alpha|_F^2 = q$, and the summand is exponentially large. The question is thus whether the uncorrelated pairs dominate the sum. In physical terms there is a tug of war between entropy---the fact that most pairs of assignments $\sigma, \tau$ are uncorrelated---and energy, which favors correlated pairs.
\pagebreak
We can approximate this sum using Laplace's method (e.g.~\cite{achlioptas-moore-ksat}). For $q=2$ this gives a one-dimensional optimization problem; for larger $q$, it requires us to maximize a function over the space of doubly-stochastic matrices~\cite{achlioptas-naor}. Happily, we find that when $\lambda$ is small enough, entropy wins out, and the second moment is bounded by a constant.
These sorts of calculations, and their refinement conditioned on the number of small cycles, were used in~\cite{mossel-neeman-sly-impossible} to prove contiguity below the Kesten-Stigum threshold for $q=2$, and in~\cite{banks-etal-colt} to prove contiguity as long as
\begin{equation}
\label{eq:banks}
c \lambda^2 < \frac{2 \ln (q-1)}{q-1} \, ,
\end{equation}
giving a lower bound on the information-theoretic detectability threshold for general $q$. Separate arguments in those papers show that reconstruction is also information-theoretically impossible in this range.
To prove an upper bound on the information-theoretic threshold, it suffices to find a property---even one which takes exponential time to verify---that holds with high probability in the block model but not in $G(n,c/n)$. One such property is the existence of a partition which is as ``good'' as the planted one in terms of the number of edges between groups. A simple counting argument~\cite{banks-etal-colt} shows that when $c$ is sufficiently large, with high probability $G(n,c/n)$ has no such partition, and all such partitions in the block model are correlated with the planted one. Thus we can achieve both detection and reconstruction with exhaustive search.
Combined with~\eqref{eq:banks}, this determines the information-theoretic threshold to within a multiplicative constant. It also shows that there are $\lambda$ for which the information-theoretic threshold is below the Kesten-Stigum threshold for $q \ge 5$. In the disassortative case we knew this already; in the planted coloring model we have $c_\mathrm{in}=0$ and $\lambda = -1/(q-1)$, so the Kesten-Stigum threshold is at $c = (q-1)^2$. But a first-moment argument shows that the threshold for $q$-colorability in $G(n,c/n)$ is below this point for $q \ge 5$, so we can distinguish between the two models by searching exhaustively for a $q$-coloring.
A tighter argument that focuses on typical partitions rather than a single good one~\cite{abbe-sandon-isit-2016,abbe-sandon-more-groups-arxiv} shows that there exist $\lambda < 0$ such that the information-theoretic threshold is below the Kesten-Stigum threshold for $q \ge 4$. This proves the conjecture of~\cite{Decelle2011} that these thresholds become distinct for $q \ge 4$ in the disassortative case. The case $q=3$, where they are conjectured to coincide, remains open.
\pagebreak
\section{The Hard Regime}
Now that we know the Kesten-Stigum threshold and the information-theoretic thresholds are different for large enough $q$, what happens in between them? And why do we believe that detection and reconstruction are exponentially hard in this regime? Here's the physics picture---not all of it is rigorous yet, but rapid progress is being made.
\begin{figure}
\begin{center}
\includegraphics[width=0.8\columnwidth]{hard-regime-decelle}
\end{center}
\caption{The accuracy achieved by belief propagation in the planted coloring model with $q=5$ as a function of the average degree $c$, normalized so that $0$ is chance. The blue curve shows the result of initializing the messages randomly; as expected, below the Kesten-Stigum threshold at $c=(q-1)^2=16$, the uniform fixed point is stable and most initial messages fall into it, giving an accuracy of zero (i.e., no better than chance). The green curve shows another fixed point; it is quite accurate, but it has a very narrow basin of attraction. The red curve shows the Bethe free energy difference between these two fixed points. The information-theoretic detectability transition occurs at $c=13.2$ where this curve crosses zero. The clustering transition, where tempting nonuniform fixed points first appear, takes place at $c=12.8$. (From~\cite{Decelle2011}.)\label{fig:hard}}
\end{figure}
Take a look at Figure~\ref{fig:hard}, which is copied from~\cite{Decelle2011}. It shows the result of belief propagation on the planted coloring model with $q=5$ groups. (In this case the hard regime is particularly easy to see numerically.) There are two curves, corresponding to two different fixed points: the uniform one, and an accurate one which is correlated with the planted assignment.
Above the Kesten-Stigum threshold at $c=16$, the uniform fixed point is unstable. Belief propagation flies away from it, and arrives at the accurate fixed point. When $c < 12.8$, on the other hand, the accurate fixed point disappears and belief propagation always ends up at the uniform point.
In between these two values of $c$ the story is more interesting. The uniform fixed point is locally stable, and it attracts most initial states. If we initialize the messages randomly, we fall into it and perform no better than chance (the blue curve in Figure~\ref{fig:hard}). But if we are lucky enough to start close to the accurate fixed point, we find it is locally stable as well, achieving the accuracy shown in green. The problem is that the accurate fixed point has a very small basin of attraction---it only attracts an exponentially small fraction of initial messages. Indeed, we were only able to find it numerically by cheating and initializing the messages with the ground truth.
If we had the luxury of exponential time, we could explore the entire space, and find both the accurate fixed point and the uniform one. How would we choose between them? In Bayesian terms, we want to know which one contributes the most to the total probability $Z=P(G)$ of the graph. As we alluded to above, each fixed point $\psi$ of belief propagation has a quantity called the Bethe free energy: this is an estimate of $-(1/n) \log Z_\psi$, where $Z_\psi$ is the contribution to $Z$ from $\sigma$ distributed according to~\eqref{eq:bethe}. If we believe this estimate, we will select the fixed point with the lowest Bethe free energy, since the corresponding $\sigma$ have exponentially more probability in the posterior distribution. As shown in red in Figure~\ref{fig:hard}, the free energies of the uniform and accurate fixed points cross at $c = 13.2$. This is the information-theoretic threshold: the point at which Bayesian inference, given full access to the posterior distribution, would find an accurate partition.
\pagebreak
While detection is information-theoretically possible above this point, we believe that below the Kesten-Stigum threshold the accurate fixed point is exponentially hard to find. Physically, it lies behind a ``free energy barrier,'' a bottleneck of high-energy, low-probability states that separate it from the uniform, unmagnetized state. This is like a material whose lowest-energy state is a perfect crystal, but which almost always gets stuck in a glassy, amorphous configuration instead. It would rather be in the crystalline state, but it would take longer than the age of the universe to find it. Similar phenomena occur in low-density parity check codes~\cite{mezard-montanari-book}: in some regimes the correct codeword is hidden behind a free energy barrier, and message-passing algorithms for error correction get stuck in a glassy state.
Of course, by ``hard'' here we do not mean that this problem is NP-hard. It seems unlikely that a large class of other problems could be reduced to detection or reconstruction in the block model, even in an average-case sense. But we could hope to prove that these problems are hard for specific classes of algorithms: for instance, that Markov Chain Monte Carlo algorithms take exponential time to find an assignment correlated with the planted one, or that (as seems to be the case) belief propagation converges to the uniform fixed point from all but an exponentially small fraction of initial messages.
Between $c=12.8$ and $c=13.2$ is a curious region. The accurate fixed point still exists, but it has higher Bethe free energy than the uniform one. Even if we knew about it, we would choose the uniform fixed point instead, and conclude that there aren't really any communities. The accurate fixed point corresponds to a set or ``cluster'' of good-looking partitions, which agree with each other on most vertices; but there are exponentially many such clusters, separated from each other by a large Hamming distance, and like the bisections in Figure~\ref{fig:3reg} they would exist in a random graph as well. Focusing on any one of them would put us in danger of overfitting.
These transitions have deep analogies with phenomena in random constraint satisfaction problems~\cite{mezard-montanari-book,krzakala-etal-gibbs,moore-mertens-book}. The information-theoretic threshold is also the \emph{condensation} transition, above which a single cluster dominates the set of solutions; in a planted model, this cluster contains the planted state, and corresponds to the accurate fixed point. The point at which the accurate fixed point first appears is the \emph{clustering} transition, also known as the \emph{dynamical replica symmetry breaking transition}, where the landscape of solutions becomes bumpy. It is also the threshold for \emph{Gibbs extremality}, which is related to spatial decay of correlations in typical states, and for reconstruction on trees~\cite{mezard-montanari-reconstruction}.
Very recent work~\cite{coja-etal-cavity} shows that the Bethe free energy is asymptotically correct in some planted models, including the disassortative case of the block model, and rigorously identifies the information-theoretic threshold with the condensation transition~\cite{coja-new}. This makes it possible, in principle, to locate the information-theoretic threshold exactly. While the resulting calculation is not fully explicit, this gives a rigorous justification for numerical methods used in physics based on the cavity method. See also~\cite{ding-sly-sun,sly-sun-zhang} for recent results on random satisfiability that justify the cavity method even beyond the condensation transition.
We summarize all these transitions in Figure~\ref{fig:summary}.
\begin{figure}
\begin{center}
\includegraphics[width=0.9\columnwidth]{qeq23} \\
\bigskip
\includegraphics[width=0.9\columnwidth]{qge4} \\
\end{center}
\caption{A summary of known and conjectured phase transitions in the block model where the number of groups is $q=2, 3$ (top) and $q \ge 4$ (bottom).\label{fig:summary}}
\end{figure}
\section{Open Questions}
This is a fast-moving field, with a lively interplay of ideas between physics, probability, and computer science. Here are some problems that remain open which the reader might enjoy.
\begin{itemize}
\item Do the information-theoretic and computational thresholds coincide for $q=3$, and $q=4$ in the assortative case? For regular trees the Kesten-Stigum bound on tree reconstruction is tight for $q=3$ when the degree is sufficiently large~\cite{sly-reconstruction}. Can this be extended to Galton-Watson trees where the number of children is Poisson-distributed?
\item Can we prove that reconstruction is hard for specific algorithms in the hard regime? For instance, that natural Markov chains such as Glauber dynamics~\cite{LevinPeresWilmer2006,moore-mertens-book} have exponential mixing times, or that belief propagation with random initial messages succeeds with exponentially small probability?
\item In the hard regime, we can push an algorithm towards the accurate fixed point by giving it the true labels of a fraction of vertices. This is known as semisupervised learning or side information, and has been studied both in physics~\cite{semisupervised} and computer science~\cite{kanade-mossel-schramm,mossel-xu-side-information}. What can we say about phase transitions in this setting, as a function of the community structure and the amount or accuracy of the side information?
\item One interesting class of generalizations of the block model is the mixed-membership block model~\cite{ABFX08}, which allows communities to overlap: each vertex is associated with a $q$-dimensional vector, describing the extent to which it belongs to each community, and edges are generated based on these vectors. Some versions of this model~\cite{ball-karrer-newman} can also be viewed as low-rank matrix factorization with Poisson noise. Are there phase transitions in this model, parametrized by the density of the network and how much the communities overlap?
\pagebreak
\item Finally, the reader should be aware that community detection is just one of a vast number of problems in statistical inference. Many such problems, including matrix factorization, sparse recovery, and others, have phase transitions where the ground truth suddenly becomes impossible to find when the data becomes too noisy or incomplete, as well as hard regions where inference is information-theoretically possible but known polynomial-time algorithms such as Principal Component Analysis (PCA) provably fail; see e.g.~\cite{banks-isit}. So if you enjoyed this article, you have your work cut out for you.
\end{itemize}
\section*{Acknowledgements}
This article is based on lectures and short courses I have given over the past few years, including at \'Ecole Normale Sup\'erieure, ICTS Bangalore, Northeastern, Institut d'\'Etudes Scientifiques de Carg\`ese, \'Ecole de Physique des Houches, Beg Rohu, and the Santa Fe Institute, and seminars given at Harvard, Princeton, MIT, Stanford, Caltech, Michigan, Rutgers, Rochester, Northwestern, Chicago, Indiana, Microsoft Research Cambridge, l'Institut Henri Poincar\'e, the Simons Institute, and the Newton Institute. I am grateful to these institutions for their hospitality, and to Emmanuel Abbe, V. Arvind, Caterina De Bacco, Josh Grochow, David Kempe, Florent Krzakala, Dan Larremore, Stephan Mertens, Andrea Montanari, Elchanan Mossel, Joe Neeman, Mark Newman, Guilhem Semerjian, Cosma Shalizi, Jiaming Xu, Lenka Zdeborov{\'a}, and Pan Zhang for collaborations, conversations, and/or comments on an earlier draft. My work was supported by the John Templeton Foundation and the Army Research Office under grant W911NF-12-R-0012.
Vive la r\'esistance.
\bibliographystyle{plain}
\section{Introduction}
\subsection{Background}
\IEEEPARstart{O}{ver} the last few years, the proliferation of mobile devices connected to the Internet, such as smartphones and tablets, has led to an unprecedented increase in wireless traffic that is expected to grow with an annual rate of 53\% until 2020\cite{general:cisco_report}. To satisfy this growth, a goal has been set for the 5th generation (5G) of mobile networks to improve the capacity of current networks by a factor of 1000\cite{general:andrews_whatwill5gbe}. While traditional approaches improve the area spectral efficiency of the network through, e.g., cell densification, transmission in the millimeter-wave (mmWave) band, and massive MIMO\cite{general:andrews_whatwill5gbe}, studies have highlighted the repetitive pattern of user content requests\cite{cacheability:http, Femtocaching_and_D2D}, suggesting more efficient ways to serve them.
With \textit{proactive caching}, popular content is stored inside the network during off-peak hours (e.g., at night), so that it can be served locally during peak hours\cite{general:proactive_caching}. Two methods are distinguished in the literature: i) \textit{edge caching}\cite{edgecaching:femtocaching} when the content is stored at helper nodes, such as small-cell base stations (BSs), and ii) \textit{device caching}\cite{devcaching:molisch_scalinglaw} when the content is stored at the user equipment devices (UEs). While edge caching alleviates the backhaul constraint of the small-cells by reducing the transmissions from the core network, device caching offloads the BSs by reducing the cellular transmissions, which increases the rates of the active cellular UEs and reduces the dynamic energy consumption of the BSs\cite{general:bs_power_consumption}. The UEs also experience lower delays since the cached content is served instantaneously or through D2D communication from the local device caches.
The benefits of device caching in the offloading and the throughput performance have been demonstrated in \cite{devcaching:molisch_scalinglaw, devcaching:molisch_throughput_outage_tradeoff, devcaching:molisch_clustering, devcaching:local_global_gains, devcaching:d2d_optimization, devcaching:maximized_traffic_offloading}. In \cite{devcaching:molisch_scalinglaw}, the spectrum efficiency of a network of D2D UEs that cache and exchange content from a content library, is shown to scale linearly with the network size, provided that their content requests are sufficiently redundant. In \cite{devcaching:molisch_throughput_outage_tradeoff}, the previous result is extended to the UE throughput, which, allowing for a small probability of outage, is shown to scale proportionally with the UE cache size, provided that the aggregate memory of the UE caches is larger than the library size. To achieve these scaling laws, the impact of the D2D interference must be addressed by optimally adjusting the D2D transmission range to the UE density. In \cite{devcaching:molisch_clustering}, a cluster-based approach is proposed to address the D2D interference where the D2D links inside a cluster are scheduled with time division multiple access (TDMA). The results corroborate the scaling of the spectrum efficiency that was derived in \cite{devcaching:molisch_scalinglaw}. In \cite{devcaching:local_global_gains}, a mathematical framework based on stochastic geometry is proposed to analyze the cluster-based TDMA scheme, and the trade-off between the cluster density, the local offloading from inside the cluster, and the global offloading from the whole network is demonstrated through extensive simulations. In \cite{devcaching:d2d_optimization}, the system throughput is maximized by jointly optimizing the D2D link scheduling and the power allocation, while in \cite{devcaching:maximized_traffic_offloading}, the offloading is maximized by an interference-aware reactive caching mechanism.
Although the aforementioned works show positive results for device caching, elaborate scheduling and power allocation schemes are required to mitigate the D2D interference, which limit the UE throughput and increase the system complexity. The high impact of the D2D interference is attributed to the omni-directional transmission patterns that are commonly employed in the sub-6 GHz bands. While directionality could naturally mitigate the D2D interference and alleviate the need for coordination, it requires a large number of antennas, whose size is not practical in the microwave bands. In contrast, the mmWave bands allow the employment of antenna arrays in hand-held UE devices due to their small wavelength. Combined with the availability of bandwidth and their prominence in future cellular communications\cite{general:andrews_whatwill5gbe}, the mmWave bands are an attractive solution for D2D communication \cite{d2dmmaves:enabling_article, d2dmmwaves:exploiting_access_backhaul}.
The performance of the mmWave bands in wireless communication has been investigated in the literature for both outdoor and indoor environments, especially for the frequencies of 28 and 73 GHz that exhibit small atmospheric absorption \cite{mmwaves:rappaport_channel_modeling, mmwaves:rappaport_indoor_measurements}. According to these works, the coverage probability and the average rate can be enhanced with dense mmWave deployments when highly-directional antennas are employed at both the BSs and the UEs. MmWave systems further tend to be noise-limited due to the high bandwidth and the directionality of transmission\cite{mmwaves:andrews_kulkarni_rate_trends_for_blockage_param_values}. Recently, several works have conducted system-level analyses of mmWave networks with stochastic geometry\cite{mmwaves:heath_bai_coverage_and_rate, mmwaves:heath_bai_blockage_model_analysis, mmwaves:heath_bai_coverage_and_capacity}, where the positions of the BSs and the UEs are modeled according to homogeneous Poisson point processes (PPPs)\cite{stochgeom:haenggi_book}. This modeling has gained recognition due to its tractability\cite{stochgeom:andrews_tractable_classic}.
\subsection{Motivation and Contribution}
Based on the above, it is seen that device caching can significantly enhance the offloading and the delay performance of the cellular network, especially when the UEs exchange cached content through D2D communication. On the other hand, the D2D interference poses a challenge in conventional microwave deployments due to the omni-directional pattern of transmission. While directionality is difficult to achieve in the sub-6 GHz band for hand-held devices, it is practical in the mmWave frequencies due to the small size of the antennas. The high availability of bandwidth and the prominence of the mmWave bands in future cellular networks have further motivated us to consider mmWave D2D communication in a device caching application. To the best of our knowledge, this combination has only been considered in \cite{devcaching:molisch_tutorial}, which adopts the cluster-based TDMA approach for the coordination of the D2D links and does not exploit the directionality of mmWaves to further increase the D2D frequency reuse.
In this context, the contributions of our work are summarized as follows:
\begin{itemize}
\item We propose a novel D2D-aware caching (DAC) policy for device caching that facilitates the content exchange between the paired UEs through mmWave D2D communication and exploits the directionality of the mmWave band to increase the frequency reuse among the D2D links. In addition, we consider a half-duplex (HD) and a full-duplex (FD) version of the DAC policy when simultaneous requests occur inside a D2D pair.
\item We evaluate the performance of the proposed policy in terms of an offloading metric and the distribution of the content retrieval delay, based on a stochastic geometry framework for the positions of the BSs and the UEs.
\item We compare our proposal with the state-of-the-art most-popular content (MPC) policy through analysis and simulation, which shows that our policy improves the offloading metric and the 90-th percentile of the delay when the availability of paired UEs is sufficiently high and the content popularity distribution is not excessively peaked.
\end{itemize}
The rest of the paper is organized as follows. In Section \ref{section:caching_model_and_policies}, we present the proposed DAC and the state-of-the-art MPC policy. In Section \ref{section:system_model}, we present the system model. In Section \ref{section:offloading_analysis} and Section \ref{section:performance_analysis}, we characterize the performance of the two policies in terms of the offloading factor and the content retrieval delay respectively. In Section \ref{section:results}, we compare analytically and through simulations the performance of the caching policies. Finally, Section \ref{section:conclusion} concludes the paper.
\section{Background and Proposed Caching Policy}
\label{section:caching_model_and_policies}
In this section, based on a widely considered model for the UE requests, we present the state-of-the-art MPC policy and the proposed DAC policy.
\subsection{UE Request Model}
We assume that the UEs request content from a library of $L$ files of equal size $\sigma_{file}$\cite{edgecaching:femtocaching} and that their requests follow the Zipf distribution. According to this model, after ranking the files with decreasing popularity, the probability $q_i$ of a UE requesting the $i$-th ranked file is given by
\begin{equation}
q_i = \frac{i^{-\xi}}{\sum_{j=1}^L j^{-\xi}}, \mbox{ } 1 \leq i \leq L, \mbox{ } \xi \geq 0,
\end{equation}
where $\xi$ is the popularity exponent of the Zipf distribution. This parameter characterizes the skewness of the popularity distribution and depends on the content\footnote{Please note that the terms \textit{file} and \textit{content} are used interchangeably in the following.} type (e.g., webpages, video, audio, etc.)\cite{cacheability:evidence_implications, cacheability:viewedge}.
\subsection{State-of-the-Art MPC Policy}
In device caching, every UE retains a cache of $K$ files, where $K \ll L$, so that when a cached content is requested, it is retrieved locally with negligible delay instead of through a cellular transmission. This event is called a \textit{cache hit} and its probability is called the \textit{hit probability}, denoted by $h$ and given by
\begin{equation}
h = \sum_{i \in \mathcal{C}} q_i \label{eq:hitprob_def},
\end{equation}
where $\mathcal{C}$ represents the cached contents of a UE, as determined by the caching policy. The MPC policy is a widely considered caching scheme\cite{devcaching:molisch_seminal_conf, edgecaching:giovanidis_optimal_geographic_random_policy, devcaching:molisch_clustering} that stores the $K$ most popular contents from the library of $L$ files in every UE, resulting in the maximum hit probability, given by
\begin{equation}
h_{mpc}= \sum_{i=1}^K q_i = \frac{\sum_{i=1}^K i^{-\xi}}{\sum_{j=1}^{L} j^{-\xi}}.
\label{eq:hmpc}
\end{equation}
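As a quick numerical aid (an informal illustration, not part of the analysis), the following Python sketch evaluates the Zipf probabilities and the MPC hit probability of \eqref{eq:hmpc}; the parameter values are only examples.
\begin{verbatim}
import numpy as np

def zipf_probs(L, xi):
    # q_i = i^(-xi) / sum_j j^(-xi), for i = 1..L
    ranks = np.arange(1, L + 1)
    weights = ranks ** (-float(xi))
    return weights / weights.sum()

def hit_prob_mpc(L, K, xi):
    # MPC caches the K most popular files, so h_mpc
    # is the total popularity mass of the top-K ranks.
    return zipf_probs(L, xi)[:K].sum()

# Example: library of 1000 files, cache of 100 files
for xi in (0.0, 0.5, 1.0, 1.5):
    print(xi, hit_prob_mpc(1000, 100, xi))
\end{verbatim}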
\subsection{Proposed DAC Policy}
Although the MPC policy maximizes the hit probability, it precludes content exchange among the UEs since all of them store the same files. In contrast, a policy that diversifies the content among the UEs enables content exchange through D2D communication, resulting in higher offloading. Furthermore, thanks to the high D2D rate and the enhancement in the cellular rate due to the offloading, the considered policy may also improve the content retrieval delay, despite its lower hit probability compared with the MPC policy.
Based on this intuition, in the proposed DAC policy, the $2 K$ most popular contents of the library of $L$ files are partitioned into two non-overlapping groups of $K$ files, denoted by groups A and B, and are distributed randomly to the UEs, which are characterized as UEs A and B respectively. When a UE A is close to a UE B, the network may pair them to enable content exchange through D2D communication. Denoting by $h_A$ and $h_B$ the hit probabilities of the two UE types, three possibilities exist when a paired UE A requests content:
\begin{itemize}
\item the content is retrieved with probability $h_A$ through a cache hit from the local cache of UE A.
\item the content is retrieved with probability $h_B$ through a D2D transmission from the cache of the peer UE B.
\item the content is retrieved with probability $1-h_A-h_B$ through a cellular transmission from the associated BS of UE A.
\end{itemize}
The above cases are defined accordingly for a paired UE B. In Proposition 1 that follows, we formally prove that the probabilities of content exchange for both paired UEs are maximized with the content assignment of the DAC policy.
\begin{proposition}
Denoting by $\mathcal{C}_A$ and $\mathcal{C}_B$ the caches of UE A and B inside a D2D pair, and by $e_A$ and $e_B$ their probabilities of content exchange, $e_A$ and $e_B$ are maximized when $\mathcal{C}_A$ and $\mathcal{C}_B$ form a non-overlapping partition of the $2K$ most popular contents, i.e., $\mathcal{C}_A \cup \mathcal{C}_B = \{i \in \mathbb{N}: 1 \leq i \leq 2 K \}$ and $\mathcal{C}_A \cap \mathcal{C}_B = \emptyset$, in the sense that no other content assignment to $\mathcal{C}_A$ and $\mathcal{C}_B$ can \textit{simultaneously} increase $e_A$ and $e_B$.
\end{proposition}
\begin{IEEEproof}
See Appendix \ref{appendix:proposition_1}.
\end{IEEEproof}
When the paired UEs store non-overlapping content, their hit probabilities coincide with their content exchange probabilities, i.e., $e_A=h_B$ and $e_B=h_A$, hence, the DAC policy also maximizes $h_A$ and $h_B$ over all possible partitions of the $2K$ most popular contents in the sense of Proposition 1\footnote{Please note that $h_A$ and $h_B$ are still lower than $h_{mpc}$, since the MPC policy is not based on partitions.}. The $2K$ most popular contents can be further partitioned in multiple ways, but one that equalizes $h_A$ and $h_B$ is chosen for fairness considerations. Although exact equalization is not possible due to the discrete nature of the Zipf distribution, the partition that minimizes the difference $|h_A-h_B|$ can be found. Considering that this difference is expected to be negligible for sufficiently high values of $K$, $h_A$ and $h_B$ can be expressed as
\begin{equation}
h_A \approx h_B \approx h_{dac} = \frac{1}{2} \sum_{i=1}^{2K} q_i.
\end{equation}
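To make the partitioning step concrete, the following sketch splits the $2K$ most popular files into two caches of $K$ files each with a greedy largest-first heuristic; the heuristic is our illustrative choice for keeping $|h_A-h_B|$ small, not a prescribed part of the policy.
\begin{verbatim}
import numpy as np

def dac_partition(L, K, xi):
    q = np.arange(1, L + 1) ** (-float(xi))
    q /= q.sum()                 # Zipf popularity over the library
    top = q[:2 * K]              # the 2K most popular contents
    caches = ([], [])            # C_A and C_B (1-based ranks)
    h = [0.0, 0.0]               # running hit probabilities h_A, h_B
    # Greedy largest-first: put each file in the lighter,
    # non-full cache so that |h_A - h_B| stays small.
    for i in np.argsort(top)[::-1]:
        if len(caches[0]) == K:
            g = 1
        elif len(caches[1]) == K:
            g = 0
        else:
            g = 0 if h[0] <= h[1] else 1
        caches[g].append(int(i) + 1)
        h[g] += top[i]
    return caches, h

(cA, cB), (hA, hB) = dac_partition(L=1000, K=100, xi=1.0)
print(hA, hB, abs(hA - hB))      # nearly equal, as assumed above
print(0.5 * (hA + hB))           # the h_dac approximation
\end{verbatim}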
Finally, since two paired UEs may simultaneously want to exchange content, an event of probability $h_{dac}^2$, we consider two cases for the DAC policy: i) an HD version, denoted by HD-DAC, where the UEs exchange contents with two sequential HD transmissions, and ii) an FD version, denoted by FD-DAC, where the UEs exchange contents simultaneously with one FD transmission. Although the FD-DAC policy increases the frequency reuse of the D2D transmissions compared with the HD-DAC policy, it also introduces self-interference (SI) at the UEs that operate in FD mode\cite{fd:survey} and increases the D2D co-channel interference. It therefore raises interesting questions regarding the impact of FD communication on the rate performance, especially in a mmWave system where the co-channel interference is naturally mitigated by the directionality.
\section{System Model}\label{section:system_model}
In this section, we present the network model, the mmWave channel model, the FD operation of the UEs, and the resource allocation scheme for the cellular and the D2D transmissions.
\subsection{Network Model}\label{sys:network}
\begin{figure}[!t]
\centering
\includegraphics[width=2.3in]{network_model}
\caption{A network snapshot in a rectangle of dimensions 300 m $\times$ 300 m consisting of BSs (triangles) and UEs (circles). The paired UEs are shown connected with a solid line.}
\label{fig:network_model}
\end{figure}
We consider a cellular network where a fraction of the UEs are paired, as shown in the snapshot of Fig. \ref{fig:network_model}. We assume that the BSs are distributed on the plane according to a homogeneous PPP $\Phi_{bs}$ of intensity $\lambda_{bs}$, while the UEs are distributed according to three homogeneous PPPs: the PPP $\Phi_u$ with intensity $\lambda_{u}$ representing the unpaired UEs, and the PPPs $\Phi_p^{(1)}$ and $\Phi_p^{(2)}$ with the same intensity $\lambda_{p}$ representing the paired UEs. We assume that $\Phi_u$ is independent of $\Phi_p^{(1)}$ and $\Phi_p^{(2)}$, while $\Phi_p^{(1)}$ and $\Phi_p^{(2)}$ are dependent due to the correlation introduced by the D2D pairings. Specifically, for every UE of $\Phi_{p}^{(1)}$, a D2D peer exists in $\Phi_{p}^{(2)}$ that is uniformly distributed inside a disk of radius $r_{d2d}^{max}$, or, equivalently, at a distance $r_{d2d}$ and an angle $\phi_{d2d}$ that are distributed according to the probability density functions (PDFs) $f_{r_{d2d}}(r)$ and $f_{\phi_{d2d}}(\phi)$, given by
\begin{subequations}
\begin{equation}
f_{r_{d2d}}(r) = \frac{2 r}{(r_{d2d}^{max})^2}, \mbox{ } 0<r< r_{d2d}^{max},
\label{eq:d2d_displacement}
\end{equation}
\begin{equation}
f_{\phi_{d2d}}(\phi) = \frac{1}{2 \pi},\mbox{ } 0 \leq \phi < 2\pi.
\end{equation}
\end{subequations}
We assume that the D2D pairings arise when content exchange is possible, based on the cached files of the UEs. In the DAC policy, the BSs distribute the content groups A and B independently and with probability 1/2 to their associated UEs, and a fraction $\delta$ of them, which are located within distance $r_{d2d}^{max}$, are paired. Defining the aggregate process of the UEs $\Phi_{ue}$ as
\begin{equation}
\Phi_{ue} \triangleq \Phi_{u} \cup \Phi_{p}^{(1)} \cup \Phi_{p}^{(2)},
\end{equation}
and its intensity $\lambda_{ue}$ as\footnote{Please note that $\Phi_{ue}$ is not a PPP due to the correlation introduced by the processes of the paired UEs, $\Phi_{p}^{(1)}$ and $\Phi_{p}^{(2)}$. Nevertheless, its intensity can be defined as the average number of UEs per unit area.}
\begin{equation}
\lambda_{ue} = \lambda_u+2 \lambda_p,
\end{equation}
the ratio $\delta$ of the paired UEs is given by
\begin{equation}
\delta = \frac{2 \lambda_p}{\lambda_{ue}} = \frac{2 \lambda_p}{\lambda_u+2 \lambda_p}.
\end{equation}
Regarding the UE association, we assume that all the UEs are associated with their closest BS\footnote{Although different association criteria could have been considered, e.g., based on the maximum received power, the comparison of the two caching policies is not expected to be affected. Hence, we consider the closest BS association scheme due to its analytical tractability.}, in which case the cells coincide with the Voronoi regions generated by $\Phi_{bs}$. Denoting by $A_{cell}$ the area of a typical Voronoi cell, the equivalent cell radius $r_{cell}$ is defined as
\begin{equation}\label{def:rcell}
r_{cell} \triangleq \sqrt{ \frac{\mathbb{E} [A_{cell}]}{\pi}}=\frac{1}{\sqrt{\pi \lambda_{bs}}},
\end{equation}
and the association distance $r$ of a UE to its closest BS is distributed according to the PDF $f_r(r)$, given by\cite{stochgeom:andrews_tractable_classic}
\begin{gather}
f_{r_{}}(r)= \frac{2r}{r_{cell}^2} e^{-\left(\frac{r}{r_{cell}}\right)^2} = 2 \lambda_{bs}\pi r e^{-\lambda_{bs} \pi r^2}, \mbox{ } r>0.
\label{def:rassoc_pdf}
\end{gather}
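For readers who wish to reproduce snapshots like the one in Fig.~\ref{fig:network_model}, the following sketch samples one realization of the model; the densities correspond to the $\delta=0.5$ case of Table~\ref{table:parameters}, and the window size is arbitrary.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_network(side, lam_bs, lam_u, lam_p, r_max):
    # Homogeneous PPP in a square: Poisson count, uniform positions.
    def ppp(lam):
        n = rng.poisson(lam * side ** 2)
        return rng.uniform(0, side, size=(n, 2))
    bs, ue_u, ue_p1 = ppp(lam_bs), ppp(lam_u), ppp(lam_p)
    # Peer offsets: radius pdf 2r/r_max^2 (uniform in a disk),
    # angle uniform in [0, 2*pi).
    r = r_max * np.sqrt(rng.uniform(size=len(ue_p1)))
    phi = rng.uniform(0, 2 * np.pi, size=len(ue_p1))
    ue_p2 = ue_p1 + np.c_[r * np.cos(phi), r * np.sin(phi)]
    return bs, ue_u, ue_p1, ue_p2

# Densities per m^2; delta = 0.5 with lam_ue = 1.27e-3 gives
# lam_u = 6.35e-4 and lam_p = 3.175e-4.
bs, uu, p1, p2 = sample_network(300.0, 1.27e-4, 6.35e-4, 3.175e-4, 15.0)
print(len(bs), len(uu) + len(p1) + len(p2))   # BS and UE counts
\end{verbatim}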
\subsection{Channel Model}\label{sys:phy}
Regarding the channel model, we assume that the BSs and the UEs transmit with constant power, which is denoted by $P_{bs}$ and $P_{ue}$ respectively, and consider transmission at the mmWave carrier frequency $f_c$ with wavelength $\bar{\lambda}_c$ for both the cellular and the D2D communication through directional antennas employed at both the BSs and the UEs. The antenna gains are modeled according to the sectorized antenna model\cite{mmwaves:andrews_for_sectorized_model}, which assumes constant mainlobe and sidelobe gains, given by
\begin{subnumcases}{ G_{i}(\theta) =}
G_{i}^{max} &
if $|\theta| \leq \Delta\theta_{i}$, \\
G_{i}^{min} &
if $|\theta| > \Delta\theta_{i}$,
\end{subnumcases}
where $\Delta\theta_{i}$ is the antenna beamwidth, $\theta$ is the angle deviation from the antenna boresight, and $i \in \left\lbrace bs, ue \right\rbrace$.
Because the mmWave frequencies are subject to blockage effects, which become more pronounced as the transmission distance increases \cite{mmwaves:rappaport_channel_modeling}, the line-of-sight (LOS) state of the mmWave links is explicitly modeled. We consider the \textit{exponential model} \cite{mmwaves:heath_bai_blockage_model_analysis,mmwaves:rappaport_channel_modeling}, according to which a link of distance $r$ is LOS with probability $\mbox{P}_{los}(r)$ or non-LOS (NLOS) with probability $1-\mbox{P}_{los}(r)$, where $\mbox{P}_{los}(r)$ is given by
\begin{equation}
\mbox{P}_{los}(r) =e^{-\frac{r}{r_{los}}}.
\label{eq:plos}
\end{equation}
The parameter $r_{los}$ is the average LOS radius, which depends on the size and the density of the blockages \cite{mmwaves:heath_bai_blockage_model_analysis}. We further assume that the pathloss coefficients of a LOS and a NLOS link are $a_{L}$ and $a_{N}$ respectively, the LOS states of different links are independent, and the shadowing is incorporated into the LOS model \cite{mmwaves:heath_feasibility_spectrum_licenses_for_no_lognormal}. Finally, we assume Rayleigh fast fading where the channel power gain, denoted by $\eta$, is exponentially distributed, i.e., $\eta \! \sim \! Exp(1)$.
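The random propagation gain of a single link under this model (blockage state, pathloss exponent, Rayleigh fading) can then be sampled as in the sketch below; the default values mirror Table~\ref{table:parameters} and the link distance is an example.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)

def link_gain(r, r_los=30.0, a_L=2.0, a_N=3.0):
    # LOS with probability exp(-r/r_los) (exponential blockage model);
    # Rayleigh fading gives an Exp(1) power gain eta.
    los = rng.uniform() < np.exp(-r / r_los)
    a = a_L if los else a_N
    eta = rng.exponential(1.0)
    return eta * r ** (-a)

# Average (unnormalized) gain of a 50 m link over many draws
print(np.mean([link_gain(50.0) for _ in range(10000)]))
\end{verbatim}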
\subsection{FD-Operation Principle}
When a UE operates in FD mode, it receives SI from its own transmission. The SI signal comprises a direct LOS component, which can be substantially mitigated with proper SI cancellation techniques, and a reflected component, which is subject to multi-path fading. Due to the lack of measurements regarding the impact of the aforementioned components in FD mmWave transceivers, we model the SI channel as Rayleigh \cite{fd:si_model_rayleigh_relays}, justified by the reduction of the LOS component due to the directionality \cite{fd:si_model_nlos_matters_bss}. Denoting by $\eta_{si}$ the power gain of the SI channel including the SI cancellation scheme, and by $\kappa_{si}$ its mean value, i.e., $\kappa_{si} = \mathbb{E}[\eta_{si}]$, the power of the remaining SI signal, denoted by $I_{si}$, is given by
\begin{equation}
I_{si} = \eta_{si} P_{ue},
\label{eq:SI}
\end{equation}
where $\eta_{si} \! \sim \! \text{Exp}\left(\frac{1}{\kappa_{si}}\right)$.
\subsection{Resource Allocation and Scheduling}\label{sys:scheduling}
We focus on the downlink of the cellular system, which is isolated from the uplink through frequency division duplexing (FDD), since the uplink performance is not relevant for the considered caching scenario. We further consider an inband overlay scheme for D2D communication \cite{d2d:andrews_overlay_journal}, where a fraction $\chi_{d2d}$ of the overall downlink spectrum $BW$ is reserved for the D2D traffic, justified by the availability of spectrum in the mmWave band. Regarding the scheduling scheme, we consider TDMA scheduling for the active cellular UEs, which is suited to mmWave communication \cite{mmwaves:rappaport_for_tdma}, and uncoordinated D2D communication for the D2D UEs, relying on the directionality of the mmWave transmissions for the interference mitigation.
\section{Offloading Analysis}
\label{section:offloading_analysis}
In this section, the DAC and the MPC policies\footnote{Note that the same network topology, as described in Section \ref{sys:network}, has been assumed for both policies to ensure a fair performance comparison.} are compared in terms of their offloading performance, which can be quantified by the \textit{offloading factor} $F$, defined as the ratio of the average offloaded requests (i.e., requests that are not served through cellular connections) to the total content requests in the network, i.e.
\begin{equation}
F \triangleq \frac{\mathbb{E}[\mbox{offloaded requests}]}{\mbox{total requests}}.
\end{equation}
The offloading factor $F$ is derived for each policy as follows:
\begin{itemize}
\item In the MPC policy, a content request can be offloaded only through a cache hit, hence
\begin{equation}
F_{mpc} = h_{mpc}.
\end{equation}
\item In the DAC policy, in addition to a cache hit, a content request of a paired UE can be offloaded through D2D communication, hence
\begin{equation}
F_{dac} = \delta \cdot 2 h_{dac} + (1-\delta) \cdot h_{dac}=(1+\delta) h_{dac}.
\end{equation}
\end{itemize}
Based on the above, the relative gain of the DAC over the MPC policy in terms of the offloading factor, denoted by $F_{gain}$, is given by
\begin{equation}
F_{gain} = \frac{F_{dac}}{F_{mpc}} = (1+\delta) h_{ratio},
\label{eq:offloading_gain}
\end{equation}
where $h_{ratio}$ represents the ratio of the hit probabilities of the two policies, given by
\begin{equation}
h_{ratio} = \frac{h_{dac}}{h_{mpc}} = \frac{1}{2}\frac{\sum_{i=1}^{2 K} i^{-\xi}}{\sum_{j=1}^{K} j^{-\xi}}.
\label{hratio}
\end{equation}
We observe that $F_{gain}$ depends on the fraction of the paired UEs $\delta$, the UE cache size $K$ and the content popularity exponent $\xi$, but not the library size $L$. The impact of $K$ and $\xi$ on $h_{ratio}$ and, consequently, $F_{gain}$ is analytically investigated in Proposition 2 that follows.
\begin{proposition}
The ratio of the hit probabilities of the two policies, $h_{ratio}$, decreases monotonically with the popularity exponent $\xi$ and the UE cache size $K$. In addition, the limit of $h_{ratio}$ with high values of $K$ is equal to
\begin{equation}
\lim_{K \to \infty} h_{ratio} = \mbox{max}\left(2^{-\xi}, \frac{1}{2}\right).
\label{eq:hitratio_limit}
\end{equation}
\end{proposition}
\begin{IEEEproof}
See Appendix \ref{appendix:hitprob_analysis}.
\end{IEEEproof}
Proposition 2 implies that $h_{ratio}$ attains its minimum value for $\xi \to \infty$, and its maximum value for $\xi=0$, hence
\begin{equation}
\frac{1}{2}<h_{ratio} \leq 1 \implies \frac{1+\delta}{2}<F_{gain}\leq 1 + \delta.
\end{equation}
This result shows that for $\delta = 1$, representing the case of a fully paired network, the DAC policy always exhibits higher offloading than the MPC policy, while for $\delta = 0$, representing the case of
a fully unpaired network, the converse holds. For an intermediate value of $\delta$, the offloading comparison depends on $\xi$ and $K$ and can be determined through \eqref{eq:offloading_gain}. Finally, in Fig. \ref{fig:hitprob_ratio}, the convergence of $h_{ratio}$ to its limit value for high values of $K$ is depicted. This limit is a lower bound to $h_{ratio}$ and serves as a useful approximation, provided that $\xi$ is not close to 1 because, in this case, the convergence is slow.
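The expressions \eqref{eq:offloading_gain} and \eqref{hratio} are straightforward to evaluate numerically, as in the sketch below, which also computes the smallest pairing fraction for which the DAC policy wins; the parameter values are examples.
\begin{verbatim}
import numpy as np

def h_ratio(K, xi):
    # (1/2) * sum_{i<=2K} i^(-xi) / sum_{j<=K} j^(-xi)
    w = np.arange(1, 2 * K + 1) ** (-float(xi))
    return 0.5 * w.sum() / w[:K].sum()

def offloading_gain(delta, K, xi):
    return (1 + delta) * h_ratio(K, xi)    # F_gain

def min_delta(K, xi):
    # Smallest delta with F_dac >= F_mpc; always < 1 since
    # h_ratio > 1/2.
    return max(0.0, 1.0 / h_ratio(K, xi) - 1.0)

for xi in (0.0, 0.63, 0.97, 1.5):
    print(xi, round(offloading_gain(0.5, 100, xi), 3),
          round(min_delta(100, xi), 3))
\end{verbatim}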
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{hitratio_convergence}
\caption{The hit probability ratio $h_{ratio}$ in terms of the UE cache size $K$ and the popularity exponent $\xi$.}
\label{fig:hitprob_ratio}
\end{figure}
\section{Performance Analysis}
\label{section:performance_analysis}
In this section, the DAC and the MPC policy are characterized in terms of their rate and delay performance. The complementary CDF (CCDF) of the cellular rate is derived in Section \ref{subsection:cellular_rate_arxi}, the CCDF of the D2D rate is derived in Section \ref{subsection:d2d_rate}, and the CDF of the content retrieval delay is derived in Section \ref{subsectio:delay_analysis}.
\subsection{Cellular Rate Analysis}
\label{subsection:cellular_rate_arxi}
Justified by the stationarity of the PPP\cite{stochgeom:haenggi_book}, we focus on a \textit{target UE} inserted at the origin of the network and derive the experienced cellular rate, denoted by $\mathcal{R}_{cell}$, when an uncached content is requested. The rate $\mathcal{R}_{cell}$ is determined by the cellular signal-to-interference-plus-noise ratio (SINR), denoted by $SINR_{cell}$, and the load of the associated cell, denoted by $\mathcal{N}_{cell}$, through the Shannon capacity formula, modified to include the effect of the TDMA scheduling as \cite{stoch_geom:andrews_cellrate_with_load}
\begin{equation}
\mathcal{R}_{cell}=\frac{BW_{cell}}{\mathcal{N}_{cell}} \log \left(1+SINR_{cell}\right) \mbox{ [bps]}.
\label{eq:cellrate_shannon}
\end{equation}
Based on \eqref{eq:cellrate_shannon}, the distribution of $\mathcal{R}_{cell}$ is derived through the distribution of $SINR_{cell}$ and $\mathcal{N}_{cell}$ as
\begin{align}
&\text{P}(\mathcal{R}_{cell}>\rho)= \text{P}\left(SINR_{cell}>2^{\frac{\rho \mathcal{N}_{cell} }{BW_{cell}}}-1\right) = \nonumber\\ =
\sum_{n=1}^{\infty} \text{P}&(\mathcal{N}_{cell}=n) \text{P}\left(SINR_{cell}>2^{\frac{\rho n }{BW_{cell}}}-1 \Big| \mathcal{N}_{cell}=n \right) \overset{(i)}\approx \nonumber\\ \approx
\sum_{n=1}^{\infty} \text{P}&(\mathcal{N}_{cell}=n) \text{P}\left(SINR_{cell}>2^{\frac{\rho n }{BW_{cell}}}-1\right),
\label{def:cellrateccdf}
\end{align}
where (i) follows by treating $SINR_{cell}$ and $\mathcal{N}_{cell}$ as independent random variables\footnote{Please note that $SINR_{cell}$ and $\mathcal{N}_{cell}$ are dependent because the cell load $\mathcal{N}_{cell}$ is correlated with the size of the cell, which in turn influences both the signal received from the associated BS and the interference from the neighboring BSs. Nevertheless, this dependence cannot be modeled analytically, since the relation between the SINR and the cellular size is intractable, and is not expected to have a significant impact on $\mathcal{R}_{cell}$.}. The distributions of $\mathcal{N}_{cell}$ and $SINR_{cell}$ are derived in the following sections.
\subsubsection{Distribution of the cellular load}
\label{subsection:cell_load}
The distribution of $\mathcal{N}_{cell}$ depends on the cell size $A_{cell}$ and the point process of the active cellular UEs, denoted by $\Phi_{cell}$, as follows:
\begin{itemize}
\item Regarding $A_{cell}$, we note that due to the closest BS association scheme, the cells coincide with the Voronoi regions of $\Phi_{bs}$. Although the area distribution of a typical 2-dimensional Voronoi cell is not known, it can be accurately approximated by \cite{stochgeom:voronoi_area_distribution_approximation}
\begin{equation}
f_{A_{cell}}(a)\approx \frac{(\lambda_{bs} \kappa)^{\kappa} a^{\kappa-1} e^{-\kappa \lambda_{bs} a}}{\Gamma(\kappa)} \,, a>0 \,, \kappa=3.5.
\label{def:typicalvoronoidistr}
\end{equation}
The cell of the target UE, however, is stochastically larger than a randomly chosen cell, since the target UE is more probable to associate with a larger cell, and its area distribution can be derived from \eqref{def:typicalvoronoidistr} as \cite{stochgeom:load_distribution}
\begin{equation}
f_{A_{cell}}(a)=\frac{(\lambda_{bs} \kappa)^{\kappa+1} a^{\kappa} e^{-\kappa \lambda_{bs} a}}{\Gamma(\kappa+1)} \,, a>0 \,, \kappa=3.5.
\label{def:targetvoronoidistr}
\end{equation}
\begin{table}[t]
\renewcommand{\arraystretch}{1.3}
\caption{CELLULAR PROBABILITIES}
\centering
\scalebox{0.7}{
\begin{tabular}{|l | c | c|}
\cline{2-3}
\multicolumn{1}{c|}{} & $MPC$ & $DAC$\\\hline
$c_u$ & $1-h_{mpc}$ & $1-h_{dac}$ \\\hline
$c_p$ & $1-h_{mpc}$ & $1-2h_{dac}$ \\\hline
\end{tabular}
}
\label{table:cellular_probabilities}
\end{table}
\item Regarding $\Phi_{cell}$, it results from the independent thinning \cite{stochgeom:haenggi_book} of $\Phi_{ue}$, considering the probability of a UE being cellular. This probability is denoted by $c_u$ and $c_p$ for the case of an unpaired and a paired UE respectively, and its values are summarized in Table \ref{table:cellular_probabilities} for the two considered policies. Although $\Phi_{cell}$ is not a PPP due to the correlation in the positions of the paired UEs, it can be treated as a PPP with density $\lambda_{cell}$, given by
\begin{equation}
\lambda_{cell} = \left[ (1-\delta) \cdot c_u + \delta \cdot c_p\right] \lambda_{ue}.
\end{equation}
This approximation is justified by the small cell radius of the mmWave BSs, which is expected to be comparable to the D2D distance of the paired UEs, so that their positions inside the cell are sufficiently randomized.
\end{itemize}
Based on the above, $\mathcal{N}_{cell}$ is approximated with the number of points of one PPP that fall inside the (target) Voronoi cell of another PPP, hence, it is distributed according to the gamma-Poisson mixture distribution \cite{stochgeom:load_distribution}, given by
\begin{equation}
\text{P}(\mathcal{N}_{cell}=n) = \frac{\Gamma(n+\kappa)}{\Gamma(\kappa+1)\Gamma(n)}\mu^{n-1} \left(1-\mu\right)^{\kappa+1} \mbox{, } n \geq 1 \mbox{, }
\label{eq:load_distribution_analysis}
\end{equation}
where
\begin{equation*}
\mu = \frac{\lambda_{cell}}{\kappa \lambda_{bs}+\lambda_{cell}}.
\end{equation*}
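For completeness, the mixture in \eqref{eq:load_distribution_analysis} can be evaluated in log-space to avoid overflow for large $n$, as in the minimal sketch below; the value of $\lambda_{cell}$ is an arbitrary example.
\begin{verbatim}
import numpy as np
from scipy.special import gammaln

def cell_load_pmf(n, lam_cell, lam_bs, kappa=3.5):
    # Gamma-Poisson mixture for the target-cell load, n >= 1
    mu = lam_cell / (kappa * lam_bs + lam_cell)
    logp = (gammaln(n + kappa) - gammaln(kappa + 1.0) - gammaln(n)
            + (n - 1) * np.log(mu) + (kappa + 1) * np.log1p(-mu))
    return np.exp(logp)

n = np.arange(1, 400)
pmf = cell_load_pmf(n, lam_cell=8e-4, lam_bs=1.27e-4)
print(pmf.sum())        # close to 1: the pmf is normalized
print((n * pmf).sum())  # mean load of the target cell
\end{verbatim}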
\subsubsection{Distribution of the cellular SINR}
\label{subsection:cellular_rate}
The cellular SINR is defined as
\begin{equation}
SINR_{cell} \triangleq \frac{S}{I+N},
\label{eq:cell_sinr}
\end{equation}
where
\begin{itemize}
\item $S$ is a random variable representing the received signal power from the associated BS, which is located at a distance $r$ from the target UE. Assuming that the BS and UE antennas are perfectly aligned, $S$ is given by
\begin{equation}\label{eq:signal_power}
S = \left(\frac{\bar{\lambda}_c}{4\pi}\right)^2 P_{bs} G_{bs}^{max}G_{ue}^{max} \eta r_{}^{-a}.
\end{equation}
\item $I$ is a random variable representing the received interference power from the other-cell BSs of $\Phi_{bs}$. Assuming that the UE density is sufficiently high, all the BSs have a UE scheduled and $I$ is given by
\begin{equation}\label{eq:interf_power}
I= \sum_{x \in \Phi_{bs}} \left(\frac{\bar{\lambda}_c}{4\pi}\right)^2 P_{bs} G_x \eta_x r_x^{-a_x},
\end{equation}
where $r_x$ and $G_x$ are the length and the gain of the interfering link respectively. The latter comprises the antenna gains of the interfering BS and the target UE.
\item $N$ is the noise power at the receiver, given by
\begin{equation}\label{eq:noise_power}
N = N_0 F_N BW_{cell},
\end{equation}
where $N_0$ is the noise power density, $F_N$ is the noise figure of the receiver, and $BW_{cell}$ is the cellular bandwidth.
\end{itemize}
Introducing the normalized quantities
\begin{align}
g_x &\triangleq \frac{G_x}{\mbox{max}(G_x)} = \frac {G_x}{G_{bs}^{max} G_{ue}^{max}}, \nonumber\\
\hat{S} &\triangleq \eta r_{}^{-a}, \nonumber\\
\hat{I} &\triangleq \sum_{x \in \Phi_{bs}} g_x \eta_x r_x^{-a_x}, \nonumber\\
\hat{N} &\triangleq \left(\frac{4\pi}{\bar{\lambda}_c}\right)^2 \frac{N_0 F_N BW_{cell}}{P_{bs} G_{bs}^{max} G_{ue}^{max}},
\end{align}
and applying \eqref{eq:signal_power}, \eqref{eq:interf_power}, and \eqref{eq:noise_power} to \eqref{eq:cell_sinr}, the expression for $SINR_{cell}$ is simplified to
\begin{gather}
SINR_{cell} = \frac{\hat{S}}{\hat{I}+\hat{N}} = \nonumber \\
=\frac{\eta r_{}^{-a}}{\sum_{x \in \Phi_{bs}} g_x \eta_x r_x^{-a_x}+\left(\frac{4\pi}{\bar{\lambda}_c}\right)^2 \frac{N_0 F_N BW_{cell}}{P_{bs} G_{bs}^{max} G_{ue}^{max}}}.
\label{eq:cell_sinr_normalized}
\end{gather}
The CCDF of $SINR_{cell}$ is subsequently derived as
\begin{align}
& \text{P}\left(SINR_{cell}>T\right) = \mathbb{E}_{r_{},a,\hat{I}}\left[\text{P}\left(\eta>(\hat{I}+\hat{N})Tr_{}^a\right)\right] \overset{(i)}{=} \nonumber \\
& = \mathbb{E}_{r_{},a,\hat{I}}
\left[ e^{-(\hat{I}+\hat{N})Tr^a} \right]
\overset{(ii)}{=}\mathbb{E}_{r_{},a} \left[ \mathcal{L}_{\hat{I}}(Tr_{}^a)e^{-\hat{N}Tr_{}^a}\right],
\label{cellular:sinrccdf_full}
\end{align}
where $(i)$ follows from the CCDF of the exponential random variable, and $(ii)$ from the Laplace transform of $\hat{I}$, denoted by $\mathcal{L}_{\hat{I}}(s)$. Considering that the impact of the interference is reduced due to the directionality of the mmWave transmissions, and that the impact of noise is increased due to the high bandwidth of the mmWave band, we assume that the system is \textit{noise-limited}, which means that $SINR_{cell}$ can be approximated by the cellular signal-to-noise ratio (SNR), denoted by $SNR_{cell}$, as
\begin{gather}
\text{P}\left(SINR_{cell}>T\right) \approx
\text{P}\left(SNR_{cell}>T\right) = \mathbb{E}_{r_{},a}\left[e^{-\hat{N}Tr_{}^a}\right] = \nonumber \\
=\int_0^{\infty} \left(e^{-\frac{r}{r_{los}}} e^{-\hat{N}Tr^{a_{L}}}+(1-e^{-\frac{r}{r_{los}}}) e^{-\hat{N}Tr^{a_{N}}} \right) f_r(r)dr,
\label{def:cellular_snr}
\end{gather}
where $f_r(r)$ is given by \eqref{def:rassoc_pdf}. Although the integral in \eqref{def:cellular_snr} cannot be solved in closed form, we present a tight approximation in Proposition 3 that follows.
\begin{proposition}
The CCDF of the cellular SINR can be accurately approximated by
\begin{equation}
\text{P}(SINR_{cell}>T) \approx J_1(T,a_{N})+J_2(T,a_{L})-J_2(T,a_{N}),
\label{eq:cellsinrccdf_final}
\end{equation}
where
\begin{align}
& J_1(T,a)= \frac{2}{a r_{cell}^2} \left(
\frac{\gamma\left(\frac{2}{a}, \hat{N} T r_1^{a}\right)}{(\hat{N}T)^{\frac{2}{a}}} -
\frac{\gamma\left(\frac{3}{a}, \hat{N} T r_1^{a}\right)}{r_1 (\hat{N}T)^{\frac{3}{a}}} \right), \\
&J_2(T,a)= \nonumber\\
& =\frac{2}{a r_{cell}^2} \left(
\frac{\gamma\left(\frac{2}{a}, \hat{N} T r_2^{a}\right)}{(\hat{N}T)^{\frac{2}{a}}} -
2\frac{\gamma\left(\frac{3}{a}, \hat{N} T r_2^{a}\right)}{r_2 (\hat{N}T)^{\frac{3}{a}}} +
\frac{\gamma\left(\frac{4}{a}, \hat{N} T r_2^{a}\right)}{r_2^2 (\hat{N}T)^{\frac{4}{a}}}\right),
\end{align}
with
\begin{gather*}
r_1 = \sqrt{3}r_{cell}, \\
r_{2} = \sqrt{6} \sqrt{1-\sqrt{\pi}\frac{r_{cell}}{2 r_{los}} e^{\left(\frac{r_{cell}}{2 r_{los}}\right)^2} \text{erfc}\left(\frac{r_{cell}}{2r_{los}}\right)} r_{cell}.
\end{gather*}
\end{proposition}
\begin{IEEEproof}
See Appendix \ref{appendix:cellular_SINR}.
\end{IEEEproof}
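Readers who prefer to sanity-check Proposition 3 can evaluate the integral in \eqref{def:cellular_snr} directly by quadrature, as in the sketch below; $\hat{N}$ is assembled from the parameters of Table~\ref{table:parameters}, and the split $BW_{cell}=(1-\chi_{d2d})BW$ is our reading of the overlay scheme.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

def snr_ccdf(T, Nhat, lam_bs, r_los=30.0, a_L=2.0, a_N=3.0):
    # Direct numerical version of the noise-limited CCDF integral
    def integrand(r):
        p_los = np.exp(-r / r_los)
        f_r = 2 * np.pi * lam_bs * r * np.exp(-np.pi * lam_bs * r ** 2)
        return (p_los * np.exp(-Nhat * T * r ** a_L)
                + (1 - p_los) * np.exp(-Nhat * T * r ** a_N)) * f_r
    return quad(integrand, 0.0, np.inf)[0]

lam_c = 3e8 / 28e9                       # carrier wavelength [m]
N0 = 10 ** (-174 / 10) * 1e-3            # -174 dBm/Hz in W/Hz
FN, BW_cell = 10.0, 0.8 * 2e9            # 10 dB; 80% of BW for cellular
Pbs = 1.0                                # 30 dBm in W
Gbs, Gue = 10 ** 1.8, 10 ** 0.9          # 18 dB and 9 dB max gains
Nhat = (4 * np.pi / lam_c) ** 2 * N0 * FN * BW_cell / (Pbs * Gbs * Gue)

for T_dB in (0.0, 10.0, 20.0):
    print(T_dB, snr_ccdf(10 ** (T_dB / 10), Nhat, lam_bs=1.27e-4))
\end{verbatim}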
\subsection{D2D Rate Analysis}
\label{subsection:d2d_rate}
Similar to the cellular case, we focus on a paired target UE at the origin and derive the experienced D2D rate, denoted by $\mathcal{R}_{d2d}$, when a content is requested from the D2D peer. The following analysis applies only to the DAC policy, which is distinguished for the HD-DAC and the FD-DAC policy in the following sections.
\subsubsection{Distribution of the D2D rate for the HD-DAC policy}
\label{subsection:hd}
The D2D rate for the HD-DAC policy, denoted by $\mathcal{R}_{d2d}^{hd}$, is determined by the D2D SINR, denoted by $SINR_{d2d}^{hd}$, through the Shannon capacity formula as
\begin{equation}
\mathcal{R}_{d2d}^{hd}=\psi BW_{d2d} \log \left(1+SINR_{d2d}^{hd}\right) \mbox{ [bps]},
\label{eq:hd_d2drate_shannon}
\end{equation}
where $\psi$ denotes the HD factor, equal to $1/2$ when both paired UEs want to transmit. Subsequently, the CCDF of $\mathcal{R}_{d2d}^{hd}$ is determined by the CCDF of $SINR_{d2d}^{hd}$ as
\begin{equation}
\text{P}\left( \mathcal{R}_{d2d}^{hd} > \rho \right) =\text{P}\left(SINR_{d2d}^{hd} >2^{\frac{\rho}{\psi BW_{d2d}}}-1 \right).
\label{eq:hd_d2drate_shannon_ccdf}
\end{equation}
Regarding $SINR_{d2d}^{hd}$, it is defined as:
\begin{equation}
SINR_{d2d}^{hd} \triangleq \frac{S}{I+N},
\label{eq:d2d_hd_sinr}
\end{equation}
where
\begin{itemize}
\item $S$ is a random variable representing the received signal power from the D2D peer, located at a distance $r_{d2d}$ from the target UE. Assuming that the antennas of the two UEs are perfectly aligned, $S$ is given by
\begin{equation}\label{eq:d2d_hd_signal_power}
S = \left(\frac{\bar{\lambda}_c}{4\pi}\right)^2 P_{ue} (G_{ue}^{max})^2 \eta r_{d2d}^{-a}.
\end{equation}
\item $I$ is a random variable representing the received interference power from all transmitting D2D UEs. Denoting by $\Phi_{d2d}^{hd}$ the point process of the D2D interferers in the HD-DAC policy, $I$ is given by
\begin{equation}\label{eq:d2d_hd_interf_power}
I= \sum_{x \in \Phi_{d2d}^{hd}} \left(\frac{\bar{\lambda}_c}{4\pi}\right)^2 P_{ue} G_x \eta_x r_x^{-a_x},
\end{equation}
where $r_x$ and $G_x$ are the length and the gain of the interfering link respectively. The latter comprises the antenna gains of the interfering UE and the target UE. Since, in the HD-DAC policy, at most one UE from every D2D pair can transmit, the intensity of $\Phi_{d2d}^{hd}$ is given by
\begin{equation}
\lambda_{d2d}^{hd}=\left(1-(1-h_{dac})^2 \right) \lambda_{p}=\frac{\delta}{2} h_{dac}\left(2-h_{dac} \right) \lambda_{ue}.
\end{equation}
\item $N$ is the noise power at the receiver, which depends on the D2D bandwidth $BW_{d2d}$ and is given by
\begin{equation}\label{eq:d2d_hd_noise_power}
N = N_0 F_N BW_{d2d}.
\end{equation}
\end{itemize}
Introducing the normalized quantities
\begin{align}
g_x & \triangleq \frac{G_x}{\mbox{max}(G_x)} = \frac {G_x}{(G_{ue}^{max})^2}, \nonumber\\
\hat{S} & \triangleq \eta r_{d2d}^{-a}, \nonumber\\
\hat{I} & \triangleq \sum_{x \in \Phi_{d2d}^{hd}} g_x \eta_x r_x^{-a_x}, \nonumber\\
\hat{N} & \triangleq \left(\frac{4\pi}{\bar{\lambda}_c}\right)^2 \frac{N_0 F_N BW_{d2d}}{P_{ue} (G_{ue}^{max})^2},
\label{eq:hd_d2d_normalizations}
\end{align}
and applying \eqref{eq:d2d_hd_signal_power}, \eqref{eq:d2d_hd_interf_power}, and \eqref{eq:d2d_hd_noise_power} to \eqref{eq:d2d_hd_sinr}, the expression for $SINR_{d2d}^{hd}$ is simplified to
\begin{equation}
SINR_{d2d}^{hd} = \frac{\hat{S}}{\hat{I}+\hat{N}} = \frac{\eta r_{d2d}^{-a}}{\sum_{x \in \Phi_{d2d}^{hd}} g_x \eta_x r_x^{-a_x}+\left(\frac{4\pi}{\bar{\lambda}_c}\right)^2 \frac{N_0 F_N BW_{d2d}}{P_{ue}(G_{ue}^{max})^2}}.
\label{eq:d2d_hd_sinr_normalized}
\end{equation}
The CCDF of $SINR_{d2d}^{hd}$ is derived similarly to \eqref{cellular:sinrccdf_full} as
\begin{align}
&\text{P}(SINR_{d2d}^{hd}>T) = \mathbb{E}_{r_{d2d},a} \left[
\mathcal{L}_{\hat{I}}^{hd}(s)
e^{-\hat{N}s}\right], \,s=T r_{d2d}^a,
\label{eq:d2d_hd_sinrccdf}
\end{align}
where $\mathcal{L}_{\hat{I}}^{hd}(s)$ is the Laplace transform of the interference in the HD-DAC policy, and the expectation over $a$ and $r_{d2d}$ is computed through \eqref{eq:plos} and \eqref{eq:d2d_displacement} respectively. In contrast to the cellular case, the contribution of the interference in $SINR_{d2d}^{hd}$ is not negligible, even with directionality, due to the lower bandwidth expected for D2D communication; thus, $\mathcal{L}_{\hat{I}}^{hd}(s)$ is evaluated according to Proposition 4 that follows.
\begin{proposition}
The Laplace transform of the D2D interference in the HD-DAC policy, $\mathcal{L}_{\hat{I}}^{hd}(s)$, is given by
\begin{align}
\mathcal{L}_{\hat{I}}^{hd}(s)
\approx
e^{-\pi \delta h_{dac}(2-h_{dac})\lambda_{ue} \mathbb{E}_g
\left[
J_3\left(s,a_N \right) + J_4\left(s,a_N ;k\right) - J_4\left(s,a_L;k\right)
\right]},
\label{eq:d2d_hd_interf_laplace}
\end{align}
where
\begin{gather}
J_3(s, a) = \frac{1}{2} \Gamma\left(1-\frac{2}{a}\right)\Gamma\left(1+\frac{2}{a}\right) g^{\frac{2}{a}} s^{\frac{2}{a}}, \nonumber\\
J_4(s, a; k) = \sum_{l=0}^k \binom{k}{l} (-1)^l
\frac{r_4^{a+2} {}_2 F_1\left(1,1+\frac{l+2}{a};2+\frac{l+2}{a};-\frac{r_4^a}{gs}\right)}{(l+a+2)gs}, \nonumber\\
r_4 = \sqrt{(k+1)(k+2)} r_{los},
\label{eq:d2d_hd_interf_laplace_terms}
\end{gather}
$k$ denotes the order of the approximation, and the averaging is taken over the discrete random variable $g$ with distribution
\begin{subnumcases}{g=}
1 &
with prob $\frac{\Delta\theta_{ue}^2}{4\pi^2}$ \\
\frac{G_{ue}^{min}}{G_{ue}^{max}} &
with prob $2 \frac{\Delta\theta_{ue}(2\pi-\Delta\theta_{ue})}{4\pi^2}$ \\
\left(\frac{G_{ue}^{min}}{G_{ue}^{max}}\right)^2 & with prob $\frac{(2\pi-\Delta\theta_{ue})^2}{4\pi^2}$
\end{subnumcases}
\end{proposition}
\begin{IEEEproof}
See Appendix \ref{appendix:hd-dac}.
\end{IEEEproof}
As $k \to \infty$, more terms are added in the summation and the approximation becomes exact. Combining \eqref{eq:d2d_hd_interf_laplace} with \eqref{eq:d2d_hd_sinrccdf} into \eqref{eq:hd_d2drate_shannon_ccdf} yields the CCDF of $\mathcal{R}_{d2d}^{hd}$ where the final integration over $r_{d2d}$ can be evaluated numerically.
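A Monte Carlo estimate of the CCDF of $SINR_{d2d}^{hd}$ offers an independent cross-check of this derivation. The sketch below implements the stated model (PPP interferers, discrete gain distribution, exponential blockage, Rayleigh fading); the guard radius \texttt{R}, the sample count, and the numeric values of $\lambda_{d2d}^{hd}$ and $\hat{N}$ are illustrative assumptions.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(7)

# Discrete antenna-gain distribution g of an interfering link
dth = np.deg2rad(30.0)                         # UE beamwidth
p = np.array([dth ** 2,
              2 * dth * (2 * np.pi - dth),
              (2 * np.pi - dth) ** 2]) / (2 * np.pi) ** 2
ratio = 10 ** (-1.8)                           # G_ue^min / G_ue^max
g_vals = np.array([1.0, ratio, ratio ** 2])

def hd_sinr(n_samp, r_d2d, lam_i, Nhat, R=500.0,
            r_los=30.0, a_L=2.0, a_N=3.0):
    # SINR samples for a pair at distance r_d2d, with HD interferers
    # dropped as a PPP of density lam_i in a disk of radius R.
    out = np.empty(n_samp)
    for t in range(n_samp):
        a0 = a_L if rng.uniform() < np.exp(-r_d2d / r_los) else a_N
        S = rng.exponential() * r_d2d ** (-a0)   # aligned desired link
        k = rng.poisson(lam_i * np.pi * R ** 2)
        r = R * np.sqrt(rng.uniform(size=k))
        a = np.where(rng.uniform(size=k) < np.exp(-r / r_los), a_L, a_N)
        g = rng.choice(g_vals, p=p, size=k)
        I = np.sum(g * rng.exponential(size=k) * r ** (-a))
        out[t] = S / (I + Nhat)
    return out

s = hd_sinr(2000, r_d2d=10.0, lam_i=2e-4, Nhat=1.7e-6)
print(np.mean(s > 1.0))                        # P(SINR > 0 dB)
\end{verbatim}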
\subsubsection{Distribution of the D2D rate for the FD-DAC policy}
\label{subsection:fd}
As in the case of the HD-DAC policy, the D2D rate for the FD-DAC policy, denoted by $\mathcal{R}_{d2d}^{fd}$, is determined by the D2D SINR, denoted by $SINR_{d2d}^{fd}$, through the Shannon capacity formula as
\begin{equation}
\mathcal{R}_{d2d}^{fd}=BW_{d2d} \log \left(1+SINR_{d2d}^{fd}\right) \mbox{ [bps]}.
\label{eq:fd_d2drate_shannon}
\end{equation}
Subsequently, the CCDF of $\mathcal{R}_{d2d}^{fd}$ is derived from the CCDF of $SINR_{d2d}^{fd}$ as
\begin{equation}
\text{P}\left( \mathcal{R}_{d2d}^{fd} > \rho \right) =\text{P}\left(SINR_{d2d}^{fd} >2^{\frac{\rho}{BW_{d2d}}}-1 \right).
\label{eq:fd_d2d_rate_shannon_ccdf}
\end{equation}
Regarding $SINR_{d2d}^{fd}$, it is defined as
\begin{equation}
SINR_{d2d}^{fd} \triangleq \frac{S}{I+I_{si}+N},
\label{eq:d2d_fd_sinr}
\end{equation}
where
\begin{itemize}
\item $S$ is a random variable representing the received signal power from the D2D peer, given by \eqref{eq:d2d_hd_signal_power}.
\item $I_{si}$ is a random variable representing the SI power when the target UE operates in FD mode, given by \eqref{eq:SI}.
\item $I$ is a random variable representing the received interference power from all transmitting D2D UEs, given by
\begin{align}
I = &\sum_{x_1 \in \Phi_{p}^{(1)}} \left(\frac{\bar{\lambda}_c}{4\pi}\right)^2 P_{ue} \psi_{x_1} G_{x_1} \eta_{x_1} r_{x_1}^{-a_{x_1}}+ \nonumber\\
+ & \sum_{x_2 \in \Phi_{p}^{(2)}} \left(\frac{\bar{\lambda}_c}{4\pi}\right)^2 P_{ue} \psi_{x_2} G_{x_2} \eta_{x_2} r_{x_2}^{-a_{x_2}},
\end{align}
where $\Phi_{p}^{(1)}$ and $\Phi_{p}^{(2)}$ are the point processes of the paired UEs, and $\psi_{x}$ denotes the indicator variable for the event that the UE at position $x$ transmits.
\item $N$ is the noise power at the receiver, given by \eqref{eq:d2d_hd_noise_power}.
\end{itemize}
Defining $g$, $\hat{S}$ and $\hat{N}$ as in \eqref{eq:hd_d2d_normalizations} and introducing
\begin{align}
\hat{I} &= \sum_{x \in \Phi_{p}^{(1)}} \psi_x g_x \eta_x r_x^{-a_x}+\sum_{y \in \Phi_{p}^{(2)}} \psi_y g_y \eta_y r_y^{-a_y}, \nonumber\\
\hat{I_{si}} &= \left(\frac{4\pi}{\bar{\lambda}_c}\right)^2 \frac{\eta_{si}}{(G_{ue}^{max})^2},
\end{align}
the CCDF of $SINR_{d2d}^{fd}$ is derived similarly to \eqref{eq:d2d_hd_sinrccdf} as
\begin{equation}
\text{P}\left(SINR_{d2d}^{fd}>T\right) = \mathbb{E}_{r_{d2d},a}
\left[ \mathcal{L}_{\hat{I}}^{fd}(s) \mathcal{L}_{\hat{I_{si}}}(s)
e^{-\hat{N}s}\right],\, s=T r_{d2d}^a,
\label{d2d:sinrccdf_fd_full}
\end{equation}
where $\mathcal{L}_{\hat{I}}^{fd}(s)$ and $\mathcal{L}_{\hat{I_{si}}}(s)$ are the Laplace transforms of the external D2D interference and the SI respectively. Recalling that $\eta_{si} \! \sim \! \text{Exp}\left(\frac{1}{\kappa_{si}}\right)$, $\mathcal{L}_{\hat{I_{si}}}(s)$ is derived through the Laplace transform of the exponential random variable as
\begin{equation}
\mathcal{L}_{\hat{I_{si}}}(s)=
\mathbb{E} \left[
e^{- \left( \frac{4\pi}{ \bar{\lambda}_c G_{ue}^{max}}\right)^2 \eta_{si} s}
\right]
= \frac{1}{1+\left( \frac{4\pi}{ \bar{\lambda}_c G_{ue}^{max}}\right)^2 \frac{s}{\kappa_{si}}},
\label{eq:d2d_fd_laplace_si}
\end{equation}
while $\mathcal{L}_{\hat{I}}^{fd}(s)$ is derived in Proposition 5 that follows.
\begin{proposition}
The Laplace transform of the D2D interference in the FD-DAC policy, $\mathcal{L}_{\hat{I}}^{fd}(s)$, can be bounded as
\begin{align}
&\mathcal{L}_{\hat{I}}^{fd}(s) \geq e^{-2 \pi \delta \lambda_{ue} h_{dac} \mathbb{E}_g
\left[ J_3\left(s,a_N \right) + J_4\left(s,a_N ;k\right) - J_4\left(s,a_L;k\right)\right]}, \label{eq:d2d_fd_interf_laplace_lb}
\\
& \mathcal{L}_{\hat{I}}^{fd}(s) \leq
e^{-\pi \delta \lambda_{ue} h_{dac} \mathbb{E}_g
\left[ J_3\left(2s,a_N \right) + J_4\left(2s,a_N ;k\right) - J_4\left(2s,a_L;k\right)\right]},
\label{eq:d2d_fd_interf_laplace_ub}
\end{align}
where $J_3\left(s,a \right)$ and $J_4\left(s,a ;k\right)$ are given by \eqref{eq:d2d_hd_interf_laplace_terms}.
\end{proposition}
\begin{IEEEproof}
See Appendix \ref{appendix:fd-dac}.
\end{IEEEproof}
Combining \eqref{eq:d2d_fd_interf_laplace_lb} and \eqref{eq:d2d_fd_interf_laplace_ub} with \eqref{eq:d2d_fd_laplace_si} into \eqref{d2d:sinrccdf_fd_full}, and applying the result to \eqref{eq:fd_d2d_rate_shannon_ccdf}, yields two bounds for the CCDF of $\mathcal{R}_{d2d}^{fd}$.
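The FD case can be cross-checked in the same Monte Carlo fashion by adding the normalized SI term of \eqref{eq:d2d_fd_laplace_si} to the denominator, as in the hedged sketch below, which reuses \texttt{rng}, \texttt{p}, and \texttt{g\_vals} from the previous sketch; the interferer density \texttt{lam\_i} should now reflect that every active pair contributes two transmitters.
\begin{verbatim}
kappa_si = 1e-8                      # -80 dB residual SI power gain
Gue_max = 10 ** 0.9                  # 9 dB in linear scale
lam_c = 3e8 / 28e9                   # carrier wavelength [m]

def fd_sinr(n_samp, r_d2d, lam_i, Nhat, R=500.0,
            r_los=30.0, a_L=2.0, a_N=3.0):
    # Like hd_sinr, but with normalized self-interference added.
    si_scale = (4 * np.pi / lam_c) ** 2 / Gue_max ** 2
    out = np.empty(n_samp)
    for t in range(n_samp):
        a0 = a_L if rng.uniform() < np.exp(-r_d2d / r_los) else a_N
        S = rng.exponential() * r_d2d ** (-a0)
        k = rng.poisson(lam_i * np.pi * R ** 2)
        r = R * np.sqrt(rng.uniform(size=k))
        a = np.where(rng.uniform(size=k) < np.exp(-r / r_los), a_L, a_N)
        g = rng.choice(g_vals, p=p, size=k)
        I = np.sum(g * rng.exponential(size=k) * r ** (-a))
        I_si = si_scale * rng.exponential(kappa_si)  # mean kappa_si
        out[t] = S / (I + I_si + Nhat)
    return out

s_fd = fd_sinr(2000, r_d2d=10.0, lam_i=2.5e-4, Nhat=1.7e-6)
print(np.mean(s_fd > 1.0))
\end{verbatim}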
\subsection{Delay Analysis}
\label{subsectio:delay_analysis}
In this section, we characterize the delay performance of the MPC and the DAC policies through the \textit{content retrieval delay}, denoted by $D$ and defined as the delay experienced by a UE when retrieving a requested content from any available source. In the case of a cache hit, $D$ is zero, while in the cellular and the D2D case, it coincides with the transmission delay of the content to the UE\footnote{Additional delays caused by the retrieval of the content through the core network are beyond the scope of this work.}. The CDFs of $D$ for the MPC and the DAC policy are derived as follows:
\begin{itemize}
\item For the MPC policy, the requested content is retrieved from the local cache with probability $h_{mpc}$, or from the BS with probability $1-h_{mpc}$, hence
\begin{equation}
\text{P} \left(D<d \right)=h_{mpc}+(1-h_{mpc})\text{P} \left(\mathcal{R}_{cell} >\frac{\sigma_{file}}{d} \right),
\label{delay:mpc}
\end{equation}
where the CCDF of $\mathcal{R}_{cell}$ is given by \eqref{def:cellrateccdf}.
\item For the DAC policy, the case of the paired and the unpaired UE must be distinguished, since the unpaired UE lacks the option for D2D communication. For a paired UE, the requested content is retrieved from the local cache with probability $h_{dac}$, from the D2D peer with probability $h_{dac}$, or from the BS with probability $1-2h_{dac}$, while, for an unpaired UE, the requested content is retrieved from the local cache with probability $h_{dac}$, or from the BS with probability $1-h_{dac}$, yielding
\begin{align}
\text{P} &\left(D<d \right) = h_{dac}+ \delta h_{dac} \text{P} \left(\mathcal{R}_{d2d} >\frac{\sigma_{file}}{d} \right)+ \nonumber\\ &
+(1-h_{dac}-\delta h_{dac}) \text{P} \left(\mathcal{R}_{cell} >\frac{\sigma_{file}}{d} \right),
\label{delay:dac}
\end{align}
where the CCDF of $\mathcal{R}_{cell}$ is given by \eqref{def:cellrateccdf}, and the CCDF of $\mathcal{R}_{d2d}$ is given by \eqref{eq:hd_d2drate_shannon_ccdf} for the HD-DAC policy and by \eqref{eq:fd_d2d_rate_shannon_ccdf} for the FD-DAC policy; a short numerical sketch assembling these expressions is given after this list.
\end{itemize}
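The promised sketch below assembles \eqref{delay:mpc} and \eqref{delay:dac} and extracts the 90-th percentile by bisection; the rate CCDFs are passed in as callables, and the exponential placeholders at the bottom are toy stand-ins rather than the derived distributions.
\begin{verbatim}
import numpy as np

def delay_cdf_dac(d, sigma_file, h_dac, delta, ccdf_rcell, ccdf_rd2d):
    # Cache hit (zero delay) + D2D retrieval + cellular retrieval
    rate = sigma_file / d
    return (h_dac + delta * h_dac * ccdf_rd2d(rate)
            + (1 - h_dac - delta * h_dac) * ccdf_rcell(rate))

def delay_cdf_mpc(d, sigma_file, h_mpc, ccdf_rcell):
    return h_mpc + (1 - h_mpc) * ccdf_rcell(sigma_file / d)

def percentile_90(cdf, lo=1e-4, hi=1e4, iters=80):
    # Bisection on a nondecreasing CDF: smallest d with CDF(d) >= 0.9.
    # (If the hit probability alone exceeds 0.9, the answer is 0.)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)
        lo, hi = (lo, mid) if cdf(mid) >= 0.9 else (mid, hi)
    return hi

# Toy rate CCDFs (exponential placeholders) and sigma_file = 100 MB
cell = lambda rho: np.exp(-rho / 2e8)
d2d = lambda rho: np.exp(-rho / 1e9)
cdf = lambda d: delay_cdf_dac(d, 8e8, 0.39, 0.5, cell, d2d)
print(percentile_90(cdf))          # 90-th percentile delay [s]
\end{verbatim}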
\section{Results}
\label{section:results}
In this section, we compare the DAC and the MPC policies in terms of the offloading factor and the 90-th percentile of the content retrieval delay analytically and through Monte-Carlo simulations. Towards this goal, we present the simulation parameters in Section \ref{section:network_setup}, the results for the offloading in Section \ref{section:offloading_results}, and the results for the content retrieval delay in Section \ref{section:delay_results}.
\subsection{Simulation Setup}
\label{section:network_setup}
\begin{table}[t]
\renewcommand{\arraystretch}{1.3}
\caption{SIMULATION PARAMETERS}
\centering
\scalebox{0.7}{
\begin{tabular}{|l | c || l | c|}
\hline
$\lambda_{bs}$ & $127 \mbox{ BSs/km}^2$ & $N_0$ & -174 dBm/Hz \\
$\lambda_{ue}$ & $1270 \mbox{ UEs/km}^2$ & $F_N$ & 10 dB \\
$\delta$ & 0.5, 0.75, 1 & $\Delta\theta_{ue}$ & 30$^o$ \\
$r_{d2d}^{max}$ & 15 m & $\Delta\theta_{bs}$ & 10$^o$\\\cline{0-1}
$f_{c}$ & 28 GHz & $G_{bs}^{max}$ & 18 dB\\
$BW$ & 2 GHz & $G_{bs}^{min}$ & -2 dB \\
$\chi_{d2d}$ & 20\% & $G_{ue}^{max}$ & 9 dB \\
$r_{los}$ & 30 m & $G_{ue}^{min}$ & -9 dB\\\cline{3-4}
$a_{L}$ & 2 & $\sigma_{file}$ & 100 MB \\
$a_{N}$ & 3 & $L$ & 1000\\
$P_{bs}$ & 30 dBm & $K$ & 50, 100, 200\\
$P_{ue}$ & 23 dBm & $\xi$ & variable \\
$ \kappa_{si}$ & -80 dB & & \\ \hline
\end{tabular}
}
\label{table:parameters}
\end{table}
For the simulation setup of the DAC and the MPC policy, we consider a mmWave system operating at the carrier frequency $f_c$ of 28 GHz, which is chosen due to its favorable propagation characteristics \cite{general:rappaport_mmwaves_itwillwork} and its approval for 5G deployment by the FCC \cite{general:mmwaves_fcc_report}. Regarding the network topology, we consider a high BS density $\lambda_{bs}$ corresponding to an average cell radius $r_{cell}$ of 50 m, which is consistent with the trends in the densification of future cellular networks and the average LOS radius $r_{los}$ of the mmWave frequencies in urban environments \cite{mmwaves:andrews_kulkarni_rate_trends_for_blockage_param_values}. The latter is chosen to be 30 m, based on the layout for the Chicago and the Manhattan area \cite{mmwaves:andrews_kulkarni_rate_trends_for_blockage_param_values}. Regarding the antenna model of the BSs and the UEs, the gains and the beamwidths are chosen according to typical values of the literature \cite{mmwaves:andrews_tractable_selfbackhauled, d2dmmwaves:heath_bai_adhoc_older}, considering lower directionality for the UEs due to the smaller number of antennas that can be installed in the UE devices\footnote{A planar phased array with a beamwidth of $30^o$ can be constructed with 12 antenna elements\cite{general:skolnik}, requiring an area of approximately 3 cm$^2$ at 28 GHz, which is feasible in modern UE devices.}. Regarding the caching model, we consider a library of 1000 files of size 100 MB and three cases for the UE cache size: i) $K=50$, ii) $K=100$, and iii) $K=200$, corresponding to 5\%, 10\%, and 20\% of the library size, respectively. The simulation parameters are summarized in Table \ref{table:parameters}.
\subsection{Offloading Comparison}
\label{section:offloading_results}
As shown analytically in Section \ref{section:offloading_analysis}, the offloading gain of the DAC policy over the MPC policy $F_{gain}$ increases monotonically with the UE pairing probability $\delta$, and decreases monotonically with the UE cache size $K$ and the content popularity $\xi$, while it is not affected by the library size $L$. In this section, we validate the impact of $\delta$, $K$, and $\xi$ on $F_{gain}$ by means of simulations.
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{offloading_gain_versus_xi}
\caption{The offloading gain of the DAC policy over the MPC policy, $F_{gain}$, in terms of the content popularity exponent $\xi$.}
\label{fig:offloading_gain_versus_xi}
\end{figure}
In Fig. \ref{fig:offloading_gain_versus_xi}, we plot $F_{gain}$ in terms of $\xi$ for $K=100$ and for $\delta= 0.5,\mbox{ } 0.75,\mbox{ } 1$, corresponding to three different percentages of paired UEs inside the network. We observe that the simulation results validate the monotonic increase and decrease of $F_{gain}$ with $\delta$ and $\xi$ respectively. The former is attributed to the higher availability of D2D pairs, which improves the opportunities for offloading in the DAC policy and does not affect the MPC policy, while the latter is attributed to the increasing gap in the hit probabilities of the two policies, as illustrated by the decrease of $h_{ratio}$ with $\xi$ in Fig. \ref{fig:hitprob_ratio} of Section \ref{section:offloading_analysis}. Based on the above, we observe that the maximum offloading gain of the DAC over the MPC policy is equal to 2 and is achieved when $\delta=1$ and $\xi=0$, which correspond to a fully paired network and uniform content popularity, respectively. For $\delta=1$, we further observe that the DAC policy outperforms the MPC policy regardless of the value of $\xi$, while for lower values of $\delta$, the DAC policy is superior only when $\xi<0.63$ for $\delta=0.75$, and when $\xi<0.97$ for $\delta=0.5$. Based on these observations, we conclude that, for a network with $\delta < 1$, the DAC policy offers higher offloading than the MPC policy for $\xi$ up to a threshold value, which decreases with $\delta$.
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{offloading_min_delta}
\caption{The minimum fraction of pairs ($\delta$) required for the DAC policy to achieve higher offloading than the MPC policy in terms of the content popularity exponent $\xi$.}
\label{fig:offloading_min_delta}
\end{figure}
In Fig. \ref{fig:offloading_min_delta}, we plot the minimum $\delta$ that is required for the DAC policy to outperform the MPC policy, in terms of $\xi$ and for $K=50,\mbox{ }100,\mbox{ }200$. We observe that the requirements for $\delta$ become more stringent with increasing $\xi$ and $K$, which widen the gap between the hit probabilities of the two policies, but the impact of $K$ is weaker than the impact of $\xi$, which is attributed to the low sensitivity of $h_{ratio}$ to $K$. This behavior can be explained with the bound of $h_{ratio}$ in \eqref{eq:hitratio_limit}, which represents the limit of $h_{ratio}$ when $K \to \infty$. The minimum $\delta$ for $K \to \infty$ is also depicted in Fig. \ref{fig:offloading_min_delta}, as well as the convergence of the other curves to it. When $\xi<0.5$ or $\xi>1.5$, the curves for finite $K$ are close to the bound, because $h_{ratio}$ converges quickly to its limit value. In contrast, when $0.5<\xi<1.5$, the gap between the curves and the bound is wider, because $h_{ratio}$ converges slowly to its limit value. Due to the slow convergence, for practical values of $K$, similar to the ones considered in this work, $h_{ratio}$ is insensitive to $K$.
\begin{figure}[!t]
\centering
\includegraphics[width=3in]{offloading_gain_versus_K}
\caption{The offloading gain of the DAC policy over the MPC policy, $F_{gain}$, in terms of the UE cache size $K$.}
\label{fig:offloading_gain_versus_K}
\end{figure}
In Fig. \ref{fig:offloading_gain_versus_K}, we plot $F_{gain}$ in terms of $K$ for $\delta=0.75$ and $\xi = 0.3, \mbox{ } 0.6, \mbox{ }0.9$. We observe that, as $K$ increases, $F_{gain}$ decreases quickly at low values of $K$ and, afterwards, tends slowly to its limit value, calculated by applying \eqref{eq:hitratio_limit} to \eqref{eq:offloading_gain}. For $\xi = 0.9$, the gap between the curve and the limit is large because of the slow convergence of \eqref{eq:hitratio_limit}, validating that $F_{gain}$ is insensitive to $K$, provided that $K$ is sufficiently high. In contrast, lower values of $K$ favor the DAC over the MPC policy.
\subsection{Delay Comparison}
\label{section:delay_results}
In this section, we validate the analytical expressions of Section \ref{section:performance_analysis} and compare the two caching policies in terms of the 90-th percentile of the content retrieval delay.
\subsubsection{Performance of the HD-DAC policy}
\label{sec:performance_HD_DAC}
In Fig. \ref{fig:hd_dac_rate}, we illustrate for the HD-DAC policy the CCDFs of the cellular rate $\mathcal{R}_{cell}$ and the D2D rate $\mathcal{R}_{d2d}^{hd}$, derived through analysis and simulations, for $\delta=1$, $K=200$, and $\xi=0.4$. A second order approximation ($k=2$) was sufficient for $\mathcal{L}_{\hat{I}}^{hd}(s)$ in \eqref{eq:d2d_hd_interf_laplace_terms}. We observe that $\mathcal{R}_{d2d}^{hd}$ is stochastically larger than $\mathcal{R}_{cell}$ for rates below 5 Gbps, yielding an improvement of 1.52 Gbps in the 50-th percentile, which means that the D2D UEs experience a rate that is higher than the cellular rate by at least 1.5 Gbps for 50\% of the time. This improvement creates strong incentives for the UEs to cooperate and is attributed to the small distance between the D2D UEs and the reduction of $\mathcal{R}_{cell}$ due to the TDMA scheduling. In contrast, the cellular UEs are more likely to experience rates above 5 Gbps, owing to the large difference between the cellular and the D2D bandwidth. Specifically, it is possible for a cellular UE to associate with a BS with low or even zero load and fully exploit the high cellular bandwidth, while a D2D UE is always limited by the $20\%$ fraction of bandwidth that is reserved for D2D communication.
In Fig. \ref{fig:hd_dac_delay}, we illustrate for the HD-DAC policy the CDFs of the cellular delay $D_{cell}$, the D2D delay $D_{d2d}^{hd}$, and the total delay $D$ that is experienced by a UE without conditioning on its content request. We observe that $D_{d2d}^{hd}$ is significantly lower than $D_{cell}$, which is consistent with Fig. \ref{fig:hd_dac_rate}, while the curve of $D$ starts at the value $0.286$ due to the zero delay of cache hits. We further observe that the simulations for $D_{cell}$ do not match the theoretical curve as tightly as in the case of $\mathcal{R}_{cell}$, which is attributed to the reciprocal relation between the rate and the delay that magnifies the approximation error for the delay. Nevertheless, the match is improved in the case of the total delay due to the contribution of the D2D delay, which is approximated more accurately.
\subsubsection{Performance of the FD-DAC policy}
\label{sec:performance_FD_DAC}
In Fig. \ref{fig:fd_dac}, we illustrate for the FD-DAC policy the rate and the delay distribution for $\delta=1$, $K=200$, $\xi=0.4$, and a second order approximation for $\mathcal{L}_{\hat{I}}^{fd}(s)$. As seen in Fig. \ref{fig:fd_dac_rate}, both bounds for the CCDF of $\mathcal{R}_{d2d}^{fd}$ are very close to the simulation curve; hence, only the upper bound is considered for $D_{d2d}^{fd}$ in Fig. \ref{fig:fd_dac_delay}.
Compared with the HD-DAC policy, the FD-DAC policy yields a minor improvement in the 50-th percentile of $\mathcal{R}_{d2d}^{fd}$, which exceeds the percentile of $\mathcal{R}_{cell}$ by 1.62 Gbps; the improvement is attributed to the absence of the HD factor that halves $\mathcal{R}_{d2d}^{hd}$. Nevertheless, the probability of bidirectional content exchange, equal to $0.08$ for the considered parameters, is too small to significantly influence the results. The same observation holds for the CDFs of the content retrieval delay.
Motivated by the previous observation, in Fig. \ref{fig:fd_dac_high_xi}, we illustrate for the FD-DAC policy the rate and the content retrieval delay for $\xi=1.0$, in which case $h_{dac}=0.44$, resulting in a non-negligible probability of bidirectional content exchange. As seen in Fig. \ref{fig:fd_dac_rate_high_xi}, $\mathcal{R}_{d2d}^{fd}$ is reduced due to the higher D2D interference, while $\mathcal{R}_{cell}$ is significantly improved due to the higher offloading. Consequently, $\mathcal{R}_{d2d}^{fd}$ is higher than $\mathcal{R}_{cell}$, and the total delay is determined by the cache hits and the curve of the cellular delay,
as seen in Fig. \ref{fig:fd_dac_delay_high_xi}. Since the FD-DAC and the HD-DAC policies differ appreciably only when $h_{dac}$ is high, in which case the performance is not influenced by the D2D communication, only the HD-DAC policy is considered in the delay comparison with the MPC policy.
\begin{figure}[!t]
\centering
\subfloat[Rate]{
\includegraphics[width=1.6in]{dachd_rates_sim_vs_ana}
\label{fig:hd_dac_rate}
} \hfil
\subfloat[Delay]{
\includegraphics[width=1.6in]{dachd_delays_sim_vs_ana}
\label{fig:hd_dac_delay}
}
\caption{Rate and delay performance of the HD-DAC policy for $\delta=1$, $K=200$ and $\xi=0.4$ (Ana. stands for Analysis and Sim. for Simulation).}
\label{fig:hd_dac}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[Rate]{
\includegraphics[width=1.6in]{dacfd_rates_sim_vs_ana}
\label{fig:fd_dac_rate}
} \hfil
\subfloat[Delay]{
\includegraphics[width=1.6in]{dacfd_delays_sim_vs_ana}
\label{fig:fd_dac_delay}
}
\caption{Rate and delay performance of the FD-DAC policy for $\delta=1$, $K=200$, and $\xi=0.4$ (Ana. stands for Analysis, Sim. for Simulation, UB for Upper Bound, and LB for Lower Bound).}
\label{fig:fd_dac}
\end{figure}
\begin{figure}[!t]
\centering
\subfloat[Rate]{
\includegraphics[width=1.6in]{dacfd_rates_sim_vs_ana_high_xi}
\label{fig:fd_dac_rate_high_xi}
} \hfil
\subfloat[Delay]{
\includegraphics[width=1.6in]{dacfd_delays_sim_vs_ana_high_xi}
\label{fig:fd_dac_delay_high_xi}
}
\caption{Rate and delay performance of the FD-DAC policy for $\delta=1$, $K=200$, and $\xi=1.0$ (Ana. stands for Analysis, Sim. for Simulation, UB for Upper Bound, and LB for Lower Bound).}
\label{fig:fd_dac_high_xi}
\end{figure}
\subsubsection{Delay Comparison between the MPC and the HD-DAC policy}
The MPC policy maximizes the probability of zero delay through cache hits, but the HD-DAC policy may still offer lower delays due to the improvement in the transmission rates. Based on this observation, the two policies are compared in terms of the 90-th percentile of the content retrieval delay, which is an important QoS metric, representing the maximum delay that is experienced by the target UE for 90\% of the time.
In Fig. \ref{fig:hd_delay_90th_percentile}, we plot the delay percentiles for the HD-DAC and the MPC policy as a function of the popularity exponent $\xi$ for the cases: a) $K=50$, b) $K=100$, and c) $K=200$. As a general observation, the 90-th percentile of delay for both policies decreases with higher values of $K$, since both the hit probability and, in the case of the HD-DAC policy, the probability of D2D content exchange, are higher. The delay percentile of the HD-DAC policy also decreases with $\delta$, since the opportunities for D2D communication are improved with a larger number of D2D pairs, while the MPC policy is not affected. In Fig. \ref{fig:hd_delay_90th_percentile_K50}, the performance is comparable between the HD-DAC policy with $\delta=1.0$, and the MPC policy, for $\xi<1.0$. In Fig. \ref{fig:hd_delay_90th_percentile_K100}, the performance is comparable between the HD-DAC policy with $\delta=0.75$, and the MPC policy, for $\xi<0.8$. In Fig. \ref{fig:hd_delay_90th_percentile_K200}, the performance is comparable between the HD-DAC policy with $\delta=0.5$, and the MPC policy, for $\xi<0.4$. Based on these observations, we conclude that, for low values of $\xi$, the HD-DAC policy is favored by larger UE caches and requires fewer D2D pairings to outperform the MPC policy, while for high values of $\xi$, the MPC policy is favored by larger UE caches due to the wide gap in the hit probabilities of the two policies, which justifies the superior performance of the MPC policy in these cases.
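For reference, reading a delay percentile off empirical Monte-Carlo samples is a one-line operation; the following Python sketch (with a hypothetical sample array) makes the convention explicit.
\begin{verbatim}
import numpy as np

def delay_percentile(delay_samples, q=90.0):
    """q-th percentile of the empirical content retrieval delay.

    Cache hits enter as zero-delay samples; a hit probability above
    (100 - q)/100 would therefore drive the percentile to zero."""
    return float(np.percentile(np.asarray(delay_samples), q))
\end{verbatim}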
\begin{figure}[!t]
\centering
\subfloat[$K=50$]{\includegraphics[width=3in]{hd_delay_90th_percentile_K50}
\label{fig:hd_delay_90th_percentile_K50}
}\hfil
\subfloat[$K=100$]{
\includegraphics[width=3in]{hd_delay_90th_percentile_K100}
\label{fig:hd_delay_90th_percentile_K100}
}\hfil
\subfloat[$K=200$]{
\includegraphics[width=3in]{hd_delay_90th_percentile_K200}
\label{fig:hd_delay_90th_percentile_K200}
}
\caption{The 90-th percentile of the content retrieval delay $D$ in terms of $\xi$ for a) $K=50$, b) $K=100$, and c) $K=200$.}
\label{fig:hd_delay_90th_percentile}
\end{figure}
\section{Conclusion}\label{section:conclusion}
In this work, we have proposed a novel policy for device caching that combines the emerging technologies of D2D and mmWave communication to enhance the offloading and the delay performance of the cellular network. Based on a stochastic-geometry modeling, we have derived the offloading gain and the distribution of the content retrieval delay for the proposed DAC policy and the state-of-the-art MPC policy, which does not exploit content exchange among the UEs. By comparing the two policies analytically and through Monte-Carlo simulations, we have shown that the proposed policy exhibits superior offloading and delay performance when the availability of pairs in the system is sufficiently high and the popularity distribution of the requested content is not excessively skewed. In addition, motivated by the prospect of bidirectional content exchange, we have presented an FD version of the proposed policy, which exhibits a small improvement over the HD version in terms of the delay performance, due to the low probability of bidirectional content exchange. According to the simulation results, increasing this probability does not yield a proportional improvement in performance due to the resulting prevalence of the cellular rate over the D2D rate, attributed to offloading.
As future work, we plan to generalize the proposed caching scheme to a policy that divides the cacheable content into an arbitrary number of groups, and to study the impact on performance.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section*{Acknowledgements}
The authors would like to cordially thank the editor and the anonymous reviewers for their constructive suggestions that helped to improve the quality of this work.
\appendices
\section{Proof of Proposition 1}
\label{appendix:proposition_1}
Denoting by $A$ and $B$ the users of a D2D pair and by $\mathcal{C}_A$ and $\mathcal{C}_B$ their caches, the hit probabilities $h_A$ and $h_B$ of the two users can be expressed as
\begin{gather}
h_A = \sum_{i \in \mathcal{C}_A} q_i \nonumber\\
h_B = \sum_{i \in \mathcal{C}_B} q_i,
\end{gather}
and the exchange probabilities $e_A$ and $e_B$ as
\begin{gather}
e_A = \sum_{i \in {\mathcal{C}_B \cap \overline{\mathcal{C}_A}}} q_i \nonumber\\
e_B = \sum_{i \in {\mathcal{C}_A \cap \overline{\mathcal{C}_B}}} q_i,
\end{gather}
where $\overline{\,\cdot\,}$ denotes the complement with respect to the set of the library contents.
To prove that $e_A$ and $e_B$ are maximized when $\mathcal{C}_A$ and $\mathcal{C}_B$ form a partition of the $2K$ most popular contents, we need to show that i) the optimal $\mathcal{C}_A$ and $\mathcal{C}_B$ do not overlap, i.e., $\mathcal{C}_A \cap \mathcal{C}_B = \emptyset$, and ii) the optimal $\mathcal{C}_A$ and $\mathcal{C}_B$ cover the $2K$ most popular contents, i.e., $\mathcal{C}_A \cup \mathcal{C}_B = \{i \in \mathbb{N}: 1 \leq i \leq 2 K \}$. We prove both i) and ii) by contradiction. Regarding i), if the optimal $\mathcal{C}_A$ and $\mathcal{C}_B$ contained a common content, say $c \in \mathcal{C}_A \cap \mathcal{C}_B $, we could simultaneously increase $e_A$ and $e_B$ by replacing $c$ in $\mathcal{C}_A$ with a content from $\overline{\mathcal{C}_A} \cap \overline{\mathcal{C}_B}$. Therefore, $\mathcal{C}_A$ and $\mathcal{C}_B$ must not overlap. Regarding ii), if $\mathcal{C}_A$ contained a content $c$ that did not belong to the $2K$ most popular, then we could replace $c$ with an uncached content from $\{i \in \mathbb{N}: 1 \leq i \leq 2 K \}$, which would increase $e_B$ and $h_A$, while leaving $e_A$ and $h_B$ unaffected. Therefore, if $\mathcal{C}_A$ and $\mathcal{C}_B$ form a partition of $\{i \in \mathbb{N}: 1 \leq i \leq 2 K \}$, neither $e_A$, $e_B$ nor $h_A$, $h_B$ can be increased simultaneously with a different partition.
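The claim can also be checked exhaustively on a toy instance. The following Python sketch (our brute-force check, with an arbitrarily chosen small library and the sum $e_A+e_B$ as one convenient scalarization) confirms that every maximizer is a partition of the $2K$ most popular contents.
\begin{verbatim}
from itertools import combinations

L, K, xi = 6, 2, 0.8                        # toy library; content 0 most popular
q = [(i + 1) ** -xi for i in range(L)]
q = [x / sum(q) for x in q]                 # Zipf popularity

def exchange_sum(cA, cB):
    eA = sum(q[i] for i in cB - cA)         # B can serve A's misses
    eB = sum(q[i] for i in cA - cB)         # A can serve B's misses
    return eA + eB

pairs = [(set(a), set(b))
         for a in combinations(range(L), K)
         for b in combinations(range(L), K)]
best = max(exchange_sum(a, b) for a, b in pairs)
for a, b in pairs:
    if exchange_sum(a, b) >= best - 1e-12:  # every maximizer ...
        assert a.isdisjoint(b) and a | b == set(range(2 * K))
\end{verbatim}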
\section{Proof of Proposition 2}
\label{appendix:hitprob_analysis}
The ratio of the hit probabilities of the two policies, $h_{ratio}$, is given by \eqref{hratio}, which we repeat here for easier reference:
\begin{equation}
h_{ratio} = \frac{h_{dac}}{h_{mpc}} = \frac{1}{2} \frac{\sum_{i=1}^{2K} i^{-\xi}}{\sum_{j=1}^{K} j^{-\xi}}.
\label{eq:hratio_def}
\end{equation}
To prove that $h_{ratio}$ decreases monotonically with $\xi$, we differentiate it with respect to $\xi$ as
\begin{align}
\frac{\partial h_{ratio}}{\partial \xi} & =
-\frac{1} {2 \left(\sum_{j=1}^{K} j^{-\xi}\right)^2}
\sum_{i=1}^{2K} \sum_{j=1}^{K} (ij)^{-\xi} \left[ \ln (i) - \ln(j) \right] = \nonumber\\
& = -\frac{1} {2 \left( \sum_{j=1}^{K} j^{-\xi}\right)^2}
\left[
\sum_{i=1}^{K} \sum_{j=1}^{K} (ij)^{-\xi} \left[ \ln (i) - \ln(j) \right] + \sum_{i=K+1}^{2K} \sum_{j=1}^{K} (ij)^{-\xi} \left[ \ln (i) - \ln( j) \right]
\right] \overset{(a)}= \nonumber\\
& = -\frac{1} {2 \left(\sum_{j=1}^{K} j^{-\xi}\right)^2}
\sum_{i=K+1}^{2K} \sum_{j=1}^{K} (ij)^{-\xi} \left[ \ln (i) - \ln(j) \right] \overset{(b)}< 0,
\end{align}
where $(a)$ follows because the first double sum vanishes by symmetry, and $(b)$ follows because $\ln(i)>\ln(j)$ for the remaining indices. Since the derivative of $h_{ratio}$ with respect to $\xi$ is negative, $h_{ratio}$ decreases monotonically with $\xi$.
To prove that $h_{ratio}$ decreases monotonically with $K$, we need to show that $h_{ratio}(K+1) < h_{ratio}(K)$. Introducing the notation $\mathsf{S}_K \triangleq \sum_{i=1}^{K} i^{-\xi}$ for clarity, the aforementioned inequality is transformed as
\begin{equation}
h_{ratio}(K+1) < h_{ratio}(K)
\, \Leftrightarrow \,
\frac{1}{2}\frac{\mathsf{S}_{2K+2}}{\mathsf{S}_{K+1}} < \frac{1}{2} \frac{\mathsf{S}_{2K}}{\mathsf{S}_K}
\, \Leftrightarrow \,
\frac{\mathsf{S}_{2K} + (2K+1)^{-\xi} + (2K+2)^{-\xi}}{\mathsf{S}_{K} + (K+1)^{-\xi}} < \frac{\mathsf{S}_{2K}}{\mathsf{S}_K}
\end{equation}
Manipulating the inequality yields
\begin{equation}
\frac{\mathsf{S}_{2K}}{\mathsf{S}_{K}} > \left( \frac{2 K+1}{K+1} \right)^{-\xi} + 2^{-\xi}.
\end{equation}
Splitting the odd and even indexes in $\mathsf{S}_{2K}$ as
\begin{equation}
\mathsf{S}_{2K} = \sum_{i=1}^{K} (2 i-1)^{-\xi} + \sum_{i=1}^{K} (2 i)^{-\xi}
= \sum_{i=1}^{K} (2 i-1)^{-\xi} + 2^{-\xi} \mathsf{S}_{K},
\end{equation}
the inequality is further simplified to
\begin{equation}
\frac{ \sum_{i=1}^{K} (2 i-1)^{-\xi} }{ \sum_{i=1}^{K} i^{-\xi} } > \left( \frac{2 K+1}{K+1} \right)^{-\xi}
\, \Leftrightarrow \,
\sum_{i=1}^{K} \left( \frac{2 i-1}{2 K+1} \right)^{-\xi} >
\sum_{i=1}^{K} \left( \frac{i}{ K+1} \right)^{-\xi}
\end{equation}
Comparing the sums term-by-term, the inequality holds provided that
\begin{equation}
\frac{2 i-1}{2 K+1} < \frac{i}{K+1} \, \Leftrightarrow \, i<K+1.
\end{equation}
Since the final inequality is true and all the steps of the derivation were reversible, the initial inequality is also proven.
To calculate the limit of $h_{ratio}$ for high values of $K$, we distinguish the cases $\xi >1$ and $\xi \leq 1$.
\begin{itemize}
\item For $\xi>1$, the sums in \eqref{eq:hratio_def} converge as $K \to \infty$, yielding
\begin{equation}
\lim_{K \to \infty} h_{ratio} = \frac{1}{2} \frac{\zeta(\xi)}{\zeta(\xi)} = \frac{1}{2},
\label{eq:hratio_lim_high_xi}
\end{equation}
where $\zeta(\cdot)$ is the Riemann zeta function.
\item For $\xi \leq 1$, the sums in \eqref{eq:hratio_def} diverge as $K \to \infty$; nevertheless, the limit can be calculated through an asymptotic expression of the sums, based on the Euler--Maclaurin summation formula \cite{general:analytic_number_theory}. According to this formula, the discrete sum can be approximated with a continuous integral as
\begin{equation}
\sum_{i=1}^{K} i^{-\xi} \sim \int_{1}^K i^{-\xi} di + \epsilon(\xi) =
\begin{cases} \frac{K^{1-\xi}-1}{1-\xi} + \epsilon(\xi) & \mbox{if } \xi < 1 \\
\ln (K) + \epsilon(1) & \mbox{if } \xi = 1
\end{cases}, \label{eq:euler-mclaurin}
\end{equation}
where $\epsilon(\xi)$ represents the asymptotic error of the approximation, also known as the \textit{generalized Euler constant}\footnote{For the special case $\xi=1$, $\epsilon(1)$ is the well-known Euler--Mascheroni constant, equal to $0.57721\ldots$}. Applying \eqref{eq:euler-mclaurin} to \eqref{eq:hratio_def} yields
\begin{equation}
\lim_{K \to \infty} h_{ratio} =
\begin{cases}
\lim_{K \to \infty} \frac{1}{2} \frac{\frac{(2K)^{1-\xi}-1}{1-\xi} + \epsilon(\xi)} {\frac{K^{1-\xi}-1}{1-\xi} + \epsilon(\xi)} =
2^{-\xi} & \mbox{if } \xi < 1 \\
\lim_{K \to \infty} \frac{1}{2} \frac{\ln (2K) + \epsilon(1)} {\ln (K) + \epsilon(1)} =
\frac{1}{2} & \mbox{if } \xi = 1 \end{cases} . \label{eq:hratio_lim_small_xi}
\end{equation}
\end{itemize}
Finally, the limits in \eqref{eq:hratio_lim_high_xi} and \eqref{eq:hratio_lim_small_xi} can be combined in one compact expression as
\begin{equation}
\lim_{K \to \infty} h_{ratio} = \max\left(2^{-\xi}, \frac{1}{2}\right), \quad \xi \geq 0.
\end{equation}
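These properties are easy to verify numerically; a short Python sketch is given below (our spot checks only, with arbitrarily chosen values of $K$ and $\xi$).
\begin{verbatim}
import numpy as np

def h_ratio(K, xi):
    s = np.arange(1.0, 2 * K + 1) ** -xi
    return 0.5 * s.sum() / s[:K].sum()

assert h_ratio(100, 0.6) < h_ratio(100, 0.5)     # decreasing in xi
assert h_ratio(101, 0.6) < h_ratio(100, 0.6)     # decreasing in K
for xi in (0.3, 1.0, 1.7):                       # convergence to max(2^-xi, 1/2)
    print(xi, h_ratio(10 ** 6, xi), max(2.0 ** -xi, 0.5))
\end{verbatim}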
\section{Derivation of the CCDF of the cellular SINR}
\label{appendix:cellular_SINR}
Defining
\begin{equation}
J_1(T,a) \triangleq \int_0^{\infty} \frac{2r}{r_{cell}^2}e^{-\left(\frac{r}{r_{cell}}\right)^2} e^{-\hat{N}Tr^{a}}dr
\label{eq:J1}
\end{equation}
and
\begin{equation}
J_2(T,a) \triangleq \int_0^{\infty} \frac{2r}{r_{cell}^2}
e^{-\left(\frac{r}{r_{cell}}\right)^2}
e^{-\frac{r}{r_{los}}}
e^{-\hat{N}Tr^a} dr,
\label{eq:J2}
\end{equation}
the CCDF of the cellular SINR is expressed as
\begin{equation}
\text{P}(SINR_{cell}>T) \approx \text{P}(SNR_{cell}>T) = J_1(T,a_{N})+J_2(T,a_{L})-J_2(T,a_N).
\label{eq:cell_sinr_appendix}
\end{equation}
Since the integrals in \eqref{eq:J1} and \eqref{eq:J2} cannot be evaluated in closed form, they are approximated as follows.
\begin{itemize}
\item Regarding $J_1$, the exponential term $e^{-\left(\frac{r}{r_{cell}}\right)^2}$ is approximated with a piecewise linear function as
\begin{equation}
e^{-\left(\frac{r}{r_{cell}}\right)^2} \approx
\left\{
\begin{array}{ll}
1-\frac{r}{r_1} & \text{for }r \leq r_1 \\
0 & \text{for }r>r_1
\end{array}
\right.,
\end{equation}
yielding
\begin{equation}
J_1(T,a) \approx \int_0^{r_1} \frac{2r}{r_{cell}^2}\left(1-\frac{r}{r_1}\right) e^{-\hat{N}Tr^{a}}dr=
\frac{2}{a r_{cell}^2} \left(
\frac{\gamma\left(\frac{2}{a}, \hat{N} T r_1^{a}\right)}{(\hat{N}T)^{\frac{2}{a}}} -
\frac{\gamma\left(\frac{3}{a}, \hat{N} T r_1^{a}\right)}{r_1 (\hat{N}T)^{\frac{3}{a}}} \right)
\label{eq:J1_approx},
\end{equation}
where $r_1$ is chosen so that the approximated value of $J_1$ is exact for $T=0$, i.e.,
\begin{equation}
J_1(0,a)=\int_0^{\infty} \frac{2r}{r_{cell}^2}e^{-\left(\frac{r}{r_{cell}}\right)^2}dr =
\int_0^{r_1} \frac{2r}{r_{cell}^2}\left(1-\frac{r}{r_1}\right)dr \mbox{ }\Rightarrow\mbox{ } r_1= \sqrt{3}r_{cell}.\label{eq:r1_derivation}
\end{equation}
\item Regarding $J_2$, the exponential term $e^{-\left(\frac{r}{r_{cell}}\right)^2}e^{-\frac{r}{r_{los}}}$ is approximated with a quadratic function as
\begin{equation}
e^{-\left(\frac{r}{r_{cell}}\right)^2}e^{-\frac{r}{r_{los}}} \approx
\left\{
\begin{array}{ll}
\left(1-\frac{r}{r_{2}}\right)^2 & \text{for }r \leq r_{2} \\
0 & \text{for }r>r_{2}
\end{array}
\right.,
\end{equation}
yielding
\begin{align}
&J_2(T,a)\approx \int_0^{\infty} \frac{2r}{r_{cell}^2} \left(1-\frac{r}{r_{2}}\right)^2 e^{-\hat{N}Tr^a} dr = \\&= \frac{2}{a r_{cell}^2} \left(
\frac{\gamma\left(\frac{2}{a}, \hat{N} T r_2^{a}\right)}{(\hat{N}T)^{\frac{2}{a}}} -
2\frac{\gamma\left(\frac{3}{a}, \hat{N} T r_2^{a}\right)}{r_2 (\hat{N}T)^{\frac{3}{a}}} +
\frac{\gamma\left(\frac{4}{a}, \hat{N} T r_2^{a}\right)}{r_2^2 (\hat{N}T)^{\frac{4}{a}}}\right), \label{eq:J2_approx}
\end{align}
where $r_2$ is chosen so that the approximated value of $J_2$ is exact for $T=0$, i.e.,
\begin{align}
J_2(0,a)&= \int_0^{\infty} \frac{2r}{r_{cell}^2}
e^{-\left(\frac{r}{r_{cell}}\right)^2}
e^{-\frac{r}{r_{los}}} dr =
\int_0^{r_2} \frac{2r}{r_{cell}^2}\left(1-\frac{r}{r_{2}}\right)^2dr
\nonumber\\ & \Rightarrow \mbox{ }
r_2=\sqrt{6} \sqrt{1-\sqrt{\pi}\frac{r_{cell}}{2 r_{los}} e^{\left(\frac{r_{cell}}{2 r_{los}}\right)^2} \text{erfc}\left(\frac{r_{cell}}{2r_{los}}\right)} r_{cell}.\label{eq:r2_derivation}
\end{align}
\end{itemize}
Combining \eqref{eq:J1_approx} and \eqref{eq:J2_approx} into \eqref{eq:cell_sinr_appendix} yields the final result.
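The accuracy of these surrogates is easy to probe numerically. The following Python sketch (our own check, in normalized units with $r_{cell}=1$ and $\hat{N}=1$, rather than part of the derivation) compares direct quadrature of \eqref{eq:J1} with the closed form \eqref{eq:J1_approx}.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma, gammainc

r_cell, a, N_hat = 1.0, 3.0, 1.0
r1 = np.sqrt(3.0) * r_cell

def lower_gamma(s, x):                 # unnormalized lower incomplete gamma
    return gammainc(s, x) * gamma(s)

def J1_exact(T):                       # direct quadrature of Eq. (J1)
    f = lambda r: (2*r/r_cell**2)*np.exp(-(r/r_cell)**2)*np.exp(-N_hat*T*r**a)
    return quad(f, 0.0, np.inf)[0]

def J1_closed(T):                      # piecewise-linear surrogate, Eq. (J1_approx)
    x = N_hat * T * r1 ** a
    return (2.0/(a*r_cell**2)) * (lower_gamma(2/a, x)/(N_hat*T)**(2/a)
                                  - lower_gamma(3/a, x)/(r1*(N_hat*T)**(3/a)))

for T in (0.5, 1.0, 5.0):
    print(T, J1_exact(T), J1_closed(T))
\end{verbatim}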
\section{Derivation of the Laplace transform of the HD D2D interference}
\label{appendix:hd-dac}
The D2D interference in the HD-DAC policy is given by
\begin{equation}
\hat{I} = \sum_{x \in \Phi_{d2d}^{hd}} g_x \eta_x r_x^{-a_x}
\label{eq:appD:interf_definition}
\end{equation}
where $\Phi_{d2d}^{hd}$ is the PPP of the D2D interferers, with intensity
\begin{equation}
\lambda_{d2d}^{hd}=\left(1-(1-h_{dac})^2 \right) \lambda_{p}=\frac{\delta}{2} h_{dac}\left(2-h_{dac} \right) \lambda_{ue}.
\end{equation}
Based on \eqref{eq:appD:interf_definition}, the Laplace transform of the HD D2D interference, denoted by $\mathcal{L}_{\hat{I}}^{hd}(s)$, is derived as
\begin{align}
&\mathcal{L}_{\hat{I}}^{hd}(s)=\mathbb{E}[e^{-\hat{I} s}] =
\mathbb{E}_{\Phi_{d2d}^{hd}}
\left[\prod_{x \in \Phi_{d2d}^{hd}} \mathbb{E}\left[e^{- g_x \eta_x r_x^{-a_x} s}\right]\right]\overset{(i)}= \nonumber\\
=&e^{-\lambda_{d2d}^{hd}\int_0^{2\pi} \int_0^{\infty}\left(1-\mathbb{E}\left[e^{- g \eta r^{-a} s}\right]\right) r dr d\phi } \overset{(ii)}=
e^{-\pi \delta h_{dac}\left(2-h_{dac} \right) \lambda_{ue} \int_0^{\infty} \left(1-\mathbb{E} \left[\frac{1}{1+ g r^{-a} s} \right]\right) r dr },
\label{eq:appD:interf_laplace}
\end{align}
where $(i)$ follows from the probability generating functional (PGFL) of the PPP \cite{stochgeom:haenggi_book}, and $(ii)$ from the Laplace transform of the exponential random variable.
Defining
\begin{align}
J_3(s,a) \triangleq \int_0^{\infty} \frac{g r^{-a} s}{1+g r^{-a} s} r dr
\end{align}
and
\begin{align}
J_4\left(s,a \right) \triangleq \int_0^{\infty} \frac{e^{-\frac{r}{r_{los}}}}{1+g r^{-a} s} r dr,
\end{align}
the integral in the exponent of \eqref{eq:appD:interf_laplace} can be expressed as
\begin{align}
& \int_0^{\infty} \left(1-\mathbb{E} \left[\frac{1}{1+ g r^{-a} s} \right]\right) r dr =
\mathbb{E}_g \left[ \int_0^{\infty}
\left(
1 -
\frac{e^{-\frac{r}{r_{los}}}}{1+ g r^{-a_L} s}
- \frac{1-e^{-\frac{r}{r_{los}}}}{1+ g r^{-a_N} s}
\right) r dr
\right] = \nonumber\\
= &
\mathbb{E}_g \left[
\int_0^{\infty} \left( 1 - \frac{1}{1+ g r^{-a_N} s} \right) r dr +
\int_0^{\infty} \frac{e^{-\frac{r}{r_{los}}}}{1+ g r^{-a_N} s} r dr -
\int_0^{\infty} \frac{e^{-\frac{r}{r_{los}}}}{1+ g r^{-a_L} s} r dr
\right] = \nonumber\\
= & \mathbb{E}_{g} \left[ J_3(s,a_N) + J_4(s,a_N)-J_4(s,a_L)\right]
\label{eq:appD:integral_in_exponent}
\end{align}
Subsequently, $J_3(s,a)$ is evaluated in closed form as
\begin{equation}
J_3(s,a) = \frac{1}{2}
\Gamma \left(1-\frac{2}{a}\right)
\Gamma \left(1+\frac{2}{a}\right)
(gs)^{\frac{2}{a}},
\label{eq:appD:psi3}
\end{equation}
while $J_4(s,a)$ is derived through the approximation
\begin{equation}
e^{-\frac{r}{r_{los}}} \approx \left(1-\frac{r}{r_4}\right)^k,
\label{eq:appD:approximation}
\end{equation}
yielding
\begin{align}
J_4\left(s,a \right) \approx \int_0^{r_4} \frac{
\left(1-\frac{r}{r_4}\right)^k
}{1+g r^{-a} s} r dr =
\sum_{l=0}^k {k \choose l} \frac{(-1)^l}{r_4^l}
\int_0^{r_4} \frac{
r^{l+1}
}{1+g r^{-a} s} dr = \nonumber\\
\sum_{l=0}^k {k \choose l} (-1)^l \frac{r_4^{a+2}}{(l+a+2)gs}
{}_2 F_1\left(1,1+\frac{l+2}{a};2+\frac{l+2}{a};-\frac{r_4^a}{gs}\right),
\label{eq:appD:psi4}
\end{align}
where $r_4$ is chosen so that the approximated value of $J_4$ is exact for $s=0$, i.e.,
\begin{equation}
J_4(0,a) = \int_0^{\infty} e^{-\frac{r}{r_{los}}} r dr
= \int_0^{r_4} \left(1-\frac{r}{r_4}\right)^k r dr
\mbox{ } \Rightarrow \mbox{ }
r_4 = \sqrt{(k+1)(k+2)} r_{los}.
\label{eq:appD:approximation_r4}
\end{equation}
Applying \eqref{eq:appD:approximation_r4} to \eqref{eq:appD:approximation}, we also observe that the approximation becomes exact as $k$ grows, since
\begin{equation}
\lim_{k \to \infty} \left(1-\frac{r}{\sqrt{(k+1)(k+2)} r_{los}}\right)^k = \lim_{k \to \infty} \left(1-\frac{r}{k r_{los}}\right)^k = e^{-\frac{r}{r_{los}}}.
\end{equation}
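This convergence is also immediate to observe numerically (illustrative check at an arbitrary test point $r=0.7\,r_{los}$):
\begin{verbatim}
import numpy as np

r_los, r = 1.0, 0.7
for k in (2, 8, 32, 128):
    r4 = np.sqrt((k + 1) * (k + 2)) * r_los
    print(k, (1 - r / r4) ** k, np.exp(-r / r_los))   # -> e^{-r/r_los}
\end{verbatim}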
Finally, combining \eqref{eq:appD:psi3} and \eqref{eq:appD:psi4} into \eqref{eq:appD:integral_in_exponent}, and substituting the result into \eqref{eq:appD:interf_laplace}, yields the final result. Note that the remaining expectation over $g$ is straightforward, since $g$ is a discrete random variable with the distribution
\begin{equation}
g=
\left\{
\begin{array}{ll}
1 & \mbox{with probability } \frac{\Delta\theta_{ue}^2}{4\pi^2} \\
\frac{G_{ue}^{min}}{G_{ue}^{max}} & \mbox{with probability } \frac{2 \Delta\theta_{ue}(2\pi-\Delta\theta_{ue})}{4\pi^2} \\
\left(\frac{G_{ue}^{min}}{G_{ue}^{max}}\right)^2 & \mbox{with probability } \frac{(2\pi-\Delta\theta_{ue})^2}{4\pi^2}
\end{array}
\right..
\label{def:normed_link_gain_as_rv}
\end{equation}
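The averaging over $g$ thus reduces to a three-point sum; a minimal Python sketch (with the beamwidth and gain values of Table \ref{table:parameters}) is given below.
\begin{verbatim}
import numpy as np

dth = np.deg2rad(30.0)                  # Delta theta_ue
ratio = 10.0 ** (-18.0 / 10.0)          # G_ue^min/G_ue^max = -18 dB, linear
g_vals = np.array([1.0, ratio, ratio ** 2])
g_prob = np.array([dth ** 2,
                   2 * dth * (2 * np.pi - dth),
                   (2 * np.pi - dth) ** 2]) / (4 * np.pi ** 2)
assert np.isclose(g_prob.sum(), 1.0)

def expect_over_g(fun):
    """E_g[fun(g)], as needed in Eq. (appD:integral_in_exponent)."""
    return float(np.sum(g_prob * fun(g_vals)))
\end{verbatim}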
\section{Derivation of the Laplace transform of the FD interference}
\label{appendix:fd-dac}
The D2D interference in the FD-DAC policy is given by
\begin{equation}
\hat{I} = \sum_{x \in \Phi_{d2d}^{fd(1)}} g_x \eta_x r_x^{-a_x} + \sum_{y \in \Phi_{d2d}^{fd(2)}} g_y \eta_y r_y^{-a_y},
\label{eq:appE:fd_interf_definition}
\end{equation}
where $\Phi_{d2d}^{fd(1)}$ and $\Phi_{d2d}^{fd(2)}$ are the PPPs of the D2D interferers, both with intensity
\begin{equation}
\lambda_{d2d}^{fd}=h_{dac} \lambda_{p}=\frac{\delta}{2} h_{dac} \lambda_{ue},
\end{equation}
but mutually dependent due to the D2D pairings. Based on \eqref{eq:appE:fd_interf_definition}, the Laplace transform of the D2D interference in the FD-DAC policy, denoted by $\mathcal{L}_{\hat{I}}^{fd}(s)$, is expressed as
\begin{align}
&\mathcal{L}_{\hat{I}}^{fd}(s)=
\mathbb{E} \left[e^{-\hat{I} s}\right] =
\mathbb{E} \left[
\prod_{x \in \Phi_{d2d}^{fd(1)}}
e^{- g_x \eta_x r_x^{-a_x} s}\cdot
\prod_{y \in \Phi_{d2d}^{fd(2)}}
e^{- g_y \eta_y r_y^{-a_y} s}
\right].
\label{eq:appE:fd_interf_laplace}
\end{align}
Due to the dependence between $\Phi_{d2d}^{fd(1)}$ and $\Phi_{d2d}^{fd(2)}$, \eqref{eq:appE:fd_interf_laplace} cannot be evaluated in closed form; nevertheless, it can be approximated with the following bounds\cite{fd:stoch_geom_anal}.
\begin{itemize}
\item From the FKG inequality,
\begin{align}
&\mathcal{L}_{\hat{I}}^{fd}(s) \geq
\mathbb{E} \left[
\prod_{x \in \Phi_{d2d}^{fd(1)}}
e^{- g_x \eta_x r_x^{-a_x} s}
\right] \cdot
\mathbb{E} \left[
\prod_{y \in \Phi_{d2d}^{fd(2)}}
e^{- g_y \eta_y r_y^{-a_y} s} \right]
= \nonumber\\ & =
e^{- \pi \delta \lambda_{ue} h_{dac} 2 \mathbb{E}_g
\left[
J_3\left(s,a_N \right) + J_4\left(s,a_N \right) - J_4\left(s,a_L\right)
\right]}
\label{eq:appE:fd_interf_laplace_fkg_bound}
\end{align}
\item From the Cauchy-Schwarz inequality,
\begin{align}
&\mathcal{L}_{\hat{I}}^{fd}(s) \leq
\sqrt{
\mathbb{E} \left[
\prod_{x \in \Phi_{d2d}^{fd(1)}}
e^{- 2 g_x \eta_x r_x^{-a_x} s}
\right] \cdot
\mathbb{E} \left[
\prod_{y \in \Phi_{d2d}^{fd(2)}}
e^{- 2 g_y \eta_y r_y^{-a_y} s} \right]
}
= \nonumber\\& = e^{-\pi \delta \lambda_{ue} h_{dac} \mathbb{E}_g
\left[
J_3\left(2s,a_N \right) + J_4\left(2s,a_N\right) - J_4\left(2s,a_L\right)
\right]}
\label{eq:appE:fd_interf_laplace_cs_bound}
\end{align}
\end{itemize}
In \eqref{eq:appE:fd_interf_laplace_fkg_bound} and \eqref{eq:appE:fd_interf_laplace_cs_bound}, the functions $J_3(s,a)$ and $J_4(s,a)$ are given by \eqref{eq:appD:psi3} and \eqref{eq:appD:psi4} respectively.
\bibliographystyle{IEEEtran}
\section{Introduction}
The explosive increase of Electronic Medical Records (EMR) provides many opportunities to carry out data science research by applying data mining and machine learning tools and techniques. EMR contain massive amounts of information on patients concerning different aspects of healthcare, such as patient conditions, diagnostic tests, labs, imaging exams, genomics, proteomics, treatments, outcomes, claims, and financial records \cite{IEEEhowto:1}. In particular, this extensive, patient-centered data enables data scientists and medical researchers to conduct research in the field of personalized (precision) medicine. Personalized medicine is defined as \cite{IEEEhowto:2}: ``the use of combined knowledge (genetic or otherwise) about a person to predict disease susceptibility, disease prognosis, or treatment response and thereby improve that person's health.'' In other words, the goal of precision medicine or personalized healthcare is to provide ``the right treatment to the right patient at the right time''. Personalized medicine is a multi-disciplinary area that combines data science tools and statistical techniques with medical knowledge to develop tailor-made treatment, prevention and intervention plans for individual patients.
In this study we focus on a vulnerable demographic subgroup (African-Americans) at high risk for hypertension (HTN), poor blood pressure control and, consequently, adverse pressure-related cardiovascular complications. We use left ventricular mass indexed to body surface area (LVMI) as an indicator of heart damage risk. The ability to reduce LVMI would lead to an improvement in hypertension-related cardiovascular outcomes, which, in turn, would further diminish the excess risk of cardiovascular disease complications in African-Americans.
Based on individual clinical data with many features, our objective is to identify and prioritize personalized features in order to control and predict LVMI and thereby decrease the risk of heart disease. First, we focus on selecting significant features among the many available ones, such as demographic characteristics, previous medical history, patient medical condition, laboratory test results, and Cardiovascular Magnetic Resonance (CMR) results, which are used to determine LVMI and its reduction with treatment over time. Second, after feature selection, we apply supervised machine learning methods to predict LVMI as a regression model. Third, this prediction method can be implemented as a decision support system (DSS) to assist medical doctors in controlling LVMI based on significant risk factors.
Figure \ref{Fig1} illustrates our approach in three consecutive steps: an integrated feature selection model that applies unsupervised learning to produce higher-level abstractions of the input data, which are then used as the input of a feature selection method for ranking important features. In the final step, a supervised learning method is implemented for forecasting LVMI and for model evaluation. Based on the model evaluation results, steps $2$ and $3$ are applied iteratively to finalize the most important features (risk factors).
In this paper, we develop a new feature selection and representation method using a deep learning algorithm for personalized medicine. In our method, we apply stacked auto-encoders \cite{IEEEhowto:10} as a deep architecture for feature abstraction at higher levels. To our knowledge, this is one of the first methods that uses stacked auto-encoders for feature learning and selection; the LVMI prediction approach developed in this research is also a new model of risk forecasting for cardiovascular and other health conditions.
\begin{figure*}
\centering
\includegraphics[scale= 0.22]{SAFS1}
\caption{An Illustration of the Three Consecutive Steps for our SAFS Approach}\label{Fig1}
\end{figure*}
The rest of this paper is organized as follows. Section II reviews the related works in deep feature selection methods. Section III explains deep learning overview and stacked auto-encoders. Section IV describes the developed feature selection method and prediction approach. Section V reports the personalized data and implementation results and finally section VI discusses about the results and conclusion.
\section{Related Works}
Deep learning, including feature representation and predictive modeling, has been researched and applied in a number of areas, such as computer vision, remote sensing, natural language processing and bioinformatics. The main reasons for these extensive applications are improved prediction accuracy, the capability of modeling the processes of complex systems, the generation of high-level feature representations, and more robust models [12]. In this literature review, we focus on papers that develop feature selection models using deep learning algorithms. Among many related applications, few researchers have carried out feature selection using deep learning.
Zhou et al. \cite{IEEEhowto:11} used deep feature selection methods in natural language processing and sentiment classification. They proposed a novel semi-supervised learning algorithm called active deep network (ADN) using restricted Boltzmann machines (RBMs). They applied unsupervised learning based on labeled reviews and abundant unlabeled reviews, and then used active learning to identify and select reviews that should be labeled as training data. Also in \cite{IEEEhowto:13}, a semi-supervised learning algorithm based on deep belief networks, called DBNFS, was proposed for feature selection. The authors applied the method to a sentiment classification problem to reduce the high dimension of noisy vocabularies. In their method, the first few hidden layers of the DBN, which are computationally expensive, were replaced with a filter-based feature selection technique using chi-squared tests.
Deep feature selection was also applied to a remote sensing scene classification/recognition problem. Zou et al. \cite{IEEEhowto:14} developed a new feature selection method using deep learning that formulates feature selection as a feature reconstruction problem. Their method is implemented as an iterative algorithm that uses a DBN in an unsupervised way to train the required reconstruction weights. Finally, features with small reconstruction errors were selected as input features for classification.
In biomedical research, deep learning algorithms and tools have been applied in different areas, especially in precision medicine and personalized healthcare, using biological and genomics data \cite{IEEEhowto:6}.
Li et al. \cite{IEEEhowto:12} developed a deep feature selection (DFS) model for selecting input features in a deep neural network for multi-class data. They used the elastic net to add a sparse one-to-one linear layer between the input layer and the first hidden layer of a multi-layer perceptron, and selected the most important features according to their weights in the input layer obtained after training. They then applied their DFS model to the problem of enhancer-promoter interaction using genomics data. Their elastic-net-based method is a new shrinkage approach that is flexible enough to be applied in different deep learning architectures. In terms of performance, however, their method did not outperform random forest in the experimental implementation, which could be considered a major weakness.
Another study in the biomedical area \cite{IEEEhowto:15} used a DBN and unsupervised active learning to propose a new multi-level feature selection method for selecting genes/miRNAs based on expression profiles. The strength of the proposed approach is its better performance in comparison with several feature selection methods. However, for high-dimensional features it does not outperform some other methods, such as the Relief algorithm.
In \cite{IEEEhowto:16}, the authors showed how an unsupervised feature learning approach can be applied to cancer type classification from gene expression data. They proposed a two-phase model for feature engineering followed by classification. In the first phase, Principal Component Analysis (PCA) was used for dimension reduction of the feature space; in the second phase, deep sparse encoding was applied to the new data to obtain high-level abstractions for cancer diagnosis and classification. While their approach used stacked auto-encoders for representation learning, they focused on using deep feature extraction to achieve better accuracy in cancer type prediction rather than on finding significant factors in cancer diagnosis. Similarly, in research focused on the early detection of lung cancer \cite{IEEEhowto:17}, the authors used deep learning for feature extraction and applied a binary decision tree as a classifier to create a computer aided diagnosis (CAD) system that can automatically recognize and analyze lung nodules in CT images. They implemented only a single auto-encoder for data representation; using stacked auto-encoders would likely lead to higher accuracy.
In sum, feature selection based on deep learning techniques has been used in a wide range of areas, as reviewed above, which is evident from the increasing number of related studies in recent years (after 2013). In particular, many studies in biomedical applications used or developed deep learning methods for feature selection or feature extraction to reduce the dimensionality of data with many features, thereby increasing the performance of prediction and classification tasks.
Precision medicine has emerged as a data-rich area, with millions of patients being prospectively enrolled in the Precision Medicine Initiative (PMI). The data is big and noisy, thus demanding more sophisticated data science approaches. It can therefore be concluded that there are ample opportunities to develop and apply deep learning for feature selection in new data-rich areas such as precision medicine.
As discussed above, most related works take advantage of representation learning with a deep architecture. In many of them, a DBN was applied for data transformation, and comprehensive, systematic comparisons of performance among different deep architectures are lacking. In our approach, we use stacked auto-encoders with carefully selected architectures for feature selection, which can overcome this problem.
\section{DEEP LEARNING AND STACKED AUTO-ENCODERS}
\subsection{Introduction to deep learning and its applications}
Deep Learning, or Deep Machine Learning, is a class of machine learning algorithms that model input data at higher levels of abstraction by using a deep architecture with many hidden layers composed of linear and non-linear transformations \cite{IEEEhowto:5}\cite{IEEEhowto:7}\cite{IEEEhowto:19}.
In simple terms, deep learning means using a neural network with several layers of nodes between input and output, i.e., a deep architecture composed of multiple levels of non-linear operations, such as a neural net with many hidden layers. There are three different types of deep architectures: 1) feed-forward architectures (multilayer neural nets, convolutional nets), 2) feed-back architectures (de-convolutional nets), and 3) bi-directional architectures (deep Boltzmann machines, stacked auto-encoders) \cite{IEEEhowto:5}.
The choice of data representation or feature representation plays a significant role in the success of machine learning algorithms \cite{IEEEhowto:3}. For this reason, many efforts in developing machine learning algorithms focus on designing preprocessing mechanisms and data transformations for representation learning that enable more efficient machine learning algorithms \cite{IEEEhowto:3}.
Deep learning applications encompass many domains. The major ones are speech recognition, visual object recognition, object detection (e.g., face detection), and bioinformatics or biomedicine applications such as drug discovery and genomics \cite{IEEEhowto:7}. In biomedical and health science, advances in technology, information systems and research equipment have generated a large amount of data with many features and dimensions. Since deep learning has outperformed methods such as PCA or singular value decomposition in handling high-dimensional biomedical data, it has strong potential for dimension reduction and feature representation in biomedical and clinical research \cite{IEEEhowto:6}.
Deep Learning algorithms and tools have not been applied in biomedical and health science areas extensively. Deep supervised, unsupervised, and reinforcement learning methods can be applied in many complex biological and health personalized data and could play an important role in discovering and development of personalized healthcare and medicine \cite{IEEEhowto:6}.
\subsection{Introduction to stacked auto-encoders}
Among the different kinds of deep architectures mentioned in the previous section, four are especially popular in biomedical data analysis [6]: 1) the Convolutional Neural Network (CNN), which usually consists of one or more convolutional layers with sub-sampling layers, followed by fully connected layers as in a standard deep neural network; 2) the Restricted Boltzmann Machine (RBM), which comprises one visible layer and one layer of hidden units; 3) the Deep Belief Network (DBN), which is a generative graphical model consisting of multiple layers of hidden variables with connections between layers (but not between units within a layer); and 4) stacked auto-encoders, a neural network composed of multiple layers of sparse auto-encoders [5][8].
Training deep neural networks with multiple hidden layers is known to be challenging. The standard approach of gradient-based optimization with back-propagation, starting from randomly initialized weights, empirically yields poor training results when there are three or more hidden layers in the network [4]. Hinton et al. [18] developed a greedy layer-wise unsupervised learning algorithm for training the parameters of a DBN, using an RBM at the top of the deep architecture. Bengio et al. [10] used greedy layer-wise unsupervised learning to train a deep neural network whose layer building block is an auto-encoder instead of the RBM. These two greedy layer-wise algorithms are the major algorithms for training deep architectures based on DBNs and stacked auto-encoders. In this study we use stacked auto-encoders for representation learning. The stacked auto-encoders shown in Figure \ref{Fig2} are constructed by stacking multiple layers of auto-encoders.
An auto-encoder is trained to reconstruct its own inputs through encoding and decoding processes. Let us define $w^{(k,1)}$, $w^{(k,2)}$, $b^{(k,1)}$, $b^{(k,2)}$ as the parameters of the $k^{th}$ auto-encoder, i.e., the weights and biases of the encoding and decoding processes respectively. The encoding step of each layer is a forward pass, described mathematically as follows:
\begin{flalign}\label{eq1}
&a^{(k)}= f(z^{\left( k\right) }),&\\
&z^{(k+1)}= w^{\left( k,1\right) }a^{\left( k\right) }+b^{\left( k,1\right) }.&
\end{flalign}
In equation \ref{eq1}, $f(x)$ is an activation function for transforming the data. If $n$ denotes the position of the middle layer in the stacked auto-encoders, the decoding step applies the decoding stack of each auto-encoder as follows \cite{IEEEhowto:9}:
\begin{flalign}
&a^{(n+k)}= f(z^{\left(n+k\right) }),&\\
&z^{(n+k+1)}= w^{\left(n+k,2\right) }a^{\left(n+k\right) }+b^{\left(n+k,2\right) }.&
\end{flalign}
The training algorithm for obtaining the parameters of the stacked auto-encoders is based on the greedy layer-wise strategy \cite{IEEEhowto:10}, i.e., the auto-encoders are trained one-by-one through their encoding and decoding processes. By training this deep architecture, $a^{(n)}$ (the middle layer) gives the highest-level representation of the input \cite{IEEEhowto:9}. In the simplest case, when an auto-encoder with a sigmoid activation function has only one hidden layer and takes input $x$, the output of the encoding process is:
\begin{flalign}
&z= \mathrm{Sigmoid}_1(Wx+b).&
\end{flalign}
Therefore $z$ is the vector of transformed input in the middle layer. In the second step (decoding process), $z$ is transformed into the reconstruction $x^{'}$, i.e.,
\begin{flalign}
&x^{'}= \mathrm{Sigmoid}_2(W^{'}z+b^{'}).&
\end{flalign}
Finally, the auto-encoder is trained by minimizing the reconstruction error:
\begin{flalign}
&\mathrm{Loss}(x, x^{'})= \Vert x- x^{'}\Vert = &\notag\\
& \Vert x- \mathrm{Sigmoid}_2(W^{'} (\mathrm{Sigmoid}_1(Wx+b))+b^{'}) \Vert.&
\end{flalign}
Auto-encoders, with their rich representations, are an appropriate deep learning architecture for biomedical data \cite{IEEEhowto:6}. In the next section we will use stacked auto-encoders for feature learning and selection.
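To make the above concrete, the following minimal Python/NumPy sketch (ours, purely illustrative) trains one sigmoid auto-encoder by full-batch gradient descent on the squared reconstruction error; it assumes inputs scaled to $[0,1]$. Stacking follows the greedy layer-wise recipe: the codes produced by one trained encoder become the training inputs of the next.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_autoencoder(X, n_hidden, lr=0.5, epochs=2000):
    """One sigmoid auto-encoder trained on squared reconstruction error."""
    m, n = X.shape
    W1 = rng.normal(0.0, 0.1, (n, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, n)); b2 = np.zeros(n)
    for _ in range(epochs):
        Z  = sigmoid(X @ W1 + b1)          # encode
        Xr = sigmoid(Z @ W2 + b2)          # decode
        dXr = (Xr - X) * Xr * (1.0 - Xr)   # backprop through output sigmoid
        dZ  = (dXr @ W2.T) * Z * (1.0 - Z)
        W2 -= lr * Z.T @ dXr / m;  b2 -= lr * dXr.mean(axis=0)
        W1 -= lr * X.T @ dZ / m;   b1 -= lr * dZ.mean(axis=0)
    return W1, b1                          # encoder of this layer

# Greedy stacking: the codes of layer k feed the training of layer k+1.
X = rng.uniform(size=(91, 106))            # e.g. 91 patients x 106 features
W1, b1 = train_autoencoder(X, n_hidden=42)
codes = sigmoid(X @ W1 + b1)
\end{verbatim}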
\begin{figure}[H]
\centering
\includegraphics[scale= 0.2]{pic2}
\caption{Stacked Auto-Encoders}\label{Fig2}
\end{figure}
\section{METHODOLOGY}
The method developed in this study is an integrated feature selection method with stacked auto-encoders, which is called SAFS. The work flow of our approach is shown in Figure \ref{Fig3}, and it includes three major components as follows:
\begin{figure}[H]
\centering
\includegraphics[scale= 0.18]{Flow}
\caption{The Proposed SAFS Workflow}\label{Fig3}
\end{figure}
\subsection{Features Partitioning}
In the first step, we separate categorical features from continuous features when they co-exist in a data set. We focus on representation learning for the continuous features, since categorical features are often used as dummy variables in linear-model-based machine learning methods.
\subsection{Features Representation by Stacked Auto-Encoders}
The second step is the unsupervised learning stage. Continuous features are represented at a higher level of abstraction by deep stacked auto-encoders. As mentioned in Section III, stacked auto-encoders are one of the most popular deep architectures for representation learning, especially for biomedical data. The deep architecture of the stacked auto-encoders is considered with $5$ layers, as shown in Figure \ref{Fig4}.
\begin{figure}[H]
\centering
\includegraphics[scale= 0.3]{pic3}
\caption{A Deep Architecture for SAE}\label{Fig4}
\end{figure}
In this deep architecture, $N$ is the number of continuous features and $n$ is a parameter. The middle hidden layer has $N$ nodes, the same as the input and output layers, while the other two hidden layers have a variable number of nodes, $n$. The represented data is taken from the middle layer, and $n$ is determined in an iterative process between the unsupervised learning step and the supervised learning step.
\subsection{Supervised Learning}
In the third step, the represented continuous features are combined with the categorical features, and supervised learning is applied to the new dataset. It starts with feature selection, which can use any feature selection method, such as random forests. The selected important features are then fed into a supervised learner for regression or classification; after the training step, the model is evaluated by specific measures. If the results meet the criteria, the selected features are considered significant variables; if not, the model tries a different architecture for the stacked auto-encoders by changing the number of nodes ($n$) in the two hidden layers. This iterative process continues until the model converges to a stopping criterion or reaches a fixed number of iterations.
Our approach of using deep learning for unsupervised non-linear sparse feature learning leads to the construction of higher-level features for general feature selection methods and prediction tasks; a code sketch of the resulting loop is given below.
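The following compact Python/scikit-learn sketch is illustrative only: \texttt{encode} is a hypothetical placeholder for the middle-layer representation of step 2, and random forests play the role of the supervised learner.
\begin{verbatim}
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

def safs(X_cont, X_cat, y, widths, encode):
    """Try each hidden-layer width n; keep the representation with least MSE.

    X_cat holds the categorical features as dummy-coded numeric columns."""
    best = (np.inf, None, None)
    for n in widths:
        X = np.hstack([encode(X_cont, n), X_cat])
        rf = RandomForestRegressor(n_estimators=200, random_state=0)
        mse = -cross_val_score(rf, X, y, cv=5,
                               scoring="neg_mean_squared_error").mean()
        rf.fit(X, y)                       # importances rank the features
        if mse < best[0]:
            best = (mse, n, rf.feature_importances_)
    return best                            # (MSE, chosen n, feature ranking)
\end{verbatim}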
\section{IMPLEMENTATION FOR PERSONALIZED MEDICINE }
In this section we implement our methodology and demonstrate its performance on a precision medicine case, with the goal of finding significant risk factors. We show that the SAFS approach selects better features than feature selection applied to the raw (un-represented) data.
The data used in this paper are derived from a subgroup of African-Americans with hypertension and poor blood pressure control who are at high risk of cardiovascular disease. The data were obtained from patients enrolled in the emergency department of Detroit Receiving Hospital. Among the many features (more than 700), including demographic characteristics, previous medical history, patient medical condition, laboratory test results, and CMR results used to evaluate LVMI, $172$ remained after a pre-processing step: $106$ continuous variables and $66$ categorical variables, for $91$ patients. As mentioned in Section I, the goal is to find the most important risk factors that affect LVMI.
In this precision medicine case study, we used the H2O package in R and applied our approach to $150$ different deep stacked auto-encoders. We first used the random forest method for feature selection and supervised learning, with the Mean Squared Error (MSE) as the measure for regression evaluation. In each iteration, the random forest runs with a different number of trees, and we compared the average MSE over different runs for each deep architecture.
The results show that different deep architectures, obtained by changing the number of nodes in the hidden layers, yield different performance, since the representation learning produces different transformed data. According to Figure \ref{Fig5}, the deep stacked auto-encoders with $n=42$ nodes in the second and fourth hidden layers yield the least error (MSE $= 63.93$) among all architectures.
\begin{figure}[H]
\centering
\includegraphics[scale= 0.15]{result1}
\caption{Model Performance using Different Deep Architectures (stacked auto-encoders with different numbers of nodes in hidden layers)}\label{Fig5}
\end{figure}
By selecting the deep architecture with $n=42$, we ran the SAFS approach and compared it with the results generated by the random forest method on un-represented input data for different numbers of trees. The performance comparison is shown in Figure \ref{Fig6}. It is clear that our SAFS approach achieves better accuracy (lower error) across different numbers of trees.
\begin{figure}[H]
\centering
\includegraphics[scale= 0.25]{result2}
\caption{Performance Comparison of SAFS and Random Forest}\label{Fig6}
\end{figure}
Since LASSO-based approaches are also popular and effective for feature selection from high-dimensional data, in a second experiment we used LASSO for feature selection and prediction in the SAFS method, implemented in R using the package ``glmnet''. The results show that the best architecture with the least MSE is achieved for $n= 74$ nodes in the second and fourth hidden layers of the stacked auto-encoders (MSE $= 361.31$). Figure \ref{Fig7} shows the accuracy comparison of the SAFS approach and LASSO for different values of the tuning parameter $\lambda$, which determines the number of features to be selected. It is evident that the SAFS approach performs better for a wide range of $\lambda$ values, representing many feature selection scenarios.
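Such a $\lambda$ sweep can also be reproduced with Python/scikit-learn rather than the R packages used in our experiments; a minimal, illustrative sketch (in-sample MSE only, for brevity) follows.
\begin{verbatim}
import numpy as np
from sklearn.linear_model import Lasso

def lasso_sweep(X, y, lambdas):
    """Fit LASSO for each lambda; report MSE and surviving feature count."""
    rows = []
    for lam in lambdas:
        m = Lasso(alpha=lam, max_iter=10000).fit(X, y)
        mse = float(np.mean((m.predict(X) - y) ** 2))
        rows.append((lam, mse, int(np.count_nonzero(m.coef_))))
    return rows
\end{verbatim}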
\begin{figure}[H]
\centering
\includegraphics[scale= 0.35]{result3}
\caption{Performance Comparison of SAFS and LASSO}\label{Fig7}
\end{figure}
The results from both implementations show that the SAFS approach with representation learning significantly outperforms random forest and LASSO applied to un-represented input data, and that using random forest as a non-linear learner within SAFS leads to better performance. SAFS could therefore be an appropriate choice for the precision medicine problem of identifying which patients at risk should be referred for more extensive testing, and which patients with increased LVMI are more or less likely to benefit from treatments.
According to the results obtained from the SAFS approach using random forest, the top $15$ significant risk factors that affect LVMI are listed in Table \ref{table1}:
\begin {table}[H]
\small
\caption {Significant Risk Factors Affecting LVMI}\label{table1}
\centering
\begin{tabular}{| c | l | c |}
\hline\hline
\ Row & Feature Description & Weight\\
\hline
1 & Troponin Levels & 8.85\\
\hline
2 & Waist Circumference Levels & 8.07\\
\hline
3 & Plasma Aldosterone & 7.81\\
\hline
4 & Average Weight & 7.66\\
\hline
5 & Average Systolic Blood Pressure & 5.32\\
\hline
6 & Serum eGFR Levels & 5.21\\
\hline
7 & Total Prescribed Therapeutic Intensity Score & 4.79\\
\hline
8 & Pulse Wave Velocity & 4.66\\
\hline
9 & Ejection Duration & 4.46\\
\hline
10 & Cornell Product & 4.06\\
\hline
11 & Average Percent Compliance Per Pill Count & 2.99\\
\hline
12 & Plasma Renin Levels & 2.86\\
\hline
13 & Transformed Scale Physical Score & 2.64\\
\hline
14 & Cholesterol Levels & 2.53\\
\hline
15 & Transformed Scale Mental Score & 2.43\\
\hline\hline
\end{tabular}
\end{table}
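Such a ranking is read directly off the fitted forest; continuing the sketch above (the feature names are placeholders):
\begin{verbatim}
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(Z, y)
names = [f"feature_{i}" for i in range(Z.shape[1])]
top = np.argsort(rf.feature_importances_)[::-1][:15]
for rank, i in enumerate(top, 1):   # importances rescaled to percent
    print(rank, names[i], round(100 * rf.feature_importances_[i], 2))
\end{verbatim}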
While some of these risk factors represent data specific to the study design, all are plausibly linked with heart risk, representing potential physiological (numbers $1$, $2$, $3$, $4$, $5$, $6$, $8$, $9$, $10$, $12$, and $14$), behavioral (numbers $7$ and $11$), or psychosocial (numbers $13$ and $15$) markers or contributors. In future applications, such an approach to data analysis could be used to make better decisions based on personalized understanding of a specific target subgroup.
\section{Conclusion}
In this paper, a novel feature selection method called SAFS, based on deep learning, has been developed. The overall approach includes three phases: representation learning, feature selection, and supervised learning. For representation learning, we used stacked auto-encoders with $3$ hidden layers in an iterative learning process that chooses the best architecture for feature selection. We applied our model to a specific precision medicine problem with the goal of finding the most important risk factors affecting LVMI, an indicator of cardiovascular disease risk, in African-Americans with hypertension. The results show that our method (SAFS) significantly outperforms random forest and LASSO in feature ranking and selection. In future work we will evaluate our model on related precision medicine problems aimed at reducing cardiovascular disparities and apply deep architectures other than stacked auto-encoders for representation learning.
What happens to an impurity particle injected in a quantum fluid
at zero temperature?
According to the Landau criterion of superfluidity \cite{landau1941JETP} generalized to account for motion of a particle of a finite mass \cite{rayfield1966roton}, if the initial velocity of the impurity $v_0$ is less than the (mass-dependent) generalized critical velocity $v_c$, the impurity keeps moving forever without dissipation.\footnote{A necessary condition for $v_c>0$ is that the dispersion of the fluid is not identically zero, which we assume throughout the paper. This means that we consider two- and three-dimensional superfluids and generic one-dimensional fluids, but not e.g. Fermi liquids. In practical terms, our consideration can be relevant for superfluid helium and metastable quantum fluids realized in ultracold atom experiments.} However, the kinematical argument behind the generalized Landau criterion \cite{landau1941JETP,rayfield1966roton} is nonrigorous: It is based on the assumption that the {\it kinetic} energy is conserved, which is valid only approximately. Generally speaking, this argument does not exclude the possibility that corrections to the above approximation build up with time in such a way that the velocity of the impurity does relax to a zero or nonzero value in the long run \cite{suzuki2014creation,roberts2005casimir,roberts2006force,roberts2009superfluids}. Indeed, numerical and semi-numerical calculations for specific systems have shown that the velocity does drop below $v_0$ even when $v_0<v_c$ \cite{mathy2012quantum,shashi2014radio}. As a rule, numerical calculations are limited to finite times and therefore can not unambiguously provide an infinite-time asymptotic value of the velocity, $v_\infty$. In particular, an important qualitative question -- whether the impurity eventually stops -- often remains unanswered.
This issue has been recently addressed in the context of a specific model: An upper bound on $|v_0-v_\infty|$ has been rigorously derived for the impurity injected in the one-dimensional (1D) gas of free fermions \cite{Lychkovskiy2013}. The first goal of the present paper is to provide an analogous bound valid for an arbitrary quantum fluid in any dimensionality. We rigorously prove that $|{\mathbf v}_0-{\mathbf v}_\infty|$ is bounded from above for $|{\mathbf v}_0|<v_c$, the bound depending on the dispersion of the fluid, strength of the coupling between the impurity and the fluid, mass of the impurity and its initial velocity. In the limit of vanishing impurity-fluid coupling the bound reduces to ${\mathbf v}_\infty={\mathbf v}_0$, in accordance with the generalized Landau criterion of superfluidity \cite{landau1941JETP,rayfield1966roton}. In the case of finite interaction the bound quantifies the maximal possible drop of the velocity.
The second question we address is as follows: What happens to an impurity immersed in a quantum fluid at zero temperature and pulled by a small constant force? This question was previously studied for impurities in superfluid helium \cite{bowley1975roton,bowley1977motion,allum1977breakdown} and, recently, in 1D fluids \cite{Gangardt2009,schecter2012dynamics,schecter2012critical,Gamayun2014,Gamayun2014keldysh}. It was found that impurities in helium exhibit sawtooth velocity oscillations emerging from backscattering on rotons \cite{bowley1975roton,bowley1977motion,allum1977breakdown}. Similar {\it backscattering oscillations} (BO) have been found in the 1D Tonks-Girardeau gas, but only for sufficiently heavy impurities \cite{Gamayun2014}. For lighter impurities another dynamical regime has been observed -- {\it saturation of the velocity without oscillations} (SwO). Based on the same kinematical constraint which underlies the generalized Landau argument \cite{landau1941JETP,rayfield1966roton}, we investigate how general quantum fluids can be classified with respect to the regimes of driven dynamics. We find that BO and SwO are the only two generic regimes. A criterion determining which one is realized for a particular fluid and impurity is derived.
It is worth emphasizing that all the methods and results presented in this Letter are universally valid both for one-dimensional fluids and higher-dimensional fluids despite the well-known dramatic difference between the former and the latter with respect to the structure of elementary excitations \cite{Giamarchi2003}. This constitutes the major advancement over recent works \cite{mathy2012quantum,knap2014quantum,Lychkovskiy2013,Gangardt2009,schecter2012dynamics,schecter2012critical,Gamayun2014,Gamayun2014keldysh,gamayun2015impurity} focused on 1D fluids which explicitly invoked special features of physics in one dimension.
\begin{figure}[t]
\includegraphics[width= \linewidth]{fig.pdf}
\caption{\label{fig 1}(color online) (a) and (b): Geometrical illustration of definitions of generalized critical velocity $v_c$, eq. \eqref{critical velocity}, and critical momentum transfer $q_c$, eq. \eqref{backscattering momentum}. $v_c$ is smaller than the sound velocity $v_s$ for $m>m_c$ (a), while $v_c=v_s$ for $m<m_c$ (b). The thick dot (orange online) marks the position of the critical momentum transfer $q_c$ which is finite for $m>m_c$ but vanishes for $m<m_c$. (c) and (d): Velocity of the impurity pulled by a small constant force {\it vs.} time.
Backscattering oscillations occur for $m>m_c$ (c). Velocity of the impurity saturates at $v_c$ without oscillations for $m<m_c$ (d). Inset: Generalized critical velocity as a function of the impurity mass.
In the limit of $m\to\infty$ the generalized critical velocity approaches the Landau critical velocity $v_{c \,{\rm L}}$.}
\end{figure}
{\em Setup and notations.}---
We consider a single impurity particle immersed in a quantum fluid.
The Hamiltonian of the combined impurity-fluid system reads
$
\hat H = \hat H_{\rm f}+\hat H_{\rm i}+\hat U,
$
where $\hat H_{\rm f}$, $\hat H_{\rm i}$ and $\hat U$ describe the fluid, the impurity and the impurity-fluid interaction, respectively.
$\hat H_{\rm f}$, $\hat H_{\rm i}$ and $\hat U$ are translationally invariant and isotropic (the latter requirement can be dropped at the price of the results and derivation being more bulky).
An eigenstate of $\hat H_{\rm f}$ with an energy $E_{\rm f}$ is denoted by $\ket{E_{\rm f}}$. Each $\ket{E_{\rm f}}$ is also an eigenstate of the total momentum operator of the fluid.
The dispersion of the fluid, $\varepsilon(q)$, is defined as
a minimal eigenenergy which corresponds to a given momentum ${\mathbf q}$ with $|{\mathbf q}|=q$.
We use a special notation, $\ket{{\rm GS}}$, for the ground state of the fluid. We set the ground state energy of the fluid to zero and assume that the momentum in the ground state is zero.
This implies $\varepsilon(0)=0$ and $\varepsilon(q)\geq 0$.
The speed of sound is defined as
$
v_s\equiv\varepsilon'(0).
$
Note that we do not impose any restrictions on the strength of interactions between the elementary excitations of the fluid.
The Hamiltonian of the impurity reads
$
\hat H_{\rm i} =\hat {\mathbf P}_{\rm i}^2/(2m),
$
where $\hat {\mathbf P}_{\rm i}$ is the momentum of the impurity.
Interaction $\hat U$ is pairwise with an interaction potential $U(r)$.
We call the interaction {\it everywhere repulsive} whenever
\begin{equation}\label{potential}
U(r)\geq 0 ~~~~~~ \forall r.
\end{equation}
We denote product eigenstates of $\hat H_{\rm f}+\hat H_{\rm i}$ by
$
\ket{E_{\rm f},{\mathbf v}} \equiv |E_{\rm f} \rangle \otimes |{\mathbf v}\rangle,
$
where $|{\mathbf v} \rangle$ is the plane wave of the impurity with the momentum $m {\mathbf v}$. Initially the impurity-fluid system is in a product state
$
|{\rm GS}, {\mathbf v}_0\rangle = |{\rm GS} \rangle \otimes |{\mathbf v}_0\rangle,
$ i.e. the impurity moves in the fluid at zero temperature with velocity ${\mathbf v}_0$.\footnote{This initial state can be realized in ultracold atom experiments by accelerating a noninteracting impurity inside the atomic cloud and switching the impurity-atom interaction by means of the Feshbach resonance afterwards.}
Since the total momentum is an integral of motion, in what follows we restrict all operators to the subspace with the total momentum $m{\mathbf v}_0$.
The quantity we are interested in is the velocity of the impurity at infinite time. It is defined as
\begin{equation}\label{vinfty definition}
{\mathbf v}_\infty \equiv \frac{1}{m} \lim_{t\rightarrow\infty}\frac1t\int_0^t dt' \langle {\rm GS}, {\mathbf v}_0|e^{i \hat H t'}\hat {\mathbf P}_{\rm i} e^{-i \hat H t'} | {\rm GS}, {\mathbf v}_0 \rangle.
\end{equation}
Expanding the initial state in eigenstates $\ket{E}$ of the total Hamiltonian, $\hat H$, and integrating out oscillating exponents, one obtains
\begin{equation}\label{vinfty}
{\mathbf v}_\infty = \frac{1}{m}\sum_{\ket{E}}\big|\langle {\rm GS},{\mathbf v}_0|E \rangle\big|^2 \langle E|\hat {\mathbf P}_{\rm i}|E\rangle.
\end{equation}
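Explicitly, inserting the eigenbasis of $\hat H$ twice one gets
\begin{equation*}
\langle {\rm GS}, {\mathbf v}_0|e^{i \hat H t'}\hat {\mathbf P}_{\rm i} e^{-i \hat H t'} | {\rm GS}, {\mathbf v}_0 \rangle=\sum_{\ket{E},\ket{E'}}e^{i(E-E')t'}\,\langle {\rm GS},{\mathbf v}_0|E \rangle \langle E|\hat {\mathbf P}_{\rm i}|E'\rangle \langle E'|{\rm GS},{\mathbf v}_0 \rangle,
\end{equation*}
and the time averaging in eq. \eqref{vinfty definition} eliminates all the oscillating terms with $E\neq E'$, which yields eq. \eqref{vinfty}.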
Note that if $\hat H$ has degenerate eigenvalues, one should adjust the eigenbasis to diagonalize the matrix $\langle E'|{\rm GS}, {\mathbf v}_0\rangle\langle{\rm GS}, {\mathbf v}_0|E\rangle$ in every degenerate subspace.
\begin{table}[t]
\begin{ruledtabular}
\begin{tabular}{lccc}
& $v_{c {\rm L}}=v_s$ & \multicolumn{2}{c}{$v_{c {\rm L}}<v_s$}\\
& & $m<m_c$ & $m>m_c$ \\
regime & SwO & SwO & BO \\
\end{tabular}
\end{ruledtabular}
\caption{\label{table}%
Conditions determining which of the two dynamical regimes -- backscattering oscillations (BO) or saturation without oscillations (SwO) -- is realised in a specific fluid for a specific mass of the impurity.
}
\end{table}
{\em Perpetual motion.}---
We start from reviewing kinematical arguments which lead to the notion of critical velocity \cite{landau1941JETP,rayfield1966roton}. Consider an impurity with a velocity ${\mathbf v}_0$ which scatters off the fluid, initially in its ground state. Assume that the impurity can not form a bound state with particles of the fluid. Assume further that the final state of the impurity-fluid system is a product eigenstate of the noninteracting Hamiltonian $\hat H_{\rm f}+\hat H_{\rm i}$, with ${\mathbf q}$ and $E_{\rm f}\geq \varepsilon(q)$ being, respectively, the final momentum and energy of the fluid. If one disregards the contribution of the impurity-fluid coupling to its energy, then conservation laws lead to
\begin{equation}\label{energy conservation}
v_0 q \geq {\mathbf v}_0 {\mathbf q} = E_{\rm f} + \frac{q^2}{2 m} \geq \varepsilon(q) + \frac{q^2}{2 m},
\end{equation}
where $v_0 \equiv |{\mathbf v}_0|$.
If $v_0$ is sufficiently small, $v_0<v_c$, then for all ${\mathbf q} \neq 0$ the inequality \eqref{energy conservation} can not be fulfilled. The generalized critical velocity $v_c$ is defined as \cite{rayfield1966roton}
\begin{equation}\label{critical velocity}
v_c \equiv \inf_{q} \frac{\varepsilon(q) + \frac{q^2}{2 m}}{q}.
\end{equation}
Physically, $v_c$ is the minimal velocity which allows the impurity to create real excitations of the fluid, in the approximation of noninteracting final impurity-fluid state. The geometrical sense of the generalized critical velocity can be seen from Fig.~\ref{fig 1}: The line $v_c q$ is a tangent to the curve $\varepsilon(q) + \frac{q^2}{2 m}$.
Originally Landau defined the critical velocity in the limit $m\rightarrow \infty$ \cite{landau1941JETP}:
\begin{equation}
v_{c {\rm L}} \equiv \inf_{q}\left( \varepsilon(q)/q \right).
\end{equation}
Note that $v_{c {\rm L}}$ is an attribute of the fluid alone while $v_c$ is an attribute of the impurity-fluid system.
It is worth emphasizing that the definition of the generalized critical velocity \eqref{critical velocity}, although motivated by the Landau argument, stands alone and will be used beyond the scope of this argument in what follows.
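As a simple illustration, for a purely phononic dispersion, $\varepsilon(q)=v_s q$, the definition \eqref{critical velocity} yields
\begin{equation*}
v_c=\inf_q \left(v_s+\frac{q}{2m}\right)=v_s
\end{equation*}
for any mass of the impurity; $v_c<v_s$ is possible only if the dispersion bends below the phonon line, $\varepsilon(q)<v_s q$, at some finite $q$, cf. Fig. \ref{fig 1}.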
Clearly, the argument by Landau reviewed above
is not rigorous: The impurity-fluid interaction is largely disregarded, its role being merely to justify why the transition from the initial to a final state occurs at all.
Our aim is to derive a rigorous relation between ${\mathbf v}_0$ and ${\mathbf v}_\infty$. To this end we prove the following
\begin{theorem}
Consider an impurity particle immersed in a quantum fluid. Initially the
system is prepared in the product state $|{\rm GS}, {\mathbf v}_0\rangle$ with the initial velocity of the impurity
$
v_0 \equiv |{\mathbf v}_0| < v_c.
$
The difference between the initial and infinite-time velocities of the impurity is bounded from above according to
\begin{equation}\label{lower bound general}
\begin{array}{ll}
|{\mathbf v}_0 -{\mathbf v}_\infty| \leq & \frac{1}{m(v_c-v_0)}
\Big(
\langle {\rm GS}, {\mathbf v}_0| \hat U | {\rm GS}, {\mathbf v}_0\rangle-
\\
&
\sum\limits_{\ket{E}}\big|\langle {\rm GS},{\mathbf v}_0|E \rangle\big|^2 \langle E| \hat U | E \rangle
\Big).
\end{array}
\end{equation}
If the interaction between the impurity and the fluid is everywhere repulsive, i.e. the condition (\ref{potential}) is fulfilled, then a more transparent bound holds:
\begin{equation}\label{lower bound}
|{\mathbf v}_0 -{\mathbf v}_\infty| \leq \frac{ \overline U}{m(v_c-v_0)},
\end{equation}
where $\overline U \equiv\int d {\mathbf r} \,\rho \, U(|{\mathbf r}|)$ and $\rho$ is the number density of the particles of the fluid.
\end{theorem}
This theorem generalises an analogous result obtained in \cite{Lychkovskiy2013} for a specific one-dimensional fluid. \\
{\em Proof.}
According to \eqref{vinfty}
\begin{align}\label{intermediate 1}
&
|{\mathbf v}_0-{\mathbf v}_\infty| = \nonumber\\
& =
\Big|
\sum_{\ket{E}}\sum_{\ket{E_{\rm f},{\mathbf v}}} ({\mathbf v}_0-{\mathbf v})\big|\langle E| E_{\rm f},{\mathbf v} \rangle\big|^2 \big|\langle {\rm GS},{\mathbf v}_0|E \rangle\big|^2
\Big| \nonumber \\
& \leq
\sum_{\ket{E}}\left(\sum_{\ket{E_{\rm f},{\mathbf v}}} |{\mathbf v}_0-{\mathbf v}| \big|\langle E| E_{\rm f},{\mathbf v} \rangle\big|^2 \right) \big|\langle {\rm GS},{\mathbf v}_0|E \rangle\big|^2
.
\end{align}
The sums are performed over the eigenstates $\ket{E}$ of $\hat H$ and over the eigenstates $\ket {E_{\rm f},{\mathbf v}}$ of ~$\hat H_{\rm f}+\hat H_{\rm i}$ with the total momentum $m{\mathbf v}_0$.
The key step is to notice that according to \eqref{critical velocity}
\begin{equation}\label{key inequality}
|{\mathbf v}_0-{\mathbf v}| \leq \frac{1}{m(v_c-v_0)} (E_{\rm f}+\frac{m{\mathbf v}^2}{2}-\frac{m {\mathbf v}_0^2}{2})
\end{equation}
for any $\ket {E_{\rm f},{\mathbf v}}$ with the total momentum $m {\mathbf v}_0$.
This inequality is of pure kinematical origin.
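Indeed, in a state $\ket{E_{\rm f},{\mathbf v}}$ with the total momentum $m {\mathbf v}_0$ the fluid carries the momentum ${\mathbf q}=m({\mathbf v}_0-{\mathbf v})$, while the definition \eqref{critical velocity} implies $E_{\rm f}\geq \varepsilon(|{\mathbf q}|)\geq v_c |{\mathbf q}|-{\mathbf q}^2/(2m)$. Hence
\begin{align*}
E_{\rm f}+\frac{m{\mathbf v}^2}{2}-\frac{m {\mathbf v}_0^2}{2} & \geq v_c |{\mathbf q}|-\frac{{\mathbf q}^2}{2m}+\frac{m{\mathbf v}^2}{2}-\frac{m {\mathbf v}_0^2}{2} \\
& = v_c |{\mathbf q}| - m\, {\mathbf v}_0 ({\mathbf v}_0-{\mathbf v}) \geq m(v_c-v_0)\,|{\mathbf v}_0-{\mathbf v}|.
\end{align*}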
It leads to
\begin{align}\label{intermediate 2}
&\sum_{\ket{E_{\rm f},{\mathbf v}}} |{\mathbf v}_0-{\mathbf v}| \big|\langle E| E_{\rm f},{\mathbf v} \rangle\big|^2 \leq \nonumber\\
&\leq
\frac{1}{m(v_c-v_0)} \sum_{\ket{E_{\rm f},{\mathbf v}}} \langle E| \hat H_{\rm f}+\hat H_{\rm i} -\frac{m{\mathbf v}_0^2}{2}|E_{\rm f},{\mathbf v} \rangle \langle E_{\rm f},{\mathbf v} | E\rangle \nonumber\\
&= \frac{1}{m(v_c-v_0)}\left( E -\frac{m{\mathbf v}_0^2}{2} - \langle E| \hat U | E\rangle\right).
\end{align}
Substituting eq. \eqref{intermediate 2} into eq. \eqref{intermediate 1} one obtains the desired bound \eqref{lower bound general}.
If the impurity-fluid coupling is everywhere repulsive, one obtains the bound \eqref{lower bound} from the bound \eqref{lower bound general} by omitting the second term in the brackets in the r.h.s. of \eqref{lower bound general} and rewriting the first term according to
$
\langle {\rm GS},{\mathbf v}_0| \hat U | {\rm GS},{\mathbf v}_0\rangle= \overline U
$.
$\blacksquare$
In the remainder of the present section we discuss the above theorem. First, we stress that the bounds \eqref{lower bound general} and \eqref{lower bound} hold for an arbitrary interacting quantum fluid in arbitrary dimensions, in contrast to an earlier result \cite{Lychkovskiy2013} valid for a one-dimensional gas of free fermions. Remarkably, interactions between elementary excitations of the fluid renormalize $\varepsilon(q)$ but do not enter the bounds explicitly. Moreover, $\varepsilon(q)$ itself enters the bounds only through $v_c$.
Consider implications of the theorem in the weak impurity-fluid coupling limit. To define the latter we introduce a family of interaction potentials $U_\gamma(r)=\gamma U_1(r)$ parameterized by the dimensionless coupling $\gamma$. The weak coupling limit amounts to considering small $\gamma$ (i.e. expanding all quantities of interest around $\gamma=0$) after the thermodynamic limit ($N\to \infty$, $V=N/\rho$, $\rho$ fixed) is taken. The physical meaning of this limit is that the interaction energy is small compared to the total energy per particle but large compared to the level spacing.
The Landau criterion of superfluidity \cite{landau1941JETP} (generalised for impurities of finite mass \cite{rayfield1966roton}) can be rigorously proved in the weak coupling limit by virtue of the bound \eqref{lower bound general}. To this end, if the interaction is not everywhere repulsive, we invoke an additional, rather natural assumption that $\langle E| \hat U_1 | E \rangle \geq -C$ for any $|E\rangle$, where $C \geq 0$ is some constant independent of $N$ and $|E\rangle$. For example, if bound states of the impurity particle and particles of the fluid exist, $C$ is expected to be of order of the largest binding energy among all such ``molecules''. The case of everywhere repulsive interaction amounts to $C=0$. The bound \eqref{lower bound general}
complemented by the aforementioned assumption immediately leads to the Landau's statement ${\mathbf v}_\infty={\mathbf v}_0+O(\gamma)$ for $|{\mathbf v}_0|<v_c$ in the weak coupling limit.
It is worth emphasising that a straightforward perturbation theory in $\gamma$ does not lead to a correct many-body overlap $\big|\langle {\rm GS},{\mathbf v}_0|E \rangle\big|^2$ (see a thorough discussion of
this point in \cite{march1967many}), and, as a consequence, does not permit a universal calculation of $v_\infty$ directly from eq. \eqref{vinfty}.
This problem does not emerge when treating the r.h.s. of the bound \eqref{lower bound general} because the interaction term $\hat U$ enters the latter explicitly.
Since the bound \eqref{lower bound general} invokes exact many-body eigenstates, its immediate application beyond the perturbative regime is possible only for integrable systems. These include (i) an impurity in a 1D gas of free fermions or infinitely repulsive bosons \cite{mcguire1965interacting} and (ii) an impurity in a 1D gas of bosons, with the masses of the impurity and host boson being equal, as well as the boson-boson and boson-impurity couplings being equal (bosonic Yang-Gaudin model \cite{yang1967some, gaudin1983fonction}). In the former model it is possible to calculate $v_\infty$ directly by means of eq. \eqref{vinfty} \cite{Burovski2013} (see also \cite{gamayun2015impurity}). In the latter, more sophisticated model, an analogous analytical calculation would likely be much more intricate (if possible at all), since calculating the overlaps $\big|\langle {\rm GS},{\mathbf v}_0|E \rangle\big|^2$ within the Bethe ansatz is a hard task. On the other hand, application of the bound \eqref{lower bound general} should be feasible in this model since it requires a much simpler calculation of a matrix element of a local operator. In the nonintegrable cases the bound \eqref{lower bound general} should be supplemented by some approximate method for calculating $\langle E| \hat U | E \rangle$ (e.g. perturbation theory, as is exemplified by the proof of the Landau criterion presented above).
Now we turn to the bound \eqref{lower bound}. Though valid for a narrower class of interactions, it has the advantage of simplicity compared to the bound \eqref{lower bound general} and can be easily applied without resorting to any approximations and limits. An additional benefit of the bound \eqref{lower bound} is that it obviates two important points. First, the bound holds equally well for a finite fluid and in the thermodynamic limit. Second, the inequality (\ref{lower bound}) represents a nontrivial bound even for long range interactions, provided the interaction potential decreases with distance faster than~$1/r^D$, $D$ being the dimensionality of the system. The latter requirement ensures that $\overline U$ does not diverge at large distances. We expect that both observations generically hold for the bound \eqref{lower bound general} as well.
Possible divergence of $\overline U$ deserves further discussion. It can also emerge at small $r$. In particular, it prevents us from considering hard sphere impurity-fluid interaction. Divergence in $\overline U$ implies that the initial state $|{\rm GS}, {\mathbf v}_0\rangle$ has divergent energy and thus the problem is ill-formulated from the outset. How to correctly formulate the problem in this situation is an interesting open question.
We exemplify the usage of the bound \eqref{lower bound} in one and three dimensions. In the case of one dimension, we consider the pointlike repulsive impurity-fluid potential $U(x)=(U_0/\rho)\delta(x)$ with $U_0>0$ to obtain $|{\mathbf v}_0 -{\mathbf v}_\infty|\leq U_0 \left(m(v_c-v_0)\right)^{-1}$. In the context of ultracold atom experiments this potential is an excellent low-energy approximation to any real impurity-fluid coupling with positive scattering length $a$, $U_0$ being a function of $a$ and transverse confinement energy \cite{olshanii1998atomic}. This result has been earlier obtained for a special case of an impurity in a 1D gas of free fermions \cite{Lychkovskiy2013}; here it is proven for an arbitrary interacting 1D fluid.
In the case of three dimensions, we consider a ``square'' potential $U(r)=U_0 \theta(r_0-r)$. In this case the bound reads $|{\mathbf v}_0 -{\mathbf v}_\infty|\leq (4\pi/3)r_0^3 \rho U_0 \left(m(v_c-v_0)\right)^{-1}$. In the limit when the interaction range $r_0$ is much larger than the scattering length $a\simeq 2 \mu U_0 r_0^3/(3\hbar^2)$ (with $\mu$ being the reduced mass) the bound can be expressed through the scattering length: $|{\mathbf v}_0 -{\mathbf v}_\infty|\lesssim 2\pi \hbar^2 a \rho \left(m\mu(v_c-v_0)\right)^{-1}$.
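The arithmetic behind the latter estimate is straightforward: for the square potential,
\begin{equation*}
\overline U=\rho\int d{\mathbf r}\, U(|{\mathbf r}|)=\frac{4\pi}{3} r_0^3 \rho U_0,
\end{equation*}
and substituting $U_0 r_0^3\simeq 3\hbar^2 a/(2\mu)$, which follows from the Born expression for the scattering length, into the bound \eqref{lower bound} reproduces $|{\mathbf v}_0 -{\mathbf v}_\infty|\lesssim 2\pi \hbar^2 a \rho \left(m\mu(v_c-v_0)\right)^{-1}$.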
It is instructive to compare the above theorem with a rigorous result obtained in \cite{knap2014quantum}: The expectation value of the impurity velocity in the momentum-dependent ground state equals the slope of the {\it total} dispersion of the impurity-fluid system, which is generically nonzero.
Thus Ref. \cite{knap2014quantum} proves the very possibility of the perpetual motion of an impurity in a quantum fluid. However, it does not relate the initial velocity of the injected impurity, $v_0$, to its asymptotic velocity $v_\infty$, in contrast to the theorem presented above.
{\em Dynamics of driven impurity.}---
In the present section we consider an impurity weakly coupled to a fluid and driven by a small constant force. The kinematical reasoning summarized in the beginning of the previous section can be extended to the case with driving. This was done for mobile impurities in superfluid helium in Refs. \cite{bowley1975roton,bowley1977motion,allum1977breakdown}. We study the problem in the wider context of an arbitrary quantum fluid.
Consider the impurity to be initially at rest. The force accelerates it freely until its velocity reaches $v_c$. At this instant the impurity acquires a chance to scatter off the fluid. It is clear from eq. \eqref{energy conservation} that the scattering channel which opens first is backscattering. In this process the impurity loses some momentum $q_c$ which is transferred to the fluid. The critical momentum transfer $q_c$ delivers the minimum in eq. \eqref{critical velocity}:
\begin{equation}\label{backscattering momentum}
v_c q_c = \varepsilon(q_c) + \frac{q_c^2}{2 m}.
\end{equation}
The geometrical meaning of $q_c$ is illustrated in Fig. \ref{fig 1}~(a),(b): the line $v_c q$ touches the curve $\varepsilon(q) + \frac{q^2}{2 m}$ at the point $(q_c,v_c q_c)$. Note that $q_c$ is unrelated to $m v_c$.
Up to this point our presentation has closely followed Refs. \cite{bowley1975roton,bowley1977motion,allum1977breakdown}.
The central new observation is that the behavior of the impurity depends crucially on whether or not $q_c$ is zero. Consider first the case $q_c>0$ (see Fig \ref{fig 1}~(a)) which is relevant, in particular, for impurities in helium \cite{bowley1975roton,bowley1977motion,allum1977breakdown}. After the first scattering the velocity of the impurity drops by
$
\Delta v=q_c/m,
$
and the impurity starts to freely accelerate until its velocity again reaches $v_c$, after which the whole cycle is repeated. This is how backscattering oscillations emerge \cite{bowley1975roton,bowley1977motion,allum1977breakdown}.
Consider now the case when $q_c=0$, see Fig \ref{fig 1} (b). This case was not considered in \cite{bowley1975roton,bowley1977motion,allum1977breakdown} since it can not be realized with realistic impurities in superfluid helium (see below). In this case, as soon as the velocity of the impurity reaches $v_c$, the impurity starts to dissipate the pumped energy by producing infrared excitations of the fluid. In the limit of small force this leads to the saturation of its velocity at $v_c$ without oscillations (SwO).
One can see that whether or not $q_c$ is zero governs which of the two generic regimes, SwO or BO, is realized for a particular fluid and impurity. Note that $q_c=0$ ($q_c>0$) whenever $v_c=v_s$ ($v_c<v_s$), see Fig. \ref{fig 1}. The relation between $v_c$ and $v_s$, in turn, is determined by the Landau critical velocity of the fluid, $v_{c {\rm L}}$, and the mass of the impurity, $m$. As a result, in a fluid with $v_{c {\rm L}}=v_s$ (e.g. in the Bogolyubov gas of weakly coupled bosons) only SwO is possible, regardless of the value of $m$. In contrast, in a fluid with $v_{c {\rm L}}<v_s$ both SwO and BO are possible, depending on the mass of the impurity: BO emerge in the case of a heavy impurity, $m>m_c$, while SwO takes place for a light impurity, $m<m_c$. The critical mass $m_c$ is determined from the equation
$
v_c(m_c)=v_s,
$
in which we explicitly indicate the dependence of the generalized critical velocity on the mass of the impurity, see eq. \eqref{critical velocity} and the inset in Fig. \ref{fig 1}. The amplitude of BO generically experiences a jump from a finite value to zero at $m=m_c$. Thus if one regards $m$ as a tunable parameter, the transition over $m_c$ is a nonequilibrium quantum phase transition. Conditions discriminating between the two dynamical regimes are summarized in Table \ref{table}.
Note that SwO was not observed in superfluid helium since sufficiently light impurities were lacking.
Existence of two dynamical regimes separated by a nonequilibrium quantum phase transition is consistent with the results of the detailed study of a specific 1D fluid \cite{Gamayun2014}.
BO get damped at finite forces since the direction (for $D>1$) and the value (for any dimensionality) of the momentum transfer vary from one scattering event to another. In Ref. \cite{Gamayun2014} a kinetic theory for an impurity in the Tonks-Girardeau gas has been developed and the damping rate has been calculated. This theory can be generalized to arbitrary fluids, which is, however, beyond the scope of the present paper.
The physical picture we put forward differs significantly from the picture developed in \cite{Gangardt2009,schecter2012dynamics,schecter2012critical} for 1D systems. The method of \cite{Gangardt2009,schecter2012dynamics,schecter2012critical} is based on adiabatically following the total dispersion of the impurity-fluid system ${\cal E}(p)$. Since ${\cal E}(p)$ is periodic in one dimension, the authors of \cite{Gangardt2009,schecter2012dynamics,schecter2012critical} conclude that Bloch-like oscillations of the velocity of the impurity develop, provided ${\cal E}(p)$ is a smooth function. This approach leaves no room for the SwO regime, in conflict with the results reported here and in Ref. \cite{Gamayun2014}. We note, however, that a key ingredient of the argument of Refs. \cite{Gangardt2009,schecter2012dynamics,schecter2012critical}, adiabaticity, as a rule can not be maintained for many-body gapless systems in the thermodynamic limit \cite{balian2007microphysics,polkovnikov2008breakdown,altland2008many,altland2009nonadiabaticity}. Although this issue has triggered an active discussion \cite{schecter2014comment,Gamayun2014reply}, it is not resolved so far and requires further studies \cite{Burovski2015}.
Note that sawtooth oscillations in a 1D system have also been discussed in Ref. \cite{schecter2012critical}, but only in the limit of strong force and provided ${\cal E}(p)$ has a cusp (see also a precursory work \cite{lamacraft2009dispersion}). These oscillations differ from those discussed here in amplitude and maximal velocity. We emphasize that smoothness of ${\cal E}(p)$ plays no role in our arguments, in contrast to Refs. \cite{schecter2012critical,lamacraft2009dispersion}.
{\em Summary and concluding remarks.}---
To summarise, we have studied two related settings. In the first setting a mobile impurity is injected with some initial velocity $v_0$ in a quantum fluid at zero temperature. We have rigorously derived upper bounds \eqref{lower bound general} and \eqref{lower bound} on the difference between the initial and the asymptotic velocities of the impurity, $|{\mathbf v}_0-{\mathbf v}_\infty|$, valid for $|{\mathbf v}_0|$ less than the mass-dependent generalized critical velocity~$v_c$.
These bounds imply that while the velocity of the impurity can drop, it, generally speaking, does not drop to zero. This is consistent with the result of Ref. \cite{suzuki2014creation}: The impurity injected in the Bose-Einstein condensate creates a finite number of quasiparticles before relaxing to a steady state. On the other hand, our result disproves a suggestion of Refs. \cite{roberts2006force,roberts2009superfluids} (see also \cite{roberts2005casimir}) that perpetual motion of an impurity in a superfluid is nonexistent in the thermodynamic limit due to the Casimir-like friction force.
We note that at any finite temperature $T$ the infinite-time velocity is most likely to vanish. The results \eqref{lower bound general} and \eqref{lower bound} remain relevant at low but nonzero temperatures if understood as bounds on the velocity at an intermediate timescale which is much less than the thermal relaxation timescale $\sim \hbar^7 m (v_s/k_{\rm B} T)^{2+2D} a^{-2 D}$, where $a$ is the scattering length \cite{klemens1955scattering,Baym1967,klemens1994thermal,CastroNeto1995}. For $D=3$ one gets relaxation timescale
$\sim 1\,{\mathrm s}
\left(\frac{m}{m_{\rm Rb}}\right)\left(\frac{v_s}{1\, {\rm mm}/{\rm s}}\right)^8 \left(\frac{T}{100 \, {\rm nK}}\right)^{-8} \left(\frac{a}{10 \, {\rm nm}}\right)^{-6}$
with $m_{\rm Rb}=85.47$ amu and other reference values relevant for ultracold atom experiments \cite{bloch2008many}.
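This estimate is easily reproduced numerically; a minimal Python sketch using scipy's physical constants (the reference values are those quoted above):
\begin{verbatim}
from scipy.constants import hbar, k as kB, atomic_mass

m   = 85.47 * atomic_mass   # impurity mass, kg
v_s = 1e-3                  # speed of sound, m/s
T   = 100e-9                # temperature, K
a   = 10e-9                 # scattering length, m

# thermal relaxation timescale for D = 3
tau = hbar**7 * m * (v_s / (kB * T))**8 / a**6
print(f"{tau:.2f} s")       # ~1.6 s, i.e. of order 1 s
\end{verbatim}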
In the second setting an impurity is pulled by a small constant force. We have demonstrated that, in general, two dynamical regimes can occur -- backscattering oscillations of the impurity velocity (BO) or velocity saturation without oscillations (SwO). For fluids with $v_{c {\rm L}}=v_s$ SwO is the only possible regime. For fluids with $v_{c {\rm L}}<v_s$ SwO occurs for light impurities while BO occur for heavy impurities, the two regimes being separated by a nonequilibrium quantum phase transition at some critical mass, see Table \ref{table} and inset in Fig. \ref{fig 1}.
Our treatment of the first problem is valid for any strength of impurity-fluid interaction, however the weaker is the interaction, the tighter are the bounds. Our treatment of the second problem is valid in the leading order of the weak coupling limit only. However, it is not necessarily the bare coupling which should be weak: If one is able to find a renormalizing unitary transformation which takes into account the dressing of the impurity in a particular fluid and leads to a small effective coupling, this suffices to validate our treatment.
{\em Note added.}--- In a very recent paper \cite{castin2015vitesse} the concept of mass-dependent generalized critical velocity of a mobile impurity in a Fermi superfluid is studied in great detail. In particular, the nonanalyticity of $v_c$ as a function of mass is discussed.
\begin{acknowledgments}
{\em Acknowledgements.}
The author is grateful to V. Cheianov, O. Gamayun, E. Burovskiy, M. Zvonarev, G. Shlyapnikov, G. Pickett, P. McClintock, V. Tsepelin, A. Fedorov and M. Schecter for fruitful discussions. The present work was supported by the ERC grant 279738-NEDFOQ.
\end{acknowledgments}
A recent US study makes the alarming point that our cyberinfrastructures are as vulnerable to attack as we are to COVID-19. The Internet Crime Complaint Center (IC3) of the Federal Bureau of Investigation (FBI) reports a 300\% increase in cybercrime since the beginning of the latest pandemic, and further warns that 95\% of the recorded breaches targeted three critical infrastructures: government, technology, and the retail industry. Investigating the threat types, Cisco unveiled that Advanced Persistent Threats (APT) and ransomware such as Zero-day exploits, Spyware, and Botnet viruses account for more than half of the total attacks, with losses of more than \$500k. Besides, it is frightful to mention that about \$6 trillion is projected globally to tackle the potential crisis, up from only \$1 trillion in 2018. Over the last decade this budget has grown rapidly as the world has experienced the latest industrial iteration, namely Industry 4.0 \cite{i4}. This critical infrastructure has laid the foundation for the desired smart systems, in which machines and humans converge around data. Sensor-generated data play a vital role in monitoring the manufacturing process, for example in predicting maintenance or detecting equipment anomalies: data-driven predictive maintenance systems supervise critical tasks such as electrical insulation and vibration or temperature analytics. As a critical component, if data fail to comply with the security standard, all the actions associated with those data would undeniably compromise the entire industrial ecosystem.
Even after deploying advanced technologies such as Cisco/Forrester Zero Trust security (2019) or Intel Software Guard Extensions (SGX, 2015), state-of-the-art systems have encountered unprecedented and, in some cases, previously unknown (e.g., WeLeakData, January 2020) cyberattacks. The latest attacks, including Advanced Persistent Threats (APT), Drive-by, Spear phishing, Session hijacking, and Zero-day exploits, were triggered by inadequate and untrusted data and network protection. Existing industrial security solutions, including some public Blockchain-based approaches (e.g., CertLedger), are designed around the trust provided by a trusted third party (TTP) such as a PKI (public key infrastructure) or a cloud-driven trusted certificate authority. Systems with such remedies have experienced adversaries along with several other issues such as adaptation, delay, expense, and latency. Moreover, any system taking service from a single trusted entity carries an imminent threat of data breaching and the risk of a single point of failure (SPoF).
This paper therefore demonstrates a framework to solve the existing security challenges (e.g., multi-party consent and the centralized trust of a trusted certificate authority) for the Industry 4.0 Cyber-Physical System (CPS). The Blockchain-based Multi-Signature (MS) mechanism ensures collaborative trust rather than the traditional PKI-like single-signing technique, which often suffers from single-point trust dependency and a transparency loophole.
\subsection{Contributions and Organization}
Industry 4.0 CPS need to incorporate collaborative trust building rather than delegating the trust authority to a single honest or semi-honest entity \cite{i4}. Therefore, the demonstrated Blockchain-based framework is proposed to ensure security through a special certificateless authentication technique developed upon the MS scheme. The framework is resilient to present-day cyberattacks. For clarity, the specific contributions can be briefed as follows.
\begin{itemize}
\item \textbf{Certificateless multi-party security:}
The framework utilizes smart-edge computation, works over a peer-to-peer (P2P) consortium, and eliminates the centralized trust model, i.e., dependency on a single PKI certificate authority (CA) for public keys. In addition, multiple industry stakeholders collaboratively authenticate both the device identity and the data. This unique feature guards data against forgery, enhances trust, and overcomes the single point of failure (SPoF).
\item \textbf{Efficient and reliable network}: Consortium Blockchain peers of the proposed framework validate and track device registration and communication activities in a transparent ledger inside a restricted channel. In the consortium consensus protocol, the leader proposes the next block, which significantly lowers the reward costs. It shields against failures by subduing the influence of potentially malicious peers and finalizes agreement on a new transaction without multiple confirmations, so no waiting period is required after a particular block addition. Besides, permissioned node selection (i.e., PBFT and chaincode, as in Hyperledger Fabric (HLF) or Iroha (HLI), to be discussed in Section III) and the banning of malicious nodes keep the Industry 4.0 cyberspace secure and threat-free \cite{dht}.
\item \textbf{Distributed off-chain storage:} Instead of a conventional cloud or database, the proposed framework adopts storing data in a distributed hash table (DHT), e.g., the InterPlanetary File System (IPFS) or Kademlia. The salient features of this P2P, version-controlled, and content-addressed file system ensure faster data transfer and reduce server dependency, saving additional spending and bandwidth \cite{dht}.
\end{itemize}
The proposed technique obtains collaborative trust from Consortium Blockchain (CBC) peers instead of a conventional PKI-driven CA, e.g., VeriSign or DigiCert. Thus, it can circumvent the chance of SPoF on account of a TTP's betrayal. BC consensus, in line with multi-party consent (through multi-signature), ensures the entrusted security of several industrial stakeholders. Simultaneously, it reduces both the response latency and the risk associated with trusted certificates issued by a single entity, e.g., a Membership Service Provider (MSP) or VeriSign.
The rest of the article is organized as follows: the next section illustrates the suitability of Consortium Blockchain technology and the security requirements and typical adversaries of the critical Industry 4.0 enterprise. The subsequent section illustrates the proposed BC-based certificateless framework and its working components. Before the conclusion, the framework deployment and evaluation section reinforces the claims of improved performance on the CBC, namely HLF.
\begin{figure}[htb!]
\centering
\includegraphics[width=\linewidth]{fig/f1.pdf}
\caption{ a) Communication flow of CPS b) CPS attacks BC can guard against}
\label{fig:fig_1}
\end{figure}
\begin{figure*}[htb!]
\centering\includegraphics[width=\linewidth]{fig/f2.pdf}
\caption{(a) Industry 4.0 CPS Trust and failure Challenges with Certificate Authority (b) Multisignature and BC Solution not requiring the Certificate Authority}
\label{fig:fig_2}
\end{figure*}
\section{Blockchain for Critical Industry 4.0 CPS}
Any trusted and reliable critical system must be consistently capable of enduring prohibitive loss all the way it evolves. From a conservative development perspective, innovating such a decisive and crucial system naturally demands proven techniques rather than naive approaches that often appear appealing at first sight. Although BC technology initially attracted public attention mostly through cryptocurrencies, it has since established impressive attributes for unravelling advanced security breaches, e.g., APT and Zero-day attacks, as shown in Figure \ref{fig:fig_1}. The figure also helps explain that CPS refers to the firm conjoining and coordination of cyber and physical resources: the CPS comprises software and physical components, where each component works on different temporal and spatial levels and continuously interacts with the others. BC with MS appears to be a promising response to those advanced attacks, as shown in Figure \ref{fig:fig_1}~(b).
\subsection{Critical Industry 4.0 Issues with Public Blockchain}
BC is resistant to modification: once data are recorded, the block cannot be altered \cite{bciot}. The link between subsequent blocks breaks, and consensus of the participating nodes is demanded, if any change occurs in previously committed blocks. Unlike a PKI or cloud-driven CA, a BC requires running consensus, i.e., proof of work/stake (PoW/PoS), and the associated smart contract (SC) protocols before appending a new block to the shared ledger. The reward- and energy-intensive, mining-node-driven consensus works for public BC, which best suits settings where an utterly untrusted network must be kept safe \cite{privacy}. It is unparalleled in cryptocurrency \cite{btc} but logically unsuited for critical industries, e.g., smart grids, food or health systems, and road signaling, due to its slower performance.
\subsubsection*{Consortium Blockchain Advantages}
A CBC such as Hyperledger or Quorum, in contrast, has a selective setup where only trusted and invited members are allowed to join the network \cite{btc}. As new block verification does not require running PoW-like consensus, a CBC can avoid the additional expense of setting up energy-intensive miner nodes and of compensating rewards or incentives. Instead, it adopts a fault-tolerant consensus mechanism in which the leader proposes the next block, lowering reward costs \cite{con}. It protects the system from failures by subduing the influence of potentially malicious peers and thus finalizes agreement on a new transaction without multiple confirmations. The transaction processing rate of this permissioned blockchain \cite{dht} is significantly higher than that of miner-driven public BCs (about 4 to 5 transactions per second for Bitcoin and Ethereum versus about 3,000 to 20,000 for HLF), making it a convincing choice for the proposed data protection framework of the Industry 4.0 CPS.
\subsection{Challenges of the Cyber-physical System}
Since the German government introduced the term at the 2011 Hannover Fair, the world has experienced the latest iteration of the industrial ecosystem, namely Industry 4.0, distinguished by its unique integration of the CPS \cite{i4}. The ultimate goals of building such automated connections between cyberspace and physical space range from enhancing productivity to boosting revenue, and here BC comes into play by cooperating with the existing enabling technologies such as big data and deep learning \cite{Blockchain}. The critical Industry 4.0 infrastructure demands security assurance from multiple stakeholders instead of a single trusted party such as a CA. Figure \ref{fig:fig_2}~(a) illustrates how the entire Industry 4.0 ecosystem falls due to a CA's betrayal \cite{li2018blockchain}.
\section{Blockchain based Security Framework}
The proposed security framework for critical Industry 4.0 CPS works on four different levels, where sensors and smart edge devices work in the physical space and the consortium Blockchain network and DHT work in cyberspace, as demonstrated in Figure \ref{fig:fig_3}~(A). Based on these four levels of communication, the working principle can be simplified into three core stages. First, the machine sensors are registered and establish communication using the BC-based MS technique. Second, the registered sensors submit data to the BC network, subject to verification by the BC consortium. Third, the data are stored in the DHT after the log is recorded in the BC ledger \cite{dht}. As claimed in the earlier subsection, the framework utilizes edge computation instead of cloud computing \cite{edge}, for the following reasons.
\begin{itemize}
\item Sensors usually have weak computational ability, which makes them prone to failure while interacting with a heavy storage node such as a cloud \cite{Blockchain}. BC can readily adopt smart edge computation with a neater storage solution.
\item As the framework requires signing, the smart edge device can securely perform the required cryptographic computations that lightweight sensors are unable to do.
\item Besides, it can send data to the P2P DHT storage and can easily identify the DHT address \cite{edge}.
\end{itemize}
In ordinary certificateless authenticated encryption (CLAE) or identity-based encryption (IBE), the partial key is intended to resolve the key-escrow problem (keys are essential to decrypt). The proposed technique replaces the PKI-like key generation centre (KGC, a trusted centre similar to a CA) with a CBC consisting of collaborating peers of the Industry 4.0 CPS. In the proposed framework, the BC peers, using the underlying system parameters, first generate and then distribute the key pairs, where the private parameters are known as the partial key. The partial secret (PS) of the BC consortium and the sensor's own random secret together constitute the desired private key. This secures data from the physical space, i.e., embedded machine sensors, to cyberspace, i.e., the BC and DHT storage, eliminating single-point dependency and guaranteeing mutual collaboration \cite{li2018blockchain}.
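A toy sketch of this key assembly over an additive group is given below; it is illustrative only (the real scheme operates on elliptic-curve groups, and the group order and peer count are assumptions):
\begin{verbatim}
import secrets

Q = 2**255 - 19      # toy group order (stand-in for an EC group order)

# each consortium peer contributes a share of the partial secret (PS)
peer_shares = [secrets.randbelow(Q) for _ in range(4)]
ps = sum(peer_shares) % Q           # PS issued by the BC peers

sensor_secret = secrets.randbelow(Q)  # sensor's own random secret
sk = (ps + sensor_secret) % Q         # full private key, known only
                                      # to the sensor
\end{verbatim}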
\begin{figure*}[htb!]
\centering\includegraphics[width=\linewidth]{fig/f3.pdf}
\caption{A. High-level Representation of the Proposed Security Framework, B. Working Flows as per the Framework Configuration}
\label{fig:fig_3}
\end{figure*}
\subsection{Certificateless Sensor Authentication by Multisignature}
MS means applying more than a single key in authentication, which is promising for multi-party Industry 4.0 security. For example, in an Industry 4.0 setup, the sensitive data generated by expensive equipment need to be consented to equally by its owner, operator, insurer, or buyer. Each of these parties could be geographically dislocated yet cooperate around a single mechanical event. In such a scenario, trusting a single party, e.g., a CA, is expensive and not consistently reliable. The principal objective of signing data is to ensure that the data come from an authentic source and, upon producing the same message digest at the recipient, to further prove data integrity. Therefore, the framework incorporates MS to increase trust and reliability and to handle prohibitive costs as the Industry 4.0 CPS evolves \cite{multiparty}.
\subsubsection*{Blockchain Replaces the Certificate Authority}
As illustrated in Figure \ref{fig:fig_2}~(b), the BC peers generate a partial secret before responding to the sensor devices. The process begins with the BC consortium broadcasting its multiple public keys to the Industry 4.0 sensors. The employed signature is a variant of the Boneh--Lynn--Shacham (BLS) signature scheme, which works upon two different elliptic curves (EC) over a finite prime field \cite{bls}. It implements a partial-secret concept similar to that of IBE on top of the BC and thus eliminates the KGC \cite{li2018blockchain}. Suppose several industry stakeholders collaborate with their public-private key pairs; two different hash algorithms produce the respective message digests throughout the process. A sensor runs the system's security parameters to derive its secret values, then sends its identity (ID) after encrypting it with the aggregated public keys. The BC peers verify it and generate the PS, cosigned by all of the participating peers. The framework initially requires signing by all; however, this is not strictly necessary, as it supports threshold signing, implicitly depicted by the curved arrow in Figure \ref{fig:fig_2}~(b). The industry sensors then verify and decrypt the PS and assemble their public-private key pairs to send data to the BC. The proposed framework thus excludes PKI-like certificates by implementing the intermediary PS, which in essence guarantees the desired trust to the sensors \cite{multisig}.
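A minimal sketch of the cosigning step is given below, assuming the BLS implementation of the py\_ecc library (the three-stakeholder setup and the message are illustrative):
\begin{verbatim}
import secrets
from py_ecc.bls import G2ProofOfPossession as bls

msg = b"sensor-42:register"    # digest of the sensor identity/data

# three stakeholder peers, each with its own BLS key pair
sks = [bls.KeyGen(secrets.token_bytes(32)) for _ in range(3)]
pks = [bls.SkToPk(sk) for sk in sks]

# every peer cosigns; the signatures collapse into one multisignature
agg_sig = bls.Aggregate([bls.Sign(sk, msg) for sk in sks])

# anyone holding the peers' public keys can check the joint consent
assert bls.FastAggregateVerify(pks, msg, agg_sig)
\end{verbatim}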
\subsection{Transaction Verification and Storing}
Once the Industry 4.0 sensors are successfully registered with the BC consortium, they are ready to submit data as transaction proposals, as illustrated in Figure \ref{fig:fig_3}~(B). Data are usually wrapped in a transaction (Tx) that includes the \textit{identity}, \textit{timestamp}, and \textit{action} of the CPS sensors. The \textit{action} may vary with the request type, i.e., \textit{update}, \textit{store}, or \textit{access} to the DHT. BC peers have to check two necessary conditions to verify a transaction:
\begin{itemize}
\item [] \textit{i)} Does the obtained public key associate with the claimed identity?
\item [] \textit{ii)} Can the received transaction be verified?
\end{itemize}
The transaction verification includes the steps of the framework configuration on the HLF CBC \cite{GDPR2019}. Figure \ref{fig:fig_3}~(A) demonstrates the proposed framework with four levels (1 to 4), and the extended Figure \ref{fig:fig_3}~(B) illustrates the following steps, \textit{a} to \textit{f}.
\textit{\textbf{a) Propose}}: Client CPS devices initiate the process by registering with the BC, as explained in the earlier section. They then construct the encrypted transaction proposal using their private keys and invoke the SC with the help of the Software Development Kit (SDK) and the required application programming interfaces (APIs).
\textit{\textbf{b) Endorse}}: The SDK requests endorsement, and the BC peer verifies the Tx after authenticating the device identity (ID), ensuring that data come from the right source.
\textit{\textbf{c) Verify}}: The endorsement verification requires meeting the policy, i.e., the business logic agreed by all stakeholder peers, by running the respective SC operation. The SC takes a Tx proposal as input and returns a multi-signed \textit{YES} or \textit{NO} to the SDK apps in response to step \textit{b)}.
If the Tx proposal is determined to be a query function, the SDK app returns the data upon query execution with the help of the respective APIs (e.g., an OAuth 2.0 REST API). In either case, the SDK apps forward the transaction with the required operations, i.e., C-create, R-retrieve, U-update, and D-delete, along with the endorsement.
\textit{\textbf{d) Aggregate}}: After receiving the verified consents, the SDK apps aggregate all consents into a single Tx (or multiple if required) and disseminate them to the Ordering Service Nodes (OSNs). The OSNs work on consensus protocols such as crash fault tolerance (CFT) with RAFT (etcd.io libraries) or practical Byzantine fault tolerance (PBFT) within the Apache Kafka platform.
\textit{\textbf{e) Commit}}: Once the Tx is relayed by the OSN, all channel peers validate each Tx of the block by specific SC validation and concurrency-control version checking. Any Tx that fails the process is marked with an invalid flag inside the block. Thus, the desired new block is committed to the ledger.
\textit{\textbf{f) Store}}: The Gossip data dissemination protocol of the OSN concurrently broadcasts ledger data across the consortium to ensure synchronization of the latest version of the ledger. A pointer to the data is set out in the BC ledger, and at the same time the data address is securely stored in the offline or online DHT data repository upon IPFS or HL CouchDB. This confirms unique benefits such as traceability and P2P adaptability, as outlined in the earlier contribution section. DHT incorporation makes the framework robust, self-organizing, scalable, and fault-tolerant against routing attacks and false query injection attacks. The proposed framework suits most DHT protocols; however, we have initially integrated IPFS and Kademlia, considering fixed-size routing and malware attacks.
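The content addressing underlying step \textit{f)} can be sketched as below; this is a simplified stand-in (IPFS derives multihash-based CIDs, and the in-memory dictionary here only mimics a DHT):
\begin{verbatim}
import hashlib, json, time

dht = {}       # toy stand-in for the distributed hash table
ledger = []    # toy stand-in for the BC ledger

def store(payload: bytes, device_id: str) -> str:
    addr = hashlib.sha256(payload).hexdigest()  # content-derived address
    dht[addr] = payload                         # data live off-chain
    ledger.append(json.dumps({"id": device_id,  # pointer goes on-chain
                              "timestamp": time.time(),
                              "action": "store",
                              "address": addr}))
    return addr

addr = store(b"vibration=0.93;temp=71C", "sensor-42")
assert dht[addr] == b"vibration=0.93;temp=71C"
\end{verbatim}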
\begin{figure*}[htb!]
\centering\includegraphics[width=\linewidth]{fig/f4.pdf}
\caption{READ (a) vs. WRITE (b) performance, and throughput (TP) and latency (LC) for two different setups, one with a 300 workload and another with a 500 workload}
\label{fig:fig_4}
\end{figure*}
\subsection{Smart Contract and Consensus Implementation}
As a transaction protocol or program, a smart contract resiliently executes and manages assigned events and actions as per an agreement, often called the \textit{contract}. Because of its self-governing features, it can avoid the need for a trusted intermediary, prohibitive costs, and accidental losses \cite{sc}. The framework was deployed both on IBM HLF and on HLI, which facilitate writing customized smart contracts, called chaincode (CC), for specific tasks. Sample CC are available at the referenced project link\footnote{Github Project Link: https://github.com/rahmanziaur/IIoTConsortiumBC} and provide the founding environment for authentication, access control, and validation. Being platform-independent, HLF supports multiple general-purpose languages for writing its chaincode \cite{GDPR2019}; because of the related online resources, we mostly preferred \textit{Go} and, in some test cases, \textit{Java}. The customized CC replaces the existing code for the Membership Service and its dependencies. A Consortium Blockchain entails a default Membership Service Provider (MSP) that works as a PKI-like Certificate Authority (CA) and provides APIs for accessing the ledger and other CC, state variables, and the transaction context.
The policy is defined as an access control list (ACL) for simplicity. As explained earlier in the BC suitability subsection, BFT consensus has salient advantages over existing PoW-driven approaches \cite{dht}. HLF incorporates Apache Kafka, which eases the execution of consensus protocols such as BFT and CFT under enterprise security requirements. CFT can withstand the failure of up to half of the total nodes but gives no guarantee against adversarial nodes, which BFT does: it keeps working even if one third of the total nodes fail. The PBFT protocol is more resilient to system failure; the proposed framework supports both BFT and CFT \cite{bciot}.
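The fault-tolerance figures above follow from standard quorum arithmetic, sketched below (the peer counts are illustrative):
\begin{verbatim}
def cft_tolerated_faults(n: int) -> int:
    """RAFT-style CFT needs a majority: tolerates f crashes
    out of n >= 2f + 1 nodes."""
    return (n - 1) // 2

def bft_tolerated_faults(n: int) -> int:
    """PBFT tolerates f Byzantine peers out of n >= 3f + 1."""
    return (n - 1) // 3

def bft_quorum(n: int) -> int:
    """Votes needed to commit; 2f + 1 guarantees that any two
    quorums overlap in at least one honest peer."""
    return 2 * bft_tolerated_faults(n) + 1

for n in (4, 7, 10, 32):
    print(n, cft_tolerated_faults(n),
          bft_tolerated_faults(n), bft_quorum(n))
\end{verbatim}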
\subsection{Framework Threat and Trust Model}
Communication between the edge devices \cite{edge} (e.g., Dell, Huawei, or ADLINK gateways), accessories, and critical Industry 4.0 machine sensors (e.g., motor vibration, force, or humidity sensors) is assumed to be secure. Besides, the consortium peers (e.g., the endorsement and anchor peers of HLF/HLI) are assumed to be trusted, semi-honest, or honest-but-curious, and issue the multisigned PS instead of traditional certificates \cite{GDPR2019}. The MS and its associated encryption technique together ensure secure communication between the edge and BC levels and guard the network and data against data breaching, malware injection, etc.
\section{Blockchain Deployment and Evaluation}
The CBC of the proposed framework was deployed inside the Caliper evaluation toolkit for Hyperledger, which helps measure a particular BC deployment against a set of previously defined enterprise use cases. IBM admitted, when releasing Caliper, that no general tool provides performance evaluation for BC\cite{GDPR2019}. The extensive use cases were set to overlap with the Industry 4.0 client requirements that generate the benchmark data. The latest versions of the Node.js package manager (NPM 8.0.1), docker, and curl were installed to set up the run-time environment on Ubuntu 18.04 LTS with 32 GB of memory, where \textit{python2}, \textit{make}, \textit{g++}, and git provide additional support. A typical configuration for BC evaluation comprises programs called the \textit{Test Harness} (including client generation and observation), the deployed BC System Under Test (\textit{SUT}), and the RESTful SDK interfaces\cite{GDPR2019}. Four (04) fundamental steps are required to finalize the evaluation process, as follows.
\begin{itemize}
\item [] \textit{a}) A local \textit{verdaccio-server} for package publishing
\item [] \textit{b}) Connecting the repository to the server
\item [] \textit{c}) Binding the \textit{CLI} from the server-side, and
\item [] \textit{d}) Running the benchmark as per the configuration file
\end{itemize}
The performance benchmark then proceeds with three tasks: first, invoking the policy-checking functions to READ and WRITE Tx into the ledger; second, setting up multiple test cases with 4 to 32 peers representing Industry 4.0 stakeholders; and finally, allocating workloads of 100 to 1000 transactions per second among the peers, representing the CPS sensor-device data population. Throughput (TP) and latency (LC) are calculated from the READ (RD) and WRITE (WR) performance, where LC is the delay between the request and the response of a transaction and TP is the number of committed Tx per unit time.
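For clarity, a minimal sketch of how these two metrics can be derived from per-transaction submit/commit timestamps is shown below (illustrative only; Caliper computes the actual values internally):
\begin{verbatim}
def latency_and_throughput(txs):
    # txs: list of (submit_time, commit_time) pairs in seconds.
    delays = [commit - submit for submit, commit in txs]
    avg_latency = sum(delays) / len(delays)
    window = max(c for _, c in txs) - min(s for s, _ in txs)
    throughput = len(txs) / window   # committed Tx per second
    return avg_latency, throughput

sample = [(0.0, 2.1), (0.5, 2.9), (1.0, 4.2)]
print(latency_and_throughput(sample))
\end{verbatim}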
\subsection{Performance Analysis}
The Hyperledger Caliper benchmark results show the performance based on four measurement metrics: the \textit{success rate} (transactions successfully committed against the submitted proposals), the \textit{latency}, the \textit{throughput}, and the \textit{resource consumption} for different test cases. Figure \ref{fig:fig_4} (a) shows the system performance under different sensor workloads ranging from 100 (0.1k) to 1k, where the HLF network comprises two (02) CC, four (04) peer nodes, and three (03) OSNs running on Apache Kafka (or ZooKeeper) for PBFT. As seen in Figure \ref{fig:fig_4} (b), the WRITE throughput reaches 185 at the 0.2k workload with a maximum success rate of 93\% and an average delay of 5 seconds. The READ operation attains higher throughput (up to 470 at maximum) at a similar success rate, and its average delay is roughly half of the WRITE delay because writing has to involve the OSNs on Apache Kafka. The benchmark evaluation explicitly illustrates that the configured setup has lower performance for higher sensor workloads, although the theoretical solution shows that the consortium BC has significant adaptability for a higher number of nodes. As investigated, the local workload-processing bottleneck affects throughput and latency: the HLF Tx flow demands sufficient endorsement responses for the submitted transaction proposals, and when those responses are queued because of network overhead, bandwidth limits, or processing load, the latency rises. On top of that, the general-purpose workstation configuration slows the evaluation for higher workloads.

The results also portray the relation between performance and scalability based on the previously executed READ and WRITE operations. The OSN and peer configuration resembles the initial setup, with two test cases run for 300 and 500 workloads. As depicted in Figure \ref{fig:fig_4} (c), the deployed framework has lower scalability. For the first test case (300 workloads), the throughput and latency reach 150 transactions per second and 64, respectively. For the other test case, the framework exhibits lower throughput and higher latency as the number of nodes ranges from 4 to 32. The Caliper toolkit allows running the subset of nodes that endorses a particular SC. The proposed framework on the HLF Caliper benchmark incurs higher latency (about a minute for 32 peers) due to computation constraints. Besides improving the consensus with a suitable MS scheme, future work includes deploying an enterprise-grade system (e.g., 512 GB of RAM or more) to reduce the latency overhead it currently suffers from.
For the same setup, the framework's average response time is several times lower than that of a system with the MSP. By default, an MSP provides certificate services (e.g., Hyperledger Fabric MSP, VeriSign) inside the consortium Blockchain network \cite{GDPR2019}. The results obtained show that the proposed framework, without an MSP or any other certificate provider (including a trusted KGC), responds within 1 to 16 milliseconds, whereas it takes 40 to 242 milliseconds with the default MSP of the Hyperledger CBC deployment. The response latency varies with the number of workloads (transactions to process) and co-signing BC peers. The proposed MS-aligned framework (with no MSP, as discussed earlier) consumes almost four times less power than a public BC with traditional signing techniques, and it uses about 50 percent less CPU than a consortium BC with the conventional MSP. The certificate-less mechanism, the special remote procedure call gRPC, and the mining-free consensus protocols (i.e., PBFT of HLF) jointly economize the energy and CPU usage it needs.
\section{Conclusion and Future Work}
Relying on a single trusted party challenges industry trust, exposes the system to several advanced cyberattacks, and causes single points of failure. Admittedly, critical industry infrastructure demands cooperative trust-building rather than trust in a single entity. In response to such security issues, we have demonstrated a BC-based framework that ensures security without existing PKI-like certificates. The contribution includes a unique adaptation of multisignature inside a reliable BC consortium, subject to data protection, with an efficient storing technique. The achieved performance supports the framework's applicability to critical Industry 4.0 enterprise CPS.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\section{Introduction}
The internal composition and structure of small, terrestrial planets is generally difficult to characterize. As is well known, mass-radius relationships alone do not constrain the internal composition of a planet beyond a measurement of its bulk density. The internal structure is crucial, as it determines the bulk physical properties of planets and provides valuable insights into their formation, history, and present composition. \citet{unterborn:2016} found that the core radius, the presence of light elements in the core, and the existence of an upper mantle have the largest effects on the final mass and radius of a terrestrial exoplanet. The final mass and radius in turn directly determine the planet's habitability. For example, the core mass fraction affects the strength of a planet's magnetic field, which shields it against harmful radiation from the host star.
At present, we have $\sim$330 small planets ($<4R_{\oplus}$) with masses and radii constrained to better than 50\%\footnote{Based on data from the NASA Exoplanet Archive, https://exoplanetarchive.ipac.caltech.edu/}. Such measurement uncertainties are generally good enough to determine the general structure of many exoplanets. However, for low-mass terrestrial planets with thin atmospheres, planetary masses and radii must be measured to precisions better than 20\% and 10\%, respectively, in order to constrain the core mass fraction and structure \citep{Dorn:2015,Schulze:2020}.
However, high-precision measurements of low-mass exoplanets between $1-4 R_{\oplus}$ are challenging. Additionally, because of the large number of individual discoveries, and because (to date) they have been mostly detected around faint \textit{Kepler}/K2 \citep{Borucki:2010,Howell:2014} targets (with typical \textit{Kepler} and K2 magnitudes of $K\sim15$ and $K\sim12$, respectively, \citealt{Vanderburg:2016}), they are difficult to follow up with high-resolution radial velocity (RV) observations, and thus it is difficult to obtain precise masses and other fundamental physical properties for them. This has already begun to change with the Transiting Exoplanet Survey Satellite mission (TESS; \citealt{Ricker:2015}), as its main science driver is to detect and measure masses and radii for at least 50 small planets ($<4 R_{\oplus}$) around bright stars. At the time of writing, 24 such planets have already been confirmed, and almost all have masses and radii measured to better than $30\%$\footnote{https://exoplanetarchive.ipac.caltech.edu/}.
The discoveries of the TESS mission will also raise very important questions in exoplanet science. The one that we address here relates to the achievable precision with which we shall be able to constrain the fundamental parameters of a transiting planet, such as its mass, density and surface gravity. Given precise photometric and spectroscopic measurements of the host of a transiting planet system, it is possible to measure the planet surface gravity with no external constraints \citep{Southworth:2007}. On the other hand, measuring the mass or radius of a transiting planet requires some external constraint \citep{Seager:2003}. Since, until very recently, it has only been possible to measure the mass or radius of the closest isolated stars directly, theoretical evolutionary tracks or empirical relations between stellar mass and radius and other properties of the star have often been used (e.g., \citealt{Torres:2010}). However, these constraints typically assume that the star is representative of the population of systems that were used to calibrate these relations. In the case of theoretical evolutionary tracks, there may be systematic errors due to uncertainties in the physics of stellar structure, atmospheres, and evolution, or second-order properties of the star, such as its detailed abundance distribution, which can manifest as irreducible systematic uncertainties on the stellar parameters. For example, most evolutionary tracks assume a fixed solar abundance pattern scaled to the iron abundance [Fe/H] of the star, and thus, the same [$\alpha$/Fe] as the Sun. If the host star has a significantly different [$\alpha$/Fe] than the Sun, that will lead to incorrect inferences about the properties of the planet. By using evolutionary tracks that assume a solar [$\alpha$/Fe], one might infer an incorrect density and mass of the planet, and therefore an incorrect core/mantle fraction.
Thus, a direct, empirical or nearly empirical measurement of the radius or mass of the star that does not rely on assumptions that may not be valid is needed (see \citealt{Stassun2017} for a lengthier discussion on the merits and benefits of using empirical or semi-empirical measurements to infer exoplanet parameters). As has been demonstrated in numerous papers (see, e.g., \citealt{Stevens:2017}), with \textit{Gaia} \citep{Gaia:2018} parallaxes, coupled with the availability of absolute broadband photometry from the near-UV to the near-IR, it is now possible to measure the radii of bright ($V\la 12$ mag) stars. This allows for direct, nearly empirical measurements of the masses and radii of transiting planets and their host stars (and, indeed, any eclipsing single-lined spectroscopic binary) in a nearly empirical way. See \citet{Stevens:2018} for an initial exploration of the precisions with which these measurements can be made.
In this paper, we build upon the work of \citet{Stevens:2018} by also assessing the precision with which the surface gravity $g_p$ of transiting planets can be measured. Given that a measurement of $g_p$ only requires measurements of direct observables from the transit photometry and radial velocities without the need for external constraints, the precision on $g_p$ in principle improves with ever more data, assuming no systematic floor. Thus we seek to address two questions. First, with what fractional precision can $g_p$ be measured, and how does this compare to the fractional precision with which the density or mass can be measured? Second, how useful is $g_p$ as a diagnostic of a terrestrial planet's interior structure and, potentially, habitability?
Answering these questions is quite important because the surface gravity of a planet may be a more fundamental parameter than the radius and mass, at least in addressing certain questions, such as its habitability \citep{oneill:2007, valencia:2009,vanHeck:2011}. For example, the surface gravity, along with the equilibrium temperature and mean molecular weight, determines the scale height of any extant atmosphere. If a planet's surface gravity provides more of a lever arm in determining certain aspects of the planet's interior or atmosphere, and if we can achieve a better precision on the planet surface gravity measurement than the radius, then we can use that to better constrain the composition of the planet and, ultimately, its habitability. Thus, given the importance of the planetary surface gravity, mass and radius in constraining the habitability of a planet, it is critical to understand how well we can measure these properties.
Here we focus on the precision with which the surface gravity, density, mass, and radius of a transiting planet can be measured essentially empirically. We will employ methodologies that are similar to those used in \citet{Stevens:2018}, and thus this work can be considered a companion paper to that one.
\section{Analysis}
\label{sec:analysis}
We begin by deriving expressions for the surface gravity $g_p$, mass $M_p$, density $\rho_p$, and radius $R_p$, of a transiting planet in terms of observables from photometric and radial velocity observations, as well as a constraint on the stellar radius $R_{\star}$ from a \textit{Gaia} parallax combined with a bolometric flux from a spectral energy distribution (SED) \citep{Stassun2017,Stevens:2017,Stevens:2018}.
\subsection{Planet Surface Gravity\label{sec:gp}}
The planet surface gravity is defined as
\begin{equation}
g_{p} = \frac{GM_{p}}{R_{p}^{2}}.
\end{equation}
The radial velocity semi-amplitude $K_{\star}$ can be expressed as
\begin{equation}
\begin{split}
K_{\star} =\left(\frac{2\pi G}{P}\right)^{1/3} \frac{M_p\sin{i}}{(M_{\star}+M_p)^{2/3}} \frac{1}{\sqrt{1-e^2}} \\
\simeq28.4~{\rm m~s}^{-1}\left(\frac{P}{\rm yr}\right)^{-1/3} \frac{M_{p}\sin{i}}{M_{\rm J}}\bigg(\frac{M_{\star}}{M_{\odot}}\bigg)^{-2/3}
(1-e^2)^{-1/2},
\end{split}
\label{eqn:K}
\end{equation}
where $M_\star$ is the stellar mass, $P$ and $e$ are the planetary orbital period and eccentricity, $M_{\rm J}$ is Jupiter's mass, and $i$ is the inclination angle of the orbit. In the second equality, we have assumed that $M_p \ll M_{\star}$.
Using Newton's version of Kepler's third law and Equation~\ref{eqn:K}, the surface gravity can then be expressed as
\begin{equation}
g_{p} =\frac{2\pi}{P}
\frac{\sqrt{1-e^2}}{(R_p/a)^2}
\frac{K_{\star}}{\sin{i}}.
\end{equation}
For the majority of the following analysis, we will assume circular orbits ($e=0$) for simplicity and drop the eccentricity dependence. This analysis could be repeated for eccentric orbits, but the algebra is tedious and does not lead to qualitatively new insights. The assumption of circular orbits thus provides a qualitative expectation of the uncertainties on the planetary parameters. Furthermore, in many cases it is justified because we expect many of the systems to which this analysis is applicable will have very small eccentricities. We also further assume that $\sin{i}=1$, which is approximately true for transiting exoplanets. Under these assumptions, we have that
\begin{equation}
g_{p} =\frac{2\pi K_{\star}}{P}
\left(\frac{a}{R_p}\right)^2.
\label{eqn:gp}
\end{equation}
The semimajor axis scaled to the planet radius can be converted to the semimajor axis scaled to the stellar radius by using the depth of the transit $\delta\equiv (R_p/R_{\star})^2$, which is a direct observable:
\begin{equation}
\frac{a}{R_{p}} = \frac{a}{R_{\star}}\left(\frac{R_{\star}}{R_{p}}\right) = \frac{a}{R_{\star}}\delta^{-1/2}.
\label{eqn:arparstarphys}
\end{equation}
We can then rewrite the scaled semimajor axis $a/R_{\star}$ in terms of the stellar density $\rho_{\star}$ (see e.g., \citealt{Sandford:2017} for a precise derivation):
\begin{equation}
\frac{a}{R_{\star}} =
\left(\frac{GP^2}{3\pi}\right)^{1/3}
\left(\rho_{\star}+k^3\rho_p\right)^{1/3},
\label{eqn:arstar}
\end{equation}
where $k\equiv R_p/R_{\star}$. Since typically $k\ll 1$,
\begin{equation}
\frac{a}{R_{\star}} = \left(\frac{GP^2\rho_{\star}}{3\pi}\right)^{1/3}.
\label{eqn:arstar8}
\end{equation}
Using Equation \ref{eqn:arparstarphys}, we find
\begin{equation}
\frac{a}{R_p} = \left(\frac{GP^2\rho_{\star}}{3\pi}\right)^{1/3}\delta^{-1/2}.
\end{equation}
From Equation \ref{eqn:gp}, we can write
\begin{equation}
g_{p} = \frac{2\pi K_{\star}}{P}
\left(\frac{a}{R_p}\right)^2=\frac{2\pi K_{\star}}{P}
\left(\frac{a}{R_{\star}}\right)^2\delta^{-1}.
\label{eqn:gp2}
\end{equation}
Then $a/R_{\star}$ can be written in terms of observables as
\begin{equation}
\frac{a}{R_{\star}}= \frac{P}{\pi}\frac{\delta^{1/4}}{\sqrt{T\tau}}.
\label{eqn:arstarobs}
\end{equation}
where the observables are the orbital period $P$, transit time $T$ (full-width at half-maximum), the ingress/egress duration $\tau$, and the transit depth $\delta$.
Inserting this into Equation \ref{eqn:gp2}, we find that the planet surface gravity is given in terms of pure observables as (\citealt{Southworth:2007})
\begin{equation}
g_{p} = \frac{2K_{\star}P}{\pi T\tau} \delta^{-1/2}.
\label{eqn:gpobservables}
\end{equation}
Using linear propagation of uncertainties, and assuming no covariances between the observable parameters\footnote{See \citet{Carter:2008} for an exploration of the covariances between photometric and RV observable parameters.}, and the aforementioned assumptions ($M_P\ll M_{\star}$, $\sin{i}\sim 1$, $e=0$, {\rm and} $k\ll 1$), we can approximate the fractional uncertainty on the surface gravity as
\begin{equation}
\begin{split}
\bigg(\frac{\sigma_{g_p}}{g_p}\bigg)^{2} \approx \bigg(\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg)^{2} +
\bigg(\frac{\sigma_{P}}{P}\bigg)^{2} +
\bigg(\frac{\sigma_{T}}{T}\bigg)^{2} + \\
\bigg(\frac{\sigma_{\tau}}{\tau}\bigg)^{2} +
\frac{1}{4}\bigg(\frac{\sigma_{\delta}}{\delta}\bigg)^{2}.
\end{split}
\label{eqn:gpuncertianty}
\end{equation}
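To make Equations~\ref{eqn:gpobservables} and~\ref{eqn:gpuncertianty} concrete, the short sketch below evaluates both for hot-Jupiter-like observables; all numerical inputs are illustrative assumptions rather than measurements of any real system.
\begin{verbatim}
import math

def gp_and_frac_err(K, P, T, tau, delta, fracs):
    # g_p = 2 K P / (pi T tau) * delta^(-1/2)  [Eq. gpobservables];
    # with K in m/s and P, T, tau in seconds, g_p is in m/s^2.
    gp = 2.0 * K * P / (math.pi * T * tau * math.sqrt(delta))
    # Add the fractional errors in quadrature [Eq. gpuncertianty].
    fK, fP, fT, ftau, fdelta = fracs
    frac = math.sqrt(fK**2 + fP**2 + fT**2 + ftau**2
                     + 0.25 * fdelta**2)
    return gp, frac

day = 86400.0
gp, frac = gp_and_frac_err(K=140.0, P=3.0 * day, T=0.11 * day,
                           tau=0.01 * day, delta=0.01,
                           fracs=(0.03, 1e-5, 0.005, 0.05, 0.01))
print(f"g_p = {gp:.1f} m/s^2 +/- {100 * frac:.1f}%")
\end{verbatim}
With these representative inputs the sketch returns $g_p \approx 28$ m s$^{-2}$ with a $\sim$6\% fractional uncertainty, dominated by the assumed 5\% uncertainty on $\tau$.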
\subsection{Planet Mass\label{sec:mp}}
We now turn to the uncertainty on the planet mass. We can approach this estimate two ways. First, we can start from Equation \ref{eqn:K}, again making the same simplifying assumptions, and solve for $M_p$ in terms of observables. We note that this method requires the intermediate step of deriving an expression for the host star mass in terms of direct observables, using the fact that
\begin{equation}
M_{\star}=\frac{4\pi}{3}\rho_{\star}R_{\star}^3
\label{eqn:mstarphys}
\end{equation}
and using Equations \ref{eqn:arparstarphys} and \ref{eqn:arstarobs} to write $\rho_{\star}$ in terms of observables. We find
\begin{equation}
M_{\star}=\frac{4P}{\pi G}\delta^{3/4}(T\tau)^{-3/2}R_{\star}^3.
\end{equation}
Using this, we can then derive the planet mass in terms of observables as
\begin{equation}
M_p = \frac{2}{\pi G}\frac{K_{\star}P}{T\tau}R_{\star}^2\delta^{1/2}.
\label{eqn:mpobs}
\end{equation}
A more straightforward approach is to use the fact that we have already derived the planet surface gravity in terms of observables. Starting from the definition of surface gravity, we can write
\begin{equation}
M_p=\frac{1}{G}g_p R_p^2=\frac{1}{G}g_p R_{\star}^2\delta.
\end{equation}
Using Equation \ref{eqn:gpobservables}, we arrive at the same expression as Equation \ref{eqn:mpobs}.
Using Equation \ref{eqn:mpobs}, we derive the fractional uncertainty on the planet mass in terms of the fractional uncertainty in the observables, again assuming no covariances and the simplifying assumptions stated before ($M_P\ll M_{\star}$, $\sin{i}\sim 1$, $e=0$, {\rm and} $k\ll 1$). We find
\begin{equation}
\begin{split}
\bigg(\frac{\sigma_{M_p}}{M_p}\bigg)^{2} \approx \bigg(\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg)^{2} + \bigg(\frac{\sigma_{P}}{P}\bigg)^{2} +
\bigg(\frac{\sigma_{T}}{T}\bigg)^{2} + \\
\bigg(\frac{\sigma_{\tau}}{\tau}\bigg)^{2} +
\frac{1}{4}\bigg(\frac{\sigma_{\delta}}{\delta}\bigg)^{2} +
4\bigg(\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg)^{2}.
\end{split}
\label{eqn:mpuncertainty}
\end{equation}
\subsection{Planet Density\label{sec:density}}
We derive the planet density $\rho_{p}$ in terms of observables. The planet density is given by
\begin{equation}
\rho_p = \frac{3M_p}{4\pi R_p^3}=\frac{3M_p}{4\pi R_{\star}^3}\delta^{-3/2}.
\end{equation}
We have already derived the mass of the planet in terms of observables in Equation \ref{eqn:mpobs}. Using this expression, we find
\begin{equation}
\rho_p = \frac{3}{2\pi^2G}\frac{K_{\star}P}{T\tau\delta R_{\star}}.
\label{eqn:rhopobs}
\end{equation}
From this equation, we derive the fractional uncertainty on the planet density in terms of the fractional uncertainty in the observables, again assuming no covariances and the simplifying assumptions stated before. We find
\begin{equation}
\begin{split}
\bigg(\frac{\sigma_{\rho_p}}{\rho_p}\bigg)^{2} \approx \bigg(\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg)^{2} + \bigg(\frac{\sigma_{P}}{P}\bigg)^{2} +
\bigg(\frac{\sigma_{T}}{T}\bigg)^{2} + \\
\bigg(\frac{\sigma_{\tau}}{\tau}\bigg)^{2} +
\bigg(\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg)^{2} +
\bigg(\frac{\sigma_{\delta}}{\delta}\bigg)^{2}.
\end{split}
\label{eqn:rhopuncertainty}
\end{equation}
\subsection{Planet Radius \label{sec:Rp}}
Finally, the planet radius uncertainty can be trivially derived from the definition of the transit depth $\delta$, assuming no limb darkening:
\begin{equation}
\delta = \bigg(\frac{R_{p}}{R_{\star}}\bigg)^{2}.
\label{eqn:deltadef}
\end{equation}
Then,
\begin{equation}
R_{p} = \sqrt{\delta}R_{\star},
\label{eqn:rpobs}
\end{equation}
and the fractional uncertainty on the planet radius is simply
\begin{equation}
\bigg(\frac{\sigma_{R_{p}}}{R_{p}}\bigg)^{2} \approx
\bigg(\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg)^{2} +
\frac{1}{4}\bigg(\frac{\sigma_{\delta}}{\delta}\bigg)^{2}.
\label{eqn:runcertainty}
\end{equation}
We note that, by assuming that $\delta$ is a direct observable, we are fundamentally assuming no limb darkening of the star. Of course, in reality the presence of limb darkening means that the observed fractional depth of the transit is not equal to $\delta$, and thus the uncertainty in $\delta$ is larger than one would naively estimate assuming no limb darkening. However, assuming that the limb darkening is small (as it is for observations in the near-IR), or that it can be estimated a priori based on the properties of the star, or that the photometry is sufficiently precise that both the limb darkening and $\delta$ can be simultaneously constrained, the naive estimate of the uncertainty on $\delta$ assuming no limb darkening will not be significantly larger than that in the presence of limb darkening.
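As a compact numerical illustration of the four error budgets (Equations~\ref{eqn:gpuncertianty}, \ref{eqn:mpuncertainty}, \ref{eqn:rhopuncertainty}, and~\ref{eqn:runcertainty}), the following sketch evaluates them for a single set of representative, assumed fractional uncertainties; the inputs are placeholders, not measurements.
\begin{verbatim}
import math

def planet_frac_errs(fK, fP, fT, ftau, fdelta, fRs):
    # Quadrature sums from the four error-budget equations
    # (circular orbit, sin i ~ 1, k << 1 assumed throughout).
    base = fK**2 + fP**2 + fT**2 + ftau**2
    return {
        "g_p":   math.sqrt(base + 0.25 * fdelta**2),
        "M_p":   math.sqrt(base + 0.25 * fdelta**2 + 4.0 * fRs**2),
        "rho_p": math.sqrt(base + fdelta**2 + fRs**2),
        "R_p":   math.sqrt(0.25 * fdelta**2 + fRs**2),
    }

# 1% on K and delta, 0.5% on T, 5% on tau, 1% on R_star:
print(planet_frac_errs(0.01, 1e-5, 0.005, 0.05, 0.01, 0.01))
\end{verbatim}
With these inputs the fractional uncertainties are roughly 5.2\% ($g_p$), 5.5\% ($M_p$), 5.3\% ($\rho_p$), and 1.1\% ($R_p$), reproducing the hierarchy discussed in the next section.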
\section{Comparing the Estimated Uncertainties on the Planet Mass, Density, and Surface Gravity}
Comparing the expressions for the fractional uncertainty on $g_p$, $M_p$, and $\rho_p$ (Equations ~\ref{eqn:gpuncertianty},~\ref{eqn:mpuncertainty}, and~\ref{eqn:rhopuncertainty}, respectively), we can make some broad observations on the precision with which it is possible to measure these three planetary parameters.
First, comparing the uncertainties on $g_p$ and $M_p$, we note that the only difference is that $\sigma_{M_{p}}/M_{p}$ requires the additional term $4(\sigma_{R_{\star}}/R_{\star})^2$. \citet{Stevens:2018} estimates that it should be possible to infer the stellar radii of bright hosts ($G\la 12$ mag) to an accuracy of order $1\%$ using the final \textit{Gaia} data release parallaxes, currently-available absolute broadband photometry, and spectrophotometry from \textit{Gaia} and the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx; \citealt{SPHEREx:2018}). The exact level of accuracy will depend on the stellar spectral type and the final parallax precision. It is likely that $R_{\star}$ may dominate the error budget relative to the other terms, with the possible exception of the uncertainty in $\tau$. We note that TESS is able to measure $\tau$ more precisely than either {\it Kepler} or K2 were able to for systems with similar physical parameters and noise properties, primarily because the TESS bandpass is redder than that of {\it Kepler}, and thus the stellar limb darkening is smaller and less degenerate with $\tau$. Overall, we generically expect the planetary surface gravity to be measured to smaller fractional precision than the planet mass.
We now turn to the uncertainty on planetary density. When comparing the expressions for the uncertainty in $M_p$ to $\rho_p$, we note that the uncertainty due to the depth enters the variance as $(1/4)(\sigma_\delta/\delta)^2$ for $M_p$, whereas it enters as $(\sigma_\delta/\delta)^2$ for $\rho_p$. For large planets, the depth should be measurable to a precision of $\sim$1\% or better, particularly in the TESS bandpass, similar to the best expected precision on $R_{\star}$. Thus, we expect $\sigma_{\delta}$ to be comparable to $\sigma_{R_{\star}}$, and thus both should contribute at the $\sim$1\% level to $\sigma_{\rho_p}$. On the other hand, we expect $\sigma_{R_{\star}}$ to dominate over the transit depth for $M_p$. Thus, for any given system, we generally expect the following hierarchy: $\sigma_{M_p}/M_p>\sigma_{\rho_p}/\rho_p>\sigma_{g_p}/g_p > \sigma_{R_p}/R_p$.
Similarly, there is a hierarchy in the precision with which the observed parameters $T$, $P$, $K_{\star}$, $\delta$, $\tau$, and $R_{\star}$ are measured. For the relatively small sample of planets confirmed from TESS so far, we find that in general, the most precise observable parameter is the orbital period, followed by the stellar radius, the transit depth, the RV semi-amplitude, and the transit duration, such that: $\sigma_{T}/T>\sigma_{K}/K>\sigma_{\delta}/\delta>\sigma_{R_{\star}}/R_{\star} >\sigma_{P}/P$. The ingress/egress time $\tau$ is not always reported in discovery papers, so we do not include it in this comparison. However, we generally expect that it will be measured to a precision that is worse than $T$ \citep{Carter:2008, Yee:2008}.
This hierarchy is in agreement with the findings of \citet{Carter:2008} and \citet{Yee:2008}, who derived the following approximate relations for the uncertainties in the parameters of a photometric transit (assuming no limb darkening):
\begin{eqnarray}
\frac{\sigma_{\delta}}{\delta} &\simeq& Q^{-1}\\
\frac{\sigma_T}{T} &\simeq& Q^{-1}\sqrt{\frac{2\tau}{T}}\\
\frac{\sigma_\tau}{\tau} &\simeq& Q^{-1}\sqrt{\frac{6T}{\tau}},
\label{eqn:transuncertainties}
\end{eqnarray}
where $Q$ is the signal-to-noise ratio of the combined transits, defined as
\begin{equation}
Q\equiv(N_{\rm tr}\Gamma_{\rm phot}T)^{1/2}\delta,
\label{eqn:Q}
\end{equation}
where $N_{\rm tr}$ is the effective number of transits that were observed, and $\Gamma_{\rm phot}$ is the photon collection rate\footnote{Alternatively, assuming all measurements have a fractional photometric uncertainty $\sigma_{\rm phot}$, and there are $N$ measurements in transit, the total signal-to-noise ratio can be defined as $Q\equiv \sqrt{N}(\delta/\sigma_{\rm phot})$.}. We note that Equation \ref{eqn:Q} implicitly assumes uncorrelated photometric uncertainties. Since, in general, $\sigma_{\tau} > \sigma_{T}$, we have that $\sigma_{\delta}/\delta<\sigma_{T}/T<\sigma_{\tau}/\tau$.
In the above equations, we have ignored the uncertainty in the transit midpoint $t_c$ as it does not enter into the expressions for the uncertainties in $R_p$, $M_p$, $\rho_p$, or $g_p$. We also assumed that the uncertainty in the baseline (out of transit) flux is negligible, which is generally a good assumption, particularly for space-based missions such as {\it Kepler}, K2, and TESS, where the majority of the measurements are taken outside of transit.
We note that, particularly for small planets, when the limb darkening is significant, and when only a handful of transits have been observed, $\tau$ may be poorly measured (i.e., precisions of $\ga 5\%$), and therefore its uncertainty will dominate the error budget. In such cases, it may be more prudent to use additional external constraints, such as stellar isochrones, to improve the overall parameters of the system (at the cost of losing the nearly purely empirical nature of the inferences as assumed in the derivations above). See \citet{Stevens:2018} for additional discussion.
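To make the \citet{Carter:2008} scalings concrete, the sketch below evaluates $Q$ (using the footnote definition in terms of the per-point fractional error) and the resulting fractional uncertainties on $\delta$, $T$, and $\tau$; all inputs are illustrative assumptions.
\begin{verbatim}
import math

def transit_frac_errs(N, sigma_phot, T, tau, delta):
    # Footnote definition: Q = sqrt(N) * delta / sigma_phot, for N
    # in-transit points each with fractional error sigma_phot.
    Q = math.sqrt(N) * delta / sigma_phot
    return {"Q": Q,
            "delta": 1.0 / Q,                       # sigma_delta/delta
            "T":     math.sqrt(2.0 * tau / T) / Q,  # sigma_T/T
            "tau":   math.sqrt(6.0 * T / tau) / Q}  # sigma_tau/tau

# E.g., 20 transits x 95 in-transit points at 1 mmag precision,
# T = 0.11 d, tau = 0.01 d, delta = 1%:
print(transit_frac_errs(N=1900, sigma_phot=0.001,
                        T=0.11, tau=0.01, delta=0.01))
\end{verbatim}
For these inputs $Q\approx440$, giving $\sigma_\delta/\delta\approx0.2\%$, $\sigma_T/T\approx0.1\%$, and $\sigma_\tau/\tau\approx1.9\%$, illustrating why $\tau$ typically dominates the error budget.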
\section{Validation of Our Analytic Estimates}
\label{validation}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig1.pdf}
\end{center}
\vspace{-3mm}
\caption{Reported fractional uncertainties in $M_{p}$ (diamonds), $g_{p}$ (squares), $\rho_{p}$ (circles), and $R_{p}$ (triangles) versus our model-independent analytic estimates for a variety of transiting planets. These include a fiducial hot Jupiter (dark blue), Kepler-93b (pink), KELT-26b (sky blue), KELT-26b without external constraints (green, open symbols), K2-106b (brown), and HD 21749b (gold). For K2-106b and HD 21749b, arrows point from the fractional uncertainties reported in the discovery papers to our `forensic' estimates of the uncertainties that could be achieved had the authors adopted only empirical constraints. The open symbols are systems for which no external constraints were used. A dashed, gray one-to-one line is plotted for reference.}
\label{fig:fig1}
\end{figure}
We test the analytic expressions derived in Section~\ref{sec:analysis} using four confirmed exoplanets and one fiducial hot Jupiter simulated by \citet{Stevens:2018}. The confirmed exoplanets are KELT-26b \citep{Rodriguez:2020}, HD 21749b \citep{Dragomir:2019}, K2-106b \citep{Adams:2017,Guenther:2017}, and Kepler-93b \citep{Ballard:2014,Dressing:2015}. These systems have masses and radii between $\sim$4--448 $M_{\oplus}$ and $\sim$1.4--21 $R_{\oplus}$.
We estimated the expected analytic uncertainties on the planet parameters by inserting the values of $T$, $P$, $K_{\star}$, $\delta$, $\tau$, $R_{\star}$, and their respective uncertainties from the discovery papers into Equations~\ref{eqn:gpuncertianty},~\ref{eqn:mpuncertainty}, \ref{eqn:rhopuncertainty}, and~\ref{eqn:runcertainty}. Then, we compared the analytic uncertainties to those reported in the discovery papers, which were derived from MCMC analyses and not using the analytic approximations presented here. For parameters with asymmetric uncertainties, we took the average of the upper and lower bounds and adopted that as the uncertainty.
We note that the discovery papers of all of the examples we present here (except for KELT-26b) do not provide the transit duration $T$ (or the full width at half maximum of the transit), but rather $T_{14}$, which is defined as the difference between the fourth and first contact (see, e.g., \citealp{Carter:2008}). Since we are generally interested in $T$, we calculate it from the given observables using
\begin{equation}
T = T_{14} - \tau
\label{eqn:T}
\end{equation}
and we estimate its uncertainty with the relationship from \citet{Carter:2008} and \citet{Yee:2008}:
\begin{equation}
\frac{\sigma_{T}}{T} =\left(\sqrt{\frac{1}{3}}\frac{\tau}{T}\right)\frac{\sigma_{\tau}}{\tau}.
\label{eqn:sigmaT}
\end{equation}
We use Equations~\ref{eqn:T} and~\ref{eqn:sigmaT} to calculate the transit duration values and uncertainties for all the systems for which only $T_{14}$ is given.
There are other ways of estimating the uncertainty in $T$, such as by propagating the uncertainty on $T$ from Equation~\ref{eqn:T}, or by assuming that the uncertainty in $T$ is approximately equal to that of $T_{14}$. However, these approaches overestimate the uncertainty on $T$ as compared to Equation~\ref{eqn:sigmaT} because they do not account for the covariance between the measurements of $T_{14}$ and $\tau$. Therefore, we adopt the uncertainty in $T$ from Equation~\ref{eqn:sigmaT} for all the exoplanets referenced here.
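A short helper implementing Equations~\ref{eqn:T} and~\ref{eqn:sigmaT} follows (the numbers in the call are placeholders):
\begin{verbatim}
import math

def T_and_sigma(T14, tau, sigma_tau):
    T = T14 - tau                       # Eq. (T)
    # Eq. (sigmaT); algebraically, sigma_T/T = sqrt(1/3)(tau/T)
    # (sigma_tau/tau) reduces to sigma_T = sigma_tau / sqrt(3).
    sigma_T = sigma_tau / math.sqrt(3.0)
    return T, sigma_T

print(T_and_sigma(T14=0.12, tau=0.01, sigma_tau=0.001))  # days
\end{verbatim}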
Finally, for systems where the transit depth $\delta$ is not provided, but rather the planet-star radius ratio $R_{p}/R_{\star}$, we use linear propagation of error to estimate $\sigma_{\delta}$, finding
\begin{equation}
\bigg(\frac{\sigma_{\delta}}{\delta}\bigg)^{2} = 4\bigg(\frac{\sigma_{R_{p}/R_{\star}}}{R_{p}/R_{\star}}\bigg)^{2}.
\label{eqn:uncrpvsdelta}
\end{equation}
Finally, we adopt the fractional uncertainties on $R_{\star}$ as reported in the papers. In some cases these were derived using external constraints, such as stellar models, and thus may under- or overestimate the empirical uncertainty in $R_{\star}$ derived from the stellar SED and parallax.
The fractional uncertainties calculated using our analytic approximations for the five planets in our sample are listed in Table~\ref{tab:analytic_numerical} and shown in Figure~\ref{fig:fig1}. As is clear from Figure~\ref{fig:fig1}, our estimates are broadly in agreement with the fractional uncertainties quoted in the discovery papers. However, we note that the fractional uncertainties we predict for certain quantities are systematically larger or smaller than those reported in the papers. After a careful `forensic' analysis, we have tracked down the reason for these discrepancies. In the two most discrepant cases, it is because the authors used external constraints on the properties of the host star (such as $T_{\rm eff}$, [Fe/H], and $\log{g_\star}$), combined with stellar evolutionary tracks, to place priors on the stellar parameters $R_\star$ and $M_\star$. In one case (HD 21749), the resulting constraint on $\rho_{\star}$ is {\it tighter} than the empirical constraint on the stellar density from the light curve. In the other case, the adopted constraint on $\rho_{\star}$ is {\it weaker} than the empirical constraint from the light curve, but the external (weaker) constraints on $M_{\star}$ and $R_{\star}$ were nevertheless adopted, rather than the (tighter) empirical constraints. In the remaining cases, either external constraints were not assumed, and as a result their parameter uncertainties agree well with our analytic estimates, or the external constraints were negligible compared to the empirical constraints and thus the empirical constraints dominated, again leading to agreement with our analytic estimates. We ultimately conclude that our analytic estimates are reliable; however, we describe in detail our forensic analysis of the systems for pedagogical purposes. In the following subsections, we discuss each system in further detail.
Before doing so, however, we stress that the advantage of empirical, model-independent approximations like the ones presented here is that they do not assume that the physical properties of any particular system are representative of the systems used to calibrate the empirical models, or that the properties of the systems necessarily agree with the theoretical predictions. For example, theoretical models that make assumptions about the elemental abundances of the host star may not apply to the particular system under consideration. Therefore, although our empirical approach may lead to weaker constraints on the parameters of the planets, we believe it leads to more robust constraints on these parameters.
\setlength{\tabcolsep}{4pt}
\begin{deluxetable*}{lcccc|ccccc}
\tablecolumns{10}
\tablecaption{Analytic and reported fractional
uncertainties\label{tab:analytic_numerical}}
\tablehead{\multicolumn{1}{c}{\textbf{Planet}} & \multicolumn{4}{c|}{\textbf{\textbf{Analytic}}} & \multicolumn{4}{c}{\textbf{Literature}} & \multicolumn{1}{c}{\textbf{Reference}}\\
\cline{2-5}
\cline{6-9}
\colhead{} & \colhead{$M_p$} & \colhead{$g_p$} & \colhead{$\rho_p$} &\multicolumn{1}{c|}{$R_p$} & \colhead{$M_p$} & \colhead{$g_p$} & \colhead{$\rho_p$} & \colhead{$R_p$} & \colhead{}}
\startdata
Fiducial HJ & 0.06 & 0.04 & 0.06 &0.01& 0.05 & 0.05 & 0.05 &0.01 & \citet{Stevens:2018} \\
KELT-26b & 0.34 & 0.34 & 0.34 &0.03 & 0.33 & 0.37 & 0.35 &0.03& \citet{Rodriguez:2020} \\
KELT-26b$^{\star}$ & 0.35 & 0.35 & 0.35 &0.03 & 0.33 & 0.37 & 0.35 &0.03& \\
Kepler-93b & 0.17 & 0.16 & 0.17 & 0.01& 0.17 & 0.16 &0.17 &0.01 & \citet{Ballard:2014} \\
HD 21749b$^{\dagger}$ & 0.26 & 0.25 & 0.26 &0.05 & 0.09 & 0.16 & 0.21 & 0.06& \citet{Dragomir:2019}\\
&(0.09) & (0.14) & (0.18) & & & & & \\
K2-106b$^{\dagger}$ & 0.25 & 0.14 & 0.18 & 0.10& 0.11 & 0.13 & 0.34 & 0.10& \citet{Guenther:2017}\\
&(0.11) & & (0.32) & & & & & \\
\enddata
\tablenotetext{}{\textbf{Notes.} The first four columns are the analytic uncertainties in $M_{p}$ (Eqn~\ref{eqn:mpuncertainty}), $g_{p}$ (Eqn~\ref{eqn:gpuncertianty}), $\rho_{p}$ (Eqn~\ref{eqn:rhopuncertainty}), and $R_{p}$ (Eqn~\ref{eqn:runcertainty}), while the next four are the uncertainties in those parameters reported in the literature. Planets with a $^{\dagger}$ were analyzed using external constraints from stellar evolutionary models. KELT-26b$^{\star}$ is KELT-26 analyzed without external constraints. The quantities in parentheses below HD 21749b and K2-106b are the values we recover if we assume external constraints, as explained in Sections~\ref{sec:hd21749b} and~\ref{sec:k2106}.}
\end{deluxetable*}
\subsection{A Fiducial Hot Jupiter}
\citet{Stevens:2018} simulated the photometric time series and RV measurements for a typical hot Jupiter ($M_{p}= M_{\rm J}$ and $R_{p}= R_{\rm J}$) on a 3 day orbit transiting a G-type star using VARTOOLS \citep{Hartman:2016}. They injected a Mandel-Agol transit model \citep{Mandel:2002} into an (out-of-transit flux) normalized light curve, and simulated measurement offsets by drawing from a Gaussian distribution with 1 millimagnitude dispersion. They furthermore assumed a cadence of 100 seconds. They note that these noise properties are typical of a single ground-based observation of a hot Jupiter from a small, $\sim$1 m telescope. For the RV data, they simulated 20 evenly-spaced measurements, each with 10 m $\rm s^{-1}$ precision (which they assumed was equal to the scatter, or `jitter'). They then performed a joint photometric and RV fit to the simulated data using EXOFASTv2 \citep{Eastman:2017,eastman:2019} to model and estimate the star and planet's properties. They simulated three different cases: a circular ($e=0$) orbit and equatorial transit, or an impact parameter of $b=0$, an eccentric orbit with $e=0.5$ and $b=0$, and a circular orbit and $b=0.75$. We consider the parameters and uncertainties for the case of a circular orbit and equatorial transit, for which our equations are most applicable, and use the best-fit values and uncertainties from Table 1 in \citet{Stevens:2018}.
The fractional uncertainties in the planet mass, surface gravity, and planetary bulk density quoted in \citet{Stevens:2018} are all roughly 5\%, whereas the fractional uncertainty in the planet radius is 1.7\%. These uncertainties are in very good agreement with our analytic estimates, as Figure~\ref{fig:fig1} and Table~\ref{tab:analytic_numerical} show.
\subsection{A Real Hot Jupiter}
KELT-26b is an inflated ultra hot Jupiter on a 3.34 day, polar orbit around an early Am star characterized by \citet{Rodriguez:2020}. It has a mass and radius of $M_{p}=1.41^{+0.43}_{-0.51} M_{\rm J}$ and $R_{p}=1.940^{+0.060}_{-0.058} R_{\rm J}$, respectively. The photometry (which included TESS data) and radial velocity data were jointly fit using EXOFASTv2, and included an essentially empirical constraint on the radius of the star from the spectral energy distribution and the \textit{Gaia} Data Release 2 (DR2) parallax, as well as theoretical constraints from the MESA Isochrones and Stellar Tracks (MIST) stellar evolution models \citep{Dotter:2016,Choi:2016,Paxton:2011,Paxton:2013,Paxton:2015}. Therefore, unlike the fiducial hot Jupiter discussed above, this system was modeled using both external empirical constraints and external theoretical constraints. The uncertainties in the planet parameters reported by \citet{Rodriguez:2020} are $\sim$34\% for the mass, $\sim$33\% for the surface gravity, $\sim$33\% for the bulk density, and 3.8\% for the planet's radius. These are very close to our estimates of the fractional uncertainties of these parameters, implying that the constraints from the MIST evolutionary tracks have little effect on the inferred parameters of the system.
To test this hypothesis, we reanalyzed this system with EXOFASTv2 without using the external theoretical constraints from the MIST isochrones, that is, only using the spectral energy distribution of the star, its parallax from \textit{Gaia} DR2, and the light curves and radial velocities. The uncertainties from this analysis are 35\% for the planetary mass, surface gravity, and the density, and 3.3\% for the radius. These are consistent with the uncertainties derived from the analysis using the MIST evolutionary tracks as constraints. The fractional uncertainties from the original paper and the analysis without constraints are shown in sky blue (with constraints) and green (without) in Figure~\ref{fig:fig1}. We conclude that the inferred parameters of the system derived using purely empirical constraints are as precise (and likely more accurate) than those inferred using theoretical evolutionary tracks. Therefore, at least for systems similar to KELT-26, we see no need to invoke theoretical priors.
\subsection{Kepler-93b}
Kepler-93b is a terrestrial exoplanet on a 4.7 day period discovered by \citet{Ballard:2014}. It has a mass and radius of $M_{p}= 4.02 \pm0.68~M_{\oplus}$ and $R_{p}= 1.483\pm0.019~R_{\oplus}$. With a radius uncertainty of only 1.2\%, it is one of the most precisely characterized exoplanets to date. \citet{Ballard:2014}
used asteroseismology to precisely constrain the stellar density, and then used it as a prior in their MCMC analysis, leading to the remarkably precise planet radius. Their analysis did not use external constraints from stellar evolutionary models, however. \citet{Dressing:2015} revisited Kepler-93 and collected HARPS-N \citep{mayor:2003} spectra, which they combined with archival Keck/HIRES spectra to improve upon the planet's mass estimate. They thus reduced the uncertainty in the mass of Kepler-93b from $\sim$40\% \citep{Ballard:2014} to $\sim$17\%. We used the photometric parameters ($T$, $\tau$, and $\delta$) from \citet{Ballard:2014} and the semi-amplitude $K_{\star}$ from \citet{Dressing:2015} to test our analytic estimates. We compared our results to the reported uncertainties in $M_{p}$, $g_{p}$ and $\rho_{p}$ from \citet{Dressing:2015}, since they provide slightly more precise properties. The uncertainties in the properties of Kepler-93b are all $\sim$17\%, and 1.2\% for the radius, which are in excellent agreement with our analytic estimates, as shown in Figure~\ref{fig:fig1} and Table~\ref{tab:analytic_numerical}. Interestingly, this implies that the asteroseismological constraint on $\rho_{\star}$ does not significantly improve the overall constraints on the system.
\subsection{HD 21749b}
\label{sec:hd21749b}
HD 21749b is a warm sub-Neptune on a 36 day orbit transiting a K4.5 dwarf discovered by \citet{Dragomir:2019}. The planet has a radius of $2.61^{+0.17}_{-0.16} R_{\oplus}$ determined from TESS data, and a mass of $22.7^{+2.2}_{-1.9} M_{\oplus}$ constrained from high-precision, radial velocity data from the HARPS spectrograph at the La Silla Observatory in Chile. \citet{Dragomir:2019}
performed an SED fit combined with a parallax from \textit{Gaia} DR2
to constrain the host star's radius to $R_{\star} = 0.695\pm 0.030 R_{\odot}$. They then used the \citet{Torres:2010} relations to derive a stellar mass of $M_{\star} =0.73\pm0.07 M_{\odot}$, although they do not specify what values of $T_{\rm eff}$, [Fe/H], and $\log{g_\star}$ they adopt as input into those equations, or from where they derive these values. We assume they were determined from high-resolution stellar spectra. Finally, they performed a joint fit of their data and constrained the planetary parameters with the EXOFASTv2 modeling suite, using their inferred values of $M_{\star}$ and $R_{\star}$ as priors.
When comparing our analytic approximations of the fractional uncertainties in $M_{p}$, $g_{p}$, and $\rho_{p}$ to the uncertainties in the paper, we find that our estimates are systematically larger than those of \citet{Dragomir:2019} by 34\% ($M_{p}$), 60\% ($g_{p}$), and 80\% ($\rho_{p}$).
Understanding the nature of such discrepancies requires a closer examination of the methods employed by \citet{Dragomir:2019} as compared to ours. The fundamental difference is that their uncertainties in the planetary properties are dominated by their more precise {\it a priori} uncertainties on $M_{\star}$ and $R_{\star}$ (and thus $\rho_{\star}$), rather than the empirically constrained value of $\rho_{\star}$ from the light curve and radial velocity measurements. On the other hand, we estimate the uncertainty on $\rho_{\star}$ directly from observables (e.g., the light curve and the RV data).
Because their prior on $\rho_{\star}$ is more constraining than the value of $\rho_{\star}$ one would obtain from the light curve, and because the inferred planetary parameters critically hinge upon $\rho_{\star}$, this ultimately leads to smaller uncertainties in the planetary parameters than we obtain purely from the light curve observables.
To show why this is true, we begin by comparing their prior in $\rho_{\star}$ (the value they derive from their estimate of $M_{\star}$ and $R_{\star}$, which we will denote $\rho_{\star, \rm prior}$) to the uncertainty in $\rho_{\star}$ from observables (denoted $\rho_{\star, \rm obs}$).
Their prior on $\rho_{\star}$ can be trivially calculated from $\rho_{\star} = 3M_{\star}/4\pi R_{\star}^{3}$, and its uncertainty, through propagation of error, is therefore simply
\begin{equation}
\bigg(\frac{\sigma_{\rho_{\star, \rm prior}}}{\rho_{\star, \rm prior}}\bigg)^{2} \approx
\bigg(\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg)^{2} +
9\bigg(\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg)^{2}.
\end{equation}
Inserting the appropriate values from \citet{Dragomir:2019} yields\footnote{We note that the actual value reported in Section 3.1 of \citet{Dragomir:2019} is $\rho_{\star} =3.09 \pm 0.23$ g cm$^{-3}$, but after careful analysis, we believe that this value is probably a typographical error, as it differs from the value we derive and from the posterior value in Table 1 of the paper.} $\rho_{\star, \rm prior} = 3.07\pm 0.49$ g cm$^{-3}$. This represents a fractional uncertainty of $\sigma_{\rho_{\star, \rm prior}}/\rho_{\star, \rm prior}=0.16$.
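This prior value is easy to verify numerically (a sketch using an approximate mean solar density of $1.41$ g cm$^{-3}$ to convert from solar units):
\begin{verbatim}
import math

RHO_SUN = 1.41            # mean solar density, g cm^-3 (approx.)
M, sig_M = 0.73, 0.07     # stellar mass prior, solar masses
R, sig_R = 0.695, 0.030   # stellar radius, solar radii

rho = RHO_SUN * M / R**3
frac = math.sqrt((sig_M / M)**2 + 9.0 * (sig_R / R)**2)
print(f"rho_prior = {rho:.2f} +/- {rho * frac:.2f} g/cm^3")
# -> rho_prior = 3.07 +/- 0.49 g/cm^3, as quoted above.
\end{verbatim}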
Now, combining Equations~\ref{eqn:arstar8} and~\ref{eqn:arstarobs}, we can express $\rho_{\star, \rm obs}$ and its uncertainty in terms of transit observables as
\begin{equation}
\rho_{\star, \rm obs} = \bigg(\frac{3P}{G\pi^{2}}\bigg)\delta^{3/4} (T\tau)^{-3/2}.
\label{eqn:rho_obs}
\end{equation}
Therefore,
\begin{equation}
\begin{split}
\bigg(\frac{\sigma_{\rho_{\star, \rm obs}}}{\rho_{\star, \rm obs}}\bigg)^{2} \approx
\bigg(\frac{\sigma_P}{P}\bigg)^{2} +
\frac{9}{16}\bigg(\frac{\sigma_{\delta}}{\delta}\bigg)^{2} + \frac{9}{4}\bigg(\frac{\sigma_{T}}{T}\bigg)^{2}+\\
\frac{9}{4}\bigg(\frac{\sigma_{\tau}}{\tau}\bigg)^{2}.
\label{eqn:error_rho_obs}
\end{split}
\end{equation}
Inserting the fractional uncertainties on $P$, $\delta$ (from $R_p/R_{\star}$), $T$, and $\tau$ from the discovery paper into Equation~\ref{eqn:error_rho_obs}, we find $\sigma_{\rho_{\star, \rm obs}}/{\rho_{\star, \rm obs}}=0.37$. This is larger and less constraining than the fractional uncertainty in the prior on $\rho_{\star}$ from \citet{Dragomir:2019} by a factor of 2.3. Thus, we expect the prior on $\rho_{\star}$ to dominate over the constraint from the light curve.
However, despite being considerably less constraining than the prior, the empirical constraint $\rho_{\star, \rm obs}$ can still influence the posterior value if its central value differs significantly from the prior value. Inserting the values of $P$, $\delta$, $T$, and $\tau$ in Equation~\ref{eqn:rho_obs}, we find a central value of $\rho_{\star, \rm obs} = 5.56 \pm 2.06$ g cm$^{-3}$. This value is $(5.56-3.07)/2.06=1.2\sigma$ discrepant from the prior value. Thus, there is a weak tension between the empirical and prior values of $\rho_{\star}$ that should be explored.
If we include the eccentricity in the expression for $\rho_{\star}$, we find much closer agreement between $\rho_{\star, \rm obs}$ and $\rho_{\star, \rm prior}$.
From \citet{winn:2010}, we can express the scaled semi-major axis as a function of eccentricity as
\begin{equation}
\frac{a}{R_{\star}}= \frac{P}{\pi}\frac{\delta^{1/4}}{\sqrt{T\tau}} \bigg(\frac{\sqrt{1-e^{2}}}{1+e\sin{\omega}}\bigg).
\label{eqn:arstarobseccentricity}
\end{equation}
We can then combine this equation with Equation~\ref{eqn:arstar8} to find the ratio between the inferred $\rho_{\star}$ assuming a circular orbit ($\rho_{\star,{\rm obs, c}}$) and that for an eccentric orbit ($\rho_{\star,{\rm obs, e}}$):
\begin{equation}
\rho_{\star,{\rm obs, e}}=\rho_{\star,{\rm obs, c}} \bigg(\frac{\sqrt{1-e^{2}}}{1+e\sin{\omega}}\bigg)^3.
\end{equation}
Inserting the values from the paper ($e=0.188$ and $\omega=98^\circ$) yields $\rho_{\star,\rm obs} = 5.56$~g cm$^{-3}$$\times 0.568=3.16$~g cm$^{-3}$, and assuming the same fractional uncertainty as $\rho_{\star, \rm obs,c}$ of $0.37$ (which we discuss below), we get a value of $\rho_{\star, \rm obs,e} = 3.16\pm 1.17$ g cm$^{-3}$, which is $\sim 0.1 \sigma$ greater than the prior, and in much better agreement than our estimate without including eccentricity. The reason why the eccentricity significantly affects $\rho_{\star, \rm obs}$ in this case, despite the fact that it is relatively small ($e= 0.188$), is that for this system, the argument of periastron is $\omega \simeq 90^{\circ}$, which implies that the transit occurs near periastron, and thus the transit is shorter than if the planet were on a circular orbit by a factor of
\begin{equation}
\frac{T_{\rm e}}{T_{\rm c}} \simeq \frac{\sqrt{1-e^2}}{1+e} = 0.827.
\end{equation}
Thus $\tau$ is shorter by the same factor. Since $\rho_{\star} \propto (T \tau)^{-3/2}$, by assuming $e=0$, one overestimates the density by a factor of
\begin{equation}
\bigg(\frac{\sqrt{1-e^2}}{1+e}\bigg)^{-3} = 0.565^{-1},
\end{equation}
approximately recovering the factor above.
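Both correction factors quoted above can be checked directly ($e$ and $\omega$ taken from the discovery paper):
\begin{verbatim}
import math

e, omega = 0.188, math.radians(98.0)

full = (math.sqrt(1.0 - e**2) / (1.0 + e * math.sin(omega)))**3
short = math.sqrt(1.0 - e**2) / (1.0 + e)  # T_e/T_c at omega = 90 deg

print(f"density correction = {full:.3f}")              # -> 0.568
print(f"T_e/T_c = {short:.3f}; cube = {short**3:.3f}")  # -> 0.827, 0.565
\end{verbatim}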
The eccentricity also affects the uncertainty in $\rho_{\star}$ in the following way:
\begin{equation}
\begin{split}
\frac{\rho_{\star,{\rm obs, e}}}
{\rho_{\star,{\rm obs, c}}}
\propto \bigg(\frac{\sqrt{1-e^{2}}}{1+e\sin{\omega}}\bigg)^3 \simeq \left(1-\tfrac{3}{2}e^{2}\right)(1-3e\sin{\omega})\\
\simeq 1 - 3e\sin{\omega},
\end{split}
\end{equation}
where we have assumed that $e \ll 1$. Propagating the uncertainty leads to a final value of $\rho_{\star, \rm obs,e} = 3.16 \pm 2.04$ g cm$^{-3}$, which is only $\sim$0.04$\sigma$ greater than $\rho_{\star, \rm prior}$. Thus, the eccentricity plays a significant role in the parameter uncertainties for this system. The uncertainty in the prior constraint on $\rho_{\star}$ is a factor of $\sim$1.7 times smaller than that derived from the data alone. We therefore conclude that the prior adopted by \citet{Dragomir:2019} dominates over the empirical value of $\rho_{\star}$ from the data ($\rho_{\star, \rm obs}$). This also explains why their final value and uncertainty in $\rho_{\star}$ ($3.03^{+0.50}_{-0.47}$ g cm$^{-3}$) is so close to their prior ($\rho_{\star, \rm prior} = 3.07\pm 0.49$ g cm$^{-3}$).
Assuming that the uncertainty on their priors for $M_{\star}$ and $R_{\star}$ indeed dominates the fractional uncertainty in the resulting planet parameters, we can reproduce their reported uncertainties in $M_{p}$, $g_{p}$, and $\rho_{p}$ as follows.
For the surface gravity $g_{p}$, we have
\begin{equation}
g_{p} = \frac{GM_{p}}{R_{p}^{2}},
\end{equation}
and
\begin{equation}
M_{p} = \bigg(\frac{P}{2\pi G}\bigg)^{1/3} M_{\star}^{2/3} (1-e^2)^{1/2}K_{\star}
\label{eqn:mass_prior}
\end{equation}
while the planet radius can be expressed as
\begin{equation}
R_{p} = \delta^{1/2}R_{\star}
\end{equation}
Therefore,
\begin{equation}
g_{p} = \bigg(\frac{P}{2\pi G}\bigg)^{1/3} M_{\star}^{2/3} K_{\star} \delta^{-1} R_{\star}^{-2} (1-e^{2})^{1/2}.
\label{eqn:gpriors}
\end{equation}
Instead of simplifying $g_{p}$ in terms of observables (as we have done in Equation~\ref{eqn:gpobservables}), we express it in terms of $M_{\star}$ and $R_{\star}$. Using propagation of error, the uncertainty is
\begin{equation}
\begin{split}
\bigg(\frac{\sigma_{g_{p}}}{g_{p}}\bigg)^{2} \approx
\frac{1}{9}\bigg(\frac{\sigma_P}{P}\bigg)^{2} +
\frac{4}{9}\bigg(\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg)^{2} + \bigg(\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg)^{2}+\\
\bigg(\frac{\sigma_{\delta}}{\delta}\bigg)^{2} + 4\bigg(\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg)^{2}.
\label{eqn:gpriorerror}
\end{split}
\end{equation}
where we have assumed $(1-e^{2})^{1/2} \approx 1$ since the eccentricity is small.
Inserting the appropriate values from Table 1 in \citet{Dragomir:2019} in Equations~\ref{eqn:gpriors} and~\ref{eqn:gpriorerror}, we recover a fractional uncertainty in the surface gravity of $\sigma_{g_p}/g_p = 0.14$, which is 12.5\% different from the value reported in \citet{Dragomir:2019}, and thus agrees much better with their results than our initial estimate.
For the planet's mass, we start from Equation~\ref{eqn:mass_prior} and propagate its uncertainty as
\begin{equation}
\bigg(\frac{\sigma_{M_{p}}}{M_{p}}\bigg)^{2} \approx
\frac{1}{9}\bigg(\frac{\sigma_P}{P}\bigg)^{2} +
\frac{4}{9}\bigg(\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg)^{2} + \bigg(\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg)^{2},
\label{eqn:Mppriorerror}
\end{equation}
implying a fractional uncertainty in the mass of $\sigma_{M_p}/M_p = 0.093$, which is only $\sim$0.3\% discrepant from the uncertainty in the paper.
Finally, we replicate the analysis for the planet's density, starting with
\begin{equation}
\rho_p = \frac{3M_p}{4\pi R_p^3}=\frac{3M_p}{4\pi R_{\star}^3}\delta^{-3/2}.
\label{eqn:rhop2}
\end{equation}
Therefore,
\begin{equation}
\rho_{p} = \frac{3}{4\pi}\bigg(\frac{P}{2\pi G}\bigg)^{1/3} M_{\star}^{2/3} K_{\star} \delta^{-3/2} R_{\star}^{-3} (1-e^{2})^{1/2}.
\end{equation}
The uncertainty in $\rho_{p}$ is thus
\begin{equation}
\begin{split}
\bigg(\frac{\sigma_{\rho_{p}}}{\rho_{p}}\bigg)^{2} \approx
\frac{1}{9}\bigg(\frac{\sigma_P}{P}\bigg)^{2} +
\frac{4}{9}\bigg(\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg)^{2} + \bigg(\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg)^{2}+\\
\frac{9}{4}\bigg(\frac{\sigma_{\delta}}{\delta}\bigg)^{2} + 9\bigg(\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg)^{2},
\end{split}
\label{eqn:rhopGuenther}
\end{equation}
which leads to $\sigma_{\rho_{p}}/\rho_{p} = 0.18$, while the paper reports $\sigma_{\rho_{p}}/\rho_{p} = 0.21$, which is a $\sim$14\% difference.
In summary, we can roughly reproduce the uncertainties in \citet{Dragomir:2019} to better than 15\% if we assume that such uncertainties are dominated by the priors on the stellar mass and radius. In Figure~\ref{fig:fig1}, we plot both our initial fractional uncertainties and the recovered uncertainties as pairs connected by golden arrows that point in the direction of the `recovered' uncertainties based on our forensic analysis.
\subsection{K2-106b}
\label{sec:k2106}
K2-106b is the inner planet in a system of two transiting exoplanets discovered by \citet{Adams:2017} and later characterized by \citet{Guenther:2017}. It is on an ultra short, 0.57 day orbit around a G5V star. It has a mass and radius of $8.36^{+0.96}_{-0.94} M_{\oplus}$ and $1.52\pm0.16 R_{\oplus}$, leading to a high bulk density of $\rho_{p} = 13.1^{+5.4}_{-3.6}$ g cm$^{-3}$. \citet{Guenther:2017} used data from the K2 mission combined with multiple radial velocity observations from the High Dispersion Spectrograph (HDS; \citealt{Noguchi:2002}), the Carnegie Planet Finder Spectrograph (PFS; \citealt{Crane:2006}), and the FIber-Fed Echelle Spectrograph (FIES; \citealp{Frandsen:1999, Telting:2014}) to confirm and analyze this system. They performed a multi-planet joint analysis of the data using the code {\tt pyaneti} \citep{Barragan:2017} and derived the host star's mass and radius using the {\tt PARSEC} model isochrones and the interface for Bayesian estimation of stellar parameters from \citet{daSilva:2006}.
As with HD 21749b, we found large discrepancies ($\sim$50\%) between our analytic estimates of the uncertainties on the planetary mass and density and the literature values for K2-106b. Unlike HD 21749b, however, the reason for this discrepancy is that the uncertainty in the density of the host star, and thus in the properties of the planet, is dominated by the data (the light curve and radial velocities) rather than by the prior. To see why this is true, we perform an analysis similar to that in Section~\ref{sec:hd21749b}, and begin by comparing the uncertainty in the density from the observables $\rho_{\star, \rm obs}$ to the density from the prior $\rho_{\star, \rm prior}$.
First, we have that the uncertainty in $\rho_{\star}$ based purely on the prior fractional uncertainties on $M_{\star}$ and $R_\star$ is given by
\begin{equation}
\bigg(\frac{\sigma_{\rho_{\star, \rm prior}}}{\rho_{\star, \rm prior}}\bigg)^{2} \approx
\bigg(\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg)^{2} +
9\bigg(\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg)^{2}.
\end{equation}
Inserting the values of $\sigma_{M_\star}/M_\star$ and $\sigma_{R_\star}/R_\star$ from the paper, we derive a fractional uncertainty on the density of the star from the prior of $\sigma_{\rho_{\star, \rm prior}}/\rho_{\star, \rm prior}=0.31$. On the other hand, using Equation~\ref{eqn:error_rho_obs}, the fractional uncertainty in the stellar density from pure observables $\rho_{\star, \rm obs}$ is $\sigma_{\rho_{\star, \rm obs}}/\rho_{\star, \rm obs}=0.15$, a factor of $\sim$2 smaller than the fractional uncertainty on $\rho_\star$ estimated from the prior.
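The prior-driven number above follows directly from the quoted stellar priors; as a minimal sketch (both fractional uncertainties below are hypothetical stand-ins for the paper's priors):
\begin{verbatim}
import numpy as np
# Stand-in prior fractional uncertainties for K2-106 (hypothetical):
f_M, f_R = 0.08, 0.10
print(np.sqrt(f_M**2 + 9*f_R**2))   # ~0.31, as quoted above
\end{verbatim}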
We can compute the uncertainty in the planetary mass assuming the fractional uncertainty on $M_{\star}$ from the prior and the fractional uncertainty on the measured semi-amplitude $K_{\star}$ using Equation~\ref{eqn:Mppriorerror}:
\begin{equation}
\bigg(\frac{\sigma_{M_{p}}}{M_{p}}\bigg)^{2} \approx
\frac{4}{9}\bigg(\frac{\sigma_{M_{\star}}}{M_{\star}}\bigg)^{2} + \bigg(\frac{\sigma_{K_{\star}}}{K_{\star}}\bigg)^{2},
\end{equation}
where we have assumed that $\sigma_{P}/P \ll 1$. We find $\sigma_{M_p}/M_p = 0.11$, which is only 1\% different from the uncertainty reported in \citet{Guenther:2017}.
Further, we infer that \citet{Guenther:2017} estimated the density of the planet by combining their estimate of the planet mass (derived by adopting the prior value of $M_\star$ along with the observed values of $K_\star$ and $P$) with the planet radius (derived by adopting the prior value of $R_\star$ and the observed transit depth, and thus $R_p/R_\star$). We therefore infer that \citet{Guenther:2017} estimated the uncertainty in the planet density via
\begin{equation}
\bigg(\frac{\sigma_{\rho_{p}}}{\rho_{p}}\bigg)^{2} \approx
\bigg(\frac{\sigma_{M_{p}}}{M_{p}}\bigg)^{2} +
9\bigg(\frac{\sigma_{R_{\star}}}{R_{\star}}\bigg)^{2} + \frac{9}{4}\bigg(\frac{\sigma_{\delta}}{\delta}\bigg)^{2},
\end{equation}
again assuming that $\sigma_{P}/P \ll 1$. Substituting the values quoted in \citet{Guenther:2017} into the expression above, we find $\sigma_{\rho_{p}}/\rho_{p} = 0.32$, whereas they quote a fractional uncertainty of $\sigma_{\rho_{p}}/\rho_{p} = 0.34$, a $\sim$6\% difference. On the other hand, if we analytically estimate the fractional uncertainty on the density of K2-106b using pure observables (Equation~\ref{eqn:rhopuncertainty}), but assume their reported value and uncertainty on $R_{\star}$, we find $\sigma_{\rho_{p}}/\rho_{p} \sim 0.18$, i.e., a factor of $\sim$2 smaller.
In the case of the surface gravity of the planet, however, we find that our analytic estimate and that reported in the paper differ by only $\sim 8\%$. The reason is, in part, that the uncertainty in the stellar density inferred from the light curve dominates the uncertainty in the planet properties. In this case, the light curve and radial velocity data tightly constrain the stellar density, which implies that $M_{\star} \appropto R_{\star}^{3}$. This constraint on $\rho_{\star}$ causes the prior estimates of the stellar mass and radius, and their uncertainties, to cancel out in the expression for the planet surface gravity:
\begin{equation}
g_{p} \propto P^{1/3}K_{\star}M_{\star}^{2/3}\delta^{-1}R_{\star}^{-2}.
\end{equation}
Assuming $M_{\star} \propto \rho_\star R_\star^{3}$, and $\rho_{\star}\sim~{\rm constant}$, we find
\begin{equation}
g_{p} \propto P^{1/3}K_{\star}R_{\star}^{2}\delta^{-1}R_{\star}^{-2} = P^{1/3}K_{\star}\delta^{-1}.
\label{eqn:gpapprox}
\end{equation}
Equation~\ref{eqn:gpapprox} and Equation~\ref{eqn:gpobservables} do not agree because Equation~\ref{eqn:gpapprox} does not include the full contributions of the uncertainties in the light curve observables $P$, $\delta$, $T$, and $\tau$.
Figure~\ref{fig:fig1} shows the fractional uncertainties in $M_{p}$, $g_{p}$, and $\rho_{p}$ for K2-106b and brown arrows pointing from the original values we estimate to the `recovered' values.
We reanalyzed K2-106 with EXOFASTv2 to derive stellar and planetary properties without using the MIST stellar tracks, the Yonsei-Yale stellar evolutionary models (YY; \citealp{Yi:2001}), or the \citet{Torres:2010} relationships that are built into EXOFASTv2. We first constrained the stellar radius by fitting the star's SED to stellar atmosphere models to infer the extinction $A_V$ and bolometric flux, which, when combined with its distance from \textit{Gaia} EDR3 \citep{gaia:2020}, provides a (nearly empirical) constraint on $R_{\star}$. We find a fractional uncertainty in $R_{\star}$ of 2.4\%, while \citet{Guenther:2017} derive a fractional uncertainty of 10\% using the measured values of $T_{\rm eff}$, $\log{g_\star}$, and [Fe/H] from their HARPS and HARPS-N spectra, combined with constraints from the {\tt PARSEC} model isochrones \citep{daSilva:2006}.
We used our estimate of the stellar radius to recalculate the fractional uncertainties in $M_{p}$, $g_{p}$, $\rho_{p}$, and $R_{p}$ using our analytic expressions, and the constraints on the empirical parameters $P,~K_{\star},~T,~\tau$, and $\delta$ from \citet{Guenther:2017}. Our derived fractional uncertainty in the planetary radius is 3.0\%, whereas \citet{Guenther:2017} find 10\%\footnote{We note that when $\sigma_{R_p/R_{\star}}/(R_p/R_{\star}) \ll \sigma_{R_{\star}}/R_{\star}$, the fractional uncertainty on the planetary radius is equal to the fractional uncertainty on the radius of the star (Eqn.~\ref{eqn:rpobs}). While this is approximately the case given the fractional uncertainty in $R_{\star}$ estimated by \citet{Guenther:2017}, for our estimate the uncertainty in $R_p/R_{\star}$ of $1.9\%$ contributes somewhat to our estimated fractional uncertainty in $R_p$.}. Our derived fractional uncertainty on the density of the planet is a factor of 2.3 times {\it smaller} than that reported by \citet{Guenther:2017}. This is because the radius of the star enters into their estimate of $\rho_p$ as $R_{\star}^{-3}$ (Eqn.~\ref{eqn:rhop2}), whereas our estimate of $\rho_p$ only depends linearly on $R_{\star}$ (Eqn.~\ref{eqn:rhopobs}). We estimate an uncertainty in the planet mass of 15\%, a bit larger than that reported by \citet{Guenther:2017}, as it scales as the square of the radius of the star (Eqn.~\ref{eqn:mpobs}). Finally, the planetary surface gravity uncertainty that we estimate is 14\%, almost the same as that estimated by \citet{Guenther:2017}, as it does not depend directly on $\sigma_{R_{\star}}/R_{\star}$.
We conclude that a careful reanalysis of the K2-106 system using purely empirical constraints may well result in a significantly more precise constraint on the density of K2-106b, which is already a strong candidate for an exceptionally dense super-Earth.
\section{Discussion\label{sec:discussion}}
\begin{figure}
\begin{center}
\includegraphics[width=\columnwidth]{fig2.pdf}
\end{center}
\vspace{-3mm}
\caption{1$\sigma$ and 2$\sigma$ mass-radius ellipses for K2-229b \citep{Dai:2019}. The red ellipses assume $\rm M_{p}$ and $\rm R_{p}$ are uncorrelated random variables. The black ellipses are the result of correlating $\rm M_{p}$ and $\rm R_{p}$ via the added constraint of surface gravity. Planets whose masses and radii lie along the blue solid line would have a constant core mass fraction of 0.565, whereas those that lie along the blue dotted line would have a constant core mass fraction of 0.29. Planets forming with iron abundances as expected from the Fe/Mg abundances of K2-229 would follow the blue dotted line.}
\label{fig:fig2}
\end{figure}
Here we discuss the importance of achieving high-precision measurements of planetary masses, surface gravities, densities and radii, and their overall role in a planet's habitability.
The mass and radius of a planet are arguably its most fundamental quantities. The mass is a measure of how much matter a planet accreted during its formation and is also tightly connected to its density and surface gravity, which we discuss below. The mass also determines whether it can acquire and retain a substantial primordial atmosphere. Atmospheres are essential for a planet to maintain weather and thus life (see, e.g., \citealp{Dohm:2013}). In addition, the planetary core mass and radius (themselves a function of the total mass) are related to the strength of a planet's global magnetic field, although the strength of the field does depend on other factors, such as the rotation rate of the planet and other aspects of its interior. The presence of a substantial planetary magnetic field is vital in shielding against harmful electromagnetic radiation from the host star. This is especially true for exoplanets orbiting M dwarfs, which are much more active than Sun-like stars. Without a magnetic field to shield against magnetic phenomena such as flares and Coronal Mass Ejections (CMEs), planets around such stars may undergo mass loss and atmospheric erosion on relatively short timescales (see, e.g., \citealt{Kielkopf:2019}). The initial mass may also determine whether planets will have moons, a factor which has been hypothesized to play a role in the habitability of a planet, as it does for the Earth. Some authors have even proposed that Mars- and Earth-sized moons around giant planets may themselves be habitable (see, e.g., \citealp{Heller:2014,Hill:2018}).
The mean density of a planet is also important as it is a first-order approximation of its composition. Based on their density, we can classify planets as predominantly rocky (typically Earth-sized and super Earths) or gaseous (Neptune-sized and hot Jupiters). A reliable determination of the density and structure of a planet helps to constrain its habitability.
Next, we briefly discuss why knowledge of a planet's surface gravity is important. First, the surface gravity dictates the escape velocity of the planet, as well as the planet's atmospheric scale height, $h$, defined as
\begin{equation}
h = \frac{k_{b}T_{\rm eq}}{\mu g_{p}},
\end{equation}
where $k_{b}$ is the Boltzmann constant, $T_{\rm eq}$ is the planet equilibrium temperature, and $\mu$ is the atmospheric mean molecular weight.
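As an aside, the scale-height relation is straightforward to evaluate; the following minimal Python sketch uses illustrative Earth-like inputs rather than parameters of any system discussed here.
\begin{verbatim}
K_B = 1.380649e-23     # Boltzmann constant [J/K]
AMU = 1.66053907e-27   # atomic mass unit [kg]

def scale_height(T_eq, mu_amu, g_p):
    """Atmospheric scale height h = k_B T_eq / (mu g_p), in meters."""
    return K_B * T_eq / (mu_amu * AMU * g_p)

# Earth-like: T_eq ~ 255 K, mu ~ 29 amu, g ~ 9.81 m/s^2 -> h ~ 7.5 km
print(scale_height(255.0, 29.0, 9.81))
\end{verbatim}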
The surface gravity is connected to mass loss events and the ability of a terrestrial planet to retain a secondary atmosphere. Perhaps most importantly, gravity may be a main driver of plate tectonics on a terrestrial planet. One of the most fundamental questions about terrestrial or Earth-like planets is whether they can have and sustain active plate tectonics or whether they are in the stagnant lid regime, like Mars or Venus \citep{vanHeck:2011}. On Earth, plate tectonics are deeply linked to habitability for several crucial reasons. Plate tectonics regulate surface carbon abundance by transporting some $\rm CO_{2}$ out of the atmosphere and into the interior, which helps maintain a stable climate over long timescales \citep{Sleep:2001, unterborn:2016}. An excess of carbon dioxide can result in a runaway greenhouse effect, as in the case of Venus. Plate tectonics also drive the formation of surface features like mountains and volcanoes, and play an important role in sculpting the topography of a rocky planet. Weathering can then bring nutrients from mountains to the oceans, contributing to their biodiversity. Some authors have argued that plate tectonics and dry land (such as continents) maximize the opportunities for intelligent life to evolve \citep{Dohm:2013}.
However, the origin and mechanisms of plate tectonics are poorly understood on Earth, and are even more so for exoplanets. The refereed literature on this topic includes inconsistent conclusions regarding the conditions required for plate tectonics, and in particular how the likelihood of plate tectonics depends on the mass of the planet. For example, there is an ongoing debate about whether plate tectonics are inevitable or unlikely on super Earths. \citet{valencia:2009} used a convection model and found that the probability and ability of a planet to harbor plate tectonics increases with planet size. On the other hand, \citet{oneill:2007} came to the opposite conclusion, finding that plate tectonics are less likely on larger planets, based on numerical simulations. The resolution to this debate will have important consequences for our assessment of the likelihood of life on other planets.
\subsection{Surface gravity as a proxy for the core mass fraction}
The surface gravity of a planet may also play an important role in constraining other planetary parameters, like the core mass fraction. Here, we consider K2-229b ($R_{p} = 1.197^{+0.045}_{-0.048}~R_{\oplus}$ and $M_{p} =2.49 ^{+0.42}_{-0.43}~M_{\oplus}$, \citealt{Dai:2019}), a potential super-Mercury first discovered by \citet{Santerne:2018}. This planet has well-measured properties, and the prospects for improving the precision of the planet parameters are good given the brightness of the host star.
We calculate the core mass fraction of K2-229b as expected from the planet's mass and radius, $\rm CMF_{\rho}$, which is the mass of the iron core divided by the total mass of the planet: $\rm CMF_{\rho}$ = $\rm M_{Fe}/M_{p}$. We compare this to the CMF as expected from the refractory elemental abundances of the host star, $\rm CMF_{star}$. This definition assumes that a rocky planet's mass is dominated by Fe and oxides of Si and Mg. Therefore, the stellar Fe/Mg and Si/Mg fractions are reflected in the planet's core mass fraction. The mass and radius of K2-229b are consistent with a rocky planet with a 0.57 core mass fraction (CMF), while the relative abundances of Mg, Si, and Fe of the host star K2-229 (as reported in \citealt{Santerne:2018}) predict a core mass fraction of 0.29 \citep{Schulze:2020}. Figure~\ref{fig:fig2} shows mass-radius (M-R) ellipses for K2-229b when the mass and radius are assumed to be uncorrelated (red) and correlated via the added constraint of surface gravity (black). While the planet is apparently enriched in iron, the enrichment is only significant at the 2$\sigma$ level. The surface gravity, however, correlates the mass and radius, reducing the uncertainty in $\rm CMF_{\rho}$ (black): the M-R ellipse that includes the surface gravity constraint reduces the uncertainty in the difference between the two CMF measures. This arises because the planet's density and surface gravity only differ by one factor of $R_{p}$. Because the black contours closely follow the line of constant 0.57 CMF, we assert that surface gravity and planet radius may be a better proxy for core mass fraction than mass and radius. Indeed, at the current uncertainties, we calculate that the additional constraint of surface gravity reduces the uncertainty in the $\rm CMF_{\rho}$ of K2-229b from 0.182 to 0.165. This is important given that we have demonstrated that the surface gravity of a planet is likely to be one of the most precisely measured properties of the planet. Furthermore, the fractional precision of the surface gravity measurement can be arbitrarily improved with additional data, at least to the point where systematic errors begin to dominate.
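To illustrate why the surface-gravity constraint tightens the ellipses in Figure~\ref{fig:fig2}, the following Monte Carlo sketch compares the density spread when $M_p$ and $R_p$ are sampled independently with the spread when $M_p$ is tied to $R_p$ through a measured surface gravity. The mass and radius are rounded from \citet{Dai:2019}, while the adopted 8\% gravity uncertainty is an assumption for illustration only.
\begin{verbatim}
import numpy as np

G = 6.674e-11                     # gravitational constant [SI]
rng = np.random.default_rng(0)
N = 100_000

# K2-229b-like inputs (rounded); the 8% gravity error is an assumption.
R_p, f_R = 7.63e6, 0.04           # radius [m], 4% fractional error
M_p, f_M = 1.49e25, 0.17          # mass [kg], 17% fractional error
g_p, f_g = G * M_p / R_p**2, 0.08

R = R_p * (1 + f_R * rng.standard_normal(N))

# Case 1: mass and radius as independent random variables.
M = M_p * (1 + f_M * rng.standard_normal(N))
rho_ind = 3 * M / (4 * np.pi * R**3)

# Case 2: mass tied to radius via surface gravity, M = g R^2 / G,
# so rho = 3 g / (4 pi G R) and two powers of R cancel.
g = g_p * (1 + f_g * rng.standard_normal(N))
rho_cor = 3 * g / (4 * np.pi * G * R)

print(rho_ind.std() / rho_ind.mean())   # ~0.21
print(rho_cor.std() / rho_cor.mean())   # ~0.09
\end{verbatim}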
\section{Conclusions\label{sec:conclusions}}
One of the leading motivations of this paper was to answer the question: ``given photometric and RV observations of a given exoplanet system, can we measure a planet's surface gravity better than its mass?" At first glance, the surface gravity depends on the mass itself, so it seems that the gravity should always be less constrained. However, upon expressing the mass, gravity and density as functions of photometric and RV parameters, we see that the mass and density have an extra dependence on the stellar radius, which makes the surface gravity generically easier to constrain to a given fractional precision than the mass or density. When expressed in terms of pure observables, a hierarchy in the precisions of the planet properties emerges, such that the surface gravity is better constrained than the density, and the latter is in turn better constrained than the mass. The surface gravity is a crucial planetary property, as it dictates the scale height of a planet's atmosphere. It is also a potential driver of plate tectonics, and as we show in this paper, can be an excellent proxy for the core mass fraction, better facilitating tests of whether a planet's composition differs from that of its host star. With current missions like TESS, we expect to achieve high precision in the photometric parameters. State-of-the-art RV measurements can now reach precisions in the semi-amplitude of $< 5\%$. As a result, the uncertainties in the ingress/egress duration $\tau$ and the host star radius $R_{\star}$ may be the limiting factors in constraining the properties of low-mass terrestrial planets.
\acknowledgements{We would like to thank Andrew Collier Cameron for his suggestion that the surface gravity of a transiting planet may be more well constrained than its mass, radius, or density. R.R.M. and B.S.G. were supported by the Thomas Jefferson Chair for Space Exploration endowment from the Ohio State University. D.J.S. acknowledges funding support from the Eberly Research Fellowship from The Pennsylvania State University Eberly College of Science. The Center for Exoplanets and Habitable Worlds is supported by the Pennsylvania State University, the Eberly College of Science, and the Pennsylvania Space Grant Consortium. The results reported herein benefited from collaborations and/or information exchange within NASA’s Nexus for Exoplanet System Science (NExSS) research coordination network sponsored by NASA’s Science Mission Directorate. J.G.S. acknowledges the support of The Ohio State School of Earth Sciences through the Friends of Orton Hall research grant. W.R.P. was supported from the National Science Foundation under Grant No. EAR-1724693.
}
\software{{\tt EXOFASTv2} \citep{Eastman:2017, eastman:2019}}.
\newpage
\section{Introduction}There exist tens of billions of mobile devices distributed at network edges, such as smartphones and laptop computers. They are equipped with powerful CPUs, but a large fraction of them are idle at any given time instant. Scavenging this enormous amount of distributed computation resources can provide a new platform for mobile cloud computing and, furthermore, alleviate the problems of network congestion and long latency that afflict classic cloud computing. This vision has been driving extensive research
in both the academia and industry under various names such as \emph{edge computing} and \emph{fog computing}{\color{black}{\cite{taleb2017multi,mao2017mobile2,taleb2017mobile,chiangfog}}}. One technology for materializing the vision is \emph{mobile cooperative computing}, namely the cooperation between mobiles in computing by sharing computation resources and thereby improving their utilizations. This technology, referred to as \emph{co-computing} for simplicity, is the theme of this paper. {\color{black}{Specifically, this paper presents co-computing algorithms for enabling energy-efficient \emph{peer-to-peer} (P2P) computation offloading that \emph{exploits CPU-state information for scavenging spare computation resources at mobiles.
}}}
\subsection{\color{black}{Related Work}}
\subsubsection{\color{black}{Multi-Access Edge Computation Offloading}}
\emph{Mobile edge computing} (MEC), initiated by ETSI, refers to providing mobiles with cloud-computing capabilities and IT service from \emph{base stations} (BSs) or \emph{access points} (APs) at the edge of mobile networks. {\color{black}{It was renamed as multi-access edge computing as its applications have been broadened to include radio access networks (including WiFi) and multiple-access technologies \cite{taleb2017multi}.}} The recent inclusion of MEC on the roadmap of developing next-generation network architecture has motivated active research on developing wireless techniques for offloading \cite{patel2014mobile}.
{\color{black}{This has led to the emergence of an active area, called \emph{multi-access edge computation offloading} (MECO), that merges two disciplines: wireless communications and mobile computing \cite{mao2017mobile2}}}. Making a binary decision on offloading-or-not involves a straightforward comparison of mobile-energy consumption for computing given data by offloading and local computing. {\color{black}{However, compared with traditional traffic offloading \cite{chen2015energy} and green wireless-communication design \cite{wu2012green}, designing computation offloading is more challenging as it has to jointly consider two different objectives, energy-efficient computing and energy-efficient transmissions, in a more complex system for MECO.}} In particular, energy-efficient techniques are designed in \cite{zhang:MobileMmodel:2013} for controlling the CPU frequency for local computing and transmission rate for offloading. They are integrated with wireless energy transfer technology in \cite{you2015energyJSAC} to power mobiles for enhancing mobile energy savings. By program partitioning, a task can be divided for \emph{partial offloading} (and partial local computing) \cite{mao2017mobile2}. Various approaches have been developed for partial offloading such as live (in-computing) prefetching of mobile data to the server for reducing communication overhead \cite{ko2017live} and optimal program partitioning using integer programming \cite{MahmoodiTCC16}.
The design of multiuser MECO systems involves the new research issue of joint radio-and-computation resource allocation\cite{you2016energy,chen2015efficient,lyumulti:2016:ProxiCloud,mao2017stochastic} for achieving system-level objectives (e.g., minimum sum mobile-energy consumption). Specifically, the \emph{centralized} resource allocation is studied in \cite{you2016energy}, where an offloading priority function is derived to facilitate making binary offloading decisions for individual users. On the other hand, algorithms for \emph{distributed} resource allocation are designed in \cite{chen2015efficient}, \cite{lyumulti:2016:ProxiCloud} by solving formulated integer optimization problems using game theory and decomposition techniques. Last, server scheduling is also a relevant topic for designing multiuser MECO systems and has been studied in \cite{guo2016energy,yang2015multi,zhao2015cooperative} for coping with various issues including heterogeneous latency requirements, sub-task dependency and cloud selection, respectively.
MEC and MECO are enabled by the edge clouds implemented by dedicated servers (e.g., BSs or APs). However, in view of the exponentially-increasing IoT devices and computation traffic, the massive users accessing the servers will incur overwhelming communication overhead and soon exhaust the servers' capacities. On the other hand, latest mobile devices, e.g., smartphones and laptop computers equipped with multi-core processors, are comparable with normal servers in terms of computing power. Scavenging the excessive computation resources in massive idling mobile devices drives active research on co-computing discussed in the sequel.
\subsubsection{Mobile Cooperative Computing}\label{Sec:IntroCo-Computing}
Recent research on mobile co-computing is characterized by the themes of resource sharing and cooperative computing \cite{song2014energy,wang2016cooperative,pu2016d2d,XCao1704,sheng2015energy}. An online algorithm is proposed in \cite{song2014energy} for implementing co-computing and result-sharing, and thereby achieving the optimal energy-and-traffic tradeoff. Since users have no commitments for cooperation, one important aspect of co-computing research is to design schemes for incentivizing them for sharing computation resources, where a suitable design tool is game theory adopted in \cite{wang2016cooperative}. From the aspect of wireless communication, P2P offloading in co-computing can be efficiently implemented using the well developed \emph{device-to-device} (D2D) communication technology. This direction is pursued in \cite{pu2016d2d} where offloading based on D2D transmissions is controlled by a cooperation policy optimized using Lyapunov optimization theory. {\color{black}{In addition, let a \emph{helper} refer to a cooperative mobile that shares computation resources with peers. A joint computation-and-communication cooperation protocol is proposed in \cite{XCao1704}, where the helper not only computes part of the tasks offloaded from the user, but also acts as a relay node to forward the tasks to the MEC server.}} Last, an interesting type of sensor networks is proposed in \cite{sheng2015energy} to implement co-computing between sensors based on the discussed partial offloading technique.
In view of the above prior work, one key fact that has been overlooked is that \emph{non-causal CPU-state information} (NC-CSI)\footnote{\color{black}{Causal information refers to information on present and past events, while non-causal information is on future events.}}, referring to the time profile of the CPU state, can be exploited by the helper to design energy-efficient co-computing.
Acquiring such information is feasible by leveraging advancements in two areas, namely \emph{CPU profiling} and \emph{CPU-utilization prediction}. The former measures the usage of computation tasks by constructing CPU profile trees \cite{chun2011clonecloud} or integrating the CPU distribution and time-series profiles \cite{wood2007black}. In the industry, CPU profiling has been implemented by e.g., Apple Inc., via tracking the core-and-thread usage by devices.
{\color{black}{On the other hand, leveraging the time correlation of computation loads, the short-term CPU utilization (e.g., over a few seconds) can be predicted using simple linear models, such as the \emph{autoregressive moving average} (ARMA) model in \cite{dinda1999evaluation}. In contrast, the long-term CPU utilization can be modeled by a non-linear function; its prediction requires sophisticated machine-learning techniques, such as Bayesian learning and \emph{Gaussian process regression} (GPR) \cite{bui2016energy}, which is non-parametric and does not require specifying prediction parameters. More details about prediction-model selection can be found in \cite{islam2012empirical}.}} The availability of technologies for CPU profiling and utilization prediction motivates the current design to exploit NC-CSI for improving the performance of co-computing.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=11.5cm]{./FigChannels.pdf}
\caption{Cascaded wireless-and-computation channels for mobile co-computing.}
\label{Fig:Channels}
\end{center}
\end{figure}
\vspace{-3pt}
\vspace{-3pt}
\subsection{{\color{black}{Motivations and Contributions}}}
{\color{black}{In this work, leveraging the advantages of NC-CSI, we contribute to the area of mobile co-computing by addressing two new issues. \emph{The first is how to exploit NC-CSI for opportunistically scavenging spare computation resources.} One key characteristic of co-computing is that a helper assigns a higher priority for computing local tasks and their random arrivals result in time variations in the computation resources available for sharing. The existing designs for co-computing are unable to fully scavenge dynamic computation resources at a helper due to transmission latency. In the current work, we propose a novel solution for overcoming this drawback by exploiting NC-CSI acquired from computation prediction. This allows a mobile to plan transmission \emph{in advance} so as to fully utilize random CPU idling periods at a helper.
The second issue not yet addressed in prior work is \emph{how to exploit NC-CSI for minimizing mobile energy consumption.} Note that the said dynamic spare resources create a virtual \emph{computation channel} whose throughput is the number of computed bits. This gives an interesting interpretation of co-computing as adaptive transmission over the cascaded \emph{wireless-and-computation channels} shown in Fig.~\ref{Fig:Channels}. This interpretation gives rise to the following design challenges for minimizing mobile energy consumption. On one hand, transmitting offloaded data from a mobile to a helper too far in advance of when the helper's CPU becomes available will necessarily increase the data rate and thus mobile energy consumption. On the other hand, transmitting data too late will miss the opportunities of using the helper's CPU. In other words, ``transmission" over the computation channel relies on the helper-CPU resource whose usage must satisfy \emph{real-time constraints}. Specifically, CPU cycles available at a particular time instant must be used in \emph{real time}, neither earlier nor later. This is in contrast to stored energy for transmission over wireless channels, which allows flexible usage in time. The above dilemma is solved in this work by exploiting NC-CSI to minimize mobile transmission-energy consumption while fully utilizing the helper's random computation resource.
}}
{\color{black}{To the best of the authors' knowledge, this work presents the first attempt to exploit NC-CSI for scavenging spare computation resources at the helper and minimizing mobile energy consumption for mobile co-computing systems. The specific system model and contributions are summarized as follows.}}
Consider a mobile co-computing system comprising one helper and one user, both equipped with single antenna. The user needs to process the input data for a particular computation task before a given deadline. The input data arrives at the user either at a single time instant or spread over the time duration before the deadline, referred to as \emph{one-shot} and \emph{bursty} arrivals, respectively. Based on the model of partial offloading, the user splits the input data for processing locally and at the helper, leading to the problem of \emph{data partitioning}. Consider the mobile user. To model the energy consumption in local computing, it is assumed that processing a single bit requires a fixed number of CPU cycles, each of which consumes a fixed amount of energy. Moreover, the transmission-energy consumption incurred in the offloading process depends on the rate based on the Shannon's equation. Next, consider the helper. The available computation resource for co-computing is modeled as a fixed monotone-increasing curve in the plane of computable bits versus time, called the \emph{helper's CPU-idling profile}. Assume that the helper uses a buffer to store data transmitted by the user and has non-causal knowledge of the profile as well as other information including the channel and local computing energy. Using this information, it controls the transmission by the user, leading to the problem of \emph{adaptive offloading}.
The main contributions of the work are summarized as follows.
\begin{enumerate}
{\color{black}{
\item{\emph{Adaptive Offloading with One-Shot Data Arrival:} Consider one-shot data arrival at the user. Given a fixed number of input-data bits for offloading and co-computing, the said problem of adaptive offloading is formulated to minimize the transmission-energy consumption under the deadline and buffer constraints. This complex problem is solved as follows. First, for the large buffer case where the buffer size at the helper is no smaller than the offloaded bits, the formulated non-convex problem is equivalently transformed into a convex problem. By deriving the necessary and sufficient conditions for the optimal solution, we characterize the structure of the optimal policy and present algorithms for computing it. Geometrically, the optimal policy involves finding a shortest path under the constraints of the helper's CPU-idling profile and buffer size. On the other hand, the corresponding problem for the smaller buffer case is still non-convex. To tackle this challenge, we propose a tractable approach called \emph{proportional CPU-utilization} and prove that it is asymptotically optimal.}
\item{\emph{Energy-Efficient Data Partitioning with One-Shot Data Arrival:} Next, building on the solution for adaptive offloading, the said data partitioning problem is formulated to minimize the user's energy consumption. Directly solving this problem is intractable due to the lack of a closed-form expression for the objective function. We address this difficulty by proving that the formulated problem is convex even without a closed-form expression. Then a sub-gradient method is applied to compute the optimal data-partitioning policy.}
\item{\emph{Mobile Co-Computing with Bursty Data Arrivals:} The versatility of the above solution approach is further demonstrated by an extension to the case of bursty data arrivals at the user. For tractability, we consider a simple scheme of \emph{proportional data partitioning} for each instant of data arrival using a uniform ratio, which is an optimization variable. Accounting for the data causality constraints, i.e., that input-data bits cannot be offloaded or computed before they arrive, the corresponding adaptive offloading and data partitioning policies can be modified from the counterparts with one-shot data arrival.}}
}
\end{enumerate}
\section{System Model}\label{Sec:Sys}
Consider one co-computing system shown in Fig.~\ref{Fig:SysMobile}, comprising one user and one helper, each equipped with a single antenna\footnote{\color{black}{For simplicity, we consider a system comprising one helper serving one user. However, the random CPU state at the helper implies that the helper serves another user or locally generated tasks in the CPU-busy time (see Fig.~\ref{Fig:PrimaryCPUState}). Then the user in the current system aims at scavenging the remaining random CPU-idling time at the helper.}}. The user is required to finish a computation task with either one-shot or bursty input-data arrival before a deadline $T$. To this end, it adaptively offloads part or all of the data to the helper for co-computing based on the control policy developed at the helper. The helper operates at a constant CPU frequency but with intermittent local computing tasks. It is assumed that the helper has sufficient energy for receiving and computing the data from the user\footnote{{\color{black}{Before offloading, the user is assumed to send a probing signal to the helper and receive feedback comprising NC-CSI as well as information on whether the helper has sufficient energy for cooperation.}} }. The specific models and assumptions are described in the following sub-sections.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=14cm]{./FigSysMobile.pdf}
\vspace{-1pt}
\caption{Model of one co-computing system.}
\label{Fig:SysMobile}
\end{center}
\end{figure}
\subsection{Model of the Helper's CPU-Idling Profile}
The \emph{helper's CPU-idling profile} is defined as the amount of the user's data (in bits) that can be computed by the helper by time $t\in[0,T]$; it is denoted as $U_{{\rm{bit}}}(t)$ and modeled shortly.
\begin{definition}[Helper-CPU State Information]\label{Def:Primary}\emph{Helper-CPU state information refers to the CPU state over time, which can be modeled by the helper-CPU event space, process and epochs defined as follows. Let $\boldsymbol{\mathcal{E}}=\{\mathcal{E}_1, \mathcal{E}_2\}$ denote the helper-CPU's \emph{event space}, where $\mathcal{E}_1$ and $\mathcal{E}_2$ denote the events that the helper-CPU changes the state from busy-to-idle and from idle-to-busy, respectively. The helper-CPU \emph{process} can be then defined as the time instants for a sequence of helper-CPU events $\{\mathcal{E}_2, \mathcal{E}_1, \mathcal{E}_2, \cdots\}$: $0=s_0<s_1<s_2<\cdots<s_{\tilde{K}-1}<s_{\tilde{K}}=T$. The time interval between two consecutive events\footnote{{\color{black}{In this work, the events correspond to instantaneous CPU-state transitions and thus the time spent on each event is zero.}} } is called an \emph{epoch} with length $\tau_k=s_{k}-s_{k-1}$, for $k=1,\cdots, \tilde{K}$}.
\end{definition}
\begin{figure}[t!]
\centering
\subfigure[Helper-CPU's event space, process and epochs.]{\label{Fig:PrimaryCPUState}
\includegraphics[width=11cm]{./FigSysHelperCPUSEvent.pdf}}
\subfigure[Helper's CPU-idling profile.]{\label{Fig:PrimaryCPUProfile}
\includegraphics[width=10.5cm]{./FigSysHelperCPUProfile.pdf}}
\caption{Model of the helper's CPU process and CPU-idling profile.}
\end{figure}
\vspace{-5pt}
\begin{assumption}\label{Ass:NC-CSI}\emph{The helper has \emph{non-causal helper-CPU state information.} }
\end{assumption}
{\color{black}{This assumption corresponds to the case where the helper performs CPU profiling or predicts the CPU utilization by e.g., linear regression \cite{dinda1999evaluation} or machine learning \cite{bui2016energy} (see discussion in Section~\ref{Sec:IntroCo-Computing}).}} It allows the off-line design of co-computing policies in the sequel.
One sample path of the helper-CPU's process is shown in Fig.~\ref{Fig:PrimaryCPUState}.
For each epoch, say epoch $k$, let $a_k$ represent the CPU-state indicator, where the values of 1 and 0 for $a_k$ indicate the idle and busy states, respectively. Moreover, let $f_h$ denote the constant CPU frequency of the helper and $C$ the number of CPU cycles required for computing $1$-bit of input-data of the user. Based on the above definitions, the helper's CPU-idling profile can be modeled as
\begin{equation}\label{Eq:CPUIdlingProfile}
U_{{\rm{bit}}}(t)=\left[\sum_{k=1}^{\bar{k}(t)} a_k \tau_k+ a_{\bar{k}(t)+1} \left(t-\sum_{k=1}^{\bar{k}(t)} \tau_k\right)\right] \frac{f_h}{C}, \quad 0\le t\le T,
\end{equation}
where $\bar{k}(t)=\max\{k: \sum_{j=1}^{k} \tau_j \le t\}$, as illustrated in Fig.~\ref{Fig:PrimaryCPUProfile}. Observe from the figure that the profile can be also represented by a sequence $\{U_{{\rm{bit}},1}, U_{{\rm{bit}},2}, \cdots\}$, with $U_{{\rm{bit}},k}=U_{{\rm{bit}}}(s_k)$. Based on Assumption~\ref{Ass:NC-CSI}, the helper has non-causal knowledge of helper's CPU-idling profile. Last, the helper is assumed to reserve a $Q$-bit buffer for storing the offloaded data before processing in the CPU as shown in Fig.~\ref{Fig:SysMobile}.
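To make the profile in \eqref{Eq:CPUIdlingProfile} concrete, the following minimal Python sketch evaluates $U_{{\rm{bit}}}(t)$ for a toy epoch sequence; all parameter values are hypothetical.
\begin{verbatim}
import numpy as np

def cpu_idling_profile(tau, a, f_h, C, t):
    """Helper's CPU-idling profile U_bit(t) of Eq. (1): user's bits the
    helper can compute in [0, t], given epoch lengths tau[k], idle/busy
    indicators a[k] (1 = idle), CPU frequency f_h and cycles/bit C."""
    s = np.cumsum(tau)                      # event time instants s_1..s_K
    kbar = np.searchsorted(s, t, 'right')   # epochs completed before t
    idle_time = np.dot(a[:kbar], tau[:kbar])
    if kbar < len(tau):                     # partial current epoch
        idle_time += a[kbar] * (t - (s[kbar - 1] if kbar else 0.0))
    return idle_time * f_h / C

# Toy profile: alternating busy/idle epochs (hypothetical numbers).
tau = np.array([0.2, 0.3, 0.1, 0.4])   # epoch lengths [s]
a   = np.array([0,   1,   0,   1  ])   # busy, idle, busy, idle
f_h, C = 2e9, 1e3                      # 2 GHz helper CPU, 1000 cycles/bit
print(cpu_idling_profile(tau, a, f_h, C, t=0.6))  # 0.3 s idle -> 6e5 bits
\end{verbatim}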
\subsection{Models of Local Computing and Offloading}\label{Mod:SU}
Consider both forms of data arrivals at the user. The one-shot data arrival assumes that $L$-bit input data arrives at time $t=0$, and thus the helper-CPU's event space and process follow from Definition~\ref{Def:Primary}. On the other hand, the bursty data arrivals form a stochastic process. For ease of exposition, it is useful to define a stochastic process combining the two processes of data arrivals and the helper-CPU. The definition is given in Definition~\ref{Def:Combined} and illustrated in Fig.~\ref{Fig:SecPri}.
\begin{definition}[Combined Stochastic Process for Bursty Data Arrivals] \label{Def:Combined}\emph{For the case of bursty data arrivals, let $\hat{\boldsymbol{\mathcal{E}}}=\{\mathcal{E}_1, \mathcal{E}_2, \mathcal{E}_3\}$ denote the combined event space where $\mathcal{E}_1$, $\mathcal{E}_2$ are given in Definition~\ref{Def:Primary} and $\mathcal{E}_3$ denotes the event that new data arrives at the user. The corresponding process is a sequence of variables: $0=\hat{s}_0<\hat{s}_1<\hat{s}_2<\cdots<\hat{s}_{\tilde{K}-1}<\hat{s}_{\tilde{K}}=T$, denoting the time instants for a sequence of events $\{\mathcal{E}_1, \mathcal{E}_2, \mathcal{E}_3, \cdots\}$. Moreover, for each time instant, say $\hat{s}_k$, let $L_k$ denote the size of data arrival where $L_k=0$ for events $\mathcal{E}_1$ and $\mathcal{E}_2$ and $L_k\neq 0$ for event $\mathcal{E}_3$. In addition, $L_{\tilde{K}}=0$, otherwise the data arriving at the deadline cannot be computed. Then the total input data $L=\sum_{k=1}^{\tilde{K}} L_k.$ }
\end{definition}
\begin{figure}[t]
\begin{center}
\includegraphics[width=11cm]{./FigSysCombinedEvent.pdf}
\caption{Combined stochastic process for bursty data arrivals.}
\label{Fig:SecPri}
\end{center}
\end{figure}
\begin{assumption}\emph{The user has \emph{non-causal} knowledge of bursty data arrivals in the duration $[0, T]$.}
\end{assumption}
{\color{black}{ The assumption (of non-causal knowledge) means that at time $t=0$, the user has the information of future data arrivals in the duration $[0, T]$ including their arrival-time instants and amounts of computation loads. The information can be acquired by computation prediction techniques similar to those for CPU-utilization prediction (see discussion in Section~\ref{Sec:IntroCo-Computing}).}} Moreover, the user is assumed to send the information to the helper together with parametric information including the channel gain, CPU frequency as well as energy consumption per bit, allowing the helper to control the operations of offloading and data partitioning. This spares the user from co-computing control that consumes energy.
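As a simple illustration of Definition~\ref{Def:Combined}, the combined process can be formed by merging the two event streams in time; a minimal Python sketch with hypothetical numbers follows.
\begin{verbatim}
# Helper-CPU events (time, type) and user data arrivals (time, bits);
# all numbers are hypothetical, with deadline T = 1.0 s.
cpu_events = [(0.2, "E1"), (0.5, "E2"), (0.6, "E1")]  # busy<->idle
arrivals   = [(0.1, 3e5), (0.45, 5e5), (0.7, 2e5)]    # event E3

combined = sorted([(t, e, 0.0) for (t, e) in cpu_events] +
                  [(t, "E3", L) for (t, L) in arrivals])
for t, e, L in combined:
    print(t, e, L)   # time instant, event type, arrival size L_k
\end{verbatim}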
Based on the definitions and assumptions, the models of local computing and offloading are described as follows. First, consider local computing. Let $f$ denote the constant CPU frequency at the user. For the case of one-shot data arrival, as shown in Fig.~\ref{Fig:SysMobile}, the user offloads $\ell$-bit data to the helper and computes the remaining $(L-\ell)$-bit using its local CPU. Due to the deadline constraint, local computing should satisfy: $C(L-\ell)/f\le T.$ It follows that the user should offload at least $\ell_{\min}^+$-bit data, where $\ell_{\min}\!=\!L-(f T/C)$ and $(x)^+\!=\!\max\{x, 0\}$.
Following the practical model in \cite{Robert:CMOS:1992}, each CPU cycle consumes the energy of $P_{\rm{cyc}}\!=\!\gamma f^2$ where $\gamma$ is a constant determined by the switched capacitance. As such, $(L-\ell) C P_{\rm{cyc}}$ gives energy consumption for local computing at the user. This model is extended to the case of bursty data arrivals in Section~\ref{Sec:RandomComputLoad}.
Next, consider offloading. For both the cases of one-shot and bursty data arrivals, let $\ell_k$ with $1\le k\le \tilde{K}$ denote the offloaded data size in epoch $k$.
Since constant-rate transmission within each epoch is energy-efficient \cite{PrabBiyi:EenergyEfficientTXLazyScheduling:2001}, the offloading rate in epoch $k$, denoted by $r_k$, is fixed as $r_k=\ell_k/\tau_k$.
Let $p_k$ represent the transmission power in epoch $k$, then the achievable transmission rate $r_k$ (in bits/s) is $r_k=W \log_{2} \left(1+\dfrac{p_k h^2}{N_0}\right)$
where $h$ is the channel gain and assumed to be fixed throughout the computing duration, $W$ the bandwidth, and $N_0$ the variance of complex-white-Gaussian-channel noise\footnote{\color{black}{In this paper, the D2D interference for co-computing is treated as channel noise. It is possible for the helper to mitigate the inference by using interference-cancellation techniques, thereby increasing the transmission date rate. However, the proposed design remains largely unchanged except for modifying the noise variance accordingly.}}. Thus, the energy consumption of the user for offloading $\ell_k$-bit data in epoch $k$, denoted by $E_k(\ell_k)$, is given as $E_k(\ell_k)=p_k \tau_k=\dfrac{\tau_k}{h^2} f\!\left(\ell_k/\tau_k\right),$
where the function $f(x)$ is defined by $f(x)=N_0 (2^{\frac{x}{W}}-1)$ based on the Shannon's equation.
For ease of exposition, the energy and time the user spends on receiving co-computing results are assumed negligible, as they are typically much smaller than the offloading counterparts. Extending the current analysis to include such overhead is straightforward though tedious.
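For illustration, the per-epoch energy model can be evaluated directly; the following minimal Python sketch uses hypothetical parameter values and exposes the convex growth of energy with the number of offloaded bits.
\begin{verbatim}
def offload_energy(l_k, tau_k, h2, W, N0):
    """User's transmission energy E_k(l_k) = (tau_k/h^2) f(l_k/tau_k)
    with f(x) = N0 (2^(x/W) - 1), i.e., inverting Shannon's formula to
    get the power needed to send l_k bits in tau_k seconds."""
    r = l_k / tau_k                        # constant rate within the epoch
    p = (N0 / h2) * (2.0**(r / W) - 1.0)   # required transmit power
    return p * tau_k

# Illustrative (hypothetical) numbers: 1 MHz bandwidth, unit channel gain.
print(offload_energy(l_k=2e6, tau_k=1.0, h2=1.0, W=1e6, N0=1e-9))  # 3e-9 J
print(offload_energy(l_k=4e6, tau_k=1.0, h2=1.0, W=1e6, N0=1e-9))  # 1.5e-8 J
\end{verbatim}
Doubling the bits sent in a fixed epoch more than doubles the energy; this convexity underlies the string-pulling structure derived in the sequel.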
\vspace{-5pt}
\subsection{Model of Co-Computing}\label{Sec:MoCocomputing}
The offloaded data is assumed to be firstly stored in the helper's buffer and then fetched to the CPU for co-computing. {\color{black}{To avoid overloading the helper's CPU, we assume that co-computing can be performed only during helper-CPU idle epochs.}} As such, let $T_{\rm{end}}$ and $K = k(T_{\rm{end}})$ denote the \emph{actual} completion time and corresponding epoch index with $T_{\rm{end}}\le T$ and $K= \tilde{K}$ (or $\tilde{K}-1$) depending on whether the last epoch is idle (or busy). Note that the idling CPU resource can only be utilized in \emph{real time}, which means \emph{a CPU cycle available now cannot be used in the future}, referred to as the \emph{CPU real-time constraints} in the sequel. Let $d_k(\ell_k)$ denote the computed data size at the helper's CPU during epoch $k$ and $B_k$ the remaining data size at the end of epoch $k$ (or the beginning of epoch $k+1$) with $B_0=0$. Under the CPU real-time constraints, $d_k(\ell_k)$ and $B_k$ evolve as
\vspace{-5pt}
\begin{align}\label{Eq:BEvolve}
& \text{(CPU real-time constraints)}\nonumber\\
& d_k(\ell_k)= \min\left\{ B_{k-1}+\ell_k, \frac{a_k \tau_k f_h}{C}\right\}, ~~B_k=\sum_{j=1}^k \ell_j-\sum_{j=1}^{k} d_j(\ell_j), ~~k=1,\cdots, K,
\end{align}
where $(B_{k-1}+\ell_k)$ is the computable data size in epoch $k$ and $(a_k \tau_k f_h/C)$ the available CPU resource. As a result of above constraints, a feasible co-computing strategy should satisfy the following deadline and buffer constraints.
\begin{itemize}
\item[1)]{\emph{Deadline constraint}: It requires the offloaded $\ell$-bit data to be computed within the deadline:
\vspace{-10pt}
\begin{equation}\label{Eq:Deadline}
\sum_{k=1}^{K} d_k(\ell_k)=\sum_{k=1}^{K} \ell_k=\ell.
\end{equation}}
\item[2)]{\emph{Buffer constraints}: Buffer overflow is prohibited, imposing the constraints:
\vspace{-3pt}
\begin{equation}\label{Eq:Buffer}
B_k=\sum_{j=1}^k \ell_j-\sum_{j=1}^{k} d_j(\ell_j)\le Q, ~~ k=1,\cdots, K.
\end{equation}
}
\end{itemize}
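For any candidate offloading policy, the evolution in \eqref{Eq:BEvolve} and the constraints \eqref{Eq:Deadline} and \eqref{Eq:Buffer} can be verified by direct simulation. The following minimal Python sketch (with hypothetical parameters, reusing the toy profile above) performs this feasibility check.
\begin{verbatim}
import numpy as np

def simulate_buffer(ell, tau, a, f_h, C, Q):
    """Evolve d_k and B_k of Eq. (2) for an offloading policy ell[k];
    return (computed bits per epoch, buffer trajectory, feasible?).
    Feasible = deadline met (all bits computed) and no buffer overflow."""
    d, B = np.zeros(len(ell)), np.zeros(len(ell))
    prev = 0.0
    for k in range(len(ell)):
        cpu_bits = a[k] * tau[k] * f_h / C       # CPU resource in epoch k
        d[k] = min(prev + ell[k], cpu_bits)      # CPU real-time constraint
        B[k] = prev + ell[k] - d[k]
        prev = B[k]
    feasible = np.isclose(d.sum(), ell.sum()) and np.all(B <= Q)
    return d, B, feasible

tau = np.array([0.2, 0.3, 0.1, 0.4]); a = np.array([0, 1, 0, 1])
f_h, C, Q = 2e9, 1e3, 3e5
ell = np.array([2e5, 4e5, 2e5, 2e5])   # candidate offloading policy
d, B, ok = simulate_buffer(ell, tau, a, f_h, C, Q)
print(d, B, ok)
\end{verbatim}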
\vspace{-5pt}
{\color{black}{
\section{Mobile Cooperative Computing with One-Shot Data Arrival}\label{Sec:EgyOneShot}
In this section, we assume that the user has a one-shot data arrival and the helper has a finite buffer, and we design energy-efficient co-computing algorithms for adaptive offloading and data partitioning.
\subsection{Problem Formulation}\label{Sec:EGProblem}
Consider that the user has an $L$-bit input-data arrival at time $t=0$. The problem of energy-efficient co-computing is formulated as two sub-problems: the slave problem corresponding to adaptive offloading and the master one to data partitioning.
\subsubsection{Slave Problem of Adaptive Offloading} Given that the user offloads $\ell$ bits of data to the helper, the slave problem aims at minimizing the user's transmission-energy consumption under the deadline and buffer constraints; it can be formulated as:
\begin{equation*}~~({\bf P1})~
\begin{aligned}
\min_ {\boldsymbol{\ell}\ge \boldsymbol{0}} \quad& \sum_{k=1}^{K} \dfrac{\tau_k}{h^2} f\!\left(\dfrac{\ell_k}{\tau_k}\right) \\
\rm{s.t.}\quad & \sum_{k=1}^{K} d_k(\ell_k)=\sum_{k=1}^{K} \ell_k=\ell,
\\ & \sum_{j=1}^k \ell_j-\sum_{j=1}^{k} d_j(\ell_j) \le Q, \quad k=1,\cdots, K,
\end{aligned}
\end{equation*}
where $\boldsymbol{\ell}\overset{\triangle}{=}[\ell_1, \ell_2, \cdots, \ell_K]$ and $\boldsymbol{\ell} \ge \boldsymbol{0}$ means that $\ell_k\ge 0, \forall k$.
Let $\{\ell_k^*\}$ solve Problem P1 and thus specify the optimal offloading strategy. Then $E_{\rm{off}}(\ell)=\sum\nolimits_{k=1}^{K} \dfrac{\tau_k}{h^2} f\!\left(\ell_k^*/\tau_k\right)$ denotes the minimum transmission-energy consumption.
\subsubsection{Master Problem of Data Partitioning} Given $E_{\rm{off}}(\ell)$, the master problem partitions the $L$-bit data for local computing and offloading. Under the criterion of minimum user's energy consumption, the problem can be formulated as below:
\begin{equation*}({\bf P2})\qquad
\begin{aligned}
\min_ {\ell } \quad& (L-\ell) C P_{\rm{cyc}}+ E_{\rm{off}}(\ell)
\qquad\rm{s.t.}~~
& \ell_{\min}^+ \le \ell \le L,
\end{aligned}
\end{equation*}
where $\ell_{\min}^+$ enforces the deadline for local computing (see Section~\ref{Mod:SU}).
\subsection{Energy-Efficient Adaptive Offloading}\label{Sec:EgyPolicyOneShot}
In this sub-section, we present a tractable approach for solving the complex Problem P1 by defining an \emph{offloading feasibility tunnel} and using it as a tool to derive the \emph{string-pulling} policy for energy-efficient offloading.
First, one can observe that Problem P1 is feasible if and only if the offloaded data size is no larger than the maximum helper-CPU resource (in bits), i.e., $\ell \le U_{{{\rm{bit}}}, K}$. To simplify the procedure, we first solve Problem P1 conditioned on the full-utilization of helper-CPU, namely $\ell=U_{{{\rm{bit}}}, K}$. Then, the solution is modified for the case of underutilization, namely $\ell<U_{{{\rm{bit}}}, K}$.
\subsubsection{Full-Utilization of Helper-CPU \emph{[$\ell = U_{{{\rm{bit}}}, K}$]}} \label{Sec:FullCPU}The design approach consists of constructing an offloading feasibility tunnel and pulling a string (shortest path) over the tunnel as follows.
\noindent\underline{\textbf{a) Offloading Feasibility Tunnel}}\\
To define the tunnel, we first derive two sets of constraints that specify the ceiling and floor of the tunnel. For the current case, one key observation is that to meet the deadline constraint, the feasible solution should utilize all the helper-CPU idle epochs. Mathematically, this introduces a set of \emph{helper-CPU computing-speed constraints} on the computed bits in each epoch $d_k(\ell_k)$ as:
\begin{equation}\label{Eq:Simplification}
d_k(\ell_k)\!=\!\dfrac{a_k \tau_k f_h}{C} \le B_{k-1}+\ell_k,~~~\text{and}~~~\sum_{j=1}^{k} d_j(\ell_j)=U_{{{\rm{bit}}},k}, \qquad k=1,\cdots, K.
\end{equation}
Combining \eqref{Eq:Simplification} with the remaining bits for computing, namely $B_k\!=\!\sum_{j=1}^k \ell_j\!-\!\sum_{j=1}^{k} d_j(\ell_j)$, yields
\begin{equation}\label{Eq:LowerBoundary}
\text{(Minimum accumulated offloaded data size)}~~~
\sum_{j=1}^k \ell_j \ge U_{{{\rm{bit}}},k} , \qquad k=1,\cdots, K.
\end{equation}
Each of the above constraints specifies the \emph{minimum accumulated offloaded data size} at a particular time instant. Next, substituting the helper-CPU computing-speed constraints in \eqref{Eq:Simplification} into the buffer constraint in \eqref{Eq:Buffer} leads to
\begin{equation}\label{Eq:UpperBoundary}
\text{(Maximum accumulated offloaded data size)}~~~
\sum_{j=1}^k \ell_j \le \min\{U_{\rm{bit},k}+Q, \ell\}, ~~ k=1,\cdots, K,
\end{equation}
which imposes the \emph{maximum accumulated offloaded data size} at each time instant.
Let an offloading policy be specified by a sequence of offloaded bits for different epochs: $\boldsymbol{\ell}=[\ell_1, \ell_2, \cdots, \ell_{K}]$. Then the \emph{offloading feasibility tunnel} is defined as follows.
\begin{definition}[Offloading Feasibility Tunnel]\emph{
Let $\mathcal{T}(\ell)$ denote the offloading feasibility tunnel for the total offloaded data size $\ell$, defined as the set of feasible offloading policies under the constraints in \eqref{Eq:LowerBoundary}, \eqref{Eq:UpperBoundary} and the deadline. Mathematically,
\begin{align}\label{Eq:OffloadingFeaTunnel}
&\mathcal{T}(\ell)\!=\!\! \left\{ \boldsymbol{\ell} ~\bigg|~U_{{\rm{bit}},k} \!\le\! \sum_{j=1}^k \ell_j\le \min\{U_{{\rm{bit}},k}+Q, \ell\},
~\text{for}~ k\!=\!1,\cdots, K-1, ~\text{and}~ \sum_{k=1}^{K} \ell_k \!=\! \ell\right\}.
\end{align}}
\end{definition}
Graphically, the set of constraints in \eqref{Eq:LowerBoundary} depicts the floor of the tunnel and that in \eqref{Eq:UpperBoundary} its ceiling. Since constant-rate transmission within each epoch is optimal, the offloading feasibility tunnel can be plotted in the plane of number of bits versus time as illustrated in Fig.~\ref{Fig:OptimalScheduling}. One can observe that the tunnel floor is the helper's CPU-idling profile and shifting the floor upwards by the buffer size gives the tunnel ceiling. Specifically, for the case where the helper has a \emph{large buffer} for storing the offloaded data, referring to the case where $Q\ge L$, we have the following remark.
\begin{remark}[Offloading Feasibility Tunnel for Large Buffer]\label{Rem:OffTunLarge}\emph{Consider that the helper has a large buffer. Then $\sum_{j=1}^k \ell_j\le \min\{U_{{\rm{bit}},k}+Q, \ell\}=\ell$, and thus the corresponding offloading feasibility tunnel reduces to one whose ceiling is bounded by the total offloaded data size $\ell$ and whose floor is the same as that of \eqref{Eq:OffloadingFeaTunnel}. Mathematically,
\begin{align}\label{Eq:OffloadingFeaTunnelLarge}
&\mathcal{T}(\ell)\!=\!\! \left\{ \boldsymbol{\ell} ~\bigg|~U_{{\rm{bit}},k} \!\le\! \sum_{j=1}^k \ell_j\le \ell,
~\text{for}~ k\!=\!1,\cdots, K-1, ~\text{and}~ \sum_{k=1}^{K} \ell_k \!=\! \ell\right\}.
\end{align}}
\end{remark}
\begin{figure}[t]
\begin{center}
\includegraphics[height=5.6cm]{./FigOptimalScheduling.pdf}
\caption{An offloading feasibility tunnel (shaded in gray) and the energy-efficient transmission policy (the ``pulled string" in red) for the case of a small buffer at the helper.}
\label{Fig:OptimalScheduling}
\end{center}
\end{figure}
Using the said tunnel, Problem P1 can be equivalently transformed into Problem P3 below.
\begin{equation*}
({\bf P3})\qquad
\begin{aligned}
\min_ {\boldsymbol{\ell}\ge \boldsymbol{0}} \quad& \sum_{k=1}^{K} \dfrac{\tau_k}{h^2} f\!\left(\dfrac{\ell_k}{\tau_k}\right)
\qquad \rm{s.t.}~~
& \boldsymbol{\ell} \in \mathcal{T}(\ell).
\end{aligned}
\end{equation*}
It is easy to prove that Problem P3 is a convex optimization problem which can be solved by the Lagrange method. Instead, using the defined offloading feasibility tunnel, we show that the optimal policy has a ``string-pulling" structure in the sequel. Before the derivation, we define the ``string-pulling" policy and offer a remark introducing its application in transmission control.
\begin{definition}[``String-Pulling" Policy]\label{Def:StrPul}\emph{Given a tunnel with a floor and ceiling (see Fig.~\ref{Fig:OptimalScheduling}), the ``string-pulling" policy is a scheme to construct the \emph{shortest path} from a starting point to an ending point through the tunnel, which can be performed by pulling a stretched string from the same starting point to the ending point through the tunnel.}
\end{definition}
\begin{remark}[General ``String-Pulling" Transmission Control]\label{Rem:SP}\emph{The well-known class of ``string-pulling" policies for adapting transmissions arises from two simple facts.
\begin{enumerate}
\item The transmission energy is a \emph{convex increasing} function of the rate.
\item Given data and time duration, the \emph{constant-rate transmission} is energy-efficient, corresponding to a straight-line segment in the throughput-time plane.
\end{enumerate}
Time-varying transmission constraints, such as random energy or data arrivals, create a feasibility tunnel in the said plane. Given the above facts, a control policy for energy-efficient or throughput-optimal transmission is reflected as a ``pulled string" across the tunnel\cite{yang2012optimal,tutuncuoglu2012optimum,zafer2005calculus}.}
\end{remark}
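Since Problem P3 is convex, it can also be solved numerically with a generic solver, which offers a reference for checking the string-pulling policy derived next. The minimal sketch below uses hypothetical parameters in normalized units ($W=h^2=N_0=1$, with the helper computing two bits per second when idle).
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

tau = np.array([0.2, 0.3, 0.1, 0.4])   # epoch lengths
a   = np.array([0,   1,   0,   1  ])   # helper-CPU idle indicators
U   = np.cumsum(a * tau) * 2.0         # U_bit,k: floor of the tunnel
Q, ell = 0.3, U[-1]                    # buffer size; full CPU utilization

# Objective: sum_k tau_k * f(l_k / tau_k) with f(x) = 2^x - 1.
energy = lambda l: np.sum(tau * (2.0**(l / tau) - 1.0))

cons = [{'type': 'eq',   'fun': lambda l: np.sum(l) - ell},
        {'type': 'ineq', 'fun': lambda l: np.cumsum(l)[:-1] - U[:-1]},
        {'type': 'ineq', 'fun': lambda l: np.minimum(U[:-1] + Q, ell)
                                          - np.cumsum(l)[:-1]}]
res = minimize(energy, x0=np.full(4, ell / 4), bounds=[(0, None)] * 4,
               method='SLSQP', constraints=cons)
print(res.x, res.fun)   # optimal l_k^* and minimum transmission energy
\end{verbatim}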
\noindent\underline{\textbf{b) Optimal String-Pulling Policy}}\\
The offloading policy that specifies the set of offloaded bits for solving Problem P3 is shown to belong to the class of ``string-pulling" policies in the following proposition.
\begin{proposition}[Energy-Efficient Offloading Policy]\label{Prop:EgMinCondi}\emph{In the offloading feasibility tunnel $\mathcal{T}(\ell)$, the energy-efficient transmission policy $\boldsymbol{\ell}^*$ can be derived by forming a \emph{shortest path} connecting the starting and ending points, i.e., $(0, 0)$ and $(T_{\rm{end}}, \ell)$. Specifically, $\ell_k^*\!=\! r_k^* \tau_k$ where the optimal offloading rate in each epoch, $r_k^*$, satisfies the following necessary and sufficient conditions.
\begin{itemize}
\item[1)]{The offloading rate does not change unless the helper-CPU changes its state.}
\item[2)]{The offloading rate can increase only when the helper-CPU changes to enter the idle state. In this case, the buffer is fully occupied at that time instant, e.g., at time $s_k$, we have $B_{k-1}=Q$.}
\item[3)]{The offloading rate can decrease only when the helper-CPU changes to enter the busy state. In this case, the buffer is empty at that time instant, e.g., at time $s_{k}$, we have $B_{k-1}=0$.}
\end{itemize}
}
\end{proposition}
The shortest path in Proposition~\ref{Prop:EgMinCondi} can be derived by the ``string-pulling" policy defined in Definition~\ref{Def:StrPul} and illustrated in Fig.~\ref{Fig:OptimalScheduling}. The result can be proved by showing that the optimal offloading policy satisfies the facts specified in Remark~\ref{Rem:SP}. The details are straightforward and omitted for brevity.
For computing the policy, there exists a generic algorithm for finding a shortest path across a tunnel as in Proposition~\ref{Prop:EgMinCondi} \cite{tutuncuoglu2012optimum, zafer2005calculus}. The key idea is to \emph{recursively} find the set of turning time instants and the slopes of the path segments (offloading rates in the current case) between these time instants. The algorithm can be modified for computing the energy-efficient offloading policy. To this end, several definitions are needed. Consider the starting time-instant $s_i$, where $i$ is the epoch index. Define the \emph{reference rate region} of epoch $k$ by $\boldsymbol{R}_k=\{r_k ~|~ R_{k}^{\min}\le r_k\le R_{k}^{\max}\}$ for $i+1\le k\le K$, where
\begin{align}
R_{k}^{\min}=\dfrac{U_{{\rm{bit}},k}-\sum_{j=1}^i \ell_j^*}{s_k-s_i}~~\text{and}~~
R_{k}^{\max}=\dfrac{\min\{U_{{\rm{bit}},k}+Q, \ell\}-\sum_{j=1}^i \ell_j^*}{s_k-s_i}
\end{align} are the minimum and maximum offloading rates that result in an empty and a fully-occupied buffer at the end of epoch $k$, respectively. Note that when $i=0$, $\sum_{j=1}^i \ell_j^*=0$, and when $i>0$, $\ell_j^*$ for $1\le j\le i$ are the offloaded data sizes already decided. In addition, define $R_i^{\rm{end}}$ as the \emph{reference constant rate} at time instant $s_i$, given as $R_i^{\rm{end}}=\frac{\ell-\sum_{j=1}^i \ell_j^*}{T_{\rm{end}}-s_i}$, corresponding to the slope of a straight line connecting the starting and ending points.
Note that the rates \{$R_{k}^{\min}$, $R_{k}^{\max}$, $R_i^{\rm{end}}$\} may not be feasible but are used for comparison. The detailed algorithm is presented in Algorithm~\ref{Alg:P4}.
\begin{algorithm}[t!]
\caption{Computing the Energy-Efficient Offloading Policy for Solving Problem P3.}
\label{Alg:P4}
\begin{itemize}
\item{\textbf{Step 1} [Initialization]: $n=1$, $i_n^s=1$ and $k=i_n^s$ where $i_n^s$ is the epoch index of the starting time instant for the $n$-th constant rate.
}
\item{\textbf{Step 2} [Determine the ``string-pulling" offloading policy]:\\
(1) Check whether the reference rate $R_{i_n^s}^{\rm{end}}$ is feasible: if $R_{i_n^s}^{\rm{end}}\in \bigcap_{j=i_n^s}^{K} \boldsymbol{R}_j$, transmit at rate $R_{i_n^s}^{\rm{end}}$ from epoch $i_n^s$ to $K$ and terminate the algorithm, otherwise, go to the next sub-step.
\\(2) Find the next turning time instant of the shortest path and compute the offloading rate:
\begin{itemize}
\item[i)]{While $\boldsymbol{R}_{k+1}\in \bigcap_{j=i_n^s}^k \boldsymbol{R}_j$, update by $k=k+1$, otherwise, go to the next sub-step.}
\item[ii)]{If $\boldsymbol{R}_{k+1}> \bigcap_{j=i_n^s}^k \boldsymbol{R}_j$, then $i_n^e=m$ where $m=\max\{j~|~R_{j}^{\max}=r_k^* \}$ and $r_k^*=\max\{\bigcap_{j=1}^k \boldsymbol{R}_j\}$. For $i_n^s \le k\le i_n^e$, the optimal offloaded data size is $\ell_k^*=r_k^* \tau_k$.\\
If $\boldsymbol{R}_{k+1}< \bigcap_{j=i_n^s}^k \boldsymbol{R}_j$, then $i_n^e=m$ where $m=\max\{j~|~R_{j}^{\min}=r_k^* \}$ and $r_k^*=\min\{\bigcap_{j=1}^k \boldsymbol{R}_j\}$. For $i_n^s \le k\le i_n^e$, the optimal offloaded data size is $\ell_k^*=r_k^* \tau_k$.}
\end{itemize}
}
\item{\textbf{Step 3} [Repeat]: Let $n=n+1$, $i_n^s=i_{n-1}^e+1$, $k=i_n^s$;
update $\boldsymbol{R}_k$ and go to Step 2.}
\end{itemize}
\end{algorithm}
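For concreteness, the following is a minimal Python sketch of the taut-string computation underlying Algorithm~\ref{Alg:P4}; the function name and the greedy restart rule are our own illustrative choices rather than a verbatim transcription of the algorithm, and a feasible tunnel with common start and end points is assumed.
\begin{verbatim}
import math

def taut_string(t, floor, ceil):
    # Shortest path through the tunnel floor[k] <= y(t[k]) <= ceil[k],
    # assuming floor[0] == ceil[0] (start) and floor[-1] == ceil[-1] (end).
    n = len(t) - 1
    knots = [(t[0], floor[0])]
    i, y = 0, floor[0]
    while i < n:
        lo_s, hi_s = -math.inf, math.inf   # admissible slope window
        lo_i = hi_i = None
        turned = False
        for k in range(i + 1, n + 1):
            s_lo = (floor[k] - y) / (t[k] - t[i])
            s_hi = (ceil[k] - y) / (t[k] - t[i])
            if s_lo > hi_s:                # string bends on the ceiling
                i, y, turned = hi_i, ceil[hi_i], True
                break
            if s_hi < lo_s:                # string bends on the floor
                i, y, turned = lo_i, floor[lo_i], True
                break
            if s_lo > lo_s: lo_s, lo_i = s_lo, k
            if s_hi < hi_s: hi_s, hi_i = s_hi, k
        if not turned:                     # straight segment to the end
            i, y = n, floor[n]
        knots.append((t[i], y))
    return knots  # turning points; segment slopes are the rates r_k^*
\end{verbatim}
In the notation of this section, $t$ collects the epoch boundaries $\{s_k\}$, the floor is $U_{{\rm{bit}},k}$, the ceiling is $\min\{U_{{\rm{bit}},k}+Q, \ell\}$, and $\ell_k^*$ is recovered from the slope of the segment covering epoch $k$.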
\begin{remark}[Buffer Gain]\label{Rem:BufferGain}\emph{It can be observed from Fig.~\ref{Fig:OptimalScheduling} that increasing the buffer size will shift up the tunnel ceiling, enlarging the tunnel area. This allows the pulled string to approach a single straight line, thereby reducing the transmission-energy consumption. However, the buffer gain saturates when the buffer size exceeds the total number of offloaded bits, corresponding to a large buffer.
}
\end{remark}
\begin{remark}[Effect of Helper's CPU-Idling Profile]\emph{It can be observed from Fig.~\ref{Fig:OptimalScheduling} that the helper's CPU-idling profile significantly affects the energy-efficient P2P transmission policy. Specifically, when the helper has a large buffer, the optimal offloading policy is constrained only by the tunnel floor (see \eqref{Eq:OffloadingFeaTunnelLarge}). Given the total helper-CPU idling duration, the user achieves the minimum transmission-energy consumption if the helper's CPU first stays in the busy state and then switches to an idle state that lasts until the deadline. The reason is that in this scenario, the user has a long consecutive duration for transmitting enough input data to fully utilize the helper-CPU idle epochs, resulting in low transmission rates.}
\end{remark}
\subsubsection{Underutilization of Helper-CPU \emph{[$\ell < U_{{{\rm{bit}}}, K}$]}} \label{Sec:UnderutilizationEgy} This case is relevant in two scenarios. First, the spare CPU resource at the helper is rich, such that its full utilization may not be necessary or even possible. Second, when the channel is unfavorable, it is beneficial to reduce the offloaded data size, which may under-utilize the helper's CPU. To characterize the corresponding policy structures, in the following we first consider the large-buffer case and derive its optimal offloading policy. For the case of a small buffer, however, the corresponding problem is highly complex. To address this challenge, we design a sub-optimal policy using the insight from the large-buffer counterpart. \\
\noindent\underline{\textbf{a) Large Buffer}}\\
Consider that the helper has a large buffer (i.e., $Q\ge L$). For the case of underutilization of the helper-CPU, the number of offloaded bits $\ell$ is below the helper's spare CPU capacity. The corresponding optimal offloading strategy can be designed by extending the solution approach for the full-utilization counterpart. This essentially involves defining an \emph{effective offloading feasibility tunnel} with a lower floor with respect to (w.r.t.) the original one in \eqref{Eq:OffloadingFeaTunnelLarge}. See the details below.
Recall the CPU real-time constraints, namely that a CPU cycle available now cannot be used in the future. Then, given the helper's CPU-idling profile $U_{\text{bit}, K}$ and the offloaded data size $\ell$, the amount of underutilized CPU resource, measured by the accumulated unused computable bits in each epoch, cannot exceed $\Delta(\ell)=(U_{{{\rm{bit}}}, K}-\ell)$-bit. Otherwise, computing the $\ell$-bit of offloaded data by the deadline is infeasible. Mathematically,
\begin{equation*}
U_{{{\rm{bit}}},k}-\sum_{j=1}^{k} d_j(\ell_j)\le \Delta(\ell),~\text{for}~ k=1, \cdots, K-1,~~\text{and}~~ U_{{{\rm{bit}}},K}-\sum_{j=1}^{K} d_j(\ell_j)=\Delta(\ell)
\end{equation*} where $d_j(\ell_j)$ gives the bits computed in epoch $j$ as defined earlier. Combining these constraints with the property of the accumulated computed bits, $0\le\! \sum_{j=1}^{k} d_j(\ell_j)\le \min\left\{U_{{{{\rm{bit}}}},k}, \sum_{j=1}^{k} \ell_j\right\},$ which can be observed from \eqref{Eq:BEvolve}, yields the following bounds on the accumulated computed bits:
\begin{equation}\label{Eq:NewComputed}
[U_{{{\rm{bit}}},k}-\Delta(\ell)]^{+} \le \sum_{j=1}^k d_j(\ell_j)\le \min\left\{U_{{{{\rm{bit}}}},k}, \sum_{j=1}^{k} \ell_j\right\}, ~k=1,\cdots, K.
\end{equation}
Using \eqref{Eq:NewComputed}, the \emph{effective offloading feasibility tunnel} is defined as follows.
\begin{definition}[Effective Offloading Feasibility Tunnel]\emph{Assume that the helper has a large buffer. For the case of underutilization, the effective offloading feasibility tunnel, denoted by $\bar{\mathcal{T}}(\ell)$, is defined as the set of policies with accumulated offloaded bits constrained as
\begin{align}\label{Eq:EffectiveRegion}
&\bar{\mathcal{T}}(\ell)=\left\{ \boldsymbol{\ell} ~\bigg|~[U_{{{\rm{bit}}},k}-\Delta(\ell)]^{+} \le \sum_{j=1}^{k} \ell_j\le \ell , ~ \text{for}~k=1,\cdots, K-1,~\text{and}~ \sum_{k=1}^{K} \ell_k = \ell~\right\}.
\end{align}}
\end{definition}
The effective offloading feasibility tunnel can be constructed by shifting the full-utilization counterpart $\mathcal{T}(\ell)$ in \eqref{Eq:OffloadingFeaTunnelLarge} downwards by $(U_{{{\rm{bit}}}, K}-\ell)$ and then cutting off the regions where the number of bits falls below $0$.
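To make the shift-and-cut construction concrete, the following is a small sketch (ours, assuming NumPy and the cumulative profile stored as an array indexed by epoch):
\begin{verbatim}
import numpy as np

def effective_tunnel_floor(U_bit, ell):
    # Floor of the effective offloading feasibility tunnel: shift the
    # full-utilization floor down by Delta(ell) = U_bit[-1] - ell and
    # clip at zero (the [.]^+ operation); the ceiling is simply ell.
    U_bit = np.asarray(U_bit, dtype=float)
    delta = U_bit[-1] - ell          # unused computable bits Delta(ell)
    return np.maximum(U_bit - delta, 0.0)

print(effective_tunnel_floor([0.2, 0.5, 0.5, 1.0], ell=0.6))
# -> [0.  0.1 0.1 0.6]
\end{verbatim}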
Next, one important property of the defined effective offloading feasibility tunnel is stated in the proposition below, proved in Appendix~\ref{App:OptRegion}.
\begin{proposition}\label{Pro:OptRegion}\emph{Assume that the helper has a large buffer. For the case of underutilization, the energy-efficient transmission policy can be derived by forming a shortest path in the effective offloading feasibility tunnel.
}
\end{proposition}
Based on Proposition~\ref{Pro:OptRegion}, Problem P1 can be transformed into the problem with the constraints replaced by the effective offloading feasibility tunnel. The new problem has the same form as Problem P3 and differs only in the definition of the offloading feasibility tunnel. Thus, it can be solved using the same ``string-pulling" approach as in Section~\ref{Sec:FullCPU}.
\noindent\underline{\textbf{b) Small Buffer}}\\
For this case, we show that computing the optimal policy is highly complex without yielding useful insight. To address this difficulty, we propose a tractable \emph{proportional CPU-utilization} scheme which is asymptotically optimal.
First, similar to the case of a large buffer,
given the helper's CPU-idling profile and the deadline constraint, the amount of unused computable bits is $\Delta(\ell)=(U_{{{\rm{bit}}}, K}-\ell)$-bit and the accumulated computed bits can be bounded as in \eqref{Eq:NewComputed}. Combining them with the buffer constraints in \eqref{Eq:Buffer} yields the following constraints on the accumulated offloaded bits:
\begin{equation}
\sum_{j=1}^k d_j(\ell_j) \le \sum_{j=1}^{k} \ell_j\le \sum_{j=1}^k d_j(\ell_j)+Q,~~k=1, \cdots, K.
\end{equation} Therefore, Problem P3 can be transformed into Problem P4 as follows.
\begin{equation*}~~({\bf P4})~
\begin{aligned}
\min_ {\boldsymbol{\ell}\ge \boldsymbol{0}} \quad& \sum_{k=1}^{K} \dfrac{\tau_k}{h^2} f\!\left(\dfrac{\ell_k}{\tau_k}\right) \\
\rm{s.t.}\quad & \sum_{j=1}^k d_j(\ell_j) \le \sum_{j=1}^{k} \ell_j\le \sum_{j=1}^k d_j(\ell_j)+Q, \quad k=1,\cdots, K,
\\ & [U_{{{\rm{bit}}},k}-\Delta(\ell)]^{+} \le \sum_{j=1}^k d_j(\ell_j)\le U_{{{{\rm{bit}}}},k}, \quad k=1,\cdots, K,\\
& \sum_{k=1}^{K} d_k(\ell_k)=\sum_{k=1}^{K} \ell_k=\ell.
\end{aligned}
\end{equation*}
Since $d_j(\ell_j)$ is a non-affine function of $\ell_j$ (see \eqref{Eq:BEvolve}), Problem P4 is a non-convex optimization problem that is difficult to solve. The intractability arises from determining the time instants at which, and the levels (in terms of unused CPU cycles) to which, the helper-CPU should be under-utilized; these are coupled due to the residual unused CPU resource carried over from one epoch to the next. The conventional approach for solving this type of optimization problem is dynamic programming, which requires discretizing the continuous state space, incurring high complexity without yielding useful insight into the policy structure. To tackle the difficulty, we propose the following practical scheme of \emph{proportional CPU-utilization}.
\begin{definition}[Proportional CPU-Utilization]\label{Def:ProCPUUtili}\emph{Consider the case where the helper has a small buffer. For the case of underutilization, in each CPU idle epoch, the proportional CPU-utilization scheme assigns a fixed number of CPU cycles to the user per second without adjusting the CPU frequency. As a result, the user can fully utilize the allocated CPU resource. Let $\tilde{f}_h$ denote the number of allocated CPU cycles per second. Mathematically, $\tilde{f}_h=f_h \frac{\ell}{U_{{\rm{bit}},K}}$. }
\end{definition}
This scheme can be implemented by the advanced hyper-threading technique \cite{koufaty2003hyperthreading}, which allows multiple threads to time-share one physical CPU via proportional CPU resource allocation. Under this scheme, we define $\tilde{U}_{{\rm{bit}},k}$ as the \emph{effective helper's CPU-idling profile}, given as $\tilde{U}_{{\rm{bit}},k}=U_{{\rm{bit}},k}\frac{\ell}{U_{{\rm{bit}},K}}$, for $k=1, \cdots, K$.
Then the current case of underutilization of helper-CPU can be reduced to the counterpart of full-utilization in Section~\ref{Sec:FullCPU} and efficiently solved using the same approach. Furthermore, this scheme is shown to be \emph{asymptotically optimal} in the following proposition.
\begin{proposition}[Asymptotic Optimality]\label{Prop:ProCPUUtil}\emph{The proportional CPU-utilization scheme is the optimal offloading policy when the buffer size $Q\to 0$.}
\end{proposition}
This proposition is proved in Appendix~\ref{App:ProCPUUtil}. It indicates that the smaller the buffer size, the closer the performance of the proposed scheme is to that of the optimal one.
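As a quick illustration, the effective quantities in Definition~\ref{Def:ProCPUUtili} can be computed as follows (a sketch with our own function name; \texttt{U\_bit} holds the cumulative profile $U_{{\rm{bit}},k}$):
\begin{verbatim}
def proportional_cpu_profile(U_bit, ell, f_h):
    # Scale the CPU speed and the idling profile by the used
    # fraction ell / U_bit[-1] (proportional CPU-utilization).
    ratio = ell / U_bit[-1]
    f_tilde = f_h * ratio                 # allocated cycles per second
    U_tilde = [u * ratio for u in U_bit]  # effective profile
    return f_tilde, U_tilde
\end{verbatim}
By construction the effective profile ends at $\ell$, so the full-utilization machinery of Section~\ref{Sec:FullCPU} applies unchanged.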
}}
\subsection{Energy-Efficient Data Partitioning}\label{Sec:OptDataEgy}
The direct derivation of the energy-efficient data partitioning in Problem P2 is intractable due to the lack of a closed-form expression for the minimum transmission-energy consumption, i.e., $E_{\rm{off}}(\ell)$, as can be observed from Proposition~\ref{Prop:EgMinCondi}. To overcome this difficulty, in this sub-section Problem P2 is proved to be a convex optimization problem, allowing the optimal solution to be computed by a sub-gradient method.
First, to guarantee that both the adaptive offloading and local computing are feasible, the offloaded data bits should satisfy: $\ell_{\min}^{+}\le \ell \le \min\left\{U_{{\rm{bit}}, K}, L\right\}.$ Therefore, Problem P2 is feasible if and only if $\ell_{\min}^+\le U_{{\rm{bit}}, K}$. Next, let $\ell^{(1)}$ and $\ell^{(2)}$ denote two offloaded data sizes. Since the offloading feasibility tunnel $\mathcal{T}(\ell)$ in \eqref{Eq:OffloadingFeaTunnel} can be regarded as a special case of $\bar{\mathcal{T}}(\ell)$ in \eqref{Eq:EffectiveRegion} for which $\ell=U_{{\rm{bit}},K}$, we only consider the effective offloading feasibility tunnel in this sub-section. One important property of the tunnel is presented below, proved in
Appendix~\ref{App:PropRegi}.
\begin{lemma}\label{Lem:PropRegi}\emph{
Let $\boldsymbol{\ell}^{(1)} \in \bar{\mathcal{T}}(\ell^{(1)})$ and $\boldsymbol{\ell}^{(2)} \in \bar{\mathcal{T}}(\ell^{(2)})$. Then, for $0\le \lambda \le 1$,
\begin{equation}
\lambda \boldsymbol{\ell}^{(1)} + (1-\lambda) \boldsymbol{\ell}^{(2)} \in \bar{\mathcal{T}} (\lambda \ell^{(1)} + (1-\lambda) \ell^{(2)}).
\end{equation}
}
\end{lemma}
Using Lemma~\ref{Lem:PropRegi}, the convexity of the function $E_{\rm{off}}(\ell)$ is stated in the following lemma.
\begin{lemma}[Convexity of Minimum Transmission-Energy Function]\label{Lem:MasConvTime}\emph{
The function of minimum transmission-energy consumption, $E_{\rm{off}}(\ell)$, is a convex function w.r.t $\ell$. }
\end{lemma}
Lemma~\ref{Lem:MasConvTime} is proved in Appendix~\ref{App:MasConvTime}.
Using this lemma, it can be easily verified that Problem P2 is a convex optimization problem. Directly applying KKT conditions yields the key result of this sub-section in the proposition below.
\begin{proposition}[Energy-Efficient Data Partitioning]\label{Prop:OptDivisionEgy}\emph{Given the computation load $L$ and deadline $T$ at the user, the energy-efficient data-partitioning policy solving Problem P2 is:
\begin{equation*}
\ell^*=\max\left\{\ell_{\min}^+, \min\left\{\ell_0,U_{{\rm{bit}},K}, L\right\}\right\}
\end{equation*}
where $\ell_0$ is the solution for $E_{\rm{off}}^{'}(\ell_0)=C P_{\rm{cyc}}$ and $E_{\rm{off}}^{'}(\ell)$ denotes the first derivative of $E_{\rm{off}}(\ell)$.
}
\end{proposition}
Although the function $E_{\rm{off}}^{'}(\ell)$ has no closed form, $\ell_0$ in Proposition~\ref{Prop:OptDivisionEgy} can be easily computed via advanced convex optimization techniques, e.g., the sub-gradient method, yielding the optimal data partitioning using the formula in the proposition.
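For instance, since $E_{\rm{off}}(\ell)$ is convex, $\ell_0$ can also be bracketed by plain bisection on a numerical difference quotient; the sketch below is an illustrative stand-in for the sub-gradient iteration, where the routine \texttt{E\_off} is assumed to return the minimum transmission energy for a given $\ell$ (e.g., from the string-pulling solver).
\begin{verbatim}
def solve_ell0(E_off, target, lo, hi, eps=1e-6, tol=1e-9):
    # Find ell0 with E_off'(ell0) = target (here target = C * P_cyc).
    # Convexity makes the difference quotient nondecreasing in ell,
    # so bisection is valid.
    def dE(ell):
        return (E_off(ell + eps) - E_off(ell - eps)) / (2.0 * eps)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if dE(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
\end{verbatim}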
{\color{black}{Last, $E_{\rm{off}}(\ell)$ can be lower-bounded as $E_{\rm{off}}(\ell)\ge \frac{T_{\rm{end}}}{h^2} f(\ell/T_{\rm{end}})$. Combining it with Proposition~\ref{Prop:OptDivisionEgy} gives the following corollary.
\begin{corollary}[Minimum Offloading]\emph{Given the computation load $L$ and deadline $T$ at the user, if it satisfies that $T_{\rm{end}} f^{-1}(C_mP_{\rm{cyc}}h^2)\le \ell_{\min}^+$, the energy-efficient data partitioning selects the minimum data size for offloading, i.e., $\ell^*=\ell_{\min}^+$.}
\end{corollary}
This corollary reduces the complexity of computing the data-partitioning policy when the said condition is satisfied. Moreover, it coincides with the intuition that if the user has a bad channel or local computing consumes little energy, it is preferable to reduce the number of offloaded bits.
}}
{\color{black}{
\begin{remark}[Offloading to Multiple Helpers]\emph{The current results can be extended to the case where the user can offload input data to multiple helpers. The corresponding design can be formulated as a hierarchical optimization problem. Specifically, the slave problem aims at minimizing the energy consumption for offloading to one particular helper, for which the optimal offloading policy can be derived by the same ``string-pulling" approach. On the other hand, the master problem focuses on partitioning input data for local computing and offloading to multiple helpers. This optimization problem can be proved to be also convex using Lemma~\ref{Lem:MasConvTime}, thus the optimal data partitioning policy can be computed by the sub-gradient method.}
\end{remark}}
\begin{remark}[Co-Computing Networks]\emph{Our current design can be used as a building block for implementing different types of networks such as multi-helper networks and multi-access networks. For multi-helper networks, the helper selection can be performed as follows. Assume each user selects one helper that is within a certain distance and has the largest amount of idling computation resource given the deadline. Once the cooperation is initiated, the helper is assumed to be dedicated for co-computing with this user until the deadline. Next, consider multi-access networks where multiple users offload computation to one helper. The designs of adaptive offloading and data partitioning can be integrated with computation resource allocation at the helper such as the proposed proportional CPU-utilization scheme (see Definition~\ref{Def:ProCPUUtili}).}
\end{remark}}
\section{Mobile Cooperative Computing with Bursty Data Arrivals}\label{Sec:RandomComputLoad}
In this section, the solution approach for energy-efficient co-computing developed in Section~\ref{Sec:EgyOneShot} is extended to the case of bursty data arrivals. The bursty arrivals introduce a set of so-called \emph{data causality constraints}, defined in the sequel. Due to the new constraints, the original algorithms for offloading and data partitioning need to be redesigned. This essentially involves defining an alternative offloading feasibility tunnel accounting for bursty data arrivals.
\subsection{Problem Formulation}
Consider the case where the user has bursty data arrivals at time instants $\!\{\hat{s}_k\}$ as shown in Fig.~\ref{Fig:SecPri} and the helper has a large buffer (i.e., $Q \!\ge\! \!\sum_{k=1}^K L_k$)\footnote{Note that $T_{\rm{end}}=T$ if the last epoch of the helper-CPU idling profile is idle. Moreover, the extension to the case of a small buffer can be obtained from the large-buffer case using an approach similar to that for the one-shot arrival counterpart, and is thus omitted for brevity.}.
{\color{black}{Allowing each instant of data arrivals to have a different partitioning ratio makes the optimization problem intractable without yielding useful insight. To tackle this challenge, we first propose a tractable \emph{proportional data partitioning} scheme as defined below, which allows using a similar ``string-pulling" approach in the sequel.}}
\begin{definition}[Proportional Data Partitioning]\emph{For the $k$-th event time-instant $\hat{s}_k$, let $L_{k,\rm{off}}$ denote the size of partitioned data for offloading. The scheme of proportional data partitioning divides the data of each arrival for local computing and offloading using a fixed ratio: $\dfrac{L_{1,\rm{off}}}{L_1}=\dfrac{L_{2,\rm{off}}}{L_2}=\cdots=\dfrac{L_{K,\rm{off}}}{L_K}=\vartheta$, where $\vartheta$ is called the \emph{data-partitioning ratio}.}
\end{definition}
Note that when there is no data arrival at time instant $\hat{s}_k$, $L_k=0$ (see Section~\ref{Mod:SU}). The data-partitioning ratio $\vartheta$ is the optimization variable in the problem of data partitioning.
Based on the above definition, the problem of energy-efficient co-computing for bursty data arrivals can be decomposed as the following slave and master problems.
\subsubsection{Slave Problem of Adaptive Offloading} First, we derive a set of data causality constraints arising from bursty data arrivals. They reflect a simple fact: an input-data bit cannot be offloaded or computed before it arrives. Specifically, at each event time-instant $\hat{s}_k$, the user partitions $(\vartheta L_k)$-bit data for offloading given a fixed data-partitioning ratio $\vartheta$. The accumulated offloaded data size cannot exceed the $\vartheta$-fraction of the accumulated arrived data size at any time instant. Mathematically,
\begin{equation}\label{Eq:DataCausalityOff}
\text{(Data causality constraints for offloading)} \quad \sum_{j=1}^k \ell_j \le \sum_{j=1}^{k-1} \vartheta L_j, \quad k=1,\cdots, K.~~~
\end{equation}
\begin{remark}[Similarities with Energy-Harvesting Transmissions]\emph{
The data causality constraints are analogous to the energy causality constraints for energy-harvesting transmissions \cite{tutuncuoglu2012optimum,OzelUlukus:TransEnergyHarvestFading:OptimalPolicies:2011}. The latter specify that the accumulated energy consumed by transmission cannot exceed the total harvested energy by any time instant. The data constraints are due to random data arrivals while the energy counterparts arise from random energy arrivals. The above analogy, together with that in Remark~\ref{Rem:SP}, establishes an interesting connection between the mathematical structures of the problems in two different areas: energy-harvesting communications and co-computing.}
\end{remark}
By modifying Problem P1 to include the above constraints and assuming a large buffer, the problem of
energy-efficient offloading is formulated as:
\begin{equation*}({\bf P5})\qquad
\begin{aligned}
\min_ {\boldsymbol{\ell}\ge \boldsymbol{0} } \quad& \sum_{k=1}^{K} \dfrac{\tau_k}{h^2} f\!\left(\dfrac{\ell_k}{\tau_k}\right) \\
\rm{s.t.}\quad
& \sum_{k=1}^{K} d_k(\ell_k)=\sum_{k=1}^{K} \ell_k=\sum_{k=1}^{K-1} \vartheta L_k, \\
& \sum_{j=1}^k \ell_j \le \sum_{j=1}^{k-1} \vartheta L_j, &\quad k=1,\cdots, K.
\end{aligned}
\end{equation*}
Let $\hat{E}_{\rm{off}}(\vartheta)=\sum_{k=1}^{K} \dfrac{\tau_k}{h^2} f\!\left(\ell_k^*/\tau_k\right)$ denote the minimum transmission-energy consumption where $\{\ell_k^*\}$ solve Problem P5.
\subsubsection{Master Problem of Proportional Data Partitioning}
Given $\hat{E}_{\rm{off}}(\vartheta)$, the master problem focuses on optimizing the data-partitioning ratio $\vartheta$ under the criterion of minimizing the user's energy consumption. Let $\ell_{\rm{loc},k}$ denote the size of data for local computing at the user in epoch $k$. A set of data causality constraints for local computing can be derived similarly to \eqref{Eq:DataCausalityOff}:
\begin{equation}\label{Eq:LocalComputingDataCausality}
\text{(Data causality constraints for local computing)}~ \sum_{j=1}^k \ell_{\rm{loc},j}\le \sum_{j=1}^{k-1} (1-\vartheta)L_j, ~ k=1,\cdots, \tilde{K}.
\end{equation}
Note that local computing has $\tilde{K}$ epochs, determined by the deadline $T$. Assume that the user's CPU performs local computing whenever there exists computable data and otherwise stays idle. Let $d_{\rm{loc},k}(\ell_{\rm{loc},k})$ denote the bits computed locally in epoch $k$ and $B_{\rm{loc},k}$ the bits of remaining data at the end of epoch $k$. Due to the CPU real-time constraints mentioned earlier, $d_{\rm{loc},k}(\ell_{\rm{loc},k})$ and $B_{\rm{loc},k}$ evolve as:
\begin{equation*}\label{Eq:LocalComputedData}
d_{\rm{loc},k}(\ell_{\rm{loc},k})\!=\! \min\left\{ B_{\rm{loc},k-1}+\ell_{\rm{loc},k}, \frac{ \tau_k f}{C}\right\} ~\text{and}~B_{\rm{loc},k}\!=\!\sum_{j=1}^k \ell_{\rm{loc},j}-\sum_{j=1}^{k} d_{\rm{loc},j}(\ell_{\rm{loc},j}), ~~ k=1,\cdots, \tilde{K},
\end{equation*} with $B_{\rm{loc},0}=0$. Under the data causality constraints in \eqref{Eq:LocalComputingDataCausality}, the problem of proportional data partitioning can be formulated as follows.
\begin{equation*}({\bf P6})\qquad
\begin{aligned}
\min_ {\vartheta, \boldsymbol{\ell}_{\rm{loc}}\ge \boldsymbol{0} } \quad& \left[\sum_{k=1}^{\tilde{K}-1} (1-\vartheta) L_k\right] C P_{\rm{cyc}}+\hat{E}_{\rm{off}}(\vartheta)\\
\rm{s.t.}\quad
& \sum_{k=1}^{\tilde{K}} \ell_{\rm{loc},k}= \sum_{k=1}^{\tilde{K}} d_{\rm{loc},k}(\ell_{\rm{loc},k})= \sum_{k=1}^{\tilde{K}-1} (1-\vartheta)L_k,\\
& \sum_{j=1}^k \ell_{\rm{loc},j}\le \sum_{j=1}^{k-1} (1-\vartheta)L_j, & k=1,\cdots, \tilde{K}.
\end{aligned}
\end{equation*}
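To make the recursion for $d_{\rm{loc},k}$ and $B_{\rm{loc},k}$ concrete, the following sketch (our own helper) evolves the local-computing state epoch by epoch and can be used to check the feasibility of a candidate $(\vartheta, \boldsymbol{\ell}_{\rm{loc}})$ in Problem P6.
\begin{verbatim}
def local_computing_evolution(ell_loc, tau, f, C):
    # d_k = min{B_{k-1} + ell_k, tau_k * f / C};
    # B_k = B_{k-1} + ell_k - d_k, equivalent to
    # B_k = sum(ell) - sum(d), with B_0 = 0.
    B, d_list = 0.0, []
    for ell_k, tau_k in zip(ell_loc, tau):
        d_k = min(B + ell_k, tau_k * f / C)
        B += ell_k - d_k
        d_list.append(d_k)
    return d_list, B  # all admitted bits computed iff B == 0 at the end
\end{verbatim}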
\subsection{Energy-Efficient Adaptive Offloading}\label{Sec:BurstyP2P}
In this sub-section, the energy-efficient offloading policy is derived by defining an alternative offloading feasibility tunnel accounting for the bursty data arrivals.
The problem feasibility conditions are decided by the offloading feasibility tunnel summarized shortly. One of the necessary conditions is that the total offloaded data size is no larger than the helper's CPU resource, i.e., $\sum_{k=1}^{K-1} \vartheta L_k\le U_{{\rm{bit}},K}$. In the following, we solve Problem P5 conditioned on the full-utilization and underutilization of the helper-CPU, respectively.
\subsubsection{Full-Utilization of Helper-CPU}
The solution approach requires the definition of an offloading feasibility tunnel determined by the data causality constraints.
To define the tunnel, we derive the conditions that specify its floor and ceiling. First, similar to Section~\ref{Sec:FullCPU}, the deadline constraint imposes the constraints on the minimum accumulated offloaded data size in \eqref{Eq:LowerBoundary}, specifying the tunnel floor. Next, the data causality constraints for offloading in \eqref{Eq:DataCausalityOff} determine the tunnel ceiling. Combining them, we define the corresponding offloading feasibility tunnel as follows.
\begin{align}\label{Eq:OffFeaTunnel}
& \text{(Offloading Feasibility Tunnel for Bursty Data Arrivals)}\nonumber\\
&\mathcal{T}_B (\vartheta)\!=\!\left\{ \boldsymbol{\ell} ~\bigg|~U_{{{\rm{bit}}},k} \le \sum_{j=1}^k \ell_j \le \sum_{j=1}^{k-1} \vartheta L_j, ~\text{for}~ k=1,\cdots, K-1,~\text{and}~ \sum_{k=1}^{K} \ell_k \!=\!\sum_{k=1}^{K-1} \vartheta L_k\right\}.
\end{align}
The graphical illustration of the tunnel is given in Fig.~\ref{Fig:BurstyOptimalScheduling}. It suggests that Problem P5 is feasible if and only if the tunnel ceiling is never below the tunnel floor. Mathematically, $U_{{{\rm{bit}}},k} \le \sum_{j=1}^{k-1} \vartheta L_j$, for $k=1, \cdots, K$.
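This feasibility test is straightforward to automate; a minimal sketch (ours; zero-based arrays, with entry $k$ corresponding to epoch $k+1$ of the text) is:
\begin{verbatim}
def bursty_tunnel_feasible(U_bit, L, theta):
    # Check U_{bit,k} <= theta * sum_{j<k} L_j for all k, i.e., the
    # ceiling of the tunnel never dips below its floor.
    acc = 0.0                 # data arrived before the current epoch
    for k in range(len(U_bit)):
        if U_bit[k] > theta * acc + 1e-12:  # slack for round-off
            return False
        acc += L[k]
    return True
\end{verbatim}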
\begin{figure}[t]
\begin{center}
\includegraphics[height=5.6cm]{./FigBurstyOptimalScheduling.pdf}
\caption{An offloading feasibility tunnel (shaded in gray) for the case of bursty data arrivals and the energy-efficient transmission policy (the ``pulled string" in red).}
\label{Fig:BurstyOptimalScheduling}
\end{center}
\end{figure}
Given that Problem P5 is feasible, it can be transformed into the problem with the constraints replaced by the offloading feasibility tunnel. Again, the corresponding energy-efficient offloading policy can be computed by the said ``string-pulling" algorithm.
\subsubsection{Underutilization of Helper-CPU \emph{[$\ell < U_{{\rm{bit}}, K}$]}} \label{Sec:UnderutilizationEgyBursty}
For this case, similar to the one-shot data arrival counterpart, the key step is to define an \emph{effective offloading feasibility tunnel}.
Similar to Section~\ref{Sec:UnderutilizationEgy}, given the helper's CPU-idling profile and the deadline constraint, the amount of unused computable bits is $\bar{\Delta}(\vartheta) = (U_{{\rm{bit}},K}-\sum_{k=1}^{K-1} \vartheta L_k)$-bit and the accumulated computed bits can be bounded similarly to \eqref{Eq:NewComputed}. Using \eqref{Eq:NewComputed} and the data causality constraints for offloading in \eqref{Eq:DataCausalityOff}, an effective offloading feasibility tunnel is defined as follows.
\begin{align}\label{Eq:EffectiveOffTunnel}
& \text{(Effective Offloading Feasibility Tunnel for Bursty Data Arrivals)}\nonumber\\
&\bar{\mathcal{T}}_B(\vartheta)=\left\{ \boldsymbol{\ell} ~\bigg|~[U_{{{\rm{bit}}},k}-\bar{\Delta}(\vartheta)]^{+} \le \sum_{j=1}^k \ell_j \le \sum_{j=1}^{k-1} \vartheta L_j, \right.\nonumber\\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left.~\text{for}~ k=1,\cdots, K-1,~\text{and}~ \sum_{k=1}^{K} \ell_k =\sum_{k=1}^{K-1} \vartheta L_k\right\}.
\end{align}
Note that compared with the offloading feasibility tunnel $\mathcal{T}_B(\vartheta)$, the current tunnel has a lower floor, which can potentially reduce the transmission-energy consumption. Moreover, since $\mathcal{T}_B(\vartheta)$ can be regarded as a special case of the current tunnel $\bar{\mathcal{T}}_B(\vartheta)$ for which $\sum_{j=1}^{K-1} \vartheta L_j=U_{{\rm{bit}},K}$, the feasibility conditions for Problem P5 can be easily derived, as stated in the following lemma.
\begin{lemma}\label{Lem:FeaConP5}\emph{Problem P5 is feasible if and only if $0\le \vartheta \le \vartheta_{\max}$ where
\begin{equation}\label{Eq:MaxTheta}
\vartheta_{\max}=\min\left\{1, \min_k \left\{ \frac{U_{{\rm{bit}},K}-U_{{\rm{bit}},k}}{\sum_{j=k}^{K-1} L_j} \right\}
\right\}.
\end{equation}}
\end{lemma}
Next, given that Problem P5 is feasible, the lemma below states an important property of the defined effective offloading feasibility tunnel, proved by a method similar to that used for Proposition~\ref{Pro:OptRegion}.
\begin{lemma}\label{Lem:BurstyOptRegion}\emph{Consider the case where the helper has a large buffer and the user has bursty data arrivals for offloading. For the case of underutilization, the energy-efficient transmission policy can be derived by forming a shortest path in the effective offloading feasibility tunnel.
}
\end{lemma}
Thus, Problem P5 for the current case can be transformed into the problem with the constraints replaced by the effective offloading feasibility tunnel, and solved by the ``string-pulling" approach.
\subsection{Energy-Efficient Proportional Data Partitioning}
In this sub-section, the energy-efficient proportional data partitioning is transformed into the same form as the counterpart with one-shot data arrival and solved using a similar method.
First, consider the feasibility of Problem P6. It is feasible if and only if there exists a data-partitioning ratio for which both the adaptive offloading and local computing at the user are feasible. For each ratio, the former can be verified via the slave Problem P5 in the preceding sub-section, and the latter is analyzed as follows. Similar to the effective offloading feasibility tunnel, given the constraints of deadline and data causality for local computing, we define an effective local-computing feasibility tunnel as
\begin{align}\label{Eq:EffectiveLocTunnel}
& \text{(Effective Local-Computing Feasibility Tunnel)}\nonumber\\
&\bar{\mathcal{T}}_{\rm{B, loc}}(\vartheta)=\left\{ \boldsymbol{\ell}_{\rm{loc}} ~\bigg|~\left[\frac{\hat{s}_k f}{C}-\bar{\Delta}_{\rm{loc}}(\vartheta)\right]^{+} \le \sum_{j=1}^k \ell_{\rm{loc},j}\le \sum_{j=1}^{k-1} (1-\vartheta)L_j, \right.\nonumber\\ & ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~\left.~\text{for}~ k=1,\cdots, \tilde{K}-1,~\text{and}~ \sum_{k=1}^{\tilde{K}} \ell_{{\rm{loc}},k} =\sum_{k=1}^{\tilde{K}-1} (1-\vartheta) L_k\right\}
\end{align}
where $\bar{\Delta}_{\rm{loc}}(\vartheta)=\frac{T f}{C}-\sum_{j=1}^{\tilde{K}-1} (1-\vartheta)L_j$. Local computing is feasible if and only if the tunnel ceiling is never below the tunnel floor. Combining the feasibility conditions for local computing and offloading yields the feasibility conditions for Problem P6 in the following lemma.
\begin{lemma}\label{Lem:FeaConP6}\emph{Problem P6 is feasible if and only if $\vartheta_{\min}\le \vartheta \le \vartheta_{\max}$ where
\begin{equation}
\vartheta_{\min}=\left[1- \min_k \left\{ \frac{f(T-\hat{s}_k)/C}{\sum_{j=k}^{\tilde{K}-1} L_j}
\right\}\right]^+
\end{equation}
and $\vartheta_{\max}$ is defined in \eqref{Eq:MaxTheta}.}
\end{lemma}
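Putting Lemmas~\ref{Lem:FeaConP5} and \ref{Lem:FeaConP6} together, the feasible range of $\vartheta$ can be computed directly; the sketch below is ours (zero-based arrays with entry $k$ corresponding to epoch $k+1$ of the text, and the same arrival sizes $L_k$ feeding both checks).
\begin{verbatim}
def partition_ratio_range(U_bit, s_hat, L, T, f, C):
    # theta_max per Lemma (FeaConP5); theta_min per Lemma (FeaConP6).
    K, Kt = len(U_bit), len(s_hat)    # offloading / local epochs
    theta_max, theta_min = 1.0, 0.0
    for k in range(K - 1):
        tail = sum(L[k:K - 1])        # epochs k+1, ..., K-1 of the text
        if tail > 0:
            theta_max = min(theta_max, (U_bit[-1] - U_bit[k]) / tail)
    for k in range(Kt - 1):
        tail = sum(L[k:Kt - 1])
        if tail > 0:
            theta_min = max(theta_min,
                            1.0 - f * (T - s_hat[k]) / (C * tail))
    return max(theta_min, 0.0), theta_max   # [.]^+ on theta_min
\end{verbatim}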
Using Lemma~\ref{Lem:FeaConP6}, Problem P6 can be transformed into:
\begin{equation*}({\bf P7})\qquad
\begin{aligned}
\min_ {\vartheta} \quad& \left[\sum_{k=1}^{\tilde{K}-1} (1-\vartheta) L_k\right] C P_{\rm{cyc}}+\hat{E}_{\rm{off}}(\vartheta)
\qquad \rm{s.t.}~~
& \vartheta_{\min}\le\vartheta\le \vartheta_{\max}.
\end{aligned}
\end{equation*}
Problem P7 has a form similar to that of Problem P2. Using a similar approach, Problem P7 can be proved to be convex, and the optimal data-partitioning ratio can be computed using the sub-gradient method. The details are omitted for brevity.
\section{Simulation Results}\label{Sec:Simu}
The simulation parameters are set as follows unless specified otherwise. First, the computation deadline is set as $T = 0.1$ s. For local computing, the CPU frequency is $f=1$ GHz. The required number of CPU cycles per bit is $C=500$ cycle/bit and each CPU cycle consumes energy $P_{\text{cyc}}=10^{-10}$ J with $\gamma\!=\!10^{-28}${\color{black}\cite{chen2015efficient,you2016energy}}. For offloading, {\color{black}{we assume that the signal attenuation from the user to the helper is $60$ dB, corresponding to a distance of $10$ meters, and the channel $h$ is generated from Rayleigh fading \cite{ju2014throughput}.}}
Moreover, the bandwidth $B=1$ MHz and the variance of complex-white-Gaussian-channel noise $N_0=-70$ dBm. Next, for the helper, its CPU frequency is $f_h=5$ GHz. The helper-CPU state alternates between idle and busy. Both the idle and busy intervals follow independent exponential distributions, where the expected busy interval is fixed as $0.02$ s and the expected idling interval is a variable.
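For reproducibility, these values can be collected in one place, together with a sampler for the helper-CPU state process (a sketch; whether the profile starts idle or busy is not specified above, so the sampler arbitrarily starts idle):
\begin{verbatim}
import numpy as np
rng = np.random.default_rng(0)

T, f_u, C  = 0.1, 1e9, 500    # deadline [s], user CPU [Hz], cycles/bit
P_cyc, f_h = 1e-10, 5e9       # energy per cycle [J], helper CPU [Hz]
BW, N0_dBm = 1e6, -70         # bandwidth [Hz], noise variance [dBm]

def helper_cpu_states(T, mean_idle, mean_busy=0.02):
    # Alternating idle/busy timeline with exponential interval lengths.
    t, idle, states = 0.0, True, []
    while t < T:
        dur = rng.exponential(mean_idle if idle else mean_busy)
        states.append(("idle" if idle else "busy", t, min(t + dur, T)))
        t += dur
        idle = not idle
    return states
\end{verbatim}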
\subsection{One-Shot Data Arrival}
Consider the case where the user has one-shot input data arrival and the helper has a large buffer.
We evaluate the performance in terms of the \emph{computing probability} and the user's energy consumption. Specifically, the computing probability is defined as the probability that the user finishes the given computation load via simultaneous offloading and local computing. For comparison, a \emph{benchmark policy} is considered, for which the P2P transmission rate follows the helper's CPU-idling profile and the data partitioning is optimized using the sub-gradient algorithm.
\begin{figure}[t!]
\centering
\subfigure[Computing probability.]{\label{Fig:Simu_Successful}
\includegraphics[width=8cm]{./FigSimu_Successful_Resub.pdf}}
\subfigure[User's energy consumption.]{\label{Fig:Simu_Egy}
\includegraphics[width=8cm]{./FigSimu_Egy_Resub.pdf}}
\caption{\color{black}{Effects of helper-CPU idling interval on the computing probability and user's energy consumption for the case of one-shot data arrival and a large buffer at the helper.}}
\end{figure}
Fig.~\ref{Fig:Simu_Successful} shows the curves of computing probability versus the expected helper-CPU idling interval. One can observe that the computing probability increases as the user's computation load $L$ decreases or the idling interval increases. Moreover, the computing probability grows at a higher rate when the helper has a relatively small expected CPU idling interval.
The curves of the user's energy consumption versus the expected helper-CPU idling interval are plotted in Fig.~\ref{Fig:Simu_Egy}. Several observations are made as follows. First, the energy consumption is monotone-decreasing in the helper-CPU idling interval, since a longer interval allows the user to reduce the transmission rate and thus the transmission-energy consumption. However, the energy consumption saturates when the expected helper-CPU idling interval is large. Next, observe that the optimal policy achieves substantially higher energy savings compared with the benchmark policy, since the former exploits the helper-CPU busy intervals for P2P transmission.
\begin{figure}[t!]
\begin{center}
\includegraphics[width=8cm]{./FigSimu_Egy_Buffer_Resub.pdf}
\caption{\color{black}{Effects of buffer size on the user's energy consumption.}}
\label{Fig:Simu_Egy_Buffer}
\end{center}
\end{figure}
Last, the effects of buffer size on the user's energy consumption are shown in Fig.~\ref{Fig:Simu_Egy_Buffer}. Consider a baseline \emph{lazy-first} scheme, which postpones co-computing in the early idle epochs and then fully utilizes the helper's CPU resource in the later epochs. The computation load is set as $L=0.7$ Mb. One can observe that as the buffer size grows, the user's energy consumption first decreases owing to the buffer gain and then saturates when the buffer size is large. Next, compared with the lazy-first scheme, the proposed scheme of proportional CPU-utilization yields lower user energy consumption when the buffer size is small, but higher consumption when the buffer exceeds a threshold ({\color{black}{about $0.55$ Mb}}). The reason is that in the former case, the offloading policy tends to follow the helper's CPU profile, and the proportional CPU-utilization scheme distributes the buffer gain over all idle epochs, thereby reducing the variation of the offloading rates. When the buffer is sufficiently large, however, the lazy-first scheme is the optimal policy, as shown in Section~\ref{Sec:UnderutilizationEgy}. This observation coincides with Remark~\ref{Rem:BufferGain}. Other observations are similar to those from Fig.~\ref{Fig:Simu_Egy}.
\vspace{-5pt}
\subsection{Bursty Data Arrivals}
Consider the case where the user has bursty data arrivals. Specifically, the data inter-arrival interval follows an exponential distribution and, for each arrival, the data size is uniformly distributed. The expected helper-CPU idling interval is set as $0.02$ s. Consider a benchmark policy for performance comparison, for which the adaptive offloading follows the floor of the effective offloading feasibility tunnel and the proportional data partitioning is optimized using the sub-gradient algorithm.
\begin{figure}[t!]
\centering
\subfigure[Computing probability.]{\label{Fig:Simu_Successful_Bursty}
\includegraphics[width=8cm]{./FigSimu_Successful_Bursty_Resub.pdf}}
\subfigure[User's energy consumption.]{\label{Fig:Simu_Egy_Bursty}
\includegraphics[width=8cm]{./FigSimu_Egy_Bursty_Resub.pdf}}
\caption{\color{black}{Effects of user's expected data arrival size on the computing probability and user energy consumption for the case of bursty data arrivals.}}
\end{figure}
Fig.~\ref{Fig:Simu_Successful_Bursty} depicts the curves of computing probability versus the user's expected data arrival size under different expected data inter-arrival intervals. It is interesting to observe that the computing probability decreases \emph{linearly} with the user's expected data arrival size. Moreover, the decreasing rate is higher when the user has more frequent data arrivals, resulting from a shorter expected data inter-arrival duration.
The curves of the user's average energy consumption versus the expected data arrival size are shown in Fig.~\ref{Fig:Simu_Egy_Bursty}. One can observe that the user's energy consumption increases almost \emph{linearly} with the expected data arrival size. Moreover, the energy consumption grows more sharply when the user has more frequent data arrivals. Last, the optimal policy achieves higher energy savings compared with the benchmark policy, especially when the user has a large data arrival rate.
{\color{black}{\section{Conclusion}
In this paper, we have presented a new design for mobile cooperative computing that enables a user to exploit NC-CSI shared by a cooperative helper, so as to fully utilize random computation resources at the helper while offloading with minimum energy consumption. The design of the optimal offloading-control policies has been formulated as constrained optimization problems and solved using convex optimization theory. Thereby, we have re-discovered in these policies the optimal ``string-pulling" structure that also arises in transmission control for energy-harvesting systems. This work opens a new direction for mobile cooperative computing, namely applying computation prediction to enable the scavenging of random computation resources at edge devices. Along this direction lie many promising opportunities. In particular, the current design for a single user-helper pair can be extended to complex co-computing networks, addressing design issues such as applying computation prediction to facilitate joint radio-and-computation resource allocation and helper probing.}}
|
2,877,628,088,448 | arxiv | \section{INTRODUCTION}
Deep inelastic scattering from polarized targets continues to excite
interest among both theorists and experimentalists. When an electron
scatters off a spin-one target, such as a deuteron, new information
not present in the case of a spin-half target can be obtained
\cite{hood89}. A new, leading twist tensor-polarized structure
function, $b_1(x,Q^2)$, can be determined by measuring the cross section from a target aligned along the beam, and subtracting the cross
section for an unpolarized target. The function $b_1(x,Q^2)$ vanishes if
the spin-one target is made up of spin-half nucleons at rest or in a
relative $s$-wave. In the parton model it measures the difference in
the quark momentum distributions of a helicity $1$ and $0$ target,
\begin{equation}
b_1 = \frac{1}{2}(2q^0_\uparrow - q^1_\uparrow - q^1_\downarrow),
\label{hel}
\end{equation}
where $q^m_\uparrow$ ($q^m_\downarrow$) is the probability to find a
quark with momentum fraction $x$ and spin up (down) along the
$z$-axis, in a hadron (nucleus) with helicity $m$, moving with
infinite momentum along the $z$-axis. $b_1(x,Q^2)$ has not yet been measured
experimentally. Two recent papers have studied the effect of multiple
scattering on $b_1$ and found large contributions at small-$x$. Our
aim is to explore these issues in the context of vector meson
dominance (VMD), where some of the uncertainties evident in
Refs.~\cite{niko97,edel97} are more explicit. Within the range of
these uncertainties we find that multiple scattering does produce a
large contribution to $b_1(x,Q^2)$ at small $x$. Our estimates are
smaller than those of Refs.~\cite{niko97,edel97} by factors of $1.5$
-- $2.5$, differences which are not unexpected given the conceptual differences
between their approaches and ours. Nikolaev and Sch\"afer~\cite{niko97}
use the pomeron structure function in the proton to extract the
diffractive shadowing contribution. Their results have been presented
for $Q^2=10\, {\rm GeV}^2$. Edelmann {\it et al.\/}\cite{edel97}
estimate $b_1$ by expressing it in terms of
$\displaystyle{\frac{\delta F_2}{F_2}}$. Our analysis does not
support such a simple scaling relation between $b_1$ and the shadowing
of $F_2$, although the two originate in the same double scattering
mechanism. Furthermore, the authors of Ref.~\cite{edel97} do not
specify the scale at which their results should apply. Given these
differences of approach, we view our results as qualitative
confirmation of the work of Refs.~\cite{niko97,edel97} in a specific,
rather well defined, model.
Deep inelastic scattering from nuclear targets is usually discussed in the
context of the ``convolution model'',\cite{jaffe88} where it is assumed that
the
constituents of the nucleus scatter incoherently. An essential assumption
of the convolution model is that a quark residing inside the nucleon
absorbs the virtual photon (in a typical DIS process) while the
fragments of the nucleus and the constituents propagate into the final
state without interaction or interference. In the convolution model
$b_1$ vanishes if the $d$-state admixture in the deuteron is ignored
\cite{hood89}. The contribution to $b_1$ from the deuteron $d$-state
was studied in Ref. \cite{khan91}, along with the contribution from
double scattering from the two nucleons, which amounts to a coherent
contribution to the amplitude. In Ref.~\cite{khan91} the double scattering
process was studied at the parton level.
In the present work, we investigate the behavior of $b_1(x,Q^2)$ in a
vector meson dominance (VMD) \cite{baur78} model. Of course deep inelastic
scattering at large-$Q^2$ should be discussed in terms of quarks and
gluons. If taken literally at large $Q^2$, VMD has the wrong $Q^2$
dependence. VMD can be used, however, to provide ``boundary value data'' ---
{\it i.e.\/} starting values for parton distribution functions --- at low-$Q^2$
where the assumptions of VMD are well-founded. We choose VMD because it lends
itself to the treatment of multiple scattering effects that violate the
convolution model and may give rise to a significant contribution to $b_1$.
Also, because vector meson production data are available, cross sections
necessary for our analysis can be found in the literature. The cost of this
increased certainty is the need to identify a scale $Q_0^2$ to be assigned to
the output of the model.
Double scattering (which we will refer to as ``shadowing'') and the
$d$-state admixture in the deuteron play a crucial role in our VMD
treatment as they do in Refs.~\cite{khan91,niko97,edel97}. According
to VMD, the virtual photon can fluctuate between a bare photon state and a
superposition of hadronic states with the same quantum numbers as the
photon ($J^{PC} = 1^{--}$). In the simplest form of VMD this state is
taken to be a superposition of $\rho$, $\omega$ and $\phi$ mesons,
\begin{equation}
\sqrt{\alpha}\left|h\right> = \sum_V\frac{e}{f_V}\frac{m_V^2}{m_V^2+Q^2}
\left|V\right>,
\label{vmd}
\end{equation}
where $em_V^2/f_V$ is the photon vector meson coupling, $\sqrt{\alpha}
\left|h\right>$ is the hadronic component of the photon, and $Q^2$ is
the virtuality of the spacelike virtual photon. As usual in VMD, we
assume that
the vector meson interacts diffractively with the nucleon and that the
$t$-dependence of the VMD amplitudes can be taken from vector meson
photoproduction.
The VMD contribution to $b_1$ is constrained at both large and small $x$ by
simple physical effects. For multiple scattering effects to be significant, the
time (or distance) over which the virtual vector meson can propagate through the
target nucleus (known as the ``coherence length''$\equiv\lambda$) must be long
enough for the meson to undergo more than one interaction with the target. We
shall see that the coherence length is determined kinematically by the
uncertainty principle. At large Bjorken-$x$ ($x\ge 0.3$) $\lambda$ is smaller
than the size of a single nucleon, so double scattering contributions to $b_1$ are
suppressed. At small $x$, double scattering can be important. In order to
contribute to $b_1$ it must distinguish between the helicity $\pm 1$ and
helicity $0$ states of the deuteron. If the amplitude for $\gamma^\ast
p\rightarrow V X$ fell quickly with (transverse) momentum transfer,
corresponding to long range in impact parameter space, then shadowing could not
distinguish the orientation of the nucleons in the deuteron and $b_1$ would be
small at small $x$. At the opposite extreme, if $\gamma^\ast p\rightarrow V X$
were flat in momentum transfer, corresponding to a $\delta$ function in impact
parameter, then shadowing would occur only when one nucleon was directly ``in
front of'' the other. The quadrupole admixture in the deuteron wavefunction
produces just such a deformation of the wavefunction in one helicity state
relative to the other. The deuteron is a relatively large bound state, and the
range of vector meson electroproduction is limited, so the actual situation most
closely resembles the second scenario and leads to a significant enhancement in
$b_1$ at small $x$.\footnote{We thank G.~Piller and N.~Nikolaev for valuable suggestions on this topic.}
The paper is organized as follows. In Section II, we present the
theoretical formulation of the model, reviewing VMD and the double
scattering analysis. Section III contains calculations and results.
Throughout the paper, we have attempted to keep the analysis simple and
self-explanatory.
\section {Formulation of the Model}
$b_1$ measures a tensor (spin-two) correlation of the momentum
distribution of quarks with the spin of the target in DIS. Such a
correlation must vanish in a spin-$\frac{1}{2}$ target on account of
the Wigner-Eckart Theorem. In principle, any spin-one target
can have a non-vanishing $b_1$. Two nucleons bound in an $s$-wave
cannot give $b_1\neq 0$. What is not so obvious, perhaps, is that a
$d$-state admixture in a $J=1$ bound state of two nucleons will
generically give $b_1\ne 0$
\cite{hood89,khan91}. It is well known that the ground
state of the deuteron is not spherically symmetric: it is (primarily)
an admixture of
the states $^3S_1$ ($\ell=0$,$S=1$) and $^3D_1$ ($\ell=2$,$S=1$).
This admixture produces (and was first detected through) the
observation of an electromagnetic quadrupole moment of the deuteron.
The observation of a non-zero $b_1$ through the asymmetry $b_1/F_1$
probes the same aspects of the nucleon-nucleon force as does the
deuteron's quadrupole moment.
To expose the physical significance of various structure functions, it
is useful to describe Compton scattering in terms of helicity
amplitudes. The lepton scattering cross section from a hadron target involves
the hadron tensor
\begin{equation}
W_{\mu\nu}(p,q,H_1,H_2)={1\over4\pi}\int d^4x \, e^{iq\cdot x}
\left<p,H_2|[J_\mu(x),J_\nu(0)]|p,H_1\right>
\label{hadtens}
\end{equation}
which is the imaginary part of the forward current-hadron scattering amplitude.
Here $H_1$ and $H_2$ are components of the target spin along a
quantization axis and $J_\mu$ is the electromagnetic current. $W_{\mu\nu}$
can be decomposed into a set of four linearly independent structure functions
for a spin-half target using parity and time reversal invariance, while for
a spin-one target, the number of linearly independent structure functions is
eight \cite{hood89}. Thus, $b_1(x,Q^2)$ can be related to the helicity
amplitudes $A_{h_1H_1,h_2H_2}$ for the process $h_1
+H_1 \rightarrow h_2 +H_2$, where $h_j$ ($H_j$) labels the helicity
of the photon (target), and
\begin{equation}
A_{h_1H_1,h_2H_2} = \epsilon^{\ast\mu}_{h_2}\epsilon^\nu_{h_1}W_{\mu\nu},
\label{amp}
\end{equation}
where $\epsilon^\mu_h$ is the polarization vector of a photon of helicity $h$. It can
be shown that
\begin{equation}
b_1(x,Q^2) = {1\over2}(2A_{+0,+0} -A_{++,++} - A_{+-,+-}).
\label{pol}
\end{equation}
The structure functions $W_1$ and $W_2$ in unpolarized DIS of an
electron from a proton target can be described in terms of the
photoabsorption cross sections $\sigma_T$ and $\sigma_L$ for transverse
(helicity $\pm1$) and longitudinal (helicity $0$) photons respectively
as
\begin{eqnarray}
W_1 & = & \frac{K}{4\pi^2\alpha}\sigma_T,
\label{W1}\\
W_2 & = & \frac{K}{4\pi^2\alpha}(\sigma_T+\sigma_L)\frac{Q^2}{Q^2+\nu^2}.
\label{W2}
\end{eqnarray}
Here $K$ is the incident virtual photon flux and $\nu$ is its energy in the
laboratory frame.
Taking suitable combinations of helicities we can separate out
$b_1$. We separate the contributions to $b_1$ into single and double
scattering terms. The single scattering terms are given by the
convolution formalism and reflect the $d$-state admixture in the
deuteron ground state \cite{hood89,khan91}. We put these aside and
focus on the double scattering contributions, which are given by,
\begin{equation}
b_1^{(2)}(x,Q^2)\ = \frac{Q^2}{8\pi^2x\alpha}\left\{\left.\delta\sigma^{(2)}_{\gamma^\ast D}
\right|_{m=0} - \left.\delta\sigma^{(2)}_{\gamma^\ast D}\right|_{m=1}\right\},
\label{lab}
\end{equation}
where
$\delta\sigma^{(2)}_{\gamma^\ast D}$ signifies the double scattering
shadowing correction contribution to the deuteron cross section,
\begin{equation}
\sigma_{\gamma^\ast D} = \sigma^{(1)}_{\gamma^\ast p} + \sigma^{(1)}_{\gamma
^\ast n} + \delta\sigma^{(2)}_{\gamma^\ast D}.
\label{cross}
\end{equation}
In all cases $\sigma$ refers to the cross section for transverse photons ---
the subscript $T$ has been dropped for simplicity.
Glauber multiple scattering theory \cite{glauber59} is usually used
to describe the interaction of high energy particles with nuclei. The
basic assumption of the Glauber treatment is that the amplitude for a
high energy hadron to interact with a nucleus can be built from the
scattering amplitude off individual nucleons. Here we shall employ
another analysis \cite{gribov72} that uses a Feynman diagram
technique, which reduces to the optical model results of Glauber
theory in the limit of large A with a general one particle nuclear
density. The double scattering contribution to photon-nucleus
scattering can be represented diagramatically as shown in
Fig.~\ref{fig1}a. The following assumptions are made in this
analysis:~ spin and any other internal degrees of freedom are
neglected (except where necessary to isolate $b_1$), all the nucleons
are assumed to be equivalent, the momentum transfer from the incident
hadron (here the vector meson) to the target nucleus is small, and the nucleus
and nucleons are nonrelativistic. The vector mesons act as the
intermediate states during the double scattering. Therefore the
double scattering picture looks as in
Fig.~\ref{fig1}b, and the singularities of the amplitude $T$ as a
function of momenta of the intermediate vector mesons $V$ are
isolated. They correctly correspond to the
propagation of the vector meson between nucleons.
We fix the momentum of the virtual photon to be in the $z$
direction
\begin{equation}
q^\mu = (\nu,0,0,\sqrt{Q^2+\nu^2}),
\label{photon}
\end{equation}
and define $\vec x = (\vec b, z)$. The momentum $V^\mu$ of the vector
meson can be seen to be
\begin{equation}
V^\mu = q^\mu + P^\mu - P'^\mu = (\nu,k_x,k_y,q_z+k_z),
\label{meson}
\end{equation}
where $P^\mu$, $P'^\mu$ are the four-momenta of initial and
final nucleon states, $N$ and $N'$ respectively, and $t=-(P-P')^2 =
k^2 $ is the momentum transfer squared. Finally, the expression for
the double scattering takes the form
\begin{eqnarray}
A^{(2)}_{\gamma^\ast D} & = & \frac{-1}{(2\pi)^3 2M}\int d^2 b\,
dz\, |\psi(\vec b,z)|^2
\sum_V \int d^3k\, T'_{\gamma^\ast N\rightarrow NV}\times\nonumber\\
& \times &
\frac{e^ {i(k_zz+\vec k_\perp\cdot\vec b)}}
{\nu^2-M_V^2-{\vec k_\perp}^2-(q_z+k_z)^2 +i\epsilon} T'_{NV\rightarrow
\gamma^\ast N},
\label{form}
\end{eqnarray}
where $N$, $V$ represent the nucleon and intermediate vector meson respectively,
$T'_{\gamma^\ast N\rightarrow NV}$ is the production amplitude for vector
mesons ($\rho^0$, $\omega$, $\phi$), $M$ is the nucleon's mass and $M_V$ is the mass
of the vector meson. The amplitude $T'$ depends on the momentum transfer in the
$t$-channel --- the subprocess $\gamma^\ast N\rightarrow NV$ is not limited to
the {\it forward} direction. For small $t$, $t\approx -\vec k_\perp^2$, so the
$t$-dependence determines the range of shadowing in impact parameter space.
Even if the nucleons are misaligned in $\vec b$ space by a distance of the order
of the range of $\gamma^\ast p \rightarrow V X$, the vector meson can still
undergo a second interaction with the other nucleon. If the range of the
production amplitude in $\vec b$ were smaller than the deuteron wavefunction,
then we
could approximate $T'$ by its value at $t=0$. Since this is not the case we
shall have to integrate over $t$. The vector meson is not on-shell --- hence the
$i\epsilon$. The energy of the meson state is given by $E_V =
\sqrt{M_V^2+q_z^2}$ (for $\nu^2\gg |k|^2$), $q_z^2 = Q^2+\nu^2$. The energy
difference that defines the virtuality of the vector meson is therefore given by
$\Delta E = \sqrt{M_V^2+\nu^2+Q^2} - \nu$, which for large values of photon
energy can be written as $\Delta E = \displaystyle{\frac{Q^2+M_V^2}{2\nu}}$.
Therefore the
vector mesons can exist for a time $\Delta t \sim \displaystyle{\frac{1}
{\Delta E}}$, and can
propagate a distance $\lambda$, called its coherence length, $\lambda = \Delta t
= 1/\Delta E$,
\begin{equation}
\lambda = \frac{2\nu}{M_V^2+Q^2} = \frac{Q^2}{Mx(M_V^2+Q^2)}.
\label{coher}
\end{equation}
For significant shadowing or double scattering to occur, the coherence
length of the intermediate vector meson should be of order the typical
internucleon separation in the nucleus, $\sim 1.7$ fm. Thus, these
effects increase as $x$ decreases. Multiple scattering is most
prominent at small $x$ and for low-mass vector mesons. One can thus
justify the use of only the lowest-mass vector mesons in the present
case. Our model resembles partonic approaches to low-$Q^2$ shadowing
(see for example \cite{brodsky90}), where the virtual photon converts
to a $q\bar q$ pair at a distance before the target proportional to
$\displaystyle{\frac{1}{x}}$ in the laboratory frame. Shadowing is then explained in
terms of the $\bar q$-nucleon scattering amplitude. The symmetric $q\bar
q$ pairs at not too large $Q^2$, with a transverse separation $\sim
\displaystyle{\frac{1}{\sqrt{Q^2}}}$, can be viewed as a meson, with the strong color
interaction between quark and antiquark increasing with increasing
separation.
Now we return to Eq.~(\ref{form}). The optical theorem relates the
total cross section to the imaginary part of the forward scattering
amplitude as $\delta\sigma_{\gamma^\ast D} = \frac{1}{W^2_D}
\left.{\rm Im}\, A^{(2)}_{\gamma^\ast D}\right|_{t=0}$, with $W^2_D$ the
squared total center-of-mass energy of the $\gamma^\ast$-$D$ system, $W^2_D =
2W^2_N$, $W_N^2 = (p+q)^2 \equiv 2M\nu - Q^2$. To simplify
Eq.~(\ref{form}), we carry out the $k_z$ integration. Given the sign
of the exponential, only the singularity in the upper half $k_z$-plane
contributes. Since the vector meson interacts diffractively with the
nucleon, the double scattering diagram looks as shown in
Fig.~\ref{fig2}. The optical theorem relates the resulting on-shell
amplitude to the differential cross section for vector meson
photoproduction,
\begin{equation}
\left.\frac{d\sigma}{dt}\right|_{t=k^2} =
\frac{1}{16\pi}\sum_V\frac{|T'_{\gamma^\ast N
\rightarrow NV}|_{t=k^2}^2}{W_N^4},
\label{photo}
\end{equation}
where $\displaystyle{\left.\frac{d\sigma}{dt}\right|_{t=k^2\approx -{\vec
k_\perp}^2} =\left.\frac{d\sigma}{dt}\right|_{t=0}e^{-a{\vec k_\perp}^2}}$.
We estimate the $t$-dependence from photoproduction data, where
$a \approx 10.4$, $10.0$ and $7.3\, {\rm GeV}^{-2}$ for the $\rho$, $\omega$
and $\phi$ vector mesons, respectively.
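For completeness, we spell out the $k_z$ contour integration referred to
above; this is a routine step, written here with the same large-$\nu$
expansion used throughout. The propagator denominator in Eq.~(\ref{form})
factorizes as $-(q_z+k_z-\kappa)(q_z+k_z+\kappa)$ with
$\kappa=\sqrt{\nu^2-M_V^2-\vec k_\perp^2+i\epsilon}$, so the pole in the
upper half $k_z$-plane sits at
\begin{displaymath}
k_z = \kappa - q_z \approx -\frac{Q^2+M_V^2+\vec k_\perp^2}{2\nu}.
\end{displaymath}
Up to the small $\vec k_\perp^2$ correction, the residue therefore turns
$e^{ik_zz}$ into the coherence phase $e^{-iz/\lambda}$; since the deuteron
density factors below are even in $z$, this is equivalent to the phase
$e^{iz/\lambda}$ appearing in Eqs.~(\ref{looks}) and (\ref{F1}).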
Next, we consider the deuteron form factor terms in Eq.~(\ref{form}). We can
write the deuteron wavefunction as a mixture of $s$- and $d$-states (for spin projection $m=1$)
\begin{equation}
\psi_{m=1} = \frac{u_0(r)}{r}Y^0_0({\Omega})\chi^1_1 + \frac{u_2(r)}{r}\left\{\sqrt{\frac{3}{5}}
Y^2_2({\Omega})\chi^{-1}_1 - \sqrt{\frac{3}{10}} Y^1_2({\Omega})\chi^0_1 + \sqrt{\frac{1}{10}}
Y^0_2({\Omega})\chi^1_1\right\},
\label{rep}
\end{equation}
where $Y$'s are the spherical harmonics and $\chi$'s are the spin wave
functions. Using the orthogonality of the $\chi$ functions, we get
\begin{eqnarray}
|\psi_{m=1}|^2 & = & \frac{u_0^2}{r^2}Y^{0\ast}_0 Y^0_0 +
\sqrt{\frac{1}{10}}
\frac{u_0u_2}{r^2}Y^{0\ast}_0 Y^0_2 + \frac{u_2^2}{r^2}\frac{3}{5}
Y^{2\ast}_2 Y^2_2 + \frac{3}{10}\frac{u_2^2}{r^2}Y^{1\ast}_2 Y^1_2\nonumber\\
& + &
\sqrt{\frac{1}{10}}\frac{u_0u_2}{r^2} Y^{0\ast}_2 Y^0_0 +
\frac{1}{10}\frac{u_2^2}{r^2}Y^{0\ast}_2 Y^0_2,
\label{ortho}
\end{eqnarray}
with the wave functions normalized by
\begin{equation}
\int_0^\infty dr\, [u_0^2(r) + u_2^2(r)] = 1.
\label{norml}
\end{equation}
Similarly for $m=0$,
\begin{eqnarray}
|\psi_{m=0}|^2 & = & \frac{u_0^2}{r^2} Y^{0\ast}_0 Y^0_0 -
\sqrt{\frac{2}{5}}
\frac{u_0u_2}{r^2} Y^{0\ast}_0 Y^0_2 + \frac{3}{10}\frac{u_2^2}{r^2}
Y^{-1\ast}_2 Y^{-1}_2 +
\frac{2}{5}\frac{u_2^2}{r^2} Y^{0\ast}_2 Y^0_2\nonumber\\
& + &
\frac{3}{10}\frac{u_2^2}{r^2} Y^{1\ast}_2 Y^1_2 -\sqrt{\frac{2}{5}}
\frac{u_0u_2}{r^2} Y^{0\ast}_2 Y^0_0.
\label{and}
\end{eqnarray}
Subtracting Eq.~(\ref{ortho}) from (\ref{and}) and retaining the dominant $u_0u_2$ interference term (the small $u_2^2$ quadrupole piece is dropped) gives
\begin{equation}
|\psi|^2_{m=0} - |\psi|^2_{m=1} =
\frac{-3}{4\sqrt{2}\pi}\frac{u_0(r)u_2(r)}{r^2}
(3\cos^2\theta - 1).
\label{sub}
\end{equation}
Combining the results summarized in Eqs.~(\ref{lab}), (\ref{form}),
(\ref{photo}) and (\ref{sub}), the final expression for the function
$b_2^{(2)}(x,Q^2)$ ($=2xb_1^{(2)}$) emerges:
\begin{eqnarray}
b_2^{(2)}(x,Q^2)\ & = & \frac{-3}{\pi^4}\frac{Q^2}{16\sqrt{2}\alpha}\,{\rm Im}\, i
\int d^2 b\,
\int dz \, u_0(r)u_2(r)
\frac{2z^2-b^2}{(z^2+b^2)^2}\times\nonumber\\
& \times &
\sum_V \int d^2\vec k_\perp e^{iz/\lambda} e^{i\vec k_\perp\cdot\vec b
-a\vec k_\perp^2}
\frac{M_V^4}{(M_V^2+Q^2)^2} \left.\frac{d\sigma}{dt}\right|_{\gamma N
\rightarrow VN,t=0}.
\label{looks}
\end{eqnarray}
Note that the crucial quadrupole factor ($3\cos^2\theta -1$)
translates into ($2z^2-b^2$) in Eq.~(\ref{looks}).
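It is worth noting that the transverse-momentum integral in
Eq.~(\ref{looks}) can be carried out in closed form,
\begin{displaymath}
\int d^2\vec k_\perp\, e^{i\vec k_\perp\cdot\vec b - a\vec k_\perp^2}
= \frac{\pi}{a}\, e^{-b^2/(4a)},
\end{displaymath}
which makes explicit how the slope parameter $a$ controls the range of the
shadowing in impact parameter space: the double scattering is cut off by a
Gaussian of rms width $2\sqrt{a}$, i.e. about $1.3$ fm for
$a\simeq 10\ {\rm GeV}^{-2}$.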
With similar arguments, the shadowing contribution to the unpolarized
structure function can be shown to be
\begin{eqnarray}
\delta F^{(2)}_1 & = & \frac{-Q^2}{16\pi^4 \alpha}\,{\rm Im}\, i\int d^2 b\,
\int dz \, \frac{1}{z^2+b^2}\left\{u_0^2(r) +\frac{3}{4}u_2^2(r)
\frac{b^4}{(z^2+b^2)^2}\right\}\times\nonumber\\
& \times &
\sum_V \int d^2\vec k_\perp e^{iz/\lambda}e^{i\vec k_\perp\cdot\vec b
-a\vec k_\perp^2}
\frac{M_V^4}{(M_V^2+Q^2)^2} \left.\frac{d\sigma}{dt}\right|_{\gamma N\rightarrow VN,t=0}.
\label{F1}
\end{eqnarray}
Since the diffractive photoproduction of vector mesons takes place via
pomeron exchange, the differential cross section for forward
scattering is of the form
$\displaystyle{\left.\frac{d\sigma}{dt}\right|_{\gamma^\ast N\rightarrow VN,
t=0}}\sim
W^{4(\alpha _P(0)-1)}$, where $\alpha_P(t=0) = 1+\delta$ is the soft
pomeron intercept. Thus it can be seen that the scaling violations in
$b_1^{(2)}(x,Q^2)$ are of the order of $\displaystyle{\frac{1}
{Q^{2(1-2\delta)}}}$, and
the contribution vanishes at large $Q^2$. In these models, the structure
function vanishes at large $Q^2$, and scaling can be restored within
the context of the model only if one takes into account the continuum
of heavier mesons (GVMD). Rather, we take the point of view that VMD
should not describe the $Q^2$ dependence because it is intrinsically a
low-$Q^2$ effective theory. VMD provides an estimate of certain (in
this case multiple scattering) contributions to the structure function
at a low scale, which are then mapped into the large-$Q^2$ domain by
standard QCD evolution.
\section{CALCULATIONS AND RESULTS}
The resulting behavior of $b_1^{(2)}(x,Q^2)$ using Eq.~(\ref{looks}) is
shown in Figs.~\ref{fig3}-\ref{fig4}. We have used the Bonn potential
\cite{mach87} for the deuteron wave function in the calculations. The
differential cross section for production of the vector
mesons $\rho^0, \omega, \phi$ has been taken from reference
\cite{zeus96} and earlier data from the references therein, with
forward-scattering values $\sim 139.0$, $10.4$ and $7.2
\, \displaystyle{\frac{\mu b}{{\rm GeV}^2}}$ for $\rho$ ($W=70\, $GeV),
$\omega$ ($W=80\, $GeV) and
$\phi$ ($W=70\, $GeV), respectively. Here $W$ corresponds to the mean
photon-proton center of mass energy. In
Fig.~\ref{fig3} we have presented the variation of $b_2^{(2)} =
2xb_1^{(2)}$ with $x$, for $10^{-4} \leq x \leq 1.0$ at $Q^2 = 0.1$, $1.0$,
$4.0$, and $10.0$ GeV$^2$. We observe that $b_2^{(2)}$ is significant
toward small $x$ values, behaving as $\sim\displaystyle{\frac{(
1-x)^{2\delta}}{x^{2\delta}}}$,
and is in general agreement with \cite{niko97,edel97}.
In Fig.~\ref{fig4} we have given the $Q^2$ behavior of $b_2^{(2)}(x,Q^2)$,
as predicted by the VMD model, at different values of $x$. That
$b_1$ vanishes at $Q^2=0$ is clear from Eq.~(\ref{looks}). It
vanishes at large $Q^2$ because the vector meson propagators and
the vector meson electroproduction cross section both fall with $Q^2$.
This can be explained by the
reduction in the coherence length of the vector mesons as $Q^2$
increases, at a fixed photon energy. Fig.~\ref{fig5} shows the
double scattering contribution to $F_2$ in deuteron using
Eq.~(\ref{F1}).
A few comments are in order here. Our results are more specific than those of
Refs.~\cite{niko97,edel97} because we have made more detailed assumptions about
the nature of the intermediate hadronic state. Of course we could add further
excited vector mesons to our calculation; however, their contribution would be
suppressed at the low $Q^2$ at which we work. We must still confront the
question: At which scale should we graft our VMD results onto standard QCD
evolution? Since we are interested in qualitative rather than quantitative
behavior, some uncertainty can be tolerated. $Q^2=0.1\, {\rm GeV}^2$ is clearly
too small --- QCD evolution is not justified at such small $Q^2$. $Q^2=
10\, {\rm GeV}^2$ is clearly too large --- simple vector dominance
is not justified at
such large $Q^2$. A choice in the range of the $\rho$ mass seems appropriate
where both VMD and QCD have claims to applicability.
To summarize, we have presented a model for the double scattering
contribution to the tensor structure function $b_1(x,Q^2)$ of the
deuteron. The analysis is based on double scattering of vector mesons
in electron-deuteron scattering. We have found that
the double
scattering contribution to $b_1(x,Q^2)$ is significant for $x\le 0.1$
and behaves as $\sim\displaystyle{\frac{(1-x)^{2\delta}}{x^{1+2\delta}}}$.
At large Bjorken-$x$ ($x\ge 0.3$) the
vector mesons can propagate only over distance scales of order the
size of a single nucleon, and multiple scattering contributions are
not significant. At very small $x$ ($x\le 10^{-2}$), the coherence
length of the meson increases and hence the contribution increases. Our
results agree qualitatively with those obtained in Refs.~\cite{niko97,edel97},
and confirm the fact that a significant enhancement in $b_1$ can be expected at
small $x$ due to the quadrupole deformation of the deuteron.
\section{Introduction}
\label{intro}
Kohn-Sham density functional theory (DFT) is the most widely used theoretical tool in material science
and in quantum chemistry \cite{burke2012perspective}. Its main ingredient is an accurate approximation
for the exchange-correlation energy-density functional for the electron system.
The basic approach consists in approximating
this functional within the Local Density Approximation (LDA) or, in the case of spin-polarized systems,
the Local Spin-Density Approximation (LSDA), using as an input accurate results of
{\it ab initio}
calculations of the equation of state of the uniform electron system.
The LSDA allowed researchers to make quite accurate
predictions for the ground state properties of a huge variety of materials \cite{ParrYang,KohnBecke}.
Known limitations of the LSDA approach to the study of condensed matter systems
are a less accurate representation of
excited-state properties (because DFT is a ground-state theory),
and the partial neglect of strong
electron-electron correlation effects in which electron-electron repulsion plays a prominent role,
like those
arising, e.g., between core electrons in transition metals and transition-metal compounds.
While numerous approximations more sophisticated than the LSDA exist,
including, e.g., generalized gradient approximations (GGA), meta-GGA, hyper-GGA, hybrid and generalized random-phase approximations\cite{perdew2001jacob,tao2003climbing,staroverov2004tests}, or the LDA+$U$ methods \cite{anisimov1991band} which include an effective Hubbard interaction term $U$, there is
at present no systematic procedure to improve
the accuracy of existing approximations
to the DFT of electron systems,
and to systematically converge to the exact
density functional.\\
In recent years, ultracold atoms trapped in optical lattices (OLs) have proven
to be an ideal platform to perform quantum simulations of
phenomena in the presence of strong inter-particle
correlations \cite{bloch2008many}. Most of the early theoretical works
focused on single-band discrete-lattice Hamiltonians ---
the most relevant being the Hubbard model --- which
properly describe the experimental realization of such systems
if the OL is very deep and the interactions are sufficiently weak~\cite{jaksch2005cold}.
These models capture the phenomenology of strongly correlated systems,
but they do not allow one to make quantitative predictions of
real materials' properties, as opposed to DFT methods.
More recently, researchers have also addressed shallow OLs employing
continuous-space models, studying phenomena
such as itinerant ferromagnetism~\cite{pilati2014}, bosonic superfluid-Mott insulator transitions~\cite{pilati2012}, and pinning localization transitions~\cite{de2012phase,boeris2016mott,astrakharchik2016one}.
It has also been proposed to use OL experiments as a testbed to develop more accurate approximations
for the exchange-correlation functional of DFT~\cite{dft}. In this respect, cold-atom systems offer
crucial advantages with respect to solid-state systems, since experimentalists
are able to independently control the density inhomogeneity and the interaction strength by tuning, respectively, the OL intensity and a magnetic field close to a Feshbach resonance.\\
DFT methods have already been employed to study ultracold fermionic gases, allowing one to investigate phenomena such as ferromagnetism and antiferromagnetism in repulsive Fermi gases in shallow OLs~\cite{dft}, vortex dynamics in superfluid Fermi gases~\cite{bulgac2011real,bulgac2014quantized}, superfluidity and density modulations in dipolar Fermi gases~\cite{ancilotto2016kohn}, vortices in rotating dipolar Fermi gases~\cite{ancilotto2015kohn}, and the formation of ferromagnetic domains in trapped clouds~\cite{zintchenko2016ferromagnetism}.
These studies employed exchange-correlation functionals based on the LSDA. However, it has not yet been analyzed in detail in which regimes this approximation is reliable.\\
The main goals of this article are (i) to assess the accuracy of the LSDA for repulsive Fermi gases in
OLs and (ii) to provide an accurate benchmark for future studies aiming at developing beyond-LSDA approximations. To this aim, we mostly focus on the one-dimensional geometry, for which quantum Monte Carlo (QMC) simulations, based on the fixed-node method to circumvent the sign problem, provide exact results~\cite{ceperley1991fermion} within statistical uncertainties. Quantum fluctuations are known to play a more relevant role in one dimension than in higher-dimensional geometries, implying that the case we consider is a challenging testbed for the LSDA.
A systematic comparison between DFT calculations of ground state energies and density profiles for a
half-filled OL against the (exact) outcomes of the QMC simulations is presented. This allows us to
map the regime of OL intensities and interaction strengths
where the LSDA is accurate.
Furthermore, we consider a three-dimensional repulsive Fermi gas in a simple-cubic OL at quarter
filling, making also in this case a comparison between DFT calculations and QMC simulations for the
ground state energies.\\
The rest of the article is organized as follows: Secs.~\ref{secdft} and \ref{secqmc} provide the main details of the DFT calculations and of the QMC simulations, respectively. In Sec~\ref{sec1D} the results for the ground state energy and the density profiles of a half-filled one-dimensional OL are discussed. Section~\ref{sec3D} reports predictions for the ground state energy of the three-dimensional Fermi gas at quarter filling. Our conclusions and the outlook are reported in Sec.~\ref{conclusions}.
\begin{figure}
\includegraphics[width=1.0\columnwidth]{fig1.eps}
\caption{(Color online). Ground state interaction energy $E_{\mathrm{int}}= E-E_{\mathrm{id}}$ of a one-dimensional repulsive Fermi gas in a half-filled OL, as a function of the interaction parameter $a_{1D}/d$, where $a_{1D}$ is the one-dimensional scattering length and $d$ is the OL periodicity. $E$, $E_{\mathrm{id}}$, and $E_{\mathrm{fp}}$ are the energies of the interacting, the noninteracting, and the fully-polarized (Tonks-Girardeau) gas, respectively. The three datasets correspond to three OL intensities $V_0$, expressed in units of the recoil energy $E_r = \hbar^2\pi^2/(2md^2)$. Empty symbols represent DFT results, full symbols the QMC ones. Here and in the other figures for the one-dimensional OL the system size is $L=26d$.}
\label{fig1}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\columnwidth]{fig2.eps}
\caption{Relative error of the DFT ground state energy $E_{\mathrm{DFT}}$ with respect to the exact QMC result $E_{\mathrm{QMC}}$ as a function of the interaction parameter $a_{1D}/d$. The density is fixed at half filling $n=1/d$, and the three datasets correspond to three OL intensities $V_0/E_r$.}
\label{fig2}
\end{figure}
\section{Density functional theory for atomic Fermi gases in optical lattices}
\label{secdft}
In this Section, we consider a generic continuous-space Hamiltonian describing a two-component Fermi gas in $D$ dimensions:
\begin{equation}
H = \sum_{\sigma=\uparrow,\downarrow}
\sum_{i_\sigma=1 }^{N_\sigma }\left(-\Lambda\nabla^2_{i_\sigma} + V(\mathbf{x}_{i_\sigma} )\right)
+ \sum_{i_\uparrow,i_\downarrow}v(x_{i_\uparrow i_\downarrow})
\;,
\label{hamiltonian}
\end{equation}
where $\Lambda=\hbar^2/2m$, with $m$ the atomic mass and $\hbar$ the reduced Planck constant. The indices $i_\uparrow$ and $i_\downarrow$ label atoms of the two components, which we refer to as spin-up and spin-down
fermions, respectively.
The total number of fermions is $N = N_\uparrow + N_\downarrow$, and
$x_{i_\uparrow i_\downarrow} = \left|\mathbf{x}_{i_\uparrow}-\mathbf{x}_{i_\downarrow}\right|$ is the relative distance between opposite-spin fermion pairs.
$V(\mathbf{x})=V_0\sum_{\alpha=1}^D \sin^2\left(x_{\alpha}\pi/d\right)$ is a simple-cubic optical lattice potential with periodicity $d$
and intensity $V_0$, conventionally expressed in units of recoil energy $E_r=\Lambda\left(\pi/d\right)^2$. The system size $L$ is an integer multiple of the OL periodicity, and periodic boundary conditions are assumed.
$v(x)$ is a model repulsive potential, defined in Sections~\ref{sec1D} and \ref{sec3D} for the one-dimensional and the three-dimensional cases, respectively. Its intensity can be tuned in experiments using Feshbach resonances~\cite{chin}. Off-resonant intraspecies interactions in dilute atomic clouds are negligible at low temperature; hence they are not included in the Hamiltonian.\\
The Hohenberg-Kohn (HK) theorem \cite{hohenbergkohn}
states that the ground state energy $E$ of the many-body system
described by the Hamiltonian (\ref{hamiltonian})
is a functional of the one-particle densities $(\rho_\uparrow,\rho_\downarrow)$:
\begin{equation}
\label{spin_dependent_functional}
E\left[\rho_\uparrow,\rho_\downarrow\right] = \int d\mathbf{x}\, V( \mathbf{x})
\left[ \rho_\uparrow\left(\mathbf{x}\right) + \rho_\downarrow\left(\mathbf{x}\right) \right]+
F\left[\rho_\uparrow,\rho_\downarrow \right].
\end{equation}
The first term is the potential energy due to the external potential $V(\mathbf{x})$. The second term
is an unknown but universal functional which includes the kinetic energy and interaction functionals,
$F\left[\rho_\uparrow,\rho_\downarrow \right] \equiv T\left[\rho_\uparrow,\rho_\downarrow \right] +
V_{int}\left[\rho_\uparrow,\rho_\downarrow \right]
$,
but does not explicitly depend on the external potential $V( \mathbf{x})$.
\begin{figure}
\includegraphics[width=1.0\columnwidth]{fig3.eps}
\caption{Local density $\rho(x)$ of a repulsive Fermi gas in a half-filled one-dimensional OL, as a
function of the spatial coordinate $x$. The three datasets correspond to three OL intensities
$V_0/E_r$, while the interaction strength is fixed at the intermediate value $a_{1D}/d = -1$. $d$ is
the OL periodicity. The lines represent the DFT results, the empty symbols represent the QMC data. The
total system size is $L=26d$. Here and in Figs.~\ref{fig4}, \ref{fig5} and \ref{fig6} we only visualize
the range $0\leqslant x \leqslant 4d$ for the sake of
clarity. The QMC data for $x<0.3d$ have been removed to make the DFT curves more visible.}
\label{fig3}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\columnwidth]{fig4.eps}
\caption{Difference between the local density determined via DFT $\rho_{\mathrm{DFT}}(x)$ and the one obtained via QMC simulations $\rho_{\mathrm{QMC}}(x)$.
The interaction strength is $a_{1D}/d=-1$ and the (average) density is $n=1/d$.
The different symbols correspond to different OL intensities $V_0/E_r$. The thick continuous (red) curve represents the OL intensity profile, in arbitrary units.}
\label{fig4}
\end{figure}
In the Kohn-Sham formulation of the HK theorem \cite{kohnsham}
one writes the universal functional $F$ in the form
\begin{equation}
\label{kohn_sham_functional}
F\left[\rho_\uparrow,\rho_\downarrow \right]=T_0\left[\rho_\uparrow,\rho_\downarrow \right]
+V_H\left[\rho_\uparrow,\rho_\downarrow \right]
+E_{XC}\left[\rho_\uparrow,\rho_\downarrow \right],
\end{equation}
where
$T_0$ is the kinetic energy of
a fictitious system of {\it non-interacting} fermions, with the same densities
of the original one,
described by single-particle orbitals
$\{ \phi _\uparrow ^i({\mathbf x}),\,i=1,\dots,N_\uparrow \}$,
$\{ \phi _\downarrow ^i({\mathbf x}),\,i=1,\dots,N_\downarrow \}$
(such that the total density is simply
$\rho ({\mathbf x})=\rho_\uparrow + \rho_\downarrow=\sum _{i=1}^{N_\uparrow }
|\phi _\uparrow ^i({\mathbf x})|^2+ \sum _{i=1}^{N_\downarrow }|\phi _\downarrow ^i({\mathbf x})|^2$),
$V_H\equiv
{1 \over 2} \int d\mathbf{x}\, d\mathbf{x}^\prime \, \rho _\uparrow ({\mathbf x})
\rho _\downarrow ({\mathbf x}^\prime)\, v(\left|\mathbf{x}-\mathbf{x}^\prime\right|) $
is the mean field (Hartree) expression for the interparticle interaction,
and
$E_{XC} =(T-T_0)+(V_{int}-V_H)$ is the
exchange-correlation energy functional.\\
The success of DFT for electrons (even at the LDA level of approximation)
is due to a partial cancellation between the
terms contained in $E_{XC}$, which reduces the impact
on the final results of the approximations made for this term.
We will show that this holds true, at least for weak to intermediate
interaction strengths, also for the fermionic gases investigated here.\\
While in the case of long-range Coulomb interactions, relevant for electrons in solids,
one usually writes separately the mean-field energy $V_H$
and the exchange-correlation term $E_{XC}$,
for short-range interactions relevant for atomic gases the mean-field term depends only on the
local densities, and can thus be combined with the exchange-correlation term in
a single energy functional $E_{HXC}$.
For consistency with the literature, we will refer to it as the exchange-correlation term
(instead of using the more appropriate ``Hartree-exchange-correlation'' name).\\
A simple yet often reliable treatment of $E_\mathrm{HXC}$ is the local spin-density approximation
\begin{equation}
\label{LSDAXC}
E_\mathrm{HXC}\left[\rho_\uparrow,\rho_\downarrow\right] = \int d\mathbf{x}\, \rho (\mathbf{x})
\epsilon_\mathrm{HXC} \left( \rho_\uparrow\left(\mathbf{x}\right) , \rho_\downarrow\left(\mathbf{x}\right) \right),
\end{equation}
where $\epsilon_\mathrm{HXC}\left(\rho_\uparrow,\rho_\downarrow\right)$ is the Hartree-exchange-correlation energy per particle
of a uniform system with the same local spin densities.
By imposing stationarity of the functional~(\ref{spin_dependent_functional}) with respect to
variations of the densities $\rho_\uparrow$ and $\rho_\downarrow$ one obtains a set of
Schr\"odinger-type equations
(the Kohn-Sham equations):
\begin{equation}
\label{kseq}
\hat {H}_{\rm KS}\,\phi ^i_\sigma ({\bf x})\equiv \!
\left[-\Lambda \nabla ^2 \!+\! V({\bf x}) \!+\!
{\partial (\rho \epsilon _{HXC}) \over \partial \rho
_\sigma }\right] \!\phi ^i_\sigma ({\bf x})=
\epsilon _i \phi ^i_\sigma ({\bf x}).
\end{equation}%
From the eigenstates of the Kohn-Sham equations
one can compute the density profiles and the ground state energy.\\
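To make the procedure concrete, a minimal self-consistency loop for the
one-dimensional lattice studied below could be organized as in the following
schematic sketch. We stress that this is not the production code: the
function \texttt{dmu\_hxc} is a placeholder using the weak-coupling
mean-field limit of $\epsilon_{HXC}$ (the actual calculations employ the
Bethe-ansatz-based LSDA parametrization discussed below), and the grid sizes
and coupling are illustrative.

```python
import numpy as np

# Schematic Kohn-Sham loop for a two-component Fermi gas in a 1D optical
# lattice with periodic boundary conditions.
# Units: Lambda = hbar^2/(2m) = 1, lengths in units of the period d.
Ncell, ppc = 8, 32                      # lattice periods, grid points per period
Ngrid = Ncell * ppc
L = float(Ncell)
x = np.linspace(0.0, L, Ngrid, endpoint=False)
dx = x[1] - x[0]
Er = np.pi**2                           # recoil energy in these units
V = 3.0 * Er * np.sin(np.pi * x)**2     # V(x) = V0 sin^2(pi x/d), V0 = 3 Er
Nup = Ndn = Ncell // 2                  # half filling, unpolarized
g = 2.0                                 # contact coupling (illustrative value)

# Periodic second-derivative matrix (low-order stencil for brevity; the text
# uses high-order finite-difference formulas).
lap = -2.0 * np.eye(Ngrid) + np.eye(Ngrid, k=1) + np.eye(Ngrid, k=-1)
lap[0, -1] = lap[-1, 0] = 1.0
lap /= dx**2

def dmu_hxc(rho_up, rho_dn):
    # Placeholder for d(rho*eps_HXC)/d(rho_sigma): in the weak-coupling
    # mean-field limit an up spin feels g*rho_dn and vice versa.
    return g * rho_dn, g * rho_up

rho_up = np.full(Ngrid, Nup / L)
rho_dn = np.full(Ngrid, Ndn / L)
for it in range(100):
    vup, vdn = dmu_hxc(rho_up, rho_dn)
    new = []
    for Nsig, vhxc in ((Nup, vup), (Ndn, vdn)):
        H = -lap + np.diag(V + vhxc)         # Kohn-Sham Hamiltonian matrix
        _, phi = np.linalg.eigh(H)           # eigenvalues in ascending order
        phi = phi[:, :Nsig] / np.sqrt(dx)    # occupy the lowest Nsig orbitals
        new.append(np.sum(phi**2, axis=1))
    rho_up = 0.7 * rho_up + 0.3 * new[0]     # linear mixing for stability
    rho_dn = 0.7 * rho_dn + 0.3 * new[1]
print("total particle number:", (rho_up + rho_dn).sum() * dx)
```

From the converged densities and orbitals the ground state energy is then
assembled according to Eq.~(\ref{spin_dependent_functional}).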
The LSDA exchange-correlation functional for one-dimensional two-component Fermi gases with contact
interactions was derived in Ref.~\cite{abedinpour2007emergence} from the exact Bethe-ansatz solution
for the ground state energy. The functional for three-dimensional Fermi gases with short-range
repulsive interactions has been obtained in Ref.~\cite{dft} using fixed-node diffusion Monte Carlo
simulations, similarly to the seminal work by Ceperley and Alder~\cite{ceperleyalder} who determined the equation of state of the uniform electron gas, upon which the parametrizations
for $E_{XC}$ commonly employed in electronic-structure calculations have been built (see, e.g.,~\cite{perdew1986}).\\
\begin{figure}
\includegraphics[width=1.0\columnwidth]{fig5.eps}
\caption{Local density $\rho(x)$ of a repulsive Fermi gas in a half-filled one-dimensional OL, as a function of the spatial coordinate $x$. The three panels display data for three values of the interaction strength $a_{1D}/d$ at the same OL intensity $V_0/E_r=3$. The continuous lines represent the DFT results, the circles represent the QMC data.}
\label{fig5}
\end{figure}
\begin{figure}
\includegraphics[width=1.0\columnwidth]{fig6.eps}
\caption{Difference between the local density determined via DFT $\rho_{\mathrm{DFT}}(x)$ and the one obtained via QMC simulations $\rho_{\mathrm{QMC}}(x)$. The OL intensity is $V_0/E_r = 3$ and the (average) density is $n=1/d$. The different symbols correspond to different values of the interaction parameter $a_{1D}/d$.
The thick continuous (red) curve represents the OL intensity profile (referred to the right vertical axis).}
\label{fig6}
\end{figure}
\section{Fixed-node Diffusion Monte Carlo simulations}
\label{secqmc}
The ground state properties of the Hamiltonian (\ref{hamiltonian}) can be determined also via quantum Monte Carlo simulations based on the diffusion Monte Carlo (DMC) algorithm \cite{reynolds1982fixed}.
The DMC algorithm allows one to sample the ground state wave function by stochastically
evolving the Schr\"odinger equation in imaginary time.
In order to circumvent the sign problem, which would otherwise hinder fermionic Monte Carlo simulations, one introduces the fixed-node constraint, which forces the nodal surface of the many-body wave function to coincide with that of a trial wave function $\psi_T$.
If the nodal surface of $\psi_T$ is exact, this method provides unbiased estimates of the ground state energy. In the general case, one obtains a rigorous upper bound for the exact ground state energy, which is very close to the exact result if the nodes of $\psi_T$ are good approximations of the ground state nodal surface (see, {\it e.g.}, \cite{foulkes}).
In this study, we employ Jastrow-Slater trial wave functions defined as:
\begin{equation}
\psi_T({\bf X})= D_\uparrow(N_\uparrow) D_\downarrow(N_\downarrow) \prod_{i_\uparrow,i_\downarrow}f(x_{i_\uparrow i_\downarrow}) \;,
\label{psiT}
\end{equation}
where ${\bf X}=({\bf x}_1,..., {\bf x}_N)$ is the spatial configuration vector and $D_{\uparrow(\downarrow)}$ denotes the Slater determinant of single-particle orbitals of the particles with up (down) spin, and $x_{i_\uparrow i_\downarrow} =\left |\bf{x}_{i_\uparrow}- \bf{x}_{i_\downarrow} \right|$ indicates the relative distance between any opposite-spin fermion pair.
The Jastrow correlation term $f(x)$ is taken to be the solution of the s-wave radial Schr\"odinger equation describing the scattering of two particles in free space, as described in detail in Refs. \cite{pilati2010,pilati2014}. Since $f(x)>0$, the nodal surface is determined by the Slater determinants, and therefore by the choice of the single-particle orbitals.
We use the $N_\uparrow$ ($N_\downarrow$) lowest-energy single-particle eigenstates $\phi_{j}(\bf{x})$ (with $j=0,\dots,N_{\uparrow(\downarrow)}-1$), which satisfy the single-particle Schr\"odinger equation in the external potential: $\left[-\Lambda \nabla^2 +V({\bf x})\right]\phi_{j}(\mathbf{x}) = e_j \phi_{j}(\mathbf{x})$, with the eigenvalues $e_j$. We determine these orbitals via exact diagonalization of the finite matrix obtained within a discrete-variable representation based on high-order finite-difference formulas for the Laplacian. The discretization error can be reduced to the point of being negligible compared to the statistical uncertainty of the Monte Carlo simulation.\\
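For orientation, the evaluation of such a trial wave function at a given
configuration can be sketched as follows; this is schematic only, with
\texttt{orbitals} standing for the finite-difference eigenstates described
above and \texttt{f} for the two-body scattering solution, neither of which
is reproduced here.

```python
import numpy as np

def psi_T(x_up, x_dn, orbitals, f):
    """Schematic Jastrow-Slater evaluation: psi_T = D_up * D_dn * prod f(r).

    x_up, x_dn : arrays of up/down particle coordinates
    orbitals   : callable returning the square matrix phi_j(x_i), with j
                 running over the lowest len(x) single-particle eigenstates
    f          : positive Jastrow pair function
    """
    D_up = np.linalg.det(orbitals(x_up))        # Slater determinant, up spins
    D_dn = np.linalg.det(orbitals(x_dn))        # Slater determinant, down spins
    r = np.abs(x_up[:, None] - x_dn[None, :])   # all opposite-spin distances
    # f > 0 everywhere, so the nodes of psi_T are exactly those of the
    # Slater determinants, as used in the fixed-node argument below.
    return D_up * D_dn * np.prod(f(r))
```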
While the fixed-node constraint might introduce a systematic bias, the predictions made with this approach have often been found to be extremely accurate. For example, the recent measurements performed at LENS with a strongly-repulsive Fermi gas in the upper branch of a Feshbach resonance\cite{valtolina2017exploring} --- which have been analyzed within the spin-fluctuation theory of Ref. \cite{recati2011spin} --- have been found to agree with previous predictions for the equation of state and the Stoner ferromagnetic instability obtained via fixed-node DMC simulations in Ref. \cite{pilati2010}.\\
Interestingly, it was shown in Ref. \cite{ceperley1991fermion} that in the one-dimensional case the fixed-node approach is, in fact, exact since the nodal surface consists only of the many-particle configurations where two identical fermions occupy the same point. This implies that any Slater-determinant wave function, as the trial wave function we use in this article, has the same nodes as the exact ground state~\cite{casula2008quantum,astrakharchikgiorgini,matveeva2016one}.
Therefore, the data we provide for the one-dimensional Fermi gas in the OL represent an exact benchmark, useful to measure the accuracy of the DFT calculations based on the LSDA or of any other computational tool.
Furthermore, in order to compute unbiased expectation values also for the density profiles, we employ the standard forward walking technique~\cite{boronat}.
\section{One-dimensional atomic Fermi gas in an optical lattice}
\label{sec1D}
Let us consider a one-dimensional Fermi gas with a zero-range repulsive interaction defined as $v(x_{i_\uparrow} -x_{i_\downarrow})= g\delta(x_{i_\uparrow} -x_{i_\downarrow})$, where the coupling constant $g$ is related to the one-dimensional scattering length $a_{1D}$ by the relation $g=-2\hbar^2/(ma_{1D})$. We address the case of repulsive interactions, where $g \geqslant0$ and, correspondingly, $a_{1D} \leqslant 0$.
The $a_{1D}\rightarrow -\infty $ limit corresponds to the noninteracting Fermi gas, while the $a_{1D}\rightarrow 0^- $ limit corresponds to the strongly-interacting regime where distinguishable fermions fermionize \cite{girardeau1960,girardeau,astrakharchikgiorgini,jochim},
i.e. their energy and density profiles correspond to those of indistinguishable (i.e. spin-polarized) fermions \cite{guan}. For consistency with the more familiar case of infinitely repulsive bosons,
we refer to this limit as the
Tonks-Girardeau limit~\cite{guanshu}.
In the following, we parametrize the interaction strength with the adimensional ratio $a_{1D}/d$, where $d$ is the OL periodicity.\\
This one-dimensional model is relevant to describe the experimental setup of an ultracold atomic gas confined in a tight cigar-shaped waveguide, sufficiently strong to prevent thermal excitations to higher radial modes. In this regime, the values of the one-dimensional scattering length can be determined from the experimental parameters, specifically from the three-dimensional s-wave scattering length $a$ and the radial harmonic confining frequency~\cite{olshanii1998atomic}. The one-dimensional scattering length can be tuned from the noninteracting to the Tonks-Girardeau limit --- and also beyond --- by approaching a confinement-induced resonance. This can be achieved by exploiting a Feshbach resonance to modify the three-dimensional scattering length and/or by tuning the strength of the radial confinement.\\
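For the reader's convenience we recall the explicit form of this relation
(Olshanii's result~\cite{olshanii1998atomic}, quoted here without
derivation):
\begin{displaymath}
a_{1D} = -\frac{l_\perp^2}{a}\left(1 - C\,\frac{a}{l_\perp}\right),
\qquad l_\perp=\sqrt{\hbar/(m\omega_\perp)},
\qquad C=\frac{|\zeta(1/2)|}{\sqrt{2}}\simeq 1.0326,
\end{displaymath}
where $\omega_\perp$ is the radial trapping frequency; the
confinement-induced resonance corresponds to the vanishing of the factor in
parentheses, i.e. $a=l_\perp/C$.\\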
We focus on a half-filled OL at the density $n=1/d$, so that on average there is one fermion per well. In this configuration the correlation effects are enhanced and strong quasi-long-range antiferromagnetic order arises as one increases the OL intensity $V_0$ and/or the interaction strength \cite{PhysRevA.96.021601}. Therefore this regime represents a challenging testbed for the DFT calculations performed within the LSDA. It is worth pointing out that, as a consequence of the Mermin-Wagner theorem, in one dimension proper long-range antiferromagnetic order is not possible and the ground state is paramagnetic.\\
In Fig.~\ref{fig1} the ground state interaction energy of the half-filled OL is reported as a function of the interaction parameter $a_{1D}/d$, for three OL intensities.
The numerical values corresponding to (a selection of) these datasets are reported in Table I of the supplemental material~\cite{suppmat}.
In the noninteracting $a_{1D}/d\rightarrow -\infty$ limit the ground state energy $E$ converges to the noninteracting gas results $E_{\mathrm{id}}$, so that the interaction energy defined as $E_{\mathrm{int}}=E-E_{\mathrm{id}}$ vanishes. In the strongly-interacting $a_{1D}/d \rightarrow 0$ limit, $E$ approaches the energy of a fully polarized (i.e., with $N_{\uparrow}=N$ and $N_{\downarrow}=0$) gas $E_{\mathrm{fp}}$, analogously to the case of bosons with
infinitely repulsive delta-function interaction described by the Tonks-Girardeau theory~\cite{girardeau1960}.
The discrepancies between the DFT prediction and the exact QMC results are surprisingly small, in particular for the shallow lattice of intensity $V_0/E_r=1$. In order to better visualize these discrepancies, we display in Fig.~\ref{fig2} the relative error of the DFT prediction $E_{\mathrm{DFT}}$ with respect to the corresponding QMC result $E_{\mathrm{QMC}}$. One observes that this relative error is smaller than $1\%$ in a broad range of interaction strengths, namely for $a_{1D}/d<-0.2$. Only very close to the Tonks-Girardeau limit does one find quite large (negative) relative errors.
It is also worth noticing that the relative error is non-monotonic, being positive for weak and intermediate interaction strengths and
negative close to the Tonks-Girardeau limit.
\\
In order to shed light on the origin of the inaccuracy of the DFT prediction for the ground state
energy, it is useful to inspect also the predictions for the density profiles, which are
one of the main ingredients of the DFT formalism. Figure~\ref{fig3} shows the total density
$\rho(x)=\rho_{\uparrow}(x)+\rho_{\downarrow}(x)$ as a function of the spatial coordinate $x$ at the
intermediate interaction strength $a_{1D}/d=-1$.
The corresponding numerical values are reported in Table II of the supplemental on-line material~\cite{suppmat}. One notices again a remarkable level of accuracy,
at all three OL intensities considered.
In order to highlight the (small) discrepancies, we show in Fig.~\ref{fig4} the difference
between the density profiles predicted by DFT and by QMC simulations. In shallow OLs with intensities $V_0/E_r \lesssim 2$ the DFT predictions exceed
the QMC results at the peaks of the OL, meaning that DFT underestimates the density inhomogeneity.
Instead, in relatively deep OLs $V_0/E_r \gtrsim 2$, the DFT result is higher than the QMC one at the minima of the OL potential, meaning that in this case
DFT overestimates the density inhomogeneity.\\
Next, in Fig.~\ref{fig5} we show three density profiles
for three values of the interaction parameter $a_{1D}/d$, for a relatively deep OL with intensity
$V_0/E_r=3$. While for weak and moderately strong interactions the agreement between DFT and QMC
simulations is, again, remarkably accurate, at the strongest interaction strength $a_{1D}/d=-0.2$ the discrepancy becomes sizable. In order to better visualize this discrepancy, we plot in Fig.~\ref{fig6} the difference between the DFT and the QMC results. Consistently with the results discussed above, in this relatively deep OL the DFT prediction is higher than the exact QMC data at the minima of the OL potential, while it is lower than that at the maxima.
Note that DFT applied to strongly localized electron systems also tends to favor, within the
LSDA, electron densities that are more inhomogeneous than the true ones, leading to
overbinding of atoms in molecules, and overestimating
the calculated cohesive energies in solids.
This well known deficiency of the LSDA for electrons is
alleviated to a large extent by the use of
the GGA approach, where functionals depend on the local density as well as
on the spatial variation of the density, $\nabla \rho$. Computationally such corrections are
as simple to use as the LSDA itself.
This suggests
that a possible improvement over the LSDA for fermionic gases could be the addition of
gradient corrections depending on $\nabla \rho$, with adjustable phenomenological
parameters, to
the LSDA functional.
\begin{figure}
\includegraphics[width=1.0\columnwidth]{fig7.eps}
\caption{Ground state interaction energy per particle $E_{\mathrm{int}}/N=(E-E_{\mathrm{id}})/N$ of a repulsive Fermi gas in a three-dimensional OL at quarter filling $nd^3=0.5$, as a function of the OL intensity $V_0/E_r$. The energy unit is the recoil energy $E_r$. Data for two values of the interaction strength $a/d$ are shown. Full symbols connected by lines represent the QMC data, empty symbols represent the DFT results. The system size is $L=4d$.
}
\label{fig7}
\end{figure}
\section{Three-dimensional atomic Fermi gas in a simple-cubic optical lattice}
\label{sec3D}
We now address a three-dimensional atomic Fermi gas in a simple-cubic OL. Since the zero-range (Fermi-Huang) pseudopotential supports two-body bound states in three-dimensions~\cite{huang}, we model the inter-species interactions using a purely repulsive potential with short but finite range, namely the hard-sphere model defined as: $v(r)=+\infty$ if $r<R_0$ and zero
otherwise. This allows us to employ ground state computational methods, such as the DFT and the QMC methods considered in this article, while with the zero-range model the repulsive atomic state would be a highly excited (metastable) state, since the true zero-temperature state would be a gas of bosonic molecules.
At zero temperature, the properties of a sufficiently dilute atomic
gas are universal, meaning that they depend only on the two-body scattering length $a$. For the hard-sphere model, one has $a=R_0$.
As the gas parameter $na^3$ increases, other details of the potential might become
relevant, the most important being the effective range and the $p$-wave scattering length. The regime where these nonuniversal effects become sizable has been carefully analyzed both in homogeneous gases~\cite{pilati2010} and in OL systems~\cite{pilati2014}. In this article, we consider a range of gas parameter where these effects are negligible.\\
We perform DFT calculations within the LSDA, using the exchange-correlation functional for the two-component Fermi gas with short-range interspecies interactions that has been reported in Ref.~\cite{dft}. This functional was obtained from fixed-node DMC simulations of the equation of state of the homogeneous Fermi gas, and an accurate parametrization based on the Fermi liquid and the polaron theories was provided.\\
We consider an OL at quarter filling, i.e., $nd^3=0.5$. Figure~\ref{fig7} displays the comparison between the DFT and the QMC results for the ground state energy as a function of the OL intensity
\footnote{The QMC simulations have been performed with two system sizes, namely $L=4d$ and $L=6d$. We verified that including the finite-size correction corresponding to the noninteracting gas, as discussed in Ref.~\cite{LinZong}, the finite-size effect on the ground state energy becomes negligible.}.
In order to better visualize the discrepancies, the quantity plotted is the interaction energy $E_{\mathrm{int}}=E-E_{\mathrm{id}}$. Two interaction strengths are considered, corresponding to two values of the ratio $a/d$. At the moderate interaction strength $a/d=0.04$, the discrepancy is remarkably small even in relatively deep OLs. The relative error on the total energy $E$ reaches $0.2\%$ at $V_0/E_r=4$, which is the deepest lattice we consider.
For very strong interactions $a/d = 0.1$, the DFT results agree with the QMC data only in shallow lattices $V_0/E_r\lesssim 1$, while significant discrepancies develop in deeper OLs. For example, at $V_0/E_r=4$ the relative error on the total energy $E$ is $1\%$.
This analysis shows how the interplay between the correlations induced by strong interatomic interactions and the pronounced inhomogeneity due to the deep external potential causes the breakdown of the LSDA, meaning that more accurate approximations for the exchange-correlation functional are needed.\\
We emphasize that, in contrast with the one-dimensional case where the fixed-node DMC simulations
provide unbiased data, in three-dimensions the fixed-node DMC results represent an upper-bound for
the exact ground state energy. This upper-bound is believed to be extremely close to the exact
results, as demonstrated, for example, by the agreement between fixed-node DMC simulations (in deep
OLs) with state-of-the-art constraint-path simulations of the Hubbard model~\cite{zhang,pilati2014},
and also with the recent experiments performed at LENS for strongly-repulsive homogeneous Fermi
gases mentioned in Sec.~\ref{secqmc}. Still, accurate experimental measurements of the zero-temperature equation of state in three-dimensional OLs would represent an extremely valuable benchmark for the fixed-node approach and for the DFT calculations.
\section{Conclusions and outlook}
\label{conclusions}
We performed a detailed benchmark of DFT
calculations based on the LSDA for the exchange-correlation functional against QMC simulations. We
considered a one-dimensional Fermi gas in a half-filled OL and a three-dimensional Fermi gas in a
simple-cubic OL at quarter filling. The one-dimensional case is special since the QMC results obtained
with the fixed-node approach are unbiased, being affected only by statistical uncertainties. This
allowed us to demonstrate that the LSDA is extremely accurate in a broad range of interaction
strengths and of OL intensities. Still, important inaccuracies of the LSDA emerge in the close vicinity of the
strongly-interacting Tonks-Girardeau limit and in very deep OLs (if interactions are not weak).
We argue that the data we provide (see the supplemental material~\cite{suppmat})
represent an ideal testbed to further develop the Kohn-Sham DFT formalism beyond the LSDA. This
might include the use of gradient-dependent correction terms in the
total energy density functional.
Also in the case of the three-dimensional Fermi gas the agreement between DFT and QMC data is remarkable, at least in shallow OLs, and we hope that our study will motivate further experiments aiming at accurately measuring the equation of state and the density profiles of repulsive Fermi gases in OLs, in particular close to half-filling, a regime which is particularly challenging for any computational technique, including the QMC and the DFT methods we considered in this article. These measurements could be used as a testbed to develop more accurate exchange-correlation functionals in higher dimensions.\\
The Kohn-Sham DFT formalism provides theoreticians with a useful computational tool to predict the properties of ultracold atomic gases in complex experimental configurations. Albeit approximate
in its practical implementations,
this method allows one to address much larger system sizes than those amenable to other computational methods such as, e.g., QMC algorithms, to the point of allowing models of trapped clouds with realistic system sizes~\cite{zintchenko2016ferromagnetism}. Furthermore, it can be easily extended to simulate dynamical properties. The analysis we provide in this article is valuable since it maps out the regime where the most commonly adopted approximation for the exchange-correlation functional, namely the LSDA, is reliable, thus providing a useful guide for future studies.\\
\noindent
We acknowledge the CINECA award under the ISCRA initiative, for the availability of high performance computing resources and support.
S. P. and F. A. acknowledge financial support from the BIRD 2016 project ``Superfluid properties of Fermi gases in optical potentials'' of the University of Padova.
\bibliographystyle{epj}
\section{Introduction}
The complete knot Floer chain complex $CFK^{\infty}(K)$ is a bifiltered, Maslov graded chain complex associated to a knot $K\subset S^3$, introduced by Ozsv\'{a}th and Szab\'{o} \cite{OS04c}, and independently by Rasmussen \cite{Ras03}. \textit{A priori}, the bifiltered chain homotopy type of $CFK^{\infty}(K)$ is an isotopy invariant of the knot $K$. By exploiting the TQFT-like aspects of the theory, however, it turns out that $CFK^{\infty}(K)$ also contains a lot of interesting information about the concordance class of $K$; see \cite{Hom15} for a survey. One typical example is the Ozsv\'{a}th-Szab\'{o} $\tau$-invariant, which appeared relatively early and attracted a lot of attention \cite{OS03}. Roughly speaking, $\tau(K)$ is a concordance homomorphism that takes its values in $\mathbb{Z}$ and is defined by examining $\widehat{CFK}(K)$, which is a small portion of $CFK^{\infty}(K)$. Moreover, $\tau$ provides a lower bound for the smooth four-genus of a knot, i.e. $|\tau(K)|\leq g_4(K)$. Among many applications, Ozsv\'{a}th and Szab\'{o} showed that $\tau$ can be used to resolve a conjecture of Milnor, first proven by Kronheimer and Mrowka using gauge theory \cite{KM93}, that $g_4(T_{p,q})=\frac{(p-1)(q-1)}{2}$, where $T_{p,q}$ is the $(p,q)$-torus knot.
Recently, by using more information from $CFK^{\infty}(K)$, Ozsv\'{a}th, Stipsicz, and Szab\'{o} introduced a more powerful concordance invariant that generalizes $\tau$ \cite{OSS}. This invariant takes the form of a homomorphism from the smooth knot concordance group $\mathcal{C}$ to the group of piecewise linear functions on $[0,2]$. Thus, for every knot $K\subset S^3$, they associate a piecewise linear function $\Upsilon_K(t)$, where $t\in[0,2]$ that depends only on the concordance type of $K$.
Besides being a concordance homomorphism, $\Upsilon_K(t)$ also enjoys many other nice properties, some of which we list below.
\begin{itemize}
\item[(1)] ({Symmetry}) $\Upsilon_K(t)=\Upsilon_K(2-t),$
\item[(2)] (4-genus bound) $|\Upsilon_K(t)|\leq tg_4(K)$ for $0\leq t\leq 1$,
\item[(3)] (Recovers $\tau$) The slope of $\Upsilon_K(t)$ at $t=0$ is $-\tau(K)$.
\end{itemize}
In this paper we study how $\Upsilon$ behaves under the cabling operation. Such results for $\tau$ can be found in \cite{Hed05a,Hed05b,Hed09,Hom14,Pet13,VC10}, two of which we restate here. The first one is due to Hedden, obtained by carefully comparing the knot Floer chain complex of a knot and that of its cable.
\begin{thm}(\cite{Hed09})\label{Hed}Let $K\subset S^3$ be a knot, and $p>0$, $n \in \mathbb{Z}$. Then
\begin{displaymath}
p\tau(K)+\frac{pn(p-1)}{2}\leq \tau(K_{p,pn+1})\leq p\tau(K)+\frac{(pn+2)(p-1)}{2},
\end{displaymath}
\end{thm}
Later Van Cott used the genus bound and homomorphism property satisfied by $\tau$, together with nice constructions of cobordism between cable knots to extend the above inequality to $(p,q)$-cables.
\begin{thm}(\cite{VC10})\label{VC}
Let $K\subset S^3$ be a knot, and $(p,q)$ be a pair of relatively prime numbers such that $p>0$. Then
\begin{displaymath}
p\tau(K)+\frac{(p-1)(q-1)}{2}\leq \tau(K_{p,q})\leq p\tau(K)+\frac{(p-1)(q+1)}{2}.
\end{displaymath}
\end{thm}
In view of the success in understanding the effect of knot cabling on the $\tau$-invariant, it is natural to wonder what happens to $\Upsilon$. In this paper we show that a portion of $\Upsilon$ behaves very similarly to $\tau$. Indeed, adapting the strategies of Hedden and Van Cott on studying $\tau$ to the context of $\Upsilon$, we can prove the following result:
\begin{thm} \label{main}
Let $K\subset S^3$ be a knot, and $(p,q)$ be a pair of relatively prime numbers such that $p>0$. Then
\begin{displaymath}
\Upsilon_K(pt)-\frac{(p-1)(q+1)t}{2}\leq \Upsilon_{K_{p,q}}(t)\leq \Upsilon_K(pt)-\frac{(p-1)(q-1)t}{2},
\end{displaymath}
when $0 \leq t\leq \frac{2}{p}$.
\end{thm}
Note that by differentiating the above inequality at $t=0$, we recover Theorem \ref{VC}. One also easily sees this inequality is sharp by examining the case when $K$ is the unknot: when $q>0$, then the upper bound is achieved, and when $q<0$ the lower bound is achieved. However, when $p>2$ the behavior of $\Upsilon_{K_{p,q}}(t)$ for $t\in[\frac{2}{p},2-\frac{2}{p}]$ is still unknown to the author.\\
Theorem \ref{main} can often be used to determine the $\Upsilon$ function of cables with limited knowledge of their complete knot Floer chain complexes. As an example, we show how our theorem can be used to deduce $\Upsilon_{(T_{2,-3})_{2,2n+1}}(t)$ for $n\geq 8$; these examples are, on the face of it, rather difficult to compute, since none of these knots is an $L$-space knot. Despite this, armed only with Theorem \ref{main} and the knot Floer homology groups of the knots in this family, we are able to obtain complete knowledge of $\Upsilon$.
Perhaps more strikingly, however, Theorem \ref{main} can be used to easily show that certain subsets of the smooth concordance group freely generate infinite-rank summands. To this end, let $D=Wh^{+}(T_{2,3})$ be the untwisted positive Whitehead double of the trefoil knot and let $J_n=((D_{p,1})...)_{p,1}$ denote the $n$-fold iterated $(p,1)$-cable of $D$ for some fixed $p>1$ and some positive integer $n$. Theorem \ref{main}, together with general properties of $\Upsilon$, yields the following
\begin{cor}\label{cor}
The family of knots $J_n$ for $n=1,2,3,...$ are linearly independent in $\mathcal{C}$ and span an infinite-rank summand consisting of topologically slice knots.
\end{cor}
To the best of the author's knowledge, Corollary \ref{cor} provides the first satellite operator on the smooth concordance group of topologically slice knots whose iterates (for a fixed knot) are known to be independent; moreover, in this case they are a summand. Note $J_n$ has trivial Alexander polynomial. The first known example of an infinite-rank summand of knots with trivial Alexander polynomial is generated by $D_{n,1}$ for $n\in \mathbb{Z}^+$, due to Kim and Park \cite{KP16}. Their example, however, like the families of topologically slice knots studied by \cite{OSS}, involved rather non-trivial calculations of $\Upsilon$ (for instance, those of \cite{OSS} involved rather technical calculations with the bordered Floer invariants). The utility of Theorem \ref{main} is highlighted by the ease with which the above family is handled. We also refer the interested reader to \cite{FPR16} for a host of other applications of Theorem \ref{main} to the study of the knot concordance group.\\
To conclude the introduction, it is worth mentioning that Hom achieved a complete understanding of the behavior of $\tau$ under cabling by introducing the $\epsilon$-invariant \cite{Hom14}. In particular, $\tau(K_{p,q})$ is always one of the two bounds appearing in Theorem \ref{VC}, depending on the value of $\epsilon(K)$. However, in the context of $\Upsilon$ the analogous statement is no longer true, even for $L$-space knots, whose knot Floer chain complexes are relatively simple. For instance, $\Upsilon_{(T_{2,3})_{n,2n-1}}(t)$ is computed in \cite{OSS} but it does not equal either bound appearing in Theorem \ref{main}. This suggests that the behavior of $\Upsilon$ under cabling is more complicated. For example, it would be interesting to find suitable auxiliary invariants serving a role similar to that of the $\epsilon$-invariant.\\
\textbf{Outline.} The organization of the rest of the paper is as follows: in Section 2, we review the definition of $\Upsilon$. In Section 3, we prove our main theorem. In Section 4 we give two applications of Theorem \ref{main}.
\subsection*{Acknowledgments.} I wish to thank my advisor Matt Hedden, for teaching me with great patience, and lending me his vision through wonderful suggestions that shape this work. I also thank Andrew Donald and Kouki Sato for many nice comments on earlier versions of this paper.
\section{Preliminaries}
We work over $\mathbb{F}=\mathbb{Z}/2\mathbb{Z}$ throughout the entire paper. We also assume that the reader is familiar with the basic setup of knot Floer homology. For more details, see \cite{OS04c,Ras03}.
In this section, we will briefly review the construction of $\Upsilon_K(t)$ of a given knot $K$, setting up some notations at the same time. The original definition of $\Upsilon_K(t)$ is based on a $t$-modified knot Floer chain complex, see \cite{OSS}. Shortly thereafter, Livingston reformulated $\Upsilon_K(t)$ in terms of the complete knot Floer chain complex $CFK^{\infty}(K)$. We find it convenient to work with Livingston's definition, which we recall below.
Denote $CFK^{\infty}(K)$ by $C(K)$ for convenience. Note that $C(K)$ comes with a $\mathbb{Z}\oplus\mathbb{Z}$-filtration, namely the Alexander filtration and the algebraic filtration. Actually, to be more precise, $C(K)$ is only well defined up to bifiltered chain homotopy equivalence, unless we fix some compatible Heegaard diagram for $K$ and some auxiliary data.
Now for any $t\in[0,2]$, one can define a filtration on $C(K)$ as follows. First, define a real-valued (grading) function on $C(K)$ by $$\mathcal{F}_t=\frac{t}{2}Alex+(1-\frac{t}{2})Alg,$$ which is a convex linear combination of Alexander and algebraic gradings. Associated to this function, one can construct a filtration given by $(C(K), \mathcal{F}_t)_s=(\mathcal{F}_t)^{-1}(-\infty,s]$. It is easy to see that the filtration induced by $\mathcal{F}_t$ is compatible with the differential of $C(K)$, i.e. $\mathcal{F}_t(\partial x)\leq \mathcal{F}_t(x)$, $\forall x\in C(K)$. Let $$\nu(C(K), \mathcal{F}_t)=\min\{s\in\mathbb{R}|H_{0}((C(K), \mathcal{F}_t)_s)\rightarrow H_{0}(C(K))\ \text{is\ nontrivial}\}.$$
Here $H_{0}$ stands for the homology group with Maslov grading $0$. With these preparations, $\Upsilon$ is defined as following.
\begin{defn}
$\Upsilon_K(t)=-2\nu(C(K), \mathcal{F}_t)$.
\end{defn}
It is proven in \cite{Liv14} that the above definition of $\Upsilon_K(t)$ is equivalent to the one given by Ozsv\'{a}th, Stipsicz, and Szab\'{o} in \cite{OSS}.
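As a quick illustration of this definition, consider the right-handed trefoil $T_{2,3}$; this is a standard computation (see, e.g., \cite{OSS,Liv14}), which we sketch using the usual conventions. The complex $C(T_{2,3})$ is generated over $\mathbb{F}[U,U^{-1}]$ by three elements $a$, $b$, $c$ with (algebraic, Alexander) bifiltration levels $(0,1)$, $(0,0)$, $(0,-1)$, Maslov gradings $0$, $-1$, $-2$, and the single differential $\partial b=Ua+c$. The group $H_0(C(T_{2,3}))$ is generated by $[a]$, and since $\partial(U^{-1}b)=a+U^{-1}c$, the Maslov grading $0$ cycles representing this class are $a$ and $U^{-1}c$, whose $\mathcal{F}_t$-levels are $\frac{t}{2}$ and $1-\frac{t}{2}$ respectively. Hence
\begin{displaymath}
\nu(C(T_{2,3}),\mathcal{F}_t)=\min\left\{\frac{t}{2},\ 1-\frac{t}{2}\right\},
\end{displaymath}
so $\Upsilon_{T_{2,3}}(t)=-t$ for $0\leq t\leq 1$, with the remaining range determined by the symmetry $\Upsilon_K(t)=\Upsilon_K(2-t)$. The slope at $t=0$ recovers $\tau(T_{2,3})=1$; moreover, viewing $T_{2,3}$ as the $(2,3)$-cable of the unknot, this computation realizes the upper bound of Theorem~\ref{main}.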
\section{Proof of the main theorem}
The proof of Theorem \ref{main} will be divided into two parts: in Subsection 3.1 we will prove the inequality for the $(p,pn+1)$ cable of a knot by adapting Hedden's strategy in \cite{Hed05a,Hed09}, and then in Subsection 3.2 we will upgrade the inequality to cover the $(p,q)$-cable of a knot by applying Van Cott's argument in \cite{VC10}.
\subsection{Upsilon of $(p,pn+1)$-cable}
Following \cite{Hed05a,Hed05b,Hed09}, we begin by introducing a nice Heegaard diagram which encodes both the original knot $K$ and its cable $K_{p,pn+1}$.
For any knot $K\subset S^3$, there exists a compatible Heegaard diagram $H=(\Sigma, \{\alpha_1,...,\alpha_g\},\{\beta_1,...,\beta_{g-1},\mu\},w,z)$. Moreover, by stabilizing we can assume $\mu$ to be the meridian of the knot $K$ and that it only intersects $\alpha_g$, and there is a $0$-framed longitude of the knot $K$ on $\Sigma$ which does not intersect $\alpha_g$. From now on, we will always assume that the Heegaard diagram $H$ for $K$ satisfies all these properties.
Let $H=(\Sigma, \{\alpha_1,...,\alpha_g\},\{\beta_1,...,\beta_{g-1},\mu\},w,z)$ be a Heegaard diagram for the knot $K$ as above. By modifying $\mu$ and adding an extra base point $z'$, we can construct a new Heegaard diagram with three base points $H(p,n)=(\Sigma, \{\alpha_1,...,\alpha_g\},\{\beta_1,...,\beta_{g-1},\tilde{\beta}\},w,z,z')$. More precisely, $\tilde{\beta}$ is obtained by winding $\mu$ along an $n$-framed longitude $(p-1)$ times, and the new base point $z'$ is placed at the tip of the winding region such that the arc $\delta'$ connecting $w$ and $z'$ has intersection number $p$ with $\tilde{\beta}$. Note $\tilde{\beta}$ can be deformed to $\mu$ through an isotopy that does not cross the base points $\{w,z\}$. See Figure~\ref{longitude} and Figure~\ref{winding} for an example. The power of $H(p,n)$ lies in the fact that it specifies both $K$ and $K_{p,pn+1}$ at the same time, as pointed out by the following lemma.
\begin{lem}(Lemma 2.2 of \cite{Hed05a})
Let $H(p,n)$ be a Heegaard diagram described as above. Then
\begin{enumerate}
\item Ignoring $z'$, we get a doubly-pointed diagram $H(p,n,w,z)$ which specifies $K$.
\item Ignoring $z$, we get a doubly-pointed diagram $H(p,n,w,z')$ which specifies the cable knot $K_{p,pn+1}$.
\end{enumerate}
\end{lem}
This implies that the two knot Floer chain complexes $CFK^\infty(H(p,n,w,z))$ and $CFK^\infty(H(p,n,w,z'))$ are closely related. More precisely, by forgetting the Alexander filtrations, both $CFK^\infty(H(p,n,w,z))$ and $CFK^\infty(H(p,n,w,z'))$ are isomorphic to $CF^\infty(H(p,n,w))$. Therefore, in order to get a more transparent correspondence between these two complexes, we will compare the Alexander gradings of the intersection points with respect to the two different base points $z$ and $z'$.
For the sake of a clearer discussion, we fix some notation and terminology to deal with the intersection points. For convenience, we assume $n\geq 0$ throughout the discussion and remark that the case when $n<0$ can be handled in a similar way. Note $\tilde{\beta}$ intersects $\alpha_g$ at $2(p-1)n+1$ points, and we label them as $x_0$, ..., $x_{2(p-1)n}$, starting at the outermost layer from left to right, and then the second layer from left to right, and so on. On the other hand, $\tilde{\beta}$ could also intersect other $\alpha$-curves besides $\alpha_g$, and we label these points by $y^{(k)}_{0}$,..., $y^{(k)}_{2(p-1)-1}$. Here $k$ enumerates the intersections of the $n$-framed longitude with $\alpha_i$, $i\neq g$, and the order of this enumeration is irrelevant. The lower index is again ordered following a layer-by-layer convention, from outside to inside, but we require that \textit{$y^{(k)}_0$ can be connected to $x_{2n}$ by an arc on $\tilde{\beta}$ which neither intersects $\delta$ nor $\delta'$}, the short arcs connecting the base points. See Figure~\ref{winding} for an example. The generators will be partitioned into $p$ classes: all the generators of the form $\{x_{2i},\textbf{a}\}$ or $\{y^{(k)}_{2i},\textbf{b}\}$ will be called \emph{even intersection points} or \emph{$0$-intersection points}, and \emph{odd intersection points} otherwise; odd generators of the form $\{x_{2i+1},\textbf{a}\}$ or $\{y^{(k)}_{2\lceil \frac{i+1}{n}\rceil-1},\textbf{b}\}$ will be called \emph{$(p-\lceil \frac{i+1}{n}\rceil)$-intersection points}. Here $\textbf{a},\textbf{b}$ are $(g-1)$-tuples in $Sym^{g-1}(\Sigma)$. Note that essentially we are classifying odd intersection points into $(p-1)$ classes by the following principle: if the $\tilde{\beta}$-component sits on the $i$-th layer (we count the layers from outside to inside), then the generator is called a $(p-i)$-intersection point.
\begin{figure}[!ht]
\centering{
\resizebox{110mm}{!}{\input{longitude.pdf_tex}}
\caption{A compatible Heegaard diagram $H$ for $K$. $\lambda$ is a $2$-framed longitude, and according to our assumption that the $0$-framed longitude can be chosen not to hit $\alpha_g$, $\lambda$ can be chosen to intersect $\alpha_g$ twice.}\label{longitude}
}
\end{figure}
\begin{figure}[!ht]
\centering{
\resizebox{125mm}{!}{\input{drawing.pdf_tex}}
\caption{An example of $H(p,n)$ with $n=2$ and $p=3$, corresponding to the Heegaard diagram shown in Figure~\ref{longitude}. There is an obvious arc of $\tilde{\beta}$ connecting $x_4$ and $y^{(1)}_0$ which intersects neither $\delta$ nor $\delta'$. By our convention, there is an arc of $\tilde{\beta}$ connecting $x_4$ and $y^{(2)}_0$ satisfying the same property as well, though it is not shown in the figure. The shaded region represents a domain connecting $\{x_1,\textbf{a}\}$ and $\{x_2,\textbf{a}\}$; the darker color indicates that the multiplicity is 2, while the lighter colored region has multiplicity 1. }\label{winding}
}
\end{figure}
We denote the Alexander grading by $A$ (by $A'$) when we use the base point $z$ (base point $z'$).
The comparison of Alexander filtrations is summarized in the following proposition.
\begin{prop}\label{comparison}
With the choice of Heegaard diagram described above, let $\textbf{x}$ be an $l$-intersection point, where $l\in\{0,1,...,p-1\}$. Then
$$A'(\textbf{x})=pA(\textbf{x})+\frac{pn(p-1)}{2}+l.$$
\end{prop}
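For example, when $p=3$ and $n=2$ as in Figure~\ref{winding}, the proposition reads $A'(\textbf{x})=3A(\textbf{x})+6+l$: the two Alexander gradings of an even intersection point differ by the constant shift $\frac{pn(p-1)}{2}=6$, while an odd intersection point picks up the additional offset $l\in\{1,2\}$ recording the layer of its $\tilde{\beta}$-component.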
Proposition \ref{comparison} can be viewed as a generalization of the comparison used in \cite{Hed05a,Hed09}, in which only the generators $\{x_i,\textbf{a}\}$ for $i\leq n$ were shown to satisfy the above equation. In studying $\tau$, a comparison for $\{x_i,\textbf{a}\}$ with $i\leq n$ suffices: first, Hedden observed that when $|n|$ is sufficiently large these account for the top Alexander graded generators of $\widehat{CFK}(K_{p,pn+1})$ that determine $\tau(K_{p,pn+1})$; second, the behavior of $\tau$ for small $n$ can be deduced from the large-$n$ case by using the crossing-change inequality for $\tau$. In contrast, the lower Alexander graded elements of $CFK^\infty(K_{p,pn+1})$ may play a role in $\Upsilon$, even though they do not affect $\tau$. Therefore, in the current paper we have to carry out a comparison for all types of generators. To accomplish this goal, we quote and extend some of the lemmas used in \cite{Hed05a,Hed09} below, after which Proposition \ref{comparison} will follow easily.
\begin{lem}
When $1\leq j\leq (p-1)n$, we have
\begin{equation}
A(\{x_{2j-1},\textbf{a}\})-A(\{x_{2j},\textbf{a}\})=0
\end{equation}
\begin{equation}
A'(\{x_{2j-1},\textbf{a}\})-A'(\{x_{2j},\textbf{a}\})=p-\lceil\frac{j}{n}\rceil
\end{equation}
For an arbitrary $k$, when $0\leq i\leq (p-2)$, we have
\begin{equation}
A(\{y^{(k)}_{2i+1},\textbf{a}\})-A(\{y^{(k)}_{2i},\textbf{a}\})=0
\end{equation}
\begin{equation}
A'(\{y^{(k)}_{2i+1},\textbf{a}\})-A'(\{y^{(k)}_{2i},\textbf{a}\})=p-(i+1)
\end{equation}
\end{lem}
\begin{proof}
Note that there is a Whitney disk $\phi$ connecting $\{x_{2j-1},\textbf{a}\}$ to $\{x_{2j},\textbf{a}\}$ (see Figure~\ref{winding}). It is the product of a constant map in $Sym^{g-1}(\Sigma)$ and the map represented by the disk which connects $x_{2j-1}$ and $x_{2j}$, with boundary consisting of a short arc of $\alpha_g$ and an arc of $\tilde{\beta}$ that spirals into the winding region $p-\lceil\frac{j}{n}\rceil$ times and then makes a turn out. We can see that $n_w(\phi)=n_z(\phi)=0$ and $n_{z'}(\phi)=p-\lceil\frac{j}{n}\rceil$. Therefore, $A(\{x_{2j-1},\textbf{a}\})-A(\{x_{2j},\textbf{a}\})=n_z(\phi)-n_w(\phi)=0$ and $A'(\{x_{2j-1},\textbf{a}\})-A'(\{x_{2j},\textbf{a}\})=n_{z'}(\phi)-n_w(\phi)=p-\lceil\frac{j}{n}\rceil$. We have obtained equations (3.1) and (3.2). The proofs of (3.3) and (3.4) follow similar lines, and hence are omitted.
\end{proof}
\begin{lem}
When $0\leq j\leq (p-1)n$, we have
\begin{equation}
p(A(\{x_{0},\textbf{a}\})-A(\{x_{2j},\textbf{a}\}))=A'(\{x_{0},\textbf{a}\})-A'(\{x_{2j},\textbf{a}\})
\end{equation}
For an arbitrary $k$, when $0\leq i\leq (p-2)$, we have
\begin{equation}
p(A(\{y^{(k)}_{0},\textbf{b}\})-A(\{y^{(k)}_{2i},\textbf{b}\}))=A'(\{y^{(k)}_{0},\textbf{b}\})-A'(\{y^{(k)}_{2i},\textbf{b}\})
\end{equation}
\begin{equation}
p(A(\{x_{2n},\textbf{a}\})-A(\{y^{(k)}_0,\textbf{b}\}))=A'(\{x_{2n},\textbf{a}\})-A'(\{y^{(k)}_0,\textbf{b}\}).
\end{equation}
\end{lem}
\begin{figure}[!ht]
\centering{
\resizebox{110mm}{!}{\input{epsilonclass.pdf_tex}}
\caption{The thickened curve $\gamma$ represents the $\epsilon$-class between $\{x_0,\textbf{a}\}$ and $\{x_6,\textbf{a}\}$. Note that the arcs $\delta$ and $\delta'$ which connect the base points do not intersect $\gamma$.}\label{epsilonclass1}
}
\end{figure}
\begin{figure}[!ht]
\centering{
\resizebox{110mm}{!}{\input{epsilonclass2.pdf_tex}}
\caption{The thickened curve is an arc on $\tilde{\beta}$ connecting $y^{(1)}_{0}$ and $y^{(1)}_{2}$ that intersects neither $\delta$ nor $\delta'$.}\label{epsilonclass2}
}
\end{figure}
\begin{proof}
First we prove Equation (3.5). Note that $\epsilon(\{x_{2j},\textbf{a}\},\{x_{0},\textbf{a}\})$ can be represented by a curve $\gamma$ on $\Sigma$, which is obtained by first connecting $x_{2j}$ to $x_0$ along $\alpha_g$, and then by an arc on $\tilde{\beta}$ which starts from $x_0$ and winds $j$ times counterclockwise to arrive at $x_{2j}$ (Figure~\ref{epsilonclass1}). Note that $[\epsilon(\{x_{2j},\textbf{a}\},\{x_{0},\textbf{a}\})]=0\in H_1(S^3,\mathbb{Z})$, hence $[\gamma]=\Sigma l_i\alpha_i+\Sigma k_i \beta_i$, where $\beta_g$ is viewed as $\tilde{\beta}$. Let $c=\gamma-\Sigma l_i\alpha_i-\Sigma k_i \beta_i$; then $c$ bounds a domain on $\Sigma$. Since $\delta\cdot \gamma = \delta' \cdot \gamma=0$, we have $\delta'\cdot c= \delta'\cdot (-k_g \tilde{\beta})= -k_g p = p (\delta\cdot (-k_g \tilde{\beta})) =p(\delta\cdot c)$, where ``$\cdot$'' stands for the intersection number. Equation (3.5) follows.
The proofs of the other two equations follow a similar line. Note that the key point in the above argument is that the $\epsilon$-class of the two generators can be represented by a curve $\gamma$ whose arc on $\tilde{\beta}$ intersects neither the arc $\delta$ nor $\delta'$, implying $\delta\cdot \gamma = \delta' \cdot \gamma=0$. For Equation (3.6), note that $y^{(k)}_{0}$ and $y^{(k)}_{2i}$ can be joined by an arc on $\tilde{\beta}$ satisfying the aforementioned property (see Figure~\ref{epsilonclass2} for an example). Recall that by our convention, $y^{(k)}_0$ can be connected to $x_{2n}$ by an arc on $\tilde{\beta}$ which intersects neither $\delta$ nor $\delta'$; hence Equation (3.7) follows.
\end{proof}
Let $C(i)=\{(g-1)\text{-tuples } \textbf{a} \mid A(\{x_0,\textbf{a}\})=i\}$.
\begin{lem}(Lemma 3.4 of \cite{Hed05a})
Let $\textbf{a}_1\in C(j_1)$ and $\textbf{a}_2\in C(j_2)$, then
\begin{displaymath}
\begin{aligned}
A(\{x_{i},\textbf{a}_1\})-A(\{x_{i},\textbf{a}_2\})=j_1-j_2\\
A'(\{x_{i},\textbf{a}_1\})-A'(\{x_{i},\textbf{a}_2\})=p(j_1-j_2).
\end{aligned}
\end{displaymath}
\end{lem}
Now we are ready to prove Proposition 3.2.
\begin{proof}[Proof of Proposition 3.2.]
We want to prove that if $\textbf{x}$ is an $l$-intersection point, where $l\in\{0,1,...,p-1\}$, then
$A'(\textbf{x})=pA(\textbf{x})+\frac{pn(p-1)}{2}+l$. As pointed out in Lemma 2.5 of \cite{Hed09}, $A'(\{x_0,\textbf{a}\})=pA(\{x_0,\textbf{a}\})+\frac{pn(p-1)}{2}$. Note that for any other intersection point $\textbf{u}$, as long as $A'(\{x_0,\textbf{a}\})-A'(\textbf{u})=p(A(\{x_0,\textbf{a}\})-A(\textbf{u}))$, we have $A'(\textbf{u})=pA(\textbf{u})+\frac{pn(p-1)}{2}$ as well. The case $l=0$ (even intersection points) now follows easily from this observation, Lemma 3.4, and Lemma 3.5. The other cases are then easy consequences of the $l=0$ case and Lemma 3.3.
\end{proof}
Let $C=CF^\infty(H(p,n,w))$ be the chain complex obtained by forgetting the Alexander filtration, and let $\mathcal{F}_{t}=\frac{t}{2} A+(1-\frac{t}{2}) Alg$ and $\mathcal{F}^{'}_{t}=\frac{t}{2} A'+(1-\frac{t}{2}) Alg$ be two grading functions on $C$ defined using the two Alexander gradings $A$ and $A'$ corresponding to $z$ and $z'$ respectively. Then the filtrations corresponding to $\mathcal{F}_{t}$ and $\mathcal{F}^{'}_{t}$ satisfy the following relation.
\begin{lem} For $p, n\in \mathbb{Z}$, $p>0$, and $0\leq t\leq \frac{2}{p}$, we have
$$(C, \mathcal{F}^{'}_{t})_{s+\frac{pn(p-1)t}{4}}\subset (C, \mathcal{F}_{pt})_s \subset (C, \mathcal{F}^{'}_{t})_{s+\frac{(pn+2)(p-1)t}{4}}.$$
\end{lem}
\begin{proof}
Let $\textbf{x}$ be a generator of $C$. Assume $U^{-k}\textbf{x}\in (C,\mathcal{F}_{pt})_s$; then $$ \frac{pt}{2}(A(\textbf{x})+k)+(1-\frac{pt}{2})k =\frac{pt}{2}A(\textbf{x})+k \leq s.$$
Combining the above inequality with Proposition 3.2, we have
\begin{displaymath}
\begin{aligned}
&\frac{t}{2}(A'(\textbf{x})+k)+(1-\frac{t}{2})k\\
\leq &\frac{t}{2}(pA(\textbf{x})+\frac{pn(p-1)}{2}+p-1+k)+(1-\frac{t}{2})k\\
=&\frac{pt}{2}A(\textbf{x})+k+\frac{t}{2}\frac{pn(p-1)}{2}+\frac{t}{2}(p-1)\\
\leq &s+\frac{(pn+2)(p-1)t}{4}.
\end{aligned}
\end{displaymath}
Hence $U^{-k}\textbf{x}\in (C,\mathcal{F}^{'}_{t})_{s+\frac{(pn+2)(p-1)t}{4}}$, and therefore $$(C, \mathcal{F}_{pt})_s \subset (C, \mathcal{F}^{'}_{t})_{s+\frac{(pn+2)(p-1)t}{4}}.$$
Similarly, if we assume $U^{-k}\textbf{x}\notin (C,\mathcal{F}_{pt})_s$, then $$ \frac{pt}{2}(A(\textbf{x})+k)+(1-\frac{pt}{2})k > s.$$
Again, in view of the above inequality and Proposition 3.2, we have
\begin{displaymath}
\begin{aligned}
&\frac{t}{2}(A'(\textbf{x})+k)+(1-\frac{t}{2})k\\
\geq &\frac{t}{2}(pA(\textbf{x})+\frac{pn(p-1)}{2}+k)+(1-\frac{t}{2})k\\
=&\frac{pt}{2}A(\textbf{x})+k+\frac{t}{2}\frac{pn(p-1)}{2}\\
> &s+\frac{pn(p-1)t}{4} .
\end{aligned}
\end{displaymath}
Hence $U^{-k}\textbf{x}\notin (C,\mathcal{F}^{'}_{t})_{s+\frac{pn(p-1)t}{4}}$, and therefore $$(C,\mathcal{F}^{'}_{t})_{s+\frac{pn(p-1)t}{4}}\subset (C,\mathcal{F}_{pt})_s.$$
\end{proof}
\begin{proof}[Proof of Theorem \ref{main} for $(p,pn+1)$-cable.] Recall that $$\nu(C,\mathcal{F}_t)=\min\{s|\ H_{0}((C, \mathcal{F}_t)_s)\rightarrow H_{0}(C)\ \text{is \ nontrivial} \},$$
and $\nu(C,{\mathcal{F}_t}^{'})$ is understood similarly. Now, setting $s=\nu(C,\mathcal{F}_{pt})$ in Lemma 3.6, we have $$\nu(C,\mathcal{F}_{pt})+\frac{pn(p-1)t}{4} \leq\nu(C,{\mathcal{F}_t}^{'})\leq \nu(C,\mathcal{F}_{pt})+\frac{(pn+2)(p-1)t}{4}.$$
Recall that $\Upsilon_K(pt)=-2\nu(C,\mathcal{F}_{pt})$ and $\Upsilon_{K_{p,pn+1}}(t)=-2\nu(C,\mathcal{F}'_{t})$, so multiplying by $-2$, the above inequality translates to $$\Upsilon_K(pt)-\frac{(pn+2)(p-1)t}{2}\leq \Upsilon_{K_{p,pn+1}}(t)\leq \Upsilon_K(pt)-\frac{pn(p-1)t}{2}.$$
\end{proof}
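For instance, for the $(2,1)$-cable (that is, $p=2$ and $n=0$), the above specialises to $\Upsilon_K(2t)-t\leq \Upsilon_{K_{2,1}}(t)\leq \Upsilon_K(2t)$ for $0\leq t\leq 1$.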
\subsection{Upsilon of $(p,q)$-cable}
Denote the smooth knot concordance group by $\mathcal{C}$. Let $\theta : \mathcal{C} \rightarrow \mathbb{R}$ be a concordance homomorphism such that $|\theta(K)|\leq g_4(K)$ and $\theta(T_{p,q})=\frac{(p-1)(q-1)}{2}$ when $p,q>0$. In \cite{VC10}, Van Cott proved that if we fix a knot $K$ and $p>0$, and let $$h(l)=\theta(K_{p,l})-\frac{(p-1)l}{2},$$ then we have
\begin{equation}
-(p-1)\leq h(n)-h(r)\leq 0,
\end{equation}
when $n>r$ and both $n$ and $r$ are relatively prime to $p$.
\begin{rmk}
The concordance homomorphism studied by Van Cott has range $\mathbb{Z}$ rather than $\mathbb{R}$, but by checking the argument in \cite{VC10}, it is straightforward to see that this choice does not affect the inequality stated above.
\end{rmk}
Now note that for fixed $t\in (0, 2/p]$, $\frac{-\Upsilon_{K}(t)}{t}$ is a concordance homomorphism which lower bounds the four-genus, and $\frac{-\Upsilon_{T_{p,q}}(t)}{t}=\frac{(p-1)(q-1)}{2}$ when $q>0$ (\cite{LVC15}). So we can take $\theta$ to be $\frac{-\Upsilon_{K}(t)}{t}$ and apply inequality (3.8), from which we get
\begin{equation}
0\leq \bar{h}(n,t)-\bar{h}(r,t)\leq (p-1)t,
\end{equation}
where $\bar{h}(n,t)=\Upsilon_{K_{p,n}}(t)+\frac{(p-1)nt}{2}$. It is easy to see that inequality (3.9) holds at $t=0$ as well, and hence it holds for $0\leq t\leq \frac{2}{p}$.
Following essentially the argument of Corollary 3 in \cite{VC10}, we conclude the proof of our main theorem as follows:
\begin{proof}[Proof of Theorem \ref{main} for $(p,q)$-cable.]
Recall $0\leq t \leq \frac{2}{p}$. First we will show that $$\Upsilon_{K_{p,q}}(t)\geq \Upsilon_K(pt)-\frac{(p-1)(q+1)t}{2}.$$ Take $r$ to be any integer such that $q\geq pr+1$, then by inequality (3.9) we have $$\bar{h}(q,t)-\bar{h}(pr+1,t)\geq 0.$$
In view of the definition of $\bar{h}$, the above inequality translates to
\begin{equation}
\Upsilon_{K_{p,q}}(t) \geq \Upsilon_{K_{p,pr+1}}(t)-\frac{(p-1)(q-pr-1)t}{2}.
\end{equation}
From the previous subsection, we have $$\Upsilon_{K_{p,pr+1}}(t)\geq \Upsilon_K(pt)-\frac{(pr+2)(p-1)t}{2}.$$
Combining this and inequality (3.10), we get $$\Upsilon_{K_{p,q}}(t)\geq \Upsilon_K(pt)-\frac{(p-1)(q+1)t}{2}.$$
The other half of the inequality follows from an analogous argument by considering $\bar{h}(pl+1, t)-\bar{h}(q,t)\geq 0,$ where $l$ is an integer such that $q\leq pl+1$. We omit the details.
\end{proof}
\section{Applications}
\subsection{Computation of $\Upsilon_{(T_{2,-3})_{2,2n+1}}(t)$}
In this subsection, we show how one can compute $\Upsilon_{(T_{2,-3})_{2,2n+1}}(t)$ by using our theorem together with $\widehat{HFK}((T_{2,-3})_{2,2n+1})$, for $n\geq 8$. Note that none of these knots is an L-space knot. For ease of illustration, we give the full procedure of the computation only for the case $K=(T_{2,-3})_{2,17}$. The general case can be done in a similar way.
By Proposition 4.1 of \cite{Hed05a}, for $i\geq 0$, we have
\begin{displaymath}\widehat{HFK}(K,i)\cong
\begin{cases}
\mathbb{F}_{(2)} &i=10\cr
\mathbb{F}_{(1)} &i=9 \cr
\mathbb{F}_{(1)}\oplus \mathbb{F}_{(0)} &i=8 \cr
\mathbb{F}_{(0)}\oplus \mathbb{F}_{(-1)} &i=7\cr
\mathbb{F}_{(i-8)} &0\leq i\leq 6\cr
0 & \text{otherwise}
\end{cases}
\end{displaymath}
Here the subscript stands for the Maslov grading. Note that by using the symmetry $\widehat{HFK}_d(K,i)=\widehat{HFK}_{d-2i}(K,-i)$ \cite{OS04c}, the above equation actually determines the whole of $\widehat{HFK}(K)$.
Now, viewing $CFK^{\infty}(K)$ as $\widehat{HFK}(K)\otimes\mathbb{F}[U,U^{-1}]$ when regarded as an $\mathbb{F}[U,U^{-1}]$-module, we see that the lattice points supporting generators with Maslov grading $0$ are $(0,7)$, $(7,0)$, and $(i,8-i)$, where $-1\leq i \leq 9$. Here, for example, $(0,7)$ means that the corresponding generator has algebraic grading $0$ and Alexander grading $7$.
Note that by Theorem 1.2 of \cite{Hed09}, $\tau(K)=7$. In view of Theorem 13.1 in \cite{Liv14}, we see that for $t\in[0,\epsilon]$, $\Upsilon_K(t)=-2s$, where $s=\frac{t}{2}\cdot 7+(1-\frac{t}{2})\cdot 0=\frac{7t}{2}$ and $\epsilon$ is sufficiently small. In other words, when $t$ is small, $\Upsilon_K$ is determined by the $\mathcal{F}_t$ grading of the generator at $(0,7)$. Now by Theorem 7.1 in \cite{Liv14}, singularities of $\Upsilon_K(t)$ can only occur at values of $t$ for which there is a line of slope $1-\frac{2}{t}$ that contains at least two lattice points supporting generators of Maslov grading $0$. The only $t\in(0,1)$ satisfying this property is $\frac{2}{3}$, giving a line of slope $-2$ that passes through the lattice points $(0,7)$ and $(-1,9)$.
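Explicitly, the line through $(0,7)$ and $(-1,9)$ has slope $\frac{9-7}{-1-0}=-2$, and solving $1-\frac{2}{t}=-2$ gives $t=\frac{2}{3}$.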
So far, we can see that $\Upsilon_K(t)$ is one of the two possibilities below, depending on whether $\frac{2}{3}$ is a singular point or not.
\begin{equation}\Upsilon_K(t)=
\begin{cases}
-7t, &t\in[0,\frac{2}{3}]\cr
2-10t, &t\in[\frac{2}{3},1]
\end{cases}
\end{equation}
Or
\begin{equation}
\Upsilon_K(t)=-7t, \qquad t\in[0,1].
\end{equation}
Note that $T_{2,-3}$ is alternating, so we can apply Theorem 1.14 in \cite{OSS} to obtain $\Upsilon_{T_{2,-3}}(t)=1-|1-t|$ for $t\in [0,2]$. Applying Theorem \ref{main}, we see that when $\frac{1}{2}\leq t\leq 1$, we have
\begin{displaymath}
2-11t\leq \Upsilon_K(t)\leq 2-10t.
\end{displaymath}
Now we see that only (4.1) satisfies this constraint, and hence $\Upsilon_K(t)$ is determined.
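Indeed, at $t=1$ the constraint reads $-9\leq \Upsilon_K(1)\leq -8$: the value $\Upsilon_K(1)=2-10=-8$ from (4.1) is allowed, whereas the value $\Upsilon_K(1)=-7$ from (4.2) violates the upper bound.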
More generally, we have
\begin{prop}
For $n\geq 8$,
$$\Upsilon_{(T_{2,-3})_{2,2n+1}}(t)=
\begin{cases}
-(n-1)t, &t\in[0,\frac{2}{3}]\cr
2-(n+2)t, &t\in[\frac{2}{3},1]
\end{cases}$$
\end{prop}
\begin{proof}
The computation is the same as the discussion above. We refer the reader to \cite{Hed05a} for the formula for $\widehat{HFK}((T_{2,-3})_{2,2n+1})$.
\end{proof}
\subsection{An infinite-rank summand of topologically slice knots}
Let $D$ denote the untwisted positive Whitehead double of the trefoil knot. Fix an integer $p>2$ and let $J_n=((D_{p,1})...)_{p,1}$ denote the $n$-fold iterated $(p,1)$-cable of $D$, for positive integers $n$. Recall that Corollary \ref{cor} states that the $J_n$ for $n=1,2,3,...$ are linearly independent in $\mathcal{C}$ and span an infinite-rank summand consisting of topologically slice knots. To prove this, we first establish two lemmas.
\begin{lem}
Let $\xi_n$ be the first singularity of $\Upsilon_{J_n}(t)$. Then $\xi_n \in [\frac{1}{p^n},\frac{2}{1+p^n}]$. In particular, $\xi_i<\xi_j$ whenever $i>j$.
\end{lem}
\begin{proof}
We first deal with the lower bound. Recall that for any knot $K$, $\Upsilon_K(t)=-\tau(K)t$ when $t<\frac{1}{g_3(K)}$ \cite{Liv14}. Note that $\tau(D)=g_3(D)=1$ by \cite{Hed07} and hence $\tau(J_n)=p^n$ by \cite{Hed09}. This implies $g_3(J_n)=p^n$, since we have $p^n\leq g_4(J_n)\leq g_3(J_n)\leq p^n$. Therefore, $\xi_n\geq \frac{1}{p^n}$.
We now establish the upper bound. Note that $CFK^\infty(D)\cong CFK^\infty(T_{2,3})\oplus A$, where $A$ is an acyclic chain complex \cite{HKL16}. Therefore $\Upsilon_D(t)=\Upsilon_{T_{2,3}}(t)=|1-t|-1$. In particular, $\Upsilon_D(t)= t-2$ when $1\leq t\leq 2$. Applying Theorem \ref{main}, we get $\Upsilon_{J_1}(t)\geq pt-2-(p-1)t=t-2$ when $\frac{1}{p}\leq t\leq \frac{2}{p}$. Inductively, we have $\Upsilon_{J_n}(t)\geq t-2$ when $\frac{1}{p^n}\leq t\leq \frac{2}{p^n}$. Suppose $\xi_n>\frac{2}{1+p^n}$; then there exists $\epsilon>0$ such that $\Upsilon_{J_n}(\frac{2}{1+p^n}+\epsilon)=-p^n(\frac{2}{1+p^n}+\epsilon)=\frac{-2p^n}{1+p^n}-\epsilon p^n<\frac{2}{1+p^n}-2<\frac{2}{1+p^n}+\epsilon-2$, which is a contradiction. Therefore, $\xi_n\leq \frac{2}{1+p^n}$.
\end{proof}
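For instance, with $p=3$ the lemma gives $\xi_1\in[\frac{1}{3},\frac{1}{2}]$ and $\xi_2\in[\frac{1}{9},\frac{1}{5}]$, so indeed $\xi_2<\xi_1$.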
Let $\Delta\Upsilon'_K(t_0)$ denote the slope change of $\Upsilon_K(t)$ at $t_0$, i.e. $\Delta\Upsilon'_K(t_0)=\lim_{t\searrow t_0}\Upsilon'_K(t)-\lim_{t\nearrow t_0}\Upsilon'_K(t)$. Recall that in general $\frac{t_0}{2}\Delta\Upsilon'_K(t_0)$ is an integer \cite{OSS}. The following lemma shows that in some cases we can determine the value of $\frac{t_0}{2}\Delta\Upsilon'_K(t_0)$.
\begin{lem}
Let $K$ be a knot in $S^3$ such that $\tau(K)\geq 0$ and let $\xi$ be the first singularity of $\Upsilon_K(t)$. If $0<\xi<\frac{4}{g_3(K)+\tau(K)}$, then $\frac{\xi}{2}\Delta\Upsilon'_K(\xi)=1$.
\end{lem}
\begin{proof}
Depicting the chain complex $CFK^\infty(K)$ as lattice points in the plane, by Theorem 7.1 (3) of \cite{Liv14} we know that there is a line of slope $1-\frac{2}{\xi}$ containing at least two lattice points $(i,j)$ and $(i',j')$ supporting generators of Maslov grading $0$. Since $\xi$ is the first singularity, by Theorem 13.1 of \cite{Liv14} we may take $(i,j)=(0,\tau(K))$. So we have $\frac{j'-\tau(K)}{i'}=1-\frac{2}{\xi}$, which implies $0<\xi=\frac{2i'}{i'-j'+\tau(K)}<\frac{4}{g_3(K)+\tau(K)}$. Together with the genus bound property of knot Floer homology, $|i'-j'|\leq g_3(K)$, this constrains $|i'|=1$: indeed, since $\xi>0$ we have $2|i'|=\xi\left | i'-j'+\tau(K) \right |<\frac{4(|i'-j'|+\tau(K))}{g_3(K)+\tau(K)}\leq 4$, so $|i'|<2$, while $i'\neq 0$ because the slope $1-\frac{2}{\xi}$ is finite. By Theorem 7.1 (4) of \cite{Liv14}, $\frac{\xi}{2}\Delta\Upsilon'_K(\xi)=|i'|=1.$
\end{proof}
\begin{proof}[Proof of Corollary \ref{cor}]
Note that all $J_n$ have trivial Alexander polynomial and hence are topologically slice \cite{Fre82}. The linear independence follows from Lemma 4.2: suppose $\Sigma k_iJ_{n_i}=0$ in $\mathcal{C}$ for some $k_i\neq 0$ and $n_1>...>n_l$; since $\Upsilon_{\Sigma k_iJ_{n_i}}(t)=\Sigma k_i\Upsilon_{J_{n_i}}(t)$, it possesses its first singularity at $\xi_{n_1}$, which contradicts $\Upsilon_{\Sigma k_iJ_{n_i}}(t)\equiv 0$. To see that they span a summand, note that by Lemmas 4.2 and 4.3, $\frac{\xi_n}{2}\Delta\Upsilon'_{J_n}(\xi_n)=1$. Now one can easily see that the homomorphism $\mathcal{C}\longrightarrow \mathbb{Z}^\infty$ given by $[K]\mapsto (\frac{\xi_n}{2}\Delta\Upsilon'_K(\xi_n))_{n=1}^\infty$ is an isomorphism when restricted to the subgroup spanned by the $J_n$, and hence the conclusion follows.
\end{proof}
\begin{rmk}
One can actually replace $D$ by any topologically slice knot $K$ with $\tau(K)=g(K)=1$, and can even consider mixed iterated cables $((K_{p_1,1})...)_{p_n,1}$. We chose $J_n$ for the sake of an easier illustration. The linear independence of a certain subfamily of mixed iterated cables of $D$ was also observed by Feller, Park, and Ray \cite{FPR16}.
\end{rmk}
A variety of lightly damped Alfv\'{e}nic modes can occur in magnetically confined plasmas. Interaction with fast particles can destabilise these modes, leading to a loss of confinement of these particles. In fusion applications, such losses could reduce heating and damage plasma facing components. One such mode is the toroidicity-induced shear Alfv\'{e}n eigenmode (TAE), which occurs due to the coupling of successive poloidal harmonics in a toroidal plasma. This coupling results in frequency gaps in the spectrum of spatially-localised modes resonant on particular flux surfaces. TAEs, which have extended radial structure, can exist in these frequency gaps. One potentially significant source of damping for these global modes is the resonant transfer of energy from these modes to the strongly damped continuum modes.
Physically, continuum damping represents dissipative effects, such as charge separation and mode conversion which occur near continuum resonances. Such behaviour could be described using two-fluid and kinetic plasma models. However, it is not necessary to model these mechanisms to compute continuum damping. Continuum damping can be calculated using resistive MHD\ as the limit of damping as resistivity is reduced to zero \cite{damping_of_GAEs_in_tokamaks_due_to_resonant_absorption}. In ideal MHD\, a continuum resonance corresponds to a regular singular point. The correct treatment of such a singularity is dictated by the causality condition, analogous to the analysis of Landau damping of plasma oscillations \cite{landau_damping}. Analytic \cite{continuum_damping_of_high_n_TAWs,continuum_damping_of_ideal_TAEs,resonant_damping_of_TAEs_in_tokamaks,continuum_damping_of_low_n_TAEs} and numerical \cite{computational_approach_to_continuum_damping_in_3D_published} methods have been developed for calculating continuum damping using ideal MHD\ based on this condition. In this paper we outline a numerical method in which singular finite elements are used to ensure that the singularity in the ideal MHD\ treatment of continuum damping is properly represented. While we describe the calculation of continuum damping of a TAE\, the method used could be adapted to the continuum damping of other global modes in toroidal plasmas.
Singular finite elements are used in a number of fields, with examples of applications occurring in fracture mechanics \cite{sfe_for_crack_propagation}, electromagnetism \cite{edge_elements_for_eddy_current_problems} and viscous flow \cite{sfe_for_Stokes_flow}. To our knowledge, singular finite element methods have only been employed in one case in plasma physics, to solve the ideal MHD\ Newcomb equation describing a marginally stable cylindrical plasma \cite{galerkin_method_for_DEs_with_regular_singular_points}. This method was applied to the analysis of resistive instabilities. In these cases the asymptotic behaviour around and location of the singular point is known. Singular finite elements have not previously been applied to MHD\ problems with finite frequency oscillations. In the case we analyse, the location of continuum resonances is dependent on the TAE\ eigenvalue. Consequently, determining this eigenvalue using the singular finite element method is an iterative process, in which the estimated location of the resonance must be updated after each iteration.
In section \ref{sec:TAE_equation} of this paper we outline a TAE\ model due to Berk \textit{et al.}\ and show that the resulting eigenvalue equation can be expressed in terms of Hermitian operators. The Frobenius method is applied to this expression to determine the form of the continuum resonance singularities in section \ref{sec:singularities}. Subsequently, in section \ref{sec:FEM} a finite element method is described which incorporates elements with this form. This method is applied to a TAE\ in section \ref{sec:verification} and verified by comparison to the complex contour method.
\section{Shear Alfv\'{e}n eigenmode equation} \label{sec:TAE_equation}
In this paper we consider a TAE\ in a low $\beta$, large aspect ratio circular tokamak with a perfectly conducting wall at the edge of the plasma. For this case Berk \textit{et al.}\ have derived the following coupled mode equation for shear Alfv\'{e}n waves in ideal MHD \cite{continuum_damping_of_low_n_TAEs}:
\begin{eqnarray}
\noindent \frac{d}{dr}\left [r^3\left (\frac{\omega^2}{{v_A}^2}-k_{\parallel m}^2 \right ) \frac{dE_m}{dr}\right]+\frac{d}{dr}\left (\frac{\omega^2}{{v_A}^2} \right )r^2E_m-\left (m^2-1 \right )\left(\frac{\omega^2}{v_A^2}-k_{\parallel m}^2 \right )rE_m && \nonumber \\
\noindent +\frac{d}{dr}\left [\frac{5\epsilon r^4}{2a} \left (\frac{dE_{m+1}}{dr}+\frac{dE_{m-1}}{dr} \right ) \right ]=0 , \label{eq:wave_equation}
\end{eqnarray}
in which $E_m$ is the $m$'th poloidal Fourier component of $\frac{\delta \Phi}{r}$, where $\delta \Phi$ is the perturbation to the electric potential due to the wave. The gauge is set such that the magnetic vector potential $\mathbf{A}$ associated with these oscillations is parallel to the equilibrium magnetic field. The variable $r$ is the radial coordinate in the flux-type straight-field-line coordinates defined by Berk \textit{et al.}\ \cite{continuum_damping_of_low_n_TAEs}. The inverse aspect ratio $\epsilon$ is the ratio of the minor radius $a$ to the major radius $R_0$, $\omega$ is the angular frequency of the mode and $k_{\parallel m}=\frac{1}{R_0}\left (n - \frac{m}{q} \right )$ is the wave number parallel to the magnetic field for poloidal and toroidal mode numbers $m$ and $n$ respectively. For a particular flux surface $v_A = \frac{B}{\sqrt{\mu_0 \rho}}$ is the Alfv\'{e}n speed and $q$ is the safety factor.
Dividing by $a$, equation (\ref{eq:wave_equation}) can be written more compactly as:
\begin{eqnarray}
\Omega^2 L_{\Omega} \left [ E_j \right ] - L_{k} \left [ E_j \right ] = \frac{d}{dx} \left [ \left ( \Omega^2 D_{\Omega,i,j} \left ( x \right ) - D_{k,i,j} \left ( x \right ) \right ) \frac{dE_j}{dx} \right ] && \nonumber \\
+ \left ( \Omega^2 A_{\Omega,i,j} \left ( x \right ) - A_{k,i,j} \left ( x \right ) \right )E_j = 0 . \label{eq:matrix_equation}
\end{eqnarray}
Here we define:
\begin{eqnarray}
D_{\Omega,i,j} = \frac{N}{N_0} \left ( x^3 \delta_{i,j} + \frac{5}{2} \epsilon x^4 \left ( \delta_{i-1,j} + \delta_{i+1,j} \right ) \right ) , && \\
D_{k,i,j} = x^3 \left (n - \frac{m}{q} \right ) ^2 \delta _{i,j} , && \\
A_{\Omega,i,j} = \left ( x^2 \frac{d}{dx} \left (\frac{N}{N_0} \right ) - \left (m^2 - 1 \right ) x \left (\frac{N}{N_0} \right ) \right ) \delta_{i,j} , && \\
A_{k,i,j} = -\left ( m^2 - 1 \right ) \left (n - \frac{m}{q} \right ) ^2 x \delta_{i,j} ,
\end{eqnarray}
in which $x = \frac{r}{a}$ is the normalised radial coordinate and $\Omega = \frac{\omega R_0}{v_{A,0}}$ is the normalised frequency, where $v_{A,0}$ is the Alfv\'{e}n speed at the magnetic axis. The operators $L_{\Omega}$ and $L_{k}$ can be shown to be Hermitian, as we now demonstrate; indeed, as ideal MHD\ is non-dissipative, the corresponding operators must be Hermitian for arbitrary geometry.
If operator $L$ is Hermitian, then:
\begin{equation}
\int_{a}^{b} \left ( L \left [ \mathbf{y} \right ] \right )^{\dagger} {\mathbf{y}}' dx = \int_{a}^{b} \mathbf{y}^{\dagger} \left ( L \left [ {\mathbf{y}}' \right ] \right ) dx , \label{eq:hermitian}
\end{equation}
for all $\mathbf{y}$ and ${\mathbf{y}}'$. If $L$ is a second order differential operator:
\begin{equation}
L \left [ \mathbf{y} \right ] = \left ( L_2 \left (x \right ) \frac{d^2}{dx^2} + L_1 \left (x \right ) \frac{d}{dx} + L_0 \left (x \right ) \right )\left [ \mathbf{y} \right ] . \label{eq:sodo}
\end{equation}
Substituting equation (\ref{eq:sodo}) into equation (\ref{eq:hermitian}) and integrating by parts gives
\begin{eqnarray}
\int_{a}^{b} \left ( L_2 \left (x \right ) \frac{d^2 \mathbf{y}}{dx^2} + L_1 \left (x \right ) \frac{d \mathbf{y}}{dx} + L_0 \left (x \right ) \mathbf{y} \right )^{\dagger} {\mathbf{y}}' dx && \nonumber \\
= \int_{a}^{b} \left ( \frac{d^2 {\mathbf{y}}^{\dagger}}{dx^2} L_2 \left (x \right ) {\mathbf{y}}' + \frac{d {\mathbf{y}}^{\dagger}}{dx} \left ( \frac{d L_2 \left (x \right ) }{dx} - L_1 \left (x \right ) \right ) {\mathbf{y}}' + {\mathbf{y}}^{\dagger} L_0 \left (x \right ) {\mathbf{y}}' \right ) dx .
\end{eqnarray}
The last step assumes that $\mathbf{y} = 0$ for $x = a$ and $x = b$. Hence, to ensure that $L$ is Hermitian, we require that:
\begin{eqnarray}
L_2 = {L_2}^{\dagger} , && \\
2 \frac{d L_2}{dx} - L_1 = {L_1}^{\dagger} , && \\
\frac{d^2 L_2}{dx^2} - \frac{d L_1}{dx} + L_0 = {L_0}^{\dagger} .
\end{eqnarray}
Therefore, for Hermitian matrices $L_1$ and $L_0$ this implies that $\frac{d L_2}{dx} = L_1 $, and thus the equation $L E_m = 0$ has the same form as equation (\ref{eq:matrix_equation}). Clearly, $L_1$ and $L_0$ are Hermitian for a circular tokamak with the large aspect ratio approximation described in equation (\ref{eq:matrix_equation}). However, it can also be shown that the force operator for linearised ideal MHD\ can be expressed in a symmetric form \cite{Principles_of_MHD}. Therefore, it follows that matrices $L_1$ and $L_0$ will be Hermitian for arbitrary geometry. Thus the equations of ideal MHD\ spectral analysis may always be expressed in the form seen in equation (\ref{eq:matrix_equation}), regardless of geometry and simplifying assumptions. Therefore, the methods described here for a simplified case could in principle be applied in three-dimensional geometry or where there is interaction between shear-Alfv\'{e}n and magneto-sonic waves.
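To make the definitions in equation (\ref{eq:matrix_equation}) concrete, the following minimal Python sketch assembles the four matrices at a given radius for a pair of coupled poloidal harmonics. It is purely illustrative: the profile constants are those of the verification case in section \ref{sec:verification}, and the function names are our own choices rather than part of any existing code. Continuum resonances then correspond to radii at which $\det \left ( \Omega^2 D_{\Omega} - D_{k} \right ) = 0$, as discussed in the next section.
\begin{verbatim}
import numpy as np

# Illustrative equilibrium profiles (those of the verification case):
eps, q0, qa, D1, D2 = 0.1, 1.0, 3.0, 0.7, 0.05
q  = lambda x: q0 + (qa - q0) * x**2
N  = lambda x: 0.5 * (1.0 - np.tanh((x - D1) / D2))      # N / N_0
dN = lambda x: -0.5 / (D2 * np.cosh((x - D1) / D2)**2)   # d(N/N_0)/dx

def matrices(x, ms=(1, 2), n=1):
    # Assemble D_Omega, D_k, A_Omega and A_k at radius x for the
    # poloidal harmonics listed in ms (toroidal mode number n).
    M = len(ms)
    D_Om, D_k = np.zeros((M, M)), np.zeros((M, M))
    A_Om, A_k = np.zeros((M, M)), np.zeros((M, M))
    for i, m in enumerate(ms):
        kpar2 = (n - m / q(x))**2
        D_Om[i, i] = N(x) * x**3
        D_k[i, i]  = x**3 * kpar2
        A_Om[i, i] = x**2 * dN(x) - (m**2 - 1) * x * N(x)
        A_k[i, i]  = -(m**2 - 1) * kpar2 * x
        for j, mj in enumerate(ms):
            if abs(m - mj) == 1:   # coupling of adjacent harmonics
                D_Om[i, j] = 2.5 * eps * N(x) * x**4
    return D_Om, D_k, A_Om, A_k
\end{verbatim}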
\section{Continuum resonance singularities} \label{sec:singularities}
Regular singularities occur for $x$ such that the determinant of the matrix $D_{i,j} = \left ( \Omega^2 D_{\Omega,i,j} \left ( x \right ) - D_{k,i,j} \left ( x \right ) \right )$ is zero. At this point the inverse of the operator $L = \Omega ^2 L_{\Omega}-L_{k}$ is unbounded. The behaviour of the wave function near this pole can be found using a Frobenius expansion. Let $z = x - x_r$, where $x_r$ is the location of a pole due to a continuum resonance, and let $A_{i,j} = \left ( \Omega^2 A_{\Omega,i,j} \left ( x \right ) - A_{k,i,j} \left ( x \right ) \right )$. In general $x_r$ will be displaced into the complex plane as $\Omega$ becomes complex due to continuum damping. Where the inverse of $D$ exists, from equation (\ref{eq:matrix_equation}) we can derive the following expression:
\begin{equation}
\frac{d^2 E_j}{dz^2} + D_{i,j}^{-1} \frac{d D_{i,j}}{dz} \frac{dE_j}{dz} + D_{i,j}^{-1} A_{i,j} E_j = 0 . \label{eq:simplified}
\end{equation}
Assume that $\left. \frac{d \left \| D_{i,j} \right \|}{dx} \right |_{x=x_r} \neq 0$. That is, consider a case where the continuum resonance does not coincide with a stationary point of the continuous spectrum. Hence, near the pole, the inverse of matrix $D_{i,j}$ can be approximated in terms of its adjugate as:
\begin{equation}
D_{i,j}^{-1} \approx \frac{\textup{adj} \left ( D_{i,j} \right )}{z \left. \frac{d \left \| D_{i,j} \right \|}{dx} \right |_{x=x_r}} .
\end{equation}
Thus it is possible to write equation (\ref{eq:simplified}) as
\begin{equation}
\frac{d^2 E_j}{dz^2} + \frac{1}{z} M_{i,j} \frac{dE_j}{dz} + \frac{1}{z} N_{i,j} E_j = 0 , \label{eq:Fobenius}
\end{equation}
where
\begin{eqnarray}
M_{i,j} = \left [ \left. \frac{\textup{adj} \left ( D \right ) _{i,k}}{\frac{d \left \| D_{i,j} \right \|}{dx} } \frac{d D_{k,j}}{dx} \right ] \right |_{x=x_r} = \delta_{i,j}, && \\
N_{i,j} = \left [ \left. \frac{\textup{adj} \left ( D \right ) _{i,k}}{\frac{d \left \| D_{i,j} \right \|}{dx} } A_{k,j} \right ] \right |_{x=x_r} .
\end{eqnarray}
Using the Frobenius method, express the solution near the resonance as:
\begin{equation}
E_j = z^k \sum_{l=0}^{\infty } a_{l,j} z^l ,
\end{equation}
where $a_{0,i} \ne 0$ for some $i$. Combining this requirement with equation (\ref{eq:Fobenius}) leads to the indicial equation:
\begin{equation}
k^2 a_{0,i} = 0 .
\end{equation}
Hence, the indicial equation has the double root $k = 0$. Therefore, the indicial equation does not provide two linearly independent solutions. As a consequence, the solution will have general form:
\begin{equation}
E_j = z^k \sum_{l=0}^{\infty } \left ( a_{l,j} z^l + b_{l,j} z^l \ln \left ( z \right )\right ) ,
\end{equation}
where $a_{0,i} \ne 0$ or $b_{0,i} \ne 0$ for some $i$. Thus, the indicial equations are:
\begin{eqnarray}
k^2 a_{0,i} + 2k b_{0,i} = 0 , && \\
k^2 b_{0,i} = 0 .
\end{eqnarray}
For these equations $k = 0$ remains a solution, and if $b_{0,i} = 0$ the previous solution is recovered. Additional non-trivial solutions are found with $b_{0,i} \ne 0$ for some $i$. Thus the solution in the vicinity of the resonance can be approximated to first order as $E_m = a_m + b_m \ln \left ( x - x_r \right )$.
It is possible to express the normalised complex frequency as $\Omega = \Omega_r + i \Omega_i$, where $\Omega_r$ is the normalised real frequency and $\Omega_i$ the normalised damping rate. For complex $\Omega$, the pole due to the continuum resonance will generally have an imaginary component. Consequently, it is necessary to define an analytic continuation of the logarithmic function in the vicinity of the complex pole. The causality condition requires that the logarithmic function is found by analytic continuation along a path which, together with the real axis, encloses the singularity. Physically, this derives from the requirement that a perturbation to the plasma precedes the response it causes. A branch cut exists where $\Re \left (x \right ) = \Re \left (x_r \right )$ and $\Im \left (x \right ) < \Im \left (x_r \right )$ if $\Im \left (x_r \right ) > 0$ or $\Im \left (x \right ) > \Im \left (x_r \right )$ if $\Im \left (x_r \right ) < 0$. Thus the logarithmic function in the Frobenius approximation will be:
\begin{equation}
\ln _{\pm} \left ( x - x_r \right ) = \ln \left | x - x_r \right | \mp \pi i \arg \left ( x - x_r \right ) \pm 2 \pi i \Theta \left ( \Re \left ( x - x_r \right ) \right ) , \label{eq:aclog}
\end{equation}
where $\textup{sgn} \left ( \Im \left ( x_r \right ) \right ) = \pm 1$ and $\Theta \left ( x_r \right )$ is the Heaviside step function.
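As a concrete illustration, the following Python sketch implements a complex logarithm whose branch cut runs vertically away from the real axis through the pole, as described above. For real $x$, the imaginary part then exhibits the step discontinuity at $\Re \left ( x_r \right )$ seen in figure \ref{fig:log_sfe}; the rotation trick used here is simply one convenient way of realising this branch choice, not a prescription from the references.
\begin{verbatim}
import numpy as np

def log_branch(z, im_xr_sign):
    # Complex log of z = x - x_r with a vertical branch cut: downward
    # from the pole when Im(x_r) > 0, upward when Im(x_r) < 0.
    # Rotating the argument moves the principal branch cut (the
    # negative real axis) onto the required vertical half-line.
    if im_xr_sign > 0:
        return np.log(-1j * z) + 0.5j * np.pi
    return np.log(1j * z) - 0.5j * np.pi
\end{verbatim}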
\section{Finite element method} \label{sec:FEM}
In the Galerkin method the eigenvalue problem $\left ( L_a - \lambda L_b \right ) \mathbf{u} = 0$ is discretised by deriving a weak formulation of the problem and approximating the solution as a linear combination of a finite number of basis functions. The weak formulation is expressed in terms of bilinear forms $a\left ( u, v\right )$ and $b\left ( u, v\right )$ defined for $u,v \in V$, where $V$ is a Hilbert space. The eigenfunction $u \in V$ and eigenvalue $\lambda$ are such that $a\left ( u, v\right ) - \lambda b\left ( u, v\right ) = 0$ for all $v \in V$. This problem is discretised by solving for $u^h , v^h \in V^h$, where $V^h$ is an $h$-dimensional subspace of $V$ representing the space spanned by a finite set of basis functions. This allows the problem to be represented as a generalised matrix eigenvalue problem, for which efficient numerical solution procedures exist. The solution obtained using this method is such that its error $e^h$ (the difference between it and the exact solution to the original eigenvalue problem) is Galerkin orthogonal to the space spanned by the chosen basis functions. That is, $a\left ( e^h , v^h \right ) - \lambda b\left ( e^h, v^h \right ) = 0$ for all $v^h \in V^h$. Thus this solution represents a projection of the exact solution onto the chosen space $V^h$. The accuracy of the solution obtained depends on how accurately the eigenfunction can be approximated by a linear combination of the chosen basis elements. Finite element method results can be made to converge by increasing the number of basis functions.
For simplicity, triangular functions with uniform spacing were chosen to form the basis set. These are defined as follows,
\begin{equation}
v_{m,i,j} \left ( x \right ) = \left \{ \begin{array}{rcl}
\left ( 1 - \frac{\left | x - x_i \right |}{\Delta} \right ) \delta_{m,j} &\mbox{ for } x \in \left (x_i - \Delta , x_i + \Delta \right ) \\
0 &\mbox{for } x \not\in \left (x_i - \Delta , x_i + \Delta \right )
\end{array} \right.
\end{equation}
where the centre of the $i$th basis function is located at $x_i = \frac{i-1}{N-1}$, $N$ is the number of elements, and $\Delta = \frac{1}{N-1}$ is the spacing between the centres of adjacent functions. This gives a piecewise linear approximate solution $E_m = \sum_{i=0}^{N} \phi _{m,i,j} v_{m,i,j} \left ( x \right )$.
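As a concrete illustration of the discretisation, the following Python sketch applies the same uniformly spaced triangular elements to the toy eigenvalue problem $-u'' = \lambda u$ with $u \left ( 0 \right ) = u \left ( 1 \right ) = 0$, whose exact eigenvalues are $\left ( k \pi \right )^2$. The toy problem is our own choice, not the TAE\ system; it merely shows how the generalised matrix eigenvalue problem is assembled from such basis functions.
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

nodes = 101                    # grid points, including the boundaries
h = 1.0 / (nodes - 1)
n = nodes - 2                  # interior triangular functions only
# Stiffness K_ij = int phi_i' phi_j' dx, mass M_ij = int phi_i phi_j dx:
K = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h
M = (4.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)) * h / 6.0
lam, vec = eigh(K, M)          # generalised eigenproblem K v = lam M v
print(lam[:3])                 # compare with (pi)^2, (2 pi)^2, (3 pi)^2
\end{verbatim}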
It is possible to express the TAE\ wave equation in terms of a sesquilinear form by taking the scalar product with a function $E_m'$ and integrating from $x = 0$ to $x = 1$. In the absence of continuum resonances this leads to the expression:
\begin{equation}
\Omega^2 \int_{0}^{1} \left \{ \frac{dE_i^*}{dx} D_{\Omega,i,j} \left ( x \right ) \frac{dE_j}{dx} - E_i^* A_{\Omega,i,j} \left ( x \right ) E_j \right \} dx = \int_{0}^{1} \left \{ \frac{dE_i^*}{dx} D_{k,i,j} \left ( x \right ) \frac{dE_j}{dx} - E_i^* A_{k,i,j} \left ( x \right ) E_j \right \} dx ,
\end{equation}
for all continuous $E_m'$ where $E_m\left ( 0 \right ) = E_m'\left ( 0 \right ) = E_m\left ( 1 \right ) = E_m'\left ( 1 \right ) = 0$. However, due to the discontinuity associated with a logarithmic singularity, it is necessary to exclude the continuum resonance from the integration. Let $x_r^-$ and $x_r^+$ be the real valued lower and upper bounds of a region containing the continuum resonance $\Re \left ( x_r \right )$, where $\Re \left ( x_r \right ) - x_r^- \ll 1 $ and $x_r^+ - \Re \left ( x_r \right ) \ll 1 $. If the region where $x \in \left [ x_r^-, x_r^+ \right ]$ is removed from the integration, this leads to the appearance of surface terms:
\begin{eqnarray}
\hspace{-1 cm} \noindent \Omega^2 \left [\left. E_i^* D_{\Omega,i,j}\left ( x \right ) \frac{dE_j}{dx} \right |_{x_r^-}^{x_r^+} + \left ( \int_{0}^{x_r^-} dx + \int_{x_r^+}^{1} dx \right ) \left \{ \frac{dE_i^*}{dx} D_{\Omega,i,j} \left ( x \right ) \frac{dE_j}{dx} - E_i^* A_{\Omega,i,j} \left ( x \right ) E_j \right \} \right ] && \nonumber \\
\noindent = \left [ \left. E_i^* D_{k,i,j}\left ( x \right ) \frac{dE_j}{dx} \right |_{x_r^-}^{x_r^+} + \left ( \int_{0}^{x_r^-} dx + \int_{x_r^+}^{1} dx \right ) \left \{ \frac{dE_i^*}{dx} D_{k,i,j} \left ( x \right ) \frac{dE_j}{dx} - E_i^* A_{k,i,j} \left ( x \right ) E_j \right \} \right ] ,
\end{eqnarray}
for all $E_m'$ continuous on $x \in \left (0 , x_r^- \right ) \cup \left (x_r^+ , 1 \right )$ where $E_m\left ( 0 \right ) = E_m'\left ( 0 \right ) = E_m\left ( 1 \right ) = E_m'\left ( 1 \right ) = 0$. Removing this part of the domain from the integration is equivalent to multiplying the integrand by a weight function $g\left (x \right )$ which is equal to $0$ for $x \in \left ( x_r^-, x_r^+ \right )$ and $1$ elsewhere. The equation clearly lacks any information on the excluded region, and thus expresses a necessary, but not sufficient, condition for the solution $E_m$. Consequently, it is necessary to restrict $E_m$ in the excluded region to those solutions found using the Frobenius expansion. This can be achieved by replacing the finite element basis functions where $x \in \left [ x_r^-, x_r^+ \right ]$ with appropriate singular finite elements. The above applies to cases with a single continuum resonance; however, it could readily be generalised to cases with multiple resonances.
The basis functions described above are replaced with alternative functions reflecting the lowest order terms of the Frobenius expansion over an inner region $x \in \left (a , b \right )$. The bounds $a$ and $b$ are chosen such that $\left ( x_r^- , x_r^+ \right ) \subset \left (a , b \right )$ and both $a$ and $b$ are integer multiples of $\Delta$. The singular basis functions used are defined in terms of the analytically continued logarithmic function expressed in equation (\ref{eq:aclog}),
\begin{equation}
v_{m,log,j} = \left \{ \begin{array}{rcl}
\ln _{\pm} \left ( x - x_r \right ) \delta _{m,j} &\mbox{ for } x \in \left (a , b \right ) \\
\left (1 - \frac{\left (a-x \right )}{\Delta} \right ) \ln _{\pm} \left ( a - x_r \right ) \delta _{m,j} &\mbox{ for } x \in \left (a - \Delta , a \right ) \\
\left (1 - \frac{\left (x-b \right )}{\Delta} \right ) \ln _{\pm} \left ( b - x_r \right ) \delta _{m,j} &\mbox{ for } x \in \left (b , b + \Delta \right ) .
\end{array} \right.
\end{equation}
Such basis functions are illustrated in figure \ref{fig:log_sfe}. By including the discontinuity at $x = \Re \left ( x_r \right )$ due to the continuum resonance pole, such singular basis functions ensure that continuum damping is represented by the imaginary component of the eigenvalue.
Elements which are constant on the inner region are also chosen, reflecting the constant terms in the expansion,
\begin{equation}
v_{m,const,j} = \left \{ \begin{array}{rcl}
\delta _{m,j} \textup{ for } x \in \left (a , b \right ) \\
\left (1 - \frac{\left (a-x \right )}{\Delta} \right ) \delta _{m,j} &\mbox{ for } x \in \left (a - \Delta , a \right ) \\
\left (1 - \frac{\left (x-b \right )}{\Delta} \right ) \delta _{m,j} &\mbox{ for } x \in \left (b , b + \Delta \right ) .
\end{array} \right.
\end{equation}
This type of basis function is illustrated in figure \ref{fig:constant_fe}. To improve convergence, basis functions were also defined which were linear on the inner region, representing the next lowest order terms in the expansion,
\begin{equation}
v_{m,lin,j} = \left \{ \begin{array}{rcl}
\left ( x - x_r \right ) \delta _{m,j} &\mbox{ for } x \in \left (a , b \right ) \\
\left (1 - \frac{\left (a-x \right )}{\Delta} \right ) \left (a - x_r \right ) \delta _{m,j} &\mbox{ for } x \in \left (a - \Delta , a \right ) \\
\left (1 - \frac{\left (x-b \right )}{\Delta} \right ) \left (b - x_r \right ) \delta _{m,j} &\mbox{ for } x \in \left (b , b + \Delta \right ) .
\end{array} \right.
\end{equation}
An illustration of such a basis function is provided in figure \ref{fig:linear_fe}. Inclusion of linear terms reflects the existence of a real component of the solution on the inner region which is anti-symmetric in both real and imaginary parts about the continuum resonance location.
\begin{figure}[h]
\centering
\includegraphics[width=80mm]{lnn_basis_function_tt_3.pdf}
\includegraphics[width=80mm]{lnp_basis_function_tt_3.pdf}
\caption{\label{fig:log_sfe}
Plots (a) and (b) illustrate basis functions which are logarithmic on the inner region $x \in \left ( a,b \right)$, $v_{m,log,j}$, where $\Im \left ( x_r \right )$ is positive and negative respectively. In each case $\Im \left ( x_r \right )$ is finite, resulting in a continuous real component (blue, solid) and an imaginary component with a step-discontinuity at $x = \Re \left ( x_r \right ) $ (gold, dashed).
}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=80mm]{constant_basis_function_3.pdf}
\caption{\label{fig:constant_fe}
A plot illustrating a basis function which is constant over the inner region $x \in \left ( a,b \right)$, $v_{m,const,j}$.
}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=80mm]{linear_basis_function_3.pdf}
\caption{\label{fig:linear_fe}
A plot illustrating a basis function which is linear over the inner region $x \in \left ( a,b \right)$, $v_{m,lin,j}$.
}
\end{figure}
The location of the pole due to the continuum resonance is not known \textit{a priori}, as solutions to $\left \| D_{i,j} \right \| = 0$ are dependent on $\Omega$. If there is an error in the estimated resonance location, the basis functions above will not be able to accurately represent the solution near the resonance. Thus, the solution obtained will depend on the width of the excised region. Consequently it is necessary to apply the singular finite element method using an iterative technique (a schematic code skeleton is given after the list below). The technique is applied as follows:
\begin{enumerate}
\item Compute the real frequency component using the finite element technique incorporating only standard linear elements. This is reasonably accurate, provided that the continuum damping is small in relation to the real frequency component.
\item Estimate the pole location $x_r$, by solving $\Omega_C \left ( x_r \right ) = \Omega$, where $\Omega _C$ is the normalised continuum frequency.
\item Add a small imaginary component to the estimated pole location $x_r$, reflecting that $\Omega _i < 0$ as required by the causality condition. The sign of this component is estimated based on a truncated Taylor series expansion which implies $\Im \left ( x_r \right ) \approx \Omega _i \left. \frac{\partial \Omega _C}{\partial x} \right | _{x=\Re \left ( x_{r} \right )} ^{-1}$.
\item Using singular finite elements, find $\Omega$ as a function of the width of the excised region. A larger excised width reduces the effect of the error in the pole location, though the remaining logarithmic part should be sufficiently large that it accurately reflects the solution.
\item Update the estimate for the complex $\Omega$ by estimating its limiting value, based on the value of $\Omega$ obtained for the largest excised width.
\item Update the estimate for the pole location. Approximate the real component using $\Omega_C \left ( \Re \left ( x_r \right ) \right ) \approx \Re \left ( \Omega \right )$ and then approximate the imaginary component based on the truncated Taylor series in step (iii).
\item Repeat the previous three steps to determine increasingly accurate values for $\Omega$ and $x_r$, which numerical experiment shows converge to constant values. As this occurs the dependence of $\Omega$ on the width of the excised region is removed.
\item Demonstrate convergence with respect to the number of radial grid points $N$ and the width of the inner region $b-a$.
\end{enumerate}
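A schematic skeleton of this iteration is given below in Python. Every helper function appearing here (\texttt{solve\_fem}, \texttt{continuum\_x}, \texttt{dOmegaC\_dx}) is a hypothetical placeholder standing in for the corresponding step above, not part of any existing code.
\begin{verbatim}
# All helpers (solve_fem, continuum_x, dOmegaC_dx) are hypothetical
# placeholders for the numbered steps above.
def iterate_sfe(widths=(0.0025, 0.005, 0.0075, 0.01),
                seed_damping=-1e-3, n_iter=5):
    # (i) real frequency from standard linear elements, with a small
    # negative imaginary seed as required by the causality condition
    Omega = complex(solve_fem(singular=False), seed_damping)
    for _ in range(n_iter):
        # (ii)/(vi): real part of the pole from Omega_C(x_r) = Re(Omega)
        xr_re = continuum_x(Omega.real)
        # (iii)/(vi): imaginary part from the truncated Taylor series
        xr = complex(xr_re, Omega.imag / dOmegaC_dx(xr_re))
        # (iv)-(v): solve with singular elements for each excised width,
        # updating Omega from the value at the largest width
        Omega = [solve_fem(singular=True, pole=xr, excised=w)
                 for w in widths][-1]
    return Omega, xr
\end{verbatim}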
The complex contour method was used to verify the results obtained using singular finite elements. In this technique the eigenvalue problem is solved over a complex contour which is deformed around the complex poles due to continuum resonances in accordance with the causality condition \cite{computational_approach_to_continuum_damping_in_3D_published}. The complex contour chosen is parameterised by $ x=t+i \alpha t \left ( 1-t \right ) $, where $t \in \left (0,1 \right )$. A similar Galerkin method can be used to find the complex eigenvalues in this case, using a basis set composed exclusively of triangular functions. These basis functions are defined along the complex contour in terms of the contour parameter $t$. For the chosen TAE\ case the equilibrium parameters $q\left (r \right )$ and $N\left (r \right )$ are analytic on the region of interest, allowing evaluation along the chosen complex contour.
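For illustration, the deformed contour and the analytic equilibrium profiles of the verification case in section \ref{sec:verification} can be evaluated in Python as follows (the parameter values are simply those quoted there):
\begin{verbatim}
import numpy as np

alpha = -0.02                        # contour deformation parameter
t = np.linspace(0.0, 1.0, 401)
x = t + 1j * alpha * t * (1.0 - t)   # deformed complex contour x(t)
q = 1.0 + (3.0 - 1.0) * x**2         # q(x) = q_0 + (q_a - q_0) x^2
N = 0.5 * (1.0 - np.tanh((x - 0.7) / 0.05))   # N(x) / N_0
\end{verbatim}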
\section{Verification} \label{sec:verification}
A TAE\ mode due to the coupling of the $\left ( m , n \right ) = \left ( 1 , 1 \right )$ and $\left ( 2 , 1 \right )$ harmonics was studied using the simplified model outlined in section \ref{sec:TAE_equation}. This analysis was done for a tokamak with aspect ratio $10$ ($\epsilon = 0.1$). The safety factor profile was chosen to be $q\left (x \right ) = q_0 + \left ( q_a - q_0 \right ) x^2$, where $q_0 = 1.0$ and $q_a = 3.0$. The plasma density profile was selected to be $N\left (x \right )=\frac{N_0}{2} \left (1-\tanh \left (\frac{x-\Delta_1}{\Delta_2}\right ) \right )$, where $\Delta_1 = 0.7$ and $\Delta_2 = 0.05$. A set of basis functions was chosen with the number of radial grid points $N = 401$ and inner region width $b - a = 0.0125$. The convergence of the damping ratio $\frac{\Omega _i}{\Omega _r}$ using the iterative technique outlined in section \ref{sec:FEM} is shown in figure \ref{fig:iterative_convergence}. After five iterations, it was estimated that the normalised frequency was $\Omega = 0.326 - 0.00572i$, corresponding to a damping ratio of $\frac{\Omega _i}{\Omega _r} = -0.0176$ (considering the case where the excised region had width $x_r^+ - x_r^- = 0.005$). A decrease in the inner region width to $b-a = 0.0075$ and an increase to $b-a = 0.015$ altered the damping ratio by just $-0.81\%$ and $0.36\%$ respectively. Increasing the number of radial grid points used to $561$ and $721$ changed the damping ratio by just $-0.24\%$ and $0.03\%$ respectively, indicating that convergence had also occurred with respect to this parameter. The mode structure obtained for the latter resolution is shown in figure \ref{fig:TAE}. Using a finite element complex contour method, the normalised frequency computed was $\Omega = 0.326 - 0.00571i$ and hence the damping ratio was $\frac{\Omega _i}{\Omega _r} = -0.0175$. The convergence of this result with respect to the deformation parameter $\alpha$ is shown in figure \ref{fig:contour_convergence}. The difference between the damping ratios computed using the contour and singular finite element techniques is $0.31\%$.
\begin{figure}[h]
\centering
\includegraphics[width=80mm]{sfe_convergence_plot_6.pdf}
\caption{\label{fig:iterative_convergence}
Damping ratio $\frac{\Omega_i}{\Omega_r}$ as a function of the width of the excised region $x_r^+ - x_r^-$ over five iterations using the singular finite element method. After each iteration $x_r$ is updated based on the value of $\Omega$ for $x_r^+ - x_r^- = 0.01$. The blue circle, gold box, green triangle, orange inverted triangle and purple diamond markers correspond to the first, second, third, fourth and fifth iterations respectively. The damping ratio found using the complex contour method is indicated by the black line.
}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=80mm]{n1m1_2_TAE_721rgp_MJH.pdf}
\caption{\label{fig:TAE}
Mode structure for a TAE\ due to coupling of the $\left ( m , n \right ) = \left ( 1 , 1 \right )$ and $\left ( 2 , 1 \right )$ harmonics with complex frequency $\Omega = 0.3258 -0.00569543 i$, found using the singular finite element method. Solid lines represent real quantities and dashed lines represent imaginary quantities. The continuum resonance pole is located at $x_r = 0.767504 -0.000511 i$. $N = 721$ radial grid points were used with inner region width $b-a = 0.0125$ and excised region width $x_r^+ - x_r^- = 0.005$.
}
\end{figure}
\begin{figure}[h]
\centering
\includegraphics[width=80mm]{contour_convergence_plot_4.pdf}
\caption{\label{fig:contour_convergence}
Damping ratio as a function of the number of radial grid points $N$ for different contour deformation parameters $\alpha$, using the complex contour method. The blue circle, orange square and green triangle correspond to $\alpha = -0.01$, $-0.02$ and $-0.05$. The damping ratio found using the singular finite element method is indicated by the black line.
}
\end{figure}
While the boundary condition $E_1 \left ( 0 \right ) = 0$ used here differs from that used by Berk \textit{et al.} \cite{continuum_damping_of_low_n_TAEs}, introducing this condition does not alter the results obtained to within three significant figures. In the singular finite element solution, $E_1$ assumes an approximately constant value very close to the magnetic axis, as required for regular solutions of equation (\ref{eq:wave_equation}) near the origin. Using a shooting method implementation of the contour method, discussed in \cite{comparison_of_methods_for_numerical_calculation_of_continuum_damping}, with the boundary conditions of Berk \textit{et al.} gives $\frac{\Omega _i}{\Omega _r} = -0.0175$.
\section{Conclusion}
We have described a singular finite element method which successfully reproduces the TAE\ frequency and continuum damping found using the complex contour method. As the continuum damping computed using the latter method has previously been shown to closely agree with the results of resistive MHD\ \cite{computational_approach_to_continuum_damping_in_3D_published}, this agreement demonstrates the validity of the singular finite element method. The small errors in the results of these finite element methods are due to the limited accuracy inherent in approximating a solution with a limited number of finite elements. In the case of the singular finite element technique, these limitations arise due to the finite width of the singular and regular elements as well as the location of the pole used to construct the former.
The singular finite element technique presented here could be readily integrated into existing codes. This would be done by replacing standard finite elements with appropriate singular finite elements in the regions around continuum resonances. Unlike the complex contour technique, it does not require analytic continuation of equilibrium quantities. This is advantageous as finite element plasma stability codes typically employ numerical representations of these quantities which are based on spline interpolation and do not have analytic continuations over the domain of interest \cite{computational_approach_to_continuum_damping_in_3D_published}. In contrast to the complex contour method, the singular finite element method calculates the mode structure for real values of the radial coordinate $r$, rather than over a complex path in that variable. Moreover, less resolution is required to solve eigenvalue equations in ideal MHD\ than in resistive MHD. Thus, the ideal MHD\ singular finite element technique presented may allow calculation of continuum damping in more complicated geometries than has previously been practical, such as for stellarators. However, this singular finite element technique requires the user to demonstrate that the solution has converged with respect to four different parameters (grid resolution, excised width, singular element width and location). In contrast, convergence with respect to two parameters is required in the resistive technique (grid resolution and resistivity parameter) and the complex contour technique (grid resolution and contour deformation).
\section*{Acknowledgments}
We gratefully acknowledge the assistance provided by Em. Prof. R. L. Dewar, particularly in suggesting that a finite element approach be taken to the problem of continuum damping in plasma. We would also like to acknowledge helpful discussions regarding the continuum damping problem with Dr. G. R. Dennis.
\label{S1}
Solar flares and coronal mass ejections (CMEs) result from magnetic reconnection changing magnetic topologies and releasing energy from magnetic loops and active regions produced by the Sun's magnetic dynamo. Sufficiently energetic flares and CMEs can produce large-scale propagating waves, most plausibly in the (magnetosonic) fast-mode and Alfv\'en modes, and pulse-like disturbances. Examples include ``EUV waves'' observed in the lower corona at EUV wavelengths \citep{thompson_etal_1998, warmuth_2007, veronig_etal_2010, patsourakos_and_vourlidas_2012, webb_and_howard_2012}, originally called EIT waves \citep{thompson_etal_1998}, and Moreton waves observed in the chromosphere using H$_{\alpha}$ \citep{moreton_1960,uchida_1968}. CMEs and other sufficiently fast plasma motions - not necessarily faster than the fast-mode speed \citep{pomoell_etal_2008} - can lead to fast-mode waves that steepen nonlinearly into shocks. Usually idealised as abrupt discontinuities, these shocks compress, heat, and alter the bulk velocity of the plasma, amplify and rotate the magnetic field, and can accelerate particles. CME-driven shocks are visible directly in white-light observations \citep{vourlidas_etal_2003,schmidt_etal_2016}. Since shocks convert the kinetic energy of a disturbance into thermal, magnetic, and accelerated particle energy, driven shocks are expected to persist longer than blast-wave shocks, for which the shock has propagated well away from the driver.
Originally defined by near-Earth space observations, solar energetic particles (SEPs) are produced between the Sun and Earth as a result of solar activity, as reviewed for example by \citet{reames_1999} and \citet{klecker_etal_2006}. SEPs have several important space-weather consequences, including radiation damage to technological systems ({\it e.g.} degradation of solar cells and electrical circuit components) and humans ({\it e.g.} astronauts and air travelers), modifying the Earth's radiation belts and environment \citep{baker_etal_2013}, and causing increased particle precipitation into the ionosphere, with associated changes in ionization, plasma density, and radio propagation effects.
The general importance of SEPs and their many associated solar and interplanetary phenomena is that they involve physics that is fundamental, unusually well observed (with high temporal resolution remote imaging data from gamma rays to radio waves, plus {\it in-situ} particle, magnetic-field, and wave observations), and also widely applicable across the fields of astrophysics, plasma physics, and space physics. For instance, SEP production and propagation involves the acceleration of particles in reconnection regions and by shocks and turbulence, the scattering of particles by magnetic (and electric) field turbulence and self-generated waves, the evolution and dynamics of turbulence, interplanetary magnetic-field connectivity, and the propagation and evolution of CMEs and shocks in the corona and solar wind. Similarly, electrons that are accelerated in solar flares and move down towards the chromosphere lead to (reverse drift) Type III solar radio bursts and X-rays, while the associated downward-going ions produce gamma-rays and X-rays. Precipitation of these particles leads to chromospheric heating, expansion, and evaporation fronts. Electrons accelerated outwards lead to (normal drift) Type III bursts in the corona and solar wind, as well as the prompt component of SEPs; the corresponding ions become prompt SEPs. Shocks, whether blast waves produced by flares or CME-driven shocks, contribute strongly to SEPs; the electrons also produce Type II solar radio bursts in the corona and/or solar wind. Finally, Moreton waves, EUV waves, and EIT waves are signatures of dynamic activity that are sometimes associated with SEP acceleration.
Multiple unresolved issues exist concerning the production and propagation of SEPs from the Sun to Earth, their association with flares, CMEs, and the multiple signatures of activity summarized above, and the physics of these signatures themselves. It is plausible that a definitive answer to the questions of whether and how efficiently coronal shocks accelerate SEPs will require carefully combining {\it in-situ} and remote sensing observations with realistic global modelling ({\it e.g.} \citet{lario_etal_2017a}). These observations and models will not only be for SEPs but also for the related phenomena of flares, erupting filaments and CMEs, EUV and Moreton waves, and Type II and III bursts. Some of these, especially the propagation and properties of CMEs and their white-light/EUV and Type II radio signatures, are also relevant to forecasting space weather \citep{schmidt_etal_2013, cairns_and_schmidt_2015, kozarev_etal_2015, schmidt_and_cairns_2015, schmidt_and_cairns_2016}. Resolving these issues is a major focus of the {\it Parker Solar Probe} \citep{fox_etal_2016} and {\it Solar Orbiter} \citep{muller_etal_2013} missions.
In this article we briefly review and summarise these issues and then address them using the major solar and interplanetary events associated with the events of 4 November 2015. Arguably these are an ideal set of events to study. First, all three are major events that occur in essentially the same coronal and interplanetary configurations ({\it e.g.} the same large-scale magnetic connectivity and structures such as streamers and coronal holes) and conditions (except for seed energetic particles) but with two sites for the flares and associated CMEs. Both sites are on the side of the Sun facing Earth, one near the west limb and one near disk center. Second, a very complete set of ground-based and spacecraft observations exists, ranging from remote X-ray to radio wavelengths for light to {\it in-situ} plasma, field, and energetic particle measurements, making these events amenable to comprehensive data-theory comparisons. Third, significant SEP, space-weather, flare, CME, EUV wave, Moreton wave, hard X-ray, microwave, and radio events were produced, all of direct interest for observers, theorists, modelers, simulators, and operational space-weather staff. What is more, a relatively complete set of observations exists for these. Fourth, the events show strong commonalities (M-class flares, observable EUV waves, CMEs, and Type II and III bursts), yet also strong differences (magnetic connectivity, SEP occurrence, and radio bursts). Fifth, an unusually strong space-weather event occurred in association with the third event on 4 November 2015, especially with regard to aviation radar systems \citep{marque_etal_2018}.
Our primary goal is to detail the solar and interplanetary observations for the three events, describing the common and different features, identifying the aspects that are and are not understood now, and providing the basic observations in a form amenable for future, more detailed, comparisons with theoretical and modeling analyses. The secondary goal is to make progress on understanding these solar and interplanetary phenomena, especially those associated with SEPs, shocks, and magnetic-field configurations, by showcasing the necessary elements of a comprehensive analysis including multi-instrument observations and relevant modeling.
The article proceeds by reviewing the issues involved with SEPs, flares, shocks, CMEs, waves, and related signatures (Section \ref{S2}). It then describes the evolution of the parent active regions, coronal magnetic field, and the X-ray and microwave flares for the 4 November 2015 events (Section \ref{S3}). The failed filament eruptions for the first two flares, the Moreton wave for the first flare, and the EUV waves and CMEs for all three flares are described in Section \ref{S4} and shown to be mutually consistent. Section \ref{S5} overviews the radio events, including the properties of the more than five Type II bursts involved, the relative lack of Type IIIs, and the strong microwave and Type IV emission for the first and third events. Section \ref{S6} details the interplanetary plasma and magnetic field context, showing the arrival of a shock early on 4 November associated with an earlier event and the shock and CME associated with the third event. The SEP observations are detailed in Section \ref{S7}. The space-weather aspects of the events are briefly discussed in Section \ref{S8}. A summary of the observations and associated theoretical implications is provided in Section \ref{S9}.
\section{Detailed Theoretical and Observational Context}
\label{S2}
SEPs consist of electron, proton, and ion populations with energies in the range of tens of keV to a few GeV. SEP events can be loosely categorized into impulsive and gradual events, distinguished by the timescales of their intensity profiles and properties such as composition and ionization states \citep{luhn_etal_1984, cane_etal_1986, reames_stone_1986, reames_1988, luhn_etal_1987, reames_1999, klecker_etal_2006}. Impulsive events are attributed to particle acceleration in regions producing solar flares, presumably in magnetic-reconnection regions, and gradual events to acceleration by CME-driven shocks. However, a number of events exhibit characteristics of both impulsive and gradual events ({\it i.e.} timing of intensity profiles and ratios of heavy ions at high energies with hybrid characteristics), blurring the distinction between acceleration at reconnection sites and at shocks \citep{kallenrode_etal_1992, torsti_etal_2002, Klecker_Mobius_Popecki_2006}.
{\it In-situ} observations of CME-driven shocks and their associated energetic particles have shown that particle acceleration at shocks typically results from shock drift acceleration in the quasi-perpendicular regime and diffusive shock acceleration in the quasi-parallel regime \citep[{\em e.g.}][and references therein]{lee_1983,decker_vlahos_1986,kennel_etal_1986,jones_ellison_1991,lee_2005,cohen_2006,desai_giacalone_2016}. Here the two regimes are defined in terms of $\theta_{Bn}$, the angle between the upstream magnetic field ${\bf B}$ and the normal to the local shock surface: the quasi-perpendicular and quasi-parallel regimes correspond to $45^{\circ} \lesssim \theta_{Bn} \lesssim 90^{\circ}$ and $\theta_{Bn} \lesssim 45 ^{\circ}$, respectively. Without {\it in-situ} measurements of shocks and magnetic fields in the corona, determining which acceleration processes take place close to the Sun is very challenging and requires careful examination of multi-wavelength remote observations.
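As a minimal illustration of the geometric classification above, the following Python sketch computes $\theta_{Bn}$ from an upstream field vector and a local shock normal; both vectors here are illustrative placeholders rather than measured values.

\begin{verbatim}
# Sketch: classify local shock geometry from theta_Bn, the angle
# between the upstream field B and the local shock normal n.
import numpy as np

def theta_Bn_deg(B, n):
    """Acute angle (degrees) between upstream B and shock normal n."""
    c = abs(np.dot(B, n)) / (np.linalg.norm(B) * np.linalg.norm(n))
    return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

def regime(theta):
    return "quasi-perpendicular" if theta >= 45.0 else "quasi-parallel"

# Illustrative input vectors (arbitrary units):
theta = theta_Bn_deg(np.array([4.0, 3.0, 0.0]), np.array([1.0, 0.0, 0.0]))
print(theta, regime(theta))   # ~36.9 deg -> quasi-parallel
\end{verbatim}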
Analyses of white light and EUV observations, supported by radio imaging and radio Type II dynamic spectra, have found that shocks can form as low in the corona as heliocentric distances of $1.2$ to $2.2 ~R_{\odot}$ \citep{klassen_etal_1999,veronig_etal_2010,Ma2011,bain_etal_2012,gopalswamy_etal_2013,carley_etal_2013,nitta_etal_2014}, where $R_{\odot}$ denotes the solar radius. Several mechanisms can give rise to shock formation in the corona, including blast waves caused by a sudden release of flare-related energy \citep{Vrsnak_etal_2006} and erupting CMEs which drive shocks as they propagate outwards \citep{dauphin_etal_2006, zimovets_etal_2012}. Determining whether blast-wave or CME-driven shocks are relevant to particular events \citep{howard_pizzo_2016}, and especially to associated Type II bursts and EUV waves, is of particular interest \citep{cane_and_erickson_2005, cairns_2011}. Essentially all interplanetary Type II bursts are interpreted in terms of CME-driven shocks \citep{reiner_etal_1998, bale_etal_1999, cairns_2011}, but this may not be correct for coronal Type IIs.
Type II bursts are interpreted theoretically in terms of: shock-drift acceleration and magnetic mirror reflection of electrons at shocks; development of a beam distribution of reflected electrons; growth of Langmuir waves via the beam instability; and nonlinear wave--wave processes that convert Langmuir wave energy into radio emission near the electron plasma frequency $f_{pe}$ and near $2f_{pe}$ (the so-called fundamental and harmonic radiation, respectively). Relevant reviews include those of \cite{nelson_and_melrose_1985}, \cite{bastian_etal_1998}, and \cite{cairns_2011}. These theories require the source regions to be where the shock is strongly quasi-perpendicular with $80^{\circ} \lesssim \theta_{Bn} \lesssim 90^{\circ}$ \citep{holman_and_pesses_1983, cairns_1986, knock_etal_2001, schmidt_and_cairns_2012b, cairns_and_schmidt_2015, schmidt_and_cairns_2016}. Interestingly, multi-frequency mapping of some Type II bursts shows that source regions at different frequencies can be aligned along a direction that is strongly inclined to the radial \citep{Nelson_and_Robinson_1975, Klein_etal_1999}. This is not expected if the electrons are produced at quasi-perpendicular regions of the shock ({\it e.g.} near the nose for overlying loop fields or at lateral expanding flank regions for quasi-radial fields) or at quasi-parallel regions of the shock ({\it e.g.} near the nose for quasi-radial ${\bf B}$). Recent semi-empirical studies \citep{kozarev_etal_2015, lario_etal_2017a} have suggested that the regions of expected shock acceleration may vary with time, and may move to different locations on the shock surface, depending on the parameters governing acceleration efficiency. Combining remote observations with modeling approaches allows determination of relevant parameters for electron and ion acceleration: $\theta_{Bn}$, the spatial profile of the Alfv\'en speed $V_{A}$, and the lateral expansion of the driving CME \citep{warmuth_and_mann_2005,Temmer_etal_2013, zucca_etal_2014,kozarev_etal_2015,lario_etal_2017a,lario_etal_2017b}.
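For orientation, the emission frequencies follow directly from the local electron density through the standard relation $f_{pe}\,[\mathrm{kHz}] \approx 8.98\sqrt{n_e\,[\mathrm{cm^{-3}}]}$. The short sketch below, with an illustrative density rather than one derived from these events, evaluates the fundamental and harmonic bands:

\begin{verbatim}
# Sketch: fundamental and harmonic Type II frequencies from the local
# electron density, using f_pe[kHz] ~ 8.98 * sqrt(n_e[cm^-3]).
import numpy as np

def f_pe_MHz(n_e_cm3):
    """Electron plasma frequency in MHz for density n_e in cm^-3."""
    return 8.98e-3 * np.sqrt(n_e_cm3)

n_e = 1.0e8                  # cm^-3, illustrative low-corona value
print(f_pe_MHz(n_e))         # fundamental, ~90 MHz
print(2.0 * f_pe_MHz(n_e))   # harmonic, ~180 MHz
\end{verbatim}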
Recent high-cadence observations of large-scale coronal transients, known as ``EUV waves'' (or ``EIT waves'', ``coronal bright fronts (CBFs)'', and ``large-scale coronal propagating fronts (LCPFs)''), suggest that they are signatures of magnetosonic waves or shocks \citep{warmuth_etal_2004, veronig_etal_2010, kozarev_etal_2011, downs_etal_2012}. Here we consistently use the term ``EUV waves'' to avoid unnecessary confusion. EUV waves have been widely studied in the last several years due largely to the significantly improved EUV images in terms of spatial and temporal resolution, spectral coverage, and multipoint views available from the {\it SOlar and Heliospheric Observatory} (SOHO), {\it Solar and TErrestrial RElations Observatory} (STEREO), and {\it Solar Dynamics Observatory} (SDO). We now know that EUV waves are very common during sufficiently impulsive solar eruptions and several studies have characterized them in detail \citep{veronig_etal_2010, patsourakos_etal_2010, hoilijoki_etal_2013}.
The ubiquity of EUV waves during solar eruptions has raised the question of whether they signify shocks or compression waves responsible for accelerating particles observed during the early stages of SEP events. Extending classic works \citep{krucker_etal_1999,torsti_etal_1999}, recent analysis of the temporal relation between the evolution of EUV waves on the solar disk and the {\it in-situ} onset of SEP fluxes for a large sample of events during Cycle 23 has shown a general consistency with wave/shock acceleration for protons but not for electrons \citep{miteva_et_al_2014}. Correspondingly, some analyses find evidence for SEP injections when EUV waves reach the magnetic footpoint of the spacecraft \citep{rouillard_etal_2012} whereas others do not \citep{miteva_etal_2014,lario_etal_2014}. This discrepancy points to the likely complexity of the interactions between the EUV wave, shock (whether blast-wave or CME-driven), CME, flare, and the global coronal magnetic field.
The two STEREO spacecraft and the near-Earth spacecraft {\it Advanced Composition Explorer} (ACE), SOHO, and {\it Wind} allow study of SEP events from multiple vantage points. Observing the same event from a broad range of longitudes \citep[{\it e.g.}][]{dresing_etal_2012,lario_etal_2013,dresing_etal_2014,lario_etal_2014,gomez_herrero_etal_2015} allows us to constrain the longitudinal extent of particle acceleration by shocks and associated magnetic connectivity. For some events very different SEP fluxes and profiles are observed at closely separated spacecraft \citep{KlassenEtAl16}, for others the entire SEP event is very localised in longitude, and for still others the SEP event is observed at all longitudes. These observations and complementary modeling efforts are beginning to unravel the complexity in time, longitude, energy, and species of particle acceleration and transport through the inhomogeneous corona and solar wind \citep{PachecoEtAl17, afanasiev_and_vainio_2013, kozarev_etal_2013}, as well as the associations with radio emissions \citep{cane_etal_2002, schmidt_etal_2014a, cairns_and_schmidt_2015, schmidt_and_cairns_2016}.
The properties of the seed-particle distributions incident on the shock (whether from the ambient background, a flare site, or pre-processed by another event) also affect both the shock-accelerated particle distribution functions ({\it e.g.} the ``injected'' particles subject to propagation analyses \citep{battarbee_etal_2013, agueda_etal_2014}) and related phenomena such as Type II bursts \citep{cairns_etal_2003, knock_etal_2003a, kozarev_etal_2015, schmidt_etal_2016, lario_etal_2017b}. The properties of pre-existing and self-generated turbulence also affect the effectiveness of diffusive shock acceleration ({\it e.g.} \citet{vainio_etal_2014}). Similarly, {\it in-situ} observations of relativistic electrons and ions yield injection/release times and propagation distances that constrain the locations and duration of acceleration events and the effectiveness of wave-particle scattering and diffusion between the source and observer ({\it e.g.} \citet{agueda_etal_2014}).
Magnetic connectivity between the SEP source and the observing location is required, unless sufficient cross-field scattering and diffusion exist, for SEPs to be observed. This requires particles to either be accelerated on field lines connecting to the observer or have access to these open field lines \citep{lario_etal_2017b}. Modeling of solar and interplanetary magnetic structures is then required, for instance using PFSS or other approaches such as MHD simulations \citep{luhmann_etal_2017}, the Archimedean (hereafter Parker) spiral \citep{parker_1958}, or generalized data-driven models \citep{li_etal_2016}.
We return to diagnostics of particle acceleration in flares, clearly vital if the effects of shocks and flares are to be identified, separated, and constrained in detail. Flare signatures are observed at H$_{\alpha}$, white-light, UV to EUV, X-ray, and gamma-ray wavelengths. Flares involve substantial heating (sometimes to over $10$~MK \citep{Lin1981, Caspi2014}), changed magnetic topologies, and particle acceleration. Magnetic reconnection is thus directly relevant but other processes likely contribute to the heating and particle acceleration \citep{Fletcher2011}. The spatial sizes of flaring regions vary widely, from very compact regions ({\it e.g.} the size of low-lying loops) to the size of entire active regions. Similarly the corresponding time-scales and total energy releases also vary widely, from impulsive to long duration and classes A to X, respectively.
Thermal emission from the heated plasma is one component of flare radiation ({\it e.g.} soft X-rays and UV and EUV radiation). Radiation is also emitted by or as a result of energetic particles precipitating into the chromosphere from higher-up acceleration regions; examples include H$_{\alpha}$ radiation and EUV radiation, as well as X-rays produced by bremsstrahlung from energetic electrons with either thermal or nonthermal distribution functions. The X-ray spectra and timescales of bursts can distinguish between thermal and non-thermal electron populations \citep{Holman2011}.
Another crucial signature of electron acceleration, but also of connection to open magnetic field lines, is the Type III solar radio burst \citep{suzuki_and_dulk_1985, bastian_etal_1998, li_etal_2008a, reid_and_ratcliffe_2014}. These bursts involve the accelerated electrons developing a beam distribution function by time-of-flight effects, growth of Langmuir waves by the beam instability, and nonlinear coupling of the Langmuir waves to produce $f_{p}$ and $2f_{p}$ radiation. Type IIIs thus are believed to differ from Type IIs in the source of the accelerated electrons, the beam's detailed formation mechanism, and the beam's characteristic speed (speeds greater than $20$ electron thermal speeds {\it versus} $3$). Type IIIs have widely different starting and ending frequencies, intensities, and drift rates and can drift to lower and higher frequencies, corresponding conventionally to electrons moving away from (normal frequency drift) and towards (reverse frequency drift) the Sun, respectively. The specific reasons Type IIIs are important for SEP, flare, and CME physics are that they are signatures of open magnetic fields accessible to accelerated electrons, are interpreted in terms of electron acceleration in magnetic reconnection regions, and can lead to SEP particles.
Velocity dispersion analyses of the energetic electrons in Type III and SEP events yield injection times and estimated propagation distances (presumably along ${\bf B}$ and assuming negligible energy losses) \citep{lin_1985}. The time-varying pitch-angle distributions can also be compared with theoretical predictions and used to constrain the timing, number, and relative sizes of injections of energetic particles and the transport conditions along the observer's magnetic field line(s) \citep{agueda_etal_2014}. These constraints can then be compared with independent arguments based on the timing, spatial locations, and magnetic connectivity of Type II and III bursts, flares, CMEs, and shocks. A major issue with understanding SEP electrons associated with Type IIIs is that the relativistic electrons appear to have injection times that are typically 10\,--\,20 minutes later than the sub-relativistic electrons (energies $\approx 10 - 50$~keV) that produce Type III bursts near $1$~AU \citep{krucker_etal_1999, haggerty_and_roelof_2002, haggerty_etal_2003}.
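A minimal sketch of such a velocity-dispersion fit is given below: onset times obey $t_{\rm onset}(E) = t_{\rm inj} + L/v(E)$, so a linear fit of $t_{\rm onset}$ against $1/v$ yields the injection time $t_{\rm inj}$ and path length $L$. The onset times used here are placeholders, not measurements from these events.

\begin{verbatim}
# Sketch: velocity-dispersion analysis, t_onset(E) = t_inj + L / v(E).
import numpy as np

C = 2.998e5        # speed of light, km/s
MEC2 = 511.0       # electron rest energy, keV

def v_electron(E_keV):
    """Relativistic electron speed (km/s) for kinetic energy E (keV)."""
    gamma = 1.0 + E_keV / MEC2
    return C * np.sqrt(1.0 - 1.0 / gamma**2)

E = np.array([50.0, 100.0, 300.0])            # energy channels, keV
t_onset = np.array([2050.0, 1690.0, 1370.0])  # s after t=0, illustrative
L_km, t_inj = np.polyfit(1.0 / v_electron(E), t_onset, 1)
print(L_km / 1.496e8, t_inj)   # path length (~1.2 AU) and t_inj (s)
\end{verbatim}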
Numerical simulations and associated theoretical formalisms for predicting the acceleration and transport of ion SEPs typically involve idealisations concerning one or more of the mean free path, scattering, magnetic field, dimensions, shock geometry, acceleration process, or imposed analytic approximations. For instance, the wave-particle scattering near the shock may be calculated with full time-dependent self-consistency \citep{ng_etal_2003} or assumed to proceed to completion with a steady-state diffusive shock acceleration solution \citep{lee_1983,lee_2005,zank_etal_2000,li_etal_2005,li_etal_2009,verkhoglyadova_etal_2010,vainio_etal_2014,hu_etal_2017}. Similarly a constant mean free path may be assumed for the particle transport \citep{Marsh2013,Marsh2015} or magnetic moment-induced focusing to small pitch-angles, magnetic-turbulence effects, and associated pitch-angle scattering included \citep{Jokipii66,matthaeus_etal_2003,zhang_etal_2009,shalchi_etal_2010,hu_etal_2017}, or cross-field diffusion \citep{zhang_etal_2009,droge_etal_2014,he_wan_2015}. Typically, the background magnetic field is assumed to be Parker-like and drift effects are ignored, but drift effects are sometimes included and found important \citep{Marsh2013,Marsh2015} and the magnetic field is sometimes significantly non-Parker-like \citep{schulte_etal_2011,schulte_etal_2012,li_etal_2016}. Finally, the formalisms available sometimes go beyond the usual one-dimensional approximation to two dimensions \citep{kozarev_etal_2013}, or even three \citep{zhang_etal_2009,Marsh2013,Marsh2015}, and they can couple simulations of specific CMEs and their shocks with the particle transport formalism \citep{zank_etal_2000,verkhoglyadova_etal_2010,kozarev_etal_2013,hu_etal_2017}. Existing theory and simulations thus often experience major challenges explaining observed SEP events.
\section{Active Region Evolution, X-ray, Microwave, and Optical Flares}
\label{S3}
We begin the analysis of the 4 November 2015 solar eruptions with an overview of the source active regions, followed by the X-ray and microwave flare observations that define the initial stages of the three events. Table \ref{table_summary} provides a summary of these and other associated observations.
\subsection{Source Active Regions}
Figure~\ref{Fig_SunOverview} overviews the Sun on 4 November 2015, showing a full-disk continuum image and a line-of-sight magnetogram from the {\it Helioseismic and Magnetic Imager} \citep[HMI;][]{Scherrer2012} onboard SDO together with an H$_{\alpha}$ filtergram from the Kanzelh\"ohe Observatory \citep{Poetzi2015}. The two most prominent active regions present on the visible solar hemisphere on 4 November are NOAA AR 12443 located close to disk center (N6,W10), which is the source of Event~3, and AR 12445 located close to the western limb (N16,W76), which is the source of Events~1 and 2. AR 12445 emerged and evolved very rapidly, over a period of only four days, whereas AR 12443 was a long-lived active region.
NOAA AR~12443 is an extended active region of McIntosh class Fkc and magnetic Hale class $\beta\delta$ on 4 November. It developed when it was on the back side of the Sun and rotated onto the visible solar hemisphere on 28 October 2015. Figure~\ref{Fig_AR12443} shows snapshots of the evolution of NOAA AR 12443 over the four days up to and including 4 November, when it produced Event~3 of our study. In contrast, AR~12445 developed very quickly. Figure~\ref{Fig_AR12445} shows the evolution of NOAA AR 12445 from 1 November, when it first became visible, through its fast flux emergence and development, until 4 November, when it produced Events~1 and 2. On 4 November, its McIntosh class was Ekc and its Hale class $\beta\delta$.
\subsection{X-ray and Microwave Emission for the Three Flares on 4 November 2015 -- Flare-related Electron Acceleration and Escape}
The microwave, soft, and hard X-ray emissions (SXR and HXR, respectively), associated with the three flares on 4 November 2015, are signatures and indicators of the electron-heating and acceleration processes occurring in the flaring active regions. HXRs at photon energies above about $20$~keV are dominantly bremsstrahlung from nonthermal electrons interacting with the dense low corona and chromosphere. Microwaves, that is radio emission at frequencies between 1~GHz and several tens of GHz, are usually attributed to gyrosynchrotron radiation of electrons with energies between about $100$~keV and a few MeV. It is necessary to examine the behavior of the spectrum in order to identify potentially competing processes: weak (flux densities below 100~sfu), slowly evolving bursts can also be due to thermal bremsstrahlung, and emission up to a few GHz to collective plasma processes.
\subsubsection{Event 1}
Event 1 occurred at heliographic position (N15,W64) and reached GOES class M1.9 (GOES start time: 03:20~UT, peak time: 03:26~UT). The time histories of the RHESSI HXR, GOES SXR, and {\it Nobeyama Radio Polarimeters} (NoRP) microwave emission for the first flare are displayed in Figure~\ref{Fig_MW1}. They show an impulsive HXR and microwave burst during the rise phase of the SXR burst. The RHESSI HXR burst is observed to high energies, up to about 500~keV, with the peaks near 03:24~UT above $25$~keV and the 3\,--\,12~keV channels peaking near 03:25~UT. The microwave flux density spectrum has its maximum between 17 and 35~GHz, with a peak flux density of about $950$~sfu near 03:24\,--\,03:25~UT. Although not exceptionally high, the flux density is well above values that can be achieved by thermal bremsstrahlung. The burst is hence due to gyrosynchrotron emission. The combination of a high peak frequency and moderately high flux density suggests that the emission occurs in a compact source with a rather strong magnetic field, presumably at low coronal altitudes. The strong HXR and microwave emissions show that the parent flare is a very efficient accelerator of electrons to near-relativistic energies.
Figure~\ref{Fig_aia1} shows snapshots of Event 1 observed by the {\it Atmospheric Imaging Assembly} (AIA) onboard SDO using the $131$~{\AA} EUV filter, sensitive to hot flaring plasma at temperatures of about 10~MK. The three images shown were recorded during the early rise, peak, and decay phases of the event. We overplot RHESSI 6\,--\,12 and 30\,--\,100~keV sources as well as a $17$~GHz microwave image synthesized during the peak of the event from the {\it Nobeyama Radioheliograph} (NoRH). The RHESSI images have been reconstructed with detectors $2$ to $8$ \citep{Lin2002, Hurford2002}, but even with the fine grids included we are not able to resolve the emission from the flare loop and footpoints. All three instruments show that the flare is very compact, with the flare emission originating from a small loop arcade. The endpoints of the loops coincide with flare kernels observed in AIA $1700$~{\AA}. From the RHESSI and AIA images we estimate the distance between the loop footpoints to be about $20$~Mm, and the loop height $< 10$~Mm. These characteristics support the interpretation that the high peak microwave frequency observed by NoRP is due to microwaves originating at low coronal altitudes in a compact source. The AIA $131$~{\AA} sequence plotted in Figure~\ref{Fig_aia1} also shows the impulsive filament eruption towards the South, which also originates from the flaring region.
\begin{landscape}
\begin{table}
\begin{adjustwidth}{-3cm}{}
\caption{Short summary of the phenomena associated with the events of 4 November 2015.}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c}
\hline
 & AR & SXR & GOES & & & & & EUV & CME & Shock / & Space & SEPs \\
Event \# & Location & start time & X-ray & HXR & Microwaves & Type IIs & Type IIIs & wave speed & speed & CME & weather & \\
 & & [UT] & class & & & & & [km s$^{-1}$] & [km s$^{-1}$] & at Earth & & \\
\hline
1 & 12445 & 03:20 & M1.9 & impulsive & Yes & Metric & Weak metric & 750 & 328 & No / No & No & Electrons \\
 & N15W64 & & & & & No IP & IP & S-SE dir. & PA 280$^{\circ}$ & & & No ions \\
 & & & & & & & & & & & & \\
 & & & & & & & & & & & & \\
2 & 12445 & 11:55 & M2.6 & short \& & No & Metric & No metric & 600 & 252 & No / No & No & No electrons \\
 & N12W73 & & & weak & & No IP & No IP & S-SE dir. & PA 280$^{\circ}$ & & & No ions \\
 & & & & & & & & & & & & \\
 & & & & & & & & & & & & \\
3 & 12443 & 13:31 & M3.7 & strong \& & Yes & Metric & Metric & 700 & 580 & Yes / Yes & Yes & Electrons \\
 & N09W04 & & & hard & & IP & IP & N-NW dir. & Halo & & & Ions \\
\hline
\end{tabular}
\label{table_summary}
\end{adjustwidth}
\end{table}
\end{landscape}
\begin{figure}[htbp]
\includegraphics[width=1.0\columnwidth]{fullSun_4Nov2015.png}
\caption{Overview of the Sun at 13:37~UT on 4 November 2015. ({\it Left}) SDO/HMI continuum image, ({\it middle}) SDO/HMI line-of-sight magnetogram, ({\it right}) H$_{\alpha}$ image from the Kanzelh\"ohe Observatory, all recorded during the rise phase of Event~3.}
\label{Fig_SunOverview}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{AR_12443.png}
\caption{Evolution of NOAA AR 12443 from 1 to 4 November 2015. ({\it Top}) SDO/HMI continuum images and ({\it bottom}) SDO/HMI line-of-sight magnetograms (with the grayscale saturated at $\pm 1000$~G).}
\label{Fig_AR12443}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{AR_12445.png}
\caption{Evolution of NOAA AR 12445 from 1 to 4 November 2015. ({\it Top}) SDO/HMI continuum images and ({\it bottom}) SDO/HMI line-of-sight magnetograms (with the grayscale saturated at $\pm 1000$~G).}
\label{Fig_AR12445}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=0.9\columnwidth]{figure4_rev_astrid_3april2018.png}
\caption{Temporal histories of the NoRP microwave ({\it top panel}), RHESSI HXR, and GOES SXR ({\it bottom panel}) emission during Event~1.}
\label{Fig_MW1}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{event1_aia_rhessi.png}
\caption{Snapshots showing the evolution of Event 1 in AIA $131$~{\AA} filtergrams. Red and blue lines are contours from RHESSI X-ray images reconstructed during the early rise phase ({\it left}) and peak ({\it middle}) in the 6\,--\,12 and 30\,--\,100~keV energy bands, respectively. The {\it rightmost image} is after the event. Yellow contours are from a NoRH $17$~GHz image at event peak. Units are arcseconds from Sun center.}
\label{Fig_aia1}
\end{figure}
\subsubsection{Event 2}
The RHESSI HXR time profiles of the two later events are shown in Figure~\ref{Fig_HXR2-3}. Event 2 occurred at heliographic position (N12,W73) and reached GOES class M2.6 (GOES start time: 11:55~UT, peak time: 12:03~UT). RHESSI covered the full impulsive phase of Event 2 and observed enhanced HXR emission up to energies of about $50$~keV. The HXR emission of this event is clearly weaker and softer than that of the first event. However, the AIA $131$~{\AA} and RHESSI images plotted in Figure~\ref{Fig_aia2} reveal that Event~2 is homologous with Event~1, as regards its occurrence in the same region, the small and compact flare loops (also reflected in the short HXR and SXR emission profiles), and the associated ejection of filament material toward the south. At microwave frequencies RSTN (the {\it Radio Solar Telescope Network}, operated by the US Air Force) observed a weak burst of gyrosynchrotron emission with peak frequency $8.8$~GHz between 12:00 and 12:04~UT. Despite the homology, the X-ray and microwave emissions show that Event 2 is a much less efficient electron accelerator than Event 1.
\subsubsection{Event 3}
Event 3 occurred close to disk center, at heliographic position (N09,W04). It is the largest of the three events under study, with GOES class M3.7 (start time: 13:31~UT, peak time: 13:52~UT). GOES and RHESSI light curves are shown in the bottom panel of Figure~\ref{Fig_HXR2-3}. Classifying the GOES light curves in Figures \ref{Fig_MW1} and \ref{Fig_HXR2-3} using the system of \citet{cane_etal_1986}, Events 1 and 2 are impulsive events and Event 3 is a gradual event. The RHESSI hard X-ray observations are restricted to before 13:43~UT due to spacecraft night. However, a comparison with the light curves from {\it Fermi}/GBM shows that RHESSI missed no major burst. The HXR burst is observed up to photon energies of about $100$~keV. The radio emission, however, does show efficient electron acceleration, starting with a moderately strong burst with peak frequency near $9$~GHz in the impulsive phase ($\approx$13:38\,--\,13:50~UT), followed by a long-duration burst in the post-impulsive phase (14:00\,--\,15:00~UT; Type IV burst) that was mainly observed at frequencies below 3~GHz (cf. Section \ref{S5}). Figure~\ref{Fig_aia3} shows snapshots during the early rise, peak, and decay phases of Event~3 in AIA~131~{\AA} together with RHESSI HXRs. In contrast to the compact Events 1 and 2, Event 3 shows an extended flare arcade. The East--West extent of the overall flaring region as observed in the AIA EUV emission is about $140$~Mm. The RHESSI emission is concentrated mostly in the brightest flaring loops observed in AIA $131$~{\AA} (compare the middle and right panels), with a loop footpoint separation of about $50$~Mm, corresponding to a loop apex height of $25$~Mm for a semicircular loop.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{event2_goes_rhessi.png}
\includegraphics[width=1.0\columnwidth]{event3_goes_rhessi.png}
\caption{Time histories of the GOES SXR and RHESSI HXR emission during Event 2 ({\it top}) and Event 3 ({\it bottom}).}
\label{Fig_HXR2-3}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{event2_aia_rhessi.png}
\caption{Snapshots showing the evolution of Event 2 in AIA $131$~{\AA} filtergrams. Red and blue lines are contours from RHESSI X-ray images reconstructed during the early rise phase ({\it left}) and peak ({\it middle}) in the 6\,--\,12 and 20\,--\,50~keV energy bands, respectively. The {\it rightmost image} is after the event.}
\label{Fig_aia2}
\end{figure}
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{event3_aia_rhessi.png}
\caption{Snapshots showing the evolution of Event 3 in AIA $131$~{\AA} filtergrams. Red and blue lines are contours from RHESSI X-ray images reconstructed during the early rise phase ({\it left}) and peak ({\it middle}) in the 6\,--\,12 and 20\,--\,50~keV energy bands, respectively. The {\it rightmost image} is after the event.}
\label{Fig_aia3}
\end{figure}
\section{Failed Filament Eruptions, EUV Waves, and CMEs}
\label{S4}
\subsection{Filament Eruptions and EUV Waves}
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\columnwidth]{events_20151104_v2}
\caption{Summary of eruption evolution for the three events occurring on 4 November 2015. ({\bf a})--({\bf c}) Event 1, beginning with a compact flare-brightening at (N14,W65) at 03:23~UT, followed by a filament eruption and an EUV front which largely propagate in a southwesterly direction. The filament eruption partly failed, with some material falling back to the surface and some escaping outwards. ({\bf d})--({\bf f}) Event 2 has similar evolutionary characteristics to Event 1, beginning with a flare-brightening from the same active region and a filament eruption. The filament is surrounded by two distinct yellow structures, the outer one being an EUV front, while the inner one develops into a loop-like feature seen in panel ({\bf f}). The filament eruption is smaller than for Event 1 and largely failed. ({\bf g})--({\bf i}) Event 3 occurs at disk center, beginning with a brightening of coronal loops and the propagation of a diffuse EUV front in the northwest direction.}
\label{fig:eruption_summary}
\end{figure}
Figure~\ref{fig:eruption_summary} summarises the eruption evolution seen in EUV for the three events, clearly showing moving material and propagating wave features in all three cases. Red, green, and blue colors correspond to AIA data number values for the filters $171$, $193$, and $211$~\AA, respectively. Events 1 and 2 show eruptions of very similar morphology off the west limb, generally directed towards the south, while Event 3 shows the event at disk center, beginning with a brightening of coronal loops, followed by the propagation of a diffuse front, largely in a northwesterly direction out to $\approx 50^{\circ}$~W longitude, before decreasing in intensity.
For Events 1 and 2, most of the intensity appeared off-limb. In order to analyse the off-limb kinematics of Event 1, we extracted intensity traces from lines at several angles originating from the active region, as indicated in Figure~\ref{fig:dt_analysis}a. For one angle, intensity traces at successive times were stacked to produce a distance--time map. The distance--time maps were made for the $171$, $193$, and $211$~\AA~ filters and then combined in the usual RGB format so that activity at the three wavelengths may be represented simultaneously in a single map. Panel b shows one such distance--time map for the $-60^{\circ}$ trace indicated by the red line in Panel a. The map reveals that the eruption along this direction was composed of several distinct features, including a bright yellow EUV front that becomes visible at 03:22~UT and reaches a speed of $990$~km~s$^{-1}$; this was followed by escaping filamentary material with a speed of $255$~km~s$^{-1}$ and finally the failed filamentary eruption, which starts to fall back to the solar surface at 03:45~UT. Traces taken along angles from $-60^{\circ}$ to $-30^{\circ}$ show similar behaviour, while traces along angles from $-20^{\circ}$ to $0^{\circ}$ only show the bright yellow front (no following filament), propagating at speeds of $\sim 800$~km~s$^{-1}$ in this direction. No radially propagating feature could be traced along the $10^{\circ}$ trace or above this angle.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\columnwidth]{dt_analysis
\caption{EUV kinematics summary for Events 1 and 2. ({\bf a}) Snapshot showing the EUV structures. The blue lines that originate at the erupting active region show the directions along which distance--time maps were produced. ({\bf b}) Distance--time map taken along the red line traced at angle $-60^{\circ}$ in panel ({\bf a}), showing the EUV front, filament eruption, and the failed section of the filament eruption. ({\bf c}) Event 2 eruption in the same format as ({\bf a}). The line highlighted in red is the trace at angle $-70^{\circ}$ along which the distance--time map shown in panel ({\bf d}) was constructed. It shows several erupting features including the outer EUV front and the erupting loop.}
\label{fig:dt_analysis}
\end{figure}
The distance--time analysis performed for Event 2 is shown in Figure \ref{fig:dt_analysis}c and d, where the example distance--time map is from the $-70^{\circ}$ trace indicated in Panel c. Again several eruption-related features may be identified, the fastest of which is a bright EUV front that begins just after 12:00~UT and propagates with a speed of $1030$~km~s$^{-1}$. This is followed by a much slower front (most likely the feature which develops into a coronal loop in the images) and then non-erupting filamentary material -- the failed filament eruption is not as pronounced as in Event 1. The same features may be identified in the distance--time maps that are oriented towards the south, {\it i.e.}, $-70^{\circ}$ to $-30^{\circ}$, while traces at $-20^{\circ}$ to $+10^{\circ}$ show only the slow secondary yellow front (loop) propagating at speeds of $\approx 350$~km~s$^{-1}$.
EUV waves from Events 1 and 2 were also seen to propagate on the disk, limited to the south to southeast directions. The speed of the wave as it propagated on the disk was measured along the great circle passing through the flare site \citep{nitta_et_al_2013}. The speed was lower on the disk than that measured off the limb, being $\approx 750$~km~s$^{-1}$ and $\approx 600$~km~s$^{-1}$ for Events 1 and 2, respectively. The EUV wave from Event 3 was diffuse and anisotropic, identifiable only westwards of AR~12443, with the brightest part moving northwestward (Figure~\ref{event3_kinematics}). The speed in that direction was $\approx 700$~km~s$^{-1}$. The EUV wave was seen between 13:40~UT and 13:50~UT, comparable to the period in which Type II bursts were observed (see Section~\ref{S5}). A more detailed analysis of this EUV wave will be given elsewhere.
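A minimal sketch of such a great-circle speed estimate is given below; the two wavefront positions and the time separation are illustrative placeholders rather than measured values.

\begin{verbatim}
# Sketch: on-disk EUV wave speed from two wavefront positions
# (heliographic lat, lon in degrees) separated by dt seconds.
import numpy as np

R_SUN_KM = 6.957e5

def great_circle_km(lat1, lon1, lat2, lon2):
    """Great-circle distance on the solar surface (km)."""
    p1, l1, p2, l2 = map(np.radians, (lat1, lon1, lat2, lon2))
    c = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(l2 - l1)
    return R_SUN_KM * np.arccos(np.clip(c, -1.0, 1.0))

d = great_circle_km(15.0, 64.0, 8.0, 62.0)   # illustrative positions
print(d / 120.0)   # ~730 km/s for a 120 s cadence
\end{verbatim}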
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\columnwidth]{event_3_wave_kinematics
\caption{The kinematics of the EUV wave for Event 3. ({\it Left}): The speed and acceleration are calculated along great circles averaged in $24$ sectors, each $15^{\circ}$ wide, that originate at the flare center. ({\it Right}): Space--time plot of the sector marked in black in the left panel. The normalized GOES $1-8$~\AA\ light curve is overplotted using a black line superposed on a broader white curve. Even in running-difference images the wave front is diffuse. The selected sector is one of the few that let us trace the front edge of the wave.}
\label{event3_kinematics}
\end{figure}
The early stages of the failed eruptions in the low corona were analysed using SDO/AIA for Events 1 and 2, in order to better understand the CMEs they drove. Figure \ref{fig:eruption_patrick} shows the failed filament eruption for Event 1, using 304~\AA~data from SDO/AIA. The compact flare of Event 1 was followed by a filament eruption toward the Southwest in the plane of the sky, along a curved path that is initially at a small angle to the local chromospheric surface. An EUV front and a weak Moreton wave (identified in movies of H$_{\alpha}$ filtergrams from GONG and Kanzelh\"ohe Observatory) accompanied the eruption, propagating both southwards on-disk and in a southwesterly direction off-limb. By 03:37~UT the EUV front had left the AIA field of view and started to dissipate in intensity. While some of the filament was completely ejected, a large portion of it fell back to the solar surface, both towards the active region, and to an area south of the active region. The falling material produced bursts of $304$~\AA~radiation when it hit the chromosphere, presumably from the heating of the plasma.
\begin{figure}[t!]
\centering
\includegraphics[width=1.0\columnwidth]{20151104_filament_kinematics_multipanel
\caption{Filament eruption kinematics from $304$~\AA\ SDO/AIA observations of Event 1. ({\bf a}) Background-subtracted distance--time plot using the curved trajectory in ({\bf c}). ({\bf b})--({\bf d}) Snapshot $304$~\AA\ SDO/AIA images at three times. The filament is ejected behind, and somewhat more slowly than, the EUV wave, and the material in the failed eruption falls back at slightly less than the gravitational acceleration ($0.27$~km~s$^{-2}$).}
\label{fig:eruption_patrick}
\end{figure}
The motion of the filament material can be analysed quantitatively by studying the location (or distance) of bright or dark features as a function of time along a specified path \citep{McCauley:2015}. Taking into account the curved path followed by the erupting material in the bottom panels of Figure \ref{fig:eruption_patrick} leads to the distance--time diagram in the top panel. The linear segment of one fast feature in the top panel corresponds to a speed of $348$~km~s$^{-1}$, with others having top speeds of $\approx 500$~km~s$^{-1}$. The downwards curvature corresponds to acceleration sunwards. The parabolic line corresponds to an acceleration of $0.22$~km~s$^{-2}$, slightly below the Sun's predicted gravitational acceleration (of $0.27$~km~s$^{-2}$) at this distance. Note that there is a difficulty in understanding the paths taken by the erupting and falling material: if the curved eruption path is interpreted as following the local magnetic field, then the more vertical path taken by the falling matter suggests either that the field direction has changed substantially or that the falling matter does not follow the field lines.
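As a consistency check, the expected gravitational deceleration follows from $g(r) = GM_\odot/r^2$. The sketch below evaluates it at the photosphere and at an illustrative altitude, and notes how the fitted acceleration would be extracted from a tracked feature:

\begin{verbatim}
# Sketch: compare a fitted deceleration with solar gravity g = G M / r^2.
G = 6.674e-11      # m^3 kg^-1 s^-2
M_SUN = 1.989e30   # kg
R_SUN = 6.957e8    # m

def g_sun_kms2(height_km=0.0):
    """Gravitational acceleration (km/s^2) at a height above the surface."""
    r = R_SUN + height_km * 1.0e3
    return G * M_SUN / r**2 / 1.0e3

print(g_sun_kms2(0.0))      # ~0.274 km/s^2 at the photosphere
print(g_sun_kms2(5.0e4))    # ~0.239 km/s^2 at 50 Mm altitude
# A quadratic fit d(t) = d0 + v0*t + 0.5*a*t^2 to the tracked feature
# (numpy.polyfit(t, d, 2); a = 2 * coefficient of t^2) then gives the
# acceleration to compare against these values.
\end{verbatim}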
\subsection{CMEs for Events 1 and 2}
Figure~\ref{figure_c2} shows the evolution of the first two CMEs in the field of view of the C2 camera ($2 - 6 ~R_{\odot}$) of the {\it Large Angle and Spectrometric Coronagraph} \citep[LASCO;][]{brueckner_etal_1995} on board SOHO. Figure~\ref{figure_c2}a is a direct image from LASCO~C2 taken at 00:00~UT on 4 November, where the black contours identify the location of the coronal streamers shaping the pre-event topology of the solar corona. Figures~\ref{figure_c2}b--i are difference images at the indicated times and the white contours indicate the location of the coronal streamers identified in Figure~\ref{figure_c2}a. Figures~\ref{figure_c2}b--c at 02:24~UT and 03:12~UT show an unrelated CME (first seen in LASCO~C2 at 02:00~UT) that occurred prior to the events under study. This preceding CME came from a small sigmoid eruption from the decaying AR 12441, which was downgraded to a non-numbered region on 4 November.
\begin{figure}[htbp]
\centering
\includegraphics[width=1.0\columnwidth]{figure_c2_version2}
\caption{The evolution of the CMEs for Events 1 and 2, as seen by the LASCO instrument's C2 camera onboard SOHO. See text for details.}
\label{figure_c2}
\end{figure}
Figures~\ref{figure_c2}d--f (second row) show the evolution of the CME associated with Event 1, which was first seen in C2 at 03:48~UT. It propagated mostly within the streamer, and by about 05:00~UT it had broken into different parts. In fact, it is extremely difficult to see in the LASCO~C3 field of view ($3.5-30 ~R_{\odot}$). The weak and fragmented appearance of this CME may be due to its propagation within the coronal streamer. The estimated plane-of-sky speed at a position angle of $280^{\circ}$ ({\it i.e.} $10^{\circ}$ above the equatorial plane) was $328 \pm 8$~km~s$^{-1}$.
Similarly, Figures~\ref{figure_c2}g--i (third row) show the evolution of the CME associated with Event 2. This CME was first seen in the LASCO/C2 field of view at 12:36~UT. It propagated within the streamer, and by about 14:36~UT, just before the CME from AR 12443 associated with Event 3 occurred, it was already very difficult to see in the LASCO~C2 images. The estimated plane-of-sky speed at a position angle of $280^{\circ}$ ({\it i.e.} $10^{\circ}$ above the equatorial plane) was $252 \pm 14$~km~s$^{-1}$.
Although the CMEs associated with Events 1 and 2 were initially impulsive without a clear driver, the magnetic structure in which they propagated ({\it i.e.} the coronal streamer) might have played a role in their fast weakening, ragged structure, and rapid decay. Similarly, the generation of the metric Type II emissions observed (see Section~\ref{S5}) may have been favored by the closed magnetic field structure at the base of the coronal streamer encountered by these two weak CMEs \citep[{\it e.g.}][]{kong_etal_2015}. As shown below, the fast decay of the CMEs did not favor the production of interplanetary Type II emissions.
\subsection{CME for Event 3}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{cme_event3}
\caption{The evolution of the CME for Event 3. See text for details.}
\label{cme_event3}
\end{figure}
According to the CDAW LASCO CME catalog (\url{http://cdaw.gsfc.nasa.gov/CME\_list}), the CME associated with Event 3 first appeared at 14:48 UT in the West to Northwest directions and then developed into a full halo CME; see Figures \ref{cme_event3}b and c, respectively. Its average plane-of-the-sky speed in the LASCO field of view was $\approx 580$~km~s$^{-1}$ with a slight acceleration that led to a speed of $\approx 620$~km~s$^{-1}$ at the last measurable height of $15 ~R_{\odot}$. Note that the CME front is far from smooth, suggestive of multiple flux ropes or several distinct erupting structures contributing to it. Close examination of the LASCO difference movies reveals a very diffuse front, indicated by a green curve in Figure \ref{cme_event3}a, that moves northward between 14:00 UT and 14:24 UT. The average speed in the image plane was $\approx 1200$~km~s$^{-1}$.
\section{Coronal and Interplanetary Radio Bursts}
\label{S5}
Figure~\ref{fig:radio_summary} shows a summary of all radio bursts occurring on 4 November 2015 during the period 03:00\,--\,16:00~UT in the domain $0.01-1000$~MHz. The figure consists of a mosaic of plots from the {\it Wind}/WAVES, Learmonth, Culgoora, Orf\'{e}es, Nan\c{c}ay Decametric Array (referred to as NDA below), and Callisto (Bleien, Mauritius, and Gauribidanur) spectrographs.
\begin{landscape}
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth,trim=3cm 0cm 0cm 2cm]{radio_summary}
\caption{({\bf a}) Summary of all radio dynamic spectra for the events on 4 November 2015, constructed from a mosaic of {\it Wind}/WAVES, Learmonth, Culgoora, Orf\'{e}es, Nan\c{c}ay (NDA), and Callisto (Bleien, Mauritius, and Gauribidanur) data. ({\bf b}) Event 1 was associated with a metric Type II radio burst that began at $\approx 600$~MHz and does not extend obviously below $10$~MHz. Weak Type III radio bursts occur prior to the Type II burst and then start at the same frequency as the Type II near 03:30:15, 03:31:15, and 03:33:00~UT. These are associated with a small interplanetary Type III observed by {\it Wind}/WAVES in ({\bf a}). ({\bf c}) Event 2 was associated with strong metric Type II emission, beginning at $\approx 800$~MHz. No interplanetary Type IIs or IIIs were observed for this event. ({\bf d})--({\bf e}) Event 3 was associated with at least three metric Type II bursts (see the details in ({\bf e})), multiple metric Type III bursts, and a Type IV burst over the band $20 - 700$~MHz. These metric events led to an interplanetary Type II and numerous interplanetary Type IIIs.}
\label{fig:radio_summary}
\end{figure}
\end{landscape}
From Figure~\ref{fig:radio_summary}a, the three events start near 03:25~UT, 12:00~UT and 13:40~UT. All three events have strong and complex metric Type II radio bursts, all with multiple lanes and both fundamental and harmonic emission. The metric Type III characteristics differ strongly between Events 1 and 2 on the one hand, and Event 3 on the other hand. Unlike the other two, Event 3 has a strong Type IV burst.
\subsection{Event 1}
Figure~\ref{fig:radio_summary}b is an enlargement of Event 1's dynamic spectrum using Learmonth and Culgoora data. It is complex, showing evidence for multiple lanes and several time-varying fundamental and harmonic bands. The strongest harmonic emission starts near $450$~MHz, an unusually high frequency, at $\approx$ 03:25~UT and drifts to $\approx 50$~MHz over a period of $\approx 12$ minutes, although evidence for weaker Type II-like emission exists near $800$~MHz near 03:24~UT. Weak fast-drifting signals from near 03:23:30 to near 03:25~UT at frequencies $\approx 100 - 800$~MHz are likely Type III bursts. They are the low-frequency counterpart of the impulsive microwave burst (Figure \ref{Fig_MW1}). These bursts are cut off near 100~MHz, with no counterpart in the high corona or interplanetary space. Similar Type III bursts start near 03:30:15, 03:31:15, and 03:33:00~UT close to the frequency of the harmonic Type II burst and drift to lower frequency. These latter bursts might be interpreted in terms of SA events \citep{cane_etal_1981, cane_etal_1984, bougeret_etal_1998, reiner_and_kaiser_1999}, where SA variously stands for Shock Accelerated \citep{cane_etal_1981, cane_etal_1984} or Shock Associated \citep{bougeret_etal_1998} or complex Type III-like \citep{reiner_and_kaiser_1999} events, as discussed further below. They are most likely related to the weak interplanetary Type III that is visible in expanded views of the {\it Wind}/WAVES data near these times in the restricted frequency range 8\,--\,14~MHz and then becomes clearly visible in the approximate frequency range $150 - 600$~kHz and period 03:40\,--\,03:50~UT. The fact that there is no strong Type III emission from coronal (RSTN) to interplanetary ({\it Wind}/WAVES) frequencies for this event, despite the evidence for efficient electron acceleration from the X-ray and microwave observations (see Section \ref{S3}), suggests either that most of the accelerated electrons did not reach open magnetic field lines \citep{axisa_1974, Kle:al-10}, or that they were radio-quiet along open field lines \citep{li_and_cairns_2012, li_and_cairns_2013}. The observation of SEP electrons for Event 1 (see Section~7.1) provides strong evidence for the latter interpretation. The multiple lanes of Type II emission, possibly with band-splitting as well, indicate either i) that a single shock has multiple source regions with different densities simultaneously producing observable radio emission or ii) that more than one shock exists and produces radio emission in distinct regions with different densities.
\subsection{Event 2}
Figure~\ref{fig:radio_summary}c shows Event 2's Type II radio burst, observed using Orf\'{e}es and NDA. Overall, this burst was much weaker than the other two. It is also complex, with evidence for multiple lanes that have different frequency-drift rates and sometimes overlap. The event has clear fundamental and harmonic structure, starting at the unusually high frequencies of $\approx 350$~MHz and $\approx 700$~MHz near 12:01:30~UT and drifting to $30$~MHz over a period of 12~minutes. Interestingly, the Type II event starts with quite an intense harmonic that diminishes quickly. By the time the event enters the NDA frequency domain ($\approx 10 - 80$~MHz) it has faded to primarily a weak fundamental band, again with evidence for multiple lanes, and perhaps some very weak harmonic emission. There was no significant interplanetary Type II activity associated with this radio event or the faint microwave burst seen approximately 12:00\,--\,12:04~UT (Section~\ref{S3}). There was also no discernible Type III emission at coronal or interplanetary frequencies, a small difference from the weak Type III emission for Event~1, again interpretable in terms of electrons not being released onto open field lines or not having the right properties to produce observable radio emission.
\subsection{Event 3}
Figures~\ref{fig:radio_summary}d and e show enlargements of the radio bursts of Event 3 above 10~MHz. This was by far the most complex of the three radio burst events. It begins with Type~III bursts starting shortly before the Type~II burst appears. These initial Type IIIs are easily seen near $1$ GHz and below $80$~MHz, while the first Type II emission begins near 13:42~UT and 200~MHz. This Type II mainly exists at NDA frequencies and shows very complex and sporadic bands of emission. Indeed an unusually large number of multiple lanes (not distinguishing between bands and split-bands) are identifiable (at least $6$ and perhaps up to $11$ or beyond depending on the observer's definition) and for an unusually long period, from 13:42~UT near $200$~MHz until about 14:08 UT near $30$~MHz. In addition a broadband, long-lasting, Type IV radio burst exists between about $30$ and $1000$~MHz, typically at higher frequencies than the Type II burst and extending long after the Type II burst has ceased. The Type IV burst is particularly intense at frequencies above 700~MHz, extends to unusually low and high frequencies, and appears to show strong vertically-aligned fine structures, particularly above the NDA domain for 13:50 \,--\, 13:56~UT and after about 14:00~UT. More detailed analysis is required to determine whether or not the fine structures show the increase of duration with decreasing frequency that characterises Type III bursts. The Type IV emission also shows pulsations in intensity on timescales of minutes.
Type III bursts are also observed during much of Event 3, starting with a strong set in the period 13:40 \,--\, 13:42~UT before the Type II emission starts and then continuing intermittently before some more intense events occur at frequencies below the Type II burst during the period 13:49 \,--\, 13:54~UT. These latter events intensify at the frequency of the Type II burst, suggesting a physical association, but they may also have counterparts at frequencies above $\approx 150$~MHz in the Orf\'{e}es domain.
\subsection{Interplanetary and {\it In-Situ} Observations}
Returning to Figure~\ref{fig:radio_summary}a for Event 3, the interplanetary extensions of the metric Type IIs are clearly present until at least 16:00~UT and a frequency of 500~kHz, but plausibly until 17:15~UT and 200~kHz. Fundamental and harmonic pairs are evident. Furthermore, multiple lanes are present again, with one restricted to above 4~MHz before about 14:30~UT and the limits on the other set described previously.
Multiple distinct interplanetary Type IIIs are evident in the frequency range 400~kHz to 14~MHz of Figure~\ref{fig:radio_summary}a, as detailed more below, merging into an extended burst below 200~kHz that has little structure (with this color bar) and lasts until after 16:00~UT. This type of unusually long and bright group of Type III bursts is often called a Type III-L event \citep{cane_etal_2002}. Note that the two sets of metric Type IIIs discussed above connect to multiple individual interplanetary Type IIIs and the merged burst below $200$~kHz.
Next we analyse the {\it Wind} radio and plasma-wave data to constrain observationally the presence or absence of energetic electrons that reach $1$~AU near Earth. Figure \ref{Fig_Carolina_peaks} shows a partial RAD1 dynamic spectrum ($\approx 5 - 250$~kHz) for 4 November 2015, while Figure \ref{fig:radio_summary} shows the entire RAD1 (up to $\approx 1$~MHz) and RAD2 ($\approx 1 - 14$~MHz) domains. Figure \ref{Fig_Carolina_peaks} primarily shows some weak intermittent individual Type IIIs from 13:30\,--\,13:50~UT and then a saturated, long-lasting emission that might be identified as a Type III-L burst \citep{cane_etal_2002}. However, Figure \ref{fig:radio_summary}a clearly shows multiple individual Type IIIs below $\approx 1$~MHz throughout the approximate period 13:30\,--\,14:30~UT, although above $\approx 4$~MHz separation into two groups appears reasonable. Evidence thus exists for Type III electron beams that are radio-quiet above $\approx 4$~MHz but radio-loud below $\approx 1$~MHz.
\begin{figure}[h]
\centering
\includegraphics[width=1.0\columnwidth]{Carolina_RAD1_TypeIII}
\caption{{\it Wind} WAVES/RAD1 dynamic spectrum for 4 November 2015.}
\label{Fig_Carolina_peaks}
\end{figure}
Figure \ref{Fig_Carolina_peaks} shows direct evidence for generation of Langmuir waves associated with the multiple interplanetary Type III bursts present. This evidence is the intensification of the ``plasma line'' near $f_{pe}$, corresponding to the green line near $20$~kHz from 08:00 to 14:00~UT that becomes red in the approximate period 15:00\,--\,16:30~UT and ranges in frequency between about 20 and 30~kHz. Thus, during at least this latter period it appears that Type III electrons are present and unstable to the growth of Langmuir waves.
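For reference, the plasma-line frequency translates directly into the local electron density via $f_{pe}\,[\mathrm{kHz}] \approx 8.98\sqrt{n_e\,[\mathrm{cm}^{-3}]}$. The short sketch below is ours, purely to show the densities implied by the observed frequency range.
\begin{verbatim}
# Electron densities implied by the observed plasma-line frequencies,
# using f_pe [kHz] ~ 8.98 * sqrt(n_e [cm^-3]).
for f_khz in (20.0, 30.0):
    n_e = (f_khz / 8.98) ** 2
    print(f_khz, "kHz ->", round(n_e, 1), "cm^-3")
# 20 kHz -> ~5 cm^-3 and 30 kHz -> ~11 cm^-3, ordinary 1-AU values.
\end{verbatim}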
\section{Magnetic Field Configuration and Interplanetary Conditions}\label{plasma}
\label{S6}
\subsection{Coronal Magnetic Fields and Heliospheric Connectivity}
Figure~\ref{nariaki_pfss_fig}, based on the PFSS model, as adopted by \citet{Schrijver:2003} and implemented in \textsf{SolarSoft}, predicts that the open field lines that reach the ecliptic plane at the source surface, placed at a heliocentric distance of $2.5 ~R_{\odot}$, come from three places: the western periphery of AR~12445 (negative polarity -- pink lines), a coronal hole to the Southeast of the region (positive polarity -- green lines), and the eastern periphery of AR~12443 (positive polarity). Similar results were obtained (not shown) by the National Solar Observatory using its PFSS model with GONG magnetogram data.
\begin{figure}[ht]
\centering
\includegraphics[width=1.0\columnwidth]{nariaki_pfss_fig}
\caption{Predictions of a PFSS model with HMI photospheric magnetic field data for a synoptic magnetic map at the photosphere in Carrington coordinates around the time of the second flare, except that the Carrington longitudes are translated to Earth-view longitudes. The {\it open circles} show the photospheric locations of the flares with solar rotation corrected. The {\it red circle} is the footpoint of the magnetic field line traced from L1 to the source surface at $2.5 ~R_{\odot}$ using {\it Wind} spacecraft velocity data and the Parker spiral (second method in the text). Open field regions are marked in two colors ({\it green}: positive, and {\it pink}: negative).}
\label{nariaki_pfss_fig}
\end{figure}
Instead of using the PFSS model with photospheric magnetic field data, we can start at the Earth and use the solar wind speed observed by the {\it Wind} spacecraft and the Parker spiral model for ${\bf B}({\bf r})$ to estimate the heliographic coordinates on the source surface of the field line that crosses the Earth at a given time. From there, we can map the field line to the photosphere using the PFSS model. Using this approach, the field line that was connected to the Earth at 12:00~UT on 4 November was rooted in the coronal hole between AR~12443 and AR~12445, and its polarity matches that observed at L1. A more sophisticated heliospheric MHD simulation model (provided by Predictive Science, Inc.) shows that the footprint of the Earth-connected field lines became rooted in AR~12443 at 00:00~UT on 4 November. This connection lasted until the next day. A note of caution is that the first eruption may distort the magnetic field configuration (and the other plasma structures) from the PFSS and Predictive Science, Inc. predictions for the second event.
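The ballistic step of this mapping is simple enough to state explicitly. The sketch below is our illustration of that step alone (the actual analysis used \textsf{SolarSoft} tools and additionally maps from the source surface to the photosphere with the PFSS model); it assumes a constant radial wind speed.
\begin{verbatim}
import numpy as np

OMEGA = 2.865e-6        # sidereal solar rotation rate [rad/s]
AU_KM = 1.496e8
R_SUN_KM = 6.96e5

def footpoint_offset_deg(v_sw_kms, r_ss_rsun=2.5):
    # Westward longitude offset, relative to the observer, of the
    # Parker-spiral footpoint at the source surface.
    dr = AU_KM - r_ss_rsun * R_SUN_KM
    return np.degrees(OMEGA * dr / v_sw_kms)

print(footpoint_offset_deg(700.0))  # ~35 deg for the observed fast wind
print(footpoint_offset_deg(400.0))  # ~61 deg for typical slow wind
\end{verbatim}
For the $\approx 700$~km~s$^{-1}$ wind observed on 4 November, the ballistic offset alone is roughly $35$$^{\circ}$~ of longitude west of the observer.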
A different approach altogether is to extrapolate $1$~AU {\it in-situ} solar wind observations back in time and space. Figure \ref{fig:BoLi_1} maps the large-scale magnetic field lines and solar wind velocity streams in the solar equatorial plane, extending up to a distance of \mbox{2~AU} and constructed using a 2D solar wind model \citep{schulte_etal_2011, schulte_etal_2012} and the approach of \citet{li_etal_2016}. The process involves fitting {\it Wind} measurements of $B_\phi$, $B_r$, and $v_{r}$ at $1$~AU to an analytic model, permitting calculation of these quantities and the plasma density from $1$~AU to an inner boundary (nominally at the photosphere). The analytic model assumes the magnetic field to be frozen-in to the plasma, the wind sources to be constant over a solar rotation (so that the wind and its magnetic field lines form a constant pattern that rotates with the Sun, thereby not modelling CME effects properly), the flow speed to be constant along each streamline, and the plasma to corotate with the Sun at the inner boundary. The model and fitting procedure allow the magnetic field to be non-radial at the inner boundary.
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{cropped_fig_Blines-vsw_51_24_ISSI_interval}
\caption{Magnetic map in the solar equatorial plane for the interval 27 October to 11 November 2015, calculated using {\it Wind} spacecraft data and the approach of \citet{li_etal_2016}. {\it Dashed blue lines} show the nominal Parker spirals while {\it blue diamonds} show the position of the Earth on specific days. {\it Dashed green lines} show the Sun--Earth line for each day.}
\label{fig:BoLi_1}
\end{figure}
On 4 November, Earth is near the bottom of Figure \ref{fig:BoLi_1}. The field line that reaches Earth on 4 November leaves the Sun about $45$$^{\circ}$~ westward of the Sun\,--\,Earth line. It is quite closely Parker-like, albeit longer, but its neighbors are not. The nearest eastward field line is directed almost radially near $1$~AU (and so lies at an angle of order $45$$^{\circ}$~ to the nominal Parker direction) and initially moves westward rather than eastward. The nearest westward field line does not proceed far Sunward because it enters a region with magnetic field close to zero (note the opposite directions for the lines that reach $1$~AU between 2 and 4 November), while the next westward field line is part of a loop disconnected from the Sun, although it does reach 0.1~AU. These aspects suggest that electrons produced by Events 1 and 2 very close to the west-limb source (whether flare- or shock-produced) should not be magnetically connected to Earth on 4 November, unless there is substantial scattering or the acceleration region has a large angular width. Semi-quantitatively, it appears that scattering through $25 - 60$$^{\circ}$~ of longitude is needed, on considering the location of AR 12445 on the Sun and comparing the angular distances at $1$~AU between the corresponding field lines that connect to Earth's locations for 31 October to 2 November with the field line for 4 November. However, the disk center source for Event 3 should be magnetically connected to $1$~AU about $15$ degrees eastward of Earth (the angular distance between the field lines that connect to Earth on 4 and 5 November). Thus, Event 3 should be better magnetically connected than Events 1 and 2 but still not well connected.
\subsection{Interplanetary Plasma and Field Observations}\label{ipconditions}
Figure \ref{fig_context} overviews the {\it in-situ} plasma, magnetic field, and particle observations in the vicinity of the Earth from 2 November (DOY 306) to 9 November (DOY 313) 2015. The figure shows from top to bottom: (a) $175 - 315$~keV electron intensities and (b) $1.9 - 4.8$~MeV ion intensities observed by ACE/EPAM \citep{gold_etal_1998}; the solar-wind proton (c) speed, (d) density, and (e) temperature observed by ACE/SWEPAM \citep{mccomas_etal_1998}; the (f) magnetic-field magnitude, (g,h) field angles in Radial Tangential Normal (RTN) coordinates, and (i) $B_z$ component in the Geocentric Solar Ecliptic (GSE) frame observed by ACE/MAG \citep{smith_etal_1998}; and the (j) Dst, (k) AE, and (l) Kp geomagnetic activity indices. The red line in the temperature plot corresponds to the solar-wind proton temperature predicted for non-ICME periods using the solar wind speed and expression of \citet{Elliott_2012}. The vertical solid lines mark interplanetary shock passages observed {\it in-situ} and shaded regions indicate ICME periods identified by I.G. Richardson and H.V. Cane (\url{http://www.srl.caltech.edu/ACE/ASC/DATA/level3/icmetable2.html}).
The enhanced magnetic field and density region observed on 3 November corresponds to a stream-stream interaction region, produced when the high-speed stream (speeds $v_{sw} \approx 700$~km~s$^{-1}$) observed during 3\,--\,4 November compressed the preceding slow solar wind ($v_{sw} \approx 320$~km~s$^{-1}$). The first vertical line corresponds to a reverse shock associated with that CIR. This shock was coincident with the local peak of the energetic proton intensity increase associated with the CIR.
A second interplanetary shock was observed on 4 November 03:24~UT, shortly before the onset of a weak solar electron event (see first label ``SEP'' in the top panel). An interval with ICME signatures was observed 10.8 hours after this shock, as indicated with the first shaded area from 14:10~UT to 19:26~UT in Figure \ref{fig_context}. These signatures included smooth magnetic-field rotation, low temperature, and bi-directional solar-wind electron flux (the latter not shown). No significant increase of the magnetic-field magnitude was observed, suggesting that the spacecraft crossed close to one of the flanks rather than near the central part of the ICME. A second SEP event, showing electron and proton increases, was observed when the spacecraft was inside the ICME. The connection of these first two SEP events to Events 1\,--\,3 is discussed in detail in the next Section. Here we emphasise that the two shocks just discussed reached $1$~AU far too early to be associated with the three solar events on 4 November 2015.
We add for the purpose of the space weather discussion in Section \ref{S8} that a third interplanetary shock was observed on 6 November 17:35~UT, followed by an ICME covering the period 7 November 06:00~UT to 8 November 16:00~UT. This ICME showed very clear signatures, and it was likely the interplanetary counterpart of the CME ejected on 4 November in association with Event 3. The time elapsed between the CME launch and the shock arrival is 52 hours, and the corresponding average transit speed is $800$~km~s$^{-1}$. This average transit speed lies between the two plane-of-the-sky estimates of $\approx 600$ and $1200$~km~s$^{-1}$ estimated for the associated CME in Section 4.3.
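As a simple consistency check (ours), the quoted average speed follows directly from the Sun\,--\,Earth distance and the elapsed time:
\begin{verbatim}
AU_KM = 1.496e8              # Sun-Earth distance [km]
elapsed_s = 52 * 3600.0      # ~13:40 UT on 4 Nov to 17:35 UT on 6 Nov
print(AU_KM / elapsed_s)     # ~799 km/s
\end{verbatim}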
\begin{figure}
\centering
\includegraphics[width=1.0\columnwidth]{ACE_contextual_IMF_SW_KP_added}
\caption{Overview of in-situ observations: ({\bf a}) 175\,--\,315~keV electron intensities, ({\bf b}) 1.9\,--\,4.8~MeV ion intensities, ({\bf c}) solar wind proton speed, ({\bf d}) solar wind proton density, ({\bf e}) solar wind proton temperature, ({\bf f}) magnetic field magnitude, ({\bf g}) magnetic field latitudinal angle, ({\bf h}) magnetic field azimuthal angle, ({\bf i}) $B_{z}$, the $z$ GSE component of the field, ({\bf j}) Dst index, ({\bf k}) AE index, and ({\bf l}) Kp index. {\it Blue vertical lines} identify shocks while {\it shaded areas} correspond to ICME periods. See text for details.}
\label{fig_context}
\end{figure}
\section{SEPs}\label{seps}
\label{S7}
\subsection{SEP Observations}
Figure \ref{fig_seps} shows energetic particle, plasma flow, and magnetic field observations during 4 November 2015. From top to bottom the figure shows: (a) energetic electron intensities observed by SOHO/EPHIN \citep{muller-mellin_etal_1995} at three energy bands, (b) energetic proton intensities observed by ACE/EPAM, SOHO/EPHIN and SOHO/ERNE \citep{torsti_etal_1995} at five energy bands between $1$ and $32$~MeV, solar wind (c) speed, (d) magnetic field magnitude, and (e,f) magnetic field angular coordinates observed by ACE during 4 November. The shaded area corresponds to the first ICME shown previously in Figure \ref{fig_context}. The arrows in the first two panels mark the times of X-ray flares associated with the three events under study. During this period, the SOHO spacecraft was rotated 180 degrees from its nominal pointing, meaning that its particle instruments were pointing perpendicular to the nominal Parker spiral direction and missed the field-aligned particles expected to arrive first.
Two electron increases, with 0.25\,--\,0.74~MeV onset times at 04:11~UT $\pm$2~minutes and 14:19~UT $\pm$2~minutes, were observed. The first increase, associated with Event 1, was clearly observed by the EPHIN instrument and the deflected electron channels of ACE (Figures \ref{fig_context} and \ref{fig_seps}). The second electron increase, associated with Event 3, showed higher intensities than the first electron increase and was accompanied by energetic protons reaching energies up to 60~MeV. Both the electron events and the ion event showed clear velocity dispersion.
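Velocity dispersion of this kind is commonly quantified by regressing the onset time against the inverse particle speed, $t_{onset}(E) = t_{release} + L/v(E)$, where the slope gives the apparent path length $L$ and the intercept the release time at the Sun. The sketch below is ours, with made-up onset times purely for illustration.
\begin{verbatim}
import numpy as np

C_KMS, MC2_KEV = 3.0e5, 511.0

def beta(T_kev):
    # Relativistic speed v/c of an electron with kinetic energy T.
    gamma = 1.0 + T_kev / MC2_KEV
    return np.sqrt(1.0 - 1.0 / gamma**2)

T_kev = np.array([50.0, 150.0, 500.0])      # hypothetical channels
t_onset = np.array([1500.0, 1100.0, 800.0]) # hypothetical onsets [s]

inv_v = 1.0 / (beta(T_kev) * C_KMS)         # s/km
L_km, t_release = np.polyfit(inv_v, t_onset, 1)
print(L_km / 1.496e8, t_release)            # path [AU], release time [s]
\end{verbatim}
With these illustrative numbers the fit returns a path length of $\approx 1.1$~AU; for the real events, such fits are limited by the instrument pointing issues noted above.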
The Event 1 electron increase was not accompanied by an increase of the proton intensity, possibly because any such increase was masked by the decaying proton intensity associated with the prior CIR. The small proton increase starting at 12:00~UT on 4 November, seen at the higher-MeV energies (see the blue curve in the second panel of Figure \ref{fig_seps}), started too early to be associated with Event 2. Therefore, there was no significant in-situ particle increase associated with Event 2. (Interestingly, however, modeling presented in Section 7.3 below suggests that this small proton event could be due to protons accelerated in Event 1 once its shock and CME are well away from the Sun.) During the period under analysis, there were no observations available from STEREO-B (communications with the spacecraft stopped in October 2014). STEREO-A was affected by reduced data return during the pass through solar conjunction, but the beacon data available show that the two SEP increases observed at L1 and described in Figures \ref{fig_context} and \ref{fig_seps} were not observed by STEREO-A (at that time located at a heliographic longitude of $\approx 168$ degrees). Thus the observed SEP electron events do not cover $360$$^{\circ}$~ of heliolongitude at 1~AU.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{particles_one_day_IMF_SW_rev.png}
\caption{Energetic electron and proton, solar-wind speed and magnetic-field vector observations during 4 November 2015, from instruments on SOHO and ACE at L1. {\it Thick vertical black arrows} mark the start times of Events 1\,--\,3.}
\label{fig_seps}
\end{figure}
Figure \ref{fig_seps_comp} shows the intensities and abundance ratios of multiple energetic ions observed by ACE/EPAM, ACE/ULEIS and SOHO/EPHIN between 2 and 9 November 2015. Specifically, the figure provides data for protons, helium (technically both $^{4}$He and $^{3}$He, although usually the fraction of $^{3}$He is expected to be negligible), carbon, and oxygen. The shaded areas correspond to ICME periods. While hypothetical SEP ions from Event 1 might be masked by the CIR-associated increase starting on 3 November, all of these species showed clear increases associated with Event 3. The He/H ratio remained close to 0.1 during the whole of Event 3, which corresponds to the typical values found during impulsive (flare-associated) SEP events (see, {\it e.g.}, Reames 1999). The C/O flux ratio clearly separates the periods with CIR- and SEP-related energetic particles, as found previously \citep{mason_sanderson_1999}, providing additional arguments against Events 1 and 2 producing significant SEP ions. Note that the composition signatures of gradual and impulsive events can sometimes be blurred \citep[{\it e.g.},][]{cohen_2006}, as apparently found here.
\begin{figure}[t]
\centering
\includegraphics[width=0.9\columnwidth]{ACE_ULEIS_SIS_composition.pdf}
\caption{Intensities and abundance ratios of multiple energetic ions observed by ACE/EPAM, ACE/ULEIS and SOHO/EPHIN between 2 and 9 November 2015: ({\bf a}) $1.06-1.9$~MeV proton intensity, ({\bf b}) $0.905-1.28$~MeV/n helium intensity, ({\bf c}) helium to proton abundance ratio at $\approx 1$~MeV/n, ({\bf d}) $4.3-7.8$~MeV proton intensity, ({\bf e}) $4.3-7.8$~MeV/n helium intensity, ({\bf f}) $4.3-7.8$~MeV/n helium to proton ratio, ({\bf g}) $0.16-0.32$~MeV/n carbon intensity, ({\bf h}) $0.16-0.32$~MeV/n oxygen intensity, and ({\bf i}) $0.16-0.32$~MeV/n carbon to oxygen ratio. Increases associated with CIRs and SEPs are labeled, as are ICME periods ({\it shaded regions}) and shocks ({\it blue vertical lines}).}
\label{fig_seps_comp}
\end{figure}
\subsection{Electron Anisotropy Observations and Modeling of SEPs related to Event 3}
\label{sect_electron_SEPs}
The electron data above $250$~keV in Figure \ref{fig_seps} show only a single onset for Event 3. However, at the lower energies presented in Figures \ref{fig_anisotropy1} and \ref{fit_electrons}, the electron event has a first peak approximately ten minutes after the onset, followed by a flat interval and then a second major increase of the particle intensity. Furthermore, analyses of {\it Wind}/3DP sectored data, presented next, show two clear episodes of velocity dispersion during the rising phase of the two peaks. ({\it SOHO}'s orientation prevents similar analyses for this period.) This two-step electron rise is unlikely to be due to local effects because it was observed by both the {\it Wind} and {\it ACE} spacecraft, at that time separated by $130$~Earth radii, both immersed in the ICME and observing a steady field orientation.
Figure \ref{fig_anisotropy1} shows the pitch-angle distributions of 82\,--\,135~keV electrons observed by {\it Wind}/3DP in association with Event 3. During the rising phase of the two peaks, {\it Wind}/3DP observed pitch-angle distributions peaking at small pitch-angles, indicating a good magnetic connection to the solar source region. Moreover, the first order anisotropy index shows a double-peak shape in agreement with the two-step increase of the intensities. These observations suggest that the energetic electron event was composed of two successive groups of injections separated by $\approx 30$ minutes. Later on, after about 15:00~UT, the non-monotonic evolution of the pitch-angle distributions ({\it i.e.} the observation of bidirectional distributions with an increase of sunward propagating electrons) signals that the spacecraft was inside an ICME. These electrons may have been reflected by the converging magnetic-field lines at the opposite ICME leg or by a reflecting magnetic barrier located significantly beyond 1~AU. In summary, the activity on 4 November 2015 was accompanied by an electron event (Event 1) that occurred in the sheath region between an ICME shock and its driver, and by an SEP event with enhanced proton and electron intensities (Event 3) that occurred within the magnetic obstacle of the ICME.
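For concreteness, the first-order anisotropy index used here is, in the usual convention, the normalized first moment of the pitch-angle distribution, $A_1 = 3\int_{-1}^{1}\mu f(\mu)\,d\mu \,/ \int_{-1}^{1} f(\mu)\,d\mu$, which vanishes for an isotropic distribution; we assume this standard definition, since instrument teams differ in detail. A minimal sketch (ours):
\begin{verbatim}
import numpy as np

def anisotropy_index(mu, f):
    # First-order anisotropy A1 = 3<mu> of a pitch-angle distribution
    # f sampled at pitch-angle cosines mu.
    return 3.0 * np.trapz(mu * f, mu) / np.trapz(f, mu)

mu = np.linspace(-1.0, 1.0, 41)
print(anisotropy_index(mu, np.ones_like(mu)))   # 0 for isotropy
print(anisotropy_index(mu, np.exp(2.0 * mu)))   # >0: outward beam
\end{verbatim}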
\begin{figure}[t]
\centering
\includegraphics[width=0.6\columnwidth]{Nina_pitch_angles_electrons_wind}
\caption{Energetic electron observations by {\it Wind}/3DP. From top to bottom: Pitch-angle distributions with color-coded intensity, pitch-angle corresponding to each pitch-angle bin, electron intensities observed in each bin, and the first-order anisotropy index. (The peaks near 13:30\,--\,13:45~UT in the bottom two panels are likely solar-flare light contamination and should be ignored.)}
\label{fig_anisotropy1}
\end{figure}
We modeled the early phase of the 82\,--\,135~keV electron event observed by {\it Wind}/3DP using simulations of the interplanetary transport of solar energetic particles, followed by the optimization of the injection and transport parameters. The transport model \citep{AguedaEtAl08} solves the focused transport equation \citep[see][for the full equation]{Ruffolo95}, which is essentially one-dimensional along a magnetic field line. It assumes the electron solar source at $2~R_{\odot}$ and an Archimedean spiral magnetic-flux tube connecting the Sun and the spacecraft defined by the solar wind speed measured {\it in-situ}. (This approximation is not unreasonable for standard particle transport from the Sun but ignores the electrons observed moving Sunwards and this period's ICME environment.) Interplanetary pitch-angle scattering is parametrized assuming a pitch-angle diffusion coefficient that resembles the predictions of the ``standard model'' \citep{Jokipii66,JaekelEtS92}: the mean free path characterizes the degree of pitch-angle scattering. Following previous works \citep[{\it e.g.}][]{KallenrodeEtAl92}, we take the electron radial mean free path, $\lambda_r$, to be spatially constant.
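For orientation, the equation solved is, in its simplest form (neglecting solar wind convection and adiabatic deceleration; see \citet{Ruffolo95} for the complete version),
\[
\frac{\partial f}{\partial t} + \mu v \frac{\partial f}{\partial z}
+ \frac{1-\mu^{2}}{2L(z)}\,v\,\frac{\partial f}{\partial \mu}
= \frac{\partial}{\partial \mu}\!\left(D_{\mu\mu}(\mu)\,\frac{\partial f}{\partial \mu}\right) + Q(z,\mu,t),
\]
where $f$ is the particle phase-space density, $z$ the distance along the field line, $\mu$ the pitch-angle cosine, $v$ the particle speed, $L(z)$ the focusing length of the Archimedean field, $D_{\mu\mu}$ the pitch-angle diffusion coefficient, and $Q$ the injection near the Sun.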
We used the {\sf SEPinversion} software available in SEPServer (\url{http://server.sepserver.eu}) to infer the release time history and the value of the electron radial mean free path. {\sf SEPinversion} uses an inversion approach to fit the observations and it allows an estimation of the timing and intensity of the release without any {\it a priori} assumption on the profile.
Figure \ref{fit_electrons} shows the electromagnetic and particle data together with the best possible fit inferred using {\sf SEPinversion}. The best fit (second panel) is obtained assuming $\lambda_r = 0.12$~AU and multiple electron injections that occur in two primary groups; this is sometimes loosely called a two-episode release time profile, but notice that the best-fit model contains at least six electron injections, not two. The finding of two primary groups of releases is qualitatively consistent with the two-episode injection scenario signalled by the anisotropy index (Figure \ref{fig_anisotropy1}, bottom panel).
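While we cannot reproduce the {\sf SEPinversion} internals here, the essence of such an inversion can be sketched: the observed intensity time series is modelled as a non-negative combination of simulated responses to impulsive injections at a grid of release times, for each trial mean free path. The following is our illustration only, with random placeholders standing in for the real simulated responses and data.
\begin{verbatim}
import numpy as np
from scipy.optimize import nnls

# G[:, j]: simulated 1-AU response (from the transport model, for one
# trial lambda_r) to a unit injection at release time j.
# obs: the observed intensity time series on the same time grid.
rng = np.random.default_rng(0)
G = np.abs(rng.normal(size=(200, 20)))     # placeholder responses
obs = G @ np.abs(rng.normal(size=20))      # placeholder observations

inj, resid = nnls(G, obs)   # non-negative injection time profile
print(inj.round(2), resid)
\end{verbatim}
Repeating the fit over a grid of $\lambda_r$ values and selecting the best goodness of fit yields both the release history and the mean free path, in the spirit of the result quoted above.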
\begin{figure}[t]
\centering
\includegraphics[width=0.6\columnwidth]{fit_electrons}
\caption{{\it From top to bottom}: Radio spectra observed by {\it Wind}/WAVES (colour-coded by intensity, with red large and black background) and the 13~MHz emission (white curve, relative intensity) on 4 November 2015 (for comparison purposes with the particle data, the emissions were shifted by $-500$~s); 82\,--\,135~keV electron source profile deduced at $2~R_{\odot}$; omni-directional intensities observed by {\it Wind}/3DP in the energy range 30\,--\,400~keV, with the black curve showing the best fit to the 82\,--\,135~keV data for $\lambda_r = 0.12$~AU; the two bottom panels show the modeled and observed 82\,--\,135~keV electron pitch-angle distributions normalized to the maximum value ($1 =$ {\it red} to $0 =$ {\it blue}).}
\label{fit_electrons}
\end{figure}
Figure \ref{fit_electrons} shows that the model results cannot quantitatively explain the injection timing, as the two groups of inferred injections start $\approx 10 - 20$ minutes after the beginning of the radio emission. The exact timing of the electron releases with respect to the electromagnetic emissions depends in part on the length of the interplanetary path, with a longer path allowing the injections to move earlier in time and so agree better with the radio events. The analysis in Figure \ref{fit_electrons} assumes a nominal Parker spiral, whereas the field line length may be different. For instance, since the electrons were detected within an ICME, the field lines may be longer because of twisting \citep[but see][]{kahler_etal_2011}. Alternatively, the field line that reaches Earth on 4 November in Figure \ref{fig:BoLi_1} is predicted to have a length of $1.2$~AU, while the nominal spiral is $1.1$~AU long. This would allow the electrons to be released $\approx 0.1/1.1\times 25 \approx 2.3$~minutes earlier, corresponding to moving the releases in Figure \ref{fit_electrons} to the left by $2.3$~minutes. Since the impulsive coronal radio emissions start $\approx 10 - 20$~minutes earlier than the first set of predicted electron releases, the increased field line lengths in Figure \ref{fig:BoLi_1} are insufficient to explain the timing difference. Instead, an enhanced field-line length in the range 1.5\,--\,1.9~AU is required, a clear quantitative problem.
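The required path length follows from simple scaling (our check): if the release must move earlier by $\Delta t$ and the nominal $1.1$~AU path corresponds to a $\approx 25$-minute travel time for these electrons, then $L \approx 1.1\,(1+\Delta t/25)$~AU:
\begin{verbatim}
L0_AU, T0_MIN = 1.1, 25.0    # nominal path length and travel time
for dt_min in (10.0, 20.0):
    print(dt_min, "min ->", round(L0_AU * (1 + dt_min / T0_MIN), 2), "AU")
# 10 min -> 1.54 AU and 20 min -> 1.98 AU, matching the quoted range
# to within rounding.
\end{verbatim}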
The two groups of releases inferred from the modeling at first glance appear to agree with the two main groups of radio emissions apparent in the $13$~MHz emissions in Figure \ref{fit_electrons} (top panel, white curve) near 13:30\,--\,13:45 and 13:50\,--\,14:20~UT. However, Figure \ref{fig:radio_summary} clearly shows multiple individual Type IIIs below $\approx 1$~MHz throughout the approximate period 13:30\,--\,14:30~UT, although above $\approx 4$~MHz separation into two groups appears reasonable. Evidence thus exists for Type III electron beams that are radio-quiet above $\approx 4$~MHz but radio-loud below $\approx 1$~MHz, as modelled by \citet{li_and_cairns_2012,li_and_cairns_2013}. The radio emissions and the Langmuir waves and energetic electrons observed at $1$~AU show release of electrons into interplanetary space that are connected to L1. The concept of Type III cannibalism \citep{li_etal_2002} is a plausible way to explain qualitatively why the multiple distinct decametric Type IIIs observed correspond to only two distinct signatures in the electron pitch-angle data.
\subsection{SEP Proton Event Modeling}
As summarised in Section 2, there are many challenges involved in modelling specific SEP events quantitatively, with many formalisms, models, and approximations involved. The approach adopted here is to present a modelling challenge to the community for the events of 4 November 2015 and to present one unusual analysis (one that includes drift effects but assumes a constant, energy-independent mean free path along Parker spiral fields) whose surprisingly good results are intended to stimulate the community into a concerted effort.
Analysis and interpretation of energetic protons observed from the 4 November 2015 events were hampered by the rotation of SOHO, which resulted in the ERNE instrument pointing perpendicular to the mean Parker spiral direction. Thus, applicable flux anisotropy profiles are not available and an analogue to the inversion method used for electrons in Section \ref{sect_electron_SEPs} is not possible. Similarly, although {\it Wind}/3DP observed flux anisotropies below \mbox{11 MeV} (not shown here), the observations were inadequate for producing reliable inversion results.
An alternative to applying an inversion method to data to obtain the particle transport parameters and mean free paths (usually assuming transport confined to a single field line) is to allow for drifts and cross-field motion and to assume values for the prevalent mean free paths. This approach can also place constraints on the magnetic connectivity between the SEP source and observing location. Accordingly, we performed a number of simulations of SEP
(proton) transport for the 4 November 2015 events using the full-orbit propagation model of \citet{Marsh2013}, capable of accounting for drifts and deceleration effects. The input database was constructed using the approximation of a constant, energy-independent, proton mean free path of $0.3$~AU.
The results of the simulations were fed into the SEP forecasting tool described by \citet{Marsh2015}, assuming a solar wind speed of $700$~km s$^{-1}$ (see Figure \ref{fig_context}) and the associated Parker-spiral magnetic field connectivity for an assumed unipolar outward-pointing field at the coronal base. (See the comments in Section~7.2 about the actual magnetic environment.) The tool generates synthetic time profiles for protons at $1$~AU by assuming injection at a flare-related shock, with a width of $48$ degrees centered at the flare location, and with the injection constrained by the observed magnitude of the soft X-ray flare and a published correlation between soft X-ray flare magnitude and peak SEP intensity \citep{Dierckxsens2015}.
For the foregoing parameters, the simulation tool predicts only barely observable (proton) SEP levels for Events 1 and 2, as shown by the fluxes predicted before $\approx$15:00~UT in Figure~\ref{fig:SPARX_ERNE}, not inconsistent with the observations in Figure \ref{fig_seps}. The situation is different for Event 3, for which Figure \ref{fig:SPARX_ERNE} displays both the flux of \mbox{12.6 -- 53.5 MeV} protons observed at SOHO/ERNE and the flux of \mbox{10 -- 60 MeV} protons predicted by the simulation tool. (For this event an additional static background flux of \mbox{$10^{-3}$ protons cm$^{-2}$ s$^{-1}$ sr$^{-1}$} was added to the tool's predictions to produce the displayed results.)
Figure \ref{fig:SPARX_ERNE} suggests that the \citet{Marsh2015} model predicts the existence, timing, and qualitative size (within one to two orders of magnitude) of Event 3's SEPs quite well. Additional support for the model working surprisingly well (despite ignoring the ICME environment and non-Parker field lines) is that running the simulation for $v_{sw} = 500$~km~s$^{-1}$, corresponding to more common solar-wind speeds, and associated Parker magnetic connectivity leads to the prediction of SEPs for Events 1 and 2 but no SEPs for Event 3. These aspects are both inconsistent with the observations in Figures \ref{fig_seps} and \ref{fig:SPARX_ERNE}.
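The strong sensitivity to solar wind speed is largely a geometric effect: the Parker footpoint longitude scales as $1/v_{sw}$. A two-line illustration (ours):
\begin{verbatim}
import numpy as np
OMEGA, AU_KM = 2.865e-6, 1.496e8   # rad/s, km
for v in (700.0, 500.0):
    print(v, np.degrees(OMEGA * AU_KM / v))   # ~35 vs ~49 deg west
\end{verbatim}
The resulting $\approx 14$$^{\circ}$~ shift in connection longitude between the two speeds is comparable to half the assumed $48$-degree injection width, and so is enough to connect, or disconnect, a given source region from the observer.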
More detailed comparisons of the predicted and observed proton fluxes in Figure \ref{fig:SPARX_ERNE} illustrate the important roles of magnetic connectivity. First, while the observed and predicted onsets are very similar (near \mbox{15:30 UT}), the peaks are not. The decline after the observed peak at approximately \mbox{20:00 UT} coincides with a return to outwards magnetic polarity (near 19:45~UT on 4 November in Figure \ref{fig_context}) that is not included in the model. Second, the time profiles for Event 3 and the low observed and predicted proton fluxes for Events 1 and 2 shown in Figure \ref{fig:SPARX_ERNE} are strong evidence that the proton SEP event observed at $1$~AU on 4 November is due to Event 3, located near the central meridian, and that the effective solar-wind speed was $v_{sw} \approx 700$~km~s$^{-1}$. Third, for Event 3, connectivity to Earth (SOHO) is sub-optimal, with the early phase of the event not resulting in any flux. Rather, connected field lines sweep over the observer at \mbox{1~AU} after the flare has occurred, consistent with a near-isotropic proton population impacting the SOHO/ERNE detector (which was pointing perpendicular to the mean magnetic-field direction). Differences in onset time between simulations and observations may be due to variations in the exact shape of the interplanetary magnetic field, the proton mean free path, or the spatial extent of the acceleration region at the Sun. Fourth, we further deduce that the decay of the fluxes may be due to the non-trivial magnetic connectivity and the transition between magnetic-field polarities. This is consistent with recent studies of SEP propagation, which confirm that magnetic polarity reversal boundaries are efficient at preventing particles from crossing them \citep{Battarbee2017}.
Further work might follow several approaches. First, to repeat the foregoing analysis for multiple mean free paths so as to find the optimum value; this requires extensive computational resources not available to our collaboration. Second, to include the non-Parker-like fields in Figure \ref{fig:BoLi_1} in the foregoing orbit calculations \citep{zhang_etal_2009,Marsh2013} and to compare the theoretical results, thereby directly assessing the importance of non-Parker fields for this event. Third, to perform the more standard analyses that include diffusive shock acceleration, scattering, magnetic focusing, and cross-field transport in one dimension \citep{li_etal_2009,verkhoglyadova_etal_2010, vainio_etal_2014, droge_etal_2014, he_wan_2015, hu_etal_2017} and in two dimensions \citep{hu_etal_2017}, thereby allowing assessment of the different physics included and of multi-dimensional effects. Finally, the CMEs, shocks, and background medium should be modelled as accurately as possible using multi-dimensional MHD simulations \citep{kozarev_etal_2013,schmidt_etal_2013,schmidt_etal_2016,hu_etal_2017} and coupled with particle acceleration formalisms \citep{verkhoglyadova_etal_2010,kozarev_etal_2013,hu_etal_2017} to predict the SEP properties, with comparisons elucidating the roles of the shock evolution, 3D background plasma, non-Parker magnetic fields, and the physical processes considered.
\begin{figure}[h]
\centering
\includegraphics[width=0.8\columnwidth]{ISSI_Nov4_2015_SPARX_updated.pdf}
\caption{Combined integral ERNE HED proton observations on 4 November for \mbox{12.6 -- 53.5~MeV} ({\it orange}) are compared with the temporal profile predicted for \mbox{10 -- 60~MeV} protons by the SPARX forecasting software ({\it blue}), assuming a mean solar wind speed of \mbox{700~km s$^{-1}$}. At the time of this event, SOHO was rotated 180 degrees, so ERNE was pointing perpendicular to the mean magnetic field.}
\label{fig:SPARX_ERNE}
\end{figure}
\section{Space Weather: Event Timing and Characteristics}
\label{S8}
Event 3 occurred near the centre of the solar disk and was associated with a broad CME and extended dimming regions most clearly seen in AIA images in the 193~\AA\ and 211~\AA\ channels (Figure \ref{fig:eruption_summary}). Its angular width was estimated to be $226$$^{\circ}$~ by CACTus \citep{Robbrecht04}. Such a CME is generally thought to be Earth-directed. Accordingly, NOAA/SWPC made predictions of the arrival of the CME-driven shock wave at the Earth. This was part of their routine solar wind prediction using the WSA-ENLIL model (see \url{www.swpc.noaa.gov/products/wsa-enlil-solar-wind-prediction}). A pressure pulse representing the CME is inserted at ENLIL's inner boundary at $21.5 ~R_{\odot}$. The required CME parameters consist of the time at which the CME passes the heliocentric distance of $21.5 ~R_{\odot}$, the direction (latitude and longitude), width (half angle), and radial velocity of the CME. After Event 3, three runs were made with different CME parameters, as recorded in the NGDC archive (\url{www.ngdc.noaa.gov/enlil}). The parameters used in these three ENLIL runs are summarized in Table \ref{table_nariaki}.
The prediction that was closest to the actual shock arrival (6 November 2015, 17:34~UT) was made at 20:00~UT on 4 November ($\approx$6.5 hours after the flare onset). This is shown in Figure \ref{figure_enlil}. The predicted arrival was 06:00~UT on 7 November. The other two runs predicted later arrivals (15:00~UT on 7 November and 03:00~UT the next day), indicating that the present capability for predicting the CME shock arrival time is $\approx \pm 12$ hours \citep{vrsnak_etal_2014}. Note that the CME parameters were obtained using only Earth-based observations, since STEREO-A and STEREO-B had not resumed operations after their solar-conjunction passes earlier in 2015. Therefore, there were large uncertainties in obtaining the parameters of the cone model. Another source of uncertainty affecting the CME propagation in heliospheric simulations is how well the ambient solar wind is characterized \citep{vrsnak_etal_2013}. Including the CMEs associated with Events 1 and 2, there were several CMEs of different sizes and speeds within a few days before Event 3, so it is possible that the actual heliosphere encountered by Event 3's CME was significantly different from that modelled in the ENLIL simulations.
A different method to estimate interplanetary travel times of ICMEs was proposed by \citet{CSM:Kle-15} and \citet{CSM:al-17}. These authors used the fluence of the soft X-ray or microwave (8.8~GHz) burst associated with the CME to estimate the propagation speed of the CME front. This speed estimate is fed into the empirical propagation model of \citet{Gop:al-01b} to predict the arrival time at $1$~AU of the ICME's magnetic cloud, rather than the shock. The SXR burst of 4 November 2015 in the $0.1 - 0.8$~nm band rose to a maximum of about $3 \cdot 10^{-5}$~W~m$^{-2}$ within about 22~minutes, producing a fluence of about $2.6 \cdot 10^{-2}$~J~m$^{-2}$. The CME speed inferred from the empirical relationship given by Equation~2 of \citet{CSM:Kle-15} is 950~km~s$^{-1}$. The estimated arrival time at $1$~AU is then 05:20~UT on 7 November.
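As a rough consistency check on the quoted fluence (ours, assuming an approximately triangular burst profile with comparable rise and decay times):
\begin{verbatim}
peak = 3e-5                  # W m^-2, GOES 0.1-0.8 nm peak flux
rise_s = 22 * 60.0           # rise time [s]
fluence = 0.5 * peak * (2 * rise_s)
print(fluence)               # ~4e-2 J m^-2, the same order as the
                             # 2.6e-2 J m^-2 quoted above
\end{verbatim}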
ACE/SWEPAM solar wind data (Figure \ref{fig_context}) show that a time interval where the proton temperature is less than half the expected value for a standard solar wind stream \citep{ell05} starts at $\approx$06:00~UT (between 05:53 and 06:21~UT) on 7~November. At about the same time, the solar wind speed begins a systematic decrease that lasts several hours. These indications of the arrival of the ICME at ACE are close to the time predicted using the SXR burst fluence, within $\approx 1$~hour.
Microwave observations from the Sagamore Hill station of the RSTN show an impulsive burst ($\approx$ 13:37\,--\,13:47~UT) that rises to a peak flux of $782$~sfu at $8.8$~GHz and has a weak, longer-lasting tail (not shown). The weak tail is probably thermal bremsstrahlung. The impulsive burst has a fluence of $1.5 \cdot 10^{-17}$~J~m$^{-2}$~Hz$^{-1}$. Equation~1 and the coefficients for $9$~GHz in Table~2 of \citet{CSM:al-17} translate this into a CME speed of $460$~km~s$^{-1}$, which leads to an arrival prediction of 16:35~UT on 8~November. This is clearly much later than observed: the microwave method considerably underestimates the CME speed in the corona. Comparing the predictions using the SXR burst fluence {\it versus} the microwaves for Event 3 suggests that the SXR method is considerably more accurate in this particular case. This result is encouraging and suggests that the methods of \citet{CSM:al-17} should be tested in detail for additional events.
Once at Earth, the CME related to Event~3 produced a set of space-weather phenomena. Indeed, Figure \ref{fig_context} shows multiple southwards excursions of $B_{z}$ in the sheath region between the shock and the tangential discontinuity in front of the CME material, together with a smooth rotation of $\theta_{RTN}$ and $B_{z}$ changing from southwards to northwards. A geomagnetic storm occurred, with a sudden storm commencement and a significant intensification of the ring current and auroral-zone activity. Specifically, the minimum $D_{st}$ was $\approx -100$~nT, and the AE and Kp indices were enhanced from the shock arrival until about 20:00~UT on 7 November (maximum values $\approx 1500$~nT and $6$, respectively), when $B_{z}$ returned to being northwards.
A major space-weather effect of Event 3 was that the Swedish aviation radar systems were severely impacted by an extremely strong radio burst at GHz frequencies \citep{Opgenoorth:2016}. The effect was also seen in other European countries. The extreme intensity of the radio burst was confined to a limited frequency range. A detailed study by \citet{marque_etal_2018} shows that the perturbations coincide in time with two intense peaks of the radio burst near the operational frequencies of the secondary air-traffic-control radar. This makes the radio burst the prime candidate to explain the Swedish air traffic incident. Such radio bursts may lie outside the usual space-weather forecasts; at least, the impact of such events appears to be both temporally and spatially limited.
\begin{figure}
\centerline{\includegraphics[width=0.9\columnwidth]{enlil_snapshot_20151107_0600}}
\caption{Snapshots of spatial maps in ({\it left}) the ecliptic plane and ({\it middle}) the meridional plane through the Sun-Earth line, as well as ({\it right}) temporal variations at Earth's location, for the ({\it upper row}) plasma density and ({\it lower row}) plasma speed. According to this run, the predicted shock-arrival time was 06:00~UT on 7 November 2015.}
\label{figure_enlil}
\end{figure}
\begin{table}
\caption{CME parameters used for ENLIL runs by NOAA/SWPC.}
\begin{tabular}{ccccccc}
\hline
ID$^{a}$ & $t_{CME}$$^{b}$ & Lat. &
Long. & Half Angle & Vel.$^{c}$ & Time (predicted)\\
1 & 4 Nov 19:00 & 11 & 7 & 53$^{\circ}$~ & 828 & 7 Nov 06:00\\
2 & 4 Nov 20:29 & -12 & 35 & 38$^{\circ}$~& 584 & 8 Nov 03:00\\
3 & 4 Nov 19:28 & 2 & 12 & 48$^{\circ}$~ & 762 & 7 Nov 15:00\\
\hline
\end{tabular}
\noindent
a. The ID corresponds to the time at which ENLIL was run: 1. 20:00 on 4 November,\\
\hspace{0.1cm} 2. 00:00 on 5 November and 3. 02:00 on 5 November.\\
b. The time at which the CME is at $21.5~R_{\odot}$.\\
c. Radial velocity in km s$^{-1}$\\
\label{table_nariaki}
\end{table}
\section{Discussion and Conclusions}
\label{S9}
The solar events of 4 November 2015 present an ideal case for a multi-instrument and multi-event study with similar, closely spaced in time, events that are different in detail. They also provide a very good opportunity to address some of the current discrepancies in the observations and theory of solar activity. All three events had associated M-class X-ray flares, EUV waves, CMEs, and CME-driven shocks. The first two events occurred near the western limb above AR 12445, which showed very rapid emergence (in just three to four days). They were homologous, with similar characteristics, but also had differences. Their EUV waves were mostly directed southward from the active region, with the brightest parts being low above the solar disk. Any shocks likely occurred close to the source (consistent with the weak Moreton wave observed), and along the dominant direction of propagation southward from the source region. The first outburst had a strong microwave flare, observed by the Nobeyama radioheliograph and polarimeters; its X-ray and microwave properties point to a very strongly localised, low-altitude source, consistent with the high starting frequency for a Type II burst. Event 2, however, had only a weak microwave burst but came from the same active region.
Events 1\,--\,3 each produced a Type II burst, none of them identical or simple. Each can be interpreted in terms of multiple-lanes and/or split-bands and both fundamental and harmonic emission, often with very different intensities. Event~3 can be interpreted in terms of at least three Type II bursts or one Type II burst with at least six multiple lanes and split-bands. The Type II (and III) bursts observed should be excellent tests of theory, as well as the current understanding of their association with flare, CME, and SEP phenomena.
Events 1 and 2 had unusually high-frequency Type II bursts, either high-frequency Type III bursts cut off below $\approx 100$~MHz (Event 1) or no metric Type III bursts (Event 2), and minimal interplanetary Type III emission. These characteristics, especially the relative lack of interplanetary Type III activity, suggest that the associated shocks and flare sites were, despite both producing compact HXR sources seen by RHESSI, unable to produce accelerated particles that could either i) escape effectively into the high corona and interplanetary space, or ii) effectively produce radio emission in these regions, or iii) both. The absence of Type~III emission at meter and longer wavelengths has long been recognized as a typical property of strong flares in the western solar hemisphere that are not accompanied by SEP events \citep{Kle:al-10, Kle:al-11}. We emphasize nevertheless that electron beams can be present but radio-quiet in some frequency domains and radio-loud in others \citep{li_and_cairns_2012, li_and_cairns_2013}, as observed here for Event 3.
It is important to outline the connection between the dynamics of filament eruptions, EUV waves, CMEs, and Type II radio bursts for the homologous Events 1 and 2. Given the observational evidence, we provide one possible scenario. With the onset of the flares, relatively bright, impulsive EUV fronts were launched both on-disk and off-limb, reaching quite high speeds ($\gtrsim 900$~km~s$^{-1}$) early in the eruptions. The primarily south to southwest propagation direction for both was likely due to a ``funnel'' of weaker magnetic fields in that direction. This funnel may have focused the energy of the eruptions, allowing them to reach the high EUV intensity and speeds observed. The Type II bursts for both Events 1 and 2 began shortly after the onset of these bright, southward-directed EUV waves, indicating that the waves may have steepened into shocks quite early on and produced the radio emission. The Type II burst of Event 1 was much wider in frequency coverage than that of Event 2, indicating simultaneous emission from a range of coronal densities/heights, which agrees broadly with the EUV observations of the southern EUV front. The Type II burst continued for about ten minutes after the EUV front had left the AIA field of view. The EUV front of Event 2 had a smaller instantaneous height extent, was dimmer, and the associated Type II burst was narrower in frequency range and duration. Comparing the two events, we thus find a strong temporal correlation, as well as a qualitative correspondence between the characteristics and scales of the EUV and radio emissions.
Shortly after the onset of these seemingly impulsive EUV fronts, they were followed by much slower but persistent partial filament eruptions \,--\, initially in the southwest direction, eventually turning to the West. These drove the coronal waves in the west and northwest directions, which eventually appeared as the bright CME fronts in coronagraphs. However, the slow filament eruptions only added to the fronts' energy for a short period of time, with their peak speeds approaching the (low) CME speeds and decreasing afterwards. This may explain why the CMEs were weak, slow, and short-lived \,--\, especially the CME of Event 2, which had only a failed filament eruption to drive it. The slowness of the CMEs, despite the existence of fast wave-like disturbances in the low corona (the EUV waves), may be why the CMEs did not produce any Type II bursts in the high corona, as the Type II emission ended before the CMEs appeared in the LASCO C2 field of view. In the case of Event 2, the CME appeared in C2 a full half-hour after the Type II burst had ended.
Another important point is that the filament eruptions and EUV waves for Events 1 and 2 propagated primarily in the south to southwest directions, whereas the corresponding CMEs moved primarily west to northwest. This may have been due to the southwest-propagating off-limb portions of the EUV waves being decelerated by the overdense coronal plasma of the streamer in which the flare site was embedded. At the same time, the radial portion of the compressive front kept propagating out through less dense material, additionally being driven by the partial filament eruptions. All of the above considerations lead us to conclude that the weak shock waves in Events 1 and 2 occurred early in the events, in the low corona, and were likely co-spatial with the observed EUV wavefronts. No Type II emission was observed after the EUV waves subsided, with the quickly dissipating CMEs being the only interplanetary signatures of the events.
The {\it in-situ} particle observations show a lack of high-energy protons for the first two events, while energetic electrons were observed only for Event 1. This is despite the nominally good magnetic connections for both the flare and CME sources (but see Sections 6 and 7). The CMEs from the first two events were directed mostly westwards near the ecliptic from the west-limb active region, and exhibited a typical dome-like structure, even if the EUV waves were observed to propagate predominantly southward.
Event~3, in contrast to Events~1 and 2, was associated with lower frequency Type II emission, a very long-lasting Type IV burst between at least $10$~GHz and a few tens of MHz, and with coronal and interplanetary Type III bursts throughout the period 13:40\,--\,14:30~UT. Most but not all of these Type IIIs continue from above $10$~MHz to below $1$~MHz, with many of those that are not continuous appearing to be radio-quiet in the approximate domain 4\,--\,30~MHz but radio-loud at higher and lower frequencies. The first Type IIIs started during the HXR and microwave bursts and can be connected qualitatively by timing observations, the theoretical SEP transport model of \citet{AguedaEtAl08}, and significantly longer, non-Parker, field lines to the first set of SEP electrons observed by the {\it Wind}/3DP instrument. There are quantitative difficulties, however, with field line lengths of 1.5\,--\,1.9~AU needed but available models (Figure \ref{fig:BoLi_1}) only yielding $\approx 1.1 - 1.2$~AU. The timing and two-step nature of the SEP electron profile despite relatively continuous Type IIIs below $\approx 4$~MHz, at least some of which are not seen in the Orf\'{e}es and NDA spectra below about $30$~MHz, and the more than two injections in the transport model's best-fit solution, all require further work. Consideration of better magnetic field connectivity models, the ICME environment, cannibalisation of Type III electron streams \citep{li_etal_2002}, and some electron beams being radio-quiet in some frequency domains and radio-loud in others \citep{li_and_cairns_2012, li_and_cairns_2013} appears necessary. Event~1 should also be modelled using this approach and associated implications drawn.
Some of Event~3's Type III bursts appear to be restricted to frequencies below those of the simultaneous Type II bursts. One interpretation of this is that the shock wave producing the Type II bursts is also an accelerator of the electrons that produce Type~III bursts, a possible alternative to the usual scenario that Type III electron beams originate in magnetic reconnection regions. Alternatively, the Type III streams originate below the Type II shock but only become radio-loud after crossing the shock \citep{li_and_cairns_2012}. Another alternative is that the ``missing'' Type III radiation is generated outside of the CME and shock plasma but is prevented from reaching observers at $1$~AU by propagation effects associated with the high-density regions behind the shock and CME.
The third event's CME was a half-halo in the southern hemisphere from the central-disk active region. It showed much higher speeds in LASCO data than the first two. It drove an interplanetary shock; both the CME and shock reached Earth unexpectedly quickly (in $\approx$ two days), were not robustly or adequately predicted by ENLIL, and caused a space-weather event with a geomagnetic storm. Interestingly, the SXR fluence method of \citet{CSM:al-17} predicted Event 3's arrival well and should be tested further.
While SEP electrons were observed for Event~1, SEP protons were not, and no SEPs were observed for Event~2. The relative lack of significant SEPs for these western events, despite the nominally good magnetic connectivity, the relatively bright EUV waves, and the intense, high-frequency Type II bursts, may be due to the strong deceleration and narrow extents of the CME-driven shocks and so their inability to produce significant particle energization in the low corona. This is supported by the observed EUV disturbances and CMEs being fast and slow, respectively. The absence of strong Type III bursts for these events is an indication that the flare-accelerated particles either remained confined in the low corona or else were present but radio-quiet. Event~1's SEP electrons provide direct evidence for the radio-quiet interpretation \citep{li_and_cairns_2012, li_and_cairns_2013}. In addition, clearly, if there were significant flare- or shock-accelerated SEP protons for Event~1 or any SEPs at all for Event~2, then they did not find their way to open field lines connected to L1. This might be due to propagation effects or to particle injection onto non-connected field lines. Our magnetic-mapping and proton-transport analyses provide good evidence that the magnetic connectivity was not nominal for Events 1 and 2. Additionally, the CME eruptions were not radial for these events, complicating the study of connection points between the shock, flare site, and 1~AU. More detailed modelling of these events is required to differentiate between these interpretations. Even now, however, the analyses cast significant doubt on the typical relevance of simple connection analyses.
Event 3 actually produced significant levels of both electron and proton SEPs near $1$~AU, consistent with the observation of a rather fast and broad CME and with Type III bursts that argue for particle escape from the active region. The SEP event had very different electron and ion behaviors: a prompt, two-step profile for the electrons with rapid rises (near 14:10 and 14:40~UT, respectively) and obvious (by factors of 10\,--\,100) flux increases for 35\,--\,500~keV electrons, versus a prompt (starting around 15:00~UT), slow rise (peak near 20:00~UT), and subtle (by $< 50\%$) flux increase for 10\,--\,140~MeV ions, with some evidence for 1~MeV ion increases and little below that energy. Focusing on the protons, since the electron SEPs and Type IIIs are discussed earlier, the proton acceleration and transport model of \citet{Marsh2015} leads to surprisingly reasonable agreement with the existence, onset time, single-peak nature, and qualitative size of Event~3's proton SEP event. It also predicts weak, near-background fluxes of 10\,--\,60~MeV SEP protons for Events 1 and 2 that cannot yet be ruled out observationally. The model does not include the ICME environment or non-Parker field lines but shows strong magnetic connectivity effects, since changing the solar wind speed (and associated Parker connectivity) to $500$~km~s$^{-1}$ from the observed value $\approx 700$~km~s$^{-1}$ leads to the prediction of clearly observable SEPs for Events~1 and 2 but none for Event~3, inconsistent with the observations. Future modelling work must address these effects.
In summary, the three events that occurred on 4 November 2015 show both similarities and differences from standard events and each other, despite having very similar interplanetary conditions and only two flare sites and CME genesis regions. They are therefore targets for further in-depth observational studies, and for testing both existing and new theories and models, of flares, CMEs, the acceleration and transport of energetic particles, Type II, III, IV, microwave, and SA bursts, and related SEPs. Comparing the remote and {\it in-situ} observations of the three events, it remains possible that two traits of CME-related SEP-rich events are having i) sustained Type II emission to low enough frequencies and ii) a sustained high-speed shock, in order to ensure sufficient energization of the particles. It is also possible that once the SEPs have gained enough energy, they can scatter efficiently perpendicularly to the magnetic field and so perfect magnetic connectivity is not required for them to reach the $1$~AU observer. However, the results of this article show that magnetic connectivity is often not nominal ({\it e.g.} well described by a Parker spiral) and that both flare and CME sources of SEPs exist and may co-exist. While many aspects of Event 3's SEPs can be explained by the models presented, multiple aspects of the foregoing plasma, radio, X-ray, and energetic particle phenomena remain unexplained in detail at this time. More elaborate descriptions of the coronal shock dynamics and dynamic magnetic connectivity conditions are necessary for the study of both early-stage and later SEPs, particularly for widely separated observers. In addition, the results of this work reveal the complexity and interrelation of the chain of phenomena associated with solar eruptions. They demonstrate the need for strong integration of {\it in-situ} and remote magnetic, spectroscopic, particle, radio, and X-ray observations of active regions, flares, CMEs, radio and X-ray emissions, and SEPs with advanced theoretical models in order to gain deeper and more correct understanding of these phenomena.
\begin{acknowledgments}
The authors thank ISSI Bern for their hospitality to and support of the International Team on ``The Connection Between Coronal Shock Wave Dynamics and Early SEP Production'', from which the majority of this work resulted. A.~Veronig acknowledges Austrian Science Fund (FWF) grant P27292-N20. D.~Lario was supported by NASA grants NNX15AD03G and NNX16AF73G and the NASA Program NNH17ZDA001N-LW. N.~Nitta acknowledges support from NSF grant AGS-1259549. This work utilizes data obtained by the Global Oscillation Network Group (GONG) program, managed by the National Solar Observatory, which is operated by AURA, Inc. under a cooperative agreement with the National Science Foundation. It also uses data acquired by instruments operated by the Big Bear Solar Observatory, High Altitude Observatory, Learmonth Solar Observatory, Udaipur Solar Observatory, Instituto de Astrof\'{\i}sica de Canarias, and Cerro Tololo Interamerican Observatory.
Data from the SOHO/ERNE instrument was provided by the Space Research Laboratory at the University of Turku, Finland. The authors thank all groups providing data.
\end{acknowledgments}
\bibliographystyle{spr-mp-sola}
\section{Introduction}\label{sec:intro}
When making decisions with visual data, such as automated vehicle navigation with blurry images, uncertainty quantification is critical.
The relevant uncertainty pertains to a low-dimensional set of semantic properties, such as the locations of objects.
However, there is a wide class of image-valued estimation problems---from super-resolution to inpainting---for which there does not currently exist a method of producing semantically meaningful uncertainties.
Many methods exist for getting per-pixel intervals~\cite{gal2016dropout,oala2020interval,angelopoulos2022image}, but directly giving uncertainty on semantically meaningful image properties has remained challenging.
We make progress on this problem by bringing techniques from quantile regression and distribution-free uncertainty quantification together with a disentangled latent space learned by a \emph{generative adversarial network} (GAN).
We call the coordinates of this latent space \emph{semantic factors}, as each controls one meaningful aspect of the image, like age or hair color.
Our method takes a corrupted image input and predicts each semantic factor along with an uncertainty interval that is guaranteed to contain the true semantic factor.
When the model is unsure, the intervals are large, and vice-versa.
By propagating these intervals through the GAN coordinate-wise, we can visualize uncertainty directly in image-space without resorting to per-pixel intervals---see Figure~\ref{fig:teaser}.
The result of our procedure is a rich form of uncertainty quantification directly on the estimates of semantic properties of the image.
\begin{figure}[t]
\vskip 0.2in
\begin{center}
\centerline{
\includegraphics[width=0.6\linewidth]{figures/teaser_phil_edit_2.pdf}}
\caption{ \textbf{Uncertainty intervals over semantic factors} produced by our method.
We express the intervals in a disentangled latent space that allows us to factorize the uncertainty into meaningful components.}
\label{fig:teaser}
\end{center}
\vspace{-1cm}
\end{figure}
More concretely, we receive input images $X$ and then predict an \emph{uncertainty interval} for each of $D$ \emph{semantic factors} $Z_d$, $d=1,...,D$, which are the elements of a disentangled latent space.
The method involves training an \emph{encoder} (a neural network that takes images as input and produces outputs in the latent space of the GAN) on $(X,Z)$ pairs to give us three different outputs:
\begin{enumerate}
\item \textbf{The point prediction,} $f(X)$. This is the encoder's best guess at the semantic factors $Z$.
\item \textbf{The estimated lower conditional quantile,} $q_{\frac{\alpha}{2}}(X)$. The encoder believes that $q_{\frac{\alpha}{2}}(X)$ is a lower bound on the value of $Z$ given $X$.
\item \textbf{The estimated upper conditional quantile,} $q_{1-\frac{\alpha}{2}}(X)$. The encoder believes that $q_{1-\frac{\alpha}{2}}(X)$ is an upper bound on the value of $Z$ given $X$.
\end{enumerate}
Once the above encoder is trained, as described in Section~\ref{subsec:training}, we use it to form an uncertainty interval for each semantic factor.
However, for the $d$th element of the latent code, the naive interval $(q_{\frac{\alpha}{2}}(X)_d,q_{1-\frac{\alpha}{2}}(X)_d)$ is not guaranteed to contain the ground truth value in finite samples.
We propose to perform a calibration procedure to fix this problem, yielding the sets
\begin{equation}
\mathcal{T}(X)_d = \left[q^{\rm cal}_{\frac{\alpha}{2}}(X)_d, q^{\rm cal}_{1-\frac{\alpha}{2}}(X)_d\right],
\end{equation}
where $q^{\rm cal}$ is a calibrated version of $q$ constructed using the tools in Section~\ref{subsec:calibration}.
Once we have done so, the intervals will contain a $1-\alpha$ fraction of the true latent codes with high probability.
In other words, for any user-chosen levels $\alpha$ and $\delta$, we can output intervals that with probability $1-\delta$ satisfy
\begin{equation}
\label{eq:interval-form}
\mathbb{E}\left[\frac{1}{D}\Big|\big\{d : Z_{d} \in \mathcal{T}(X)_d\big\}\Big|\right] \ge 1 - \alpha,
\end{equation}
for a new test point $(X,Z)$, regardless of the distribution of the data, the encoder used, and the number of data points used in the calibration procedure.
This guarantee, described more carefully in Definition~\ref{def:rcps}, says that the intervals cover $1-\alpha$ fraction of the semantic factors unless our calibration data is not representative of our test data (which only happens with a probability $\delta$ which goes to $0$ as the number of calibration data points grows).
We visualize each of the $d \in \{1,...,D\}$ intervals in latent space by propagating the $d$th lower and upper endpoints through the generator with all other entries in the latent fixed to the point prediction (see Section~\ref{subsec:visualization} for a formal explanation).
\subsection{Central Contribution}
To our knowledge, this is the first algorithm for uncertainty intervals on a learned semantic representation with formal statistical guarantees.
By propagating these intervals through the generator, we are able to visualize uncertainty in an intuitive new way that directly encodes semantic meaning.
This is an important step towards interpretable uncertainties in general image-valued estimation problems.
\section{Method}
\label{sec:method}
\subsection{Notation and goal}
Our data consist of pairs $(X,Z)$---the corrupted image $X$ in $\mathcal{X}=[0,1]^{H\times W}$, and the latent code $Z \in \mathcal{Z}$, where $\mathcal{Z} = \mathbb{R}^D$.
As mentioned in the introduction, we think of $Z$ as a disentangled representation with $D$ \emph{semantic factors}---\textit{i.e.}, factors of variation corresponding to interpretable features in an image, such as hair color and expression.
For simplicity, assume each of the $D$ dimensions controls a single semantic factor; in practice, we ignore those that do not.
In our sampling model, $X$ is generated from $Z$ by composing two functions.
The first function is a fixed generator $G : \mathcal{Z} \to \mathcal{Y}$, where $\mathcal{Y}=[0,1]^{H \times W}$, which takes the latent vector $Z$ and produces the ground truth image $Y \in \mathcal{Y}$ (for ease of notation, we assume $X$ and $Y$ have the same shape).
The second function is a corruption model, $F : \mathcal{Y} \to \mathcal{X}$, which degrades the ground truth image $Y$ to produce the corrupted image $X$, for example by randomly masking out part of the image.
To summarize our data-generating process, we have
\begin{equation}
Y = G(Z)\text{ and } X = F(Y).
\end{equation}
\textbf{Goal \#1.} Our first goal is to train an encoder $E$ to recover $Z$ from $X$---in other words, to invert the mapping $F\circ G$---with a heuristic notion of uncertainty.
The encoder's point prediction will be a function $f : \mathcal{X} \to \mathcal{Z}$.
The uncertainty will be parameterized by two functions, $q_{1-\frac{\alpha}{2}} : \mathcal{X} \to \mathcal{Z}$ and $q_{\frac{\alpha}{2}} : \mathcal{X} \to \mathcal{Z}$, denoting our estimates of the $1-\frac{\alpha}{2}$ and $\frac{\alpha}{2}$ conditional quantiles, respectively.
These conditional quantiles are potentially bad estimates; they do not natively possess the statistical guarantee we desire.
\textbf{Goal \#2.} Having trained the encoder and the conditional quantile estimates, we will output uncertainty intervals in the disentangled latent space.
Each dimension will get its own interval, which has the form in~\eqref{eq:interval-form}.
Ultimately, our uncertainty intervals
will come with the following statistical guarantee:
\begin{definition}[Risk-Controlling Prediction Set (RCPS)]
\label{def:rcps}
A set-valued function $\mathcal{T} : \mathcal{X} \to 2^\mathcal{Z}$ is an $(\alpha,\delta)$-risk-controlling prediction set if
\begin{equation}
\label{eq:rcps-guarantee}
\P\Big(\mathbb{E}\big[L(\mathcal{T}(X),Z)\big] > \alpha \Big) \leq \delta,
\end{equation}
where
\begin{equation} L(\mathcal{T}(X),Z) = 1-\frac{\Big|\big\{d : Z_d \in \mathcal{T}(X)_d\big\}\Big|}{D}.
\end{equation}
\end{definition}
The reader should note here that the function $\mathcal{T}$ depends on the calibration data.
The outer probability in~\eqref{eq:rcps-guarantee} is over the randomness in this calibration procedure; the inner expectation is over the new test point, $(X,Z)$.
Note also that in our setting, because we can generate unlimited data from the GAN, it is always possible to drive $\delta$ arbitrarily close to $0$, effectively making $\mathcal{T}$ nonrandom.
In the two following subsections, we address each of our goals separately.
\subsection{Goal \#1: Training the encoder for quantile regression}
\label{subsec:training}
Our job in this subsection is to learn the three functions $f$, $q_{\frac{\alpha}{2}}$, and $q_{1-\frac{\alpha}{2}}$.
We do so by training a neural network with three different loss functions, one for each of three linear heads on top of the same feature extractor---see Figure~\ref{fig:training} for the training protocol.
\begin{figure}[t]
\begin{center}
\centerline{
\includegraphics[width=0.5\columnwidth]{figures/training-procedure-darker.pdf}
}
\caption{\textbf{Our training pipeline} for the point prediction $f$, the lower quantile $q_{\frac{\alpha}{2}}$, and the upper quantile $q_{1-\frac{\alpha}{2}}$.}
\label{fig:training}
\end{center}
\vspace{-0.5cm}
\end{figure}
\textbf{Loss function for point prediction.} We supervise the point prediction with two loss functions.
The first is an $L_1$ loss directly in the latent space, and encourages $f(X)$ to be close to $Z$.
\begin{equation}
\label{eq:l1-loss}
\mathcal{L}_1\big(f(x),z\big) = \big|\big|f(x)-z\big|\big|_1.
\end{equation}
The second is an \emph{identity loss} on the generated image $G(f(X))$, which encourages $G(f(X))$ to contain objects that have the same ``identity'' --- or semantic meaning --- as those in $Y$.
We calculate the identity loss using a pretrained embedding function, $\mathrm{ID} : \mathcal{Y} \to \mathbb{R}^{d'}$ for some $d'$, which projects $Y$ to an embedding space where images with different identities land far away from one another.
We compute the loss as one minus the cosine similarity of the embeddings,
\begin{equation}
\label{eq:id-loss}
\mathcal{L}_{\rm ID}\big(x, y\big) = 1 - \frac{ \langle \mathrm{ID} (x), \mathrm{ID} (y) \rangle }{||\mathrm{ID} (x)|| \cdot ||\mathrm{ID} (y)||},
\end{equation}
so that images with the same identity incur zero loss.
Finally, we combine these two loss functions to form the loss function for the prediction $f$,
\begin{equation}
\label{eq:full-loss}
\mathcal{L}_{\rm pred}\big(f(x),z\big) = \mathcal{L}_1\big(f(x),z\big) + c\mathcal{L}_{\rm ID}\big(G(f(x)),G(z)\big),
\end{equation}
where $c=0.7$ was chosen to balance the two losses.
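For concreteness, a minimal PyTorch-style sketch of this combined prediction loss is given below. The embedding network \texttt{id\_net} is a stand-in for the pretrained identity embedding $\mathrm{ID}$; its name and exact interface are illustrative assumptions, not part of our released code.
\begin{verbatim}
import torch
import torch.nn.functional as F

def prediction_loss(f_x, z, G, id_net, c=0.7):
    """Combined loss for the point-prediction head (sketch).

    f_x    : (B, D) predicted latents f(X)
    z      : (B, D) ground-truth latents Z
    G      : frozen generator mapping latents to images
    id_net : pretrained identity-embedding network (assumed)
    """
    # L1 loss directly in the latent space
    l1 = (f_x - z).abs().mean()
    # Identity loss on the generated images: one minus cosine
    # similarity, so identical embeddings incur zero loss
    e_pred = id_net(G(f_x))
    e_true = id_net(G(z))
    id_loss = 1.0 - F.cosine_similarity(e_pred, e_true, dim=1).mean()
    return l1 + c * id_loss
\end{verbatim}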
\textbf{Loss function for quantile regression.} Quantile regression~\cite{koenker1978regression,chaudhuri1991global,koenker2001quantile,koenker2005quantile,koenker2011additive,koenker2017quantile,koenker2018handbook} is a statistical method for estimating the conditional quantiles of a distribution.
The key idea of quantile regression is to supervise the regressor using a \emph{quantile loss},
\begin{equation}
\label{eq:quantile-loss}
\begin{aligned}
\mathcal{L}^{\beta}_{\rm q}(q_{\beta}(x),z) = \big(z-q_{\beta}(x)\big)\beta
\ind{z > q_{\beta}(x)} + \big(q_{\beta}(x) - z\big)(1-\beta)
\ind{z \leq q_{\beta}(x)}.
\end{aligned}
\end{equation}
The minimizer of the quantile risk is the true $\beta$ conditional quantile of $Z|X$.
We supervise our conditional quantile estimates $q_{\frac{\alpha}{2}}$ and $q_{1-\frac{\alpha}{2}}$ with two separate instances of the quantile loss, $\mathcal{L}^{\frac{\alpha}{2}}_{\rm q}$ and $\mathcal{L}^{1-\frac{\alpha}{2}}_{\rm q}$ respectively.
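As an illustration, the quantile (pinball) loss above reduces to a few lines of PyTorch; this is a sketch, with \texttt{beta} standing for the target level $\frac{\alpha}{2}$ or $1-\frac{\alpha}{2}$.
\begin{verbatim}
import torch

def quantile_loss(q_pred, z, beta):
    """Pinball loss; its minimizer is the beta conditional quantile.

    q_pred : (B, D) predicted quantiles q_beta(X)
    z      : (B, D) true latents Z
    beta   : quantile level in (0, 1)
    """
    diff = z - q_pred
    # beta-weighted penalty for under-prediction,
    # (1 - beta)-weighted penalty for over-prediction
    return torch.mean(torch.maximum(beta * diff, (beta - 1.0) * diff))
\end{verbatim}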
This concludes our explanation of the model training procedure, summarized in Algorithm~\ref{alg:gqan-encoder-training}.
Experimental details, such as the particular model architecture we use, are available in Section~\ref{sec:experiments}.
\input{algorithms/qgan-encoder-training}
\subsection{Goal \#2: Calibration}
\label{subsec:calibration}
Having trained the model, we now calibrate it to achieve the statistical guarantee in Definition~\ref{def:rcps} using a set of calibration data $\big\{(X_i,Z_i)\big\}_{i=1}^n$ generated from the model and the upper-confidence bound procedure from~\cite{bates2021distribution}.
The output of the procedure will be the function $\mathcal{T}$ from~\eqref{eq:interval-form}; specifically, we will learn the calibrated conditional quantiles $q^{\rm cal}_{\frac{\alpha}{2}}$ and $q^{\rm cal}_{1-\frac{\alpha}{2}}$.
Our procedure will calibrate the conditional quantiles by rescaling their size multiplicatively.
We will ultimately choose a multiplicative factor $\hat{\lambda}$ that gives us the desired guarantee.
Towards that end, we index a family of uncertainty intervals scaled by a free parameter $\lambda$ for each semantic factor,
\begin{equation}
\begin{aligned}
\mathcal{T}_{\lambda}(X)_d = \bigg[ f(X)_d - \lambda \big(f(X)_d-q_{\frac{\alpha}{2}}(X)_d\big)_+, \; \; \;
f(X)_d + \lambda \big(q_{1-\frac{\alpha}{2}}(X)_d-f(X)_d\big)_+ \bigg].
\end{aligned}
\label{eq:uncertainty_interval}
\end{equation}
When $\lambda$ grows, the interval $\mathcal{T}_{\lambda}(X)_d$ also grows, and thus, the loss function $L(\mathcal{T}_{\lambda}(X),Z)$ shrinks.
Therefore, by taking $\lambda$ large enough, we can always ensure the loss is zero.
The challenge ahead is to pick $\hat{\lambda}$ to be the smallest value such that $\mathcal{T}_{\hat{\lambda}}$ is an RCPS as in~\eqref{eq:rcps-guarantee}.
The algorithm for selecting $\hat{\lambda}$ involves forming an upper confidence bound (UCB) for the risk, then picking the smallest value of $\lambda$ such that the upper confidence bound falls below $\alpha$.
We give Hoeffding's UCB below, although we use the stronger Hoeffding-Bentkus bound from~\cite{bates2021distribution} in practice:
\begin{equation}
\label{eq:hoeffding}
\hat{R}^+(\lambda) = \frac{1}{n}\sum\limits_{i=1}^n L\left( \mathcal{T}_{\lambda}(X_i), Z_i \right) + \sqrt{\frac{1}{2n}\log \frac{1}{\delta}}.
\end{equation}
Note that in our setting, we can always generate enough samples to drive $\delta \to 0$; however, if we only had a finite sample from some population, this would not be the case.
We can then select $\hat{\lambda}$ by scanning from large to small values, $\hat{\lambda} = \min\left\{\lambda : \hat{R}^+(\lambda') \leq \alpha, \;\; \forall \lambda' \geq \lambda \right\}$.
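To make the procedure concrete, the following NumPy sketch implements the scan using the Hoeffding bound above; the Hoeffding--Bentkus bound used in practice is a drop-in replacement for the \texttt{slack} term, and the grid of candidate $\lambda$ values is an illustrative assumption.
\begin{verbatim}
import numpy as np

def interval_loss(lo, hi, z):
    """Per-example fraction of semantic factors left uncovered.
    lo, hi, z : (n, D) arrays of endpoints and true latents."""
    covered = (z >= lo) & (z <= hi)
    return 1.0 - covered.mean(axis=1)

def calibrate_lambda(f, q_lo, q_hi, z, alpha=0.1, delta=0.1,
                     lambdas=np.linspace(0.0, 10.0, 1001)):
    """Smallest lambda whose UCB on the risk stays below alpha.
    f, q_lo, q_hi, z : (n, D) arrays on the calibration set."""
    n = z.shape[0]
    slack = np.sqrt(np.log(1.0 / delta) / (2.0 * n))  # Hoeffding
    lam_hat = lambdas[-1]
    # scan from large to small; stop once the UCB exceeds alpha
    for lam in lambdas[::-1]:
        lo = f - lam * np.clip(f - q_lo, 0.0, None)
        hi = f + lam * np.clip(q_hi - f, 0.0, None)
        if interval_loss(lo, hi, z).mean() + slack > alpha:
            break
        lam_hat = lam
    return lam_hat
\end{verbatim}
The returned $\hat{\lambda}$ then defines the calibrated quantiles $q^{\rm cal}$ via the rescaling described at the end of this subsection.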
\begin{proposition}[$\mathcal{T}_{\hat{\lambda}}$ is an RCPS~\cite{bates2021distribution}]
\label{prop:rcps-guarantee}
With $\hat{\lambda}$ selected as above, $\mathcal{T}_{\hat{\lambda}}$ satisfies Definition~\ref{def:rcps}.
\end{proposition}
For the proof of this fact, along with a discussion of the tighter confidence bounds used in our experiments and extensions to the underlying theory, see~\cite{bates2021distribution} and~\cite{angelopoulos2021learn}.
Having proven that $\mathcal{T}_{\hat{\lambda}}$ is an RCPS, we can simply set $\mathcal{T}(X) = \mathcal{T}_{\hat{\lambda}}(X)$ in~\eqref{eq:interval-form}; in other words, we set $q^{\rm cal}_{\frac{\alpha}{2}}(X)_d = f(X)_d - \hat{\lambda} \big(f(X)_d-q_{\frac{\alpha}{2}}(X)_d\big)_+$ and $q^{\rm cal}_{1-\frac{\alpha}{2}}(X)_d = f(X)_d + \hat{\lambda} \big(q_{1-\frac{\alpha}{2}}(X)_d-f(X)_d\big)_+$.
\subsubsection*{Visualizing uncertainty intervals in image space}
\label{subsec:visualization}
We briefly describe our method for visualizing latent-space uncertainty intervals.
In order to see the effect of a single semantic factor, we set it to either the lower or upper quantile and hold the other factors fixed to the point prediction. More specifically, for a particular dimension $d \in \{1,...,D\}$, define the following vector:
\begin{equation}
\begin{aligned}
\hat{Z}^d_{k} = \big(f(X)_1,...,f(X)_{d-1},\;\;\; q^{\rm cal}_{k}(X)_d,\;\;\;f(X)_{d+1},...,f(X)_{D}\big).
\end{aligned}
\end{equation}
We visualize the lower and upper quantiles in image space as $G(\hat{Z}^{d}_{\frac{\alpha}{2}})$ and $G(\hat{Z}^{d}_{1-\frac{\alpha}{2}})$ respectively.
Since each semantic factor corresponds to an attribute, visualizing the lower and upper quantiles per-factor gives interpretable meaning to the latent-space intervals.
For example, in Figure~\ref{fig:teaser}, the images of the child smiling give a range of possible expressions the model believes are consistent with the underlying image.
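A minimal sketch of this visualization step, assuming a frozen generator \texttt{G} and per-example endpoint vectors from the calibrated encoder:
\begin{verbatim}
import torch

def visualize_factor(G, f_x, q_lo_cal, q_hi_cal, d):
    """Render the lower/upper calibrated quantile of factor d.

    f_x, q_lo_cal, q_hi_cal : (D,) latent vectors for one input X
    d                       : index of the semantic factor to vary
    """
    z_lo, z_hi = f_x.clone(), f_x.clone()
    z_lo[d] = q_lo_cal[d]  # all other factors stay at the
    z_hi[d] = q_hi_cal[d]  # point prediction
    with torch.no_grad():
        return G(z_lo.unsqueeze(0)), G(z_hi.unsqueeze(0))
\end{verbatim}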
\section{Experiments}\label{sec:experiments}
\subsection{Dataset descriptions}
\textbf{FFHQ.} We use the StyleGAN framework pretrained on the Flickr-Faces-HQ (FFHQ) dataset~\cite{stylegan1_2019}. FFHQ is a publicly available dataset consisting of 70,000 high-quality images at $1024 \times 1024$ resolution, with significant variation in age, ethnicity, and image background. The data used to train the quantile encoder and for the experiments in this section is sampled from the generator pretrained on FFHQ.
\textbf{CLEVR.} In order to have a controlled setup where we can easily identify disentangled factors of variation, we generate synthetic images of objects based on the CLEVR dataset~\cite{johnson2017clevr}. This dataset provides a programmatic way of generating synthetic data by explicitly varying specific semantic factors. We create a synthetic dataset by varying $\{\textit{color}, \textit{shape}\}$ and fixing the other factors, such as lighting, material, and camera jitter.
\vspace{-0.25cm}
\subsection{Experimental setup}
\textbf{Model architectures}
In all our experiments, we use the StyleGAN2~\cite{karras2020stylegan2} framework for the generator architecture $G$. For the experiments involving faces, we use the pretrained model available from the official repository. For the CLEVR-2D experiments, we train a simpler variant of the StyleGAN2 generator from scratch. For the quantile regression, we use a standard architecture: the encoder network consists of a ResNet-50 backbone~\cite{he2016deep} with the final layer branching into the point prediction and conditional quantiles. Each branching module is a standard combination of convolution and activation blocks followed by a fully connected layer of the expected output dimension; we call these the \emph{heads} of the model. Specific details of the model architecture are provided in the supplementary material.
\textbf{Model training}
We start by pretraining the generative model or acquiring an off-the-shelf pretrained generative model for the task at hand.
In generative models such as StyleGAN, the style space, which offers fine-grained control over image attributes, is very high-dimensional. From this high-dimensional space, we extract only the disentangled dimensions, following previous work on style space analysis~\cite{elad2021psp}. In order to focus the encoder's capacity on the disentangled dimensions, we mask out the irrelevant dimensions when applying the quantile loss. However, the pointwise loss in~\eqref{eq:l1-loss} is applied to the full style vector, ensuring that the pointwise prediction matches the true latents accurately while the quantile heads focus on learning variability only in the disentangled dimensions.
During encoder training (Section~\ref{subsec:training}), the generative model $G$ is held frozen and only the parameters of the encoder $E$ are updated. The point prediction and conditional quantile heads are trained jointly with the Ranger optimizer~\cite{elad2021psp} and a flat learning rate of 0.001 in all our experiments.
For the image super-resolution training, we augment the input dataset using different levels of downsampled inputs, \textit{i.e.}, we take the raw input, apply a random downsampling factor from $\{1, 4, 8, 16, 32\}$, and resize it to the original dimensions, as sketched below. For the image inpainting task, we vary the difficulty by choosing a random threshold to create the mask: a lower threshold implies fewer pixels are masked, and vice-versa. The mask is concatenated to the image, resulting in a $(C+1)$-channel input to the encoder, where $C$ is the number of image channels. A detailed description of the mask generation procedure can be found in the supplementary material.
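A sketch of the super-resolution augmentation just described, assuming square tensors; the particular interpolation modes are our assumption.
\begin{verbatim}
import random
import torch.nn.functional as F

def random_downsample(img, factors=(1, 4, 8, 16, 32)):
    """Downsample by a random factor, then resize back.
    img : (B, C, H, W) tensor."""
    f = random.choice(factors)
    h, w = img.shape[-2:]
    small = F.interpolate(img, size=(h // f, w // f), mode='area')
    return F.interpolate(small, size=(h, w), mode='bilinear',
                         align_corners=False)
\end{verbatim}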
\textbf{Calibration and Evaluation}
For both the synthetic object experiments with CLEVR and face experiments with FFHQ, we train the quantile encoder on data points sampled from the latent space of the pretrained generative model. This ensures that we have access to the \textit{true} latents that resulted in each image. We generate 100k samples per model and generate a random 80-10-10 split for training, calibration and validation.
\vspace{-0.25cm}
\begin{figure}[t]
\includegraphics[width=0.48\textwidth]{figures/celeba_prediction_visualization_2.pdf}
\hfill
\includegraphics[width=0.48\textwidth]{figures/clevr_prediction_2.pdf}
\caption{\textbf{Semantically meaningful uncertainty intervals} produced by our method on example images sampled from the generator trained on the FFHQ dataset (left) and the CLEVR dataset (right). The corrupted image is provided as input to the encoder, which outputs a pointwise prediction and quantile predictions for each style dimension.
We plot the calibrated and uncalibrated intervals as well as their visualizations in image-space. [Best viewed in color, zoom in for detail.]}
\label{fig:ffhq_clevr_predictions}
\vspace{-0.6cm}
\end{figure}
\subsection{Findings}
In the following experiments, we explore different properties of our intervals. The problem types include image super-resolution and image inpainting. The risk level $\alpha$ and the user-specified error threshold $\delta$ are fixed to 0.1, unless specified otherwise.
\vspace{-0.25cm}
\subsubsection{Producing semantic uncertainties}
\textbf{Goal.} We qualitatively verify that the proposed approach outlined in Section~\ref{sec:method} produces visually meaningful uncertainties.
\textbf{Description.} We train a quantile encoder with Algorithm~\ref{alg:gqan-encoder-training}.
Then we generate $n=5000$ images by sampling latents and propagating them through the encoder-generator combination. We use these $n$ images as calibration data for the procedure in Section~\ref{subsec:calibration}.
Finally, we randomly sample a new test point, pass it through the calibrated encoder, and form the uncertainty intervals in image space as in Section~\ref{subsec:visualization}.
\textbf{Results.} The results are illustrated in Figure~\ref{fig:ffhq_clevr_predictions} on images sampled from the generator trained on FFHQ (left) and CLEVR (right). In the case of the FFHQ image, the person is wearing glasses in the lower quantile image and not in the upper; hence, the model is not certain that the person is wearing glasses.
The model also expresses some uncertainty about the amount of gray versus brown hair.
This outcome was predictable from the input image, where the fact that the person is wearing glasses is not obvious, and there is some hair color ambiguity. The results on the CLEVR dataset are analogous.
The lower and upper quantile images yield similar colors, which is predictable from the blurry input.
The model predicts that both a cylinder and sphere would be consistent with this blurry input.
The calibrated quantiles cover the ground truth color value while the uncalibrated ones do not.
\begin{figure}[!t]
\includegraphics[width=0.48\textwidth]{figures/inpainting_uncertainty_viz_2.pdf} \hfill
\includegraphics[width=0.48\textwidth]{figures/superres_uncertainty_viz_2.pdf}
\caption{\textbf{Visualizing adaptivity.} \textbf{[Left]} A random mask is applied to the same input image in each row. When there is no mask (1st row), the lower and upper quantiles are extremely close to the pointwise prediction. As we mask larger regions, the intervals predicted by the quantile encoder expand, as indicated by the variability in the lower and upper quantile predictions. \textbf{[Right]} We show the results of the encoder on two sets of images. The corruption intensity is varied across each set: the input image in the top row is not corrupted, while the input in the bottom row is undersampled by 16x. In both cases, the most diverse prediction is in the bottom row, where the input is corrupted the most. [Best viewed in color. Zoom in for detail]}
\label{fig:adaptivity}
\vskip -0.2in
\end{figure}
\subsubsection{Exploratory results with purposeful corruptions}
\textbf{Goal.}
We probe the uncertainty quantification procedure to see if it will have the expected qualitative behavior.
\textbf{Description of experiment.}
We sampled images from the held-out set of the FFHQ pretrained GAN and applied purposeful corruptions to check whether the resulting quantile estimates had semantic meaning. Both image downsampling and image masking were used as corruption models. We qualitatively analyze the results by visualizing the predictions and also perform a quantitative measurement by computing image-based metrics. For the qualitative analysis, we use 500 images at each difficulty level as inputs to our encoder.
\textbf{Qualitative results.} Figure~\ref{fig:adaptivity} shows the results of this experiment for image inpainting (left) and super-resolution (right). In the inpainting case, when nothing is masked, the quantiles are roughly identical. When the eyes are masked, the quantiles indicate the model does not know if the person was wearing glasses.
When the mouth is masked, the model expresses uncertainty as to whether the woman is showing her teeth in the smile. Finally, when almost everything is masked, the quantile images are very different, representing individuals with entirely different identities. Similar behavior can be observed in the super-resolution case. The results are shown for two separate inputs; in both cases, the input in the top row is uncorrupted while the bottom row is undersampled by 16x. The model predicts almost perfectly in the absence of corruption, and the quantile predictions are extremely close as well. In the presence of corruption, the pointwise prediction is off (as expected) and the quantile edges display much higher variability, including in attributes such as hair shape, glasses, facial hair, and perceived gender. The results from both of these experiments point to the expected qualitative behavior of our approach: the model exhibits more uncertainty as more information is lost at the input.
\textbf{Quantitative results.} For each input, we compute the calibrated uncertainty interval using our approach and then compute the \emph{identity} loss specified in Equation~\ref{eq:id-loss} and the \emph{perceptual} loss between the upper and lower edges. We repeat the procedure for each image by varying the input difficulty, as in Section~\ref{subsec:difficulty_variation}. Table~\ref{tab:metrics} shows that both the perceptual and ID losses increase with increasing input difficulty. This substantiates our claim that the calibrated quantiles display more variability as the difficulty of the task increases. Note that most of the style dimensions only affect attributes like hair color, glasses, or facial hair that do not necessarily change the identity of the individual. Given this observation, the change in ID loss is very much indicative of the variability of the quantile predictions.
\begin{table}[ht]
\caption{\textbf{Measuring variability over quantiles:} Perceptual loss (LPIPS) and ID loss between the upper and lower calibrated quantiles.}
\begin{center}
\begin{sc}
\begin{small}
\begin{tabular}{lccc}
\hline
\multicolumn{1}{l}{Metric} & \multicolumn{1}{l}{Easy} & \multicolumn{1}{l}{Medium} & \multicolumn{1}{l}{Hard} \\ \hline
ID Loss & 0.04 & 0.06 & 0.09 \\ \hline
Perceptual Loss & 0.16 & 0.23 & 0.28 \\ \hline
\end{tabular}
\label{tab:metrics}
\end{small}
\end{sc}
\end{center}
\vspace{-0.5cm}
\end{table}
\subsubsection{Interval sizes as a function of problem difficulty} \label{subsec:difficulty_variation}
\textbf{Goal.}
\begin{figure}[ht]
\vspace{-0.5cm}
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.48\textwidth]{figures/FFHQ_superres_set_sizes.pdf} &
\includegraphics[width=0.48\textwidth]{figures/CLEVR_superres_set_sizes.pdf}
\end{tabular}
\caption{\textbf{Adapting to varying corruption levels:} Distribution of set-sizes for different input corruption levels for super-resolution on FFHQ and CLEVR. More results in supplementary material.}
\label{fig:set_sizes}
\end{center}
\vspace{-0.5cm}
\end{figure}
We seek to construct intervals that adapt to the uncertainty of the input, \textit{i.e.}, result in lower values for easier inputs and higher values for harder inputs.
This experiment tells us how informative our intervals are about when the model makes an error.
\textbf{Description of experiment.}
For the image super-resolution case, we create the difficulty levels $\{\mathrm{easy}, \mathrm{medium}, \mathrm{hard}\}$ corresponding to $\{1\text{x}, 8\text{x}, 32\text{x}\}$ downsampled versions. All downsampled versions are resized to the same dimensions before being presented to the encoder. For image inpainting, we vary the masking threshold to create $\{\text{easy}, \text{medium}, \text{hard}\}$ difficulty levels indicating the fraction of the image being masked, corresponding to $\{10\text{--}15\%, 40\text{--}50\%, 90\text{--}95\%\}$, respectively. The results are computed on a held-out set sampled from the StyleGAN generative model pretrained on the FFHQ dataset. In order to obtain the set size for each input, we scale the quantile width using the threshold obtained from the RCPS calibration procedure.
\textbf{Results.}
Figure~\ref{fig:set_sizes} shows the set sizes for the super-resolution corruption model as a function of problem difficulty on two datasets, FFHQ and CLEVR. As expected, the set sizes increase with increasing problem difficulty indicating increasing uncertainty as corruption level increases.
\vspace{-0.25cm}
\section{Related Work}
\label{sec:related_work}
\vspace{-0.2cm}
\textbf{GANs for Inverse problems.}
The remarkable image generation properties of recent approaches such as BigGAN~\cite{bigGAN2019} and StyleGAN~\cite{karras2020stylegan2} have led to the increasing use of these models to solve inverse problems relating to image restoration, such as image super-resolution and completion. Prior methods that use GANs for inverse problems, whether they use the pretrained generative model as an image prior~\cite{imageprior2020,semanticimageprior2020,pulse2020,brgm2020} or train an encoder to project the input into the generator's latent space~\cite{indomain_editing2020,elad2021psp,e2e2021,ohayon2021high}, focus on the accuracy of the point estimate but not on quantifying the uncertainty induced by the corrupted input.
\textbf{GANs for Interpretability.}
Despite providing no guarantees of image likelihood, unlike normalizing flows~\cite{glow2018} or score-based models~\cite{scoremodels2021}, GANs have been used to develop interpretable approaches to image generation. GANs are preferred over other generative models in interpretability work because of the availability of a disentangled latent space~\cite{gancontrols2020,stylespace2021}, a property we utilize in our work.
\textbf{Quantile Regression.} Quantile regression was first proposed in~\cite{koenker1978regression}.
Since then, many papers have used the technique, applying it to machine learning~\cite{hwang2005simple,meinshausen2006quantile,natekin2013gradient}, medical research~\cite{armitage2008statistical}, and more.
Most relevant to us is conformalized quantile regression~\cite{romano2019conformalized}, which gives quantile regression a marginal coverage guarantee using conformal prediction.
Our work instead uses risk-controlling prediction sets, a different distribution-free uncertainty quantification technique.
\textbf{Conformal prediction and distribution-free uncertainty quantification.}
At the core of our proposed method is the distribution-free, marginal risk-control technique studied in~\cite{bates2021distribution} and~\cite{angelopoulos2021learn}.
These ideas have their roots in the distribution-free marginal guarantees of conformal prediction, proposed in~\cite{vovk1999machine}.
Conformal prediction is a flexible method for forming prediction sets that satisfy a marginal coverage guarantee, under no assumptions besides the exchangeability of the test point with the calibration data~\cite{vovk1999machine,vovk2005algorithmic,lei2013conformal,lei2013distribution,lei2014classification,angelopoulos2021gentle}.
Conformal prediction has been studied in computer vision~\cite{hechtlinger2018cautious,cauchois2020knowing,angelopoulos2020sets,romano2020classification,angelopoulos2021private}, natural language~\cite{fisch2020efficient}, drug discovery~\cite{fisch2021few}, criminal justice~\cite{berk2020improving}, and more.
We are not aware of work applying the notions of conformal prediction and quantile regression to the latent space in generative models.
\section{Conclusion}\label{sec:conclusion}
Experiments indicate the latent space uncertainty intervals express a useful semantic notion of uncertainty previously unavailable in computer vision.
The intervals contain most of the true semantic factors.
These intervals are useful for semantically explaining a model's uncertainties in a way humans can understand---for example, on high-level features in the image.
Depending on the available disentangled latent space, this notion of uncertainty can be quite rich.
Limitations of our method include a) that we have only applied the technique to GAN generated data, b) that the calibration data must be reflective of the data distribution, and c) that we assume access to a disentangled latent space.
We see our work as part of a larger tapestry of results in generative models, and our technique will remain applicable as progress is made on real-data backprojection, calibration under distribution shift, and disentanglement.
\section{Ethics}\label{sec:ethics}
The ethics of generative modeling itself has been called into question given recent events, \textit{e.g.}, the development of deep fakes.
Nonetheless, we believe the downstream consequences of this work will likely be positive.
The techniques herein do not change the predictions of a generative model; they simply provide a calibrated notion of its uncertainty in a relevant semantic space.
Thus, the standard criticism of generative modeling---that it will enable widespread deep fakes---is not applicable.
Furthermore, we expect having a statistically valid and semantically rich notion of uncertainty will provide a sobering reliability assessment of these models, perhaps mitigating the chance of harmful failures.
Finally, although we use face datasets due to their ubiquity in this literature, we have attempted to ethically treat topics like gender and race where they arise.
\section{Model architectures}
\subsection{Face experiments}
For the encoder, we use a ResNet-50 backbone followed by projection heads that output the pointwise, lower, and upper quantile predictions. Each projection head consists of a convolution layer followed by a LeakyReLU activation and a global average pooling layer. The input to each projection head is the output of the backbone network, a feature map of size $512 \times 4 \times 4$, and the output dimension is the number of style dimensions; in the case of the pretrained FFHQ StyleGAN2 used in our experiments, this value is 9088.
For the generator, we use an FFHQ-pretrained StyleGAN2 trained to output faces of resolution $1024 \times 1024$, obtained from the official implementation. No discriminator is used during training.
\subsection{CLEVR experiments}
For the encoder, we use a ResNet-18 backbone followed by projection heads that output the pointwise, lower, and upper quantile predictions. Each projection head consists of a convolution layer followed by a LeakyReLU activation and a global average pooling layer. The input to each projection head is the output of the backbone network, a feature vector of size $512$, and the output dimension is the number of style dimensions; in the case of the pretrained CLEVR StyleGAN2 used in our experiments, this value is 204.
For the generator, we use a modified version of StyleGAN2 trained to output images of resolution $128 \times 128$. In order to have a controlled latent space, we reduce the size of the style vectors from $512$ in the original model to $12$, which reduces the resulting style dimension from $9088$ to $204$. Since the model was trained on the CLEVR dataset, which has less variability than datasets such as FFHQ, the model converged successfully even at this reduced capacity.
\section{Training details}
\subsection{Input preprocessing}
For the face experiments, the inputs to the encoder are resized to $256 \times 256$ and rescaled to the $[-1, 1]$ range. For the super-resolution experiment, the original input is first downsampled as required (\textit{i.e.}, 8x/16x, etc.) and then resized back to the input resolution of $256 \times 256$. For the image inpainting experiment, the corruption mask is generated using the procedure outlined in Section~\ref{subsec:mask_generation}. The image is then masked so that only the unmasked parts are exposed (hence the corruption), and the mask is concatenated with the image as an additional input. An example of a masked image is shown in the main manuscript in Figure 4.
The procedure described above is repeated for the CLEVR experiments, with the exception that the input size is $128 \times 128$.
\subsection{Mask generation procedure for image inpainting}\label{subsec:mask_generation}
To generate a corruption model for image completion, we generate a binary mask in a controlled manner. For each input image of size $H \times W \times C$, we start by generating a random mask of size $H \times W$ where each pixel value is contained in the interval $[0, 1]$. For each difficulty level mentioned in the manuscript (\textit{easy, medium, hard}), we activate only those pixels in the mask whose values lie below a corresponding threshold. For example, for the \textit{easy} level, we mask the pixels whose values are less than 0.3. By changing this threshold, we can vary the difficulty level of the masked input. We use the following thresholds: $\{\textit{easy}: 0.3, \textit{medium}: 0.6, \textit{hard}: 0.9\}$. These thresholds were obtained by visual inspection. Intuitively, the threshold can be interpreted as the fraction of pixels that are masked at a given difficulty level, with 30\% being the easier case and 90\% the harder case.
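A minimal sketch of this mask-generation procedure; the tensor shapes and the masked-pixel convention are our assumptions.
\begin{verbatim}
import torch

THRESHOLDS = {'easy': 0.3, 'medium': 0.6, 'hard': 0.9}

def make_masked_input(img, level='easy'):
    """Mask roughly a THRESHOLDS[level] fraction of pixels and
    concatenate the mask as an extra channel.  img : (C, H, W)."""
    h, w = img.shape[-2:]
    mask = (torch.rand(1, h, w) < THRESHOLDS[level]).float()  # 1 = masked
    masked = img * (1.0 - mask)  # zero out the masked pixels
    return torch.cat([masked, mask], dim=0)  # (C + 1, H, W)
\end{verbatim}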
\begin{figure}[!h]
\vskip 0.2in
\begin{center}
\begin{tabular}{cc}
\includegraphics[width=0.7\textwidth]{figures/inpainting_example.pdf}
\end{tabular}
\caption{\textbf{Inpainting masks:} Masked inputs at different difficulty levels.}
\label{fig:inpainting_masks}
\end{center}
\end{figure}
\subsection{Masking irrelevant style dimensions} \label{subsec:style_masking}
In StyleGAN models, the style space vector is very high-dimensional. However, previous work on style space analysis [39] has shown that only a few of those dimensions are reliably disentangled. In order to focus the encoder's capacity on the disentangled dimensions, we mask out the irrelevant dimensions, ensuring that the quantile loss is only applied to the disentangled dimensions.
For instance, an FFHQ-pretrained model trained to produce outputs of size $1024\times1024$ has a style space dimension of 9088. More concretely, we apply the loss function described in~\eqref{eq:quantile-loss} to the masked latent, $\mathcal{L}^{\beta}_{\rm q}(m \odot q_{\beta}(x), m \odot z)$, with $m$ being the mask that contains `1' for the disentangled dimensions and `0' otherwise, and $\odot$ indicating the element-wise product. Note that the masking is applied only to the quantile loss and not to the pointwise loss in~\eqref{eq:l1-loss}. This ensures that the pointwise prediction matches the true latents accurately, while the quantile heads focus on learning variability only in the disentangled dimensions.
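In code, this masking amounts to a one-line change to the quantile loss sketched earlier; here \texttt{m} is a fixed binary vector over the style dimensions.
\begin{verbatim}
import torch

def masked_quantile_loss(q_pred, z, beta, m):
    """Quantile loss restricted to disentangled style dimensions.
    m : (D,) binary mask, 1 for disentangled dimensions."""
    diff = (z - q_pred) * m  # masked dimensions contribute zero loss
    return torch.mean(torch.maximum(beta * diff, (beta - 1.0) * diff))
\end{verbatim}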
\section{Effect of calibration on coverage}
The guarantee in Definition~\ref{def:rcps} tells us that the risk will always be controlled, but it does not tell us that our control will be tight. This experiment tells us how conservative our procedure is, \textit{i.e.}, how closely we match our desired risk and error levels.
\begin{figure}[ht]
\begin{minipage}{0.32\linewidth}
\includegraphics[width=\textwidth]{figures/FFHQ_superres_risk_calibration.pdf}
\label{fig:figure1}
\end{minipage}%
\hfill
\begin{minipage}{0.32\linewidth}
\includegraphics[width=\textwidth]{figures/FFHQ_inpainting_risk_calibration.pdf}
\label{fig:figure2}
\end{minipage}%
\hfill
\begin{minipage}{0.32\linewidth}
\includegraphics[width=\textwidth]{figures/CLEVR_superres_risk_calibration.pdf}
\label{fig:figure3}
\end{minipage}
\caption{\textbf{Calibration:} Comparison of distribution of empirical risk for 100 calibration runs before and after performing the RCPS calibration procedure. We show results on FFHQ and CLEVR for the Image super-resolution and inpainting corruption models, calibrating for risk level $\alpha=0.1$.}
\label{fig:coverage}
\end{figure}
Since we work in the realm of generated data for model training and calibration, we have access to the true latents $Z_d$, which ensures a precise measurement of the average risk. We perform a random 50-50 split on the calibration set, calibrating on one split and evaluating on the other. To validate the power of the procedure, we repeat this process 100 times. For each run, we report the average risk incurred by our model over the evaluation split.
Figure~\ref{fig:coverage} compares the average risk of the quantile encoders across different corruption models and datasets, before and after calibration. The performance of the uncalibrated quantile encoder is problem- and dataset-dependent, \textit{i.e.}, the base model has lower risk on the FFHQ super-resolution problem than on the inpainting problem or the CLEVR super-resolution problem. However, in all settings, the calibration procedure results in a lower risk that satisfies the guarantee specified in Definition~\ref{def:rcps}.
\end{document}